In mobility management, the restricted random waypoint model is a random model for the movement of mobile users, similar to the random waypoint model, but where the waypoints are restricted to fall within one of a finite set of sub-domains. It was originally introduced by Blaževic et al. [1] to model intercity movement and was later defined in a more general setting by Le Boudec et al. [2]

The restricted random waypoint model describes the trajectory of a mobile user in a connected domain A. Given a sequence of locations M_0, M_1, ... in A, called waypoints, the trajectory of the mobile is defined by traveling from one waypoint M_n to the next, M_{n+1}, along the shortest path in A between them. In the restricted setting, the waypoints must fall within one of a finite set of sub-domains A_i ⊂ A. On the trip between M_n and M_{n+1}, the mobile moves at a constant speed V_n sampled from some distribution, usually a uniform distribution. The duration of the n-th trip is thus

S_n = d(M_n, M_{n+1}) / V_n

where d(x, y) is the length of the shortest path in A between x and y. The mobile may also pause at a waypoint, in which case the n-th trip is a pause at the location of the n-th waypoint, i.e. M_{n+1} = M_n, and a duration S_n is drawn from some distribution F_pause to determine the end of the pause. The transition instants T_n are the times at which the mobile reaches the n-th waypoint.
They are defined as follows: T_0 is chosen by some initialization rule, and T_{n+1} = T_n + S_n.

The sampling algorithm for the waypoints depends on the phase of the simulation. An initial phase I_0 = (i, j, r, p) is chosen according to some initialization rule. Given phase I_n = (i, j, r, p), the next phase I_{n+1} is chosen as follows. If r > 0, then p' is sampled from some distribution and I_{n+1} = (i, j, r - 1, p'). Otherwise, a new sub-domain k is sampled, along with a number r' of trips to undergo in sub-domain j, and the new phase is I_{n+1} = (j, k, r', move). Given a phase I_n = (i, j, r, p), the waypoint M_{n+1} is set to M_n if p = pause. Otherwise, it is sampled from sub-domain A_i if r > 0 and from sub-domain A_j if r = 0.

In typical simulation models, when the condition for stability is satisfied, simulation runs go through a transient period and converge to the stationary regime. It is important to remove the transients in order to perform meaningful comparisons of, for example, different mobility regimes. A standard method for avoiding such a bias is to (i) make sure the model used has a stationary regime and (ii) remove the beginning of all simulation runs in the hope that long runs converge to the stationary regime.
However, the length of the transients may be prohibitively long even for simple mobility models, and a major difficulty is knowing when the transient ends. [2] An alternative, called "perfect simulation", is to sample the initial simulation state from the stationary regime. Algorithms for perfect simulation of the general restricted random waypoint exist; they are described in Perfect simulation and stationarity of a class of mobility models (2005) [2], and a Python implementation is available on GitHub. [3]
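The trip and phase mechanics described above can be sketched in a short simulation. This is a simplified illustration only, not the perfect-simulation algorithm of [2]: the sub-domain layout, speed range, pause probability, and pause distribution are all illustrative assumptions, and the domain is assumed convex so that shortest paths are straight lines.

```python
import random

random.seed(42)  # reproducible runs

# Illustrative sub-domains: two axis-aligned boxes inside a convex domain.
SUBDOMAINS = [((0.0, 0.0), (2.0, 2.0)), ((8.0, 8.0), (10.0, 10.0))]

def sample_point(box):
    (x0, y0), (x1, y1) = box
    return (random.uniform(x0, x1), random.uniform(y0, y1))

def dist(a, b):
    # straight-line distance, valid as the shortest path in a convex domain
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def simulate(num_trips, v_min=1.0, v_max=2.0, p_pause=0.3):
    i, j = 0, 1                      # current and next sub-domain indices
    r = random.randint(1, 5)         # trips remaining inside sub-domain i
    m = sample_point(SUBDOMAINS[i])  # current waypoint M_n
    t = 0.0                          # transition instant T_n
    for _ in range(num_trips):
        if r > 0:
            # intra-domain step: either a pause or a move within A_i
            if random.random() < p_pause:
                m_next = m                   # pause: M_{n+1} = M_n
                s = random.expovariate(1.0)  # pause duration ~ F_pause
            else:
                m_next = sample_point(SUBDOMAINS[i])
                s = dist(m, m_next) / random.uniform(v_min, v_max)
            r -= 1
        else:
            # inter-domain trip to A_j, then pick a new target sub-domain k
            m_next = sample_point(SUBDOMAINS[j])
            s = dist(m, m_next) / random.uniform(v_min, v_max)
            i, j = j, random.randrange(len(SUBDOMAINS))
            r = random.randint(1, 5)
        t += s        # T_{n+1} = T_n + S_n
        m = m_next
    return t, m

total_time, last_waypoint = simulate(100)
```

Because such a run starts from an arbitrary initial state, statistics taken from its beginning are biased by the transient, which is exactly the issue perfect simulation avoids.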
https://en.wikipedia.org/wiki/Restricted_random_waypoint_model
A restriction digest is a procedure used in molecular biology to prepare DNA for analysis or other processing. It is sometimes termed DNA fragmentation, though this term is used for other procedures as well. In a restriction digest, DNA molecules are cleaved at specific restriction sites of 4–12 nucleotides in length by restriction enzymes which recognize these sequences. [1] The resulting digested DNA is very often selectively amplified using the polymerase chain reaction (PCR), making it more suitable for analytical techniques such as agarose gel electrophoresis and chromatography. It is used in genetic fingerprinting, plasmid subcloning, and RFLP analysis. A given restriction enzyme cuts DNA segments within a specific nucleotide sequence, at what is called a restriction site. These recognition sequences are typically four, six, eight, ten, or twelve nucleotides long and generally palindromic (i.e. the same nucleotide sequence on both strands when read in the 5'–3' direction). Because there are only so many ways to arrange the four nucleotides that compose DNA (adenine, thymine, guanine and cytosine) into a four- to twelve-nucleotide sequence, recognition sequences tend to occur by chance in any long sequence. Restriction enzymes specific to hundreds of distinct sequences have been identified and synthesized for sale to laboratories, and as a result, several potential restriction sites appear in almost any gene or locus of interest on any chromosome. Furthermore, almost all artificial plasmids include an (often entirely synthetic) polylinker (also called a "multiple cloning site") that contains dozens of restriction enzyme recognition sequences within a very short segment of DNA. This allows the insertion of almost any specific fragment of DNA into plasmid vectors, which can be efficiently "cloned" by insertion into replicating bacterial cells. After a restriction digest, DNA can then be analysed using agarose gel electrophoresis.
In gel electrophoresis, a sample of DNA is first "loaded" onto a slab of agarose gel (literally pipetted into small wells at one end of the slab). The gel is then subjected to an electric field, which draws the negatively charged DNA across it. The molecules travel at different rates (and therefore end up at different distances) depending on their net charge (more highly charged particles travel further) and size (smaller particles travel further). Since none of the four nucleotide bases carries any charge, net charge becomes insignificant and size is the main factor affecting the rate of migration through the gel. The net charge in DNA is produced by the sugar-phosphate backbone. This is in contrast to proteins, in which net charge is generated by different combinations and numbers of charged amino acids rather than by a uniformly charged backbone. A restriction digest is most commonly used as part of the molecular cloning of a DNA fragment into a vector (such as a cloning vector or an expression vector). The vector typically contains a multiple cloning site where many restriction sites may be found, and a foreign piece of DNA may be inserted into the vector by first cutting the restriction sites in the vector as well as in the DNA fragment, followed by ligation of the DNA fragment into the vector. Restriction digests are also necessary for performing any of several analytical techniques. There are numerous types of restriction enzymes, each of which will cut DNA differently. The most commonly used restriction enzymes are Type II restriction endonucleases (see the article on restriction enzymes for examples). There are some that cut a three-base-pair sequence, while others can cut four, six, or even eight. Each enzyme has distinct properties that determine how efficiently it can cut and under what conditions.
Most manufacturers of such enzymes provide a specific buffer solution containing the unique mix of cations and other components that help the enzyme cut as efficiently as possible. Different restriction enzymes may also have different optimal temperatures at which they function. Note that for efficient digestion of DNA, the restriction site should not be located at the very end of a DNA fragment: restriction enzymes may require a minimum number of base pairs between the restriction site and the end of the DNA to work efficiently. [2] This number may vary between enzymes, but for most commonly used restriction enzymes around 6–10 base pairs are sufficient.
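As a sketch of what a digest does computationally, the toy function below cuts a sequence at every occurrence of a recognition site and collects the resulting fragments. The EcoRI site GAATTC (cut after the G on the top strand) is real; the input sequence and the single-strand treatment are illustrative simplifications.

```python
# Toy restriction digest on the top strand only: cut at every
# occurrence of the recognition site and collect the fragments.
def digest(seq, site="GAATTC", cut_offset=1):
    # cut_offset: position of the cut within the site (EcoRI cuts G^AATTC)
    cuts, start = [], 0
    while True:
        idx = seq.find(site, start)
        if idx == -1:
            break
        cuts.append(idx + cut_offset)
        start = idx + 1
    fragments, prev = [], 0
    for c in cuts:
        fragments.append(seq[prev:c])
        prev = c
    fragments.append(seq[prev:])
    return fragments

frags = digest("AAAGAATTCTTTTGAATTCCC")
print([len(f) for f in frags])  # [4, 10, 7]
```

The fragment lengths always sum to the original sequence length, which is the invariant a gel band pattern reflects.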
https://en.wikipedia.org/wiki/Restriction_digest
A restriction enzyme , restriction endonuclease , REase , ENase or restrictase is an enzyme that cleaves DNA into fragments at or near specific recognition sites within molecules known as restriction sites . [ 1 ] [ 2 ] [ 3 ] Restriction enzymes are one class of the broader endonuclease group of enzymes. Restriction enzymes are commonly classified into five types, which differ in their structure and whether they cut their DNA substrate at their recognition site, or if the recognition and cleavage sites are separate from one another. To cut DNA, all restriction enzymes make two incisions, once through each sugar-phosphate backbone (i.e. each strand) of the DNA double helix . These enzymes are found in bacteria and archaea and provide a defense mechanism against invading viruses . [ 4 ] [ 5 ] Inside a prokaryote , the restriction enzymes selectively cut up foreign DNA in a process called restriction digestion ; meanwhile, host DNA is protected by a modification enzyme (a methyltransferase ) that modifies the prokaryotic DNA and blocks cleavage. Together, these two processes form the restriction modification system . [ 6 ] More than 3,600 restriction endonucleases are known which represent over 250 different specificities. [ 7 ] Over 3,000 of these have been studied in detail, and more than 800 of these are available commercially. [ 8 ] These enzymes are routinely used for DNA modification in laboratories, and they are a vital tool in molecular cloning . [ 9 ] [ 10 ] [ 11 ] The term restriction enzyme originated from the studies of phage λ , a virus that infects bacteria, and the phenomenon of host-controlled restriction and modification of such bacterial phage or bacteriophage . [ 12 ] The phenomenon was first identified in work done in the laboratories of Salvador Luria , Jean Weigle and Giuseppe Bertani in the early 1950s. [ 13 ] [ 14 ] It was found that, for a bacteriophage λ that can grow well in one strain of Escherichia coli , for example E. 
coli C, when grown in another strain, for example E. coli K, its yields can drop significantly, by as much as three to five orders of magnitude. The host cell, in this example E. coli K, is known as the restricting host and appears to have the ability to reduce the biological activity of the phage λ. If a phage becomes established in one strain, the ability of that phage to grow also becomes restricted in other strains. In the 1960s, it was shown in work done in the laboratories of Werner Arber and Matthew Meselson that the restriction is caused by an enzymatic cleavage of the phage DNA, and the enzyme involved was therefore termed a restriction enzyme. [ 4 ] [ 15 ] [ 16 ] [ 17 ] The restriction enzymes studied by Arber and Meselson were type I restriction enzymes, which cleave DNA randomly away from the recognition site. [ 18 ] In 1970, Hamilton O. Smith , Thomas Kelly and Kent Wilcox isolated and characterized the first type II restriction enzyme, HindII , from the bacterium Haemophilus influenzae . [ 19 ] [ 20 ] Restriction enzymes of this type are more useful for laboratory work as they cleave DNA at the site of their recognition sequence and are the most commonly used as a molecular biology tool. [ 21 ] Later, Daniel Nathans and Kathleen Danna showed that cleavage of simian virus 40 (SV40) DNA by restriction enzymes yields specific fragments that can be separated using polyacrylamide gel electrophoresis , thus showing that restriction enzymes can also be used for mapping DNA. [ 22 ] For their work in the discovery and characterization of restriction enzymes, the 1978 Nobel Prize for Physiology or Medicine was awarded to Werner Arber , Daniel Nathans , and Hamilton O. Smith . [ 23 ] The discovery of restriction enzymes allows DNA to be manipulated, leading to the development of recombinant DNA technology that has many applications, for example, allowing the large scale production of proteins such as human insulin used by diabetic patients. 
[13] [24] Restriction enzymes likely evolved from a common ancestor and became widespread via horizontal gene transfer. [25] [26] In addition, there is mounting evidence that restriction endonucleases evolved as a selfish genetic element. [27] Restriction enzymes recognize a specific sequence of nucleotides [2] and produce a double-stranded cut in the DNA. Recognition sequences can also be classified by the number of bases in the recognition site, usually between 4 and 8 bases; the number of bases in the sequence determines how often the site will appear by chance in any given genome: a 4-base-pair sequence would theoretically occur once every 4^4 = 256 bp, a 6-base sequence once every 4^6 = 4,096 bp, and an 8-base sequence once every 4^8 = 65,536 bp. [28] Many of them are palindromic, meaning the base sequence reads the same backwards and forwards. [29] In theory, two types of palindromic sequences are possible in DNA. The mirror-like palindrome is similar to those found in ordinary text, in which a sequence reads the same forward and backward on a single strand of DNA, as in GTAATG. The inverted repeat palindrome is also a sequence that reads the same forward and backward, but the forward and backward sequences are found in complementary DNA strands (i.e., of double-stranded DNA), as in GTATAC (GTATAC being complementary to CATATG). [30] Inverted repeat palindromes are more common and have greater biological importance than mirror-like palindromes. EcoRI digestion produces "sticky" ends, whereas SmaI restriction enzyme cleavage produces "blunt" ends. Recognition sequences in DNA differ for each restriction enzyme, producing differences in the length, sequence and strand orientation (5' end or 3' end) of a sticky-end "overhang" of an enzyme restriction. [31] Different restriction enzymes that recognize the same sequence are known as isoschizomers.
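The site-frequency arithmetic and the inverted-repeat property above can be checked directly; a minimal sketch, using the sequences from the text and assuming a uniform base composition:

```python
# Expected spacing of a recognition site under a uniform base model, plus
# a check for inverted-repeat palindromes (a site that equals its own
# reverse complement), as in GAATTC or GTATAC.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[b] for b in reversed(seq))

def is_palindromic(site):
    # inverted-repeat palindrome: reads the same on the complementary strand
    return site == reverse_complement(site)

def expected_spacing(site_len):
    # a specific n-base sequence occurs once every 4^n bp on average
    return 4 ** site_len

print(is_palindromic("GAATTC"))  # True  (EcoRI site)
print(is_palindromic("GTAATG"))  # False (mirror-like, not inverted repeat)
print(expected_spacing(6))       # 4096
```

Note that the mirror-like palindrome GTAATG fails the inverted-repeat test even though it reads the same forwards and backwards on a single strand.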
Different enzymes that recognize the same location but cut at a different position are known as neoschizomers. [32] Naturally occurring restriction endonucleases are categorized into five groups (Types I, II, III, IV, and V) based on their composition and enzyme cofactor requirements, the nature of their target sequence, and the position of their DNA cleavage site relative to the target sequence. [33] [34] [35] DNA sequence analysis of restriction enzymes, however, shows great variation, indicating that there are more than four types. [36] All types of enzymes recognize specific short DNA sequences and carry out the endonucleolytic cleavage of DNA to give specific fragments with terminal 5'-phosphates. They differ in their recognition sequence, subunit composition, cleavage position, and cofactor requirements, [37] [38] as summarised below. Type I restriction enzymes were the first to be identified, in two different strains (K-12 and B) of E. coli. [39] These enzymes cut at a site that differs from, and is a random distance (at least 1000 bp) away from, their recognition site. Cleavage at these random sites follows a process of DNA translocation, which shows that these enzymes are also molecular motors. The recognition site is asymmetrical and is composed of two specific portions (one containing 3–4 nucleotides, the other 4–5 nucleotides) separated by a non-specific spacer of about 6–8 nucleotides. These enzymes are multifunctional and are capable of both restriction digestion and modification activities, depending upon the methylation status of the target DNA. The cofactors S-adenosyl methionine (AdoMet), hydrolyzed adenosine triphosphate (ATP), and magnesium (Mg2+) ions are required for their full activity.
Type I restriction enzymes possess three subunits called HsdR, HsdM, and HsdS; HsdR is required for restriction digestion; HsdM is necessary for adding methyl groups to host DNA (methyltransferase activity); and HsdS is important for specificity of the recognition (DNA-binding) site in addition to both restriction digestion (DNA cleavage) and modification (DNA methyltransferase) activity. [33] [39] Typical type II restriction enzymes differ from type I restriction enzymes in several ways. They form homodimers, with recognition sites that are usually undivided, palindromic, and 4–8 nucleotides in length. They recognize and cleave DNA at the same site, and they do not use ATP or AdoMet for their activity: they usually require only Mg2+ as a cofactor. [29] These enzymes cleave the phosphodiester bonds of double-helix DNA, either at the center of both strands to yield a blunt end, or at staggered positions, leaving overhangs called sticky ends. [41] These are the most commonly available and used restriction enzymes. In the 1990s and early 2000s, new enzymes from this family were discovered that did not follow all the classical criteria of this enzyme class, and a new subfamily nomenclature was developed to divide this large family into subcategories based on deviations from typical characteristics of type II enzymes. [29] These subgroups are defined using a letter suffix. Type IIB restriction enzymes (e.g., BcgI and BplI) are multimers, containing more than one subunit. [29] They cleave DNA on both sides of their recognition sequence, cutting out the recognition site. They require both AdoMet and Mg2+ cofactors. Type IIE restriction endonucleases (e.g., NaeI) cleave DNA following interaction with two copies of their recognition sequence. [29] One recognition site acts as the target for cleavage, while the other acts as an allosteric effector that speeds up or improves the efficiency of enzyme cleavage.
Similar to type IIE enzymes, type IIF restriction endonucleases (e.g. NgoMIV) interact with two copies of their recognition sequence but cleave both sequences at the same time. [29] Type IIG restriction endonucleases (e.g., RM.Eco57I) have a single subunit, like classical type II restriction enzymes, but require the cofactor AdoMet to be active. [29] Type IIM restriction endonucleases, such as DpnI, are able to recognize and cut methylated DNA. [29] [42] [43] Type IIS restriction endonucleases (e.g. FokI) cleave DNA at a defined distance from their non-palindromic asymmetric recognition sites; [29] this characteristic is widely used to perform in-vitro cloning techniques such as Golden Gate cloning. These enzymes may function as dimers. Similarly, type IIT restriction enzymes (e.g., Bpu10I and BslI) are composed of two different subunits. Some recognize palindromic sequences, while others have asymmetric recognition sites. [29] Type III restriction enzymes (e.g., EcoP15) recognize two separate non-palindromic sequences that are inversely oriented. They cut DNA about 20–30 base pairs after the recognition site. [44] These enzymes contain more than one subunit and require AdoMet and ATP cofactors for their roles in DNA methylation and restriction digestion, respectively. [45] They are components of prokaryotic DNA restriction-modification mechanisms that protect the organism against invading foreign DNA. Type III enzymes are hetero-oligomeric, multifunctional proteins composed of two subunits, Res (P08764) and Mod (P08763). The Mod subunit recognises the DNA sequence specific for the system and is a modification methyltransferase; as such, it is functionally equivalent to the M and S subunits of a type I restriction endonuclease. Res is required for restriction digestion, although it has no enzymatic activity on its own.
Type III enzymes recognise short 5–6 bp-long asymmetric DNA sequences and cleave 25–27 bp downstream to leave short, single-stranded 5' protrusions. They require the presence of two inversely oriented unmethylated recognition sites for restriction digestion to occur. These enzymes methylate only one strand of the DNA, at the N-6 position of adenine residues, so newly replicated DNA will have only one strand methylated, which is sufficient to protect against restriction digestion. Type III enzymes belong to the beta-subfamily of N6 adenine methyltransferases, containing the nine motifs that characterise this family, including motif I, the AdoMet binding pocket (FXGXG), and motif IV, the catalytic region (S/D/N (PP) Y/F). [37] [46] Type IV enzymes recognize modified, typically methylated DNA and are exemplified by the McrBC and Mrr systems of E. coli. [36] Type V restriction enzymes (e.g., the Cas9-gRNA complex from CRISPRs [47]) utilize guide RNAs to target specific non-palindromic sequences found on invading organisms. They can cut DNA of variable length, given a suitable guide RNA. The flexibility and ease of use of these enzymes make them promising for future genetic engineering applications. [47] [48] Artificial restriction enzymes can be generated by fusing a natural or engineered DNA-binding domain to a nuclease domain (often the cleavage domain of the type IIS restriction enzyme FokI). [49] Such artificial restriction enzymes can target large DNA sites (up to 36 bp) and can be engineered to bind to desired DNA sequences. [50] Zinc finger nucleases are the most commonly used artificial restriction enzymes and are generally used in genetic engineering applications, [51] [52] [53] [54] but can also be used for more standard gene cloning applications. [55] Other artificial restriction enzymes are based on the DNA-binding domain of TAL effectors.
[56] [57] In 2013, a new technology, CRISPR-Cas9, based on a prokaryotic viral defense system, was engineered for editing the genome, and it was quickly adopted in laboratories. [58] For more detail, see CRISPR (clustered regularly interspaced short palindromic repeats). In 2017, a group from the University of Illinois reported using an Argonaute protein taken from Pyrococcus furiosus (PfAgo) along with guide DNA to edit DNA in vitro as an artificial restriction enzyme. [59] Artificial ribonucleases that act as restriction enzymes for RNA have also been developed. A PNA-based system, called a PNAzyme, has a Cu(II)-2,9-dimethylphenanthroline group that mimics ribonucleases for a specific RNA sequence and cleaves at a non-base-paired region (RNA bulge) of the targeted RNA formed when the enzyme binds the RNA. This enzyme shows selectivity by cleaving only at one site that either does not have a mismatch or is kinetically preferred out of two possible cleavage sites. [60] Since their discovery in the 1970s, many restriction enzymes have been identified; for example, more than 3,500 different type II restriction enzymes have been characterized. [61] Each enzyme is named after the bacterium from which it was isolated, using a naming system based on bacterial genus, species and strain. [62] [63] For example, the name of the EcoRI restriction enzyme was derived as shown in the box. Isolated restriction enzymes are used to manipulate DNA for different scientific applications. They are used to assist insertion of genes into plasmid vectors during gene cloning and protein production experiments. For optimal use, plasmids that are commonly used for gene cloning are modified to include a short polylinker sequence (called the multiple cloning site, or MCS) rich in restriction recognition sequences.
This allows flexibility when inserting gene fragments into the plasmid vector; restriction sites contained naturally within genes influence the choice of endonuclease for digesting the DNA, since it is necessary to avoid restriction of wanted DNA while intentionally cutting the ends of the DNA. To clone a gene fragment into a vector, both plasmid DNA and gene insert are typically cut with the same restriction enzymes and then glued together with the assistance of an enzyme known as a DNA ligase. [64] [65] Restriction enzymes can also be used to distinguish gene alleles by specifically recognizing single-base changes in DNA known as single-nucleotide polymorphisms (SNPs). [66] [67] This is, however, only possible if a SNP alters the restriction site present in the allele. In this method, the restriction enzyme can be used to genotype a DNA sample without the need for expensive gene sequencing. The sample is first digested with the restriction enzyme to generate DNA fragments, and the different-sized fragments are then separated by gel electrophoresis. In general, alleles with intact restriction sites will generate two visible bands of DNA on the gel, while those with altered restriction sites will not be cut and will generate only a single band. A DNA map by restriction digest can also be generated, giving the relative positions of the genes. [68] The different lengths of DNA generated by restriction digest also produce a specific pattern of bands after gel electrophoresis, which can be used for DNA fingerprinting. In a similar manner, restriction enzymes are used to digest genomic DNA for gene analysis by Southern blot. This technique allows researchers to identify how many copies (or paralogues) of a gene are present in the genome of one individual, or how many gene mutations (polymorphisms) have occurred within a population. The latter example is called restriction fragment length polymorphism (RFLP).
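The band-pattern logic of SNP genotyping can be sketched as follows. GAATTC is EcoRI's real recognition site, but the two allele sequences below are made up for illustration: in one allele the site is intact and digestion gives two bands, while a single-base change destroys the site and leaves one uncut band.

```python
# Toy RFLP-style genotyping: a SNP that destroys a restriction site
# changes the number and sizes of the bands on a gel.
def fragment_lengths(seq, site="GAATTC"):
    lengths, prev = [], 0
    idx = seq.find(site)
    while idx != -1:
        lengths.append(idx + 1 - prev)  # cut after the first base of the site
        prev = idx + 1
        idx = seq.find(site, idx + 1)
    lengths.append(len(seq) - prev)
    return lengths

allele_ref = "TTTTGAATTCAAAA"  # site present: digestion yields two bands
allele_snp = "TTTTGATTTCAAAA"  # A->T change destroys the site: one band
print(fragment_lengths(allele_ref))  # [5, 9]
print(fragment_lengths(allele_snp))  # [14]
```

On a gel, the two-band versus one-band patterns are what distinguish the alleles without any sequencing.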
[69] Artificial restriction enzymes created by linking the FokI DNA cleavage domain with an array of DNA-binding proteins or zinc finger arrays, denoted zinc finger nucleases (ZFNs), are a powerful tool for host genome editing due to their enhanced sequence specificity. ZFNs work in pairs, their dimerization being mediated in situ through the FokI domain. Each zinc finger array (ZFA) is capable of recognizing 9–12 base pairs, making 18–24 base pairs for the pair. A 5–7 bp spacer between the cleavage sites further enhances the specificity of ZFNs, making them a safer and more precise tool that can be applied in humans. A recent Phase I clinical trial of ZFNs for the targeted abolition of the CCR5 co-receptor for HIV-1 has been undertaken. [70] Others have proposed using the bacterial R-M system as a model for devising human anti-viral gene or genomic vaccines and therapies, since the RM system serves an innate defense role in bacteria by restricting tropism by bacteriophages. [71] There is research on REases and ZFNs that can cleave the DNA of various human viruses, including HSV-2, high-risk HPVs and HIV-1, with the ultimate goal of inducing target mutagenesis and aberrations of human-infecting viruses. [72] [73] [74] The human genome already contains remnants of retroviral genomes that have been inactivated and harnessed for self-gain. Indeed, the mechanisms for silencing active L1 genomic retroelements by the three prime repair exonuclease 1 (TREX1) and excision repair cross complementing 1 (ERCC) appear to mimic the action of RM systems in bacteria, and the non-homologous end joining (NHEJ) that follows the use of ZFNs without a repair template. [75] [76] Examples of restriction enzymes include: [77] (Key: * = blunt ends; N = A, C, G or T; W = A or T)
https://en.wikipedia.org/wiki/Restriction_enzyme
A restriction fragment is a DNA fragment resulting from the cutting of a DNA strand by a restriction enzyme (restriction endonuclease), a process called restriction. [1] Each restriction enzyme is highly specific, recognising a particular short DNA sequence, or restriction site, and cutting both DNA strands at specific points within this site. Most restriction sites are palindromic (the sequence of nucleotides is the same on both strands when read in the 5' to 3' direction of each strand) and are four to eight nucleotides long. Many cuts are made by one restriction enzyme because of the chance repetition of these sequences in a long DNA molecule, yielding a set of restriction fragments. A particular DNA molecule will always yield the same set of restriction fragments when exposed to the same restriction enzyme. Restriction fragments can be analyzed using techniques such as gel electrophoresis or used in recombinant DNA technology. [2] In recombinant DNA technology, specific restriction endonucleases are used that will isolate a particular gene and cleave the sugar-phosphate backbones at different points (retaining symmetry), so that the double-stranded restriction fragments have single-stranded ends. These short extensions, called sticky ends, can form hydrogen-bonded base pairs with complementary sticky ends on any other DNA cut with the same enzyme (such as a bacterial plasmid). In agarose gel electrophoresis, the restriction fragments yield a band pattern characteristic of the original DNA molecule and the restriction enzyme used; for example, the relatively small DNA molecules of viruses and plasmids can be identified simply by their restriction fragment patterns.
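The staggered cut that produces sticky ends can be sketched for a single top strand. EcoRI really cuts G^AATTC, leaving 5' AATT overhangs; the input sequence and the top-strand-only treatment below are illustrative simplifications.

```python
# Sketch of a staggered cut: the offset between the top-strand and
# bottom-strand cut positions within the site defines the overhang.
def cut_sticky(top, site="GAATTC", top_cut=1, bottom_cut=5):
    # top_cut/bottom_cut: cut offsets within the site on each strand
    idx = top.find(site)
    left_top = top[: idx + top_cut]
    right_top = top[idx + top_cut :]
    overhang = site[top_cut:bottom_cut]  # the single-stranded extension
    return left_top, right_top, overhang

left, right, overhang = cut_sticky("AAGAATTCTT")
print(overhang)  # AATT
```

Any two molecules cut with the same enzyme carry the same complementary overhang, which is why their sticky ends can base-pair during ligation.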
If the nucleotide differences between two alleles occur within the restriction site of a particular restriction enzyme, digesting DNA from individuals carrying the different alleles with that enzyme will produce different fragments, which will yield different band patterns in gel electrophoresis.
https://en.wikipedia.org/wiki/Restriction_fragment
In molecular biology, restriction fragment length polymorphism (RFLP) is a technique that exploits variations in homologous DNA sequences, known as polymorphisms, to distinguish individuals, populations, or species, or to pinpoint the locations of genes within a sequence. The term may refer to a polymorphism itself, as detected through the differing locations of restriction enzyme sites, or to a related laboratory technique by which such differences can be illustrated. In RFLP analysis, a DNA sample is digested into fragments by one or more restriction enzymes, and the resulting restriction fragments are then separated by gel electrophoresis according to their size. RFLP analysis is now largely obsolete due to the emergence of inexpensive DNA sequencing technologies, but it was the first DNA profiling technique inexpensive enough to see widespread application. RFLP analysis was an important early tool in genome mapping, localization of genes for genetic disorders, determination of risk for disease, and paternity testing. The basic technique for the detection of RFLPs involves fragmenting a sample of DNA with a restriction enzyme, which can selectively cleave a DNA molecule wherever a short, specific sequence is recognized, in a process known as a restriction digest. The DNA fragments produced by the digest are then separated by length through agarose gel electrophoresis and transferred to a membrane via the Southern blot procedure. Hybridization of the membrane to a labeled DNA probe then determines the length of the fragments which are complementary to the probe. A restriction fragment length polymorphism is said to occur when the length of a detected fragment varies between individuals, indicating non-identical sequences. Each fragment length is considered an allele, whether or not it actually contains a coding region, and can be used in subsequent genetic analysis.
There are two common mechanisms by which the size of a particular restriction fragment can vary. In the first schematic, a small segment of the genome is being detected by a DNA probe (thicker line). In allele A , the genome is cleaved by a restriction enzyme at three nearby sites (triangles), but only the rightmost fragment will be detected by the probe. In allele a , restriction site 2 has been lost by a mutation , so the probe now detects the larger fused fragment running from sites 1 to 3. The second diagram shows how this fragment size variation would look on a Southern blot, and how each allele (two per individual) might be inherited in members of a family. In the third schematic, the probe and restriction enzyme are chosen to detect a region of the genome that includes a variable number tandem repeat (VNTR) segment (boxes in schematic diagram). In allele c , there are five repeats in the VNTR, and the probe detects a longer fragment between the two restriction sites. In allele d , there are only two repeats in the VNTR, so the probe detects a shorter fragment between the same two restriction sites. Other genetic processes, such as insertions , deletions , translocations , and inversions , can also lead to polymorphisms. RFLP tests require much larger samples of DNA than do short tandem repeat (STR) tests. Analysis of RFLP variation in genomes was formerly a vital tool in genome mapping and genetic disease analysis. If researchers were trying to initially determine the chromosomal location of a particular disease gene, they would analyze the DNA of members of a family afflicted by the disease, and look for RFLP alleles that show a similar pattern of inheritance as that of the disease (see genetic linkage ). Once a disease gene was localized, RFLP analysis of other families could reveal who was at risk for the disease, or who was likely to be a carrier of the mutant genes. 
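The second (VNTR) mechanism reduces to simple arithmetic: the detected fragment is the constant flanking sequence plus one repeat unit per copy. A sketch with made-up sizes (neither the flank length nor the repeat length is taken from any real locus):

```python
# Fragment length between two fixed restriction sites that flank a VNTR:
# constant flanking DNA plus a variable number of tandem repeat units.
# The numbers are illustrative, not taken from any real locus.
def vntr_fragment(flank_bp, repeat_bp, n_repeats):
    return flank_bp + repeat_bp * n_repeats

print(vntr_fragment(300, 16, 5))   # "allele c", 5 repeats: 380 bp
print(vntr_fragment(300, 16, 2))   # "allele d", 2 repeats: 332 bp
```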
The RFLP test is used to identify and differentiate organisms by analyzing unique patterns in their genomes. It is also used to estimate recombination rates at loci between restriction sites. RFLP analysis was also the basis for early methods of genetic fingerprinting, useful in the identification of samples retrieved from crime scenes, in the determination of paternity, and in the characterization of genetic diversity or breeding patterns in animal populations. The technique for RFLP analysis is, however, slow and cumbersome. It requires a large amount of sample DNA, and the combined process of probe labeling, DNA fragmentation, electrophoresis, blotting, hybridization, washing, and autoradiography can take up to a month to complete. A limited version of the RFLP method that used oligonucleotide probes was reported in 1985.[1] The results of the Human Genome Project have largely replaced the need for RFLP mapping, and the identification of many single-nucleotide polymorphisms (SNPs) in that project (as well as the direct identification of many disease genes and mutations) has replaced the need for RFLP disease linkage analysis (see SNP genotyping). The analysis of VNTR alleles continues, but is now usually performed by polymerase chain reaction (PCR) methods. For example, the standard protocols for DNA fingerprinting involve PCR analysis of panels of more than a dozen VNTRs. RFLP is still used in marker-assisted selection. Terminal restriction fragment length polymorphism (TRFLP or sometimes T-RFLP) is a technique initially developed for characterizing bacterial communities in mixed-species samples. The technique has also been applied to other groups, including soil fungi. TRFLP works by PCR amplification of DNA using primer pairs that have been labeled with fluorescent tags. The PCR products are then digested using restriction enzymes, and the resulting patterns are visualized using a DNA sequencer.
The results are analyzed either by simply counting and comparing bands or peaks in the TRFLP profile, or by matching bands from one or more TRFLP runs to a database of known species. A number of different software tools have been developed to automate the process of band matching, comparison and data basing of TRFLP profiles. [ 2 ] The technique is similar in some aspects to temperature gradient or denaturing gradient gel electrophoresis (TGGE and DGGE). The sequence changes directly involved with an RFLP can also be analyzed more quickly by PCR. Amplification can be directed across the altered restriction site, and the products digested with the restriction enzyme. This method has been called Cleaved Amplified Polymorphic Sequence (CAPS). Alternatively, the amplified segment can be analyzed by allele-specific oligonucleotide (ASO) probes, a process that can often be done by a simple dot blot .
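Band and peak matching of the kind these tools automate can be sketched minimally. The `shared_peaks` helper is hypothetical, and the fragment sizes (in bp) are invented; real TRFLP software also handles peak heights, binning strategies and multiple runs.

```python
# Compare two TRFLP profiles (lists of terminal fragment sizes, in bp)
# by greedily matching peaks within a size tolerance.
def shared_peaks(profile1, profile2, tol=1.0):
    matched = 0
    unused = list(profile2)
    for p in profile1:
        hit = next((q for q in unused if abs(p - q) <= tol), None)
        if hit is not None:
            matched += 1
            unused.remove(hit)   # each peak can match at most once
    return matched

sample1 = [87.2, 140.5, 210.0, 355.8]
sample2 = [87.6, 141.1, 298.4, 355.2]
print(shared_peaks(sample1, sample2))   # 3 peaks match within 1 bp
```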
https://en.wikipedia.org/wiki/Restriction_fragment_length_polymorphism
Restriction fragment mass polymorphism (RFMP) is a technology which digests DNA into oligonucleotide fragments and detects variation in DNA sequences by the molecular weight of the fragments. RFMP is a proprietary technology of GeneMatrix and can be utilized for genotyping viruses and microorganisms, and for human genome research. It is relatively restricted in usage due to the existence of many other genotyping products. RFMP is an application of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, used for identifying individual nucleotides from a DNA fragment, most commonly for typing single nucleotide polymorphisms (SNPs). RFMP was developed as a successor to the similar restriction fragment length polymorphism (RFLP) technique, with the intent of allowing more SNPs to be detected. Rather than reading out fragment lengths as RFLP does, the individual nucleotides are read out using MALDI-TOF, which can resolve fragments of the same length that were cut at different sites.[1] Like RFLP, the basic mechanism of RFMP is to run the polymerase chain reaction (PCR) over a test sample. Modified PCR primers are used to create known restriction sites for enzymatic digestion. From the known fragment lengths, selection by size can then filter out the DNA of interest. Finally, MALDI-TOF is run on the fragments of interest to produce an m/z (mass-to-charge ratio) spectrum identifying the individual nucleotides. A specific process, for example, would be Hong's 2008 strategy,[1] outlined as the following: These steps, like any experimental methodology, are case-specific and can vary with the goals and constraints of the experimental setup. RFMP is still primarily limited to South Korean medical literature, as it is an array assay that competes with many other specialized detection systems (whereas RFMP serves a more general function).[2] There has been a focus in recent years on using RFMP for HPV detection.
This is motivated by the fact that it has a sensitivity two log10-fold better than the standard of care.[3] However, this still does not make RFMP the clear top choice in the HPV landscape, as others such as the Roche Linear Array, Abbott RealTime genotype II, and Sysmex HISCL HCV Gr experimentally outperform RFMP in terms of detection accuracy.[4][5] Other limitations that hinder RFMP's spread in the medical world are attributed to its lack of information on SNP mutation rate[6] (e.g., masses have no correspondence to mutagenesis), as well as a general increase in user-handling difficulty compared to its peers.
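The mass readout that distinguishes RFMP from RFLP can be illustrated with a back-of-the-envelope calculation. The residue masses below are approximate average values for DNA nucleotides within a chain, and the sequences are invented; a real MALDI-TOF analysis must also account for terminal groups, charge state and adducts.

```python
# Approximate average masses (Da) of DNA nucleotide residues within a chain.
RESIDUE = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}

def oligo_mass(seq):
    """Rough neutral mass of a single-stranded oligo (plus H2O for the ends)."""
    return sum(RESIDUE[b] for b in seq) + 18.02

wild_type = "GATTC"   # hypothetical restriction fragment
mutant    = "GGTTC"   # A -> G substitution at the second position
# Same length, so RFLP would not see the difference, but the mass shifts:
print(round(oligo_mass(mutant) - oligo_mass(wild_type), 2))   # 16.0 Da
```

A ~16 Da shift for an A-to-G substitution is well within the resolution of MALDI-TOF instruments, which is what lets RFMP type same-length fragments that RFLP cannot distinguish.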
https://en.wikipedia.org/wiki/Restriction_fragment_mass_polymorphism
Restriction landmark genomic scanning (RLGS) is a genome analysis method for rapid, simultaneous visualization of thousands of landmarks, or restriction sites. Using a combination of restriction enzymes, some of which are specific to DNA modifications, the technique can be used to visualize differences in methylation levels across the genome of a given organism.[1] RLGS employs direct labeling of DNA, which is first cut by a specific series of restriction enzymes and then labeled with a radioactive isotope (usually phosphorus-32). A two-dimensional electrophoresis process is then employed, yielding high-resolution results. The radioactive second-dimension gel is then allowed to expose a large sheet of film. The radiation produced by the radioactive labeling exposes the film wherever the restriction fragments have migrated during electrophoresis. The film is then developed, yielding a visual representation of the results in the form of an autoradiograph. The same combination of restriction enzymes will produce the same pattern of 'spots' for samples from the same organism, but different patterns for different types of organism. For example, human and mouse DNA will produce distinctly different patterns when treated with the same combination of enzymes. The finished autoradiographs can be compared against each other, revealing any changes in gene expression that lead to visual differences on the film. Each autoradiograph contains thousands of spots, each corresponding to a labeled DNA restriction landmark. RLGS is useful for whole-genome scans, and can effectively do the work of thousands of polymerase chain reactions at once. It readily detects alterations deviating from normal, and is thus exceptionally effective in identifying hyper- or hypomethylation in tumors, deletions or amplifications of genes, or simply changes in gene expression throughout the development of an organism.
https://en.wikipedia.org/wiki/Restriction_landmark_genomic_scanning
A restriction map is a map of known restriction sites within a sequence of DNA. Restriction mapping requires the use of restriction enzymes. In molecular biology, restriction maps are used as a reference to engineer plasmids or other relatively short pieces of DNA, and sometimes for longer genomic DNA. There are other ways of mapping features on longer DNA molecules, such as mapping by transduction.[1] One approach to constructing a restriction map of a DNA molecule is to sequence the whole molecule and run the sequence through a computer program that finds the recognition sites present for every known restriction enzyme. Before sequencing was automated, it would have been prohibitively expensive to sequence an entire DNA strand. To find the relative positions of restriction sites on a plasmid, a technique involving single and double restriction digests is used. Based on the sizes of the resulting DNA fragments, the positions of the sites can be inferred. Restriction mapping is particularly useful for determining the orientation of an insert in a cloning vector, by mapping the position of an off-center restriction site in the insert.[2] The experimental procedure first requires a sample of purified plasmid DNA for each digest to be run. Digestion is then performed with each chosen enzyme or enzyme combination. The resulting samples are subsequently run on an electrophoresis gel, typically an agarose gel. The first step following the completion of electrophoresis is to add up the sizes of the fragments in each lane. The fragments in each lane should sum to the size of the original molecule, and the totals from the different digests should agree with one another. If the fragment sizes do not properly add up, there are two likely problems. In one case, some of the smaller fragments may have run off the end of the gel. This frequently occurs if the gel is run too long.
A second possible source of error is that the gel was not dense enough to resolve fragments close in size, leaving such fragments unseparated. If all of the digests produce fragments that add up correctly, one may infer the positions of the REN (restriction endonuclease) sites by placing them at positions on the original DNA molecule that satisfy the fragment sizes produced by all of the digests. The purified plasmid DNA required for such digests is commonly obtained by alkaline lysis. In this technique the cells are lysed in alkaline conditions. The DNA in the mixture is denatured (the strands separated) by disrupting the hydrogen bonds between the two strands. The large genomic DNA is prone to tangling and staying denatured when the pH is lowered during neutralization. In other words, the strands come back together in a disordered fashion, base-pairing randomly. The strands of the circular supercoiled plasmids, by contrast, stay relatively closely aligned and renature correctly. Therefore, the genomic DNA forms an insoluble aggregate while the supercoiled plasmids are left in solution. This can be followed by phenol extraction to remove proteins and other molecules, and the DNA can then be subjected to ethanol precipitation to concentrate the sample.
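As a rough illustration of this inference, single- and double-digest fragment sizes for a short linear molecule can be brute-forced to recover consistent site positions. The `double_digest_maps` helper and all sizes are invented for the example, and the sketch assumes a linear (not circular) molecule with complete digestion.

```python
from itertools import permutations

def sites_from_fragments(frags):
    """Candidate cut positions for each possible ordering of fragment sizes."""
    candidates = set()
    for order in set(permutations(frags)):
        pos, sites = 0, []
        for f in order[:-1]:          # the last fragment ends the molecule
            pos += f
            sites.append(pos)
        candidates.add(tuple(sites))
    return candidates

def double_digest_maps(frags_a, frags_b, frags_ab):
    """Site placements for enzymes A and B consistent with all three digests."""
    total = sum(frags_a)
    solutions = []
    for sa in sites_from_fragments(frags_a):
        for sb in sites_from_fragments(frags_b):
            bounds = [0] + sorted(set(sa) | set(sb)) + [total]
            sizes = sorted(b - a for a, b in zip(bounds, bounds[1:]))
            if sizes == sorted(frags_ab):
                solutions.append((sorted(sa), sorted(sb)))
    return solutions

# Linear 10 kb molecule: enzyme A gives 3+4+3, enzyme B gives 5+5,
# and the double digest gives 3+2+2+3 (sizes in kb; invented numbers).
print(double_digest_maps([3, 4, 3], [5, 5], [3, 2, 2, 3]))
```

For these inputs the only consistent map places the A sites at 3 and 7 kb and the B site at 5 kb; real mapping problems can admit multiple solutions, which is why extra digests are sometimes needed.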
https://en.wikipedia.org/wiki/Restriction_map
The restriction modification system (RM system) is found in bacteria and archaea, and provides a defense against foreign DNA, such as that borne by bacteriophages. Bacteria have restriction enzymes, also called restriction endonucleases, which cleave double-stranded DNA at specific points into fragments, which are then degraded further by other endonucleases. This prevents infection by effectively destroying the foreign DNA introduced by an infectious agent (such as a bacteriophage). Approximately one-quarter of known bacteria possess RM systems, and of those about one-half have more than one type of system. As the sequences recognized by the restriction enzymes are very short, the bacterium itself will almost certainly contain some within its genome. In order to prevent destruction of its own DNA by the restriction enzymes, methyl groups are added. These modifications must not interfere with DNA base-pairing, and therefore usually only a few specific bases are modified on each strand. Endonucleases cleave internal (non-terminal) phosphodiester bonds. They do so only after recognising specific sequences in DNA, which are usually 4–6 base pairs long and often palindromic. The RM system was first discovered by Salvatore Luria and Mary Human in 1952 and 1953.[1][2] They found that a bacteriophage growing within an infected bacterium could be modified, so that upon its release and re-infection of a related bacterium its growth is restricted (inhibited; also described by Luria in his autobiography on pages 45 and 99 in 1984).[3] In 1953, Jean Weigle and Giuseppe Bertani reported similar examples of host-controlled modification using a different bacteriophage system.[4] Later work by Daisy Roulland-Dussoix and Werner Arber in 1962[5] and many other subsequent workers led to the understanding that restriction was due to attack and breakdown of the modified bacteriophage's DNA by specific enzymes of the recipient bacteria.
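The protective logic described above can be sketched in a toy model in which methylated host sites are spared while unmethylated foreign DNA is cleaved. The recognition site, sequences and methylation positions are all illustrative, not taken from any real organism.

```python
# Toy type II R-M system: the endonuclease cleaves every recognition site
# except those the host methyltransferase has marked.
def cleavable_sites(seq, site, methylated):
    """Positions of recognition sites not protected by methylation."""
    hits, start = [], 0
    while (i := seq.find(site, start)) != -1:
        hits.append(i)
        start = i + 1
    return [i for i in hits if i not in methylated]

host  = "AAGAATTCAAGAATTCAA"
phage = host                       # same sequence, but unmethylated
host_methylated = {2, 10}          # the host marks its own two sites
print(cleavable_sites(host,  "GAATTC", host_methylated))   # [] -- protected
print(cleavable_sites(phage, "GAATTC", set()))             # [2, 10] -- cleaved
```

This is the essence of the defense: identical sequences meet different fates depending solely on their methylation status.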
Further work by Hamilton O. Smith isolated HindII, the first of the class of enzymes now known as restriction enzymes, while Daniel Nathans showed that it could be used for restriction mapping.[6] When these enzymes were isolated in the laboratory, they could be used for controlled manipulation of DNA, thus providing the foundation for the development of genetic engineering. Werner Arber, Daniel Nathans, and Hamilton Smith were awarded the Nobel Prize in Physiology or Medicine in 1978 for their work on restriction-modification.[citation needed] There are four categories of restriction modification systems: type I, type II, type III and type IV.[7] All have restriction enzyme activity and a methylase activity (except for type IV, which has no methylase activity). They were named in the order of discovery, although the type II system is the most common.[7] Type I systems are the most complex, consisting of three polypeptides: R (restriction), M (modification), and S (specificity). The resulting complex can both cleave and methylate DNA. Both reactions require ATP, and cleavage often occurs a considerable distance from the recognition site. The S subunit determines the specificity of both restriction and methylation. Cleavage occurs at variable distances from the recognition sequence, so discrete bands are not easily visualized by gel electrophoresis.[citation needed] Type II systems are the simplest and the most prevalent.[8] Instead of working as a complex, the methyltransferase and endonuclease are encoded as two separate proteins and act independently (there is no specificity protein). Both proteins recognize the same recognition site, and therefore compete for activity. The methyltransferase acts as a monomer, methylating the duplex one strand at a time. The endonuclease acts as a homodimer, which facilitates the cleavage of both strands.
Cleavage occurs at a defined position close to or within the recognition sequence, thus producing discrete fragments during gel electrophoresis. For this reason, type II systems are used in labs for DNA analysis and gene cloning.[citation needed] Type III systems have R (res) and M (mod) proteins that form a complex of modification and cleavage. The M protein, however, can methylate on its own. Unlike most other known mechanisms, methylation occurs on only one strand of the DNA. The complex formed by the R and M proteins competes with itself, modifying and restricting the same substrate, which results in incomplete digestion.[9][10] Type IV systems are not true RM systems because they contain only a restriction enzyme and not a methylase. Unlike the other types, type IV restriction enzymes recognize and cut only modified DNA.[11] Neisseria meningitidis has multiple type II restriction endonuclease systems that are employed in natural genetic transformation. Natural genetic transformation is a process by which a recipient bacterial cell can take up DNA from a neighboring donor bacterial cell and integrate this DNA into its genome by recombination. Although early work on restriction modification systems focused on the benefit to bacteria of protecting themselves against invading bacteriophage DNA or other foreign DNA, it is now known that these systems can also be used to restrict DNA introduced by natural transformation from other members of the same, or related, species.[citation needed] In the pathogenic bacterium Neisseria meningitidis (meningococci), competence for transformation is a highly evolved and complex process in which multiple proteins at the bacterial surface, in the membranes and in the cytoplasm interact with the incoming transforming DNA. Restriction-modification systems are abundant in the genus Neisseria. N. meningitidis has multiple type II restriction endonuclease systems.[12] The restriction modification systems in N.
meningitidis vary in specificity between different clades. [ 12 ] [ 13 ] This specificity provides an efficient barrier against DNA exchange between clades. [ 12 ] Luria, on page 99 of his autobiography, [ 3 ] referred to such a restriction behavior as "an extreme instance of unfriendliness." Restriction-modification appears to be a major driver of sexual isolation and speciation in the meningococci. [ 14 ] Caugant and Maiden [ 15 ] suggested that restriction-modification systems in meningococci may act to allow genetic exchange among very close relatives while reducing (but not completely preventing) genetic exchange among meningococci belonging to different clonal complexes and related species. [ citation needed ] RM systems can also act as selfish genetic elements , forcing their maintenance on the cell through postsegregational cell killing. [ 16 ] Some viruses have evolved ways of subverting the restriction modification system, usually by modifying their own DNA, by adding methyl or glycosyl groups to it, thus blocking the restriction enzymes. Other viruses, such as bacteriophages T3 and T7, encode proteins that inhibit the restriction enzymes. [ citation needed ] To counteract these viruses, some bacteria have evolved restriction systems which only recognize and cleave modified DNA, but do not act upon the host's unmodified DNA. Some prokaryotes have developed multiple types of restriction modification systems. [ citation needed ] R-M systems are more abundant in promiscuous species, wherein they establish preferential paths of genetic exchange within and between lineages with cognate R-M systems. [ 17 ] Because the repertoire and/or specificity of R-M systems in bacterial lineages vary quickly, the preferential fluxes of genetic transfer within species are expected to constantly change, producing time-dependent networks of gene transfer. 
[citation needed] (a) Cloning: RM systems can be cloned into plasmids and selected because of the resistance provided by the methylation enzyme. Once the plasmid begins to replicate, the methylation enzyme will be produced and methylate the plasmid DNA, protecting it from a specific restriction enzyme.[citation needed] (b) Restriction fragment length polymorphisms: Restriction enzymes are also used to analyse the composition of DNA with regard to the presence or absence of mutations that affect REase cleavage specificity. When wild-type and mutant DNA are analysed by digestion with different REases, the gel-electrophoretic products differ in length, largely because mutations can abolish REase recognition sites, so that the mutant DNA is not cleaved in the same pattern as the wild-type.[citation needed] The bacterial R-M system has been proposed as a model for devising human anti-viral gene or genomic vaccines and therapies, since the RM system serves an innate defense role in bacteria by restricting the tropism of bacteriophages.[18] Research is ongoing on REases and ZFNs that can cleave the DNA of various human viruses, including HSV-2, high-risk HPVs and HIV-1, with the ultimate goal of inducing targeted mutagenesis and aberrations in human-infecting viruses.[19][20][21] The human genome already contains remnants of retroviral genomes that have been inactivated and harnessed for self-gain. Indeed, the mechanisms for silencing active L1 genomic retroelements by the three prime repair exonuclease 1 (TREX1) and excision repair cross complementing 1 (ERCC) appear to mimic the action of RM systems in bacteria, as does the non-homologous end-joining (NHEJ) that follows the use of ZFNs without a repair template.[22][23] A major advance is the creation of artificial restriction enzymes made by linking the FokI DNA cleavage domain with an array of DNA-binding proteins or zinc finger arrays, now denoted zinc finger nucleases (ZFNs).
[24] ZFNs are a powerful tool for host genome editing due to their enhanced sequence specificity. ZFNs work in pairs, their dimerization being mediated in situ through the FokI domain. Each zinc finger array (ZFA) is capable of recognizing 9–12 base pairs, making for 18–24 base pairs for the pair. A 5–7 bp spacer between the cleavage sites further enhances the specificity of ZFNs, making them a safer and more precise tool that can be applied in humans. A recent phase I clinical trial of ZFNs for the targeted abolition of the CCR5 co-receptor for HIV-1 has been undertaken.[25] R-M systems are major players in the co-evolutionary interaction between mobile genetic elements (MGEs) and their hosts.[26] Genes encoding R-M systems have been reported to move between prokaryotic genomes within MGEs such as plasmids, prophages, insertion sequences/transposons, integrative conjugative elements (ICEs) and integrons. However, it was recently found that there are relatively few R-M systems in plasmids, some in prophages, and practically none in phages. On the other hand, all these MGEs encode a large number of solitary R-M genes, notably MTases.[26] In light of this, it is likely that R-M mobility may be less dependent on MGEs and more dependent, for example, on the existence of small genomic integration hotspots. It is also possible that R-M systems frequently exploit other mechanisms, such as natural transformation, vesicles, nanotubes, gene transfer agents or generalized transduction, in order to move between genomes.[citation needed]
https://en.wikipedia.org/wiki/Restriction_modification_system
Restriction sites, or restriction recognition sites, are specific sequences of nucleotides (4–8 base pairs in length[1]) on a DNA molecule that are recognized by restriction enzymes. These are generally palindromic sequences[2] (because restriction enzymes usually bind as homodimers), and a particular restriction enzyme may cut the sequence between two nucleotides within its recognition site, or somewhere nearby. For example, the common restriction enzyme EcoRI recognizes the palindromic sequence GAATTC and cuts between the G and the A on both the top and bottom strands. This leaves an AATT overhang (an end portion of a DNA strand with no attached complement), known as a sticky end,[2] on each cut end. The overhang can then be used to ligate in (see DNA ligase) a piece of DNA with a complementary overhang (another EcoRI-cut piece, for example). Some restriction enzymes cut DNA at a restriction site in a manner which leaves no overhang, called a blunt end.[2] Blunt ends are much less likely to be joined by a DNA ligase, because a blunt end lacks the overhanging unpaired bases that can anneal with a complementary partner.[3] Sticky ends, however, are more likely to be joined successfully with the help of a DNA ligase because of their exposed, unpaired nucleotides. For example, a sticky end trailing with AATTG is more likely to be joined by a ligase than a blunt end where both the 5' and 3' strands are fully paired, because the AATTG overhang can first anneal to a complementary overhang, holding the ends together for the ligase.[4] Restriction sites can be used for multiple applications in molecular biology, such as identifying restriction fragment length polymorphisms (RFLPs). Restriction sites are also an important consideration when designing plasmids.
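The EcoRI cut described above can be sketched on the top strand only. The `ecori_cut` helper and the input sequence are invented for illustration; a real digest acts on both strands, with the bottom-strand cut offset symmetrically, which is what produces the 4-base AATT overhangs.

```python
def ecori_cut(seq):
    """Cut the top strand of each GAATTC site between the G and the first A."""
    site, offset = "GAATTC", 1    # cut one base into the recognition site
    cuts, start = [], 0
    while (i := seq.find(site, start)) != -1:
        cuts.append(i + offset)
        start = i + 1
    bounds = [0] + cuts + [len(seq)]
    return [seq[a:b] for a, b in zip(bounds, bounds[1:])]

frags = ecori_cut("AAGAATTCTTGAATTCAA")
print(frags)   # ['AAG', 'AATTCTTG', 'AATTCAA']
# Each downstream fragment begins with AATT: the single-stranded overhang
# that makes these ends "sticky" once the bottom strand is cut as well.
```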
Several databases exist for restriction sites and enzymes, of which the largest noncommercial database is REBASE.[5][6] Recently, it has been shown that statistically significant nullomers (short motifs that are absent from a genome although statistically expected to occur) in virus genomes are restriction sites, indicating that viruses have probably eliminated these motifs to facilitate invasion of bacterial hosts.[7] The Nullomers Database contains a comprehensive catalogue of minimal absent motifs, many of which may be as-yet-unknown restriction motifs.
https://en.wikipedia.org/wiki/Restriction_site
Restriction site associated DNA (RAD) markers are a type of genetic marker useful for association mapping, QTL mapping, population genetics, ecological genetics and evolutionary genetics. The use of RAD markers for genetic mapping is often called RAD mapping. An important aspect of RAD markers and mapping is the process of isolating RAD tags, which are the DNA sequences that immediately flank each instance of a particular restriction site of a restriction enzyme throughout the genome.[1] Once RAD tags have been isolated, they can be used to identify and genotype DNA sequence polymorphisms, mainly in the form of single nucleotide polymorphisms (SNPs).[1] Polymorphisms that are identified and genotyped by isolating and analyzing RAD tags are referred to as RAD markers. Although genotyping by sequencing presents an approach similar to the RAD-seq method, they differ in some substantial ways.[2][3][4] The use of the flanking DNA sequences around each restriction site is an important aspect of RAD tags.[1] The density of RAD tags in a genome depends on the restriction enzyme used during the isolation process.[5] There are other restriction site marker techniques, like RFLP or amplified fragment length polymorphism (AFLP), which use fragment length polymorphism caused by different restriction sites for the detection of genetic polymorphism. The use of the flanking DNA sequences in RAD tag techniques is referred to as a reduced-representation method.[2] The initial procedure to isolate RAD tags involved digesting DNA with a particular restriction enzyme, ligating biotinylated adapters to the overhangs, randomly shearing the DNA into fragments much smaller than the average distance between restriction sites, and isolating the biotinylated fragments using streptavidin beads.[1] This procedure was used initially to isolate RAD tags for microarray analysis.
[1][6][7] More recently, the RAD tag isolation procedure has been modified for use with high-throughput sequencing on the Illumina platform, which has the benefit of greatly reduced raw error rates and high throughput.[5] The new procedure involves digesting DNA with a particular restriction enzyme (for example SbfI, NsiI, …), ligating the first adapter, called P1, to the overhangs, randomly shearing the DNA into fragments much smaller than the average distance between restriction sites, repairing the sheared ends to blunt ends and ligating the second adapter (P2), and using PCR to specifically amplify fragments that contain both adapters. Importantly, the first adapter contains a short DNA sequence barcode, called an MID (molecular identifier), that is used as a marker to identify different DNA samples that are pooled together and sequenced in the same reaction.[5][8] The use of high-throughput sequencing to analyze RAD tags can be classified as reduced-representation sequencing, which includes, among other things, RADSeq (RAD sequencing).[2] Once RAD tags have been isolated, they can be used to identify and genotype DNA sequence polymorphisms such as single nucleotide polymorphisms (SNPs).[1][5] These polymorphic sites are referred to as RAD markers. The most efficient way to find RAD tags is by high-throughput DNA sequencing,[5][8] called RAD tag sequencing, RAD sequencing, RAD-Seq, or RADSeq. Prior to the development of high-throughput sequencing technologies, RAD markers were identified by hybridizing RAD tags to microarrays.[1][6][7] Due to the low sensitivity of microarrays, this approach can only detect either DNA sequence polymorphisms that disrupt restriction sites and lead to the absence of RAD tags, or substantial DNA sequence polymorphisms that disrupt RAD tag hybridization.
Therefore, the genetic marker density that can be achieved with microarrays is much lower than what is possible with high-throughput DNA sequencing.[9] RAD markers were first implemented using microarrays and later adapted for NGS (next-generation sequencing).[9] The approach was developed jointly by Eric Johnson's and William Cresko's laboratories at the University of Oregon around 2006. They confirmed the utility of RAD markers by identifying recombination breakpoints in D. melanogaster and by detecting QTLs in threespine sticklebacks.[1] In 2012 a modified RAD tagging method called double digest RADseq (ddRADseq) was proposed.[10][11] By adding a second restriction enzyme in place of the random shearing step, together with a tight DNA size-selection step, it is possible to perform low-cost population genotyping. This can be an especially powerful tool for whole-genome scans for selection and for studies of population differentiation or adaptation.[11] A study in 2016 presented a novel method called hybridization RAD (hyRAD),[12] in which biotinylated RAD fragments, covering a random fraction of the genome, are used as baits for capturing homologous fragments from genomic shotgun sequencing libraries. DNA fragments are first generated by applying the ddRADseq protocol to fresh samples, and then used as hybridization-capture probes to enrich shotgun libraries in the fragments of interest. This simple and cost-effective approach allows sequencing of orthologous loci even from highly degraded DNA samples, opening new avenues of research in the field of museomics. Another advantage of the method is that it does not rely on the presence of the restriction site, improving among-sample locus coverage. The technique was first tested on museum and fresh samples of Oedaleus decorus, a Palearctic grasshopper species, and later implemented in the regent honeyeater,[13] arthropods,[14] and other species. A lab protocol was developed to implement hyRAD in birds.[15]
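The dependence of tag density on the chosen enzyme can be estimated with a simple expected-frequency calculation. This assumes a random genome with equal base frequencies, which real genomes violate (GC content, sequence bias), so the numbers are order-of-magnitude estimates only.

```python
# Expected number of restriction sites (hence RAD-tag pairs) for an enzyme
# with a site_length-bp recognition site in a random genome: each position
# matches with probability (1/4)**site_length.
def expected_sites(genome_length, site_length):
    return genome_length / 4 ** site_length

# SbfI recognizes an 8-bp site; in a 3 Gb genome this predicts roughly:
print(round(expected_sites(3_000_000_000, 8)))   # 45776 sites
# An enzyme with a 6-bp site cuts 4**2 = 16x more often, giving denser tags.
print(round(expected_sites(3_000_000_000, 6)))
```

Since each site contributes two flanking RAD tags, the choice between a 6- and an 8-cutter trades marker density against sequencing cost per sample.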
https://en.wikipedia.org/wiki/Restriction_site_associated_DNA_markers
The Reststrahlen effect (German: “residual rays”) is a reflectance phenomenon in which electromagnetic radiation within a narrow energy band cannot propagate within a given medium, due to a change in refractive index concurrent with a specific absorbance band of the medium; this narrow energy band is termed the Reststrahlen band . Because it cannot propagate, normally incident radiation in the Reststrahlen band experiences strong or total reflection from the medium. The energies at which Reststrahlen bands occur vary and are particular to the individual compound. Numerous physical attributes of a compound affect the appearance of the Reststrahlen band, including the phonon band gap, particle/grain size, the presence of strongly absorbing compounds, and optically opaque bands in the infrared. The term Reststrahlen was coined following the observation by Heinrich Rubens in 1898 that repeated reflection of an infrared beam at the surface of a given material suppresses radiation at all wavelengths except for certain spectral intervals; Rubens detected residual wavelengths around 60 μm . [ 1 ] The measured intensity in these special intervals (the Reststrahlen range) indicates a reflectance of up to 80% or even more, while the maximum reflectance due to ordinary infrared bands of dielectric materials is usually below 10%. After four reflections, the intensity of the latter is reduced by a factor of 10 −4 compared with the intensity of the incident radiation, while light in the Reststrahlen range can retain about 40% of its original intensity by the time it reaches the detector. This contrast increases with the number of reflections, which explains the observation made by Rubens and the term Reststrahlen (residual rays) used to describe this spectral selection. [ 2 ] Reststrahlen bands manifest in diffuse reflectance infrared absorption spectra as complete band reversal, or in infrared emission spectra as a minimum in emissivity.
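The quoted figures follow from simple compounding of the per-reflection reflectance. A one-line check (the reflectance values 10% and 80% are taken as stated in the text, not measured here):

```python
# Intensity retained after k reflections scales as R**k for per-bounce reflectance R.
ordinary = 0.10 ** 4       # <10% reflectance per bounce -> about 1e-4 after four bounces
reststrahlen = 0.80 ** 4   # ~80% reflectance per bounce -> about 0.41, i.e. roughly 40%
contrast = reststrahlen / ordinary   # spectral contrast after four reflections
```

This is why repeated reflections act as a spectral filter: the contrast between the two regimes grows geometrically with the number of bounces.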
The Reststrahlen effect is used to investigate the properties of semiconductors ; it is also used in geophysics and meteorology .
https://en.wikipedia.org/wiki/Reststrahlen_effect
In mathematics , the resultant of two polynomials is a polynomial expression of their coefficients that is equal to zero if and only if the polynomials have a common root (possibly in a field extension ), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant . [ 1 ] The resultant is widely used in number theory , either directly or through the discriminant , which is essentially the resultant of a polynomial and its derivative . The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra , and is a built-in function of most computer algebra systems . It is used, among others, for cylindrical algebraic decomposition , integration of rational functions and drawing of curves defined by a bivariate polynomial equation . The resultant of n homogeneous polynomials in n variables (also called multivariate resultant , or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay , of the usual resultant. [ 2 ] It is, with Gröbner bases , one of the main tools of elimination theory . The resultant of two univariate polynomials A and B is commonly denoted res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} or Res ⁡ ( A , B ) . {\displaystyle \operatorname {Res} (A,B).} In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} or Res x ⁡ ( A , B ) . {\displaystyle \operatorname {Res} _{x}(A,B).} The degrees of the polynomials are used in the definition of the resultant. 
However, a polynomial of degree d may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as res d , e ⁡ ( A , B ) {\displaystyle \operatorname {res} _{d,e}(A,B)} or res x d , e ⁡ ( A , B ) . {\displaystyle \operatorname {res} _{x}^{d,e}(A,B).} The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix . More precisely, let A = a 0 x d + a 1 x d − 1 + ⋯ + a d {\displaystyle A=a_{0}x^{d}+a_{1}x^{d-1}+\cdots +a_{d}} and B = b 0 x e + b 1 x e − 1 + ⋯ + b e {\displaystyle B=b_{0}x^{e}+b_{1}x^{e-1}+\cdots +b_{e}} be nonzero polynomials of degrees d and e respectively. Let us denote by P i {\displaystyle {\mathcal {P}}_{i}} the vector space (or free module if the coefficients belong to a commutative ring) of dimension i whose elements are the polynomials of degree strictly less than i . The map φ : P e × P d → P d + e {\displaystyle \varphi :{\mathcal {P}}_{e}\times {\mathcal {P}}_{d}\rightarrow {\mathcal {P}}_{d+e}} such that φ ( P , Q ) = A P + B Q {\displaystyle \varphi (P,Q)=AP+BQ} is a linear map between two spaces of the same dimension. Over the basis of the powers of x (listed in descending order), this map is represented by a square matrix of dimension d + e , which is called the Sylvester matrix of A and B (for many authors and in the article Sylvester matrix , the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map). 
The resultant of A and B is thus the determinant | a 0 0 ⋯ 0 b 0 0 ⋯ 0 a 1 a 0 ⋯ 0 b 1 b 0 ⋯ 0 a 2 a 1 ⋱ 0 b 2 b 1 ⋱ 0 ⋮ ⋮ ⋱ a 0 ⋮ ⋮ ⋱ b 0 a d a d − 1 ⋯ ⋮ b e b e − 1 ⋯ ⋮ 0 a d ⋱ ⋮ 0 b e ⋱ ⋮ ⋮ ⋮ ⋱ a d − 1 ⋮ ⋮ ⋱ b e − 1 0 0 ⋯ a d 0 0 ⋯ b e | , {\displaystyle {\begin{vmatrix}a_{0}&0&\cdots &0&b_{0}&0&\cdots &0\\a_{1}&a_{0}&\cdots &0&b_{1}&b_{0}&\cdots &0\\a_{2}&a_{1}&\ddots &0&b_{2}&b_{1}&\ddots &0\\\vdots &\vdots &\ddots &a_{0}&\vdots &\vdots &\ddots &b_{0}\\a_{d}&a_{d-1}&\cdots &\vdots &b_{e}&b_{e-1}&\cdots &\vdots \\0&a_{d}&\ddots &\vdots &0&b_{e}&\ddots &\vdots \\\vdots &\vdots &\ddots &a_{d-1}&\vdots &\vdots &\ddots &b_{e-1}\\0&0&\cdots &a_{d}&0&0&\cdots &b_{e}\end{vmatrix}},} which has e columns of a i and d columns of b j (the fact that the first column of a 's and the first column of b 's have the same length, that is d = e , is here only for simplifying the display of the determinant). For instance, taking d = 3 and e = 2 we get | a 0 0 b 0 0 0 a 1 a 0 b 1 b 0 0 a 2 a 1 b 2 b 1 b 0 a 3 a 2 0 b 2 b 1 0 a 3 0 0 b 2 | . {\displaystyle {\begin{vmatrix}a_{0}&0&b_{0}&0&0\\a_{1}&a_{0}&b_{1}&b_{0}&0\\a_{2}&a_{1}&b_{2}&b_{1}&b_{0}\\a_{3}&a_{2}&0&b_{2}&b_{1}\\0&a_{3}&0&0&b_{2}\end{vmatrix}}.} If the coefficients of the polynomials belong to an integral domain , then res ⁡ ( A , B ) = a 0 e b 0 d ∏ 1 ≤ i ≤ d 1 ≤ j ≤ e ( λ i − μ j ) = a 0 e ∏ i = 1 d B ( λ i ) = ( − 1 ) d e b 0 d ∏ j = 1 e A ( μ j ) , {\displaystyle \operatorname {res} (A,B)=a_{0}^{e}b_{0}^{d}\prod _{\begin{array}{c}1\leq i\leq d\\1\leq j\leq e\end{array}}(\lambda _{i}-\mu _{j})=a_{0}^{e}\prod _{i=1}^{d}B(\lambda _{i})=(-1)^{de}b_{0}^{d}\prod _{j=1}^{e}A(\mu _{j}),} where λ 1 , … , λ d {\displaystyle \lambda _{1},\dots ,\lambda _{d}} and μ 1 , … , μ e {\displaystyle \mu _{1},\dots ,\mu _{e}} are respectively the roots, counted with their multiplicities, of A and B in any algebraically closed field containing the integral domain. 
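The Sylvester determinant is easy to evaluate with exact rational arithmetic. The following sketch (the helper names are mine, not from the article) builds the matrix in the column layout displayed above and checks the root-product identity on A = x³ − 3x + 2 = (x − 1)²(x + 2): against B = x² − 9 the product formula gives (−1)^{de} b₀^d A(3)A(−3) = −320, while against B = x² − 1, which shares the root x = 1 with A, the resultant vanishes.

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix of A and B; coefficient lists run from the leading
    coefficient a0 down to ad, matching the determinant displayed above."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):                 # e shifted columns of A's coefficients
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):                 # d shifted columns of B's coefficients
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by exact Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign, prod = len(m), 1, Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if m[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            m[k], m[p] = m[p], m[k]
            sign = -sign
        prod *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return sign * prod

A = [1, 0, -3, 2]                    # x^3 - 3x + 2 = (x - 1)^2 (x + 2)
r1 = det(sylvester(A, [1, 0, -9]))   # no common root with x^2 - 9
r2 = det(sylvester(A, [1, 0, -1]))   # common root x = 1 with x^2 - 1
```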
This is a straightforward consequence of the characterizing properties of the resultant that appear below. In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers . In this section and its subsections, A and B are two polynomials in x of respective degrees d and e , and their resultant is denoted res ⁡ ( A , B ) . {\displaystyle \operatorname {res} (A,B).} The following properties hold for the resultant of two polynomials with coefficients in a commutative ring R . If R is a field or more generally an integral domain , the resultant is the unique function of the coefficients of two polynomials that satisfies these properties. Let A and B be two polynomials of respective degrees d and e with coefficients in a commutative ring R , and φ : R → S {\displaystyle \varphi \colon R\to S} a ring homomorphism of R into another commutative ring S . Applying φ {\displaystyle \varphi } to the coefficients of a polynomial extends φ {\displaystyle \varphi } to a homomorphism of polynomial rings R [ x ] → S [ x ] {\displaystyle R[x]\to S[x]} , which is also denoted φ . {\displaystyle \varphi .} With this notation, if φ {\displaystyle \varphi } preserves the degrees of A and B (that is, if φ ( a 0 ) ≠ 0 {\displaystyle \varphi (a_{0})\neq 0} and φ ( b 0 ) ≠ 0 {\displaystyle \varphi (b_{0})\neq 0} ), then φ ( res ⁡ ( A , B ) ) = res ⁡ ( φ ( A ) , φ ( B ) ) . {\displaystyle \varphi (\operatorname {res} (A,B))=\operatorname {res} (\varphi (A),\varphi (B)).} This property is easily deduced from the definition of the resultant as a determinant. It is mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem . When R is a polynomial ring in other indeterminates, and S is the ring obtained by specializing to numerical values some or all indeterminates of R , this property may be restated as if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant . This property is fundamental, for example, for cylindrical algebraic decomposition .
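The modular strategy just mentioned can be sketched in a few lines of Python (the helper names are mine, not from the article): the resultant of A = x³ − 3x + 2 and B = x² − 9, which equals −320 by the product formula, is recovered from its images modulo three primes by the Chinese remainder theorem, mapped back into the symmetric range around zero.

```python
from math import prod

def res_mod(a, b, p):
    """Resultant of A and B modulo the prime p, via the Sylvester determinant
    computed by Gaussian elimination over GF(p). Coefficient lists run from the
    leading coefficient down; leading coefficients must be nonzero mod p."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[0] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = c % p
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = c % p
    det, sign = 1, 1
    for k in range(n):
        piv = next((i for i in range(k, n) if m[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            m[k], m[piv] = m[piv], m[k]
            sign = -sign
        det = det * m[k][k] % p
        inv = pow(m[k][k], -1, p)          # modular inverse of the pivot
        for i in range(k + 1, n):
            f = m[i][k] * inv % p
            for j in range(k, n):
                m[i][j] = (m[i][j] - f * m[k][j]) % p
    return sign * det % p

def crt(residues, moduli):
    """Chinese remainder reconstruction, mapped to the symmetric range."""
    M = prod(moduli)
    x = 0
    for r, p in zip(residues, moduli):
        Mp = M // p
        x = (x + r * Mp * pow(Mp, -1, p)) % M
    return x if x <= M // 2 else x - M

A, B = [1, 0, -3, 2], [1, 0, -9]           # x^3 - 3x + 2 and x^2 - 9
primes = [101, 103, 107]                   # product comfortably exceeds 2*|res|
r = crt([res_mod(A, B, p) for p in primes], primes)
```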
This means that the property of the resultant being zero is invariant under linear and projective changes of the variable. It is only when ⁠ B C {\displaystyle BC} ⁠ and ⁠ A {\displaystyle A} ⁠ have the same degree that ⁠ δ {\displaystyle \delta } ⁠ cannot be deduced from the degrees of the given polynomials. If either B is monic , or deg C < deg A – deg B , then res ⁡ ( B , A − C B ) = res ⁡ ( B , A ) , {\displaystyle \operatorname {res} (B,A-CB)=\operatorname {res} (B,A),} If f = deg C > deg A – deg B = d – e , then res ⁡ ( B , A − C B ) = b 0 e + f − d res ⁡ ( B , A ) . {\displaystyle \operatorname {res} (B,A-CB)=b_{0}^{e+f-d}\operatorname {res} (B,A).} These properties imply that in the Euclidean algorithm for polynomials , and all its variants ( pseudo-remainder sequences ), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm , which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves O ( d e ) {\displaystyle O(de)} arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires O ( ( d + e ) 3 ) {\displaystyle O((d+e)^{3})} arithmetic operations. 
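The recursion underlying these formulae fits in a few lines of Python (the helper names are mine). The sketch below computes res(A, B) over the rationals in O(de) coefficient operations, using res(A, B) = (−1)^{de} b₀^{d−f} res(B, R), where R = A mod B has degree f; for polynomials over the integers one would use the subresultant pseudo-remainders instead to avoid fractions.

```python
from fractions import Fraction

def polyrem(a, b):
    """Remainder of A divided by B; coefficient lists, leading term first."""
    a = a[:]
    d, e = len(a) - 1, len(b) - 1
    for i in range(d - e + 1):
        f = a[i] / b[0]
        for j in range(e + 1):
            a[i + j] -= f * b[j]
    r = a[d - e + 1:]
    while len(r) > 1 and r[0] == 0:   # strip leading zeros
        r = r[1:]
    return r

def resultant(a, b):
    """res(A, B) by the Euclidean algorithm over the rationals."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    d, e = len(a) - 1, len(b) - 1
    if d < e:                          # res(A, B) = (-1)^{de} res(B, A)
        return (-1) ** (d * e) * resultant(b, a)
    if e == 0:                         # B is a nonzero constant b0
        return b[0] ** d
    r = polyrem(a, b)
    if all(c == 0 for c in r):         # A and B share a factor
        return Fraction(0)
    f = len(r) - 1
    return (-1) ** (d * e) * b[0] ** (d - f) * resultant(b, r)

r1 = resultant([1, 0, -3, 2], [1, 0, -9])   # expect -320
r2 = resultant([1, 0, -3, 2], [1, 0, -1])   # common root x = 1, expect 0
```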
In this section, we consider two polynomials A = a 0 x d + a 1 x d − 1 + ⋯ + a d {\displaystyle A=a_{0}x^{d}+a_{1}x^{d-1}+\cdots +a_{d}} and B = b 0 x e + b 1 x e − 1 + ⋯ + b e {\displaystyle B=b_{0}x^{e}+b_{1}x^{e-1}+\cdots +b_{e}} whose d + e + 2 coefficients are distinct indeterminates . Let R = Z [ a 0 , … , a d , b 0 , … , b e ] {\displaystyle R=\mathbb {Z} [a_{0},\ldots ,a_{d},b_{0},\ldots ,b_{e}]} be the polynomial ring over the integers defined by these indeterminates. The resultant res ⁡ ( A , B ) {\displaystyle \operatorname {res} (A,B)} is often called the generic resultant for the degrees d and e . The generic resultant is homogeneous in various ways; in particular, since the Sylvester matrix has e columns of coefficients of A and d columns of coefficients of B , it is homogeneous of degree e in a 0 , … , a d {\displaystyle a_{0},\ldots ,a_{d}} and of degree d in b 0 , … , b e . {\displaystyle b_{0},\ldots ,b_{e}.} Let I = ⟨ A , B ⟩ {\displaystyle I=\langle A,B\rangle } be the ideal generated by two polynomials A and B in a polynomial ring R [ x ] , {\displaystyle R[x],} where R = k [ y 1 , … , y n ] {\displaystyle R=k[y_{1},\ldots ,y_{n}]} is itself a polynomial ring over a field. If at least one of A and B is monic in x , then res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} belongs to I ∩ R , {\displaystyle I\cap R,} and the ideals I ∩ R {\displaystyle I\cap R} and R res x ⁡ ( A , B ) {\displaystyle R\operatorname {res} _{x}(A,B)} have the same zeros. The first assertion is a basic property of the resultant; the second one can be proved as follows. As at least one of A and B is monic, a tuple ( β 1 , … , β n ) {\displaystyle (\beta _{1},\ldots ,\beta _{n})} is a zero of res x ⁡ ( A , B ) {\displaystyle \operatorname {res} _{x}(A,B)} if and only if there exists α {\displaystyle \alpha } such that ( β 1 , … , β n , α ) {\displaystyle (\beta _{1},\ldots ,\beta _{n},\alpha )} is a common zero of A and B . Such a common zero is also a zero of all elements of I ∩ R . 
{\displaystyle I\cap R.} Conversely, if ( β 1 , … , β n ) {\displaystyle (\beta _{1},\ldots ,\beta _{n})} is a common zero of the elements of I ∩ R , {\displaystyle I\cap R,} it is a zero of the resultant, and there exists α {\displaystyle \alpha } such that ( β 1 , … , β n , α ) {\displaystyle (\beta _{1},\ldots ,\beta _{n},\alpha )} is a common zero of A and B . So I ∩ R {\displaystyle I\cap R} and R res x ⁡ ( A , B ) {\displaystyle R\operatorname {res} _{x}(A,B)} have exactly the same zeros. Theoretically, the resultant could be computed by using the formula expressing it as a product of roots differences. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable . As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials , but this would be highly inefficient. As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix ), it may be computed by using any algorithm for computing determinants. This needs O ( n 3 ) {\displaystyle O(n^{3})} arithmetic operations. As algorithms are known with a better complexity (see below), this method is not used in practice. It follows from § Invariance under change of polynomials that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials . This shows that the computation of the resultant of two polynomials of degrees d and e may be done in O ( d e ) {\displaystyle O(de)} arithmetic operations in the field of coefficients. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order and make the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. 
A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem . The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity , which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input ( log ⁡ ( s ( d + e ) ) , {\displaystyle \log(s(d+e)),} where s is an upper bound of the number of digits of the input polynomials). Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems. Consider the system of two polynomial equations P ( x , y ) = 0 Q ( x , y ) = 0 , {\displaystyle {\begin{aligned}P(x,y)&=0\\Q(x,y)&=0,\end{aligned}}} where P and Q are polynomials of respective total degrees d and e . Then R = res y d , e ⁡ ( P , Q ) {\displaystyle R=\operatorname {res} _{y}^{d,e}(P,Q)} is a polynomial in x , which is generically of degree de (by properties of § Homogeneity ). A value α {\displaystyle \alpha } of x is a root of R if and only if either there exist β {\displaystyle \beta } in an algebraically closed field containing the coefficients, such that P ( α , β ) = Q ( α , β ) = 0 {\displaystyle P(\alpha ,\beta )=Q(\alpha ,\beta )=0} , or deg ⁡ ( P ( α , y ) ) < d {\displaystyle \deg(P(\alpha ,y))<d} and deg ⁡ ( Q ( α , y ) ) < e {\displaystyle \deg(Q(\alpha ,y))<e} (in this case, one says that P and Q have a common root at infinity for x = α {\displaystyle x=\alpha } ). 
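A worked instance of this elimination, as a pure-Python sketch (helper names are mine): for the circle P = x² + y² − 1 and the line Q = x − y, the Sylvester matrix with respect to y has entries that are polynomials in x, and its determinant comes out as 2x² − 1, whose roots x = ±1/√2 are exactly the x-coordinates of the two intersection points.

```python
# Polynomials in x are coefficient lists, highest power first; e.g. [1, 0, -1] is x^2 - 1.
def padd(p, q):
    n = max(len(p), len(q))
    p = [0] * (n - len(p)) + list(p)
    q = [0] * (n - len(q)) + list(q)
    r = [u + v for u, v in zip(p, q)]
    while len(r) > 1 and r[0] == 0:
        r = r[1:]
    return r

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, u in enumerate(p):
        for j, v in enumerate(q):
            r[i + j] += u * v
    return r

def pdet(m):
    """Determinant by cofactor expansion; entries are polynomials in x."""
    if len(m) == 1:
        return m[0][0]
    total = [0]
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        term = pmul(m[0][j], pdet(minor))
        if j % 2:
            term = [-c for c in term]
        total = padd(total, term)
    return total

def sylvester(a, b):
    """Sylvester matrix whose entries are polynomials in x."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[[0] for _ in range(n)] for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = c
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = c
    return m

# P = y^2 + (x^2 - 1) and Q = -y + x, viewed as polynomials in y.
P = [[1], [0], [1, 0, -1]]
Q = [[-1], [1, 0]]
R = pdet(sylvester(P, Q))   # res_y(P, Q) as a polynomial in x: expect 2x^2 - 1
```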
Therefore, solutions to the system are obtained by computing the roots of R , and for each root α , {\displaystyle \alpha ,} computing the common root(s) of P ( α , y ) , {\displaystyle P(\alpha ,y),} Q ( α , y ) , {\displaystyle Q(\alpha ,y),} and res x ⁡ ( P , Q ) . {\displaystyle \operatorname {res} _{x}(P,Q).} Bézout's theorem results from the value of deg ⁡ ( res y ⁡ ( P , Q ) ) ≤ d e {\displaystyle \deg \left(\operatorname {res} _{y}(P,Q)\right)\leq de} , the product of the degrees of P and Q . In fact, after a linear change of variables, one may suppose that, for each root x of the resultant, there is exactly one value of y such that ( x , y ) is a common zero of P and Q . This shows that the number of common zeros is at most the degree of the resultant, that is at most the product of the degrees of P and Q . With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees. At first glance, it seems that resultants may be applied to a general polynomial system of equations P 1 ( x 1 , … , x n ) = 0 ⋮ P k ( x 1 , … , x n ) = 0 {\displaystyle {\begin{aligned}P_{1}(x_{1},\ldots ,x_{n})&=0\\&\;\;\vdots \\P_{k}(x_{1},\ldots ,x_{n})&=0\end{aligned}}} by computing the resultants of every pair ( P i , P j ) {\displaystyle (P_{i},P_{j})} with respect to x n {\displaystyle x_{n}} for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove. A method, introduced at the end of the 19th century, works as follows: introduce k − 1 new indeterminates U 2 , … , U k {\displaystyle U_{2},\ldots ,U_{k}} and compute res x n ⁡ ( P 1 , U 2 P 2 + ⋯ + U k P k ) . 
{\displaystyle \operatorname {res} _{x_{n}}(P_{1},U_{2}P_{2}+\cdots +U_{k}P_{k}).} This is a polynomial in U 2 , … , U k {\displaystyle U_{2},\ldots ,U_{k}} whose coefficients are polynomials in x 1 , … , x n − 1 , {\displaystyle x_{1},\ldots ,x_{n-1},} which have the property that α 1 , … , α n − 1 {\displaystyle \alpha _{1},\ldots ,\alpha _{n-1}} is a common zero of these polynomial coefficients, if and only if the univariate polynomials P i ( α 1 , … , α n − 1 , x n ) {\displaystyle P_{i}(\alpha _{1},\ldots ,\alpha _{n-1},x_{n})} have a common zero, possibly at infinity . This process may be iterated until finding univariate polynomials. To get a correct algorithm two complements have to be added to the method. Firstly, at each step, a linear change of variable may be needed in order that the degrees of the polynomials in the last variable are the same as their total degree. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split in two components: one where the common factor is zero, and the other which is obtained by factoring out this common factor before continuing. This algorithm is very complicated and has a huge time complexity . Therefore, its interest is mainly historical. The discriminant of a polynomial, which is a fundamental tool in number theory , is a 0 − 1 ( − 1 ) n ( n − 1 ) / 2 res x ⁡ ( f ( x ) , f ′ ( x ) ) {\displaystyle a_{0}^{-1}(-1)^{n(n-1)/2}\operatorname {res} _{x}(f(x),f'(x))} , where a 0 {\displaystyle a_{0}} is the leading coefficient of f ( x ) {\displaystyle f(x)} and n {\displaystyle n} its degree. 
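The discriminant formula can be checked directly on small polynomials. The sketch below (helper names are mine) computes it from the Sylvester determinant of f and f′; for f = 2x² + 3x − 5 it returns b² − 4ac = 49, and for f = x³ − 3x + 2 = (x − 1)²(x + 2), which has a double root, it returns 0.

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists run from the leading coefficient down."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by exact Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign, prod = len(m), 1, Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if m[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            m[k], m[p] = m[p], m[k]
            sign = -sign
        prod *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return sign * prod

def discriminant(f):
    """disc(f) = (-1)^(n(n-1)/2) * res(f, f') / a0."""
    n = len(f) - 1
    fp = [c * (n - i) for i, c in enumerate(f[:-1])]   # derivative f'
    return (-1) ** (n * (n - 1) // 2) * det(sylvester(f, fp)) / Fraction(f[0])

d_quad = discriminant([2, 3, -5])       # 2x^2 + 3x - 5: expect b^2 - 4ac = 49
d_cubic = discriminant([1, 0, -3, 2])   # (x - 1)^2 (x + 2): double root, expect 0
```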
If α {\displaystyle \alpha } and β {\displaystyle \beta } are algebraic numbers such that P ( α ) = Q ( β ) = 0 {\displaystyle P(\alpha )=Q(\beta )=0} , then γ = α + β {\displaystyle \gamma =\alpha +\beta } is a root of the resultant res x ⁡ ( P ( x ) , Q ( z − x ) ) , {\displaystyle \operatorname {res} _{x}(P(x),Q(z-x)),} and τ = α β {\displaystyle \tau =\alpha \beta } is a root of res x ⁡ ( P ( x ) , x n Q ( z / x ) ) {\displaystyle \operatorname {res} _{x}(P(x),x^{n}Q(z/x))} , where n {\displaystyle n} is the degree of Q ( y ) {\displaystyle Q(y)} . Combined with the fact that 1 / β {\displaystyle 1/\beta } is a root of y n Q ( 1 / y ) = 0 {\displaystyle y^{n}Q(1/y)=0} , this shows that the set of algebraic numbers is a field . Let K ( α ) {\displaystyle K(\alpha )} be an algebraic field extension generated by an element α , {\displaystyle \alpha ,} which has P ( x ) {\displaystyle P(x)} as minimal polynomial . Every element β ∈ K ( α ) {\displaystyle \beta \in K(\alpha )} may be written as β = Q ( α ) , {\displaystyle \beta =Q(\alpha ),} where Q {\displaystyle Q} is a polynomial. Then β {\displaystyle \beta } is a root of res x ⁡ ( P ( x ) , z − Q ( x ) ) , {\displaystyle \operatorname {res} _{x}(P(x),z-Q(x)),} and this resultant is a power of the minimal polynomial of β . {\displaystyle \beta .} Given two plane algebraic curves defined as the zeros of the polynomials P ( x , y ) and Q ( x , y ) , the resultant allows the computation of their intersection. More precisely, the roots of res y ⁡ ( P , Q ) {\displaystyle \operatorname {res} _{y}(P,Q)} are the x -coordinates of the intersection points and of the common vertical asymptotes, and the roots of res x ⁡ ( P , Q ) {\displaystyle \operatorname {res} _{x}(P,Q)} are the y -coordinates of the intersection points and of the common horizontal asymptotes.
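As an illustration of the sum construction above (a pure-Python sketch, helper names mine): taking P(x) = x² − 2 and Q(y) = y² − 3 gives Q(z − x) = x² − 2zx + (z² − 3), and res_x(P(x), Q(z − x)) evaluates to z⁴ − 10z² + 1, the well-known degree-4 polynomial with root √2 + √3.

```python
# Polynomials in z are coefficient lists, highest power first; e.g. [1, 0, -3] is z^2 - 3.
def padd(p, q):
    n = max(len(p), len(q))
    p = [0] * (n - len(p)) + list(p)
    q = [0] * (n - len(q)) + list(q)
    r = [u + v for u, v in zip(p, q)]
    while len(r) > 1 and r[0] == 0:
        r = r[1:]
    return r

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, u in enumerate(p):
        for j, v in enumerate(q):
            r[i + j] += u * v
    return r

def pdet(m):
    """Determinant by cofactor expansion; entries are polynomials in z."""
    if len(m) == 1:
        return m[0][0]
    total = [0]
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        term = pmul(m[0][j], pdet(minor))
        if j % 2:
            term = [-c for c in term]
        total = padd(total, term)
    return total

def sylvester(a, b):
    """Sylvester matrix whose entries are polynomials in z."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[[0] for _ in range(n)] for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = c
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = c
    return m

A = [[1], [0], [-2]]             # P(x) = x^2 - 2, constant in z
B = [[1], [-2, 0], [1, 0, -3]]   # Q(z - x) = x^2 - 2z*x + (z^2 - 3)
M = pdet(sylvester(A, B))        # res_x(P(x), Q(z - x)): expect z^4 - 10z^2 + 1

# Numerical sanity check: sqrt(2) + sqrt(3) is a root of that polynomial.
s = 2 ** 0.5 + 3 ** 0.5
val = 0.0
for c in M:
    val = val * s + c
```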
A rational plane curve may be defined by a parametric equation x = P ( t ) R ( t ) , y = Q ( t ) R ( t ) , {\displaystyle x={\frac {P(t)}{R(t)}},\qquad y={\frac {Q(t)}{R(t)}},} where P , Q and R are polynomials. An implicit equation of the curve is given by res t ⁡ ( x R − P , y R − Q ) . {\displaystyle \operatorname {res} _{t}(xR-P,yR-Q).} The degree of this curve is the highest degree of P , Q and R , which is equal to the total degree of the resultant. In symbolic integration , for computing the antiderivative of a rational fraction , one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiderivatives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form P ( x ) Q ( x ) , {\displaystyle {\frac {P(x)}{Q(x)}},} where Q is a square-free polynomial and P is a polynomial of lower degree than Q . The antiderivative of such a function necessarily involves logarithms , and generally algebraic numbers (the roots of Q ). In fact, the antiderivative is ∫ P ( x ) Q ( x ) d x = ∑ Q ( α ) = 0 P ( α ) Q ′ ( α ) log ⁡ ( x − α ) , {\displaystyle \int {\frac {P(x)}{Q(x)}}dx=\sum _{Q(\alpha )=0}{\frac {P(\alpha )}{Q'(\alpha )}}\log(x-\alpha ),} where the sum runs over all complex roots of Q . The number of algebraic numbers involved in this expression is generally equal to the degree of Q , but frequently an expression with fewer algebraic numbers can be computed. The Lazard–Rioboo–Trager method produces an expression in which the number of algebraic numbers is minimal, without any computation with algebraic numbers. Let S 1 ( r ) S 2 ( r ) 2 ⋯ S k ( r ) k = res x ⁡ ( r Q ′ ( x ) − P ( x ) , Q ( x ) ) {\displaystyle S_{1}(r)S_{2}(r)^{2}\cdots S_{k}(r)^{k}=\operatorname {res} _{x}(rQ'(x)-P(x),Q(x))} be the square-free factorization of the resultant which appears on the right.
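As a worked instance of the implicitization above: for the rational parametrization x = (1 − t²)/(1 + t²), y = 2t/(1 + t²) of the unit circle, one has xR − P = (x + 1)t² + (x − 1) and yR − Q = yt² − 2t + y, and the resultant with respect to t works out to 4(x² + y² − 1). The sketch below (helper names are mine) verifies this by evaluating the Sylvester determinant at rational points.

```python
from fractions import Fraction

def sylvester(a, b):
    """Sylvester matrix; coefficient lists run from the leading coefficient down."""
    d, e = len(a) - 1, len(b) - 1
    n = d + e
    m = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):
        for i, c in enumerate(a):
            m[i + j][j] = Fraction(c)
    for j in range(d):
        for i, c in enumerate(b):
            m[i + j][e + j] = Fraction(c)
    return m

def det(m):
    """Determinant by exact Gaussian elimination over the rationals."""
    m = [row[:] for row in m]
    n, sign, prod = len(m), 1, Fraction(1)
    for k in range(n):
        p = next((i for i in range(k, n) if m[i][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            m[k], m[p] = m[p], m[k]
            sign = -sign
        prod *= m[k][k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n):
                m[i][j] -= f * m[k][j]
    return sign * prod

def implicit_value(x, y):
    """Evaluate res_t(xR - P, yR - Q) at a point, for P = 1 - t^2, Q = 2t,
    R = 1 + t^2, so xR - P = (x + 1)t^2 + (x - 1) and yR - Q = y t^2 - 2t + y.
    (The degrees in t drop at x = -1 or y = 0, so avoid those points here.)"""
    A = [Fraction(x) + 1, Fraction(0), Fraction(x) - 1]
    B = [Fraction(y), Fraction(-2), Fraction(y)]
    return det(sylvester(A, B))

on_circle = implicit_value(Fraction(3, 5), Fraction(4, 5))   # point on the circle
off_circle = implicit_value(1, 1)                            # 4(1 + 1 - 1) = 4
```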
Trager proved that the antiderivative is ∫ P ( x ) Q ( x ) d x = ∑ i = 1 k ∑ S i ( α ) = 0 α log ⁡ ( T i ( α , x ) ) , {\displaystyle \int {\frac {P(x)}{Q(x)}}dx=\sum _{i=1}^{k}\sum _{S_{i}(\alpha )=0}\alpha \log(T_{i}(\alpha ,x)),} where the internal sums run over the roots of the S i {\displaystyle S_{i}} (if S i = 1 {\displaystyle S_{i}=1} the sum is zero, as being the empty sum ), and T i ( r , x ) {\displaystyle T_{i}(r,x)} is a polynomial of degree i in x . The Lazard–Rioboo contribution is the proof that T i ( r , x ) {\displaystyle T_{i}(r,x)} is the subresultant of degree i of r Q ′ ( x ) − P ( x ) {\displaystyle rQ'(x)-P(x)} and Q ( x ) . {\displaystyle Q(x).} It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence . All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra . In fact, most computer algebra systems include an efficient implementation of the computation of resultants. The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials P ( x , y ) and Q ( x , y ) of respective total degrees p and q , their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map ( A , B ) ↦ A P + B Q , {\displaystyle (A,B)\mapsto AP+BQ,} where A runs over the bivariate homogeneous polynomials of degree q − 1 , and B runs over the homogeneous polynomials of degree p − 1 . In other words, the homogeneous resultant of P and Q is the resultant of P ( x , 1) and Q ( x , 1) when they are considered as polynomials of degree p and q (their degree in x may be lower than their total degree): Res ⁡ ( P ( x , y ) , Q ( x , y ) ) = res p , q ⁡ ( P ( x , 1 ) , Q ( x , 1 ) ) . 
{\displaystyle \operatorname {Res} (P(x,y),Q(x,y))=\operatorname {res} _{p,q}(P(x,1),Q(x,1)).} (The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation). The homogeneous resultant has essentially the same properties as the usual resultant, with two main differences: instead of polynomial roots, one considers zeros in the projective line , and the degree of a polynomial may not change under a ring homomorphism . Any property of the usual resultant may be similarly extended to the homogeneous resultant, and the resulting property is either very similar to or simpler than the corresponding property of the usual resultant. Macaulay's resultant , named after Francis Sowerby Macaulay , also called the multivariate resultant , or the multipolynomial resultant , [ 3 ] is a generalization of the homogeneous resultant to n homogeneous polynomials in n indeterminates . Macaulay's resultant is a polynomial in the coefficients of these n homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the n hypersurfaces defined by the polynomials have a common zero in the ( n − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases , one of the main tools of effective elimination theory (elimination theory on computers). Like the homogeneous resultant, Macaulay's resultant may be defined with determinants , and thus behaves well under ring homomorphisms . However, it cannot be defined by a single determinant. It follows that it is easier to define it first on generic polynomials . A homogeneous polynomial of degree d in n variables may have up to ( n + d − 1 n − 1 ) = ( n + d − 1 ) ! ( n − 1 ) ! d ! 
{\displaystyle {\binom {n+d-1}{n-1}}={\frac {(n+d-1)!}{(n-1)!\,d!}}} coefficients; it is said to be generic if these coefficients are distinct indeterminates. Let P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} be n generic homogeneous polynomials in n indeterminates, of respective degrees d 1 , … , d n . {\displaystyle d_{1},\dots ,d_{n}.} Together, they involve ∑ i = 1 n ( n + d i − 1 n − 1 ) {\displaystyle \sum _{i=1}^{n}{\binom {n+d_{i}-1}{n-1}}} indeterminate coefficients. Let C be the polynomial ring over the integers, in all these indeterminate coefficients. The polynomials P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} thus belong to C [ x 1 , … , x n ] , {\displaystyle C[x_{1},\ldots ,x_{n}],} and their resultant (still to be defined) belongs to C . The Macaulay degree is the integer D = d 1 + ⋯ + d n − n + 1 , {\displaystyle D=d_{1}+\cdots +d_{n}-n+1,} which is fundamental in Macaulay's theory. For defining the resultant, one considers the Macaulay matrix , which is the matrix over the monomial basis of the C -linear map ( Q 1 , … , Q n ) ↦ Q 1 P 1 + ⋯ + Q n P n , {\displaystyle (Q_{1},\ldots ,Q_{n})\mapsto Q_{1}P_{1}+\cdots +Q_{n}P_{n},} in which each Q i {\displaystyle Q_{i}} runs over the homogeneous polynomials of degree D − d i , {\displaystyle D-d_{i},} and the codomain is the C -module of the homogeneous polynomials of degree D . If n = 2 , the Macaulay matrix is the Sylvester matrix, and is a square matrix , but this is no longer true for n > 2 . Thus, instead of considering the determinant, one considers all the maximal minors , that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the C -ideal generated by these maximal minors is a principal ideal , which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign.
The generic Macaulay resultant is the greatest common divisor which becomes 1 , when, for each i , zero is substituted for all coefficients of P i , {\displaystyle P_{i},} except the coefficient of x i d i , {\displaystyle x_{i}^{d_{i}},} for which one is substituted. From now on, we consider that the homogeneous polynomials P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} of degrees d 1 , … , d n {\displaystyle d_{1},\ldots ,d_{n}} have their coefficients in a field k , that is that they belong to k [ x 1 , … , x n ] . {\displaystyle k[x_{1},\dots ,x_{n}].} Their resultant is defined as the element of k obtained by replacing in the generic resultant the indeterminate coefficients by the actual coefficients of the P i . {\displaystyle P_{i}.} The main property of the resultant is that it is zero if and only if P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} have a nonzero common zero in an algebraically closed extension of k . The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of projective Nullstellensatz : If the resultant is nonzero, then ⟨ x 1 , … , x n ⟩ D ⊆ ⟨ P 1 , … , P n ⟩ , {\displaystyle \langle x_{1},\ldots ,x_{n}\rangle ^{D}\subseteq \langle P_{1},\ldots ,P_{n}\rangle ,} where D = d 1 + ⋯ + d n − n + 1 {\displaystyle D=d_{1}+\cdots +d_{n}-n+1} is the Macaulay degree, and ⟨ x 1 , … , x n ⟩ {\displaystyle \langle x_{1},\ldots ,x_{n}\rangle } is the maximal homogeneous ideal. This implies that P 1 , … , P n {\displaystyle P_{1},\ldots ,P_{n}} have no other common zero than the unique common zero, (0, ..., 0) , of x 1 , … , x n . {\displaystyle x_{1},\ldots ,x_{n}.} As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors , there are algorithms for computing resultants in a finite number of steps. However, the generic resultant is a polynomial of very high degree (exponential in n ) depending on a huge number of indeterminates. 
It follows that, except for very small n and very small degrees of input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, if it were computable, the result could not be stored on available memory devices, even for rather small values of n and of the degrees of the input polynomials. Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field. In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is smaller than its number of rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity d O ( n ) , {\displaystyle d^{O(n)},} where d is the maximum degree of input polynomials. Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} which, together with the coordinates of the point, are a zero of the input polynomials. In other words, the resultant is the result of the " elimination " of x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} from the input polynomials. Macaulay's resultant provides a method, called " U -resultant" by Macaulay, for solving systems of polynomial equations . 
Given n − 1 homogeneous polynomials P 1 , … , P n − 1 , {\displaystyle P_{1},\ldots ,P_{n-1},} of degrees d 1 , … , d n − 1 , {\displaystyle d_{1},\ldots ,d_{n-1},} in n indeterminates x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} over a field k , their U -resultant is the resultant of the n polynomials P 1 , … , P n − 1 , P n , {\displaystyle P_{1},\ldots ,P_{n-1},P_{n},} where P n = u 1 x 1 + ⋯ + u n x n {\displaystyle P_{n}=u_{1}x_{1}+\cdots +u_{n}x_{n}} is the generic linear form whose coefficients are new indeterminates u 1 , … , u n . {\displaystyle u_{1},\ldots ,u_{n}.} Notation u i {\displaystyle u_{i}} or U i {\displaystyle U_{i}} for these generic coefficients is traditional, and is the origin of the term U -resultant. The U -resultant is a homogeneous polynomial in k [ u 1 , … , u n ] . {\displaystyle k[u_{1},\ldots ,u_{n}].} It is zero if and only if the common zeros of P 1 , … , P n − 1 {\displaystyle P_{1},\ldots ,P_{n-1}} form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of k ). If the U -resultant is not zero, its degree is the Bézout bound d 1 ⋯ d n − 1 . {\displaystyle d_{1}\cdots d_{n-1}.} The U -resultant factorizes over an algebraically closed extension of k into a product of linear forms. If α 1 u 1 + … + α n u n {\displaystyle \alpha _{1}u_{1}+\ldots +\alpha _{n}u_{n}} is such a linear factor, then α 1 , … , α n {\displaystyle \alpha _{1},\ldots ,\alpha _{n}} are the homogeneous coordinates of a common zero of P 1 , … , P n − 1 . {\displaystyle P_{1},\ldots ,P_{n-1}.} Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the P i {\displaystyle P_{i}} at this zero. In other words, the U -resultant provides a completely explicit version of Bézout's theorem . 
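For the smallest case (a single binary form, n = 2), the U-resultant can be computed with a general-purpose resultant routine. The sketch below uses SymPy, dehomogenizing at x2 = 1; the particular polynomial is chosen purely for illustration:

```python
from sympy import symbols, resultant, factor

x, u1, u2 = symbols('x u1 u2')

# P1 = x1^2 - 3*x1*x2 + 2*x2^2 dehomogenized at x2 = 1;
# its projective zeros are (1 : 1) and (2 : 1)
P1 = x**2 - 3*x + 2
Pu = u1*x + u2                    # generic linear form u1*x1 + u2*x2 at x2 = 1

R = resultant(P1, Pu, x)
# R equals (u1 + u2)*(2*u1 + u2): each linear factor a1*u1 + a2*u2
# corresponds to the projective zero (a1 : a2) of P1
print(factor(R))
```

Reading off the two linear factors recovers both zeros with their multiplicities, as the text above states.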
The U -resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be n − 1 {\displaystyle n-1} , where n {\displaystyle n} is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from n − 1 {\displaystyle n-1} , and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation. Let P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} be homogeneous polynomials in x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} of degrees d 1 , … , d k , {\displaystyle d_{1},\ldots ,d_{k},} over a field k . Without loss of generality, one may suppose that d 1 ≥ d 2 ≥ ⋯ ≥ d k . {\displaystyle d_{1}\geq d_{2}\geq \cdots \geq d_{k}.} Setting d i = 1 {\displaystyle d_{i}=1} for i > k , the Macaulay bound is D = d 1 + ⋯ + d n − n + 1. {\displaystyle D=d_{1}+\cdots +d_{n}-n+1.} Let u 1 , … , u n {\displaystyle u_{1},\ldots ,u_{n}} be new indeterminates and define P k + 1 = u 1 x 1 + ⋯ + u n x n . {\displaystyle P_{k+1}=u_{1}x_{1}+\cdots +u_{n}x_{n}.} In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} of the linear map ( Q 1 , … , Q k + 1 ) ↦ P 1 Q 1 + ⋯ + P k + 1 Q k + 1 , {\displaystyle (Q_{1},\ldots ,Q_{k+1})\mapsto P_{1}Q_{1}+\cdots +P_{k+1}Q_{k+1},} where, for each i , Q i {\displaystyle Q_{i}} runs over the linear space consisting of zero and the homogeneous polynomials of degree D − d i {\displaystyle D-d_{i}} . Reducing the Macaulay matrix by a variant of Gaussian elimination , one obtains a square matrix of linear forms in u 1 , … , u n . {\displaystyle u_{1},\ldots ,u_{n}.} The determinant of this matrix is the U -resultant. 
As with the original U -resultant, it is zero if and only if P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} have infinitely many common projective zeros (that is, if the projective algebraic set defined by P 1 , … , P k {\displaystyle P_{1},\ldots ,P_{k}} has infinitely many points over an algebraic closure of k ). Again as with the original U -resultant, when this U -resultant is not zero, it factorizes into linear factors over any algebraically closed extension of k . The coefficients of these linear factors are the homogeneous coordinates of the common zeros of P 1 , … , P k , {\displaystyle P_{1},\ldots ,P_{k},} and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor. The number of rows of the Macaulay matrix is less than ( e d ) n , {\displaystyle (ed)^{n},} where e ≈ 2.7182 is the usual mathematical constant , and d is the arithmetic mean of the degrees of the P i . {\displaystyle P_{i}.} It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time d O ( n ) . {\displaystyle d^{O(n)}.} Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions ( Bézout's theorem ). This computation may be practically viable when n , k and d are not large.
https://en.wikipedia.org/wiki/Resultant
" Resurrection ecology " is an evolutionary biology technique whereby researchers hatch dormant eggs from lake sediments to study animals as they existed decades ago. It is a new approach that might allow scientists to observe evolution as it occurred, by comparing the animal forms hatched from older eggs with their extant descendants . [ 1 ] This technique is particularly important because the live organisms hatched from egg banks can be used to learn about the evolution of behavioural , plastic or competitive traits that are not apparent from more traditional paleontological methods. [ 2 ] One such researcher in the field is W. Charles Kerfoot of Michigan Technological University whose results were published in the journal Limnology and Oceanography . He reported on success in a search for "resting eggs" of zooplankton that are dormant in Portage Lake on Michigan 's Upper Peninsula . The lake has undergone a considerable amount of change over the last 100 years including flooding by copper mine debris, dredging , and eutrophication . [ 2 ] Others have used this technique to explore the evolutionary effects of eutrophication, [ 3 ] predation, [ 4 ] [ 5 ] and metal contamination. [ 2 ] Resurrection ecology provided the best empirical example of the " Red Queen Hypothesis " in nature. [ 4 ] Any organism that produces a resting stage can be used for resurrection ecology. However, the most frequently used organism is the water flea, Daphnia . This genus has well-established protocols for lab experimentation and usually asexually reproduces allowing for experiments on many individuals with the same genotype. 
Although the more esoteric demonstration of natural selection is alone a valuable aspect of the study described, there is a clear ecological implication in the discovery that very old zooplankton eggs have survived in the lake: the potential still exists, if and when this environment is restored to something of a more pristine nature, for at least some of the original (pre-disturbance) inhabitants to re-establish populations once presumed lost. The genes valuable to survival of those species in a restored environment are still "readily" available and may be quickly assimilated by the modern populations, perhaps requiring no more than a fortuitous disturbance of the bottom.
https://en.wikipedia.org/wiki/Resurrection_ecology
Reta F. Beebe (born October 10, 1936 in Baca County, Colorado) is an American astronomer , author , and popularizer of astronomy . She is an expert on the planets Jupiter and Saturn , and the author of Jupiter: The Giant Planet . [ 1 ] She is a professor emeritus in the Astronomy Department at New Mexico State University and the 2010 winner of the NASA Exceptional Public Service medal. [ 2 ] [ 3 ] Beebe spent many years helping to plan and manage NASA missions, including the Voyager program missions to the giant planets. Her specific research interest was the atmospheres of Jupiter, Saturn, Uranus , and Neptune . She designed experiments to study and measure the clouds and winds of the giant planets. [ 4 ] She worked on interpreting the Galileo and Cassini data and used the Hubble Space Telescope to obtain additional atmospheric data on Jupiter and Saturn. She was a member of the Shoemaker/Levy team at the Space Telescope Science Institute in 1994 when the comet struck Jupiter. Formerly, she chaired the Committee for Planetary and Lunar Exploration (COMPLEX), which is the principal space committee of the United States National Research Council . [ 5 ] More recently she was involved with organizing the data about the giant planets in NASA's Planetary Data System . She is in charge of the Atmospheres Discipline Node of that program. [ 6 ] Her planetary data archiving skills have also been employed by the European Space Agency . [ 2 ] She serves on the steering committee of the International Planetary Data Alliance . [ 2 ]
https://en.wikipedia.org/wiki/Reta_Beebe
Retainage is a portion of the agreed-upon contract price deliberately withheld until the work is complete to assure that the contractor or subcontractor will satisfy its obligations and complete a construction project. [ 1 ] A retention is money withheld by one party in a contract to act as security against incomplete or defective works. Retentions have their origin in the Railway Mania of the 1840s but are now common across the industry, featuring in the majority of construction contracts. A typical retention rate is 5%, of which half is released at completion and half at the end of the defects liability period (often 12 months later). There has been criticism of the practice for leading to uncertainty on payment dates, increasing tensions between parties and putting monies at risk in cases of insolvency . There have been several proposals to replace the practice with alternative systems. The practice of retainage dates back to the construction of the United Kingdom railway system in the 1840s. [ 2 ] [ 3 ] : 32 The scale of the railway projects increased demand for contractors, which led to the entrance of new contractors into the labor market. [ 2 ] These new contractors were inexperienced, unqualified and unable to successfully complete the project. [ 2 ] Consequently, the railway companies began to withhold as much as 20% of contractors' payments to ensure performance and offset completion costs should the contractor default. [ 2 ] The point was to withhold the contractor's profit only, not to make the contractor and its subcontractors finance the project. Given the large scale, complexity, cost and length of construction projects, the risk of something not going according to plan is almost certain. Accordingly, a common approach that contracting parties take in order to mitigate this risk is to include retainage provisions within their agreements. 
The concept of retainage is unique to the construction industry and attempts to do two things: provide an incentive to the contractor or subcontractor to complete the project, and protect the owner against any liens, claims or defaults which may surface as the project nears completion. [ 4 ] Incidentally, owners and contractors use retainage as a source of financing for the project; contractors in turn withhold retainage from subcontractors, frequently at a greater percentage than is being withheld from them. [ 5 ] Retentions are widely used in the British construction industry: in the majority of contracts awarded, [ 3 ] : 27 a sum of money is withheld as a security against poor quality products (defects) or works left incomplete. Clients withhold retention against main contractors and main contractors withhold payment against sub-contractors. [ 3 ] : 16 Retentions typically take the form of a percentage of the contract value. [ 3 ] : 18 The rate can vary widely but is typically around 5%. The general state of the economy can affect the rates set: in a buoyant economy with plentiful work, sub-contractors are able to pick which work they accept and therefore have the potential to negotiate more favourable rates. [ 3 ] : 19 The chain of retention starts with the client, who withholds money on the main contractor. The main contractor withholds money on sub-contractors, who may also then withhold on sub-sub-contractors. [ 3 ] : 18 The retention money is typically released in two portions (known as moieties): the first is payable at completion of a project and the second at the end of the defects liability period. This period is the time during which the client is able to identify works that are defective to the contractor, who must then remedy them; it is often twelve months. [ 3 ] : 18 The use of retentions is not common to all sectors of the industry; for example, lift installers have developed their own guarantee system instead. 
[ 3 ] : 18 A mobilization payment is an advance payment to a contractor at the start of a project to assist in the beginning of operations. [ 6 ] [ 7 ] The use of retentions is intended to encourage efficiency and productivity. The contractor has a financial incentive to achieve completion as early as possible (to release the first moiety payment) and to minimise defects in the works (to achieve the second payment). [ 3 ] : 27 Retentions held against sub-contractors are also a key source of cash for main contractors, who may use them to finance new projects. [ 3 ] : 27 [ 8 ] However, sub-contractors often complain about the system. [ 9 ] They sometimes lack a firm date on which retention monies will be paid, and a 2017 British government report noted that more than half of contractors had experienced late or non-payment of retention monies. [ 9 ] [ 3 ] : 20 Delays are reportedly longer for sub-contractors and sub-sub-contractors than for the main contractor. [ 3 ] : 20 This restricts cash flow available for the company as a going concern and for capital investment. [ 9 ] The chasing up of payments is also resource-intensive; as such, smaller businesses are hit more severely than larger ones. [ 9 ] Some smaller companies simply write off the retention money, increasing their prices to compensate. [ 3 ] : 20 [ 3 ] : 23 The practice has also been described as increasing tensions between the parties in contract. [ 3 ] : 22 There is no current requirement for retention monies to be ring-fenced (kept separately from general company funds and preserved from spending) and they are usually held in a client's or contractor's main bank account. [ 9 ] This can cause problems in cases of insolvency, where the money can be lost and payments owed to the supply chain put at risk. [ 3 ] : 22 The use of retentions (which are considered a form of stage payments) can also render construction companies unsuitable for factoring (the sale of accounts receivable ). 
[ 10 ] [ 11 ] Railway construction in the 1840s saw a rapid increase in the number of contractors, often with little experience of the industry. There was a rise in the number of insolvencies and a drop in workmanship standards. Railway companies therefore began withholding a minimum of 20% of payments to contractors as a security against incomplete and defective works. This practice had spread across the industry by the mid-19th century. [ 3 ] : 33 The 1994 Latham Report recommended that legislation be introduced to protect retention monies held by a party, which would prevent them being lost during a liquidation. Despite all of Latham's other payment recommendations being incorporated into the Construction Act 1998, this one was omitted. [ 9 ] The practice was reformed somewhat by the Construction Act 2011 . This made it illegal for the release of retention under one contract to be linked to that of a second. This ended the practice whereby contractors would refuse to release retention to sub-contractors until they had been paid it themselves by the client, over which the sub-contractor had no influence. [ 3 ] : 18 The 2018 collapse of contractor Carillion had a dramatic effect on the industry. [ 9 ] Many of its sub-contractors lost large sums of money as £250 million in unpaid retention was lost when the business went into liquidation. [ 12 ] There is limited use of alternatives to retention in the British construction industry. [ 3 ] : 24 However, there have been recent movements to try to effect change. The Department for Business, Energy and Industrial Strategy (DBEIS) commissioned research into the matter to determine the extent of the use of the practice and its effects on the industry and economy. This was published in 2017 and also identified a number of alternatives to the practice. [ 3 ] : 16–17 A DBEIS public consultation was subsequently launched; this closed on 19 January 2018 but no recommendations were subsequently made for government action. 
[ 9 ] A private member's bill was introduced to the House of Commons by Peter Aldous on 9 January 2018 seeking to introduce protection for retention money, but it did not proceed through parliament. [ 13 ] The Build UK industry group aimed to secure the abolition of retentions by 2025, following an ambition outlined by the Construction Leadership Council in 2014. Build UK put forward proposals that retentions by the main contractor on sub-contractors should be no more onerous than those imposed by the client on the main contractor. They also proposed that retentions should only apply to permanent works, as temporary works are unlikely to lead to defects. The organisation also wanted small value contracts (less than £100,000) to become retention-free by 2021, as the risk to the main works is lower for these contracts. [ 8 ] The Construction supply chain payment charter , adopted in 2014, had a target for "ZERO retentions" by 2025 in construction contracts dated 1 January 2015 or later, along with the adoption of 30 days' standard payment terms across the construction sector. However, the charter was withdrawn on 18 January 2022 in favour of reporting regulations applicable to large businesses. [ 14 ] The reporting regulations lapsed on 6 April 2024. [ 15 ] Some organisations have proposed retention deposit schemes, whereby money is deposited with a third party, although these lead to increased fees and bureaucracy and do not solve disputes between parties over when retention should be released. [ 16 ] A mandatory retention deposit system was proposed for inclusion in the Enterprise Act 2016 , [ 17 ] but the proposed scheme was not subsequently included within the Act. [ 18 ] Following the collapse of Carillion there have been increased calls for retention reform. The Scottish Government began a consultation on retentions in 2019. 
It stated that the UK was behind other countries by continuing the practice, despite the matter having been looked into several times by the UK Government. [ 9 ] Alternatives include project bank accounts (which are used for all payments from the client and contractor), retention bonds (see below), performance bonds , escrow stakeholder accounts (monies held by a third party), parent company guarantees (guarantee of completion by the main contractor's parent organisation) or trust funds to hold retention monies. [ 3 ] : 24 A retention bond is a form of performance bond or insurance against defects, taken out by the contractor at the request of the client, or by a subcontractor at the request of the contractor, seen as being fairer and more efficient than a cash retention. [ 19 ] An agreement is entered into by the two parties and a third party known as a surety provider, who acts as a guarantor between the two parties. The agreement states that cash retentions will not be used and, instead, the surety provider agrees to pay up to the amount which would have been held as a cash retention if the contractor or subcontractor fail to carry out the works as contracted or to remedy any defects. Build UK and its predecessor, the National Specialist Contractors Council, have endorsed the use of retention bonds in their Fair Payment Campaign . [ 20 ] The Joint Contracts Tribunal contracts system allowed for a reform of retentions by permitting the employer (client) to hold retention monies in trust. The 1998 revision of the contract allowed the contractor to request that the client hold the money in a separate bank account; it also permitted the use of retention bonds. The 2016 JCT contract allows for retention-free projects. [ 16 ] The NEC Engineering and Construction Contract , introduced in 1993, has no allowance for retentions in its core clauses. 
The basic contract relies on the spirit of collaboration between parties to minimise defects, but retentions can be, and often are, introduced by clients through variant clauses (so-called "x clauses"). There is an allowance for retention bonds within the fourth edition of the contract (introduced in 2017). The contract also allows for retention to be withheld only on the labour-element of any price or only to be applied on the final few payments made. The NEC system also has an option to allow the use of project bank accounts in lieu of retention. [ 12 ] If there is to be retainage on the construction project, it is set forth in the construction contract. [ 21 ] Retainage provisions are applicable to subcontracts as well as prime contracts. The amount withheld from the contractor or subcontractor should be determined on a case-by-case basis by the parties negotiating the contract, usually based upon such factors as past performance and the likelihood that the contractor or subcontractor will perform well under the contract. One can structure retainage arrangements in any number of ways. Subject to state statutory requirements, 10% is the retainage amount most often used by contracting parties. Another approach is to start off with a 10% retainage and to reduce it to 5% once the project is 50% complete. [ 22 ] A third approach is to carve out material costs from a withholding requirement on the theory that suppliers, unlike subcontractors, may not accept retainage provisions in their purchase orders. Retainage clauses are usually found within the contract terms outlining the procedure for submitting payment applications. A typical retainage clause parallels the following language: "Owner shall pay the amount due on the Payment Application less retainage of [a specific percentage]." [ 23 ] Retainage is generally due to the contractor or subcontractor once their work is complete. 
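A sliding-scale arrangement like the one described above (10% retainage dropping to 5% once the project is half complete) can be sketched numerically; the rates, threshold, and function name are illustrative assumptions, not terms from any actual contract:

```python
def retainage_withheld(amount_billed, fraction_complete,
                       base_rate=0.10, reduced_rate=0.05, threshold=0.50):
    """Amount withheld from one payment application under a hypothetical
    sliding-scale retainage clause: base_rate applies until the project
    reaches the completion threshold, reduced_rate afterwards."""
    rate = base_rate if fraction_complete < threshold else reduced_rate
    return amount_billed * rate

# a $100,000 application at 30% complete has $10,000 withheld;
# the same application at 60% complete has only $5,000 withheld
print(retainage_withheld(100_000, 0.30))  # 10000.0
print(retainage_withheld(100_000, 0.60))  # 5000.0
```

The withheld amounts accumulate across applications and, as the text notes, are released only at substantial completion or after the punch list is cleared.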
Disputes often arise regarding just when completion occurs: it could be "substantial completion", which is generally when the owner can occupy a structure and use it for its intended purpose; or, more often, it could be once a punch list of work has been completed. [ 24 ] Subcontractors tend to bear the brunt of retainage provisions, especially subcontractors performing work early on in the construction process. [ 24 ] The main reason for this is that many contractors pass down the owner's right to withhold retainage to the subcontractor, but frequently withhold more than is being withheld from them. [ 24 ] For example, a subcontractor performing site work may complete its work in the first few months of the construction project, but generally is not allowed to recover the amount withheld from the owner and contractor until the project is "substantially complete", which could take a few years depending on the size of the project. Coupled with a contingent payment clause, the retainage can cause significant financial distress to a subcontractor. Another problem arises when the contractor withholds from its subcontractors at a greater percentage than the owner has withheld from it. The owner is to pay retainage to the contractor when substantial completion has occurred; however, in this abusive over-withholding scenario, the contractor will already have been paid a portion of the subcontractors' funds, meaning that the contractor will have to fund the balance of the payment from its own cash flow. This could cause a delay in the project closeout. The contractor may even feel that it is more advantageous to keep the project incomplete, never being paid its retainage, and to argue that the subcontractors are therefore not due their portion of the retainage. In 1974, Congress established the Office of Federal Procurement Policy to provide a uniform government-wide procurement policy. 
[ 25 ] Since the mid-1970s, there has been an overall trend in the reduction of percentage withheld on federal construction projects. [ 26 ] The current Federal Acquisition Regulation (F.A.R.) continues to support this trend. Paragraph 32.103 of the regulation states, " . . . Retainage should not be used as a substitute for good contract management, and the contracting officer should not withhold funds without cause. Determinations to retain and the specific amount to be withheld shall be made by the contracting officers on a case-by-case basis. Such decisions will be based on the contracting officer's assessment of past performance and the likelihood that such performance will continue. " [ 27 ] Currently, federal agencies such as the Department of Defense , the General Services Administration , and the US Department of Transportation have 'zero' retainage policies. [ 2 ] Several alternatives exist to standard retainage provisions that provide the same benefits and protections. For example, parties can agree to establish a trust account. [ 26 ] A trust account provides the contractor with some control over its money, even if it is being held by the owner. [ 26 ] In a trust account, retainage is withheld by the owner, placed in a trust account with a trustee that has a fiduciary relationship to the contractor. [ 26 ] The trustee can invest the retainage at the contractor's direction, thereby allowing the contractor to "use" the retained funds that normally would sit idle in an escrow account. [ 26 ] Other alternatives to retainage are to allow the contractor to supply substitute security to the owner in the form of a performance bond, bank letter of credit, or a security of, or guaranteed by, the United States, such as bills, certificates, notes or bonds. [ 26 ] Retentions are used in several other countries. They are common in China, though in some cases the moiety payments are guaranteed by the Agricultural Bank of China . 
[ 3 ] : 165 They are also used in the United States, where the percentage retained is typically higher, at around 10%. However, the release of retention differs: 50% of the withheld money is often released once the works are considered to be 50% complete. Some states have taken measures to abolish or limit the use of retentions in public contracts. [ 3 ] : 163 In the United States the use of retention bonds is more common than in the UK. [ 3 ] : 164 Retentions are common in Qatar, where the proportion retained may be up to 30% of contract value due to the large number of foreign companies that operate under limited liability law in the state. [ 3 ] : 165 In Canada retentions are known as "holdback" payments; since 1997 all retention monies in Canada must be held in ring-fenced accounts. [ 3 ] : 164 [ 3 ] : 24 Retentions are used in Australia; in New South Wales all retention monies for projects in excess of $20 million must be held in ring-fenced accounts with an authorised bank. [ 3 ] : 23 [ 3 ] : 164 In New Zealand all retention monies are required to be held in trust and must be in cash or other liquid assets; this requirement was introduced following the 2013 collapse of main contractor Mainzeal . [ 3 ] : 24 [ 3 ] : 164 However, after the 2019 collapse of Stanley Group it was discovered that retention money was not properly administered, residing in the company's main account despite the group claiming to sub-contractors that it had been held in separate accounts, and was therefore liable to loss during the liquidation process. [ 28 ] The retention system is not used in Germany, where the works remain the property of the contractor until completion and are, therefore, liable to be withheld from the client in cases of dispute. [ 3 ] : 165
https://en.wikipedia.org/wiki/Retainage
In chemistry , a retained name is a name for a chemical compound that is recommended for use by a system of chemical nomenclature (for example, IUPAC nomenclature ) but that is not exactly systematic. [ 1 ] [ 2 ] Retained names are often used for the most fundamental parts of a nomenclature system: almost all the chemical elements have retained names rather than being named systematically , as do the first four alkanes , benzene and most simple heterocyclic compounds . Water and ammonia are other examples of retained names. Retained names may be either semisystematic or completely trivial ; that is, they may contain certain elements of systematic nomenclature or none at all. [ 3 ] Glycerol and acetic acid are examples of retained semisystematic names; furan and anisole are examples of retained trivial names. [ 2 ]
https://en.wikipedia.org/wiki/Retained_name
In chromatography , the retardation factor ( R ) is the fraction of an analyte in the mobile phase of a chromatographic system. [ 1 ] In planar chromatography in particular, the retardation factor R F is defined as the ratio of the distance traveled by the center of a spot to the distance traveled by the solvent front. [ 2 ] Ideally, the values for R F are equivalent to the R values used in column chromatography. [ 2 ] Although the term retention factor is sometimes used synonymously with retardation factor in regard to planar chromatography, the term is not defined in this context. However, in column chromatography , the retention factor or capacity factor ( k ) is defined as the ratio of time an analyte is retained in the stationary phase to the time it is retained in the mobile phase, [ 3 ] which is inversely related to the retardation factor. In chromatography, the retardation factor, R , is the fraction of the sample in the mobile phase at equilibrium, defined as: [ 1 ] R = (amount of analyte in the mobile phase) / (total amount of analyte in the system). The retardation factor, R F , is commonly used in paper chromatography and thin layer chromatography (TLC) for analyzing and comparing different substances. It can be mathematically described by the following ratio: [ 2 ] R F = (distance traveled by the center of the spot) / (distance traveled by the solvent front). An R F value will always be in the range 0 to 1; if the substance moves, it can only move in the direction of the solvent flow, and cannot move faster than the solvent. For example, if a particular substance in an unknown mixture travels 2.5 cm and the solvent front travels 5.0 cm, the retardation factor would be 0.50. One can choose a mobile phase with different characteristics (particularly polarity) in order to control how far the substance being investigated migrates. An R F value is characteristic for any given compound (provided that the same stationary and mobile phases are used). It can provide corroborative evidence as to the identity of a compound. 
If the identity of a compound is suspected but not yet proven, an authentic sample of the compound, or standard, is spotted and run on a TLC plate side by side (or on top of each other) with the compound in question. Note that this identity check must be performed on a single plate, because it is difficult to duplicate all the factors which influence R F exactly from experiment to experiment. In terms of the retention factor ( k ), the retardation factor ( R ) is given by R = 1 / (1 + k ), which follows from the definition of k as the ratio of the time the analyte spends in the stationary phase to the time it spends in the mobile phase. [ 3 ]
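The worked example and the relation between R and k can be checked with a few lines of code; the function names below are ours for illustration, not standard chromatography software:

```python
def retardation_factor(spot_distance, solvent_front_distance):
    """R_F on a TLC plate: distance moved by the spot's center divided by
    the distance moved by the solvent front (same units, e.g. cm)."""
    if not 0 <= spot_distance <= solvent_front_distance:
        raise ValueError("a spot cannot travel farther than the solvent front")
    return spot_distance / solvent_front_distance

def retardation_from_retention(k):
    """R = 1 / (1 + k), where k is the ratio of time spent in the
    stationary phase to time spent in the mobile phase."""
    return 1.0 / (1.0 + k)

print(retardation_factor(2.5, 5.0))    # 0.5, the worked example in the text
print(retardation_from_retention(3.0)) # 0.25: the analyte spends 3x longer in the stationary phase
```

The guard clause encodes the constraint stated above: R F always lies between 0 and 1.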
https://en.wikipedia.org/wiki/Retardation_factor
A retarder is a chemical agent that slows down a chemical reaction . For example, retarders are used to slow the chemical hardening of plastic materials such as wallboard , concrete , and adhesives . [ 1 ] Sugar water acts as a retarder for the curing of concrete. It can be used to retard the chemical hardening of the surface, so that the top layer can be washed off to expose the underlying aggregate .
https://en.wikipedia.org/wiki/Retarder_(chemistry)
A rete mirabile ( Latin for "wonderful net"; pl. : retia mirabilia ) is a complex of arteries and veins lying very close to each other, found in some vertebrates , mainly warm-blooded ones. The rete mirabile utilizes countercurrent blood flow within the net (blood flowing in opposite directions) to act as a countercurrent exchanger . It exchanges heat , ions , or gases between vessel walls so that the two bloodstreams within the rete maintain a gradient with respect to temperature , or concentration of gases or solutes . This term was coined by Galen . [ 1 ] [ 2 ] The effectiveness of retia is primarily determined by how readily the heat, ions, or gases can be exchanged. For a given length, they are most effective with respect to gases or heat, then small ions, and decreasingly so with respect to other substances. [ citation needed ] The retia can provide for extremely efficient exchanges. In bluefin tuna , for example, nearly all of the metabolic heat in the venous blood is transferred to the arterial blood, thus conserving muscle temperature; that heat exchange approaches 99% efficiency. [ 3 ] [ 4 ] In birds with webbed feet , retia mirabilia in the legs and feet transfer heat from the outgoing (hot) blood in the arteries to the incoming (cold) blood in the veins. The effect of this biological heat exchanger is that the internal temperature of the feet is much closer to the ambient temperature, thus reducing heat loss. Penguins also have them in the flippers and nasal passages. Seabirds distill seawater using countercurrent exchange in a so-called salt gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. 
As freshwater is not usually available in their environments, some seabirds, such as pelicans , petrels , albatrosses , gulls and terns , possess this gland, which allows them to drink the salty water from their environments while they are hundreds of miles away from land. [ 5 ] [ 6 ] Fish have evolved retia mirabilia multiple times to raise the temperature [ 7 ] ( endothermy ) or the oxygen concentration of a body part above the ambient level. [ 8 ] In many fish , a rete mirabile helps fill the swim bladder with oxygen , increasing the fish's buoyancy . The rete mirabile is an essential [ 8 ] part of the system that pumps dissolved oxygen from a low partial pressure ( P O 2 {\displaystyle {P_{\rm {O_{2}}}}} ) of 0.2 atmospheres into a gas filled bladder that is at a pressure of hundreds of atmospheres. [ 9 ] A rete mirabile called the choroid rete mirabile is found in most living teleosts and raises the P O 2 {\displaystyle {P_{\rm {O_{2}}}}} of the retina. [ 8 ] The higher supply of oxygen allows the teleost retina to be thick and have few blood vessels thereby increasing its sensitivity to light . [ 10 ] In addition to raising the P O 2 {\displaystyle {P_{\rm {O_{2}}}}} , the choroid rete has evolved to raise the temperature of the eye in some teleosts and sharks . [ 7 ] A countercurrent exchange system is utilized between the venous and arterial capillaries. Lowering the pH levels in the venous capillaries causes oxygen to unbind from blood hemoglobin because of the Root effect . This causes an increase in venous blood oxygen partial pressure, allowing the oxygen to diffuse through the capillary membrane and into the arterial capillaries, where oxygen is still sequestered to hemoglobin. The cycle of diffusion continues until the partial pressure of oxygen in the arterial capillaries exceeds that in the swim bladder. At this point, the dissolved oxygen in the arterial capillaries diffuses into the swim bladder via the gas gland. 
[11] The rete mirabile allows for an increase in muscle temperature in regions where this network of veins and arteries is found, letting the fish thermoregulate certain areas of its body. This increase in temperature also raises the basal metabolic rate, so the fish can split ATP at a higher rate and ultimately swim faster. The opah utilizes retia mirabilia to conserve heat, making it the newest addition to the list of regionally endothermic fish. Blood traveling through the capillaries of the gills is cooled by exposure to the cold water, but retia mirabilia in the opah's gills transfer heat from warm blood in arterioles coming from the heart to the colder blood in arterioles leaving the gills. The huge pectoral muscles of the opah, which generate most of the body heat, are thus able to control the temperature of the rest of the body. [12] In mammals, an elegant rete mirabile in the efferent arterioles of juxtamedullary glomeruli is important in maintaining the hypertonicity of the renal medulla. It is the hypertonicity of this zone, resorbing water osmotically from the renal collecting ducts as they exit the kidney, that makes possible the excretion of a hypertonic urine and maximum conservation of body water. Vascular retia mirabilia are also found in the limbs of a range of mammals. These reduce the temperature in the extremities. Some of these probably function to prevent heat loss in cold conditions by reducing the temperature gradient between the limb and the environment. Others reduce the temperature of the testes, increasing their productivity. In the neck of the dog, a rete mirabile protects the brain when the body overheats during hunting; the venous blood is cooled down by panting before entering the net. Retia mirabilia also occur frequently in mammals that burrow, dive or have arboreal lifestyles that involve clinging with the limbs for lengthy periods.
In the last case, slow-moving arboreal mammals such as sloths, lorises and arboreal anteaters possess retia of the highly developed type known as vascular bundles. The structure and function of these mammalian retia mirabilia are reviewed by O'Dea (1990). [ 13 ] The ancient physician Galen mistakenly thought that humans also have a rete mirabile in the neck, apparently based on dissection of sheep and misidentifying the results with the human carotid sinus , and ascribed important properties to it; it fell to Berengario da Carpi first, and then to Vesalius to demonstrate the error.
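The countercurrent principle underlying the rete mirabile can be illustrated with the standard effectiveness-NTU formulas for balanced exchangers (equal flow heat capacities). This is a sketch under simplifying assumptions, not a model from the article: it shows why countercurrent flow can approach ~100% exchange (as in the bluefin tuna example above), while parallel (concurrent) flow is capped at 50%.

```python
from math import exp

# Effectiveness of balanced heat exchangers as a function of NTU
# (number of transfer units, a dimensionless measure of exchanger size).
# Standard epsilon-NTU results for equal flow heat capacities.

def eps_countercurrent(ntu):
    """Fraction of the maximum possible heat transferred, counterflow."""
    return ntu / (1.0 + ntu)

def eps_parallel(ntu):
    """Same quantity for parallel (concurrent) flow; saturates at 0.5."""
    return (1.0 - exp(-2.0 * ntu)) / 2.0

for ntu in (1, 10, 100):
    print(ntu, round(eps_countercurrent(ntu), 3), round(eps_parallel(ntu), 3))
# countercurrent: 0.5, 0.909, 0.99 -- parallel flow never exceeds 0.5
```

A long, fine-grained net of closely apposed vessels corresponds to a large NTU, which is how retia like the tuna's reach near-99% heat-exchange efficiency.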
https://en.wikipedia.org/wiki/Rete_mirabile
Retene, methyl isopropyl phenanthrene or 1-methyl-7-isopropylphenanthrene, C18H18, is a polycyclic aromatic hydrocarbon present in the coal tar fraction boiling above 360 °C. It occurs naturally in the tars obtained by the distillation of resinous woods. It crystallizes in large plates, which melt at 98.5 °C and boil at 390 °C. It is readily soluble in warm ether and in hot glacial acetic acid. Sodium and boiling amyl alcohol reduce it to a tetrahydroretene, but if it is heated with phosphorus and hydriodic acid to 260 °C, a dodecahydride is formed. Chromic acid oxidizes it to retene quinone, phthalic acid and acetic acid. It forms a picrate that melts at 123-124 °C. Retene is derived by degradation of specific diterpenoids biologically produced by conifer trees. The presence of traces of retene in the air is an indicator of forest fires; it is a major product of pyrolysis of conifer trees. [1] It is also present in effluents from wood pulp and paper mills. [2] Retene, together with cadalene, simonellite and ip-iHMN, is a biomarker of vascular plants, which makes it useful for paleobotanic analysis of rock sediments. The ratio of retene/cadalene in sediments can reveal the ratio of the family Pinaceae in the biosphere. [3] A recent study has shown that retene, a component of Amazonian organic PM10, is cytotoxic to human lung cells. [4] This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Retene". Encyclopædia Britannica. Vol. 23 (11th ed.). Cambridge University Press. p. 202.
https://en.wikipedia.org/wiki/Retene
A retention agent is a chemical that improves the retention of a functional chemical in a substrate. The result is that fewer chemicals overall are used to achieve the same effect, and fewer chemicals go to waste. Retention agents (retention aids) are used in the papermaking industry. They are added in the wet end of the paper machine to improve the retention of fine particles and fillers during the formation of paper. Retention aids can also be used to improve the retention of other papermaking chemicals, including sizing agents and cationic starches. The improved retention of papermaking furnish components improves the operational efficiency of the paper machine, reduces the solids and organic loading in the process water loop, and can lower overall chemical costs. Typical chemicals used as retention aids are polyacrylamide (PAM), polyethyleneimine (PEI), colloidal silica, and bentonite. Retention agents are often used together with drainage aids on paper machines, because while retention is enhanced, the forming fabrics can become choked, resulting in slower removal of water from the paper web. Research done at a manufacturing laboratory in India suggests that overuse of flocculants in this category can also cause problems with the runnability of the machine.
https://en.wikipedia.org/wiki/Retention_agent
A retention basin, sometimes called a retention pond, wet detention basin, or storm water management pond (SWMP), is an artificial pond with vegetation around the perimeter and a permanent pool of water in its design. [1][2][3] It is used to manage stormwater runoff, to protect against flooding, to control erosion, and to serve as an artificial wetland and improve the water quality in adjacent bodies of water. It is distinguished from a detention basin, sometimes called a "dry pond", which temporarily stores water after a storm but eventually empties out at a controlled rate to a downstream water body. It also differs from an infiltration basin, which is designed to direct stormwater to groundwater through permeable soils. Wet ponds are frequently used for water quality improvement, groundwater recharge, flood protection, aesthetic improvement, or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and be viewed as an amenity. [4] In urban areas, impervious surfaces (roofs, roads) reduce the time rainfall takes to enter the stormwater drainage system. If left unchecked, this will cause widespread flooding downstream. The function of a stormwater pond is to contain this surge and release it slowly. This slow release mitigates the size and intensity of storm-induced flooding on downstream receiving waters. Stormwater ponds also collect suspended sediments, which are often found in high concentrations in stormwater due to upstream construction and sand applications to roadways. Storm water is typically channeled to a retention basin through a system of street and/or parking lot storm drains, and a network of drain channels or underground pipes.
The basins are designed to allow relatively large flows of water to enter, but discharges to receiving waters are limited by outlet structures that function only during very large storm events. Retention ponds are often landscaped with a variety of grasses , shrubs , and/or aquatic plants to provide bank stability and aesthetic benefits. Vegetation also provides water quality benefits by removing soluble nutrients through uptake. [ 5 ] In some areas the ponds can attract nuisance types of wildlife like ducks or Canada geese , particularly where there is minimal landscaping and grasses are mowed. This reduces the ability of foxes , coyotes , and other predators to approach their prey unseen. Such predators tend to hide in the cattails and other tall, thick grass surrounding natural water features. Proper depth of retention ponds is important for removal of pollutants and maintenance of fish populations. Urban fishing continues to be one of the fastest growing fishing segments as new suburban neighborhoods are built around these aquatic areas. [ citation needed ]
https://en.wikipedia.org/wiki/Retention_basin
Retention distance, or R D , is a concept in thin layer chromatography, designed for quantitative measurement of the equal-spreading of the spots on the chromatographic plate, and one of the chromatographic response functions. It is calculated from the following formula: {\displaystyle R_{D}={\Bigg [}(n+1)^{(n+1)}\prod _{i=0}^{n}(R_{F(i+1)}-R_{Fi}){\Bigg ]}^{\frac {1}{n}}} where n is the number of compounds separated, R f (1...n) are the retention factors of the compounds sorted in non-descending order, R f0 = 0 and R f(n+1) = 1. The coefficient always lies in the range [0,1]; 0 indicates the worst case of separation (all R f values equal to 0 or 1), while 1 indicates ideal equal-spreading of the spots, for example (0.25, 0.5, 0.75) for three solutes, or (0.2, 0.4, 0.6, 0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as delta-Rf, the delta-Rf product, or MRF (Multispot Response Function). Besides its stable range, its advantage is a stable distribution as a random variable, regardless of the compounds investigated. In contrast to the similar concept called retention uniformity, R D is sensitive to R f values close to 0 or 1, or close to each other. If two values are not separated, it is equal to 0. For example, the R f values (0, 0.2, 0.2, 0.3) (two compounds not separated at 0.2 and one at the start) result in R D equal to 0, but R U equal to 0.3609. When the spots lie at some distance from 0 and from each other, the value is larger; for example, the R f values (0.1, 0.2, 0.25, 0.3) give R D = 0.4835, R U = 0.4066.
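The formula can be checked against the worked examples in the text with a short script (the function name is mine, not from any chromatography library):

```python
from math import prod

# Illustrative implementation of the retention distance R_D:
# R_D = [ (n+1)^(n+1) * product of the n+1 gaps between consecutive
#         retention factors, with boundaries R_F0 = 0 and R_F(n+1) = 1 ]^(1/n)

def retention_distance(rf):
    """R_D for retention factors rf (each in [0, 1])."""
    n = len(rf)
    bounds = [0.0] + sorted(rf) + [1.0]              # add R_F0 and R_F(n+1)
    gaps = [bounds[i + 1] - bounds[i] for i in range(n + 1)]
    return ((n + 1) ** (n + 1) * prod(gaps)) ** (1.0 / n)

print(round(retention_distance([0.1, 0.2, 0.25, 0.3]), 4))  # ~0.4836 (text quotes 0.4835)
print(retention_distance([0.0, 0.2, 0.2, 0.3]))             # 0.0 (two spots coincide)
print(round(retention_distance([0.2, 0.4, 0.6, 0.8]), 4))   # 1.0 (ideal spread)
```

Any coinciding pair of spots makes one gap zero, driving the whole product (and hence R D ) to zero, which is exactly the sensitivity contrasted with R U above.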
https://en.wikipedia.org/wiki/Retention_distance
Retention uniformity, or R U , is a concept in thin layer chromatography. It is designed for the quantitative measurement of the equal-spreading of the spots on the chromatographic plate and is one of the chromatographic response functions. Retention uniformity is calculated from the following formula: {\displaystyle R_{U}=1-{\sqrt {{\frac {6(n+1)}{n(2n+1)}}\sum _{i=1}^{n}{\left(R_{Fi}-{\frac {i}{n+1}}\right)^{2}}}}} where n is the number of compounds separated and R f (1...n) are the retention factors of the compounds sorted in non-descending order. The coefficient always lies in the range [0,1]; 0 indicates the worst case of separation (all R f values equal to 0 or 1), while 1 indicates ideal equal-spreading of the spots, for example (0.25, 0.5, 0.75) for three solutes, or (0.2, 0.4, 0.6, 0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as D (separation response), I p (performance index) or S m (informational entropy). Besides its stable range, its advantage is a stable distribution as a random variable, regardless of the compounds investigated. In contrast to the similar concept called retention distance, R U is insensitive to R f values close to 0 or 1, or close to each other. If two values are not separated, it still indicates some uniformity of the chromatographic system. For example, the R f values (0, 0.2, 0.2, 0.3) (two compounds not separated at 0.2 and one at the start) result in R U equal to 0.3609.
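As with retention distance, the worked example can be reproduced directly (again, the function name is mine, not a library call):

```python
from math import sqrt

# Illustrative implementation of retention uniformity R_U:
# R_U = 1 - sqrt( 6(n+1)/(n(2n+1)) * sum_i (R_Fi - i/(n+1))^2 ),
# i.e. 1 minus a normalized RMS deviation from the ideally spread positions.

def retention_uniformity(rf):
    """R_U for retention factors rf (each in [0, 1])."""
    n = len(rf)
    rf = sorted(rf)
    s = sum((rf[i] - (i + 1) / (n + 1)) ** 2 for i in range(n))
    return 1.0 - sqrt(6.0 * (n + 1) / (n * (2 * n + 1)) * s)

print(round(retention_uniformity([0.0, 0.2, 0.2, 0.3]), 3))  # ~0.361 (text quotes 0.3609)
print(retention_uniformity([0.2, 0.4, 0.6, 0.8]))            # 1.0 (ideal spread)
```

Note that the coinciding pair at 0.2 still yields a nonzero value, illustrating the insensitivity that distinguishes R U from R D .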
https://en.wikipedia.org/wiki/Retention_uniformity
A reticle or reticule, [1][2] also known as a graticule or crosshair, is a pattern of fine lines or markings built into the eyepiece of an optical device such as a telescopic sight, spotting scope, theodolite, optical microscope or the screen of an oscilloscope, to provide measurement references during visual inspections. Today, engraved lines or embedded fibers may be replaced by a digital image superimposed on a screen or eyepiece. Both terms may be used to describe any set of patterns used for aiding visual measurements and calibrations, but in modern use reticle is most commonly used for weapon sights, while graticule is more widely used for non-weapon measuring instruments such as oscilloscope displays, astronomical telescopes, microscopes and slides, surveying instruments and other similar devices. There are many variations of reticle pattern; this article concerns itself mainly with the most rudimentary reticle: the crosshair. Crosshairs are typically represented as a pair of perpendicularly intersecting lines in the shape of a cross, "+", though many variations exist, including dots, posts, concentric circles/horseshoes, chevrons, graduated markings, or a combination of the above. Most commonly associated with telescopic sights for aiming firearms, crosshairs are also common in optical instruments used for astronomy and surveying, and are also popular in graphical user interfaces as a precision pointer. The reticle is said to have been invented by Robert Hooke, and dates to the 17th century. [3] Another candidate as inventor is the amateur astronomer William Gascoigne, who predated Hooke. [4] The term reticle comes from the Latin reticulum, meaning small net. Telescopic sights for firearms, generally just called scopes, are probably the device most often associated with crosshairs.
Motion pictures and the media often use a view through crosshairs as a dramatic device, which has given crosshairs wide cultural exposure. While the traditional thin crossing lines are the original and still the most familiar crosshair shape, they are really best suited for precision aiming at high-contrast targets, as the thin lines are easily lost in complex backgrounds, such as those encountered while hunting. Thicker bars are much easier to discern against a complex background, but lack the precision of thin bars. The most popular types of crosshair in modern scopes are variants on the duplex crosshair, with bars that are thick on the perimeter and thin out in the middle. The thick bars allow the eye to quickly locate the center of the reticle, and the thin lines in the center allow for precision aiming. The thin bars in a duplex reticle may also be designed to be used as a measure. Called a 30/30 reticle, the thin bars on such a reticle span 30 minutes of arc (0.5°), which is approximately equal to 30 inches at 100 yards or 90 centimeters at 100 meters. This enables an experienced shooter to deduce (as opposed to guess or estimate), on the basis of the known size of an object in view, the range within an acceptable error limit. Originally, crosshairs were constructed out of hair or spiderweb, these materials being sufficiently thin and strong. Many modern scopes use wire crosshairs, which can be flattened to various degrees to change the width. These wires are usually silver in color, but appear black when backlit by the image passing through the scope's optics. Wire reticles are by nature fairly simple, as they require lines that pass all the way across the reticle, and the shapes are limited to the variations in thickness allowed by flattening the wire; duplex crosshairs, and crosshairs with dots, are possible, and multiple horizontal or vertical lines may be used.
The advantage of wire crosshairs is that they are fairly tough and durable, and provide no obstruction to light passing through the scope. The first suggestion for etched glass reticles was made by Philippe de La Hire in 1700. [5] His method was based on engraving the lines on a glass plate with a diamond point. Many modern crosshairs are actually etched onto a thin plate of glass, which allows a far greater latitude in shapes. Etched glass reticles can have floating elements, which do not cross the reticle; circles and dots are common, and some types of glass reticles have complex sections designed for use in range estimation and bullet drop and drift compensation (see external ballistics). A potential disadvantage of glass reticles is that the surface of the glass reflects some light (about 4% per surface on uncoated glass [6]), lessening transmission through the scope, although this light loss is near zero if the glass is multicoated (coating being the norm for all modern high-quality optical products). Reticles may be illuminated, either by a plastic or fiber optic light pipe collecting ambient light or, in low light conditions, by a battery-powered LED. Some sights instead use the radioactive decay of tritium for illumination, which can work for 11 years without a battery; this is used in the British SUSAT sight for the SA80 (L85) assault rifle and in the American ACOG (Advanced Combat Optical Gunsight). Red is the most common color used, as it is the least destructive to the shooter's night vision, but some products use green or yellow illumination, either as a single colour or changeable via user selection. Another term for reticle is graticule, which is frequently encountered in British and British military technical manuals; it came into common use during World War I. [7] The reticle may be located at the front or rear focal plane (first focal plane (FFP) or second focal plane (SFP)) [8] of the telescopic sight.
On fixed power telescopic sights there is no significant difference, but on variable power telescopic sights a front focal plane reticle remains at a constant size relative to the target, while a rear focal plane reticle remains at a constant size to the user as the target image grows and shrinks. Front focal plane reticles are slightly more durable, but most American users prefer that the reticle remain constant as the image changes size, so nearly all modern American variable power telescopic sights are rear focal plane designs. [citation needed] American and European high-end optics manufacturers often leave the customer the choice between an FFP or SFP mounted reticle. Collimated reticles are produced by non-magnifying optical devices such as reflector sights (often called reflex sights) that give the viewer an image of the reticle superimposed over the field of view, and blind collimator sights that are used with both eyes. Collimated reticles are created using refractive or reflective optical collimators to generate a collimated image of an illuminated or reflective reticle. These types of sights are used on surveying/triangulating equipment, to aid celestial telescope aiming, and as sights on firearms. Historically they were used on larger military weapon systems that could supply an electrical source to illuminate them and where the operator needed a wide field of view to track and range a moving target visually (i.e. weapons from the pre-laser/radar/computer era). More recently, sights using durable, low-power-consumption light emitting diodes as the reticle (called red dot sights) have become common on small arms, with versions like the Aimpoint CompM2 being widely fielded by the U.S. Military. Holographic weapon sights use a holographic image of a reticle, set at a finite range, built into the viewing window, and a collimated laser diode to illuminate it.
An advantage to holographic sights is that they eliminate a type of parallax problem found in some optical collimator based sights (such as the red dot sight ) where the spherical mirror used induces spherical aberration that can cause the reticle to skew off the sight's optical axis . The use of a hologram also eliminates the need for image dimming narrow band reflective coatings and allows for reticles of almost any shape or mil size. A downside to the holographic weapon sight can be the weight and shorter battery life. As with red dot sights, holographic weapon sights have also become common on small arms with versions like the Eotech 512.A65 and similar models fielded by the U.S. Military [ 9 ] and various law enforcement agencies. In older instruments, reticle crosshairs and stadia marks were made using threads taken from the cocoon of the brown recluse spider . This very fine, strong spider silk makes for an excellent crosshair. [ 10 ] [ 11 ] In surveying, reticles are designed for specific uses. Levels and theodolites would have slightly different reticles. However, both may have features such as stadia marks to allow distance measurements. For astronomical uses, reticles could be simple crosshair designs or more elaborate designs for special purposes. Telescopes used for polar alignment could have a reticle that indicates the position of Polaris relative to the north celestial pole. Telescopes that are used for very precise measurements would have a filar micrometer as a reticle; this could be adjusted by the operator to measure angular distances between stars. For aiming telescopes, reflex sights are popular, often in conjunction with a small telescope with a crosshair reticle. They make aiming the telescope at an astronomical object easier. The constellation Reticulum was designated to recognize the reticle and its contributions to astronomy.
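The 30/30 range-estimation idea described earlier reduces to simple trigonometry: a target of known size that subtends a known angle in the reticle sits at distance = size / tan(angle). A minimal sketch (the function name and worked numbers are mine, not from the article):

```python
from math import radians, tan

# Hedged sketch of reticle-based range estimation ("milling" a target):
# distance = target size / tan(subtended angle).

def range_yards(target_size_inches, subtended_moa):
    """Distance in yards to a target of known size subtending the given MOA."""
    angle = radians(subtended_moa / 60.0)            # MOA -> degrees -> radians
    return (target_size_inches / tan(angle)) / 36.0  # inches -> yards

# A 30-inch target spanning the full 30 MOA between the thin-bar tips of a
# 30/30 reticle is at roughly 100 yards (the "30 inches at 100 yards" rule
# of thumb; the exact subtension of 30 MOA at 100 yd is about 31.4 inches):
print(round(range_yards(30, 30), 1))   # ~95.5
```

Halving the subtended angle roughly doubles the estimated range, which is how a shooter brackets distance against objects of known size.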
https://en.wikipedia.org/wiki/Reticle
In cellular biology, a reticular cell is a type of fibroblast that synthesizes collagen alpha-1(III) and uses it to produce extracellular reticular fibers. Reticular cells provide structural support, since they produce and maintain the thin networks of fibers that form a framework for most lymphoid organs. They are found in many organs, including the spleen, lymph nodes and kidneys, as well as within tissues such as lymph nodules. There are different types of reticular cells, including epithelial, mesenchymal, and fibroblastic reticular cells. Fibroblastic reticular cells are involved in directing B cells and T cells to specific regions within the tissue, whereas epithelial and mesenchymal reticular cells are associated with certain areas of the brain.
https://en.wikipedia.org/wiki/Reticular_cell
Reticulate evolution, or network evolution, is the origination of a lineage through the partial merging of two ancestor lineages, leading to relationships better described by a phylogenetic network than a bifurcating tree. [1] Reticulate patterns can be found in the phylogenetic reconstructions of biodiversity lineages obtained by comparing the characteristics of organisms. [2] Reticulation processes can potentially be convergent and divergent at the same time. [3] Reticulate evolution indicates the lack of independence between two evolutionary lineages. [1] Reticulation affects the survival, fitness and speciation rates of species. [2] Reticulate evolution can happen between lineages separated only for a short time, for example through hybrid speciation in a species complex. Nevertheless, it also takes place over larger evolutionary distances, as exemplified by the presence of organelles of bacterial origin in eukaryotic cells. [2] Reticulation occurs at various levels: [4] at a chromosomal level, meiotic recombination causes evolution to be reticulate; at a species level, reticulation arises through hybrid speciation and horizontal gene transfer; and at a population level, sexual recombination causes reticulation. [1] The adjective reticulate stems from the Latin words reticulatus, "having a net-like pattern", from reticulum, "little net". [5] Since the nineteenth century, scientists from different disciplines have studied how reticulate evolution occurs, and researchers have increasingly succeeded in identifying the responsible mechanisms and processes: reticulate evolution has been found to be driven by symbiosis, symbiogenesis (endosymbiosis), lateral gene transfer, hybridization and infectious heredity. [2] Symbiosis is a close and long-term biological interaction between two different biological organisms. [6] Often, both of the organisms involved develop new features upon the interaction with the other organism.
This may lead to the development of new, distinct organisms. [7][8] The alterations in genetic material upon symbiosis can occur via germline transmission or lateral transmission. [2][9][10] Therefore, the interaction between different organisms can drive the evolution of one or both organisms. [6] Symbiogenesis (endosymbiosis) is a special form of symbiosis whereby an organism lives inside another, different organism. Symbiogenesis is thought to be very important in the origin and evolution of eukaryotes. Eukaryotic organelles, such as mitochondria, have been theorized to have originated from bacteria living inside another cell. [11][12] Lateral gene transfer, or horizontal gene transfer, is the movement of genetic material between unicellular and/or multicellular organisms without a parent-offspring relationship. The horizontal transfer of genes results in new genes, which could give new functions to the recipient and thus could drive evolution. [13] In the neo-Darwinian paradigm, one of the assumed definitions of a species is Mayr's, which defines species based upon sexual compatibility. [14] Mayr's definition therefore suggests that individuals that can produce fertile offspring must belong to the same species. However, in hybridization, two organisms produce offspring while being distinct species. [2] During hybridization the characteristics of these two different species are combined, yielding a new organism, called a hybrid, thus driving evolution. [15] Infectious agents, such as viruses, can infect the cells of host organisms. Viruses infect cells of other organisms in order to enable their own reproduction. To this end, many viruses can insert copies of their genetic material into the host genome, potentially altering the phenotype of the host cell.
[16][17][18] When these viruses insert their genetic material into the genome of germ line cells, the modified host genome will be passed on to the offspring, yielding genetically differentiated organisms. Therefore, infectious heredity plays an important role in evolution, [2] for example in the formation of the placenta. [19][20] Reticulate evolution has played a key role in the evolution of some organisms, such as bacteria and flowering plants. [21][22] However, most methods for studying cladistics have been based on a model of strictly branching cladogeny, without assessing the importance of reticulate evolution. [23] Reticulation at chromosomal, genomic and species levels fails to be modelled by a bifurcating tree. [1] According to Ford Doolittle, an evolutionary and molecular biologist: "Molecular phylogeneticists will have failed to find the "true tree," not because their methods are inadequate or because they have chosen the wrong genes, but because the history of life cannot properly be represented as a tree". [24] Reticulate evolution refers to evolutionary processes which cannot be successfully represented using a classical phylogenetic tree model, [25] as it gives rise to rapid evolutionary change, with horizontal crossings and mergings often preceding a pattern of vertical descent with modification. [26] Reconstructing phylogenetic relationships under reticulate evolution requires adapted analytical methods. [27] Reticulate evolution dynamics contradict the neo-Darwinian theory, compiled in the Modern Synthesis, by which the evolution of life occurs through natural selection and is displayed with a bifurcating or branching pattern. Frequent hybridisation between species in natural populations challenges the assumption that species have evolved from a common ancestor by simple branching, in which branches are genetically isolated.
[ 27 ] [ 28 ] The study of reticulate evolution is said to have been largely excluded from the modern synthesis . [ 4 ] The urgent need for new models which take reticulate evolution into account has been stressed by many evolutionary biologists, such as Nathalie Gontier, who has stated " reticulate evolution today is a vernacular concept for evolutionary change induced by mechanisms and processes of symbiosis , symbiogenesis , lateral gene transfer , hybridization, or divergence with gene flow , and infectious heredity ". She calls for an extended evolutionary synthesis that integrates these mechanisms and processes of evolution. [ 26 ] Reticulate evolution has been extensively applied to plant hybridization in agriculture and gardening. The first commercial hybrids appeared in the early 1920s. [ 29 ] Since then, many protoplast fusion experiments have been carried out, some of which were aimed at the improvement of crop species. [ 30 ] Wild types possessing desirable agronomic traits are selected and fused in order to yield novel, improved species. The newly generated plant will be improved in traits such as better yield, greater uniformity, improved color, and disease resistance. [ 31 ] Reticulate evolution is regarded as a process that has shaped the histories of many organisms. [ 32 ] There is evidence of reticulation events in flowering plants, as the variation patterns between angiosperm families strongly suggest there has been widespread hybridisation. [ 33 ] Grant [ 21 ] states that phylogenetic networks, instead of phylogenetic trees, arise in all major groups of higher plants. Stable speciation events due to hybridisation between angiosperm species support the occurrence of reticulate evolution and highlight the key role of reticulation in the evolution of plants. 
[ 34 ] Genetic transfer can occur across wide taxonomic levels in microorganisms and become stably integrated into the new microbial populations, [ 35 ] [ 36 ] as has been observed through protein sequencing. [ 37 ] Reticulation in bacteria usually involves the transfer of only a few genes or parts of them. [ 23 ] Reticulate evolution driven by lateral gene transfer has also been observed in marine life. [ 38 ] Lateral genetic transfer of photo-response genes between planktonic bacteria and Archaea has been evidenced in some groups, showing an associated increase in environmental adaptability in organisms inhabiting photic zones. [ 39 ] Moreover, signs of reticulate evolution can be observed in the well-studied Darwin's finches . Peter and Rosemary Grant , who carried out extensive research on the evolutionary processes of the Geospiza genus, found that hybridization occurs between some species of Darwin's finches, yielding hybrid forms. This event could explain the origin of intermediate species. [ 40 ] Jonathan Weiner [ 41 ] commented on the observations of the Grants, suggesting the existence of reticulate evolution: " To the Grants, the whole tree of life now looks different from a year ago. The set of young twigs and shoots they study seems to be growing together in some seasons, apart in others. The same forces that created these lines are moving them toward fusion and then back toward fission ."; and " The Grants are looking at a pattern that was once dismissed as insignificant in the tree of life. The pattern is known as reticulate evolution, from the Latin reticulum, diminutive for net. The finches' lines are not so much lines or branches at all. They are more like twiggy thickets, full of little networks and delicate webbings ."
https://en.wikipedia.org/wiki/Reticulate_evolution
Reticulated foam is a very porous , low-density solid foam . 'Reticulated' means like a net . Reticulated foams are extremely open foams, i.e., there are few, if any, intact bubbles or cell windows. In contrast, the foam formed by soap bubbles is composed solely of intact (fully enclosed) bubbles. In a reticulated foam only the lineal boundaries where the bubbles meet ( Plateau borders ) remain. The solid component of a reticulated foam may be an organic polymer like polyurethane , a ceramic , or a metal . These materials are used in a wide range of applications where high porosity and large surface area are needed, including filters , catalyst supports, fuel tank inserts, and loudspeaker covers. A description of the structure of reticulated foams is still being developed. While Plateau's laws , the rules governing the shape of soap films in foams, were developed in the 19th century, a mathematical description of the structure is still debated. The computer-generated Weaire–Phelan structure is the most recent. In a reticulated foam only the edges of the polyhedra remain; the faces are missing. In commercial reticulated foam, up to 98% of the faces are removed. The dodecahedron is sometimes given as the basic unit for these foams, [ 1 ] but the most representative shape is a polyhedron with 13 faces. [ 2 ] [ 3 ] Cell size and cell size distribution are critical parameters for most applications. Porosity is typically 95%, but can be as high as 98%. [ 4 ] Reticulation affects many of the physical properties of a foam. Typically, resistance to compression is decreased, while tensile properties like elongation and resistance to tearing are increased. [ 5 ] Robert A. Volz is credited with discovering the first process for making reticulated polyurethane foam in 1956 while working for the Scott Paper Company . 
[ 6 ] Production of reticulated polyurethane foam is a two-step process that begins with the creation of conventional (closed-cell) polyurethane foam, after which the cell faces (or "windows") are removed. This step exploits the fact that the cell faces, having a higher surface area and lower mass than the cell struts (or edges), are much more susceptible to both combustion and chemical degradation. Thus, closed-cell foam is either filled with a combustible gas like hydrogen and ignited under controlled conditions, or it is exposed to a sodium hydroxide solution that chemically degrades the foam, removing the cell windows whilst sparing the edges. [ 7 ] Reticulated ceramic foams are made by coating a reticulated polyurethane foam with an aqueous suspension of a ceramic powder, then heating the material first to evaporate the water, then to fuse the ceramic particles, and finally to burn off the organic polymer. [ 4 ] Reticulated metal foam can also be made using polyurethane foam as a template, similar to its use in ceramic foams. Metals can be vapor deposited onto the polyurethane foam and the organic polymer then burned off. [ 8 ] Reticulated foams are used where porosity, surface area, and low density are important.
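The porosity figures quoted above (typically 95%, up to 98%) follow directly from the foam's bulk density relative to the density of its solid component: porosity = 1 − ρ_bulk/ρ_solid. A minimal sketch in Python, with purely illustrative density values that are not taken from any specific product:

```python
def porosity(bulk_density: float, solid_density: float) -> float:
    """Volume fraction of void space in a foam.

    porosity = 1 - (bulk density / solid density)
    Densities must share the same units (e.g. kg/m^3).
    """
    return 1.0 - bulk_density / solid_density

# Hypothetical polyurethane foam: bulk 60 kg/m^3, solid polymer 1200 kg/m^3.
print(porosity(60.0, 1200.0))   # 0.95, i.e. 95% porosity
# A lighter grade of the same polymer, bulk 24 kg/m^3:
print(porosity(24.0, 1200.0))   # 0.98, i.e. 98% porosity
```

The same relation, rearranged, lets a manufacturer back out the bulk density needed to hit a target porosity for a given solid material.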
https://en.wikipedia.org/wiki/Reticulated_foam
In biology , a reticulation of a single-access identification key connects different branches of the identification tree to improve error tolerance and identification success. [ 1 ] [ 2 ] [ 3 ] In a reticulated key, multiple paths lead to the same result; the tree data structure thus changes from a simple tree to a directed acyclic graph . Two forms of reticulation can be distinguished: Terminal reticulation and inner reticulation. Reticulations generally improve the usability of a key , but may also diminish the overall probability of correct identification averaged over all taxa . [ 4 ]
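The change from a simple tree to a directed acyclic graph can be made concrete with a small sketch: the key is stored as a mapping from nodes to a lead (question) and its choices, and a reticulation is simply a node reachable along more than one path. The leads and taxa below are invented for illustration and are not taken from any real key:

```python
# A single-access key as a directed acyclic graph. The inner reticulation
# here is the "broadleaf" node: it is reachable both from "start" and from
# "conifers", so multiple answer paths lead to the same taxa.
key = {
    "start": ("Leaves needle-like?", {"yes": "conifers", "no": "broadleaf"}),
    "conifers": ("Needles in bundles?", {"yes": "Pinus", "no": "broadleaf"}),
    "broadleaf": ("Leaves opposite?", {"yes": "Acer", "no": "Quercus"}),
}

def identify(answers: dict) -> str:
    """Follow the user's answers through the key until a taxon is reached."""
    node = "start"
    while node in key:          # internal nodes appear in the mapping
        question, choices = key[node]
        node = choices[answers[question]]
    return node                 # anything not in the mapping is a taxon

# Two different answer paths reach the same taxon, a reticulation:
path_a = {"Leaves needle-like?": "no", "Leaves opposite?": "no"}
path_b = {"Leaves needle-like?": "yes", "Needles in bundles?": "no",
          "Leaves opposite?": "no"}
print(identify(path_a), identify(path_b))  # Quercus Quercus
```

Because the graph is acyclic, identification still terminates; the reticulation only adds tolerance for a wrong turn at the first lead.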
https://en.wikipedia.org/wiki/Reticulation_(single-access_key)
Retina-X Studios is a software company that develops computer and cell phone monitoring applications, [ 2 ] focused on computers, smartphones , tablets and networks. [ 3 ] The company was founded in 1997 and is based in Jacksonville, Florida , United States. [ 1 ] The company was founded in July 1997 primarily as a web consulting and design company. In 2003, after a period of developing monitoring products for outside companies, the company began creating monitoring software products under its own brand name. The first software product, named AceSpy, was released on April 28, 2003. [ 4 ] In May 2007, the company developed and released monitoring software for mobile phones, named Mobile-Spy, particularly for Windows Mobile . Target audiences for Retina-X Studios are parents and employers. [ citation needed ] Parents and employers use legal monitoring software to check their teens' and staff's internet use. [ 5 ] [ 6 ] [ 7 ] The company markets its products as spy applications, as parents can review a child's messages and call details without the child's knowledge. [ 8 ] Ethical issues can arise if employees are not made aware of monitoring tools, if personal emails are intentionally accessed, and if managers are directly involved in evaluating the contents of logged activities, as they can become biased towards the person whose email is being reviewed. [ 9 ] The use of cell phones for spying has also increased with the proliferation of smartphones, and compromising one's information is very possible with spy apps. People can easily stalk each other with the company's software. [ 10 ] [ 11 ] All they need is one-time access to the device, after which such software runs invisibly. [ 12 ] Misuse of the software should not be overlooked. [ 13 ] Hackers can access the online information that is passed to the customer's account, and this can lead to privacy issues. [ 14 ]
https://en.wikipedia.org/wiki/Retina-X_Studios
Retinal (also known as retinaldehyde ) is a polyene chromophore . Retinal, bound to proteins called opsins , is the chemical basis of visual phototransduction , the light-detection stage of visual perception (vision). Some microorganisms use retinal to convert light into metabolic energy. One study suggests that approximately three billion years ago, most living organisms on Earth used retinal, rather than chlorophyll , to convert sunlight into energy. Because retinal absorbs mostly green light and transmits purple light, this gave rise to the Purple Earth hypothesis . [ 2 ] Retinal itself is considered to be a form of vitamin A when eaten by an animal. There are many forms of vitamin A, all of which are converted to retinal, which cannot be made without them. The number of different molecules that can be converted to retinal varies from species to species. Retinal was originally called retinene , [ 3 ] and was renamed [ 4 ] after it was discovered to be vitamin A aldehyde . [ 5 ] [ 6 ] Vertebrate animals ingest retinal directly from meat, or they produce retinal from carotenoids – either from α-carotene or β-carotene – both of which are carotenes . They also produce it from β-cryptoxanthin , a type of xanthophyll . These carotenoids must be obtained from plants or other photosynthetic organisms. No other carotenoids can be converted by animals to retinal. Some carnivores cannot convert any carotenoids at all. The other main forms of vitamin A – retinol and a partially active form, retinoic acid – may both be produced from retinal. Invertebrates such as insects and squid use hydroxylated forms of retinal in their visual systems, which are derived from other xanthophylls . Living organisms produce retinal by irreversible oxidative cleavage of carotenoids. [ 7 ] For example, β-carotene is cleaved at its center into two molecules of retinal, catalyzed by a beta-carotene 15,15'-monooxygenase [ 8 ] or a beta-carotene 15,15'-dioxygenase. 
[ 9 ] Just as carotenoids are the precursors of retinal, retinal is the precursor of the other forms of vitamin A. Retinal is interconvertible with retinol , the transport and storage form of vitamin A; the reduction of retinal to retinol is catalyzed by retinol dehydrogenases (RDHs) [ 10 ] and alcohol dehydrogenases (ADHs). [ 11 ] Retinol is called vitamin A alcohol or, more often, simply vitamin A. Retinal can also be oxidized to retinoic acid ; this oxidation is catalyzed by retinal dehydrogenases , [ 12 ] also known as retinaldehyde dehydrogenases (RALDHs), [ 11 ] as well as retinal oxidases . [ 13 ] Retinoic acid, sometimes called vitamin A acid , is an important signaling molecule and hormone in vertebrate animals. Retinal is a conjugated chromophore . In vertebrate eyes, retinal begins in an 11- cis -retinal configuration, which – upon capturing a photon of the correct wavelength – straightens out into an all- trans -retinal configuration. This configuration change pushes against an opsin protein in the retina , which triggers a chemical signaling cascade that results in the perception of light or images by the brain. The absorbance spectrum of the chromophore depends on its interactions with the opsin protein to which it is bound, so that different retinal-opsin complexes will absorb photons of different wavelengths (i.e., different colors of light). Retinal is bound to opsins , which are G protein-coupled receptors (GPCRs). [ 14 ] [ 15 ] Opsins, like other GPCRs, have seven transmembrane alpha-helices connected by six loops. They are found in the photoreceptor cells of the retina of the eye. The opsin in the vertebrate rod cells is rhodopsin . The rods form disks, which contain the rhodopsin molecules in their membranes and which are entirely inside the cell. The N-terminus head of the molecule extends into the interior of the disk, and the C-terminus tail extends into the cytoplasm of the cell. The opsins in the cone cells are OPN1SW , OPN1MW , and OPN1LW . 
The cones form incomplete disks that are part of the plasma membrane , so that the N-terminus head extends outside of the cell. In opsins, retinal binds covalently to a lysine [ 16 ] in the seventh transmembrane helix [ 17 ] [ 18 ] [ 19 ] through a Schiff base . [ 20 ] [ 21 ] Forming the Schiff base linkage involves removing the oxygen atom from retinal and two hydrogen atoms from the free amino group of lysine, giving H 2 O. Retinylidene is the divalent group formed by removing the oxygen atom from retinal, and so opsins have been called retinylidene proteins . Opsins are prototypical G protein-coupled receptors (GPCRs). [ 22 ] Cattle rhodopsin, the opsin of the rod cells, was the first GPCR to have its amino acid sequence [ 23 ] and 3D structure (via X-ray crystallography ) determined. [ 18 ] Cattle rhodopsin contains 348 amino acid residues. Retinal binds as the chromophore at Lys 296 . [ 18 ] [ 23 ] This lysine is conserved in almost all opsins; only a few opsins have lost it during evolution . [ 24 ] Opsins without the retinal-binding lysine are not light-sensitive. [ 25 ] [ 26 ] [ 27 ] Such opsins may have other functions. [ 26 ] [ 24 ] Although mammals use retinal exclusively as the opsin chromophore, other groups of animals additionally use four chromophores closely related to retinal: 3,4-didehydroretinal (vitamin A 2 ), (3 R )-3-hydroxyretinal, (3 S )-3-hydroxyretinal (both vitamin A 3 ), and (4 R )-4-hydroxyretinal (vitamin A 4 ). Many fish and amphibians use 3,4-didehydroretinal, also called dehydroretinal . With the exception of the dipteran suborder Cyclorrhapha (the so-called higher flies), all insects examined use the ( R )- enantiomer of 3-hydroxyretinal. The ( R )-enantiomer is to be expected if 3-hydroxyretinal is produced directly from xanthophyll carotenoids. Cyclorrhaphans, including Drosophila , use (3 S )-3-hydroxyretinal. [ 28 ] [ 29 ] Firefly squid have been found to use (4 R )-4-hydroxyretinal. 
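The atom bookkeeping of the Schiff-base linkage described above (an aldehyde condensing with a free amino group, losing one molecule of H2O) can be checked with elemental formulas. The sketch below uses methylamine as a hypothetical stand-in for the free amino group of the lysine side chain, an illustrative simplification rather than the actual protein chemistry:

```python
from collections import Counter

# Elemental formulas as Counters of atom counts.
retinal = Counter({"C": 20, "H": 28, "O": 1})      # retinal, C20H28O
methylamine = Counter({"C": 1, "H": 5, "N": 1})    # CH3NH2, stand-in amine
water = Counter({"H": 2, "O": 1})

# Retinylidene: the divalent group left after removing retinal's oxygen atom.
retinylidene = retinal - Counter({"O": 1})

# Schiff-base condensation: aldehyde + primary amine -> imine + H2O.
# Counter subtraction drops elements whose count reaches zero (here, O).
schiff_base = (retinal + methylamine) - water

print(dict(retinylidene))  # {'C': 20, 'H': 28}
print(dict(schiff_base))   # {'C': 21, 'H': 31, 'N': 1}
```

The oxygen disappears from the product exactly as the text describes: one O from retinal and two H (one from retinal's view of the mechanism, one from the amine) leave as water.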
The visual cycle is a circular enzymatic pathway that forms the front end of phototransduction. It regenerates 11- cis -retinal. In the visual cycle of mammalian rod cells, steps 3, 4, 5, and 6 occur in rod cell outer segments , while steps 1, 2, and 7 occur in retinal pigment epithelium (RPE) cells. RPE65 isomerohydrolases are homologous with beta-carotene monooxygenases; [ 7 ] the homologous ninaB enzyme in Drosophila has both retinal-forming carotenoid-oxygenase activity and all- trans to 11- cis isomerase activity. [ 32 ] All- trans -retinal is also an essential component of microbial opsins such as bacteriorhodopsin , channelrhodopsin , and halorhodopsin , which are important in bacterial and archaeal anoxygenic photosynthesis . In these molecules, light causes the all- trans -retinal to become 13- cis -retinal, which then cycles back to all- trans -retinal in the dark state. These proteins are not evolutionarily related to animal opsins and are not GPCRs; the fact that they both use retinal is a result of convergent evolution . [ 33 ] The American biochemist George Wald and others had outlined the visual cycle by 1958. For his work, Wald won a share of the 1967 Nobel Prize in Physiology or Medicine with Haldan Keffer Hartline and Ragnar Granit . [ 34 ]
https://en.wikipedia.org/wiki/Retinal
A retinal implant is a visual prosthesis for restoration of sight to patients blinded by retinal degeneration. The system is meant to partially restore useful vision to those who have lost their photoreceptors due to retinal diseases such as retinitis pigmentosa (RP) or age-related macular degeneration (AMD). Retinal implants are being developed by a number of private companies and research institutions, and three types are in clinical trials: epiretinal (on the retina ), subretinal (behind the retina), and suprachoroidal (between the choroid and the sclera). The implants introduce visual information into the retina by electrically stimulating the surviving retinal neurons. So far, elicited percepts have had rather low resolution, and may be suitable for light perception and recognition of simple objects. Foerster was the first to discover that electrical stimulation of the occipital cortex could be used to create visual percepts, phosphenes . [ 1 ] The first application of an implantable stimulator for vision restoration was developed by Drs. Brindley and Lewin in 1968. [ 2 ] This experiment demonstrated the viability of creating visual percepts using direct electrical stimulation, and it motivated the development of several other implantable devices for stimulation of the visual pathway, including retinal implants. [ 3 ] Retinal stimulation devices, in particular, have become a focus of research, as approximately half of all cases of blindness are caused by retinal damage. [ 4 ] The development of retinal implants has also been motivated in part by the advancement and success of cochlear implants , which have demonstrated that humans can regain significant sensory function with limited input. [ 5 ] The Argus II retinal implant , manufactured by Second Sight Medical Products, received market approval in the US in Feb 2013 and in Europe in Feb 2011, becoming the first approved implant. 
[ 6 ] The device may help adults with RP who have lost the ability to perceive shapes and movement to be more mobile and to perform day-to-day activities. Another device, known as the Retina Implant, was originally developed in Germany by Retina Implant AG . It completed a multi-centre clinical trial in Europe and was awarded a CE Mark in 2013, making it the first wireless subretinal electronic device to gain approval. Optimal candidates for retinal implants have retinal diseases, such as retinitis pigmentosa or age-related macular degeneration. These diseases cause blindness by affecting the photoreceptor cells in the outer layer of the retina, while leaving the inner and middle retinal layers intact. [ 4 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Minimally, a patient must have an intact ganglion cell layer in order to be a candidate for a retinal implant. This can be assessed non-invasively using optical coherence tomography (OCT) imaging . [ 12 ] Other factors, including the amount of residual vision, overall health, and family commitment to rehabilitation, are also considered when determining candidates for retinal implants. In subjects with age-related macular degeneration, who may have intact peripheral vision, retinal implants could result in a hybrid form of vision. In this case the implant would supplement the remaining peripheral vision with central vision information. [ 13 ] There are two main types of retinal implants by placement. Epiretinal implants are placed on the internal surface of the retina, while subretinal implants are placed between the outer retinal layer and the retinal pigment epithelium . Epiretinal implants are placed on top of the retinal surface, above the nerve fiber layer, directly stimulating ganglion cells and bypassing all other retinal layers. An array of electrodes is stabilized on the retina using micro tacks that penetrate the sclera. 
Typically, an external video camera mounted on eyeglasses [ 3 ] acquires images and transmits processed video information to the stimulating electrodes via wireless telemetry . [ 13 ] An external transmitter is also required to provide power to the implant via radio-frequency induction coils or infrared lasers. The real-time image processing involves reducing the resolution, enhancing contrast, detecting the edges in the image, and converting it into a spatio-temporal pattern of stimulation delivered to the electrode array on the retina. [ 4 ] [ 13 ] The majority of the electronics can be incorporated into the associated external components, allowing for a smaller implant and simpler upgrades without additional surgery. [ 14 ] The external electronics provide full control over the image processing for each patient. [ 3 ] Epiretinal implants directly stimulate the retinal ganglion cells, thereby bypassing all other retinal layers. Therefore, in principle, epiretinal implants could provide visual perception to individuals even if all other retinal layers have been damaged. Since the nerve fiber layer has a stimulation threshold similar to that of the retinal ganglion cells, axons passing under the epiretinal electrodes are also stimulated, creating arcuate percepts and thereby distorting the retinotopic map. So far, none of the epiretinal implants has had light-sensitive pixels, and hence they rely on an external camera for capturing the visual information. Therefore, unlike in natural vision, eye movements do not shift the transmitted image on the retina, which creates a perception of a moving object when a person with such an implant changes the direction of gaze. Patients with such implants are therefore asked not to move their eyes, but rather to scan the visual field with their head. 
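The real-time processing chain described above (resolution reduction, contrast enhancement, edge detection, and conversion to a stimulation pattern) can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not the pipeline of any actual device; the electrode grid size and firing threshold are invented assumptions:

```python
import numpy as np

def to_stimulation_pattern(image, grid=(6, 10), threshold=0.25):
    """Convert a 2D grayscale image in [0, 1] to an on/off electrode pattern.

    grid and threshold are illustrative assumptions, not device parameters.
    """
    h, w = image.shape
    gh, gw = grid
    # 1. Reduce resolution: average over blocks matching the electrode grid.
    small = image[: h - h % gh, : w - w % gw]
    small = small.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # 2. Enhance contrast: stretch intensities to the full [0, 1] range.
    small = (small - small.min()) / (np.ptp(small) + 1e-9)
    # 3. Detect edges: gradient magnitude between neighboring blocks.
    gy, gx = np.gradient(small)
    edges = np.hypot(gx, gy)
    # 4. Stimulation pattern: electrodes whose edge strength exceeds
    #    the threshold fire; the rest stay off.
    return edges > threshold

# A bright region on a dark background lights up electrodes along its border.
frame = np.zeros((60, 100))
frame[:, 50:] = 1.0
pattern = to_stimulation_pattern(frame)
```

A real system would add the temporal dimension (pulse trains per electrode) and per-patient tuning, which the external electronics make adjustable without surgery.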
Additionally, encoding visual information at the ganglion cell layer requires very sophisticated image processing techniques in order to account for the various types of retinal ganglion cells, which encode different features of the image. The first epiretinal implant, the ARGUS device, included a silicon platinum array with 16 electrodes. [ 13 ] The Phase I clinical trial of ARGUS began in 2002 by implanting the device in six participants. All patients reported gaining a perception of light and discrete phosphenes, with the visual function of some patients improving significantly over time. Future versions of the ARGUS device are being developed with increasingly dense electrode arrays, allowing for improved spatial resolution. The most recent ARGUS II device contains 60 electrodes, and a 200-electrode device is under development by ophthalmologists and engineers at the USC Eye Institute. [ 15 ] The ARGUS II device received marketing approval in February 2011 (CE Mark demonstrating safety and performance), and it is available in Germany, France, Italy, and the UK. Interim results from long-term trials of 30 patients were published in Ophthalmology in 2012. [ 16 ] Argus II received approval from the US FDA on April 14, 2013. Another epiretinal device, the Learning Retinal Implant, has been developed by IIP Technologies GmbH, and has begun to be evaluated in clinical trials. [ 13 ] A third epiretinal device, EPI-RET, has been developed and progressed to clinical testing in six patients. The EPI-RET device contains 25 electrodes and requires the crystalline lens to be replaced with a receiver chip. All subjects have demonstrated the ability to discriminate between different spatial and temporal patterns of stimulation. [ 17 ] Subretinal implants sit on the outer surface of the retina, between the photoreceptor layer and the retinal pigment epithelium, directly stimulating retinal cells and relying on the normal processing of the inner and middle retinal layers. 
[ 3 ] Fixing a subretinal implant in place is relatively simple, as the implant is mechanically constrained by the minimal distance between the outer retina and the retinal pigment epithelium. A subretinal implant consists of a silicon wafer containing light-sensitive microphotodiodes , which generate signals directly from the incoming light. Incident light passing through the retina generates currents within the microphotodiodes, which directly inject the resultant current into the underlying retinal cells via arrays of microelectrodes . The pattern of microphotodiodes activated by incident light therefore stimulates a pattern of bipolar , horizontal , amacrine , and ganglion cells, leading to a visual perception representative of the original incident image. In principle, subretinal implants do not require any external hardware beyond the implanted microphotodiode array. However, some subretinal implants require power from external circuitry to enhance the image signal. [ 4 ] A subretinal implant is advantageous over an epiretinal implant in part because of its simpler design. The light acquisition, processing, and stimulation are all carried out by microphotodiodes mounted on a single chip, as opposed to the external camera, processing chip, and implanted electrode array associated with an epiretinal implant. [ 4 ] The subretinal placement is also more straightforward, as it places the stimulating array directly adjacent to the damaged photoreceptors. [ 3 ] [ 13 ] By relying on the function of the remaining retinal layers, subretinal implants allow for normal inner retinal processing, including amplification, thus resulting in an overall lower threshold for a visual response. [ 3 ] Additionally, subretinal implants enable subjects to use normal eye movements to shift their gaze. 
The retinotopic stimulation from subretinal implants is inherently more accurate, as the pattern of incident light on the microphotodiodes is a direct reflection of the desired image. Subretinal implants require minimal fixation, as the subretinal space is mechanically constrained and the retinal pigment epithelium creates negative pressure within the subretinal space. [ 4 ] The main disadvantage of subretinal implants is the lack of sufficient incident light to enable the microphotodiodes to generate adequate current. Thus, subretinal implants often incorporate an external power source to amplify the effect of incident light. [ 3 ] The compact nature of the subretinal space imposes significant size constraints on the implant. The close proximity between the implant and the retina also increases the possibility of thermal damage to the retina from heat generated by the implant. [ 4 ] Subretinal implants require intact inner and middle retinal layers, and therefore are not beneficial for retinal diseases extending beyond the outer photoreceptor layer. Additionally, photoreceptor loss can result in the formation of a membrane at the boundary of the damaged photoreceptors, which can impede stimulation and increase the stimulation threshold. [ 13 ] Optobionics was the first company to develop a subretinal implant and evaluate the design in a clinical trial. Initial reports indicated that the implantation procedure was safe, and all subjects reported some perception of light and mild improvement in visual function. [ 18 ] The current version of this device has been implanted in 10 patients, who have each reported improvements in the perception of visual details, including contrast, shape, and movement. [ 4 ] Retina Implant AG in Germany has also developed a subretinal implant, which has undergone clinical testing in nine patients. The trial was put on hold due to repeated failures. 
[ 13 ] The Retina Implant AG device contains 1500 microphotodiodes, allowing for increased spatial resolution, but requires an external power source. Retina Implant AG reported 12-month results of the Alpha IMS study in February 2013 (published in Proceedings of the Royal Society B ), showing that six out of nine patients had a device failure in the nine months post implant, and that five of the eight subjects reported various implant-mediated visual perceptions in daily life. One had optic nerve damage and did not perceive stimulation. The Boston Subretinal Implant Project has also developed several iterations of a functional subretinal implant, and focused on short-term analysis of implant function. [ 19 ] Results from all clinical trials to date indicate that patients receiving subretinal implants report perception of phosphenes, with some gaining the ability to perform basic visual tasks, such as shape recognition and motion detection. [ 13 ] The quality of vision expected from a retinal implant is largely based on the maximum spatial resolution of the implant. Current prototypes of retinal implants are capable of providing low-resolution, pixelated images. "State-of-the-art" retinal implants incorporate 60-100 channels, sufficient for basic object discrimination and recognition tasks. However, simulations of the resultant pixelated images assume that all electrodes on the implant are in contact with the desired retinal cell; in reality the expected spatial resolution is lower, as a few of the electrodes may not function optimally. [ 3 ] Tests of reading performance indicated that a 60-channel implant is sufficient to restore some reading ability, but only with significantly enlarged text. [ 20 ] Similar experiments evaluating room navigation ability with pixelated images demonstrated that 60 channels were sufficient for experienced subjects, while naïve subjects required 256 channels. 
This experiment, therefore, not only demonstrated the functionality provided by low-resolution visual feedback , but also the ability of subjects to adapt and improve over time. [ 21 ] However, these experiments are based merely on simulations of low-resolution vision in normal subjects, rather than clinical testing of implanted subjects. The number of electrodes necessary for reading or room navigation may differ in implanted subjects, and further testing needs to be conducted within this clinical population to determine the required spatial resolution for specific visual tasks. Simulation results indicate that 600-1000 electrodes would be required to enable subjects to perform a wide variety of tasks, including reading, face recognition, and navigating around rooms. [ 3 ] Thus, the available spatial resolution of retinal implants needs to increase by a factor of 10, while remaining small enough to implant, to restore sufficient visual function for those tasks. It is worth noting that high-density stimulation is not equal to high visual acuity (resolution), which depends on many factors in both hardware (electrodes and coatings) and software (stimulation strategies based on surgical results). [ 22 ] Clinical reports to date have demonstrated mixed success, with all patients reporting at least some sensation of light from the electrodes, and a smaller proportion gaining more detailed visual function, such as identifying patterns of light and dark areas. The clinical reports indicate that, even with low resolution, retinal implants are potentially useful in providing crude vision to individuals who otherwise would not have any visual sensation. [ 13 ] However, clinical testing in implanted subjects is somewhat limited, and the majority of spatial resolution simulation experiments have been conducted in normal controls. 
It remains unclear whether the low-level vision provided by current retinal implants is sufficient to balance the risks associated with the surgical procedure, especially for subjects with intact peripheral vision. Several other aspects of retinal implants need to be addressed in future research, including the long-term stability of the implants and the possibility of retinal neuron plasticity in response to prolonged stimulation. [ 4 ] The Manchester Royal Infirmary and Prof Paulo E Stanga announced on July 22, 2015, the first successful implantation of Second Sight's Argus II in patients with severe age-related macular degeneration. [ 23 ] [ 24 ] These results are notable, as the patients appear to integrate residual and artificial vision. They potentially open the use of retinal implants to millions of patients with AMD.
https://en.wikipedia.org/wiki/Retinal_implant
A retinalophototroph is one of two types of phototrophs , named for the retinal -binding proteins ( microbial rhodopsins ) it utilizes for cell signaling and for converting light into energy. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Like all phototrophs, retinalophototrophs absorb photons to initiate their cellular processes. [ 2 ] [ 3 ] [ 4 ] In contrast with chlorophototrophs, retinalophototrophs do not use chlorophyll or an electron transport chain to power their chemical reactions. [ 5 ] [ 2 ] [ 3 ] This means retinalophototrophs are incapable of traditional carbon fixation , a fundamental photosynthetic process that transforms inorganic carbon (carbon contained in molecular compounds like carbon dioxide ) into organic compounds. [ 5 ] [ 4 ] For this reason, experts consider them less efficient than their chlorophyll-using counterparts, chlorophototrophs . [ 6 ] Retinalophototrophs achieve adequate energy conversion via a proton-motive force . [ 3 ] [ 4 ] In retinalophototrophs, the proton-motive force is generated by rhodopsin-like proteins, primarily bacteriorhodopsin and proteorhodopsin , acting as proton pumps in a cellular membrane. [ 1 ] [ 4 ] To capture the photons needed to activate a proton pump, retinalophototrophs employ organic pigments known as carotenoids, namely beta-carotenoids. [ 7 ] [ 3 ] [ 4 ] Beta-carotenoids are unusual candidates for energy conversion, but they possess the high vitamin A activity necessary for retinaldehyde, or retinal, formation. [ 7 ] [ 3 ] [ 4 ] Retinal, a chromophore molecule configured from vitamin A, is formed when bonds within carotenoids are disrupted in a process called cleavage. [ 7 ] [ 3 ] [ 4 ] Due to its acute light sensitivity, retinal is ideal for driving the proton-motive force and imparts a unique purple coloration to retinalophototrophs.
[ 1 ] [ 4 ] Once retinal absorbs enough light, it isomerizes, thereby forcing a conformational (i.e., structural) change in the rhodopsin-like proteins. [ 1 ] [ 3 ] [ 4 ] Upon activation, these proteins act as a gateway, allowing passage of ions to create an electrochemical gradient between the interior and exterior of the cellular membrane. [ 1 ] [ 4 ] Ions pumped outwards across the membrane then flow back through ATP synthase proteins on the cell's surface. [ 1 ] [ 4 ] As they diffuse back into the cell, the protons drive the synthesis of ATP (from ADP and a phosphate ion), providing energy for retinalophototrophic self-sustenance and proliferation. [ 1 ] [ 4 ] Many, if not all, retinalophototrophs are photoheterotrophs : although sufficient ATP is produced by light, they cannot subsist on light and inorganic substances alone because they cannot produce the needed organic materials from CO 2 alone. This category includes retinalophototrophs that perform anaplerotic fixation, such as a flavobacterium that can use pyruvate and CO 2 to make malate . This ability does, however, help "stretch" limited supplies of carbon. [ 8 ] Retinalophototrophs are found across all domains of life but predominantly in the Bacteria and Archaea. [ 5 ] [ 2 ] [ 6 ] Scientists believe the general ecological abundance of retinalophototrophs correlates with horizontal gene transfer, since only two genes are required for retinalophototrophy to occur: essentially, one gene for retinal-binding protein synthesis (bop) and one for retinal chromophore synthesis (blh). [ 3 ] [ 4 ] Despite their apparent simplicity, retinalophototrophs boast versatile ion usage that allows their existence in relatively extreme environments.
[ 3 ] For instance, retinalophototrophs can thrive at depths over 200 meters where, despite a lack of inorganic carbon, sufficient light as well as sodium, hydrogen, or chloride concentrations harbor conditions capable of supporting their vital metabolic processes. [ 3 ] Studies have also shown that sodium and hydrogen ions correlate directly with retinalophototrophs' nutrient uptake and ATP synthesis, while chloride drives processes responsible for osmotic equilibrium. [ 4 ] Even though retinalophototrophs are widespread, research has shown they can be niche-adapted too. [ 1 ] [ 6 ] Depending on their proximity to the ocean's surface, retinalophototrophs have evolved to better absorb light within specific wavelengths. [ 1 ] [ 6 ] Most importantly, the prevalence of retinalophototrophs as primary producers contributes substantially to the bottom-up mechanics of marine environments and, consequently, to the success of fauna and flora worldwide. [ 1 ] [ 6 ] Although retinalophototrophs are less efficient at converting light than chlorophototrophs, their simplicity makes them the preferred system in a large number of environments. For example, because retinalophototrophs require no iron in the reaction center, they are well adapted to the iron-poor ocean environment. At high light levels, they are more efficient in terms of protein investment per unit of energy output due to their small size. [ 6 ]
https://en.wikipedia.org/wiki/Retinalophototroph
The retinoblastoma protein (protein name abbreviated Rb or pRb ; gene name abbreviated Rb , RB or RB1 ) is a tumor suppressor protein that is dysfunctional in several major cancers . [ 5 ] One function of pRb is to prevent excessive cell growth by inhibiting cell cycle progression until a cell is ready to divide. When the cell is ready to divide, pRb is inactivated by phosphorylation , and the cell cycle is allowed to progress. It is also a recruiter of several chromatin remodeling enzymes such as methylases and acetylases . [ 6 ] pRb belongs to the pocket protein family , whose members have a pocket for the functional binding of other proteins. [ 7 ] [ 8 ] Should an oncogenic protein, such as those produced by cells infected by high-risk types of human papillomavirus , bind and inactivate pRb, this can lead to cancer. The RB gene may have been responsible for the evolution of multicellularity in several lineages of life including animals. [ 9 ] In humans, the protein is encoded by the RB1 gene located on chromosome 13 , more specifically 13q14.1-q14.2 . If both alleles of this gene are mutated in a retinal cell, the protein is inactivated and the cells grow uncontrollably, resulting in development of retinoblastoma , hence the "RB" in the name 'pRb'. Thus most pRb knock-outs occur in retinal tissue when UV radiation-induced mutation inactivates all healthy copies of the gene, but pRb knock-out has also been documented in certain skin cancers in patients from New Zealand where the amount of UV radiation is significantly higher. Two forms of retinoblastoma were noticed: a bilateral, familial form and a unilateral, sporadic form.
Sufferers of the former were over six times more likely to develop other types of cancer later in life, compared to individuals with sporadic retinoblastoma. [ 10 ] This highlighted the fact that mutated pRb could be inherited and lent support for the two-hit hypothesis . This states that only one working allele of a tumour suppressor gene is necessary for its function (the mutated gene is recessive ), and so both need to be mutated before the cancer phenotype will appear. In the familial form, a mutated allele is inherited along with a normal allele. In this case, should a cell sustain only one mutation in the other RB gene, all pRb in that cell would be ineffective at inhibiting cell cycle progression, allowing cells to divide uncontrollably and eventually become cancerous. Furthermore, as one allele is already mutated in all other somatic cells, the future incidence of cancers in these individuals is observed with linear kinetics. [ 11 ] The working allele need not undergo a mutation per se, as loss of heterozygosity (LOH) is frequently observed in such tumours. However, in the sporadic form, both alleles would need to sustain a mutation before the cell can become cancerous. This explains why sufferers of sporadic retinoblastoma are not at increased risk of cancers later in life, as both alleles are functional in all their other cells. Future cancer incidence in sporadic pRb cases is observed with polynomial kinetics, not exactly quadratic as expected because the first mutation must arise through normal mechanisms, and then can be duplicated by LOH to result in a tumour progenitor . RB1 orthologs [ 12 ] have also been identified in most mammals for which complete genome data are available. RB / E2F -family proteins repress transcription . [ 13 ] pRb is a multifunctional protein with many binding and phosphorylation sites. 
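The linear versus polynomial kinetics described above can be illustrated with a toy two-hit calculation (a sketch under assumed, illustrative rates; the rate value and independence assumption are mine, not figures from the cited studies). If each RB1 allele is inactivated independently at a small constant rate per year, a cell that inherited one hit loses function with probability growing roughly linearly in age, while a cell needing two hits shows roughly quadratic growth:

```python
# Toy two-hit model: probability that a single cell has lost all
# functional RB1 alleles by age t. The per-allele rate mu is a
# hypothetical illustrative value, not taken from the literature.
def p_knockout(t, mu, inherited_hits):
    """P(cell lacks all working RB1 copies by time t years)."""
    p_hit = 1 - (1 - mu) ** t          # one given allele inactivated by t
    hits_needed = 2 - inherited_hits   # familial: 1, sporadic: 2
    return p_hit ** hits_needed

mu = 1e-6  # assumed inactivation rate per allele per year
for t in (10, 20, 40):
    fam = p_knockout(t, mu, inherited_hits=1)  # ~ mu * t      (linear)
    spo = p_knockout(t, mu, inherited_hits=0)  # ~ (mu * t)**2 (quadratic)
    print(t, fam, spo)
```

Doubling the age roughly doubles the familial probability but quadruples the sporadic one, matching the observed linear versus higher-order incidence kinetics.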
Although its common function is seen as binding and repressing E2F targets, pRb is likely a multifunctional protein as it binds to at least 100 other proteins. [ 14 ] pRb has three major structural components: a carboxy-terminus, a "pocket" subunit, and an amino-terminus. Within each domain, there are a variety of protein binding sites, as well as a total of 15 possible phosphorylation sites. Generally, phosphorylation causes interdomain locking, which changes pRb's conformation and prevents binding to target proteins. Different sites may be phosphorylated at different times, giving rise to many possible conformations and likely many functions/activity levels. [ 15 ] pRb restricts the cell's ability to replicate DNA by preventing its progression from the G1 ( first gap phase ) to S ( synthesis phase ) phase of the cell division cycle. [ 16 ] pRb binds and inhibits E2 promoter-binding–protein-dimerization partner (E2F-DP) dimers, which are transcription factors of the E2F family that push the cell into S phase. [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] By keeping E2F-DP inactivated, RB1 maintains the cell in the G1 phase, preventing progression through the cell cycle and acting as a growth suppressor. [ 8 ] The pRb-E2F/DP complex also attracts a histone deacetylase (HDAC) protein to the chromatin , reducing transcription of S phase promoting factors, further suppressing DNA synthesis. pRb has the ability to reversibly inhibit DNA replication through transcriptional repression of DNA replication factors. pRb is able to bind to transcription factors in the E2F family and thereby inhibit their function. When pRb is chronically activated, it leads to the downregulation of the necessary DNA replication factors. Within 72–96 hours of active pRb induction in A2-4 cells, the target DNA replication factor proteins—MCMs, RPA34, DBF4 , RFCp37, and RFCp140—all showed decreased levels. 
Along with decreased levels, there was a simultaneous and expected inhibition of DNA replication in these cells. This process, however, is reversible. Following induced knockout of pRb, cells treated with cisplatin , a DNA-damaging agent, were able to continue proliferating without cell cycle arrest, suggesting pRb plays an important role in triggering chronic S-phase arrest in response to genotoxic stress. One example of E2F-regulated genes repressed by pRb is the pair cyclin E and cyclin A . Both of these cyclins are able to bind to Cdk2 and facilitate entry into the S phase of the cell cycle. By repressing the expression of cyclin E and cyclin A, pRb is able to inhibit the G1/S transition . There are at least three distinct mechanisms by which pRb can repress transcription of E2F-regulated promoters . Though these mechanisms are known, it is unclear which are the most important for the control of the cell cycle. E2Fs are a family of proteins whose binding sites are often found in the promoter regions of genes for cell proliferation or progression of the cell cycle. E2F1 to E2F5 are known to associate with proteins in the pRb family of proteins while E2F6 and E2F7 are independent of pRb. Broadly, the E2Fs are split into activator E2Fs and repressor E2Fs, though on occasion their roles are more flexible than that. The activator E2Fs are E2F1, E2F2 and E2F3 while the repressor E2Fs are E2F4 , E2F5 and E2F6. The activator E2Fs, along with E2F4, bind exclusively to pRb. pRb is able to bind to the activation domain of the activator E2Fs, which blocks their activity and represses transcription of the genes controlled by that E2F promoter. The preinitiation complex (PIC) assembles in a stepwise fashion on the promoters of genes to initiate transcription. TFIID binds to the TATA box to begin assembly of the PIC, recruiting TFIIA and the other transcription factors and components needed in the PIC.
Data suggest that pRb is able to repress transcription both by being recruited to the promoter and by having a target present in TFIID. The presence of pRb may change the conformation of the TFIIA/IID complex into a less active version with a decreased binding affinity. pRb can also directly interfere with their association as proteins, preventing TFIIA/IID from forming an active complex. pRb acts as a recruiter that allows proteins that alter chromatin structure to bind at E2F-regulated promoters. Access to these E2F-regulated promoters by transcription factors is blocked by the formation of nucleosomes and their further packing into chromatin. Nucleosome formation is regulated by post-translational modifications to histone tails. Acetylation leads to the disruption of nucleosome structure. Proteins called histone acetyltransferases (HATs) are responsible for acetylating histones and thus facilitating the association of transcription factors with DNA promoters. Deacetylation, on the other hand, leads to nucleosome formation and thus makes it more difficult for transcription factors to sit on promoters. Histone deacetylases (HDACs) are the proteins responsible for facilitating nucleosome formation and are therefore associated with transcriptional repressor proteins. pRb interacts with the histone deacetylases HDAC1 and HDAC3 . pRb binds to HDAC1 in its pocket domain, in a region independent of its E2F-binding site. pRb's recruitment of histone deacetylases leads to the repression of genes at E2F-regulated promoters through nucleosome formation. Some genes activated during the G1/S transition, such as cyclin E , are repressed by HDAC during early to mid-G1 phase. This suggests that HDAC-assisted repression of cell cycle progression genes is crucial for the ability of pRb to arrest cells in G1.
To further support this point, the HDAC-pRb complex is shown to be disrupted by cyclin D/Cdk4, whose levels increase and peak during the late G1 phase. Senescence in cells is a state in which cells are metabolically active but no longer able to replicate. pRb is an important regulator of senescence in cells, and since senescence prevents proliferation, it is an important antitumor mechanism. pRb may occupy E2F-regulated promoters during senescence. For example, pRb was detected on the cyclin A and PCNA promoters in senescent cells. Cells respond to stress in the form of DNA damage, activated oncogenes, or sub-par growing conditions, and can enter a senescence-like state called "premature senescence". This allows the cell to prevent further replication during periods of damaged DNA or generally unfavorable conditions. DNA damage in a cell can induce pRb activation. pRb's role in repressing the transcription of cell cycle progression genes leads to the S phase arrest that prevents replication of damaged DNA. When it is time for a cell to enter S phase, complexes of cyclin-dependent kinases (CDKs) and cyclins phosphorylate pRb, allowing E2F-DP to dissociate from pRb and become active. [ 8 ] When E2F is free it activates factors like cyclins (e.g. cyclin E and cyclin A), which push the cell through the cell cycle by activating cyclin-dependent kinases, and a molecule called proliferating cell nuclear antigen, or PCNA , which speeds DNA replication and repair by helping to attach polymerase to DNA. [ 18 ] [ 21 ] [ 7 ] [ 8 ] [ 19 ] [ 23 ] [ 24 ] Since the 1990s, pRb has been known to be inactivated via phosphorylation. Until recently, the prevailing model was that Cyclin D- Cdk 4/6 progressively phosphorylated it from its unphosphorylated to its hyperphosphorylated state (14+ phosphorylations). However, it was recently shown that pRb only exists in three states: un-phosphorylated, mono-phosphorylated, and hyper-phosphorylated. Each has a unique cellular function.
[ 25 ] Before the development of 2D IEF , only hyper-phosphorylated pRb was distinguishable from all other forms; un-phosphorylated pRb resembled mono-phosphorylated pRb on immunoblots, so pRb was thought to be either in its active "hypo-phosphorylated" state or its inactive "hyper-phosphorylated" state. However, with 2D IEF, it is now known that pRb is un-phosphorylated in G0 cells and mono-phosphorylated in early G1 cells, prior to hyper-phosphorylation after the restriction point in late G1. [ 25 ] When a cell enters G1, Cyclin D- Cdk4/6 phosphorylates pRb at a single phosphorylation site. No progressive phosphorylation occurs: when HFF cells were exposed to sustained cyclin D- Cdk4/6 activity (and even deregulated activity) in early G1, only mono-phosphorylated pRb was detected. Furthermore, triple knockout, p16 addition, and Cdk 4/6 inhibitor addition experiments confirmed that Cyclin D- Cdk 4/6 is the sole phosphorylator of pRb. [ 25 ] Throughout early G1, mono-phosphorylated pRb exists as 14 different isoforms (the 15th phosphorylation site is not conserved in the primates in which the experiments were performed). Together, these isoforms represent the "hypo-phosphorylated" active pRb state that was previously thought to exist. Each isoform has a distinct preference for associating with different exogenously expressed E2Fs. [ 25 ] A recent report showed that mono-phosphorylation controls pRb's association with other proteins and generates functionally distinct forms of pRb. [ 26 ] All of the mono-phosphorylated pRb isoforms inhibit the E2F transcriptional program and are able to arrest cells in G1 phase. Importantly, different mono-phosphorylated forms of pRb have distinct transcriptional outputs that extend beyond E2F regulation. [ 26 ] After a cell passes the restriction point, Cyclin E - Cdk 2 hyper-phosphorylates all mono-phosphorylated isoforms.
While the exact mechanism is unknown, one hypothesis is that binding to the C-terminal tail opens the pocket subunit, allowing access to all phosphorylation sites. This process is hysteretic and irreversible, and accumulation of mono-phosphorylated pRb is thought to induce it. The bistable, switch-like behavior of pRb can thus be modeled as a bifurcation point: [ 25 ] Presence of un-phosphorylated pRb drives cell cycle exit and maintains senescence. At the end of mitosis, PP1 dephosphorylates hyper-phosphorylated pRb directly to its un-phosphorylated state. Furthermore, when cycling C2C12 myoblast cells differentiated (by being placed into a differentiation medium), only un-phosphorylated pRb was present. Additionally, these cells had a markedly decreased growth rate and concentration of DNA replication factors (suggesting G0 arrest). [ 25 ] This function of un-phosphorylated pRb gives rise to a hypothesis for the lack of cell cycle control in cancerous cells: deregulation of Cyclin D - Cdk 4/6 phosphorylates un-phosphorylated pRb in senescent cells to mono-phosphorylated pRb, causing them to enter G1. The mechanism of the switch for Cyclin E activation is not known, but one hypothesis is that it is a metabolic sensor. Mono-phosphorylated pRb induces an increase in metabolism, so the accumulation of mono-phosphorylated pRb in previously G0 cells then causes hyper-phosphorylation and mitotic entry. Since any un-phosphorylated pRb is immediately phosphorylated, the cell is then unable to exit the cell cycle, resulting in continuous division. [ 25 ] DNA damage to G0 cells activates Cyclin D - Cdk 4/6, resulting in mono-phosphorylation of un-phosphorylated pRb. Then, active mono-phosphorylated pRb causes repression of E2F-targeted genes specifically. Therefore, mono-phosphorylated pRb is thought to play an active role in the DNA damage response, so that E2F gene repression occurs until the damage is fixed and the cell can pass the restriction point.
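The bistable, switch-like behavior can be sketched with a minimal ordinary-differential-equation model (a toy illustration with assumed parameters and a generic Hill-type positive-feedback term; this is not the model from reference [ 25 ]). The idea: Cyclin E-Cdk2 activity feeds back on itself, because hyper-phosphorylated pRb releases E2F, which drives further Cyclin E expression.

```python
# Toy bistable-switch sketch of the pRb/E2F restriction point.
# E stands in for Cyclin E-Cdk2 activity; the Hill term models the
# positive feedback (E2F released by phosphorylated pRb transcribes
# more Cyclin E). All parameter values are illustrative assumptions.

def simulate(e0, stimulus, dt=0.01, steps=20000):
    """Euler-integrate dE/dt = basal*stimulus + E^2/(K^2 + E^2) - E."""
    K, basal = 0.5, 0.05
    e = e0
    for _ in range(steps):
        e += dt * (basal * stimulus + e**2 / (K**2 + e**2) - e)
    return e

low = simulate(e0=0.0, stimulus=1.0)   # settles near ~0.07 (below the restriction point)
high = simulate(e0=1.0, stimulus=1.0)  # settles near ~0.73 (committed to S phase)
print(low, high)
```

With identical stimulus, two stable steady states coexist, separated by an unstable threshold; which one the system reaches depends on its history. Crossing the threshold, for example by accumulating mono-phosphorylated pRb, flips the system to the high branch, the hysteresis characteristic of a bifurcation-point switch.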
As a side note, the discovery that DNA damage causes Cyclin D - Cdk 4/6 activation even in G0 cells should be kept in mind when patients are treated with both DNA-damaging chemotherapy and Cyclin D - Cdk 4/6 inhibitors. [ 25 ] During the M-to-G1 transition, pRb is then progressively dephosphorylated by PP1 , returning to its growth-suppressive hypophosphorylated state. [ 8 ] [ 27 ] pRb family proteins are components of the DREAM complex, composed of DP, E2F4/5, RB-like proteins (p130/p107) and MuvB (Lin9:Lin37:Lin52:RbAbP4:Lin54). The DREAM complex is assembled in G0/G1 and maintains quiescence by assembling at the promoters of more than 800 cell-cycle genes and mediating transcriptional repression. Assembly of DREAM requires DYRK1A (Ser/Thr kinase)-dependent phosphorylation of the MuvB core component Lin52 at serine 28. This mechanism is crucial for recruitment of p130/p107 to the MuvB core and thus DREAM assembly. The consequences of loss of pRb function are dependent on cell type and cell cycle status, as pRb's tumor suppressive role changes depending on the state and current identity of the cell. In G0 quiescent stem cells, pRb is proposed to maintain G0 arrest, although the mechanism remains largely unknown. Loss of pRb leads to exit from quiescence and an increase in the number of cells without loss of cell renewal capacity. In cycling progenitor cells, pRb plays a role at the G1, S, and G2 checkpoints and promotes differentiation. In differentiated cells, which make up the majority of cells in the body and are assumed to be in irreversible G0, pRb maintains both arrest and differentiation. [ 28 ] Loss of pRb therefore elicits different responses in different cells, all of which could ultimately result in cancer phenotypes. For cancer initiation, loss of pRb may induce cell cycle re-entry in both quiescent and post-mitotic differentiated cells through dedifferentiation.
In cancer progression, loss of pRb decreases the differentiating potential of cycling cells, increases chromosomal instability, prevents induction of cellular senescence, promotes angiogenesis, and increases metastatic potential. [ 28 ] Although most cancers rely on glycolysis for energy production ( Warburg effect ), [ 29 ] cancers due to pRb loss tend to upregulate oxidative phosphorylation . [ 30 ] The increased oxidative phosphorylation can increase stemness , metastasis , and (when enough oxygen is available) cellular energy for anabolism . [ 30 ] In vivo, it is still not entirely clear how, and in which cell types, cancer initiation occurs with loss of pRb alone, but it is clear that the pRb pathway is altered in a large number of human cancers. In mice, loss of pRb is sufficient to initiate tumors of the pituitary and thyroid glands, and the mechanisms of initiation of these hyperplasias are currently being investigated. [ 31 ] The classic view of pRb's role as a tumor suppressor and cell cycle regulator developed through research investigating the mechanisms of its interactions with E2F family proteins. Yet more data generated from biochemical experiments and clinical trials reveal other functions of pRb within the cell unrelated (or indirectly related) to tumor suppression. [ 32 ] In proliferating cells, certain pRb conformations (when its RxL motif is bound by protein phosphatase 1, or when it is acetylated or methylated) are resistant to CDK phosphorylation and retain other functions throughout cell cycle progression, suggesting that not all pRb in the cell is devoted to guarding the G1/S transition. [ 32 ] Studies have also demonstrated that hyperphosphorylated pRb can specifically bind E2F1 and form stable complexes throughout the cell cycle to carry out unique, unexplored functions, a surprising contrast to the classical view of pRb releasing E2F factors upon phosphorylation.
[ 32 ] In summary, many new findings about pRb's resistance to CDK phosphorylation are emerging in pRb research and shedding light on novel roles of pRb beyond cell cycle regulation. pRb is able to localize to sites of DNA breaks during the repair process and assist in non-homologous end joining and homologous recombination through complexing with E2F1. Once at the breaks, pRb is able to recruit regulators of chromatin structure such as the DNA helicase transcription activator BRG1. pRb has also been shown to recruit protein complexes such as condensin and cohesin to assist in the structural maintenance of chromatin. [ 32 ] Such findings suggest that in addition to its tumor suppressive role with E2F, pRb is also distributed throughout the genome to aid in important processes of genome maintenance such as DNA break repair, DNA replication, chromosome condensation, and heterochromatin formation. [ 32 ] pRb has also been implicated in regulating metabolism through interactions with components of cellular metabolic pathways. RB1 mutations can cause alterations in metabolism, including reduced mitochondrial respiration, reduced activity in the electron transport chain, and changes in the flux of glucose and/or glutamine. Particular forms of pRb have been found to localize to the outer mitochondrial membrane and directly interact with Bax to promote apoptosis. [ 33 ] While the frequency of alterations of the RB gene is substantial for many human cancer types, including lung, esophageal, and liver cancers, alterations in upstream regulatory components of pRb such as CDK4 and CDK6 have been the main targets for potential therapeutics to treat cancers with dysregulation in the RB pathway. [ 34 ] This focus has resulted in the recent development and FDA approval of three small-molecule CDK4/6 inhibitors (Palbociclib (IBRANCE, Pfizer Inc. 2015), Ribociclib (KISQALI, Novartis. 2017), and Abemaciclib (VERZENIO, Eli Lilly.
2017)) for the treatment of specific breast cancer subtypes. However, recent clinical studies finding limited efficacy, high toxicity, and acquired resistance [ 35 ] [ 36 ] to these inhibitors suggest the need to further elucidate the mechanisms that influence CDK4/6 activity, as well as to explore other potential targets downstream in the pRb pathway to reactivate pRb's tumor suppressive functions. Treatment of cancers with CDK4/6 inhibitors depends on the presence of pRb within the cell for therapeutic effect, limiting their usage to cancers where RB is not mutated and pRb protein levels are not significantly depleted. [ 34 ] Direct pRb reactivation in humans has not been achieved. However, in murine models, novel genetic methods have allowed for in vivo pRb reactivation experiments. pRb loss induced in mice with oncogenic KRAS-driven lung adenocarcinoma tumors negates the requirement of MAPK signal amplification for progression to carcinoma, promotes loss of lineage commitment, and accelerates the acquisition of metastatic competency. Reactivation of pRb in these mice rescues the tumors towards a less metastatic state but does not completely stop tumor growth, due to a proposed rewiring of MAPK pathway signaling, which suppresses pRb through a CDK-dependent mechanism. [ 37 ] Besides trying to reactivate the tumor suppressive function of pRb, another distinct approach to treating cancers with a dysregulated pRb pathway is to take advantage of certain cellular consequences induced by pRb loss. It has been shown that E2F stimulates expression of pro-apoptotic genes in addition to G1/S transition genes; however, cancer cells have developed defensive signaling pathways that protect them from death by deregulated E2F activity. Development of inhibitors of these protective pathways could thus be a synthetically lethal method to kill cancer cells with overactive E2F.
[ 34 ] In addition, it has been shown that the pro-apoptotic activity of p53 is restrained by the pRb pathway, such that pRb-deficient tumor cells become sensitive to p53-mediated cell death. This opens the door to research on compounds that could activate p53 in these cancer cells to induce apoptosis and reduce cell proliferation. [ 34 ] While the loss of a tumor suppressor such as pRb leading to uncontrolled cell proliferation is detrimental in the context of cancer, it may be beneficial to deplete or inhibit the suppressive functions of pRb in the context of cellular regeneration. [ 38 ] Harnessing the proliferative abilities of cells induced into a controlled "cancer-like" state could aid in repairing damaged tissues and delay aging phenotypes. This idea remains to be thoroughly explored as a potential cellular injury and anti-aging treatment. The retinoblastoma protein is involved in the growth and development of mammalian hair cells of the cochlea , and appears to be related to the cells' inability to regenerate. Embryonic hair cells require pRb, among other important proteins, to exit the cell cycle and stop dividing, which allows maturation of the auditory system. Once wild-type mammals have reached adulthood, their cochlear hair cells become incapable of proliferation. In studies where the gene for pRb is deleted in the mouse cochlea, hair cells continue to proliferate in early adulthood. Though this may seem to be a positive development, pRb-knockdown mice tend to develop severe hearing loss due to degeneration of the organ of Corti . For this reason, pRb seems to be instrumental in completing the development of mammalian hair cells and keeping them alive. [ 39 ] [ 40 ] However, it is clear that without pRb, hair cells have the ability to proliferate, which is why pRb is known as a tumor suppressor. Temporarily and precisely turning off pRb in adult mammals with damaged hair cells may lead to proliferation and therefore successful regeneration .
Suppressing the function of the retinoblastoma protein in the adult rat cochlea has been found to cause proliferation of supporting cells and hair cells . pRb can be downregulated by activating the sonic hedgehog pathway, which phosphorylates the protein and reduces its gene transcription. [ 41 ] Disrupting pRb expression in vitro, either by gene deletion or by knockdown with pRb short interfering RNA , causes dendrites to branch out farther. In addition, Schwann cells , which provide essential support for the survival of neurons, travel with the neurites , extending farther than normal. The inhibition of pRb thus supports the continued growth of nerve cells. [ 42 ] pRb is known to interact with more than 300 proteins. Several methods for detecting RB1 gene mutations have been developed, [ 120 ] including a method that can detect large deletions that correlate with advanced-stage retinoblastoma. [ 121 ] This article incorporates text from the United States National Library of Medicine , which is in the public domain .
https://en.wikipedia.org/wiki/Retinoblastoma_protein
Retinyl acetate (also called vitamin A acetate or all‑trans‑retinol acetate ) is a synthetic, fat‑soluble retinyl ester often used to supply vitamin A in food fortification, dietary supplements, and topical cosmetic products. [ 2 ] [ 3 ] Because the acetyl group protects the alcohol functionality, the compound is markedly more stable to heat, oxygen and light than free retinol, yet is rapidly hydrolyzed in the human intestine to active retinol after ingestion. [ 4 ] Commercially, retinyl acetate is the second most common retinyl ester after retinyl palmitate. Retinyl acetate is the acetate ester of all‑trans‑retinol. Its polyene side chain makes the molecule highly lipophilic and sensitive to photo‑oxidation; antioxidants (e.g., tocopherol) and opaque packaging are therefore used to limit degradation in finished products. The compound melts at ~59 °C and is practically insoluble in water but miscible with edible oils and most organic solvents. [ 5 ] Dietary retinyl acetate is hydrolyzed in the intestinal lumen by pancreatic triglyceride lipase and by brush‑border phospholipase B, releasing free retinol. The retinol is absorbed, re‑esterified mainly with long‑chain fatty acids by lecithin‑retinol acyltransferase (LRAT) inside enterocytes, and secreted in chylomicrons to the liver, where 50–80 % of total‑body vitamin A is stored as retinyl palmitate in hepatic stellate cells. Mobilization of these stores releases retinol bound to retinol‑binding protein 4 (RBP4) for delivery to peripheral tissues. Large‑scale vitamin A manufacture couples a C15 β‑ionone fragment with a C5 acetate side chain via a series of Wittig‑Horner and Grignard reactions, followed by final esterification or trans‑esterification to retinyl acetate. Modern processes achieve >95 % all‑trans selectivity and include crystallization or column purification under nitrogen to minimize isomerization. 
The United States Food and Drug Administration lists retinyl acetate as "Generally Recognized as Safe" (GRAS) for use as a nutrient supplement in foods (21 CFR 184.1930). [ 6 ] It is commonly added to margarine, plant‑based milk, breakfast cereals and staple oils in low‑ and middle‑income countries to prevent vitamin A deficiency. [ 7 ] Multivitamin tablets typically supply 600–900 µg retinol activity equivalents (RAE) from retinyl acetate or retinyl palmitate. The U.S. National Institutes of Health sets a Tolerable Upper Intake Level (UL) of 3,000 µg RAE per day for adults. Retinyl acetate is used in "anti‑aging" skin‑care formulations as a milder, more photo‑stable alternative to retinol. The EU Scientific Committee on Consumer Safety (SCCS) concluded in 2017, and reaffirmed in 2023, that leave‑on products are safe at concentrations providing up to 0.3 % retinol equivalents, while body lotions for children aged 1–3 years should not exceed 0.05 % retinol equivalents. [ 8 ] Excess pre‑formed vitamin A from supplements or fortified foods can cause hypervitaminosis A, characterized acutely by nausea and raised intracranial pressure and chronically by liver injury and teratogenicity. Retinyl acetate shares these dose‑dependent toxicities because it is quantitatively hydrolyzed to retinol. Phototoxicity and photo‑isomerization are significantly lower than for unesterified retinol but can occur in formulations lacking UV stabilizers. [ 9 ]
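The dosing arithmetic above can be illustrated with a small conversion sketch. Assuming complete hydrolysis of the ester to retinol and the 1 µg retinol = 1 µg RAE equivalence for pre-formed vitamin A, the RAE supplied by a mass of retinyl acetate follows from the molecular-weight ratio of retinol (≈286.45 g/mol) to retinyl acetate (≈328.49 g/mol); the helper function below is a hypothetical illustration, not an official conversion tool.

```python
# Sketch: estimate retinol activity equivalents (RAE) delivered by a dose of
# retinyl acetate, assuming complete hydrolysis to retinol in the intestine.
MW_RETINOL = 286.45          # g/mol, all-trans-retinol
MW_RETINYL_ACETATE = 328.49  # g/mol, all-trans-retinyl acetate

def rae_from_retinyl_acetate(micrograms: float) -> float:
    """µg RAE supplied by `micrograms` of retinyl acetate (1 µg retinol = 1 µg RAE)."""
    return micrograms * MW_RETINOL / MW_RETINYL_ACETATE

dose = rae_from_retinyl_acetate(1000)  # a 1 mg dose of the ester
print(f"{dose:.0f} µg RAE")            # ~872 µg RAE
print(dose <= 3000)                    # True: within the adult UL cited above
```

So roughly 87% of the ester's mass counts toward retinol activity, which is why labels quote RAE rather than milligrams of the ester itself.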
https://en.wikipedia.org/wiki/Retinyl_acetate
Retiperidiolia is a genus of fungi in the family Nidulariaceae , containing the species Retiperidiolia aquaphila and Retiperidiolia reticulata . Basidiocarps (fruit bodies) are typically under 10 mm in diameter and irregularly spherical. Each produces a number of peridioles which contain the spores and are released from the disintegrating fruit bodies at maturity. Species are usually found growing on herbaceous stems and other plant debris. The genus has a tropical distribution. [ 1 ] Species were previously referred to Mycocalia , but molecular research, based on cladistic analysis of DNA sequences , found that they were not closely related. [ 1 ]
https://en.wikipedia.org/wiki/Retiperidiolia
RetrOryza is a database of long terminal repeat (LTR) retrotransposons in the rice genome. [ 1 ]
https://en.wikipedia.org/wiki/RetrOryza
In topology , a retraction is a continuous mapping from a topological space into a subspace that preserves the position of all points in that subspace. [ 1 ] The subspace is then called a retract of the original space. A deformation retraction is a mapping that captures the idea of continuously shrinking a space into a subspace. An absolute neighborhood retract ( ANR ) is a particularly well-behaved type of topological space. For example, every topological manifold is an ANR. Every ANR has the homotopy type of a very simple topological space, a CW complex . Let X be a topological space and A a subspace of X . Then a continuous map r : X → A {\textstyle r:X\to A} is a retraction if the restriction of r to A is the identity map on A ; that is, r ( a ) = a {\textstyle r(a)=a} for all a in A . Equivalently, denoting by ι : A ↪ X {\textstyle \iota :A\hookrightarrow X} the inclusion , a retraction is a continuous map r such that r ∘ ι = id A {\textstyle r\circ \iota =\operatorname {id} _{A}} , that is, the composition of r with the inclusion is the identity of A . Note that, by definition, a retraction maps X onto A . A subspace A is called a retract of X if such a retraction exists. For instance, any non-empty space retracts to a point in the obvious way (any constant map yields a retraction). If X is Hausdorff , then A must be a closed subset of X . If r : X → A {\textstyle r:X\to A} is a retraction, then the composition ι ∘ r is an idempotent continuous map from X to X . Conversely, given any idempotent continuous map s : X → X , {\textstyle s:X\to X,} we obtain a retraction onto the image of s by restricting the codomain . A continuous map F : X × [ 0 , 1 ] → X {\textstyle F:X\times [0,1]\to X} is a deformation retraction of a space X onto a subspace A if, for every x in X and a in A , F ( x , 0 ) = x , F ( x , 1 ) ∈ A , and F ( a , 1 ) = a {\textstyle F(x,0)=x,\ F(x,1)\in A,\ F(a,1)=a} . In other words, a deformation retraction is a homotopy between a retraction (strictly, between its composition with the inclusion) and the identity map on X . The subspace A is called a deformation retract of X . A deformation retraction is a special case of a homotopy equivalence . A retract need not be a deformation retract.
For instance, having a single point as a deformation retract of a space X would imply that X is path connected (and in fact that X is contractible ). Note: An equivalent definition of deformation retraction is the following. A continuous map r : X → A {\textstyle r:X\to A} is itself called a deformation retraction if it is a retraction and its composition with the inclusion is homotopic to the identity map on X . In this language, a deformation retraction still carries with it a homotopy between the identity map on X and itself, but we refer to the map r {\textstyle r} rather than the homotopy as a deformation retraction. If, in the definition of a deformation retraction, we add the requirement that F ( a , t ) = a {\textstyle F(a,t)=a} for all t in [0, 1] and a in A , then F is called a strong deformation retraction . In other words, a strong deformation retraction leaves points in A fixed throughout the homotopy. (Some authors, such as Hatcher , take this as the definition of deformation retraction.) As an example, the n -sphere S n {\textstyle S^{n}} is a strong deformation retract of R n + 1 ∖ { 0 } ; {\textstyle \mathbb {R} ^{n+1}\backslash \{0\};} as strong deformation retraction one can choose the map F ( x , t ) = ( ( 1 − t ) + t / ‖ x ‖ ) x {\textstyle F(x,t)=\left((1-t)+{\tfrac {t}{\|x\|}}\right)x} . Note that the condition of being a strong deformation retract is strictly stronger than being a deformation retract. For instance, let X be the subspace of R 2 {\displaystyle \mathbb {R} ^{2}} consisting of closed line segments connecting the origin and the point ( 1 / n , 1 ) {\displaystyle (1/n,1)} for n a positive integer, together with the closed line segment connecting the origin with ( 0 , 1 ) {\displaystyle (0,1)} . Let X have the subspace topology inherited from the Euclidean topology on R 2 {\displaystyle \mathbb {R} ^{2}} . Now let A be the subspace of X consisting of the line segment connecting the origin with ( 0 , 1 ) {\displaystyle (0,1)} . Then A is a deformation retract of X but not a strong deformation retract of X .
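The definitions above can be made concrete with the standard radial example (stated here for illustration): the punctured space retracts onto the unit sphere, and the straight-line homotopy upgrades this retraction to a strong deformation retraction.

```latex
% Radial retraction of punctured (n+1)-space onto the unit sphere:
r : \mathbb{R}^{n+1}\setminus\{0\} \to S^{n}, \qquad r(x) = \frac{x}{\|x\|},
% which satisfies r(a) = a for a \in S^{n} (since \|a\| = 1), i.e.
% r \circ \iota = \mathrm{id}_{S^{n}}, and the associated idempotent
% s = \iota \circ r obeys s \circ s = s because \|x/\|x\|\| = 1.
%
% The straight-line homotopy between the identity and \iota \circ r:
F(x,t) = \Bigl((1-t) + \frac{t}{\|x\|}\Bigr)\, x,
% satisfies all three conditions of a strong deformation retraction:
% F(x,0) = x                  \quad (\text{identity at } t = 0),
% F(x,1) = x/\|x\| \in S^{n}  \quad (\text{lands in the subspace}),
% F(a,t) = ((1-t)+t)\,a = a   \quad (a \in S^{n} \text{ fixed for all } t).
```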
[ 2 ] A map f : A → X of topological spaces is a ( Hurewicz ) cofibration if it has the homotopy extension property for maps to any space. This is one of the central concepts of homotopy theory . A cofibration f is always injective, in fact a homeomorphism to its image. [ 3 ] If X is Hausdorff (or a compactly generated weak Hausdorff space ), then the image of a cofibration f is closed in X . Among all closed inclusions, cofibrations can be characterized as follows. The inclusion of a closed subspace A in a space X is a cofibration if and only if A is a neighborhood deformation retract of X , meaning that there is a continuous map u : X → [ 0 , 1 ] {\displaystyle u:X\rightarrow [0,1]} with A = u − 1 ( 0 ) {\textstyle A=u^{-1}\!\left(0\right)} and a homotopy H : X × [ 0 , 1 ] → X {\textstyle H:X\times [0,1]\rightarrow X} such that H ( x , 0 ) = x {\textstyle H(x,0)=x} for all x ∈ X , {\displaystyle x\in X,} H ( a , t ) = a {\displaystyle H(a,t)=a} for all a ∈ A {\displaystyle a\in A} and t ∈ [ 0 , 1 ] , {\displaystyle t\in [0,1],} and H ( x , 1 ) ∈ A {\textstyle H\left(x,1\right)\in A} if u ( x ) < 1 {\displaystyle u(x)<1} . [ 4 ] For example, the inclusion of a subcomplex in a CW complex is a cofibration. The boundary of the n -dimensional ball , that is, the ( n −1)-sphere, is not a retract of the ball. (See Brouwer fixed-point theorem § A proof using homology or cohomology .) A closed subset X {\textstyle X} of a topological space Y {\textstyle Y} is called a neighborhood retract of Y {\textstyle Y} if X {\textstyle X} is a retract of some open subset of Y {\textstyle Y} that contains X {\textstyle X} . Let C {\displaystyle {\mathcal {C}}} be a class of topological spaces, closed under homeomorphisms and passage to closed subsets. 
Following Borsuk (starting in 1931), a space X {\textstyle X} is called an absolute retract for the class C {\displaystyle {\mathcal {C}}} , written AR ⁡ ( C ) , {\textstyle \operatorname {AR} \left({\mathcal {C}}\right),} if X {\textstyle X} is in C {\displaystyle {\mathcal {C}}} and whenever X {\textstyle X} is a closed subset of a space Y {\textstyle Y} in C {\displaystyle {\mathcal {C}}} , X {\textstyle X} is a retract of Y {\textstyle Y} . A space X {\textstyle X} is an absolute neighborhood retract for the class C {\displaystyle {\mathcal {C}}} , written ANR ⁡ ( C ) , {\textstyle \operatorname {ANR} \left({\mathcal {C}}\right),} if X {\textstyle X} is in C {\displaystyle {\mathcal {C}}} and whenever X {\textstyle X} is a closed subset of a space Y {\textstyle Y} in C {\displaystyle {\mathcal {C}}} , X {\textstyle X} is a neighborhood retract of Y {\textstyle Y} . Various classes C {\displaystyle {\mathcal {C}}} such as normal spaces have been considered in this definition, but the class M {\displaystyle {\mathcal {M}}} of metrizable spaces has been found to give the most satisfactory theory. For that reason, the notations AR and ANR by themselves are used in this article to mean AR ⁡ ( M ) {\displaystyle \operatorname {AR} \left({\mathcal {M}}\right)} and ANR ⁡ ( M ) {\displaystyle \operatorname {ANR} \left({\mathcal {M}}\right)} . [ 6 ] A metrizable space is an AR if and only if it is contractible and an ANR. [ 7 ] By Dugundji , every locally convex metrizable topological vector space V {\textstyle V} is an AR; more generally, every nonempty convex subset of such a vector space V {\textstyle V} is an AR. [ 8 ] For example, any normed vector space ( complete or not) is an AR. More concretely, Euclidean space R n , {\textstyle \mathbb {R} ^{n},} the unit cube I n , {\textstyle I^{n},} and the Hilbert cube I ω {\textstyle I^{\omega }} are ARs. ANRs form a remarkable class of " well-behaved " topological spaces with many useful properties.
https://en.wikipedia.org/wiki/Retraction_(topology)
The Retriangulation of Great Britain was a triangulation project carried out between 1935 and 1962 that sought to improve the accuracy of maps of Great Britain . [ 1 ] Data gathered from the retriangulation replaced data gathered during the Principal Triangulation of Great Britain , which had been performed between 1783 and 1851. [ 2 ] The work was designed to form a complete new survey control network for the whole country, and to unify the mapping of the United Kingdom from local county projections into a single national datum projection and reference system. Its completion led to the establishment of the OSGB36 datum and Ordnance Survey National Grid in use today. The retriangulation was begun in 1935 by the Director General of the Ordnance Survey , Major-General Malcolm MacLeod . [ 1 ] It was directed by the cartographer and mathematician Martin Hotine , head of the Trigonometrical and Levelling Division (TLD). The work was halted by the outbreak of World War II in 1939, by which time the primary triangulation network covered all of England and Wales, but only as far as the Moray Firth in Scotland. Secondary triangulation had commenced in 1938, and after the end of the war, the retriangulation work was focused on secondary and lower-order survey work, to expedite the completion of new large-scale surveys. [ 3 ] [ 4 ] The wartime priorities of the TLD were focused on survey work in connection with the war effort, such as airfield and military construction, survey and computations for anti-aircraft and coastal battery positions, and survey of radiolocation sites. One-third of the Ordnance Survey staff were called up during the war, and the headquarters in Southampton was bombed and badly damaged. [ 5 ] Staff were relocated to the Home Counties , where they produced 1:25,000 scale maps of France, Italy, Germany and most of the rest of Europe in preparation for invasion. Primary triangulation observations were not resumed until 1949, and completed in 1952. 
[ 3 ] A problem during the Principal Triangulation was that the exact locations of surveying stations were not always rediscoverable, relying on buried markers and unreliable local knowledge. To overcome this, a network of permanent surveying stations was built, most familiarly the concrete triangulation pillars (about 6,500 of them) found on many British Isles hill and mountain tops, but there were many other kinds of surveying stations used. To minimise differences between the 1783–1851 survey and the retriangulation, eleven Principal Triangulation stations, ranging from Dunnose on the Isle of Wight to Great Whernside in Yorkshire, were chosen and pillars erected on them to act as the core framework from which all other measurements were made. The main work of the Retriangulation was finished in 1962, creating the Ordnance Survey National Grid . This system continued to be used, and measurements refined by ground-based surveying, into the 1980s, after which satellite use took over. Electronic measuring devices were introduced towards the end of the Retriangulation, but at that time were not proven reliable enough to replace traditional surveying. [ 5 ] One of the first steps in the retriangulation was the adoption of a new projection for the mapping, with the existing Cassini projection replaced by the Transverse Mercator . This was preferred by the Ordnance Survey because the use of the Cassini projection would have resulted in angular distortion of almost four minutes of arc in the survey. [ 6 ] [ 3 ] The solid form of the Earth, known as the geoid , cannot be fully defined by simple formulae. The spheroid is the nearest mathematical model, but as no one spheroid fits worldwide, a number have to be used. The Airy spheroid provides a good fit in the region of the British Isles, and the Transverse Mercator Projection of this spheroid was therefore adopted by the Ordnance Survey as the basis of the national co-ordinate system. 
[ 7 ] No projection can be true to scale across its entirety. In the Transverse Mercator, the scale at any given point increases in correlation with its east or west distance from the central meridian. The scale along the north-south line that contains the point remains consistent. The true origin of the projection lies at latitude 49° N, longitude 2° W. A false origin positioned roughly 170 kilometres west of The Lizard was established to ensure all national grid coordinates remained positive, as the whole country is further east and further north than that point. In this system, the central meridian is 400 km east. [ 8 ] On the central meridian itself the scale would naturally be true, i.e. equal to 1. However, to ensure that scale error is imperceptible on the national mapping at the eastern and western boundaries, an overall scale reduction of about 1 part in 2500 was applied. This provides a local scale factor of 0.9996 at the central meridian. The scale continually increases with distance from the central meridian, east and west, reaching 1 at 580 km east and 220 km west. It continues to rise, reaching 1.0005 at the eastern and western extremes. [ 3 ] The corresponding local scale factor must be employed to convert a ground-measured plane length to a projection distance, and vice versa. As the spheroid is set at mean sea level, any surveyed length must be reduced to mean sea level before applying the local scale factor. [ 9 ] [ 10 ] The primary triangulation work commenced with the division of survey work into blocks. The size of these blocks was governed by the largest number of survey observations which could be computed in a simultaneous least-squares adjustment . Reconnaissance of survey stations was commenced in 1935, using Tavistock theodolites to confirm the inter-visibility of stations. [ 3 ] [ 11 ] Survey of the triangulation commenced in April 1936, with observations made during the hours of darkness to electric beacon lamps manufactured by Cooke, Troughton & Simms .
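The scale-factor behaviour described earlier in this section can be sketched with the small-displacement approximation k(E) ≈ F₀(1 + (E − E₀)²/2R²), where E₀ = 400 km is the easting of the central meridian. F₀ = 0.9996 and R ≈ 6,363 km are round illustrative figures, not the exact OSGB36 constants.

```python
F0 = 0.9996        # scale factor on the central meridian (round figure)
E0 = 400_000.0     # easting of the central meridian, metres
R = 6_363_000.0    # representative local Earth radius, metres (assumption)

def local_scale_factor(easting: float) -> float:
    """Approximate transverse Mercator point scale factor at a given easting."""
    return F0 * (1 + (easting - E0) ** 2 / (2 * R ** 2))

for e_km in (220, 400, 580):
    print(e_km, round(local_scale_factor(e_km * 1000), 6))
# The scale dips to 0.9996 on the central meridian and returns to ~1 at
# roughly 580 km E and 220 km E, matching the description in the text.
```

With these figures the scale exceeds 1.0005 only beyond about 270 km east or west of the central meridian, consistent with the quoted values at the grid extremes.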
In flat areas of the country, such as East Anglia , Bilby towers designed by the United States Coast and Geodetic Survey were used. [ 3 ] [ 7 ] The triangulation was still incomplete at the outbreak of World War II, with five of the seven blocks completed, and two main baselines (one between Whitehorse Hill and Liddington Castle , and the second in Lossiemouth ) measured. [ 12 ] [ 13 ] At the outbreak of the war, the Ordnance Survey regional offices in Bristol , Tunbridge Wells , London , and Edinburgh were reduced to a care and maintenance basis, with only occasional activity connected to wartime survey projects. This remained the situation until 1944, when staff levels rose as men returned from war service. [ 6 ] At the end of the war, the most urgent task was the provision of secondary, tertiary, and lower-order control for large-scale surveys. However, on 11 May 1949, observations to complete the primary triangulation recommenced, focused on completion of block six in Scotland, which included the Outer Hebrides , Orkney , and Shetland . Two independent survey teams were used, the first covering an area from Caithness to the Northern Isles , and the second commencing from the boundary of survey block three in Argyll . [ 3 ] The difficulties of completing the field survey work in the Scottish Highlands included completing observations on Ben Nevis in sub-zero temperatures with heavy snowfall, surveying over mountainous terrain, and transportation between various remote Scottish Islands . A member of the survey team suffered a dislocated shoulder when he was attacked by Arctic skuas , whose nesting had been inadvertently disturbed by his work. The work on Ben Nevis alone took twenty-two nights to complete. By 1962 the retriangulation of Britain was complete, with aerial surveying expediting the work in the latter stages.
[ 14 ] [ 12 ] The completion of block six was achieved in 1951, and a new block (seven) was added to connect the triangulation to the Isle of Man . In addition, a connection with France was made across the Strait of Dover in collaboration with the Institut national de l'information géographique et forestière . Survey stations on the British side were at Beachy Head , Fairlight Down, Paddlesworth , and Rumsfelds Water Tower, and in France stations at La Canche, Montlambert, Saint-Inglevert , and Gravelines were used. The results were considered good, with the average survey misclosure (the angular error of lines or rays measured during a traverse survey ) being only one second of arc . [ 3 ] A connection was made to Ireland in 1952, in co-operation with Ordnance Survey Ireland . Observations commenced on 19 April 1952, but were initially hampered by heavy rain and clouds. The survey ray between Trostan and Slieve Donard was abandoned after numerous attempts, but was subsequently completed when Slieve Donard was re-occupied to observe the Holyhead ray in July 1952, with the survey team forced to wait twenty-five nights to complete the third and final observation. The Kippure to South Barrule (Isle of Man) ray, 95 miles long and obscured by smog from Dublin , was eventually abandoned. [ 3 ] By mid-June 1952, the northern section of the connection had been finished. Observations for the internal retriangulation of Northern Ireland were then undertaken, whilst the UK survey parties completed additional work to strengthen the western edge of the primary retriangulation on the coast of Wales. On 28 July 1952, work commenced on the southern half of the connection. As the work moved southward, the rays across the Irish Sea became progressively longer. [ 6 ] On 3 September 1952, work began to observe the longest ray in the entire retriangulation, measuring 98 miles (158 km) between the Preseli mountains (Wales) and Ballycreen in County Wicklow . 
The statutory three nights were sufficient for the completion of this work. A further ray between Preseli and Kippure was not considered essential and, after partial observation, was abandoned. [ 3 ] The Ordnance Survey Ireland team then moved to the Hill of Tara and Forth Mountain in Wexford , but deteriorating weather conditions meant that the work could not be completed until 8 October 1952. This marked the completion of the connection and retriangulation, with an average misclosure of 1.16 seconds. [ 13 ] [ 3 ] The triangulation was connected to both Norway and Iceland using HIRAN, an enhanced version of SHORAN . Survey connections extending from primary triangulation points in Scotland to triangulation points in Norway and Iceland were facilitated by the US Air Force under the implementation of a project known as the North Atlantic Tie. [ 9 ] [ 3 ] [ 15 ] Shortly after World War II, the US Air Force had carried out a readjustment of all the triangulations of continental Europe to produce a geodetic datum known as ED50 , a single system on the Universal Transverse Mercator coordinate system . The North Atlantic Tie initiative aimed to create a geodetic link between North America and Europe, by measuring a trilateration network, and permitting the positioning of European triangulation stations relative to the North American Datum . [ 6 ] From July to September 1953, the US Air Force used HIRAN to survey a link between three geodetic stations in Norway and three on the Scottish mainland and Shetland islands . This marked the initial phase of a larger project which connected surveys of Norway, Iceland, and Greenland to Canada . [ 16 ] The network linking Scotland to Norway comprised fifteen measured lines: three among the Norwegian stations, three among the Scottish and Shetlandic stations, and nine lines across the North Sea . 
[ 6 ] The SHORAN geodetic stations did not precisely match the geodetic triangulation stations, but the proximity was considered such that no significant error was ascribed to the transfer from one to the other. [ 6 ] Each of the fifteen survey lines was gauged by six line crossings at each of two altitude levels, totalling twelve crossings, all forming part of a survey mission. The distance between two survey stations was derived from the minimum sum of the signal transit times from a transmitter, carried in an aircraft flying across the line to be measured, to a pair of terminals at each end of the line and back. A mission was approved only if its measurements satisfied set acceptance criteria. The most inaccurate of the rejected survey missions deviated from the accepted measure by 0.0055 miles (29 feet), and the average disparity between a rejected measure and the mean of the accepted measures was 0.0013 miles (6 feet). The final results and assessment were computed from observation of ground survey positions, including stations in both Iceland and the Faroe Islands . [ 6 ] The operation was largely successful, but the Ordnance Survey considered that the results were not of a geodetic standard necessary for primary triangulation, and a 12 metres (39 ft) discrepancy existed in the measurements between Norwegian stations. [ 3 ] Concurrently with the retriangulation programme, a procedure was put in place for overhauling and updating 1:2500 Ordnance Survey maps in dense urban areas. The programme, known as Overhaul , was commenced with early experiments on methods undertaken in the Cotswolds , and the work done to realise the adjustments made to the 1:2500 maps became known as 'the Cotswolds adjustment' or 'Cotswolds Overhaul'. [ 6 ] [ 18 ] The Cotswolds Overhaul was a two-stage process.
The first stage required the old maps to be updated to eliminate distortions in size and shape, aligning them with the new projections and control from the retriangulation process. In addition, the map details , many of which had not been updated since the 1891–1914 revision, were reviewed and revised. The new triangulation stations were incorporated into the old maps to complement local details and align with accurate grid positions. [ 6 ] The effectiveness of the Cotswolds Overhaul hinged on inserting enough National Grid survey control to align the old maps with the new triangulation. Overdoing it risked deforming the old details to a degree that would render revision impossible. This delicate equilibrium was achievable in parts of the UK where many of the new triangulation stations could be plotted in the correct relation to the old details. However, in open rural areas, positioning the triangulation stations within the detail framework was problematic, and the method began to falter. [ 19 ] Tests conducted in the early 1970s demonstrated that the Cotswolds accuracy standard (±2.5 metre standard error) had not been achieved across all areas. Two solutions emerged: a complete resurvey, or fixing and incorporating additional control in a way that restored the overhaul accuracy standard at a significantly lower cost. However, cost comparisons later led to the conclusion that, in most circumstances, a resurvey was preferable. [ 20 ] [ 6 ]
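The HIRAN line-crossing technique described earlier, in which the inter-station distance is taken as the minimum, over an aircraft's crossing flight, of the summed transmitter-to-terminal distances, can be sketched geometrically; the station coordinates and flight path below are invented for illustration.

```python
import math

def line_crossing_distance(a, b, flight_path):
    """SHORAN/HIRAN line-crossing principle: the sum of the distances from the
    airborne transmitter to the two ground terminals is minimised at the moment
    the aircraft crosses the straight line between them, and that minimum
    equals the inter-station distance (transit times scale with distance)."""
    return min(math.dist(p, a) + math.dist(p, b) for p in flight_path)

# Two stations 100 km apart; the aircraft flies a perpendicular crossing path.
A, B = (0.0, 0.0), (100_000.0, 0.0)
crossing = [(50_000.0, float(y)) for y in range(-5_000, 5_001, 100)]
print(line_crossing_distance(A, B, crossing))  # 100000.0
```

In practice each line was crossed several times at two altitudes and the minima averaged, which is why the acceptance criteria quoted above are framed in terms of agreement between repeated crossings.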
https://en.wikipedia.org/wiki/Retriangulation_of_Great_Britain
The retro-Diels–Alder reaction ( rDA reaction ) is the reverse of the Diels–Alder (DA) reaction , a [4+2] cycloelimination. It involves the formation of a diene and dienophile from a cyclohexene . It can be accomplished spontaneously with heat, or with acid or base mediation. [ 1 ] [ 2 ] In principle, it becomes thermodynamically favorable for the Diels–Alder reactions to proceed in the reverse direction if the temperature is high enough. In practice, this reaction generally requires some special structural features in order to proceed at temperatures of synthetic relevance. For instance, the cleavage of cyclohexene to give butadiene and ethene has been observed, but only at temperatures exceeding 800 K. [ 3 ] With an appropriate driving force, however, the Diels–Alder reaction proceeds in reverse under relatively mild conditions, providing diene and dienophile from starting cyclohexene derivatives. As early as 1929, this process was known and applied to the detection of cyclohexadienes, which released ethylene and aromatic compounds after reacting with acetylenes through a Diels–Alder/retro-Diels–Alder sequence. [ 4 ] Since then, a variety of substrates have been subject to the rDA, yielding many different dienes and dienophiles. Additionally, conducting the rDA in the presence of a scavenging diene or dienophile has led to the capture of many transient reactive species. [ 5 ] The retro-Diels–Alder reaction proper is the microscopic reverse of the Diels–Alder reaction: a concerted (but not necessarily synchronous), pericyclic, single-step process. Evidence for the retro-Diels–Alder reaction was provided by the observation of endo - exo isomerization of Diels–Alder adducts. [ 6 ] It was postulated that at high temperatures, isomerization of kinetic endo adducts to more thermodynamically stable exo products occurred via an rDA/DA sequence. However, such isomerization may take place via a completely intramolecular, [3,3]-sigmatropic (Cope) process. 
Evidence for the latter was provided by the reaction below—none of the "head-to-head" isomer was obtained, suggesting a fully intramolecular isomerization process. [ 7 ] (2) Like the Diels–Alder reaction, the rDA preserves configuration in the diene and dienophile. Much less is known about the relative rates of reversion of endo and exo adducts, and studies have pointed to no correlation between relative configuration in the cyclohexene starting material and reversion rate. [ 8 ] A few rDA reactions occur spontaneously at room temperature because of the high reactivity or volatility of the emitted dienophile. Most, however, require additional thermal or chemical activation. The relative tendencies of a variety of dienes and dienophiles to form via rDA are described below: [ citation needed ] Because the Diels–Alder reaction exchanges two π bonds for two σ bonds, it is intrinsically thermodynamically favored in the forward direction. However, a variety of strategies for overcoming this inherent thermodynamic bias are known. Complexation of Lewis acids to basic functionality in the starting material may induce the retro-Diels–Alder reaction, even in cases when the forward reaction is intramolecular. [ 9 ] (3) Base mediation can be used to induce rDA in cases when the separated products are less basic than the starting material. This strategy has been used, for instance, to generate aromatic cyclopentadienyl anions from adducts of cyclopentadiene. [ 10 ] Strategically placed electron-withdrawing groups in the starting material can render this process essentially irreversible. (4) If isolation or reaction of an elusive diene or dienophile is the goal, one of two strategies may be used. Flash vacuum pyrolysis of Diels–Alder adducts synthesized by independent means can provide extremely reactive, short-lived dienophiles (which can then be captured by a unique diene). [ 11 ] Alternatively, the rDA reaction may be carried out in the presence of a scavenger. 
The scavenger reacts with either the diene or (more typically) the dienophile to drive the equilibrium of the retro-DA process toward products. Highly reactive cyanoacrylates may be isolated from Diels–Alder adducts (synthesized independently) with the use of a scavenger. [ 12 ] (5) Nitriles may be released in rDA reactions of DA adducts of pyrimidines or pyrazines. The resulting highly substituted pyridines can be difficult to access by other means. [ 13 ] (6) Release of isocyanates from Diels–Alder adducts of pyridones can be used to generate highly substituted aromatic compounds. The isocyanates may be isolated or trapped if they are the desired product. [ 14 ] (7) Release of nitrogen from six-membered, cyclic diazenes is common and often spontaneous at room temperature. Such a reaction is utilized in click chemistry, where strained alkenes react with a 1,2,4,5-tetrazine in a Diels–Alder/retro-Diels–Alder sequence with the loss of nitrogen. In another example, the epoxide shown undergoes rDA at 0 °C. The isomer with a cis relationship between the diazene and epoxide reacts only after heating to >180 °C. [ 15 ] (8) The concerted release of oxygen via rDA results in the formation of singlet oxygen . Very high yields of singlet oxygen result from rDA reactions of some cyclic peroxides—in this example, a greater than 90% yield of singlet oxygen was obtained. [ 16 ] (9) Carbon dioxide is a common dienophile released during rDA reactions. Diels–Alder adducts of alkynes and 2-pyrones can undergo rDA to release carbon dioxide and generate aromatic compounds. [ 17 ] (10)
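The temperature threshold discussed at the start of this article can be rationalised with a back-of-the-envelope ΔG = ΔH − TΔS estimate: the forward Diels–Alder of butadiene and ethene is enthalpy-favoured but entropy-opposed, so the retro reaction becomes favourable above roughly T = ΔH/ΔS. The thermochemical values below are rough textbook-style figures assumed for illustration, not precise data for this reaction.

```python
# Back-of-the-envelope crossover temperature for the retro-Diels-Alder of
# cyclohexene (-> butadiene + ethene). Illustrative values for the FORWARD
# cycloaddition (two pi bonds exchanged for two sigma bonds):
dH_forward = -40_000.0  # cal/mol   (exothermic)
dS_forward = -43.0      # cal/(mol*K) (two molecules combine into one)

# The reverse reaction flips both signs, so dG_retro = -dH - T*(-dS)
# crosses zero at the same ratio:
T_crossover = dH_forward / dS_forward
print(round(T_crossover))  # ~930 K, consistent with the >800 K observation above
```

This also shows why the driving forces listed in the article (gas release, aromatization) matter: they make ΔH for the retro step less unfavourable, pulling the crossover temperature down into a synthetically useful range.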
https://en.wikipedia.org/wiki/Retro-Diels–Alder_reaction
Retro (or reverse) screening (RS) is a relatively new approach to determine the specificity and selectivity of a therapeutic drug molecule against a target protein or another macromolecule. It proceeds in the opposite direction to the so-called virtual screening (VS). In VS, the goal is to use a protein target to identify a high-affinity ligand from a search library typically containing hundreds of thousands of small molecules. In contrast, RS employs a known drug molecule to screen a protein library containing hundreds of thousands of individual structures (obtained from both experimental and modeling techniques). Accordingly, the extent to which this drug cross-reacts with the human proteome provides a measure of its efficacy and the potential long-term side-effects. RS is expected to play a key role in providing an additional layer of quality control in drug discovery .
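The workflow can be sketched as a simple loop over a protein library, scoring one known ligand against each structure and keeping the predicted binders. The scoring function and the protein names and scores below are hypothetical placeholders; a real pipeline would call a docking engine once per protein structure.

```python
def retro_screen(drug, protein_library, score_fn, hit_threshold=-7.0):
    """Score one known drug against every protein in a library and return the
    predicted binders as (protein, score) pairs, tightest binder first.
    Lower (more negative) scores denote stronger predicted binding."""
    scored = [(p, score_fn(drug, p)) for p in protein_library]
    hits = [(p, s) for p, s in scored if s <= hit_threshold]
    return sorted(hits, key=lambda ps: ps[1])

# Hypothetical docking-style scores (kcal/mol-like, invented for illustration).
toy_scores = {"EGFR": -9.2, "HSA": -5.1, "CYP3A4": -7.8, "hERG": -6.0}

hits = retro_screen("drug_X", list(toy_scores),
                    lambda drug, protein: toy_scores[protein])
print(hits)  # [('EGFR', -9.2), ('CYP3A4', -7.8)]
```

Interpreting the output follows the logic of the article: few hits beyond the intended target (here EGFR) suggest good selectivity, while additional strong hits (here CYP3A4) flag potential off-target effects worth following up.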
https://en.wikipedia.org/wiki/Retro_screening
Retrocausality , or backwards causation , is a concept of cause and effect in which an effect precedes its cause in time and so a later event affects an earlier one. [ 1 ] [ 2 ] In quantum physics , the distinction between cause and effect is not made at the most fundamental level and so time-symmetric systems can be viewed as causal or retrocausal. [ 3 ] [ page needed ] Philosophical considerations of time travel often address the same issues as retrocausality, as do treatments of the subject in fiction, but the two phenomena are distinct. [ 1 ] Philosophical efforts to understand causality extend back at least to Aristotle 's discussions of the four causes . It was long considered that an effect preceding its cause is an inherent self-contradiction because, as 18th century philosopher David Hume discussed, when examining two related events, the cause is by definition the one that precedes the effect. [ 4 ] [ page needed ] The idea of retrocausality is also found in Indian philosophy. It was defended by at least two Indian Buddhist philosophers, Prajñākaragupta (ca. 8th–9th century) and Jitāri (ca. 940–1000); the latter wrote a specific treatise on the topic, the Treatise on Future Cause ( Bhāvikāraṇavāda ). [ 5 ] In the 1950s, Michael Dummett wrote in opposition to such definitions, stating that there was no philosophical objection to effects preceding their causes. [ 6 ] This argument was rebutted by fellow philosopher Antony Flew and, later, by Max Black . [ 6 ] Black's "bilking argument" held that retrocausality is impossible because the observer of an effect could act to prevent its future cause from ever occurring. [ 7 ] A more complex discussion of how free will relates to the issues Black raised is summarized by Newcomb's paradox . Essentialist philosophers have proposed other theories, such as the existence of "genuine causal powers in nature", or have raised concerns about the role of induction in theories of causality.
[ 8 ] [ page needed ] [ 9 ] [ page needed ] Most physical theories are time symmetric : microscopic models like Newton's laws or electromagnetism have no inherent direction of time. The "arrow of time" that distinguishes cause and effect must have another origin. [ 10 ] : 116 To reduce confusion, physicists distinguish strong (macroscopic) from weak (microscopic) causality. [ 11 ] The hypothetical ability to affect the past is sometimes taken to suggest that causes could be negated by their own effects, creating a logical contradiction such as the grandfather paradox . [ 12 ] This contradiction is not necessarily inherent to retrocausality or time travel; by limiting the initial conditions of time travel with consistency constraints, such paradoxes can be avoided. [ 13 ] Aspects of modern physics, such as the hypothetical tachyon particle and certain time-independent aspects of quantum mechanics , may allow particles or information to travel backward in time. Logical objections to macroscopic time travel may not necessarily prevent retrocausality at other scales of interaction. [ 14 ] [ page needed ] Even if such effects are possible, however, they may not be capable of producing effects different from those that would have resulted from normal causal relationships. [ 15 ] [ page needed ] Physicist John G. Cramer has explored various proposed methods for nonlocal or retrocausal quantum communication and found them all flawed and, consistent with the no-communication theorem , unable to transmit nonlocal signals. [ 16 ] In relativity, time and space are intertwined in the fabric of space-time, so time can contract and stretch under the influence of gravity. [ 17 ] Closed timelike curves (CTCs), sometimes referred to as time loops, [ 17 ] in which the world line of an object returns to its origin, arise from some exact solutions to the Einstein field equations . 
However, the chronology protection conjecture of Stephen Hawking suggests that any such closed timelike curve would be destroyed before it could be used. [ 18 ] Although CTCs do not appear to exist under normal conditions, extreme environments of spacetime , such as a traversable wormhole or the region near certain cosmic strings , may allow their brief formation, implying a theoretical possibility of retrocausality. [ citation needed ] The exotic matter or topological defects required for the creation of those environments have not been observed. [ 19 ] [ page needed ] [ 20 ] [ page needed ] Most physical models are time symmetric ; [ 10 ] : 116 some use retrocausality at the microscopic level. Wheeler–Feynman absorber theory , proposed by John Archibald Wheeler and Richard Feynman , uses retrocausality and a temporal form of destructive interference to explain the absence of a type of converging concentric wave suggested by certain solutions to Maxwell's equations . [ 21 ] These advanced waves have nothing to do with cause and effect: they are simply a different mathematical way to describe normal waves. The reason they were proposed is that a charged particle would then not have to act on itself, which, in normal classical electromagnetism, leads to an infinite self-force. [ 21 ] Ernst Stueckelberg , and later Richard Feynman , proposed an interpretation of the positron as an electron moving backward in time, reinterpreting the negative-energy solutions of the Dirac equation . Electrons moving backward in time would have a positive electric charge . [ 22 ] This time-reversal of antiparticles is required in modern quantum field theory and is, for example, a component of how nucleons in atomic nuclei are held together by the nuclear force , via exchange of virtual mesons such as the pion . A meson is made up of an equal number of normal quarks and antiquarks, and is thus simultaneously both emitted and absorbed. 
[ 23 ] Wheeler invoked this time-reversal concept to explain the identical properties shared by all electrons, suggesting that " they are all the same electron " with a complex, self-intersecting world line . [ 24 ] Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from past to future, or from future to past." [ 25 ] The backwards-in-time point of view is nowadays accepted as completely equivalent to other pictures, [ 26 ] but it has nothing to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description. Retrocausality is associated with the Double Inferential state-Vector Formalism (DIVF), later known as the two-state vector formalism (TSVF), in quantum mechanics, where the present is characterised by quantum states of the past and the future taken in combination. [ 27 ] [ 28 ] Retrocausality is sometimes associated with the nonlocal correlations that generically arise from quantum entanglement , including for example the delayed choice quantum eraser . [ 29 ] [ 30 ] However, accounts of quantum entanglement can be given which do not involve retrocausality. They treat the experiments demonstrating these correlations as being described from different reference frames that disagree on which measurement is a "cause" versus an "effect", as necessary to be consistent with special relativity. [ 31 ] [ 32 ] That is to say, the choice of which event is the cause and which the effect is not absolute but is relative to the observer. Such nonlocal quantum entanglements can be described in a way that is free of retrocausality if the states of the system are considered. 
[ 33 ] Hypothetical superluminal particles called tachyons have a spacelike trajectory, and thus can appear to move backward in time, according to an observer in a conventional reference frame. Despite frequent depiction in science fiction as a method to send messages back in time, hypothetical tachyons do not interact with normal tardyonic matter in a way that would violate standard causality. Specifically, the Feinberg reinterpretation principle means that ordinary matter cannot be used to make a tachyon detector capable of receiving information. [ 34 ] Retrocausality is claimed to occur in some psychic phenomena such as precognition . J. W. Dunne 's 1927 book An Experiment with Time studied precognitive dreams and has become a definitive classic. [ 35 ] Parapsychologist J. B. Rhine and colleagues made intensive investigations during the mid-twentieth century. His successor Helmut Schmidt presented quantum mechanical justifications for retrocausality, eventually claiming that experiments had demonstrated the ability to manipulate radioactive decay through retrocausal psychokinesis . [ 36 ] [ 37 ] Such results and their underlying theories have been rejected by the mainstream scientific community and are widely regarded as pseudoscience , although they continue to have some support from fringe science sources. [ 38 ] [ page needed ] [ 39 ] [ page needed ] [ 40 ] [ unreliable source? ] Efforts to associate retrocausality with prayer healing have been similarly rejected. [ 41 ] [ 42 ] Beginning in 1994, psychologist Daryl J. Bem argued for precognition. He later showed experimental subjects two sets of curtains and instructed them to guess which one had a picture behind it, but did not display the picture behind the curtain until after the subject made their guess. Some results showed a higher margin of success for a subset of erotic images, with subjects who identified as "stimulus-seeking" in the pre-screening questionnaire scoring even higher. 
However, as with his predecessors, Bem's methodology has been strongly criticised and his results discounted. [ 43 ]
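The advanced waves invoked by Wheeler–Feynman absorber theory, discussed above, are the time-reversed counterparts of the familiar retarded solutions of Maxwell's equations. As a standard textbook form (not taken from the sources cited here), the two solutions for the scalar potential of a charge density ρ in the Lorenz gauge can be written as:

```latex
\varphi_{\text{ret}}(\mathbf{r},t)
  = \frac{1}{4\pi\varepsilon_0}\int
    \frac{\rho\!\left(\mathbf{r}',\, t - |\mathbf{r}-\mathbf{r}'|/c\right)}{|\mathbf{r}-\mathbf{r}'|}\,
    \mathrm{d}^3 r'
\qquad
\varphi_{\text{adv}}(\mathbf{r},t)
  = \frac{1}{4\pi\varepsilon_0}\int
    \frac{\rho\!\left(\mathbf{r}',\, t + |\mathbf{r}-\mathbf{r}'|/c\right)}{|\mathbf{r}-\mathbf{r}'|}\,
    \mathrm{d}^3 r'
```

The retarded potential depends on the source at the earlier time t − |r − r′|/c, the advanced potential on the source at the later time t + |r − r′|/c. Absorber theory assigns each charge the time-symmetric half-sum ½(φ_ret + φ_adv); interference with the absorber's response is what cancels the converging waves mentioned above.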
https://en.wikipedia.org/wiki/Retrocausality
Retrocomputing is the current use of older computer hardware and software . Retrocomputing is usually classed as a hobby and recreation rather than a practical application of technology; enthusiasts often collect rare and valuable hardware and software for sentimental reasons. [ 1 ] Occasionally, however, an obsolete computer system has to be "resurrected" to run software specific to that system, to access data stored on obsolete media, or to use a peripheral that requires that system. Retrocomputing and retro gaming have been described as preservation activities and as aspects of the remix culture . [ 2 ] Retrocomputing is part of the history of computer hardware . It can be seen as the analogue of experimental archaeology in computing. [ 3 ] Some notable examples include the reconstruction of Babbage 's Difference engine (more than a century after its design) and the implementation of Plankalkül in 2000 (more than half a century after its inception). Some retrocomputing enthusiasts also consider " homebrewing " (designing and building retro- and retro-styled computers or kits) to be an important aspect of the hobby, giving new enthusiasts an opportunity to experience more fully what the early years of hobby computing were like. [ 1 ] There are several different approaches to this end. Some are exact replicas of older systems, and some are newer designs based on the principles of retrocomputing, while others combine the two, with old and new features in the same package. As old computer hardware becomes harder to maintain, there has been increasing interest in computer simulation. This is especially the case with old mainframe computers , which have largely been scrapped, and have space, power, and environmental requirements unaffordable by the average user. The memory size and speed of current systems enable simulations of many old systems to run faster than the originals did on their own hardware. 
[ 14 ] [ 15 ] One popular simulator, SIMH , offers simulations of over 50 historic systems from the 1950s through the present. The Hercules emulator simulates the IBM System/360 family from System/360 to 64-bit System/z . A simulator is available for the Honeywell Multics system. Much software for older systems was never copyrighted or was open source , so there is a wide variety of available software to run on these simulators. Some emulations are used by businesses, as running production software in a simulator is usually faster, cheaper, and more reliable than running it on original hardware. [ citation needed ] In an interview with Conan O'Brien in May 2014, George R. R. Martin revealed that he writes his books using WordStar 4.0 , an MS-DOS application dating back to 1987. [ 16 ] US-based streaming video provider Netflix released a multiple-choice movie branded as part of their Black Mirror series, called Bandersnatch . The protagonist is a teenage programmer working on a contract to deliver a video-game adaptation of a fantasy novel for an 8-bit computer in 1984. The multiple storylines revolve around the emotions and mental health issues resulting from a reality-perception mismatch between a new generation of computer-savvy teenagers and twenty-somethings, and their caregivers. Due to their low complexity and other technical advantages, 8-bit computers are frequently rediscovered for education, especially for introductory programming classes in elementary schools . [ citation needed ] 8-bit computers turn on and directly present a programming environment; there are no distractions, and no need for other features or additional connectivity. The BASIC language is a simple-to-learn programming language that has access to the entire system without having to load libraries for sound, graphics, math, etc. The focus of the programming language is on immediacy; in particular, one command does one thing immediately (e.g. 
COLOR 0,6 turns the screen green).
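That immediacy can be illustrated with a short listing in the style of 8-bit home-computer BASIC. This is an illustrative sketch only: command names and color numbers vary between machines (COLOR 0,6 as quoted above is Commodore 128-style BASIC), so the listing should not be read as targeting one specific model.

```basic
10 COLOR 0,6            : REM one command, immediate effect: background turns green
20 FOR I=1 TO 5         : REM no libraries to load, no build step
30 PRINT "HELLO, WORLD" : REM PRINT writes straight to the screen
40 NEXT I
50 END
```

Typing RUN executes the program at once, and retyping a numbered line edits it in place; the machine itself is the whole development environment.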
https://en.wikipedia.org/wiki/Retrocomputing
Retrograde condensation occurs when gas in a tube is compressed beyond the point of condensation, with the effect that the liquid evaporates again. Because this is the opposite of ordinary condensation, it is called retrograde condensation. If the volume of two gases that are kept at constant temperature and pressure below critical conditions is gradually reduced, condensation will start. When a certain volume is reached, the amount of condensate will gradually increase upon further reduction in volume until the gases are liquefied. If the composition of the gases lies between their true and pseudo-critical points, the condensate formed will disappear on continued reduction of volume. [ 1 ] [ 2 ] This disappearance of condensate is called retrograde condensation. Because most natural gas found in petroleum reservoirs is not a pure product, when non-associated gas is extracted from a field under supercritical pressure/temperature conditions (i.e., the pressure in the reservoir decreases below the dewpoint ), condensate liquids may form during the isothermal depressurization, an effect called retrograde condensation. Dutch physicist Johannes Kuenen discovered retrograde condensation and published his findings in April 1892 in his Ph.D. thesis, titled "Metingen betreffende het oppervlak van Van der Waals voor mengsels van koolzuur en chloormethyl" (Measurements on the Van der Waals surface for mixtures of carbonic acid and methyl chloride ). [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Retrograde_condensation
Retrogression heat treatment (RHT) is a heat treatment process that rapidly heat treats age-hardenable aluminum alloys , mainly by induction heating . In the past, it was mainly applied to 6061 and 6063 aluminum alloys . After RHT, forming of complex shapes is possible without creating damage such as cracks; even hard tempers (for example -T6) can be formed easily after subjecting these alloys to RHT.
https://en.wikipedia.org/wiki/Retrogression_heat_treatment
Retromer is a complex of proteins that has been shown to be important in recycling transmembrane receptors from endosomes to the trans -Golgi network (TGN) and directly back to the plasma membrane. Mutations in retromer and its associated proteins have been linked to Alzheimer's and Parkinson's diseases. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Retromer is a heteropentameric complex, which in humans is composed of a less defined membrane-associated sorting nexin dimer ( SNX1 , SNX2 , SNX5 , SNX6 ), and a vacuolar protein sorting (Vps) heterotrimer containing Vps26 , Vps29 , and Vps35 . Although the SNX dimer is required for the recruitment of retromer to the endosomal membrane, the cargo-binding function of this complex is contributed by the core heterotrimer through the binding of the Vps26 and Vps35 subunits to various cargo molecules, [ 5 ] including M6PR , [ 6 ] wntless , [ 7 ] SORL1 (which is also a receptor for other cargo proteins such as APP ), and sortilin . [ 8 ] Early studies on the sorting of acid hydrolases such as carboxypeptidase Y (CPY) in S. cerevisiae mutants led to the identification of retromer as the mediator of the retrograde trafficking of the pro-CPY receptor ( Vps10 ) from the endosomes to the TGN. [ 9 ] Age-related loss of OXR1 causes retromer decline. [ 10 ] The retromer complex is highly conserved : homologs have been found in C. elegans , mouse and human. The retromer complex consists of 5 proteins in yeast: Vps35p, Vps26p, Vps29p, Vps17p and Vps5p. The mammalian retromer consists of Vps26 , Vps29 , Vps35 , SNX1 and SNX2 , and possibly SNX5 and SNX6 . [ 12 ] It is proposed to act in two subcomplexes: (1) a cargo-recognition heterotrimeric complex that consists of Vps35, Vps29 and Vps26, and (2) SNX-BAR dimers, which consist of SNX1 or SNX2 and SNX5 or SNX6 and facilitate endosomal membrane remodeling and curvature, resulting in the formation of tubules/ vesicles that transport cargo molecules to the trans -Golgi network (TGN). 
Humans have two orthologs of VPS26: VPS26A, which is ubiquitous, and VPS26B, which is found in the central nervous system, where it forms a unique retromer dedicated to the direct recycling of neuronal cell-surface proteins such as APP back to the plasma membrane with the assistance of the cargo receptor SORL1. [ 13 ] The retromer complex has been shown to mediate retrieval of various transmembrane receptors, such as the cation-independent mannose 6-phosphate receptor , functional mammalian counterparts of Vps10 such as SORL1 , and the Wnt receptor Wntless . [ 14 ] Retromer is required for the recycling of Kex2p and DPAP-A, which also cycle between the trans -Golgi network and a pre-vacuolar (yeast endosome equivalent) compartment in yeast. It is also required for the recycling of the cell-surface receptor CED-1, which is necessary for phagocytosis of apoptotic cells. [ 15 ] Retromer plays a central role in the retrieval of several different cargo proteins from the endosome to the trans -Golgi network, or for direct recycling back to the cell surface. However, it is clear that there are other complexes and proteins that act in this retrieval process. So far it is not clear whether some of the other components that have been identified in the retrieval pathway act with retromer in the same pathway or are involved in alternative pathways. Recent studies have implicated retromer sorting defects in Alzheimer's disease [ 16 ] [ 17 ] and late-onset Parkinson's disease. [ 18 ] Retromer also seems to play a role in hepatitis C virus replication. [ 19 ] The association of the Vps35-Vps29-Vps26 complex with the cytosolic domains of cargo molecules on endosomal membranes initiates the activation of retrograde trafficking and cargo capture. [ 20 ] The nucleation complex is formed through the interaction of the Vps complex with GTP -activated Rab7 , [ 21 ] clathrin , clathrin adaptors and various binding proteins. 
[ 22 ] The SNX-BAR dimer enters the nucleation complex via direct binding or lateral movement on the endosomal surface. The increased level of retromer SNX-BARs causes a conformational switch to a curvature-inducing mode which initiates membrane tubule formation. [ 23 ] [ 24 ] Once the cargo carriers have matured, carrier scission is catalyzed by dynamin-II or EHD1 , [ 25 ] together with the mechanical forces generated by actin polymerization and motor activity. The cargo carrier is transported to the TGN by motor proteins such as dynein . Tethering of the cargo carrier to the recipient compartment is thought to lead to the uncoating of the carrier, which is driven by ATP hydrolysis and Rab7-GTP hydrolysis. Once released from the carrier, the Vps35-Vps29-Vps26 complex and the SNX-BAR dimers are recycled back onto the endosomal membranes. The other function of retromer is the recycling of protein cargo directly back to the plasma membrane. [ 4 ] Dysfunction of this branch of the retromer recycling pathway causes endosomal protein traffic jams [ 26 ] that are linked to Alzheimer’s disease. [ 27 ] [ 28 ] It has been suggested that recycling dysfunction is the “fire” that drives the common form of Alzheimer’s, leading to the production of amyloid and tau tangle “smoke”. [ 29 ]
https://en.wikipedia.org/wiki/Retromer
In the field of drug discovery , retrometabolic drug design is a strategy for the design of safer drugs, either using predictable metabolism to an inactive moiety or using targeted drug delivery approaches. The phrase retrometabolic drug design was coined by Nicholas Bodor. [ 1 ] The method is analogous to retrosynthetic analysis , in which the synthesis of a target molecule is planned backwards. In retrometabolic drug design, metabolic reaction information is used to design parent drugs whose metabolism and distribution can be controlled, targeting and then eliminating the drug so as to increase efficacy and minimize undesirable side effects. The new drugs thus designed achieve selective organ and/or therapeutic site drug targeting and produce safe therapeutic agents and safe environmental chemicals. These approaches represent systematic methodologies that thoroughly integrate structure-activity (SAR) and structure-metabolism (SMR) relationships and are aimed at designing safe, locally active compounds with an improved therapeutic index (the ratio of benefit vs. side effect). [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The concept of retrometabolic drug design encompasses two distinct approaches. One approach is the design of soft drugs (SDs), [ 4 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] new, active therapeutic agents, often isosteric or isoelectronic analogs of a lead compound, with a chemical structure specifically designed to allow predictable metabolism into inactive metabolites after exerting their desired therapeutic effect(s). The other approach is the design of chemical delivery systems (CDSs). [ 4 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] CDSs are biologically inert molecules intended to enhance drug delivery to a particular organ or site, requiring several conversion steps before releasing the active drug. 
Although both retrometabolic design approaches involve chemical modifications of the molecular structure and both require enzymatic reactions to fulfill drug targeting, the principles of SD and CDS design are distinctly different. While CDSs are inactive as administered and sequential enzymatic reactions provide the differential distribution and ultimately release the active drug, SDs are active as administered and are designed to be easily metabolized into inactive species. Assuming an ideal situation, with a CDS the drug is present at the targeted site and nowhere else in the body, because enzymatic processes destroy the drug at all other sites. Whereas CDSs are designed to achieve drug targeting at a selected organ or site, SDs are designed to afford a differential distribution that can be regarded as reverse targeting. Since its introduction by Nicholas Bodor in the late 1970s, the soft drug concept has generated considerable research in both academic and industrial settings. Bodor defined soft drugs as biologically active, therapeutically useful chemical compounds characterized by a predictable and controllable in vivo metabolism to non-toxic moieties after they achieve their therapeutic role. [ 24 ] Several rationally designed soft drugs have either already reached the market or are in late-stage development ( budiodarone , celivarone , AZD3043 , tecafarin ). [ 25 ] There are also compounds that can be considered as soft chemicals (e.g., malathion) or soft drugs (e.g., articaine, methylphenidate) even though they were not developed as such. [ 25 ] Since their introduction in the early 1980s, CDSs have also generated considerable research work, especially for brain and eye targeting of various therapeutic agents, including those that cannot cross the blood–brain barrier or the blood–retinal barrier on their own. 
Within this approach, three major general CDS classes have been identified. The concept has been extended to many drugs and peptides, its importance illustrated by the fact that its first applications and uses were published in Science [ 26 ] [ 27 ] [ 28 ] in 1975, 1981 and 1983. Its extension to the targeted brain delivery of neuropeptides was included by the Harvard Health Letter [ 29 ] as one of the top 10 medical advances of 1992. Several compounds have reached advanced clinical development phases. In the first example above, brain-targeted CDSs employ the sequential metabolic conversion of a redox-based targetor moiety, which is closely related to the ubiquitous NAD(P)H ⇌ NAD(P) + coenzyme system, to exploit the unique properties of the blood–brain barrier (BBB). After enzymatic oxidation of the NADH-type drug conjugate to its corresponding NAD + -drug form, the still-inactive precursor "locks in" behind the BBB to provide targeted and sustained CNS delivery of the compound of interest. The second example involves eye-specific delivery of betaxoxime , the oxime derivative of betaxolol . The administered, inactive β-amino-ketoxime is converted to the corresponding ketone via oxime hydrolase , an enzyme recently identified with preferential activity in the eye, and then stereospecifically reduced to its alcohol form. Intraocular pressure (IOP)-lowering activity is demonstrated without producing the active β-blockers systemically, making them devoid of any cardiovascular activity, a major drawback of classical antiglaucoma agents. Because of the advantages provided by this unique eye-targeting profile, oxime-based eye-targeting CDSs could replace the β-blockers currently used for ophthalmic applications. These retrometabolic design strategies were introduced by Nicholas Bodor, one of the first and most prominent advocates for the early integration of metabolism, pharmacokinetic and general physicochemical considerations in the drug design process. 
[ 32 ] [ 33 ] [ 34 ] These drug design concepts recognize the importance of design-controlled metabolism and focus not on the increase of activity alone but on the increase of the activity/toxicity ratio (therapeutic index) in order to deliver the maximum benefit while also reducing or eliminating unwanted side effects. The importance of this field is reviewed in a book dedicated to the subject (Bodor, N.; Buchwald, P. Retrometabolic Drug Design and Targeting , 1st ed., Wiley & Sons, 2012), as well as in a full chapter of Burger's Medicinal Chemistry and Drug Design , 7th ed. (2010), with close to 150 chemical structures and more than 450 references. [ 35 ] At the time of its introduction, the idea of designed-in metabolism represented a significant novelty and ran counter to the mainstream thinking of the time, which instead focused on minimizing or entirely eliminating drug metabolism. Bodor's work on these design concepts developed during the late 1970s and early 1980s, and came to prominence during the mid-1990s. Loteprednol etabonate, a soft corticosteroid designed and patented [ 36 ] [ 37 ] by Bodor, received final Food and Drug Administration (FDA) approval in 1998 as the active ingredient of two ophthalmic preparations (Lotemax and Alrex), currently the only corticosteroid approved by the FDA for use in all inflammatory and allergy-related ophthalmic disorders. Its safety for long-term use [ 38 ] further supports the soft drug concept, and in 2004, loteprednol etabonate [ 39 ] [ 40 ] [ 41 ] was also approved as part of a combination product (Zylet). A second generation of soft corticosteroids such as etiprednol dicloacetate [ 42 ] is in development for a full spectrum of other possible applications, such as nasal sprays for rhinitis or inhalation products for asthma. 
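The therapeutic index emphasized above is conventionally quantified as the ratio of the median toxic dose (TD50) to the median effective dose (ED50). A minimal sketch in Python; the dose values below are hypothetical, chosen only for illustration:

```python
def therapeutic_index(td50: float, ed50: float) -> float:
    """Therapeutic index TI = TD50 / ED50.

    TD50: dose producing toxicity in 50% of the population.
    ED50: dose producing the desired effect in 50% of the population.
    A larger TI means a wider margin between benefit and harm, which is
    what retrometabolic design aims to increase.
    """
    if td50 <= 0 or ed50 <= 0:
        raise ValueError("doses must be positive")
    return td50 / ed50

# Hypothetical doses in mg/kg: a soft-drug analog that is detoxified
# faster (higher TD50) at an unchanged ED50 gains therapeutic index.
print(therapeutic_index(100.0, 5.0))   # lead compound -> 20.0
print(therapeutic_index(400.0, 5.0))   # soft analog   -> 80.0
```

The point of the sketch is only that raising TD50 while holding ED50 fixed widens the safety margin, which is the design goal stated in the text.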
The soft drug concept ignited research work in both academic (e.g., Aston University, Göteborg University, Okayama University, Uppsala University, University of Iceland, University of Florida, Université Louis Pasteur, Yale University) and industrial (e.g., AstraZeneca, DuPont, GlaxoSmithKline, IVAX, Janssen Pharmaceutica, Nippon Organon, Novartis, ONO Pharmaceutical, Schering AG) settings. Besides corticosteroids, various other therapeutic areas have been pursued, such as soft beta-blockers, soft opioid analgesics, soft estrogens, soft beta-agonists, soft anticholinergics, soft antimicrobials, soft antiarrhythmic agents, soft angiotensin-converting enzyme (ACE) inhibitors, soft dihydrofolate reductase (DHFR) inhibitors, soft calcineurin inhibitors (soft immunosuppressants), soft matrix metalloproteinase (MMP) inhibitors, soft cytokine inhibitors, soft cannabinoids and soft Ca 2+ channel blockers (see [ 35 ] for a recent review). Following the introduction of the CDS concepts, work along those lines started in numerous pharmaceutical centers around the world, and brain-targeting CDSs were explored for many therapeutic agents such as steroids (testosterone, progestins, estradiol, dexamethasone), anti-infective agents (penicillins, sulfonamides), antivirals (acyclovir, trifluorothymidine, ribavirin), antiretrovirals (AZT, ganciclovir), anticancer agents (lomustine, chlorambucil), neurotransmitters (dopamine, GABA), nerve growth factor (NGF) inducers, anticonvulsants (phenytoin, valproate, stiripentol), Ca 2+ antagonists (felodipine), MAO inhibitors, NSAIDs and neuropeptides (tryptophan, Leu-enkephalin analogs, TRH analogs, kyotorphin analogs). A number of new chemical entities (NCEs) were developed based on these principles; for example, E 2 -CDS (Estredox) [ 30 ] and betaxoxime [ 31 ] are in advanced clinical development phases. 
A review of ongoing research using the general retrometabolic design approaches is conducted biennially at the Retrometabolism Based Drug Design and Targeting Conference , an international series of symposia developed and organized by Nicholas Bodor. Proceedings of each conference held have been published in the international pharmaceutical journal Pharmazie .
https://en.wikipedia.org/wiki/Retrometabolic_drug_design
A Wagner–Meerwein rearrangement is a class of carbocation 1,2-rearrangement reactions in which a hydrogen , alkyl or aryl group migrates from one carbon to a neighboring carbon. [ 1 ] [ 2 ] They can be described as cationic [1,2]- sigmatropic rearrangements, proceeding suprafacially and with stereochemical retention. As such, a Wagner–Meerwein shift is a thermally allowed pericyclic process with the Woodward–Hoffmann symbol [ ω 0 s + σ 2 s ]. They are usually facile, and in many cases they can take place at temperatures as low as –120 °C. The reaction is named after the Russian chemist Yegor Yegorovich Vagner (who was of German origin and published in German journals as Georg Wagner) and Hans Meerwein . Several reviews have been published. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The rearrangement was first discovered in bicyclic terpenes , for example the conversion of isoborneol to camphene : [ 8 ] The history of the rearrangement shows that many scientists were puzzled by this and related reactions, which are closely tied to the discovery of carbocations as intermediates. [ 9 ] In a simple demonstration, reaction of 1,4-dimethoxybenzene with either 2-methyl-2-butanol or 3-methyl-2-butanol in sulfuric acid and acetic acid yields the same disubstituted product, [ 10 ] the latter via a hydride shift of the cationic intermediate. There is current work on the use of such skeletal rearrangements in the synthesis of bridged azaheterocycles ; these data, including plausible mechanisms of the Wagner–Meerwein rearrangement of diepoxyisoindoles , are summarized in [ 11 ] . The related Nametkin rearrangement , named after Sergey Namyotkin , involves the rearrangement of methyl groups in certain terpenes. In some cases the reaction type is also called a retropinacol rearrangement (see pinacol rearrangement ).
https://en.wikipedia.org/wiki/Retropinacol_rearrangement
Retroposons are repetitive DNA fragments that are inserted into chromosomes after having been reverse transcribed from an RNA molecule. In contrast to retrotransposons , retroposons never encode reverse transcriptase (RT) (but see below). Therefore, they are non-autonomous elements with regard to transposition activity (as opposed to transposons ). Non-long terminal repeat (LTR) retrotransposons such as the human LINE1 elements are sometimes incorrectly referred to as retroposons; however, this usage depends on the author. For example, Howard Temin published the following definition: retroposons encode RT but are devoid of long terminal repeats (LTRs), for example long interspersed elements (LINEs); retrotransposons also feature LTRs; and retroviruses, in addition, are packaged as viral particles (virions). Retrosequences are non-autonomous elements devoid of RT; they are retroposed with the aid of the machinery of autonomous elements such as LINEs, examples being short interspersed nuclear elements (SINEs) and mRNA-derived retro(pseudo)genes . [ 2 ] [ 3 ] [ 4 ] Retroposition accounts for approximately 10,000 gene-duplication events in the human genome, of which approximately 2–10% are likely to be functional. [ 5 ] Such genes are called retrogenes and represent a certain type of retroposon. A classical event is the retroposition of a spliced pre-mRNA molecule of the c-Src gene into the proviral ancestor of the Rous sarcoma virus (RSV). The retroposed c-src pre-mRNA still contained a single intron and within RSV is now referred to as the v-Src gene. [ 6 ]
https://en.wikipedia.org/wiki/Retroposon
Retroreflective sheeting is flexible retroreflective material primarily used to increase the nighttime conspicuity of traffic signs, high-visibility clothing, and other items so they are safely and effectively visible in the light of an approaching driver's headlamps. It is also used to increase the scanning range of barcodes in factory settings. The sheeting consists of retroreflective glass beads, microprisms, or encapsulated lenses sealed onto a fabric or plastic substrate. Many different colors and degrees of reflection intensity are provided by numerous manufacturers for various applications. As with any retroreflector, sheeting glows brightly when there is a small angle between the observer's eye and the light source directed toward the sheeting, but appears nonreflective when viewed from other directions. Retroreflective sheeting is widely used in a variety of applications today, after early widespread use on road signs in the 1960s. High-visibility clothing frequently combines retroreflective sheeting with fluorescent fabrics to significantly increase the wearer's visibility from a distance, which in turn reduces the risk of traffic-related accidents. Such clothing is commonly worn as (often mandatory) PPE by professionals who work near road traffic or heavy machinery, often at night or in low-visibility weather conditions, such as construction workers, road workers and emergency service personnel. It is also commonly worn by cyclists or joggers to increase their nighttime visibility to road traffic. High-visibility clothing typically comes in fluorescent colors like yellow, orange, and red, as these shades are highly visible in various lighting conditions and are internationally recognized for safety use. [ 1 ] [ 2 ] [ 3 ] It is designed according to specific standards to ensure effectiveness.
In Canada, these requirements are outlined in CSA Standard Z96-15 (R2020), [ 1 ] while in the United States they follow ANSI/ISEA 107-2020. [ 3 ] Retroreflective sheeting for road signs is categorized by construction and performance as specified by technical standards such as ASTM D4956-11a; [ 4 ] various types give differing levels of retroreflection, effective view angles, and lifespan. [ 5 ] Sheeting has replaced button copy as the predominant type of retroreflector used in roadway signs. There are several grades of retroreflective sheeting, including three major grades: engineer grade, high intensity prismatic (HIP) and diamond grade. Within these categories are further delineations based on material used and visibility distance. Diamond grade typically has the greatest visibility distance of the three major categories. [ 6 ] Barcodes can be printed onto retroreflective sheeting to enable scanning from up to 50 feet away. [ 7 ] The special effects technique of front projection uses retroreflective screens to create false backgrounds for scenes shot in studios. Front projection was used in 2001: A Space Odyssey during the "Dawn of Man" sequence. Other films that have used front projection techniques include Silent Running, Where Eagles Dare and Superman. Star Wars episodes IV, V and VI used retroreflective sheeting for the lightsaber blades. [ 8 ] Reflective tape is used to provide an explicit way to perform optical navigation of autonomous vehicles. For example, strips of retroreflective tape provide navigation inputs to the prototype Hyperloop pod vehicles on the SpaceX Hypertube test track. [ 9 ]
https://en.wikipedia.org/wiki/Retroreflective_sheeting
Retrospective think aloud protocol is a technique used in usability testing, and eye tracking in particular, to gather qualitative information on user intents and reasoning during a test. It is a form of think aloud protocol performed after the user testing session activities, instead of during them. Fairly often the retrospective protocol is stimulated by a visual reminder such as a video replay. In writing studies, the visual reminder may be the writing produced during the think-aloud session.
https://en.wikipedia.org/wiki/Retrospective_think_aloud
Retrosynthetic analysis is a technique for solving problems in the planning of organic syntheses. This is achieved by transforming a target molecule into simpler precursor structures without regard to any potential reactivity/interaction with reagents. Each precursor material is examined using the same method, and the procedure is repeated until simple or commercially available structures are reached. These simpler or commercially available compounds can then be used to devise a synthesis of the target molecule. Retrosynthetic analysis was used as early as 1917 in Robinson's total synthesis of tropinone. [ 1 ] Important conceptual work on retrosynthetic analysis was published by George Vladutz in 1963. [ 2 ] [ 3 ] E. J. Corey formalized and popularized the concept from 1967 onwards in his article General methods for the construction of complex molecules and his book The Logic of Chemical Synthesis. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The power of retrosynthetic analysis becomes evident in the design of a synthesis; its goal is structural simplification. Often, a synthesis has more than one possible synthetic route, and retrosynthesis is well suited for discovering different routes and comparing them in a logical and straightforward fashion. [ 8 ] A database may be consulted at each stage of the analysis to determine whether a component already exists in the literature; if it does, no further exploration of that compound is required, and it can serve as a starting point for the remaining steps of the synthesis. There are both academic and commercial groups developing retrosynthesis tools. With the growing application of machine learning and artificial intelligence in chemistry, many research groups, such as the Coley Group at MIT, and companies, such as Chemical.AI and Reaxys, have started to integrate deep learning into conventional rule-based approaches.
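The recursive procedure just described — disconnect the target, recurse on each precursor, stop when a compound is in the database of available starting materials — can be sketched in a few lines of code. This is only an illustration, not a real retrosynthesis tool: the disconnection table and the stock of commercially available compounds are hypothetical stand-ins for the rule and compound databases a real system would consult (the hard-coded entries anticipate the phenylacetic acid example discussed below).

```python
# Hypothetical table of disconnection rules: each entry maps a target
# (sub)structure to the precursors of one retrosynthetic disconnection.
DISCONNECTIONS = {
    "phenylacetic acid": ["benzyl cyanide"],       # nitrile hydrolysis, reversed
    "benzyl cyanide": ["benzyl bromide", "NaCN"],  # SN2 cyanation, reversed
}

# Hypothetical "database" of simple / commercially available compounds.
COMMERCIAL_STOCK = {"benzyl bromide", "NaCN"}

def retrosynthesize(target, route=None):
    """Recursively decompose `target` until every leaf is in stock.

    Returns a list of (product, precursors) steps, or None if the rule
    table offers no disconnection for some intermediate."""
    route = [] if route is None else route
    if target in COMMERCIAL_STOCK:
        return route                      # simple structure reached: stop
    precursors = DISCONNECTIONS.get(target)
    if precursors is None:
        return None                       # no known disconnection: dead end
    route.append((target, precursors))
    for p in precursors:
        if retrosynthesize(p, route) is None:
            return None
    return route

for product, precursors in retrosynthesize("phenylacetic acid"):
    print(product, "<=", " + ".join(precursors))
```

Running the sketch prints the two-step route back to the stock compounds; a real system would instead search among many competing disconnections at each stage and rank the resulting routes.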
Shown below is a retrosynthetic analysis of phenylacetic acid. In planning the synthesis, two synthons are identified: a nucleophilic "−COOH" group and an electrophilic "PhCH 2 + " group. Neither synthon exists as written; synthetic equivalents corresponding to the synthons are reacted to produce the desired product. In this case, the cyanide anion is the synthetic equivalent for the −COOH synthon, while benzyl bromide is the synthetic equivalent for the benzyl synthon. The synthesis of phenylacetic acid determined by retrosynthetic analysis is thus: In fact, phenylacetic acid has been synthesized from benzyl cyanide, [ 9 ] itself prepared by the analogous reaction of benzyl bromide with sodium cyanide. [ 10 ] Manipulation of functional groups can lead to significant reductions in molecular complexity. Numerous chemical targets have distinct stereochemical demands. Stereochemical transformations (such as the Claisen rearrangement and Mitsunobu reaction) can remove or transfer the desired chirality, thus simplifying the target. Directing a synthesis toward a desirable intermediate can greatly narrow the focus of analysis and allows bidirectional search techniques. The application of transformations to retrosynthetic analysis can lead to powerful reductions in molecular complexity. Unfortunately, powerful transform-based retrons are rarely present in complex molecules, and additional synthetic steps are often needed to establish their presence. Identifying one or more key bond disconnections can reveal key substructures or rearrangement transformations that would otherwise be difficult to spot.
https://en.wikipedia.org/wiki/Retrosynthetic_analysis
Retrotransposons (also called Class I transposable elements) are mobile elements which move in the host genome by converting their transcribed RNA into DNA through reverse transcription. [ 1 ] Thus, they differ from Class II transposable elements, or DNA transposons, in utilizing an RNA intermediate for the transposition and leaving the transposition donor site unchanged. [ 2 ] Through reverse transcription, retrotransposons amplify themselves quickly to become abundant in eukaryotic genomes such as maize (49–78%) [ 3 ] and humans (42%). [ 4 ] They are only present in eukaryotes but share features with retroviruses such as HIV, for example discontinuous reverse transcriptase-mediated extrachromosomal recombination. [ 5 ] [ 6 ] There are two main types of retrotransposons: long terminal repeat (LTR) and non-long terminal repeat (non-LTR) retrotransposons. Retrotransposons are classified based on sequence and method of transposition. [ 7 ] Most retrotransposons in the maize genome are LTR, whereas in humans they are mostly non-LTR. LTR retrotransposons are characterized by their long terminal repeats (LTRs), which are present at both the 5' and 3' ends of their sequences. These LTRs contain the promoters for these transposable elements (TEs), are essential for TE integration, and can vary in length from just over 100 base pairs (bp) to more than 1,000 bp. On average, LTR retrotransposons span several thousand base pairs, with the largest known examples reaching up to 30 kilobases (kb). LTRs are highly functional sequences, and LTR and non-LTR retrotransposons differ greatly in their reverse transcription and integration mechanisms. Non-LTR retrotransposons use a target-primed reverse transcription (TPRT) process, which requires the RNA of the TE to be brought to the cleavage site generated by the retrotransposon's nuclease, where it is reverse transcribed.
In contrast, LTR retrotransposons undergo reverse transcription in the cytoplasm, utilizing two rounds of template switching and forming a pre-integration complex (PIC) composed of double-stranded DNA and an integrase dimer bound to the LTRs. This complex then moves into the nucleus for integration into a new genomic location. LTR retrotransposons typically encode the proteins gag and pol, which may be combined into a single open reading frame (ORF) or separated into distinct ORFs. Similar to retroviruses, the gag protein is essential for capsid assembly and the packaging of the TE's RNA and associated proteins. The pol protein is necessary for reverse transcription and includes these crucial domains: PR (protease), RT (reverse transcriptase), RH (RNase H), and INT (integrase). Additionally, some LTR retrotransposons have an ORF for an envelope (env) protein that is incorporated into the assembled capsid, facilitating attachment to cellular surfaces. An endogenous retrovirus is a retrovirus without pathogenic effects that has been integrated into the host genome by inserting its heritable genetic information into cells that can pass it on to the next generation, like a retrotransposon. [ 8 ] Because of this, endogenous retroviruses share features with both retroviruses and retrotransposons. When retroviral DNA is integrated into the host genome, it can evolve into an endogenous retrovirus that influences the eukaryotic genome. So many endogenous retroviruses have inserted themselves into eukaryotic genomes that they allow insight into the biology of virus–host interactions and into the role of retrotransposons in evolution and disease. Many retrotransposons share with endogenous retroviruses the ability to recognise and integrate into the host genome. However, there is a key difference between retroviruses and retrotransposons, which is indicated by the env gene.
Although similar to the gene carrying out the same function in retroviruses, the env gene is used to determine whether an element is retroviral or a retrotransposon; an element with a functional env gene can evolve from a retrotransposon into a retrovirus. The two groups also differ in the order of sequences in their pol genes. Env genes are found in the LTR retrotransposon types Ty1-copia (Pseudoviridae), Ty3-gypsy (Metaviridae) and BEL/Pao. [ 9 ] [ 8 ] They encode glycoproteins on the retrovirus envelope needed for entry into the host cell. Retroviruses can move between cells, whereas LTR retrotransposons can only move themselves into the genome of the same cell. [ 10 ] Many vertebrate genes were formed from retroviruses and LTR retrotransposons. In some cases an endogenous retrovirus or LTR retrotransposon has the same function and genomic location in different species, suggesting a role in evolution. [ 11 ] Like LTR retrotransposons, non-LTR retrotransposons contain genes for reverse transcriptase, an RNA-binding protein, a nuclease, and sometimes a ribonuclease H domain, [ 12 ] but they lack the long terminal repeats. RNA-binding proteins bind the RNA transposition intermediate, and nucleases are enzymes that break phosphodiester bonds between nucleotides in nucleic acids. Instead of LTRs, non-LTR retrotransposons have short repeats in which the bases may occur in inverted order next to each other, in contrast to the direct repeats of LTR retrotransposons, which are simply one sequence of bases repeated. Although they are retrotransposons, they cannot carry out reverse transcription using an RNA transposition intermediate in the same way as LTR retrotransposons; the two key components are still necessary, but the way they are incorporated into the chemical reactions is different. This is because, unlike LTR retrotransposons, non-LTR retrotransposons do not contain sequences that bind tRNA.
They mostly fall into two types: LINEs (long interspersed nuclear elements) and SINEs (short interspersed nuclear elements). SVA elements are an exception, as they share similarities with both LINEs and SINEs, containing Alu elements and variable numbers of the same repeat. SVAs are shorter than LINEs but longer than SINEs. While historically viewed as "junk DNA", research suggests that in some cases both LINEs and SINEs were incorporated into novel genes to form new functions. [ 13 ] When a LINE is transcribed, the transcript contains an RNA polymerase II promoter, which ensures that the LINE can be copied in whichever location it inserts itself into. RNA polymerase II is the enzyme that transcribes genes into mRNA transcripts. The ends of LINE transcripts are rich in multiple adenines, [ 14 ] bases added at the end of transcription so that LINE transcripts are not degraded. This transcript is the RNA transposition intermediate, which moves from the nucleus into the cytoplasm for translation. Translation yields the proteins of the two coding regions of the LINE, which in turn bind back to the RNA they were translated from. The LINE RNA then moves back into the nucleus to insert into the eukaryotic genome. LINEs insert themselves into regions of the eukaryotic genome that are rich in the bases A and T. At these AT-rich regions, the LINE uses its nuclease to cut one strand of the eukaryotic double-stranded DNA. The adenine-rich sequence in the LINE transcript base-pairs with the cut strand, whose exposed hydroxyl groups flag where the LINE will be inserted; reverse transcriptase recognises these hydroxyl groups and synthesises the LINE retrotransposon where the DNA was cut. As with LTR retrotransposons, this newly inserted LINE contains eukaryotic genome information, so it can be copied and pasted into other genomic regions easily. The information sequences are longer and more variable than those in LTR retrotransposons.
Most LINE copies have variable length at the start because reverse transcription usually stops before DNA synthesis is complete. In some cases this causes the RNA polymerase II promoter to be lost, so those LINEs cannot transpose further. [ 15 ] LINE-1 (L1) retrotransposons make up a significant portion of the human genome, with an estimated 500,000 copies per genome. Transcription of genes encoding human LINE1 is usually inhibited by methyl groups bound to their DNA, a modification carried out by PIWI proteins and DNA methyltransferase enzymes. L1 retrotransposition can disrupt genes by pasting copies inside or near them, which in turn can lead to human disease. In some cases LINE1 retrotransposition forms different chromosome structures that contribute to genetic differences between individuals. [ 17 ] An estimated 80–100 L1s in the reference genome of the Human Genome Project are active, and an even smaller number of these retrotranspose often. L1 insertions have been associated with tumorigenesis by activating cancer-related genes (oncogenes) and diminishing tumor suppressor genes. Each human LINE1 contains two regions from which gene products can be encoded. The first coding region contains a leucine zipper protein involved in protein–protein interactions and a protein that binds to the terminus of nucleic acids. The second coding region has a purine/pyrimidine nuclease, a reverse transcriptase and a protein rich in the amino acids cysteine and histidine. The end of the human LINE1, as with other retrotransposons, is adenine-rich. [ 18 ] [ 19 ] [ 20 ] Human L1 actively retrotransposes in the human genome. A recent study identified 1,708 somatic L1 retrotransposition events, especially in colorectal epithelial cells. These events occur from early embryogenesis onwards, and the retrotransposition rate is substantially increased during colorectal tumourigenesis. [ 21 ] SINEs are much shorter (300 bp) than LINEs.
[ 22 ] They share similarity with genes transcribed by RNA polymerase II, the enzyme that transcribes genes into mRNA transcripts, and with the initiation sequence of RNA polymerase III, the enzyme that transcribes genes into ribosomal RNA, tRNA and other small RNA molecules. [ 23 ] SINEs such as the mammalian MIR elements have a tRNA gene at the start and are adenine-rich at the end, as in LINEs. SINEs do not encode a functional reverse transcriptase protein and rely on other mobile transposons, especially LINEs. [ 24 ] SINEs exploit LINE transposition components even though LINE-binding proteins prefer binding to LINE RNA. SINEs cannot transpose by themselves because they do not encode the proteins needed for transposition. They usually consist of parts derived from tRNA and from LINEs. The tRNA portion contains a promoter for RNA polymerase III, an enzyme of the same family as RNA polymerase II; this ensures that SINE copies are transcribed into RNA for further transposition. The LINE component remains so that LINE-binding proteins can recognise the LINE part of the SINE. Alus are the most common SINE in primates. They are approximately 350 base pairs long, do not encode proteins and can be recognized by the restriction enzyme AluI (hence the name). Their distribution may be important in some genetic diseases and cancers. Copying and pasting Alu RNA requires the Alu's adenine-rich end, with the rest of the sequence bound to a signal. The signal-bound Alu can then associate with ribosomes. LINE RNA associates with the same ribosomes as the Alu, and binding to the same ribosome allows the Alu to interact with the LINE. This simultaneous translation of the Alu element and the LINE allows SINE copy-and-pasting. SVA elements are present at lower levels than SINEs and LINEs in humans. The starts of SVA and Alu elements are similar, followed by repeats and an end similar to an endogenous retrovirus. LINEs bind to sites flanking SVA elements to transpose them.
SVAs are among the youngest transposons in the great ape genome and among the most active and polymorphic in the human population. SVA was created by a fusion between an Alu element, a VNTR (variable number tandem repeat), and an LTR fragment. [ 25 ] Retrotransposons ensure they are not lost by chance by occurring only in genetic material that can be passed on from parent gametes to the next generation. However, LINEs can transpose into the human embryo cells that eventually develop into the nervous system, raising the question whether this LINE retrotransposition affects brain function. LINE retrotransposition is also a feature of several cancers, but it is unclear whether retrotransposition itself causes cancer or is merely a symptom. Uncontrolled retrotransposition is harmful to both the host organism and the retrotransposons themselves, so it has to be regulated. Retrotransposons are regulated by RNA interference, which is carried out by a set of short non-coding RNAs. The short non-coding RNAs interact with the protein Argonaute to degrade retrotransposon transcripts and to change their DNA histone structure so as to reduce their transcription. LTR retrotransposons arose later than non-LTR retrotransposons, possibly from an ancestral non-LTR retrotransposon acquiring an integrase from a DNA transposon. Retroviruses gained additional properties for their virus envelopes by taking the relevant genes from other viruses, building on the LTR retrotransposon machinery. Due to their retrotransposition mechanism, retrotransposons amplify in number quickly, composing 40% of the human genome. The insertion rates for LINE1, Alu and SVA elements are 1/200–1/20, 1/20 and 1/900 respectively. LINE1 insertion rates have varied considerably over the past 35 million years, so they indicate points in genome evolution. Notably, many regions of around 100 kilobases in the maize genome vary due to the presence or absence of retrotransposons.
However, since maize is genetically unusual compared to other plants, it cannot be used to predict retrotransposition in other plants. Mutations caused by retrotransposons include:
https://en.wikipedia.org/wiki/Retrotransposon
Retrotransposon markers are components of DNA which are used as cladistic markers. They assist in determining whether related taxa share common ancestry. The "presence" of a given retrotransposon in related taxa suggests their orthologous integration, a derived condition acquired via common ancestry, while the "absence" of particular elements indicates the plesiomorphic condition prior to integration in more distant taxa. The use of presence/absence analyses to reconstruct the systematic biology of mammals depends on the availability of retrotransposons that were actively integrating before the divergence of a particular species. The analysis of SINEs (Short INterspersed Elements), LINEs (Long INterspersed Elements) or truncated LTRs (Long Terminal Repeats) as molecular cladistic markers represents a particularly interesting complement to DNA sequence and morphological data. The reason for this is that retrotransposons are assumed to represent powerful, noise-poor synapomorphies. [ 1 ] The target sites are relatively unspecific, so the chance of an independent integration of exactly the same element into one specific site in different taxa is small and may even be negligible over evolutionary time scales. Retrotransposon integrations are currently assumed to be irreversible events; this might change, since no biological mechanism has yet been described for the precise re-excision of class I transposons, but see van de Lagemaat et al. (2005). [ 2 ] A clear differentiation between ancestral and derived character state at the respective locus thus becomes possible, as the absence of the introduced sequence can with high confidence be considered ancestral. In combination, the low incidence of homoplasy together with a clear character polarity make retrotransposon integration markers ideal tools for determining the common ancestry of taxa by a shared derived transpositional event.
[ 1 ] [ 3 ] [ 4 ] Examples of phylogenetic studies based on retrotransposon presence/absence data are the definition of whales as members of the order Cetartiodactyla with hippos as their closest living relatives, [ 5 ] hominoid relationships, [ 6 ] the strepsirrhine tree, [ 7 ] the marsupial radiation from South America to Australia, [ 8 ] and placental mammalian evolution. [ 9 ] [ 10 ] Inter-retrotransposon amplified polymorphisms (IRAPs) are alternative retrotransposon-based markers. In this method, PCR oligonucleotide primers face outwards from terminal retrotransposon regions and thus amplify the fragment between two retrotransposon insertions. As retrotransposon integration patterns vary between genotypes, the number and size of the resulting amplicons can be used to differentiate genotypes or cultivars, to measure genetic diversity, or to reconstruct phylogenies. [ 11 ] [ 12 ] [ 13 ] SINEs, which are small in size and often integrate within or next to genes, represent an optimal source for the generation of effective IRAP markers. [ 14 ]
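The presence/absence logic described above can be sketched as a toy computation. The taxa and loci below are entirely made-up illustrative data (real analyses use orthology-verified insertion loci): a locus whose element is present in some, but not all, taxa is read as a candidate synapomorphy grouping exactly those taxa, while an insertion unique to a single taxon is phylogenetically uninformative.

```python
# Hypothetical taxa and presence/absence matrix (1 = retrotransposon
# present at the locus, 0 = absent). Illustrative data only.
TAXA = ["whale", "hippo", "pig", "camel"]

LOCI = {
    "SINE_A": [1, 1, 0, 0],   # shared by whale + hippo
    "SINE_B": [1, 1, 1, 0],   # shared by whale + hippo + pig
    "LINE_C": [0, 0, 0, 1],   # unique to camel: uninformative
}

def synapomorphies(loci, taxa):
    """Return {locus: clade} for insertions shared by >= 2 but not all taxa.

    Presence in a proper subset of taxa is treated as a shared derived
    character (synapomorphy); absence is treated as the ancestral state."""
    clades = {}
    for locus, states in loci.items():
        clade = frozenset(t for t, s in zip(taxa, states) if s == 1)
        if 2 <= len(clade) < len(taxa):
            clades[locus] = clade
    return clades

for locus, clade in sorted(synapomorphies(LOCI, TAXA).items()):
    print(locus, "supports clade", sorted(clade))
```

With the toy matrix, SINE_A supports a whale+hippo clade nested inside the whale+hippo+pig clade supported by SINE_B, mirroring how nested insertion patterns are read as a tree.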
https://en.wikipedia.org/wiki/Retrotransposon_marker
A retrovirus is a type of virus that inserts a DNA copy of its RNA genome into the DNA of a host cell that it invades, thus changing the genome of that cell. [ 2 ] After invading a host cell's cytoplasm, the virus uses its own reverse transcriptase enzyme to produce DNA from its RNA genome, the reverse of the usual pattern, thus retro (backward). The new DNA is then incorporated into the host cell genome by an integrase enzyme, at which point the retroviral DNA is referred to as a provirus. The host cell then treats the viral DNA as part of its own genome, transcribing and translating the viral genes along with the cell's own genes, producing the proteins required to assemble new copies of the virus. Many retroviruses cause serious diseases in humans, other mammals, and birds. [ 3 ] Retroviruses have many subfamilies in three basic groups. The specialized DNA-infiltration enzymes of retroviruses make them valuable research tools in molecular biology, and they have been used successfully in gene delivery systems. [ 6 ] Evidence from endogenous retroviruses (inherited provirus DNA in animal genomes) suggests that retroviruses have been infecting vertebrates for at least 450 million years. [ 7 ] Virions of retroviruses, the viruses in the form of independent particles, are enveloped particles about 100 nm in diameter. The outer lipid envelope consists of glycoprotein. [ 8 ] The virions also contain two identical single-stranded RNA molecules 7–10 kilobases in length. The two molecules are present as a dimer, formed by base pairing between complementary sequences. Interaction sites between the two RNA molecules have been identified as a "kissing stem-loop". [ 3 ] Although virions of different retroviruses do not have the same morphology or biology, all the virion components are very similar. [ 9 ] The main virion components are: The retroviral genome is packaged into viral particles.
The genome within these particles is a dimer of single-stranded, positive-sense, linear RNA molecules. [ 10 ] Retroviruses (and orterviruses in general) follow a layout of 5'–gag–pro–pol–env–3' in the RNA genome. gag and pol encode polyproteins, managing the capsid and replication respectively. The pol region encodes enzymes necessary for viral replication, such as reverse transcriptase, protease and integrase. [ 19 ] Depending on the virus, the genes may overlap or fuse into larger polyprotein chains. Some viruses contain additional genes. The lentivirus genus, the spumavirus genus, the HTLV/bovine leukemia virus (BLV) genus, and a newly introduced fish virus genus are retroviruses classified as complex. These viruses have genes called accessory genes, in addition to the gag, pro, pol and env genes. Accessory genes are located between pol and env, downstream of env (including in the U3 region of the LTR), or overlapping env. While accessory genes have auxiliary roles, they also coordinate and regulate viral gene expression. In addition, some retroviruses may carry genes called oncogenes or onc genes from another class. Retroviruses with these genes (also called transforming viruses) are known for their ability to quickly cause tumors in animals and to transform cultured cells into an oncogenic state. [ 20 ] The polyproteins are cleaved into smaller proteins, each with its own function. The nucleotides encoding them are known as subgenes. [ 18 ] When retroviruses have integrated their own genome into the germ line, their genome is passed on to the following generation. These endogenous retroviruses (ERVs), contrasted with exogenous ones, now make up 5–8% of the human genome. [ 21 ] Most insertions have no known function and are often referred to as "junk DNA".
However, many endogenous retroviruses play important roles in host biology, such as control of gene transcription, cell fusion during placental development in the course of the development of an embryo, and resistance to exogenous retroviral infection. Endogenous retroviruses have also received special attention in research on immunology-related pathologies, such as autoimmune diseases like multiple sclerosis, although endogenous retroviruses have not yet been proven to play any causal role in this class of disease. [ 22 ] While transcription was classically thought to occur only from DNA to RNA, reverse transcriptase transcribes RNA into DNA. The term "retro" in retrovirus refers to this reversal (making DNA from RNA) of the usual direction of transcription. It still obeys the central dogma of molecular biology, which states that information can be transferred from nucleic acid to nucleic acid but cannot be transferred back from protein to either protein or nucleic acid. Reverse transcriptase activity outside of retroviruses has been found in almost all eukaryotes, enabling the generation and insertion of new copies of retrotransposons into the host genome. These inserts are transcribed by enzymes of the host into new RNA molecules that enter the cytosol, and some of these RNA molecules are translated into viral proteins. The proteins encoded by the gag and pol genes are translated from genome-length mRNAs into Gag and Gag–Pol polyproteins: the gag gene is translated into molecules of the capsid protein, and the pol gene into molecules of reverse transcriptase. Retroviruses need far more Gag protein than Pol protein and have developed advanced systems to synthesize the required amount of each: after Gag synthesis, nearly 95 percent of ribosomes terminate translation, while the remainder continue translation to synthesize Gag–Pol.
The env gene is translated from spliced mRNAs in the rough endoplasmic reticulum, where glycosylation begins, into molecules of the envelope protein. When the envelope protein molecules are carried to the Golgi complex, they are divided into a surface glycoprotein and a transmembrane glycoprotein by a host protease. These two glycoprotein products stay closely associated and are transported to the plasma membrane after further glycosylation. [ 3 ] It is important to note that a retrovirus must "bring" its own reverse transcriptase in its capsid; otherwise it is unable to use the enzymes of the infected cell to carry out the task, due to the unusual nature of producing DNA from RNA. [ 23 ] Drugs designed as protease and reverse-transcriptase inhibitors are made to target specific sites and sequences within their respective enzymes. However, these drugs can quickly become ineffective because the gene sequences that code for the protease and the reverse transcriptase mutate quickly. These base changes cause specific codons and sites within the enzymes to change, and thereby avoid drug targeting through the loss of the sites that the drug actually targets. [ citation needed ] Because reverse transcription lacks the usual proofreading of DNA replication, a retrovirus mutates very often. This enables the virus to grow resistant to antiviral pharmaceuticals quickly, and impedes the development of effective vaccines and inhibitors for the retrovirus. [ 24 ] One difficulty faced with some retroviruses, such as the Moloney retrovirus, is the requirement for cells to be actively dividing for transduction. As a result, cells such as neurons are very resistant to infection and transduction by retroviruses. This gives rise to a concern that insertional mutagenesis due to integration into the host genome might lead to cancer or leukemia.
This is unlike Lentivirus , a genus of Retroviridae , whose members are able to integrate their RNA into the genome of non-dividing host cells. [ citation needed ] Two RNA genomes are packaged into each retrovirus particle, but, after an infection, each virus generates only one provirus . [ 25 ] After infection, reverse transcription occurs, and this process is accompanied by recombination . Recombination involves template strand switching between the two genome copies (copy choice recombination) during reverse transcription. From 5 to 14 recombination events per genome occur at each replication cycle. [ 26 ] Genetic recombination appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes. [ 25 ] The DNA formed after reverse transcription (the provirus) is longer than the RNA genome because each terminal carries the U3 - R - U5 sequence called the long terminal repeat (LTR). Thus, the 5' terminal gains an extra U3 sequence, while the 3' terminal gains an extra U5 sequence. [ 15 ] LTRs are able to send signals for vital tasks to be carried out, such as initiation of RNA production or management of the rate of transcription. In this way, LTRs can control replication and hence the entire progress of the viral cycle. [ 28 ] Although located in the nucleus, non-integrated retroviral cDNA is a very weak substrate for transcription. For this reason, an integrated provirus is necessary for permanent and effective expression of retroviral genes. [ 10 ] This DNA can be incorporated into the host genome as a provirus that can be passed on to progeny cells. The retroviral DNA is inserted at random into the host genome. Because of this, it can be inserted into oncogenes . In this way some retroviruses can convert normal cells into cancer cells. Some proviruses remain latent in the cell for a long period of time before being activated by a change in the cell environment. 
[ citation needed ] Studies of retroviruses led to the first demonstrated synthesis of DNA from RNA templates, a fundamental mode for transferring genetic material that occurs in both eukaryotes and prokaryotes . It has been speculated that the RNA to DNA transcription processes used by retroviruses may have first caused DNA to be used as genetic material. In this model, the RNA world hypothesis , cellular organisms adopted the more chemically stable DNA when retroviruses evolved to create DNA from the RNA templates. [ citation needed ] An estimate of the date of evolution of the foamy-like endogenous retroviruses placed the time of the most recent common ancestor at > 450 million years ago . [ 29 ] Gammaretroviral and lentiviral vectors for gene therapy have been developed that mediate stable genetic modification of treated cells by chromosomal integration of the transferred vector genomes. This technology is of use, not only for research purposes, but also for clinical gene therapy aiming at the long-term correction of genetic defects, e.g., in stem and progenitor cells. Retroviral vector particles with tropism for various target cells have been designed. Gammaretroviral and lentiviral vectors have so far been used in more than 300 clinical trials, addressing treatment options for various diseases. [ 6 ] [ 30 ] Retroviral mutations can be developed to make transgenic mouse models to study various cancers and their metastatic models . [ citation needed ] Retroviruses that cause tumor growth include Rous sarcoma virus and mouse mammary tumor virus . Cancer can be triggered by proto-oncogenes that were mistakenly incorporated into proviral DNA or by the disruption of cellular proto-oncogenes. Rous sarcoma virus contains the src gene that triggers tumor formation. Later it was found that a similar gene in cells is involved in cell signaling, which was most likely excised with the proviral DNA. 
Nontransforming viruses can randomly insert their DNA into proto-oncogenes, disrupting the expression of proteins that regulate the cell cycle. The promoter of the provirus DNA can also cause over-expression of regulatory genes. Retroviruses can cause diseases such as cancer and immunodeficiency. If viral DNA is integrated into host chromosomes, it can lead to permanent infections. It is therefore important to understand the body's response to retroviruses. Exogenous retroviruses are especially associated with pathogenic diseases. For example, mice carry mouse mammary tumor virus (MMTV), which is a retrovirus. This virus passes to newborn mice through mammary milk. When they are 6 months old, the mice carrying the virus develop mammary cancer because of the retrovirus. In addition, human T-cell leukemia virus type 1 (HTLV-1) has been present in humans for many years. This retrovirus is estimated to cause leukemia between the ages of 40 and 50. [ 31 ] It has a replicable structure that can induce cancer. In addition to the usual gene sequence of retroviruses, HTLV-1 contains a fourth region, pX. This region encodes the Tax, Rex, p12, p13 and p30 regulatory proteins. The Tax protein initiates the leukemic process and organizes the transcription of all viral genes in the integrated HTLV proviral DNA. [ 32 ] Exogenous retroviruses are infectious RNA- or DNA-containing viruses that are transmitted from one organism to another. In the Baltimore classification system, which groups viruses together based on their manner of messenger RNA synthesis, they are classified into two groups: Group VI: single-stranded RNA viruses with a DNA intermediate in their life cycle, and Group VII: double-stranded DNA viruses with an RNA intermediate in their life cycle. [ citation needed ] All members of Group VI use virally encoded reverse transcriptase , an RNA-dependent DNA polymerase, to produce DNA from the initial virion RNA genome. 
This DNA is often integrated into the host genome, as in the case of retroviruses and pseudoviruses , where it is replicated and transcribed by the host. Group VI includes: The family Retroviridae was previously divided into three subfamilies ( Oncovirinae , Lentivirinae , and Spumavirinae ), but is now divided into two: Orthoretrovirinae and Spumaretrovirinae . The term oncovirus is now commonly used to describe a cancer-causing virus. This family now includes the following genera: Both families in Group VII have DNA genomes contained within the invading virus particles. The DNA genome is transcribed into both mRNA, for use as a transcript in protein synthesis, and pre-genomic RNA, for use as the template during genome replication. Virally encoded reverse transcriptase uses the pre-genomic RNA as a template for the creation of genomic DNA. Group VII includes: The families Belpaoviridae , Metaviridae , Pseudoviridae , Retroviridae , and Caulimoviridae constitute the order Ortervirales . [ 34 ] Endogenous retroviruses are not formally included in this classification system, and are broadly classified into three classes on the basis of relatedness to exogenous genera: Antiretroviral drugs are medications for the treatment of infection by retroviruses, primarily HIV . Different classes of antiretroviral drugs act on different stages of the HIV life cycle . A combination of several (typically three or four) antiretroviral drugs is known as highly active antiretroviral therapy (HAART). [ 36 ] Feline leukemia virus and Feline immunodeficiency virus infections are treated with biologics , including the only immunomodulator currently licensed for sale in the United States, Lymphocyte T-Cell Immune Modulator (LTCI). [ 37 ]
https://en.wikipedia.org/wiki/Retrovirus
Return flow is surface and subsurface water that leaves the field following application of irrigation water. [ 1 ] While irrigation return flows are point sources , in the United States they are expressly exempted from discharge permit requirements under the Clean Water Act . [ 2 ] Return flows generally return to the irrigation centre after a period of about three to four weeks; because of this, farmers usually need to treat the water with bleach to clean it of any organisms that have entered the stream. If this is not done, diseases such as typhoid or cholera could enter the irrigation system and pose a risk of epidemic disease to surrounding towns and cities. Return flow in irrigation is nearly 50% of the water supplied in silty clay soils in tropical countries. The salinity of the return flow water increases as the percentage of return flow decreases. The rest of the water supplied for irrigation evaporates to the atmosphere through evapotranspiration . When groundwater is extracted for irrigation and other uses, most of the return flow seeps back into the ground instead of joining the nearby surface stream. When groundwater is used in excess of recharge from rainfall/ precipitation , the quality of the groundwater deteriorates over time and becomes unfit for irrigation use.
https://en.wikipedia.org/wiki/Return_flow
In telecommunications , return loss is a measure in relative terms of the power of the signal reflected by a discontinuity in a transmission line or optical fiber . This discontinuity can be caused by a mismatch between the termination or load connected to the line and the characteristic impedance of the line. It is usually expressed as a ratio in decibels (dB): R L ( d B ) = 10 log 10 ⁡ P i P r {\displaystyle \mathrm {RL} ({\text{dB}})=10\log _{10}{\frac {P_{\text{i}}}{P_{\text{r}}}}} where RL(dB) is the return loss in dB, P i is the incident power, and P r is the reflected power. Return loss is related to both standing wave ratio (SWR) and reflection coefficient (Γ). Increasing return loss corresponds to lower SWR. Return loss is a measure of how well devices or lines are matched. A match is good if the return loss is high. A high return loss is desirable and results in a lower insertion loss . From a certain perspective "return loss" is a misnomer. The usual function of a transmission line is to convey power from a source to a load with minimal loss. If a transmission line is correctly matched to a load, the reflected power will be zero, no power will be lost due to reflection, and "return loss" will be infinite. Conversely, if the line is terminated in an open circuit, the reflected power will be equal to the incident power; all of the incident power will be lost in the sense that none of it will be transferred to a load, and RL will be zero. Thus the numerical values of RL tend in the opposite sense to that expected of a "loss". As defined above, RL will always be positive, since P r can never exceed P i . However, return loss has historically been expressed as a negative number, and this convention is still widely found in the literature. [ 1 ] Strictly speaking, if a negative sign is ascribed to RL, the ratio of reflected to incident power is implied: R L ′ ( d B ) = 10 log 10 ⁡ P r P i {\displaystyle \mathrm {RL'} ({\text{dB}})=10\log _{10}{\frac {P_{\text{r}}}{P_{\text{i}}}}} where RL′(dB) is the negative of RL(dB). In practice, the sign ascribed to RL is largely immaterial. 
If a transmission line includes several discontinuities along its length, the total return loss will be the sum of the RLs caused by each discontinuity, and provided all RLs are given the same sign, no error or ambiguity will result. Whichever convention is used, it will always be understood that P r can never exceed P i . In metallic conductor systems, reflections of a signal traveling down a conductor can occur at a discontinuity or impedance mismatch. The ratio of the amplitude of the reflected wave V r to the amplitude of the incident wave V i is known as the reflection coefficient . Return loss is the negative of the magnitude of the reflection coefficient in dB. Since power is proportional to the square of the voltage, return loss is given by R L ( d B ) = − 20 log 10 ⁡ | Γ | {\displaystyle \mathrm {RL} ({\text{dB}})=-20\log _{10}\left|\Gamma \right|} where the vertical bars indicate magnitude . Thus, a large positive return loss indicates that the reflected power is small relative to the incident power, which indicates a good impedance match between transmission line and load. If the incident power and the reflected power are expressed in "absolute" decibel units (e.g., dBm ), then the return loss in dB can be calculated as the difference between the incident power P i (in absolute dBm units) and the reflected power P r (also in absolute dBm units): R L ( d B ) = P i ( d B m ) − P r ( d B m ) {\displaystyle \mathrm {RL} ({\text{dB}})=P_{\text{i}}({\text{dBm}})-P_{\text{r}}({\text{dBm}})} In optics (particularly in fiber optics ), return loss is the loss that takes place at discontinuities of refractive index , especially at an air– glass interface such as a fiber endface. At those interfaces, a fraction of the optical signal is reflected back toward the source. This reflection phenomenon is also called " Fresnel reflection loss ", or simply " Fresnel loss ". Fiber optic transmission systems use lasers to transmit signals over optical fiber, and a low optical return loss O R L = 10 log 10 ⁡ ( P i / P r ) {\displaystyle \mathrm {ORL} =10\log _{10}(P_{\text{i}}/P_{\text{r}})} (where P r {\displaystyle P_{\text{r}}} is the reflected power, and P i {\displaystyle P_{\text{i}}} is the incident, or input, power) can cause the laser to stop transmitting correctly. 
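The relations RL(dB) = 10 log₁₀(Pᵢ/Pᵣ) and RL(dB) = −20 log₁₀|Γ| can be checked numerically. The sketch below (plain Python, standard library only) also uses the standard relation SWR = (1 + |Γ|)/(1 − |Γ|), which the article alludes to but does not state explicitly:

```python
import math

def return_loss_db(p_incident, p_reflected):
    """Return loss in dB from incident and reflected power (same units)."""
    return 10 * math.log10(p_incident / p_reflected)

def return_loss_from_gamma(gamma_mag):
    """Return loss in dB from the magnitude of the reflection coefficient."""
    return -20 * math.log10(gamma_mag)

def swr_from_gamma(gamma_mag):
    """Standing wave ratio from |Gamma| (standard relation, |Gamma| < 1)."""
    return (1 + gamma_mag) / (1 - gamma_mag)

# 1 mW incident, 0.01 mW reflected -> 20 dB return loss
print(return_loss_db(1.0, 0.01))    # 20.0

# |Gamma| = 0.1 gives the same 20 dB; the corresponding SWR is ~1.222,
# illustrating that higher return loss corresponds to lower SWR.
print(return_loss_from_gamma(0.1))
print(swr_from_gamma(0.1))
```

Note how both routes agree: a power ratio of 100 (20 dB) corresponds to a voltage reflection coefficient of 0.1, since power goes as the square of voltage.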
The measurement of ORL is becoming more important in the characterization of optical networks as the use of wavelength-division multiplexing increases. These systems use lasers that have a lower tolerance for ORL and introduce elements into the network that are located in close proximity to the laser.
https://en.wikipedia.org/wiki/Return_loss
Return on modelling effort ( ROME ) is the benefit resulting from a (supplementary) effort to create and/or improve a model. [ 1 ] [ 2 ] In engineering, modelling always serves a particular goal. For example, the lightning protection of aircraft can be modelled as an electrical circuit, in order to predict whether the protection will still work in 30 years, given the ageing of its electrical components. More and more effort can be put into making this model predict reality perfectly. However, this perfection comes at a price: researchers invest time and money in improving the model. Like return on investment (ROI), ROME is a metric for the usefulness of further modelling. It may therefore serve as a 'stopping criterion'. [ 2 ] Typically, researchers will pull towards continuing modelling, while management will pull towards stopping modelling. Being explicit about the costs and benefits of continued modelling may help to make informed decisions that are understood by both sides. [ citation needed ] Continuous communication between model developers and model users increases the probability of models actually being put to profit. [ 3 ] ROME is a metric which can be evaluated wherever modelling is performed with a quantifiable goal. Examples include: The initiative "Models at Work" studies the creation, management and use of domain models in scientific and industrial practice, aiming at a diversity of goals, varying from (as truthful as possible) representation of the conceptual structure of the domain that is modelled, via animation, simulation, execution and gamification, to automated (logic-based) reasoning. [ 8 ]
https://en.wikipedia.org/wiki/Return_on_modeling_effort
A return period , also known as a recurrence interval or repeat interval , is an average time or an estimated average time between events such as earthquakes , floods , [ 1 ] landslides , [ 2 ] or river discharge flows. The reciprocal of the return period is called the frequency of occurrence . It is a statistical measurement typically based on historic data over an extended period, and is usually used for risk analysis. Examples include deciding whether a project should be allowed to go forward in a zone of a certain risk, or designing structures to withstand events with a certain return period. The following analysis assumes that the probability of the event occurring does not vary over time and is independent of past events. Recurrence interval = n + 1 m {\displaystyle ={n+1 \over m}} where n is the number of years on record and m is the rank of the observed event when ordered by descending magnitude. For floods, the event may be measured in terms of m 3 /s or height; for storm surges , in terms of the height of the surge, and similarly for other events. This is Weibull's formula. [ 4 ] : 12 [ 5 ] [ failed verification ] The theoretical return period between occurrences is the inverse of the average frequency of occurrence. For example, a 10-year flood has a 1/10 = 0.1 or 10% chance of being exceeded in any one year, and a 50-year flood has a 0.02 or 2% chance of being exceeded in any one year. Despite the connotations of the name "return period", this does not mean that a 100-year flood will happen regularly every 100 years, or only once in 100 years. In any given 100-year period, a 100-year event may occur once, twice, more, or not at all, and each outcome has a probability that can be computed as below. Also, the estimated return period below is a statistic : it is computed from a set of data (the observations), as distinct from the theoretical value in an idealized distribution. One does not actually know that a certain or greater magnitude happens with 1% probability, only that it has been observed exactly once in 100 years. 
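Weibull's formula above can be illustrated with a short sketch in plain Python. The flow values here are invented purely for illustration; the point is the ranking and the (n + 1)/m assignment:

```python
# Hypothetical annual peak flows (m^3/s) over n = 9 years of record.
peak_flows = [120, 95, 310, 180, 150, 220, 80, 260, 130]

n = len(peak_flows)
ranked = sorted(peak_flows, reverse=True)  # m = 1 for the largest event

for m, flow in enumerate(ranked, start=1):
    recurrence = (n + 1) / m  # Weibull's formula
    print(f"rank {m}: {flow} m^3/s -> recurrence interval {recurrence:.1f} years")

# The largest observed flow (310 m^3/s) is assigned (9 + 1)/1 = 10 years;
# the median-ranked event gets (9 + 1)/5 = 2 years.
```

This also shows why the estimated return period is only a statistic: the largest event in a 9-year record is labelled a "10-year" event regardless of how rare it truly is.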
That distinction is significant because there are few observations of rare events: for instance, if observations go back 400 years, the most extreme event (a 400-year event by the statistical definition) may later be classed, on longer observation, as a 200-year event (if a comparable event immediately occurs) or a 500-year event (if no comparable event occurs for a further 100 years). Further, one cannot determine the size of a 1000-year event based on such records alone but instead must use a statistical model to predict the magnitude of such an (unobserved) event. Even if the historic return interval is much less than 1000 years, if there are a number of less-severe events of a similar nature recorded, the use of such a model is likely to provide useful information to help estimate the future return interval. One would like to be able to interpret the return period in probabilistic models. The most logical interpretation for this is to take the return period as the counting rate in a Poisson distribution , since it is the expectation value of the rate of occurrences. An alternative interpretation is to take it as the probability for a yearly Bernoulli trial in the binomial distribution . That is disfavoured because each year does not represent an independent Bernoulli trial but is an arbitrary measure of time. This question is mainly academic as the results obtained will be similar under both the Poisson and binomial interpretations. The probability mass function of the Poisson distribution is P ( r ) = ( t / T ) r r ! e − t / T {\displaystyle P(r)={\frac {(t/T)^{r}}{r!}}e^{-t/T}} where r {\displaystyle r} is the number of occurrences the probability is calculated for, t {\displaystyle t} the time period of interest, T {\displaystyle T} is the return period and μ = 1 / T {\displaystyle \mu =1/T} is the counting rate. The probability of no-occurrence can be obtained simply by considering the case r = 0 {\displaystyle r=0} . The formula is P ( 0 ) = e − t / T {\displaystyle P(0)=e^{-t/T}} Consequently, the probability of exceedance (i.e. 
the probability of an event "stronger" than the event with return period T {\displaystyle T} to occur at least once within the time period of interest) is P e = 1 − e − t / T {\displaystyle P_{e}=1-e^{-t/T}} Note that for any event with return period T {\displaystyle T} , the probability of exceedance within an interval equal to the return period (i.e. t = T {\displaystyle t=T} ) is independent of the return period and is equal to 1 − exp ⁡ ( − 1 ) ≈ 63.2 % {\displaystyle 1-\exp(-1)\approx 63.2\%} . This means, for example, that there is a 63.2% probability of a flood larger than the 50-year return flood occurring within any period of 50 years. If the return period of occurrence T {\textstyle T} is 243 years ( μ = 0.0041 {\textstyle \mu =0.0041} ) then the probability of exactly one occurrence in ten years is P ( 1 ) = ( 10 / 243 ) e − 10 / 243 ≈ 0.0395 {\displaystyle P(1)=(10/243)\,e^{-10/243}\approx 0.0395} In a given period of n × τ {\displaystyle n\times \tau } for a unit time τ {\displaystyle \tau } (e.g. τ = 1 year {\displaystyle \tau =1{\text{year}}} ), the probability of a given number r of events of an event with return period T {\displaystyle T} is given by the binomial distribution as follows: P ( r ) = ( n r ) μ r ( 1 − μ ) n − r {\displaystyle P(r)={\binom {n}{r}}\mu ^{r}(1-\mu )^{n-r}} where μ = τ / T {\displaystyle \mu =\tau /T} is the probability of occurrence per unit time. This is valid only if the probability of more than one occurrence per unit time τ {\displaystyle \tau } is zero. Often that is a close approximation, in which case the probabilities yielded by this formula hold approximately. If n → ∞ , μ → 0 {\displaystyle n\rightarrow \infty ,\mu \rightarrow 0} in such a way that n μ → λ {\displaystyle n\mu \rightarrow \lambda } then the binomial distribution converges to the Poisson distribution with parameter λ {\displaystyle \lambda } . For example, given that the return period of an event is 100 years ( μ = 1 / 100 {\displaystyle \mu =1/100} ), the probability that such an event occurs exactly once in 10 successive years is P ( 1 ) = ( 10 1 ) ( 0.01 ) 1 ( 0.99 ) 9 ≈ 0.0914 {\displaystyle P(1)={\binom {10}{1}}(0.01)^{1}(0.99)^{9}\approx 0.0914} Return period is useful for risk analysis (such as natural, inherent, or hydrologic risk of failure). [ 6 ] When dealing with structure design expectations, the return period is useful in calculating the riskiness of the structure. 
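The Poisson and binomial interpretations can be compared directly in a few lines of plain Python, using P(r) = (t/T)^r e^{−t/T}/r! and P(r) = C(n, r) μ^r (1 − μ)^{n−r} with μ = 1/T:

```python
import math

def poisson_prob(r, t, T):
    """P(exactly r events in t years) under the Poisson interpretation."""
    lam = t / T
    return lam**r * math.exp(-lam) / math.factorial(r)

def binomial_prob(r, n, T):
    """P(exactly r events in n yearly Bernoulli trials with p = 1/T)."""
    p = 1.0 / T
    return math.comb(n, r) * p**r * (1 - p)**(n - r)

# Probability of exceedance within one return period: 1 - exp(-1) ~ 63.2%
print(1 - math.exp(-1))                  # ~0.632

# Exactly one 100-year event in 10 years; the two interpretations
# agree closely, as the text claims:
print(poisson_prob(1, t=10, T=100))      # ~0.0905
print(binomial_prob(1, n=10, T=100))     # ~0.0914
```

The closeness of 0.0905 and 0.0914 illustrates why the Poisson-versus-binomial question is mainly academic.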
The probability of at least one event that exceeds design limits during the expected life of the structure is the complement of the probability that no events occur which exceed design limits. The equation for assessing this parameter is R = 1 − ( 1 − 1 T ) n {\displaystyle R=1-\left(1-{\frac {1}{T}}\right)^{n}} where R is the risk of failure, T is the return period of the design event, and n is the expected life of the structure in years.
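The design-risk relation R = 1 − (1 − 1/T)^n is easy to evaluate; for instance, a structure with a 30-year design life sized for the 100-year event (a sketch in Python):

```python
def risk_of_failure(T, n):
    """Probability that an event with return period T years is exceeded
    at least once during an n-year design life: 1 - (1 - 1/T)**n."""
    return 1 - (1 - 1 / T)**n

# 100-year design event, 30-year design life: ~26% chance of at least
# one exceedance during the structure's life.
print(risk_of_failure(T=100, n=30))   # ~0.260
```

This is the same complement-of-no-occurrence logic as above: (1 − 1/T)^n is the probability of zero exceedances in n independent years.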
https://en.wikipedia.org/wiki/Return_period
Reuse of human excreta is the safe, beneficial use of treated human excreta after applying suitable treatment steps and risk management approaches that are customized for the intended reuse application. Beneficial uses of the treated excreta may focus on using the plant-available nutrients (mainly nitrogen, phosphorus and potassium) that are contained in the treated excreta. They may also make use of the organic matter and energy contained in the excreta. To a lesser extent, reuse of the excreta's water content might also take place, although this is better known as water reclamation from municipal wastewater . The intended reuse applications for the nutrient content may include: soil conditioner or fertilizer in agriculture or horticultural activities. Other reuse applications, which focus more on the organic matter content of the excreta, include use as a fuel source or as an energy source in the form of biogas . There is a large and growing number of treatment options to make excreta safe and manageable for the intended reuse option. [ 1 ] Options include urine diversion and dehydration of feces ( urine-diverting dry toilets ), composting ( composting toilets or external composting processes ), sewage sludge treatment technologies and a range of fecal sludge treatment processes. They all achieve various degrees of pathogen removal and reduction in water content for easier handling. Pathogens of concern are enteric bacteria, virus, protozoa, and helminth eggs in feces. [ 2 ] As the helminth eggs are the pathogens that are the most difficult to destroy with treatment processes, they are commonly used as an indicator organism in reuse schemes. Other health risks and environmental pollution aspects that need to be considered include spreading micropollutants , pharmaceutical residues and nitrate in the environment which could cause groundwater pollution and thus potentially affect drinking water quality . 
There are several "human excreta derived fertilizers" which vary in their properties and fertilizing characteristics, for example: urine, dried feces, composted feces, fecal sludge, sewage , sewage sludge . The nutrients and organic matter which are contained in human excreta or in domestic wastewater (sewage) have been used in agriculture in many countries for centuries. However, this practice is often carried out in an unregulated and unsafe manner in developing countries . World Health Organization Guidelines from 2006 have set up a framework describing how this reuse can be done safely by following a "multiple barrier approach". [ 3 ] Such barriers might be selecting a suitable crop, farming methods, methods of applying the fertilizer and education of the farmers. Human excreta, fecal sludge and wastewater are often referred to as wastes (see also human waste ). Within the concept of a circular economy in sanitation, an alternative term that is being used is "resource flows". [ 4 ] : 10 The final outputs from the sanitation treatment systems can be called "reuse products" or "other outputs". [ 4 ] : 10 These reuse products are general fertilizers, soil conditioners , biomass , water, or energy. Reuse of human excreta focuses on the nutrient and organic matter content of human excreta unlike reuse of wastewater which focuses on the water content. An alternative term is "use of human excreta" rather than " reuse " as strictly speaking it is the first use of human excreta, not the second time that it is used. [ 3 ] The resources available in wastewater and human excreta include water, plant nutrients , organic matter and energy content. Sanitation systems that are designed for safe and effective recovery of resources can play an important role in a community's overall resource management . 
Recovering the resources embedded in excreta and wastewater (like nutrients, water and energy) contributes to achieving Sustainable Development Goal 6 and other sustainable development goals . [ 5 ] It can be efficient to combine wastewater and human excreta with other organic waste such as manure , and food and crop waste for the purposes of resource recovery. [ 6 ] There is a large and growing number of treatment options to make excreta safe and manageable for the intended reuse option. [ 1 ] Various technologies and practices, ranging in scale from a single rural household to a city, can be used to capture potentially valuable resources and make them available for safe, productive uses that support human well-being and broader sustainability . Some treatment options are listed below but there are many more: [ 1 ] A guide by the Swedish University of Agricultural Sciences provides a list of treatment technologies for sanitation resource recovery: Vermicomposting and vermifiltration , black soldier fly composting, algae cultivation, microbial fuel cell , nitrification and distillation of urine, struvite precipitation, incineration, carbonization , solar drying, membranes, filters, alkaline dehydration of urine, [ 7 ] [ 8 ] ammonia sanitization/urea treatment, and lime sanitization. [ 4 ] Further research involves UV advanced oxidation processes in order to degrade organic pollutants present in the urine before reuse [ 9 ] or the dehydration of urine by using acids. [ 10 ] The most common reuse of excreta is as fertilizer and soil conditioner in agriculture. This is also called a "closing the loop" approach for sanitation with agriculture. It is a central aspect of the ecological sanitation approach. Reuse options depend on the form of the excreta that is being reused: it can be either excreta on its own or mixed with some water (fecal sludge) [ 11 ] or mixed with much water (domestic wastewater or sewage). 
The most common types of excreta reuse include: [ 6 ] Resource recovery from fecal sludge can take many forms, including as a fuel, soil amendment, building material, protein, animal fodder, and water for irrigation. [ 11 ] Reuse products that can be recovered from sanitation systems include: Stored urine , concentrated urine, sanitized blackwater , digestate, nutrient solutions, dry urine, struvite, dried feces, pit humus, dewatered sludge, compost, ash from sludge, biochar , nutrient-enriched filter material, algae , macrophytes , black soldier fly larvae, worms, irrigation water , aquaculture , and biogas. [ 4 ] There is an untapped fertilizer resource in human excreta. In Africa, for example, the theoretical quantities of nutrients that can be recovered from human excreta are comparable with all current fertilizer use on the continent. [ 6 ] : 16 Therefore, reuse can support increased food production and also provide an alternative to chemical fertilizers, which is often unaffordable to small-holder farmers. However, nutritional value of human excreta largely depends on dietary input. [ 2 ] Mineral fertilizers are made from mining activities and can contain heavy metals. Phosphate ores contain heavy metals such as cadmium and uranium, which can reach the food chain via mineral phosphate fertilizer. [ 12 ] This does not apply to excreta-based fertilizers (unless the human's food was contaminated beyond safe limits to start with), which is an advantage. Fertilizing elements of organic fertilizers are mostly bound in carbonaceous reduced compounds. If these are already partially oxidized as in the compost, the fertilizing minerals are adsorbed on the degradation products ( humic acids ) etc. Thus, they exhibit a slow-release effect and are usually less rapidly leached compared to mineral fertilizers. [ 13 ] [ 14 ] Urine contains large quantities of nitrogen (mostly as urea ), as well as reasonable quantities of dissolved potassium . 
[ 15 ] The nutrient concentrations in urine vary with diet. [ 16 ] In particular, the nitrogen content in urine is related to the quantity of protein in the diet: a high-protein diet results in high urea levels in urine. The nitrogen content in urine is proportional to the total food protein in the person's diet, and the phosphorus content is proportional to the sum of total food protein and vegetal food protein. [ 17 ] : 5 Urine's eight main ionic species (> 0.1 meq/L) are the cations Na + , K + , NH 4 + , and Ca 2+ , and the anions Cl − , SO 4 2− , PO 4 3− , and HCO 3 − . [ 18 ] Urine typically contains 70% of the nitrogen and more than half the potassium found in sewage, while making up less than 1% of the overall volume. [ 15 ] The amount of urine produced by an adult is around 0.8 to 1.5 L per day. [ 3 ] Applying urine as fertilizer has been called "closing the cycle of agricultural nutrient flows" or ecological sanitation or ecosan . Urine fertilizer is usually applied diluted with water because undiluted urine can chemically burn the leaves or roots of some plants, causing plant injury, [ 19 ] particularly if the soil moisture content is low. The dilution also helps to reduce odor development following application. When diluted with water (at a 1:5 ratio for container-grown annual crops with fresh growing medium each season, or a 1:8 ratio for more general use), it can be applied directly to soil as a fertilizer. [ 20 ] [ 21 ] The fertilization effect of urine has been found to be comparable to that of commercial nitrogen fertilizers. [ 22 ] [ 23 ] Urine may contain pharmaceutical residues ( environmental persistent pharmaceutical pollutants ). [ 24 ] Concentrations of heavy metals such as lead , mercury , and cadmium , commonly found in sewage sludge, are much lower in urine. [ 25 ] Typical design values for nutrients excreted with urine are: 4 kg nitrogen per person per year, 0.36 kg phosphorus per person per year and 1.0 kg potassium per person per year. 
[ 17 ] : 5 Based on the quantity of 1.5 L urine per day (or 550 L per year), the concentrations of macronutrients are as follows: 7.3 g/L N; 0.67 g/L P; 1.8 g/L K. [ 17 ] : 5 [ 26 ] : 11 These are design values, but the actual values vary with diet. [ 15 ] [ a ] Urine's nutrient content, when expressed with the international fertilizer convention of N:P 2 O 5 :K 2 O, is approximately 7:1.5:2.2. [ 26 ] [ b ] Since urine is rather diluted as a fertilizer compared to dry manufactured nitrogen fertilizers such as diammonium phosphate , the relative transport costs for urine are high, as a lot of water needs to be transported. [ 26 ] The general limitations to using urine as fertilizer depend mainly on the potential for buildup of excess nitrogen (due to the high ratio of that macronutrient), [ 20 ] and inorganic salts such as sodium chloride , which are also part of the wastes excreted by the renal system . Over-fertilization with urine or other nitrogen fertilizers can result in too much ammonia for plants to absorb, acidic conditions, or other phytotoxicity . [ 24 ] Important parameters to consider while fertilizing with urine include salinity tolerance of the plant, soil composition, addition of other fertilizing compounds, and quantity of rainfall or other irrigation. [ 16 ] It was reported in 1995 that urine nitrogen gaseous losses were relatively high and plant uptake lower than with labelled ammonium nitrate . [ citation needed ] In contrast, phosphorus was utilized at a higher rate than soluble phosphate. [ 18 ] Urine can also be used safely as a source of nitrogen in carbon-rich compost. [ 21 ] Human urine can be collected with sanitation systems that utilize urinals or urine diversion toilets. If urine is to be separated and collected for use as a fertilizer in agriculture, then this can be done with sanitation systems that utilize waterless urinals, urine-diverting dry toilets (UDDTs) or urine diversion flush toilets. 
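The per-litre concentrations quoted above follow directly from the annual design loads divided by the annual urine volume; a quick arithmetic check in plain Python (values taken from the text, results rounded):

```python
annual_urine_litres = 550  # ~1.5 L per day per person

# Design loads per person per year, in grams (from the text)
loads_g = {"N": 4000, "P": 360, "K": 1000}

for nutrient, grams in loads_g.items():
    conc = grams / annual_urine_litres  # g/L
    print(f"{nutrient}: {conc:.2f} g/L")

# N: 7.27 g/L, P: 0.65 g/L, K: 1.82 g/L -- consistent with the quoted
# rounded design values of roughly 7.3, 0.67 and 1.8 g/L.
```

The small differences from the quoted figures (e.g. 0.65 vs 0.67 g/L P) reflect rounding in the source's design values, not a different calculation.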
[ 26 ] During storage, the urea in urine is rapidly hydrolyzed by urease, creating ammonia. [ 28 ] Further treatment can be done with collected urine to stabilize the nitrogen and concentrate the fertilizer. [ 29 ] One low-tech solution to odor is to add citric acid or vinegar to the urine collection container, so that the urease is inactivated and any ammonia that does form is less volatile. [ 27 ] Besides concentration, simple chemical processes can be used to extract pure substances: nitrogen as nitrates (similar to medieval nitre beds) and phosphorus as struvite. [ 29 ] The health risks of using urine as a source of fertilizer are generally regarded as negligible, especially when it is dispersed in soil rather than on the part of a plant that is consumed. Urine can be distributed via perforated hoses buried ~10 cm under the surface of the soil among crop plants, thus minimizing the risk of odors, loss of nutrients due to volatilization, or transmission of pathogens. [ 30 ] There are potentially more environmental problems (such as eutrophication resulting from the influx of nutrient-rich effluent into aquatic or marine ecosystems) and higher energy consumption when urine is treated as part of sewage in sewage treatment plants than when it is used directly as a fertilizer resource. [ 31 ] [ 32 ] In developing countries, the use of raw sewage or fecal sludge has been common throughout history, yet the application of pure urine to crops was still quite rare in 2021. This is despite many publications that have advocated the use of urine as a fertilizer since at least 2001. [ 22 ] [ 33 ] Since about 2011, the Bill and Melinda Gates Foundation has been providing funding for research involving sanitation systems that recover the nutrients in urine. [ 34 ] According to the 2004 "proposed Swedish default values", an average Swedish adult excretes 0.55 kg nitrogen, 0.18 kg phosphorus, and 0.36 kg potassium as feces per year.
The yearly mass is 51 kg wet and 11 kg dried, so that wet feces would have an NPK% value of 1.1:0.8:0.9. [ 17 ] : 5 [ a ] [ c ] Reuse of dried feces from urine-diverting dry toilets after post-treatment can result in increased crop production through the fertilizing effects of nitrogen, phosphorus and potassium, and in improved soil fertility through organic carbon. [ 35 ] Compost derived from composting toilets (where organic kitchen waste is in some cases also added to the composting toilet) has, in principle, the same uses as compost derived from other organic waste products, such as sewage sludge or municipal organic waste. One limiting factor may be legal restrictions due to the possibility that pathogens remain in the compost. In any case, the use of compost from composting toilets in one's own garden can be regarded as safe and is the main method of use for such compost. Hygienic measures for handling the compost, e.g. wearing gloves and boots, must be applied by all those people who are exposed to it. Some of the urine will be part of the compost, although some urine will be lost via leachate and evaporation. Urine can contain up to 90 percent of the nitrogen, up to 50 percent of the phosphorus, and up to 70 percent of the potassium present in human excreta. [ 36 ] The nutrients in compost from a composting toilet have a higher plant availability than dried feces from a typical urine-diverting dry toilet. The two processes are not mutually exclusive, however: some composting toilets do divert urine (to avoid over-saturation with water and nitrogen), and dried feces can still be composted. [ 37 ] Fecal sludge is defined as "coming from onsite sanitation technologies, and has not been transported through a sewer." Examples of onsite technologies include pit latrines, unsewered public ablution blocks, septic tanks and dry toilets. Fecal sludge can be treated by a variety of methods to render it suitable for reuse in agriculture.
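The NPK% figure for wet feces follows directly from the Swedish default values and the 51 kg/yr wet mass; a quick check, using the same oxide convention (P reported as P2O5, K as K2O) as for the urine figures above, reproduces it. This is an illustrative calculation, not from the cited sources:

```python
# Swedish default values for feces (kg per person per year), from the text.
N_KG, P_KG, K_KG = 0.55, 0.18, 0.36
WET_MASS_KG = 51.0  # yearly wet mass of feces

# Fertilizer convention expresses P as P2O5 and K as K2O (molar-mass ratios).
P_TO_P2O5 = 2.291
K_TO_K2O = 1.205

n_pct = 100 * N_KG / WET_MASS_KG                 # ~1.1% N
p2o5_pct = 100 * P_KG * P_TO_P2O5 / WET_MASS_KG  # ~0.8% P2O5
k2o_pct = 100 * K_KG * K_TO_K2O / WET_MASS_KG    # ~0.9% K2O

print(f"NPK% of wet feces = {n_pct:.1f}:{p2o5_pct:.1f}:{k2o_pct:.1f}")
# prints "NPK% of wet feces = 1.1:0.8:0.9"
```

The same calculation on the 11 kg dried mass would give correspondingly higher percentages.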
These include (usually carried out in combination) dewatering, thickening, drying (in sludge drying beds), composting, pelletization, and anaerobic digestion. [ 38 ] Reclaimed water can be reused for irrigation, industrial uses, replenishing natural water courses, water bodies, aquifers, and other potable and non-potable uses. These applications, however, usually focus on the water aspect, not on the reuse of nutrients and organic matter, which is the focus of "reuse of excreta". When wastewater is reused in agriculture, its nutrient (nitrogen and phosphorus) content may be useful for additional fertilizer application. [ 39 ] Work by the International Water Management Institute and others has led to guidelines on how reuse of municipal wastewater in agriculture for irrigation and fertilizer application can be safely implemented in low-income countries. [ 40 ] [ 3 ] The use of treated sewage sludge (after treatment also called " biosolids ") as a soil conditioner or fertilizer is possible, but it is a controversial topic in some countries (such as the USA and some countries in Europe) due to the chemical pollutants it may contain, such as heavy metals and environmental persistent pharmaceutical pollutants. Northumbrian Water in the United Kingdom uses two biogas plants to produce what the company calls "poo power", using sewage sludge to produce energy and generate income. Biogas production has reduced its pre-1996 electricity expenditure of 20 million GBP by about 20%. Severn Trent and Wessex Water also have similar projects. [ 41 ] Sludge treatment liquids (after anaerobic digestion) can be used as an input source for a process to recover phosphorus in the form of struvite for use as fertilizer. For example, the Canadian company Ostara Nutrient Recovery Technologies is marketing a process based on controlled chemical precipitation of phosphorus in a fluidized bed reactor that recovers struvite in the form of crystalline pellets from sludge dewatering streams.
The resulting crystalline product is sold to the agriculture, turf, and ornamental plants sectors as fertilizer under the registered trade name "Crystal Green". [ 42 ] In the case of phosphorus in particular, reuse of excreta is one known method of recovering phosphorus to mitigate the looming shortage (also known as " peak phosphorus ") of economically mined phosphorus. Mined phosphorus is a limited resource that is being used up for fertilizer production at an ever-increasing rate, which threatens worldwide food security. Therefore, phosphorus from excreta-based fertilizers is an interesting alternative to fertilizers containing mined phosphate ore. [ 43 ] Research into how to make reuse of urine and feces safe in agriculture has been carried out in Sweden since the 1990s. [ 16 ] In 2006 the World Health Organization (WHO) provided guidelines on the safe reuse of wastewater, excreta, and greywater. [ 3 ] The multiple barrier concept for reuse, which is the cornerstone of this publication, has led to a clear understanding of how excreta reuse can be done safely. The concept is also used in water supply and food production, and is generally understood as a series of treatment steps and other safety precautions to prevent the spread of pathogens. The degree of treatment required for excreta-based fertilizers before they can safely be used in agriculture depends on a number of factors. It mainly depends on which other barriers will be put in place according to the multiple barrier concept. Such barriers might be selecting a suitable crop, farming methods, methods of applying the fertilizer, education of the farmers, and so forth.
[ 44 ] For example, in the case of urine-diverting dry toilets, secondary treatment of dried feces can be performed at community level rather than at household level and can include thermophilic composting, where fecal material is composted at over 50 °C; prolonged storage for 1.5 to 2 years; chemical treatment with ammonia from urine to inactivate the pathogens; solar sanitation for further drying; or heat treatment to eliminate pathogens. [ 45 ] [ 35 ] Exposure of farm workers to untreated excreta constitutes a significant health risk due to its pathogen content. There can be large amounts of enteric bacteria, viruses, protozoa, and helminth eggs in feces. [ 2 ] This risk also extends to consumers of crops fertilized with untreated excreta. Therefore, excreta needs to be appropriately treated before reuse, and health aspects need to be managed for all reuse applications, as the excreta can contain pathogens even after treatment. Temperature is a treatment parameter with an established relation to pathogen inactivation for all pathogen groups: temperatures above 50 °C (122 °F) have the potential to inactivate most pathogens. [ 4 ] : 101 Therefore, thermal sanitization is utilized in several technologies, such as thermophilic composting and thermophilic anaerobic digestion, and potentially in sun drying. Alkaline conditions (pH value above 10) can also deactivate pathogens. This can be achieved with ammonia sanitization or lime treatment. [ 4 ] : 101 The treatment of excreta and wastewater for pathogen removal can take place at several points along the sanitation chain. As an indicator organism in reuse schemes, helminth eggs are commonly used, as these organisms are the most difficult to destroy in most treatment processes. The multiple barrier approach is recommended, whereby e.g. lower levels of treatment may be acceptable when combined with other post-treatment barriers along the sanitation chain.
[ 3 ] Excreta from humans contains hormones and pharmaceutical drug residues which could in theory enter the food chain via fertilized crops; however, such residues are also not fully removed by conventional wastewater treatment plants, and can enter drinking water sources via household wastewater (sewage). [ 26 ] In fact, the pharmaceutical residues in the excreta are degraded better in terrestrial systems (soil) than in aquatic systems. [ 26 ] Only a fraction of nitrogen-based fertilizers is converted to plant matter. The remainder accumulates in the soil or is lost as run-off. [ 46 ] This also applies to excreta-based fertilizer, since it also contains nitrogen. Excessive nitrogen which is not taken up by plants is transformed into nitrate, which is easily leached. [ 47 ] High application rates combined with the high water-solubility of nitrate lead to increased runoff into surface water as well as leaching into groundwater. [ 48 ] [ 49 ] [ 50 ] Nitrate levels above 10 mg/L (10 ppm) in groundwater can cause ' blue baby syndrome ' (acquired methemoglobinemia ). [ 51 ] The nutrients, especially nitrates, in fertilizers can cause problems for ecosystems and for human health if they are washed off into surface water or leached through the soil into groundwater. Apart from use in agriculture, there are other possible uses of excreta. For example, in the case of fecal sludge, it can be treated and then serve as a source of protein ( black soldier fly process), fodder, fish food, building materials, and biofuels (biogas from anaerobic digestion, incineration or co-combustion of dried sludge, pyrolysis of fecal sludge, and biodiesel from fecal sludge). [ 38 ] [ 6 ] Pilot-scale research in Uganda and Senegal has shown that it is viable to use dried feces as fuel for combustion in industry, provided it has been dried to a minimum of 28% dry solids.
[ 52 ] Dried sewage sludge can be burned in sludge incineration plants to generate heat and electricity (the waste-to-energy process is one example). Resource recovery of fecal sludge as a solid fuel has been found to have high market potential in Sub-Saharan Africa. [ 11 ] Urine has also been investigated as a potential source of hydrogen fuel. [ 53 ] [ 54 ] Urine was found to be a suitable wastewater for high-rate hydrogen production in a microbial electrolysis cell (MEC). [ 53 ] Small-scale biogas plants are being utilized in many countries, including Ghana [ 55 ] and Vietnam, [ 56 ] among many others. [ 57 ] Larger centralized systems are being planned that mix animal and human feces to produce biogas. [ 52 ] Biogas is also produced during sewage sludge treatment with anaerobic digestion. Here, it can be used for heating the digesters and for generating electricity. [ 58 ] Biogas is an important waste-to-energy resource which plays a major role in reducing environmental pollution and, most importantly, in reducing the greenhouse gas emissions caused by the waste. Utilization of raw materials such as human waste for biogas generation is considered beneficial because it does not require additional starters such as microorganism seeds for methane production, and a supply of microorganisms occurs continuously during the feeding of raw materials. [ 59 ] Combination outhouses/feeding troughs have been used in several countries since ancient times. [ 60 ] They are generally being phased out. Pilot facilities are being developed for feeding black soldier fly larvae with feces. The mature flies would then be a source of protein to be included in the production of feed for chickens in South Africa. [ 52 ] Black soldier fly (BSF) bio-waste processing is a relatively new treatment technology that has received increasing attention over recent decades.
Larvae grown on bio-waste can be a raw material for animal feed production, and can therefore provide revenues for financially viable waste management systems. In addition, when produced on bio-waste, insect-based feeds can be more sustainable than conventional feeds. [ 61 ] It is known that adding fecal matter at up to 20% by dried weight to clay bricks does not make a significant functional difference to the bricks. [ 52 ] A Japanese sewage treatment facility extracts precious metals from sewage sludge; the "high percentage of gold found at the Suwa facility was probably due to the large number of precision equipment manufacturers in the vicinity that use [gold]. The facility recently recorded finding 1,890 grammes of gold per tonne of ash from incinerated sludge. That is a far higher gold content than Japan's Hishikari Mine, one of the world's top gold mines, [...] which contains 20–40 grammes of the precious metal per tonne of ore." [ 62 ] This idea was also tested by the US Geological Survey (USGS), which found that the yearly sewage sludge generated by 1 million people contained 13 million dollars' worth of precious metals. [ 62 ] With pyrolysis, urine can be turned into a pre-doped, highly porous carbon material termed "urine carbon" (URC). URC is cheaper than current fuel cell catalysts while performing better. [ 63 ] The reuse of excreta as a fertilizer for growing crops has been practiced in many countries for a long time. Debate is ongoing about whether reuse of excreta is cost-effective. [ 64 ] The terms "sanitation economy" and "toilet resources" have been introduced to describe the potential for selling products made from human feces or urine. [ 64 ] [ 65 ] The NGO SOIL in Haiti began building urine-diverting dry toilets and composting the waste produced for agricultural use in 2006. [ 66 ] SOIL's two composting waste treatment facilities currently transform over 20,000 U.S.
gallons (76,000 liters) of human excreta into organic, agricultural-grade compost every month. [ 67 ] The compost produced at these facilities is sold to farmers, organizations, businesses, and institutions around the country to help finance SOIL's waste treatment operations. [ 68 ] Crops grown with this soil amendment include spinach, peppers, sorghum, maize, and more. Each batch of compost produced is tested for the indicator organism E. coli to ensure that complete pathogen kill has taken place during the thermophilic composting process. [ 69 ] There is still a lack of examples of implemented policy where the reuse aspect is fully integrated in policy and advocacy. [ 70 ] When considering drivers for policy change in this respect, the following lessons learned should be taken into consideration: revising legislation does not necessarily lead to functioning reuse systems; it is important to describe the "institutional landscape" and involve all actors; parallel processes should be initiated at all levels of government (i.e. national, regional and local level); country-specific strategies and approaches are needed; and strategies supporting newly developed policies need to be developed. [ 70 ] Regulations such as Global Good Agricultural Practices may hinder export and import of agricultural products that have been grown with the application of human excreta-derived fertilisers. [ 71 ] [ 72 ] The European Union allows the use of source-separated urine only in conventional farming within the EU, but not yet in organic farming. This is a situation that many agricultural experts, especially in Sweden, would like to see changed. [ 25 ] This ban may also reduce the options to use urine as a fertilizer in other countries if they wish to export their products to the EU. [ 71 ] In the United States, EPA regulation governs the management of sewage sludge but has no jurisdiction over the byproducts of a urine-diverting dry toilet.
Oversight of these materials falls to the states. [ 73 ] [ 74 ] Treatment and disposal of human excreta can be categorized into three types: fertilizer use, discharge and biogas use. Discharge is the disposal of human excreta to soil, a septic tank or a water body. [ 75 ] In China, under the influence of a long tradition, human excreta is often used as fertilizer for crops. [ 76 ] The main application methods are direct usage for crops and fruits as basal or top application after fermentation in a ditch for a certain period, compost with crop stalks for basal application, and direct usage as feed for fish in ponds. [ 60 ] On the other hand, although many people rely on human waste as an agricultural fertilizer, if the waste is not properly treated, the use of night soil may promote the spread of infectious diseases. [ 77 ] Urine is used as organic manure in India. It is also used for making an alcohol-based bio-pesticide: the ammonia within breaks down lignin, allowing plant materials like straw to be more easily fermented into alcohol. In Mukuru, Kenya, the slum dwellers are worst hit by the sanitation challenge due to a high population density and a lack of supporting infrastructure. Makeshift pit latrines, illegal toilet connections to the main sewer systems and a lack of running water to support flushable toilets present a sanitation nightmare in all Kenyan slums. The NGO Sanergy seeks to provide decent toilet facilities to Mukuru residents and uses the feces and urine from the toilets to provide fertilizer and energy for the market. [ 78 ] Reuse of wastewater in agriculture is a common practice in the developing world. In a study in Kampala, although farmers were not using fecal sludge, 8% of farmers were using wastewater sludge as a soil amendment. Compost from animal manure and composted household waste are applied by many farmers as soil conditioners.
On the other hand, farmers are already mixing their own feed because of limited trust in the feed industry and the quality of its products. [ 79 ] Electricity demand significantly exceeds electricity generation, and only a small proportion of the population nationally has access to electricity. The pellets produced from fecal sludge are being used in gasification for electricity production. Converting fecal sludge to energy could contribute toward meeting present and future energy needs. [ 80 ] In Tororo District in eastern Uganda, a region with severe land degradation problems, smallholder farmers appreciated urine fertilization as a low-cost, low-risk practice. They found that it could contribute to significant yield increases. The importance of social norms and cultural perceptions needs to be recognized, but these are not absolute barriers to adoption of the practice. [ 81 ] In Ghana, the only wide-scale implementation is small-scale rural digesters, with about 200 biogas plants using human excreta and animal dung as feedstock. Linking public toilets with biogas digesters as a way of improving communal hygiene and combating hygiene-related communicable diseases, including cholera and dysentery, is also a notable solution within Ghana. [ 79 ]
https://en.wikipedia.org/wiki/Reuse_of_human_excreta
Reuterin (3-hydroxypropionaldehyde) is the organic compound with the formula HOCH2CH2CHO. It is a bifunctional molecule, containing both hydroxyl and aldehyde functional groups. The name reuterin is derived from Lactobacillus reuteri , which produces the compound biosynthetically from glycerol as a broad-spectrum antibiotic ( bacteriocin ). [ 1 ] L. reuteri itself is named after the microbiologist Gerhard Reuter, who did early work in distinguishing it as a distinct species. In aqueous solution 3-hydroxypropionaldehyde exists in equilibrium with its hydrate (1,1,3-propanetriol), in which the aldehyde group converts to a geminal diol. The hydrate is also in equilibrium with the dimer (2-(2-hydroxyethyl)-4-hydroxy-1,3-dioxane), which dominates at high concentrations. These three components (the aldehyde, its dimer, and the hydrate) are therefore in a dynamic equilibrium. [ 2 ] In addition, 3-hydroxypropionaldehyde undergoes spontaneous dehydration in aqueous solution, yielding acrolein. [ 3 ] In fact, the term reuterin is the name given to the dynamic system formed by 3-hydroxypropionaldehyde, its hydrate, the dimer, and acrolein; acrolein was only recently included in the definition of reuterin. [ 3 ] [ 4 ] 3-Hydroxypropionaldehyde is formed by the condensation of acetaldehyde and formaldehyde. This reaction, when conducted in the gas phase, was the basis for a now obsolete industrial route to acrolein. [ 5 ] Presently, 3-hydroxypropionaldehyde is an intermediate in the production of pentaerythritol. Hydrogenation of reuterin gives 1,3-propanediol. Reuterin is an intermediate in the metabolism of glycerol to 1,3-propanediol, catalysed by the coenzyme B12 -dependent glycerol dehydratase. Reuterin is a potent antimicrobial compound produced by Lactobacillus reuteri. It inhibits the growth of some harmful Gram-negative and Gram-positive bacteria, along with yeasts, molds, and protozoa. [ 6 ] L.
reuteri can secrete sufficient amounts of reuterin to inhibit the growth of harmful gut organisms without killing beneficial gut bacteria, allowing L. reuteri to remove gut invaders while keeping the normal gut flora intact. [ 7 ] Reuterin is water-soluble, effective over a wide pH range, resistant to proteolytic and lipolytic enzymes, and has been studied as a food preservative or auxiliary therapeutic agent. [ 8 ] [ 9 ] [ 10 ] Reuterin as an extracted compound has been shown capable of killing Escherichia coli O157:H7 and Listeria monocytogenes, with the addition of lactic acid increasing its efficacy. [ 3 ] [ 10 ] It has also been demonstrated to kill Escherichia coli O157:H7 when produced by L. reuteri. [ 11 ]
https://en.wikipedia.org/wiki/Reuterin
RML AgTech Pvt. Ltd. , formerly known as Reuters Market Light , was a business that provided technology and data analytics solutions to farmers and the agriculture value chain in India. [ 1 ] Its Decision Support Technology provided farmers with personalised agricultural data analytics. During the initial phase, farmers received data on topics like pre-sowing or post-harvest via a mobile application or SMS. Approximately 3.4 million Indian farmers across 18 states were part of this service. They received information on 450 crop varieties and more than 1,300 markets. At its core, RML AgTech Pvt. Ltd. was an information service provider to farmers. [ 2 ] It offered services to farmers including tailored information about crops and markets, information sharing through SMS, communication in the local language, farming tips based on local and international standards, a user-friendly interface across all handsets and telecom operators, and rural outlets to facilitate easy accessibility for farmers with grievances. Thomson Reuters (then Reuters) began with a one-page idea from a Reuters employee to use mobile solutions to address the state of farmers around the developing world. Initial research suggested that farmers lacked relevant, reliable, timely and consistent information and commerce support to improve their productivity, reduce their crop losses and realize fair prices for their produce. [ 3 ] The findings prompted the company to develop a system that addressed these problems with the help of technology. RML was designed with a holistic approach, to create a structured ecosystem for farmers, and it approached the problem from the individual requirements of each farmer. The farmers therefore received customized information based on their data. The type of crop, soil, location, irrigation type, and even the stage of the crop cycle was considered when a piece of information was sent out.
Realizing the potential in the idea, Thomson Reuters incubated RML as an internal start-up as part of its global innovation program. Spearheading this development was Mr Premprakash Saboo, co-founder, CFO, and Head of Institutional Sales. Following nearly 18 months of market research, user-led prototyping, and market trials, RML AgTech was officially launched on October 1, 2007, in Maharashtra by Sharad Pawar , the union minister of agriculture of India, followed by a launch in Punjab in 2008 by Mr. Prakash Badal, the state's Chief Minister. [ 4 ] The business received funding from Thomson Reuters and IvyCap Ventures Advisors Private Limited (IvyCap), a fund management company. IvyCap, which is backed by the IIT Alumni Network, was the lead investor in RML AgTech. Thomson Reuters remained a shareholder and partner in the newly formed RML AgTech Pvt Ltd (formerly RML Information Services Private Limited). The company evolved from a phone-led product to an Android app for the farmers. It launched data and commerce-support products for enterprises connected with the agri value chain, such as banks and agri-input and sourcing companies. RML also worked with state and central governments and partnered with other telecom businesses, including Nokia , Idea , Airtel , and Vodafone . [ 5 ]
https://en.wikipedia.org/wiki/Reuters_Market_Light
Revegetation is the process of replanting and rebuilding the soil of disturbed land. This may be a natural process produced by plant colonization and succession, a man-made rewilding project, or an accelerated process designed to repair damage to a landscape due to wildfire, mining, flood, or another cause. Originally the process was simply one of applying seed (usually grasses or clover) and fertilizer to disturbed lands. The fibrous root network of grasses is useful for short-term erosion control, particularly on sloping ground. Establishing long-term plant communities requires forethought as to appropriate species for the climate, the size of stock required, and the impact of the replanted vegetation on local fauna. [ 1 ] The motivations behind revegetation are diverse, answering needs that are both technical and aesthetic, but erosion prevention is usually the primary reason. Revegetation helps prevent soil erosion, enhances the ability of the soil to absorb more water in significant rain events, and in conjunction reduces turbidity dramatically in adjoining bodies of water. Revegetation also aids protection of engineered grades and other earthworks. [ 2 ] Organisations like Trees For Life ( Brooklyn Park ) provide good examples. Revegetation is often used to join up patches of natural habitat that have been lost, and can be a very important tool in places where much of the natural vegetation has been cleared. It is therefore particularly important in urban environments, and research in Brisbane has shown that revegetation projects can significantly improve urban bird populations. [ 3 ] The Brisbane study showed that connecting a revegetation patch with existing habitat improved bird species richness, while simply concentrating on making large patches of habitat was the best way to increase bird abundance. Revegetation plans therefore need to consider how the revegetated sites are connected with existing habitat patches.
Revegetation in agricultural areas can support breeding bird populations, but it often supports more common species rather than those that are in decline. [ 4 ] The spatial arrangement of the selected plant species influences the vegetation system and the greater habitat system. Spatial planning determines interactions between plant species. These interactions can be facilitative or competitive . [ 5 ] Planting certain species together can protect one or both from extreme temperature fluctuations, drying out in the sun, harsh winds, and predators, in addition to improving soil composition. Competition can occur within or between species, and generally weaker individuals and weaker species die out, resulting in increased plant spacing. [ 5 ] The spatial arrangement of revegetation species also influences pollination and seed dispersal . For species whose seeds are wind-dispersed and animal-dispersed, plant diversity within the seed dispersal range is important for genetic fitness . However, too much competition within the seed dispersal range can cause reproduction to be suppressed, so it is important to strike a balance. [ 5 ] On the ecosystem level, the spatial planning of revegetation species influences animal species. A more varied plant species composition is more likely to be used by a wider variety of animal species. High-density edible plants mean animals do not have to forage as far to eat, and even growing in the presence of palatable species can lead to a plant having more interaction with animals. [ 5 ] Abiotic aspects of the ecosystem are also altered. Higher-density revegetation can reduce erosion , protect against extreme temperatures, decrease evaporative losses of water, and increase water filtration and reinfiltration . However, higher-density revegetation requires the use of more soil nutrients and water, which can potentially dry out and deplete the soil.
[ 5 ] For riparian revegetation, plant roots help to increase the shear strength of bank soil, and if tree roots begin to lose their strength, the bank is susceptible to land slips. Fibrous or matted roots in particular help to prevent soil erosion, and are typically found in reed and sedge species. [ 6 ] Mine reclamation may involve soil amendment, replacement, or creation, particularly for areas that have been strip-mined or have suffered severe erosion or soil compaction . In some cases, the native soil may be removed before construction and replaced with fill for the duration of the work. After construction is completed, the fill is again removed and replaced with the reserved native soil for revegetation. [ 7 ] Mycorrhizae , symbiotic fungal-plant communities, are important to the success of revegetation efforts. Most woody plant species need these root-fungi communities to thrive, and nursery or greenhouse transplants may not have sufficient or correct mycorrhizae for good survival. Mycorrhizal communities are particularly beneficial to nitrogen-fixing woody plants, C4 grasses, and soil environments low in phosphorus. [ 8 ] Two types of mycorrhizal fungi aid in restoration: ectomycorrhizal fungi and arbuscular mycorrhizal fungi.
https://en.wikipedia.org/wiki/Revegetation
Revel Systems is an iPad-based point of sale system co-founded by Lisa Falzone and Christopher Ciabarra . [ 2 ] In June 2024, it was announced that the company had been acquired by Shift4 . [ 3 ] Revel Systems was founded in 2010 [ 4 ] in San Francisco . [ 5 ] In May 2011, Revel received $3.7 million in funding from DCM. [ 6 ] In 2015 the company announced an investment of approximately $13.5 million from ROTH Capital Partners, bringing Revel's Series C round to approximately $110 million. This infusion from ROTH was Revel's C-3 investment round, a follow-up to the Series C-1 round led by Welsh, Carson, Anderson & Stowe (WCAS) in November 2014 and the Series C-2 round led by Intuit Inc. in December 2014. The two founders, Lisa Falzone and Chris Ciabarra, took Revel to a 500 million dollar valuation before exiting. [ 7 ] In 2015, the company announced a strategic partnership with Apple as a member of the Apple Enterprise Mobility Program; in 2014, Revel announced a partnership with Intuit to create QuickBooks Point of Sale Powered by Revel Systems; and in September 2016, Revel announced a partnership with Shell Global. [ 8 ] The company integrates with third-party vendors and has an open API , allowing others to customize the POS system. Revel released Atlas V2 for the iPad POS in February 2012. [ 9 ] The Revel Systems headquarters is located in Atlanta, Georgia . Additional offices are located in San Francisco, California and Vilnius, Lithuania . [ 10 ] European sales are handled by an office in London. Revel's iPad point of sale software focuses on security in order to remain a properly licensed system. [ 11 ] Revel was the first iPad POS to implement EMV, or " Chip and Pin ", processing in the United States, in January 2013.
[ 12 ] Some of Revel's clients include the following (or franchisees of the following): Shell, Smoothie King, Tully's Coffee, Little Caesars Pizza , Legends Hospitality , Rocky Mountain Chocolate Factory , Popeyes Louisiana Kitchen , Illy Coffee, Dairy Queen, Forever Yogurt, and Twistee Treat, among others. Revel has partnered with retail giants Belkin and Goodwill . [ 13 ] In February 2017 it was announced that Falzone had been replaced as CEO after taking the company to a valuation of half a billion dollars. Falzone and CTO Chris Ciabarra were removed by the majority shareholder, investment firm Welsh, Carson, Anderson & Stowe (WCAS). [ 14 ] New CEO Greg Dukat was appointed. [ 15 ] [ 16 ] Revel was controlled by WCAS until June 2024, when the company was acquired by Shift4. Revel Systems' point of sale system operates on the Apple iPad . The backend can be managed via mobile device or via Web browser . Associated hardware includes: receipt printer, cash drawer, and card swipe. [ 17 ] Revel also announced the Revel Ethernet Connect cable in 2015, which allows for a hardwired Ethernet connection to iPads running Revel software. Revel has several POS systems for the culinary industry such as Kitchen Display System, Drive-through POS, [ 18 ] Food Truck POS, [ 19 ] and Restaurant POS. Other retail POS systems include Grocery POS, Retail POS, and Quick Service POS. Revel also has systems for large venues including Stadium POS [ 20 ] and Events POS. [ 21 ] Revel Systems offers a range of preconfigured hardware to complement its point of sale system. The Apple iPad acts as a business's main POS terminal, or register. Transactions, orders, and various other functions take place on the iPad POS. The iPad Mini is used as a POS terminal for customer-facing kiosks and table-side ordering. The Apple iPod Touch serves as a line-buster, or as a customer-facing display.
These terminals work with Epson Printers, wireless routers, access points, cash drawers , card swipes, and barcode scanners to meet a merchant's needs. Revel allows for a customizable point of sale solution and integrates with a variety of third party providers. Providers for payment include FirstData, Mercury Payments, LevelUp , Adyen [ 22 ] and PayPal . [ 23 ] Reporting is provided by companies including Avero, CTUIT, and RTI Connect. Revel's gift card providers include Givex, Mercury, PlasticPrinters, Synergy, and Valutec. The loyalty and reward program is provided by companies including LevelUp, Punchh, LoyalTree, and Synergy. Revel systems include Facebook and Twitter integration with online ordering options provided by companies including Zuppler, and Shopify . Revel's Managed Hosting is provided by Singlehop and Softlayer . Revel Systems was included first on the list of Business News Daily's "Best iPad POS Systems." [ 17 ] In 2013, Revel Systems was chosen as the Best Retail app in the Business at the Tabby Awards [ 24 ] and CEO Lisa Falzone was recognized in Tech Cocktail as one of "15 Female Entrepreneurs You Should Know About (But Probably Don't)." [ 25 ] In 2015 Lisa Falzone was named on the Fortune 40 Under 40 list and the Forbes list of Eight Rising Stars. [ 26 ]
https://en.wikipedia.org/wiki/Revel_Systems
Reverberation mapping (or Echo mapping ) is an astrophysical technique for measuring the structure of the broad-line region (BLR) around a supermassive black hole at the center of an active galaxy , and thus estimating the hole's mass. It is considered a "primary" mass estimation technique, i.e., the mass is measured directly from the motion that its gravitational force induces in the nearby gas. [ 1 ] Newton's law of gravity defines a direct relation between the mass of a central object and the speed of a smaller object in orbit around the central mass. Thus, for matter orbiting a black hole, the black-hole mass M ∙ {\displaystyle M_{\bullet }} is related to the RMS velocity Δ V of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines, by the formula M ∙ = f R BLR ( Δ V ) 2 / G {\displaystyle M_{\bullet }=f\,R_{\text{BLR}}\,(\Delta V)^{2}/G} . In this formula, R BLR is the radius of the broad-line region; G is the constant of gravitation ; and f is a poorly known "form factor" that depends on the shape of the BLR. While Δ V can be measured directly using spectroscopy , the necessary determination of R BLR is much less straightforward. This is where reverberation mapping comes into play. [ 2 ] It utilizes the fact that the emission-line fluxes vary strongly in response to changes in the continuum, i.e., the light from the accretion disk near the black hole. Put simply, if the brightness of the accretion disk varies, the emission lines, which are excited in response to the accretion disk's light, will "reverberate", that is, vary in response. But it will take some time for light from the accretion disk to reach the broad-line region. Thus, the emission-line response is delayed with respect to changes in the continuum. Assuming that this delay is solely due to light travel times, the distance traveled by the light, corresponding to the radius of the broad emission-line region, can be measured.
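Under the light-travel-time assumption, the virial mass estimate is simple arithmetic once the lag and line width are known. The following sketch combines R BLR = cτ with the virial formula; the 20-day lag, 3000 km/s line width, and form factor f = 4.3 are illustrative assumptions for demonstration, not measurements of any particular galaxy:

```python
# Constants in SI units
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
DAY = 86400.0       # seconds per day

def virial_black_hole_mass(lag_days, delta_v_kms, f=4.3):
    """Virial estimate M = f * R_BLR * (dV)^2 / G, in solar masses,
    taking R_BLR = c * lag (the light-travel distance corresponding
    to the measured emission-line delay). The default f is an
    illustrative value only; f is poorly known in practice."""
    r_blr = C * lag_days * DAY   # radius of the broad-line region, m
    dv = delta_v_kms * 1e3       # RMS line velocity, m/s
    return f * r_blr * dv**2 / G / M_SUN

# Illustrative numbers: a 20-day lag and a 3000 km/s RMS line width
# give a mass of roughly 1.5e8 solar masses.
mass = virial_black_hole_mass(20.0, 3000.0)
```

Since the mass scales with the square of the line width, uncertainties in Δ V (and in f) dominate the error budget of such estimates.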
Fewer than 40 [ needs update ] active galactic nuclei have been accurately mapped in this way. An alternative approach is to use an empirical correlation between R BLR and the continuum luminosity. [ 1 ] Another uncertainty is the value of f . In principle, the response of the BLR to variations in the continuum could be used to map out the three-dimensional structure of the BLR. In practice, the amount and quality of data required to carry out such a deconvolution are prohibitive. Until about 2004, f was estimated ab initio based on simple models for the structure of the BLR. More recently, the value of f has been determined so as to bring the M–sigma relation for active galaxies into the best possible agreement with the M–sigma relation for quiescent galaxies. [ 1 ] When f is determined in this way, reverberation mapping becomes a "secondary", rather than "primary", mass estimation technique.
https://en.wikipedia.org/wiki/Reverberation_mapping
A reverberatory furnace is a metallurgical or process furnace that isolates the material being processed from contact with the fuel , but not from contact with combustion gases . The term reverberation is used here in a generic sense of rebounding or reflecting , not in the acoustic sense of echoing . Chemistry determines the optimum relationship between the fuel and the material, among other variables. The reverberatory furnace can be contrasted on the one hand with the blast furnace , in which fuel and material are mixed in a single chamber, and, on the other hand, with crucible , muffling , or retort furnaces , in which the subject material is isolated from the fuel and all of the products of combustion including gases and flying ash. There are, however, a great many furnace designs, and the terminology of metallurgy has not been very consistently defined, so it is difficult to categorically contradict other views. The applications of these devices fall into two general categories, metallurgical melting furnaces, and lower temperature processing furnaces typically used for metallic ores and other minerals. A reverberatory furnace is at a disadvantage from the standpoint of efficiency compared to a blast furnace due to the separation of the burning fuel and the subject material, and it is necessary to effectively utilize both reflected radiant heat and direct contact with the exhaust gases ( convection ) to maximize heat transfer . Historically these furnaces have used solid fuel, and bituminous coal has proven to be the best choice. The brightly visible flames, due to the substantial volatile component, give more radiant heat transfer than anthracite coal or charcoal . Contact with the products of combustion, which may add undesirable elements to the subject material, is used to advantage in some processes. 
Control of the fuel/air balance can alter the exhaust gas chemistry toward either an oxidizing or a reducing mixture, and thus alter the chemistry of the material being processed. For example, cast iron can be puddled in an oxidizing atmosphere to convert it to the lower-carbon mild steel or bar iron . The Siemens-Martin oven in open hearth steelmaking is also a reverberatory furnace. Reverberatory furnaces (in this context, usually called air furnaces ) were formerly also used for melting brass, bronze , and pig iron for foundry work. They were also, for the first 75 years of the 20th century, the dominant smelting furnace used in copper production, treating either roasted calcine or raw copper sulfide concentrate. [ 1 ] While they have been supplanted in this role, first by flash furnaces and more recently also by the Ausmelt [ 1 ] and ISASMELT furnaces, [ 2 ] they are very effective at producing slags with low copper losses. [ 1 ] The first reverberatory furnaces were perhaps built in the medieval period, and were used for melting bronze for casting bells. The earliest known detailed description was provided by Biringuccio. [ 3 ] They were first applied to smelting metals in the late 17th century. Sir Clement Clerke and his son Talbot built cupolas or reverberatory furnaces in the Avon Gorge below Bristol in about 1678. In 1687, while obstructed from smelting lead (by litigation), they moved on to copper. In the following decades, reverberatory furnaces were widely adopted for smelting these metals, and also tin. They had the advantage over older methods that the fuel was mineral coal, not charcoal or 'white coal' (chopped dried wood). In the 1690s, they (or associates) applied the reverberatory furnace (in this case known as an air furnace) to melting pig iron for foundry purposes.
This was used at Coalbrookdale and various other places, but became obsolete at the end of the 18th century with the introduction of the foundry cupola furnace , which was a kind of small blast furnace, and a quite different species from the reverberatory furnace. [ citation needed ] The puddling furnace , introduced by Henry Cort in the 1780s to replace the older finery process , was also a variety of reverberatory furnace. [ citation needed ] Reverberatory furnaces were introduced to Chile around 1830 by Charles Saint Lambert . [ 4 ] This revolutionized Chilean copper mining to such a degree that the country came to supply 19% of the copper produced worldwide in the 19th century. [ 5 ] [ 6 ] [ 7 ] The use of mineral coal instead of charcoal in the reverberatory furnaces introduced by Saint Lambert also meant there was no longer a dependency on the scarce firewood to be found in the Atacama Desert and its surrounding semi-arid areas, as was the case with earlier smelting technology. [ 8 ] By 1872 there were one hundred "smelting works" in Chile. [ 9 ] Competition stemming from new processing techniques pushed Chilean copper production in the late 19th century back to 6% of worldwide production, reaching a low of 4.3% in 1914. [ 10 ] [ 11 ] Reverberatory furnaces are widely used to melt secondary aluminium scrap for eventual use by die-casting industries. [ 12 ] The simplest reverberatory furnace is nothing more than a steel box lined with alumina refractory brick with a flue at one end and a vertically lifting door at the other. Conventional oil or gas burners are usually placed on either side of the furnace to heat the brick, and the eventual bath of molten metal is then poured into a casting machine to produce ingots . [ 12 ]
https://en.wikipedia.org/wiki/Reverberatory_furnace
The reversal test is a heuristic designed to spot and eliminate status quo bias , an emotional bias irrationally favouring the current state of affairs. The test is applicable to the evaluation of any decision involving a potential deviation from the status quo along some continuous dimension. The reversal test was introduced in the context of the bioethics of human enhancement by Nick Bostrom and Toby Ord . [ 1 ] Bostrom and Ord introduced the reversal test to provide an answer to the question of how one can, given that humans might suffer from irrational status quo bias, distinguish between valid criticisms of a proposed increase in some human trait and criticisms merely motivated by resistance to change. [ 1 ] The reversal test attempts to do this by asking whether it would be a good thing if the trait were decreased . For example, if someone objects that an increase in intelligence would be a bad thing because more dangerous weapons could be made, one can reply by asking, "Shouldn't we then decrease intelligence?" " Reversal Test : When a proposal to change a certain parameter is thought to have bad overall consequences, consider a change to the same parameter in the opposite direction. If this is also thought to have bad overall consequences, then the onus is on those who reach these conclusions to explain why our position cannot be improved through changes to this parameter. If they are unable to do so, then we have reason to suspect that they suffer from status quo bias." (p. 664) [ 1 ] Ideally the test will help reveal whether status quo bias is an important causal factor in the initial judgement. A similar thought experiment in regards to dampening traumatic memories was described by Adam J. Kolber, imagining whether aliens naturally resistant to traumatic memories should adopt traumatic "memory enhancement".
[ 2 ] The "trip to reality" rebuttal to Nozick's experience machine thought experiment (where one's entire current life is shown to be a simulation and one is offered a return to reality) can also be seen as a form of reversal test. [ 3 ] A further elaboration on the reversal test is suggested as the double reversal test: [ 1 ] " Double Reversal Test : Suppose it is thought that increasing a certain parameter and decreasing it would both have bad overall consequences. Consider a scenario in which a natural factor threatens to move the parameter in one direction and ask whether it would be good to counterbalance this change by an intervention to preserve the status quo. If so, consider a later time when the naturally occurring factor is about to vanish and ask whether it would be a good idea to intervene to reverse the first intervention. If not, then there is a strong prima facie case for thinking that it would be good to make the first intervention even in the absence of the natural countervailing factor." (p. 673) As an example, consider the parameter to be life expectancy , moving downward because of a sudden natural disease. We might intervene to invest in better health infrastructure to preserve the current life expectancy. Now if the disease is cured, the double reversal test asks: should we reverse our investment and defund the health services created in response to the disease, now that it is gone? If not, perhaps we should invest in health infrastructure even if there never is a disease in the first place. In this case the status quo bias is turned against itself, greatly reducing its impact on the reasoning. It also purports to handle arguments of evolutionary adaptation, transition costs, risk, and societal ethics that can counter the other test. Alfred Nordmann argues that the reversal test merely erects a straw-man argument in favour of enhancement.
He claims that the tests are limited to approaches that are consequentialist and deontological . He adds that one cannot view humans as sets of parameters that can be optimized separately or without regard to their history. [ 4 ] Christian Weidemann argues that the double reversal test can muddy the water; guaranteeing and weighing transition costs versus benefits might be the relevant practical ethical question for much human enhancement analysis. [ 5 ]
https://en.wikipedia.org/wiki/Reversal_test
The reverse Krebs cycle (also known as the reverse tricarboxylic acid cycle , the reverse TCA cycle , the reverse citric acid cycle , the reductive tricarboxylic acid cycle , or the reductive TCA cycle ) is a sequence of chemical reactions that are used by some bacteria and archaea [ 1 ] to produce carbon compounds from carbon dioxide and water by the use of energy -rich reducing agents as electron donors. The reaction is the citric acid cycle run in reverse. Where the Krebs cycle takes carbohydrates and oxidizes them to CO 2 and water, the reverse cycle takes CO 2 and H 2 O to make carbon compounds. This process is used by some bacteria (such as Aquificota ) to synthesize carbon compounds, sometimes using hydrogen , sulfide , or thiosulfate as electron donors . [ 2 ] [ 3 ] This process can be seen as an alternative to the fixation of inorganic carbon in the Calvin cycle , which occurs in a wide variety of microbes and higher organisms. In contrast to the oxidative citric acid cycle, the reverse or reductive cycle has a few key differences. There are three enzymes specific to the reductive citric acid cycle – citrate lyase , fumarate reductase , and α-ketoglutarate synthase . [ citation needed ] The splitting of citric acid to oxaloacetate and acetate is catalyzed by citrate lyase , rather than by the reverse reaction of citrate synthase . [ 4 ] Succinate dehydrogenase is replaced by fumarate reductase , and α-ketoglutarate synthase replaces α-ketoglutarate dehydrogenase . [ citation needed ] The conversion of succinate to 2-oxoglutarate is also different. In the oxidative direction this step is coupled to the reduction of NAD + to NADH . However, the oxidation of 2-oxoglutarate to succinate is so energetically favorable that NADH lacks the reductive power to drive the reverse reaction. In the rTCA cycle, this reaction has to use a reduced low-potential ferredoxin .
[ 5 ] The reaction is a possible candidate for prebiotic early-Earth conditions and, therefore, is of interest in the research of the origin of life . It has been found that some non-consecutive steps of the cycle can be catalyzed by minerals through photochemistry , [ 6 ] while entire two and three-step sequences can be promoted by metal ions such as iron (as reducing agents ) under acidic conditions. In addition, these organisms that undergo photochemistry can and do utilize the citric acid cycle. [ 2 ] However, the conditions are extremely harsh and require 1 M hydrochloric or 1 M sulfuric acid and strong heating at 80–140 °C. [ 7 ] Along with these possibilities of the rTCA cycle contributing to early life and biomolecules , it is thought that the rTCA cycle could not have been completed without the use of enzymes. The kinetic and thermodynamic parameters of the reduction of highly oxidized species to push the rTCA cycle are seemingly unlikely without the necessary action of biological catalysts known as enzymes . The rate of some of the reactions in the rTCA cycle likely would have been too slow to contribute significantly to the formation of life on Earth without enzymes. Considering the thermodynamics of the rTCA cycle, the increase in Gibbs free energy going from product to reactant would make pyrophosphate an unlikely energy source for the conversion of pyruvate to oxaloacetate as the reaction is too endoergic . [ 8 ] However, it is suggested that a nonenzymatic precursor to the Krebs cycle, glyoxylate cycle , and reverse Krebs cycle might have originated, where oxidation and reduction reactions cooperated. The later use of carboxylation utilizing ATP could have given rise to parts of reverse Krebs cycle. [ 9 ] It is suggested that the reverse Krebs cycle was incomplete, even in the last universal common ancestor . 
[ 10 ] [ 11 ] Many reactions of the reverse Krebs cycle, including thioesterification and hydrolysis, could have been catalyzed by iron-sulfide minerals at deep-sea alkaline hydrothermal vent cavities. [ 12 ] More recently, aqueous microdroplets have been shown to promote reductive carboxylation reactions in the reverse Krebs cycle. [ 13 ] The reverse Krebs cycle is proposed to play a major role in the pathophysiology of melanoma . Melanoma tumors are known to alter normal metabolic pathways in order to utilize waste products. These metabolic adaptations help the tumor meet its metabolic needs. The best-known adaptation is the Warburg effect , where tumors increase their uptake and utilization of glucose . Glutamine is one of the substances known to be utilized in the reverse Krebs cycle in order to produce acetyl-CoA. [ 14 ] This type of mitochondrial activity could provide a new way to identify and target cancer-causing cells. [ 15 ] Thiomicrospira denitrificans , Candidatus Arcobacter , and Chlorobaculum tepidum have been shown to utilize the rTCA cycle to turn CO 2 into carbon compounds. The ability of these bacteria, among others, to use the rTCA cycle supports the idea that they are derived from an ancestral proteobacterium , and that other organisms using this cycle are much more abundant than previously believed. [ 16 ]
https://en.wikipedia.org/wiki/Reverse_Krebs_cycle
Reverse Mathematics: Proofs from the Inside Out is a book by John Stillwell on reverse mathematics , the process of examining proofs in mathematics to determine which axioms are required by the proof. It was published in 2018 by the Princeton University Press . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] The book begins with a historical overview of the long struggles with the parallel postulate in Euclidean geometry , [ 3 ] and of the foundational crisis of the late 19th and early 20th centuries. [ 6 ] Then, after reviewing background material in real analysis and computability theory , [ 1 ] the book concentrates on the reverse mathematics of theorems in real analysis, [ 3 ] including the Bolzano–Weierstrass theorem , the Heine–Borel theorem , the intermediate value theorem and extreme value theorem , the Heine–Cantor theorem on uniform continuity , [ 6 ] the Hahn–Banach theorem , and the Riemann mapping theorem . [ 5 ] These theorems are analyzed with respect to three of the "big five" subsystems of second-order arithmetic , namely arithmetical comprehension, recursive comprehension, and weak Kőnig's lemma. [ 1 ] The book is aimed at a "general mathematical audience" [ 1 ] including undergraduate mathematics students with an introductory-level background in real analysis. [ 2 ] It is intended both to excite mathematicians, physicists, and computer scientists about the foundational issues in their fields, [ 6 ] and to provide an accessible introduction to the subject. However, it is not a textbook; [ 3 ] [ 4 ] for instance, it has no exercises. One theme of the book is that many theorems in this area require axioms in second-order arithmetic that encompass infinite processes and uncomputable functions .
[ 3 ] Jeffry Hirst criticizes the book, writing that "if one is not too obsessive about the details, Proofs from the Inside Out is an interesting introduction," while finding details that he would prefer to be handled differently, in a topic for which details are important. In particular, in this area, there are multiple choices for how to build up the arithmetic on real numbers from simpler data types such as the natural numbers , and while Stillwell discusses three of them ( decimal numerals, Dedekind cuts , and nested intervals), converting between them itself requires nontrivial axiomatic assumptions. [ 2 ] However, James Case calls the book "very readable", [ 6 ] and Roman Kossak calls it "a stellar example of expository writing on mathematics". [ 5 ] Several other reviewers agree that this book could be helpful as a non-technical way to create interest in this topic in mathematicians who are not already familiar with it, and lead them to more in-depth material in this area. [ 1 ] [ 2 ] [ 3 ] As additional reading on reverse mathematics in combinatorics , Hirst suggests Slicing the Truth by Denis Hirschfeldt. [ 2 ] Another book suggested by reviewer Reinhard Kahle is Stephen G. Simpson 's Subsystems of Second Order Arithmetic . [ 1 ]
https://en.wikipedia.org/wiki/Reverse_Mathematics:_Proofs_from_the_Inside_Out
Reverse Polish notation ( RPN ), also known as reverse Łukasiewicz notation , Polish postfix notation or simply postfix notation , is a mathematical notation in which operators follow their operands , in contrast to prefix or Polish notation (PN), in which operators precede their operands. The notation does not need any parentheses as long as each operator has a fixed number of operands . The term postfix notation describes the general scheme in mathematics and computer science, whereas the term reverse Polish notation typically refers specifically to the method used to enter calculations into hardware or software calculators, which often have additional side effects and implications depending on the actual implementation involving a stack . The description "Polish" refers to the nationality of logician Jan Łukasiewicz , [ 1 ] [ 2 ] who invented Polish notation in 1924. [ 3 ] [ 4 ] [ 5 ] [ 6 ] The first computer to use postfix notation, though it long remained essentially unknown outside of Germany, was Konrad Zuse 's Z3 in 1941, [ 7 ] [ 8 ] as well as his Z4 in 1945. The reverse Polish scheme was again proposed in 1954 by Arthur Burks , Don Warren, and Jesse Wright [ 9 ] and was independently reinvented by Friedrich L. Bauer and Edsger W. Dijkstra in the early 1960s to reduce computer memory access and use the stack to evaluate expressions . The algorithms and notation for this scheme were extended by the philosopher and computer scientist Charles L. Hamblin in the mid-1950s. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ excessive citations ] During the 1970s and 1980s, Hewlett-Packard used RPN in all of their desktop and hand-held calculators, and has continued to use it in some models into the 2020s. [ 16 ] [ 17 ] In computer science , reverse Polish notation is used in stack-oriented programming languages such as Forth , dc , Factor , STOIC , PostScript , RPL , and Joy . In reverse Polish notation, the operators follow their operands .
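The stack-based, left-to-right evaluation described below can be sketched in a few lines. This is a minimal illustration, not any particular calculator's implementation; it assumes space-separated tokens and only the four basic binary operators:

```python
def eval_rpn(expr):
    """Evaluate a space-separated postfix expression left to right:
    operands are pushed onto a stack; an operator pops its two
    operands, applies itself, and pushes the result back."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in expr.split():
        if tok in ops:
            b = stack.pop()  # the top of the stack is the second operand
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

# Examples: "3 4 +" evaluates to 7, "3 4 - 5 +" to 4,
# and "3 4 + 5 6 + *" (infix (3 + 4) x (5 + 6)) to 77.
```

Because every operator consumes a fixed number of operands from the top of the stack, no parentheses are ever needed to disambiguate the expression.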
For example, to add 3 and 4 together, the expression is 3 4 + rather than 3 + 4 . The conventional notation expression 3 − 4 + 5 becomes 3 4 − 5 + in reverse Polish notation: 4 is first subtracted from 3, then 5 is added to it. The concept of a stack , a last-in/first-out construct, is integral to the left-to-right evaluation of RPN. In the example 3 4 − , first the 3 is put onto the stack, then the 4; the 4 is now on top and the 3 below it. The subtraction operator removes the top two items from the stack, performs 3 − 4 , and puts the result of −1 onto the stack. The common terminology is that added items are pushed on the stack and removed items are popped . The advantage of reverse Polish notation is that it removes the need for order of operations and parentheses that are required by infix notation and can be evaluated linearly, left-to-right. For example, the infix expression (3 + 4) × (5 + 6) becomes 3 4 + 5 6 + × in reverse Polish notation. Reverse Polish notation has been compared to how one had to work through problems with a slide rule . [ 18 ] In testing that compared reverse Polish notation with algebraic notation, reverse Polish was found to lead to faster calculations, for two reasons. The first reason is that reverse Polish calculators do not need expressions to be parenthesized, so fewer operations need to be entered to perform typical calculations. Additionally, users of reverse Polish calculators made fewer mistakes than users of other types of calculators. [ 19 ] [ 20 ] Later research clarified that the increased speed from reverse Polish notation may be attributed to the smaller number of keystrokes needed to enter this notation, rather than to a smaller cognitive load on its users. [ 21 ] However, anecdotal evidence suggests that reverse Polish notation is more difficult for users who previously learned algebraic notation. [ 20 ] Edsger W.
Dijkstra invented the shunting-yard algorithm to convert infix expressions to postfix expressions (reverse Polish notation), so named because its operation resembles that of a railroad shunting yard . There are other ways of producing postfix expressions from infix expressions. Most operator-precedence parsers can be modified to produce postfix expressions; in particular, once an abstract syntax tree has been constructed, the corresponding postfix expression is given by a simple post-order traversal of that tree. The first computer implementing a form of reverse Polish notation (but without the name and also without a stack ) was Konrad Zuse 's Z3 , which he started to construct in 1938 and demonstrated publicly on 12 May 1941. [ 22 ] [ 23 ] [ 24 ] [ 25 ] In dialog mode, it allowed operators to enter two operands followed by the desired operation. [ z3 1 ] It was destroyed on 21 December 1943 in a bombing raid. [ 23 ] With Zuse's help, a first replica was built in 1961. [ 23 ] The 1945 Z4 also added a 2-level stack . [ 31 ] [ 32 ] Other early computers to implement architectures enabling reverse Polish notation were the English Electric Company 's KDF9 machine, which was announced in 1960 and commercially available in 1963, [ 33 ] and the Burroughs B5000 , announced in 1961 and also delivered in 1963. Presumably, the KDF9 designers drew ideas from Hamblin's GEORGE (General Order Generator), [ 10 ] [ 11 ] [ 13 ] [ 34 ] [ 35 ] [ 32 ] an autocode programming system written for a DEUCE computer installed at the University of Sydney , Australia, in 1957. [ 10 ] [ 11 ] [ 13 ] [ 33 ] One of the designers of the B5000, Robert S.
Barton , later wrote that he developed reverse Polish notation independently of Hamblin sometime in 1958 after reading a 1954 textbook on symbolic logic by Irving Copi , [ 36 ] [ 37 ] [ 38 ] where he found a reference to Polish notation, [ 38 ] which made him read the works of Jan Łukasiewicz as well, [ 38 ] and before he was aware of Hamblin's work. Friden introduced reverse Polish notation to the desktop calculator market with the EC-130 , designed by Robert "Bob" Appleby Ragen , [ 39 ] supporting a four-level stack [ 5 ] in June 1963. [ 40 ] The successor EC-132 added a square root function in April 1965. [ 41 ] Around 1966, the Monroe Epic calculator supported an unnamed input scheme resembling RPN as well. [ 5 ] Hewlett-Packard engineers designed the 9100A Desktop Calculator in 1968 with reverse Polish notation [ 16 ] with only three stack levels with working registers X ("keyboard"), Y ("accumulate") and visible storage register Z ("temporary"), [ 42 ] [ 43 ] a reverse Polish notation variant later referred to as three-level RPN . [ 44 ] This calculator popularized reverse Polish notation among the scientific and engineering communities. The HP-35 , the world's first handheld scientific calculator , [ 16 ] introduced the classical four-level RPN with its specific ruleset of the so-called operational (memory) stack [ 45 ] [ nb 1 ] (later also called automatic memory stack [ 46 ] [ 47 ] [ nb 1 ] ) in 1972. [ 48 ] In this scheme, the Enter ↑ key duplicates values into Y under certain conditions ( automatic stack lift with temporary stack lift disable ), and the top register T ("top") gets duplicated on drops ( top copy on pop aka top stack level repetition ) in order to ease some calculations and to save keystrokes. [ 47 ] HP used reverse Polish notation on every handheld calculator it sold, whether scientific, financial, or programmable, until it introduced the HP-10 adding machine calculator in 1977. 
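The shunting-yard conversion from infix to postfix mentioned earlier can also be sketched briefly. This minimal version is an illustration, not Dijkstra's original formulation; it assumes space-separated tokens, parentheses, and only the four left-associative binary operators:

```python
def to_postfix(expr):
    """Convert a space-separated infix expression to postfix (RPN):
    operands go straight to the output, while operators wait on a
    stack until an operator of lower precedence or a parenthesis
    forces them out."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    out, stack = [], []
    for tok in expr.split():
        if tok in prec:
            # Left associativity: pop operators of equal or higher precedence.
            while stack and stack[-1] != "(" and prec[stack[-1]] >= prec[tok]:
                out.append(stack.pop())
            stack.append(tok)
        elif tok == "(":
            stack.append(tok)
        elif tok == ")":
            while stack[-1] != "(":
                out.append(stack.pop())
            stack.pop()  # discard the matching "("
        else:  # operand
            out.append(tok)
    while stack:
        out.append(stack.pop())
    return " ".join(out)

# "( 3 + 4 ) * ( 5 + 6 )" converts to "3 4 + 5 6 + *",
# matching the article's parenthesized example.
```

Feeding the output of such a converter into a stack evaluator gives the two-pass infix evaluation scheme used by many interpreters.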
By this time, HP was the leading manufacturer of calculators for professionals, including engineers and accountants. Later calculators with LCDs in the early 1980s, such as the HP-10C , HP-11C , HP-15C , HP-16C , and the financial HP-12C calculator also used reverse Polish notation. In 1988, Hewlett-Packard introduced a business calculator, the HP-19B , without reverse Polish notation, but its 1990 successor, the HP-19BII , gave users the option of using algebraic or reverse Polish notation again. In 1986, [ 49 ] [ 50 ] HP introduced RPL , an object-oriented successor to reverse Polish notation. It deviates from classical reverse Polish notation by using a dynamic stack limited only by the amount of available memory (instead of three or four fixed levels), which can hold all kinds of data objects (including symbols, strings, lists, matrices, graphics, programs, etc.) instead of just numbers. The system would display an error message when running out of memory instead of just dropping values off the stack on overflow, as with fixed-sized stacks. [ 51 ] It also changed the behaviour of the stack to no longer duplicate the top register on drops (since in an unlimited stack there is no longer a top register) and the behaviour of the Enter ↑ key so that it no longer duplicated values into Y, which had been shown to sometimes cause confusion among users not familiar with the specific properties of the automatic memory stack . From 1990 to 2003, HP manufactured the HP-48 series of graphing RPL calculators, followed by the HP-49 series between 1999 and 2008. The last RPL calculator was named HP 50g , introduced in 2006 and discontinued in 2015. However, there are several community efforts, such as newRPL and DB48X, to recreate RPL on modern calculators. As of 2011, Hewlett-Packard was offering the calculator models 12C, 12C Platinum, 17bII+ , 20b , 30b , 33s , 35s , 48gII (RPL) and 50g (RPL) which support reverse Polish notation.
[ 52 ] While calculators emulating classical models continued to support classical reverse Polish notation, new reverse Polish notation models feature a variant of reverse Polish notation in which the Enter ↑ key behaves as in RPL. This latter variant is sometimes known as entry RPN . [ 53 ] In 2013, the HP Prime introduced a 128-level form of entry RPN called advanced RPN . In contrast to RPL with its dynamic stack, it simply drops values off the stack on overflow, as other fixed-sized stacks do. [ 51 ] However, like RPL, it does not emulate the behaviour of a classical operational RPN stack of duplicating the top register on drops. In late 2017, the list of active models supporting reverse Polish notation included only the 12C, 12C Platinum, 17bii+, 35s, and Prime. By July 2023, only the 12C, 12C Platinum, the freshly released HP 15C Collector's Edition , and the Prime remained active models supporting RPN. In Britain, Clive Sinclair 's Sinclair Scientific (1974) and Scientific Programmable (1975) models used reverse Polish notation. [ 54 ] [ 55 ] In 1974, Commodore produced the Minuteman *6 (MM6) without an Enter ↑ key and the Minuteman *6X (MM6X) with an Enter ↑ key, both implementing a form of two-level RPN . The SR4921 RPN came with a variant of four-level RPN with stack levels named X, Y, Z, and W (rather than T) and an Ent key (for "entry"). In contrast to Hewlett-Packard's reverse Polish notation implementation, W was filled with 0 instead of its contents being duplicated on stack drops. [ 56 ] Prinz and Prinztronic were own-brand trade names of the British photographic and electronic goods retail chain Dixons, which was later rebranded as Currys Digital and became part of DSG International. A variety of calculator models were sold in the 1970s under the Prinztronic brand, all made for them by other companies. Among these was the PROGRAM [ 57 ] Programmable Scientific Calculator, which featured reverse Polish notation.
The Heathkit OC-1401 / OCW-1401 Aircraft Navigation Computer used five-level RPN in 1978. Soviet programmable calculators ( MK-52 , MK-61 , B3-34 and the earlier B3-21 [ 58 ] models) used reverse Polish notation for both automatic mode and programming. The modern Russian calculators MK-161 [ 59 ] and MK-152 , [ 60 ] designed and manufactured in Novosibirsk since 2007 and offered by Semico , [ 61 ] are backwards compatible with them. Their extended architecture is also based on reverse Polish notation. An eight-level stack was suggested by John A. Ball in 1978. [ 5 ] The community-developed calculators WP 34S (2011), WP 31S (2014) and WP 34C (2015), which are based on the HP 20b / HP 30b hardware platform, support classical Hewlett-Packard-style reverse Polish notation, with the automatic stack lift behaviour of the Enter ↑ key and top register copies on pops, and are switchable between a four- and an eight-level operational stack. In addition to the optional support for an eight-level stack, the newer SwissMicros DM42 -based WP 43S as well as the WP 43C (2019) / C43 (2022) / C47 (2023) derivatives support data types for stack objects (real numbers, infinite integers, finite integers, complex numbers, strings, matrices, dates and times). The latter three variants can also be switched between classical and entry RPN behaviour of the Enter ↑ key, a feature often requested by the community. [ 66 ] They also support a rarely seen significant figures mode, which had already been available as a compile-time option for the WP 34S and WP 31S. [ 67 ] [ 68 ] Since 2021, the HP-42S simulator Free42 version 3 can be enabled to support a dynamic RPN stack limited only by the amount of available memory instead of the classical 4-level stack. This feature was incorporated as a selectable function into the DM42 as of firmware DMCP-3.21 / DM42-3.18. [ 69 ] [ 70 ] Software calculators: Existing implementations using reverse Polish notation include:
https://en.wikipedia.org/wiki/Reverse_Polish_notation
Reverse transcription loop-mediated isothermal amplification ( RT-LAMP ) is a one-step nucleic acid amplification method used to multiply specific sequences of RNA. It is used to diagnose infectious diseases caused by RNA viruses . [ 1 ] It combines LAMP [ 2 ] DNA detection with reverse transcription , making cDNA from RNA before running the reaction. [ 3 ] RT-LAMP does not require thermal cycling (unlike PCR ) and is performed at a constant temperature between 60 and 65 °C. RT-LAMP is used in the detection of RNA viruses (groups II, IV, and V in the Baltimore Virus Classification system), such as the SARS-CoV-2 virus [ 4 ] and the Ebola virus . [ 5 ] RT-LAMP is used to test RNA samples for virus-specific sequences, made possible by comparing the sequences against a large external database of references. The RT-LAMP technique is being promoted as a cheaper and easier alternative to RT-PCR for the early diagnosis of people infectious with COVID-19 . [ 6 ] There are open-access test designs (including the recombinant proteins ), which make it legally possible for anyone to produce a test. In contrast to classic rapid tests by lateral flow , RT-LAMP allows the early diagnosis of the disease by testing for the viral RNA . [ 7 ] The tests can be done without previous RNA isolation, detecting the viruses directly from swabs [ 8 ] or from saliva . [ 9 ] One example use of RT-LAMP was an experiment to detect a new duck Tembusu-like virus, the BYD virus, named after the region, Baiyangdian , where it was first isolated. [ 10 ] [ 11 ] [ 1 ] Another application of this method was a 2013 experiment to detect an Akabane virus using RT-LAMP. The experiment, done in China, isolated the virus from aborted calf fetuses. [ 12 ] RT-LAMP is also being used in forensic serology to identify body fluids. Researchers have done experiments to show that this method can effectively identify certain body fluids.
Knowing there would be limitations, Su et al. came to the conclusion that RT-LAMP was only able to identify blood. [ 13 ] [ 14 ] A specific sequence of the cDNA is detected by 4 LAMP primers . Two of them are inner primers (FIP and BIP), which serve as the base for the Bst enzyme to copy the template into new DNA. The outer primers (F3 and B3) anneal to the template strand and help the reaction to proceed. As in the case of RT-PCR , the RT-LAMP procedure starts by making DNA from the sample RNA. This conversion is made by a reverse transcriptase , an enzyme derived from retroviruses capable of making such a conversion. [ 15 ] This DNA derived from RNA is called cDNA , or complementary DNA. The FIP primer is used by the reverse transcriptase to build a single strand of copy DNA. The F3 primer binds to this side of the template strand as well, and displaces the previously made copy. This displaced, single-stranded copy is a mixture of target RNA and primers. The primers are designed so that the copied sequence binds back on itself, forming a loop. The BIP primer binds to the other end of this single strand and is used by the Bst DNA polymerase to build a complementary strand, making double-stranded DNA. The B3 primer binds to this end and displaces, once again, this newly generated single-stranded DNA molecule. This new single strand that has been released acts as the starting point for the LAMP cycling amplification. This single-stranded DNA has a dumbbell -like structure, as the ends fold and self-bind, forming two loops. The DNA polymerase and the FIP or BIP primers keep amplifying this strand and the LAMP-reaction product is extended. This cycle can be started from either the forward or backward side of the strand using the appropriate primer. Once this cycle has begun, the strand undergoes self-primed DNA synthesis during the elongation stage of the amplification process.
This amplification takes place in less than an hour, under isothermal conditions between 60 and 65 °C. The readout of RT-LAMP tests is frequently colorimetric. Two of the common ways are based on measuring either pH or magnesium ions. The amplification reaction causes pH to lower and Mg2+ levels to drop. This can be detected by indicators, such as Phenol red , for pH, and hydroxynaphthol blue (HNB), for magnesium. [ 15 ] Another option is to use SYBR Green I , a DNA intercalating coloring agent. [ 16 ] This method is specifically advantageous because it can all be done quickly in one step. The sample is mixed with the primers, reverse transcriptase and DNA polymerase, and the reaction takes place under a constant temperature. The required temperature can be achieved using a simple hot water bath. PCR requires thermocycling ; RT-LAMP does not, making it more time efficient and very cost effective. [ 3 ] This inexpensive and streamlined method can be more readily used in developing countries that do not have access to high tech laboratories. A disadvantage of this method is generating the sequence-specific primers. For each LAMP assay, primers must be specifically designed to be compatible with the target DNA. This can be difficult, which discourages researchers from using the LAMP method in their work. [ 1 ] There is, however, a free software tool called Primer Explorer, developed by Fujitsu in Japan, which can aid in the selection of these primers.
https://en.wikipedia.org/wiki/Reverse_Transcription_Loop-mediated_Isothermal_Amplification
Reverse architecture is a process of deducing the underlying architecture and design of a system by observing its behaviour. [ 1 ] It has its roots in the field of reverse engineering . Reverse architecture is practiced to decipher how a system was built. There are a variety of techniques available, the most notable being architecture driven modelling . [ clarification needed ] [ citation needed ]
https://en.wikipedia.org/wiki/Reverse_architecture
Reverse cholesterol transport (RCT) is a multistep process comprising the removal of excess cholesterol from cells in the body and its delivery to the liver for excretion into the small intestine. [ 1 ] Enhancing reverse cholesterol transport is considered a potential strategy for preventing and treating atherosclerosis and associated diseases such as cardiovascular disease and stroke. [ 2 ] Atherosclerosis is caused by the build-up in arterial blood vessels of atherosclerotic plaques . These consist mostly of foam cells , which are macrophages overloaded with cholesterol and other lipids. Foam cells and other cells in peripheral tissues can hand over their excess cholesterol to high-density lipoprotein (HDL) particles. These will transport the cholesterol via the lymph and then the blood stream to the liver, from where it will be excreted with bile into the small intestine. Reverse cholesterol transport thereby works against the build-up of atherosclerotic plaques from dying foam cells. In more detail, reverse cholesterol transport proceeds in several steps. Through these steps, RCT plays a vital role in maintaining cholesterol homeostasis and preventing the accumulation of cholesterol in peripheral tissues, thereby reducing the risk of cardiovascular diseases. While excess fat (lipids) can simply be catabolized (burned) by cells as an energy source, cholesterol's complex molecular structure cannot be efficiently catabolized. Therefore, excess peripheral cholesterol is recycled to the liver via RCT. Adiponectin induces ABCA1-mediated reverse cholesterol transport from macrophages by activation of PPAR-γ and LXRα/β . [ 5 ] High-density lipoprotein cholesterol (HDL-C) refers to the total cholesterol content carried by all HDL particles in the bloodstream. Traditionally, the amount of HDL-C is used as a proxy for the amount of HDL particles, and from there as a proxy for reverse cholesterol transport capacity.
However, a number of conditions that increase reverse cholesterol transport (e.g. being male) will reduce HDL-C due to the greater clearance of HDL, making such a test unreliable. In fact, when many known correlates of CVD risks are controlled for, HDL-C does not have any correlation with cardiovascular event risks. In this way, HDL-C only seems to serve as an imperfect, but easy-to-measure, proxy for a healthy lifestyle. [ 6 ] The actual cholesterol efflux capacity (CEC) is measured directly: one takes a blood sample from the patient, isolates the serum, and removes any ApoB-containing particles from it. Mouse macrophages are incubated in an ACAT inhibitor and radioisotope-labelled cholesterol, then have their efflux ability "woken up" with an ABCA1 agonist before use. They are then mixed with the prepared serum. The macrophages are then recovered to quantify their change in radioactivity compared to a control batch. Any extra loss in radioactivity is interpreted as cholesterol taken up by the HDL particles in the patient's serum. [ 7 ] (This test does not account for the liver-bile-feces part of the transport.) Cholesterol efflux capacity has a much better correlation with CVD risks and CVD event frequencies, even when controlling for known correlates. [ 6 ] Many drugs affect enzymes and receptors involved in the transport process:
https://en.wikipedia.org/wiki/Reverse_cholesterol_transport
Reverse complement polymerase chain reaction (RC-PCR) is a modification of the polymerase chain reaction (PCR). It is primarily used to generate amplicon libraries for DNA sequencing by next generation sequencing (NGS). The technique permits both the amplification and the ability to append sequences or functional domains of choice independently to either end of the generated amplicons in a single closed-tube reaction. RC-PCR was invented in 2013 by Daniel Ward and Christopher Mattocks at Salisbury NHS Foundation Trust , UK. In RC-PCR, no target specific primers are present in the reaction mixture. Instead, target specific primers are formed as the reaction proceeds. A typical reaction employing the approach requires four oligonucleotides . The oligonucleotides interact with each other in pairs; one oligonucleotide probe and one universal primer (containing functional domains of choice), which hybridize with each other at their 3’ ends. Once hybridized, the universal primer can be extended, using the oligonucleotide probe as the template, to yield fully formed, target specific primers, which are then available to amplify the template in subsequent rounds of thermal cycling as per a standard PCR reaction. The oligonucleotide probe may also be blocked at the 3’ end, preventing equivalent extension of the probe, but this is not essential. The probe is not consumed; it is available to act as a template for the universal primer to be ‘converted’ into target specific primer throughout successive PCR cycles. This generation of target specific primer occurs in parallel with standard PCR amplification under standard PCR conditions. RC-PCR provides significant advantages over other amplicon library preparation methods. Most significantly, it is a single closed-tube reaction; this eliminates the cross-contamination associated with other two-step PCR approaches, while also using less reagent and requiring less labour to perform.
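The technique's name refers to the reverse complement relationship between sequences on opposite DNA strands, which the primer design above relies on. As a generic illustration (a standard bioinformatics operation, not code from any RC-PCR kit or publication), the reverse complement of a DNA sequence can be computed as:

```python
# Generic sketch: compute the reverse complement of a DNA sequence.
# Illustrates the strand relationship that RC-PCR primer design exploits;
# not part of any RC-PCR kit or the cited patent.
COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    # Complement each base, then reverse so the result reads 5' -> 3'.
    return seq.translate(COMPLEMENT)[::-1]

print(reverse_complement("ATGC"))  # → GCAT
```

A probe's 3' end and its matching universal primer's 3' end are designed as reverse complements of one another, which is what allows them to hybridize and prime extension.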
The technique also provides the significant advantage of the flexibility of appending any desired sequence or functional domain of choice to either end of any amplicon. This is currently most advantageous in modern next generation sequencing (NGS) laboratories, where a single target specific probe pair can be used with a whole library of universal primers. This benefit is used with NGS applications to apply sample-specific indexes independently to each end of the amplicon construct. A laboratory employing this approach would only require a single set of index primers, which can be used with all target specific probes compatible with that index set. This significantly reduces the number and length of oligonucleotides required by the laboratory compared to using full-length pre-synthesised indexed target specific primers. The generation of the target specific primer in the reaction as it progresses also leads to more balanced reaction components. Concentrations of target specific primer are more aligned with target molecule concentration, thereby reducing the potential for both off-target priming and primer dimerisation. Following the invention of RC-PCR in 2013, the technique was clinically validated and employed diagnostically for a range of both inherited diseases, such as hemochromatosis and thrombophilia , as well as somatically acquired disorders, including myeloproliferative neoplasms and acute myeloid leukemia , in the Wessex Regional Genetics Laboratory (WRGL), Salisbury, UK. More recently, work has been undertaken to utilise the technology in the fight against the SARS-CoV-2 pandemic. [ 1 ] The patent application was filed in the UK in 2015 and awarded in 2020. Patent applications have been filed in other jurisdictions worldwide and are currently pending. In May 2019 the intellectual property was licensed to Nimagen B.V. [ 2 ] to develop, manufacture and market kits exploiting the technology.
Currently commercially available kits employing the technology include those for Human identification [ 3 ] [ 4 ] and for the whole genome sequencing of the SARS-CoV-2 virus for variant identification, tracking and treatment response. [ 5 ] [ 6 ] In August 2022 Nimagen officially launched a range of products employing the RC-PCR technology for human forensics applications under the trademark IDseek®. The Short Tandem Repeat version of the kit is validated by the Netherlands Forensic Institute as an improved method for routine massively parallel sequencing of short tandem repeats. [ 7 ] The RC-PCR approach is becoming more widely used for human health and several CE IVD kits are available for human clinical diagnostics including BRCA , TP53 , PALB2 and CFTR analysis. The technique has also been proven as a useful and powerful tool in the identification of the causative infectious pathogen in patients suspected of having a bacterial infection, in this setting it has been shown to provide a significant increase in the number of clinical samples in which a potentially clinically relevant pathogen is identified compared to the commonly used 16S Sanger method. [ 8 ] It has also been shown to provide similar advantages over traditional methods in the deconvolution of microbial communities in environmental samples, [ 9 ] and when used in conjunction with Oxford Nanopore devices has proven to be an efficient method for the full length 16S rRNA gene sequencing for microbial community deconvolution. [ 10 ]
https://en.wikipedia.org/wiki/Reverse_complement_polymerase_chain_reaction
Reverse computation is a software application of the concept of reversible computing . Because it offers a possible solution to the heat problem faced by chip manufacturers, reversible computing has been extensively studied in the area of computer architecture. The promise of reversible computing is that the amount of heat loss for reversible architectures would be minimal for sufficiently large numbers of transistors. [ 1 ] [ 2 ] Rather than creating entropy (and thus heat) through destructive operations, a reversible architecture conserves the energy by performing other operations that preserve the system state. [ 3 ] [ 4 ] The concept of reverse computation is somewhat simpler than reversible computing in that reverse computation is only required to restore the equivalent state of a software application, rather than support the reversibility of the set of all possible instructions. Reversible computing concepts have been successfully applied as reverse computation in software application areas such as database design, [ 5 ] checkpointing and debugging, [ 6 ] and code differentiation. [ 7 ] [ 8 ] Based on the successful application of reverse computation concepts in other software domains, Chris Carothers, Kalyan Perumalla and Richard Fujimoto [ 9 ] suggested the application of reverse computation to reduce state saving overheads in parallel discrete event simulation (PDES). They defined an approach based on reverse event codes (which can be automatically generated), and demonstrated performance advantages of this approach over traditional state saving for fine-grained applications (those with a small amount of computation per event). The key property that reverse computation exploits is that a majority of the operations that modify the state variables are “constructive” in nature. That is, the undo operation for such operations requires no history. Only the most current values of the variables are required to undo the operation.
For example, operators such as ++, --, +=, -=, *= and /= belong to this category. Note that the *= and /= operators require special treatment in the case of multiply or divide by zero, and of overflow / underflow conditions. More complex operations such as circular shift (swap being a special case), and certain classes of random number generation also belong here. Operations of the form a = b, and modulo and bitwise computations that result in the loss of data, are termed destructive. Typically these operations can only be restored using conventional state-saving techniques. However, many of these destructive operations are a consequence of the arrival of data contained within the event being processed. For example, in the work of Yaun, Carothers, et al., with large-scale TCP simulation, [ 10 ] the last-sent time records the time stamp of the last packet forwarded on a router logical process. The swap operation makes this operation reversible. In 1985 Jefferson introduced the optimistic synchronization protocol known as Time Warp, which is utilized in parallel discrete event simulations. [ 11 ] To date, the technique known as reverse computation has only been applied in software for optimistically synchronized, parallel discrete event simulation. In December 1999, Michael Frank graduated from the University of Florida . His doctoral thesis focused on reverse computation at the hardware level, but included descriptions of both an instruction set architecture and a high-level programming language (R) for a processor based on reverse computation. [ 12 ] [ notes 1 ] In 1998 Carothers and Perumalla published a paper for the Principles of Advanced and Distributed Simulation workshop [ 13 ] as part of their graduate studies under Richard Fujimoto, introducing the technique of reverse computation as an alternative rollback mechanism in optimistically synchronized parallel discrete event simulations (Time Warp).
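The constructive/destructive distinction and the swap technique described above can be sketched as a pair of forward and reverse event handlers. This is a hypothetical illustration (the class, state variables, and event fields are invented for this sketch, not code from the cited papers):

```python
# Hypothetical sketch of reverse computation for one event type:
# a constructive update (+=) is undone with no saved history, while a
# destructive assignment is made reversible by swapping the old value
# into the event rather than overwriting it.
class RouterLP:
    """A logical process with one constructive and one destructive update."""

    def __init__(self):
        self.packets_seen = 0      # modified constructively (+=)
        self.last_sent = 0.0       # modified destructively (=)

    def forward(self, event):
        self.packets_seen += 1     # constructive: inverse is simply -=
        # Swap instead of assign: the old last-sent time is preserved
        # inside the event, so nothing is lost.
        event["swap"], self.last_sent = self.last_sent, event["time"]

    def reverse(self, event):
        # Undo the forward handler's steps in the opposite order.
        self.last_sent, event["swap"] = event["swap"], self.last_sent
        self.packets_seen -= 1     # inverse of +=

lp = RouterLP()
ev = {"time": 7.5}
lp.forward(ev)
lp.reverse(ev)                     # rollback restores the initial state
```

On rollback, the reverse handler replays the swap and decrements the counter, restoring the pre-event state without any state-saving log.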
In 1998, Carothers became an associate professor at Rensselaer Polytechnic Institute . Working with graduate students David Bauer and Shawn Pearce, Carothers integrated the Georgia Tech Time Warp design into Rensselaer’s Optimistic Simulation System (ROSS), which supported only reverse computation as the rollback mechanism. Carothers also constructed RC models for BitTorrent at General Electric, as well as numerous network protocols with students ( BGP4 , TCP Tahoe, Multicast ). Carothers created a course on Parallel and Distributed Simulation in which students were required to construct RC models in ROSS. Around the same time, Perumalla graduated from Georgia Tech and went to work at the Oak Ridge National Laboratory (ORNL). There he built the uSik simulator, a combined optimistic / conservative protocol PDES. The system was capable of dynamically determining the best protocol for LPs and remapping them during execution in response to model dynamics. In 2007 Perumalla tested uSik on Blue Gene/L and found that, while scalability is limited to 8K processors for the pure Time Warp implementation, the conservative implementation scales to 16K available processors. Note that benchmarking was performed using PHOLD with a constrained remote event rate of 10%, where the timestamp of events was determined by an exponential distribution with a mean of 1.0, and an additional lookahead of 1.0 added to each event. This was the first implementation of PDES on Blue Gene using reverse computation. From 1998 to 2005 Bauer performed graduate work at RPI under Carothers, focusing solely on reverse computation. He developed the first PDES system based solely on reverse computation, Rensselaer’s Optimistic Simulation System (ROSS), [ 14 ] for combined shared and distributed memory systems. From 2006 to 2009 Bauer worked under E.H.
Page at Mitre Corporation , and in collaboration with Carothers and Pearce pushed the ROSS simulator to the 131,072-processor Blue Gene/P (Intrepid). This implementation was stable for remote event rates of 100% (every event sent over the network). During his time at RPI and MITRE, Bauer developed the network simulation system ROSS.Net, [ 15 ] which supports semi-automated experiment design for black-box optimization of network protocol models executing in ROSS. A primary goal of the system was to optimize multiple network protocol models for execution in ROSS. For example, creating an LP layering structure to eliminate events being passed between network protocol LPs on the same simulated machine optimizes simulation of TCP/IP network nodes by eliminating zero-offset timestamps between TCP and IP protocols. Bauer also constructed RC agent-based models for social contact networks to study the effects of infectious diseases , in particular pandemic influenza, that scale to hundreds of millions of agents, as well as RC models for mobile ad-hoc networks implementing functionality of mobility (proximity detection) and highly accurate physical-layer electromagnetic wave propagation (Transmission Line Matrix model). [ 16 ] There has also been a recent push by the PDES community into the realm of continuous simulation. For example, Fujimoto and Perumalla, working with Tang et al., [ 17 ] implemented an RC model of particle-in-cell and demonstrated excellent speedup over continuous simulation for models of light as a particle. Bauer and Page demonstrated excellent speedup for an RC Transmission Line Matrix model (P.B. Johns, 1971), modeling light as a wave at microwave frequencies. Bauer also created an RC variant of SEIR that generates enormous improvement over continuous models in the area of infectious disease spread.
In addition, the RC SEIR model is capable of modeling multiple diseases efficiently, whereas the continuous model explodes exponentially with respect to the number of combinations of diseases possible throughout the population.
https://en.wikipedia.org/wiki/Reverse_computation
Reverse echo and reverse reverb are sound effects created as the result of recording an echo or reverb effect of an audio recording played backwards. The original recording is then played forwards accompanied by the recording of the echoed or reverberated signal, which now precedes the original signal. The process produces a swelling effect preceding and during playback. Guitarist and producer Jimmy Page claims to have invented the effect, stating that he originally developed the method when recording the single " Ten Little Indians " with The Yardbirds in 1967. [ 1 ] He later used it on a number of Led Zeppelin tracks, including " You Shook Me ", " Whole Lotta Love ", and their cover of " When the Levee Breaks ". In an interview he gave to Guitar World magazine in 1993, Page explained: During one session [with The Yardbirds], we were recording "Ten Little Indians", which was an extremely silly song that featured a truly awful brass arrangement. In fact, the whole track sounded terrible. In a desperate attempt to salvage it, I hit upon an idea. I said, "Look, turn the tape over and employ the echo for the brass on a spare track. Then turn it back over and we'll get the echo preceding the signal." The result was very interesting—it made the track sound like it was going backwards. [ 2 ] Despite Page's claims, an earlier example of the effect is possibly heard towards the end of the 1966 Lee Mallory single "That's the Way It's Gonna Be", produced by Curt Boettcher . [ 3 ] [ 4 ] [ 5 ] [ 6 ] Jimmy Page of Led Zeppelin used this effect in the bridge of " Whole Lotta Love " (1969). [ 7 ] [ 8 ] [ 9 ] Another early example is found in "Alucard" from the eponymous Gentle Giant album (1970), although usage was somewhat common throughout the 1970s, for example in "Crying to the Sky" by Be-Bop Deluxe . Reverse reverb is commonly used in shoegaze , particularly by such bands as My Bloody Valentine and Spacemen 3 .
It is also often used as a lead-in to vocal passages in hardstyle music and various forms of EDM and pop music. The reverse reverb is applied to the first word or syllable of the vocal for a build-up effect or other-worldly sound. Metallica used the effect on James Hetfield's vocals in the song " Fade to Black " on their 1984 album Ride the Lightning . The effect was also employed by Genesis (on Phil Collins' snare drum) at the end of the song " Deep in the Motherlode " on the 1978 album ...And Then There Were Three... . [ citation needed ] Reverse reverb has been used in filmmaking and television production for an otherworldly effect on voices, especially in horror movies. [ 10 ] Reverse reverb was also used in the company logo for production company CBS Studios . [ citation needed ]
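In digital form, the tape-reversal process described above amounts to reversing the signal, applying the echo or reverb, and reversing the result again, so the effect tail precedes the dry sound. A toy sketch (pure-Python convolution with an invented three-sample impulse response, purely illustrative rather than a production audio effect):

```python
# Toy sketch of digital reverse reverb: reverse the signal, apply reverb
# (convolution with a decaying impulse response), then reverse again so
# the reverberant tail swells up INTO the dry sound instead of after it.
def convolve(signal, impulse):
    # Plain discrete convolution (no external libraries).
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def reverse_reverb(signal, impulse):
    wet = convolve(signal[::-1], impulse)   # reverb applied to reversed audio
    return wet[::-1]                        # flip back: the tail now leads

dry = [0.0, 0.0, 1.0]          # a single click at the end
ir = [1.0, 0.5, 0.25]          # invented decaying impulse response
print(reverse_reverb(dry, ir)) # amplitude builds up toward the click
```

With a normal (forward) reverb the decaying tail would follow the click; reversing before and after the effect places that same tail in front, which is the characteristic swell.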
https://en.wikipedia.org/wiki/Reverse_echo
Reverse ecology refers to the use of genomics to study ecology with no a priori assumptions about the organism(s) under consideration. The term was suggested in 2007 by Matthew Rockman during a conference on ecological genomics in Christchurch , New Zealand. [ 1 ] Rockman was drawing an analogy to the term reverse genetics , in which gene function is studied by comparing the phenotypic effects of different genetic sequences of that gene. Most researchers employing reverse ecology make use of some sort of population genomics methodology. This requires that a genome scan be performed on multiple individuals from at least two populations in order to identify genomic regions or sites that show signs of selection. These genome scans usually utilize single nucleotide polymorphism (SNP) markers, though use of microsatellites can work as well (with reduced resolution). Reverse ecology has been used by researchers to understand environments and other ecological traits of organisms on Earth using genomic approaches. By examining the genes of bacteria , scientists are able to reconstruct what the organisms ' environments are like today, or even what they were like millions of years ago. The data could help us understand key events in the history of life on Earth. In 2010, researchers presented a technique to carry out reverse ecology to infer a bacterium's living temperature-range conditions based on the GC content of certain genomic regions. [ 2 ] In 2011, researchers at the University of California, Berkeley were able to demonstrate that one can determine an organism's adaptive traits by looking first at its genome and checking for variations across a population. [ 3 ]
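The GC-content statistic underlying such temperature-range inferences is straightforward to compute. A generic sketch (the 2010 study's actual model relating GC content of specific regions to growth temperature is not reproduced here):

```python
# Generic sketch: GC content of a DNA sequence, the basic statistic used
# in such genomic inferences. Not the cited study's inference model.
def gc_content(seq: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

print(gc_content("ATGC"))  # → 0.5
```

In a reverse-ecology workflow, a statistic like this would be computed over particular genomic regions and then fed into a model calibrated against organisms with known environmental ranges.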
https://en.wikipedia.org/wiki/Reverse_ecology
Reverse electron flow (also known as reverse electron transport ) is a mechanism in microbial metabolism . Chemolithotrophs using an electron donor with a higher redox potential than NAD(P) + /NAD(P)H , such as nitrite or sulfur compounds, must use energy to reduce NAD(P) + . This energy is supplied by consuming proton motive force to drive electrons in a reverse direction through an electron transport chain and is thus the reverse of forward electron transport. In some cases, the energy consumed in reverse electron transport is five times greater than the energy gained from the forward process. [ 1 ] Autotrophs can use this process to supply reducing power for inorganic carbon fixation . Reverse electron transfer ( RET ) is the process that can occur in respiring mitochondria when a small fraction of electrons from reduced ubiquinol is driven upstream by the membrane potential towards mitochondrial complex I . This results in the reduction of oxidized pyridine nucleotide ( NAD + or NADP + ). This is a reversal of the exergonic reaction of forward electron transfer in mitochondrial complex I, in which electrons travel from NADH to ubiquinone . The term "reverse electron transfer" is used in regard to the reversibility of the reaction performed by complex I of the mitochondrial or bacterial respiratory chain . Complex I is responsible for the oxidation of NADH generated in catabolism: in the forward reaction, electrons from the nucleotide (NADH) are transferred to membrane ubiquinone and energy is saved in the form of proton-motive force . The reversibility of the electron transfer reactions at complex I was first discovered when Chance and Hollunger showed that the addition of succinate to mitochondria in State 4 leads to an uncoupler -sensitive reduction of the intramitochondrial nucleotides (NAD(P) + ).
[ 2 ] When succinate is oxidized by intact mitochondria, complex I can catalyze reverse electron transfer, with electrons from ubiquinol (QH 2 , formed during oxidation of succinate) driven by the proton-motive force to the complex I flavin toward the nucleotide-binding site. Since its discovery in the 1960s, reverse electron transfer was regarded as an in vitro phenomenon, until the role of RET in the development of ischemia / reperfusion injury was recognized in the brain [ 3 ] and heart. [ 4 ] During ischemia a substantial amount of succinate is generated in cerebral [ 5 ] or cardiac tissue, [ 6 ] and upon reperfusion it can be oxidized by mitochondria, initiating the reverse electron transfer reaction. Reverse electron transfer supports the highest rate of mitochondrial reactive oxygen species (ROS) production, and the complex I flavin mononucleotide (FMN) has been identified as the site where one-electron reduction of oxygen takes place. [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Reverse_electron_flow
Reverse genetics is a method in molecular genetics that is used to help understand the function(s) of a gene by analysing the phenotypic effects caused by genetically engineering specific nucleic acid sequences within the gene. The process proceeds in the opposite direction to forward genetic screens of classical genetics . While forward genetics seeks to find the genetic basis of a phenotype or trait, reverse genetics seeks to find what phenotypes are controlled by particular genetic sequences. Automated DNA sequencing generates large volumes of genomic sequence data relatively rapidly. Many genetic sequences are discovered in advance of other, less easily obtained, biological information. Reverse genetics attempts to connect a given genetic sequence with specific effects on the organism. [ 1 ] Reverse genetics systems can also allow the recovery and generation of infectious or defective viruses with desired mutations. [ 2 ] This makes it possible to study the virus in vitro and in vivo . In order to learn the influence a sequence has on phenotype, or to discover its biological function, researchers can engineer a change in, or disrupt, the DNA . After this change has been made a researcher can look for its effect in the whole organism . There are several different methods of reverse genetics: Site-directed mutagenesis is a sophisticated technique that can either change regulatory regions in the promoter of a gene or make subtle codon changes in the open reading frame to identify amino acid residues important for protein function. [ citation needed ] Alternatively, the technique can be used to create null alleles so that the gene is not functional. For example, deletion of a gene by gene targeting ( gene knockout ) can be done in some organisms, such as yeast , mice and moss . Uniquely among plants, in Physcomitrella patens , gene knockout via homologous recombination to create knockout moss is nearly as efficient as in yeast.
[ 4 ] In the case of the yeast model system, directed deletions have been created in every non-essential gene in the yeast genome. [ 5 ] In the case of the plant model system, huge mutant libraries have been created based on gene disruption constructs. [ 6 ] In gene knock-in , the endogenous exon is replaced by an altered sequence of interest. [ 7 ] In some cases conditional alleles can be used so that the gene has normal function until the conditional allele is activated. This might entail 'knocking in' recombinase sites (such as lox or frt sites) that will cause a deletion at the gene of interest when a specific recombinase (such as CRE or FLP) is induced. Cre or Flp recombinases can be induced with chemical treatments or heat shock treatments, or be restricted to a specific subset of tissues. [ citation needed ] Another technique that can be used is TILLING . This method combines a standard and efficient mutagenesis technique using a chemical mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA-screening technique that identifies point mutations in a target gene. [ citation needed ] In the field of virology, reverse-genetics techniques can be used to recover full-length infectious viruses with desired mutations or insertions in the viral genomes or in specific virus genes. Technologies that allow these manipulations include circular polymerase extension reaction (CPER), which was first used to generate infectious cDNA for Kunjin virus, a close relative of West Nile virus. [ 8 ] CPER has also been successfully utilised to generate a range of positive-sense RNA viruses such as SARS-CoV-2, [ 9 ] the causative agent of COVID-19. The discovery of gene silencing using double-stranded RNA, also known as RNA interference (RNAi), and the development of gene knockdown using Morpholino oligos, have made disrupting gene expression an accessible technique for many more investigators.
This method is often referred to as a gene knockdown since the effects of these reagents are generally temporary, in contrast to gene knockouts, which are permanent. [ citation needed ] RNAi creates a specific knockout effect without actually mutating the DNA of interest. In C. elegans , RNAi has been used to systematically interfere with the expression of most genes in the genome. RNAi acts by directing cellular systems to degrade target messenger RNA (mRNA). [ citation needed ] RNA interference, specifically gene silencing, has become a useful tool to silence the expression of genes and to identify and analyze their loss-of-function phenotype. When a mutation occurs in an allele, the function that the allele encodes can also be lost; this is generally called a loss-of-function mutation. [ 10 ] The ability to analyze the loss-of-function phenotype allows analysis of gene function when there is no access to mutant alleles. [ 11 ] While RNA interference relies on cellular components for efficacy (e.g. the Dicer proteins and the RISC complex), a simple alternative for gene knockdown is Morpholino antisense oligos. Morpholinos bind and block access to the target mRNA without requiring the activity of cellular proteins and without necessarily accelerating mRNA degradation. Morpholinos are effective in systems ranging in complexity from cell-free translation in a test tube to in vivo studies in large animal models. [ citation needed ] A molecular genetic approach is the creation of transgenic organisms that overexpress a normal gene of interest. The resulting phenotype may reflect the normal function of the gene. Alternatively it is possible to overexpress mutant forms of a gene that interfere with the normal ( wildtype ) gene's function. For example, over-expression of a mutant gene may result in high levels of a non-functional protein, resulting in a dominant negative interaction with the wildtype protein.
In this case the mutant version will outcompete the wildtype protein for its binding partners, resulting in a mutant phenotype. Other mutant forms can result in a protein that is abnormally regulated and constitutively active ('on' all the time). This might be due to removing a regulatory domain or mutating a specific amino acid residue that is reversibly modified (by phosphorylation , methylation , or ubiquitination ). Either change is critical for modulating protein function and often results in informative phenotypes. Reverse genetics plays a large role in vaccine synthesis. Vaccines can be created by engineering novel genotypes of infectious viral strains which diminish their pathogenic potency enough to facilitate immunity in a host. The reverse genetics approach to vaccine synthesis utilizes known viral genetic sequences to create a desired phenotype: a virus with both a weakened pathological potency and a similarity to the current circulating virus strain. Reverse genetics provides a convenient alternative to the traditional method of creating inactivated vaccines , viruses which have been killed using heat or other chemical methods. Vaccines created through reverse genetics methods are known as attenuated vaccines , so named because they contain weakened (attenuated) live viruses. Attenuated vaccines are created by combining genes from a novel or current virus strain with previously attenuated viruses of the same species. [ 12 ] Attenuated viruses are created by propagating a live virus under novel conditions, such as in a chicken's egg. This produces a viral strain that is still live but not pathogenic to humans, [ 13 ] as these viruses are rendered defective in that they cannot replicate their genome enough to propagate and sufficiently infect a host. However, the viral genes are still expressed in the host's cell through a single replication cycle, allowing for the development of immunity.
[ 14 ] A common way to create a vaccine using reverse genetic techniques is to utilize plasmids to synthesize attenuated viruses. This technique is most commonly used in the yearly production of influenza vaccines , where an eight-plasmid system can rapidly produce an effective vaccine. The entire genome of the influenza A virus consists of eight RNA segments, so the combination of six attenuated viral cDNA plasmids with two wild-type plasmids allows an attenuated vaccine strain to be constructed. For the development of influenza vaccines, the fourth and sixth RNA segments, encoding the hemagglutinin (HA) and neuraminidase (NA) proteins respectively, are taken from the circulating virus, while the other six segments are derived from a previously attenuated master strain. The HA and NA proteins exhibit high antigenic variety, and therefore are taken from the current strain for which the vaccine is being produced, to create a well-matched vaccine. [ 12 ] The plasmid used in this eight-plasmid system contains three major components that allow for vaccine development. Firstly, the plasmid contains restriction sites that enable the incorporation of influenza genes into the plasmid. Secondly, the plasmid contains an antibiotic resistance gene, allowing the selection of only those plasmids containing the correct gene. Lastly, the plasmid contains two promoters, the human Pol I and Pol II promoters, which transcribe genes in opposite directions. [ 15 ] cDNA sequences of viral RNA are synthesized from attenuated master strains by using RT-PCR . [ 12 ] This cDNA can then be inserted between an RNA polymerase I (Pol I) promoter and terminator sequence through restriction enzyme digestion. The cDNA and Pol I sequence is then, in turn, surrounded by an RNA polymerase II (Pol II) promoter and a polyadenylation site. [ 16 ] This entire sequence is then inserted into a plasmid.
Six plasmids derived from attenuated master strain cDNA are cotransfected into a target cell, often a chicken egg, alongside two plasmids of the currently circulating wild-type influenza strain. Inside the target cell, the two "stacked" Pol I and Pol II enzymes transcribe the viral cDNA to synthesize both negative-sense viral RNA and positive-sense mRNA, effectively creating an attenuated virus. [ 12 ] The result is a defective vaccine strain that is similar to the current virus strain, allowing a host to build immunity. This synthesized vaccine strain can then be used as a seed virus to create further vaccines. Vaccines engineered through reverse genetics carry several advantages over traditional vaccine designs. Most notable is speed of production. Due to the high antigenic variation in the HA and NA glycoproteins , a reverse-genetic approach allows the necessary genotype (i.e. one containing HA and NA proteins taken from currently circulating virus strains) to be formulated rapidly. [ 12 ] Additionally, since the final product of reverse-genetics attenuated vaccine production is a live virus, a higher immunogenicity is exhibited than in traditional inactivated vaccines, [ 17 ] which must be killed using chemical procedures before being administered as a vaccine. However, due to the live nature of attenuated viruses, complications may arise in immunodeficient patients. [ 18 ] There is also the possibility that a mutation in the virus could result in the vaccine reverting to a live unattenuated virus. [ 19 ]
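The "six plus two" combination described above is, at the genotype level, simple bookkeeping over the eight influenza A segments. The sketch below is an illustrative abstraction, not laboratory protocol: representing each strain as a mapping from segment name to its cDNA plasmid is an assumption made for the example, while the segment names themselves (PB2, PB1, PA, HA, NP, NA, M, NS) are standard influenza A nomenclature.

```python
# The eight influenza A genome segments; HA (segment 4) and NA (segment 6)
# carry the surface glycoproteins taken from the circulating strain.
SEGMENTS = ("PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS")

def six_plus_two(master: dict, circulating: dict) -> dict:
    """Assemble a vaccine genotype: HA and NA plasmids from the circulating
    strain, the remaining six segments from the attenuated master strain."""
    vaccine = {seg: master[seg] for seg in SEGMENTS}
    vaccine["HA"] = circulating["HA"]
    vaccine["NA"] = circulating["NA"]
    return vaccine
```

The point of the abstraction is that only the two antigenically variable segments change from year to year, which is what makes the eight-plasmid system fast to update.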
https://en.wikipedia.org/wiki/Reverse_genetics
Reverse mathematics is a program in mathematical logic that seeks to determine which axioms are required to prove theorems of mathematics. Its defining method can briefly be described as "going backwards from the theorems to the axioms ", in contrast to the ordinary mathematical practice of deriving theorems from axioms. It can be conceptualized as sculpting out necessary conditions from sufficient ones. The reverse mathematics program was foreshadowed by results in set theory such as the classical theorem that the axiom of choice and Zorn's lemma are equivalent over ZF set theory . The goal of reverse mathematics, however, is to study possible axioms of ordinary theorems of mathematics rather than possible axioms for set theory. Reverse mathematics is usually carried out using subsystems of second-order arithmetic , [ 1 ] where many of its definitions and methods are inspired by previous work in constructive analysis and proof theory . The use of second-order arithmetic also allows many techniques from recursion theory to be employed; many results in reverse mathematics have corresponding results in computable analysis . In higher-order reverse mathematics, the focus is on subsystems of higher-order arithmetic , and the associated richer language. [ clarification needed ] The program was founded by Harvey Friedman [ 2 ] [ 3 ] and brought forward by Steve Simpson . [ 1 ] In reverse mathematics, one starts with a framework language and a base theory—a core axiom system—that is too weak to prove most of the theorems one might be interested in, but still powerful enough to develop the definitions necessary to state these theorems. For example, to study the theorem “Every bounded sequence of real numbers has a supremum ” it is necessary to use a base system that can speak of real numbers and sequences of real numbers. 
For each theorem that can be stated in the base system but is not provable in the base system, the goal is to determine the particular axiom system (stronger than the base system) that is necessary to prove that theorem. To show that a system S is required to prove a theorem T , two proofs are required. The first proof shows T is provable from S ; this is an ordinary mathematical proof along with a justification that it can be carried out in the system S . The second proof, known as a reversal , shows that T itself implies S ; this proof is carried out in the base system. [ 1 ] The reversal establishes that no axiom system S′ that extends the base system can be weaker than S while still proving T . Most reverse mathematics research focuses on subsystems of second-order arithmetic . The body of research in reverse mathematics has established that weak subsystems of second-order arithmetic suffice to formalize almost all undergraduate-level mathematics. In second-order arithmetic, all objects can be represented as either natural numbers or sets of natural numbers. For example, in order to prove theorems about real numbers, the real numbers can be represented as Cauchy sequences of rational numbers , each of which can in turn be represented as a set of natural numbers. The axiom systems most often considered in reverse mathematics are defined using axiom schemes called comprehension schemes . Such a scheme states that any set of natural numbers definable by a formula of a given complexity exists. In this context, the complexity of formulas is measured using the arithmetical hierarchy and analytical hierarchy . The reason that reverse mathematics is not carried out using set theory as a base system is that the language of set theory is too expressive. Extremely complex sets of natural numbers can be defined by simple formulas in the language of set theory (which can quantify over arbitrary sets).
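The forward proof and the reversal described above can be written schematically (with B the base system, S the stronger system, and T the theorem):

```latex
\underbrace{S \vdash T}_{\text{forward direction}}
\qquad\text{and}\qquad
\underbrace{B + T \vdash S}_{\text{reversal}}
\qquad\Longrightarrow\qquad
B \vdash (T \leftrightarrow S).
```

The conjunction of the two proofs is what justifies calling T and S equivalent over the base system B.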
In the context of second-order arithmetic, results such as Post's theorem establish a close link between the complexity of a formula and the (non)computability of the set it defines. Another effect of using second-order arithmetic is the need to restrict general mathematical theorems to forms that can be expressed within arithmetic. For example, second-order arithmetic can express the principle "Every countable vector space has a basis" but it cannot express the principle "Every vector space has a basis". In practical terms, this means that theorems of algebra and combinatorics are restricted to countable structures, while theorems of analysis and topology are restricted to separable spaces . Many principles that imply the axiom of choice in their general form (such as "Every vector space has a basis") become provable in weak subsystems of second-order arithmetic when they are restricted. For example, "every field has an algebraic closure" is not provable in ZF set theory, but the restricted form "every countable field has an algebraic closure" is provable in RCA 0 , the weakest system typically employed in reverse mathematics. A recent strand of higher-order reverse mathematics research, initiated by Ulrich Kohlenbach in 2005, focuses on subsystems of higher-order arithmetic . [ 4 ] Due to the richer language of higher-order arithmetic, the use of representations (aka 'codes') common in second-order arithmetic is greatly reduced. For example, a continuous function on the Cantor space is just a function that maps binary sequences to binary sequences, and that also satisfies the usual 'epsilon-delta'-definition of continuity. Higher-order reverse mathematics includes higher-order versions of (second-order) comprehension schemes. Such a higher-order axiom states the existence of a functional that decides the truth or falsity of formulas of a given complexity.
In this context, the complexity of formulas is also measured using the arithmetical hierarchy and analytical hierarchy . The higher-order counterparts of the major subsystems of second-order arithmetic generally prove the same second-order sentences (or a large subset) as the original second-order systems. [ 5 ] For instance, the base theory of higher-order reverse mathematics, called RCA ω 0 , proves the same sentences as RCA 0 , up to language. As noted in the previous paragraph, second-order comprehension axioms easily generalize to the higher-order framework. However, theorems expressing the compactness of basic spaces behave quite differently in second- and higher-order arithmetic: on one hand, when restricted to countable covers/the language of second-order arithmetic, the compactness of the unit interval is provable in WKL 0 from the next section. On the other hand, given uncountable covers/the language of higher-order arithmetic, the compactness of the unit interval is only provable from (full) second-order arithmetic. [ 6 ] Other covering lemmas (e.g. due to Lindelöf , Vitali , Besicovitch , etc.) exhibit the same behavior, and many basic properties of the gauge integral are equivalent to the compactness of the underlying space. Second-order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings , groups , and fields , as well as points in effective Polish spaces , can be represented as sets of natural numbers, and modulo this representation can be studied in second-order arithmetic. Reverse mathematics makes use of several subsystems of second-order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second-order arithmetic over a weaker subsystem B . 
This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T . [ citation needed ] Steve Simpson describes five particular subsystems of second-order arithmetic, which he calls the Big Five , that occur frequently in reverse mathematics. [ 7 ] [ 8 ] In order of increasing strength, these systems are named by the initialisms RCA 0 , WKL 0 , ACA 0 , ATR 0 , and Π 1 1 -CA 0 . The following table summarizes the "big five" systems [ 9 ] and lists the counterpart systems in higher-order arithmetic. [ 5 ] The latter generally prove the same second-order sentences (or a large subset) as the original second-order systems. [ 5 ] The subscript 0 in these names means that the induction scheme has been restricted from the full second-order induction scheme. [ 10 ] For example, ACA 0 includes the induction axiom (0 ∈ X ∧ ∀ n ( n ∈ X → n + 1 ∈ X )) → ∀ n ( n ∈ X ). This together with the full comprehension axiom of second-order arithmetic implies the full second-order induction scheme given by the universal closure of ( φ (0) ∧ ∀ n ( φ ( n ) → φ ( n +1))) → ∀ n φ ( n ) for any second-order formula φ . However ACA 0 does not have the full comprehension axiom, and the subscript 0 is a reminder that it does not have the full second-order induction scheme either. This restriction is important: systems with restricted induction have significantly lower proof-theoretical ordinals than systems with the full second-order induction scheme. RCA 0 is the fragment of second-order arithmetic whose axioms are the axioms of Robinson arithmetic , induction for Σ 0 1 formulas , and comprehension for Δ 0 1 formulas. The subsystem RCA 0 is the one most commonly used as a base system for reverse mathematics.
The initials "RCA" stand for "recursive comprehension axiom", where "recursive" means "computable", as in recursive function . This name is used because RCA 0 corresponds informally to "computable mathematics". In particular, any set of natural numbers that can be proven to exist in RCA 0 is computable, and thus any theorem that implies that noncomputable sets exist is not provable in RCA 0 . To this extent, RCA 0 is a constructive system, although it does not meet the requirements of the program of constructivism because it is a theory in classical logic including the law of excluded middle . Despite its seeming weakness (of not proving any non-computable sets exist), RCA 0 is sufficient to prove a number of classical theorems which, therefore, require only minimal logical strength. These theorems are, in a sense, below the reach of the reverse mathematics enterprise because they are already provable in the base system. The classical theorems provable in RCA 0 include: The first-order part of RCA 0 (the theorems of the system that do not involve any set variables) is the set of theorems of first-order Peano arithmetic with induction limited to Σ 0 1 formulas. It is provably consistent, as is RCA 0 , in full first-order Peano arithmetic. The subsystem WKL 0 consists of RCA 0 plus a weak form of Kőnig's lemma , namely the statement that every infinite subtree of the full binary tree (the tree of all finite sequences of 0's and 1's) has an infinite path. This proposition, which is known as weak Kőnig's lemma , is easy to state in the language of second-order arithmetic. WKL 0 can also be defined as the principle of Σ 0 1 separation (given two Σ 0 1 formulas of a free variable n that are exclusive, there is a set containing all n satisfying the one and no n satisfying the other). When this axiom is added to RCA 0 , the resulting subsystem is called WKL 0 . 
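The two formulations of WKL 0 mentioned above can be stated formally. Writing 2^{<ℕ} for the set of finite binary sequences and f↾n for the length-n initial segment of f, weak Kőnig's lemma and Σ 0 1 separation read roughly as follows:

```latex
% Weak K\H{o}nig's lemma: every infinite subtree of the full binary tree
% has an infinite path.
\forall T \subseteq 2^{<\mathbb{N}}\,
  \bigl(\, T \text{ is an infinite tree} \;\rightarrow\;
    \exists f \in 2^{\mathbb{N}}\ \forall n\, (f \upharpoonright n \in T) \,\bigr)

% \Sigma^0_1 separation: mutually exclusive \Sigma^0_1 conditions
% \varphi, \psi can be separated by a set X.
\neg \exists n\, \bigl(\varphi(n) \wedge \psi(n)\bigr) \;\rightarrow\;
\exists X\, \forall n\, \bigl[\, (\varphi(n) \rightarrow n \in X)
  \wedge (\psi(n) \rightarrow n \notin X) \,\bigr]
```

Over RCA 0 , each of these two statements implies the other, which is why either can serve as the defining axiom of WKL 0 .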
A similar distinction between particular axioms on the one hand, and subsystems including the basic axioms and induction on the other hand, is made for the stronger subsystems described below. In a sense, weak Kőnig's lemma is a form of the axiom of choice (although, as stated, it can be proven in classical Zermelo–Fraenkel set theory without the axiom of choice). It is not constructively valid in some senses of the word "constructive". To show that WKL 0 is actually stronger than (not provable in) RCA 0 , it is sufficient to exhibit a theorem of WKL 0 that implies that noncomputable sets exist. This is not difficult; WKL 0 implies the existence of separating sets for effectively inseparable recursively enumerable sets. It turns out that RCA 0 and WKL 0 have the same first-order part, meaning that they prove the same first-order sentences. WKL 0 can prove a good number of classical mathematical results that do not follow from RCA 0 , however. These results are not expressible as first-order statements but can be expressed as second-order statements. The following results are equivalent to weak Kőnig's lemma and thus to WKL 0 over RCA 0 : ACA 0 is RCA 0 plus the comprehension scheme for arithmetical formulas (which is sometimes called the "arithmetical comprehension axiom"). That is, ACA 0 allows us to form the set of natural numbers satisfying an arbitrary arithmetical formula (one with no bound set variables, although possibly containing set parameters). [ 1 ] pp. 6--7 Actually, it suffices to add to RCA 0 the comprehension scheme for Σ 1 formulas (also including second-order free variables) in order to obtain full arithmetical comprehension. [ 1 ] Lemma III.1.3 The first-order part of ACA 0 is exactly first-order Peano arithmetic; ACA 0 is a conservative extension of first-order Peano arithmetic. [ 1 ] Corollary IX.1.6 The two systems are provably (in a weak system) equiconsistent. 
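The conservativity claim above can be stated precisely: ACA 0 proves no first-order arithmetic sentences beyond those of Peano arithmetic.

```latex
% Conservativity of ACA_0 over first-order Peano arithmetic:
% for every sentence \varphi in the first-order language of arithmetic,
\mathsf{ACA}_0 \vdash \varphi
\quad\Longleftrightarrow\quad
\mathsf{PA} \vdash \varphi .
```

The left-to-right direction is the non-trivial one; the right-to-left direction holds because ACA 0 contains the axioms of PA (with induction expressed via the second-order induction axiom and comprehension).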
ACA 0 can be thought of as a framework of predicative mathematics, although there are predicatively provable theorems that are not provable in ACA 0 . Most of the fundamental results about the natural numbers, and many other mathematical theorems, can be proven in this system. One way of seeing that ACA 0 is stronger than WKL 0 is to exhibit a model of WKL 0 that does not contain all arithmetical sets. In fact, it is possible to build a model of WKL 0 consisting entirely of low sets using the low basis theorem , since low sets relative to low sets are low. The following assertions are equivalent to ACA 0 over RCA 0 : The system ATR 0 adds to ACA 0 an axiom that states, informally, that any arithmetical functional (meaning any arithmetical formula with a free number variable n and a free set variable X , seen as the operator taking X to the set of n satisfying the formula) can be iterated transfinitely along any countable well ordering starting with any set. ATR 0 is equivalent over ACA 0 to the principle of Σ 1 1 separation. ATR 0 is impredicative, and has the proof-theoretic ordinal Γ 0 , the supremum of that of predicative systems. ATR 0 proves the consistency of ACA 0 , and thus by Gödel's theorem it is strictly stronger. The following assertions are equivalent to ATR 0 over RCA 0 : Π 1 1 -CA 0 is stronger than arithmetical transfinite recursion and is fully impredicative. It consists of RCA 0 plus the comprehension scheme for Π 1 1 formulas. In a sense, Π 1 1 -CA 0 comprehension is to arithmetical transfinite recursion (Σ 1 1 separation) as ACA 0 is to weak Kőnig's lemma (Σ 0 1 separation). It is equivalent to several statements of descriptive set theory whose proofs make use of strongly impredicative arguments; this equivalence shows that these impredicative arguments cannot be removed.
The following theorems are equivalent to Π 1 1 -CA 0 over RCA 0 : Over RCA 0 , Π 1 1 transfinite recursion, ∆ 0 2 determinacy, and the ∆ 1 1 Ramsey theorem are all equivalent to each other. Over RCA 0 , Σ 1 1 monotonic induction, Σ 0 2 determinacy, and the Σ 1 1 Ramsey theorem are all equivalent to each other. The following are equivalent: [ 15 ] [ 16 ] The set of Π 1 3 consequences of second-order arithmetic Z 2 has the same theory as RCA 0 + (schema over finite n ) determinacy in the n th level of the difference hierarchy of Σ 0 3 sets. [ 17 ] For a poset P , let MF( P ) denote the topological space consisting of the filters on P whose open sets are the sets of the form { F ∈ MF( P ) ∣ p ∈ F } for some p ∈ P . The following statement is equivalent to Π 1 2 -CA 0 over Π 1 1 -CA 0 : for any countable poset P , the topological space MF( P ) is completely metrizable iff it is regular . [ 18 ] The ω in ω-model stands for the set of non-negative integers (or finite ordinals). An ω-model is a model for a fragment of second-order arithmetic whose first-order part is the standard model of Peano arithmetic, [ 1 ] but whose second-order part may be non-standard. More precisely, an ω-model is given by a choice S ⊆ P(ω) of subsets of ω. The first-order variables are interpreted in the usual way as elements of ω, and + , × have their usual meanings, while second-order variables are interpreted as elements of S .
There is a standard ω-model where one just takes S to consist of all subsets of the integers. However, there are also other ω-models; for example, RCA 0 has a minimal ω-model where S consists of the recursive subsets of ω. A β-model is an ω-model that agrees with the standard ω-model on the truth of Π 1 1 and Σ 1 1 sentences (with parameters). Non-ω models are also useful, especially in the proofs of conservation theorems.
https://en.wikipedia.org/wiki/Reverse_mathematics
Within molecular and cell biology , reverse migration is the phenomenon in which some neutrophils migrate away from the inflammation site, against the chemokine gradient, during inflammation resolution. The activation of in vivo inflammatory pathways (such as hypoxia-inducible factor, HIF) alters this behavior of reverse migration. The introduction of HIF and other related inflammatory pathways can alter the usual behavior and pattern of neutrophil migration, allowing these neutrophils to move away from the injury site rather than toward it. [ 1 ] Several studies in the last few years have shown that reverse migration of neutrophils can play a dual role in the immune system response. On one hand, reverse migration can help in the resolution of inflammation by removing neutrophils once they have played their role at the site of injury. On the other hand, neutrophils re-entering the bloodstream can further contribute to the spread of a systemic infection. Therefore, it is essential to understand the regulation of reverse migration in order to treat a wide variety of inflammation-driven diseases, including sepsis. However, the mechanisms that regulate the complex process of reverse migration remain poorly understood for the most part. [ 2 ] Sepsis is a life-threatening organ dysfunction caused by the failure of the host immune system to respond adequately to infection. Sepsis can result from the spread of any type of infection, but the majority of cases of septic shock are the result of hospital-acquired gram-negative bacilli or gram-positive cocci infections. Septic shock occurs more often in patients who are immunocompromised and in patients who have chronic or debilitating diseases. [ 3 ] During the progression of sepsis, polymorphonuclear neutrophils (PMNs) are the most abundantly recruited innate immune cells at the site of infection, playing a critical role in the healing process.
PMNs exhibit reverse migration as sepsis progresses: following initial PMN infiltration, they migrate away from the injury site back into the vasculature, the arrangement of blood vessels around the site. The role of reverse migration in the immune response requires further investigation, but the current thinking is that reverse migration can play a role both in a protective response and in a tissue-damaging event. A better understanding of the role of reverse migration in sepsis can provide a critical branching point in the development of therapeutic approaches to sepsis. [ 4 ] The mechanisms that regulate polymorphonuclear neutrophil (PMN) reverse migration (rM) from inflammatory sites are still not entirely or well understood. Several factors that contribute to PMN forward migration, such as chemotaxis, chemotactic attractants and repellents, chemokine receptors, interactions with endothelial cells, and changes in PMN behavior, are however thought to play integral roles in controlling PMN reverse migration. [ 5 ] In a typical infection response, polymorphonuclear neutrophils (PMNs) exhibit antimicrobial activity to clear pathogens from a site of inflammation through degranulation , phagocytosis , and the release of cytokines . Another process recently found to play a critical role in coagulation and the neutrophil immune response is the formation of neutrophil extracellular traps (NETs). NETs are networks composed of chromatin fibres associated with granules containing antimicrobial peptides and enzymes, which assist in the capture and removal of invading microbial pathogens. [ 6 ] Once the antimicrobial functions of PMNs are carried out, it is essential to clear PMNs to restore homeostasis. [ 7 ] Previously, PMN clearance was thought to occur through apoptosis or necrosis, followed by phagocytosis by macrophages.
However, recent findings in imaging technology have revealed that PMNs can also migrate back into circulation, providing an alternative mechanism for removal of PMNs from the site of inflammation. [ 8 ] Neutrophils are highly motile immune cells that play a crucial role in the body’s defense against infection and injury. They exhibit two distinct types of movement: chemokinesis , in which they migrate randomly in response to environmental cues, and chemotaxis , which is a more directed, regulated movement toward a specific location in response to chemical signals. During an inflammation event or injury, a variety of chemical signals, including chemokines and cytokines, orchestrate the movement of neutrophils to and from the injury site. Once neutrophils exit the bloodstream through transendothelial migration, they encounter several chemoattractants that help direct them toward the injured tissue. Once they have arrived at the site of inflammation, neutrophils perform several immune functions to eliminate pathogens and clear any possible debris. However, the effective resolution of inflammation depends not only on the neutrophils' ability to reach the site of injury but also on their timely removal from the site after their immune functions are completed. This removal can occur through programmed cell death (apoptosis) or reverse migration, where neutrophils return to the bloodstream and circulation. Consequently, any impairment in neutrophils' ability to interpret and respond to chemoattractants and complex signaling cues can lead to immune dysfunction or contribute to chronic inflammatory diseases. [ 9 ] A major goal in immunology is to identify molecular targets involved in the body's response to wound-induced inflammation, which may include the process of reverse migration and the neutrophils involved. 
The introduction of necrosis- or apoptosis-inducing drugs may cause an overall increase in inflammation, even though such drugs would aid in the clearance of neutrophils. Thus, there is heightened interest in targeting reverse migration of PMNs for the development of anti-inflammatory therapies. [ 10 ] Several clinical trials are currently underway that aim to specifically target neutrophil migration signals. One phase II trial involves the drug Reparixin, which has the potential to combat ischaemia–reperfusion injury and inflammation after on-pump coronary artery bypass graft surgery. [ 11 ] Since this initial study in 2015, Reparixin has also been investigated as a treatment for patients with severe COVID-19-related pneumonia. Innovative approaches to inflammation and infection, such as the study of potential therapeutic compounds like Reparixin, have the potential to provide unprecedented treatments for traditionally life-threatening infections. [ 12 ]
https://en.wikipedia.org/wiki/Reverse_migration_(immunology)
The reverse northern blot is a method by which gene expression patterns may be analyzed by comparing isolated RNA molecules from a tester sample to samples in a control cDNA library. It is a variant of the northern blot in which the nucleic acid immobilized on a membrane is a collection of isolated DNA fragments rather than RNA , and the probe is RNA extracted from a tissue and radioactively labelled. A reverse northern blot can be used to profile expression levels of particular sets of RNA sequences in a tissue or to determine the presence of a particular RNA sequence in a sample. [ 1 ] Although DNA microarrays and newer next-generation techniques have generally supplanted reverse northern blotting, it is still utilized today and provides a relatively cheap and easy means of defining the expression of large sets of genes. To prepare the reverse northern membrane, cDNA sequences for transcripts of interest are immobilized on nylon membranes, which can be accomplished by use of dot blots or bidirectional agarose gel blotting and UV fixation of the DNA to the membranes. In many cases, cDNA probes may be preferred over RNA probes in order to mitigate problems of RNA degradation by RNases or tissue metabolites. [ 2 ] Prepared reverse northern blot membranes are pre-hybridized in Denhardt's solution with SSC buffer, and labeled cDNA probes are denatured at 100 °C and added to the pre-hybridization solution. The membrane is incubated with the probes for at least 15 hours at 65 °C, then washed and exposed. [ 3 ] The reverse northern blot, much like the northern blot upon which it is based, is used to determine levels of gene expression in particular tissues. Compared to the northern blot, the reverse northern blot can probe a large number of transcripts at once and places less stringent demands on probe specificity.
[ 4 ] Often this will involve the use of suppression subtractive hybridization (SSH) libraries or differential display to isolate differentially expressed transcripts and create bacterial clones containing inserts for these sequences. These serve as the targets hybridized to the membrane and are probed by sample RNA. Expression levels can be quantified by the increase or decrease in fluorescent or radioactive signal over a control treatment. [ 3 ] Bands or dots which appear darker and larger signify transcripts that are over-expressed in a sample of interest, while lighter dots indicate that a transcript is down-regulated versus a control sample. Due to a tendency to generate high numbers of false positives caused by band contamination with heterogeneous sequences, differential display hits need to be confirmed by an alternative method for determining differential expression. [ 5 ] While northern blot or qPCR are often used to confirm results, both techniques have drawbacks. Northern blotting is limited to probing one mRNA at a time, while qPCR requires transcripts long enough to design primers against, and its probes can be costly. Therefore, reverse northern blotting has been used as one means of confirming hits from DD-PCR, i.e. sequences with altered expression levels. In this case, the membrane is coated with amplified DD-PCR products which have been cloned into vectors, sequenced, and reamplified. [ 4 ] DNA microarrays operate by similar procedures to those used in the reverse northern blot, consisting of many DNA probes hybridized to a solid glass, plastic or silicon substrate which is probed with labeled RNA or cDNA. This allows for significantly expanded gene expression profiling . [ 6 ] Arrays may be purchased from commercial suppliers tailored to research needs, e.g. cancer, cell cycle, or toxicology microarrays, or may be generated for custom targets.
[ 7 ] Fluorescent or radioactive signals generated by hybridization of isolated sample cDNA probes will be proportional to the transcript's abundance in the tissue being studied. [ 8 ]
https://en.wikipedia.org/wiki/Reverse_northern_blot
In the field of drug discovery , reverse pharmacology , [ 1 ] [ 2 ] [ 3 ] also known as target-based drug discovery (TDD), [ 4 ] begins with the hypothesis that modulating the activity of a specific protein target thought to be disease-modifying will have beneficial therapeutic effects. Screening of chemical libraries of small molecules is then used to identify compounds that bind with high affinity to the target. The hits from these screens are then used as starting points for drug discovery. This method became popular after the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins. It is the most widely used approach in drug discovery today. [ 5 ] Unlike classical ( forward ) pharmacology, in the reverse pharmacology approach the in vivo efficacy of identified active ( lead ) compounds is usually assessed only in the final stages of drug discovery.
https://en.wikipedia.org/wiki/Reverse_pharmacology
A reverse phase protein lysate microarray ( RPMA ) is a protein microarray designed as a dot-blot platform that allows measurement of protein expression levels in a large number of biological samples simultaneously in a quantitative manner when high-quality antibodies are available. [ 1 ] Technically, minuscule amounts of (a) cellular lysates, from intact cells or laser capture microdissected cells, or (b) body fluids such as serum, CSF, urine, vitreous, saliva, etc., are immobilized on individual spots on a microarray that is then incubated with a single specific antibody to detect expression of the target protein across many samples. [ 2 ] A summary video of RPPA is available. [ 3 ] One microarray, depending on the design, can accommodate hundreds to thousands of samples that are printed in a series of replicates. Detection is performed using either a primary or a secondary labeled antibody by chemiluminescent , fluorescent or colorimetric assays. The array is then imaged and the obtained data is quantified. Multiplexing is achieved by probing multiple arrays spotted with the same lysate with different antibodies simultaneously and can be implemented as a quantitative calibrated assay. [ 4 ] In addition, since RPMA can utilize lysates of whole cells, undissected tissue, or microdissected cells, it can provide direct quantifiable information concerning post-translationally modified proteins that are not accessible with other high-throughput techniques. [ 5 ] [ 6 ] Thus, RPMA provides high-dimensional proteomic data in a high-throughput, sensitive and quantitative manner. [ 5 ] However, since the signal generated by RPMA could arise from nonspecific primary or secondary antibody binding, as is seen in other techniques such as ELISA or immunohistochemistry, the signal from a single spot could be due to cross-reactivity . Thus, the antibodies used in RPMA must be carefully validated for specificity and performance against cell lysates by western blot .
[ 1 ] [ 7 ] RPMA has various uses, such as quantitative analysis of protein expression in cancer cells, body fluids or tissues for biomarker profiling, cell signaling analysis, and clinical prognosis, diagnosis or therapeutic prediction. [ 1 ] This is possible because an RPMA with lysates from different cell lines and/or laser capture microdissected tissue biopsies of different disease stages, from various organs of one or many patients, can be constructed to determine the relative or absolute abundance or differential expression of a protein marker in a single experiment. It is also used for monitoring protein dynamics in response to various stimuli or doses of drugs at multiple time points. [ 1 ] Other applications of RPMA include exploring and mapping protein signaling pathways, evaluating molecular drug targets and understanding a candidate drug's mechanism of action. [ 8 ] It has also been suggested as a potential early screening test in cancer patients to facilitate or guide therapeutic decision making. Other protein microarrays include forward protein microarrays (PMAs) and antibody microarrays (AMAs). PMAs immobilize individual purified, and sometimes denatured, recombinant proteins on the microarray, which are screened by antibodies and other small compounds. AMAs immobilize antibodies that capture analytes from the sample applied on the microarray. [ 4 ] [ 6 ] The target protein is detected either by direct labeling or by a secondary labeled antibody against a different epitope on the analyte target protein (sandwich approach). Both PMAs and AMAs can be classified as forward phase arrays, as they involve immobilization of a bait to capture an analyte. In forward phase arrays, each array is incubated with one test sample, such as a cellular lysate or a patient's serum, but multiple analytes in the sample are tested simultaneously.
[ 4 ] Figure 1 shows a forward phase (using an antibody as the bait) and a reverse phase protein microarray at the molecular level. Depending on the research question or the type and aim of the study, an RPMA can be designed by selecting the content of the array, the number of samples, sample placement within micro-plates, the array layout, the type of microarrayer, the correct detection antibody, the signal detection method, the inclusion of controls and the quality control of the samples. The actual experiment is then set up in the laboratory and the results obtained are quantified and analyzed. The experimental stages are listed below: Cells are grown in T-25 flasks at 37 °C and 5% CO 2 in an appropriate medium. [ 1 ] Depending on the design of the study, once the cells are confluent they can be treated with drugs or growth factors, or they can be irradiated before the lysis step. For time course studies, a stimulant is added to a set of flasks concurrently and the flasks are then processed at different time points. [ 1 ] For drug dose studies, a set of flasks is treated with different doses of the drug and all the flasks are collected at the same time. [ 1 ] If an RPMA containing lysates of cells from one or more tissues is to be made, laser capture microdissection (LCM) or fine-needle aspiration is used to isolate specific cells from a region of tissue microscopically. [ 4 ] [ 8 ] Pellets from cells collected through any of the above means are lysed with a cell lysis buffer to obtain a high protein concentration. [ 1 ] Aliquots of the lysates are pooled and resolved by single-lane SDS-PAGE followed by western blotting onto a nitrocellulose membrane. The membrane is cut into four-millimeter strips, and each strip is probed with a different antibody. Strips with a single band indicate specific antibodies that are suitable for RPMA use. Antibody performance should also be validated with a smaller sample size under identical conditions before actual sample collection for RPMA.
[ 1 ] Cell lysates are collected and are serially diluted six to ten times if using colorimetric techniques, or used without dilution when fluorometric detection is used (fluorescence has a greater dynamic range than colorimetric detection). Serial dilutions are then plated in replicates into a 384- or a 1536-well microtiter plate. [ 1 ] The lysates are then printed onto either nitrocellulose or PVDF membrane coated glass slides by a microarrayer such as the Aushon BioSystems 2470 or the Flexys robot (Genomic Solutions). [ 1 ] [ 9 ] The Aushon 2470, with its solid pin system, is an ideal choice as it can produce arrays from very viscous lysates and has humidity control and an automated slide-supply system. [ 1 ] That said, published papers show that Arrayit Microarray Printing Pins can also be used and produce microarrays with much higher throughput using less lysate. [ 10 ] The membrane coated glass slides are commercially available from several companies such as Schleicher and Schuell Bioscience (now owned by GE Whatman, www.whatman.com), [ 9 ] Grace BioLabs (www.gracebio.com), Thermo Scientific, and SCHOTT Nexterion (www.schott.com/nexterion). [ 11 ] After the slides are printed, non-specific binding sites on the array are blocked using a blocking buffer such as I-Block, and the arrays are probed with a primary antibody followed by a secondary antibody. Detection is usually conducted with the DakoCytomation catalyzed signal amplification (CSA) system. For signal amplification, slides are incubated with a streptavidin-biotin-peroxidase complex followed by biotinyl-tyramide/hydrogen peroxide and streptavidin-peroxidase. Development is completed using hydrogen peroxide, and scans of the slides are obtained. [ 1 ] Tyramide signal amplification works as follows: immobilized horseradish peroxidase (HRP) converts tyramide into a reactive intermediate in the presence of hydrogen peroxide.
Activated tyramide binds to neighboring proteins close to the site where the activating HRP enzyme is bound. This leads to more tyramide molecules being deposited at the site; hence the signal amplification. [ 12 ] [ 13 ] Lance Liotta and Emanuel Petricoin invented the RPMA technique in 2001 (see history section below), and have developed a multiplexed detection method using near-infrared fluorescent techniques. [ 14 ] In this study, they report the use of a dual dye-based approach that can effectively double the number of endpoints observed per array, allowing, for example, both phospho-specific and total protein levels to be measured and analyzed at once. Once immunostaining has been performed, protein expression must be quantified. The signal levels can be obtained by using the reflective mode of an ordinary optical flatbed scanner if a colorimetric detection technique is used, [ 1 ] or by laser scanning, such as with a TECAN LS system, if fluorescent techniques are used. Two programs available online (P-SCAN and ProteinScan) can then be used to convert the scanned image into numerical values. [ 1 ] These programs quantify signal intensities at each spot and use a dose interpolation algorithm (DI25) to compute a single normalized protein expression value for each sample. Normalization is necessary to account for differences in total protein concentration between samples, so that antibody staining can be directly compared between them. [ 15 ] This can be achieved by performing a parallel experiment in which total protein is stained by colloidal gold or Sypro Ruby total protein staining. [ 1 ] When multiple RPMAs are analyzed, the signal intensity values can be displayed as a heat map, allowing for Bayesian clustering analysis and profiling of signaling pathways. [ 15 ] A software tool custom-designed for RPMAs is MicroVigene, by VigeneTech, Inc.
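The normalization step described above can be sketched numerically. This is a minimal illustration of dividing antibody signal by total-protein signal, not the actual P-SCAN, ProteinScan, or MicroVigene algorithm; the sample names and intensity values are invented.

```python
# Minimal sketch of RPMA signal normalization. Each sample's antibody-stain
# intensity is divided by its total-protein-stain intensity (e.g. from a
# parallel Sypro Ruby-stained array), so expression can be compared across
# samples loaded with different amounts of protein. Values are illustrative.

antibody_signal = {"sample_A": 1200.0, "sample_B": 800.0, "sample_C": 1500.0}
total_protein = {"sample_A": 3000.0, "sample_B": 1000.0, "sample_C": 3000.0}

def normalize(signal, total):
    """Return per-sample antibody signal normalized by total protein."""
    return {s: signal[s] / total[s] for s in signal}

normalized = normalize(antibody_signal, total_protein)
# Here sample_B ends up with the highest normalized expression despite the
# lowest raw signal, because it carried the least total protein.
```

Without this correction, a spot's raw intensity confounds the abundance of the target protein with the total amount of lysate printed.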
The greatest strength of RPMAs is that they allow for high-throughput, multiplexed, ultra-sensitive detection of proteins from extremely small amounts of input material, a feat which cannot be achieved by conventional western blotting or ELISA . [ 1 ] [ 9 ] The small spot size on the microarray, ranging in diameter from 85 to 200 micrometres, enables the analysis of thousands of samples with the same antibody in one experiment. [ 9 ] RPMAs have increased sensitivity and are capable of detecting proteins in the picogram range. [ 9 ] Some researchers have even reported detection of proteins in the attogram range. [ 9 ] This is a significant improvement over protein detection by ELISA , which requires microgram amounts of protein. [ 6 ] The increased sensitivity of RPMAs is due to the miniature format of the array, which leads to an increase in signal density (signal intensity per area), [ 9 ] coupled with tyramide deposition-enabled enhancement. The high sensitivity of RPMAs allows for the detection of low-abundance proteins or biomarkers, such as phosphorylated signaling proteins, from very small amounts of starting material such as biopsy samples, which are often contaminated with normal tissue. [ 4 ] Using laser capture microdissection, lysates from as few as 10 cells can be analyzed, [ 4 ] with each spot containing less than a hundredth of a cell equivalent of protein. A great improvement of RPMAs over traditional forward phase protein arrays is the reduction in the number of antibodies needed to detect a protein. Forward phase protein arrays typically use a sandwich method to capture and detect the desired protein. [ 4 ] [ 15 ] This means there must be two epitopes on the protein (one to capture the protein and one to detect it) for which specific antibodies are available.
[ 15 ] Other forward phase protein microarrays directly label the samples; however, there is often variability in the labeling efficiency for different proteins, and the labeling often destroys the epitope to which the antibody binds. [ 15 ] This problem is overcome by RPMAs, as samples need not be labeled directly. Another strength of RPMAs over forward phase protein microarrays and western blotting is the uniformity of results, as all samples on the chip are probed with the same primary and secondary antibody and the same concentration of amplification reagents for the same length of time. [ 9 ] This allows for the quantification of differences in protein levels across all samples. Furthermore, printing each sample on the chip in serial dilution (for colorimetric detection) provides an internal control to ensure analysis is performed only in the linear dynamic range of the assay. [ 4 ] Optimally, printing calibrators and high and low controls directly on the same chip provides an unmatched ability to quantitatively measure each protein over time and between experiments. A problem encountered with tissue microarrays is antigen retrieval and the inherent subjectivity of immunohistochemistry. Antibodies, especially phospho-specific reagents, often detect linear peptide sequences that may be masked by the three-dimensional conformation of the protein. [ 15 ] This problem is overcome with RPMAs, as the samples can be denatured, revealing any concealed epitopes. [ 15 ] The biggest limitation of RPMA, as is the case for all immunoassays, is its dependence on antibodies for detection of proteins. Currently there is a limited but rapidly growing number of signaling proteins for which antibodies exist that give an analyzable signal. [ 15 ] In addition, finding the appropriate antibody can require extensive screening of many antibodies by western blotting prior to beginning RPMA analysis.
[ 1 ] To overcome this issue, two open resource databases have been created to display western blot results for antibodies that have good binding specificity within the expected range. [ 1 ] [ 16 ] [ 17 ] Furthermore, RPMAs, unlike western blots, do not resolve protein fractions by molecular weight. [ 1 ] Thus, it is critical that upfront antibody validation be performed. RPMA was first introduced in 2001 in a paper by Lance Liotta and Emanuel Petricoin, who invented the technology. [ 8 ] The authors used the technique to successfully analyze the state of a pro-survival checkpoint protein at the microscopic transition stage, using laser capture microdissection of histologically normal prostate epithelium, prostate intraepithelial neoplasia, and patient-matched invasive prostate cancer. [ 8 ] Since then, RPMA has been used in many basic-biology, translational, and clinical research studies. In addition, the technique has now been brought into clinical trials for the first time, in which patients with metastatic colorectal and breast cancers are chosen for therapy based on the results of the RPMA. The technique has been commercialized for personalized medicine-based applications by Theranostics Health, Inc.
https://en.wikipedia.org/wiki/Reverse_phase_protein_lysate_microarray
Reverse roll coating is a roll-to-roll coating method for wet coatings. It is distinguished from other roll coating methods by having two reverse-running nips. The metering roll and the applicator roll contra-rotate, with an accurate gap between them. The surface of the applicator roll is loaded with an excess of coating prior to the metering nip, so its surface emerges from the metering nip with a precise thickness of coating equal to the gap. At the application nip, the applicator roll transfers all of this coating to the substrate by running in the opposite direction to the movement of the substrate, wiping the coating onto it. Reverse roll coating machines demand high specifications in their construction, e.g. in the machining and bearings of the rollers and in highly uniform speed control. This makes them relatively expensive compared to other coating technologies. Unlike many other coating methods, however, they can handle coatings with a very wide range of viscosities , from 1 to more than 50,000 mPa·s, and are capable of producing extremely polished finishes on the coatings they apply. They have been produced in a variety of 3-roll and 4-roll configurations. [ 1 ] Products that have been manufactured on reverse roll coating machines include magnetic tapes , coated papers, and pressure sensitive tapes . [ 2 ] The rise of slot-die coating has tended to eclipse reverse roll coaters, since in most if not all cases the same products can be made on cheaper machinery.
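The relationship between the metering gap and the final wet-film thickness can be illustrated with a simple mass balance. This is an idealized sketch under the assumptions stated in the comments (the applicator carries a film equal to the gap and transfers all of it at the application nip); the function name and numbers are illustrative, not a machine specification.

```python
# Idealized mass-balance sketch for reverse roll coating: the metering gap
# fixes the wet-film thickness on the applicator roll, and complete transfer
# at the application nip conserves volumetric flow per unit width:
#     t_web * V_web = t_gap * V_roll

def wet_film_on_web(gap_um, applicator_speed, web_speed):
    """Wet-film thickness deposited on the substrate (micrometres),
    assuming complete transfer of a film equal to the metering gap."""
    return gap_um * applicator_speed / web_speed

# A 100 um gap with the applicator running at half the web speed
# deposits a 50 um wet film on the web.
print(wet_film_on_web(100.0, 0.5, 1.0))  # 50.0
```

Under this idealization, the roll-to-web speed ratio gives an extra degree of freedom for tuning coat weight beyond adjusting the gap alone.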
https://en.wikipedia.org/wiki/Reverse_roll_coating
A reverse salient refers to a component of a technological system that, due to its insufficient development, prevents the system in its entirety from achieving its development goals. The term was coined by Thomas P. Hughes , [ 1 ] in his work Networks of Power: Electrification in Western Society, 1880-1930 . [ 2 ] Technological systems may refer to a hierarchically nested structure of technological parts, whereby the system is seen as a composition of interdependent sub-systems that are themselves systems comprising further sub-systems. [ 3 ] In this manner the holistic system and its properties are seen to be synthesized through the sub-systems that constitute them. Technological systems may also be seen as socio-technical systems that contain both technical and social sub-systems, such as the creators and users of technology , as well as overseeing regulatory bodies. In both perspectives, technological systems are taken to be goal-seeking, and therefore to evolve towards objectives. [ 4 ] Hughes [ 1 ] proposed that technological systems pass through certain phases during their evolution. The first is invention and development, owing greatly to the efforts of inventors and entrepreneurs , such as Thomas Edison in the development of the electric technological system. The second is the era of technological transfer from one region or society to others, for example, the dissemination of Edison's electric system from New York City to London and Berlin. The third phase is one of growth and expansion, marked by efforts to improve the system's performance, such as output efficiency. By this phase the system is dependent on the satisfactory evolution of all its components' performances. The development of technological systems is therefore reliant on reciprocated and interdependent cause-and-effect processes amongst social and technical components.
It may be described as co-evolutionary , where the balanced co-evolution of system components carries significance in establishing desired system progress. Subsequently, a sub-system which evolves at a sufficient pace contributes positively to the collective development, while one which lags prevents the system from achieving its targeted goals. Hughes names these problematic sub-systems “reverse salients”. [ 1 ] [ 5 ] A reverse salient is the inverse of a salient, which depicts the forward protrusion along an object's profile or a line of battle. [ 5 ] Hence, reverse salients are the backward projections along similar, continuous lines. The reverse salient subsequently refers to the sub-system that has lagged behind the advancing performance frontier of the system due to its lack of sufficient performance. In turn, the reverse salient hampers the progress, or prevents the fulfillment of the potential development, of the collective system. In line with the socio-technical standpoint, reverse salients can be technical elements such as motors and capacitors of an electric system, or social elements such as organizations or productive units. [ 1 ] Because reverse salients limit system development, the further development of the system lies in the correction of the reverse salient, where correction is attained through incremental or radical innovations . The reverse salient denotes a focusing device, in the words of Nathan Rosenberg , [ 6 ] for technological system stakeholders , who strive to remove it through innovation. It is possible that the reverse salient cannot be corrected within the bounds of the existing technological system through incremental innovations . Consequently, radical innovations may be needed to correct the reverse salient.
However, radical innovations can lead to the creation of new and different technological systems, as witnessed in the emergence of the alternating current system that overcame the problem of low-cost electricity distribution, which the direct current system could not solve. [ 1 ] Hence, the reverse salient is a useful concept for analyzing technological system evolution, [ 7 ] because the analysis of technological systems often centers on the factors that limit system development. More than technical components, these factors may also be social components. Subsequently, reverse salients may be more applicable in certain contexts to denote system performance hindrance than similar or overlapping concepts such as bottleneck and technological imbalance or disequilibrium. [ 8 ] The reverse salient refers to an extremely complex situation in which individuals, groups, material forces, historical influences, and other factors have idiosyncratic, causal forces, and in which accidents as well as trends play a part. On the contrary, the disequilibrium concept suggests a relatively straightforward abstraction of physical science . [ 1 ] Additionally, while the reverse salient and bottleneck concepts share similarities and have been used interchangeably in particular contexts, the reverse salient often refers to the sub-system that not only curbs the performance or output of the collective system but also requires correction because of its limiting effect. This is not necessarily the case with bottlenecks, which are geometrically too symmetrical [ 1 ] and therefore do not represent the complexity of system evolution. For instance, a particular system's output performance may be compromised due to a bottleneck sub-system, but the bottleneck will not require improvement if the system's present output performance is satisfactory.
If, on the other hand, a higher level of performance were required of the same system, the bottleneck may emerge as a reverse salient that holds the system back from attaining that higher output performance. While numerous studies illustrate technological systems that have been hampered by reverse salients, the most seminal work in this field of study is that of Hughes, [ 1 ] who gives a historical account of the development of Edison's direct-current electric system. In order to supply electricity within a defined region of distribution, sub-systems such as the direct-current generator were identified as reverse salients and corrected. The most notable limitation of the direct-current system was, however, its low-voltage transmission distance, and the resulting cost of distributing electricity beyond a certain range. To reduce costs, Edison introduced a three-wire system to replace the previously installed two-wire alternative and trialed different configurations of generators, as well as the use of storage batteries. These improvements, however, did not correct the reverse salient completely. The satisfactory resolution of the problem was eventually provided by the radical innovation of the alternating current system. Since Hughes' seminal work, other authors have also provided examples of reverse salients in different technological systems. In ballistic missile development, where the systemic objective has been to increase missile accuracy, MacKenzie [ 9 ] has identified the gyroscope sub-system as a technical reverse salient. Takeishi and Lee [ 10 ] have argued that music copyright managing institutions have acted as a social reverse salient in the evolution of the mobile music technology system in Japan and Korea, where the objective was to proliferate mobile music throughout the end-user market.
Further, Mulder and Knot [ 11 ] consider the development of the PVC ( polyvinyl chloride ) plastic technology system to have been sequentially hampered by several states of reverse salience, including: difficulty in processing PVC material, the quality of manufactured products, health concerns for individuals exposed to effluent from PVC manufacturing facilities, and finally the carcinogenic nature of vinyl chloride. The magnitude of reverse salience emerges as an informative parameter in technological systems analysis as it signifies not only the technological disparity between sub-systems but also the entire system's limited level of performance. Notwithstanding its importance, the literature studying technological system evolution has remained limited in terms of analytical tools that measure the state of reverse salience. Dedehayir and Mäkinen [ 12 ] [ 13 ] have subsequently proposed an absolute performance gap measure of reverse salience magnitude. This measure evaluates the technological performance differential between the salient sub-system (i.e. the advanced sub-system) and the reverse salient sub-system at a particular point in time. In turn, by evaluating a series of performance differentials over time, the performance gap measure helps reflect the dynamics of change in the evolving technological system through changing reverse salience magnitude. According to Thomas Hughes, the name "reverse salient" was inspired by the Verdun salient during the Battle of Verdun , which he claimed his history professor in college referred to as a "reverse salient". He described it as a backward bulge in the advancing line of a military front . [ 14 ] This is the same as a salient ; moreover, "reverse salient" is not a military term in general usage.
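The absolute performance gap measure described above lends itself to a simple calculation. The sketch below uses invented yearly performance figures for a hypothetical salient and reverse-salient sub-system pair; all names and numbers are illustrative, not taken from the cited studies.

```python
# Sketch of the absolute performance gap measure of reverse salience
# (after Dedehayir and Mäkinen). All performance values are hypothetical.

# Yearly performance of the advanced (salient) sub-system and the
# lagging (reverse salient) sub-system, in arbitrary units.
salient = {2000: 10.0, 2001: 14.0, 2002: 19.0, 2003: 26.0}
lagging = {2000: 9.0, 2001: 11.0, 2002: 13.0, 2003: 15.0}

def performance_gap(salient, lagging):
    """Absolute performance gap at each observed point in time."""
    return {year: salient[year] - lagging[year] for year in salient}

gaps = performance_gap(salient, lagging)
# A widening gap over successive years signals growing reverse salience.
print(gaps)
```

Tracking the series of gaps over time, rather than a single differential, is what reflects the dynamics of reverse salience in the evolving system.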
https://en.wikipedia.org/wiki/Reverse_salient
A reverse transcriptase ( RT ) is an enzyme used to convert an RNA genome to DNA , a process termed reverse transcription . Reverse transcriptases are used by viruses such as HIV and hepatitis B to replicate their genomes, by retrotransposon mobile genetic elements to proliferate within the host genome, and by eukaryotic cells to extend the telomeres at the ends of their linear chromosomes . The process does not violate the flow of genetic information as described by the classical central dogma , but rather expands it to include transfers of information from RNA to DNA. [ 2 ] [ 3 ] [ 4 ] Retroviral RT has three sequential biochemical activities: RNA-dependent DNA polymerase activity, ribonuclease H (RNase H) activity, and DNA-dependent DNA polymerase activity. Collectively, these activities enable the enzyme to convert single-stranded RNA into double-stranded cDNA. In retroviruses and retrotransposons, this cDNA can then integrate into the host genome, from which new RNA copies can be made via host-cell transcription . The same sequence of reactions is widely used in the laboratory to convert RNA to DNA for use in molecular cloning , RNA sequencing , polymerase chain reaction (PCR), or genome analysis . Reverse transcriptases were discovered by Howard Temin at the University of Wisconsin–Madison in Rous sarcoma virions [ 5 ] and independently isolated by David Baltimore in 1970 at MIT from two RNA tumour viruses: murine leukemia virus and again Rous sarcoma virus . [ 6 ] For their achievements, they shared the 1975 Nobel Prize in Physiology or Medicine (with Renato Dulbecco ). The enzymes are encoded and used by viruses that use reverse transcription as a step in the process of replication. Reverse-transcribing RNA viruses , such as retroviruses , use the enzyme to reverse-transcribe their RNA genomes into DNA, which is then integrated into the host genome and replicated along with it.
Reverse-transcribing DNA viruses , such as the hepadnaviruses , allow RNA to serve as a template for assembling and making DNA strands. HIV infects humans with the use of this enzyme. Without reverse transcriptase, the viral genome would not be able to incorporate into the host cell's genome, resulting in failure to replicate. [ citation needed ] Reverse transcriptase creates double-stranded DNA from an RNA template. In virus species with reverse transcriptase lacking DNA-dependent DNA polymerase activity, creation of double-stranded DNA can possibly be done by host-encoded DNA polymerase δ , mistaking the viral DNA-RNA for a primer and synthesizing a double-stranded DNA by a similar mechanism as in primer removal , where the newly synthesized DNA displaces the original RNA template. [ citation needed ] The process of reverse transcription, also called retrotranscription, is extremely error-prone, and it is during this step that mutations may occur. Such mutations may cause drug resistance . [ citation needed ] Retroviruses , also referred to as class VI ssRNA-RT viruses, are RNA reverse-transcribing viruses with a DNA intermediate. Their genomes consist of two molecules of positive-sense single-stranded RNA with a 5' cap and 3' polyadenylated tail . Examples of retroviruses include the human immunodeficiency virus ( HIV ) and the human T-lymphotropic virus ( HTLV ). Creation of double-stranded DNA occurs in the cytosol [ 10 ] as a series of steps, and also involves strand transfer , in which there is a translocation of short DNA product from initial RNA-dependent DNA synthesis to acceptor template regions at the other end of the genome, which are later reached and processed by the reverse transcriptase for its DNA-dependent DNA activity. [ 11 ] Retroviral RNA is arranged from the 5' terminus to the 3' terminus. The site where the primer is annealed to viral RNA is called the primer-binding site (PBS).
The RNA 5' end to the PBS site is called U5, and the RNA 3' end to the PBS is called the leader. The tRNA primer is unwound across 14 to 22 nucleotides and forms a base-paired duplex with the viral RNA at the PBS. The fact that the PBS is located near the 5' terminus of viral RNA is unusual because reverse transcriptase synthesizes DNA from the 3' end of the primer in the 5' to 3' direction (with respect to the newly synthesized DNA strand). Therefore, the primer and reverse transcriptase must be relocated to the 3' end of viral RNA. In order to accomplish this repositioning, multiple steps and various enzymes, including DNA polymerase , ribonuclease H (RNase H) and polynucleotide unwinding activities, are needed. [ 12 ] [ 13 ] The HIV reverse transcriptase also has ribonuclease activity that degrades the viral RNA during the synthesis of cDNA, as well as DNA-dependent DNA polymerase activity that copies the sense cDNA strand into an antisense DNA to form a double-stranded viral DNA intermediate (vDNA). [ 14 ] The HIV viral RNA structural elements regulate the progression of reverse transcription. [ 15 ] Self-replicating stretches of eukaryotic genomes known as retrotransposons utilize reverse transcriptase to move from one position in the genome to another via an RNA intermediate. They are found abundantly in the genomes of plants and animals. Telomerase is another reverse transcriptase found in many eukaryotes, including humans, which carries its own RNA template; this RNA is used as a template for DNA replication . [ 16 ] Initial reports of reverse transcriptase in prokaryotes came as far back as 1971 in France ( Beljanski et al., 1971a, 1972) and a few years later in the USSR (Romashchenko 1977 [ 17 ] ). These have since been broadly described as part of bacterial retrons , distinct sequences that code for reverse transcriptase and are used in the synthesis of msDNA . In order to initiate synthesis of DNA, a primer is needed. In bacteria, the primer is synthesized during replication.
[ 18 ] Valerian Dolja of Oregon State argues that viruses, due to their diversity, have played an evolutionary role in the development of cellular life, with reverse transcriptase playing a central role. [ 19 ] The reverse transcriptase employs a "right hand" structure similar to that found in other viral nucleic acid polymerases . [ 20 ] [ 21 ] In addition to the transcription function, retroviral reverse transcriptases have a domain belonging to the RNase H family, which is vital to their replication. By degrading the RNA template, it allows the other strand of DNA to be synthesized. [ 22 ] Some fragments from the digestion also serve as the primer for the DNA polymerase (either the same enzyme or a host protein), responsible for making the other (plus) strand. [ 20 ] There are three different replication systems during the life cycle of a retrovirus. The first process is the reverse transcriptase synthesis of viral DNA from viral RNA, which then forms newly made complementary DNA strands. The second replication process occurs when host cellular DNA polymerase replicates the integrated viral DNA. Lastly, RNA polymerase II transcribes the proviral DNA into RNA, which will be packed into virions. Mutation can occur during one or all of these replication steps. [ 23 ] Reverse transcriptase has a high error rate when transcribing RNA into DNA since, unlike most other DNA polymerases , it has no proofreading ability. This high error rate allows mutations to accumulate at an accelerated rate relative to proofread forms of replication. The commercially available reverse transcriptases produced by Promega are quoted by their manuals as having error rates in the range of 1 in 17,000 bases for AMV and 1 in 30,000 bases for M-MLV. [ 24 ] Other than creating single-nucleotide polymorphisms , reverse transcriptases have also been shown to be involved in processes such as transcript fusions , exon shuffling and creating artificial antisense transcripts. 
[ 25 ] [ 26 ] It has been speculated that this template switching activity of reverse transcriptase, which can be demonstrated in vivo , may have been one of the causes for finding several thousand unannotated transcripts in the genomes of model organisms. [ 27 ] Two RNA genomes are packaged into each retrovirus particle, but, after an infection, each virus generates only one provirus . [ 28 ] After infection, reverse transcription is accompanied by template switching between the two genome copies (copy choice recombination). [ 28 ] There are two models that suggest why reverse transcriptase switches templates. The first, the forced copy-choice model, proposes that reverse transcriptase changes the RNA template when it encounters a nick, implying that recombination is obligatory for maintaining virus genome integrity. The second, the dynamic choice model, suggests that reverse transcriptase changes templates when the RNase function and the polymerase function are not synchronized in rate, implying that recombination occurs at random and is not in response to genomic damage. A study by Rawson et al. supported both models of recombination. [ 28 ] From 5 to 14 recombination events per genome occur at each replication cycle. [ 29 ] Template switching (recombination) appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes. [ 30 ] [ 28 ] As HIV uses reverse transcriptase to copy its genetic material and generate new viruses (part of the retrovirus proliferation cycle), specific drugs have been designed to disrupt the process and thereby suppress its growth. Collectively, these drugs are known as reverse-transcriptase inhibitors and include the nucleoside and nucleotide analogues zidovudine (trade name Retrovir), lamivudine (Epivir) and tenofovir (Viread), as well as non-nucleoside inhibitors, such as nevirapine (Viramune).
[ citation needed ] Reverse transcriptase is commonly used in research to apply the polymerase chain reaction technique to RNA in a technique called reverse transcription polymerase chain reaction (RT-PCR). The classical PCR technique can be applied only to DNA strands, but, with the help of reverse transcriptase, RNA can be transcribed into DNA, thus making PCR analysis of RNA molecules possible. Reverse transcriptase is also used to create cDNA libraries from mRNA . The commercial availability of reverse transcriptase greatly improved knowledge in the area of molecular biology, as, along with other enzymes , it allowed scientists to clone, sequence, and characterise RNA. [ citation needed ]
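The error rates quoted earlier for commercial enzymes (roughly 1 in 17,000 bases for AMV and 1 in 30,000 bases for M-MLV) translate into easily computed expectations for a given template. A back-of-the-envelope sketch:

```python
# Expected number of misincorporations when reverse-transcribing a
# template, given the per-base error rates quoted for AMV and M-MLV RT.

def expected_errors(template_length_nt, errors_per_base):
    """Mean number of errors introduced in one full-length cDNA copy."""
    return template_length_nt * errors_per_base

# For a hypothetical 10 kb RNA template, roughly the size of a
# retroviral genome:
amv = expected_errors(10_000, 1 / 17_000)
mmlv = expected_errors(10_000, 1 / 30_000)
print(round(amv, 2), round(mmlv, 2))  # → 0.59 0.33
```

With no proofreading to remove these errors, even fractions of an error per copy compound quickly over repeated replication cycles, which is consistent with the accelerated accumulation of mutations described above.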
https://en.wikipedia.org/wiki/Reverse_transcriptase
Reverse transcription polymerase chain reaction ( RT-PCR ) is a laboratory technique combining reverse transcription of RNA into DNA (in this context called complementary DNA or cDNA) and amplification of specific DNA targets using polymerase chain reaction (PCR). [ 1 ] It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR (qPCR). Confusion can arise because some authors use the acronym RT-PCR to denote real-time PCR. In this article, RT-PCR will denote reverse transcription PCR. Combined RT-PCR and qPCR are routinely used for analysis of gene expression and quantification of viral RNA in research and clinical settings. The close association between RT-PCR and qPCR has led to metonymic use of the term qPCR to mean RT-PCR. Such use may be confusing, [ 2 ] as RT-PCR can be used without qPCR, for example to enable molecular cloning , sequencing or simple detection of RNA. Conversely, qPCR may be used without RT-PCR, for example, to quantify the copy number of a specific piece of DNA. The combined RT-PCR and qPCR technique has been described as quantitative RT-PCR [ 3 ] or real-time RT-PCR [ 4 ] (sometimes even called quantitative real-time RT-PCR [ 5 ] ) and has been variously abbreviated as qRT-PCR, [ 6 ] RT-qPCR, [ 7 ] RRT-PCR, [ 8 ] and rRT-PCR. [ 9 ] Not all authors, especially earlier ones, use this convention and the reader should be cautious when following links. RT-PCR has been used to indicate both real-time PCR (qPCR) and reverse transcription PCR (RT-PCR).
Since its introduction in 1977, Northern blot has been used extensively for RNA quantification despite its shortcomings: (a) it is time-consuming, (b) it requires a large quantity of RNA for detection, and (c) it is quantitatively inaccurate when RNA abundance is low. [ 10 ] [ 11 ] However, since PCR was invented by Kary Mullis in 1983, RT-PCR has displaced Northern blot as the method of choice for RNA detection and quantification. [ 12 ] RT-PCR has risen to become the benchmark technology for the detection and/or comparison of RNA levels for several reasons: (a) it does not require post-PCR processing, (b) a wide range (>10⁷-fold) of RNA abundance can be measured, and (c) it provides insight into both qualitative and quantitative data. [ 5 ] Due to its simplicity, specificity and sensitivity, RT-PCR is used in a wide range of applications, from experiments as simple as quantification of yeast cells in wine to more complex uses as diagnostic tools for detecting infectious agents such as the avian flu virus and SARS-CoV-2 . [ 13 ] [ 14 ] [ 15 ] In RT-PCR, the RNA template is first converted into complementary DNA (cDNA) using a reverse transcriptase (RT). The cDNA is then used as a template for exponential amplification using PCR. The use of RT-PCR for the detection of RNA transcripts has revolutionized the study of gene expression in several important ways. The quantification of mRNA using RT-PCR can be achieved as either a one-step or a two-step reaction. The difference between the two approaches lies in the number of tubes used when performing the procedure. The two-step reaction requires that the reverse transcriptase reaction and PCR amplification be performed in separate tubes. The disadvantage of the two-step approach is susceptibility to contamination due to more frequent sample handling. [ 19 ] On the other hand, the entire reaction from cDNA synthesis to PCR amplification occurs in a single tube in the one-step approach.
The one-step approach is thought to minimize experimental variation by containing all of the enzymatic reactions in a single environment. It eliminates the labor-intensive, contamination-prone step of pipetting the cDNA product into the PCR reaction. The further use of inhibitor-tolerant thermostable DNA polymerases and polymerase enhancers with an optimized one-step RT-PCR condition supports the reverse transcription of RNA from unpurified or crude samples, such as whole blood and serum . [ 20 ] [ 21 ] However, the starting RNA templates are prone to degradation in the one-step approach, and the use of this approach is not recommended when repeated assays from the same sample are required. Additionally, the one-step approach is reported to be less accurate than the two-step approach. The two-step approach is also the preferred method of analysis when using DNA-binding dyes such as SYBR Green , since the elimination of primer-dimers can be achieved through a simple change in the melting temperature . Nevertheless, the one-step approach is a relatively convenient solution for the rapid detection of target RNA directly in biosensing. [ citation needed ] Quantification of RT-PCR products can largely be divided into two categories: end-point and real-time. [ 22 ] The use of end-point RT-PCR is preferred for measuring gene expression changes in a small number of samples, but real-time RT-PCR has become the gold standard method for validating quantitative results obtained from array analyses or gene expression changes on a global scale. [ 23 ] The measurement approaches of end-point RT-PCR require the detection of gene expression levels by the use of fluorescent dyes like ethidium bromide , [ 24 ] [ 25 ] 32P labeling of PCR products using a phosphorimager , [ 26 ] or by scintillation counting . [ 18 ] End-point RT-PCR is commonly achieved using three different methods: relative, competitive and comparative.
[ 27 ] [ 28 ] The emergence of novel fluorescent DNA labeling techniques in the past few years has enabled the analysis and detection of PCR products in real-time and has consequently led to the widespread adoption of real-time RT-PCR for the analysis of gene expression. [ 35 ] Not only is real-time RT-PCR now the method of choice for quantification of gene expression, it is also the preferred method of obtaining results from array analyses and gene expressions on a global scale. [ 36 ] Currently, there are four different fluorescent DNA probes available for the real-time RT-PCR detection of PCR products: SYBR Green , TaqMan , molecular beacons , and scorpion probes . All of these probes allow the detection of PCR products by generating a fluorescent signal. While the SYBR Green dye emits its fluorescent signal simply by binding to the double-stranded DNA in solution, fluorescence generation by TaqMan probes, molecular beacons and scorpion probes depends on Förster Resonance Energy Transfer (FRET) coupling of the dye molecule and a quencher moiety to the oligonucleotide substrates. [ 37 ] Two strategies are commonly employed to quantify the results obtained by real-time RT-PCR: the standard curve method and the comparative threshold method. [ 42 ] The exponential amplification via reverse transcription polymerase chain reaction provides for a highly sensitive technique in which a very low copy number of RNA molecules can be detected. RT-PCR is widely used in the diagnosis of genetic diseases and, semiquantitatively, in the determination of the abundance of specific different RNA molecules within a cell or tissue as a measure of gene expression . RT-PCR is commonly used in research methods to measure gene expression. For example, Lin et al. used qRT-PCR to measure expression of Gal genes in yeast cells. First, Lin et al. engineered a mutation of a protein suspected to participate in the regulation of Gal genes.
This mutation was hypothesized to selectively abolish Gal expression. To confirm this, gene expression levels of yeast cells containing this mutation were analyzed using qRT-PCR. The researchers were able to conclusively determine that the mutation of this regulatory protein reduced Gal expression. [ 43 ] Northern blot analysis can be used to study a transcript's expression further. RT-PCR can also be very useful in the insertion of eukaryotic genes into prokaryotes . Because most eukaryotic genes contain introns , which are present in the genome but not in the mature mRNA, the cDNA generated from an RT-PCR reaction is the exact (without regard to the error-prone nature of reverse transcriptases) DNA sequence that would be directly translated into protein after transcription . When these genes are expressed in prokaryotic cells for the sake of protein production or purification, the RNA produced directly from transcription need not undergo splicing as the transcript contains only exons . (Prokaryotes, such as E. coli , lack the mRNA splicing mechanism of eukaryotes.) RT-PCR can be used to diagnose genetic disease such as Lesch–Nyhan syndrome . This genetic disease is caused by a malfunction in the HPRT1 gene, which clinically leads to fatal uric acid urinary stones and symptoms similar to gout . [6] [ clarification needed ] Analyzing a pregnant mother and a fetus for mRNA expression levels of HPRT1 will reveal whether the mother is a carrier and whether the fetus is likely to develop Lesch–Nyhan syndrome. [ 44 ] Scientists are working on ways to use RT-PCR in cancer detection to help improve prognosis and monitor response to therapy. Circulating tumor cells produce unique mRNA transcripts depending on the type of cancer. The goal is to determine which mRNA transcripts serve as the best biomarkers for a particular cancer cell type and then analyze their expression levels with RT-PCR.
[ 45 ] RT-PCR is commonly used in studying the genomes of viruses whose genomes are composed of RNA, such as Influenzavirus A , SARS-CoV-2 , and retroviruses like HIV . [ 46 ] PCR tests can be used for early detection of DNA-based pathogens through the amplification of pathogenic DNA, even before the host begins producing antibodies . [ 47 ] RT-PCR allows this process to be extended to RNA-based pathogens through the amplification of cDNA reverse-transcribed from a given pathogen's RNA. [ 36 ] RT-PCR tests are best known for their use in COVID-19 testing [ 48 ] but have also been used to diagnose diseases such as Ebola , [ 36 ] Zika , [ 36 ] MERS , [ 36 ] SARS [ 36 ] and influenza . [ 48 ] Despite its major advantages, RT-PCR is not without drawbacks. The exponential growth of the reverse-transcribed complementary DNA (cDNA) during the multiple cycles of PCR produces inaccurate end-point quantification due to the difficulty in maintaining linearity. [ 49 ] In order to provide accurate detection and quantification of RNA content in a sample, qRT-PCR was developed using fluorescence-based modification to monitor the amplification products during each cycle of PCR. The extreme sensitivity of the technique can be a double-edged sword, since even the slightest DNA contamination can lead to undesirable results. [ 50 ] A simple method for elimination of false positive results is to include anchors, or tags , in the 5' region of a gene-specific primer. [ 51 ] Additionally, planning and design of quantification studies can be technically challenging due to the existence of numerous sources of variation, including template concentration and amplification efficiency. [ 31 ] Spiking a known quantity of RNA into a sample, adding a series of RNA dilutions to generate a standard curve, and including a no-template control (no cDNA) may be used as controls. RT-PCR can be carried out by the one-step RT-PCR protocol or the two-step RT-PCR protocol.
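The dilution-series control mentioned above is commonly fit as a standard curve of quantification cycle (Ct) against the log of input quantity; the slope of that line then gives the amplification efficiency via the standard relation E = 10^(−1/slope) − 1. The sketch below uses hypothetical Ct values chosen to lie exactly on a line:

```python
import math

# Hypothetical 10-fold dilution series: (input copies, observed Ct).
dilutions = [(1e6, 15.0), (1e5, 18.3), (1e4, 21.6), (1e3, 24.9)]

xs = [math.log10(copies) for copies, _ in dilutions]
ys = [ct for _, ct in dilutions]

# Ordinary least-squares slope of Ct against log10(input copies).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# A slope near -3.32 corresponds to ~100% amplification efficiency.
efficiency = 10 ** (-1 / slope) - 1
print(round(slope, 2), round(efficiency, 2))  # → -3.3 1.01
```

Unknown samples are then quantified by reading their Ct values off the fitted line, which is why both template concentration and amplification efficiency matter as sources of variation.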
One-step RT-PCR subjects mRNA targets (up to 6 kb) to reverse transcription followed by PCR amplification in a single test tube. Using intact, high-quality RNA and a sequence-specific primer will produce the best results. Once a one-step RT-PCR kit with a mix of reverse transcriptase, Taq DNA polymerase, and a proofreading polymerase is selected and all necessary materials and equipment are obtained, a reaction mix is prepared. The reaction mix includes dNTPs, primers, the necessary enzymes, and a buffer solution. The reaction mix is added to a PCR tube for each reaction, followed by the template RNA. The PCR tubes are then placed in a thermal cycler to begin cycling. In the first cycle, the synthesis of cDNA occurs. The second cycle is the initial denaturation, wherein reverse transcriptase is inactivated. The remaining 40-50 cycles are the amplification, which includes denaturation, annealing, and elongation. When amplification is complete, the RT-PCR products can be analyzed with gel electrophoresis . [ 52 ] [ 53 ] Two-step RT-PCR, as the name implies, occurs in two steps: first the reverse transcription, then the PCR. This method is more sensitive than the one-step method. Kits are also useful for two-step RT-PCR. Just as for one-step PCR, use only intact, high-quality RNA for the best results. The primer for two-step PCR does not have to be sequence-specific. First, combine template RNA, primer, dNTP mix, and nuclease-free water in a PCR tube. Then, add an RNase inhibitor and reverse transcriptase to the PCR tube. Next, place the PCR tube into a thermal cycler for one cycle, wherein annealing, extending, and inactivating of reverse transcriptase occurs. Finally, proceed directly to step two, the PCR, or store the product on ice until PCR can be performed. Add master mix, which contains buffer, dNTP mix, MgCl 2 , Taq polymerase, and nuclease-free water, to each PCR tube. Then add the necessary primer to the tubes.
Next, place the PCR tubes in a thermal cycler for 30 cycles of the amplification program, which includes denaturation, annealing, and elongation. The products of RT-PCR can be analyzed with gel electrophoresis. [ 54 ] Quantitative RT-PCR assay is considered to be the gold standard for measuring the number of copies of specific cDNA targets in a sample, but it is poorly standardized. [ 55 ] As a result, while there are numerous publications utilizing the technique, many provide inadequate experimental detail and use unsuitable data analysis to draw inappropriate conclusions. Due to the inherent variability in the quality of any quantitative PCR data, not only do reviewers have a difficult time evaluating these manuscripts, but the studies also become impossible to replicate. [ 56 ] Recognizing the need for the standardization of the reporting of experimental conditions, the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE, pronounced mykee) guidelines have been published by an international consortium of academic scientists. The MIQE guidelines describe the minimum information necessary for evaluating quantitative PCR experiments that should be required for publication, to encourage better experimental practice and to ensure the relevance, accuracy, correct interpretation, and repeatability of quantitative PCR data. [ 57 ] Besides reporting guidelines, the MIQE stresses the need to standardize the nomenclature associated with quantitative PCR to avoid confusion; for example, the abbreviation qPCR should be used for quantitative real-time PCR , while RT-qPCR should be used for reverse transcription-qPCR, and genes used for normalization should be referred to as reference genes instead of housekeeping genes . It also proposes that commercially derived terms like TaqMan probes should not be used, but that such probes instead be referred to as hydrolysis probes .
Additionally, it is proposed that the quantification cycle (Cq) be used to describe the PCR cycle used for quantification instead of the threshold cycle (Ct), crossing point (Cp), and takeoff point (TOP), which refer to the same value but were coined by different manufacturers of real-time instruments . [ 55 ] The guideline consists of the following elements: 1) experimental design, 2) sample, 3) nucleic acid extraction, 4) reverse transcription, 5) qPCR target information, 6) oligonucleotides, 7) protocol, 8) validation, and 9) data analysis. Specific items within each element carry a label of either E (essential) or D (desirable). Those labeled E are considered critical and indispensable while those labeled D are considered peripheral yet important for best practices. [ 57 ] In 2023, researchers developed a working prototype of an RT-LAMP lab-on-a-chip system, which provided results for SARS-CoV-2 tests within three minutes. The technology integrates microfluidic channels into printed circuit boards, which may enable low-cost mass production. [ 58 ] [ 59 ]
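The comparative threshold method mentioned earlier is often implemented as a 2^(−ΔΔCq) calculation: the target gene's quantification cycle is normalized to a reference gene in both the treated sample and the control, and the difference of those differences gives a fold change, assuming roughly 100% amplification efficiency. The Cq values in the sketch below are hypothetical:

```python
# Minimal sketch of the comparative threshold (2^-ΔΔCq) calculation,
# assuming ~100% amplification efficiency. All Cq values are hypothetical.

def fold_change(cq_target_treated, cq_ref_treated,
                cq_target_control, cq_ref_control):
    d_treated = cq_target_treated - cq_ref_treated   # ΔCq, treated sample
    d_control = cq_target_control - cq_ref_control   # ΔCq, control sample
    return 2 ** -(d_treated - d_control)             # 2^(-ΔΔCq)

# The target gene comes up two cycles earlier (relative to the reference
# gene) in the treated sample: roughly a 4-fold increase in expression.
print(fold_change(20.0, 15.0, 22.0, 15.0))  # → 4.0
```

Normalizing to a reference gene is exactly why the MIQE guidelines insist on well-chosen, validated reference genes rather than arbitrary "housekeeping" genes.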
https://en.wikipedia.org/wiki/Reverse_transcription_polymerase_chain_reaction
Reverse transfection is a technique for the transfer of genetic material into cells . As DNA is printed on a glass slide for the transfection process (the deliberate introduction of nucleic acids into cells) to occur before the addition of adherent cells, the order of addition of DNA and adherent cells is the reverse of that in conventional transfection . [ 1 ] Hence, the word “reverse” is used. A DNA -gelatin mixture may be used for printing onto a slide. Gelatin powder is first dissolved in sterile Milli-Q water to form a 0.2% gelatin solution. Purified DNA plasmid is then mixed with the gelatin solution, and the final gelatin concentration is kept greater than 0.17%. Besides gelatin, atelocollagen and fibronectin are also successful transfection vectors for introducing foreign DNA into the cell nucleus. After the DNA-gelatin mixture preparation, the mixture is pipetted onto a slide surface and the slide is placed in a covered petri dish . A desiccant is added to the dish to dry up the solution. Finally, cultured cells are poured into the dish for plasmid uptake. With the invention of different types of microarray printing systems , hundreds of transfection mixes (containing different DNA of interest) may be printed on the same slide for cell uptake of plasmids. [ 2 ] There are two major types of microarray printing systems manufactured by different companies: contact and non-contact printing systems. An example of a non-contact printing system is the Piezorray Flexible Non-contact Microarraying System. It uses pressure control and a piezoelectric collar to squeeze out consistent drops of approximately 333 pL in volume. The PiezoTip dispensers do not contact the surface to which the sample is dispensed; thus, contamination potential is reduced and the risk of disrupting the target surface is eliminated. An example of a contact printing system is the SpotArray 72 (Perkin Elmer Life Sciences) contact-spotting system.
Its printhead can accommodate up to 48 pins, and creates compact arrays by selectively raising and lowering subsets of pins during printing. After printing, the pins are washed with a powerful pressure-jet pin washer and vacuum-dried, eliminating carryover. Another example of a contact printing system is the QArray system (Genetix), which comes in three models: QArray Mini, QArray 2 and QArray Max. After printing, the solution is allowed to dry and the DNA-gelatin mixture is fixed tightly in position on the array. First, the adhesive backing of the HybriWell is peeled off and the HybriWell is attached over the area of the slide printed with the gelatin-DNA solution. Second, 200 μl of transfection mix is pipetted into one of the HybriWell ports; the mixture will distribute evenly over the array. The array is then incubated, with temperature and time dependent on the cell types used. Third, the transfection mix is pipetted away and the HybriWell removed with thin-tipped forceps . Fourth, the printed slide treated with transfection reagent is placed into a square dish with the printed side facing up. Fifth, the harvested cells are gently poured onto the slides (not directly on the printed areas). Finally, the dish is placed in a 37 °C, 5% CO 2 humidified incubator and incubated overnight. Effectene Reagent is used in conjunction with the enhancer and the DNA condensation buffer (Buffer EC) to achieve high transfection efficiency. In the first step of Effectene–DNA complex formation , the DNA is condensed by interaction with the enhancer in a defined buffer system. Effectene Reagent is then added to the condensed DNA to produce condensed Effectene–DNA complexes. The Effectene–DNA complexes are mixed with the medium and directly added to the cells. Effectene Reagent spontaneously forms micelle structures exhibiting no size or batch variation (as may be found with pre-formulated liposome reagents). This feature ensures reproducibility of transfection-complex formation.
The process of highly condensing DNA molecules and then coating them with Effectene Reagent is an effective way to transfer DNA into eukaryotic cells . Reverse transfection has both advantages and disadvantages compared with conventional transfection.
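As a rough check on the mixing step described above, the maximum volume of DNA solution that can be added to the 0.2% gelatin stock while keeping the final gelatin concentration above 0.17% can be computed directly. The sketch below is illustrative only; the function name and the 100 µl example volume are our own, not part of any published protocol.

```python
def max_dna_volume(gelatin_volume_ul, stock_pct=0.2, min_final_pct=0.17):
    """Largest DNA-solution volume (same units as gelatin_volume_ul)
    that keeps the final gelatin concentration at min_final_pct or above.

    Final concentration: stock_pct * Vg / (Vg + Vd) >= min_final_pct
    =>  Vd <= Vg * (stock_pct / min_final_pct - 1)
    """
    return gelatin_volume_ul * (stock_pct / min_final_pct - 1.0)

# For 100 ul of 0.2% gelatin stock, at most ~17.6 ul of DNA solution
# can be added before the gelatin concentration drops below 0.17%.
print(round(max_dna_volume(100.0), 1))
```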
https://en.wikipedia.org/wiki/Reverse_transfection
Reverse vaccinology is an improvement of vaccinology that employs bioinformatics and reverse pharmacology practices, pioneered by Rino Rappuoli and first used against Serogroup B meningococcus . [ 1 ] Since then, it has been used on several other bacterial vaccines. [ 2 ] [ full citation needed ] The basic idea behind reverse vaccinology is that an entire pathogenic genome can be screened using bioinformatics approaches to find genes. The genes are screened for traits that may indicate antigenicity, such as coding for proteins with extracellular localization , signal peptides and B-cell epitopes . [ 3 ] Those genes are filtered for desirable attributes that would make good vaccine targets, such as outer membrane proteins . Once the candidates are identified, they are produced synthetically and screened in animal models of the infection. [ 4 ] After Craig Venter published the genome of the first free-living organism in 1995, the genomes of other microorganisms became more readily available throughout the end of the twentieth century. Reverse vaccinology, designing vaccines using the pathogen's sequenced genome, came from this new wealth of genomic information, as well as technological advances. Reverse vaccinology is much more efficient than traditional vaccinology, which requires growing large amounts of specific microorganisms as well as extensive wet lab tests. [ citation needed ] In 2000, Rino Rappuoli and the J. Craig Venter Institute developed the first vaccine using reverse vaccinology, against Serogroup B meningococcus. The J. Craig Venter Institute and others then continued work on vaccines for A Streptococcus, B Streptococcus, Staphylococcus aureus, and Streptococcus pneumoniae. [ 5 ] Attempts at reverse vaccinology first began with Meningococcus B (MenB). Meningococcus B caused over 50% of meningococcal meningitis, and scientists had been unable to create a successful vaccine for the pathogen because of the bacterium's unique structure.
This bacterium's polysaccharide shell is identical to that of a human self-antigen, but its surface proteins vary greatly, and the lack of information about the surface proteins made developing a vaccine extremely difficult. As a result, Rino Rappuoli and other scientists turned towards bioinformatics to design a functional vaccine. [ 5 ] Rappuoli and others at the J. Craig Venter Institute first sequenced the MenB genome. Then, they scanned the sequenced genome for potential antigens. They found over 600 possible antigens, which were tested by expression in Escherichia coli . The most universally applicable antigens were used in the prototype vaccines. Several proved to function successfully in mice; however, these proteins alone did not induce a strong enough immune response in humans for protection to be achieved. This was later remedied by the addition of outer membrane vesicles containing lipopolysaccharides , purified from blebs on gram-negative cultures. The addition of this adjuvant (previously identified by using conventional vaccinology approaches) enhanced the immune response to the level that was required. Later, the vaccine was proven to be safe and effective in adult humans. [ 5 ] During the development of the MenB vaccine, scientists adopted the same reverse vaccinology methods for other bacterial pathogens. A Streptococcus and B Streptococcus vaccines were two of the first reverse vaccines created. Because those bacterial strains induce antibodies that react with human antigens, the vaccines for those bacteria needed to not contain homologies to proteins encoded in the human genome, in order not to cause adverse reactions, thus establishing the need for genome-based reverse vaccinology.
[ 5 ] Later, reverse vaccinology was used to develop vaccines for antibiotic-resistant Staphylococcus aureus and Streptococcus pneumoniae. [ 5 ] The major advantage of reverse vaccinology is finding vaccine targets quickly and efficiently. Traditional methods may take decades to unravel pathogens and antigens, diseases and immunity. In contrast, in silico screening can be very fast, allowing new vaccine candidates to be identified for testing in only a few years. [ 6 ] The downside is that only proteins can be targeted using this process, whereas conventional vaccinology approaches can find other biomolecular targets such as polysaccharides . [ citation needed ] Though using bioinformatic technology to develop vaccines has become typical in the past ten years, general laboratories often do not have the advanced software that can do this. However, there are a growing number of programs making reverse vaccinology information more accessible. NERVE is one relatively new data processing program. Though it must be downloaded and does not include all epitope predictions, it does help save some time by combining the computational steps of reverse vaccinology into one program. Vaxign, an even more comprehensive program, was created in 2008. Vaxign is web-based and completely public-access. [ 7 ] Though Vaxign has been found to be extremely accurate and efficient, some scientists still utilize the online software RANKPEP for peptide binding predictions. Both Vaxign and RANKPEP employ PSSMs (position-specific scoring matrices) when analyzing protein sequences or sequence alignments. [ 8 ] Computer-aided bioinformatics projects are becoming extremely popular, as they help guide laboratory experiments. [ 9 ]
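The genome-screening idea described above can be sketched as a simple filtering pipeline. The sketch below is a toy illustration, not any published tool: the predicate functions, thresholds, and protein records are hypothetical stand-ins for real predictors of localization, signal peptides, and human homology.

```python
# Toy reverse-vaccinology screen: keep predicted proteins that are
# surface-exposed, carry a signal peptide, and lack human homology.
# All predicates are hypothetical placeholders for real prediction
# software (localization, signal-peptide, and homology tools).

def has_signal_peptide(protein):
    return protein["signal_peptide"]

def is_surface_exposed(protein):
    return protein["localization"] in {"outer membrane", "extracellular"}

def resembles_human_protein(protein):
    return protein["human_homology"]  # would be a BLAST-style check

def screen(proteome):
    """Return names of proteins worth testing as vaccine candidates."""
    return [
        p["name"] for p in proteome
        if has_signal_peptide(p)
        and is_surface_exposed(p)
        and not resembles_human_protein(p)  # avoid autoimmune cross-reaction
    ]

proteome = [
    {"name": "ompA", "signal_peptide": True,  "localization": "outer membrane", "human_homology": False},
    {"name": "ribP", "signal_peptide": False, "localization": "cytoplasm",      "human_homology": False},
    {"name": "adhX", "signal_peptide": True,  "localization": "extracellular",  "human_homology": True},
]
print(screen(proteome))  # only ompA passes all three filters
```

In a real pipeline each predicate would be an external prediction program run over every open reading frame, and the survivors would go on to expression and animal testing as the article describes.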
https://en.wikipedia.org/wiki/Reverse_vaccinology
Reversed-phase liquid chromatography ( RP-LC ) is a mode of liquid chromatography in which non-polar stationary phases and polar mobile phases are used for the separation of organic compounds. [ 1 ] [ 2 ] [ 3 ] The vast majority of separations and analyses using high-performance liquid chromatography (HPLC) in recent years are done using the reversed phase mode. In the reversed phase mode, the more hydrophobic the sample components are, the more strongly they are retained in the system. [ 4 ] The factors affecting the retention and separation of solutes in the reversed phase chromatographic system are as follows: a. The chemical nature of the stationary phase , i.e., the ligands bonded on its surface, as well as their bonding density, namely the extent of their coverage. b. The composition of the mobile phase . The type of the bulk solvents whose mixtures affect the polarity of the mobile phase, hence the name modifier for a solvent added to affect the polarity of the mobile phase. c. Additives, such as buffers, which affect the pH of the mobile phase , which in turn affects the ionization state of the solutes and their polarity. In order to retain the organic components in mixtures, the stationary phases, packed within columns, consist of hydrophobic substrates bonded to the surface of porous silica-gel particles in various geometries (spherical, irregular), at different diameters (sub-2, 3, 5, 7, 10 µm), with varying pore diameters (60, 100, 150, 300 Å). The particle's surface is covered by chemically bonded hydrocarbons , such as C3, C4, C8, C18 and more. The longer the hydrocarbon associated with the stationary phase, the longer the sample components will be retained. Some stationary phases are also made of hydrophobic polymeric particles, or hybrid silica-organic particles, for methods in which mobile phases at extreme pH are used. Most current methods of separation of biomedical materials use C-18 columns, sometimes called by trade names, such as ODS (octadecylsilane) or RP-18.
The mobile phases are mixtures of water and polar organic solvents, the vast majority of which are methanol and acetonitrile . These mixtures usually contain various additives such as buffers ( acetate , phosphate , citrate ), surfactants (alkyl amines or alkyl sulfonates ) and special additives ( EDTA ). The goal of using additives of one kind or another is to increase efficiency and selectivity, and to control solute retention. The history and evolution of reversed phase stationary phases is described in detail in an article by Majors, Dolan, Carr and Snyder. [ 6 ] In the 1970s, most liquid chromatography runs were performed using solid particles as the stationary phases, made of unmodified silica gel or alumina . This type of technique is now referred to as normal-phase chromatography . Since the stationary phase is hydrophilic in this technique, and the mobile phase is non-polar (consisting of organic solvents such as hexane and heptane), biomolecules with hydrophilic properties in the sample adsorb strongly to the stationary phase. Moreover, they did not dissolve easily in the mobile phase solvents. At the same time, hydrophobic molecules experience less affinity to the polar stationary phase, and elute through it early with insufficient retention. This was the reason why, during the 1970s, silica-based particles were treated with hydrocarbons, immobilized or bonded on their surface, and the mobile phases were switched to aqueous and polar in nature, to accommodate biomedical substances. The use of a hydrophobic stationary phase and polar mobile phases is essentially the reverse of normal phase chromatography, since the polarity of the mobile and stationary phases have been inverted – hence the term reversed-phase chromatography. [ 7 ] [ 8 ] As a result, hydrophobic molecules in the polar mobile phase tend to adsorb to the hydrophobic stationary phase, and hydrophilic molecules in the sample pass through the column and are eluted first.
[ 7 ] [ 9 ] Hydrophobic molecules can be eluted from the column by decreasing the polarity of the mobile phase using an organic (non-polar) solvent, which reduces hydrophobic interactions. The more hydrophobic the molecule, the more strongly it will bind to the stationary phase, and the higher the concentration of organic solvent that will be required to elute the molecule. Many of the mathematical parameters of the theory of chromatography and experimental considerations used in other chromatographic methods apply to RP-LC as well (for example, the selectivity factor, chromatographic resolution, plate count, etc.). It can be used for the separation of a wide variety of molecules. It is typically used for separation of proteins, [ 10 ] because the organic solvents used in normal-phase chromatography can denature many proteins. Today, RP-LC is a frequently used analytical technique. There is a huge variety of stationary phases available for use in RP-LC, allowing great flexibility in the development of separation methods. [ 11 ] [ 12 ] Silica gel particles are commonly used as a stationary phase in high-performance liquid chromatography (HPLC) for several reasons. [ 13 ] [ 14 ] The United States Pharmacopoeia (USP) has classified HPLC columns by L# types. [ 20 ] The most popular column in this classification is an octadecyl carbon chain (C18)-bonded silica (USP classification L1). [ 21 ] This is followed by C8-bonded silica (L7), pure silica (L3), cyano-bonded silica (CN) (L10) and phenyl-bonded silica (L11). Note that C18, C8 and phenyl are dedicated reversed-phase stationary phases, while CN columns can be used in a reversed-phase mode depending on analyte and mobile phase conditions. Not all C18 columns have identical retention properties. Surface functionalization of silica can be performed in a monomeric or a polymeric reaction, with different short-chain organosilanes used in a second step to cover remaining silanol groups ( end-capping ).
While the overall retention mechanism remains the same, subtle differences in the surface chemistries of different stationary phases will lead to changes in selectivity. Modern columns have different polarity depending on the ligand bonded to the stationary phase. PFP is pentafluorophenyl. CN is cyano. NH2 is amino. ODS is octadecyl or C18. ODCN is a mixed-mode column consisting of C18 and nitrile. [ 22 ] Recent developments in chromatographic supports and instrumentation for liquid chromatography (LC) facilitate rapid and highly efficient separations, using various stationary phase geometries. [ 23 ] Various analytical strategies have been proposed, such as the use of silica-based monolithic supports, elevated mobile phase temperatures, and columns packed with sub-3 μm superficially porous particles (fused or solid core) [ 24 ] or with sub-2 μm fully porous particles for use in ultra-high-pressure LC systems (UHPLC). [ 25 ] A comprehensive article on the modern trends and best practices of mobile phase selection in reversed-phase chromatography was published by Boyes and Dong. [ 26 ] A mobile phase in reversed-phase chromatography consists of mixtures of water or aqueous buffers, to which organic solvents are added, to elute analytes from a reversed-phase column in a selective manner. [ 7 ] [ 27 ] The added organic solvents must be miscible with water, and the two most common organic solvents used are acetonitrile and methanol . Other solvents can also be used, such as ethanol or 2-propanol ( isopropyl alcohol ) and tetrahydrofuran (THF). The organic solvent is also called a modifier, since it is added to the aqueous solution in the mobile phase in order to modify the polarity of the mobile phase. Water is the most polar solvent in the reversed phase mobile phase; therefore, lowering the polarity of the mobile phase by adding modifiers enhances its elution strength.
The two most widely used organic modifiers are acetonitrile and methanol, although acetonitrile is the more popular choice. Isopropanol (2-propanol) can also be used, because of its strong eluting properties, but its use is limited by its high viscosity, which results in higher backpressures. Both acetonitrile and methanol are less viscous than isopropanol, although a 50:50 mixture of methanol and water is also very viscous and causes high backpressures. All three solvents are essentially UV transparent. This is a crucial property for common reversed phase chromatography since sample components are typically detected by UV detectors. Acetonitrile is more transparent than the others in the low UV wavelength range; therefore, it is used almost exclusively when separating molecules with weak or no chromophores (UV-VIS absorbing groups), such as peptides. Most peptides only absorb at low wavelengths in the ultra-violet spectrum (typically less than 225 nm) and acetonitrile provides much lower background absorbance at low wavelengths than the other common solvents. The pH of the mobile phase can have an important role in the retention of an analyte and can change the selectivity of certain analytes. [ 28 ] [ 29 ] For samples containing solutes with ionizable functional groups, such as amines , carboxyls , phosphates , phosphonates , sulfates , and sulfonates , the ionization of these groups can be controlled using mobile phase buffers. [ 30 ] For example, carboxylic groups in solutes become increasingly negatively charged as the pH of the mobile phase rises above their pKa; hence the whole molecule becomes more polar and less retained on the non-polar stationary phase. In this case, raising the pH of the mobile phase above 4–5 (the typical pKa range for carboxylic groups) increases their ionization and hence decreases their retention.
Conversely, using a mobile phase at a pH lower than 4 [ 31 ] will increase their retention, because it will decrease their degree of ionization, rendering them less polar. The same considerations apply to substances containing basic functional groups, such as amines, whose pKa values are around 8 and above: they are retained more as the pH of the mobile phase increases towards 8 and above, because they become less ionized, hence less polar. However, most traditional silica-gel-based reversed-phase columns cannot be used with mobile phases at pH 8 and above; therefore, control over the retention of amines in this range is limited. [ 32 ] The choice of buffer type is an important factor in RP-LC method development, as it can affect the retention, selectivity, and resolution of the analytes of interest. [ 26 ] When selecting a buffer for RP-HPLC, a number of factors must be considered, and several common buffer types are in use. [ 34 ] Charged analytes can be separated on a reversed-phase column by the use of ion-pairing (also called ion-interaction). This technique is known as reversed-phase ion-pairing chromatography. [ 35 ] Elution can be performed isocratically (the water-solvent composition does not change during the separation process) or by using a solution gradient (the water-solvent composition changes during the separation process, usually by decreasing the polarity).
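The relationship between mobile-phase pH and solute ionization described above follows the Henderson–Hasselbalch equation. The small sketch below is our own illustration (the pKa value is a typical example, not from the source); it computes the ionized fraction of an acidic group at a given pH:

```python
def ionized_fraction_acid(pH, pKa):
    """Fraction of an acidic group (e.g. carboxyl) in the ionized
    (deprotonated) form, from Henderson-Hasselbalch:
    [A-] / ([HA] + [A-]) = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

# A carboxylic acid with pKa 4.5: mostly neutral (well retained) at
# pH 2.5, half ionized at pH 4.5, mostly ionized (poorly retained,
# more polar) at pH 6.5.
for pH in (2.5, 4.5, 6.5):
    print(pH, round(ionized_fraction_acid(pH, pKa=4.5), 3))
```

The symmetric picture holds for basic groups such as amines, whose ionized (protonated, polar) fraction instead falls as the pH rises toward and above their pKa.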
https://en.wikipedia.org/wiki/Reversed-phase_chromatography
In probability theory , the reversed compound agent theorem ( RCAT ) is a set of sufficient conditions for a stochastic process expressed in any formalism to have a product form stationary distribution [ 1 ] (assuming that the process is stationary [ 2 ] [ 1 ] ). The theorem shows that product form solutions in Jackson's theorem , [ 1 ] the BCMP theorem [ 3 ] and G-networks are based on the same fundamental mechanisms. [ 4 ] The theorem identifies a reversed process using Kelly's lemma , from which the stationary distribution can be computed. [ 1 ]
https://en.wikipedia.org/wiki/Reversed_compound_agent_theorem
Reverse electrodialysis ( RED ) is the salinity gradient energy retrieved from the difference in the salt concentration between seawater and river water . [ 1 ] A method of utilizing the energy produced by this process by means of a heat engine was invented by Prof. Sidney Loeb in 1977 at the Ben-Gurion University of the Negev (United States Patent US4171409). In reverse electrodialysis a salt solution and fresh water are passed through a stack of alternating cation and anion exchange membranes. The chemical potential difference between salt and fresh water generates a voltage over each membrane, and the total potential of the system is the sum of the potential differences over all membranes. The process works through a difference in ion concentration instead of an electric field, which has implications for the type of membrane needed. [ 2 ] In RED, as in a fuel cell , the cells are stacked. A module with a capacity of 250 kW has the size of a shipping container. In the Netherlands , for example, more than 3,300 m³ of fresh water runs into the sea per second on average. The membrane halves the pressure differences, which results in a water column of approximately 135 meters. The energy potential is therefore E = mgΔh = 3.3×10⁶ kg/s × 10 m/s² × 135 m ≈ 4.5×10⁹ joules per second, i.e. a power of about 4.5 gigawatts. In 2006 a 50 kW plant was located at a coastal test site in Harlingen , the Netherlands, [ 3 ] the focus being on prevention of biofouling of the anode , cathode , and membranes and on increasing membrane performance. [ 4 ] [ 5 ] In 2007 the Directorate for Public Works and Water Management, Redstack, and ENECO signed a declaration of intent for development of a pilot plant on the Afsluitdijk in the Netherlands. [ 6 ] The plant was put into service on 26 November 2014 and produces 50 kW of electricity to show the technical feasibility in real-life conditions using fresh IJsselmeer water and salt water from the Wadden Sea.
Theoretically, with 1 m³/s of river water and an equal amount of sea water, approximately 1 MW of renewable electricity can be recovered at this location by upscaling the plant. [ 7 ] It is expected that after this phase the installation could be further expanded to a final capacity of 200 MW. The main disadvantage of reverse electrodialysis electricity production is the high capital costs involved. Ion exchange membranes are very expensive, and the power produced per membrane area is very low. As a consequence, the return on investment is much lower than for other renewable energy sources such as wind or solar.
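The back-of-the-envelope power estimate given above for the Dutch river outflow can be reproduced directly. This is a sketch of the source's own arithmetic, using its rounded value g ≈ 10 m/s²:

```python
# Energy potential of the Dutch river outflow, per the article's figures:
# mass flow ~3.3e6 kg/s, g ~10 m/s^2, effective head ~135 m.
mass_flow_kg_per_s = 3.3e6
g = 10.0       # m/s^2 (rounded, as in the source)
head_m = 135.0

power_w = mass_flow_kg_per_s * g * head_m  # P = m*g*h per second
print(power_w / 1e9)  # 4.455 GW, quoted in the text as ca. 4.5 GW
```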
https://en.wikipedia.org/wiki/Reversed_electrodialysis
Chain polymerization, propagated by chain carriers that are deactivated reversibly, bringing them into active-dormant equilibria of which there might be more than one. Note: examples of reversible-deactivation polymerization are group-transfer polymerization, reversible-deactivation radical polymerization (RDRP), reversible addition−fragmentation chain-transfer polymerization (RAFT) and atom-transfer radical polymerization (ATRP). [ 1 ] In polymer chemistry , reversible-deactivation polymerization ( RDP ) is a form of polymerization propagated by chain carriers , some of which at any instant are held in a state of dormancy through an equilibrium process involving other species. An example of reversible-deactivation anionic polymerization (RDAP) is group transfer polymerization of alkyl methacrylates, where the initiator and the dormant state is a silyl ketene acetal . In the case of reversible-deactivation radical polymerization (RDRP), a majority of the chains must be held in a dormant state to ensure that the concentration of active carriers is sufficiently low as to render chain termination reactions negligible. Despite having some common features, RDP is distinct from living polymerization , which requires a complete absence of termination and irreversible chain transfer.
https://en.wikipedia.org/wiki/Reversible-deactivation_polymerization
Chain polymerization , propagated by radicals that are deactivated reversibly, bringing them into active/dormant equilibria of which there might be more than one. [ 1 ] See also reversible-deactivation polymerization (RDP). In polymer chemistry , reversible-deactivation radical polymerizations ( RDRP s) are members of the class of reversible-deactivation polymerizations which exhibit much of the character of living polymerizations , but cannot be categorized as such as they are not without chain transfer or chain termination reactions. [ 2 ] [ 3 ] Several different names have been used in the literature. Though the term "living" radical polymerization was used in early days, it has been discouraged by IUPAC , because radical polymerization cannot be a truly living process due to unavoidable termination reactions between two radicals. The commonly-used term controlled radical polymerization is permitted, but reversible-deactivation radical polymerization or controlled reversible-deactivation radical polymerization (RDRP) is recommended. Conventional radical polymerization – sometimes called 'free' radical polymerization – is one of the most widely used polymerization processes owing to its broad applicability. The steady-state concentration of the growing polymer chains is 10 −7 M by order of magnitude, and the average lifetime of an individual polymer radical before termination is about 5–10 s. A drawback of conventional radical polymerization is the limited control of chain architecture, molecular weight distribution, and composition. In the late 20th century it was observed that when certain components were added to systems polymerizing by a chain mechanism, they were able to react reversibly with the (radical) chain carriers, putting them temporarily into a 'dormant' state. [ 4 ] [ 5 ] This had the effect of prolonging the lifetime of the growing polymer chains (see above) to values comparable with the duration of the experiment.
At any instant most of the radicals are in the inactive (dormant) state; however, they are not irreversibly terminated (‘dead’). Only a small fraction of them are active (growing), yet with a fast rate of interconversion of active and dormant forms, faster than the growth rate, the same probability of growth is ensured for all chains, i.e., on average, all chains grow at the same rate. Consequently, rather than a most probable distribution, the molecular masses (degrees of polymerization) assume a much narrower Poisson distribution , and a lower dispersity prevails. IUPAC also recognizes the alternative name ‘controlled reversible-deactivation radical polymerization’ as acceptable, "provided the controlled context is specified, which in this instance comprises molecular mass and molecular mass distribution. These types of radical polymerizations are not necessarily ‘living’ polymerizations, since chain termination reactions are not precluded." [ 1 ] [ 2 ] [ 3 ] The adjective ‘controlled’ indicates that a certain kinetic feature of a polymerization or structural aspect of the polymer molecules formed is controlled (or both). The expression ‘controlled polymerization’ is sometimes used to describe a radical or ionic polymerization in which reversible deactivation of the chain carriers is an essential component of the mechanism, interrupting the propagation in a way that secures control of one or more kinetic features of the polymerization or one or more structural aspects of the macromolecules formed, or both. The expression ‘controlled radical polymerization’ is sometimes used to describe a radical polymerization that is conducted in the presence of agents that lead to, e.g., atom-transfer radical polymerization (ATRP), nitroxide-(aminoxyl) mediated polymerization (NMP), or reversible-addition-fragmentation chain transfer (RAFT) polymerization.
All these and further controlled polymerizations are included in the class of reversible-deactivation radical polymerizations. Whenever the adjective ‘controlled’ is used in this context, the particular kinetic or structural features that are controlled have to be specified. There is a mode of polymerization referred to as reversible-deactivation polymerization which is distinct from living polymerization, despite some common features. Living polymerization requires a complete absence of termination reactions, whereas reversible-deactivation polymerization may contain a similar fraction of termination as conventional polymerization with the same concentration of active species. [ 1 ] Some important aspects of these are compared in the table: As the name suggests, the prerequisite of a successful RDRP is fast and reversible activation/deactivation of propagating chains. There are three types of RDRP: deactivation by catalyzed reversible coupling, deactivation by spontaneous reversible coupling, and deactivation by degenerative transfer (DT). A mixture of different mechanisms is possible; e.g. a transition metal mediated RDRP could switch among ATRP, OMRP and DT mechanisms depending on the reaction conditions and reagents used. In any RDRP process, the radicals can propagate with the rate coefficient k p by addition of a few monomer units before the deactivation reaction occurs to regenerate the dormant species. Concurrently, two radicals may react with each other to form dead chains with the rate coefficient k t . The rates of propagation and termination between two radicals are not influenced by the mechanism of deactivation or the catalyst used in the system. Thus it is possible to estimate how fast an RDRP can be conducted with preserved chain end functionality. [ 6 ] In addition, other chain breaking reactions such as irreversible chain transfer/termination reactions of the propagating radicals with solvent, monomer, polymer, catalyst, additives, etc.
would introduce additional loss of chain end functionality (CEF). [ 7 ] The overall rate coefficient of chain breaking reactions besides the direct termination between two radicals is represented as k tx . In all RDRP methods, the theoretical number average molecular weight of the obtained polymers, M n , can be defined by the following equation: M n = M m × [ M ] 0 − [ M ] t [ R-X ] 0 {\displaystyle M_{\text{n}}=M_{\text{m}}\times {\frac {[{\text{M}}]_{0}-[{\text{M}}]_{t}}{[{\text{R-X}}]_{0}}}} where M m is the molecular weight of the monomer; [M] 0 and [M] t are the monomer concentrations at time 0 and time t ; [R-X] 0 is the initial concentration of the initiator. Besides the designed molecular weight, a well controlled RDRP should give polymers with narrow molecular weight distributions, which can be quantified by M w / M n values, and well preserved chain end functionalities. A well controlled RDRP process requires: 1) a sufficiently fast reversible deactivation process; 2) limited chain breaking reactions which cause the loss of chain end functionalities; 3) a properly maintained radical concentration; 4) an initiator with proper activity. The initiator of the polymerization is usually an organohalide, and the dormant state is achieved in a complex with a transition metal (‘radical buffer’). This method is very versatile but requires unconventional initiator systems that are sometimes poorly compatible with the polymerization media. Under certain conditions a homolytic splitting of the C-O bond in alkoxyamines can occur, forming a stable 2-centre 3-electron N-O radical that is able to initiate a polymerization reaction. The preconditions for an alkoxyamine suitable to initiate a polymerization are bulky, sterically hindering substituents on the secondary amine, and the substituent on the oxygen should be able to form a stable radical, e.g. benzyl. RAFT is one of the most versatile and convenient techniques in this context.
The most common RAFT processes are carried out in the presence of thiocarbonylthio compounds that act as radical buffers. In ATRP and NMP, reversible deactivation of the propagating radicals takes place, and the dormant structures are a halo compound in ATRP and an alkoxyamine in NMP, each being a sink for radicals and a source at the same time, described by the corresponding equilibria. RAFT, on the contrary, is controlled by chain-transfer reactions that are in a deactivation-activation equilibrium. Since no radicals are generated or destroyed, an external source of radicals is necessary for initiation and maintenance of the propagation reaction. Although not a strictly living form of polymerization, catalytic chain transfer polymerization must be mentioned, as it figures significantly in the development of later forms of living free radical polymerization. Discovered in the late 1970s in the USSR, it was found that cobalt porphyrins were able to reduce the molecular weight during polymerization of methacrylates . Later investigations showed that cobalt glyoxime complexes were as effective as the porphyrin catalysts and also less oxygen sensitive. Due to their lower oxygen sensitivity these catalysts have been investigated much more thoroughly than the porphyrin catalysts. The major products of catalytic chain transfer polymerization are vinyl -terminated polymer chains. One of the major drawbacks of the process is that catalytic chain transfer polymerization does not produce macromonomers but instead produces addition-fragmentation agents. When a growing polymer chain reacts with the addition-fragmentation agent, the radical end-group attacks the vinyl bond and forms a bond. However, the resulting product is so hindered that the species undergoes fragmentation, leading eventually to telechelic species .
These addition-fragmentation chain transfer agents do form graft copolymers with styrenic and acrylate species; however, they do so by first forming block copolymers and then incorporating these block copolymers into the main polymer backbone. While high yields of macromonomers are possible with methacrylate monomers , low yields are obtained when using catalytic chain transfer agents during the polymerization of acrylate and styrenic monomers. This has been seen to be due to the interaction of the radical centre with the catalyst during these polymerization reactions. The reversible reaction of the cobalt macrocycle with the growing radical is known as cobalt-carbon bonding and in some cases leads to living polymerization reactions. An iniferter is a chemical compound that simultaneously acts as initiator , transfer agent, and terminator (hence the name ini-fer-ter) in controlled free radical iniferter polymerizations; the most common is the dithiocarbamate type. [ 8 ] [ 9 ] Iodine-transfer polymerization ( ITP , also called ITRP ), developed by Tatemoto and coworkers in the 1970s, [ 10 ] gives relatively low polydispersities for fluoroolefin polymers. While it has received relatively little academic attention, this chemistry has served as the basis for several industrial patents and products and may be the most commercially successful form of living free radical polymerization. [ 11 ] It has primarily been used to incorporate iodine cure sites into fluoroelastomers . The mechanism of ITP involves thermal decomposition of the radical initiator (typically persulfate ), generating the initiating radical In•. This radical adds to the monomer M to form the species P 1 •, which can propagate to P m •. By exchange of iodine from the transfer agent R-I to the propagating radical P m •, a new radical R• is formed and P m • becomes dormant. This species can propagate with monomer M to P n •.
During the polymerization, exchange between the different polymer chains and the transfer agent occurs, which is typical for a degenerative transfer process. Typically, iodine transfer polymerization uses a mono- or diiodo-perfluoroalkane as the initial chain transfer agent. This fluoroalkane may be partially substituted with hydrogen or chlorine. The energy of the iodine-perfluoroalkane bond is low and, in contrast to iodo-hydrocarbon bonds, its polarization is small. [ 12 ] Therefore, the iodine is easily abstracted in the presence of free radicals. Upon encountering an iodoperfluoroalkane, a growing poly(fluoroolefin) chain will abstract the iodine and terminate, leaving the newly created perfluoroalkyl radical to add further monomer. But the iodine-terminated poly(fluoroolefin) itself acts as a chain transfer agent. As in RAFT processes, as long as the rate of initiation is kept low, the net result is the formation of a monodisperse molecular weight distribution. Use of conventional hydrocarbon monomers with iodoperfluoroalkane chain transfer agents has been described. [ 13 ] The resulting molecular weight distributions have not been narrow, since the energetics of an iodine-hydrocarbon bond differ considerably from those of an iodine-fluorocarbon bond, making abstraction of the iodine from the terminated polymer difficult. The use of hydrocarbon iodides has also been described, but again the resulting molecular weight distributions were not narrow. [ 14 ] Preparation of block copolymers by iodine-transfer polymerization was also described by Tatemoto and coworkers in the 1970s. [ 15 ] Although the use of living free radical processes in emulsion polymerization has been characterized as difficult, [ 16 ] all examples of iodine-transfer polymerization have involved emulsion polymerization. Extremely high molecular weights have been claimed.
[ 17 ] Listed below are some other, less well described but increasingly important, living radical polymerization techniques. Diphenyl diselenide and several benzylic selenides have been explored by Kwon et al. as photoiniferters in the polymerization of styrene and methyl methacrylate. Their mechanism of control over polymerization is proposed to be similar to that of the dithiuram disulfide iniferters. However, their low transfer constants allow them to be used for block copolymer synthesis but give limited control over the molecular weight distribution. [ 18 ] Telluride-mediated polymerization, or TERP, first appeared to operate mainly under a reversible chain transfer mechanism by homolytic substitution under thermal initiation. However, a kinetic study found that TERP predominantly proceeds by degenerative transfer rather than dissociation-combination. [ 19 ] Alkyl tellurides of the structure Z-X-R, where Z = methyl and R = a good free radical leaving group, give the best control for a wide range of monomers, with phenyl tellurides (Z = phenyl) giving poor control. Polymerization of methyl methacrylates is controlled only by ditellurides. The importance of X to chain transfer increases in the series O<S<Se<Te, which makes alkyl tellurides effective in mediating control under thermally initiated conditions and the alkyl selenides and sulfides effective only under photoinitiated polymerization. More recently, Yamago et al. reported stibine-mediated polymerization, using an organostibine transfer agent with the general structure Z(Z')-Sb-R (where Z = an activating group and R = a free radical leaving group). A wide range of monomers (styrenics, (meth)acrylics and vinylics) can be controlled, giving narrow molecular weight distributions and predictable molecular weights under thermally initiated conditions. [ 20 ] [ 21 ] Yamago has also published a patent indicating that bismuth alkyls can control radical polymerizations via a similar mechanism.
Further reversible-deactivation radical polymerizations are known that are catalysed by copper .
https://en.wikipedia.org/wiki/Reversible-deactivation_radical_polymerization
The classic Monod–Wyman–Changeux (MWC) model for cooperativity is generally published in an irreversible form. That is, there are no product terms in the rate equation, which can be problematic for those wishing to build metabolic models since there are no product inhibition terms. [ 1 ] However, a series of publications by Popova and Sel'kov [ 2 ] derived the MWC rate equation for the reversible, multi-substrate, multi-product reaction. The same problem applies to the classic Hill equation , which is almost always shown in an irreversible form. Hofmeyr and Cornish-Bowden first published the reversible form of the Hill equation. [ 1 ] The equation has since been discussed elsewhere [ 3 ] [ 4 ] and the model has also been used in a number of kinetic models, such as a model of Phosphofructokinase and Glycolytic Oscillations in the Pancreatic β-cells [ 5 ] or a model of a glucose-xylose co-utilizing S. cerevisiae strain. [ 6 ] The model has also been discussed in modern enzyme kinetics textbooks. [ 7 ] [ 8 ] Consider the simpler case where there are two binding sites. See the scheme shown below. Each site is assumed to bind either a molecule of substrate S or of product P. The catalytic reaction is shown by the two reactions at the base of the scheme triangle, that is S to P and P to S. The model assumes the binding steps are always at equilibrium. The reaction rate is given by: v = k 1 ( E S + 2 E S 2 + E S P ) − k 2 ( E P + 2 E P 2 + E S P ) {\displaystyle v=k_{1}\left(ES+2ES_{2}+ESP\right)-k_{2}\left(EP+2EP_{2}+ESP\right)} Invoking the rapid-equilibrium assumption, we can write the various complexes in terms of equilibrium constants to give: v = V f σ ( 1 − ρ ) ( σ + π ) 1 + ( σ + π ) 2 {\displaystyle v={\frac {V_{f}\sigma (1-\rho )(\sigma +\pi )}{1+(\sigma +\pi )^{2}}}} where ρ = Γ / K e q {\displaystyle \rho =\Gamma /K_{eq}} . 
The σ {\displaystyle \sigma } and π {\displaystyle \pi } terms are the ratios of substrate and product to their respective half-saturation constants, namely σ = S / S 0.5 {\displaystyle \sigma =S/S_{0.5}} and π = P / P 0.5 {\displaystyle \pi =P/P_{0.5}} . Using the authors' own notation, if an enzyme has h {\displaystyle h} sites that can bind ligand, the form, in the general case, can be shown to be: v = V f σ ( 1 − ρ ) ( σ + π ) h − 1 1 + ( σ + π ) h {\displaystyle v={\frac {V_{f}\sigma (1-\rho )(\sigma +\pi )^{h-1}}{1+(\sigma +\pi )^{h}}}} The non-cooperative reversible Michaelis-Menten equation can be seen to emerge when we set the Hill coefficient to one. If the enzyme is irreversible, the equation turns into the simple irreversible Michaelis-Menten equation. When the equilibrium constant is set to infinity, the equation can be seen to revert to the simpler case where the reaction is irreversible but the product still inhibits the rate by competing for the binding sites. A comparison has been made between the MWC and reversible Hill equations. [ 9 ] A modification of the reversible Hill equation was published by Westermark et al., [ 10 ] where modifiers affected the catalytic properties instead. This variant was shown to provide a much better fit for describing the kinetics of muscle phosphofructokinase .
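As an illustration (not part of the original article), the general reversible Hill rate law above translates directly into code. The sketch below uses hypothetical parameter names; ρ is computed as (P/S)/Keq, following the definitions of σ, π and ρ given above.

```python
def reversible_hill(s, p, vf, s_half, p_half, keq, h):
    """Reversible Hill rate: Vf*sigma*(1-rho)*(sigma+pi)^(h-1) / (1+(sigma+pi)^h).

    s, p            -- substrate and product concentrations
    vf              -- forward limiting rate V_f
    s_half, p_half  -- half-saturation constants S_0.5 and P_0.5
    keq             -- equilibrium constant K_eq
    h               -- Hill coefficient (number of binding sites)
    """
    sigma = s / s_half        # sigma = S / S_0.5
    pi_ = p / p_half          # pi    = P / P_0.5
    rho = (p / s) / keq       # rho   = Gamma / K_eq with Gamma = P / S
    return vf * sigma * (1 - rho) * (sigma + pi_) ** (h - 1) / (1 + (sigma + pi_) ** h)
```

Setting h = 1 recovers the non-cooperative reversible Michaelis-Menten form, and the rate vanishes whenever P/S equals Keq, as the (1 − ρ) factor requires.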
https://en.wikipedia.org/wiki/Reversible_Hill_equation
Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. Enzymes act on small molecules called substrates, which an enzyme converts into products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. The study of how fast an enzyme can transform a substrate into a product is called enzyme kinetics . The rate of many chemical reactions shows a linear response as a function of the concentration of substrate molecules. Enzymes, however, display a saturation effect where, as the substrate concentration is increased, the reaction rate reaches a maximum value. Standard approaches to describing this behavior are based on models developed by Michaelis and Menten as well as Briggs and Haldane . Most elementary formulations of these models assume that the enzyme reaction is irreversible, that is, product is not converted back to substrate. However, this is unrealistic when describing the kinetics of enzymes in an intact cell because there is product available. Reversible Michaelis–Menten kinetics , using the reversible form of the Michaelis–Menten equation , is therefore important when developing computer models of cellular processes involving enzymes. In enzyme kinetics, the Michaelis–Menten rate law that describes the conversion of one substrate to one product is commonly depicted in its irreversible form as: v = V max s K m + s {\displaystyle v={\frac {V_{\max }s}{K_{\mathrm {m} }+s}}} where v {\displaystyle v} is the reaction rate, V max {\displaystyle V_{\max }} is the maximum rate when saturating levels of the substrate are present, K m {\displaystyle K_{\mathrm {m} }} is the Michaelis constant and s {\displaystyle s} the substrate concentration. In practice, this equation is used to predict the rate of reaction when little or no product is present. Such situations arise in enzyme assays. 
When used to model enzyme rates in vivo , for example, to model a metabolic pathway, this representation is inadequate because under these conditions product is present. As a result, when building computer models of metabolism [ 1 ] or other enzymatic processes, it is better to use the reversible form of the Michaelis–Menten equation. To model the reversible form of the Michaelis–Menten equation, the following reversible mechanism is considered: E + S ⇌ k − 1 k 1 ES ⇌ k − 2 k 2 E + P {\displaystyle {\ce {{E}+{S}<=>[k_{1}][k_{-1}]ES<=>[k_{2}][k_{-2}]{E}+{P}}}} To derive the rate equation, it is assumed that the concentration of enzyme-substrate complex is at steady-state, [ 2 ] that is d e s / d t = 0 {\displaystyle des/dt=0} . Following current literature convention, [ 3 ] we will be using lowercase Roman lettering to indicate concentrations (this avoids cluttering the equations with square brackets). Thus e s {\displaystyle es} indicates the concentration of enzyme-substrate complex, ES. The net rate of change of product (which is equal to v {\displaystyle v} ) is given by the difference in forward and reverse rates: v = v f − v r = k 2 e s − k − 2 e p {\displaystyle v=v_{f}-v_{r}=k_{2}es-k_{-2}e\ p} The total level of enzyme moiety is the sum total of free enzyme and enzyme-complex, that is e t = e + e s {\displaystyle e_{t}=e+es} . Hence the level of free e {\displaystyle e} is given by the difference in the total enzyme concentration, e t {\displaystyle e_{t}} and the concentration of complex, that is: e = e t − e s {\displaystyle e=e_{t}-es} Using mass conservation we can compute the rate of change of e s {\displaystyle es} using the balance equation: d e s d t = k 1 ( e t − e s ) s + k − 2 ( e t − e s ) p − ( k − 1 + k 2 ) e s = 0 {\displaystyle {\frac {des}{dt}}=k_{1}\left(e_{t}-es\right)s+k_{-2}\left(e_{t}-es\right)p-\left(k_{-1}+k_{2}\right)es=0} where e {\displaystyle e} has been replaced using e = e t − e s {\displaystyle e=e_{t}-es} . 
This leaves e s {\displaystyle es} as the only unknown. Solving for e s {\displaystyle es} gives: e s = e t ( k 1 s + k − 2 p ) k − 1 + k 2 + k 1 s + k − 2 p {\displaystyle es={\frac {\mathrm {e_{t}} \left(k_{1}s+k_{-2}\ p\right)}{k_{-1}+k_{2}+k_{1}\ s+k_{-2}\ p}}} Inserting e s {\displaystyle es} into the rate equation v = k 2 e s − k − 2 e p {\displaystyle v=k_{2}es-k_{-2}e\ p} and rearranging gives: v = e t k 1 k 2 s − k − 1 k − 2 p k − 1 + k 2 + k 1 s + k − 2 p {\displaystyle v=e_{t}{\frac {k_{1}k_{2}s-k_{-1}k_{-2}p}{k_{-1}+k_{2}+k_{1}s+k_{-2}p}}} The following substitutions are now made: k 2 = V max f e t ; K m s = k − 1 + k 2 k 1 {\displaystyle k_{2}={\frac {V_{\max }^{f}}{e_{t}}};\quad K_{m}^{s}={\frac {k_{-1}+k_{2}}{k_{1}}}} and k − 1 = V max r e t ; K m p = k − 1 + k 2 k − 2 {\displaystyle k_{-1}={\frac {V_{\max }^{r}}{e_{t}}};\quad K_{m}^{p}={\frac {k_{-1}+k_{2}}{k_{-2}}}} After rearrangement, we obtain the reversible Michaelis–Menten equation in terms of four constants: v = V max f K m s s − V max r K m p p 1 + s K m s + p K m p {\displaystyle v={\frac {{\frac {V_{\max }^{f}}{K_{m}^{s}}}s-{\frac {V_{\max }^{r}}{K_{m}^{p}}}p}{1+{\frac {s}{K_{m}^{s}}}+{\frac {p}{K_{m}^{p}}}}}} This, however, is not the usual form in which the equation is used. To eliminate one of the constants, the equation is set to zero, that is v = 0 {\displaystyle v=0} , indicating we are at equilibrium, so that the concentrations s {\displaystyle s} and p {\displaystyle p} are now equilibrium concentrations, hence: 0 = V max f s e q / K m s − V max r p e q / K m p {\displaystyle 0=V_{\max }^{f}s_{eq}/K_{m}^{s}-V_{\max }^{r}p_{eq}/K_{m}^{p}} Rearranging this gives the so-called Haldane relationship: K e q = p e q s e q = V max f K m p V max r K m s {\displaystyle K_{eq}={\frac {p_{eq}}{s_{eq}}}={\frac {V_{\max }^{f}K_{m}^{p}}{V_{\max }^{r}K_{m}^{s}}}} The advantage of this is that one of the four constants can be eliminated and replaced with the equilibrium constant, which is more likely to be known. 
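As a numerical check (an illustration, not from the article), the sketch below evaluates the rate both from the elementary rate constants and from the four-constant form, using the aggregate constants Vmax_f = k2·et, Vmax_r = k−1·et (the reverse limiting rate corresponds to the ES → E + S step), Km_s = (k−1 + k2)/k1 and Km_p = (k−1 + k2)/k−2. All numerical values are arbitrary; function names are invented for the example.

```python
def rate_elementary(s, p, et, k1, km1, k2, km2):
    """v = et*(k1*k2*s - k-1*k-2*p) / (k-1 + k2 + k1*s + k-2*p)."""
    return et * (k1 * k2 * s - km1 * km2 * p) / (km1 + k2 + k1 * s + km2 * p)

def rate_four_constant(s, p, vmax_f, vmax_r, km_s, km_p):
    """Reversible Michaelis-Menten rate in terms of the four constants."""
    num = (vmax_f / km_s) * s - (vmax_r / km_p) * p
    return num / (1 + s / km_s + p / km_p)

def haldane_keq(vmax_f, vmax_r, km_s, km_p):
    """Equilibrium constant implied by the Haldane relationship."""
    return (vmax_f * km_p) / (vmax_r * km_s)

# Arbitrary elementary constants and the aggregate constants derived from them.
et, k1, km1, k2, km2 = 1.0, 2.0, 1.0, 3.0, 0.5
vmax_f = k2 * et                 # forward limiting rate
vmax_r = km1 * et                # reverse limiting rate (ES -> E + S step)
km_s = (km1 + k2) / k1
km_p = (km1 + k2) / km2
```

The two forms agree for any s and p, and the Haldane Keq equals k1·k2/(k−1·k−2), the equilibrium constant of the overall reaction, so the net rate vanishes when p/s = Keq.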
In addition, it allows one to make a useful interpretation in terms of the thermodynamic and saturation effects (see next section). Most often the reverse maximum rate is eliminated to yield the final equation: v = V max f / K m S ( s − p / K e q ) 1 + s / K m s + p / K m p {\displaystyle v={\frac {V_{\max }^{f}/K_{m}^{S}\left(s-p/K_{eq}\right)}{1+s/K_{m}^{s}+p/K_{m}^{p}}}} The reversible Michaelis–Menten law, as with many enzymatic rate laws, can be decomposed into a capacity term, a thermodynamic term, and an enzyme saturation level. [ 4 ] [ 5 ] This is more easily seen when we write the reversible rate law as: v = V max f K m s ⋅ ( s − p / K e q ) ⋅ 1 1 + s / K m s + p / K m p {\displaystyle v={\frac {V_{\max }^{f}}{K_{m}^{s}}}\cdot \left(s-p/K_{eq}\right)\cdot {\frac {1}{1+s/K_{m}^{s}+p/K_{m}^{p}}}} where V max f / K m s {\displaystyle V_{\max }^{f}/K_{m}^{s}} is the capacity term, ( s − p / K e q ) {\displaystyle \left(s-p/K_{eq}\right)} the thermodynamic term and 1 1 + s / K m s + p / K m p {\displaystyle {\frac {1}{1+s/K_{m}^{s}+p/K_{m}^{p}}}} the saturation term. The separation can be even better appreciated if we look at the elasticity coefficient ε s v {\displaystyle \varepsilon _{s}^{v}} . According to elasticity algebra , the elasticity of a product of terms is the sum of the elasticities of the individual terms, [ 6 ] that is: ε x a b = ε x a + ε x b {\displaystyle \varepsilon _{x}^{ab}=\varepsilon _{x}^{a}+\varepsilon _{x}^{b}} Hence the elasticity of the reversible Michaelis–Menten rate law can easily be shown to be: ε s v = ε s v c a p + ε s v t h e r m o + ε s v s a t {\displaystyle \varepsilon _{s}^{v}=\varepsilon _{s}^{v_{cap}}+\varepsilon _{s}^{v_{thermo}}+\varepsilon _{s}^{v_{sat}}} Since the capacity term is a constant, the first elasticity is zero. 
The thermodynamic term can be easily shown to be: ε s v t h e r m o = 1 1 − ρ {\displaystyle \varepsilon _{s}^{v_{thermo}}={\frac {1}{1-\rho }}} where ρ {\displaystyle \rho } is the disequilibrium ratio, equal to Γ / K e q {\displaystyle \Gamma /K_{eq}} , and Γ {\displaystyle \Gamma } is the mass–action ratio. The saturation term becomes: ε s v s a t = − s / K m s 1 + s / K m s + p / K m p {\displaystyle \varepsilon _{s}^{v_{sat}}={\frac {-s/K_{m}^{s}}{1+s/K_{m}^{s}+p/K_{m}^{p}}}}
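These elasticity expressions can be checked numerically. The sketch below (illustrative; function and parameter names are invented) compares a finite-difference estimate of the elasticity εsv against the sum of the thermodynamic and saturation terms; the capacity term contributes zero because it is a constant.

```python
import math

def rate(s, p, vf, km_s, km_p, keq):
    """Reversible Michaelis-Menten rate with Vmax_r eliminated via the Haldane relationship."""
    return (vf / km_s) * (s - p / keq) / (1 + s / km_s + p / km_p)

def elasticity_numeric(f, s, *args, ds=1e-6):
    """Scaled sensitivity d(ln v)/d(ln s) estimated by a central finite difference."""
    hi, lo = f(s * (1 + ds), *args), f(s * (1 - ds), *args)
    return (math.log(hi) - math.log(lo)) / (math.log(1 + ds) - math.log(1 - ds))

def elasticity_analytic(s, p, km_s, km_p, keq):
    """Thermodynamic term 1/(1-rho) plus saturation term -(s/Kms)/(1 + s/Kms + p/Kmp)."""
    rho = (p / s) / keq                              # disequilibrium ratio Gamma/Keq
    thermo = 1.0 / (1.0 - rho)
    sat = -(s / km_s) / (1 + s / km_s + p / km_p)
    return thermo + sat
```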
https://en.wikipedia.org/wiki/Reversible_Michaelis–Menten_kinetics
For an electrode of a particular size and geometry in a solution, the reversible charge injection limit is the amount of charge that can move from the electrode to the surroundings without causing an irreversible chemical reaction. [ 1 ]
https://en.wikipedia.org/wiki/Reversible_charge_injection_limit
In mathematics , a reversible diffusion is a specific example of a reversible stochastic process . Reversible diffusions have an elegant characterization due to the Russian mathematician Andrey Nikolaevich Kolmogorov . Let B denote a d -dimensional standard Brownian motion ; let b : R d → R d be a Lipschitz continuous vector field . Let X : [0, +∞) × Ω → R d be an Itō diffusion defined on a probability space (Ω, Σ, P ) and solving the Itō stochastic differential equation d X t = b ( X t ) d t + d B t {\displaystyle \mathrm {d} X_{t}=b(X_{t})\,\mathrm {d} t+\mathrm {d} B_{t}} with square-integrable initial condition, i.e. X 0 ∈ L 2 (Ω, Σ, P ; R d ). Then the following are equivalent: (1) X is a reversible diffusion, and (2) the drift b is the negative of the gradient of some scalar potential Φ : R d → R, in which case X has stationary density proportional to exp(−2Φ(·)). (Of course, the condition that b be the negative of the gradient of Φ only determines Φ up to an additive constant; this constant may be chosen so that exp(−2Φ(·)) is a probability density function with integral 1.)
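As an illustrative sketch (not part of the article), a diffusion with gradient drift b = −∇Φ can be simulated by the Euler–Maruyama method. Taking Φ(x) = x²/2 in one dimension gives the Ornstein–Uhlenbeck process dXt = −Xt dt + dBt, whose stationary density is proportional to exp(−2Φ(x)) = exp(−x²), i.e. a normal distribution with mean 0 and variance 1/2. The function and parameter names are invented for the example.

```python
import math
import random

def simulate_reversible_diffusion(grad_phi, x0, dt, n_steps, seed=0):
    """Euler-Maruyama discretisation of dX_t = -grad(Phi)(X_t) dt + dB_t in one dimension."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(n_steps):
        # drift toward lower potential plus a Gaussian increment of variance dt
        x += -grad_phi(x) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

For Φ(x) = x²/2, the time average of a long simulated path should reproduce the exp(−x²) stationary statistics (mean ≈ 0, variance ≈ 1/2), up to discretisation and sampling error.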
https://en.wikipedia.org/wiki/Reversible_diffusion