Strep-tag Just like other short affinity tags (His-tag, FLAG-tag), the Strep-tag can be easily fused to recombinant proteins during subcloning of its cDNA or gene. For its expression, vectors for various host organisms ("E. coli", yeast, insect, and mammalian cells) are available. A particular benefit of the Strep-tag is its rather small size and the fact that it is biochemically almost inert. Therefore, protein folding or secretion is not influenced, and the tag usually does not interfere with protein function. The Strep-tag system is especially suited for the analysis of functional proteins, because the purification procedure can be kept under physiological conditions. This not only allows the isolation of sensitive proteins in a native state, but also makes it possible to purify intact protein complexes, even if just one subunit carries the tag. In the first step of the purification cycle, the cell lysate containing the fusion protein is applied to a column with immobilized Strep-Tactin (step 1). After the tagged protein has specifically bound to Strep-Tactin, a short washing step with a physiological buffer (e.g. PBS) removes all other host proteins (step 2); this is possible because of Strep-Tactin's extraordinarily low tendency to bind proteins nonspecifically. Then, the purified fusion protein is gently eluted with a low concentration of desthiobiotin, which specifically competes for the biotin binding pocket (step 3). To regenerate the column, desthiobiotin is removed by application of a solution containing HABA (a yellow azo dye).
https://en.wikipedia.org/wiki?curid=22810498
Strep-tag The removal of desthiobiotin is indicated by a color change from yellow-orange to red (steps 4+5). Finally, the HABA solution is washed out with a small volume of running buffer, making the column ready for the next purification run. The system offers a highly selective tool to purify proteins under physiological conditions. The proteins obtained are bioactive and display very high purity (above 95%). The system can also be used for protein detection in various assays. Depending on the experimental circumstances, antibodies "or" Strep-Tactin can be used, conjugated to an enzymatic (e.g. horseradish peroxidase (HRP), alkaline phosphatase (AP)) or fluorescent (e.g. green fluorescent protein (GFP)) marker. If high purity is required, the lysate can be purified by first using Strep-Tactin and then performing a second run using antibodies against the Strep-tag. This reduces contamination with nonspecifically bound proteins, which might occur in some rare scenarios. Several kinds of assays can be conducted using the detection system. Because the Strep-tag system is capable of isolating protein complexes, strategies for the study of protein-protein interactions can also be pursued. Another option is the immobilization of proteins with a specific high-affinity antibody on microplates or biochips. The Strep-tag/Strep-Tactin system is also used in single-molecule optical tweezers and AFM experiments, showing high mechanical stability comparable to the strongest noncovalent linkages currently available.
https://en.wikipedia.org/wiki?curid=22810498
DNA-encoded chemical library DNA-encoded chemical library (DEL) technology enables the synthesis and screening, on an unprecedented scale, of collections of small-molecule compounds. DEL is used in medicinal chemistry to bridge the fields of combinatorial chemistry and molecular biology. The aim of DEL technology is to accelerate the drug discovery process, in particular early-phase discovery activities such as target validation and hit identification. DEL technology involves the conjugation of chemical compounds or building blocks to short DNA fragments that serve as identification barcodes and, in some cases, also direct and control the chemical synthesis. The technique enables the mass creation and interrogation of libraries via affinity selection, typically on an immobilized protein target. A homogeneous method for screening DNA-encoded libraries has recently been developed which uses water-in-oil emulsion technology to isolate, count and identify individual ligand-target complexes in a single-tube approach. In contrast to conventional screening procedures such as high-throughput screening, biochemical assays are not required for binder identification, in principle allowing the isolation of binders to a wide range of proteins historically difficult to tackle with conventional screening technologies.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Thus, in addition to the general discovery of target-specific molecular compounds, the availability of binders to pharmacologically important but so-far "undruggable" target proteins opens new possibilities for developing novel drugs against diseases that could not be treated so far. By eliminating the requirement to initially assess the activity of hits, it is hoped and expected that many of the high-affinity binders identified will prove active in independent analysis of selected hits, therefore offering an efficient method to identify high-quality hits and pharmaceutical leads. Until recently, the application of molecular evolution in the laboratory had been limited to display technologies involving biological molecules, and small-molecule lead discovery was considered beyond the reach of this biological approach. DEL has opened the field of display technology to include non-natural compounds such as small molecules, extending the application of molecular evolution and natural selection to the identification of small-molecule compounds of desired activity and function. DNA-encoded chemical libraries bear resemblance to biological display technologies such as antibody phage display, yeast display, mRNA display and aptamer SELEX. In antibody phage display, antibodies are physically linked to phage particles that bear the gene coding for the attached antibody, which is equivalent to a physical linkage of a "phenotype" (the protein) and a "genotype" (the gene encoding the protein).
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Phage-displayed antibodies can be isolated from large antibody libraries by mimicking molecular evolution: through rounds of selection (on an immobilized protein target), amplification and translation. In DEL, the linkage of a small molecule to an identifier DNA code allows the facile identification of binding molecules. DEL libraries are subjected to affinity selection procedures on an immobilized target protein of choice, after which non-binders are removed by washing steps, and binders can subsequently be amplified by polymerase chain reaction (PCR) and identified by virtue of their DNA code (e.g. by DNA sequencing). In evolution-based DEL technologies (see below), hits can be further enriched by performing rounds of selection, PCR amplification and translation, in analogy to biological display systems such as antibody phage display. This makes it possible to work with much larger libraries. The concept of DNA encoding was first described in a theoretical paper by Brenner and Lerner in 1992, in which it was proposed to link each molecule of a chemically synthesized entity to a particular oligonucleotide sequence constructed in parallel, and to use this encoding genetic tag to identify and enrich active compounds. In 1993 the first practical implementation of this approach was presented by S. Brenner and K. Janda, and similarly by the group of M.A. Gallop.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Brenner and Janda suggested generating individual encoded library members by an alternating parallel combinatorial synthesis of the heteropolymeric chemical compound and the appropriate oligonucleotide sequence on the same bead, in a "split-&-pool"-based fashion (see below). Since unprotected DNA is restricted to a narrow window of conventional reaction conditions, until the end of the 1990s a number of alternative encoding strategies were envisaged (i.e. MS-based compound tagging, peptide encoding, haloaromatic tagging, encoding by secondary amines, semiconductor devices), mainly to avoid inconvenient solid-phase DNA synthesis and to create easily screenable combinatorial libraries in high-throughput fashion. However, the selective amplifiability of DNA greatly facilitates library screening, and it became indispensable for encoding organic compound libraries of such unprecedented size. Consequently, at the beginning of the 2000s DNA-combinatorial chemistry experienced a revival. The beginning of the millennium saw the introduction of several independent developments in DEL technology. These technologies can be classified under two general categories: non-evolution-based technologies, and evolution-based technologies capable of molecular evolution. The first category benefits from the ability to use off-the-shelf reagents and therefore enables rather straightforward library generation.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Hits can be identified by DNA sequencing; however, DNA translation, and therefore molecular evolution, is not feasible with these methods. The split-and-pool approaches developed by researchers at Praecis Pharmaceuticals (now owned by GlaxoSmithKline) and Nuevolution (Copenhagen, Denmark), and the ESAC technology developed in the laboratory of Prof. D. Neri (Institute of Pharmaceutical Science, Zurich, Switzerland), fall under this category. ESAC technology sets itself apart as a combinatorial self-assembling approach which resembles fragment-based hit discovery (Fig. 1b). Here DNA annealing enables discrete building-block combinations to be sampled, but no chemical reaction takes place between them. Examples of evolution-based DEL technologies are DNA-routing, developed by Prof. D.R. Halpin and Prof. P.B. Harbury (Stanford University, Stanford, CA); DNA-templated synthesis, developed by Prof. D. Liu (Harvard University, Cambridge, MA) and commercialized by Ensemble Therapeutics (Cambridge, MA); and YoctoReactor technology, developed and commercialized by Vipergen (Copenhagen, Denmark). These technologies are described in further detail below. DNA-templated synthesis and YoctoReactor technology require the prior conjugation of chemical building blocks (BBs) to a DNA oligonucleotide tag, and therefore more upfront work, before library assembly.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Furthermore, the DNA-tagged BBs enable the generation of a genetic code for the synthesized compounds, and artificial translation of this genetic code is possible: the BBs can be recalled by the PCR-amplified genetic code, and the library compounds can be regenerated. This, in turn, enables the principle of Darwinian natural selection and evolution to be applied to small-molecule selection, in direct analogy to biological display systems, through rounds of selection, amplification and translation. In order to apply combinatorial chemistry to the synthesis of DNA-encoded chemical libraries, a split-&-pool approach was pursued. Initially, a set of unique DNA oligonucleotides (n), each containing a specific coding sequence, is chemically conjugated to a corresponding set of small organic molecules. Subsequently, the oligonucleotide-conjugated compounds are mixed ("pool") and divided ("split") into a number of groups (m). Under appropriate conditions, a second set of building blocks (m) is coupled to the first one, and a further oligonucleotide coding for the second modification is enzymatically introduced before mixing again. These "split-&-pool" steps can be iterated a number of times (r), increasing the library size at each round in a combinatorial manner (i.e. n x m compounds after the first two sets of building blocks; see the sketch below). Alternatively, peptide nucleic acids have been used to encode libraries prepared by the "split-&-pool" method. A benefit of PNA encoding is that the chemistry can be performed by standard solid-phase peptide synthesis (SPPS).
https://en.wikipedia.org/wiki?curid=22810768
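The combinatorial bookkeeping behind "split-&-pool" encoding can be illustrated with a short sketch. This is a minimal toy model, not any vendor's pipeline; all building-block names and DNA codes are hypothetical placeholders, and a real library appends codes enzymatically (e.g. by ligation) rather than by string concatenation.

```python
# Minimal sketch of "split-&-pool" DNA encoding; all building-block (BB)
# names and DNA codes below are hypothetical placeholders.

round1 = {"ACGT": "BB1-a", "TGCA": "BB1-b", "GGAA": "BB1-c"}  # n = 3 first-round BBs
round2 = {"CCTT": "BB2-a", "AATG": "BB2-b"}                   # m = 2 second-round BBs

library = {}
for code1, bb1 in round1.items():      # "split": couple each first BB, tag each pool
    for code2, bb2 in round2.items():  # "pool" and re-"split" for the second BB set
        # In practice the second code is appended enzymatically, not by string math.
        library[code1 + code2] = (bb1, bb2)

print(len(library))                    # n x m = 6 encoded compounds
print(library["ACGTCCTT"])             # decoding a barcode recalls ('BB1-a', 'BB2-a')
```

Each further round multiplies the library size by the number of building blocks in that round, which is why split-&-pool libraries grow so quickly.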
DNA-encoded chemical library A promising strategy for the construction of DNA-encoded libraries is the use of multifunctional building blocks covalently conjugated to an oligonucleotide serving as a "core structure" for library synthesis. In a "pool-and-split" fashion, a set of multifunctional scaffolds undergoes orthogonal reactions with series of suitable reactive partners. Following each reaction step, the identity of the modification is encoded by an enzymatic addition of a DNA segment to the original DNA "core structure". The use of "N"-protected amino acids covalently attached to a DNA fragment allows, after a suitable deprotection step, a further amide bond formation with a series of carboxylic acids, or a reductive amination with aldehydes. Similarly, diene carboxylic acids used as scaffolds for library construction at the 5'-end of an amino-modified oligonucleotide can be subjected to a Diels-Alder reaction with a variety of maleimide derivatives. After completion of the desired reaction step, the identity of the chemical moiety added to the oligonucleotide is established by the annealing of a partially complementary oligonucleotide and a subsequent Klenow fill-in DNA polymerization, yielding a double-stranded DNA fragment. The synthetic and encoding strategies described above enable the facile construction of DNA-encoded libraries of a size up to 10 member compounds carrying two sets of "building blocks".
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library However, the stepwise addition of at least three independent sets of chemical moieties to a tri-functional core building block, for the construction and encoding of a very large DNA-encoded library (comprising up to 10 compounds), can also be envisaged (Fig. 2). Encoded Self-Assembling Chemical (ESAC) libraries rely on the principle that two sublibraries of a size of x members (e.g. 10), each containing a constant complementary hybridization domain, can yield after hybridization a combinatorial DNA-duplex library with a complexity of x times x uniformly represented library members (e.g. 10). Each sublibrary member consists of an oligonucleotide containing a variable coding region flanked by a constant DNA sequence and carrying a suitable chemical modification at the oligonucleotide extremity. The ESAC sublibraries can be used in at least four different embodiments. Preferential binders isolated from an affinity-based selection can be PCR-amplified and decoded on complementary oligonucleotide microarrays, or by concatenation of the codes, subcloning and sequencing. The individual building blocks can eventually be conjugated using suitable linkers to yield a drug-like high-affinity compound. The characteristics of the linker (e.g. length, flexibility, geometry, chemical nature and solubility) influence the binding affinity and the chemical properties of the resulting binder (Fig. 3). Bio-panning experiments on human serum albumin (HSA) with a 600-member ESAC library allowed the isolation of the 4-("p"-iodophenyl)butanoic moiety.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library The compound represents the core structure of a series of portable albumin-binding molecules and of Albufluor, a recently developed fluorescein angiographic contrast agent currently under clinical evaluation. ESAC technology has been used for the isolation of potent inhibitors of bovine trypsin and for the identification of novel inhibitors of stromelysin-1 (MMP-3), a matrix metalloproteinase involved in physiological and pathological tissue remodeling, as well as in disease processes such as arthritis and metastasis. In 2004, D.R. Halpin and P.B. Harbury presented a novel and intriguing method for the construction of DNA-encoded libraries. For the first time, the DNA-conjugated templates served both for encoding and for programming the infrastructure of the "split-&-pool" synthesis of the library components. The design of Halpin and Harbury enabled alternating rounds of selection, PCR amplification and diversification with small organic molecules, in complete analogy to phage display technology. The DNA-routing machinery consists of a series of connected columns bearing resin-bound anticodons, which can sequence-specifically separate a population of DNA templates into spatially distinct locations by hybridization. Following this split-and-pool protocol, a DNA-encoded combinatorial peptide library of 10 members was generated.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library In 2001 David Liu and co-workers showed that complementary DNA oligonucleotides can be used to assist certain synthetic reactions which do not take place efficiently in solution at low concentration. A DNA heteroduplex was used to accelerate the reaction between chemical moieties displayed at the extremities of the two DNA strands. Furthermore, the "proximity effect" which accelerates the bimolecular reaction was shown to be distance-independent (at least within a distance of 30 nucleotides). In a sequence-programmed fashion, oligonucleotides carrying one reactive chemical group were hybridized to complementary oligonucleotide derivatives carrying a different reactive chemical group. The proximity conferred by DNA hybridization drastically increases the effective molarity of the reagents attached to the oligonucleotides, enabling the desired reaction to occur even in an aqueous environment at concentrations several orders of magnitude lower than those needed for the corresponding conventional, non-DNA-templated organic reaction. Using a DNA-templated set-up and sequence-programmed synthesis, Liu and co-workers generated a 64-member DNA-encoded library of macrocycles. The YoctoReactor (yR) is a 3D proximity-driven approach which exploits the self-assembly of DNA oligonucleotides into 3-, 4- or 5-way junctions to direct small-molecule synthesis at the center of the junction. Figure 5 illustrates the basic concept with a 4-way DNA junction.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library The center of the DNA junction constitutes a volume on the order of a yoctoliter, hence the name YoctoReactor. Confining a single-molecule reaction to this volume yields reaction concentrations in the high mM range (see the back-of-the-envelope calculation below). This high effective concentration, facilitated by the DNA, greatly accelerates chemical reactions that would otherwise not take place at the actual concentrations, which are several orders of magnitude lower. Figure 6 illustrates the generation of a yR library using a 3-way DNA junction. In summary, chemical building blocks (BBs) are attached via cleavable or non-cleavable linkers to three types of bispecific DNA oligonucleotides (oligo-BBs), representing each arm of the yR. To facilitate synthesis in a combinatorial manner, the oligo-BBs are designed such that the DNA contains (a) the code for the attached BB at the distal end of the oligo (colored lines) and (b) areas of constant DNA sequence (black lines) that bring about the self-assembly of the DNA into a 3-way junction (independently of the BB) and the subsequent chemical reaction. Chemical reactions are performed via a stepwise procedure, and after each step the DNA is ligated and the product purified by polyacrylamide gel electrophoresis. Cleavable linkers (BB-DNA) are used for all but one position, yielding a library of small molecules with a single covalent link to the DNA code. Table 1 outlines how libraries of different sizes can be generated using yR technology.
https://en.wikipedia.org/wiki?curid=22810768
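The "high mM" figure can be checked with back-of-the-envelope arithmetic: a single molecule confined to a volume V corresponds to a concentration of 1/(N_A * V). The reactor volumes used below (1-10 yL) are an assumption chosen only to match the stated order of magnitude.

```python
# Effective concentration of a single molecule confined to a volume V:
# c = 1 / (N_A * V). A reactor "on the order of a yoctoliter" is taken
# here as 1-10 yL, purely for illustration.

N_A = 6.022e23                                  # Avogadro's number, 1/mol

for volume_liters in (1e-24, 1e-23):            # 1 yL and 10 yL
    c = 1.0 / (N_A * volume_liters)             # mol/L
    print(f"V = {volume_liters:.0e} L  ->  c = {c * 1e3:.0f} mM")

# V = 1e-24 L  ->  c = 1661 mM
# V = 1e-23 L  ->  c = 166 mM
# i.e. hundreds of millimolar and above, consistent with the "high mM" range.
```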
DNA-encoded chemical library The yR design approach provides an unvarying reaction site with regard to both (a) the distance between reactants and (b) the sequence environment surrounding the reaction site. Furthermore, the intimate connection between the code and the BB on the oligo-BB moieties, which are mixed combinatorially in a single pot, confers high fidelity to the encoding of the library. Moreover, the code of the synthesized products is not preset, but rather is assembled combinatorially and synthesized in synchronicity with the nascent product. A homogeneous method for screening YoctoReactor libraries has recently been developed which uses water-in-oil emulsion technology to isolate individual ligand-target complexes. Called Binder Trap Enrichment (BTE), it identifies ligands to a protein target by trapping binding pairs (DNA-labelled protein target and yR ligand) in emulsion droplets during dissociation-dominated kinetics. Once trapped, the target and ligand DNA are joined by ligation, thus preserving the binding information. Thereafter, identification of hits is essentially a counting exercise: information on binding events is deciphered by sequencing and counting the joined DNA, and selective binders are counted with a much higher frequency than random binders. This is possible because random trapping of target and ligand is "diluted" by the high number of water droplets in the emulsion.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library The low noise and background signal characteristic of BTE is attributed to the "dilution" of the random signal, the lack of surface artifacts, and the high fidelity of the yR library and screening method. Screening is performed in a single-tube method. Biologically active hits are identified in a single round of BTE, characterized by a low false-positive rate. BTE mimics the non-equilibrium nature of in vivo ligand-target interactions and offers the unique possibility of screening for target-specific ligands based on ligand-target residence time, because the emulsion, which traps the binding complex, is formed during a dynamic dissociation phase. Following selection from DNA-encoded chemical libraries, a decoding strategy for the fast and efficient identification of the specific binding compounds is crucial for the further development of DEL technology. So far, Sanger-sequencing-based decoding, microarray-based methodology and high-throughput sequencing techniques represent the main methodologies for decoding DNA-encoded library selections. Although many authors implicitly envisaged a traditional Sanger-sequencing-based decoding, sequencing a number of codes commensurate with the complexity of the library is an unrealistic task for a traditional Sanger sequencing approach. Nevertheless, the implementation of Sanger sequencing for decoding DNA-encoded chemical libraries in high-throughput fashion was the first to be described.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library After selection and PCR amplification of the DNA tags of the library compounds, concatamers containing multiple coding sequences were generated and ligated into a vector. Sanger sequencing of a representative number of the resulting colonies then revealed the frequencies of the codes present in the DNA-encoded library sample before and after selection. A DNA microarray is a device for high-throughput investigations widely used in molecular biology and medicine. It consists of an arrayed series of microscopic spots ("features" or "locations"), each containing a few picomoles of oligonucleotides carrying a specific DNA sequence. This can be a short section of a gene or another DNA element that is used as a probe to hybridize a DNA or RNA sample under suitable conditions. Probe-target hybridization is usually detected and quantified by fluorescence-based detection of fluorophore-labeled targets, to determine the relative abundance of the target nucleic acid sequences. Microarrays have been used successfully for decoding ESAC DNA-encoded libraries and PNA-encoded libraries. The coding oligonucleotides representing the individual chemical compounds in the library are spotted and chemically linked onto the microarray slides using a BioChip Arrayer robot. Subsequently, the oligonucleotide tags of the binding compounds isolated from the selection are PCR-amplified using a fluorescent primer and hybridized onto the DNA-microarray slide.
https://en.wikipedia.org/wiki?curid=22810768
DNA-encoded chemical library Afterwards, the microarrays are analyzed with a laser scanner, and spot intensities are detected and quantified. The enrichment of the preferential binding compounds is revealed by comparing the spot intensities of the DNA-microarray slide before and after selection. Given the complexity of a DNA-encoded chemical library (typically between 10 and 10 members), a conventional Sanger-sequencing-based decoding is unlikely to be usable in practice, due both to the high cost per base of sequencing and to the tedious procedure involved. High-throughput sequencing technologies exploit strategies that parallelize the sequencing process, displacing the use of capillary electrophoresis and producing thousands or millions of sequences at once. In 2008, the first implementation of a high-throughput sequencing technique originally developed for genome sequencing (i.e. "454 technology") for the fast and efficient decoding of a DNA-encoded chemical library comprising 4000 compounds was described. This study led to the identification of novel chemical compounds with submicromolar dissociation constants towards streptavidin and definitively showed the feasibility of constructing, performing selections with, and decoding DNA-encoded libraries containing millions of chemical compounds (a minimal counting sketch is given below).
https://en.wikipedia.org/wiki?curid=22810768
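A minimal sketch of how such sequencing-based decoding works in practice: barcodes are counted before and after selection, and compounds whose normalized frequency rises sharply are flagged as candidate binders. The barcodes, read counts and enrichment threshold below are all invented for illustration.

```python
from collections import Counter

# Toy decoding of a DEL selection from sequencing reads: compounds whose
# barcodes are counted far more often after selection than before are
# flagged as putative binders. All reads below are made up.

reads_before = ["ACGTCCTT"] * 10 + ["TGCAAATG"] * 70 + ["GGAACCTT"] * 70
reads_after  = ["ACGTCCTT"] * 800 + ["TGCAAATG"] * 90 + ["GGAACCTT"] * 110

before, after = Counter(reads_before), Counter(reads_after)
total_before, total_after = sum(before.values()), sum(after.values())

for barcode in after:
    # Normalized frequency ratio, i.e. fold enrichment over the naive library.
    enrichment = (after[barcode] / total_after) / (before[barcode] / total_before)
    if enrichment > 5:
        print(barcode, f"enriched {enrichment:.1f}x -> candidate binder")
        # prints: ACGTCCTT enriched 12.0x -> candidate binder
```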
Keller's reagent can refer to either of two different mixtures of acids. In metallurgy, Keller's reagent is a mixture of nitric acid, hydrochloric acid, and hydrofluoric acid, used to etch aluminum alloys to reveal their grain boundaries and orientations. It is also sometimes called Dix-Keller reagent, after E. H. Dix, Jr., and Fred Keller of the Aluminum Company of America, who pioneered the use of this technique in the late 1920s and early 1930s. In organic chemistry, Keller's reagent is a mixture of anhydrous (glacial) acetic acid, concentrated sulfuric acid, and small amounts of ferric chloride, used to detect alkaloids. It can also be used to detect other kinds of alkaloids via reactions in which it produces products with a wide range of colors. Cohn describes its use to detect the principal components of digitalis. The reaction with this reagent is also known as the Keller-Kiliani reaction, after C. C. Keller and H. Kiliani, who both used it to study digitalis in the late 19th century.
https://en.wikipedia.org/wiki?curid=22812166
Edward Harrison Memorial Prize The Edward Harrison Memorial Prize was awarded from 1926 to 1979 by the Chemical Society, and from 1980 to 2007 by its successor, the Royal Society of Chemistry, to a British chemist who was under 32 years of age and working in the fields of theoretical or physical chemistry. It commemorated the work of Edward Harrison, who was credited with producing the first serviceable gas mask and whose work saved many lives. In 2008 the prize was merged with the Meldola Medal and Prize to form the Harrison-Meldola Memorial Prizes. Winners include
https://en.wikipedia.org/wiki?curid=22813656
Harrison-Meldola Memorial Prizes The Harrison-Meldola Memorial Prizes are annual prizes awarded by the Royal Society of Chemistry to chemists in Britain who are 34 years of age or below. The prizes are given to scientists who demonstrate the most meritorious and promising original investigations in chemistry and have published the results of those investigations. Three prizes are given every year, each winner receiving £5000 and a medal. Candidates are not permitted to nominate themselves. The prizes were begun in 2008 when two previous awards, the Meldola Medal and Prize and the Edward Harrison Memorial Prize, were joined together. They commemorate Raphael Meldola and Edward Harrison. The Meldola Medal and Prize commemorated Raphael Meldola, President of the Maccabaeans and the Institute of Chemistry. The last winners of that prize, in 2007, were Hon Lam of the University of Edinburgh and Rachel O'Reilly of the University of Cambridge. The Edward Harrison Memorial Prize commemorated the work of Edward Harrison, who was credited with producing the first serviceable gas mask. The last winner of that prize was Katherine Holt of University College London.
https://en.wikipedia.org/wiki?curid=22813684
N-linked glycosylation "N"-linked glycosylation is the attachment of an oligosaccharide (a carbohydrate consisting of several sugar molecules, sometimes also referred to as a glycan) to a nitrogen atom (the amide nitrogen of an asparagine (Asn) residue of a protein), in a process called "N"-glycosylation, studied in biochemistry. This type of linkage is important for both the structure and function of some eukaryotic proteins. The "N"-linked glycosylation process occurs in eukaryotes and widely in archaea, but very rarely in bacteria. The nature of the "N"-linked glycans attached to a glycoprotein is determined by the protein and the cell in which it is expressed. It also varies across species: different species synthesize different types of "N"-linked glycan. There are two types of bonds involved in a glycoprotein: bonds between the saccharide residues in the glycan, and the linkage between the glycan chain and the protein molecule. The sugar moieties are linked to one another in the glycan chain via glycosidic bonds, typically formed between carbons 1 and 4 of the sugar molecules. The formation of a glycosidic bond is energetically unfavourable; the reaction is therefore coupled to the hydrolysis of two ATP molecules. On the other hand, the attachment of a glycan residue to a protein requires the recognition of a consensus sequence.
https://en.wikipedia.org/wiki?curid=22814939
N-linked glycosylation "N"-linked glycans are almost always attached to the nitrogen atom of an asparagine (Asn) side chain that is present as part of the Asn-X-Ser/Thr consensus sequence, where X is any amino acid except proline (Pro); a short scan for locating such sequons is sketched below. In animal cells, the glycan attached to the asparagine is almost inevitably "N"-acetylglucosamine (GlcNAc) in the β-configuration. This β-linkage is similar to the glycosidic bonds between the sugar moieties in the glycan structure described above: instead of being attached to a sugar hydroxyl group, the anomeric carbon atom is attached to an amide nitrogen. The energy required for this linkage comes from the hydrolysis of a pyrophosphate molecule. The biosynthesis of "N"-linked glycans occurs via three major steps: synthesis, en bloc transfer, and initial trimming of the precursor oligosaccharide, which take place in the endoplasmic reticulum (ER); subsequent processing and modification of the oligosaccharide chain are carried out in the Golgi apparatus. The synthesis of glycoproteins is thus spatially separated in different cellular compartments. Therefore, the type of "N"-glycan synthesised depends on its accessibility to the different enzymes present within these cellular compartments. In spite of this diversity, all "N"-glycans are synthesised through a common pathway with a common core glycan structure. The core glycan structure is essentially made up of two "N"-acetylglucosamine and three mannose residues.
https://en.wikipedia.org/wiki?curid=22814939
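Locating the Asn-X-Ser/Thr sequon (X not Pro) in a protein sequence is a simple pattern scan, sketched below. The example sequence is a made-up toy, and real glycosylation-site predictors additionally weigh structural accessibility; this only finds the consensus motif.

```python
import re

def find_sequons(protein: str):
    """Return 0-based positions of Asn in N-X-S/T sequons, where X != Pro.
    The lookahead keeps overlapping matches."""
    return [m.start() for m in re.finditer(r"N(?=[^P][ST])", protein)]

# Hypothetical toy sequence for illustration only.
seq = "MKNVSAANPTGNLTWP"
print(find_sequons(seq))   # [2, 11]; the Asn at position 7 is rejected (X is Pro)
```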
N-linked glycosylation This core glycan is then elaborated and modified further, resulting in a diverse range of "N"-glycan structures. The process of "N"-linked glycosylation starts with the formation of a dolichol-linked GlcNAc sugar. Dolichol is a lipid molecule composed of repeating isoprene units, found attached to the ER membrane. Sugar molecules are attached to the dolichol through a pyrophosphate linkage (one phosphate was originally linked to dolichol, and the second phosphate comes from the nucleotide sugar). The oligosaccharide chain is then extended through the addition of various sugar molecules in a stepwise manner to form a precursor oligosaccharide. The assembly of this precursor oligosaccharide occurs in two phases: Phase I, on the cytoplasmic side of the ER, and Phase II, on the luminal side of the ER. The precursor molecule, ready to be transferred to a protein, consists of 2 GlcNAc, 9 mannose and 3 glucose molecules. Once the precursor oligosaccharide is formed, the completed glycan is transferred to the nascent polypeptide in the lumen of the ER membrane. This reaction is driven by the energy released from the cleavage of the pyrophosphate bond between the dolichol and the glycan.
https://en.wikipedia.org/wiki?curid=22814939
N-linked glycosylation There are three conditions to fulfill before a glycan is transferred to a nascent polypeptide. Oligosaccharyltransferase is the enzyme responsible for the recognition of the consensus sequence and the transfer of the precursor glycan to a polypeptide acceptor which is being translated in the endoplasmic reticulum lumen; "N"-linked glycosylation is therefore a co-translational event. "N"-glycan processing is carried out in the endoplasmic reticulum and the Golgi body: initial trimming of the precursor molecule occurs in the ER, and subsequent processing occurs in the Golgi. Upon transfer of the completed glycan onto the nascent polypeptide, some sugar residues are removed by enzymes known as glycosidases, which break glycosidic linkages using a water molecule. These enzymes are exoglycosidases, as they only work on monosaccharide residues located at the non-reducing end of the glycan. This initial trimming step is thought to act as a quality-control step in the ER to monitor protein folding. Once the protein is folded correctly, two glucose residues are removed by glucosidase I and II. The removal of the final, third glucose residue, catalysed by glucosidase II, signals that the glycoprotein is ready for transit from the ER to the "cis"-Golgi. However, if the protein is not folded properly, the glucose residues are not removed, and the glycoprotein cannot leave the endoplasmic reticulum.
https://en.wikipedia.org/wiki?curid=22814939
N-linked glycosylation A chaperone protein (calnexin/calreticulin) binds to the unfolded or partially folded protein to assist protein folding. The next step involves further addition and removal of sugar residues in the "cis"-Golgi. These modifications are catalyzed by glycosyltransferases and glycosidases, respectively. In the "cis"-Golgi, a series of mannosidases remove some or all of the four mannose residues in α-1,2 linkages, whereas in the medial portion of the Golgi, glycosyltransferases add sugar residues to the core glycan structure, giving rise to the three main types of glycans: high-mannose, hybrid and complex glycans. The order of addition of sugars to the growing glycan chains is determined by the substrate specificities of the enzymes and by their access to the substrate as they move through the secretory pathway. Thus, the organization of this machinery within a cell plays an important role in determining which glycans are made. Golgi enzymes play a key role in the synthesis of the various types of glycans, and the order of action of the enzymes is reflected in their positions in the Golgi stack. Similar "N"-glycan biosynthesis pathways have been found in prokaryotes and archaea. However, compared to eukaryotes, the final glycan structure in eubacteria and archaea does not seem to differ much from the initial precursor. In eukaryotes, the original precursor oligosaccharide is extensively modified en route to the cell surface. "N"-linked glycans have intrinsic and extrinsic functions.
https://en.wikipedia.org/wiki?curid=22814939
N-linked glycosylation Within the immune system, the "N"-linked glycans on an immune cell's surface help dictate the migration pattern of the cell; for example, immune cells that migrate to the skin have specific glycosylations that favor homing to that site. The glycosylation patterns on the various immunoglobulins, including IgE, IgM, IgD, IgA and IgG, bestow them with unique effector functions by altering their affinities for Fc and other immune receptors. Glycans may also be involved in "self" and "non-self" discrimination, which may be relevant to the pathophysiology of various autoimmune diseases. Changes in "N"-linked glycosylation have been associated with different diseases, including rheumatoid arthritis, type 1 diabetes, Crohn's disease, and cancers. Mutations in eighteen genes involved in "N"-linked glycosylation result in a variety of diseases, most of which involve the nervous system. Many therapeutic proteins on the market are antibodies, which are "N"-linked glycoproteins; for example, Etanercept, Infliximab and Rituximab are "N"-glycosylated therapeutic proteins. The importance of "N"-linked glycosylation is becoming increasingly evident in the field of pharmaceuticals. Although bacterial or yeast protein-production systems have significant potential advantages, such as high yield and low cost, problems arise when the protein of interest is a glycoprotein. Most prokaryotic expression systems, such as "E. coli", cannot carry out post-translational modifications.
https://en.wikipedia.org/wiki?curid=22814939
N-linked glycosylation On the other hand, eukaryotic expression hosts such as yeast and animal cells have different glycosylation patterns. The proteins produced in these expression hosts are often not identical to the human protein and can thus cause immunogenic reactions in patients. For example, "S. cerevisiae" (yeast) often produces high-mannose glycans, which are immunogenic. Non-human mammalian expression systems such as CHO or NS0 cells have the machinery required to add complex, human-type glycans. However, glycans produced in these systems can still differ from glycans produced in humans, as they can be capped with both "N"-glycolylneuraminic acid (Neu5Gc) and "N"-acetylneuraminic acid (Neu5Ac), whereas human cells only produce glycoproteins containing "N"-acetylneuraminic acid. Furthermore, animal cells can also produce glycoproteins containing the galactose-alpha-1,3-galactose epitope, which can induce serious allergic reactions, including anaphylactic shock, in people who have alpha-gal allergy. These drawbacks have been addressed by several approaches, such as eliminating the pathways that produce these glycan structures through genetic knockouts. Furthermore, other expression systems have been genetically engineered to produce therapeutic glycoproteins with human-like "N"-linked glycans. These include yeasts such as "Pichia pastoris", insect cell lines, green plants, and even bacteria.
https://en.wikipedia.org/wiki?curid=22814939
Dioxetanedione may refer to:
https://en.wikipedia.org/wiki?curid=22819351
Bioelectrospray Bio-electrospraying is a new technology that enables the deposition of living cells on various targets with a resolution that depends on cell size rather than on the jetting phenomenon. It is envisioned that "unhealthy cells would draw a different charge at the needle from healthy ones, and could be identified by the mass spectrometer", with tremendous implications for the health-care industry. Early versions of bio-electrosprays were employed in several areas of research, most notably the self-assembly of carbon nanotubes. Although the self-assembly mechanism is not yet clear, this work was described as "elucidating electrosprays as a competing nanofabrication route for forming self-assemblies with a wide range of nanomaterials in the nanoscale for top-down based bottom-up assembly of structures." Future research may reveal important interactions between migrating cells and self-assembled nanostructures: "Such nano-assemblies formed by means of this top-down approach could be explored as a bottom-up methodology for encouraging cell migration to those architectures for forming cell patterns to nano-electronics, which are a few examples, respectively." After initial exploration with a single protein, increasingly complex systems were studied by bio-electrosprays. These include, but are not limited to, neuronal cells, stem cells, and even whole embryos.
https://en.wikipedia.org/wiki?curid=22819973
Bioelectrospray The potential of the method was demonstrated by investigating cytogenetic and physiological changes of human lymphocyte cells, as well as by conducting comprehensive genetic, genomic and physiological studies of human cells and cells of the model yeast Saccharomyces cerevisiae.
https://en.wikipedia.org/wiki?curid=22819973
Trinder glucose activity test The Trinder glucose activity test is a diagnostic test used in medicine to determine the presence of glucose or glucose oxidase. The test employs the Trinder reagent and is a colour-change test resulting from the Trinder reaction. The Trinder reagent, named after P. Trinder of the Biochemistry Department of the Royal Infirmary in Sunderland (see the article listed in further reading), comprises an aminoantipyrine (such as 4-aminoantipyrine) and phenol (hydroxybenzene). The Trinder reaction is the reaction between hydrogen peroxide, the phenol and the aminoantipyrine to form a quinone (quinoneimine), catalyzed by a peroxidase (such as horseradish peroxidase). The hydrogen peroxide is itself produced by an initial reaction in which the glucose is oxidised, with glucose oxidase as catalyst, into hydrogen peroxide and gluconic acid. The quinone is red-violet in colour, with the intensity of the colour being in proportion to the glucose concentration (a worked quantitation example is sketched below). The colour is measured at 505 nm, 510 nm, or 540 nm. Diagnostic kits containing the Trinder reagent are available, including one from Sigma-Aldrich. The Stanbio Single Reagent Glucose Method is based upon the Trinder technique.
https://en.wikipedia.org/wiki?curid=22821460
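Since absorbance tracks glucose concentration linearly in this regime, an unknown can be read against a single standard of known concentration. The sketch below shows the arithmetic only; all absorbance values and the standard concentration are invented for illustration.

```python
# Quantitation sketch for a Trinder-type assay: colour intensity (absorbance
# at 505 nm) is proportional to glucose concentration, so an unknown is read
# against a standard. All values below are made up.

A_blank    = 0.02   # reagent blank absorbance
A_standard = 0.45   # absorbance of the standard
A_sample   = 0.31   # absorbance of the unknown sample
C_standard = 100.0  # mg/dL, known standard concentration

# Linear relation: C_sample / C_standard = dA_sample / dA_standard
C_sample = C_standard * (A_sample - A_blank) / (A_standard - A_blank)
print(f"glucose = {C_sample:.1f} mg/dL")   # ~67.4 mg/dL
```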
Tetrabutylammonium tribromide is a pale orange solid with the formula [N(C4H9)4]Br3. It is a salt of the lipophilic tetrabutylammonium cation and the linear tribromide anion. The salt is sometimes used as a reagent in organic synthesis, as a conveniently weighable, solid source of bromine. The compound is prepared by treatment of solid tetra-"n"-butylammonium bromide with bromine vapor:
[N(C4H9)4]Br + Br2 → [N(C4H9)4]Br3
https://en.wikipedia.org/wiki?curid=22826470
Ringer's solution is a solution of several salts dissolved in water for the purpose of creating an isotonic solution relative to the body fluids of an animal. Ringer's solution typically contains sodium chloride, potassium chloride, calcium chloride and sodium bicarbonate, with the last used to balance the pH. Other additions can include chemical fuel sources for cells, including ATP and dextrose, as well as antibiotics and antifungals. In terms of formulas, it typically contains NaCl, KCl, CaCl2 and NaHCO3, sometimes with other minerals such as MgCl2, dissolved in distilled water. The precise proportions of these vary from species to species, particularly between marine osmoconformers and osmoregulators. Ringer's solution is frequently used in "in vitro" experiments on organs or tissues, such as "in vitro" muscle testing. The precise mix of ions can vary depending upon the taxon, with different recipes for birds, mammals, freshwater fish, marine fish, etc. It may also be used for therapeutic purposes, such as arthroscopic lavage in the case of septic arthritis. Its clinical uses are for replacing extracellular fluid losses and restoring chemical balance when treating isotonic dehydration. Ringer's solution is named after Sydney Ringer, who in 1882-1885 determined that a solution perfusing a frog's heart must contain sodium, potassium and calcium salts in a definite proportion if the heart is to be kept beating for long. This solution was adjusted further in the 1930s by Alexis Hartmann, who added sodium lactate to form Ringer's lactate solution.
https://en.wikipedia.org/wiki?curid=22827976
Light non-aqueous phase liquid A light non-aqueous phase liquid (LNAPL) is a groundwater contaminant that is not soluble in water and has a lower density than water, in contrast to a DNAPL, which has a higher density than water. Once an LNAPL infiltrates the ground, it stops at the height of the water table because it is less dense than water. Locating and removing LNAPLs is relatively less expensive and easier than for DNAPLs, because LNAPLs float on top of the water in the underground water table. Examples of LNAPLs are benzene, toluene, xylene, and other hydrocarbons.
https://en.wikipedia.org/wiki?curid=22830139
Crystal structure prediction (CSP) is the calculation of the crystal structures of solids from first principles. Reliable methods of predicting the crystal structure of a compound, based only on its composition, have been a goal of the physical sciences since the 1950s. Computational methods employed include simulated annealing, evolutionary algorithms, distributed multipole analysis, random sampling, basin-hopping, data mining, density functional theory and molecular mechanics (a toy simulated-annealing search is sketched below). The crystal structures of simple ionic solids have long been rationalised in terms of Pauling's rules, first set out in 1929 by Linus Pauling. For metals and semiconductors there are different rules involving valence-electron concentration. However, prediction and rationalization are rather different things. Most commonly, crystal structure prediction means a search for the minimum-energy arrangement of a compound's constituent atoms (or, for molecular crystals, of its molecules) in space. The problem has two facets: combinatorics (the "search phase space", in practice most acute for inorganic crystals) and energetics (or "stability ranking", most acute for molecular organic crystals). For complex non-molecular crystals (where the "search problem" is most acute), major recent advances have been the development of the Martonak version of metadynamics, the Oganov-Glass evolutionary algorithm USPEX, and first-principles random search.
https://en.wikipedia.org/wiki?curid=22832517
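To give a feel for how a stochastic search of this kind operates, the sketch below runs textbook simulated annealing on a toy problem: minimizing the Lennard-Jones energy of a four-atom cluster. It is not a crystal-structure code (no periodic cell, no symmetry handling, arbitrary reduced units); it only illustrates the Metropolis accept/reject loop with slow cooling that such searches build on.

```python
import math
import random

# Toy simulated annealing: minimize the Lennard-Jones energy of 4 atoms
# (epsilon = sigma = 1). Real CSP codes search periodic cells with far
# more sophisticated moves; this only shows the core Metropolis loop.

def lj_energy(coords):
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r2 = sum((a - b) ** 2 for a, b in zip(coords[i], coords[j]))
            r6 = r2 ** 3
            e += 4.0 * (1.0 / r6 ** 2 - 1.0 / r6)
    return e

random.seed(0)
atoms = [[random.uniform(0.0, 2.0) for _ in range(3)] for _ in range(4)]
energy, T = lj_energy(atoms), 1.0

for step in range(20000):
    i, k = random.randrange(len(atoms)), random.randrange(3)
    old = atoms[i][k]
    atoms[i][k] += random.gauss(0.0, 0.05)       # random trial displacement
    new_energy = lj_energy(atoms)
    # Metropolis criterion: always accept downhill, sometimes accept uphill.
    if new_energy < energy or random.random() < math.exp((energy - new_energy) / T):
        energy = new_energy
    else:
        atoms[i][k] = old                        # reject and undo the move
    T = max(1e-3, T * 0.9995)                    # cool slowly

print(f"final energy: {energy:.3f}")  # for comparison, the LJ4 global minimum is -6.0
```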
Crystal structure prediction The latter are capable of solving the global optimization problem with up to around a hundred degrees of freedom, while the approach of metadynamics is to reduce all structural variables to a handful of "slow" collective variables (which often works). Predicting organic crystal structures is important in academic and industrial science, particularly for pharmaceuticals and pigments, where understanding polymorphism is beneficial. The crystal structures of molecular substances, particularly organic compounds, are very hard to predict and rank in order of stability. Intermolecular interactions are relatively weak, non-directional, and long-range. This results in typical lattice-energy and free-energy differences between polymorphs that are often only a few kJ/mol, and very rarely exceed 10 kJ/mol. CSP methods often locate many possible structures within this small energy range, and these small energy differences are challenging to predict reliably without excessive computational effort. Since 2007, significant progress has been made in the CSP of small organic molecules, with several different methods proving effective. The most widely discussed method first ranks the energies of all possible crystal structures using a customised molecular-mechanics force field, and finishes with a dispersion-corrected DFT step to estimate the lattice energy and stability of each short-listed candidate structure.
https://en.wikipedia.org/wiki?curid=22832517
Crystal structure prediction More recent efforts to predict crystal structures have focused on estimating crystal free energy by including the effects of temperature and entropy in organic crystals, using vibrational analysis or molecular dynamics. Several codes can predict stable and metastable structures given the chemical composition and external conditions (pressure, temperature).
https://en.wikipedia.org/wiki?curid=22832517
Molecular models of DNA structures are representations of the molecular geometry and topology of deoxyribonucleic acid (DNA) molecules using one of several means, with the aim of simplifying and presenting the essential physical and chemical properties of DNA molecular structures either "in vivo" or "in vitro". These representations include closely packed spheres (CPK models) made of plastic, metal wires for "skeletal models", graphic computations and animations by computers, and artistic rendering. Computer molecular models also allow animations and molecular dynamics simulations that are very important for understanding how DNA functions "in vivo". The more advanced, computer-based molecular models of DNA involve molecular dynamics simulations and quantum-mechanical computations of vibro-rotations, delocalized molecular orbitals (MOs), electric dipole moments, hydrogen bonding, and so on. "DNA molecular dynamics modeling" involves simulating DNA molecular geometry and topology changes with time as a result of both intra- and inter-molecular interactions of DNA. Whereas molecular models of DNA such as closely packed spheres (CPK models) made of plastic, or metal wires for "skeletal models", are useful representations of static DNA structures, their usefulness is very limited for representing complex DNA dynamics. Computer molecular modeling allows both animations and molecular dynamics simulations that are very important to understand how DNA functions "in vivo".
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA From the very early stages of structural studies of DNA by X-ray diffraction and biochemical means, molecular models such as the Watson-Crick nucleic acid double helix model were successfully employed to solve the "puzzle" of DNA structure and to find how the latter relates to its key functions in living cells. The first high-quality X-ray diffraction patterns of A-DNA were reported by Rosalind Franklin and Raymond Gosling in 1953. Rosalind Franklin made the critical observation that DNA exists in two distinct forms, A and B, and produced the sharpest pictures of both through the X-ray diffraction technique. The first calculations of the Fourier transform of an atomic helix were reported one year earlier by Cochran, Crick and Vand, and were followed in 1953 by the computation of the Fourier transform of a coiled-coil by Crick. Structural information is generated from X-ray diffraction studies of oriented DNA fibers with the help of molecular models of DNA that are combined with crystallographic and mathematical analysis of the X-ray patterns. The first reports of a double-helix molecular model of B-DNA structure were made by James Watson and Francis Crick in 1953. That same year, Maurice F. Wilkins, A. Stokes and H.R. Wilson reported the first X-ray patterns of "in vivo" B-DNA in partially oriented salmon sperm heads.
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA The development of the first correct double-helix molecular model of DNA by Crick and Watson may not have been possible without the biochemical evidence for nucleotide base-pairing ([A---T]; [C---G]), or Chargaff's rules. Although such initial studies of DNA structures with the help of molecular models were essentially static, their consequences for explaining the "in vivo" functions of DNA were significant in the areas of protein biosynthesis and the quasi-universality of the genetic code. Epigenetic transformation studies of DNA "in vivo" were, however, much slower to develop despite their importance for embryology, morphogenesis and cancer research. Such chemical dynamics and biochemical reactions of DNA are much more complex than the molecular dynamics of DNA's physical interactions with water, ions and proteins/enzymes in living cells. A long-standing dynamics problem is how DNA "self-replication" takes place in living cells, which should involve transient uncoiling of supercoiled DNA fibers. Although DNA consists of relatively rigid, very large elongated biopolymer molecules called "fibers" or chains (made of repeating nucleotide units of four basic types, attached to deoxyribose and phosphate groups), its molecular structure "in vivo" undergoes dynamic configuration changes that involve dynamically attached water molecules and ions.
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA Supercoiling, packing with histones in chromosome structures, and other such supramolecular aspects also involve "in vivo" DNA topology, which is even more complex than DNA molecular geometry, thus turning molecular modeling of DNA into an especially challenging problem for both molecular biologists and biotechnologists. Like other large molecules and biopolymers, DNA often exists in multiple stable geometries (that is, it exhibits conformational isomerism) and configurational quantum states which are close to each other in energy on the potential energy surface of the DNA molecule. Such varying molecular geometries can also be computed, at least in principle, by employing "ab initio" quantum chemistry methods that can attain high accuracy for small molecules, although claims that acceptable accuracy can also be achieved for polynucleotides and DNA conformations were recently made on the basis of vibrational circular dichroism (VCD) spectral data. Such quantum geometries define an important class of "ab initio" molecular models of DNA whose exploration has barely started, especially in connection with results obtained by VCD in solutions. More detailed comparisons with such "ab initio" quantum computations are in principle obtainable through 2D-FT NMR spectroscopy and relaxation studies of polynucleotide solutions or of specifically labeled DNA, as for example with deuterium labels. In an interesting twist of roles, the DNA molecule itself has been proposed for use in quantum computing.
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA Both DNA nanostructures and DNA computing biochips have been built. The chemical structure of DNA alone is insufficient for understanding the complexity of its 3D structures; in contrast, molecular models, whether the wire ("skeletal") type or the space-filling (CPK) type shown for the DNA double helix, allow one to visually explore the three-dimensional (3D) structure of DNA (a sketch for generating idealized B-DNA backbone coordinates is given below). The hydrogen-bonding dynamics and proton exchange are very different, by many orders of magnitude, between the two systems of fully hydrated DNA and water molecules in ice. Thus, DNA dynamics is complex, involving nanosecond and several-tens-of-picosecond time scales, whereas the dynamics of liquid water is on the picosecond time scale, and that of proton exchange in ice is on the millisecond time scale. Proton exchange rates in DNA and attached proteins may vary from picoseconds to nanoseconds, minutes or years, depending on the exact locations of the exchanged protons in the large biopolymers. A simple harmonic-oscillator "vibration" is only an oversimplified dynamic representation of the longitudinal vibrations of the DNA intertwined helices, which were found to be anharmonic rather than harmonic, as is often assumed in quantum dynamic simulations of DNA.
https://en.wikipedia.org/wiki?curid=22833956
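A static skeletal model of idealized B-DNA can be generated from textbook helical parameters (rise 3.4 Å per base pair, twist 36° per base pair, backbone radius of roughly 10 Å). The sketch below is a geometric idealization only; the strand-offset angle is an approximation, and, as the text notes, it deliberately ignores the water, ions and anharmonic dynamics that matter "in vivo".

```python
import math

# Idealized, static B-DNA skeletal model: helical phosphate-backbone
# coordinates from textbook parameters (rise 3.4 A per base pair, twist
# 36 degrees, radius ~10 A). The ~154 degree offset between strands, which
# gives distinct major and minor grooves, is an approximate value.

RISE, TWIST, RADIUS, OFFSET = 3.4, math.radians(36.0), 10.0, math.radians(154.0)

def backbone(n_bp, phase=0.0):
    """(x, y, z) positions for one strand's backbone, one point per base."""
    return [(RADIUS * math.cos(i * TWIST + phase),
             RADIUS * math.sin(i * TWIST + phase),
             i * RISE) for i in range(n_bp)]

strand1 = backbone(10)           # one full helical turn: 10 bp, 34 A rise
strand2 = backbone(10, OFFSET)   # partner strand, offset around the axis
print(strand1[0], strand2[0])
```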
Molecular models of DNA The structure of DNA shows a variety of forms, both double-stranded and single-stranded. The mechanical properties of DNA, which are directly related to its structure, are a significant problem for cells. Every process which binds or reads DNA is able to use or modify the mechanical properties of DNA for purposes of recognition, packaging and modification. The extreme length (a chromosome may contain a 10 cm long DNA strand), relative rigidity and helical structure of DNA have led to the evolution of histones and of enzymes such as topoisomerases and helicases to manage a cell's DNA. The properties of DNA are closely related to its molecular structure and sequence, particularly the weakness of the hydrogen bonds and electronic interactions that hold strands of DNA together compared to the strength of the bonds within each strand. Experimental methods which can directly measure the mechanical properties of DNA are relatively new, and high-resolution visualization in solution is often difficult. Nevertheless, scientists have uncovered a large amount of data on the mechanical properties of this polymer, and the implications of DNA's mechanical properties for cellular processes are a topic of active current research. The DNA found in many cells can be macroscopic in length: a few centimetres long for each human chromosome. Consequently, cells must compact or "package" DNA to carry it within them. In eukaryotes this is carried out by spool-like proteins named histones, around which DNA winds.
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA It is the further compaction of this DNA-protein complex which produces the well-known mitotic eukaryotic chromosomes. In the late 1970s, alternative non-helical models of DNA structure were briefly considered as a potential solution to problems in DNA replication in plasmids and chromatin. However, these models were set aside in favor of the double-helical model owing to subsequent experimental advances such as X-ray crystallography of DNA duplexes and, later, of the nucleosome core particle, and the discovery of topoisomerases. Such non-double-helical models are not currently accepted by the mainstream scientific community. After DNA has been separated and purified by standard biochemical methods, one has a sample in a jar, much like in the figure at the top of this article. Structural information is then generated from X-ray diffraction studies of oriented DNA fibers drawn from the hydrated DNA sample, with the help of molecular models of DNA combined with crystallographic and mathematical analysis of the X-ray patterns. A paracrystalline lattice, or paracrystal, is a molecular or atomic lattice with significant amounts (e.g., larger than a few percent) of partial disordering of molecular arrangements. Limiting cases of the paracrystal model are nanostructures, such as glasses, liquids, etc., that may possess only local ordering and no global order.
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA Liquid crystals also have paracrystalline rather than crystalline structures; silica glass provides a simple example of a paracrystalline lattice. Highly hydrated B-DNA occurs naturally in living cells in such a paracrystalline state, which is a dynamic one despite the relatively rigid DNA double helix stabilized by parallel hydrogen bonds between the nucleotide base pairs in the two complementary, helical DNA chains. For simplicity, most DNA molecular models omit both water and ions dynamically bound to B-DNA and are thus less useful for understanding the dynamic behaviors of B-DNA "in vivo". The physical and mathematical analysis of X-ray and spectroscopic data for paracrystalline B-DNA is thus far more complex than that of crystalline A-DNA X-ray diffraction patterns. The paracrystal model is also important for DNA technological applications such as DNA nanotechnology. Novel methods that combine X-ray diffraction of DNA with X-ray microscopy in hydrated living cells are now also being developed. DNA molecular modeling has various uses in genomics and biotechnology, with research applications ranging from DNA repair to PCR and DNA nanostructures; two-dimensional DNA junction arrays have been visualized by atomic force microscopy. Examples include computer molecular models of molecules as varied as RNA polymerase, an E
https://en.wikipedia.org/wiki?curid=22833956
Molecular models of DNA coli bacterial DNA primase template, suggesting very complex dynamics at the interfaces between the enzymes and the DNA template, and molecular models of the mutagenic chemical interaction of potent carcinogen molecules with DNA. Technological applications include DNA biochips and DNA nanostructures designed for DNA computing and other dynamic applications of DNA nanotechnology. One example of self-assembled DNA nanostructures is the DNA "tile", which consists of four branched junctions oriented at 90° angles; each tile consists of nine DNA oligonucleotides, and such tiles serve as the primary "building block" for the assembly of DNA nanogrids, which have been imaged by atomic force microscopy. Quadruplex DNA may be involved in certain cancers.
https://en.wikipedia.org/wiki?curid=22833956
Automotive oil recycling involves the recycling of used oils and the creation of new products from the recycled oils, and includes the recycling of motor oil and hydraulic oil. Oil recycling also benefits the environment: increased opportunities for consumers to recycle oil lessen the likelihood of used oil being dumped on land and in waterways. For example, one gallon of motor oil dumped into waterways has the potential to pollute one million gallons of water. Recycled motor oil can be combusted as fuel, usually in plant boilers, space heaters, or industrial heating applications such as blast furnaces and cement kilns. When used motor oil is burned as fuel it must be burned at high temperatures to avoid gaseous pollution. Alternatively, waste motor oil can be distilled into diesel fuel or marine fuel in a process similar to oil re-refining, but without the final hydrotreating process. The lubrication properties of motor oil persist, even in used oil, and it can be recycled indefinitely. Used oil re-refining is the process of restoring used oil to new oil by removing chemical impurities, heavy metals and dirt. Used industrial and automotive oil is recycled at re-refineries. The used oil is first tested to determine suitability for re-refining, after which it is dehydrated and the water distillate is treated before being released into the environment. Dehydrating also removes the residual light fuel, which can be used to power the refinery, and additionally captures ethylene glycol for re-use in recycled antifreeze.
https://en.wikipedia.org/wiki?curid=22834446
Automotive oil recycling Next, industrial fuel is separated out of the used oil, and then vacuum distillation removes the lube cut (that is, the fraction suitable for reuse as lubricating oil), leaving a heavy oil that contains the used oil's additives and other by-products such as asphalt extender. The lube cut next undergoes hydrotreating, or catalytic hydrogenation, to remove residual polymers and other chemical compounds and to saturate the carbon chains with hydrogen for greater stability. Final oil separation, or fractionating, separates the oil into three different grades: light viscosity lubricants suitable for general lubricant applications, low viscosity lubricants for automotive and industrial applications, and high viscosity lubricants for heavy-duty applications. The oil produced in this step is referred to as re-refined base oil (RRBO). The final step is blending additives into these three grades of oil products to produce final products with the right detergent and anti-friction qualities. Each product is then tested again for quality and purity before being released for sale to the public. Below is a comparison of refining from used motor oil and refining from crude; note that crude oil refining yields large amounts of fuels, so one cannot simply compare those ratios and conclude that refining from crude is immensely inefficient.
https://en.wikipedia.org/wiki?curid=22834446
Automotive oil recycling Re-refining one unit of used motor oil and refining one unit of crude oil give different product yields. The sludge ("residue") associated with engine oil recycling, which collects at the bottom of re-refining vacuum distillation towers, is known by various names, including "re-refined engine oil bottoms" (abbreviated "REOB" or "REOBs"), and has been the subject of a report from the U.S. Federal Highway Administration (FHWA). Some producers of asphalt for paving have—openly or secretly—incorporated REOBs into their asphalt, creating some controversy and concern in the traffic engineering community, with some experts suggesting it reduces the durability of the resulting pavement.
https://en.wikipedia.org/wiki?curid=22834446
Commission on Isotopic Abundances and Atomic Weights The Commission on Isotopic Abundances and Atomic Weights (CIAAW) is an international scientific committee of the International Union of Pure and Applied Chemistry (IUPAC) under its Division of Inorganic Chemistry. Since 1899, it has been entrusted with the periodic critical evaluation of the atomic weights of the chemical elements and other cognate data, such as the isotopic composition of the elements. The biennial CIAAW Standard Atomic Weights are accepted as the authoritative source in science and appear worldwide on periodic table wall charts. The use of CIAAW Standard Atomic Weights is also required legally, for example, in the calculation of the calorific value of natural gas (ISO 6976:1995) or in the gravimetric preparation of primary reference standards in gas analysis (ISO 6142:2006). In addition, until 2019 the definition of the kelvin, the SI unit for thermodynamic temperature, made direct reference to the isotopic composition of oxygen and hydrogen as recommended by CIAAW. The latest CIAAW report was published in February 2016. After 20 May 2019, a new definition of the kelvin based on the Boltzmann constant came into force. Although atomic weight had taken on the status of a constant of nature, like the speed of light, the lack of agreement on accepted values created difficulties in trade. Quantities measured by chemical analysis were not being translated into weights in the same way by all parties, and standardization became an urgent matter.
https://en.wikipedia.org/wiki?curid=39042932
Commission on Isotopic Abundances and Atomic Weights With so many different values being reported, the American Chemical Society (ACS), in 1892, appointed a permanent committee to report on a standard table of atomic weights for acceptance by the Society. Frank W. Clarke, who was then the chief chemist of the U.S. Geological Survey, was appointed as a committee of one to provide the report. He presented the first report at the 1893 annual meeting and published it in January 1894. In 1897, the German Society of Chemistry, following a proposal by Hermann Emil Fischer, appointed a three-person working committee to report on atomic weights. The committee consisted of Chairman Prof. Hans H. Landolt (Berlin University), Prof. Wilhelm Ostwald (University of Leipzig), and Prof. Karl Seubert (University of Hanover). This committee published its first report in 1898, in which it suggested the desirability of an international committee on atomic weights. On 30 March 1899, Landolt, Ostwald and Seubert issued an invitation to other national scientific organizations to appoint delegates to the International Committee on Atomic Weights. Fifty-eight members were appointed to the Great International Committee on Atomic Weights, including Frank W. Clarke. The large committee conducted its business by correspondence with Landolt, which created the difficulties and delays inherent in correspondence among fifty-eight members. As a result, on 15 December 1899, the German committee asked the international members to select a small committee of three to four members. In 1902, Prof. Frank W
https://en.wikipedia.org/wiki?curid=39042932
Commission on Isotopic Abundances and Atomic Weights Clarke (USA), Prof. Karl Seubert (Germany), and Prof. Thomas Edward Thorpe (UK) were elected, and the International Committee on Atomic Weights published its inaugural report in 1903 under the chairmanship of Prof. Clarke. Since 1899, the Commission has periodically and critically evaluated the published scientific literature to produce the Table of Standard Atomic Weights. In recent times, the Table of Standard Atomic Weights has been published biennially. Each recommended standard atomic-weight value reflects the best knowledge of evaluated, published data. In recommending standard atomic weights, CIAAW generally does not attempt to estimate the average or composite isotopic composition of the Earth or of any subset of terrestrial materials. Instead, the Commission seeks to find a single value and symmetrical uncertainty that would include almost all substances likely to be encountered. Many notable decisions have been made by the Commission over its history; some of these are highlighted below. Though Dalton proposed setting the atomic weight of hydrogen as unity in 1803, many other proposals were popular throughout the 19th century. By the end of the 19th century, two scales had gained popular support: H=1 and O=16. This situation was undesirable in science, and in October 1899 the inaugural task of the International Commission on Atomic Weights was to decide between them; the oxygen scale became the international standard.
https://en.wikipedia.org/wiki?curid=39042932
Commission on Isotopic Abundances and Atomic Weights The endorsement of the oxygen scale created significant backlash in the chemistry community, and the inaugural Atomic Weights Report was thus published using both scales. This practice soon ceased, and the oxygen scale remained the international standard for decades to come. Nevertheless, when the Commission joined IUPAC in 1920, it was asked to revert to the H=1 scale, which it rejected. With the discovery of oxygen isotopes in 1929, a situation arose in which chemists based their calculations on the average atomic mass (atomic weight) of oxygen whereas physicists used the mass of the predominant isotope of oxygen, oxygen-16. This discrepancy became undesirable, and a unification of chemistry and physics was necessary. At the 1957 Paris meeting the Commission put forward a proposal for a carbon-12 scale. The carbon-12 scale for atomic weights and nuclide masses was approved by IUPAP (1960) and IUPAC (1961), and it is still in use worldwide. In the early 20th century, measurements of the atomic weight of lead showed significant variations depending on the origin of the sample. These differences were considered an exception, attributed to lead isotopes being products of the natural radioactive decay chains of uranium. In the 1930s, however, Malcolm Dole reported that the atomic weight of oxygen in air was slightly different from that in water. Soon thereafter, Alfred Nier reported natural variation in the isotopic composition of carbon. It was becoming clear that atomic weights are not constants of nature.
https://en.wikipedia.org/wiki?curid=39042932
Commission on Isotopic Abundances and Atomic Weights At the Commission's meeting in 1951, it was recognized that the isotopic-abundance variation of sulfur had a significant effect on the internationally accepted value of its atomic weight. In order to indicate the span of atomic-weight values that may apply to sulfur from different natural sources, the value ± 0.003 was attached to the atomic weight of sulfur. By 1969, the Commission had assigned uncertainties to all atomic-weight values. At its 2009 meeting in Vienna, the Commission decided to express the standard atomic weights of hydrogen, carbon, oxygen, and other elements in a manner that clearly indicates that the values are not constants of nature. For example, writing the standard atomic weight of hydrogen as [1.007 84, 1.008 11] shows that the atomic weight in any normal material will be greater than or equal to 1.007 84 and less than or equal to 1.008 11. The Commission has undergone many name changes over its history. Since its establishment, many notable chemists have been members of the Commission. Notably, eight Nobel laureates have served on the Commission: Henri Moissan (1903-1907), Wilhelm Ostwald (1906-1916), Francis William Aston, Frederick Soddy, Theodore William Richards, Niels Bohr, Otto Hahn and Marie Curie. Richards was awarded the 1914 Nobel Prize in Chemistry "in recognition of his accurate determinations of the atomic weight of a large number of chemical elements" while he was a member of the Commission.
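The interval notation can be used directly in calculations. A minimal sketch follows, using the hydrogen interval quoted above; the oxygen interval is recalled from CIAAW publications and should be verified against the current tables:

```python
# Using interval atomic weights to bound the molar mass of water.
# The hydrogen interval [1.00784, 1.00811] is quoted in the text; the
# oxygen interval below is an assumed value recalled from CIAAW tables
# and should be checked against the current CIAAW publication.
H = (1.00784, 1.00811)
O = (15.99903, 15.99977)  # assumption; verify against CIAAW

lo = 2 * H[0] + O[0]  # lightest plausible H2O in "normal" materials
hi = 2 * H[1] + O[1]  # heaviest plausible H2O in "normal" materials
print(f"Molar mass of H2O lies in [{lo:.5f}, {hi:.5f}] g/mol")
# -> roughly [18.01471, 18.01599] g/mol
```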
https://en.wikipedia.org/wiki?curid=39042932
Commission on Isotopic Abundances and Atomic Weights Likewise, Francis Aston was a member of the Commission when he was awarded the 1922 Nobel Prize in Chemistry for his work on isotope measurements. Incidentally, the 1925 Atomic Weights report was signed by three Nobel laureates. Among other notable scientists who have served on the Commission were Georges Urbain (discoverer of lutetium, though priority was disputed with Carl Auer von Welsbach), André-Louis Debierne (discoverer of actinium, though priority has been disputed with Friedrich Oskar Giesel), Marguerite Perey (discoverer of francium), Georgy Flyorov (namesake of the element flerovium), Robert Whytlaw-Gray (first to isolate radon), and Arne Ölander (Secretary and Member of the Nobel Committee for Chemistry). The Commission has had a succession of chairmen since its establishment. In 1950, the Spanish chemist Enrique Moles became the first Secretary of the Commission when this position was created.
https://en.wikipedia.org/wiki?curid=39042932
Lactate shuttle hypothesis The lactate shuttle hypothesis was proposed by Professor George Brooks of the University of California, Berkeley, and describes the movement of lactate intracellularly (within a cell) and intercellularly (between cells). The hypothesis is based on the observation that lactate is formed and utilized continuously in diverse cells under both anaerobic and aerobic conditions. Further, lactate produced at sites with high rates of glycolysis and glycogenolysis can be shuttled to adjacent or remote sites, including heart or skeletal muscle, where it can be used as a gluconeogenic precursor or as a substrate for oxidation. In addition to lactate's role as a fuel source, predominantly in the muscles, heart, brain, and liver, the lactate shuttle hypothesis also relates the role of lactate in redox signalling, gene expression, and lipolytic control. These additional roles have given rise to the term ‘lactormone’, referring to the role of lactate as a signalling hormone. Prior to the formulation of the lactate shuttle hypothesis, lactate had long been considered a byproduct of glucose breakdown through glycolysis in times of anaerobic metabolism. As a means of regenerating oxidized NAD+, lactate dehydrogenase catalyzes the conversion of pyruvate to lactate in the cytosol, oxidizing NADH to NAD+ and thereby regenerating the substrate needed for glycolysis to continue.
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Lactate is then transported from the peripheral tissues to the liver by means of the Cori cycle, where it is converted back to pyruvate in the reverse reaction, also catalyzed by lactate dehydrogenase. By this logic, lactate was traditionally considered a toxic metabolic byproduct that could give rise to fatigue and muscle pain during anaerobic respiration. Lactate was essentially payment for the ‘oxygen debt’, defined by Hill and Lupton as the ‘total amount of oxygen used, after cessation of exercise in recovery therefrom’. In addition to the Cori cycle, the lactate shuttle hypothesis proposes complementary functions of lactate in multiple tissues. Contrary to the long-held belief that lactate is formed as a result of oxygen-limited metabolism, substantial evidence suggests that lactate is formed under both aerobic and anaerobic conditions, as a result of substrate supply and equilibrium dynamics. During physical exertion or moderate-intensity exercise, lactate released from working muscle and other tissue beds is the primary fuel source for the heart, exiting the muscles through monocarboxylate transport proteins (MCTs). This is supported by the observation that the amount of MCT shuttle protein in the heart and muscle increases in direct proportion to exertion, as measured through muscular contraction. Furthermore, both neurons and astrocytes have been shown to express MCT proteins, suggesting that the lactate shuttle may be involved in brain metabolism.
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Astrocytes express MCT4, a low-affinity transporter for lactate (Km = 35 mM), suggesting that its function is to export lactate produced by glycolysis. Conversely, neurons express MCT2, a high-affinity transporter for lactate (Km = 0.7 mM). Thus, it is hypothesized that astrocytes produce lactate, which is then taken up by adjacent neurons and oxidized for fuel. The lactate shuttle hypothesis also explains the balance between lactate production in the cytosol, via glycolysis or glycogenolysis, and lactate oxidation in the mitochondria (described below). MCT2 transporters in the peroxisomal membrane transport pyruvate into the peroxisome, where it is reduced to lactate by peroxisomal LDH (pLDH). In turn, NADH is converted to NAD+, regenerating this component, which is necessary for subsequent β-oxidation. Lactate is then shuttled out of the peroxisome via MCT2 and oxidized back to pyruvate by cytoplasmic LDH (cLDH), generating NADH for energy use and completing the cycle. While the cytosolic fermentation pathway of lactate is well established, a novel feature of the lactate shuttle hypothesis is the oxidation of lactate in the mitochondria. Baba and Sherma (1971) were the first to identify the enzyme lactate dehydrogenase (LDH) in the mitochondrial inner membrane and matrix of rat skeletal and cardiac muscle. Subsequently, LDH was found in rat liver, kidney, and heart mitochondria. It was also found that lactate could be oxidized as quickly as pyruvate in rat liver mitochondria.
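The division of labour implied by the two Km values can be illustrated with simple Michaelis-Menten saturation arithmetic; the lactate concentrations below are illustrative assumptions:

```python
# Fractional saturation v/Vmax = [S] / (Km + [S]) for the two MCT isoforms,
# using the Km values quoted above; lactate concentrations are illustrative.
def saturation(S, Km):
    return S / (Km + S)

Km_MCT2 = 0.7   # mM, high-affinity neuronal isoform
Km_MCT4 = 35.0  # mM, low-affinity astrocytic isoform

for S in [0.5, 1.0, 2.0, 10.0]:  # extracellular lactate, mM
    print(f"[lactate] = {S:>4.1f} mM: "
          f"MCT2 {saturation(S, Km_MCT2):4.0%} saturated, "
          f"MCT4 {saturation(S, Km_MCT4):4.0%} saturated")
# At ~1 mM lactate MCT2 is already ~59% saturated (suited to uptake),
# while MCT4 is ~3% saturated, so its flux keeps rising with the lactate
# produced by glycolysis (suited to export).
```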
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Because lactate can either be oxidized in the mitochondria (back to pyruvate for entry into the Krebs cycle, generating NADH in the process) or serve as a gluconeogenic precursor, the intracellular lactate shuttle has been proposed to account for the majority of lactate turnover in the human body (as evidenced by the only slight increases in arterial lactate concentration). Brooks et al. confirmed this in 1999, when they found that lactate oxidation exceeded that of pyruvate by 10-40% in rat liver, skeletal muscle, and cardiac muscle. In 1990, Roth and Brooks found evidence for the facilitated transporter of lactate, the monocarboxylate transport protein (MCT), in sarcolemmal vesicles of rat skeletal muscle. Later, MCT1 was the first of the MCT superfamily to be identified. The first four MCT isoforms are responsible for pyruvate/lactate transport. MCT1 was found to be the predominant isoform in many tissues, including skeletal muscle, neurons, erythrocytes, and sperm. In skeletal muscle, MCT1 is found in the membranes of the sarcolemma, peroxisome, and mitochondria. Because of the mitochondrial localization of MCT (to transport lactate into the mitochondria), LDH (to oxidize the lactate back to pyruvate), and COX (cytochrome c oxidase, the terminal element of the electron transport chain), Brooks et al. proposed the possibility of a mitochondrial lactate oxidation complex in 2006. This is supported by the observation that the ability of muscle cells to oxidize lactate is related to their density of mitochondria.
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Furthermore, it was shown that training increases MCT1 protein levels in skeletal muscle mitochondria, and that this corresponds with an increase in the ability of muscle to clear lactate from the body during exercise. The affinity of MCT for pyruvate is greater than that for lactate; however, two factors ensure that lactate is present at concentrations that are orders of magnitude greater than pyruvate. First, the equilibrium constant of LDH ($3.6 \times 10^4$) greatly favors the formation of lactate. Second, the immediate removal of pyruvate from the mitochondria (either via the Krebs cycle or gluconeogenesis) ensures that pyruvate is not present at great concentrations within the cell. LDH isoenzyme expression is tissue-dependent. It was found that in rats, LDH-1 was the predominant form in the mitochondria of the myocardium, whereas LDH-5 was predominant in liver mitochondria. It is suspected that this difference in isoenzyme reflects the predominant pathway the lactate will take: in the liver it is more likely to be gluconeogenesis, whereas in the myocardium it is more likely to be oxidation. Despite these differences, it is thought that the redox state of the mitochondria, not the particular LDH isoform, dictates the ability of a tissue to oxidize lactate. As illustrated by the peroxisomal intracellular lactate shuttle described above, the interconversion of lactate and pyruvate between cellular compartments plays a key role in the oxidative state of the cell.
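A toy calculation illustrates the first factor. It treats the quoted equilibrium constant as the apparent constant at physiological pH and assumes a textbook cytosolic NAD+/NADH ratio of roughly 700, which is not a figure from this article:

```python
# Toy calculation of the cytosolic lactate:pyruvate ratio implied by the
# LDH equilibrium constant quoted above (3.6 x 10^4), treated here as the
# apparent constant at physiological pH. The cytosolic NAD+/NADH ratio of
# ~700 is an assumed textbook order-of-magnitude figure.
K_eq_apparent = 3.6e4
nad_over_nadh = 700.0

# pyruvate + NADH <-> lactate + NAD+  =>  [lac]/[pyr] = K * [NADH]/[NAD+]
lactate_over_pyruvate = K_eq_apparent / nad_over_nadh
print(f"[lactate]/[pyruvate] ~ {lactate_over_pyruvate:.0f}")
# ~50-fold excess of lactate over pyruvate, consistent with the text's
# point that lactate, not pyruvate, is the abundant monocarboxylate.
```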
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Specifically, the interconversion of NAD+ and NADH between compartments has been hypothesized to occur in the mitochondria. However, the evidence for this is lacking, as both lactate and pyruvate are quickly metabolized inside the mitochondria. The existence of the peroxisomal lactate shuttle nevertheless suggests that this redox shuttle could exist for other organelles. Increased intracellular levels of lactate can act as a signalling hormone, inducing changes in gene expression that upregulate genes involved in lactate removal. These genes include MCT1, cytochrome c oxidase (COX), and other enzymes involved in the lactate oxidation complex. Additionally, lactate increases levels of peroxisome proliferator-activated receptor gamma coactivator 1-alpha (PGC1-α), suggesting that lactate stimulates mitochondrial biogenesis. In addition to the role of the lactate shuttle in supplying NAD+ substrate for β-oxidation in the peroxisomes, the shuttle also regulates free fatty acid (FFA) mobilization by controlling plasma lactate levels. Research has demonstrated that lactate inhibits lipolysis in fat cells through activation of an orphan G-protein-coupled receptor (GPR81) that acts as a lactate sensor. As found by Brooks et al., while lactate is disposed of mainly through oxidation and only a minor fraction supports gluconeogenesis, lactate is the main gluconeogenic precursor during sustained exercise.
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Brooks demonstrated in his earlier studies that little difference in lactate production rates was seen between trained and untrained subjects at equivalent power outputs. What was seen, however, was a more efficient clearance rate of lactate in the trained subjects, suggesting an upregulation of MCT protein. Local lactate use depends on exercise exertion. At rest, approximately 50% of lactate disposal takes place through lactate oxidation, whereas during strenuous exercise (50-75% of VO2 max) approximately 75-80% of lactate is used by the active cell, indicating lactate's role as a major contributor to energy conversion during increased exercise exertion. Highly malignant tumors rely heavily on anaerobic glycolysis (metabolism of glucose to lactic acid even under ample tissue oxygen; the Warburg effect) and thus need to efflux lactic acid via MCTs into the tumor micro-environment to maintain a robust glycolytic flux and to prevent the tumor from being "pickled to death". The MCTs have been successfully targeted in pre-clinical studies using RNAi and the small-molecule inhibitor alpha-cyano-4-hydroxycinnamic acid (ACCA; CHC), showing that inhibiting lactic acid efflux is a very effective therapeutic strategy against highly glycolytic malignant tumors. In some tumor types, growth and metabolism rely on the exchange of lactate between glycolytic and rapidly respiring cells. This is of particular importance during tumor cell development, when cells often undergo anaerobic metabolism, as described by the Warburg effect.
https://en.wikipedia.org/wiki?curid=39047203
Lactate shuttle hypothesis Other cells in the same tumor may have access to or recruit sources of oxygen (via angiogenesis), allowing them to undergo aerobic oxidation. The lactate shuttle could occur as the hypoxic cells anaerobically metabolize glucose and shuttle the lactate via MCT to adjacent cells capable of using it as a substrate for oxidation. Investigation into how MCT-mediated lactate exchange in targeted tumor cells can be inhibited, thereby depriving the cells of key energy sources, could lead to promising new chemotherapeutics. Additionally, lactate has been shown to be a key factor in tumor angiogenesis: it promotes angiogenesis by upregulating HIF-1 in endothelial cells. Thus a promising target of cancer therapy is the inhibition of lactate export through MCT-1 blockers, depriving developing tumors of an oxygen source.
https://en.wikipedia.org/wiki?curid=39047203
Stephen Liddle Stephen T. Liddle FRSC is an English professor of inorganic chemistry at the University of Manchester, where he has been Head of Inorganic Chemistry and Co-Director of the Centre for Radiochemistry Research since 2015. Liddle was born in Sunderland, in the North East of England, in 1974. In 1997 he graduated with a BSc (Hons) in chemistry with applied chemistry from the University of Newcastle. His degree included a year working as a research scientist for ICI Performance Chemicals at Wilton, Teesside. He continued his studies at Newcastle, receiving his PhD in 2000 under the supervision of Professor W. Clegg. After postdoctoral fellowships at the University of Edinburgh (P. J. Bailey), the University of Newcastle (K. Izod) as the Wilfred Hall Research Fellow, and the University of Nottingham (P. L. Arnold), his independent academic career began at the University of Nottingham with a Royal Society University Research Fellowship (2007-2015) held with a proleptic Lectureship. He was promoted to Associate Professor and Reader (2010) and Professor of Inorganic Chemistry (2013). He moved to the University of Manchester in 2015 as Head of Inorganic Chemistry and Co-Director of the Centre for Radiochemistry Research. He currently holds an Engineering and Physical Sciences Research Council Established Career Fellowship (2015-2020).
https://en.wikipedia.org/wiki?curid=39064863
Stephen Liddle He was Chairman of COST Action CM1006, a 22-country research network of over 120 research groups in f-block chemistry (2011-2015), is an advisor to the Commonwealth Scholarship Commission (2013-), and is an elected category 3 member of the Senate of the University of Manchester (2016-). Liddle's research focuses on synthetic inorganic chemistry, particularly making early transition metal, lanthanide, and actinide complexes to explore their structure, bonding, reactivity, and magnetism. In 2011 he reported a single-molecule magnet based on depleted uranium. In 2012 his research group was the first to synthesize a molecule with a terminal uranium-nitrogen triple bond (a uranium nitride). Liddle was elected a Fellow of the Royal Society of Chemistry (FRSC) in 2011 and is Vice President of the Executive Committee of the European Rare Earth and Actinide Society (2012-). He was awarded the RSC Sir Edward Frankland Fellowship (2011), the RSC Radiochemistry Group Bill Newton Award (2011) and the RSC Corday-Morgan Prize (2015). He was a recipient of a Rising Star Award at the 41st International Conference on Coordination Chemistry (2014). He was awarded a European Research Council (ERC) Starter Grant (2009) and Consolidator Grant (2014). He was one of the Periodic Videos team awarded the IChemE Petronas Award for excellence in education and training (2008).
https://en.wikipedia.org/wiki?curid=39064863
Stephen Liddle Liddle is known for his work on "The Periodic Table of Videos", a series of videos from the University of Nottingham presented on YouTube, which feature educational vignettes on the periodic table. He is executive producer for Chemistry at Manchester Explains Research Advances (CAMERA), a series of videos from the University of Manchester presented on YouTube, which feature videos explaining chemistry research papers published from the University of Manchester. He is a National Co-ordinating Centre for Public Engagement Ambassador (2013-).
https://en.wikipedia.org/wiki?curid=39064863
C7H10O7 The molecular formula C7H10O7 may refer to:
https://en.wikipedia.org/wiki?curid=39070884
Crack closure is a phenomenon in fatigue loading, where the opposing faces of a crack remain in contact even with an external load acting on the material. As the load is increased, a critical value will be reached at which the crack becomes "open". Crack closure occurs from the presence of material propping open the crack faces and can arise from many sources, including plastic deformation or phase transformation during crack propagation, corrosion of the crack surfaces, the presence of fluids in the crack, or roughness of the cracked surfaces. During cyclic loading, a crack will open and close, causing the crack tip opening displacement (CTOD) to vary cyclically in phase with the applied force. If the loading cycle includes a period of negative force or stress ratio $R$ (i.e. $R < 0$), the CTOD will remain equal to zero as the crack faces are pressed together. However, it was discovered that the CTOD can also be zero at other times, even when the applied force is positive, preventing the stress intensity factor from reaching its minimum. Thus, the amplitude of the stress intensity factor range, also known as the "crack tip driving force", is reduced relative to the case in which no closure occurs, thereby reducing the crack growth rate. The closure level increases with stress ratio, and above approximately $R = 0.5$ the crack faces do not contact and closure does not typically occur. The applied load will generate a stress intensity factor $K$ at the crack tip, producing a crack tip opening displacement, CTOD.
https://en.wikipedia.org/wiki?curid=39078701
Crack closure Crack growth is generally a function of the stress intensity factor range, $\Delta K = K_{\max} - K_{\min}$, for an applied loading cycle. However, crack closure occurs when the fracture surfaces are in contact below the "opening" level stress intensity factor $K_{op}$, even under positive load, allowing us to define an effective stress intensity range $\Delta K_{eff} = K_{\max} - K_{op}$, which is less than the nominal applied $\Delta K$. The phenomenon of crack closure was first discovered by Elber in 1970. He observed that contact between the fracture surfaces could take place even during cyclic tensile loading. The crack closure effect helps explain a wide range of fatigue data, and is especially important in understanding the effect of stress ratio (less closure at higher stress ratio) and of short cracks (less closure than long cracks at the same cyclic stress intensity). The phenomenon of plasticity-induced crack closure is associated with the development of residual plastically deformed material on the flanks of an advancing fatigue crack. The degree of plasticity at the crack tip is influenced by the level of material constraint; the two extreme cases are plane stress (low constraint) and plane strain (high constraint). "Deformation-induced martensitic transformation" in the stress field of the crack tip is another possible cause of crack closure. It was first studied by Pineau and Pelloux and by Hornbogen in metastable austenitic stainless steels.
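A short numerical sketch of these definitions follows; the K values are illustrative and the Paris-law constants are hypothetical:

```python
# Effect of closure on the crack driving force, using the definitions above.
# All numbers are illustrative; C and m are hypothetical Paris-law constants.
K_max = 10.0   # MPa*sqrt(m)
K_min = 1.0    # MPa*sqrt(m), stress ratio R = 0.1
K_op  = 3.0    # MPa*sqrt(m), opening level, e.g. measured from compliance

dK     = K_max - K_min          # nominal stress intensity range
dK_eff = K_max - K_op           # effective range accounting for closure
U      = dK_eff / dK            # Elber's closure ratio

C, m = 1e-11, 3.0               # hypothetical Paris-law constants
rate_nominal = C * dK**m        # growth rate ignoring closure
rate_closure = C * dK_eff**m    # growth rate using the effective range

print(f"dK = {dK:.1f}, dK_eff = {dK_eff:.1f}, U = {U:.2f}")
print(f"growth-rate ratio with/without closure: {rate_closure/rate_nominal:.2f}")
# Closure here cuts the predicted growth rate to (7/9)^3 ~ 0.47 of the
# nominal value, illustrating why neglecting closure is conservative.
```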
https://en.wikipedia.org/wiki?curid=39078701
Crack closure These steels transform from the austenitic to the martensitic lattice structure under sufficiently high deformation, which leads to an increase of the material volume ahead of the crack tip. Therefore, compressive stresses are likely to arise as the crack surfaces contact each other. This transformation-induced closure is strongly influenced by the size and geometry of the test specimen and of the fatigue crack. "Oxide-induced closure" occurs where rapid corrosion takes place during crack propagation. It is caused when the base material at the fracture surface is exposed to gaseous and aqueous atmospheres and becomes oxidized. Although the oxidized layer is normally very thin, under continuous and repetitive deformation the contaminated layer and the base material repeatedly fracture, exposing more of the base material and thus producing even more oxide. The oxidized volume grows and is typically larger than the volume of the base material around the crack surfaces. As such, the volume of the oxides can be interpreted as a wedge inserted into the crack, reducing the effective stress intensity range. Experiments have shown that oxide-induced crack closure occurs at both room and elevated temperature, and the oxide build-up is more noticeable at low R-ratios and low (near-threshold) crack growth rates. "Roughness induced closure" occurs with Mode II, or in-plane shear, loading and is due to the misfit of the rough fracture surfaces of the crack's upper and lower parts.
https://en.wikipedia.org/wiki?curid=39078701
Crack closure Due to the anisotropy and heterogeneity of the microstructure, out-of-plane deformation occurs locally when Mode II loading is applied, and thus microscopic roughness of the fatigue fracture surfaces is present. As a result, these mismatched wedges come into contact during the fatigue loading process, resulting in crack closure. The misfit of the fracture surfaces also occurs in the far field of the crack, which can be explained by the asymmetric displacement and rotation of material. Roughness-induced crack closure is valid when the roughness of the surface is of the same order as the crack opening displacement. It is influenced by factors such as grain size, loading history, material mechanical properties, load ratio and specimen type.
https://en.wikipedia.org/wiki?curid=39078701
Barsoum elements are a finite element analysis technique used in fracture analysis to determine the stress intensity factor of a crack. They were introduced by R. Barsoum in 1976. In this method, the usual isoparametric 6-node triangular or 8-node quadrilateral elements are employed, and the mid-side nodes on two adjacent sides are shifted towards the corner node to the quarter-point location. For these locations of the mid-side nodes, the Jacobian becomes singular at the corner node, making the displacement derivatives, and hence the stresses and strains, infinite there. It can be shown that the variation of stresses along the two sides of the element then follows $1/\sqrt{r}$, where $r$ is the distance from the crack tip. On the other hand, if all three nodes on the side of an 8-node quadrilateral element are collapsed to one node (given the same node number), then the stress or strain varies as $1/\sqrt{r}$ along any radial line emanating from the crack tip. All the mid-side nodes adjacent to the crack tip are placed at quarter-point locations. From the displacement field solution, the stress intensity factor "K" in a mode I case can be calculated from the relation formula_1, where the "V" terms are the y-direction displacements of the nodes behind the crack tip. It has been demonstrated that "K" found by this method is within 2% of theoretical solutions. The accuracy of the finite element calculation can be improved further if the neighboring elements are also modeled to contain the terms depicting the stresses for a crack with its tip outside the element.
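The origin of the singularity can be verified symbolically on a one-dimensional quadratic element with its mid-node moved to the quarter point; this is a simplified illustration, not Barsoum's original derivation:

```python
import sympy as sp

# Quarter-point trick on a 1D quadratic element: nodes at x = 0, L/4, L
# (the mid node has been moved to the quarter point).
xi = sp.symbols('xi', real=True)
L = sp.symbols('L', positive=True)

# Standard quadratic shape functions on xi in [-1, 1]
N1 = -xi * (1 - xi) / 2   # node at xi = -1  -> x = 0 (crack tip)
N2 = (1 - xi) * (1 + xi)  # node at xi = 0   -> x = L/4
N3 = xi * (1 + xi) / 2    # node at xi = +1  -> x = L

x = sp.factor(N1 * 0 + N2 * L / 4 + N3 * L)
J = sp.factor(sp.diff(x, xi))

print("x(xi) =", x)   # -> L*(xi + 1)**2/4
print("J(xi) =", J)   # -> L*(xi + 1)/2, which vanishes at xi = -1
print("J**2 - x*L simplifies to:", sp.simplify(J**2 - x * L))  # -> 0, so J = sqrt(x*L)
# Since strain = (du/dxi)/J ~ 1/sqrt(x), the element reproduces the
# 1/sqrt(r) crack-tip singularity exactly at the quarter-point/collapsed node.
```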
https://en.wikipedia.org/wiki?curid=39082552
Alejandro Strachan is a scientist in the field of computational materials and a professor of materials engineering at Purdue University. Before joining Purdue University, he was a staff member at Los Alamos National Laboratory. Strachan studied physics at the University of Buenos Aires, Argentina, receiving his Master of Science there in 1995, followed by his PhD in 1998. He then moved to Caltech, first as a postdoctoral scholar and then as a research scientist, until 2002. Strachan became a staff scientist in the Theoretical Division of Los Alamos National Laboratory in 2002, staying until becoming a faculty member at Purdue in 2005. He became a full professor in 2013. Strachan's research focuses on the development of predictive atomistic and molecular simulation methodologies to describe materials, primarily density functional theory and molecular dynamics. With these methods he studies problems of technological importance, including coupled electronic, thermal, and mechanical processes in nano-electronics, MEMS and energy conversion devices; the thermo-mechanical response and chemistry of polymers, polymer composites, and molecular solids; as well as active materials, including shape-memory alloys and high-energy-density materials. He also actively focuses on uncertainty quantification across the field of materials modelling. He previously served as the Deputy Director of the NNSA Center for the Prediction of Reliability, Integrity and Survivability of Microsystems (PRISM).
https://en.wikipedia.org/wiki?curid=39084991
Alejandro Strachan He is currently co-principal investigator for the Network for Computational Nanotechnology (NCN) and nanoHUB (with principal investigator Gerhard Klimeck) and co-leads the Center for Predictive Material and Devices (c-PRIMED), also with Klimeck. Strachan is also active in education, particularly through nanoHUB, including the fully open and online course "From Atoms to Materials: Predictive Theories and Simulations".
https://en.wikipedia.org/wiki?curid=39084991
Crack growth resistance curve In materials modeled by linear elastic fracture mechanics (LEFM), crack extension occurs when the applied energy release rate $G$ exceeds $R$, where $R$ is the material's resistance to crack extension. Conceptually, $G$ can be thought of as the energetic "gain" associated with an additional infinitesimal increment of crack extension, while $R$ can be thought of as the energetic "penalty" of an additional infinitesimal increment of crack extension. At any moment in time, if $G > R$ then crack extension is energetically favorable. A complication to this picture is that in some materials, $R$ is not constant during the crack extension process. A plot of crack growth resistance $R$ versus crack extension $\Delta a$ is called a crack growth resistance curve, or R-curve. A plot of energy release rate $G$ versus crack extension $\Delta a$ for a particular loading configuration is called the driving force curve. The nature of the applied driving force curve relative to the material's R-curve determines the stability of a given crack. The usage of R-curves in fracture analysis is a more complex, but more comprehensive, failure criterion compared to the common criterion that fracture occurs when $G = G_c$, where $G_c$ is simply a constant value called the critical energy release rate. An R-curve-based failure analysis takes into account the notion that a material's resistance to fracture is not necessarily constant during crack growth.
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve R-curves can alternatively be discussed in terms of stress intensity factors $K$ rather than energy release rates $G$, in which case the R-curve is expressed as a fracture toughness ($K_R$, sometimes referred to as $K_c$) as a function of crack length $a$. The simplest case of a material's crack resistance curve is a material which exhibits a "flat R-curve" ($R$ constant with respect to $\Delta a$). In materials with flat R-curves, as a crack propagates, the resistance to further crack propagation remains constant, and thus the common failure criterion of $G = G_c$ is largely valid. In these materials, if $G$ increases as a function of $a$ (which is the case in many loading configurations and crack geometries), then as soon as the applied $G$ exceeds $G_c$ the crack will grow unstably to failure without ever halting. Physically, the independence of $R$ from $\Delta a$ indicates that in these materials the phenomena which are energetically costly during crack propagation do not evolve during crack propagation. This tends to be an accurate model for perfectly brittle materials such as ceramics, in which the principal energetic cost of fracture is the creation of new free surfaces on the crack faces. The character of this energetic cost remains largely unchanged regardless of how far the crack has propagated from its initial length.
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve Another category of R-curve that is common in real materials is the "rising R-curve" ($R$ increases as $\Delta a$ increases). In materials with rising R-curves, as a crack propagates, the resistance to further crack propagation increases, and a higher and higher applied $G$ is required in order to achieve each subsequent increment of crack extension $\Delta a$. As such, it can be technically challenging in practice to define a single value quantifying resistance to fracture in these materials (i.e. $G_c$ or $K_R$), as the resistance to fracture rises continuously as any given crack propagates. Materials with rising R-curves can also more easily exhibit stable crack growth than materials with flat R-curves, even if $G$ strictly increases as a function of $a$. If at some moment in time a crack exists with initial length $a_0$ and an applied energy release rate which infinitesimally exceeds the R-curve at this crack length, then the material would immediately fail if it exhibited flat R-curve behavior. If instead it exhibits rising R-curve behavior, then the crack has an added criterion for continued growth: the instantaneous slope of the driving force curve must be greater than the instantaneous slope of the crack resistance curve, $\frac{dG}{da} > \frac{dR}{da}$, or else it is energetically unfavorable to grow the crack further.
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve If $G$ is infinitesimally greater than $R$ but $\frac{dG}{da} < \frac{dR}{da}$, then the crack will grow by an infinitesimally small increment $\Delta a$ such that $G = R$ is restored, and crack growth will then arrest. If the applied crack driving force $G$ were gradually increased over time (through increasing the applied force, for example), this would lead to stable crack growth in this material as long as the instantaneous slope of the driving force curve remained less than the slope of the crack resistance curve. Physically, the dependence of $R$ on $\Delta a$ indicates that in rising R-curve materials, the phenomena which are energetically costly during crack propagation evolve as the crack grows in a way that leads to accelerated energy dissipation during crack growth. This tends to be the case in materials which undergo ductile fracture, as the plastic zone at the crack tip can be observed to increase in size as the crack propagates, indicating that an increasing amount of energy must be dissipated in plastic deformation for the crack to continue to grow. A rising R-curve can also sometimes be observed in situations where a material's fracture surface becomes significantly rougher as the crack propagates, leading to additional energy dissipation as additional free surface area is generated. In theory, $R$ does "not" continue to increase to infinity as $\Delta a \to \infty$; instead, it asymptotically approaches a steady-state value after a finite amount of crack growth.
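This tangency condition can be explored numerically. The sketch below assumes a hypothetical power-law rising R-curve and a centre-crack driving force under load control; all values are illustrative, not material data:

```python
import numpy as np

# Numerical sketch of R-curve stability under load control, assuming a
# hypothetical rising R-curve R(da) = R0 + A*da^n and a driving force
# G = sigma^2 * pi * a / E' (centre crack, plane stress). Illustrative only.
Eprime = 70e9                 # effective modulus, Pa
a0 = 0.01                     # initial crack length, m
R0, A, n = 100.0, 3e4, 0.5    # R-curve parameters (J/m^2 units)

def G(a, sigma):              # driving force curve
    return sigma**2 * np.pi * a / Eprime

def R(da):                    # crack growth resistance curve
    return R0 + A * da**n

da = np.linspace(1e-6, 0.05, 20000)
for sigma in np.linspace(10e6, 400e6, 391):
    stable = G(a0 + da, sigma) < R(da)   # arrest points: driving force < resistance
    if not stable.any():                 # no arrest point left: unstable fracture
        print(f"instability at sigma ~ {sigma/1e6:.0f} MPa")
        break
# Under load control the crack grows stably to successive arrest points until
# the driving force curve becomes tangent to the R-curve, then runs away.
```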
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve It is usually not feasible to reach this steady-state condition, as it often requires very long crack extensions and would thus require large testing specimen geometries (and thus high applied forces) to observe. As such, most materials with rising R-curves are treated as if $R$ rises continually until failure. While far less common, some materials can exhibit falling R-curves ($R$ decreases as $\Delta a$ increases). In some cases, the material may initially exhibit rising R-curve behavior, reach a steady-state condition, and then transition into falling R-curve behavior. In a falling R-curve regime, as a crack propagates, the resistance to further crack propagation drops, and less and less applied $G$ is required to achieve each subsequent increment of crack extension $\Delta a$. Materials in these conditions exhibit highly unstable crack growth as soon as any initial crack begins to propagate. Polycrystalline graphite has been reported to demonstrate falling R-curve behavior after initially exhibiting rising R-curve behavior, which is postulated to be due to the gradual development of a microcracking damage zone in front of the crack tip that eventually dominates after the phenomena producing the initial rising R-curve behavior reach steady state. Size and geometry also play a role in determining the shape of the R-curve.
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve A crack in a thin sheet tends to produce a steeper R-curve than a crack in a thick plate because there is a low degree of stress triaxiality at the crack tip in the thin sheet, while the material near the tip of the crack in the thick plate may be in plane strain. The R-curve can also change at free boundaries in the structure; thus, a wide plate may exhibit somewhat different crack growth resistance behavior than a narrow plate of the same material. Ideally, the R-curve, like other measures of fracture toughness, is a property only of the material and does not depend on the size or shape of the cracked body; much of fracture mechanics is predicated on the assumption that fracture toughness is a material property. ASTM developed a standard practice for determining R-curves to accommodate the widespread need for this type of data. While the materials to which this practice can be applied are not restricted by strength, thickness or toughness, the test specimens must be of sufficient size to remain predominantly elastic throughout the test. The size requirement ensures the validity of the linear elastic fracture mechanics calculations. Specimens of standard proportions are required, but size is variable, adjusted for the yield strength and toughness of the material considered. ASTM Standard E561 covers the determination of R-curves using middle-cracked tension [M(T)], compact tension [C(T)], and crack-line-wedge-loaded [C(W)] specimens.
https://en.wikipedia.org/wiki?curid=39087496
Crack growth resistance curve While the C(W) specimen has gained substantial popularity for collecting $K_R$-curve data, many organizations still conduct wide-panel, center-cracked tension tests to obtain fracture toughness data. As with the plane-strain fracture toughness standard, ASTM E399, the planar dimensions of the specimens are sized to ensure that nominally elastic conditions are met. For the M(T) specimen, the width (W) and half crack size (a) must be chosen so that the remaining ligament remains below net-section yielding at failure.
https://en.wikipedia.org/wiki?curid=39087496
Widespread fatigue damage (WFD) in a structure is characterised by the simultaneous presence of fatigue cracks at multiple points that are of sufficient size and density that, while individually they may be acceptable, link-up of the cracks could suddenly occur and the structure could fail. For example, small fatigue cracks developing along a row of fastener holes can coalesce, increasing the stress on adjacent cracked sites and thus the rate of growth of those cracks (a simple link-up check is sketched below). The objective of a designer is to determine when large numbers of small cracks could degrade the joint strength to an unacceptable level. The in-flight loss of part of the fuselage of Aloha Airlines Flight 243 was attributed to multi-site fatigue damage. Several factors can influence the occurrence of WFD, including design issues and probabilistic parameters such as manufacturing quality and environment. The two categories of WFD are multiple site damage (MSD), the simultaneous presence of fatigue cracks in the same structural element, and multiple element damage (MED), the simultaneous presence of fatigue cracks in similar adjacent structural elements. To manage WFD, a parameter called the Limits Of Validity (LOV) is defined, where the LOV is “the period of time (in flight cycles, hours or both) up to which WFD will not occur in aeroplane structure.”
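A simplified link-up check is sometimes based on a plastic-zone "touch" criterion (often attributed to Swift), in which adjacent cracks are assumed to coalesce once their crack-tip plastic zones meet. The sketch below uses that criterion with illustrative numbers and a highly idealized geometry:

```python
import numpy as np

# MSD link-up sketch using a plastic-zone "touch" criterion: two adjacent
# cracks are assumed to link up when their crack-tip plastic zones meet
# across the remaining ligament. All numbers are illustrative assumptions.
sigma   = 100e6   # applied stress, Pa
sigma_y = 350e6   # yield strength, Pa
pitch   = 0.025   # fastener hole pitch, m
a1, a2  = 0.004, 0.003  # half-lengths of the two adjacent cracks, m

# Irwin plane-stress plastic zone size, r_p = (K/sigma_y)^2 / (2*pi),
# with K for a centre crack in a wide sheet (an idealization here).
def plastic_zone(a, sigma, sigma_y):
    K = sigma * np.sqrt(np.pi * a)
    return (K / sigma_y)**2 / (2.0 * np.pi)

ligament = pitch - (a1 + a2)            # material left between the crack tips
rp_total = plastic_zone(a1, sigma, sigma_y) + plastic_zone(a2, sigma, sigma_y)

print(f"ligament = {ligament*1e3:.2f} mm, plastic zones = {rp_total*1e3:.3f} mm")
print("link-up predicted" if rp_total >= ligament else "no link-up yet")
```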
https://en.wikipedia.org/wiki?curid=39088988
Isotopes in medicine A medical isotope is an isotope used in medicine. The first uses of isotopes in medicine were in radiopharmaceuticals, and this is still the most common use. However, more recently, separated stable isotopes have also come into use. Examples of non-radioactive medical isotopes are:
https://en.wikipedia.org/wiki?curid=39090181
Aneugen An aneugen is a substance that causes a daughter cell to have an abnormal number of chromosomes, i.e. aneuploidy. A substance's aneugenicity reflects its ability to induce aneuploidy. Exposure of males to lifestyle, environmental and/or occupational hazards may increase the risk of spermatozoa aneuploidy. Tobacco smoke contains chemicals that cause DNA damage (see Tobacco smoking#Health). Smoking can also induce aneuploidy. For instance, smoking increases chromosome 13 disomy in spermatozoa by 3-fold and YY disomy by 2-fold. Occupational exposure to benzene is associated with a 2.8-fold increase of XX disomy and a 2.6-fold increase of YY disomy in spermatozoa. Pesticides are released into the environment in large quantities, so that most individuals have some degree of exposure. The insecticides fenvalerate and carbaryl have been reported to increase spermatozoa aneuploidy. Occupational exposure of pesticide factory workers to fenvalerate is associated with increased spermatozoa DNA damage. Exposure to fenvalerate raised sex chromosome disomy 1.9-fold and disomy of chromosome 18 by 2.6-fold (Xia et al., 2004). Exposure of male workers to carbaryl increased DNA fragmentation in spermatozoa, and also increased sex chromosome disomy by 1.7-fold and chromosome 18 disomy by 2.2-fold. Humans are exposed to perfluorinated compounds (PFCs) in many commercial products. Men contaminated with PFCs in whole blood or seminal plasma have spermatozoa with increased levels of DNA fragmentation and chromosomal aneuploidies.
https://en.wikipedia.org/wiki?curid=39090240
Concrete fracture analysis Concrete is a widely used construction material composed of aggregate, cement and water, and its composition is varied to suit different applications. Even the size of the aggregate can influence the mechanical properties of concrete to a great extent. Concrete is strong in compression but weak in tension, and it fractures easily when tensile loads are applied. The reason is as follows: the aggregates in concrete are capable of taking compressive stresses, so concrete withstands compressive loading; under tensile loading, however, cracks form that separate the cement particles holding the aggregates together, and as a crack propagates this separation causes the entire structure to fail. This problem is resolved by the introduction of reinforcing components such as metallic bars, ceramic fibres, etc. These components act as a skeleton for the entire structure and are capable of holding the aggregates together under tensile loading. This is known as reinforcement of concrete. Concrete may be referred to as a brittle material because its behaviour under loading is completely different from that of ductile materials like steel. But concrete actually differs from ideally brittle materials in many respects, and in modern fracture mechanics it is considered a quasi-brittle material.
https://en.wikipedia.org/wiki?curid=39092774
Concrete fracture analysis Quasi-brittle materials possess considerable toughness, which arises from the subcritical cracking that occurs during the loading of concrete. Subcritical cracking, which precedes ultimate failure, results in a nonlinear stress-strain response and R-curve behaviour; concrete thus derives its toughness from subcritical cracking. Concrete also has a heterogeneous structure due to the uneven distribution of its ingredients, which complicates the analysis and can produce misleading results. Linear elastic fracture mechanics (LEFM) yields reliable results for ductile materials like steel, and most of the experiments and theories in fracture mechanics were formulated with ductile materials as the object of interest. But if we compare the salient features of LEFM with results derived from the testing of concrete, we find them irrelevant and sometimes trivial. For example, LEFM permits infinite stress at the crack tip, which makes no sense in the real analysis of concrete, where the stress at the crack tip is finite; LEFM thus fails to calculate the stress at the crack tip precisely, and other means are needed to determine the stress at the crack tip and the stress distribution near it. LEFM also cannot explain many phenomena exhibited by concrete. For example, in LEFM-based analysis, no specific region is identified between the area which is cracked and that which is not.
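The divergence of the LEFM crack-tip stress field can be shown in a few lines; the stress intensity factor value is illustrative:

```python
import numpy as np

# LEFM near-tip stress field sigma ~ K / sqrt(2*pi*r): the predicted stress
# grows without bound as r -> 0, which is the unphysical feature for concrete
# noted above. The K value is illustrative.
K = 1.0e6  # stress intensity factor, Pa*sqrt(m)

for r in [1e-2, 1e-4, 1e-6, 1e-8]:   # distance ahead of the crack tip, m
    sigma = K / np.sqrt(2.0 * np.pi * r)
    print(f"r = {r:.0e} m  ->  sigma = {sigma:.3e} Pa")
# Real concrete caps the stress at roughly its tensile strength (a few MPa),
# which is why cohesive, fracture-energy-based approaches are used instead.
```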
https://en.wikipedia.org/wiki?curid=39092774
Concrete fracture analysis But it is evident that in concrete there is an intermediate region between the cracked and uncracked portions. This region is defined as the fracture process zone (FPZ). The FPZ consists of micro-cracks, minute individual cracks located near the crack tip. As the crack propagates, these micro-cracks merge into a single structure that gives continuity to the already existing crack; the FPZ thus acts as a bridging zone between the cracked and uncracked regions. Analysis of this zone deserves special attention because it is very helpful in predicting crack propagation and ultimate failure in concrete. In a ductile material such as steel the FPZ is very small, so strain hardening dominates over strain softening; because of the small FPZ, the crack tip can easily be distinguished from the uncracked metal, and in ductile materials the FPZ is a yielding zone. In concrete, by contrast, the FPZ is sufficiently large and contains micro-cracks, cohesive pressure still acts across the region, and strain softening prevails. Due to the presence of this comparatively large FPZ, locating a precise crack tip is not possible in concrete. If we plot the stress (Pa) versus strain (percentage deformation) characteristics of a material, the maximum stress up to which the material can be loaded is known as the peak value. The behaviour of concrete and steel can be compared to understand the difference in their fracture characteristics.
https://en.wikipedia.org/wiki?curid=39092774
Concrete fracture analysis For this, strain-controlled loading of an un-notched specimen of each material can be performed. From the observations, conclusions can be drawn about both the "pre-peak" and the "post-peak" behaviour of each material. Fracture energy is defined as the energy required to open a unit area of crack surface. It is a material property and does not depend on the size of the structure; this follows from the definition, since the energy is defined per unit area and the influence of size is thereby removed. Fracture energy can be expressed as the sum of the surface-creation energy and the surface-separation energy, and it is found to increase as the crack tip is approached. Fracture energy is a function of displacement rather than strain, and it plays the prime role in determining the ultimate stress at the crack tip. In finite element method (FEM) analysis of concrete, if the mesh size is varied, the entire result varies with it; this is called mesh-size dependence. With a coarser mesh the modelled structure appears able to withstand more stress, but such FEM results contradict the real behaviour. In classical fracture mechanics, the critical stress value is considered a material property, the same for a given material of any shape and size. In practice, however, it is observed that in some materials, such as plain concrete, size strongly influences the critical stress value. The fracture mechanics of concrete therefore treats the critical stress value both as a material property and as a size-dependent parameter
https://en.wikipedia.org/wiki?curid=39092774
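For cohesive-crack descriptions of concrete, the fracture energy discussed above is commonly evaluated as the area under the stress versus crack-opening (softening) curve, G_F = integral of sigma(w) dw. The following is a minimal sketch, assuming a linear softening law and illustrative material values; neither the law nor the numbers come from the text:

```python
import numpy as np

def fracture_energy(f_t, w_c, n=1000):
    """Fracture energy G_F as the area under a linear softening curve:
    sigma(w) = f_t * (1 - w / w_c) for 0 <= w <= w_c."""
    w = np.linspace(0.0, w_c, n)
    sigma = f_t * (1.0 - w / w_c)
    return np.trapz(sigma, w)  # integrate stress over crack opening

f_t = 3.0e6   # tensile strength, Pa (illustrative)
w_c = 0.1e-3  # critical crack opening, m (illustrative)

G_F = fracture_energy(f_t, w_c)
print(f"G_F = {G_F:.1f} J/m^2")  # linear law gives f_t * w_c / 2 = 150 J/m^2
```

Because G_F is defined per unit of crack area, the result is independent of specimen size, which is precisely the property that fracture-energy-based criteria exploit to remove the mesh-size dependence described above.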
Concrete fracture analysis This size effect can be expressed through Bazant's size effect law, sigma_N = B*f_t / sqrt(1 + D/D_0), where sigma_N is the nominal strength at failure, f_t is the tensile strength, D is the characteristic dimension of the structure, and B and D_0 are empirical constants. This clearly shows that material size, and even component sizes such as the aggregate size, can influence the cracking of concrete. Because of the heterogeneous nature of concrete, it responds anomalously to pre-existing crack-testing models, and it is evident that alterations of the existing models were required to capture the unique fracture-mechanics characteristics of concrete. The main drawback of both of these models was their neglect of the concept of fracture energy. The model proposed by Hillerborg in 1976 was the first to analyse concrete fracture using the fracture energy concept. In this model, Hillerborg describes two crack regions: a true (stress-free) crack and a fictitious crack corresponding to the fracture process zone. In the fictitious crack zone at the crack tip, the peak stress equals the tensile strength of the concrete. Along the FPZ the stress is continuous while the displacement is discontinuous. Crack propagation in the FPZ starts when the critical stress equals the tensile strength of concrete, and as the crack starts propagating the stress does not drop to zero. Using the plot of fracture energy versus crack width, the critical stress at any point, including the crack tip, can be calculated; thus one of the major drawbacks of LEFM is overcome using the fracture-energy approach. The direction of crack propagation can also be determined by identifying the direction of maximum energy release rate. The Hillerborg characteristic length, l_ch = E*G_F / f_t^2, where E is the elastic modulus and G_F the fracture energy, can be used to predict the brittleness of a material: as the magnitude of the characteristic length decreases, brittle behaviour dominates, and vice versa
https://en.wikipedia.org/wiki?curid=39092774
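A minimal numerical sketch of the two quantities just introduced follows: the size-effect estimate of nominal strength and the Hillerborg characteristic length. All material values and empirical constants below are illustrative assumptions, not figures from the text:

```python
import math

def hillerborg_characteristic_length(E, G_F, f_t):
    """l_ch = E * G_F / f_t**2; smaller values mean more brittle behaviour."""
    return E * G_F / f_t**2

def nominal_strength(D, B, f_t, D0):
    """Bazant size effect law: sigma_N = B * f_t / sqrt(1 + D / D0)."""
    return B * f_t / math.sqrt(1.0 + D / D0)

# Illustrative values for ordinary concrete (assumptions, not from the source)
E, G_F, f_t = 30e9, 100.0, 3.0e6   # Pa, J/m^2, Pa
print(f"l_ch = {hillerborg_characteristic_length(E, G_F, f_t):.3f} m")

B, D0 = 1.5, 0.1                   # empirical constants (assumed)
for D in [0.05, 0.2, 1.0]:         # structure size, m
    print(f"D = {D} m -> sigma_N = {nominal_strength(D, B, f_t, D0)/1e6:.2f} MPa")
```

With these numbers, l_ch comes out to roughly 0.33 m, in the range commonly quoted for ordinary concrete, and the nominal strength falls as the structure size D grows, reproducing the size dependence of the critical stress discussed earlier.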
Concrete fracture analysis Proposed by Bazant and Oh in 1983, the crack band theory is well suited to materials whose heterogeneous composition varies randomly over a certain range. A more or less homogeneous representative volume is selected for the analysis, from which the stresses and strains can be determined. The size of this region should be several times the maximum aggregate size; otherwise the data obtained have no physical significance. The fracture process zone is modelled as bands of smeared cracks, and to overcome the mesh unobjectivity of the finite element method, a fracture-energy-based cracking criterion is used. The crack width is estimated as the product of the crack band width and the element strain; in finite element analysis, the crack band width is taken as the element size along the fracture process path.
https://en.wikipedia.org/wiki?curid=39092774
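The crack-width estimate at the end of the passage above is a one-line computation; the sketch below spells it out, with the band width and smeared strain chosen purely for illustration:

```python
def crack_width(band_width, cracking_strain):
    """Crack band model: w = h * eps, where h is the crack band width
    (taken as the element size along the fracture path) and eps is the
    inelastic (cracking) strain smeared over the band."""
    return band_width * cracking_strain

h = 0.02      # element size along the fracture path, m (illustrative)
eps = 5.0e-3  # smeared cracking strain (illustrative)
print(f"estimated crack width: {crack_width(h, eps) * 1e3:.2f} mm")  # 0.10 mm
```

In practice, the softening law within the band is also scaled with the element size h so that the energy dissipated per unit crack area stays equal to G_F, which is what restores mesh objectivity.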
Green bullet Green bullet, green ammunition or green ammo are nicknames for a United States Department of Defense program to eliminate the use of hazardous materials from small arms ammunition and from small arms ammunition manufacturing. Initial objectives were the elimination of ozone-depleting substances, volatile organic compounds, and heavy metals from primers and projectiles. These materials were perceived as causing difficulties through the entire life cycle of ammunition: they generated hazardous wastes and emissions at manufacturing facilities, and use of the ammunition caused contamination at shooting ranges. Potential health hazards also made demilitarization and disposal of unused ammunition difficult and expensive. The Joint Working Group for Non-Toxic Ammunition was formed by the Small Caliber Ammunition Branch of the United States Army Armament Research, Development and Engineering Center in October 1995. Members of the working group included the National Guard of the United States, the United States Coast Guard, the United States Army Infantry School, the Industrial Operations Command, the Lake City Army Ammunition Plant, the Oak Ridge National Laboratory, the Los Alamos National Laboratory and the United States Department of Energy Kansas City Plant. In 2013, lead bullet production represented the second largest use of lead in the U.S., after lead-acid batteries. Studies by the U.S. CDC suggest blood lead levels are correlated with self-reported consumption of game meat
https://en.wikipedia.org/wiki?curid=39095186
Green bullet On October 11, 2013, Governor Jerry Brown of California signed into law AB 711, "Hunting: nonlead ammunition". Cost reductions from conversion to green ammo are estimated at "$2.5 million required for waste removal at each outdoor firing range as well as the $100 thousand annual costs for lead contamination monitoring". Two green ammunition cartridges are the 5.56×45mm NATO M855A1 and the MK281 40 mm grenade. Switching to the 5.56 mm green bullet, the M855A1 Enhanced Performance Round (EPR), in 2010 has eliminated nearly 2,000 tons of lead from the waste stream. U.S. Army representatives at a 2013 House Armed Services Committee hearing credited the 5.56mm M855A1 Enhanced Performance Round with performance capabilities "close to" those of a 7.62mm round. The longer, less dense M855A1 bullet must be seated deeper than the lead-core bullet it replaced to maintain the same exterior cartridge dimensions required for reliable functioning in self-loading firearms, and higher pressure is required to obtain the same bullet velocity with the reduced propellant volume. The increased pressure causes gas port erosion, producing a higher cyclic rate of automatic fire that makes jamming malfunctions more likely. Cracks in bolt locking lugs have been observed after 3,000 rounds of full automatic fire with the M855A1 cartridge
https://en.wikipedia.org/wiki?curid=39095186
Green bullet Enhanced Performance Round, Lead-Free The Army Research Laboratory and other participants developed the M855A1 Enhanced Performance Round (EPR) by applying ballistics concepts originally used in large-caliber cartridges to small arms, resulting in significant improvements to the lethality of small arms. The 5.56-mm (M855A1) ammunition was first battle-tested in mid-2010 in Afghanistan, and the 7.62-mm (M80A1) ammunition was fielded in 2014. The EPR "bronze tip" ammo – previously known generically as "Green Ammo" – was born at the kickoff meeting for Phase II of the Army's Green Ammunition replacement program in mid-2005 at the Lake City Army Ammunition Plant, where participants met to discuss problems surrounding environmentally-friendly small arms training ammunition. The program team was composed of Project Manager, Maneuver Ammunition Systems (PM-MAS); the Army Research Laboratory (ARL); the U.S. Army Armaments Research Development and Engineering Center (ARDEC); and other members. Participants evaluated more than 20 potential projectile designs before moving forward with a three-piece, reverse-jacket bullet design incorporating a hardened steel penetrator and a lead-free slug. The EPR produces consistent effects against soft targets, increased effectiveness at long ranges, increased defeat of hard targets, and reduced muzzle flash (to help conceal soldiers' firing positions)
https://en.wikipedia.org/wiki?curid=39095186
Green bullet The lead-free cartridges also reduce environmental impact by removing more than 2,000 metric tons of lead per year that otherwise could end up in the environment. The EPR contains an environmentally-friendly projectile that eliminates lead from the manufacturing process in direct support of the Army's commitment to environmental stewardship. Under the Green Ammo Phase II initiative, the Army focused on lead-free ammo for stateside training ranges in response to tightening state environmental regulations. Some of a bullet's kinetic energy is typically converted to heat if the bullet strikes a hard surface like rock, and collision debris may include high-temperature bullet fragments as sparks. Steel-core and solid copper ammunition have the highest potential to start wildfires; lead-core bullets are less likely to ignite surrounding vegetation. Rifling is required to stabilize elongated bullets, and longer bullets require faster rotation for similar stability. The rate of rotation is determined by the twist of the lands and grooves engraved on the interior of a rifled barrel. Twist is usually expressed as the length of barrel (in inches) in which the bullet rotates through a full 360 degrees; so bullets fired from a 1:10" twist rifle make a complete rotation in every 10 inches (25 cm) of distance traveled. Since lead is a very dense material, bullets made of inexpensive, non-toxic materials will be lighter than bullets made of lead unless bullet length is increased
https://en.wikipedia.org/wiki?curid=39095186
Green bullet Inferior external ballistics make lighter bullets less effective against distant targets, and increasing bullet length may require a faster rifling twist to maintain stability. Some early trial versions of the M16 rifle had 1:14" twist barrels, but this was changed to a 1:12" twist in early military production to improve stability with the M193 lead-core bullets of the early 5.56×45mm NATO cartridges. The twist was increased to 1:9" after combat experience demonstrated the advantages of the longer M855 bullet, in which a portion of the lead core is replaced by a less dense steel penetrator. Barrels with 1:7" twist have been used in 21st-century 5.56×45mm NATO firearms and have replaced the barrels of older United States military firearms, to stabilize the longer M856 tracer bullets and M855A1 green bullets made of less dense materials.
https://en.wikipedia.org/wiki?curid=39095186
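Since twist is expressed as inches of travel per full rotation, the spin rate a given barrel imparts follows directly from muzzle velocity. Below is a short sketch using an assumed muzzle velocity of about 3,000 ft/s for a 5.56×45mm rifle (an illustrative figure, not one from the text) across the twist rates mentioned above:

```python
def spin_rate_rps(muzzle_velocity_fps, twist_inches):
    """Bullet spin in revolutions per second: the bullet makes one full
    rotation per `twist_inches` of travel, so rev/s equals velocity
    (in inches/s) divided by the twist length in inches."""
    velocity_ips = muzzle_velocity_fps * 12.0  # feet/s -> inches/s
    return velocity_ips / twist_inches

# Assumed muzzle velocity of ~3,000 ft/s (illustrative)
for twist in [14, 12, 9, 7]:  # twist rates mentioned in the text, inches/turn
    rps = spin_rate_rps(3000.0, twist)
    print(f'1:{twist}" twist -> {rps:,.0f} rev/s ({rps * 60:,.0f} rpm)')
```

The faster 1:7" twist thus spins a bullet twice as fast as the original 1:14" barrels, which is what allows the longer, lighter lead-free and tracer bullets to remain stable.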
KaiC "KaiC" is a gene belonging to the "KaiABC" gene cluster (with "KaiA" and "KaiB") that, together, regulate bacterial circadian rhythms, specifically in cyanobacteria. "KaiC" encodes the KaiC protein, which interacts with the KaiA and KaiB proteins in a post-translational oscillator (PTO). The PTO is the cyanobacterial master clock and is driven by ordered sequences of KaiC phosphorylation. Regulation of KaiABC expression and KaiABC phosphorylation is essential for cyanobacterial circadian rhythmicity, and is particularly important for regulating processes such as nitrogen fixation, photosynthesis, and cell division. Studies have shown similarities to the Drosophila, Neurospora, and mammalian clock models in that kaiABC regulation of the cyanobacterial slave circadian clock is likewise based on a transcription-translation feedback loop (TTFL). The KaiC protein has both auto-kinase and auto-phosphatase activity and acts as the circadian regulator in both the PTO and the TTFL. KaiC has been found not only to suppress kaiBC expression when overexpressed, but also to suppress the circadian expression of all genes in the cyanobacterial genome. Though the "KaiABC" gene cluster has been found only in cyanobacteria, "KaiC" has evolutionary homologs in Archaea and Proteobacteria; it is the oldest circadian gene that has been discovered in prokaryotes. "KaiC" has a double-domain structure and a sequence that classify it as part of the "RecA" family of ATP-dependent recombinases
https://en.wikipedia.org/wiki?curid=39103747
KaiC Based on a number of single-domain homologous genes in other species, "KaiC" is hypothesized to have been horizontally transferred from Bacteria to Archaea, eventually forming the double-domain "KaiC" through duplication and fusion. "KaiC"'s key role in circadian control and its homology to "RecA" suggest that it evolved individually before its presence in the "KaiABC" gene cluster. Takao Kondo, Susan S. Golden, and Carl H. Johnson discovered the gene cluster in 1998 and named it kaiABC, as "kai" means "cycle" in Japanese. They generated 19 different clock mutants that mapped to the kaiA, kaiB, and kaiC genes, and successfully cloned the gene cluster in the cyanobacterium Synechococcus elongatus. Using a bacterial luciferase reporter to monitor the expression of the clock-controlled gene psbAI in Synechococcus, they investigated and reported the rescue to normal rhythmicity of the long-period clock mutant C44a (with a period of 44 hours) by kaiABC. They inserted wild-type DNA through a pNIBB7942 plasmid vector into the C44a mutant and generated clones that restored the normal period of 25 hours. They were eventually able to localize the gene region causing this rescue, and observed circadian rhythmicity in the upstream promoter activity of kaiA and kaiB, as well as in the expression of kaiA and kaiBC messenger RNA. They determined that abolishing any of the three kai genes would cause arrhythmicity in the circadian clock and reduce kaiBC promoter activity
https://en.wikipedia.org/wiki?curid=39103747
KaiC KaiC was later found to have both autokinase and autophosphatase activity. These findings suggested that the circadian rhythm was controlled by a TTFL mechanism, consistent with other known biological clocks. In 2000, S. elongatus was observed in constant darkness (DD) and constant light (LL). In DD, transcription and translation halted due to the absence of light, but the circadian mechanism showed no significant phase shift after transitioning to constant light. In 2005, after closer examination of the KaiABC protein interactions, the phosphorylation of KaiC proved to oscillate with daily rhythms in the absence of light. In addition to the TTFL model, the PTO model was hypothesized for the KaiABC phosphorylation cycle. Also in 2005, Nakajima et al. lysed S. elongatus and isolated the KaiABC proteins. In test tubes containing only the KaiABC proteins and ATP, "in vitro" phosphorylation of KaiC oscillated with a near-24-hour period and a slightly smaller amplitude than the "in vivo" oscillation, proving that the KaiABC proteins are sufficient for circadian rhythmicity solely in the presence of ATP. Combined with the TTFL model, KaiABC acting as a circadian PTO was shown to be the fundamental clock regulator in S. elongatus. On "Synechococcus elongatus"' single circular chromosome, the protein-coding gene "kaiC" is located at position 380696–382255 (its locus tag is syc0334_d). The gene "kaiC" has paralogs "kaiB" (located 380338–380646) and "kaiA" (located 379394–380248). "kaiC" encodes the KaiC protein (519 amino acids)
https://en.wikipedia.org/wiki?curid=39103747
KaiC KaiC acts as a non-specific transcription regulator that represses transcription of the "kaiBC" promoter. Its crystal structure has been solved at 2.8 Å resolution: it is a homohexameric complex (approximately 360 kDa) with a double-doughnut structure and a central pore that is open at the N-terminal end and partially sealed at the C-terminal end by the presence of six arginine residues. The hexamer has twelve ATP molecules bound between the N-terminal (CI) and C-terminal (CII) domains, which demonstrate ATPase activity. The CI and CII domains are linked by the N-terminal region of the CII domain. The last 20 residues of the C-terminus of the CII domain protrude from the doughnut to form what is called the A-loop. Interfaces on KaiC's CII domain are sites of both auto-kinase and auto-phosphatase activity, "in vitro" and "in vivo". KaiC has two P-loops, or Walker A motifs (ATP-/GTP-binding motifs), one in each of the CI and CII domains; the CI domain also contains two DXXG motifs (where X is any amino acid) that are highly conserved among the GTPase superfamily. KaiC shares structural similarities with several other proteins that form hexameric rings, including RecA, DnaB and various ATPases. The hexameric ring of KaiC closely resembles that of RecA, with 8 α-helices surrounding a twisted β-sheet made up of 7 strands, a structure that favours the binding of a nucleotide at the carboxy end of the β-sheet. KaiC's structural similarities to these proteins suggest a role for KaiC in transcription regulation
https://en.wikipedia.org/wiki?curid=39103747
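The figures quoted above are mutually consistent, which a quick arithmetic check makes clear: the chromosomal coordinates give the coding-sequence length, and the protein length times an average residue mass approximates the hexamer mass. The ~110 Da average residue mass used below is a standard rough estimate, not a value from the text:

```python
# Consistency check of the kaiC figures quoted above.
start, end = 380696, 382255          # kaiC coordinates on the chromosome
length_nt = end - start + 1          # 1560 nucleotides
codons = length_nt // 3              # 520 codons
residues = codons - 1                # minus the stop codon -> 519 aa
print(f"{length_nt} nt -> {codons} codons -> {residues} amino acids")

avg_residue_mass = 110.0             # Da per residue (rough average)
monomer_kda = residues * avg_residue_mass / 1000.0
hexamer_kda = 6 * monomer_kda        # KaiC assembles as a homohexamer
print(f"monomer ~{monomer_kda:.0f} kDa, hexamer ~{hexamer_kda:.0f} kDa")
```

The estimate of roughly 340 kDa for the hexamer agrees with the "approximately 360 kDa" reported for the KaiC crystal structure.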
KaiC Further, the diameter of the rings in KaiC is suitable to accommodate single-stranded DNA, and the surface potential at the CII ring and the C-terminal channel opening is mostly positive. The compatible diameter and surface-potential charge together suggest that DNA may be able to bind to the C-terminal channel opening. The Kai proteins regulate genome-wide gene expression. KaiA enhances the phosphorylation of the KaiC protein by binding to the A-loop of the CII domain to promote auto-kinase activity during the subjective day. Phosphorylation of the KaiC subunits occurs in an ordered manner, beginning with phosphorylation of threonine 432 (T432) followed by serine 431 (S431) on the CII domain, which leads to tight stacking of the CII domain against the CI domain. KaiB then binds to the exposed B-loop on the CII domain of KaiC and sequesters KaiA away from the C-terminals during the subjective night, which inhibits phosphorylation and stimulates auto-phosphatase activity. Dephosphorylation of T432 occurs, followed by S431, returning KaiC to its original state. Disruption of KaiC's CI domain results both in arrhythmia of "kaiBC" expression and in reduced ATP-binding activity; this, along with the "in vitro" autophosphorylation of KaiC, indicates that ATP binding to KaiC is crucial for "Synechococcus" circadian oscillation. The phosphorylation status of KaiC has been correlated with "Synechococcus" clock speed "in vivo"
https://en.wikipedia.org/wiki?curid=39103747
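The ordered phosphorylation cycle just described can be summarized as a simple four-state loop. The sketch below walks a single KaiC hexamer through that loop; the six-hour dwell times are illustrative assumptions chosen only to suggest a roughly 24-hour cycle, not measured kinetics:

```python
import itertools

# Toy walk through the ordered phosphorylation states of one KaiC hexamer,
# following the sequence described above. Dwell times (hours) are assumed.
CYCLE = [
    ("U: unphosphorylated; KaiA binds the A-loop (subjective day)", 6),
    ("pT432: threonine 432 phosphorylated by auto-kinase activity", 6),
    ("pT432/pS431: doubly phosphorylated; CII-CI stacking, KaiB binds", 6),
    ("pS431: T432 dephosphorylated; KaiA sequestered, phosphatase active", 6),
]

t = 0
for state, dwell in itertools.islice(itertools.cycle(CYCLE), 8):
    print(f"t = {t:2d} h  {state}")
    t += dwell  # advance the clock by this state's dwell time
```

In the real system, KaiB-mediated sequestration of KaiA keeps a population of hexamers moving through this loop in synchrony, which is what produces the coherent near-24-hour rhythm observed "in vitro".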