Polyclonal antibodies This includes adjuvant selection, routes and sites of administration, injection volumes per site and number of sites per animal. Institutional policies generally specify allowable volumes of blood per collection and safety precautions, including appropriate restraint and sedation or anesthesia of animals, to prevent injury to animals or personnel. The primary goal of antibody production in laboratory animals is to obtain high-titer, high-affinity antisera for use in experimentation or diagnostic tests. Adjuvants are used to improve or enhance an immune response to antigens. Most adjuvants provide an injection-site antigen depot, which allows a slow release of antigen into draining lymph nodes. Many adjuvants also contain, or act directly as, immunostimulants; some antigens are by themselves poor immunogens. Most complex protein antigens induce multiple B-cell clones during the immune response; the response is therefore polyclonal. Immune responses to non-protein antigens are generally poor, are not enhanced by adjuvants, and leave no immune memory. Antibodies are currently also being produced by isolating human B-lymphocytes to produce specific recombinant monoclonal antibody mixtures. The biotechnology company Symphogen develops this type of antibody for therapeutic applications. It was the first research company to reach phase II trials with monoclonal antibody mixtures that mimic the diversity of polyclonal antibody drugs.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies This production method prevents viral and prion transmission, and the process itself is simple. Animals frequently used for polyclonal antibody production include chickens, goats, guinea pigs, hamsters, horses, mice, rats, and sheep. However, the rabbit is the most commonly used laboratory animal for this purpose. Animal selection should be based upon several factors. Goats or horses are generally used when large quantities of antisera are required. Many investigators favor chickens because of their phylogenetic distance from mammals. Chickens transfer high quantities of IgY (IgG) into the egg yolk, and harvesting antibodies from eggs eliminates the need for invasive bleeding procedures. One week's eggs can contain 10 times more antibodies than the volume of rabbit blood obtained from one weekly bleeding. However, there are some disadvantages when using certain chicken-derived antibodies in immunoassays. Chicken IgY does not fix mammalian complement component C1, and it does not perform as a precipitating antibody using standard solutions. Although mice are used most frequently for monoclonal antibody production, their small size usually prevents their use for obtaining sufficient quantities of polyclonal serum antibodies. However, polyclonal antibodies in mice can be collected from ascites fluid using any one of a number of ascites-producing methodologies. When using rabbits, young adult animals (2.5–3.0 kg or 5.5–6.5 lb) should be used for primary immunization because of their vigorous antibody response.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Immune function peaks at puberty, and primary responses to new antigens decline with age. Female rabbits are generally preferred because they are more docile and are reported to mount a more vigorous immune response than males. At least two animals per antigen should be used when using outbred animals; this reduces the risk of total failure resulting from non-responsiveness of individual animals to the antigen. The size, extent of aggregation and relative nativity of protein antigens can all dramatically affect the quality and quantity of antibody produced. Small polypeptides (<10 kDa) and non-protein antigens generally need to be conjugated or crosslinked to larger, immunogenic carrier proteins to increase immunogenicity and provide T-cell epitopes. Generally, the larger the immunogenic protein, the better. Larger proteins, even in smaller amounts, usually result in better engagement of antigen-presenting and antigen-processing cells and a satisfactory immune response. Injection of soluble, non-aggregated proteins has a higher probability of inducing tolerance rather than a satisfactory antibody response. Keyhole limpet hemocyanin (KLH) and bovine serum albumin are two widely used carrier proteins. Poly-L-lysine has also been used successfully as a backbone for peptides. Although the use of poly-L-lysine reduces or eliminates production of antibodies to foreign carrier proteins, it may result in failure of peptide-induced antibody production.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Recently, liposomes have also been successfully used for delivery of small peptides; this technique is an alternative to delivery with oily emulsion adjuvants. The antigen quantity selected for immunization varies with the properties of the antigen and the adjuvant selected. In general, microgram-to-milligram quantities of protein in adjuvant are necessary to elicit high-titer antibodies. Antigen dosage is generally associated with species rather than body weight. The so-called “window” of immunogenicity in each species is broad, but too much or too little antigen can induce tolerance, suppression or immune deviation towards cellular immunity rather than a satisfactory humoral response. Optimal and usual protein antigen levels for immunizing specific species have been reported as ranges; optimal “priming” doses are reported to be at the low end of each range. The affinity of serum antibodies increases with time (months) after injection of antigen-adjuvant mixtures, as antigen in the system decreases. Widely used antigen dosages for “booster” or secondary immunizations are usually one-half to equal the priming dosage. Antigens should be free of preparative byproducts and chemicals such as polyacrylamide gel, SDS, urea, endotoxin, particulate matter and extremes of pH. When a peptide is being used to generate the antibody, it is extremely important to design the antigen properly. There are several resources that can aid in the design, as well as companies that offer this service.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Expasy has aggregated a set of public tools under its ProtScale page that require some degree of user knowledge to navigate. For a simpler peptide-scoring option, an Antigen Profiler tool is available that scores individual peptide sequences against an epitope-mapping database of previous immunogens used to generate antibodies. Finally, as a general rule, peptides should follow some basic criteria. When examining peptides for synthesis and immunization, it is recommended that certain residues and sequences be avoided due to potential synthesis problems; this includes some of the more common problematic characteristics. Investigators should also consider the nativity of protein antigens used as immunogens and how the resulting antibodies will react. Antibodies to native proteins react best with native proteins, and antibodies to denatured proteins react best with denatured proteins. If elicited antibodies are to be used on membrane blots (proteins subjected to denaturing conditions), then antibodies should be made against denatured proteins. On the other hand, if antibodies are to be used to react with a native protein or block a protein's active site, then antibodies should be made against the native protein. Adjuvants can often alter the nativity of the protein. Generally, adsorbed protein antigens in a preformed oil-in-water emulsion adjuvant retain greater native protein structure than those in water-in-oil emulsions.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Antigens should always be prepared using techniques that ensure they are free of microbial contamination. Most protein antigen preparations can be sterilized by passage through a 0.22 μm filter. Septic abscesses often occur at inoculation sites when contaminated preparations are used; this can result in failure of immunization against the targeted antigen. There are many commercially available immunologic adjuvants. Selection of specific adjuvants or types varies depending upon whether they are to be used for research and antibody production or in vaccine development. Adjuvants for vaccine use need only produce protective antibodies and good systemic memory, while those for antiserum production need to rapidly induce high-titer, high-avidity antibodies. No single adjuvant is ideal for all purposes, and all have advantages and disadvantages. Adjuvant use is generally accompanied by undesirable side effects of varying severity and duration. Research on new adjuvants focuses on substances which have minimal toxicity while retaining maximum immunostimulation. Investigators should always be aware of potential pain and distress associated with adjuvant use in laboratory animals. The most frequently used adjuvants for antibody production are Freund's, alum, the Ribi Adjuvant System and Titermax. There are two basic types of Freund's adjuvants: Freund's Complete Adjuvant (FCA) and Freund's Incomplete Adjuvant (FIA).
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies FCA is a water-in-oil emulsion that localizes antigen for release periods of up to 6 months. It is formulated with mineral oil, the surfactant mannide monooleate, and heat-killed "Mycobacterium tuberculosis", "Mycobacterium butyricum" or their extracts (for aggregation of macrophages at the inoculation site). This potent adjuvant stimulates both cell-mediated and humoral immunity, with preferential induction of antibody against epitopes of denatured proteins. Although FCA has historically been the most widely used adjuvant, it is one of the more toxic agents due to its non-metabolizable mineral oil, and it induces granulomatous reactions. Its use is limited to laboratory animals, and it should be used only with weak antigens. It should not be used more than once in a single animal, since multiple FCA inoculations can cause severe systemic reactions and decreased immune responses. Freund's Incomplete Adjuvant has the same formulation as FCA but does not contain mycobacteria or their components. FIA is usually limited to booster doses of antigen, since it is normally much less effective than FCA for primary antibody induction. Freund's adjuvants are normally mixed with equal parts of antigen preparations to form stable emulsions. Ribi adjuvants are oil-in-water emulsions in which antigens are mixed with small volumes of a metabolizable oil (squalene), which are then emulsified with saline containing the surfactant Polysorbate 80.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies This system also contains refined mycobacterial products (cord factor, cell wall skeleton) as immunostimulants, along with bacterial monophosphoryl lipid A. Three different species-oriented formulations of the adjuvant system are available. These adjuvants interact with membranes of immune cells, resulting in cytokine induction, which enhances antigen uptake, processing and presentation. This adjuvant system is much less toxic and less potent than FCA but generally induces satisfactory amounts of high-avidity antibodies against protein antigens. Titermax represents a newer generation of adjuvants that are less toxic and contain no biologically derived materials. It is based upon mixtures of surfactant-acting linear blocks or chains of the nonionic copolymers polyoxypropylene (POP) and polyoxyethylene (POE). These copolymers are less toxic than many other surfactant materials and have potent adjuvant properties which favor chemotaxis, complement activation and antibody production. Titermax adjuvant forms a microparticulate water-in-oil emulsion with the copolymer and metabolizable squalene oil. The copolymer is coated with emulsion-stabilizing silica particles, which allows incorporation of large amounts of a wide variety of antigenic materials. The adjuvant-active copolymer forms hydrophilic surfaces, which activate complement and immune cells and increase expression of class II major histocompatibility complex molecules on macrophages.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Titermax presents antigen in a highly concentrated form to the immune system, which often results in antibody titers comparable to or higher than those obtained with FCA. Specol is a water-in-oil adjuvant made of purified mineral oil. It has been reported to induce immune responses comparable to Freund's adjuvant in rabbits and other research animals while producing fewer histological lesions. Digoxin Immune Fab is the antigen-binding fragment of polyclonal antibodies raised to a digitalis derivative (as a hapten bound to a protein) and is used for the reversal of life-threatening digoxin or digitoxin toxicity. Rho(D) immune globulin is made from pooled human plasma provided by Rh-negative donors with antibodies to the D antigen. It is used to provide passive immune binding of antigen, preventing a maternal active immune response which could otherwise result in hemolytic disease of the newborn. Rozrolimupab is an anti-RhD recombinant human polyclonal antibody composed of 25 unique IgG1 antibodies; it is used for the treatment of immune thrombocytopenic purpura and prevention of isoimmunization in Rh-negative pregnant women. The use of polyclonal antibodies (PAbs) over monoclonal antibodies has its advantages. The technical skills needed to produce polyclonal antibodies are not as demanding. They are inexpensive to make and can be generated fairly quickly, taking at most several months to produce. PAbs are heterogeneous, which allows them to bind to a wide range of antigen epitopes.
https://en.wikipedia.org/wiki?curid=1111019
Polyclonal antibodies Because PAbs are produced from a large number of B-cell clones, they are more likely to successfully bind to a specific antigen. PAbs remain stable in different environments, such as a change in pH or salt concentration, which makes them more applicable in certain procedures. Additionally, depending on the amount needed, PAbs can be made in large quantities relative to the size of the animal used.
https://en.wikipedia.org/wiki?curid=1111019
Dry etching refers to the removal of material, typically a masked pattern of semiconductor material, by exposing the material to a bombardment of ions (usually a plasma of reactive gases such as fluorocarbons, oxygen, chlorine or boron trichloride, sometimes with the addition of nitrogen, argon, helium and other gases) that dislodge portions of the material from the exposed surface. A common type of dry etching is reactive-ion etching. Unlike many (but not all; see isotropic etching) of the wet chemical etchants used in wet etching, the dry etching process typically etches directionally, or anisotropically. Dry etching is used in conjunction with photolithographic techniques to attack certain areas of a semiconductor surface in order to form recesses in material, such as contact holes (contacts to the underlying semiconductor substrate) or via holes (holes formed to provide an interconnect path between conductive layers in the layered semiconductor device), or to otherwise remove portions of semiconductor layers where predominantly vertical sides are desired. Along with semiconductor manufacturing, micromachining and display production, the removal of organic residues by oxygen plasmas is sometimes correctly described as a dry etch process; the term plasma ashing can be used instead. Dry etching is particularly useful for materials and semiconductors which are chemically resistant and cannot be wet etched, such as silicon carbide or gallium nitride.
https://en.wikipedia.org/wiki?curid=1112733
Dry etching is currently used in semiconductor fabrication processes due to its unique ability over wet etch to do anisotropic etching (removal of material) to create high aspect ratio structures (e.g. deep holes or capacitor trenches). The dry etching hardware design basically involves a vacuum chamber, special gas delivery system, RF waveform generator and an exhaust system. The anisotropic dry etching process was developed by Hwa-Nien Yu at the IBM T.J. Watson Research Center in the early 1970s. It was used by Yu with Robert H. Dennard to fabricate the first micron-scale MOSFETs (metal-oxide-semiconductor field-effect transistors) in the 1970s.
https://en.wikipedia.org/wiki?curid=1112733
Mean high water springs (MHWS) is the highest level that spring tides reach on average over a period of time (often 19 years). The height of mean high water springs is the average, throughout the year (when the average maximum declination of the moon is 23.5°), of two successive high waters during those periods of 24 hours when the range of the tide is at its greatest. This level is generally close to the "high water mark" where debris accumulates on the shore annually.
https://en.wikipedia.org/wiki?curid=1112787
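The averaging idea behind MHWS can be sketched in code. This is a toy illustration on hypothetical data, not a tidal-analysis method: real MHWS values come from long-term (often 19-year) observation series, whereas here "spring" high waters are simply taken to be the records with the largest associated tidal range.

```python
# Toy sketch: average the high waters that occur when the tidal range
# is greatest, mimicking the MHWS definition above. All data hypothetical.

def mhws(high_waters, tidal_ranges, top_fraction=0.25):
    """Mean of the high-water heights associated with the largest
    tidal ranges (the 'spring' subset of the records)."""
    paired = sorted(zip(tidal_ranges, high_waters), reverse=True)
    n = max(1, int(len(paired) * top_fraction))
    springs = [hw for _, hw in paired[:n]]
    return sum(springs) / len(springs)

# Hypothetical high-water heights (m) and the tidal range (m) at each one:
heights = [4.1, 3.2, 4.4, 3.0, 4.6, 3.1, 4.3, 2.9]
ranges_m = [5.0, 2.1, 5.6, 1.9, 6.0, 2.0, 5.4, 1.8]
print(mhws(heights, ranges_m))  # averages the two spring highs: 4.5
```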
Campylite is a variety of the lead arsenate mineral mimetite, named from the Greek 'kampylos' (bent) on account of the barrel-shaped curvature of its crystals. It has also been used as an alternate name for pyromorphite. It occurs in the upper parts of lead deposits through the oxidation of galena or cerussite. The main deposits are Příbram in Bohemia and Dry Gill, Caldbeck Fells, near Wigton, Cumbria, England.
https://en.wikipedia.org/wiki?curid=1118061
Vaterite is a mineral, a polymorph of calcium carbonate (CaCO₃). It was named after the German mineralogist Heinrich Vater. It is also known as mu-calcium carbonate (μ-CaCO₃) and has a JCPDS number of 13-192. Vaterite belongs to the hexagonal crystal system, whereas calcite is trigonal and aragonite is orthorhombic. Vaterite, like aragonite, is a metastable phase of calcium carbonate at ambient conditions at the surface of the earth. As it is less stable than either calcite or aragonite, vaterite has a higher solubility than either of these phases. Therefore, once vaterite is exposed to water, it converts to calcite (at low temperature) or aragonite (at high temperature: ~60 °C). At 37 °C, for example, a solution-mediated transition from vaterite to calcite occurs, in which the vaterite dissolves and subsequently precipitates as calcite, assisted by an Ostwald ripening process. However, vaterite does occur naturally in mineral springs, organic tissue, gallstones, urinary calculi and plants. In those circumstances, some impurities (metal ions or organic matter) may stabilize the vaterite and prevent its transformation into calcite or aragonite. Vaterite is usually colorless, its shape is spherical, and its diameter is small, ranging from 0.05 to 5 μm. Vaterite can be produced as the first mineral deposited in the repair of natural or experimentally induced shell damage in some aragonite-shelled mollusks (e.g. gastropods); subsequent shell deposition occurs as aragonite.
https://en.wikipedia.org/wiki?curid=1123875
Vaterite In 2018, vaterite was identified as a constituent of a deposit formed on the leaves of "Saxifraga" at Cambridge University Botanic Garden.
https://en.wikipedia.org/wiki?curid=1123875
Hans Jenny (cymatics) Hans Jenny (16 August 1904, Basel – 23 June 1972, Dornach) was a physician and natural scientist who coined the term cymatics to describe acoustic effects of sound wave phenomena. Jenny was born in Basel, Switzerland. After completing a doctorate he taught science at the Rudolf Steiner School in Zürich for four years before beginning medical practice. In 1967, Jenny published the first volume of "Cymatics: The Study of Wave Phenomena." The second volume came out in 1972, the year he died. This book was a written and photographic documentation of the effects of sound vibrations on fluids, powders and liquid paste. He concluded, "This is not an unregulated chaos; it is a dynamic but ordered pattern." Jenny made use of crystal oscillators and his so-called tonoscope to set plates and membranes vibrating. He spread quartz sand onto a black drum membrane 60 cm in diameter. The membrane was caused to vibrate by singing loudly through a cardboard pipe, and the sand produced symmetrical Chladni patterns, named after Ernst Chladni, who had discovered this phenomenon in 1787. Low tones resulted in rather simple and clear pictures, while higher tones formed more complex structures. Chladni's and Jenny's work influenced Alvin Lucier and helped lead to his composition "Queen of the South". Cymatics was also followed up by Center for Advanced Visual Studies (CAVS) founder György Kepes at MIT. His work in this area included an acoustically vibrated piece of sheet metal in which small holes had been drilled in a grid.
https://en.wikipedia.org/wiki?curid=1125112
Hans Jenny (cymatics) Small flames of gas burned through these holes and thermodynamic patterns were made visible by this setup. A special edition of the Hafler Trio's work "Exactly As I Say" includes a DVD containing material said to be "based on and extended from techniques suggested by Prof. Hans Jenny". Photographer Alexander Lauterwasser has also captured imagery of water surfaces set into motion by sound sources ranging from sine waves to music by Beethoven, Karlheinz Stockhausen and overtone singing.
https://en.wikipedia.org/wiki?curid=1125112
Zenker's fixative is a rapid-acting fixative for animal tissues. It is employed to prepare specimens of animal or vegetable tissues for microscopic study. It provides excellent fixation of nuclear chromatin, connective tissue fibers and some cytoplasmic features, but does not preserve delicate cytoplasmic organelles such as mitochondria. Helly's fixative is preferable for traditional dye staining of mitochondria. Zenker's fixative contains mercuric chloride ("corrosive sublimate"), potassium dichromate, sodium sulfate, water, and acetic acid. Fixatives containing mercuric chloride or potassium dichromate are toxic, making disposal as hazardous waste costly. Mercuric chloride can be replaced with the same weight of less toxic zinc chloride, but the resulting "zinc-Zenker" may not give the same quality of fixation as the original mixture. This fixative is named after Konrad Zenker, a German histologist who died in 1894 (Baker 1958). Zenker's is usually made with 50 g of mercuric chloride, 25 g of potassium dichromate, 10 g of sodium sulfate (decahydrate) and distilled water to complete 1000 ml. Before use, 5 ml of glacial acetic acid is added to 100 ml of the solution. Both the stock solution and the complete Zenker fixative are stable for many years. If the glacial acetic acid is replaced by 5 ml of formalin (37–40% formaldehyde), the resulting solution is Helly's fixative, also sometimes called "formol-Zenker".
https://en.wikipedia.org/wiki?curid=1126046
Zenker's fixative Helly is stable for only a few hours because the formaldehyde and dichromate components react, producing formic acid and chromium(III) ions; the orange solution becomes greenish.
https://en.wikipedia.org/wiki?curid=1126046
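The stock recipe quoted above (50 g + 25 g + 10 g per 1000 ml, then 5 ml acetic acid per 100 ml of stock before use) scales linearly; a small sketch of that arithmetic, with the quantities taken directly from the text:

```python
# Scaling the Zenker's stock recipe (amounts from the text above)
# to an arbitrary batch volume. Purely arithmetic illustration.

STOCK_PER_LITRE = {  # grams of each solid per 1000 ml of stock
    "mercuric chloride": 50.0,
    "potassium dichromate": 25.0,
    "sodium sulfate (decahydrate)": 10.0,
}

def stock_recipe(volume_ml):
    """Gram amounts of each solid for `volume_ml` of stock solution."""
    return {name: grams * volume_ml / 1000.0
            for name, grams in STOCK_PER_LITRE.items()}

def acetic_acid_ml(stock_ml):
    """Glacial acetic acid added to `stock_ml` of stock just before use
    (5 ml per 100 ml of stock)."""
    return stock_ml * 5.0 / 100.0

print(stock_recipe(250))    # quarter-litre batch of stock
print(acetic_acid_ml(250))  # 12.5 ml acetic acid for that batch
```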
Isoschizomer Isoschizomers are pairs of restriction enzymes specific to the same recognition sequence. For example, SphI (GCATG/C) and BbuI (GCATG/C) are isoschizomers of each other. The first enzyme discovered which recognizes a given sequence is known as the prototype; all subsequently identified enzymes that recognize that sequence are isoschizomers. Isoschizomers are isolated from different strains of bacteria and therefore may require different reaction conditions. In some cases, only one of a pair of isoschizomers can recognize both the methylated and unmethylated forms of a restriction site, while the other recognizes only the unmethylated form. This property allows the methylation state of a restriction site to be identified when isolating DNA from a bacterial strain. For example, the restriction enzymes HpaII and MspI are isoschizomers, as they both recognize the sequence 5'-CCGG-3' when it is unmethylated. But when the second C of the sequence is methylated, only MspI can recognize and cut it, while HpaII cannot. An enzyme that recognizes the same sequence but cuts it differently is a neoschizomer; neoschizomers are a specific subset of isoschizomers. For example, SmaI (CCC/GGG) and XmaI (C/CCGGG) are neoschizomers of each other, as are KpnI (GGTAC/C) and Acc65I (G/GTACC). An enzyme that recognizes a slightly different sequence but produces the same ends is an isocaudomer.
https://en.wikipedia.org/wiki?curid=1127940
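The neoschizomer distinction can be made concrete with a toy digest simulation: both enzymes find the same recognition site, but the top-strand cut offset differs (SmaI cuts in the middle, leaving blunt ends; XmaI cuts after the first base, leaving overhangs). This is a simplified single-strand sketch, not a model of real double-strand chemistry:

```python
# Toy simulation: a neoschizomer pair recognizes the same site (CCCGGG)
# but cuts the top strand at different offsets, giving different fragments.
ENZYMES = {
    "SmaI": ("CCCGGG", 3),  # CCC/GGG -> blunt ends
    "XmaI": ("CCCGGG", 1),  # C/CCGGG -> 4-base 5' overhangs
}

def digest(seq, enzyme):
    """Return top-strand fragments of `seq` after cutting at every site."""
    site, offset = ENZYMES[enzyme]
    fragments, start = [], 0
    i = seq.find(site)
    while i != -1:
        fragments.append(seq[start:i + offset])  # cut `offset` bases into the site
        start = i + offset
        i = seq.find(site, i + 1)
    fragments.append(seq[start:])
    return fragments

dna = "AATTCCCGGGTTAA"
print(digest(dna, "SmaI"))  # ['AATTCCC', 'GGGTTAA'] — same site, middle cut
print(digest(dna, "XmaI"))  # ['AATTC', 'CCGGGTTAA'] — same site, offset cut
```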
Andromeda I is a dwarf spheroidal galaxy (dSph) about 2.40 million light-years away in the constellation Andromeda. It is part of the Local Group of galaxies and a satellite galaxy of the Andromeda Galaxy (M31). It lies roughly 3.5 degrees south and slightly east of M31. As of 2005, it is the closest known dSph companion to M31, at an estimated projected distance of ~40 kpc or ~150,000 light-years. Andromeda I was discovered by Sidney van den Bergh in 1970 with the Mount Palomar Observatory 48-inch telescope. Further study of the galaxy was done with the WFPC2 camera of the Hubble Space Telescope. This found that the horizontal branch stars, as in other dwarf spheroidal galaxies, were predominantly red. This, together with the abundance of blue horizontal branch stars and the 99 RR Lyrae stars detected in 2005, led to the conclusion that there was an extended epoch of star formation. The estimated age is approximately 10 Gyr. The Hubble telescope also found a globular cluster in Andromeda I, making it the least luminous galaxy in which such a cluster has been found.
https://en.wikipedia.org/wiki?curid=1128826
Substantial equivalence In food safety, the concept of substantial equivalence holds that the safety of a new food, particularly one that has been genetically modified (GM), may be assessed by comparing it with a similar traditional food that has proven safe in normal use over time. It was first formulated as a food safety policy in 1993 by the Organisation for Economic Co-operation and Development (OECD). As the initial step of a food safety testing process, substantial equivalence establishes the toxicological and nutritional differences between the new food and a conventional counterpart; the differences are analyzed and evaluated, and further testing may be conducted, leading to a final safety assessment. Substantial equivalence is the underlying principle in GM food safety assessment for a number of national and international agencies, including the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations' Food and Agriculture Organization (FAO) and World Health Organization. The concept of comparing genetically modified foods to traditional foods as a basis for safety assessment was first introduced as a recommendation during the 1990 Joint FAO/WHO Expert Consultation on biotechnology and food safety (a scientific conference of officials and industry), although the term "substantial equivalence" was not used.
https://en.wikipedia.org/wiki?curid=1129005
Substantial equivalence Adopting the term, "substantial equivalence" was formulated as a food safety policy by the OECD, first described in its 1993 report, "Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles". The term was borrowed from the FDA's 1976 substantial equivalence definition for new medical devices—under Premarket Notification 510(k), a new Class II device that is essentially similar to an existing device can be cleared for release without further testing. The underlying approach of comparing a new product or technique to an existing one has long been used in various fields of science and technology. The OECD bases the substantial equivalence principle on a definition of food safety under which a food can be assumed safe for consumption if it has been eaten over time without evident harm. It recognizes that traditional foods may naturally contain toxic components (usually called antinutrients)—such as the glycoalkaloids solanine in potatoes and alpha-tomatine in tomatoes—which do not affect their safety when prepared and eaten in traditional ways. The report proposes that, while biotechnology broadens the scope of food modification, it does not inherently introduce additional risk, and therefore GM products may be assessed in the same way as conventionally bred products. Further, the relative precision of biotech methods should allow assessment to be focused on the most likely problem areas.
https://en.wikipedia.org/wiki?curid=1129005
Substantial equivalence The concept of substantial equivalence is then described as a comparison between a GM food and a similar conventional food, taking into account food processing and how the food is normally consumed, including quantity, dietary patterns, and the characteristics of the consumer population. Substantial equivalence is the starting point for GM food safety assessment. It can be applied at different points in the food chain, from unprocessed harvested crop to final ingredient or product, depending on the nature of the product and its intended use. For a GM plant, the overall evaluation process may be viewed in four phases. There has been discussion about applying new biochemical concepts and methods in evaluating substantial equivalence, such as metabolic profiling and protein profiling. These concepts refer, respectively, to the complete measured biochemical spectrum (total fingerprint) of compounds (metabolites) or of proteins present in a food or crop. The goal would be to compare the overall biochemical profile of a new food to that of an existing food, to see if the new food's profile falls within the range of natural variation already exhibited by existing foods or crops. However, these techniques are not considered sufficiently evaluated, and standards have not yet been developed for applying them. Approaches to GM food regulation vary by country, while substantial equivalence is generally the underlying principle of GM food safety assessment.
https://en.wikipedia.org/wiki?curid=1129005
Substantial equivalence This is the case for national and international agencies including the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations' Food and Agriculture Organization (FAO) and World Health Organization. In 1997, the European Union established a novel food assessment procedure whereby, once the producer has confirmed substantial equivalence with an existing food, government notification with accompanying scientific evidence is the only requirement for commercial release; however, foods containing genetically modified organisms (GMOs) are excluded and require mandatory authorization. To establish substantial equivalence, the modified product is tested by the manufacturer for unexpected changes to a targeted set of components, such as toxins, nutrients, or allergens, that are present in a similar unmodified food. The manufacturer's data are then assessed by a regulatory agency. If regulators determine that there is no significant difference between the modified and unmodified products, then there will generally be no further requirement for food safety testing. However, if the product has no natural equivalent, shows significant differences from the unmodified food, or for other reasons that regulators may have (for instance, if a gene produces a protein that has not been a food component before), further safety testing may be required.
https://en.wikipedia.org/wiki?curid=1129005
Substantial equivalence There have been criticisms of the effectiveness of substantial equivalence.
https://en.wikipedia.org/wiki?curid=1129005
Softening is a numerical trick used in N-body techniques to prevent numerical divergences when a particle comes too close to another (and the force goes to infinity). This is obtained by modifying the gravitational potential of each particle to Φ(r) = −Gm / √(r² + ε²), where ε is the softening parameter. The value of the softening parameter should be set small enough to keep simulations realistic.
https://en.wikipedia.org/wiki?curid=1130616
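The softened potential described above can be sketched in a few lines. This is a minimal illustration assuming the Plummer form; the function name and the default value of the softening parameter are our own choices, not taken from the text.

```python
def softened_accel(pos_i, pos_j, m_j, G=1.0, eps=0.05):
    """Acceleration on particle i due to particle j with Plummer softening.

    The softened potential phi = -G*m_j / sqrt(r^2 + eps^2) yields an
    acceleration that stays finite as r -> 0, instead of diverging as 1/r^2.
    """
    dx = [b - a for a, b in zip(pos_i, pos_j)]      # vector from i to j
    r2 = sum(d * d for d in dx)
    denom = (r2 + eps * eps) ** 1.5                  # softened |r|^3
    return [G * m_j * d / denom for d in dx]
```

For separations much larger than ε the result approaches the Newtonian value, while two coincident particles exert zero force on each other rather than an infinite one.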
Symplast The symplast of a plant is the inner side of the plasma membrane in which water and low-molecular-weight solutes can freely diffuse. The plasmodesmata allow the direct flow of small molecules such as sugars, amino acids, and ions between cells. Larger molecules, including transcription factors and plant viruses, can also be transported through with the help of actin structures. This allows direct cytoplasm-to-cytoplasm flow of water and other nutrients along concentration gradients. In particular, symplastic flow is used in the root systems to bring in nutrients from soil. It moves these solutes from epidermis cells through the cortex into the endodermis. Once solutes reach the endodermal cells through apoplastic flow, they are forced into the symplastic pathway due to the presence of the Casparian strip. Once the solutes are passively filtered, they eventually reach the pericycle, where they can be moved into the xylem for long-distance transport. It is contrasted with the apoplastic flow, which uses cell wall transport. Symplastic transport was first realized by Eduard Tangl in 1879, who also discovered the plasmodesmata, a term coined by Eduard Strasburger in 1901. In 1880, Hanstein coined the term symplast. The contrasting terms apoplast and symplast were used together in 1930 by Münch.
https://en.wikipedia.org/wiki?curid=1131025
Antisense therapy is a form of treatment for genetic disorders or infections. When the genetic sequence of a particular gene is known to cause a particular disease, it is possible to synthesize a strand of nucleic acid (DNA, RNA or a chemical analogue) that will bind to the messenger RNA (mRNA) produced by that gene and inactivate it, effectively turning that gene "off". This is because mRNA has to be single-stranded for it to be translated. Alternatively, the strand might be targeted to bind a splicing site on pre-mRNA and modify the exon content of an mRNA. Antisense therapies are not gene therapies, and should be considered RNA-based drug discovery, as they have only a few elements in common with gene therapy. This synthesized nucleic acid is termed an "antisense" oligonucleotide (ASO) because its base sequence is complementary to the gene's messenger RNA (mRNA), which is called the "sense" sequence (so that a sense segment of mRNA "5'-AAGGUC-3'" would be blocked by the antisense mRNA segment "3'-UUCCAG-5'"). Antisense oligonucleotides have been researched as potential drugs for diseases such as cancers (including lung cancer, colorectal carcinoma, pancreatic carcinoma, malignant glioma and malignant melanoma), diabetes, amyotrophic lateral sclerosis (ALS), Parkinson's disease, Duchenne muscular dystrophy, spinal muscular atrophy, ataxia–telangiectasia (in vitro) and diseases with an inflammatory component such as asthma, arthritis and pouchitis. As of 2016, several antisense drugs have been approved by the U.S
https://en.wikipedia.org/wiki?curid=1137144
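The sense/antisense complementarity described above can be illustrated with a short, hypothetical helper (the function name is ours); it reproduces the article's 5'-AAGGUC-3' / 3'-UUCCAG-5' example.

```python
def antisense(mrna_5_to_3):
    """Return the antisense strand, written 3'->5', for an mRNA sense
    sequence written 5'->3'. RNA base pairing: A-U and G-C."""
    pair = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pair[base] for base in mrna_5_to_3)

# The article's example: sense 5'-AAGGUC-3' is blocked by antisense 3'-UUCCAG-5'
print(antisense("AAGGUC"))  # UUCCAG
```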
Antisense therapy Food and Drug Administration (FDA): fomivirsen as a treatment for cytomegalovirus retinitis, mipomersen for homozygous familial hypercholesterolemia, eteplirsen for Duchenne muscular dystrophy, and nusinersen for spinal muscular atrophy. In 2019, a report was published detailing the development of milasen, an antisense oligonucleotide drug for Batten disease, under an expanded-access investigational clinical protocol authorized by the FDA. Milasen "itself remains an investigational drug, and it is not suited for the treatment of other patients with Batten's disease" because it was customized for a single patient's specific mutation. However, it is an example of an individualized genomic-medicine therapeutic intervention. As of 2012, some 40 antisense oligonucleotides and siRNAs were in clinical trials, including over 20 in advanced clinical trials (Phase II or III). With U.S. FDA approval, milasen was rationally designed, tested, and deployed as a novel individualized therapeutic agent for the treatment of Batten disease in the eponymous patient, Mila (Mee-lah) Makovec. This therapy serves as a pioneering example of personalized medicine. In 2006, German physicians reported on a dose-escalation study for the compound AP 12009 (a phosphorothioate antisense oligodeoxynucleotide specific for the mRNA of human transforming growth factor TGF-beta2) in patients with high-grade gliomas
https://en.wikipedia.org/wiki?curid=1137144
Antisense therapy At the time of the report, the median overall survival had not been reached and the authors hinted at a potential cure. Fomivirsen (marketed as Vitravene) was approved by the U.S. FDA in August 1998 as a treatment for cytomegalovirus retinitis. In January 2013, mipomersen (marketed as Kynamro) was approved by the FDA for the treatment of homozygous familial hypercholesterolemia. In early 2006, scientists studying the Ebola hemorrhagic fever virus at USAMRIID announced a 75% recovery rate after infecting four rhesus monkeys and then treating them with an antisense morpholino drug developed by Sarepta Therapeutics (formerly named AVI BioPharma), a U.S. biotechnology firm. The usual mortality rate for monkeys infected with Ebola virus is 100%. In late 2008, AVI BioPharma successfully filed Investigational New Drug (IND) applications with the FDA for its two lead products for Marburg and Ebola viruses. These drugs, AVI-6002 and AVI-6003, are novel analogs based on AVI's PMO antisense chemistry in which anti-viral potency is enhanced by the addition of positively charged components to the morpholino oligomer chain. Preclinical results of AVI-6002 and AVI-6003 demonstrated reproducible and high rates of survival in non-human primates challenged with a lethal infection of the Ebola and Marburg viruses, respectively. Starting in 2004, researchers in the US have been conducting research on using antisense technology to combat HIV
https://en.wikipedia.org/wiki?curid=1137144
Antisense therapy In February 2010 researchers reported success in reducing HIV viral load using patient T-cells which had been harvested, modified with an RNA antisense strand to the HIV viral envelope protein, and re-infused into the patient during a planned lapse in retroviral drug therapy. In 2004, development of an antisense therapy for spinal muscular atrophy was started. Over the following years, an antisense oligonucleotide later named nusinersen was developed by Ionis Pharmaceuticals under a licensing agreement with Biogen. In December 2016, nusinersen received regulatory approval from FDA for use to treat spinal muscular atrophy. Several morpholino oligos have been approved to treat specific groups of mutations causing Duchenne muscular dystrophy. In September 2016 eteplirsen (ExonDys51) received FDA approval for the treatment of cases that can benefit from skipping exon 51 of the dystrophin transcript. In December 2019 golodirsen (Vyondys 53) received FDA approval for the treatment of cases that can benefit from skipping exon 53 of the dystrophin transcript. Volanesorsen is in phase 3 clinical trials for treating hypertriglyceridemia as of December 2016. IONIS-HTTRx (also referred to by its investigational name RG6042) is an antisense drug in Phase 3 clinical trials for the treatment of Huntington's disease. The common stem for antisense oligonucleotides is -rsen. The substem -virsen designates antiviral antisense oligonucleotides
https://en.wikipedia.org/wiki?curid=1137144
Antisense therapy Because nucleases that cleave the phosphodiester linkage in DNA are expressed in almost every cell, unmodified DNA molecules are generally degraded before they reach their targets. Therefore, antisense drug candidate molecules are generally modified during the drug discovery phase of their development. Additionally, most targets of antisense are located inside cells, and getting nucleic acids across cell membranes is also difficult. Therefore, most clinical candidates have modified DNA "backbones", or the nucleobase or sugar moieties of the nucleotides are altered. Additionally, other molecules may be conjugated to antisense molecules in order to improve their ability to target certain cells or to cross barriers like cell membranes or the blood–brain barrier.
https://en.wikipedia.org/wiki?curid=1137144
Jean Jacques Dozy (18 June 1908, in Rotterdam – 1 November 2004, in The Hague) was a Dutch geologist. In 1936, he participated in the Dutch Carstensz Expedition in Dutch New Guinea to explore and climb Mount Carstensz, the highest mountain of the island of New Guinea. Besides succeeding in climbing the highest point at the time with Anton Colijn and Frits Wissel, Dozy discovered the presence of abundant copper ore in a mountain he called Ertsberg (English, "Ore mountain"). Years later this gave rise to the Grasberg copper mine. In 1939, he published an article about his find, but it was neglected due to World War II. Twenty years later, the article led to rediscovery of the Ertsberg and the development of the Ertsberg-Grasberg mine complex.
https://en.wikipedia.org/wiki?curid=1137807
Auramine–rhodamine stain The auramine–rhodamine stain (AR), also known as the "Truant auramine–rhodamine stain", is a histological technique used to visualize acid-fast bacilli using fluorescence microscopy, notably species in the "Mycobacterium" genus. Acid-fast organisms display a reddish-yellow fluorescence. Although the auramine–rhodamine stain is not as specific for acid-fast organisms (i.e. "Mycobacterium tuberculosis" or "Nocardia") as the Ziehl–Neelsen stain, it is more affordable and more sensitive; therefore, it is often utilized as a screening tool. AR stain is a mixture of auramine O and rhodamine B. It is carcinogenic.
https://en.wikipedia.org/wiki?curid=1144734
Syncline In structural geology, a syncline is a fold with younger layers closer to the center of the structure, whereas an anticline is the inverse of a syncline. A synclinorium (plural synclinoriums or synclinoria) is a large syncline with superimposed smaller folds. Synclines are typically a downward fold (synform), termed a synformal syncline (i.e. a trough), but synclines that point upwards can be found when strata have been overturned and folded (an antiformal syncline). On a geologic map, synclines are recognized as a sequence of rock layers, with the youngest at the fold's center or "hinge" and with a reverse sequence of the same rock layers on the opposite side of the hinge. If the fold pattern is circular or elongate, the structure is a basin. Folds typically form during crustal deformation as the result of compression that accompanies orogenic mountain building.
https://en.wikipedia.org/wiki?curid=1146946
Range fractionation is a term used in biology to denote varying firing thresholds for different stimulus intensities. Sense organs are usually composed of many sensory receptors measuring the same property. These sensory receptors show a limited degree of precision due to an upper limit in firing rate. If the receptors are endowed with distinct transfer functions in such a way that the points of highest sensitivity are scattered along the axis of the quality being measured, the precision of the sense organ as a whole can be increased. This was shown for the chordotonal organ in the locust leg.
https://en.wikipedia.org/wiki?curid=1148992
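A toy model may make range fractionation concrete. The linear-saturating transfer function and the specific threshold values below are illustrative assumptions, not taken from the locust study.

```python
def receptor_rate(stimulus, threshold, max_rate=100.0, gain=50.0):
    """Single receptor: fires above its threshold, rate saturating at max_rate."""
    return max(0.0, min(max_rate, gain * (stimulus - threshold)))

def organ_response(stimulus, thresholds=(0.0, 2.0, 4.0, 6.0)):
    """Summed rate of a population of receptors whose points of highest
    sensitivity are staggered ("fractionated") along the stimulus axis."""
    return sum(receptor_rate(stimulus, t) for t in thresholds)
```

A single receptor with threshold 0 saturates and cannot distinguish stimulus intensities 3 and 5, while the population response keeps increasing, giving the organ as a whole a wider and finer operating range.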
Hyperactivation is a type of sperm motility. Hyperactivated sperm motility is characterised by a high-amplitude, asymmetrical beating pattern of the sperm tail (flagellum). This type of motility may aid in sperm penetration of the zona pellucida, which encloses the ovum. Hyperactivation could then be followed by the acrosome reaction, in which the cap-like structure on the head of the cell releases the enzymes it contains. This facilitates the penetration of the ovum and fertilisation. Some definitions consider sperm activation to consist of these two processes of hyperactivation and the acrosome reaction. Hyperactivation is a term also used to express an X chromosome gene dosage compensation mechanism, as seen in Drosophila. Here, a complex of proteins binds to the X-linked genes to effectively double their genetic activity. This allows males (XY) to have genetic activity equal to that of females (XX), whose X's are not hyperactivated. Mammalian sperm cells become more active when they approach an egg cell in a process called sperm activation. Sperm activation has been shown to be caused by calcium ionophores "in vitro", progesterone released by nearby cumulus cells, and binding to ZP3 of the zona pellucida. The initial change is called "hyperactivation", which causes a change in spermatozoa motility. They swim faster and their tail movements become more forceful and erratic. A recent discovery links hyperactivation to a sudden influx of calcium ions into the tails
https://en.wikipedia.org/wiki?curid=1152003
Hyperactivation The whip-like tail (flagellum) of the sperm is studded with ion channels formed by proteins called CatSper. These channels are selective, allowing only calcium ions to pass. The opening of CatSper channels is responsible for the influx of calcium. The sudden rise in calcium levels causes the flagellum to form deeper bends, propelling the sperm more forcefully through the viscous environment. Sperm hyperactivity is necessary for breaking through two physical barriers that protect the egg from fertilization. Hyperactivation has also been shown to serve as a feature of human sperm chemotaxis. When the sperm is exposed to a chemoattractant, especially progesterone, the sperm will exhibit sudden flagellar arrest, followed by a sharp turn and hyperactivation. This response suggests that hyperactivation serves as a method to quickly guide sperm through a chemoattractant gradient. Before reaching the egg, the sperm are often trapped in epithelial cells in a Fallopian tube, meaning they are rendered inert unless they undergo hyperactivation. The change in motion and force of the tail movements enables the sperm to escape from the epithelium. Thus, only those sperm which have undergone hyperactivation have the ability to fertilize the egg.
https://en.wikipedia.org/wiki?curid=1152003
Side chain In organic chemistry and biochemistry, a side chain is a chemical group that is attached to a core part of the molecule called the "main chain" or backbone. The side chain is a hydrocarbon branching element of a molecule that is attached to a larger hydrocarbon backbone. It is one factor in determining a molecule's properties and reactivity. A side chain is also known as a pendant chain, but a pendant group (side group) has a different definition. The symbol R is often used as a generic placeholder for alkyl (saturated hydrocarbon) group side chains in chemical structure diagrams. To indicate other non-carbon groups in structure diagrams, X, Y, or Z are often used. The "R" symbol was introduced by 19th-century French chemist Charles Frédéric Gerhardt, who advocated its adoption on the grounds that it would be widely recognizable and intelligible given its correspondence in multiple European languages to the initial letter of "root" or "residue": French "racine" ("root") and "résidu" ("residue"), these terms' respective English translations along with "radical" (itself derived from Latin "radix"), Latin "radix" ("root") and "residuum" ("residue"), and German "Rest" ("remnant" and, in the context of chemistry, both "residue" and "radical"). In polymer science, the side chain of an oligomeric or polymeric offshoot extends from the backbone chain of a polymer. Side chains have noteworthy influence on a polymer's properties, mainly its crystallinity and density
https://en.wikipedia.org/wiki?curid=1152896
Side chain An oligomeric branch may be termed a short-chain branch, and a polymeric branch may be termed a long-chain branch. Side groups are different from side chains; they are neither oligomeric nor polymeric. In proteins, which are composed of amino acid residues, the side chains are attached to the alpha-carbon atoms of the amide backbone. The side chain connected to the alpha-carbon is specific for each amino acid and is responsible for determining charge and polarity of the amino acid. The amino acid side chains are also responsible for many of the interactions that lead to proper protein folding and function. Amino acids with similar polarity are usually attracted to each other, while nonpolar and polar side chains usually repel each other. Nonpolar/polar interactions can still play an important part in stabilizing the secondary structure due the relatively large amount of them occurring throughout the protein. Spatial positions of side-chain atoms can be predicted based on protein backbone geometry using computational tools for side-chain reconstruction.
https://en.wikipedia.org/wiki?curid=1152896
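As a rough illustration of how side chains determine the polarity of amino acid residues, the lookup below groups the 20 standard residues by one-letter code. The three-way split is a common textbook simplification, and the exact grouping (e.g. of aromatic residues and histidine) varies between sources.

```python
# Illustrative (simplified) classification of amino acid side chains
# by polarity, using standard one-letter codes.
NONPOLAR = set("AVLIMFWPG")   # aliphatic/aromatic hydrophobic side chains
POLAR = set("STNQYC")         # uncharged polar side chains
CHARGED = set("DEKRH")        # acidic (D, E) and basic (K, R, H) side chains

def side_chain_class(residue):
    r = residue.upper()
    if r in NONPOLAR:
        return "nonpolar"
    if r in POLAR:
        return "polar"
    if r in CHARGED:
        return "charged"
    raise ValueError(f"unknown residue: {residue}")
```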
Jeffrey Harvey (biologist) Jeff Harvey (born 1957 in Toronto, Ontario, Canada) is a Senior Scientist in the Department of Multitrophic Interactions at the Netherlands Institute of Ecology, and formerly an associate editor of "Nature". Harvey specializes in research concerning:
https://en.wikipedia.org/wiki?curid=1154221
Pyrosequencing is a method of DNA sequencing (determining the order of nucleotides in DNA) based on the "sequencing by synthesis" principle, in which the sequencing is performed by detecting the nucleotide incorporated by a DNA polymerase. Pyrosequencing relies on light detection based on a chain reaction when pyrophosphate is released; hence the name pyrosequencing. The principle of pyrosequencing was first described in 1993 by Bertil Pettersson, Mathias Uhlen and Pål Nyren by combining the solid-phase sequencing method using streptavidin-coated magnetic beads with a recombinant DNA polymerase lacking 3´ to 5´ exonuclease activity (proof-reading) and luminescence detection using the firefly luciferase enzyme. A mixture of three enzymes (DNA polymerase, ATP sulfurylase and firefly luciferase) and a nucleotide (dNTP) are added to the single-stranded DNA to be sequenced, and the incorporation of nucleotides is followed by measuring the light emitted. The intensity of the light determines if 0, 1 or more nucleotides have been incorporated, thus showing how many complementary nucleotides are present on the template strand. The nucleotide mixture is removed before the next nucleotide mixture is added. This process is repeated with each of the four nucleotides until the DNA sequence of the single-stranded template is determined. A second, solution-based method for pyrosequencing was described in 1998 by Mostafa Ronaghi, Mathias Uhlen and Pål Nyren
https://en.wikipedia.org/wiki?curid=1154853
Pyrosequencing In this alternative method, an additional enzyme, apyrase, is introduced to remove nucleotides that are not incorporated by the DNA polymerase. This enabled the enzyme mixture, including the DNA polymerase, the luciferase and the apyrase, to be added at the start and kept throughout the procedure, thus providing a simple set-up suitable for automation. An automated instrument based on this principle was introduced to the market the following year by the company Pyrosequencing. A third, microfluidic variant of the method was described in 2005 by Jonathan Rothberg and co-workers at the company 454 Life Sciences. This alternative approach to pyrosequencing was based on the original principle of attaching the DNA to be sequenced to a solid support, and they showed that sequencing could be performed in a highly parallel manner using a microfabricated microarray. This allowed for high-throughput DNA sequencing, and an automated instrument was introduced to the market. This became the first next-generation sequencing instrument, starting a new era in genomics research, with rapidly falling prices for DNA sequencing allowing whole-genome sequencing at affordable prices. "Sequencing by synthesis" involves taking a single strand of the DNA to be sequenced and then synthesizing its complementary strand enzymatically. The pyrosequencing method is based on detecting the activity of DNA polymerase (a DNA-synthesizing enzyme) with another chemiluminescent enzyme
https://en.wikipedia.org/wiki?curid=1154853
Pyrosequencing Essentially, the method allows sequencing of a single strand of DNA by synthesizing the complementary strand along it, one base pair at a time, and detecting which base was actually added at each step. The template DNA is immobile, and solutions of A, C, G, and T nucleotides are sequentially added and removed from the reaction. Light is produced only when the nucleotide solution complements the first unpaired base of the template. The sequence of solutions which produce chemiluminescent signals allows the determination of the sequence of the template. For the solution-based version of pyrosequencing, the single-stranded DNA (ssDNA) template is hybridized to a sequencing primer and incubated with the enzymes DNA polymerase, ATP sulfurylase, luciferase and apyrase, and with the substrates adenosine 5´-phosphosulfate (APS) and luciferin. The released pyrophosphate is converted to light in two steps: PPi + APS → ATP + sulfate (catalyzed by ATP sulfurylase); ATP + luciferin + O2 → AMP + PPi + oxyluciferin + CO2 + light (hv) (catalyzed by luciferase). Here APS is adenosine 5´-phosphosulfate, AMP is adenosine monophosphate, CO2 is carbon dioxide, hv is light, and O2 is molecular oxygen. Currently, a limitation of the method is that the lengths of individual reads of DNA sequence are in the neighborhood of 300–500 nucleotides, shorter than the 800–1000 obtainable with chain-termination methods (e.g. Sanger sequencing). This can make the process of genome assembly more difficult, particularly for sequences containing a large amount of repetitive DNA. Lack of proof-reading activity limits the accuracy of this method
https://en.wikipedia.org/wiki?curid=1154853
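The dispensation-and-light-detection cycle described above can be sketched as a small simulation. The function, its default dispensation order, and the integer light signals are illustrative assumptions; real instruments measure analog peak heights proportional to the number of incorporated bases.

```python
def pyrogram(template_3_to_5, dispensation_order="ACGT", cycles=8):
    """Simulate solution-based pyrosequencing of a template strand.

    Bases are dispensed one at a time. When the dispensed base complements
    the next unpaired template base(s), incorporation releases PPi and the
    light intensity is proportional to the number of bases incorporated,
    so homopolymer runs give proportionally taller peaks.
    """
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    pos = 0
    signals = []  # list of (dispensed base, light intensity)
    for _ in range(cycles):
        for base in dispensation_order:
            n = 0
            while pos < len(template_3_to_5) and comp[template_3_to_5[pos]] == base:
                n += 1
                pos += 1
            signals.append((base, n))
            if pos == len(template_3_to_5):
                return signals
    return signals
```

For a template read 3'→5' as "TTG", dispensing A yields a double-height signal (two T's paired), then C a single signal, reconstructing the complementary read "AAC".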
Pyrosequencing The company Pyrosequencing AB in Uppsala, Sweden was founded with venture capital provided by HealthCap in order to commercialize machinery and reagents for sequencing short stretches of DNA using the pyrosequencing technique. Pyrosequencing AB was listed on the Stockholm Stock Exchange in 1999. It was renamed Biotage in 2003. The pyrosequencing business line was acquired by Qiagen in 2008. The pyrosequencing technology was further licensed to 454 Life Sciences. 454 developed an array-based pyrosequencing technology which emerged as a platform for large-scale DNA sequencing, including genome sequencing and metagenomics. Roche announced the discontinuation of the 454 sequencing platform in 2013 when its technology became noncompetitive.
https://en.wikipedia.org/wiki?curid=1154853
McLeod gauge A McLeod gauge is a scientific instrument used to measure very low pressures, down to about 10⁻⁶ Torr. It was invented in 1874 by Herbert McLeod (1841–1923). McLeod gauges were once commonly found attached to equipment that operates under vacuum, such as a lyophilizer. Today, however, these gauges have largely been replaced by electronic vacuum gauges. The design of a McLeod gauge is somewhat similar to that of a mercury-column manometer. Typically it is filled with mercury. If used incorrectly, this mercury can escape and contaminate the vacuum system attached to the gauge. McLeod gauges operate by taking in a sample volume of gas from a vacuum chamber, then compressing it by tilting and infilling with mercury. The pressure in this smaller volume is then measured by a mercury manometer, and, knowing the compression ratio (the ratio of the initial and final volumes), the pressure of the original vacuum can be determined by applying Boyle's law. This method is fairly accurate for non-condensable gases, such as oxygen and nitrogen. However, condensable gases, such as water vapour, ammonia, carbon dioxide, and pump-oil vapours, may be in gaseous form in the low pressure of the vacuum chamber but will condense when compressed by the McLeod gauge. The result is an erroneous reading, showing a pressure much lower than is actually present. A cold trap may be used in conjunction with a McLeod gauge to condense these vapours before they enter the gauge
https://en.wikipedia.org/wiki?curid=1155556
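The Boyle's-law calculation can be sketched as follows. This is a simplified linear reading under the assumption that the compressed-gas pressure is read directly from the manometer in Torr; it ignores the quadratic closed-capillary scale used on many real gauges, and all variable names are ours.

```python
def mcleod_pressure(v_bulb_ml, v_capillary_ml, h_torr):
    """Unknown chamber pressure from a McLeod gauge reading via Boyle's law.

    A gas sample of volume v_bulb_ml at the unknown pressure P is
    compressed into v_capillary_ml, where the mercury manometer reads
    h_torr (mm Hg). Boyle's law gives P * V_bulb = h * V_capillary.
    """
    return h_torr * v_capillary_ml / v_bulb_ml
```

For example, a 100 mL sample compressed into 0.01 mL that reads 5 Torr implies an original pressure of 5 × 10⁻⁴ Torr, i.e. the compression ratio of 10,000 multiplies the tiny chamber pressure into a measurable column height.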
McLeod gauge The McLeod gauge has the advantage that it is simple to use and that its calibration is nearly the same for all non-condensable gases. The device can be manually operated and the scale read visually, or the process can be automated in various ways. For example, a small electric motor can periodically rotate the assembly to collect a gas sample. If a fine platinum wire is in the capillary tube, its resistance indicates the height of the mercury column around it. Modern electronic vacuum gauges are simpler to use, less fragile, and do not present a mercury hazard, but their reading is highly dependent on the chemical nature of the gas being measured, and their calibration is unstable. For this reason, McLeod gauges continue to be used as a calibration standard for electronic gauges.
https://en.wikipedia.org/wiki?curid=1155556
Erygmascope An erygmascope is the name given to a late 19th-century electric lighting apparatus designed for the examination of the strata of earth traversed by boring apparatus. It consisted of a very powerful incandescent lamp enclosed in a metallic cylinder. One of the two semi-cylindrical sides constitutes the reflector, and the other, which is of thick glass, allows the passage of light, which illuminates the strata of earth traversed by the instrument. The base, which is inclined at an angle of 45°, is an elliptical mirror, and the top, of straight section, is open in order to permit the observer, standing at the mouth of the well and provided with a powerful spyglass, to see in the mirror the image of the earth. The lamp is so mounted that its upwardly emitted rays are intercepted. The whole apparatus was suspended from a long cable, formed of two conducting wires, which winds around a windlass with metallic journals which are electrically insulated. These journals communicate, through the intermedium of two friction springs, with the conductors on the one hand and, on the other, with the poles of an automatic and portable battery. This permits lowering and raising the apparatus at will, without derangement, and without its being necessary to interrupt the light and the observation. The erygmascope was described in an 1891 edition of the Scientific American Supplement; it is uncertain to what extent it was ever put to practical use.
https://en.wikipedia.org/wiki?curid=1156141
Cutin is one of two waxy polymers that are the main components of the plant cuticle, which covers all aerial surfaces of plants. The other major cuticle polymer is cutan, which is much more readily preserved in the fossil record. Cutin consists of omega-hydroxy acids and their derivatives, which are interlinked via ester bonds, forming a polyester polymer of indeterminate size. There are two major monomer families of cutin, the C16 and C18 families. The C16 family consists mainly of 16-hydroxypalmitic acid and 9,16- or 10,16-dihydroxypalmitic acid. The C18 family consists mainly of 18-hydroxyoleic acid, 9,10-epoxy-18-hydroxystearic acid, and 9,10,18-trihydroxystearate.
https://en.wikipedia.org/wiki?curid=1157235
Depolarization ratio In Raman spectroscopy, the depolarization ratio is the intensity ratio between the perpendicular component and the parallel component of the Raman scattered light. Early work in this field was carried out by George Placzek, who developed the theoretical treatment of bond polarizability. The Raman scattered light is emitted by the stimulation of the electric field of the incident light. Therefore, the direction of the vibration of the electric field, or polarization direction, of the scattered light might be expected to be the same as that of the incident light. In reality, however, some fraction of the Raman scattered light has a polarization direction that is perpendicular to that of the incident light. This component is called the perpendicular component. Naturally, the component of the Raman scattered light whose polarization direction is parallel to that of the incident light is called the parallel component, and the Raman scattered light consists of the parallel component and the perpendicular component. The ratio of the peak intensities of the perpendicular and parallel components is known as the depolarization ratio (ρ), defined as ρ = I(perpendicular) / I(parallel). For example, a spectral band with a peak of intensity 10 units when the polarizers are parallel, and intensity 1 unit when the polarizers are perpendicular, would have a depolarization ratio of 1/10 = 0.1, which corresponds to a highly polarized band
https://en.wikipedia.org/wiki?curid=1157370
Depolarization ratio The value of the depolarization ratio of a Raman band depends on the symmetry of the molecule and the normal vibrational mode, in other words, the point group of the molecule and the irreducible representation to which the normal mode belongs. Under Placzek's polarizability approximation, it is known that the depolarization ratio of a totally symmetric vibrational mode is less than 0.75, and that of the other modes equals 0.75. A Raman band whose depolarization ratio is less than 0.75 is called a polarized band, and a band with a depolarization ratio equal to or greater than 0.75 is called a depolarized band. For a spherical top molecule in which all three axes are equivalent, symmetric vibrations have Raman spectral bands which are completely polarized (ρ = 0). An example is the symmetric stretching or "breathing" mode of methane (CH₄) in which all 4 C–H bonds vibrate in phase. However, for the asymmetric mode in which one C–H bond stretches while the other three contract, the Raman scattered radiation is depolarized. For molecules of lower symmetry (symmetric tops or asymmetric tops), a vibration with the full symmetry of the molecule leads to a polarized or partially polarized Raman band (ρ < 0.75), while a less symmetric vibration yields a depolarized band (ρ ≥ 0.75).
https://en.wikipedia.org/wiki?curid=1157370
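The definition and the Placzek classification above reduce to a few lines of arithmetic; the sketch below is illustrative, with function names of our choosing.

```python
def depolarization_ratio(i_perp, i_parallel):
    """rho = I_perpendicular / I_parallel for a Raman band."""
    return i_perp / i_parallel

def classify_band(rho):
    """Placzek polarizability approximation: rho < 0.75 means a polarized
    band (totally symmetric mode); rho >= 0.75 means a depolarized band."""
    return "polarized" if rho < 0.75 else "depolarized"
```

The article's example of 1 unit perpendicular against 10 units parallel gives ρ = 0.1, a highly polarized band.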
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid sequencing methods has greatly accelerated biological and medical research and discovery. Knowledge of DNA sequences has become indispensable for basic biological research, and in numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can diagnose different diseases including various cancers, characterize antibody repertoire, and can be used to guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged. The rapid speed of sequencing attained with modern technology has been instrumental in the sequencing of complete DNA sequences, or genomes, of numerous types and species of life, including the human genome and other complete DNA sequences of many animal, plant, and microbial species. The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster. DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropology. Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes, associations with diseases and phenotypes, and identify potential drug targets. Since DNA is an informative macromolecule in terms of transmission from one generation to another, DNA sequencing is used in evolutionary biology to study how different organisms are related and how they evolved. The field of metagenomics involves identification of organisms present in a body of water, sewage, dirt, debris filtered from the air, or swab samples from organisms. Knowing which organisms are present in a particular environment is critical to research in ecology, epidemiology, microbiology, and other fields. Sequencing enables researchers to determine which types of microbes may be present in a microbiome, for example. As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus
Traditional Sanger sequencing and next-generation sequencing are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. Recently, NGS has surpassed traditional Sanger sequencing as the most popular approach for generating viral genomes. Medical technicians may sequence genes (or, theoretically, full genomes) from patients to determine if there is risk of genetic diseases. This is a form of genetic testing, though some genetic tests may not involve DNA sequencing. DNA sequencing may also be useful for identifying a specific bacterium, to allow for more precise antibiotic treatments, thereby reducing the risk of creating antimicrobial resistance in bacterial populations. DNA sequencing may be used along with DNA profiling methods for forensic identification and paternity testing. DNA testing has evolved tremendously in the last few decades to ultimately link a DNA print to what is under investigation. The DNA patterns in fingerprints, saliva, hair follicles, etc. uniquely separate each living organism from another. Testing DNA is a technique which can detect specific genomes in a DNA strand to produce a unique and individualized pattern. The canonical structure of DNA has four bases: thymine (T), adenine (A), cytosine (C), and guanine (G). DNA sequencing is the determination of the physical order of these bases in a molecule of DNA.
However, there are many other bases that may be present in a molecule. In some viruses (specifically, bacteriophages), cytosine may be replaced by hydroxymethylcytosine or glucosylated hydroxymethylcytosine. In mammalian DNA, variant bases with methyl groups or phosphosulfate may be found. Depending on the sequencing technique, a particular modification, e.g., the 5mC (5-methylcytosine) common in humans, may or may not be detected. Deoxyribonucleic acid (DNA) was first discovered and isolated by Friedrich Miescher in 1869, but it remained under-studied for many decades because proteins, rather than DNA, were thought to hold the genetic blueprint to life. This situation changed after 1944 as a result of experiments by Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrating that purified DNA could change one strain of bacteria into another. This was the first time that DNA was shown capable of transforming the properties of cells. In 1953, James Watson and Francis Crick put forward their double-helix model of DNA, based on X-ray crystallography data obtained by Rosalind Franklin. According to the model, DNA is composed of two strands of nucleotides coiled around each other, linked together by hydrogen bonds and running in opposite directions. Each strand is composed of nucleotides of four complementary types – adenine (A), cytosine (C), guanine (G) and thymine (T) – with an A on one strand always paired with a T on the other, and C always paired with G.
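The complementary pairing rules described above can be expressed as a short function. This is an illustrative sketch, not part of any sequencing software: reverse-complementing a strand reconstructs its antiparallel partner, which is exactly the property the double-helix model exploits.

```python
# Watson-Crick pairing: A<->T, C<->G. The complementary strand runs in the
# opposite direction, so the complement is read in reverse.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence of the complementary strand, read 5'->3'."""
    return "".join(PAIR[base] for base in reversed(strand))

print(reverse_complement("ATCG"))  # CGAT
```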
They proposed that such a structure allowed each strand to be used to reconstruct the other, an idea central to the passing on of hereditary information between generations. The foundation for sequencing proteins was first laid by the work of Frederick Sanger, who by 1955 had completed the sequence of all the amino acids in insulin, a small protein secreted by the pancreas. This provided the first conclusive evidence that proteins were chemical entities with a specific molecular pattern, rather than a random mixture of material suspended in fluid. Sanger's success in sequencing insulin spurred on X-ray crystallographers, including Watson and Crick, who by now were trying to understand how DNA directed the formation of proteins within a cell. Soon after attending a series of lectures given by Frederick Sanger in October 1954, Crick began developing a theory which argued that the arrangement of nucleotides in DNA determined the sequence of amino acids in proteins, which in turn helped determine the function of a protein. He published this theory in 1958. RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmarks of RNA sequencing are the sequence of the first complete gene and the complete genome of bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium), in 1972 and 1976, respectively. Traditional RNA sequencing methods require the creation of a cDNA molecule which must be sequenced.
The first method for determining DNA sequences involved a location-specific primer extension strategy established by Ray Wu at Cornell University in 1970. DNA polymerase catalysis and specific nucleotide labeling, both of which figure prominently in current sequencing schemes, were used to sequence the cohesive ends of lambda phage DNA. Between 1970 and 1973, Wu, R Padmanabhan and colleagues demonstrated that this method can be employed to determine any DNA sequence using synthetic location-specific primers. Frederick Sanger then adopted this primer-extension strategy to develop more rapid methods at the MRC Centre, Cambridge, UK and published a method for "DNA sequencing with chain-terminating inhibitors" in 1977. Walter Gilbert and Allan Maxam at Harvard also developed sequencing methods, including one for "DNA sequencing by chemical degradation". In 1973, Gilbert and Maxam reported the sequence of 24 basepairs using a method known as wandering-spot analysis. Advancements in sequencing were aided by the concurrent development of recombinant DNA technology, allowing DNA samples to be isolated from sources other than viruses. The first full DNA genome to be sequenced was that of bacteriophage φX174 in 1977. Medical Research Council scientists deciphered the complete DNA sequence of the Epstein-Barr virus in 1984, finding it contained 172,282 nucleotides. Completion of the sequence marked a significant turning point in DNA sequencing because it was achieved with no prior genetic profile knowledge of the virus.
A non-radioactive method for transferring the DNA molecules of sequencing reaction mixtures onto an immobilizing matrix during electrophoresis was developed by Pohl and co-workers in the early 1980s. This was followed by the commercialization of the DNA sequencer "Direct-Blotting-Electrophoresis-System GATC 1500" by GATC Biotech, which was intensively used in the framework of the EU genome-sequencing programme to determine the complete DNA sequence of the yeast "Saccharomyces cerevisiae" chromosome II. Leroy E. Hood's laboratory at the California Institute of Technology announced the first semi-automated DNA sequencing machine in 1986. This was followed by Applied Biosystems' marketing of the first fully automated sequencing machine, the ABI 370, in 1987 and by Dupont's Genesis 2000, which used a novel fluorescent labeling technique enabling all four dideoxynucleotides to be identified in a single lane. By 1990, the U.S. National Institutes of Health (NIH) had begun large-scale sequencing trials on "Mycoplasma capricolum", "Escherichia coli", "Caenorhabditis elegans", and "Saccharomyces cerevisiae" at a cost of US$0.75 per base. Meanwhile, sequencing of human cDNA sequences called expressed sequence tags began in Craig Venter's lab, in an attempt to capture the coding fraction of the human genome. In 1995, Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) published the first complete genome of a free-living organism, the bacterium "Haemophilus influenzae".
The circular chromosome contains 1,830,137 bases, and its publication in the journal Science marked the first published use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts. By 2001, shotgun sequencing methods had been used to produce a draft sequence of the human genome. Several new methods for DNA sequencing were developed in the mid to late 1990s and were implemented in commercial DNA sequencers by the year 2000. Together these were called the "next-generation" or "second-generation" sequencing (NGS) methods, in order to distinguish them from earlier methods such as Sanger sequencing. In contrast to the first generation of sequencing, NGS technology is typically characterized by being highly scalable, allowing the entire genome to be sequenced at once. Usually, this is accomplished by fragmenting the genome into small pieces, randomly sampling fragments, and sequencing them using one of a variety of technologies, such as those described below. Sequencing an entire genome is possible because multiple fragments are sequenced at once (giving the approach the name "massively parallel" sequencing) in an automated process. NGS technology has empowered researchers to look for insights into health, enabled anthropologists to investigate human origins, and is catalyzing the "personalized medicine" movement. However, it has also opened the door to more room for error. There are many software tools to carry out the computational analysis of NGS data, each with its own algorithm.
Even the parameters within one software package can change the outcome of the analysis. In addition, the large quantities of data produced by DNA sequencing have also required the development of new methods and programs for sequence analysis. Several efforts to develop standards in the NGS field have been attempted to address these challenges, most of which have been small-scale efforts arising from individual labs. Most recently, a large, organized, FDA-funded effort has culminated in the BioCompute standard. On 26 October 1990, Roger Tsien, Pepi Ross, Margaret Fahnestock and Allan J Johnston filed a patent describing stepwise ("base-by-base") sequencing with removable 3' blockers on DNA arrays (blots and single DNA molecules). In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing. On 1 April 1997, Pascal Mayer and Laurent Farinelli submitted patents to the World Intellectual Property Organization describing DNA colony sequencing. The DNA sample preparation and random surface-polymerase chain reaction (PCR) arraying methods described in this patent, coupled to Roger Tsien et al.'s "base-by-base" sequencing method, are now implemented in Illumina's HiSeq genome sequencers.
In 1998, Phil Green and Brent Ewing of the University of Washington described their phred quality score for sequencer data analysis, a landmark analysis technique that gained widespread adoption and is still the most common metric for assessing the accuracy of a sequencing platform. Lynx Therapeutics published and marketed massively parallel signature sequencing (MPSS) in 2000. This method incorporated a parallelized, adapter/ligation-mediated, bead-based sequencing technology and served as the first commercially available "next-generation" sequencing method, though no DNA sequencers were sold to independent laboratories. Allan Maxam and Walter Gilbert published a method in 1977 based on chemical modification of DNA and subsequent cleavage at specific bases. Also known as chemical sequencing, this method allowed purified samples of double-stranded DNA to be used without further cloning. This method's use of radioactive labeling and its technical complexity discouraged extensive use after refinements to the Sanger methods had been made. Maxam-Gilbert sequencing requires radioactive labeling at one 5' end of the DNA and purification of the DNA fragment to be sequenced. Chemical treatment then generates breaks at a small proportion of one or two of the four nucleotide bases in each of four reactions (G, A+G, C, C+T). The concentration of the modifying chemicals is controlled to introduce on average one modification per DNA molecule.
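The phred quality score mentioned above relates a base call's error probability P to a score Q via Q = −10·log10(P). A minimal sketch of the conversion in both directions:

```python
import math

def phred_quality(error_prob: float) -> float:
    """Phred quality score: Q = -10 * log10(P_error)."""
    return -10 * math.log10(error_prob)

def error_probability(q: float) -> float:
    """Invert the Phred relation: P_error = 10^(-Q/10)."""
    return 10 ** (-q / 10)

print(phred_quality(0.001))   # Q30: 1 expected error in 1000 bases
print(error_probability(20))  # Q20 corresponds to a 1% error probability
```

Because the scale is logarithmic, each 10-point increase in Q means a tenfold reduction in error probability, which is why thresholds such as Q20 and Q30 are common benchmarks for platform accuracy.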
Thus a series of labeled fragments is generated, from the radiolabeled end to the first "cut" site in each molecule. The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands, each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred. The chain-termination method developed by Frederick Sanger and coworkers in 1977 soon became the method of choice, owing to its relative ease and reliability. When invented, the chain-terminator method used fewer toxic chemicals and lower amounts of radioactivity than the Maxam and Gilbert method. Because of its comparative ease, the Sanger method was soon automated and was the method used in the first generation of DNA sequencers. Sanger sequencing is the method which prevailed from the 1980s until the mid-2000s. Over that period, great advances were made in the technique, such as fluorescent labelling, capillary electrophoresis, and general automation. These developments allowed much more efficient sequencing, leading to lower costs. The Sanger method, in mass production form, is the technology which produced the first human genome in 2001, ushering in the age of genomics. However, later in the decade, radically different approaches reached the market, bringing the cost per genome down from $100 million in 2001 to $10,000 in 2011.
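The chain-termination principle can be illustrated with a toy simulation, under simplifying assumptions: each synthesized position is represented by exactly one terminated fragment, whereas a real reaction produces a statistical distribution of fragment lengths. Sorting the fragments by length, as electrophoresis does, reads off the synthesized (complementary) strand.

```python
# Toy model of chain-terminator (Sanger) sequencing: each ddNTP reaction
# yields fragments ending at every position of that base; sorting all
# fragments by length recovers the synthesized strand, as on a gel.
def sanger_fragments(template: str) -> dict:
    complement = {"A": "T", "T": "A", "C": "G", "G": "C"}
    synthesized = "".join(complement[b] for b in template)
    fragments = {b: [] for b in "ACGT"}
    for i, base in enumerate(synthesized):
        fragments[base].append(i + 1)  # fragment length ending in this ddNTP
    return fragments

def read_gel(fragments: dict) -> str:
    # Shortest fragment first, as a gel is read from the bottom up.
    by_length = sorted((length, base) for base in fragments for length in fragments[base])
    return "".join(base for _, base in by_length)

print(read_gel(sanger_fragments("ATGC")))  # TACG (complement of the template)
```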
Large-scale sequencing often aims at sequencing very long DNA pieces, such as whole chromosomes, although it can also be used to generate very large numbers of short sequences, such as those found in phage display. For longer targets such as chromosomes, common approaches consist of cutting (with restriction enzymes) or shearing (with mechanical forces) large DNA fragments into shorter DNA fragments. The fragmented DNA may then be cloned into a DNA vector and amplified in a bacterial host such as "Escherichia coli". Short DNA fragments purified from individual bacterial colonies are individually sequenced and assembled electronically into one long, contiguous sequence. Studies have shown that adding a size-selection step to collect DNA fragments of uniform size can improve sequencing efficiency and the accuracy of the genome assembly. In these studies, automated sizing has proven to be more reproducible and precise than manual gel sizing. The term ""de novo" sequencing" specifically refers to methods used to determine the sequence of DNA with no previously known sequence. "De novo" translates from Latin as "from the beginning". Gaps in the assembled sequence may be filled by primer walking. The different strategies have different tradeoffs in speed and accuracy; shotgun methods are often used for sequencing large genomes, but their assembly is complex and difficult, particularly with sequence repeats often causing gaps in genome assembly.
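The electronic reassembly step described above can be sketched with a naive greedy merger that repeatedly joins the pair of fragments sharing the longest suffix-prefix overlap. This is illustrative only: production assemblers use overlap or de Bruijn graphs and must cope with sequencing errors and the repeat-induced ambiguities noted above.

```python
# Greedy overlap assembly sketch: merge reads by their longest overlap.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments: list) -> str:
    frags = fragments[:]
    while len(frags) > 1:
        # Find the pair with the best overlap and merge it.
        k, a, b = max(((overlap(a, b), a, b)
                       for a in frags for b in frags if a != b),
                      key=lambda t: t[0])
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[k:])
    return frags[0]

reads = ["AGCATG", "CATGCA", "GCAACG"]
print(assemble(reads))  # AGCATGCAACG
```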
Most sequencing approaches use an "in vitro" cloning step to amplify individual DNA molecules, because their molecular detection methods are not sensitive enough for single-molecule sequencing. Emulsion PCR isolates individual DNA molecules along with primer-coated beads in aqueous droplets within an oil phase. A polymerase chain reaction (PCR) then coats each bead with clonal copies of the DNA molecule, followed by immobilization for later sequencing. Emulsion PCR is used in the methods developed by Margulies et al. (commercialized by 454 Life Sciences), Shendure and Porreca et al. (also known as "polony sequencing") and SOLiD sequencing (developed by Agencourt, later Applied Biosystems, now Life Technologies). Emulsion PCR is also used in the GemCode and Chromium platforms developed by 10x Genomics. Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. This method requires the target DNA to be broken into random fragments. After sequencing individual fragments, the sequences can be reassembled on the basis of their overlapping regions. High-throughput sequencing, which includes next-generation "short-read" and third-generation "long-read" sequencing methods, applies to exome sequencing, genome sequencing, genome resequencing, transcriptome profiling (RNA-Seq), DNA-protein interactions (ChIP-sequencing), and epigenome characterization.
Resequencing is necessary, because the genome of a single individual of a species will not indicate all of the genome variations among other individuals of the same species. The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences concurrently. High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing, as many as 500,000 sequencing-by-synthesis operations may be run in parallel. Such technologies have led to the ability to sequence an entire human genome in as little as one day. Corporate leaders in the development of high-throughput sequencing products have included Illumina, Qiagen and ThermoFisher Scientific. SMRT sequencing is based on the sequencing-by-synthesis approach. The DNA is synthesized in zero-mode wave-guides (ZMWs) – small well-like containers with the capturing tools located at the bottom of the well. The sequencing is performed with the use of an unmodified polymerase (attached to the ZMW bottom) and fluorescently labelled nucleotides flowing freely in the solution. The wells are constructed in a way that only the fluorescence occurring at the bottom of the well is detected. The fluorescent label is detached from the nucleotide upon its incorporation into the DNA strand, leaving an unmodified DNA strand.
According to Pacific Biosciences (PacBio), the SMRT technology developer, this methodology allows detection of nucleotide modifications (such as cytosine methylation). This happens through the observation of polymerase kinetics. This approach allows reads of 20,000 nucleotides or more, with average read lengths of 5 kilobases. In 2015, Pacific Biosciences announced the launch of a new sequencing instrument called the Sequel System, with 1 million ZMWs compared to 150,000 ZMWs in the PacBio RS II instrument. SMRT sequencing is referred to as "third-generation" or "long-read" sequencing. In nanopore sequencing, DNA passing through the nanopore changes the pore's ion current. This change depends on the shape, size and length of the DNA sequence. Each type of nucleotide blocks the ion flow through the pore for a different period of time. The method does not require modified nucleotides and is performed in real time. Nanopore sequencing is referred to as "third-generation" or "long-read" sequencing, along with SMRT sequencing. Early industrial research into this method was based on a technique called 'exonuclease sequencing', where the readout of electrical signals occurred as nucleotides passed by alpha(α)-hemolysin pores covalently bound with cyclodextrin. However, the subsequent commercial method, 'strand sequencing', sequenced DNA bases in an intact strand. Two main areas of nanopore sequencing in development are solid-state nanopore sequencing and protein-based nanopore sequencing.
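The current-blockade idea behind nanopore sequencing can be sketched with a toy decoder. The current levels below are invented for illustration: a real pore senses several bases at a time (a k-mer), and practical basecalling uses statistical or neural-network models rather than a nearest-level lookup.

```python
# Toy nanopore decoder: assume each base produces one characteristic
# current blockade (hypothetical values), and assign each measured sample
# to the nearest reference level.
LEVELS = {"A": 0.9, "C": 0.7, "G": 0.5, "T": 0.3}  # illustrative relative currents

def basecall(trace: list) -> str:
    """Map each current sample to the base with the closest reference level."""
    return "".join(min(LEVELS, key=lambda b: abs(LEVELS[b] - sample))
                   for sample in trace)

print(basecall([0.88, 0.31, 0.52, 0.69]))  # ATGC
```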
Protein nanopore sequencing utilizes membrane protein complexes such as α-hemolysin, MspA ("Mycobacterium smegmatis" Porin A) or CssG, which show great promise given their ability to distinguish between individual and groups of nucleotides. In contrast, solid-state nanopore sequencing utilizes synthetic materials such as silicon nitride and aluminum oxide, and is preferred for its superior mechanical robustness and thermal and chemical stability. The fabrication method is essential for this type of sequencing, given that the nanopore array can contain hundreds of pores with diameters smaller than eight nanometers. The concept originated from the idea that single-stranded DNA or RNA molecules can be electrophoretically driven in a strict linear sequence through a biological pore that can be less than eight nanometers wide, and can be detected given that the molecules modulate an ionic current while moving through the pore. The pore contains a detection region capable of recognizing different bases, with each base generating a time-specific signal corresponding to the sequence of bases as they cross the pore, which is then evaluated. Precise control over the DNA transport through the pore is crucial for success. Various enzymes such as exonucleases and polymerases have been used to moderate this process by positioning them near the pore's entrance.
The first of the high-throughput sequencing technologies, massively parallel signature sequencing (or MPSS), was developed in the 1990s at Lynx Therapeutics, a company founded in 1992 by Sydney Brenner and Sam Eletr. MPSS was a bead-based method that used a complex approach of adapter ligation followed by adapter decoding, reading the sequence in increments of four nucleotides. This made it susceptible to sequence-specific bias or loss of specific sequences. Because the technology was so complex, MPSS was only performed 'in-house' by Lynx Therapeutics and no machines were sold to independent laboratories. Lynx Therapeutics merged with Solexa (later acquired by Illumina) in 2004, leading to the development of sequencing-by-synthesis, a simpler approach acquired from Manteia Predictive Medicine, which rendered MPSS obsolete. However, the essential properties of the MPSS output were typical of later high-throughput data types, including hundreds of thousands of short DNA sequences. In the case of MPSS, these were typically used for sequencing cDNA for measurements of gene expression levels. The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full "E. coli" genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an "E. coli" genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific. A parallelized version of pyrosequencing was developed by 454 Life Sciences, which has since been acquired by Roche Diagnostics. The method amplifies DNA inside water droplets in an oil solution (emulsion PCR), with each droplet containing a single DNA template attached to a single primer-coated bead that then forms a clonal colony. The sequencing machine contains many picoliter-volume wells, each containing a single bead and sequencing enzymes. Pyrosequencing uses luciferase to generate light for detection of the individual nucleotides added to the nascent DNA, and the combined data are used to generate sequence reads. This technology provides intermediate read length and price per base compared to Sanger sequencing on one end and Solexa and SOLiD on the other. Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminator technology and engineered polymerases. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents.
In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli. It is based on "DNA clusters" or "DNA colonies", which involve the clonal amplification of DNA on a surface. The cluster technology was co-acquired with Lynx Therapeutics of California. Solexa Ltd. later merged with Lynx to form Solexa Inc. In this method, DNA molecules and primers are first attached on a slide or flow cell and amplified with polymerase so that local clonal DNA colonies, later coined "DNA clusters", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added, and non-incorporated nucleotides are washed away. A camera takes images of the fluorescently labeled nucleotides. Then the dye, along with the terminal 3' blocker, is chemically removed from the DNA, allowing the next cycle to begin. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time and image acquisition can be performed at a delayed moment, allowing very large arrays of DNA colonies to be captured by sequential images taken from a single camera. Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity.
With an optimal configuration, the ultimately reachable instrument throughput is thus dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput can be multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera). DNA nanoball-based sequencing is an upgraded modification of the combinatorial probe anchor ligation (cPAL) technology described by Complete Genomics, which became part of the Chinese genomics company BGI in 2013. The two companies have refined the technology to allow for longer read lengths, reaction time reductions and faster time to results. In addition, data are now generated as contiguous full-length reads in the standard FASTQ file format and can be used as-is in most short-read-based bioinformatics analysis pipelines. The two technologies that form the basis for this high-throughput sequencing technology are DNA nanoballs (DNB) and patterned arrays for nanoball attachment to a solid surface.
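The camera-limited throughput relation stated above can be checked with a quick calculation: a single 10 MHz camera at roughly 10 pixels per colony yields about 1 million nucleotides per second, matching the figure quoted for 2012-era instruments.

```python
# Throughput = A/D conversion rate x number of cameras / pixels per colony.
def throughput_nt_per_s(ad_rate_hz: float, cameras: int,
                        pixels_per_colony: float) -> float:
    return ad_rate_hz * cameras / pixels_per_colony

# One 10 MHz camera, ~10 pixels needed to visualize each DNA colony:
print(throughput_nt_per_s(10e6, 1, 10))  # 1000000.0 nucleotides/second
```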
DNA nanoballs are formed by denaturing double-stranded, adapter-ligated libraries and ligating the forward strand only to a splint oligonucleotide to form a ssDNA circle. Faithful copies of the circles containing the DNA insert are produced using rolling circle amplification, which generates approximately 300–500 copies. The long strand of ssDNA folds upon itself to produce a three-dimensional nanoball structure that is approximately 220 nm in diameter. Making DNBs replaces the need to generate PCR copies of the library on the flow cell and as such can remove large proportions of duplicate reads, adapter-adapter ligations and PCR-induced errors. The patterned array of positively charged spots is fabricated through photolithography and etching techniques followed by chemical modification to generate a sequencing flow cell. Each spot on the flow cell is approximately 250 nm in diameter, separated by 700 nm (centre to centre), and allows easy attachment of a single negatively charged DNB to the flow cell, thus reducing under- or over-clustering on the flow cell. Sequencing is then performed by addition of an oligonucleotide probe that attaches in combination to specific sites within the DNB. The probe acts as an anchor that then allows one of four single reversibly inactivated, labelled nucleotides to bind after flowing across the flow cell.
Unbound nucleotides are washed away before laser excitation of the attached labels; the emitted fluorescence is captured by cameras and converted to a digital output for base calling. The attached base has its terminator and label chemically cleaved at completion of the cycle. The cycle is repeated with another flow of free, labelled nucleotides across the flow cell to allow the next nucleotide to bind and have its signal captured. This process is completed a number of times (usually 50 to 300 times) to determine the sequence of the inserted piece of DNA at a rate of approximately 40 million nucleotides per second as of 2018. Applied Biosystems' (now a Life Technologies brand) SOLiD technology employs sequencing by ligation. Here, a pool of all possible oligonucleotides of a fixed length is labeled according to the sequenced position. Oligonucleotides are annealed and ligated; the preferential ligation by DNA ligase for matching sequences results in a signal informative of the nucleotide at that position. Each base in the template is sequenced twice, and the resulting data are decoded according to the two-base encoding scheme used in this method. Before sequencing, the DNA is amplified by emulsion PCR. The resulting beads, each containing single copies of the same DNA molecule, are deposited on a glass slide. The result is sequences of quantities and lengths comparable to Illumina sequencing. This sequencing-by-ligation method has been reported to have some issues sequencing palindromic sequences.
Ion Torrent Systems Inc. (now owned by Life Technologies) developed a system based on standard sequencing chemistry but with a novel, semiconductor-based detection system. This method of sequencing is based on the detection of the hydrogen ions that are released during the polymerisation of DNA, as opposed to the optical methods used in other sequencing systems. A microwell containing a template DNA strand to be sequenced is flooded with a single type of nucleotide. If the introduced nucleotide is complementary to the leading template nucleotide, it is incorporated into the growing complementary strand. This causes the release of a hydrogen ion that triggers a hypersensitive ion sensor, which indicates that a reaction has occurred. If homopolymer repeats are present in the template sequence, multiple nucleotides will be incorporated in a single cycle. This leads to a corresponding number of released hydrogen ions and a proportionally higher electronic signal. DNA nanoball sequencing is a type of high-throughput sequencing technology used to determine the entire genomic sequence of an organism. The company Complete Genomics uses this technology to sequence samples submitted by independent researchers. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Unchained sequencing by ligation is then used to determine the nucleotide sequence.
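The homopolymer behaviour of the Ion Torrent chemistry described above can be sketched with a toy flow-space decoder: nucleotides are flowed in a fixed, repeating order, and the signal in each flow is proportional to how many bases of that type were incorporated. The flow order and signal values here are illustrative; real instruments must also correct for signal noise, which is why long homopolymers are a known error source.

```python
# Toy Ion Torrent-style decoder: one (rounded) incorporation count per flow,
# with nucleotides flowed in a fixed repeating order.
FLOW_ORDER = "TACG"  # illustrative flow order

def decode_flows(signals: list) -> str:
    seq = []
    for flow, count in enumerate(signals):
        # A homopolymer of length n shows up as a signal ~n in one flow.
        seq.append(FLOW_ORDER[flow % 4] * count)
    return "".join(seq)

# Flows T,A,C,G,T,A with measured incorporation counts:
print(decode_flows([1, 0, 2, 1, 0, 3]))  # TCCGAAA
```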
This method of DNA sequencing allows large numbers of DNA nanoballs to be sequenced per run at low reagent costs compared to other high-throughput sequencing platforms. However, only short sequences of DNA are determined from each DNA nanoball, which makes mapping the short reads to a reference genome difficult. This technology has been used for multiple genome sequencing projects and is scheduled to be used for more. Heliscope sequencing is a method of single-molecule sequencing developed by Helicos Biosciences. It uses DNA fragments with added poly-A tail adapters which are attached to the flow cell surface. The next steps involve extension-based sequencing with cyclic washes of the flow cell with fluorescently labeled nucleotides (one nucleotide type at a time, as with the Sanger method). The reads are performed by the Heliscope sequencer. The reads are short, averaging 35 bp. What made this technology especially novel was that it was the first of its class to sequence non-amplified DNA, thus preventing any read errors associated with amplification steps. In 2009 a human genome was sequenced using the Heliscope; however, in 2012 the company went bankrupt. There are two main microfluidic systems used to sequence DNA: droplet-based microfluidics and digital microfluidics. Microfluidic devices solve many of the limitations of current sequencing arrays. Abate et al. studied the use of droplet-based microfluidic devices for DNA sequencing.
https://en.wikipedia.org/wiki?curid=1158125
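The mapping difficulty noted above for short reads can be sketched with a naive exact matcher. This is a hypothetical toy example — real aligners use indexed, mismatch-tolerant search — but it shows why a read is only informative when it has a unique placement:

```python
# Illustrative sketch: why short reads (e.g. ~35 bp Heliscope reads) can be
# hard to place. A read is unambiguously "mappable" only if it occurs at
# exactly one position in the reference; repeats make short reads ambiguous.

def map_read(reference: str, read: str):
    """Return every start position where `read` matches `reference` exactly."""
    hits, start = [], 0
    while True:
        i = reference.find(read, start)
        if i == -1:
            return hits
        hits.append(i)
        start = i + 1

reference = "ACGTACGTTTGACGTACGT"        # contains the repeat ACGTACGT twice
ambiguous = map_read(reference, "ACGTACGT")  # two placements: [0, 11]
unique    = map_read(reference, "TTTGA")     # one placement: [7]
```

Longer reads span more of the repeat structure and so collapse more of these ambiguities, which is the practical cost of the 35 bp average read length.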
DNA sequencing These devices have the ability to form and process picoliter-sized droplets at the rate of thousands per second. The devices were created from polydimethylsiloxane (PDMS) and used Förster resonance energy transfer (FRET) assays to read the sequences of DNA encompassed in the droplets. Each position on the array tested for a specific 15-base sequence. Fair et al. used digital microfluidic devices to study DNA pyrosequencing. Significant advantages include the portability of the device, reagent volume, speed of analysis, mass manufacturing abilities, and high throughput. This study provided a proof of concept showing that digital devices can be used for pyrosequencing; the study used sequencing by synthesis, which involves enzymatic extension and the addition of labeled nucleotides. Boles et al. also studied pyrosequencing on digital microfluidic devices. They used an electro-wetting device to create, mix, and split droplets. The sequencing uses a three-enzyme protocol and DNA templates anchored with magnetic beads. The device was tested using two protocols and resulted in 100% accuracy based on raw pyrogram levels. The advantages of these digital microfluidic devices include size, cost, and achievable levels of functional integration. Microfluidic research can also be applied to the sequencing of RNA, using similar droplet microfluidic techniques such as the inDrops method
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing This suggests that many of these techniques can be applied further and used to understand more about genomes and transcriptomes. Sequencing methods currently under development include reading the sequence as a DNA strand transits through nanopores (a method that is now commercial, though subsequent generations such as solid-state nanopores are still in development), and microscopy-based techniques, such as atomic force microscopy or transmission electron microscopy, that are used to identify the positions of individual nucleotides within long DNA fragments (>5,000 bp) by nucleotide labeling with heavier elements (e.g., halogens) for visual detection and recording. Third-generation technologies aim to increase throughput and decrease the time to result and cost by eliminating the need for excessive reagents and harnessing the processivity of DNA polymerase. Another approach uses measurements of the electrical tunnelling currents across single-strand DNA as it moves through a channel. Depending on its electronic structure, each base affects the tunnelling current differently, allowing differentiation between different bases. The use of tunnelling currents has the potential to sequence orders of magnitude faster than ionic current methods, and the sequencing of several DNA oligomers and micro-RNA has already been achieved. "Sequencing by hybridization" is a non-enzymatic method that uses a DNA microarray
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing A single pool of DNA whose sequence is to be determined is fluorescently labeled and hybridized to an array containing known sequences. Strong hybridization signals from a given spot on the array identify its sequence in the DNA being sequenced. This method of sequencing utilizes the binding characteristics of a library of short single-stranded DNA molecules (oligonucleotides), also called DNA probes, to reconstruct a target DNA sequence. Non-specific hybrids are removed by washing and the target DNA is eluted. Hybrids are re-arranged such that the DNA sequence can be reconstructed. The benefit of this sequencing type is its ability to capture a large number of targets with homogeneous coverage. A large amount of chemicals and starting DNA is usually required. However, with the advent of solution-based hybridization, much less equipment and fewer chemicals are necessary. Mass spectrometry may be used to determine DNA sequences. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry, or MALDI-TOF MS, has specifically been investigated as an alternative method to gel electrophoresis for visualizing DNA fragments. With this method, DNA fragments generated by chain-termination sequencing reactions are compared by mass rather than by size. The mass of each nucleotide is different from the others and this difference is detectable by mass spectrometry. Single-nucleotide mutations in a fragment can be more easily detected with MS than by gel electrophoresis alone
https://en.wikipedia.org/wiki?curid=1158125
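The reconstruction step described above — rebuilding a target sequence from the set of probes that hybridized — can be sketched for the idealized case of an error-free spectrum with no repeated (k−1)-mers. This is a toy model; real sequencing by hybridization must also cope with hybridization errors and repeats:

```python
# Minimal sketch of sequence reconstruction from a k-mer "spectrum": chain
# probes whose (k-1)-length suffix/prefix overlap, starting from the probe
# whose prefix is not any other probe's suffix. Assumes an error-free
# spectrum with a unique reconstruction (no repeated (k-1)-mers).

def reconstruct(spectrum):
    suffixes = {kmer[1:] for kmer in spectrum}
    # the starting probe: its (k-1)-prefix is no other probe's suffix
    start = next(kmer for kmer in spectrum if kmer[:-1] not in suffixes)
    seq, current = start, start
    remaining = set(spectrum) - {start}
    while remaining:
        # extend by the unique probe overlapping the current one in k-1 bases
        nxt = next(kmer for kmer in remaining if kmer[:-1] == current[1:])
        seq += nxt[-1]
        remaining.remove(nxt)
        current = nxt
    return seq

probes = {"ATG", "TGG", "GGC", "GCA"}  # the 3-mer spectrum of ATGGCA
target = reconstruct(probes)           # rebuilds "ATGGCA"
```

When the target contains a repeated (k−1)-mer the chain is no longer unique, which is exactly the ambiguity that limited early SBH read lengths.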
DNA sequencing MALDI-TOF MS can more easily detect differences between RNA fragments, so researchers may indirectly sequence DNA with MS-based methods by converting it to RNA first. The higher resolution of DNA fragments permitted by MS-based methods is of special interest to researchers in forensic science, as they may wish to find single-nucleotide polymorphisms in human DNA samples to identify individuals. These samples may be highly degraded, so forensic researchers often prefer mitochondrial DNA for its higher stability and applications for lineage studies. MS-based sequencing methods have been used to compare the sequences of human mitochondrial DNA from samples in a Federal Bureau of Investigation database and from bones found in mass graves of World War I soldiers. Early chain-termination and TOF MS methods demonstrated read lengths of up to 100 base pairs. Researchers have been unable to exceed this average read size; like chain-termination sequencing alone, MS-based sequencing may not be suitable for large "de novo" sequencing projects. Even so, a recent study did use the short sequence reads and mass spectrometry to compare single-nucleotide polymorphisms in pathogenic "Streptococcus" strains. In microfluidic Sanger sequencing the entire thermocycling amplification of DNA fragments as well as their separation by electrophoresis is done on a single glass wafer (approximately 10 cm in diameter), thus reducing the reagent usage as well as cost
https://en.wikipedia.org/wiki?curid=1158125
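The mass-based comparison of chain-termination fragments can be sketched as follows. The residue masses below are approximate average values included purely for illustration; a real analysis would use exact monoisotopic masses and account for the terminator chemistry:

```python
# Sketch of the idea behind MS readout of chain-termination products:
# fragments are distinguished by mass rather than electrophoretic mobility,
# so a single-nucleotide substitution shows up as a small mass shift.

RESIDUE_MASS = {"A": 313.21, "C": 289.18, "G": 329.21, "T": 304.20}  # approx. Da
WATER = 18.02  # terminal H and OH

def fragment_mass(seq: str) -> float:
    """Approximate average mass of a single-stranded DNA fragment."""
    return round(sum(RESIDUE_MASS[b] for b in seq) + WATER, 2)

wild_type = fragment_mass("ACGTACGT")
mutant    = fragment_mass("ACGTGCGT")      # single A->G substitution
delta = round(mutant - wild_type, 2)       # A->G shifts the mass by ~16 Da
```

An A-to-G substitution changes the fragment mass by about 16 Da regardless of fragment length, which is why MS detects such point differences more directly than a size-based gel.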
DNA sequencing In some instances researchers have shown that they can increase the throughput of conventional sequencing through the use of microchips. Research will still need to be done in order to make this use of technology effective. This approach directly visualizes the sequence of DNA molecules using electron microscopy. The first identification of DNA base pairs within intact DNA molecules — by enzymatically incorporating modified bases containing atoms of increased atomic number — and the direct visualization and identification of individually labeled bases within a synthetic 3,272 base-pair DNA molecule and a 7,249 base-pair viral genome have been demonstrated. This method is based on use of RNA polymerase (RNAP), which is attached to a polystyrene bead. One end of the DNA to be sequenced is attached to another bead, with both beads being placed in optical traps. RNAP motion during transcription brings the beads closer together and changes their relative distance, which can then be recorded at single-nucleotide resolution. The sequence is deduced from four readouts with lowered concentrations of each of the four nucleotide types, similarly to the Sanger method. A comparison is made between regions, and sequence information is deduced by comparing the known sequence regions to the unknown sequence regions. A method has been developed to analyze full sets of protein interactions using a combination of 454 pyrosequencing and an "in vitro" virus mRNA display method
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing Specifically, this method covalently links proteins of interest to the mRNAs encoding them, then detects the mRNA pieces using reverse transcription PCR. The mRNA may then be amplified and sequenced. The combined method was titled IVV-HiTSeq and can be performed under cell-free conditions, though its results may not be representative of "in vivo" conditions. The success of any protocol relies upon the DNA or RNA sample extraction and preparation from the biological material of interest. Depending on the sequencing technology to be used, the samples resulting from either the DNA or the RNA extraction require further preparation. For Sanger sequencing, either cloning procedures or PCR are required prior to sequencing. In the case of next-generation sequencing methods, library preparation is required before processing. Assessing the quality and quantity of nucleic acids both after extraction and after library preparation identifies degraded, fragmented, and low-purity samples and yields high-quality sequencing data. The high-throughput nature of current DNA/RNA sequencing technologies has posed a challenge for sample preparation methods to scale up
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing Several liquid handling instruments are being used for the preparation of higher numbers of samples with a lower total hands-on time. In October 2006, the X Prize Foundation established an initiative to promote the development of full genome sequencing technologies, called the Archon X Prize, intending to award $10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 (US) per genome." Each year the National Human Genome Research Institute, or NHGRI, promotes grants for new research and developments in genomics. 2010 grants and 2011 candidates include continuing work in microfluidic, polony and base-heavy sequencing methodologies. The sequencing technologies described here produce raw data that needs to be assembled into longer sequences such as complete genomes (sequence assembly). There are many computational challenges to achieve this, such as the evaluation of the raw sequence data, which is done by programs and algorithms such as Phred and Phrap. Other challenges involve repetitive sequences that often prevent complete genome assemblies because they occur in many places of the genome. As a consequence, many sequences may not be assigned to particular chromosomes
https://en.wikipedia.org/wiki?curid=1158125
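The assembly challenge mentioned above can be illustrated with a greedy overlap merger. This is a deliberately naive sketch: real assemblers such as Phrap use quality scores and far more robust overlap-layout-consensus steps, and genomic repeats defeat this greedy approach — which is exactly the limitation the text describes:

```python
# Toy sequence assembly: repeatedly merge the pair of reads with the longest
# suffix/prefix overlap until a single contig remains.

def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a[-n:] == b[:n]:
            return n
    return 0

def assemble(reads):
    reads = list(reads)
    while len(reads) > 1:
        # pick the pair with maximal overlap and merge it
        n, a, b = max(((overlap(a, b), a, b)
                       for a in reads for b in reads if a != b),
                      key=lambda t: t[0])
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])   # merge, keeping the overlap once
    return reads[0]

# three overlapping reads drawn from the target ACGTACGTTTA
contig = assemble(["ACGTAC", "TACGTT", "GTTTA"])
```

With clean, unique overlaps the reads chain back into the original target; a repeated region would produce ties between candidate merges and the greedy choice could collapse the repeat, leaving sequences unassigned.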
DNA sequencing The production of raw sequence data is only the beginning of its detailed bioinformatical analysis. New methods for sequencing and for correcting sequencing errors have also been developed. Sometimes, the raw reads produced by the sequencer are correct and precise only in a fraction of their length. Using the entire read may introduce artifacts in downstream analyses like genome assembly, SNP calling, or gene expression estimation. Two classes of trimming programs have been introduced, based on the window-based or the running-sum classes of algorithms. This is a partial list of the trimming algorithms currently available, specifying the algorithm class they belong to: Human genetics has been included within the field of bioethics since the early 1970s, and the growth in the use of DNA sequencing (particularly high-throughput sequencing) has introduced a number of ethical issues. One key issue is the ownership of an individual's DNA and the data produced when that DNA is sequenced. Regarding the DNA molecule itself, the leading legal case on this topic, "Moore v. Regents of the University of California" (1990), ruled that individuals have no property rights to discarded cells or any profits made using these cells (for instance, as a patented cell line). However, individuals have a right to informed consent regarding removal and use of cells. Regarding the data produced through DNA sequencing, "Moore" gives the individual no rights to the information derived from their DNA
https://en.wikipedia.org/wiki?curid=1158125
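The two trimming classes named above can be sketched on a list of per-base Phred quality scores. The parameter defaults are hypothetical, and the parenthetical comparisons describe the general spirit of well-known tools rather than their exact algorithms:

```python
# Sketches of the two read-trimming strategies: window-based and running-sum.
# Input is a list of per-base Phred quality scores (higher = more reliable).

def window_trim(quals, window=4, threshold=20):
    """Cut the read at the first window whose mean quality drops below the
    threshold (window-based class, in the spirit of sliding-window trimmers)."""
    for i in range(len(quals) - window + 1):
        if sum(quals[i:i + window]) / window < threshold:
            return quals[:i]
    return quals

def running_sum_trim(quals, threshold=20):
    """Keep the prefix that maximizes the running sum of (q - threshold)
    (running-sum class, similar in spirit to BWA-style quality trimming)."""
    best, best_end, total = 0, 0, 0
    for i, q in enumerate(quals):
        total += q - threshold
        if total > best:
            best, best_end = total, i + 1
    return quals[:best_end]

q = [30, 32, 31, 28, 25, 12, 10, 8, 35, 9]   # quality decays toward the 3' end
```

On this example the window method cuts as soon as a 4-base window averages below 20, while the running-sum method tolerates a brief dip if the preceding bases have banked enough surplus quality — the characteristic behavioral difference between the two classes.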
DNA sequencing As genome sequencing becomes more widespread, the storage, security and sharing of genomic data has also become more important. For instance, one concern is that insurers may use an individual's genomic data to modify their quote, depending on the perceived future health of the individual based on their DNA. In May 2008, the Genetic Information Nondiscrimination Act (GINA) was signed in the United States, prohibiting discrimination on the basis of genetic information with respect to health insurance and employment. In 2012, the US Presidential Commission for the Study of Bioethical Issues reported that existing privacy legislation for data such as GINA and the Health Insurance Portability and Accountability Act were insufficient, noting that whole-genome sequencing data was particularly sensitive, as it could be used to identify not only the individual from which the data was created, but also their relatives. Ethical issues have also been raised by the increasing use of genetic variation screening, both in newborns, and in adults by companies such as 23andMe. It has been asserted that screening for genetic variations can be harmful, increasing anxiety in individuals who have been found to have an increased risk of disease. For example, in one case noted in "Time", doctors screening an ill baby for genetic variants chose not to inform the parents of an unrelated variant linked to dementia due to the harm it would cause to the parents
https://en.wikipedia.org/wiki?curid=1158125
DNA sequencing However, a 2011 study in "The New England Journal of Medicine" has shown that individuals undergoing disease risk profiling did not show increased levels of anxiety.
https://en.wikipedia.org/wiki?curid=1158125
Bridgman–Stockbarger method The Bridgman–Stockbarger method, or Bridgman–Stockbarger technique, is named after Harvard physicist Percy Williams Bridgman (1882–1961) and MIT physicist Donald C. Stockbarger (1895–1952). The method includes two similar but distinct techniques primarily used for growing boules (single crystal ingots), but which can be used for solidifying polycrystalline ingots as well. The methods involve heating polycrystalline material above its melting point and slowly cooling it from one end of its container, where a seed crystal is located. A single crystal of the same crystallographic orientation as the seed material is grown on the seed and is progressively formed along the length of the container. The process can be carried out in a horizontal or vertical orientation, and usually involves a rotating crucible/ampoule to stir the melt. The Bridgman method is a popular way of producing certain semiconductor crystals such as gallium arsenide, for which the Czochralski method is more difficult. The process can reliably produce single crystal ingots, but does not necessarily result in uniform properties through the crystal. The difference between the Bridgman technique and Stockbarger technique is subtle: While both methods utilize a temperature gradient and a moving crucible, the Bridgman technique utilizes the relatively uncontrolled gradient produced at the exit of the furnace; the Stockbarger technique introduces a baffle, or shelf, separating two coupled furnaces with temperatures above and below the freezing point
https://en.wikipedia.org/wiki?curid=1159136
Bridgman–Stockbarger method Stockbarger's modification of the Bridgman technique allows for better control over the temperature gradient at the melt/crystal interface. When seed crystals are not employed as described above, polycrystalline ingots can be produced from a feedstock consisting of rods, chunks, or any irregularly shaped pieces once they are melted and allowed to re-solidify. The resultant microstructure of the ingots so obtained is characteristic of directionally solidified metals and alloys, with their aligned grains. A variant of the technique known as the horizontal directional solidification method (HDSM), developed by Khachik Bagdasarov starting in the 1960s in the Soviet Union, uses a flat-bottomed crucible with short sidewalls rather than an enclosed ampoule, and has been used to grow various large oxide crystals including Yb:YAG (a laser host crystal) and sapphire crystals 45 cm wide and over 1 meter long.
https://en.wikipedia.org/wiki?curid=1159136
Seed crystal A seed crystal is a small piece of single-crystal or polycrystalline material from which a large crystal of typically the same material is to be grown in a laboratory. Used to replicate material, a seed crystal promotes growth that avoids the otherwise slow randomness of natural crystal growth and allows manufacture on a scale suitable for industry. The large crystal can be grown by dipping the seed into a supersaturated solution, into molten material that is then cooled, or by growth on the seed face by passing vapor of the material to be grown over it. The theory behind this effect is thought to derive from the physical intermolecular interaction that occurs between compounds in a supersaturated solution (or possibly vapor). In solution, liberated (soluble) molecules (solute) are free to move about in random flow. This random flow allows two or more molecular compounds to interact. This interaction can potentiate intermolecular forces between the separate molecules and form a basis for a crystal lattice. The placement of a seed crystal into solution expedites the recrystallization process by eliminating the need for random molecular collision or interaction. By introducing an already pre-formed basis of the target crystal to act upon, the intermolecular interactions are formed much more easily and readily than by relying on random flow. Often, this phase transition from solute in a solution to a crystal lattice is referred to as nucleation
https://en.wikipedia.org/wiki?curid=1159186
Seed crystal Seeding is therefore said to decrease the amount of time needed for nucleation to occur in a recrystallization process. One example where a seed crystal is used to grow large boules or ingots of a single crystal is the semiconductor industry, where methods such as the Czochralski process or Bridgman technique are employed. Seed crystals are also used as the coagulant in water-purifier packets, which help purify dirty water. During the process of tempering chocolate, seed crystals can likewise be used to promote the growth of the favorable type V crystals.
https://en.wikipedia.org/wiki?curid=1159186
Jagannatha Samrat Paṇḍita Jagannātha Samrāṭ (1652–1744) was an Indian astronomer and mathematician who served in the court of Jai Singh II of Amber, and was also his guru. Jagannātha, whose father's name was Gaṇeśa and grandfather's Viṭṭhala, was from a Vedic family originally from Maharashtra. At the suggestion of Jai Singh, he learned Arabic and Persian in order to study Islamic astronomy. Having become proficient in these languages, he translated texts in these languages into Sanskrit. These translations include: His original works include: Jagannātha held that when theory and observation differed, observation was the true "pramāṇa" and overruled theory. While he used and described a number of astronomical instruments, telescopes were not one of them.
https://en.wikipedia.org/wiki?curid=1160141
Network automaton A network automaton (plural network automata) is a mathematical system consisting of a network of nodes that evolves over time according to predetermined rules. It is similar in concept to a cellular automaton, but much less studied. Stephen Wolfram's book "A New Kind of Science", which is primarily concerned with cellular automata, briefly discusses network automata, and suggests (without positive evidence) that the universe might at the very lowest level be a network automaton.
https://en.wikipedia.org/wiki?curid=1162374
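A minimal example of such a system can make the idea concrete. This is a toy construction, not taken from Wolfram's book: nodes carry binary states and update synchronously by a fixed rule over their current neighborhood (richer network automata also rewire the edges themselves):

```python
# A toy network automaton: states evolve over a graph by a fixed local rule.
# Here each node adopts the strict majority state of its neighbors
# (ties resolve to 0); the network itself stays fixed in this minimal version.

def step(states, adjacency):
    """One synchronous update over the whole network."""
    new = {}
    for node, neighbors in adjacency.items():
        ones = sum(states[n] for n in neighbors)
        new[node] = 1 if ones * 2 > len(neighbors) else 0
    return new

# a 4-node ring: each node sees its two ring neighbors
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = {0: 1, 1: 1, 2: 0, 3: 0}
states = step(states, adjacency)
```

The analogy with a cellular automaton is direct: replace the regular grid neighborhood with an arbitrary adjacency structure, and allow the update rule to act on (and, in the general case, modify) that structure.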
N-vector model In statistical mechanics, the "n"-vector model or O("n") model is a simple system of interacting spins on a crystalline lattice. It was developed by H. Eugene Stanley as a generalization of the Ising model, XY model and Heisenberg model. In the "n"-vector model, "n"-component unit-length classical spins formula_1 are placed on the vertices of a "d"-dimensional lattice. The Hamiltonian of the "n"-vector model is given by H = −J Σ⟨i,j⟩ s_i · s_j, where the sum runs over all pairs of neighboring spins formula_3 and formula_4 denotes the standard Euclidean inner product. Special cases of the "n"-vector model are: The general mathematical formalism used to describe and solve the "n"-vector model and certain generalizations is developed in the article on the Potts model.
https://en.wikipedia.org/wiki?curid=1165549
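The n = 2 special case (the XY model) can be sampled with a short Metropolis Monte Carlo sketch. The lattice size, temperature, and sweep count below are arbitrary illustrative choices, and the full-energy recomputation per proposal is kept only for clarity:

```python
# Metropolis sampling of the n = 2 ("XY") case of the n-vector model on a
# small 1D periodic lattice, with H = -J * sum over neighbor pairs of
# s_i . s_j, where each unit spin is s = (cos a, sin a).
import math
import random

def energy(angles, J=1.0):
    """Total energy of the spin configuration (periodic nearest neighbors)."""
    L = len(angles)
    return -J * sum(math.cos(angles[i] - angles[(i + 1) % L]) for i in range(L))

def metropolis_sweep(angles, beta=2.0, J=1.0):
    """One sweep: propose a fresh random direction per spin, accept/reject."""
    for _ in range(len(angles)):
        i = random.randrange(len(angles))
        old, e_old = angles[i], energy(angles, J)
        angles[i] = random.uniform(0.0, 2.0 * math.pi)
        dE = energy(angles, J) - e_old
        if dE > 0 and random.random() >= math.exp(-beta * dE):
            angles[i] = old   # reject the move: restore the old spin
    return angles

random.seed(0)
spins = [random.uniform(0, 2 * math.pi) for _ in range(16)]
for _ in range(200):
    metropolis_sweep(spins, beta=2.0)
# at low temperature (large beta) neighboring spins align and the energy drops
aligned_energy = energy(spins)
```

Changing the spin representation from an angle to an n-component unit vector (and the proposal to a random point on the n-sphere) turns the same loop into a sampler for general n.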
Society of Economic Geologists The Society of Economic Geologists (SEG) is a scientific organization that promotes the study of geology as it relates to mining, mineral exploration, mineral resource classification and mineral extraction. The society's Publication Board publishes the scientific journal "Economic Geology". The society serves 7,000+ members worldwide who are committed to advancing the science and the discovery of mineral resources through research, publications, courses, and field trips. SEG originated from a 1919 gathering of a group of Geological Society of America (GSA) members who were especially interested in economic geology. The Society was established on December 28, 1920, during a constituting meeting of 60 distinguished professionals.
https://en.wikipedia.org/wiki?curid=1173332
Bort
https://en.wikipedia.org/wiki?curid=1173544
Sclerometer The sclerometer, also known as the Turner-sclerometer (from the Greek "sklērós", meaning "hard"), is an instrument used by metallurgists, material scientists and mineralogists to measure the scratch hardness of materials. It was invented in 1896 by Thomas Turner (1861–1951), the first Professor of Metallurgy in Britain, at the University of Birmingham. The Turner test consists of microscopically measuring the width of a scratch made by a diamond under a fixed load, drawn across the face of the specimen under fixed conditions.
https://en.wikipedia.org/wiki?curid=1174281
Wu Youxun Wu Youxun, or Y. H. Woo (26 February 1897 – 30 November 1977), was a Chinese physicist. His courtesy name was Zhèngzhī (正之). Wu graduated from the Department of Physics of Nanjing Higher Normal School (later renamed National Central University and Nanjing University), and was later associated with the Department of Physics at Tsinghua University. He served as president of National Central University and Jiaotong University in Shanghai. When he was a graduate student at the University of Chicago he studied x-ray and electron scattering, and verified the Compton effect, work for which Arthur Compton received the Nobel Prize in Physics. In 2000, the Chinese Physical Society established five prizes in recognition of five pioneers of modern physics in China. The Wu Youxun Prize is awarded to physicists in nuclear physics.
https://en.wikipedia.org/wiki?curid=1178383
Pieter Kok Pieter Kok, Ph.D. (born June 1972), is a Dutch physicist and one of the co-developers of quantum interferometric optical lithography. Kok was born in Friesland in the Netherlands. In 1997 he graduated from the University of Utrecht with a degree in Foundations of Quantum Theory. In 2001, he received his PhD in physics from the University of Wales, Bangor. His research specializations include linear optical implementations of quantum communication and computation protocols, quantum teleportation and the interpretation of quantum theory. Dr. Kok has worked in the Quantum Computing Technologies Group at the NASA/Jet Propulsion Laboratory in Pasadena, California, at Hewlett-Packard Laboratories in Bristol, England, and at the Department of Materials, University of Oxford. He is a Professor of Theoretical Physics at the University of Sheffield. He and his wife, Rose Roberto, live in northern England with their two children.
https://en.wikipedia.org/wiki?curid=1179730
Boracite Boracite is a magnesium borate mineral with the formula Mg3B7O13Cl. It occurs as blue-green, colorless, gray, yellow to white crystals in the orthorhombic - pyramidal crystal system. Boracite also shows pseudo-isometric cubical and octahedral forms. These are thought to be the result of transition from an unstable high-temperature isometric form on cooling. Penetration twins are not unusual. It occurs as well-formed crystals and dispersed grains, often embedded within gypsum and anhydrite crystals. It has a Mohs hardness of 7 to 7.5 and a specific gravity of 2.9. Refractive index values are nα = 1.658 - 1.662, nβ = 1.662 - 1.667 and nγ = 1.668 - 1.673. It has a conchoidal fracture and does not show cleavage. It is insoluble in water (not to be confused with borax, which is soluble in water). Boracite is typically found in evaporite sequences associated with gypsum, anhydrite, halite, sylvite, carnallite, kainite and hilgardite. It was first described in 1789 for specimens from its type locality of Kalkberg hill, Lüneburg, Lower Saxony, Germany. The name is derived from its boron content (19 to 20% boron by mass).
https://en.wikipedia.org/wiki?curid=1179801