text | source |
|---|---|
NGC 5068 is a face-on field barred spiral galaxy in the constellation Virgo. It is located approximately 22 million light-years away and has a diameter exceeding 45,000 light-years. | https://en.wikipedia.org/wiki?curid=8073352 |
NGC 5624 is a spiral galaxy in the Hydra constellation. | https://en.wikipedia.org/wiki?curid=8073511 |
Elso Sterrenberg Barghoorn (June 15, 1915 – January 22, 1984) was an American paleobotanist, called by his student Andrew Knoll, the present Fisher Professor of Natural History at Harvard, "the father of Pre-Cambrian palaeontology." Barghoorn is best known for discovering, in South African rocks, fossil evidence of life that is at least 3.4 billion years old. These fossils show that life was present on Earth comparatively soon after the Late Heavy Bombardment (about 3.8 billion years ago). Barghoorn was born in New York City. After graduating from Miami University with a BSc and an MSc in biology, he obtained his Ph.D. in paleobotany from Harvard University's faculty of Biological Sciences in 1941. After teaching for five years at Amherst College, he joined the Harvard faculty, becoming Fisher Professor of Natural History and Curator of the University's plant fossil collections. He was elected a Fellow of the American Academy of Arts and Sciences in 1950. In 1972 Barghoorn was awarded the Charles Doolittle Walcott Medal of the National Academy of Sciences. Barghoorn married three times: Margaret Alden MaCleod in 1941, then Teresa Joan LaCroix, and Dorothy Dellmer Osgood in 1964. The first two marriages ended in divorce. | https://en.wikipedia.org/wiki?curid=8076881 |
Reaction step A reaction step of a chemical reaction is defined as: "An elementary reaction, constituting one of the stages of a stepwise reaction in which a reaction intermediate (or, for the first step, the reactants) is converted into the next reaction intermediate (or, for the last step, the products) in the sequence of intermediates between reactants and products". | https://en.wikipedia.org/wiki?curid=8078684 |
Gemmatimonadetes The Gemmatimonadetes are a phylum of bacteria created for the type species "Gemmatimonas aurantiaca". This bacterium makes up about 2% of soil bacterial communities and has been identified as one of the top nine phyla found in soils; yet, there are currently only six cultured isolates. Gemmatimonadetes have been found in a variety of arid soils, such as grassland, prairie, and pasture soil, as well as in eutrophic lake sediments and alpine soils. This wide range of environments suggests an adaptation to low soil moisture. One study showed that the distribution of the Gemmatimonadetes in soil tends to depend more on moisture availability than on aggregation, reinforcing the belief that members of this phylum prefer drier soils. The phylum is distinct from the phylum "Cyanobacteria" and may have diverged in early microbial evolution at least 3 billion years ago. The first member of this phylum was discovered in 2003 in activated sludge in a sewage treatment system and named "Gemmatimonas aurantiaca". This bacterium, identified as strain T-27T, is Gram-negative and is the only member of the phylum that has been studied in depth. Its metabolic pathways and enzymes are unique, and it is able to grow by both aerobic and anaerobic respiration. The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN), the National Center for Biotechnology Information, and the All-Species Living Tree Project. | https://en.wikipedia.org/wiki?curid=8080850 |
Effective radius The effective radius or half-light radius (R_e) of a galaxy is the radius at which half of the total light of the system is emitted. This assumes the galaxy has either intrinsic spherical symmetry or is at least circularly symmetric as viewed in the plane of the sky. Alternatively, a half-light contour, or isophote, may be used for spherically and circularly asymmetric objects. R_e is an important length scale in de Vaucouleurs' R^(1/4) law, which characterizes a specific rate at which surface brightness decreases as a function of radius: I(R) = I_e · exp(−7.669 [(R/R_e)^(1/4) − 1]), where I_e is the surface brightness at R = R_e. At R = 0, I(0) = I_e · e^(7.669) ≈ 2000 I_e. Thus, the central surface brightness is approximately 2000 I_e. | https://en.wikipedia.org/wiki?curid=8082019 |
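As a numerical sketch of the de Vaucouleurs profile, assuming the standard form I(R) = I_e·exp(−7.669[(R/R_e)^(1/4) − 1]) with the conventional constant 7.669:

```python
import math

def de_vaucouleurs(R, Re, Ie):
    """Surface brightness I(R) under the de Vaucouleurs R^(1/4) law.

    Ie is the surface brightness at the effective (half-light) radius Re.
    """
    return Ie * math.exp(-7.669 * ((R / Re) ** 0.25 - 1))

Ie = 1.0
# At R = Re the profile returns Ie by construction.
print(de_vaucouleurs(1.0, 1.0, Ie))
# Central surface brightness: I(0) = Ie * e^7.669, roughly 2000 * Ie.
print(de_vaucouleurs(0.0, 1.0, Ie))
```

The factor of roughly 2000 between the central and effective-radius surface brightness is what the closing sentence above refers to.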
Nonlinear X-wave In physics, a nonlinear X-wave (NLX) is a multi-dimensional wave that can travel without distortion. Unlike linear X-waves, a nonlinear X-wave exists in the presence of nonlinearity, and in many cases it self-generates from a wave packet that is Gaussian in every direction. The distinctive feature of an NLX is its "biconical" shape, which appears as an "X" in any section plane containing the wave peak and the direction of propagation. So far, nonlinear X-waves have been observed only in nonlinear optics experiments, and have been predicted to occur in a variety of nonlinear media including Bose–Einstein condensates. | https://en.wikipedia.org/wiki?curid=8082812 |
Adriano de Paiva (1847–1907) was a Portuguese scientist who was one of the pioneers of the telectroscope. He worked at the Polytechnic Academy of Porto and conducted research into selenium as a material for transmitting images. His work followed the discovery of photoconductivity in selenium by Willoughby Smith in 1873. In the 19th century he suggested the use of a chemical that would enable images to be transmitted over a considerable range. | https://en.wikipedia.org/wiki?curid=8087137 |
Stephen Demainbray Stephen Charles Triboudet Demainbray (1710 – 20 February 1782) was an English natural scientist and astronomer, who was Superintendent (or King's Astronomer) at the King's Observatory in Richmond, Surrey (now in London) from 1768 to 1782. Demainbray was born in the parish of St Martins, London in 1710. His father, who had come to England from France following the revocation of the Edict of Nantes, died soon afterwards, and he was brought up by his uncle, who placed him at Westminster School. There he studied under Dr Desaguliers, who taught him mathematics and natural philosophy. After that he went to Leiden University. In 1727 he married; his wife died in 1750. From 1740 to 1742 he lectured in experimental philosophy in Edinburgh. The 1745 Jacobite Rising led him to take up arms for the government for four years, and he was a volunteer at the Battle of Prestonpans. In 1746 he resumed his lectures, and worked on the influence of electricity on vegetables. Three years later, he began travelling throughout Britain and Europe, lecturing in Dublin and Paris. In 1753, he was invited to London by the Prince of Wales, later George III, and the Duke of York. On his return to England he married his second wife, Sarah Horne, who was a sister of John Horne Tooke. In 1755 he read a public course of lectures in the concert-room in Panton Street, and later gave private courses to members of the royal family, including the future King George III. | https://en.wikipedia.org/wiki?curid=8088110 |
Stephen Demainbray In 1768, he was appointed Superintendent of the King's Observatory (or King's Astronomer) in Richmond, which King George III had commissioned from Sir William Chambers. He arranged for George III to see the Transit of Venus on 3 June 1769. He held that appointment until his death on 20 February 1782. His assistant there was James Stephen Rigaud, who married Demainbray's daughter Mary in Richmond in 1771. His instruments were combined with the King's collection and given to King's College London and then, in 1927, to the Science Museum. | https://en.wikipedia.org/wiki?curid=8088110 |
Residual chemical shift anisotropy (RCSA) is the difference between the chemical shift anisotropy (CSA) of aligned and non-aligned molecules. It is normally three orders of magnitude smaller than the static CSA, with values on the order of parts-per-billion (ppb). RCSA is useful for structural determination and it is among the new developments in NMR spectroscopy. | https://en.wikipedia.org/wiki?curid=8088302 |
Cloudscape photography is photography of clouds or sky. An early cloudscape photographer, Belgian photographer Léonard Misonne (1870–1943), was noted for his black and white photographs of heavy skies and dark clouds. In the early to middle 20th century, American photographer Alfred Stieglitz (1864–1946) created a series of photographs of clouds, called "equivalents" (1925–1931). According to an essay on the series at the Phillips Collection website, "A symbolist aesthetic underlies these images, which became increasingly abstract equivalents of his own experiences, thoughts, and emotions". More recently, photographers such as Ralph Steiner, Robert Davies and Tzeli Hadjidimitriou (see catalogues listed below) have been noted for producing such images. | https://en.wikipedia.org/wiki?curid=8089026 |
Excitation temperature The excitation temperature (T_ex) is defined for a population of particles via the Boltzmann factor. It satisfies n_u/n_l = (g_u/g_l) exp(−ΔE/(k T_ex)), where n_u and n_l represent the number of particles in an upper ("e.g." excited) and lower ("e.g." ground) state, g_u and g_l their statistical weights respectively, ΔE the energy difference between the two levels, and k the Boltzmann constant. Thus the excitation temperature is the temperature at which we would expect to find a system with this ratio of level populations. However it has no actual physical meaning except when in local thermodynamic equilibrium. The excitation temperature can even be negative for a system with inverted levels (such as a maser). In observations of the 21 cm line of hydrogen, the apparent value of the excitation temperature is often called the "spin temperature". | https://en.wikipedia.org/wiki?curid=8092026 |
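The defining relation can be inverted to compute T_ex from observed level populations; a minimal sketch (assuming the Boltzmann-factor form n_u/n_l = (g_u/g_l)·exp(−ΔE/kT_ex), with illustrative population numbers):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def excitation_temperature(n_upper, n_lower, g_upper, g_lower, delta_E):
    """Solve n_u/n_l = (g_u/g_l) * exp(-dE/(k*T_ex)) for T_ex in kelvin.

    delta_E is the upper-minus-lower energy gap in joules.
    """
    ratio = (n_upper / n_lower) * (g_lower / g_upper)
    return -delta_E / (K_B * math.log(ratio))

# 21 cm hydrogen line: dE = h * 1420.4 MHz, with g_u = 3, g_l = 1.
h = 6.62607015e-34
dE = h * 1420.4e6
# Normal population (n_u/g_u < n_l/g_l): positive spin temperature.
print(excitation_temperature(2.9, 1.0, 3, 1, dE))
# Inverted population (n_u/g_u > n_l/g_l), as in a maser: T_ex < 0.
print(excitation_temperature(3.5, 1.0, 3, 1, dE))
```

The sign flip for inverted populations is exactly the "negative excitation temperature" mentioned above.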
Acoustic contrast factor The acoustic contrast factor is a number used to describe the relationship between the densities and the sound velocities (or, equivalently because of the form of the expression, the densities and compressibilities) of two media. It is most often used in the context of biomedical ultrasonic imaging techniques using acoustic contrast agents and in the field of ultrasonic manipulation of particles much smaller than the wavelength using ultrasonic standing waves. In the latter context, the acoustic contrast factor is the number which, depending on its sign, tells whether a given type of particle in a given medium will be attracted to the pressure nodes or anti-nodes. Given the compressibilities β_0 and β_p and densities ρ_0 and ρ_p of the medium and particle, respectively, the acoustic contrast factor Φ can be expressed as Φ = (5ρ_p − 2ρ_0)/(2ρ_p + ρ_0) − β_p/β_0. For a positive value of Φ, the particles will be attracted to the pressure nodes, and vice versa. | https://en.wikipedia.org/wiki?curid=8099272 |
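A short sketch of the sign test described above, assuming the standard contrast-factor expression Φ = (5ρ_p − 2ρ_0)/(2ρ_p + ρ_0) − β_p/β_0 and approximate literature values for a polystyrene bead in water:

```python
def acoustic_contrast_factor(beta_0, beta_p, rho_0, rho_p):
    """Acoustic contrast factor for a small particle in a standing wave.

    beta_0, rho_0: compressibility (1/Pa) and density (kg/m^3) of the medium;
    beta_p, rho_p: the same quantities for the particle.
    """
    return (5 * rho_p - 2 * rho_0) / (2 * rho_p + rho_0) - beta_p / beta_0

# Polystyrene in water (approximate values, assumed for illustration):
phi = acoustic_contrast_factor(beta_0=4.48e-10, beta_p=2.16e-10,
                               rho_0=998.0, rho_p=1050.0)
print(phi)        # positive, so the bead migrates to the pressure nodes
```

A positive Φ here reproduces the familiar acoustofluidics result that rigid polymer beads in water collect at the pressure nodes of a standing wave.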
Genkei Masamune | https://en.wikipedia.org/wiki?curid=8109635 |
Newton–Wigner localization (named after Theodore Duddell Newton and Eugene Wigner) is a scheme for obtaining a position operator for massive relativistic quantum particles. It is known to largely conflict with the Reeh–Schlieder theorem outside of a very limited scope. The Newton–Wigner position operators x_1, x_2, x_3 are the premier notion of position in relativistic quantum mechanics of a single particle. They enjoy the same commutation relations with the three space momentum operators and transform under rotations in the same way as the x_1, x_2, x_3 in ordinary QM. Though formally they have the same properties with respect to the momenta p_1, p_2, p_3 as the position in ordinary QM, they have additional properties. One of these is that d x_k/dt = p_k c^2/H, which ensures that the free particle moves at the expected velocity with the given momentum/energy. Apparently these notions were discovered when attempting to define a self-adjoint operator in the relativistic setting that resembled the position operator in basic quantum mechanics, in the sense that at low momenta it approximately agreed with that operator. It also has several famously strange behaviors, one of which is seen as the motivation for having to introduce quantum field theory. | https://en.wikipedia.org/wiki?curid=8117002 |
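The velocity property alluded to in this passage can be sketched with the Heisenberg equation of motion, assuming the standard free-particle Hamiltonian and canonical commutators [x_k, p_j] = iħ δ_kj:

```latex
% With H = \sqrt{p^2 c^2 + m^2 c^4}, the commutator
% [H, x_k] = -i\hbar\,\partial H/\partial p_k = -i\hbar\,p_k c^2/H gives
\frac{d\hat{x}_k}{dt}
  = \frac{i}{\hbar}\left[\hat{H}, \hat{x}_k\right]
  = \frac{\hat{p}_k\, c^2}{\hat{H}}
```

which is the operator version of the classical relativistic velocity v = pc^2/E.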
Unitarity gauge In theoretical physics, the unitarity gauge or unitary gauge is a particular choice of a gauge fixing in a gauge theory with a spontaneous symmetry breaking. In this gauge, the scalar fields responsible for the Higgs mechanism are transformed into a basis in which their Goldstone boson components are set to zero. In other words, the unitarity gauge makes the manifest number of scalar degrees of freedom minimal. The gauge was introduced to particle physics by Steven Weinberg in the context of the electroweak theory. In electroweak theory, the degrees of freedom in a unitarity gauge are the massive spin-1 W+, W− and Z bosons with three polarizations each, the photon with two polarizations, and the scalar Higgs boson. The unitarity gauge is usually used in tree-level calculations. For loop calculations, other gauge choices such as the 't Hooft–Feynman gauge often reduce the mathematical complexity of the calculation. | https://en.wikipedia.org/wiki?curid=8120688 |
Konishi anomaly In theoretical physics, the Konishi anomaly is the violation of the conservation of the Noether current associated with certain transformations in theories with N=1 supersymmetry. More precisely, this transformation changes the phase of a chiral superfield. It shouldn't be confused with the R-symmetry that also depends on the fermionic superspace variables. The divergence of the corresponding Noether current for the Konishi transformation is nonzero but can be exactly expressed using the superpotential. The anomaly is named after its discoverer Kenichi Konishi, who is currently full professor of Theoretical Physics at the Physics Department E. Fermi of the University of Pisa, Italy. | https://en.wikipedia.org/wiki?curid=8120897 |
R-symmetry In theoretical physics, the R-symmetry is the symmetry transforming different supercharges in a theory with supersymmetry into each other. In the simplest case of the "N"=1 supersymmetry, such an R-symmetry is isomorphic to a global U(1) group or its discrete subgroup (for the Z_2 subgroup it is called R-parity). For extended supersymmetry, the R-symmetry group becomes a global non-abelian group. In a model that is classically invariant under both "N"=1 supersymmetry and conformal transformations, the closure of the superconformal algebra (at least on-shell) needs the introduction of a further bosonic generator that is associated to the R-symmetry. | https://en.wikipedia.org/wiki?curid=8120935 |
Wess–Zumino gauge In particle physics, the Wess–Zumino gauge is a particular choice of a gauge transformation in a gauge theory with supersymmetry. In this gauge, the supersymmetrized gauge transformation is chosen in such a way that most components of the vector superfield vanish, except for the usual physical ones, when the function of the superspace is expanded in terms of components. | https://en.wikipedia.org/wiki?curid=8121089 |
El Capitan (Mars) El Capitan is a layered rock outcrop found within the Margaritifer Sinus quadrangle (MC-19) region of the planet Mars. The outcrop was first observed by the Mars Exploration Rover "Opportunity" in February 2004. It was named for El Capitan, a mountain in the state of Texas. | https://en.wikipedia.org/wiki?curid=8132949 |
Last Chance (Mars) Last Chance is a layered rock outcrop found within the Margaritifer Sinus quadrangle (MC-19) region of the planet Mars, discovered by the Mars Exploration Rover "Opportunity" in March 2004. The rock lies within the outcrop near the rover's landing site at Meridiani Planum, Mars. Images returned show evidence for a geologic feature known as ripple cross-stratification. At the base of the rock, layers can be seen dipping downward to the right. The bedding that contains these dipping layers is only one to two centimeters (0.4 to 0.8 inches) thick. In the upper right corner of the rock, layers also dip to the right, but exhibit a weak "concave-up" geometry. These two features—the thin, cross-stratified bedding combined with the possible concave geometry—suggest small ripples with sinuous crest lines. Although wind can produce ripples, they rarely have sinuous crest lines and never form steep, dipping layers at such a small scale. The most probable explanation for these ripples is that they were formed in the presence of moving water. | https://en.wikipedia.org/wiki?curid=8133165 |
Exfoliation (botany) Exfoliation (from the term "foliate", meaning "related to leaves") means the removal or loss of leaves from a plant. It is used to describe the loss of leaves either as a natural part of a plant's life cycle (such as in the case of deciduous trees, which lose their leaves in the autumn) or because of some trauma or outside cause (such as dehydration, an infestation of caterpillars, or hurricane-force winds). In arboriculture, the term "exfoliating bark" describes the natural process and condition of bark peeling away from a tree trunk, typically in large pieces that remain partially attached to the trunk until they are completely detached by the elements or by the subsequent exfoliation of additional layers of bark. Examples of trees with exfoliating bark are the paperbark maple and various species of plane (sycamore) and birch. | https://en.wikipedia.org/wiki?curid=8136768 |
I-CreI I-"Cre"I is a homing endonuclease whose gene was first discovered in the chloroplast genome of "Chlamydomonas reinhardtii", a species of unicellular green algae. It is named for the facts that it resides in an intron; it was isolated from "Chlamydomonas reinhardtii"; and it was the first (I) such gene isolated from "C. reinhardtii". Its gene resides in a group I intron in the 23S ribosomal RNA gene of the "C. reinhardtii" chloroplast, and I-"Cre"I is only expressed when its mRNA is spliced from the primary transcript of the 23S gene. I-"Cre"I enzyme, which functions as a homodimer, recognizes a 22-nucleotide sequence of duplex DNA and cleaves one phosphodiester bond on each strand at specific positions. I-"Cre"I is a member of the LAGLIDADG family of homing endonucleases, all of which have a conserved LAGLIDADG amino acid motif that contributes to their associative domains and active sites. When the I-"Cre"I-containing intron encounters a 23S gene lacking the intron, I-"Cre"I enzyme "homes" in on the "intron-minus" allele of 23S and effects its parent intron's insertion into the intron-minus allele. Introns with this behavior are called mobile introns. Because I-"Cre"I provides for its own propagation while conferring no benefit on its host, it is an example of selfish DNA. I-"Cre"I was first observed as an intervening sequence in the 23S rRNA gene of the "C. reinhardtii" chloroplast genome. The 23S gene is an RNA gene, meaning that its transcript is not translated into protein. | https://en.wikipedia.org/wiki?curid=8142925 |
I-CreI As RNA, it forms part of the large subunit of the ribosome. An open reading frame coding for a 163-amino acid protein was found in this 23S intron, suggesting that a protein might facilitate the homing behavior of the mobile intron. Furthermore, the predicted protein had a LAGLIDADG motif, a conserved amino acid sequence that is present in other proteins coded for in group I mobile introns. A 1991 study established that the ORF codes for a DNA endonuclease, I-"Cre"I, which selectively cuts a site corresponding to where the intron is spliced out of the 23S primary transcript. The study also showed that the intron was able to invade 23S alleles that did not already have it. I-"Cre"I has evolved to cut a 22-nucleotide sequence of DNA that occurs in alleles of the 23S ribosomal RNA gene that lack the I-"Cre"I-containing intron. When such an "intron-minus" allele is cut, pathways of double-strand break repair are activated in the cell. The cell uses as a template for repair the 23S allele that yielded the responsible I-"Cre"I enzyme, thus replicating the I-"Cre"I-containing intron. The resulting "intron-plus" allele no longer contains an intact homing site for the I-"Cre"I enzyme, and is therefore not cleaved. Since this intron provides for its own replication without conferring any benefit on its host, I-"Cre"I is a form of selfish DNA. | https://en.wikipedia.org/wiki?curid=8142925 |
I-CreI Because I-"Cre"I has evolved to cut such a long sequence of DNA, unlike restriction endonucleases that typically cut four- or six-nucleotide sequences, it is capable of cutting a single site within a very large genome. A four- or six-nucleotide sequence is expected to occur many times in a genome of millions or billions of nucleotides simply by chance, whereas a 22-nucleotide sequence might occur only once (for a billion-base-pair genome, on the order of 10^9/4^6 expected occurrences of a six-nucleotide site versus 10^9/4^22 for a 22-nucleotide site). This specificity of I-"Cre"I cleavage makes I-"Cre"I a promising tool for gene targeting. If a person were to have a disease due to a defective allele of some gene, it would be helpful to be able to replace that allele with a functional one. If one could cause I-"Cre"I to cut the DNA only in the defective allele while simultaneously providing a normal allele for the cell to use as a repair template, the patient's own homologous recombination machinery could insert the desired allele in place of the dysfunctional one. The specificity of I-"Cre"I also allows for the reduction of deleterious effects due to double-strand breaks outside of the gene of interest. In order to use I-"Cre"I as a tool in this fashion, it is necessary to make it recognize and cleave sequences of DNA different from its native homing site. An "Escherichia coli" genetic system for studying the relationship between I-"Cre"I structure and its homing site specificity was created in 1997. | https://en.wikipedia.org/wiki?curid=8142925 |
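The occurrence statistics behind this specificity argument can be sketched as code (a hypothetical helper, assuming a random genome with all four bases equally likely and counting one strand only):

```python
def expected_occurrences(genome_length, site_length):
    """Expected number of chance occurrences of a specific DNA sequence
    of length site_length in a random genome of genome_length bases,
    assuming each of the 4 bases is equally likely at every position."""
    return genome_length / 4 ** site_length

genome = 10 ** 9  # a billion-base-pair genome
print(expected_occurrences(genome, 6))    # a 6-nt restriction site: many hits
print(expected_occurrences(genome, 22))   # a 22-nt homing site: far below one
```

This is why a 22-nucleotide recognition sequence is effectively unique even in a large genome, while a typical restriction site is not.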
I-CreI In 1997, the structure of the protein was determined, and in 1998, its crystal structure bound to its native DNA homing site was solved, greatly aiding research in altering the homing site recognition of the protein. Mutant forms of the protein have since been created that exhibit altered homing site specificity. A genetic system in "Saccharomyces cerevisiae" has also been created, yielding additional I-"Cre"I mutants with modified homing site specificities. I-"Cre"I has already been used successfully to induce homologous recombination in "Drosophila melanogaster", an extremely popular eukaryotic model organism. It seems very likely that advances in molecular biological techniques and generation of a library of I-"Cre"I-derived novel endonucleases will eventually allow for the targeting of many genes of etiological significance. | https://en.wikipedia.org/wiki?curid=8142925 |
Zdeněk Veselovský Prof. Zdeněk Veselovský (August 26, 1928 – November 24, 2006) was one of the most important Czech zoologists of the 20th century, the founder of Czech ethology, a highly successful director of the Prague Zoo (1959–1988), and president of the International Union of Directors of Zoological Gardens (1971–1975), renamed in 2000 the World Association of Zoos and Aquariums. | https://en.wikipedia.org/wiki?curid=8142938 |
Allen Lowrie (born 1948) is a Western Australian botanist. He is recognised for his expertise on the genera "Drosera" and "Stylidium". Lowrie, originally a businessman and inventor, first experienced the carnivorous flora of Western Australia in the late 1960s and studied it as an amateur. Over time, his hobby turned into a profession, and Lowrie discovered and described numerous species (especially of "Drosera", "Byblis" and "Utricularia"), some together with Neville Marchant. From 1987 to 1998 he published "Carnivorous Plants of Australia" in three volumes; a fourth is in preparation. Lowrie lives in Duncraig, a Perth suburb, is married and has two daughters. | https://en.wikipedia.org/wiki?curid=8146294 |
Fixation (histology) In the fields of histology, pathology, and cell biology, fixation is the preservation of biological tissues from decay due to autolysis or putrefaction. It terminates any ongoing biochemical reactions and may also increase the treated tissues' mechanical strength or stability. Tissue fixation is a critical step in the preparation of histological sections, its broad objective being to preserve cells and tissue components in such a way as to allow for the preparation of thin, stained sections. This allows the investigation of the tissues' structure, which is determined by the shapes and sizes of such macromolecules (in and around cells) as proteins and nucleic acids. In performing their protective role, fixatives denature proteins by coagulation, by forming additive compounds, or by a combination of coagulation and additive processes. A compound that adds chemically to macromolecules stabilizes structure most effectively if it is able to combine with parts of two different macromolecules, an effect known as cross-linking. Fixation of tissue is done for several reasons. One reason is to kill the tissue so that postmortem decay (autolysis and putrefaction) is prevented. Fixation preserves biological material (tissue or cells) as close to its natural state as possible in the process of preparing tissue for examination. To achieve this, several conditions usually must be met. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) First, a fixative usually acts to disable intrinsic biomolecules—particularly proteolytic enzymes—which otherwise digest or damage the sample. Second, a fixative typically protects a sample from extrinsic damage. Fixatives are toxic to most common microorganisms (bacteria in particular) that might exist in a tissue sample or that might otherwise colonize the fixed tissue. In addition, many fixatives chemically alter the fixed material to make it less palatable (either indigestible or toxic) to opportunistic microorganisms. Finally, fixatives often alter the cells or tissues on a molecular level to increase their mechanical strength or stability. This increased strength and rigidity can help preserve the morphology (shape and structure) of the sample as it is processed for further analysis. Even the most careful fixation does alter the sample and introduce artifacts that can interfere with interpretation of cellular ultrastructure. A prominent example is the bacterial "mesosome", which was thought to be an organelle in gram-positive bacteria in the 1970s, but was later shown by new techniques developed for electron microscopy to be simply an artifact of chemical fixation. Standardization of fixation and other tissue processing procedures takes this introduction of artifacts into account, by establishing which procedures introduce which kinds of artifacts. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) Researchers who know what types of artifacts to expect with each tissue type and processing technique can accurately interpret sections with artifacts, or choose techniques that minimize artifacts in areas of interest. Fixation is usually the first stage in a multistep process to prepare a sample of biological material for microscopy or other analysis. Therefore, the choice of fixative and fixation protocol may depend on the additional processing steps and final analyses that are planned. For example, immunohistochemistry uses antibodies that bind to a specific protein target. Prolonged fixation can chemically mask these targets and prevent antibody binding. In these cases, a 'quick fix' method using cold formalin for around 24 hours is typically used. Methanol (100%) can also be used for quick fixation, and the fixation time can vary depending on the biological material. For example, MDA-MB 231 human breast cancer cells can be fixed for only 3 minutes with cold methanol (−20 °C). For enzyme localization studies, the tissues should either be only lightly pre-fixed, or post-fixed after the enzyme activity product has formed. There are generally three types of fixation processes, depending on the initial specimen. Heat fixation: After a smear has dried at room temperature, the slide is gripped by tongs or a clothespin and passed through the flame of a Bunsen burner several times to heat-kill and adhere the organism to the slide. It is routinely used with bacteria and archaea. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) Heat fixation generally preserves overall morphology but not internal structures. Heat denatures proteolytic enzymes and prevents autolysis. Heat fixation cannot be used in the capsular stain method, as it will shrink or destroy the capsule (glycocalyx) so that it cannot be seen in stains. Immersion: The sample of tissue is immersed in a volume of fixative at least 20 times greater than the volume of the tissue to be fixed. The fixative must diffuse through the tissue to fix it, so tissue size and density, as well as the type of fixative, must be considered. This is a common technique for cellular applications. A larger sample means it takes longer for the fixative to reach the deeper tissue. Perfusion: Fixation via blood flow. The fixative is injected into the heart with the injection volume matching cardiac output. The fixative spreads through the entire body, and the tissue does not die until it is fixed. This has the advantage of preserving perfect morphology, but the disadvantages are that the subject dies and that the cost of the volume of fixative needed for larger organisms is high. In both immersion and perfusion fixation, chemical fixatives are used to preserve structures in a state (both chemically and structurally) as close to living tissue as possible. Crosslinking fixatives act by creating covalent chemical bonds between proteins in tissue. This anchors soluble proteins to the cytoskeleton and lends additional rigidity to the tissue. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) Preservation of transient or fine cytoskeletal structure, such as contractions during embryonic differentiation waves, is best achieved by pretreatment with microwaves before the addition of a crosslinking fixative. The most commonly used fixative in histology is formaldehyde. It is usually used as 10% neutral buffered formalin (NBF), that is, approximately 3.7%–4.0% formaldehyde in phosphate buffer, pH 7. Since formaldehyde is a gas at room temperature, formalin – formaldehyde gas dissolved in water (~37% w/v) – is used when making this fixative. Formaldehyde fixes tissue by cross-linking the proteins, primarily the residues of the basic amino acid lysine. Its effects are reversible by excess water, and it avoids formalin pigmentation. Paraformaldehyde is also commonly used; it depolymerises back to formalin when heated, making it an effective fixative as well. Other benefits of paraformaldehyde include long-term storage and good tissue penetration. It is particularly good for immunohistochemistry techniques. Formaldehyde vapor can also be used as a fixative for cell smears. Another popular aldehyde for fixation is glutaraldehyde. It operates similarly to formaldehyde, causing the deformation of the α-helices of proteins. However, glutaraldehyde is a larger molecule than formaldehyde and so permeates membranes more slowly. Consequently, glutaraldehyde fixation of thicker tissue samples can be difficult; this can be mitigated by reducing the size of the tissue sample. | https://en.wikipedia.org/wiki?curid=8156507 |
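The relationship quoted above between 10% NBF and ~3.7–4% formaldehyde is just a 1-in-10 dilution of the ~37% w/v formalin stock; a quick arithmetic sketch:

```python
stock_formaldehyde = 37.0   # % w/v formaldehyde in concentrated formalin
dilution = 0.10             # "10% formalin" = 1 part stock + 9 parts buffer

# Final formaldehyde concentration of 10% neutral buffered formalin:
final_concentration = stock_formaldehyde * dilution
print(final_concentration)  # % w/v, matching the ~3.7%-4.0% quoted for NBF
```

So the "10%" in the name refers to the dilution of the formalin stock, not to the formaldehyde content itself.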
Fixation (histology) One of the advantages of glutaraldehyde fixation is that it may offer a more rigid or tightly linked fixed product – its greater length and two aldehyde groups allow it to 'bridge' and link more distant pairs of protein molecules. It causes rapid and irreversible changes, is well suited for electron microscopy, works well at 4 °C, and gives the best overall cytoplasmic and nuclear detail. It is, however, not ideal for immunohistochemical staining. Some fixation protocols call for a combination of formaldehyde and glutaraldehyde so that their respective strengths complement one another. These crosslinking fixatives, especially formaldehyde, tend to preserve the secondary structure of proteins and may also preserve most tertiary structure. Precipitating (or "denaturing") fixatives act by reducing the solubility of protein molecules, often by disrupting the hydrophobic interactions that give many proteins their tertiary structure. The precipitation and aggregation of proteins is a very different process from the crosslinking that occurs with aldehyde fixatives. The most common precipitating fixatives are ethanol and methanol. They are commonly used to fix frozen sections and smears. Acetone is also used and has been shown to produce better histological preservation than frozen sections when employed in the acetone methylbenzoate xylene (AMEX) technique. The protein-denaturing agents methanol, ethanol and acetone are rarely used alone for fixing blocks unless nucleic acids are being studied. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) Acetic acid is a denaturant that is sometimes used in combination with the other precipitating fixatives, such as Davidson's AFA. The alcohols, by themselves, are known to cause considerable shrinkage and hardening of tissue during fixation, while acetic acid alone is associated with tissue swelling; combining the two may result in better preservation of tissue morphology. The oxidizing fixatives can react with the side chains of proteins and other biomolecules, allowing the formation of crosslinks that stabilize tissue structure. However, they cause extensive denaturation despite preserving fine cell structure, and are used mainly as secondary fixatives. Osmium tetroxide is often used as a secondary fixative when samples are prepared for electron microscopy. (It is not used for light microscopy, as it penetrates thick sections of tissue very poorly.) Potassium dichromate, chromic acid, and potassium permanganate all find use in certain specific histological preparations. Mercurials such as B-5 and Zenker's fixative act by an unknown mechanism that increases staining brightness and gives excellent nuclear detail. Despite being fast, mercurials penetrate poorly and produce tissue shrinkage. Their best application is for fixation of hematopoietic and reticuloendothelial tissues. Since they contain mercury, care must be taken with their disposal. Picrates penetrate tissue well, reacting with histones and basic proteins to form crystalline picrates with amino acids and precipitating all proteins. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) It is a good fixative for connective tissue, preserves glycogen well, and extracts lipids; it gives superior results to formaldehyde in immunostaining of biogenic and polypeptide hormones. However, it causes a loss of basophilia unless the specimen is thoroughly washed following fixation. HEPES–glutamic acid buffer-mediated organic solvent protection effect (HOPE) fixation gives formalin-like morphology, excellent preservation of protein antigens for immunohistochemistry and enzyme histochemistry, good RNA and DNA yields, and an absence of protein crosslinking. The pH of the fixative should be kept in the physiological range, between pH 4 and 9; for preservation of ultrastructure, it should be buffered between 7.2 and 7.4. Hypertonic solutions give rise to cell shrinkage, while hypotonic solutions result in cell swelling and poor fixation. 10% neutral buffered formalin is 4% formaldehyde (1.33 osmolar) in PBS buffer (0.3 osmolar), for a total of 1.63 osmolar; this is a very hypertonic solution, yet it has worked well as a general tissue fixation condition for many years in pathology labs. The size of the tissue also affects the fixation process: specimens should ideally be 1–4 mm thick [up to 0.5 cm], and the volume of fixative should be at least 15–20 times greater than the tissue volume. Increasing the temperature increases the speed of fixation, but care is required to avoid cooking the specimen; fixation is routinely carried out at room temperature. As a general rule, paraformaldehyde (PFA) requires 1 hour to penetrate each millimetre of tissue. | https://en.wikipedia.org/wiki?curid=8156507 |
Fixation (histology) For a three-dimensional tissue block, it is therefore the shortest dimension that determines fixation time. Fixation is a chemical process, and time must be allowed for it to complete. Although "over-fixation" can be detrimental, under-fixation has recently been recognized as a significant problem and may be responsible for inappropriate results in some assays. | https://en.wikipedia.org/wiki?curid=8156507 |
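The penetration and volume rules quoted above can be sketched numerically. This is a minimal illustration, not a protocol: the halving of the shortest dimension assumes fixative enters from opposing faces, and the function names are invented for this example.

```python
def fixation_time_hours(dims_mm, rate_mm_per_hr=1.0):
    """Estimate fixation time for a tissue block.

    Fixative penetrates from all faces, so the shortest dimension
    governs; at ~1 mm/hour it must travel half that depth from each
    opposing face to reach the centre of the block.
    """
    return (min(dims_mm) / 2.0) / rate_mm_per_hr

def fixative_volume_ml(tissue_volume_ml, ratio=20):
    """Recommended fixative volume: 15-20x the tissue volume."""
    return tissue_volume_ml * ratio

# A 4 mm x 10 mm x 10 mm block: the 4 mm dimension sets the time
hours = fixation_time_hours((4, 10, 10))   # 2.0 hours
volume = fixative_volume_ml(0.4)           # ~8 ml of fixative for a 0.4 ml block
```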
Mother liquor A mother liquor is the part of a solution that is left over after crystallization. It is encountered in chemical processes including sugar refining. It is the liquid that remains once the crystals have been removed by filtration. In crystallization, a solid (usually impure) is dissolved in a solvent at high temperature, taking advantage of the fact that most solids are more soluble at higher temperatures. As the solution cools, the solubility of the solute in the solvent gradually decreases. The resultant solution is described as supersaturated, meaning that more solute is dissolved in the solution than would be predicted by its solubility at that temperature. Crystallization can then be induced from this supersaturated solution, and the resultant pure crystals removed by such methods as vacuum filtration and centrifugal separation. The remaining solution, once the crystals have been filtered out, is known as the mother liquor; it will contain a portion of the original solute (as predicted by its solubility at that temperature) as well as any impurities that were not filtered out. Second and third crops of crystals can then be harvested from the mother liquor. | https://en.wikipedia.org/wiki?curid=8163362 |
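The mass balance described above can be sketched as a short calculation. The solubility figures below are illustrative only, not data for any particular solute, and the function name is invented for this example.

```python
def recrystallization_yield(solute_g, solvent_g, sol_hot, sol_cold):
    """Single-pass crystallization mass balance.

    sol_hot / sol_cold: solubility in g of solute per 100 g of solvent
    at the dissolution and crystallization temperatures.  Returns
    (crystals harvested, solute left dissolved in the mother liquor).
    """
    # The hot solvent must be able to dissolve everything to start with
    assert solute_g <= sol_hot * solvent_g / 100.0, "will not fully dissolve"
    # On cooling, the liquor retains only what solubility allows
    retained = sol_cold * solvent_g / 100.0
    crystals = max(solute_g - retained, 0.0)
    return crystals, solute_g - crystals

# 50 g of impure solute in 100 g of solvent:
# solubility 60 g/100 g when hot, 10 g/100 g when cold
crystals, in_liquor = recrystallization_yield(50, 100, 60, 10)
# -> 40.0 g of crystals; 10.0 g stays dissolved in the mother liquor,
#    from which a second crop could later be harvested
```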
Natural History Review The "Natural History Review" was a short-lived quarterly journal devoted to natural history. It was published in Dublin and London between 1854 and 1865. The "Natural History Review" included the transactions of the Belfast Natural History and Philosophical Society, the Cork Cuvierian Society, the Dublin Natural History Society, the Dublin University Zoological Association, and the Literary and Scientific Institution of Kilkenny, as authorised... It was founded by Edward Perceval Wright, who was also its editor. The parts are: Vols 1–4, 1854–57; the title concludes: ...by the Councils of these Societies (the Geological Society of Dublin was later added to the list). This was continued as "Natural History Review, and quarterly journal of science", edited by Edward Perceval Wright, William Henry Harvey, Joseph Reay Greene, Samuel Haughton and Alexander Henry Haliday, London, Vols 5–7, 1858–60. In turn it was continued as "Natural History Review: a quarterly journal of biological science", edited by George Busk, William Benjamin Carpenter, F. Currey et al., London, Vols 1–5, 1861–65; no more were published. | https://en.wikipedia.org/wiki?curid=8177105 |
Giacinto Cestoni Diacinto (or Giacinto) Cestoni (May 13, 1637 – January 29, 1718) was an Italian naturalist. Born in Montegiorgio, he was self-taught. He lived and worked at Livorno, where he ran a pharmacy next to the port. He studied fleas and algae, and showed that scabies is provoked by "Sarcoptes scabiei". | https://en.wikipedia.org/wiki?curid=8178922 |
Blue Norther (weather) A Blue Norther, also known as a Texas Norther, is a fast-moving cold front marked by a rapid drop in temperature, strong winds, and dark blue or "black" skies. The cold front originates from the north, hence the "norther", and can send temperatures plummeting by 20 or 30 degrees in a matter of minutes. The Midwestern United States lacks natural geographic barriers to protect it from the frigid winter air masses that originate in Canada and the Arctic. Multiple times per year, conditions become favorable to push severe cold fronts as far south as Texas, bringing sleet and snow and causing the wind chill to plunge into the teens. Depending on the time of year, high temperatures immediately preceding a Texas Norther can reach 85 °F or even 90 °F under bright sunlight in nearly calm conditions before the cold front approaches. However, most Blue Northers do not advance as far south as Mexico, and even the most severe examples typically reach their apex midway through Texas. For example, cities in North Texas, such as Dallas, experience drastically more Blue Northers than cities along the Gulf of Mexico, such as Houston. As a city is struck by a Blue Norther, its temperatures can be 30 to 50 degrees colder than those of neighboring cities only a few miles away that have not yet been struck. Blue Northers can be dangerous because their volatile temperature swings catch some people unprepared. Blue Northers occur multiple times per year. | https://en.wikipedia.org/wiki?curid=8195160 |
Blue Norther (weather) They are usually recorded between the months of November and March, although they have been recorded less frequently in October and April as well. The Blue Norther phenomenon is especially common in November, when the last vestiges of autumn are still clinging to life. One of the most famous Blue Northers was the Great Blue Norther of November 11, 1911, which spawned multiple tornadoes and dropped temperatures 40 degrees in only 15 minutes and 67 degrees in 10 hours, a world record. | https://en.wikipedia.org/wiki?curid=8195160 |
Luke Chia-Liu Yuan (; April 5, 1912 – February 11, 2003) was a Chinese-American physicist and grandson of Yuan Shikai, the first president of the Republic of China from 1912 to 1916. Born in Anyang, Henan, Yuan attended Yenching University in Beijing, the University of California at Berkeley, and the California Institute of Technology. He began living in the United States in 1936. That same year, he attended the University of California, Berkeley, and met the renowned physicist Chien-Shiung Wu, whom he married in 1942. She took part in the Manhattan Project and conducted the Wu experiment, which earned her the Wolf Prize in Physics. For financial reasons, Yuan transferred to Caltech, where he did his doctoral training under the Nobel laureate Robert A. Millikan. Yuan worked at RCA Laboratories and then at Brookhaven National Laboratory as a senior physicist and science educator. In 1958, he was awarded a Guggenheim Fellowship for Natural Sciences. He helped found the Synchrotron Radiation Research Center of Taiwan and the Wu-Yuan Natural Science Foundation. After being ill for over a year, Yuan died on February 11, 2003, in Beijing. He was survived by his granddaughter Jada Yuan (a writer in New York City), his son Vincent Yuan (a nuclear physicist in New Mexico) and his brother Yuan Jiaji of Tianjin. Some of his and his wife's belongings were donated to the Chien-Shiung Wu Memorial Hall in Nanjing, China. | https://en.wikipedia.org/wiki?curid=8197474 |
Marine chemist A marine chemist is an environmental, occupational safety and health professional responsible for ensuring that repair and construction of marine vessels can be carried out safely whenever that work might result in fire, explosion, or exposure to toxic vapors or chemicals. By virtue of training, experience, and education, the marine chemist is uniquely qualified as a specialist in confined-space safety and atmospheric sampling and monitoring. The need for marine chemists stems from the nature of ships and their cargoes. Basic ship design, with open decks and enclosed spaces for cargoes, has remained essentially unchanged for centuries, even as structural materials and methods have evolved. Today's cargoes, however, include a far greater number of toxic substances. That has added the health concern of toxicity to the existing safety concerns of fire and explosion, not just during a voyage, but even when a vessel is in a shipyard for routine maintenance and repair. In 1963, the National Fire Protection Association (NFPA) assumed jurisdiction over the marine chemist program. The NFPA continues to oversee the profession, which is based on NFPA Standard 306: Standard for the Control of Gas Hazards on Vessels. | https://en.wikipedia.org/wiki?curid=8202165 |
Mason equation The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet; it is due to the meteorologist B. J. Mason. The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer (and by the energy needed to heat the drop, but for a cloud-sized drop this last term is usually small). In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign, and this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than would be expected if the drop were not warmed by the latent heat. Thus, for a drop of radius "r", the inward mass flow rate and the sensible heat flux can each be written down, and balancing them yields the final expression for the growth rate. | https://en.wikipedia.org/wiki?curid=8202435 |
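The equations themselves did not survive extraction from the source text; the following is a reconstruction of the standard form in which these relations usually appear in cloud physics, with conventional symbols rather than necessarily Mason's own notation: "D" is the vapour diffusivity, "K" the thermal conductivity of air, "L" the latent heat of vaporisation, "S" the saturation ratio, "ρ_l" the liquid water density, "e_s(T)" the saturated vapour pressure and "R_v" the gas constant of water vapour.

```latex
% Inward mass flow toward a drop of radius r, by vapour diffusion:
\frac{dM}{dt} = 4\pi r D \,(\rho_\infty - \rho_r)

% Sensible heat conducted back across the boundary layer:
\frac{dQ}{dt} = 4\pi r K \,(T_r - T_\infty)

% Balancing the latent-heat and sensible-heat fluxes through the
% Clausius--Clapeyron relation gives the Mason equation:
r\frac{dr}{dt} = \frac{S - 1}{F_k + F_d},
\qquad
F_k = \left(\frac{L}{R_v T} - 1\right)\frac{L\rho_l}{K T},
\qquad
F_d = \frac{\rho_l R_v T}{D\,e_s(T)}
```

The term "F_k" accounts for the latent-heat warming of the drop and "F_d" for vapour diffusion; their sum in the denominator is why the growth rate is lower than pure diffusion would predict.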
Ramification (botany) In botany, ramification is the divergence of the stem and limbs of a plant into smaller ones, i.e. trunk into branches, branches into increasingly smaller branches, etc. Gardeners stimulate the process of ramification through pruning, thereby making trees, shrubs and other plants bushier and denser. Short internodes (the section of stem between nodes, i.e. areas where leaves are produced) help increase ramification in those plants that form branches at these nodes. Long internodes (which may be the result of over-watering, the over-use of fertilizer, or a seasonal "growth spurt") decrease a gardener's ability to induce ramification in a plant. A high degree of ramification is essential for the creation of topiary as it enables the topiary artist to carve a bush or hedge into a shape with an even surface. Ramification is also essential to practitioners of the art of bonsai as it helps recreate the form and habit of a full-size tree in a small tree grown in a container. The pruning practices of coppicing and pollarding induce ramification by removing most of a tree's mass above the root. Fruit tree pruning increases the yield of orchards by inducing ramification and thereby creating many vigorous, fruitful branches in the place of a few less-fruitful ones. | https://en.wikipedia.org/wiki?curid=8208376 |
Weissenberg effect The Weissenberg effect is a phenomenon that occurs when a spinning rod is inserted into a solution of elastic liquid. Instead of being thrown outward, the solution is drawn towards the rod and rises up around it. This is a direct consequence of the normal stress that acts like a hoop stress around the rod. The effect is named after Karl Weissenberg. | https://en.wikipedia.org/wiki?curid=8209052 |
Resorption is the absorption into the circulatory system of cells or tissue, usually by osteoclasts. Types of resorption include: | https://en.wikipedia.org/wiki?curid=8210612 |
Gaussian broadening refers to broadening effects in spectral lines; these can be produced, for example, by Doppler broadening. The effect is similar to the Gaussian blur effect in image processing, produced by convolution with the Gaussian function. The term is named after Carl Friedrich Gauss. | https://en.wikipedia.org/wiki?curid=8211384 |
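The convolution described above can be sketched in a few lines. This is a minimal illustration assuming an evenly spaced spectral grid; the function name is invented for this example, and only NumPy is used.

```python
import numpy as np

def gaussian_broaden(x, spectrum, sigma):
    """Broaden a spectrum by convolution with a normalized Gaussian.

    x        : evenly spaced wavelength/frequency grid
    spectrum : intensities sampled on that grid
    sigma    : Gaussian standard deviation, in the same units as x
    """
    dx = x[1] - x[0]
    half = int(round(4 * sigma / dx))      # truncate the kernel at +/- 4 sigma
    k = np.arange(-half, half + 1) * dx
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()                 # normalize to preserve total intensity
    return np.convolve(spectrum, kernel, mode="same")

# A single sharp ("stick") line at x = 0 on a grid of 1001 points
x = np.linspace(-5, 5, 1001)
line = np.zeros_like(x)
line[500] = 1.0

broadened = gaussian_broaden(x, line, sigma=0.5)
# The line spreads into a Gaussian profile centred on the same position;
# total intensity is conserved while the peak height drops well below 1.0
```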
Helicoidal flow is the corkscrew-like flow of water in a meander. It is one example of a secondary flow. Helicoidal flow is a contributing factor in the formation of slip-off slopes and river cliffs in a meandering section of a river. The helicoidal motion of the flow aids the processes of hydraulic action and corrasion on the outside of the meander, and sweeps sediment across the floor of the meander towards its inside, forming point bar deposits. | https://en.wikipedia.org/wiki?curid=8215751 |
Pollen-presenter A pollen-presenter is an area on the tip of the style in flowers of plants of the family Proteaceae on which the anthers release their pollen prior to anthesis. To ensure pollination, the style grows during anthesis, sticking out the pollen-presenter prominently, and so ensuring that the pollen easily contacts the bodies of potential pollination vectors such as bees, birds and nectarivorous mammals. The systematic depositing of pollen on the tip of the style implies the plants have some strategy to avoid excessive self-pollination. | https://en.wikipedia.org/wiki?curid=8216619 |
Indian National Chemistry Olympiad The Indian National Chemistry Olympiad (INChO for short) is an Olympiad in chemistry held in India. The theory part of the INChO examination is held in late January or early February every year. It is conducted by the Indian Association of Chemistry Teachers. School students (usually of standards 11 and 12) must first qualify in the National Standard Examination in Chemistry (NSEC), held in November of the preceding year. Of the 30,000+ students who sit for the NSEC, only the top 1% are selected for the INChO. About 35 students are selected through the written examination. From these, a total of 30 students are selected to attend the Orientation-cum-Selection Camp (OCSC) in chemistry, held at HBCSE, Mumbai. Most of the students qualifying for the INChO are those completing their twelfth standard; however, there have been some cases of students qualifying at the end of the eleventh, or even tenth, standard. The OCSC in chemistry consists of rigorous training and testing in theory and experiment. The top four performers are selected to represent India at the International Chemistry Olympiad (IChO). Before the IChO, the selected team undergoes rigorous training in theory and experiment in a pre-departure training camp held at HBCSE. | https://en.wikipedia.org/wiki?curid=8228402 |
Johan Erik Vesti Boas (2 July 1855 – 25 January 1935), also J.E.V. Boas, was a Danish zoologist and a disciple of Carl Gegenbaur and Steenstrup. At the beginning and end of his career, he worked at the Zoological Museum of Copenhagen. During an intervening period of 35 years, however, Boas worked at the Veterinary and Agricultural University of Copenhagen, because he had felt passed over in the appointment to the museum curator post, which went instead to G.M.R. Levinsen. | https://en.wikipedia.org/wiki?curid=8234511 |
Asilomar Conference on Recombinant DNA The was an influential conference organized by Paul Berg to discuss the potential biohazards and regulation of biotechnology, held in February 1975 at a conference center at Asilomar State Beach. A group of about 140 professionals (primarily biologists, but also including lawyers and physicians) participated in the conference to draw up voluntary guidelines to ensure the safety of recombinant DNA technology. The conference also placed scientific research more into the public domain, and can be seen as applying a version of the precautionary principle. The effects of these guidelines are still being felt through the biotechnology industry and the participation of the general public in scientific discourse. Due to potential safety hazards, scientists worldwide had halted experiments using recombinant DNA technology, which entailed combining DNAs from different organisms. After the establishment of the guidelines during the conference, scientists continued with their research, which increased fundamental knowledge about biology and the public's interest in biomedical research. Recombinant DNA technology arose as a result of advances in biology that began in the 1950s and '60s. During these decades, a tradition of merging the structural, biochemical and informational approaches to the central problems of classical genetics became more apparent | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA Two main underlying concepts of this tradition were that genes consisted of DNA and that DNA encoded information that determined the processes of replication and protein synthesis. These concepts were embodied in the model of DNA produced through the combined efforts of James Watson, Francis Crick, and Rosalind Franklin. Further research on the Watson-Crick model yielded theoretical advances that were reflected in new capacities to manipulate DNA. One of these capacities was recombinant DNA technology. This technology entails the joining of DNA from different species and the subsequent insertion of the hybrid DNA into a host cell. One of the first individuals to develop recombinant DNA technology was a Stanford biochemist named Paul Berg. In his experimental design in 1974, he cleaved (cut into fragments) the monkey virus SV40. He then cleaved the double helix of another virus, the bacteriophage lambda, a virus that infects bacteria. In the third step, he fastened DNA from SV40 to DNA from the bacteriophage lambda. The final step involved placing the hybrid genetic material into a laboratory strain of the E. coli bacterium. This last step, however, was not completed in the original experiment. Berg did not complete his final step due to the pleas of several fellow investigators who feared the biohazards associated with it. SV40 was known to cause cancerous tumors to develop in mice. Additionally, the E | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA coli bacterium (although not the strain used by Berg) inhabited the human intestinal tract. For these reasons, the other investigators feared that the final step would create cloned SV40 DNA that might escape into the environment and infect laboratory workers. These workers could then become cancer victims. Concern about this potential biohazard, along with others, caused a group of leading researchers to send a letter to the president of the National Academy of Sciences (NAS). In this letter, they requested that he appoint an ad hoc committee to study the bio-safety ramifications of this new technology. This committee, called the Committee on Recombinant DNA Molecules of the National Academy of Sciences, U.S.A., concluded in 1974 that an international conference was necessary to resolve the issue and that, until that time, scientists should halt experiments involving recombinant DNA technology. The conference took place at the Asilomar Conference Center on California's Monterey Peninsula in 1975. The main goal of the conference was to address the biohazards presented by recombinant DNA technology. During the conference, the principles guiding the recommendations for how to conduct experiments using this technology safely were established. The first principle for dealing with potential risks was that containment should be made an essential consideration in experimental design. A second principle was that the effectiveness of the containment should match the estimated risk as closely as possible. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA The conference also suggested the use of biological barriers to limit the spread of recombinant DNA. Such biological barriers included fastidious bacterial hosts that were unable to survive in natural environments. Other barriers were nontransmissible and equally fastidious vectors (plasmids, bacteriophages, or other viruses) that were able to grow in only specified hosts. In addition to biological barriers, the conference advocated the use of additional safety factors. One such safety factor was physical containment, exemplified by the use of hoods or, where applicable, limited-access or negative-pressure laboratories. Another factor was the strict adherence to good microbiological practices, which would limit the escape of organisms from the experimental situation. Additionally, the education and training of all personnel involved in the experiments would be essential to effective containment measures. The Asilomar Conference also gave recommendations for matching the types of containment necessary for different types of experiments. These recommendations were based on the different levels of risk associated with the experiment, which would require different levels of containment. These levels were minimal, low, moderate and high risk. The minimal risk level of containment was intended for experiments in which the biohazards could be accurately assessed and were expected to be minimal. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA Low risk containment was appropriate for experiments that generated novel biotypes but where the available information indicated that the recombinant DNA could not appreciably alter the ecological behavior of the recipient species, significantly increase its pathogenicity, or prevent effective treatment of any resulting infections. The moderate risk level of containment was intended for experiments in which there was a probability of generating an agent with a significant potential for pathogenicity or ecological disruption. High-risk containment was intended for experiments in which the potential for ecological disruption or pathogenicity of the modified organism could be severe and thereby pose a serious biohazard to laboratory personnel or to the public. These levels of containment, along with the previously mentioned safety measures, formed the basis for the guidelines used by investigators in future experiments that involved the construction and propagation of recombinant DNA molecules using DNA from prokaryotes, bacteriophages and other plasmids, animal viruses and eukaryotes. For prokaryotes, bacteriophages and other plasmids, experiments could be performed in minimal risk containment facilities when the construction of recombinant DNA molecules and their propagation involved prokaryotic agents that were known to exchange genetic information naturally. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA For experiments involving the creation and propagation of recombinant DNA molecules from the DNAs of species that ordinarily did not exchange genetic information and that generated novel biotypes, the experiments were to be performed in at least a low risk containment facility. If the experiment increased the pathogenicity of the recipient species or resulted in new metabolic pathways in that species, then moderate or high-risk containment facilities were to be used. In experiments where the range of resistance of established human pathogens to therapeutically useful antibiotics or disinfectants was extended, the experiments were to be undertaken only in moderate or high-risk containment facilities. When working with animal viruses, experiments that involved the linkage of viral genomes or genome segments to prokaryotic vectors and their propagation in prokaryotic cells were to be conducted only with vector-host systems that had demonstrated restricted growth capabilities outside the laboratory, and in moderate risk containment facilities. As safer vector-host systems became available, such experiments could be performed in low risk facilities. In experiments designed to introduce or propagate DNA from non-viral or other low risk agents in animal cells, only low risk animal DNA could be used as vectors, and the manipulations were to be confined to moderate risk containment facilities. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA With eukaryotes, attempts to clone segments of DNA from warm-blooded vertebrate genomes using recombinant DNA technology were to be performed only with vector-host systems that had demonstrably restricted growth capabilities outside the laboratory, and in a moderate risk containment facility. This was because such genomes potentially contained cryptic viral genomes that could be pathogenic to humans. However, unless the organism made a dangerous product, recombinant DNAs from cold-blooded vertebrates and all other lower eukaryotes could be constructed and propagated with the safest vector-host system available in low risk containment facilities. Additionally, purified DNA from any source that performed known functions and was judged to be non-toxic could be cloned with available vectors in low risk containment facilities. In addition to regulating the experiments that were conducted, the guidelines also forbade the performance of other experiments. One such experiment was the cloning of recombinant DNAs derived from highly pathogenic organisms. In addition, neither the cloning of DNA containing toxin genes, nor large scale experiments using recombinant DNAs that were able to make products that were potentially harmful to humans, animals or plants, were allowed under the guidelines. These experiments were banned because the potential biohazards could not be contained by the then-current safety precautions. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA The participants of the Asilomar Conference also endeavored to bring science into the domain of the general public, with a possible motivation being the Watergate scandal. The scandal resulted from a bungled break-in at the Watergate hotel, which served as the Democratic National Committee headquarters in 1972. Two years after the burglary, taped evidence was discovered that indicated that President Nixon had discussed a cover-up a week after it. Three days following the release of the tape, Nixon resigned from his presidential office. This event focused the nation's attention on the problem of government secrecy fostering illegal and immoral behavior and it has been suggested by the political scientist Ira H. Carmen that this motivated the scientists at the Asilomar Conference to bring science into the public eye to ensure that they would not be accused of a cover-up. Additionally, according to Dr. Berg and Dr. Singer, by being forthright, scientists avoided restrictive legislation due to the development of a consensus on how they were to conduct their research. Bringing science into the public eye also coincided with the rapid rate at which recombinant DNA technology entered the industrial world. Because of the practical applications of the technology, funding for research using it started coming more from the private sector and less from the public sector | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA In addition, many molecular biologists who once confined themselves to academia developed ties with private industry as equity owners, corporate executives and consultants. This led to the creation of a biotechnology industry, although during this time public debates occurred over the hazards of recombinant DNA. These debates were eventually won by scientists who argued that the hazards were exaggerated and that the research could be conducted safely. This position was reflected in the Ascot report, found in the Federal Register in March 1978. That report emphasized that the hazards of recombinant DNA to the general community were small to the point that they were of no practical consequence to the general public. For this reason, along with high economic pressures for industrial development and a more supportive political environment after 1979, research and industry based on recombinant DNA continued to expand. Years after the conference, people ascribed a large amount of significance to it. According to Paul Berg and Maxine Singer in 1995, the conference marked the beginning of an exceptional era for both science and the public discussion of science policy. The guidelines devised by the conference enabled scientists to conduct experiments with recombinant DNA technology, which by 1995 dominated biological research. This research, in turn, increased knowledge about fundamental life processes, such as the cell cycle. | https://en.wikipedia.org/wiki?curid=8239144 |
Asilomar Conference on Recombinant DNA Additionally, the conference along with public debates on recombinant DNA, increased public interest in biomedical research and molecular genetics. For this reason, by 1995, genetics and its vocabulary had become a part of the daily press and television news. This, in turn, stimulated knowledgeable public discussion about some of the social, political and environmental issues that emerged from genetic medicine and the use of genetically modified plants in agriculture. Another significant outcome of the conference was the precedent it set about how to respond to changes in scientific knowledge. According to the conference, the proper response to new scientific knowledge was to develop guidelines that governed how to regulate it. | https://en.wikipedia.org/wiki?curid=8239144 |
Humboldtian science refers to a movement in science in the 19th century closely connected to the work and writings of the German scientist, naturalist and explorer Alexander von Humboldt. It maintained a certain ethic of precision and observation, which combined scientific field work with the sensitivity and aesthetic ideals of the age of Romanticism. Like Romanticism in science, it was quite popular in the 19th century. The term was coined by Susan Faye Cannon in 1978. The example of Humboldt's life and his writings allowed him to reach beyond the academic community with his natural history and address a wider audience through popular science. The term has supplanted the older "Baconianism", likewise named for a single person, Francis Bacon. Humboldt was born in Berlin in 1769 and worked as a Prussian mining official in the 1790s until 1797, when he quit and began collecting scientific knowledge and equipment. His extensive wealth aided his infatuation with the spirit of Romanticism; he amassed an extensive collection of scientific instruments and tools as well as a sizeable library. In 1799, Humboldt, under the protection of King Charles IV of Spain, left for South America and New Spain, toting all of his tools and books. The purpose of the voyage was steeped in Romanticism: Humboldt intended to investigate how the forces of nature interact with one another and to find out about the unity of nature. Humboldt returned to Europe in 1804 and was acclaimed as a public hero. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science The details and findings of Humboldt's journey were published in his "Personal Narrative of Travels to the Equinoctial Regions of the New Continent" (30 volumes). This "Personal Narrative" was taken by Charles Darwin on his famous voyage on H.M.S. Beagle. Humboldt spent the rest of his life mainly in Europe, although he did embark on a short expedition to Siberia and the Russian steppes in 1829. Humboldt's last works were contained in his book "Kosmos" ("Cosmos. Sketch for a Physical Description of the Universe"). The book mainly described the development of a life-force from the cosmos, but also included the formation of stars from nebular clouds as well as the geography of planets. Alexander von Humboldt died in 1859, while working on the fifth volume of "Kosmos". Through his travels to South America and his observational records in "An Essay on the Geography of Plants" as well as "Kosmos", an important trend emerged through his techniques of observation, the scientific instruments he used and his unique perspective on nature. Humboldt's novel style has been defined as Humboldtian science. Humboldt had the ability to combine the study of empirical data with a holistic view of nature and its aesthetically pleasing characteristics, an approach now regarded as defining the study of vegetation and plant geography. Humboldtian science is one of the first techniques for studying both organic and inorganic branches of science. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Examining the interconnectedness of vegetation and its respective environment is one of the new and important aspects of Humboldt's work, an idea labeled "terrestrial physics," something that scientists who preceded him, such as Linnaeus, had failed to pursue. Humboldtian science is founded on a principle of "general equilibrium of forces." General equilibrium was the idea that there are infinite forces in nature that are in constant conflict, yet all forces balance each other out. Humboldt laid the groundwork for future scientific endeavors by establishing the importance of studying organisms and their environment in conjunction. Humboldtian science includes both the extensive work of Alexander von Humboldt and many of the works of 19th-century scientists. Susan Cannon is credited with coining the term "Humboldtian science". According to Cannon, Humboldtian science is "the accurate, measured study of widespread but interconnected real phenomena in order to find a definite law and a dynamical cause." Humboldtian science is now used in place of the traditional "Baconianism" as a more appropriate and less vague term for the themes of 19th-century science. Natural history in the eighteenth century was the "nomination of the visible". Carl Linnaeus was preoccupied with fitting all of nature into a taxonomy, fixated only on what was visible. Towards the turn of the eighteenth century, Immanuel Kant became interested in understanding where species derived from, and was less concerned with an organism's physical attributes. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Next, Johann Reinhold Forster, one of Humboldt's future partners, became interested in the study of vegetation as an essential way of understanding nature and its relationship with human society. Following Forster, Karl Willdenow examined floristic plant geography, the distribution of plants and regionality as a whole. All of these pieces of the history before Humboldt help to shape what is defined as Humboldtian science. Humboldt took into account both the outward appearance and the inward meaning of plant species. His attention to natural aesthetics alongside empirical data and evidence is what set his scientific work apart from ecologists before him. As Malcolm Nicholson aptly puts it: "Humboldt effortlessly combined a commitment to empiricism and the experimental elucidation of the laws of nature with an equally strong commitment to holism and to a view of nature which was intended to be aesthetically and spiritually satisfactory". It was through this holistic approach to science and the study of nature that Humboldt was able to find a web of interconnectedness despite a multitude of extensive differences between species of organisms. According to Malcolm Nicholson, "Susan Cannon characterized [Humboldtian science] as synthetic, empirical, quantitative and impossible to fit into any one of our twentieth century disciplinary boundaries." A central element of Humboldtian science was its use of the latest advances in scientific instrumentation to observe and measure physical variables, while attending to all possible sources of error. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Humboldtian science revolved around understanding the relationship between accurate measurement, sources of error and mathematical laws. Cannon identifies four distinctive features that marked Humboldtian science out from previous versions of science. Humboldt was committed to what he called 'terrestrial physics.' Essentially, Humboldt's new scientific approach required a new type of scientist: Humboldtian science demanded a transition from the naturalist to the physicist. Humboldt described how his idea of terrestrial physics differs from traditional "descriptive" natural history when he stated, "[traveling naturalists] have neglected to track the great and constant laws of nature manifested in the rapid flux of phenomena…and to trace the reciprocal interaction of the divided physical forces." Humboldt did not consider himself an explorer, but rather a scientific traveler, who accurately measured what explorers had reported inaccurately. According to Humboldt, the goal of the terrestrial physicist was to investigate the confluence and interweaving of all physical forces. An incredibly extensive array of precise instrumentation had to be readily available to Humboldt's terrestrial physicist. The expansive set of scientific resources that characterized the Humboldtian scientist is best described in the book "Science in Culture": "Thus the complete Humboldtian traveller, in order to make satisfactory observations, should be able to cope with everything from the revolution of the satellites of Jupiter to the carelessness of clumsy donkeys." | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Just some of these instruments included chronometers, telescopes, sextants, microscopes, magnetic compasses, thermometers, hygrometers, barometers, electrometers, and eudiometers. Furthermore, it was necessary to have multiple makes and models of each specific instrument to compare errors and constancy among each type. One concept that is central to Humboldtian science is that of a general equilibrium of forces. Humboldt explains: "The general equilibrium which reigns amongst disturbances and apparent turmoil, is the result of an infinite number of mechanical forces and chemical attractions balancing each other out." Equilibrium is derived from an infinite number of forces acting simultaneously and varying globally. In other words, the lawfulness of nature, according to Humboldt, is a result of infinity and complexity. Humboldtian science promotes the idea that the more forces that are accurately measured over more of the earth's surface, the greater the resulting understanding of the order of nature. The voyage to the Americas produced many discoveries and developments that help to illustrate Humboldt's ideas about this equilibrium of forces. Humboldt produced the "Tableau physique des Andes" ("Physical Profile of the Andes"), which aimed at capturing his voyage to the Americas in a single graphic table. Humboldt meant to capture all of the physical forces, from organisms to electricity, in this single table. Among many other complex empirical recordings of elevation-specific data, the table included a detailed biodistribution. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science This biodistribution mapped the specific distributions of flora and fauna at every elevation level on the mountain. Humboldt's study of plants provides an example of the movement of Humboldtian science away from traditional science. Humboldt's botany also further illustrates the concept of equilibrium and the Humboldtian idea of the interrelationship of nature's elements. Although he was concerned with the physical features of plants, he was largely focused on the investigation of underlying connections and relations among plant organisms. Humboldt worked for years on developing an understanding of plant distributions and geography. The link between the balancing equilibrium of natural forces and organism distribution is evident when Humboldt states: "As in all other phenomena of the physical universe, so in the distribution of organic beings: amidst the apparent disorder which seems to result from the influence of a multitude of local causes, the unchanging laws of nature become evident as soon as one surveys an extensive territory, or uses a mass of facts in which the partial disturbances compensate one another." The study of vegetation and plant geography arose out of new concerns that emerged with Humboldtian science. These new areas of concern in science included integrative processes, invisible connections, historical development, and natural wholes. Humboldtian science applied the idea of general equilibrium of forces to the continuities in the history of the generation of the planet. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Humboldt saw the history of the earth as a continuous global distribution of such things as heat, vegetation, and rock formations. In order to represent this continuity graphically, Humboldt developed isothermal lines. These isothermal lines functioned in the general balancing of forces in that they preserved local peculiarities within a general regularity. According to Humboldtian science, nature's order and equilibrium emerged "gradually and progressively from laborious observing, averaging, and mapping over increasingly extended areas." Ralph Waldo Emerson once dubbed Humboldt "one of those wonders of the world… who appear from time to time, as if to show us the possibilities of the human mind." When Humboldt first began his studies of organisms and the environment, he claimed that he wanted to "reorganize the general connections that link organic beings and to study the great harmonies of Nature". He is often considered one of the world's first genuine ecologists. Humboldt succeeded in developing a comprehensive science that joined the separate branches of natural philosophy under a model of natural order founded on the concept of dynamic equilibrium. Humboldt's work reached far beyond his personal expeditions and discoveries. Figures from all across the globe participated in his work, including French naval officers, East India Company physicians, Russian provincial administrators, Spanish military commanders, and German diplomats. | https://en.wikipedia.org/wiki?curid=8243937 |
Humboldtian science Furthermore, as mentioned above, Charles Darwin carried a copy of Humboldt's "Personal Narrative" aboard H.M.S. Beagle. Humboldt's projects, particularly those related to natural philosophy, played a significant role in the influx of European money and travelers to Spanish America in increasing numbers in the early 19th century. Sir Edward Sabine, a British scientist, worked on terrestrial magnetism in a manner that was distinctly Humboldtian. Likewise, the British scientist George Gabriel Stokes depended heavily on abstract mathematical measurement to deal with error in a precision instrument, another hallmark of Humboldtian science. Perhaps the most prominent figure whose work can be considered representative of Humboldtian science is the geologist Charles Lyell. Despite a lack of emphasis on precise measurement in geology at the time, Lyell insisted on precision in a Humboldtian manner. The promotion and development of terrestrial physics under Humboldtian science produced not only useful maps and statistics, but also offered both European and Creole societies tools for essentially 're-imagining' America. The lasting impact of Humboldtian science is described in "Cultures of Natural History": it "illuminates the reorganization of knowledge and disciplines in the early nineteenth century that defined the emergence of natural history out of natural philosophy." | https://en.wikipedia.org/wiki?curid=8243937 |
Protein precipitation is widely used in downstream processing of biological products in order to concentrate proteins and purify them from various contaminants. For example, in the biotechnology industry protein precipitation is used to eliminate contaminants commonly contained in blood. The underlying mechanism of precipitation is to alter the solvation potential of the solvent, more specifically, by lowering the solubility of the solute by addition of a reagent. The solubility of proteins in aqueous buffers depends on the distribution of hydrophilic and hydrophobic amino acid residues on the protein's surface. Hydrophobic residues predominantly occur in the globular protein core, but some exist in patches on the surface. Proteins that have high hydrophobic amino acid content on the surface have low solubility in an aqueous solvent. Charged and polar surface residues interact with ionic groups in the solvent and increase the solubility of a protein. Knowledge of a protein's amino acid composition will aid in determining an ideal precipitation solvent and methods. Repulsive electrostatic forces form when proteins are dissolved in an electrolyte solution. These repulsive forces between proteins prevent aggregation and facilitate dissolution. Upon dissolution in an electrolyte solution, solvent counterions migrate towards charged surface residues on the protein, forming a rigid matrix of counterions on the protein's surface | https://en.wikipedia.org/wiki?curid=8253098 |
Protein precipitation Next to this layer is another solvation layer that is less rigid and, as one moves away from the protein surface, contains a decreasing concentration of counterions and an increasing concentration of co-ions. The presence of these solvation layers causes the protein to have fewer ionic interactions with other proteins and decreases the likelihood of aggregation. Repulsive electrostatic forces also form when proteins are dissolved in water. Water forms a solvation layer around the hydrophilic surface residues of a protein. Water establishes a concentration gradient around the protein, with the highest concentration at the protein surface. This water network has a damping effect on the attractive forces between proteins. Dispersive or attractive forces exist between proteins through permanent and induced dipoles. For example, basic residues on a protein can have electrostatic interactions with acidic residues on another protein. However, solvation by ions in an electrolytic solution or water will decrease protein–protein attractive forces. Therefore, to precipitate or induce accumulation of proteins, the hydration layer around the protein should be reduced. The purpose of the added reagents in protein precipitation is to reduce the hydration layer. Protein precipitate formation occurs in a stepwise process. First, a precipitating agent is added and the solution is steadily mixed. Mixing causes the precipitant and protein to collide. Enough mixing time is required for molecules to diffuse across the fluid eddies. | https://en.wikipedia.org/wiki?curid=8253098 |
Protein precipitation Next, proteins undergo a nucleation phase, where submicroscopic protein aggregates, or particles, are generated. Growth of these particles is under Brownian diffusion control. Once the particles reach a critical size (0.1 µm to 10 µm for high and low shear fields, respectively) through the diffusive addition of individual protein molecules, they continue to grow by colliding with each other and sticking, or flocculating. This phase occurs at a slower rate. During the final step, called aging in a shear field, the precipitate particles repeatedly collide and stick, then break apart, until a stable mean particle size is reached, which is dependent upon individual proteins. The mechanical strength of the protein particles correlates with the product of the mean shear rate and the aging time, which is known as the Camp number. Aging helps particles withstand the fluid shear forces encountered in pumps and centrifuge feed zones without reducing in size. Salting out is the most common method used to precipitate a protein. Addition of a neutral salt, such as ammonium sulfate, compresses the solvation layer and increases protein–protein interactions. As the salt concentration of a solution is increased, the charges on the surface of the protein interact with the salt, not the water, thereby exposing hydrophobic patches on the protein surface and causing the protein to fall out of solution (aggregate and precipitate). | https://en.wikipedia.org/wiki?curid=8253098 |
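The aging relationship above (Camp number = mean shear rate × aging time) is simple enough to sketch numerically. This is an illustrative sketch, not from the source; the function name and the sample values are hypothetical:

```python
def camp_number(mean_shear_rate, aging_time):
    """Camp number: mean shear rate (1/s) times aging time (s).

    Correlates with the mechanical strength of precipitate particles,
    i.e. their ability to survive pump and centrifuge shear intact.
    """
    return mean_shear_rate * aging_time

# Hypothetical aging step: 50 1/s mean shear applied for 10 minutes.
print(camp_number(50.0, 600.0))  # -> 30000.0
```

The same Camp number can be reached with a gentler shear field and a longer aging time, which is why aging time is a practical process lever.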
Protein precipitation Salting out is a spontaneous process when the right concentration of the salt is reached in solution. The hydrophobic patches on the protein surface generate highly ordered water shells. This results in a small decrease in enthalpy, Δ"H", and a larger decrease in entropy, Δ"S", of the ordered water molecules relative to the molecules in the bulk solution. The overall free energy change, Δ"G", of the process is given by the Gibbs free energy equation: Δ"G" = Δ"H" − "T"Δ"S", where Δ"G" = free energy change, Δ"H" = enthalpy change upon precipitation, Δ"S" = entropy change upon precipitation, and "T" = absolute temperature. When water molecules in the rigid solvation layer are brought back into the bulk phase through interactions with the added salt, their greater freedom of movement causes a significant increase in their entropy. Thus, Δ"G" becomes negative and precipitation occurs spontaneously. Kosmotropes or "water structure stabilizers" are salts which promote the dissipation or dispersion of water from the solvation layer around a protein. Hydrophobic patches are then exposed on the protein's surface, and they interact with hydrophobic patches on other proteins. These salts enhance protein aggregation and precipitation. Chaotropes or "water structure breakers" have the opposite effect of kosmotropes. These salts promote an increase in the solvation layer around a protein. | https://en.wikipedia.org/wiki?curid=8253098 |
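The spontaneity argument above can be checked numerically with Δ"G" = Δ"H" − "T"Δ"S". The sketch below uses hypothetical values chosen only to illustrate the sign logic, not measured data:

```python
def gibbs_free_energy(delta_h, delta_s, temperature):
    """Gibbs free energy change: dG = dH - T * dS.

    delta_h in J/mol, delta_s in J/(mol*K), temperature in K.
    """
    return delta_h - temperature * delta_s

# Hypothetical values for salting out: releasing ordered water from the
# solvation layer gives a large positive entropy change, so the -T*dS
# term dominates even a slightly unfavorable enthalpy term.
dG = gibbs_free_energy(delta_h=5000.0, delta_s=50.0, temperature=298.15)
print(dG < 0)  # -> True: negative dG, so precipitation is spontaneous
```

With Δ"S" > 0 from the freed water, raising the temperature makes Δ"G" more negative, consistent with the entropy-driven picture in the text.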
Protein precipitation The effectiveness of the kosmotropic salts in precipitating proteins follows the order of the Hofmeister series, which ranks the anions and the cations from most precipitating to least precipitating. The decrease in protein solubility follows a normalized solubility curve of the type shown. The relationship between the solubility of a protein and increasing ionic strength of the solution can be represented by the Cohn equation: log("S") = "B" − "KI", where "S" = solubility of the protein, "B" is the idealized solubility, "K" is a salt-specific constant and "I" is the ionic strength of the solution, which is attributed to the added salt. The ionic strength is given by "I" = ½Σ"c""z"², where "z" is the ion charge of the salt and "c" is the salt concentration. The ideal salt for protein precipitation is most effective for a particular amino acid composition, inexpensive, non-buffering, and non-polluting. The most commonly used salt is ammonium sulfate. There is low variation in salting out over temperatures from 0 °C to 30 °C. Protein precipitates left in the salt solution can remain stable for years, protected from proteolysis and bacterial contamination by the high salt concentrations. The isoelectric point (pI) is the pH of a solution at which the net primary charge of a protein becomes zero. At a solution pH that is above the pI, the surface of the protein is predominantly negatively charged, and therefore like-charged molecules will exhibit repulsive forces. | https://en.wikipedia.org/wiki?curid=8253098 |
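The Cohn equation and the ionic strength definition can be combined into a small numeric sketch. The ionic strength calculation for ammonium sulfate follows directly from the stoichiometry; the constants "B" (here `beta`) and "K" are hypothetical, since they are protein- and salt-specific:

```python
def ionic_strength(concentrations, charges):
    """Ionic strength I = 1/2 * sum(c_i * z_i**2), with c in mol/L."""
    return 0.5 * sum(c * z ** 2 for c, z in zip(concentrations, charges))

def cohn_solubility(beta, k, ionic_str):
    """Cohn equation: log10(S) = beta - K * I, so S = 10**(beta - K * I)."""
    return 10 ** (beta - k * ionic_str)

# 1.0 M ammonium sulfate dissociates into 2.0 M NH4+ (z = +1)
# and 1.0 M SO4(2-) (z = -2):
i = ionic_strength([2.0, 1.0], [1, -2])
print(i)  # -> 3.0
# beta and k are hypothetical constants for an illustrative protein:
print(cohn_solubility(beta=1.0, k=0.5, ionic_str=i))
```

Note how the z² term makes multivalent ions like sulfate contribute disproportionately to ionic strength, which is one reason ammonium sulfate is such an effective salting-out agent per mole.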
Protein precipitation Likewise, at a solution pH that is below the pI, the surface of the protein is predominantly positively charged and repulsion between proteins occurs. However, at the pI the negative and positive charges cancel, repulsive electrostatic forces are reduced and the attraction forces predominate. The attraction forces cause aggregation and precipitation. The pI of most proteins is in the pH range of 4–6. Mineral acids, such as hydrochloric and sulfuric acid, are used as precipitants. The greatest disadvantage of isoelectric point precipitation is the irreversible denaturation caused by the mineral acids. For this reason, isoelectric point precipitation is most often used to precipitate contaminant proteins rather than the target protein. The precipitation of casein during cheesemaking, or during the production of sodium caseinate, is an isoelectric precipitation. Addition of miscible solvents such as ethanol or methanol to a solution may cause proteins in the solution to precipitate. The solvation layer around the protein will decrease as the organic solvent progressively displaces water from the protein surface and binds it in hydration layers around the organic solvent molecules. With smaller hydration layers, the proteins can aggregate by attractive electrostatic and dipole forces. Important parameters to consider are temperature, which should be less than 0 °C to avoid denaturation, as well as pH and the protein concentration in solution. | https://en.wikipedia.org/wiki?curid=8253098 |
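The pI reasoning above — net charge positive below the pI, negative above it, zero at the pI — can be sketched with a simple Henderson–Hasselbalch charge model. This is an illustrative sketch only; the pKa values below describe a hypothetical toy molecule, not a real protein:

```python
def net_charge(ph, acidic_pkas, basic_pkas):
    """Approximate net charge via Henderson-Hasselbalch.

    Each acidic group (e.g. Asp, Glu, C-terminus) contributes
    -1 / (1 + 10**(pKa - pH)); each basic group (e.g. Lys, Arg,
    N-terminus) contributes +1 / (1 + 10**(pH - pKa)).
    """
    neg = sum(-1.0 / (1.0 + 10 ** (pka - ph)) for pka in acidic_pkas)
    pos = sum(1.0 / (1.0 + 10 ** (ph - pka)) for pka in basic_pkas)
    return pos + neg

def isoelectric_point(acidic_pkas, basic_pkas):
    """Bisect for the pH where the net charge crosses zero.

    Net charge decreases monotonically with pH, so bisection converges.
    """
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if net_charge(mid, acidic_pkas, basic_pkas) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical toy molecule: two carboxyl groups and one amino group.
pI = isoelectric_point(acidic_pkas=[3.9, 4.1], basic_pkas=[10.5])
print(round(pI, 2))
```

Below the computed pI the model returns a positive net charge and above it a negative one, mirroring the repulsion-then-aggregation behavior described in the text.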
Protein precipitation Miscible organic solvents decrease the dielectric constant of water, which in effect allows two proteins to come close together. At the isoelectric point, the relationship between the dielectric constant and protein solubility is given by an expression in which "S"₀ is an extrapolated value of "S", "e" is the dielectric constant of the mixture and "k" is a constant that relates to the dielectric constant of water. The Cohn process for plasma protein fractionation relies on solvent precipitation with ethanol to isolate individual plasma proteins. A clinical application of methanol as a protein-precipitating agent is in the estimation of bilirubin. Polymers, such as dextrans and polyethylene glycols, are frequently used to precipitate proteins because they have low flammability and are less likely to denature biomaterials than isoelectric precipitation. These polymers in solution attract water molecules away from the solvation layer around the protein. This increases the protein–protein interactions and enhances precipitation. For the specific case of polyethylene glycol, precipitation can be modeled by an equation in which "C" is the polymer concentration, "P" is a protein–protein interaction coefficient, "a" is a protein–polymer interaction coefficient, "μ" is the chemical potential of component "i", "R" is the universal gas constant and "T" is the absolute temperature. Alginate, carboxymethylcellulose, polyacrylic acid, tannic acid and polyphosphates can form extended networks between protein molecules in solution. | https://en.wikipedia.org/wiki?curid=8253098 |
Protein precipitation The effectiveness of these polyelectrolytes depends on the pH of the solution. Anionic polyelectrolytes are used at pH values less than the isoelectric point; cationic polyelectrolytes are used at pH values above the pI. It is important to note that an excess of polyelectrolytes will cause the precipitate to dissolve back into the solution. An example of polyelectrolyte flocculation is the removal of protein cloud from beer wort using Irish moss. Metal salts can be used at low concentrations to precipitate enzymes and nucleic acids from solutions. Polyvalent metal ions frequently used are Ca²⁺, Mg²⁺, Mn²⁺ or Fe²⁺. There are numerous industrial-scale reactors that can be used to precipitate large amounts of proteins, such as recombinant DNA polymerases, from a solution. Batch reactors are the simplest type of precipitation reactor. The precipitating agent is slowly added to the protein solution under mixing. The aggregating protein particles tend to be compact and regular in shape. Since the particles are exposed to a wide range of shear stresses for a long period of time, they tend to be compact, dense and mechanically stable. In tubular reactors, the feed protein solution and the precipitating reagent are contacted in a zone of efficient mixing and then fed into long tubes where precipitation takes place. The fluid in each volume element approaches plug flow as it moves through the tubes of the reactor. Turbulent flow is promoted through wire mesh inserts in the tube. | https://en.wikipedia.org/wiki?curid=8253098 |
Protein precipitation The tubular reactor does not require moving mechanical parts and is inexpensive to build. However, the reactor can become impractically long if the particles aggregate slowly. Continuous stirred-tank reactors (CSTRs) run at steady state with a continuous flow of reactants and products in a well-mixed tank. Fresh protein feed contacts slurry that already contains precipitate particles and the precipitation reagents. | https://en.wikipedia.org/wiki?curid=8253098 |
DNA separation by silica adsorption is a method of DNA separation that is based on DNA molecules binding to silica surfaces in the presence of certain salts and under certain pH conditions, usually conducted on a microchip coated in silica channels. Conventional methods for DNA extraction, such as ethanol precipitation or preparations using commercial purification kits, cannot be integrated onto microchips because they require multiple hands-on processing steps. In addition, they also require large equipment and high volumes of reagents and samples. Silica resins avoid these issues through integration on microchips, where solid phase extraction provides accurate analysis of DNA on a small scale. In order to conduct DNA separation by silica adsorption, a sample (this may be anything from purified cells to a tissue specimen) is placed onto a specialized chip and lysed. The resultant mix of proteins, DNA, phospholipids, etc., is then run through the channel where the DNA is adsorbed by a silica surface in the presence of solutions with high ionic strength. The highest DNA adsorption efficiencies occur in the presence of buffer solution with a pH at or below the pKa of the surface silanol groups. The mechanism behind DNA adsorption onto silica is not fully understood; one possible explanation involves reduction of the silica surface's negative charge due to the high ionic strength of the buffer. This decrease in surface charge leads to a decrease in the electrostatic repulsion between the negatively charged DNA and the negatively charged silica | https://en.wikipedia.org/wiki?curid=8255258 |
DNA separation by silica adsorption Meanwhile, the buffer also reduces the activity of water by forming hydrated ions. This leads to the silica surface and the DNA becoming dehydrated. These conditions create an energetically favorable situation for DNA to adsorb to the silica surface. A further explanation of how DNA binds to silica is based on the action of guanidinium HCl (GuHCl), which acts as a chaotrope. A chaotrope denatures biomolecules by disrupting the shell of hydration around them. This allows positively charged ions to form a salt bridge between the negatively charged silica and the negatively charged DNA backbone in high salt concentrations. The DNA can then be washed with high salt and ethanol, and ultimately eluted with low salt. After the DNA is adsorbed to the silica surface, all other molecules pass through the column. Most likely, these molecules are sent to a waste section on the chip, which can then be closed off using a gated channel or a pressure- or voltage-controlled chamber. The DNA is then washed to remove any excess waste particles from the sample and eluted from the channel using an elution buffer for further downstream processing. The following solutions have been proposed and validated for use in this process: DNA binding, a GuHCl-based loading buffer; channel wash, 80% isopropanol; DNA elution, TE buffer at pH 8.4. Methods using silica beads and silica resins have been created that can isolate DNA molecules for subsequent PCR amplification. However, these methods have associated problems. | https://en.wikipedia.org/wiki?curid=8255258 |
DNA separation by silica adsorption First, beads and resins are highly variable depending on how well they are packed and are thus hard to reproduce. Each loading of a micro-channel can result in a different amount of packing and thus change the amount of DNA adsorbed to the channel. Furthermore, these methods require a two-step manufacturing process. Silica structures are a much more effective packing material because they are etched into the channel during its fabrication and are thus the result of a one-step manufacturing process via soft lithography. Silica structures are therefore easier to use in highly parallelized designs than beads or resins. | https://en.wikipedia.org/wiki?curid=8255258 |
Type 0 string theory Type 0 string theory is a less well-known model of string theory. It is a superstring theory in the sense that the worldsheet theory is supersymmetric. However, the spacetime spectrum is not supersymmetric and, in fact, does not contain any fermions at all. In dimensions greater than two, the ground state is a tachyon, so the theory is unstable. These properties make it similar to the bosonic string and an unsuitable proposal for describing the world as we observe it, although a GSO projection does get rid of the tachyon, and the even G-parity sector of the theory defines a stable string theory. The theory is sometimes used as a toy model for exploring concepts in string theory, notably closed string tachyon condensation. Some other recent interest has involved the two-dimensional Type 0 string, which has a non-perturbatively stable matrix model description. As with the Type II string, different GSO projections result in slightly different theories, Type 0A and Type 0B. The difference lies in which types of Ramond–Ramond fields lie in the massless spectrum. | https://en.wikipedia.org/wiki?curid=8261269 |
Wear (journal) Wear is a scientific journal publishing papers on wear and friction. The papers may fall within the subjects of physics, chemistry, material science or mechanical engineering. It is published by Elsevier. | https://en.wikipedia.org/wiki?curid=8263012 |
Ynolate Ynolates are chemical compounds with a negatively charged oxygen attached to an alkyne functionality. They were first synthesized in 1975 by Schöllkopf and Hoppe via the "n"-butyllithium fragmentation of 3,4-diphenylisoxazole. Synthetically, they behave as ketene precursors or synthons. | https://en.wikipedia.org/wiki?curid=8268288 |
Arizona Geological Survey The Arizona Geological Survey (AZGS) was established by the Arizona Legislature to investigate and describe Arizona's geology and to educate and inform the public regarding its geologic setting. Each year since 1915, the AZGS has released geologic maps, formal reports, and other geology-related publications. In Tucson, the Survey maintains a geological library comprising more than 15,000 volumes and approximately 100 linear feet of mine files that include newspaper clippings, maps, mine schematics and mine reports; it also maintains a small repository of donated rock core. In addition, the AZGS archives well cuttings from more than 1,000 oil and gas wells on behalf of the Arizona Oil and Gas Conservation Commission. The AZGS Phoenix branch maintains tens of thousands of mine maps and reports acquired in 2011 when the Arizona Department of Mines and Mineral Resources merged with the AZGS. The Survey's main office is located in the State Office complex in downtown Tucson (416 W. Congress, Ste #100, Tucson, AZ 85701). The AZGS is the latest in a line of academic departments and state agencies serving the people of the Arizona Territory and now the State of Arizona. In 1883, then-Territorial Governor Tritle requested federal assistance in establishing a geologic survey for the Arizona Territory. The U.S. Congress responded in 1888 by creating the post of Territorial Geologist of Arizona. The unpaid position of Territorial Geologist first went to John F. Blandy, who served until the mid-1890s. | https://en.wikipedia.org/wiki?curid=8276189 |
Arizona Geological Survey Upon Arizona gaining statehood in 1912, the position of Territorial Geologist was abolished. From 1893 until 1915, the role of geologic mapping and reporting was handled by the University of Arizona Bureau of Mines. In 1915, the Arizona Bureau of Mines was formally established at the University of Arizona with Charles Willis as its first director. An online yearbook lists Arizona's former directors of state and territorial geologic agencies. World War II was a fertile time for the Arizona Bureau of Mines. The hunt for strategic metals from large-volume, low-grade deposits involved Bureau geologists in the research and design of ore-concentrating facilities at five major low-grade copper deposits. Following World War II, renewed emphasis on geologic mapping led to the publication of county geologic maps between 1957 and 1960. A lineage of territorial and state geologic agencies served Arizona from 1888 to 2007. The 2017 annual budget for the AZGS is $941,000. | https://en.wikipedia.org/wiki?curid=8276189 |
Cell culture assay A cell culture assay is any method used to assess the cytotoxicity of a material. This refers to the "in vitro" assessment of a material to determine whether it releases toxic chemicals into cells. It also determines whether the quantity released is sufficient to kill cells, either directly or indirectly, through the inhibition of cell metabolic pathways. Cell culture evaluations are the precursor to whole-animal studies and are a way to determine whether significant cytotoxicity exists for a given material. Cell culture assays are standardized by ASTM, ISO, and the BSI (British Standards Institution). | https://en.wikipedia.org/wiki?curid=8307294 |
The Murchison Fund is an award given by the Geological Society of London to researchers under the age of 40 who have contributed substantially to the study of hard rock and tectonic geology. It is named in honour of Sir Roderick Impey Murchison. Source: Murchison Fund, The Geological Society | https://en.wikipedia.org/wiki?curid=8334000 |
Intermediate spiral galaxy An intermediate spiral galaxy is a galaxy that is in between the classifications of a barred spiral galaxy and an unbarred spiral galaxy. It is designated as SAB in the galaxy morphological classification system devised by Gerard de Vaucouleurs. Subtypes are labeled as SAB0, SABa, SABb, or SABc, following a sequence analogous to the Hubble sequence for barred and unbarred spirals. The subtype (0, a, b, or c) is based on the relative prominence of the central bulge and how tightly wound the spiral arms are. | https://en.wikipedia.org/wiki?curid=8335091 |
Unbarred spiral galaxy An unbarred spiral galaxy is a type of spiral galaxy without a central bar, that is, one that is not a barred spiral galaxy. It is designated with an SA in the galaxy morphological classification scheme. Unbarred spiral galaxies are one of three general types of spiral galaxies under the "de Vaucouleurs system" classification, the other two being intermediate spiral galaxies and barred spiral galaxies. Under the "Hubble tuning fork", it is one of two general types of spiral galaxy, the other being barred spirals. | https://en.wikipedia.org/wiki?curid=8335148 |
Center for Drug Evaluation and Research The Center for Drug Evaluation and Research (CDER, pronounced "see'-der") is a division of the U.S. Food and Drug Administration (FDA) that monitors most drugs as defined in the Food, Drug, and Cosmetic Act. Some biological products are also legally considered drugs, but they are covered by the Center for Biologics Evaluation and Research. The center reviews applications for brand-name, generic, and over-the-counter pharmaceuticals, manages US current Good Manufacturing Practice (cGMP) regulations for pharmaceutical manufacturing, determines which medications require a medical prescription, monitors advertising of approved medications, and collects and analyzes safety data about pharmaceuticals that are already on the market. CDER receives considerable public scrutiny, and thus implements processes that tend toward objectivity and tend to isolate decisions from being attributed to specific individuals. Its approval decisions will often make or break a small company's stock price (e.g., Martha Stewart and ImClone), so the markets watch CDER's decisions closely. The center has around 1,300 employees in "review teams" that evaluate and approve new drugs. Additionally, CDER employs a "safety team" of 72 employees to determine whether new drugs are unsafe or present risks not disclosed in the product's labeling. The FDA's budget for approving, labeling, and monitoring drugs is roughly $290 million per year. The safety team monitors the effects of more than 3,000 prescription drugs on 200 million people with a budget of about $15 million a year. | https://en.wikipedia.org/wiki?curid=8338259 |
Center for Drug Evaluation and Research Janet Woodcock is the director of CDER. CDER reviews New Drug Applications to ensure that the drugs are safe and effective. Its primary objective is to ensure that all prescription and over-the-counter (OTC) medications are safe and effective when used as directed. The FDA requires a four-phase series of clinical trials for testing drugs. Phase I involves testing new drugs on small groups of healthy volunteers to determine the maximum safe dosage. Phase II trials involve patients with the condition the drug is intended to treat, testing for safety and preliminary efficacy in a somewhat larger group of people. Phase III trials involve one to five thousand patients to determine whether the drug is effective in treating the condition it is intended for. After this stage, a new drug application is submitted. If the drug is approved, Phase IV trials are conducted after marketing to ensure there are no adverse or long-term effects of the drug that were not previously discovered. With the rapid advancement of biologically derived treatments, the FDA has stated that it is working to modernize the approval process for new drugs. In 2017, Commissioner Scott Gottlieb estimated that the agency had more than 600 active applications for gene- and cell-based therapies. CDER is divided into 8 sections with different responsibilities. The FDA has had the responsibility of reviewing drugs since the passage of the 1906 Pure Food and Drugs Act. | https://en.wikipedia.org/wiki?curid=8338259 |
Center for Drug Evaluation and Research The 1938 Federal Food, Drug and Cosmetic Act required all new drugs to be tested before marketing by submitting the original form of the new drug application. Within the first year, the FDA's Drug Division, the predecessor to CDER, received over 1,200 applications. The Drug Amendments of 1962 required manufacturers to prove to the FDA that the drug in question was both safe and effective. In 1966, the division was reorganized to create the Office of New Drugs, which was responsible for reviewing new drug applications and clinical testing of drugs. In 1982, when the beginning of the biotechnology revolution blurred the line between a drug and a biologic, the Bureau of Drugs was merged with the FDA's Bureau of Biologics to form the "National Center for Drugs and Biologics" during an agency-wide reorganization under Commissioner Arthur Hayes. This reorganization similarly merged the bureaus responsible for medical devices and radiation control into the Center for Devices and Radiological Health. In 1987, under Commissioner Frank Young, CDER and the Center for Biologics Evaluation and Research (CBER) were split into their present form. The two groups were charged with enforcing different laws and had significant philosophical and cultural differences. At that time, CDER was more cautious about approving therapeutics and had a more adversarial relationship with the industry. | https://en.wikipedia.org/wiki?curid=8338259 |
Center for Drug Evaluation and Research The growing crisis around HIV testing and treatment and an inter-agency dispute between officials from the former Bureau of Drugs and officials from the former Bureau of Biologics over whether to approve Genentech's Activase (tissue plasminogen activator) led to the split. In its original form, CDER was composed of seven offices: Management, Compliance, Drug Standards, Drug Evaluation I, Drug Evaluation II, Epidemiology and Biostatistics, and Research Resources. The Division of Antiviral Products was added in 1989 under Drug Evaluation II due to the large number of drugs proposed for treating AIDS. The Office of Generic Drugs was also formed. In 2002, the FDA transferred a number of biologically produced therapeutics to CDER. These include therapeutic monoclonal antibodies, proteins intended for therapeutic use, immunomodulators, and growth factors and other products designed to alter the production of blood cells. | https://en.wikipedia.org/wiki?curid=8338259 |
Electrolysed water (electrolyzed water, EOW, ECA, electrolyzed oxidizing water, electro-activated water or electro-chemically activated water solution) is produced by the electrolysis of ordinary tap water containing dissolved sodium chloride. The electrolysis of such salt solutions produces a solution of hypochlorous acid and sodium hydroxide. The resulting water can be used as a disinfectant. The electrolysis occurs in a specially designed reactor which allows the separation of the cathodic and anodic solutions. In this process, hydrogen gas and hydroxide ions are produced at the cathode, leading to an alkaline solution that consists essentially of sodium hydroxide. At the anode, chloride ions are oxidized to elemental chlorine, which is present in acidic solution and can be corrosive to metals. If the solution near the anode is acidic it will contain elemental chlorine; if it is alkaline it will consist essentially of sodium hydroxide. The key to delivering a powerful sanitising agent is to form hypochlorous acid without elemental chlorine; this occurs at around neutral pH. Hypochlorous acid is a weak acid and an oxidizing agent. This "acidic electrolyzed water" can be raised in pH by mixing in the desired amount of hydroxide ion solution from the cathode compartment, yielding a solution of hypochlorous acid (HOCl) and sodium hydroxide (NaOH). A solution whose pH is 7.3 will contain equal concentrations of hypochlorous acid and hypochlorite ion; reducing the pH will shift the balance toward hypochlorous acid. | https://en.wikipedia.org/wiki?curid=8339883 |
Electrolysed water At a pH between 5.5 and 6.0, approximately 90% of the chlorine species are in the form of hypochlorous acid. In that pH range, the disinfectant capability of the solution is more effective than regular sodium hypochlorite (household bleach). Both sodium hydroxide and hypochlorous acid are efficient disinfecting agents; as mentioned above, the key to effective sanitation is to have a high proportion of hypochlorous acid present, which occurs at mildly acidic to neutral pH. EOW will kill spores and many viruses and bacteria. Electrolysis units sold for industrial and institutional disinfectant use and for municipal water treatment are known as chlorine "generators". These avoid the need to ship and store chlorine, as well as the weight penalty of shipping prepared chlorine solutions. In March 2016, inexpensive units became available for home or small-business users. Although the field of electro-chemical activation (ECA) technology has existed for more than 40 years, companies producing such solutions have only recently approached the U.S. Environmental Protection Agency (EPA) seeking registration. Recently, a number of companies that manufacture electrolytic devices have sought and received EPA registration as a disinfectant. Electrolyzed alkaline ionized water loses its potency fairly quickly, so it cannot be stored for long. Acidic ionized water (a byproduct of electrolysis) will store indefinitely (until it is used up or evaporated). Electrolysis machines can be, but are not necessarily, expensive. | https://en.wikipedia.org/wiki?curid=8339883 |
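The HOCl/OCl⁻ balance described in the two rows above follows the Henderson–Hasselbalch relation, which can be sketched as a quick calculation. This is an illustrative aside, not part of the source rows; the `hocl_fraction` helper is hypothetical, and the pKa of 7.3 is taken from the text's stated crossover point (reference values are closer to 7.5).

```python
def hocl_fraction(ph: float, pka: float = 7.3) -> float:
    """Fraction of dissolved chlorine present as HOCl at a given pH.

    Henderson-Hasselbalch: [OCl-]/[HOCl] = 10**(pH - pKa), so the
    HOCl fraction is 1 / (1 + 10**(pH - pKa)). pKa defaults to the
    crossover pH quoted in the text.
    """
    return 1.0 / (1.0 + 10 ** (ph - pka))

# At the crossover pH the two species are present in equal amounts.
print(round(hocl_fraction(7.3), 2))  # 0.5
# In the pH 5.5-6.0 window HOCl dominates, consistent with the ~90% figure.
print(round(hocl_fraction(6.0), 2))  # 0.95
# Raising the pH (adding hydroxide from the cathode side) shifts toward OCl-.
print(round(hocl_fraction(8.5), 2))  # 0.06
```

The monotonic drop in HOCl fraction with rising pH is why the text stresses keeping the solution near or just below neutral pH for sanitising power.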