text | source |
|---|---|
Sporeling A sporeling is a young plant or fungus produced by a germinated spore, similar to a seedling derived from a germinated seed. They occur in algae, fungi, lichens, bryophytes and seedless vascular plants. Most spores germinate by first producing a germ-rhizoid or holdfast followed by a germ tube emerging from the opposite end. The germ tube develops into the hypha, protonema or thallus of the gametophyte. In seedless vascular plants such as ferns and lycopodiophyta, the term "sporeling" refers to the young sporophyte growing on the gametophyte. These sporelings develop via an embryo stage from a fertilized egg inside an archegonium and depend on the gametophyte for their early stages of growth before becoming independent sporophytes. Young fern sporelings can often be found with the prothallus gametophyte still attached at the base of their fronds. | https://en.wikipedia.org/wiki?curid=10111947 |
Ebulliometer An ebulliometer is designed to accurately measure the boiling point of liquids by measuring the temperature of the vapor-liquid equilibrium either isobarically or isothermally. The primary components in a Świętosławski ebulliometer, which operates isobarically, are the boiler, the Cottrell pumps, the thermowell, and the condenser. Such an ebulliometer can be used for extremely accurate measurements of boiling temperature, molecular weights, mutual solubilities, and solvent purities by using a resistance thermometer (RTD) to measure the near-equilibrium conditions of the thermowell. The ebulliometer is frequently used for measuring the alcohol content of dry wines. See also Sweetness of wine and Oechsle scale. | https://en.wikipedia.org/wiki?curid=10113156 |
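Ebulliometric molecular-weight measurement rests on the ebullioscopic relation ΔT = K_b·m (boiling-point elevation proportional to molality). A minimal sketch of that arithmetic, where the function name and the glucose-in-water numbers in the usage note are illustrative assumptions, not values from the article:

```python
def molar_mass_from_bp_elevation(k_b, solute_g, solvent_kg, delta_t):
    """Estimate solute molar mass (g/mol) from boiling-point elevation.

    Ebullioscopic relation: delta_T = K_b * molality, with
    molality = (solute_g / M) / solvent_kg, so M = K_b * solute_g / (delta_T * solvent_kg).
    """
    return k_b * solute_g / (delta_t * solvent_kg)
```

For example, with water's K_b of 0.512 K·kg/mol, dissolving 9 g of glucose in 0.5 kg of water raises the boiling point by about 0.0512 K, from which the function recovers a molar mass of 180 g/mol.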
Archaeophyte An archaeophyte is a plant species which is non-native to a geographical region, but which was introduced in "ancient" times rather than being a modern introduction; species introduced more recently are called neophytes. In Britain, archaeophytes are considered to be those species first introduced prior to 1492, when Christopher Columbus arrived in the New World and the Columbian Exchange began. In some cases, introduced species, whether archaeophytes or neophytes, may have been native species before the ice ages, which extirpated vast numbers of plant species. "Rhododendron ponticum" is an example of a species which recolonised central and northern Europe following the ice ages. Archaeophytes are often cultivated species, transported deliberately by humans, but are also often weeds of cultivation, spread accidentally with grain. Archaeophytes in the United Kingdom include sweet chestnut, wheat, field poppy, flixweed, red valerian, ground elder, soapwort, small toadflax, good King Henry and cornflower. | https://en.wikipedia.org/wiki?curid=10125600 |
Wound tumor virus Wound tumor virus is a plant and invertebrate virus found in the United States of America, belonging to the genus Phytoreovirus and the family Reoviridae. The virus is a Type III virus under the Baltimore classification system; that is, it has a double-stranded RNA genome. This genome is approximately 25,000 base pairs long and organised into twelve segments. All viral replication occurs in the cytoplasm. The virus is 22% RNA by weight, the other 78% being structural proteins. Structurally, the virus is constructed from seven different structural proteins. The capsid has icosahedral symmetry, is non-enveloped and around 70 nm in diameter; there is an inner shell with a diameter of around 50 nm. More than 50 species of plants are potential hosts for Wound tumor virus. It was first reported in "Melilotus officinalis". The virus causes tumors to form on the stem and roots of the plant, with the root tumors being more severe. The virus is spread by insect vectors of the leafhopper family, notably "Agallia constricta". Since viral replication occurs relatively independently of cellular processes, the virus also replicates in the insect vector. | https://en.wikipedia.org/wiki?curid=10129040 |
Bernard Zabłocki (January 1, 1907 in Navahrudak – March 3, 2002 in Delta, British Columbia) was a Polish microbiologist and immunologist. He was a professor at the University of Łódź (from 1950) and a member of the Polish Academy of Sciences (from 1965). Notable works include "Bakterie i wirusy chorobotwórcze człowieka" ("Pathogenic Bacteria and Viruses of Humans", 1966) and "Podstawy współczesnej immunologii" ("Foundations of Modern Immunology", 1973). | https://en.wikipedia.org/wiki?curid=10131339 |
Pulse storm A pulse storm is a single-cell thunderstorm of substantial intensity which produces severe weather only for short periods of time. Such a storm weakens and then generates another short burst – hence "pulse". Single-cell thunderstorms ordinarily form in environments with low wind shear and moderate instability, with the low wind shear contributing to a short average lifespan of less than an hour. When the instability, measured by convective available potential energy (CAPE), is strong, the updraft brings a large amount of humid air very high above ground and generates a cumulonimbus cloud with high water and ice content. When that rain, and even hail, falls from the cloud, it can generate damaging winds through downbursts. Rarely, a weak tornado develops in association with a pulse storm, as the environment is only weakly sheared, or not sheared at all. One can distinguish three stages in the evolution of a pulse storm: | https://en.wikipedia.org/wiki?curid=10132497 |
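CAPE, mentioned above as the instability measure, is an integral of parcel buoyancy over height. A rough illustrative sketch follows; the function name and the sample profile are invented here, and operational implementations use virtual temperature and pressure coordinates rather than this simple layer sum:

```python
G = 9.81  # gravitational acceleration, m/s^2

def cape(heights_m, t_parcel_k, t_env_k):
    """Crude CAPE estimate (J/kg).

    Sums g * (Tp - Te) / Te * dz over the layers in which the lifted
    parcel is warmer (positively buoyant) than its environment.
    """
    total = 0.0
    for i in range(len(heights_m) - 1):
        dz = heights_m[i + 1] - heights_m[i]
        # layer-mean buoyancy from the two bounding levels
        b0 = (t_parcel_k[i] - t_env_k[i]) / t_env_k[i]
        b1 = (t_parcel_k[i + 1] - t_env_k[i + 1]) / t_env_k[i + 1]
        buoy = G * (b0 + b1) / 2.0
        if buoy > 0:
            total += buoy * dz
    return total
```

A parcel 3 K warmer than a 300 K environment through a 1 km deep layer contributes roughly 98 J/kg by this estimate.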
Local storm report A Local Storm Report (LSR) is transmitted by the National Weather Service (NWS) when it receives significant information from storm spotters, such as amateur radio operators, storm chasers, law enforcement officials, civil defense (now emergency management) personnel, firefighters, EMTs or public citizens, about severe weather conditions in their warning responsibility area (County Warning Area or CWA). Those reports are received by local National Weather Service offices (WFOs), and they can be used to issue Severe Thunderstorm Warnings, Tornado Warnings, and other weather warnings/bulletins, in addition to the LSR. The Storm Prediction Center, working with the NWS WFOs, collects these reports for its own database, and it also works with the National Climatic Data Center, which eventually stores the reports in the official record, which is called "Storm Data". The following is an example of a stand-alone LSR that has one individual report from a SKYWARN spotter: Summary LSRs, which can have an extensive listing of individual reports, are also often issued by NWS WFOs after a weather event has ended in order to inform the public and news media outlets of the breadth of severe weather across a WFO's CWA. | https://en.wikipedia.org/wiki?curid=10132787 |
Othmar Zeidler (29 August 1850 – 17 June 1911) was an Austrian chemist credited with the first synthesis of DDT. He was born on 29 August 1850 in Vienna, the son of the Viennese pharmacist Franz Zeidler. Othmar's brother, Franz Zeidler Jr. (1851–1901), also became a chemist and would collaborate with him on several projects. As a doctoral student with Adolf von Baeyer at the University of Strasbourg, then in Germany, Zeidler is credited with the first synthesis of the insecticide dichlorodiphenyltrichloroethane (DDT) in 1874. Othmar returned to Austria before 1876 and, after working at the "I. chemischen Universitätslaboratorium" at the University of Vienna, became a pharmacist in the Fünfhaus district of the capital. He died in Mauer near Vienna on 17 June 1911. | https://en.wikipedia.org/wiki?curid=10143223 |
Baby Bio is the brand name for a range of house plant and, more recently, outdoor plant care products created by Pan Britannica Industries Ltd (PBI) and marketed by Bayer. The first and most popular product was a house plant feed, or fertilizer: a dark brown concentrate that must be diluted with water before use. Coming in a bottle styled after an old-fashioned perfume bottle, it contains nitrogen, phosphorus and potassium, ensuring that the plant receives the necessary macronutrients. Baby Bio is a very popular house plant feed in the UK and can be used all year round, even on bonsai plants, with the text on the bottle promising greener leaves and vibrant colours. Part of the popularity of the brand in the UK arose from the major success of Dr Hessayon's series of Expert books, which also came from PBI, starting in 1958. As well as concentrate bottles of Baby Bio, the range now includes ready-diluted sprays and 1 litre bottles that also contain pesticides; 'Roota', a rooting hormone and fungicide solution designed to be used on the roots of plant cuttings; and leaf wipes for cleaning house plant leaves. An orchid feed is available which comes in the same bottle as the original, except that the liquid and design have a pink theme and the fertilizer is half the concentration of that in the traditional bottle. Baby Bio is also often used in biology experiments when studying algal growth. | https://en.wikipedia.org/wiki?curid=10145982 |
Composite reflectivity The composite reflectivity is the maximum dBZ reflectivity from any of the reflectivity angles of the NEXRAD weather radar. The reflectivity on an individual PPI angle shows the precipitation intensity at that specific angle above the horizon. Some of these angles are 0.5, 1.45, 2.4, and 3.35 degrees, with the Doppler radar having up to 14 angles when it is in severe mode. In the composite, the highest intensity among those available in the different angles above each point of the image is displayed. In the Canadian weather radar network, this is called MAXR, for "maximum reflectivity" in the column. When compared to the base reflectivity – the lowest angle of elevation scanned – the composite reflectivity, including higher-elevation scan information, may appear to indicate more widespread rain. This could indicate one of two things: | https://en.wikipedia.org/wiki?curid=10148481 |
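The composite described above is simply a per-pixel maximum taken across the elevation scans. A small sketch of the idea on plain nested lists, with the function name and the dBZ values being our own illustration:

```python
def composite_reflectivity(scans):
    """Composite (MAXR) image: per-pixel maximum dBZ across elevation scans.

    `scans` is a list of equal-sized 2-D grids (lists of rows) of dBZ
    values, one grid per elevation angle.
    """
    rows, cols = len(scans[0]), len(scans[0][0])
    return [[max(scan[r][c] for scan in scans) for c in range(cols)]
            for r in range(rows)]
```

Because each output pixel takes the strongest return found at any height, echoes aloft that never reach the lowest scan still appear, which is why the composite can look more widespread than the base reflectivity.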
Equilibrium level In meteorology, the equilibrium level (EL), also known as the level of neutral buoyancy (LNB) or limit of convection (LOC), is the height at which a rising parcel of air is at the same temperature as its environment. This means that air which was unstable becomes stable when it reaches the equilibrium level, and convection stops. This level is often near the tropopause and is roughly marked by the anvil of a thunderstorm, because it is where the thunderstorm updraft is finally cut off – except in the case of overshooting tops, where the updraft continues rising to the maximum parcel level (MPL) due to momentum. More precisely, the cumulonimbus will stop rising a few kilometres before reaching the level of neutral buoyancy, and on average anvil glaciation occurs at a higher altitude over land than over sea (despite little difference in LNB from land to sea). | https://en.wikipedia.org/wiki?curid=10148780 |
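Given a sounding, the EL can be located where the parcel-minus-environment temperature difference changes sign from positive to negative. An illustrative sketch under simplifying assumptions (the function name and sample values are invented; real code works on full thermodynamic profiles):

```python
def equilibrium_level(heights_m, t_parcel_k, t_env_k):
    """Return the interpolated height at which a positively buoyant parcel
    becomes cooler than its environment (the EL/LNB), or None if no such
    crossing exists in the profile."""
    for i in range(1, len(heights_m)):
        d0 = t_parcel_k[i - 1] - t_env_k[i - 1]
        d1 = t_parcel_k[i] - t_env_k[i]
        if d0 > 0 >= d1:  # buoyancy changes sign within this layer
            frac = d0 / (d0 - d1)
            return heights_m[i - 1] + frac * (heights_m[i] - heights_m[i - 1])
    return None
```

For a parcel 2 K warm at 10 km and 2 K cold at 12 km, linear interpolation places the EL halfway, at 11 km.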
Anders Karlsson (physicist) Anders Karlsson (b. 1964 Järna, Sweden) is a scientist and professor of quantum photonics at the Royal Institute of Technology in Stockholm, Sweden. In 2004, he was awarded the Descartes Prize for outstanding cross-border research. | https://en.wikipedia.org/wiki?curid=10157536 |
Oleosin Oleosins are structural proteins found in the oil bodies of vascular plant cells. Oil bodies are not considered organelles because they are bounded by a single-layer membrane and lack the double-layer membrane required of an organelle. They are found in plant parts with high oil content that undergo extreme desiccation as part of their maturation process, and help stabilize the oil bodies. Oleosins are proteins of 16 kDa to 24 kDa and are composed of three domains: an N-terminal hydrophilic region of variable length (from 30 to 60 residues); a central hydrophobic domain of about 70 residues; and a C-terminal amphipathic region of variable length (from 60 to 100 residues). The central hydrophobic domain is proposed to be made up of beta-strand structure and to interact with the lipids; it is the only domain whose sequence is conserved. Models show oleosins having a hairpin-like hydrophobic region that is inserted into the triacylglyceride (TAG) core, while the hydrophilic parts remain outside the oil body. Oleosins have been found on the oil bodies of seeds, tapetum cells, and pollen, but not of fruits. On pollen, rather than stabilizing oil bodies, oleosins are believed to be involved in water uptake on the stigma. Oleosins provide an easy way of purifying proteins which have been produced recombinantly in plants. | https://en.wikipedia.org/wiki?curid=10162537 |
Oleosin If the protein is made as a fusion protein with oleosin and a protease recognition site is incorporated between them, the fusion protein will sit in the membrane of the oil body, which can be easily isolated by centrifugation. The oil droplets can then be mixed with aqueous medium again, and oleosin cleaved from the protein of interest. Centrifugation will cause two phases to separate again, and the aqueous medium now contains the purified protein. | https://en.wikipedia.org/wiki?curid=10162537 |
Track Imaging Cherenkov Experiment The Track Imaging Cherenkov Experiment (TrICE) is a ground-based cosmic ray telescope located at Argonne National Laboratory near Chicago, Illinois. The telescope, which contains a Fresnel lens, eight spherical mirrors, and a camera with 16 multianode photomultiplier tubes, uses the atmospheric Cherenkov imaging technique to detect Cherenkov radiation produced when cosmic rays interact with particles in the Earth's atmosphere. The telescope is primarily a research and development tool for improving photomultiplier tube cameras and electronic systems for future gamma and cosmic ray telescopes. It is also used to study the energy and composition of cosmic rays in the TeV–PeV range, and the collaboration is currently conducting pioneering work in detecting direct Cherenkov signals from cosmic rays. | https://en.wikipedia.org/wiki?curid=10167028 |
Richard Weyl Richard H. Weyl (10 August 1912, Kiel – 1988) was a German geologist and noted author on the geology of Central America. He received his Ph.D. in geology from the University of Heidelberg in 1936, was a scientist at the Paläontologisches Institut und Museum of the University of Kiel from 1941 to 1956, and from 1956 was a professor at the Geologisch-Paläontologisches Institut of the University of Giessen. He was an active member of the Bundesanstalt für Geowissenschaften und Rohstoffe (BGR, Federal Institute for Geosciences and Natural Resources; formerly the Bundesanstalt für Bodenforschung, BfB) in Latin America and the Caribbean, and a peak in Costa Rica near Mount Chirripó is reportedly named after him. His books include "Die Geologie Mittelamerikas" (Berlin-Nikolassee: Gebrüder Borntraeger, 1961), "Geologie der Antillen" (Berlin-Nikolassee: Borntraeger, 1966) and "Geology of Central America" (Berlin: Gebr. Borntraeger, 1980). | https://en.wikipedia.org/wiki?curid=10173747 |
Phytopharmacology is the study and practice of eradicating plant disease; the field originated with the "Verbandes Deutscher Pflanzenärzte" (1928–1939) (German Plant Physicians Society), headed by Otto Appel, known as the "Organiser of German Plant Protection", who initially defined the terminology of "phyto-medicine" or "plant medicine". The Deutsche Phytomedizinische Gesellschaft (German Phytomedicine Society) is the German association of phytomedicine practitioners. Academic programs in phytomedicine, such as that at the University of Hohenheim, consider the interrelationships between pathogenic microorganisms and crops, disease control methods, and research programs. In 1936, the term "phytopharmacology" was used for the field of study on drugs that affect plants. | https://en.wikipedia.org/wiki?curid=10176965 |
Felix Karl Ludwig Machatschki Karl Ludwig Felix Machatschki (22 September 1895 – 17 February 1970) was an Austrian mineralogist. He was born in Arnfels (near Leibnitz) in Styria, Austria. He studied at the University of Graz, obtaining his habilitation in 1925; in 1927 he joined the group of Victor Goldschmidt in Oslo for one year. In 1930 he was appointed as a professor at the University of Tübingen. He changed university twice, first in 1941 to the University of Munich and finally in 1944 to the University of Vienna. In 1928 he published "Zur Frage der Struktur und Konstitution der Feldspäte", a paper in which he developed the concept of the atomic structure of silicates and formulated the construction principle of feldspars. In 1946 he published "Grundlagen der allgemeinen Mineralogie und Kristallchemie" ("Fundamentals of General Mineralogy and Crystal Chemistry"). In 1961, Machatschki was awarded the Austrian Medal for Science and Art. The "Felix-Machatschki-Preis" is an award given by the Österreichische Mineralogische Gesellschaft in recognition of outstanding international scientific work in the field of mineralogy. The mineral machatschkiite commemorates his name. He was also the author of 140 individual articles in scientific journals. | https://en.wikipedia.org/wiki?curid=10180670 |
Nanogeoscience is the study of nanoscale phenomena related to geological systems. Predominantly, this is investigated by studying environmental nanoparticles between 1 and 100 nanometers in size. Other applicable fields of study include materials with at least one dimension restricted to the nanoscale (e.g. thin films, confined fluids) and the transfer of energy, electrons, protons, and matter across environmental interfaces. As more dust enters the atmosphere as a consequence of human activity (from direct effects, such as clearing of land and desertification, to indirect effects, such as global warming), it becomes more important to understand the effects of mineral dust on the gaseous composition of the atmosphere, cloud formation conditions, and global-mean radiative forcing (i.e., heating or cooling effects). Oceanographers generally study particles that measure 0.2 micrometres and larger, which means many nanoscale particles are not examined, particularly with respect to formation mechanisms. Nanogeoscience is in a relatively early stage of development. Future directions of nanoscience in the geosciences will include determining the identity, distribution, and unusual chemical properties of nanosized particles and/or films in the oceans, on the continents, and in the atmosphere, and how they drive Earth processes in unexpected ways. Further, nanotechnology will be key to developing the next generation of Earth and environmental sensing systems. | https://en.wikipedia.org/wiki?curid=10185698 |
Nanogeoscience deals with the structures, properties and behaviors of nanoparticles in soils, aquatic systems and atmospheres. One of the key features of nanoparticles is the size dependence of their stability and reactivity. This arises from the large specific surface area and differences in surface atomic structure of nanoparticles at small particle sizes. In general, the free energy of nanoparticles is inversely proportional to their particle size. For materials that can adopt two or more structures, size-dependent free energy may result in a phase stability crossover at certain sizes. Free energy reduction drives crystal growth (atom-by-atom or by oriented attachment), which may in turn drive phase transformation due to the change of the relative phase stability at increasing sizes. These processes impact the surface reactivity and mobility of nanoparticles in natural systems. Well-identified size-dependent phenomena of nanoparticles include: These size-dependent properties highlight the importance of particle size in nanoparticle stability and reactivity. | https://en.wikipedia.org/wiki?curid=10185698 |
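The phase-stability crossover follows from writing each polymorph's free energy as a bulk term plus a surface term that grows as the particle shrinks. A toy sketch under the simple model G(d) = G_bulk + k_surf/d; the function name and all coefficients below are hypothetical illustrations, not measured values:

```python
def crossover_size(g_bulk_a, surf_a, g_bulk_b, surf_b):
    """Particle diameter at which two polymorphs have equal free energy,
    assuming G(d) = G_bulk + k_surf / d for each phase.

    Below this size, the phase with the smaller surface coefficient is
    stabilized even if it is metastable in bulk.
    """
    return (surf_a - surf_b) / (g_bulk_b - g_bulk_a)
```

With hypothetical coefficients G_A(d) = 1 + 2/d and G_B(d) = 0 + 16/d (phase B stable in bulk, phase A with lower surface energy), the curves cross at d = 14, below which phase A becomes the stable form — the qualitative pattern reported for several oxide polymorphs.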
Kenneth Newbey Kenneth Raymond Newbey (11 June 1936 – 24 July 1988) was a plant ecologist, botanical collector and horticulturist. Born in Katanning, Western Australia, he collected over 12,000 specimens from the Albany–Esperance, Wheatbelt, Goldfields and Pilbara regions of Western Australia. He died in White Gum Valley in 1988. His collection was incorporated into the CALM office in Albany. Publications include | https://en.wikipedia.org/wiki?curid=10193506 |
Canwarn CANWARN, an acronym for CANadian Weather Amateur Radio Network, is an organized severe weather spotting and reporting program run by the Meteorological Services Division of Environment Canada. What CANWARN members do is called ground truthing: they confirm and add information to the remote-sensing observations of satellites and radar, as well as provide information not observable by these technologies. The program was first conceived by members of the Windsor Amateur Radio Club in Windsor, Ontario in 1986. Randy Mawson VE3TRW, Paul Robertson VE3HFQ, Jerry Beneteau VE3EXT and Bill Leal VE3ES established the original parameters and processes at that time, with the first training session held in Windsor during the winter of 1986/1987 at the Windsor Airport, home at the time of the Windsor Weather Office of Environment Canada. Paul VE3HFQ and Bill VE3ES were putting the final touches on the station (VE3YQG) located at the Windsor Weather Office in early April 1987 when the very first CANWARN net was called to order: a report of a tornado in southeast Michigan on a path towards Essex County was relayed to Environment Canada's severe weather desk in Toronto, Ontario. Later that year, after the Edmonton tornado and at the request of the Hage Report, CANWARN was expanded beyond the initial program run out of the Windsor (Ontario) Weather Office. | https://en.wikipedia.org/wiki?curid=10194232 |
Canwarn Organized storm spotting in Canada had existed prior but operated independently of Environment Canada and never fully achieved the success that the CANWARN program did. Initially, CANWARN was predominantly based in southern Ontario and central Alberta but eventually grew to encompass the entire country by the early 1990s. The United States began a national storm spotting program in the 1950s. Prior to that, it too had only local spotting programs. In the 1970s, it increased spotting efforts and launched its Skywarn program, which partly inspired CANWARN. In the 2000s, Europe also began organized spotting efforts under the auspices of Skywarn Europe, which consists of autonomous branches in about a dozen countries. | https://en.wikipedia.org/wiki?curid=10194232 |
Gaisser–Hillas function The Gaisser–Hillas function is used in astroparticle physics. It parameterizes the longitudinal particle density in a cosmic ray air shower. The function was proposed in 1977 by Thomas K. Gaisser and Anthony Michael Hillas. The number of particles $N(X)$ as a function of traversed atmospheric depth $X$ is expressed as $N(X) = N_{\max}\left(\frac{X - X_0}{X_{\max} - X_0}\right)^{(X_{\max} - X_0)/\lambda}\exp\left(\frac{X_{\max} - X}{\lambda}\right)$, where $N_{\max}$ is the maximum number of particles, observed at depth $X_{\max}$, and $X_0$ and $\lambda$ are primary mass and energy dependent parameters. Using the substitutions $x = (X - X_0)/\lambda$, $m = (X_{\max} - X_0)/\lambda$ and $n = N/N_{\max}$, the function can be written in an alternative one-parametric ("m") form as $n(x) = \left(\frac{x}{m}\right)^{m} e^{m - x}$. | https://en.wikipedia.org/wiki?curid=10196392 |
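The Gaisser–Hillas profile and its one-parametric form can be evaluated directly; the profile peaks at N_max when X = X_max, and the two forms agree after the substitutions. A sketch with illustrative parameter values (function names and the sample depths are our own):

```python
import math

def gaisser_hillas(x_depth, n_max, x0, x_max, lam):
    """Gaisser-Hillas longitudinal shower profile N(X)."""
    ratio = (x_depth - x0) / (x_max - x0)
    power = (x_max - x0) / lam
    return n_max * ratio ** power * math.exp((x_max - x_depth) / lam)

def gaisser_hillas_1p(x, m):
    """One-parametric form n(x) = (x/m)^m * e^(m - x), where
    x = (X - X0)/lambda, m = (Xmax - X0)/lambda and n = N/Nmax."""
    return (x / m) ** m * math.exp(m - x)
```

Both functions are algebraically identical: (X - X0)/(Xmax - X0) equals x/m, the exponent (Xmax - X0)/λ equals m, and (Xmax - X)/λ equals m - x.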
Meade LX90 The Meade LX90 is a Schmidt–Cassegrain telescope made by Meade Instruments for the mid-priced (around US$2,000 circa 2008) commercial telescope market. It uses a similar optical system to the bigger and more expensive Meade LX200, although it lacks some useful functions such as primary mirror locking. The LX90 telescopes were equipped with Autostar soon after its 1999 introduction by Meade Instruments. Optical apertures in the product line included 8 in (20 cm), 10 in (25 cm) and 12 in (30 cm), on a double-tine fork mount with the Autostar system. | https://en.wikipedia.org/wiki?curid=10201721 |
Ali Abdelghany Ali Ezzeldin Abdelghany (born 16 June 1944 in Cairo) is an Egyptian academic and marine biologist. Abdelghany graduated with a bachelor of science degree from Cairo University in 1967. Abdelghany received a master's degree from Auburn University in 1982 with a specialization in fisheries management and his doctorate in aquaculture nutrition from the University of Idaho in 1986. Before earning his graduate degrees, Abdelghany received a fellowship with the Food and Agriculture Organization of the United Nations. After completing his formal education, Abdelghany returned to Egypt in 1986 and joined the Central Laboratory of Aquaculture Research (CLAR) at Sharqiyah as head of the department of nutrition. Since 1986, he has done research on various fish-related issues, including improving dietary growth and reducing feeding costs through alternative methods. He has twice been appointed director of CLAR (1993/1994 and 2001/2002). | https://en.wikipedia.org/wiki?curid=10201896 |
Biological computing Biocomputers use systems of biologically derived molecules—such as DNA and proteins—to perform computational calculations involving storing, retrieving, and processing data. The development of biocomputers has been made possible by the expanding new science of nanobiotechnology. The term nanobiotechnology can be defined in multiple ways; in a more general sense, nanobiotechnology can be defined as any type of technology that uses both nano-scale materials (i.e. materials having characteristic dimensions of 1–100 nanometers) and biologically based materials. A more restrictive definition views nanobiotechnology more specifically as the design and engineering of proteins that can then be assembled into larger, functional structures. The implementation of nanobiotechnology, as defined in this narrower sense, provides scientists with the ability to engineer biomolecular systems specifically so that they interact in a fashion that can ultimately result in the computational functionality of a computer. Biocomputers use biologically derived materials to perform computational functions. A biocomputer consists of a pathway or series of metabolic pathways involving biological materials that are engineered to behave in a certain manner based upon the conditions (input) of the system. The resulting pathway of reactions that takes place constitutes an output, which is based on the engineering design of the biocomputer and can be interpreted as a form of computational analysis. | https://en.wikipedia.org/wiki?curid=10203313 |
Biological computing Three distinguishable types of biocomputers include biochemical computers, biomechanical computers, and bioelectronic computers. Biochemical computers use the immense variety of feedback loops that are characteristic of biological chemical reactions in order to achieve computational functionality. Feedback loops in biological systems take many forms, and many different factors can provide both positive and negative feedback to a particular biochemical process, causing either an increase in chemical output or a decrease in chemical output, respectively. Such factors may include the quantity of catalytic enzymes present, the amount of reactants present, the amount of products present, and the presence of molecules that bind to and thus alter the chemical reactivity of any of the aforementioned factors. Given the nature of these biochemical systems to be regulated through many different mechanisms, one can engineer a chemical pathway comprising a set of molecular components that react to produce one particular product under one set of specific chemical conditions and another particular product under another set of conditions. The presence of the particular product that results from the pathway can serve as a signal, which can be interpreted—along with other chemical signals—as a computational output based upon the starting chemical conditions of the system (the input) | https://en.wikipedia.org/wiki?curid=10203313 |
Biological computing Biomechanical computers are similar to biochemical computers in that they both perform a specific operation that can be interpreted as a functional computation based upon specific initial conditions which serve as input. They differ, however, in what exactly serves as the output signal. In biochemical computers, the presence or concentration of certain chemicals serves as the output signal. In biomechanical computers, however, the mechanical shape of a specific molecule or set of molecules under a set of initial conditions serves as the output. Biomechanical computers rely on the nature of specific molecules to adopt certain physical configurations under certain chemical conditions. The mechanical, three-dimensional structure of the product of the biomechanical computer is detected and interpreted appropriately as a calculated output. Biocomputers can also be constructed in order to perform electronic computing. Again, like both biomechanical and biochemical computers, computations are performed by interpreting a specific output that is based upon an initial set of conditions that serve as input. In bioelectronic computers, the measured output is the nature of the electrical conductivity that is observed in the bioelectronic computer. This output comprises specifically designed biomolecules that conduct electricity in highly specific manners based upon the initial conditions that serve as the input of the bioelectronic system | https://en.wikipedia.org/wiki?curid=10203313 |
Biological computing In network-based biocomputation, self-propelled biological agents, such as molecular motor proteins or bacteria, explore a microscopic network that encodes a mathematical problem of interest. The paths of the agents through the network and/or their final positions represent potential solutions to the problem. For instance, in the system described by Nicolau et al., mobile molecular motor filaments are detected at the "exits" of a network encoding the NP-complete problem SUBSET SUM. All exits visited by filaments represent correct solutions to the problem; exits not visited are non-solutions. The motility proteins are either actin and myosin or kinesin and microtubules. The myosin or kinesin, respectively, is attached to the bottom of the network channels. When adenosine triphosphate (ATP) is added, the actin filaments or microtubules are propelled through the channels, thus exploring the network. The energy conversion from chemical energy (ATP) to mechanical energy (motility) is highly efficient compared with, for example, electronic computing, so the computer, in addition to being massively parallel, also uses orders of magnitude less energy per computational step. The behavior of biologically derived computational systems such as these relies on the particular molecules that make up the system, which are primarily proteins but may also include DNA molecules. Nanobiotechnology provides the means to synthesize the multiple chemical components necessary to create such a system. | https://en.wikipedia.org/wiki?curid=10203313 |
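The network computation can be mimicked in software: at each junction layer an agent either incorporates the next set element or skips it, so the reachable exits are exactly the achievable subset sums. A schematic sketch of that encoding (a simulation of the idea, not the physical device):

```python
def subset_sum_exits(values):
    """Simulate agents exploring a SUBSET SUM network.

    At each junction an agent either adds the next value (a 'pass'
    junction shifting it down-right) or skips it. The set of reachable
    exit positions equals the set of achievable subset sums.
    """
    sums = {0}  # one agent starts at the zero position
    for v in values:
        # each reachable position branches: skip v, or add v
        sums |= {s + v for s in sums}
    return sums
```

For the set {2, 5, 9}, exits 0, 2, 5, 7, 9, 11, 14 and 16 would be visited; an exit such as 4 would stay empty, flagging it as a non-solution.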
Biological computing The chemical nature of a protein is dictated by its sequence of amino acids—the chemical building blocks of proteins. This sequence is in turn dictated by a specific sequence of DNA nucleotides—the building blocks of DNA molecules. Proteins are manufactured in biological systems through the translation of nucleotide sequences by biological molecules called ribosomes, which assemble individual amino acids into polypeptides that form functional proteins based on the nucleotide sequence that the ribosome interprets. What this ultimately means is that one can engineer the chemical components necessary to create a biological system capable of performing computations by engineering DNA nucleotide sequences to encode for the necessary protein components. Also, the synthetically designed DNA molecules themselves may function in a particular biocomputer system. Thus, implementing nanobiotechnology to design and produce synthetically designed proteins—as well as the design and synthesis of artificial DNA molecules—can allow the construction of functional biocomputers (e.g. Computational Genes). Biocomputers can also be designed with cells as their basic components. Chemically induced dimerization systems can be used to make logic gates from individual cells. These logic gates are activated by chemical agents that induce interactions between previously non-interacting proteins and trigger some observable change in the cell | https://en.wikipedia.org/wiki?curid=10203313 |
Biological computing Network-based biocomputers are engineered by nanofabrication of the hardware from wafers where the channels are etched by electron-beam lithography or nano-imprint lithography. The channels are designed to have a high aspect ratio of cross section so the protein filaments will be guided. Also, split and pass junctions are engineered so filaments will propagate in the network and explore the allowed paths. Surface silanization ensures that the motility proteins can be affixed to the surface and remain functional. The molecules that perform the logic operations are derived from biological tissue. All biological organisms have the ability to self-replicate and self-assemble into functional components. The economic benefit of biocomputers lies in this potential of all biologically derived systems to self-replicate and self-assemble given appropriate conditions. For instance, all of the necessary proteins for a certain biochemical pathway, which could be modified to serve as a biocomputer, could be synthesized many times over inside a biological cell from a single DNA molecule. This DNA molecule could then be replicated many times over. This characteristic of biological molecules could make their production highly efficient and relatively inexpensive. Whereas electronic computers require manual production, biocomputers could be produced in large quantities from cultures without any additional machinery needed to assemble them. | https://en.wikipedia.org/wiki?curid=10203313 |
Biological computing Currently, biocomputers exist with various functional capabilities that include operations of "binary" logic and mathematical calculations. Tom Knight of the MIT Artificial Intelligence Laboratory first suggested a biochemical computing scheme in which protein concentrations are used as binary signals that ultimately serve to perform logical operations. A concentration at or above a certain level of a particular biochemical product in a biocomputer chemical pathway indicates one binary signal (a 1 or a 0); a concentration below this level indicates the other, remaining signal. Using this method of computational analysis, biochemical computers can perform logical operations in which the appropriate binary output will occur only under specific logical constraints on the initial conditions. In other words, the appropriate binary output serves as a logically derived conclusion from a set of initial conditions that serve as premises from which the logical conclusion can be made. In addition to these types of logical operations, biocomputers have also been shown to demonstrate other functional capabilities, such as mathematical computations. One such example was provided by W. L. Ditto, who in 1999 at Georgia Tech created a biocomputer composed of leech neurons that was capable of performing simple addition. These are just a few of the notable uses that biocomputers have already been engineered to perform, and the capabilities of biocomputers are becoming increasingly sophisticated. | https://en.wikipedia.org/wiki?curid=10203313 |
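Knight's concentration-threshold scheme can be sketched in a few lines. The threshold value and the AND gate below are illustrative assumptions, not parameters from any actual biochemical implementation:

```python
def to_bit(concentration, threshold=1.0):
    """Map a biochemical concentration to a binary signal: a concentration
    at or above the threshold reads as 1, below it as 0 (Knight's scheme)."""
    return 1 if concentration >= threshold else 0

def and_gate(conc_a, conc_b, threshold=1.0):
    """A pathway whose output product is 'high' only when both input
    species are present above threshold behaves as a logical AND."""
    return to_bit(conc_a, threshold) & to_bit(conc_b, threshold)
```

The same thresholding idea extends to OR, NOT, and composed gates, which is what lets a cascade of biochemical reactions compute a logical conclusion from its initial conditions.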
Biological computing Because of the availability and potential economic efficiency associated with producing biomolecules and biocomputers—as noted above—the advancement of the technology of biocomputers is a popular, rapidly growing subject of research that is likely to see much progress in the future. In March 2013, a team of bioengineers from Stanford University, led by Drew Endy, announced that they had created the biological equivalent of a transistor, which they dubbed a "transcriptor". The invention was the final of the three components necessary to build a fully functional computer: data storage, information transmission, and a basic system of logic. Parallel biological computing with networks, where bio-agent movement corresponds to arithmetical addition, was demonstrated in 2016 on a SUBSET SUM instance with 8 candidate solutions. Many examples of simple biocomputers have been designed, but the capabilities of these biocomputers are very limited in comparison to commercially available non-bio computers. Some people believe that biocomputers have great potential, but this has yet to be demonstrated. The potential to solve complex mathematical problems using far less energy than standard electronic supercomputers, as well as to perform more reliable calculations simultaneously rather than sequentially, motivates the further development of "scalable" biological computers, and several funding agencies are supporting these efforts. | https://en.wikipedia.org/wiki?curid=10203313 |
Seaport Centre is a high-tech business park located in Redwood City, California, United States, and as of 2007 is one of the largest biotechnology research complexes in the San Francisco Bay Area. The property consists of a developed building area situated in proximity to the Port of Redwood City. The property is classified as Class A office space and is constructed as a series of separate buildings. The original lands of Seaport Centre were used as salt evaporation ponds on tidal lands of the San Francisco Bay, a land use that started sometime prior to 1940. As of 2002, leasing rates at Seaport Centre were in the range of $27 per square foot per annum. In 2005, Slough Estates, a United Kingdom-based REIT, purchased the entirety of Seaport Centre to develop it as a biotechnology research center to compete with the existing biotech hubs in Silicon Valley and South San Francisco. The site is located on generally level ground slightly above mean sea level. Stormwater surface runoff is pumped from the site to discharge into Redwood Creek. Due to the large scale of this area and its historical lack of accessibility, the area's history can be revealed well with aerial photographic records. In 1989, the firm of Earth Metrics Incorporated conducted a review of historic aerial photographs dating back to 1956 (Earth Metrics, 1989). The site and its environs, as of the 1956 aerial stereo photo, were essentially undeveloped, although extensive salt evaporation ponds were evident on site. | https://en.wikipedia.org/wiki?curid=10205458 |
Seaport Centre Redwood City Planning Department records confirm that the site was used for salt evaporation since sometime prior to World War II. Analysis of five sets of time staged stereo pairs of aerial photos reveals that the site remained undeveloped until 1982; up until that time, city records show that the site was zoned as "Tidal Plain", a designation not allowing urban development. Area land use gradually changed from 1956 until 1982, in the form of gradual building development in the local area. As early as 1956 a rail spur is evident, which served the loading of raw sea salt for export from the area. No agricultural history is associated with the site, other than the salt harvesting. Since development of Seaport Centre in the early 1980s there has been a consistent trend of the property's use as a research and development hub for the biotech industry. In a 1989 tenant survey, some of the principal tenants present were: Vascular Interventions Inc., Genelabs Incorporated, Precision Images Inc., Invitron, Charles Evans Inc., ICT Corporation, Resound Incorporated, Ideon Corporation, Cygnus Research Corp, Color Prep, Aurora Systems Inc., Abekas Video Systems Inc., Personics Inc., Instor Corporation and Visucom, Inc. A number of these early firms utilized a variety of hazardous materials in their normal processes. These substances included a variety of toxic solvents and other organic chemicals such as acetone, benzene, methyl ethyl ketone and toluene. | https://en.wikipedia.org/wiki?curid=10205458 |
Seaport Centre Use of such chemicals on site is considered a risk, since the brackish groundwater at the site is shallow. Current tenants still largely focus on the medical and health fields and include: AcelRx Pharmaceuticals, Bavarian Nordic, Codexis, Genomic Health, Guardant Health, Gynesonics, Minerva Surgical, OncoMed, Stem Cell Theranostics, and Teva. | https://en.wikipedia.org/wiki?curid=10205458 |
Autoconstructive evolution is a process in which the entities undergoing evolutionary change are themselves responsible for the construction of their own offspring, and thus for aspects of the evolutionary process itself. Because biological evolution is always autoconstructive, this term mainly occurs in evolutionary computation, to distinguish artificial life type systems from conventional genetic algorithms where the GA performs replication artificially. The term was coined by Lee Spector. Autoconstructive evolution is a good platform for answering theoretical questions about the evolution of evolvability. Preliminary evidence suggests that the way in which offspring are generated changes substantially over the course of evolution. By studying these patterns, we can begin to understand how evolving systems organize themselves to evolve faster. Ultimately, such an understanding could allow us to improve our ability to solve problems with evolutionary computation. This increased ability for the process of self-replication to evolve is also thought to be important for recreating the open-ended evolutionary process observed on Earth. A relatively simple form of autoconstruction occurs in systems such as Tierra and Avida. In these systems, programs replicate themselves by allocating space in memory for their offspring and then looping over all of the instructions in their genome and copying each into the newly allocated space. This is autoconstruction in that the programs are responsible for determining what code ends up in the offspring. | https://en.wikipedia.org/wiki?curid=10207027 |
Autoconstructive evolution Programs most commonly make exact copies of themselves, with changes being introduced exclusively through mutation events. In principle, however, programs can compose a wide range of possible offspring by only copying a subset of their genomes. PushGP is a genetic programming system which evolves code written in the Push language. Push is a stack-based language designed for easy use in genetic programming, in which every variable type (e.g. strings, integers, etc.) has its own stack. All variables are stored on the stack associated with their type. One of the variable types is executable Push code. As a result, this language design allows for rich autoconstructive evolution by treating all code left on the code stack at the end of program execution as the program's offspring. Using this approach, programs have complete control over the offspring programs that they create. | https://en.wikipedia.org/wiki?curid=10207027 |
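The Tierra/Avida-style copy loop described above, with mutation as the only source of variation, can be sketched as follows. The instruction set and mutation model here are simplified assumptions for illustration:

```python
import random

INSTRUCTIONS = ["nop", "inc", "dec", "jmp", "copy"]

def self_replicate(genome, mutation_rate=0.01, rng=None):
    """Tierra/Avida-style replication: the 'program' loops over its own
    genome and copies each instruction into newly allocated space.
    Copy errors (mutations) are the only source of variation."""
    rng = rng or random.Random()
    offspring = []                      # newly allocated memory
    for instruction in genome:          # the program's own copy loop
        if rng.random() < mutation_rate:
            instruction = rng.choice(INSTRUCTIONS)
        offspring.append(instruction)
    return offspring

parent = ["copy", "inc", "jmp"]
child = self_replicate(parent, mutation_rate=0.0)  # exact copy when rate is 0
```

In a fully autoconstructive system such as PushGP, the copy loop itself would be part of the evolving genome rather than fixed host code, so the reproduction mechanism could also change over evolutionary time.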
Exner function The Exner function is an important parameter in atmospheric modeling. It can be viewed as non-dimensionalized pressure and can be defined as Π = (p/p₀)^(R_d/c_p) = T/θ, where p₀ is a standard reference surface pressure, usually taken as 1000 hPa; R_d is the gas constant for dry air; c_p is the heat capacity of dry air at constant pressure; T is the absolute temperature; and θ is the potential temperature. | https://en.wikipedia.org/wiki?curid=10211794 |
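As a worked example, the definition above can be evaluated directly; the values used for R_d and c_p are the usual approximate constants (about 287 and 1004 J kg⁻¹ K⁻¹):

```python
def exner(p_hpa, p0_hpa=1000.0, rd=287.0, cp=1004.0):
    """Exner function: non-dimensionalized pressure, pi = (p/p0)**(Rd/cp).
    It links absolute and potential temperature via T = pi * theta."""
    return (p_hpa / p0_hpa) ** (rd / cp)

# At 500 hPa, pi is roughly 0.82, so an air parcel with potential
# temperature theta = 300 K has absolute temperature T = pi * theta.
pi_500 = exner(500.0)
temp_500 = pi_500 * 300.0
```

Models carry Π and θ rather than p and T because θ is conserved under dry adiabatic motion, which simplifies the thermodynamic equations.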
Mid Atlantic Star Party The Mid Atlantic Star Party (MASP) was an annual regional gathering of amateur astronomers (star party) held each fall around October near Robbins, North Carolina. First held in 1995, MASP was located in one of the darkest spots along the eastern U.S. coast and was the largest annual gathering of amateur astronomers between Vermont's Stellafane and Florida's Winter Star Party. With attendance usually numbering in the hundreds, MASP was held at the Occoneechee Council Boy Scout camp for its first decade of operation before scheduling conflicts forced a site change. The star party became a focus of the town of Robbins' economic and cultural planning process and spurred the creation of a regional "dark park" to control light pollution. | https://en.wikipedia.org/wiki?curid=10231661 |
Fluorosulfonate Fluorosulfonate, in organic chemistry, is a functional group that has the chemical formula F-SO₂-O-R, and typically is a very good leaving group. In inorganic chemistry, fluorosulfonate is another term for fluorosulfate, the anion F-SO₂-O⁻, the conjugate base of fluorosulfonic acid. They form a series of salts with metal and organic cations called fluorosulfates. Organic (alkyl) fluorosulfonates are usually strong alkylation agents, similar to triflate esters (F₃C-SO₂-O-R). But unlike the triflate group, the fluorosulfonate group is not stable against hydrolysis. Therefore, fluorosulfonate esters are less frequently used as alkylation agents than triflate esters. | https://en.wikipedia.org/wiki?curid=10244534 |
Natural gasoline is a liquid hydrocarbon mixture condensed from natural gas, similar to common gasoline (petrol) derived from petroleum. The chemical composition of natural gasoline is mostly five- and six-carbon alkanes (pentanes and hexanes) with smaller amounts of alkanes with longer chains. It contains significant amounts of isopentane (methylbutane), which is rare in the petroleum product. Its boiling point is within the standard range for gasoline, and its vapor pressure is intermediate between those of natural gas condensate (drip gas) and liquefied petroleum gas. Its typical gravity is around 80 API. Natural gasoline is rather volatile and unstable, and has a low octane rating, but can be blended with other hydrocarbons to produce commercial gasoline. It is also used as a solvent to extract oil from oil shale. Its properties are standardized by GPA Midstream (formerly the Gas Processors Association). Natural gasoline is often used as a denaturant for fuel-grade ethanol, where it is commonly added volumetrically between 2.0% and 2.5% to make denatured fuel ethanol (DFE), or E98. This process renders the fuel-grade ethanol undrinkable. It is then transferred to a blender, which will add this E98 to conventional gasoline to make common 87 octane fuels (E10). It can also be added to ethanol in higher volumetric concentrations to produce high-level blends of ethanol, such as E85. Natural gasoline has a lower octane rating (RON roughly equal to 70) than conventional commercial distilled gasoline, so it cannot normally be used by itself as fuel for modern automobiles. | https://en.wikipedia.org/wiki?curid=10249946 |
Natural gasoline However, when mixed with higher concentrations of ethanol (RON roughly equal to 113) to produce products such as E85, the octane level of the natural gasoline and ethanol mixture is now within the usable range for flex-fuel vehicles. It may be sourced from production of natural-gas wells (see "drip gas") or produced by extraction processes in the field, as opposed to refinery cracking of conventional gasoline. | https://en.wikipedia.org/wiki?curid=10249946 |
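The octane figures quoted above can be combined with a simple volume-weighted average to see why the E85 blend lands in a usable range. Real octane blending is non-linear, so this linear model is only an illustrative approximation:

```python
def linear_blend_ron(fractions_and_rons):
    """Volume-weighted linear estimate of a blend's RON.
    Real octane blending behavior is non-linear; this is a rough model."""
    total = sum(f for f, _ in fractions_and_rons)
    return sum(f * ron for f, ron in fractions_and_rons) / total

# E85: roughly 85% ethanol (RON ~113) + 15% natural gasoline (RON ~70)
e85_ron = linear_blend_ron([(0.85, 113.0), (0.15, 70.0)])
```

Under this crude model the blend comes out well above 100 RON, consistent with the text's point that mixing with ethanol lifts natural gasoline into the range usable by flex-fuel vehicles.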
Chemical library A chemical library or compound library is a collection of stored chemicals usually used ultimately in high-throughput screening or industrial manufacture. The chemical library can consist, in simple terms, of a series of stored chemicals. Each chemical has associated information stored in some kind of database with information such as the chemical structure, purity, quantity, and physicochemical characteristics of the compound. In drug discovery high-throughput screening, it is desirable to screen a drug target against a selection of chemicals that try to take advantage of as much of the appropriate chemical space as possible. The chemical space of all possible chemical structures is extraordinarily large. Most stored chemical libraries do not typically have a fully represented or sampled chemical space, mostly because of storage and cost concerns. However, since many molecular interactions cannot be predicted, the wider the chemical space that is sampled by the chemical library, the better the chance that high-throughput screening will find a "hit"—a chemical with an appropriate interaction in a biological model that might be developed into a drug. An example of a chemical library in drug discovery would be a series of chemicals known to inhibit kinases, or in industrial processes, a series of catalysts known to polymerize resins. Chemical libraries are usually generated for a specific goal, and larger chemical libraries could be made of several groups of smaller libraries stored in the same location. | https://en.wikipedia.org/wiki?curid=10257708 |
Chemical library In the drug discovery process, for instance, a wide range of organic chemicals are needed to test against models of disease in high-throughput screening. Therefore, most of the chemical synthesis needed to generate chemical libraries in drug discovery is based on organic chemistry. A company that is interested in screening for kinase inhibitors in cancer may limit its chemical libraries and synthesis to just those types of chemicals known to have affinity for ATP binding sites or allosteric sites. Generally, however, most chemical libraries focus on large groups of varied organic chemical series where an organic chemist can make many variations on the same molecular scaffold or molecular backbone. Sometimes chemicals can be purchased from outside vendors as well and included into an internal chemical library. Depending upon their scope and design, chemical libraries can also be classified as diversity-oriented, drug-like, lead-like, peptide-mimetic, natural-product-like, or targeted against a specific family of biological targets such as kinases, GPCRs, proteases, PPIs, etc. Also notable among compound libraries are fragment compound libraries, which are mainly used for fragment-based drug discovery (FBDD). Chemical libraries are usually designed by chemists and chemoinformatics scientists and synthesized through organic chemistry and medicinal chemistry. | https://en.wikipedia.org/wiki?curid=10257708 |
Chemical library The method of chemical library generation usually depends on the project, and there are many factors to consider when using rational methods to select screening compounds. Typically, a range of chemicals is screened against a particular drug target or disease model, and the preliminary "hits", or chemicals that show the desired activity, are re-screened to verify their activity. Once they are qualified as a "hit" by their repeatability and activity, these particular chemicals are registered and analysed. Commonalities among the different chemical groups are studied, as they are often reflective of a particular chemical subspace. Additional chemistry work may be needed to further optimize the chemical library in the active portion of the subspace. When it is needed, more synthesis is completed to extend out the chemical library in that particular subspace by generating more compounds that are very similar to the original hits. This new selection of compounds within this narrow range is further screened and then taken on to more sophisticated models for further validation in the drug discovery hit-to-lead process. The "chemical space" of all possible organic chemicals is large and increases exponentially with the size of the molecule. Most chemical libraries do not typically have a fully represented chemical space, mostly because of storage and cost concerns. | https://en.wikipedia.org/wiki?curid=10257708 |
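The "extend the library around the active subspace" step can be sketched with toy set-based fingerprints and Tanimoto similarity. The fingerprints, compound names, and threshold below are invented for illustration; a real pipeline would compute structural fingerprints with a cheminformatics toolkit:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two set-based fingerprints:
    |intersection| / |union|, ranging from 0 (disjoint) to 1 (identical)."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def expand_around_hits(library, hits, threshold=0.6):
    """Select library compounds in the same chemical subspace as the
    confirmed hits (similarity >= threshold) for follow-up screening."""
    return [name for name, fp in library.items()
            if any(tanimoto(fp, h) >= threshold for h in hits)]

# Hypothetical feature-set fingerprints for three stored compounds.
library = {"cpd1": {1, 2, 3, 4}, "cpd2": {1, 2, 3, 9}, "cpd3": {7, 8}}
followups = expand_around_hits(library, [{1, 2, 3, 4}])
```

Compounds similar to a confirmed hit are carried forward, while dissimilar ones (here "cpd3") are left out of the focused follow-up set.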
Chemical library Because of the expense and effort involved in chemical synthesis, the chemicals must be correctly stored and banked away for later use to prevent early degradation. Each chemical has a particular shelf life and storage requirement and in a good-sized chemical library, there is a timetable by which library chemicals are disposed of and replaced on a regular basis. Some chemicals are fairly unstable, radioactive, volatile or flammable and must be stored under careful conditions in accordance with safety standards such as OSHA. Most chemical libraries are managed with information technologies such as barcoding and relational databases. Additionally, robotics are necessary to fetch compounds in larger chemical libraries. Because a chemical library's individual entries can easily reach up into the millions of compounds, the management of even modest-sized chemical libraries can be a full-time endeavor. Compound management is one such field that attempts to manage and upkeep these chemical libraries as well as maximizing safety and effectiveness in their management. | https://en.wikipedia.org/wiki?curid=10257708 |
Princeton Ocean Model The Princeton Ocean Model (POM) is a community general numerical model for ocean circulation that can be used to simulate and predict oceanic currents, temperatures, salinities and other water properties. The model code was originally developed at Princeton University (G. Mellor and Alan Blumberg) in collaboration with Dynalysis of Princeton (H. James Herring, Richard C. Patchen). The model incorporates the Mellor–Yamada turbulence scheme developed in the early 1970s by George Mellor and Ted Yamada; this turbulence sub-model is widely used by oceanic and atmospheric models. At the time, early computer ocean models such as the Bryan–Cox model (developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, GFDL, which later became the Modular Ocean Model, MOM) were aimed mostly at coarse-resolution simulations of the large-scale ocean circulation, so there was a need for a numerical model that could handle high-resolution coastal ocean processes. The Blumberg–Mellor model (which later became POM) thus included new features such as a free surface to handle tides, sigma vertical coordinates (i.e., terrain-following) to handle complex topographies and shallow regions, a curvilinear grid to better handle coastlines, and a turbulence scheme to handle vertical mixing. | https://en.wikipedia.org/wiki?curid=10285944 |
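The terrain-following (sigma) vertical coordinate that distinguishes POM from z-level models like MOM can be illustrated with the standard mapping between sigma levels and physical depth. The sketch below uses the common convention sigma = 0 at the free surface and sigma = -1 at the bottom; it is a generic illustration, not POM source code:

```python
def sigma_to_depth(sigma_levels, eta, depth_h):
    """Map terrain-following sigma levels to physical depths z.

    Convention: sigma = 0 at the free surface (z = eta) and sigma = -1
    at the seabed (z = -H), so z = eta + sigma * (H + eta)."""
    return [eta + s * (depth_h + eta) for s in sigma_levels]

# The same sigma grid resolves a shallow shelf and the deep ocean alike:
# every column gets the full set of levels regardless of local depth.
levels = [0.0, -0.5, -1.0]
shelf = sigma_to_depth(levels, eta=0.1, depth_h=50.0)
deep = sigma_to_depth(levels, eta=0.1, depth_h=4000.0)
```

This is why sigma coordinates suit coastal modeling: the vertical resolution follows the bathymetry instead of leaving shallow columns with only one or two fixed z-levels.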
Princeton Ocean Model In the early 1980s the model was used primarily to simulate estuaries such as the Hudson–Raritan Estuary (by Leo Oey) and the Delaware Bay (Boris Galperin), but the first attempts to use a sigma coordinate model for basin-scale problems also began, with the coarse resolution model of the Gulf of Mexico (Blumberg and Mellor) and models of the Arctic Ocean (with the inclusion of ice–ocean coupling by Lakshmi Kantha and Sirpa Hakkinen). In the early 1990s, when the web and browsers started to be developed, POM became one of the first ocean model codes that were provided free of charge to users through the web. The establishment of the POM users group and its web support (by Tal Ezer) resulted in a continuous increase in the number of POM users, which grew from about a dozen U.S. users in the 1980s to over 1000 users in 2000 and over 4000 users by 2009; there are users from over 70 different countries. In the 1990s the usage of POM expanded to simulations of the Mediterranean Sea (Zavatarelli) and the first simulations with a sigma coordinate model of the entire Atlantic Ocean for climate research (Ezer). The development of the Mellor–Ezer optimal interpolation data assimilation scheme, which projects surface satellite data into deep layers, allowed the construction of the first ocean forecast systems for the Gulf Stream and the U.S. east coast, running operationally at NOAA's National Weather Service (Frank Aikman and others). | https://en.wikipedia.org/wiki?curid=10285944 |
Princeton Ocean Model Operational forecast systems for other regions such as the Great Lakes, the Gulf of Mexico (Oey), the Gulf of Maine (Huijie Xue) and the Hudson River (Blumberg) followed. For more information on applications of the model, see the searchable database of over 1800 POM-related publications. In the late 1990s and the 2000s many other terrain-following community ocean models were developed; some of their features can be traced back to features included in the original POM, while other features are additional numerical and parameterization improvements. Several ocean models are direct descendants of POM, such as the commercial version of POM known as the estuarine and coastal ocean model (ECOM), the navy coastal ocean model (NCOM) and the finite-volume coastal ocean model (FVCOM). Recent developments in POM include a generalized coordinate system that combines sigma and z-level grids (Mellor and Ezer), inundation features that allow simulations of wetting and drying (e.g., flooding of land areas) (Oey), and coupling of ocean currents with surface waves (Mellor). Efforts to improve turbulent mixing also continue (Galperin, Kantha, Mellor and others). POM users' meetings were held every few years, and in recent years the meetings were extended to include other models and renamed the International Workshop on Modeling the Ocean (IWMO). | https://en.wikipedia.org/wiki?curid=10285944 |
Princeton Ocean Model List of meetings: Reviewed papers from the IWMO meetings are published by "Ocean Dynamics" in special issues (IWMO-2009 Part-I, IWMO-2009 Part-II, IWMO-2010, IWMO-2011, IWMO-2012, IWMO-2013, IWMO-2014). | https://en.wikipedia.org/wiki?curid=10285944 |
Ice III Ice III is a form of solid matter which consists of tetragonal crystalline ice, formed by cooling water under pressure to low temperatures. It is the least dense of the high-pressure water phases. The proton-ordered form of ice III is ice IX. Ordinary water ice is known as ice Ih (in the Bridgman nomenclature). Different types of ice, from ice II to ice XVIII, have been created in the laboratory at different temperatures and pressures. | https://en.wikipedia.org/wiki?curid=10293481 |
Ice Ic Ice Ic (pronounced "ice one c" or "ice icy") is a metastable cubic crystalline variant of ice. H. König was the first to identify and deduce the structure of ice Ic. The oxygen atoms in ice Ic are arranged in a diamond structure; it is extremely similar to ice Ih, having a nearly identical density and the same lattice constant along the hexagonal puckered planes. It forms upon cooling at low temperatures and can persist upon warming until it transforms into ice Ih. Apart from forming from supercooled water, ice Ic has also been reported to form from amorphous ice as well as from the high-pressure ices II, III and V. It can form in, and is occasionally present in, the upper atmosphere, and is believed to be responsible for the observation of Scheiner's halo, a rare ring that occurs near 28 degrees from the Sun or the Moon. Ordinary water ice is known as ice Ih (in the Bridgman nomenclature). Different types of ice, from ice II to ice XVI, have been created in the laboratory at different temperatures and pressures. But there are doubts whether "ice Ic" really has a cubic crystal system: some authors claim that it is merely "stacking-disordered ice I" ("ice Isd"), and it has been dubbed the "most faceted ice phase in a literal and a more general sense." | https://en.wikipedia.org/wiki?curid=10293493 |
Anders Knutsson Ångström (1888, Stockholm – 1981) was a Swedish physicist and meteorologist known primarily for his contributions to the field of atmospheric radiation, though his scientific interests encompassed many diverse topics. He was the son of physicist Knut Ångström. He graduated with a BS from the University of Uppsala in 1909 and completed his MS there in 1911. He taught at the University of Stockholm. Later, he was head of the meteorology department at the State Meteorological and Hydrological Institute (SMHI) of Sweden from 1945 to 1949 and SMHI's chancellor from 1949 to 1954. He is credited with the invention of the pyranometer, the first device to accurately measure direct and indirect solar radiation. In 1962 he was awarded the International Meteorological Organization Prize by the World Meteorological Organization. | https://en.wikipedia.org/wiki?curid=10303893 |
Larry Martin Larry Dean Martin (December 8, 1943 – March 9, 2013) was an American vertebrate paleontologist and curator of the Natural History Museum and Biodiversity Research Center at the University of Kansas. Martin's work includes research on the Triassic reptile "Longisquama" and the theropod dinosaurs (or fossil birds) "Caudipteryx" and "Dakotaraptor". According to the University of Kansas, he "has been a leading opponent of the theory that birds are 'living dinosaurs.'" He also appeared in a few television documentaries about dinosaurs, including "Jurassic Fight Club". He died of cancer at the age of 69 on March 9, 2013, after a long battle with the disease. | https://en.wikipedia.org/wiki?curid=10309887 |
Wan Chun Cheng or Zheng Wanjun (1908–1987) was a Chinese botanist. Initially one of the Chinese plant collectors who followed in the wake of the Europeans after 1920, he became one of the world's leading authorities on the taxonomy of gymnosperms. Working at the National Central University in Nanjing, he was instrumental in the identification in 1944 of the Dawn Redwood, "Metasequoia glyptostroboides", previously known only from fossils. The plant "Juniperus chengii" is named in his honour. | https://en.wikipedia.org/wiki?curid=10328125 |
Shell grit is coarsely ground or broken seashells. It is used, among other things, by birds as a source of calcium for egg shell production, and to aid digestion. Other uses include protecting plants from slugs or snails and for aquariums. | https://en.wikipedia.org/wiki?curid=10328644 |
Leslie Pedley (19 May 1930 – 27 November 2018) was an Australian botanist who specialised in the genus "Acacia". He is notable for bringing into use the generic name "Racosperma", creating a split in the genus, with some 900 Australian species requiring renaming, since the type species of "Acacia", "Acacia nilotica", now "Vachellia nilotica", had a different lineage from the Australian wattles. However, the International Botanical Congress (IBC) in Melbourne in 2011 ratified its earlier decision to retain the name "Acacia" for the Australian species and to rename the African species. See also: "Acacia" and "Vachellia nilotica" (for the dispute) and APNI for a brief history of the name "Racosperma". | https://en.wikipedia.org/wiki?curid=10332826 |
Li-kuo Fu Professor Li-kuo Fu (or Li Kuo Fu; born 1934) worked for the Institute of Botany at the Chinese Academy of Sciences, Beijing. He was the author of numerous treatises on Chinese plants, notably the "China Red Data Book" of rare and endangered species in the 1990s. In 1973, he took part in the Qinghai–Tibet Expedition, during which he discovered and named the Tibetan elm, "Ulmus microcarpa". | https://en.wikipedia.org/wiki?curid=10344162 |
NGC 4121 is a dwarf elliptical galaxy in the constellation Draco. | https://en.wikipedia.org/wiki?curid=10346860 |
NGC 4125 is an elliptical galaxy in the constellation Draco. In 2016, the telescope KAIT discovered the supernova SN 2016coj in this galaxy. After detection it became brighter over the course of several days, indicating a Type Ia supernova. | https://en.wikipedia.org/wiki?curid=10347116 |
Shirshov Institute of Oceanology The Shirshov Institute of Oceanology (IO RAN) is the premier research institution for ocean, climate, and earth science in Russia. It was established in 1946 and is part of the Russian Academy of Sciences. It is headquartered in Moscow. The institute is named after Pyotr Shirshov, who founded it in 1946. Amongst others, Andrei Monin served as director. Mathematicians Grigory Barenblatt and Andrei Monin, physical oceanographers Vladimir Shtokman and Leonid Brekhovskikh, and biologist Igor Akimushkin have been or are researchers at IO RAN. Explorer and pilot of the MIR submersible to the seabed under the North Pole (a.k.a. the Arktika 2007 project) Anatoly Sagalevich and renowned Russian poet and geologist Alexander Gorodnitsky are also current researchers. | https://en.wikipedia.org/wiki?curid=10366081 |
NGC 4236 (also known as Caldwell 3) is a barred spiral galaxy located in the constellation Draco. The galaxy is a member of the M81 Group, a group of galaxies located at a distance of approximately 11.7 Mly (3.6 Mpc) from Earth. The group also contains the spiral galaxy Messier 81 and the starburst galaxy Messier 82. NGC 4236 is located away from the central part of the M81 group, at a distance of 14.5 Mly (4.45 Mpc) from Earth. | https://en.wikipedia.org/wiki?curid=10373670 |
NGC 4261 is an elliptical galaxy located around 100 million light-years away in the constellation Virgo. The galaxy is a member of its own galaxy group, known as the NGC 4261 group. The active galactic nucleus (AGN) contains a 400-million-solar-mass supermassive black hole (SMBH) with an 800-light-year-wide spiral-shaped disk of dust fueling it. The galaxy is estimated to be about 60 thousand light-years across, and a jet emanating from it is estimated to span about 88 thousand light-years. | https://en.wikipedia.org/wiki?curid=10373988 |
NGC 4650A is a polar-ring lenticular galaxy located in the constellation Centaurus. It should not be confused with the spiral galaxy NGC 4650, which shares almost the same radial distance as NGC 4650A. The real distance between both galaxies is only about 6 times the optical radius of NGC 4650. | https://en.wikipedia.org/wiki?curid=10374398 |
NGC 6782 is a barred spiral galaxy located in the constellation Pavo. The galaxy exhibits two distinct ring structures. | https://en.wikipedia.org/wiki?curid=10374695 |
NGC 4319 is a face-on barred spiral galaxy located about 77 million light years away in the constellation Draco. The morphological classification is SB(r)ab, which indicates it is a barred spiral with an inner ring structure and moderate to tightly wound arms. It is situated in physical proximity to the galaxies NGC 4291 and NGC 4386, with X-ray emissions from the intervening gap indicating that NGC 4319 and NGC 4291 may be interacting. NGC 4319 has a much higher proportion of ionized hydrogen compared to the Milky Way galaxy. In 1971, American astronomer Halton Arp noted what appeared to be a physical connection between NGC 4319 and Markarian 205, a quasi-stellar object with a much higher redshift. He suggested that if Markarian 205 is not an accidentally projected background object, then it may instead have been ejected from the nucleus of this galaxy. The discovery of an apparent luminous connection between the two created a storm of controversy as astronomers sought to refute the assertion and provide other explanations. The matter was effectively settled when observations using the Hubble Space Telescope showed that the light from Markarian 205 was passing through the disk and halo of NGC 4319 to reach the observer, placing Markarian 205 behind this galaxy and thus further away. | https://en.wikipedia.org/wiki?curid=10375094 |
Mount Fuji Radar System The Mount Fuji Radar System is a historic weather radar system located on the summit of Mount Fuji, Japan. It was completed on August 15, 1964, and is now recorded on the list of IEEE Milestones in electrical engineering. When first built, the system was the world's highest weather radar (elevation 3776 meters), and could observe major weather phenomena, such as destructive typhoons, at a range of more than 800 kilometers. It was designed by the Japan Meteorological Agency and built by Mitsubishi Electric Corporation. The system is notable for its advances in weather radar technology and remote control, and for the difficulty of its construction, which required the transport and assembly of some 500 tons of material during the mountain's short summer. It operated at a frequency of 2880 megahertz, with an output power of 1500 kilowatts and a pulse width of 3.5 microseconds. Its antenna was a circular parabolic dish, 5 meters in diameter, rotating at either 3 or 5 revolutions per minute and housed within a 9-meter radome. The system was decommissioned in 1999, having been superseded by weather satellites. The dome, radar dish and support equipment were relocated to a purpose-built museum in Fujiyoshida, Yamanashi in 2001, and the installation was replaced by an automated weather system in 2004. | https://en.wikipedia.org/wiki?curid=10378389 |
NGC 4308 is an elliptical galaxy in the constellation Coma Berenices. It is a member of the Coma I Group. | https://en.wikipedia.org/wiki?curid=10383286 |
BamHI "Bam"HI (from "Bacillus amyloliquefaciens") is a type II restriction endonuclease, which recognizes short DNA sequences (6 bp) and specifically cleaves them at a target site. This article focuses on the structure–function relations of "Bam"HI as described by Newman et al. (1995). "Bam"HI binds at the recognition sequence 5'-GGATCC-3' and cleaves each strand just after the 5'-guanine. This cleavage results in sticky ends which are 4 bp long. In its unbound form, "Bam"HI displays a central β-sheet, which resides in between α-helices. "Bam"HI undergoes a series of unconventional conformational changes upon DNA recognition, which allows the DNA to maintain its normal B-DNA conformation without distorting to facilitate enzyme binding. "Bam"HI is a symmetric dimer. DNA is bound in a large cleft formed between the two subunits; the enzyme binds in a "crossover" manner: each "Bam"HI subunit makes the majority of its backbone contacts with the phosphates of one DNA half-site, while its base-pair contacts are made with the nitrogenous bases in the major groove of the opposite DNA half-site. The protein binds the bases through either direct hydrogen bonds or water-mediated H-bonds to every H-bond donor/acceptor group in the major groove. Major groove contacts are formed by atoms residing at the amino terminus of a parallel four-helix bundle. | https://en.wikipedia.org/wiki?curid=10388309 |
BamHI This bundle marks the "Bam"HI dimer interface, and it is thought that the dipole moments of the NH2-terminal atoms of this bundle may contribute to electrostatic stabilization. The "Bam"HI enzyme is capable of making a large number of contacts with DNA. Water-mediated hydrogen bonding, as well as both main-chain and side-chain interactions, aids in binding of the "Bam"HI recognition sequence. In the major groove, the majority of enzyme–DNA contacts take place at the amino terminus of the parallel four-helix bundle, made up of α4 and α6 from each subunit. Although α6 from each subunit does not enter the DNA major groove, its preceding loops interact with the outer ends of the recognition site. Conversely, α4 from each subunit does enter the major groove at the center of the recognition sequence. A total of 18 bonds are formed between the enzyme and DNA across the 6 base-pair recognition sequence (12 direct and 6 water-mediated bonds). Arg155 and Asp154, located in a loop preceding α6, contact the outer G:C base pairs, while the central G:C pairs are contacted directly by Asp154, Arg122, and Asn116. Water-mediated hydrogen bonding through Asn116 provides binding at the inner A:T base pairs. As discussed above, the L and R subunits bind in a crossover manner, whereby the R subunit of "Bam"HI contacts the left DNA half-site of the recognition sequence. The binding of each "Bam"HI subunit is precisely the same as that of its symmetrical partner. | https://en.wikipedia.org/wiki?curid=10388309 |
BamHI The recognition site for "Bam"HI has a palindromic sequence, which can be split in half for ease in showing bonds. As of the end of 2010, there were 5 crystal structures of "Bam"HI in the Protein Data Bank. "Bam"HI, like other type II restriction endonucleases, requires divalent metal ions as cofactors to catalyze DNA cleavage. A two-metal-ion mechanism is one of the possible catalytic mechanisms of "Bam"HI, since the crystal structure shows that two metal ions can bind at the active site, which is suitable for the classical two-metal-ion mechanism to proceed. The two-metal-ion mechanism uses two metal ions to catalyze the cleavage reaction of the restriction enzyme. "Bam"HI has three critical active-site residues that are important for metal catalysis: Asp94, Glu111 and Glu113. These residues are acidic. In the presence of a metal ion, the residues point toward the metal ion; in its absence, they point outward. The two metal ions (A and B) sit 4.1 Å apart in the active site and are in line with these residues. In general, when the two metal ions are bound at the active site, they help stabilize the cluster of negative charge that builds up at the active site as an oxygen atom leaves during the transition state. First, a water molecule is activated by metal ion A at the active site. This water molecule acts as the attacking nucleophile, attacking the BamHI–DNA complex and thus making the complex negative. | https://en.wikipedia.org/wiki?curid=10388309 |
BamHI Later, another water molecule binds to metal ion B and donates a proton to the leaving group of the complex, stabilizing the build-up of negative charge on the leaving oxygen atom. The function of Ca2+ in the active site of "Bam"HI is known: it is an inhibitor of DNA cleavage, converting "Bam"HI into its pre-reactive state. This revealed that a water molecule is the attacking species; it donates a proton to the leaving group bound to Ca2+, forming a 90° O–P–O bond angle. If Glu113 is replaced by lysine, cleavage is lost, since Glu113 accepts the proton from the attacking water molecule. Because of its ability to recognize a specific DNA sequence and cleave it as a nuclease, "Bam"HI is important for understanding type II restriction endonucleases, for cloning DNA, and possibly for treating certain diseases derived from DNA mutations through gene therapy. NARP and MILS syndromes, for example, are mitochondrial diseases that can be caused by mutations in the mitochondrial DNA. Mitochondria can recover their functions after excision of the mutant sequence by a restriction endonuclease. | https://en.wikipedia.org/wiki?curid=10388309 |
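The recognition-and-cleavage behaviour described above can be illustrated with a small sketch. This is not code from any bioinformatics library; the function name and sequences are invented for the example, and only the top strand is modelled (cutting at G^GATCC means each downstream fragment begins with the 4-nt GATC portion of the sticky end):

```python
# Hypothetical sketch of BamHI digestion; not code from any real
# bioinformatics library.  BamHI recognizes 5'-GGATCC-3' and cuts
# after the first G on each strand (G^GATCC), leaving 4-nt 5'-GATC
# overhangs.  Only the top strand is modelled here.

SITE = "GGATCC"
CUT_OFFSET = 1  # cleavage occurs just after the 5'-most guanine

def bamhi_digest(seq):
    """Return top-strand fragments produced by cutting at every site."""
    fragments, start = [], 0
    pos = seq.find(SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(SITE, start)
    fragments.append(seq[start:])
    return fragments

print(bamhi_digest("AAAGGATCCTTTGGATCCAAA"))
# ['AAAG', 'GATCCTTTG', 'GATCCAAA'] -- each cut fragment starts with GATC
```

On the complementary strand the same G^GATCC cut is made, which is why the two single-stranded overhangs are complementary and can re-anneal with any other BamHI-cut end.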
Satori Kato was a Japanese chemist. Kato was initially thought to be the inventor of the first soluble instant coffee, developed while he was working in Chicago; he filed a patent in 1901 and exhibited the product at the Pan-American Exposition. It was later realised that David Strang of Invercargill, New Zealand, had invented such a product two years earlier. | https://en.wikipedia.org/wiki?curid=10397639 |
NGC 2535 is an unbarred spiral galaxy exhibiting a weak inner ring structure around the nucleus in the constellation Cancer that is interacting with NGC 2536. The interaction has warped the disk and spiral arms of NGC 2535, producing an elongated structure, visible at ultraviolet wavelengths, that contains many bright, recently formed blue star clusters in addition to enhanced star-forming regions around the galaxy center. The two galaxies are listed together in the "Atlas of Peculiar Galaxies" as an example of a spiral galaxy with a high surface brightness companion. | https://en.wikipedia.org/wiki?curid=10397972 |
NGC 2536 is a barred spiral galaxy with a prominent inner ring structure encircling the bar in the constellation Cancer that is interacting with NGC 2535. The two galaxies are listed together in the Atlas of Peculiar Galaxies as an example of a spiral galaxy with a high surface brightness companion. | https://en.wikipedia.org/wiki?curid=10398015 |
Mixed dark matter (MDM) is a dark matter (DM) model proposed during the 1990s. MDM is also called hot + cold dark matter. The most abundant form of dark matter is cold dark matter, making up almost one fourth of the energy contents of the Universe. Neutrinos are the only known particles whose Big Bang thermal relic should compose at least a fraction of hot dark matter (HDM), although other candidates are speculated to exist. In the early 1990s, the power spectrum of fluctuations in galaxy clustering did not agree entirely with the predictions of a standard cosmology built around pure cold DM. MDM with a composition of about 80% cold and 20% hot (neutrinos) was investigated and found to agree better with observations. This large amount of HDM was made obsolete by the discovery in 1998 of the acceleration of the universal expansion, which eventually led to the dark energy + dark matter paradigm of the following decade. The cosmological effects of cold DM are almost opposite to those of hot DM. Since cold DM promotes the growth of large-scale structures, it is often believed to be composed of weakly interacting massive particles (WIMPs). Conversely, hot DM suffers from free-streaming for most of the history of the Universe, washing out the formation of structure on small scales. In other words, the mass of hot DM particles is too small to produce the observed gravitationally bound objects in the Universe. For that reason, the hot DM abundance is constrained by cosmology to less than one percent of the Universe's contents. | https://en.wikipedia.org/wiki?curid=10400397 |
Mixed dark matter The Mixed Dark Matter scenario recovered relevance when DM was proposed to be a thermal relic of a Bose–Einstein condensate made of very light bosonic particles, as light as neutrinos or even lighter like the Axion. This cosmological model predicts that cold DM is made of many condensed particles, while a small fraction of these particles resides in excited energetic states contributing to hot DM. | https://en.wikipedia.org/wiki?curid=10400397 |
Susan G. Ernst is a professor of Biology at Tufts University and was the first Dean of the School of Arts and Sciences. Her research is in Developmental biology and primarily focuses on the role of the Endo16 gene in embryogenesis. She uses the sea urchin as her model system for research. Ernst graduated with honors from Louisiana State University in 1968 with a B.S. in Zoology. She received her Ph.D. in Zoology in 1975 from the University of Massachusetts Amherst. After completing post-doctoral fellowships first at Case Western Reserve and then the California Institute of Technology, Ernst became an Assistant Professor at Tufts University in 1979. From 1997 to 2005, Ernst held a number of deanships at Tufts serving, most notably, as the Dean of the School of Arts and Sciences from 2001 to 2005. Throughout this time she continued to teach undergraduate and graduate courses and pursue her research. In 2005, Ernst returned to teaching and research full-time. | https://en.wikipedia.org/wiki?curid=10404604 |
NGC 1566 NGC 1566, sometimes known as the Spanish Dancer, is an intermediate spiral galaxy in the constellation Dorado. It is the dominant and brightest member of the Dorado Group, being among the brightest Seyfert galaxies in the sky. Absolute luminosity is , and it is calculated to contain of H I. On June 19, 2010, Berto Monard from South Africa detected a magnitude 16 supernova 13" west and 22" south of the center of NGC 1566, at coordinates 04 19 58.83 -54 56 38.5. | https://en.wikipedia.org/wiki?curid=10407184 |
NGC 2537 is a blue compact dwarf galaxy in the constellation Lynx, located around 3 degrees NNW of 31 Lyncis. It is also catalogued as Arp 6 and Mrk 86. It belongs to the iE class of the blue compact dwarf (BCD) classification, which describes galaxies whose spectra show an underlying smooth elliptical low-surface-brightness component with a superimposed "knotted" star-formation component (Gil de Paz et al. 2000, Astron. Astrophys. Suppl. Ser. 145, 378). It was long thought to be possibly interacting with IC 2233. However, this is now considered highly unlikely, as radio observations with the Very Large Array showed that the two galaxies lie at different distances. | https://en.wikipedia.org/wiki?curid=10407390 |
Trogloxene Trogloxenes or subtroglophiles, also called cave guests, are animal species which periodically live in underground habitats such as caves, or at their very entrances, but cannot live exclusively in such habitats. For many scientists the terms trogloxene and subtroglophile have slightly different but closely related meanings, with the former covering species that are occasional visitors to underground habitats and the latter covering species that live there more permanently but have to go outside (for example, to find food). Both of these are in contrast to troglobites, which live strictly in underground habitats. Examples of trogloxene/subtroglophile species are bats, rats, raccoons and some opiliones (this last group also has fully troglobitic species). Several extinct trogloxenes are known, such as cave bears, cave lions, cave leopards, and cave hyenas. Evidence gathered by geologists and archaeologists shows that these animals lived in caves during at least the latter part of the third interglacial epoch, and on through the fourth and last glacial advance, when, although central Europe was free from an ice cap, an almost Arctic climate prevailed, with much rain. This period is known as the Upper Paleolithic, when humanity in Europe was represented by the Neanderthals. During the Upper Paleolithic, many carnivores gradually adapted by growing thicker fur and resorting far more than previously to the shelter of caves. | https://en.wikipedia.org/wiki?curid=10407886 |
Trogloxene The cave bear ("Ursus spelaeus") was the most habitual user of caves, and occupied them before humans began to do so. | https://en.wikipedia.org/wiki?curid=10407886 |
Hypohalite A hypohalite is an oxyanion containing a halogen in oxidation state +1. This includes hypoiodite, hypobromite and hypochlorite. In hypofluorite (oxyfluoride) the fluorine atom is in a −1 oxidation state. Hypohalites are also encountered in organic chemistry, often as acyl hypohalites (see the Hunsdiecker reaction). Sodium hypohalite is used in the haloform reaction as a test for methyl ketones. | https://en.wikipedia.org/wiki?curid=10409512 |
Strain scanning In physics, strain scanning is the general name for various techniques that aim to measure the strain in a crystalline material through its effect on the diffraction of X-rays and neutrons. In these methods the material itself is used as a form of strain gauge. The various methods are derived from powder diffraction, but look for the small shifts in the diffraction spectrum that indicate a change in a lattice parameter, instead of trying to derive unknown structural information. By comparing the lattice parameter to a known reference value it is possible to determine the strain. If sufficient measurements are made in different directions it is possible to derive the strain tensor. If the elastic properties of the material are known, one can then compute the stress tensor. At its most basic level, strain scanning uses shifts in Bragg diffraction peaks to determine the strain. Strain is defined as the change in length (the shift in lattice spacing, Δd) divided by the original length (the unstrained lattice spacing, d0). In diffraction-based strain scanning this becomes the change in peak position divided by the original position. The precise equation can be written in terms of diffraction angle, energy, or, for relatively slow-moving neutrons, time of flight. The details of the technique are heavily influenced by the type of radiation used, since lab X-rays, synchrotron X-rays and neutrons have very different properties. Nevertheless, there is considerable overlap between the various methods. | https://en.wikipedia.org/wiki?curid=10410181 |
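A minimal sketch of the angle-dispersive case: the wavelength and peak positions below are assumed values, not measured data. The lattice spacing is recovered from each peak via Bragg's law, and the strain is the relative change in spacing:

```python
# Minimal sketch of angle-dispersive strain scanning; the wavelength
# and peak angles are assumed values, not measured data.
# Bragg's law, lambda = 2 d sin(theta), gives the lattice spacing d
# from a diffraction peak centred at scattering angle 2*theta.
import math

def lattice_spacing(wavelength, two_theta_deg):
    """d-spacing from Bragg's law for a peak centred at 2*theta."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

wavelength = 1.54                        # angstroms (Cu K-alpha line)
d0 = lattice_spacing(wavelength, 90.00)  # unstrained reference peak
d = lattice_spacing(wavelength, 89.95)   # peak shifted to lower angle
strain = (d - d0) / d0                   # epsilon = delta_d / d0
print(f"strain = {strain:.2e}")          # small positive (tensile) strain
```

A shift of the peak to lower angle corresponds to a larger d-spacing and hence a tensile strain, which is the sign convention used here.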
NGC 4448 is a barred spiral galaxy with a prominent inner ring structure in the constellation Coma Berenices. The galaxy is a member of the Coma I Group. | https://en.wikipedia.org/wiki?curid=10413368 |
NGC 4450 is a spiral galaxy in the constellation Coma Berenices. It is a member of the Virgo Cluster that, like Messier 90, shows smooth, nearly featureless spiral arms, with few star formation regions and little neutral hydrogen compared to other similar spiral galaxies, something that justifies its classification as an anemic galaxy. Measurements with the help of the Hubble Space Telescope show the center of this galaxy has a supermassive black hole. | https://en.wikipedia.org/wiki?curid=10413599 |
NGC 4639 is a barred spiral galaxy located in the constellation Virgo, over 70 million light-years from Earth. Its core contains a massive black hole, and it is also classified as a Seyfert galaxy. NGC 4639 is a member of the Virgo Cluster. | https://en.wikipedia.org/wiki?curid=10414047 |
Moisture advection is the horizontal transport of water vapor by the wind. Measurement and knowledge of atmospheric water vapor, or "moisture", is crucial in the prediction of all weather elements, especially clouds, fog, temperature, humidity, thermal comfort indices and precipitation. Regions of moisture advection are often co-located with regions of warm advection. Using the classical definition of advection, moisture advection is defined as Adv(ρ_v) = −V · ∇ρ_v, in which V is the horizontal wind vector and ρ_v is the density of water vapor. However, water vapor content is usually measured in terms of mixing ratio (mass fraction) in reanalyses, or dew point (the temperature at which the partial vapor pressure reaches saturation, i.e. the relative humidity reaches 100%) in operational forecasting. The advection of dew point itself can be thought of as moisture advection: Adv(T_d) = −V · ∇T_d. In terms of mixing ratio, horizontal transport/advection can be represented as a moisture flux f = qV, in which q is the mixing ratio. This value can be integrated through the depth of the atmosphere to give the total column transport of moisture: F = ∫ ρ q V dz = (1/g) ∫ q V dp, where ρ is the density of air, the pressure integral runs from the ground-surface pressure P to the top of the atmosphere, and the right-hand form uses the hydrostatic equilibrium approximation. The divergence (convergence) of this column flux implies net evapotranspiration (precipitation) adding (removing) moisture to (from) the column, via the moisture budget ∇ · F = E − P − ∂W/∂t, where P, E, and the time-derivative term are precipitation, evapotranspiration, and the time rate of change of precipitable water W, all represented in terms of mass/(unit area * unit time). | https://en.wikipedia.org/wiki?curid=10414062 |
Moisture advection One can convert these to more typical length units, such as mm, by dividing by the density of liquid water and applying the correct length unit conversion factor. | https://en.wikipedia.org/wiki?curid=10414062 |
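As a hedged illustration of the advection formula above, the horizontal term −V · ∇q can be estimated with finite differences on a small grid; every grid value here is assumed for the example, not taken from any dataset:

```python
# Illustrative finite-difference estimate of moisture advection,
# -V . grad(q); all grid values are assumed for the example.
import numpy as np

dx = dy = 100e3                      # grid spacing in metres
u = np.full((3, 3), 10.0)            # westerly wind, m/s
v = np.zeros((3, 3))                 # no meridional wind
# mixing ratio (kg/kg) decreasing toward the east
q = np.array([[8e-3, 7e-3, 6e-3],
              [8e-3, 7e-3, 6e-3],
              [8e-3, 7e-3, 6e-3]])

dqdx = np.gradient(q, dx, axis=1)    # dq/dx by central differences
dqdy = np.gradient(q, dy, axis=0)    # dq/dy
advection = -(u * dqdx + v * dqdy)   # kg/kg per second

# Positive at the centre point: moist air is being carried in from the west.
print(advection[1, 1])
```

With moisture decreasing downwind, the sign works out positive, i.e. the wind is moistening the central grid point, consistent with the definition −V · ∇q.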
NGC 4605 is a dwarf barred spiral galaxy in the constellation Ursa Major, located at a distance of from the Milky Way. Physically it is similar in size and in B-band absolute magnitude to the Large Magellanic Cloud. It is a member of the M81 Galaxy Group, along with Messier 81 and Messier 101. | https://en.wikipedia.org/wiki?curid=10414274 |
NGC 4627 is a dwarf elliptical galaxy in the constellation Canes Venatici. | https://en.wikipedia.org/wiki?curid=10414409 |
NGC 4710 is an edge-on spiral galaxy in the constellation Coma Berenices. Its prominent x-shaped structure reveals the existence of an underlying bar. | https://en.wikipedia.org/wiki?curid=10421151 |
NGC 4976 is a peculiar elliptical galaxy in the constellation Centaurus. It was detected with a 5" telescope working at 20x magnification by comet hunter Jack Bennett. | https://en.wikipedia.org/wiki?curid=10421313 |
NGC 4984 is an intermediate lenticular galaxy exhibiting a double ring structure in the constellation Virgo. In December 2011, supernova 2011iy was discovered in it. | https://en.wikipedia.org/wiki?curid=10421423 |
Journal of Quaternary Science The Journal of Quaternary Science is a peer-reviewed academic journal published on behalf of the Quaternary Research Association. It covers research on any aspect of Quaternary science. The journal publishes predominantly research articles, with two thematic issues published annually, although discussions and letters are occasionally published along with invited reviews. According to the "Journal Citation Reports", the journal has a 2012 impact factor of 2.939. | https://en.wikipedia.org/wiki?curid=10421576 |
Pair potential In physics, a pair potential is a function that describes the potential energy of two interacting objects as a function of their separation alone. Examples of pair potentials include Coulomb's law in electrodynamics, Newton's law of universal gravitation in mechanics, the Lennard-Jones potential and the Morse potential. Pair potentials are very common in physics; exceptions are very rare. An example of a potential energy function that is "not" a pair potential is the three-body Axilrod-Teller potential. Another example is the Stillinger-Weber potential for silicon, which includes the angle in a triangle of silicon atoms as a parameter. | https://en.wikipedia.org/wiki?curid=10422831 |
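As a hedged sketch of one of the pair potentials named above, the Lennard-Jones form V(r) = 4ε[(σ/r)^12 − (σ/r)^6] can be evaluated directly; the reduced units (ε = σ = 1) are illustrative only:

```python
# Hedged example of a pair potential: the Lennard-Jones form
# V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6), in reduced units
# (epsilon = sigma = 1); the values are illustrative only.

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Pair potential energy at separation r (r > 0)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The potential minimum sits at r = 2**(1/6) * sigma, with depth -epsilon.
r_min = 2 ** (1 / 6)
print(lennard_jones(r_min))  # approximately -1.0, the well depth
```

Because it depends only on the separation r, the total energy of a system of particles is just the sum of V(r_ij) over all distinct pairs i < j, which is exactly the property that makes it a pair potential.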
NGC 5010 is a lenticular galaxy located about 140 million light years away, in the constellation Virgo. It is considered a "LIRG" (Luminous Infrared Galaxy). As the galaxy has few young blue stars and mostly red old stars and dust, it is transitioning from being a spiral galaxy to being an elliptical galaxy, with its spiral arms having burned out and become dusty arms. | https://en.wikipedia.org/wiki?curid=10431374 |
NGC 5087 is an elliptical galaxy located in the constellation Virgo. | https://en.wikipedia.org/wiki?curid=10431475 |