| text | source |
|---|---|
Jalkr is a bright crater on Jupiter's moon Callisto measuring 74 km across (in the lower part of the image). It is an example of a central-dome impact crater. A smaller, degraded crater in the upper part of the image is called Audr. | https://en.wikipedia.org/wiki?curid=10848731 |
Carl Eduard Adolph Gerstaecker (30 August 1828 – 20 June 1895) was a German zoologist and entomologist. He was born in Berlin, where he studied medicine and natural sciences, receiving his PhD in 1855 as a student of Johann Christoph Friedrich Klug. In 1856 he obtained his habilitation for zoology, and soon afterwards became a curator at the Zoological Museum of Humboldt University. In 1864 he began work as a lecturer at the Landwirtschaftlichen Lehranstalt (Agricultural Educational Facility) in Berlin. In 1874 he became an associate professor of zoology at the University of Berlin, and in 1876 a professor of zoology at the University of Greifswald. He died in Greifswald. | https://en.wikipedia.org/wiki?curid=10849479 |
Valley exit jet A valley exit jet is a strong, down-valley, elevated air current that emerges above the intersection of a valley and its adjacent plain. These winds frequently reach their maximum speed at some height above the ground. Surface winds below the jet may sway vegetation but are significantly weaker. The presence of these strong nighttime down-valley air flows has been documented at the mouths of many Alpine valleys that merge with basins, such as the Inn Valley of Austria, where the jet is strong enough to be heard at the ground. In the United States, exit jet signatures have been observed at the North Fork Gunnison River at Paonia, Colorado; the exit of South Boulder Creek south of Boulder, Colorado; Albuquerque, New Mexico, at the mouth of Tijeras Canyon; and the mouth of Spanish Fork Canyon in Utah. Exit jets are likely to be found in valley regions that exhibit diurnal mountain wind systems, such as those of the dry mountain ranges of the US. These diurnal wind systems are driven by horizontal pressure gradients. Because of the abrupt transition over a short distance between the valley high pressure and the basin low pressure, the gradients are strongest near the valley exit, producing a jet. Other meteorological factors acting to increase exit wind speeds are the acceleration of winds originating inside the valley as they travel to lower elevations down-valley, and the process of cold valley air sinking and ejecting into the plain. | https://en.wikipedia.org/wiki?curid=10857373 |
Valley exit jet Deep valleys that terminate abruptly at a plain are more affected by these factors than those that gradually become shallower with down-valley distance. Valley exit jets can play a major role in the mitigation of air pollution. Methods of examining exit jets include remote sensing and direct observation. SODAR and Doppler LIDAR have been used in numerous studies to identify and quantify the jets and to relate them to the atmospheric transport of hazardous materials. Detailed profiles of winds at canyon exits can be directly observed and calculated using a single or double theodolite and tethersondes. The identification and measurement of valley exit jets can also significantly aid fire control, since fire often rides valley jets, as well as the development of wind energy. | https://en.wikipedia.org/wiki?curid=10857373 |
NOAA-17 was a weather forecasting satellite operated by NOAA. It was launched on June 24, 2002, into a sun-synchronous orbit 824 km above the Earth, orbiting every 101 minutes. It hosted the AMSU, AVHRR and High Resolution Infrared Radiation Sounder (HIRS) instruments. Its Automatic Picture Transmission frequency was 137.5 MHz. NOAA-17 was decommissioned on April 10, 2013. | https://en.wikipedia.org/wiki?curid=10859332 |
NOAA-16 NOAA-16, designated NOAA-L before launch, is one of the NASA-provided TIROS series of weather forecasting satellites operated by NOAA. It was launched on September 21, 2000, into a sun-synchronous orbit 849 km above the Earth, orbiting every 102 minutes. It hosts the AMSU, AVHRR and High Resolution Infrared Radiation Sounder (HIRS) instruments, carrying the same suite as NOAA-15 plus an SBUV/2 instrument. NOAA-16's APT has been inoperable due to sensor degradation since November 15, 2000, and High Resolution Picture Transmission has been via STX-1 (1698 MHz) since November 9, 2010. The satellite was decommissioned on June 9, 2014, after a critical anomaly. On November 25, 2015, at 08:16, the JSpOC identified a possible breakup of NOAA 16 (#26536). All associated objects have been added to conjunction assessment screenings, and satellite operators will be notified of close approaches between the debris and active satellites. The JSpOC catalogs the debris objects when sufficient data is available. As of March 26, 2016, 275 pieces of debris were being tracked. | https://en.wikipedia.org/wiki?curid=10859405 |
NOAA-18 NOAA-18, known before launch as NOAA-N, is a weather forecasting satellite run by NOAA. NOAA-N (18) was launched on May 20, 2005, into a sun-synchronous orbit at an altitude of 854 km above the Earth, with an orbital period of 102 minutes. It hosts the AMSU-A, MHS, AVHRR and High Resolution Infrared Radiation Sounder (HIRS) instruments, the Space Environment Monitor (SEM/2), and the SBUV/2 ozone-monitoring instrument. It is the first NOAA POES satellite to use MHS in place of AMSU-B. It also hosts Cospas-Sarsat payloads. The APT transmission frequency is 137.9125 MHz (NOAA-18 swapped frequencies with NOAA-19 on June 23, 2009). | https://en.wikipedia.org/wiki?curid=10859435 |
NOAA-15 (designated NOAA-K before launch) is one of the NASA-provided TIROS series of weather forecasting satellites operated by NOAA. It was launched on May 13, 1998, and is currently semi-operational in a sun-synchronous orbit 807 km above the Earth, orbiting every 101 minutes. It hosts the AMSU-A and AMSU-B instruments, the AVHRR and High Resolution Infrared Radiation Sounder (HIRS/3) instruments, and a Space Environment Monitor (SEM/2). It also hosts Cospas-Sarsat payloads. The APT transmission frequency is 137.62 MHz. Due to problems with the S-band transmitter high-gain antennas, NOAA-15 has been configured for High Resolution Picture Transmission using the S-Band Transmitter #2 (1702.5 MHz) omnidirectional antenna. On July 22, 2019, the satellite began transmitting corrupt data; the cause appears to be instability of the scanning motor for the AVHRR sensor. According to an official release from NOAA, on July 23 at 0400 Zulu time the current draw of that motor spiked, as did the motor temperature, and the sensor stopped producing data. NOAA said this is consistent with a motor stall and could be permanent. On July 25, 2019, the AVHRR motor spontaneously recovered, with the cause of the failure still under investigation. On July 30, 2019, the AVHRR motor suffered another failure consistent with a motor stall; as per NOAA's previous statement, recovery is unlikely. As of ~0000 UTC July 30, 2019 (DOY 211), the AVHRR motor current had once again started spiking, becoming saturated above 302 mA at ~0600 UTC. | https://en.wikipedia.org/wiki?curid=10859460 |
NOAA-15 The instrument is once again no longer producing data and may be stalled. The current plan is to leave the instrument powered as this issue may be intermittent. | https://en.wikipedia.org/wiki?curid=10859460 |
Emil vom Brück Emil vom Brück (1807 in Krefeld – 1884 in Krefeld) was a German dealer and entomologist mainly interested in Coleoptera. Brück carried on an extensive correspondence with the coleopterists of his time, especially Ernst Gustav Kraatz, Lucas von Heyden, Ernst August Hellmuth von Kiesenwetter and Alexander Henry Haliday. He made collecting trips to Italy and Spain and financially supported the expeditions of Gustav Zebe into the Balkans, Greece and Crete. The expeditions to Spain were in the spring of 1868 and 1870, when he met Lucas von Heyden in Granada. He collected in Gibraltar, Puerto Santa Maria, Seville, Alicante, Málaga, Cartagena and Valencia. He became a member of the Netherlands Entomological Society in 1853 and in 1858 joined the Entomological Society of Berlin. | https://en.wikipedia.org/wiki?curid=10867950 |
Achille Deyrolle (2 October 1813 in Lille – 31 December 1865) was a French entomologist mainly interested in Coleoptera. Born in Lille, Deyrolle eventually settled in Brussels, where he worked with his father in the City Museum. He went on a five-month scientific mission to Brazil. During his lifetime Deyrolle amassed a large collection of Coleoptera but published very little; a copy of his manuscript "Liste des Elaterides de Deyrolle Avril 1864" is held in the Natural History Museum. He owned a taxidermy and natural history shop in Paris, originally owned by his naturalist father, Jean-Baptiste Deyrolle, who opened for business in 1831 at 23, Rue de la Monnaie. The business, which published natural history books as Deyrolles et fils, was later owned by Achille's son Émile Deyrolle. Chevrolat, L. A., 1840. Description de quelques Coléopteres de la Galice et du Portugal provenant d'envois de M. Deyrolles fils. "Rev. Zool.", 3: 8-18. | https://en.wikipedia.org/wiki?curid=10868279 |
Supercompensation In sports science theory, supercompensation is the post-training period during which the trained function/parameter has a higher performance capacity than it did prior to the training period. The fitness level of a human body in training can be broken down into four periods: initial fitness, training, recovery, and supercompensation. During the initial fitness period, the target of the training has a base level of fitness (shown by the first time sector in the graph). Upon entering the training period, the target's level of fitness decreases (shown by the second time sector in the graph). After training, the body enters the recovery period, during which the level of fitness increases up to the initial fitness level (shown by the third time sector in the graph). Because the human body is an adaptable organism, it will feel the need to adjust itself to a higher level of fitness in anticipation of the next training session. Accordingly, the increase in fitness following a training session does not stop at the initial fitness level. Instead, the body enters a period of supercompensation during which fitness surpasses the initial fitness level (shown by the fourth time sector in the graph). If there are no further workouts, this fitness level will slowly decline back towards the initial fitness level (shown by the last time sector in the graph). First put forth by the Russian scientist Nikolai N. Yakovlev (1911–1992) in 1949–1959, this theory is a basic principle of athletic training. | https://en.wikipedia.org/wiki?curid=10869371 |
Supercompensation If the next workout takes place during the recovery period, overtraining may occur. If the next workout takes place during the supercompensation period, the body will advance to a higher level of fitness. If the next workout takes place after the supercompensation period, the body will remain at the base level. More complex variations are possible; for instance, sometimes a few workouts are intentionally made during the recovery period to achieve greater supercompensation effects. At first glance, creating effective training programs might look simple: determine the intensity level and how long it takes to reach the supercompensation period, then continue training at that intensity while keeping the intervals between workouts required for supercompensation. However, things become more complex because training affects many different bodily functions and parameters. Each bodily function or parameter has a different recovery time, a different amount of time needed to reach peak supercompensation, and a different amount of time between the supercompensation peak and the return to base fitness. The aforementioned functions and parameters are basic ones; muscle strength and mass are complex parameters. For instance, muscle mass is a function of many different simple parameters; the amount of glycogen in the muscles, for example, is a basic parameter that influences muscle mass. | https://en.wikipedia.org/wiki?curid=10869371 |
Supercompensation In classical sport science, the yearly (sometimes multi-yearly) period is divided into micro- and macrocycles, where each microcycle is responsible for the development of a specific basic training function and parameter (sometimes several), whereas macrocycles are responsible for the development of complex parameters/functions (such as muscle strength). During each microcycle, the resting period is the same as the amount of time needed to reach the supercompensation stage of the current training parameter/function (and during such a microcycle there should not be any negative influence on the recovery of the main function). Such a training method will work only when the developed functions/parameters are unrelated. Unfortunately, for muscle strength and mass this is not the case (the functions/parameters are related), so different approaches are needed for them. During a training cycle the intensity and volume of training vary, and the waves of different functions are overlaid so that by the end of the microcycle supercompensation of the main required functions is achieved. | https://en.wikipedia.org/wiki?curid=10869371 |
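The timing rule in the Supercompensation entry above (training again during recovery risks overtraining; during the supercompensation window it raises fitness; after it, fitness is back at baseline) can be sketched as a small function. This is an illustration only: the window boundaries are hypothetical numbers, not values from the article.

```python
# Sketch of the supercompensation timing rule described above.
# The window boundaries (48 h and 96 h) are hypothetical illustration values.

def workout_effect(hours_since_training: float,
                   recovery_end: float = 48.0,
                   supercomp_end: float = 96.0) -> str:
    """Classify the expected outcome of the next workout by its timing."""
    if hours_since_training < recovery_end:
        # Training again during the recovery period risks overtraining.
        return "risk of overtraining"
    if hours_since_training <= supercomp_end:
        # Training inside the supercompensation window raises fitness.
        return "fitness gain"
    # After the window closes, fitness has drifted back to the base level.
    return "return to base fitness"

print(workout_effect(24))    # risk of overtraining
print(workout_effect(72))    # fitness gain
print(workout_effect(120))   # return to base fitness
```

Because each bodily parameter has its own recovery and supercompensation times, a real program would need one such window per trained function rather than a single pair of constants.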
Jean Baptiste Lucien Buquet (4 March 1807, Deinze – 14 December 1889, Paris) was a French entomologist and insect dealer mainly interested in Coleoptera. He described many new genera and species. Buquet's business dealt in exotic Coleoptera, especially Dynastidae, Buprestidae, Lucanidae, Scarabaeidae and Cerambycidae. He also sold Lepidoptera, especially "Morpho" and "Agrias". The insects came mainly from the French colonial empire. He was a member of the Société entomologique de France. | https://en.wikipedia.org/wiki?curid=10869558 |
Rope caulk or caulking cord is a type of pliable putty or caulking formed into a rope-like shape. It is typically off-white in color, relatively odorless, and stays pliable for an extended period of time. Rope caulk can be used as caulking or weatherstripping around conventional windows installed in conventional wooden or metal frames (see glazing). It is also used as a form for epoxy work, since epoxy does not adhere to the material. Rope caulk has also been applied to the metallic structure supporting the magnet of a dynamic speaker to cut unwanted resonance of the metal structure, leading to improved speaker performance, and has been used as a sonic damping material in sensitive phonograph components. Mortite brand rope caulk was introduced by the J.W. Mortell Co. of Kankakee, Illinois in the 1940s and called "pliable plastic tape"; the trademark application was filed in March 1943. It was later marketed as "caulking cord". The company was later acquired by Thermwell Products. Mortite putty is a brand of rope caulk marketed under the Frost King brand. Its primary ingredient is titanium dioxide; it has a specific gravity of 1.34. | https://en.wikipedia.org/wiki?curid=10874478 |
Particle physics in cosmology Particle physics is the study of the interactions of elementary particles at high energies, whilst physical cosmology studies the universe as a single physical entity. The interface between these two fields is sometimes referred to as particle cosmology. Particle physics must be taken into account in cosmological models of the early universe, when the average energy density was very high. The processes of particle pair production, scattering and decay influence the cosmology. As a rough approximation, a particle scattering or decay process is important at a particular cosmological epoch if its time scale is shorter than or similar to the time scale of the universe's expansion. The latter quantity is 1/H, where H is the time-dependent Hubble parameter; this is roughly equal to the age of the universe at that time. For example, the pion has a mean lifetime of about 26 nanoseconds, which means that particle physics processes involving pion decay can be neglected until roughly that much time has passed since the Big Bang. Cosmological observations of phenomena such as the cosmic microwave background and the cosmic abundance of elements, together with the predictions of the Standard Model of particle physics, place constraints on the physical conditions in the early universe. The success of the Standard Model at explaining these observations supports its validity under conditions beyond those which can be produced in a laboratory. | https://en.wikipedia.org/wiki?curid=10875676 |
Particle physics in cosmology Conversely, phenomena discovered through cosmological observations, such as dark matter and baryon asymmetry, suggest the presence of physics that goes beyond the Standard Model. | https://en.wikipedia.org/wiki?curid=10875676 |
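The rule of thumb in the Particle physics in cosmology entry above (a scattering or decay process matters at an epoch when its timescale is no longer than the expansion timescale 1/H) can be illustrated with a short sketch. The epochs chosen below are hypothetical examples, not values from the article.

```python
# Sketch of the "process timescale vs. expansion timescale" criterion above.
# A scattering/decay process is important at an epoch if its timescale is
# shorter than or comparable to 1/H, roughly the age of the universe then.

def expansion_timescale(hubble_rate_per_s: float) -> float:
    """Return 1/H in seconds for a given Hubble rate H (in 1/s)."""
    return 1.0 / hubble_rate_per_s

def process_is_important(process_timescale_s: float,
                         hubble_rate_per_s: float) -> bool:
    """True when the process timescale is no longer than 1/H."""
    return process_timescale_s <= expansion_timescale(hubble_rate_per_s)

PION_LIFETIME_S = 26e-9  # pion mean lifetime, ~26 ns (from the text)

# Hypothetical epoch with 1/H = 1 ns: the universe is younger than the
# pion lifetime, so pion decay can still be neglected.
print(process_is_important(PION_LIFETIME_S, 1.0 / 1e-9))   # False

# Hypothetical epoch with 1/H = 1 microsecond: decay now matters.
print(process_is_important(PION_LIFETIME_S, 1.0 / 1e-6))   # True
```

The crossover sits at 1/H ≈ 26 ns, matching the text's statement that pion decay becomes relevant roughly that long after the Big Bang.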
Fusobacterium polymorphum is a bacterium that has been isolated from the gingival crevice in humans, and has been implicated in the immunopathology of periodontal disease. It has also been isolated in guinea pigs in research studies. | https://en.wikipedia.org/wiki?curid=10882960 |
PEGylation (often styled pegylation) is the process of both covalent and non-covalent attachment or amalgamation of polyethylene glycol (PEG, in pharmacy called macrogol) polymer chains to molecules and macrostructures, such as a drug, therapeutic protein or vesicle, which is then described as PEGylated (pegylated). PEGylation is routinely achieved by incubating a reactive derivative of PEG with the target molecule. The covalent attachment of PEG to a drug or therapeutic protein can "mask" the agent from the host's immune system (reducing immunogenicity and antigenicity) and increase its hydrodynamic size (size in solution), which prolongs its circulatory time by reducing renal clearance. PEGylation can also provide water solubility to hydrophobic drugs and proteins. Having proven its pharmacological advantages and acceptability, PEGylation technology is the foundation of a growing multibillion-dollar industry. PEGylation attaches strands of the polymer PEG to molecules, most typically peptides, proteins, and antibody fragments, and can improve the safety and efficiency of many therapeutics. It produces alterations in physicochemical properties, including changes in conformation, electrostatic binding and hydrophobicity. These physical and chemical changes increase the systemic retention of the therapeutic agent. PEGylation can also influence the binding affinity of the therapeutic moiety to cell receptors and can alter absorption and distribution patterns. | https://en.wikipedia.org/wiki?curid=10931846 |
PEGylation PEGylation, by increasing the molecular weight of a molecule, can impart several significant pharmacological advantages over the unmodified form, such as improved drug solubility, reduced dosage frequency without diminished efficacy and with potentially reduced toxicity, extended circulating life, increased drug stability, and enhanced protection from proteolytic degradation; PEGylated forms may also be eligible for patent protection. The attachment of an inert and hydrophilic polymer was first reported around 1970 as a way to extend blood life and control the immunogenicity of proteins; polyethylene glycol was chosen as the polymer. In 1981 Davis and Abuchowski founded Enzon, Inc., which brought three PEGylated drugs to market. Abuchowski later founded and is CEO of Prolong Pharmaceuticals. The clinical value of PEGylation is now well established. ADAGEN (pegademase bovine), manufactured by Enzon Pharmaceuticals, Inc., US, was the first PEGylated protein to enter the market, approved by the U.S. Food and Drug Administration (FDA) in March 1990. It is used to treat a form of severe combined immunodeficiency disease (ADA-SCID), as an alternative to bone marrow transplantation and enzyme replacement by gene therapy. Since the introduction of ADAGEN, a large number of PEGylated protein and peptide pharmaceuticals have followed, and many others are in clinical trials or under development. Sales of the two most successful products, Pegasys and Neulasta, exceeded $5 billion in 2011. | https://en.wikipedia.org/wiki?curid=10931846 |
PEGylation All commercially available PEGylated pharmaceuticals contain methoxypoly(ethylene glycol), or mPEG. PEGylated pharmaceuticals on the market (in reverse chronology by FDA approval year) have included: PEGylation has practical uses in biotechnology for protein delivery, cell transfection, and gene editing in non-human cells. The first step of PEGylation is the suitable functionalization of the PEG polymer at one or both ends. PEGs that are activated at each end with the same reactive moiety are known as "homobifunctional", whereas if the functional groups present are different, the PEG derivative is referred to as "heterobifunctional" or "heterofunctional". The chemically active or activated derivatives of the PEG polymer are prepared to attach the PEG to the desired molecule. The overall processes used to date for protein conjugation can be broadly classified into two types: a solution-phase batch process and an on-column fed-batch process. The simple and commonly adopted batch process involves mixing the reagents together in a suitable buffer solution, preferably at a temperature between 4 and 6 °C, followed by separation and purification of the desired product using a suitable technique based on its physicochemical properties, including size exclusion chromatography (SEC), ion exchange chromatography (IEX), hydrophobic interaction chromatography (HIC), and membranes or aqueous two-phase systems. | https://en.wikipedia.org/wiki?curid=10931846 |
PEGylation The choice of the suitable functional group for the PEG derivative is based on the type of reactive group available on the molecule that will be coupled to the PEG. For proteins, typical reactive amino acids include lysine, cysteine, histidine, arginine, aspartic acid, glutamic acid, serine, threonine and tyrosine. The N-terminal amino group and the C-terminal carboxylic acid can also be used as site-specific conjugation sites with aldehyde-functional polymers. The techniques used to form first-generation PEG derivatives generally react the PEG polymer with a group that is reactive with hydroxyl groups, typically anhydrides, acid chlorides, chloroformates and carbonates. In second-generation chemistry, more efficient functional groups such as aldehydes, esters and amides were made available for conjugation. As applications of PEGylation have become more advanced and sophisticated, the need for heterobifunctional PEGs for conjugation has increased. These heterobifunctional PEGs are very useful in linking two entities where a hydrophilic, flexible and biocompatible spacer is needed. Preferred end groups for heterobifunctional PEGs are maleimide, vinyl sulfones, pyridyl disulfide, amines, carboxylic acids and NHS esters. Third-generation PEGylation agents, in which the polymer is branched, Y-shaped or comb-shaped, are available and show reduced viscosity and a lack of organ accumulation. | https://en.wikipedia.org/wiki?curid=10931846 |
PEGylation Unpredictability in clearance times for PEGylated compounds may lead to the accumulation of large-molecular-weight compounds in the liver, resulting in inclusion bodies with no known toxicologic consequences. Furthermore, alteration of the chain length may lead to unexpected clearance times "in vivo". Moreover, the experimental reaction conditions (i.e. pH, temperature, reaction time, overall cost of the process, and molar ratio between the PEG derivative and the peptide) also have an impact on the stability of the final PEGylated products. To overcome the above-mentioned limitations, several researchers have proposed different strategies, such as changing the size (Mw), number, location and type of linkage of the PEG molecule. | https://en.wikipedia.org/wiki?curid=10931846 |
Gilavar is the name of the warm southern wind which blows across eastern Azerbaijan throughout the year, particularly in Baku and Shamakhi. It is one of the two main winds that dominate Baku, along with the Khazri, the cold northern wind. | https://en.wikipedia.org/wiki?curid=10939879 |
Émile Dottrens (21 July 1900 – 29 September 1990) was a Swiss zoologist and nature conservationist. He became a scientific assistant for zoology at the Natural History Museum of Geneva in 1942 and was the director of that museum from 1953 to 1969. He wrote several articles about the Swiss freshwater fish species of the genus "Coregonus". He worked for the IUCN, for the Swiss nature conservation organisation Pro Natura, and at the Council of Europe. He was the president of the International Commission for the Protection of the Alps (CIPRA) from 1960 to 1968. In addition he was a member of the Swiss Academy of Sciences and the Société de Physique et d'Histoire Naturelle de Genève. | https://en.wikipedia.org/wiki?curid=10955428 |
International Geoscience Programme The International Geoscience and Geoparks Programme (IGCP) is a cooperative enterprise of UNESCO (the United Nations Educational, Scientific and Cultural Organization) and the International Union of Geological Sciences (IUGS). It was launched in 1972 and originally termed the "International Geological Correlation Programme", the source of the acronym IGCP which it retains. For decades the programme was known as the "International Geoscience Programme". In November 2015 the name was changed to "International Geoscience and Geoparks Programme" as the global geoparks were made part of the programme. The aim of the IGCP is to facilitate research cooperation among geoscientists across frontiers and national boundaries, through joint research work, meetings and workshops. At the present time IGCP has about 400 active projects involving thousands of scientists from about 150 countries. | https://en.wikipedia.org/wiki?curid=10958224 |
Photobiotin is a derivative of biotin used as a biochemical tool. It is composed of a biotin group, a linker group, and a photoactivatable aryl azide group. The photoactivatable group provides nonspecific labeling of proteins, DNA and RNA probes, or other molecules. Biotinylation of DNA and RNA with photoactivatable biotin is easier and less expensive than enzymatic methods, since the DNA and RNA do not degrade. Photobiotin is most effectively activated by light at 260–475 nm. | https://en.wikipedia.org/wiki?curid=10977940 |
Rotational modulation collimator Rotational modulation collimators (or RMCs) are a specialization of the modulation collimator, an imaging device invented by Minoru Oda. Devices of this type create images of high energy X-rays (or other radiations that cast shadows). Since high energy X-rays are not easily focused, such optics have found applications in various instruments. RMCs selectively block and unblock X-rays in a way which depends on their incoming direction, converting image information into time variations. Various mathematical transformations can then reconstitute the image of the source. The Small Astronomy Satellite 3, launched in 1975, was one orbiting experiment that used RMCs. A more recent satellite that used RMCs was RHESSI. | https://en.wikipedia.org/wiki?curid=11005376 |
Multiplicity (chemistry) In spectroscopy and quantum chemistry, the multiplicity of an energy level is defined as "2S+1", where "S" is the total spin angular momentum. States with multiplicity 1, 2, 3, 4, 5 are respectively called singlets, doublets, triplets, quartets and quintets. The multiplicity is also equal to the number of unpaired electrons plus one. The multiplicity is often equal to the number of possible orientations of the total spin relative to the total orbital angular momentum "L", and therefore to the number of near-degenerate levels that differ only in their spin–orbit interaction energy. For example, the ground state of the carbon atom is a ³P state. The superscript three (read as "triplet") indicates that the multiplicity "2S+1" = 3, so that the total spin "S" = 1. This spin is due to two unpaired electrons, a result of Hund's rule, which favors the single filling of degenerate orbitals. The triplet consists of three states with spin components +1, 0 and −1 along the direction of the total orbital angular momentum, which is also 1 as indicated by the letter P. The total angular momentum quantum number J can vary from L+S = 2 to L−S = 0 in integer steps, so that J = 2, 1 or 0. However, the multiplicity equals the number of spin orientations only if S ≤ L. When S > L there are only 2L+1 orientations of total angular momentum possible, ranging from S+L to S−L. The ground state of the nitrogen atom is a ⁴S state, for which "2S+1" = 4 (a "quartet" state), with "S" = 3/2 due to three unpaired electrons. | https://en.wikipedia.org/wiki?curid=11020249 |
Multiplicity (chemistry) For an S state, L = 0, so that J can only be 3/2 and there is only one level even though the multiplicity is 4. Most stable organic molecules have complete electron shells with no unpaired electrons and therefore have singlet ground states. This is also true for inorganic molecules containing only main-group elements. Important exceptions are dioxygen (O₂) as well as methylene (CH₂) and other carbenes. However, higher-spin ground states are very common in coordination complexes of transition metals. A simple explanation of the spin states of such complexes is provided by crystal field theory. The highest occupied orbital energy level of dioxygen is a pair of degenerate π* orbitals. In the ground state of dioxygen, this energy level is occupied by two electrons of the same spin, as shown in the molecular orbital diagram. The molecule therefore has two unpaired electrons and is in a triplet state. In contrast, the first excited state of dioxygen has two electrons of opposite spin in the π* level, so that there are no unpaired electrons. In consequence it is a singlet state and is known as singlet oxygen. In organic chemistry, carbenes are molecules whose carbon atoms have only six valence electrons and therefore disobey the octet rule. Carbenes generally split into singlet carbenes and triplet carbenes, named for their spin multiplicities. | https://en.wikipedia.org/wiki?curid=11020249 |
Multiplicity (chemistry) Both have two non-bonding electrons; in singlet carbenes these exist as a lone pair and have opposite spins so that there is no net spin, while in triplet carbenes these electrons have parallel spins. | https://en.wikipedia.org/wiki?curid=11020249 |
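The arithmetic in the Multiplicity entry above (multiplicity 2S+1, equal to the unpaired-electron count plus one, and the allowed J values running from |L−S| to L+S) can be checked with a small sketch; the helper names are invented for illustration.

```python
# Sketch of the multiplicity arithmetic described above.
from fractions import Fraction

def multiplicity(total_spin) -> int:
    """Multiplicity 2S+1 for a total spin S (integer or half-integer)."""
    return int(2 * Fraction(total_spin) + 1)

def from_unpaired(n_unpaired: int) -> int:
    """Multiplicity as the number of unpaired electrons plus one."""
    return n_unpaired + 1

def allowed_J(L, S):
    """Allowed total angular momenta J, from |L - S| up to L + S."""
    L, S = Fraction(L), Fraction(S)
    J, values = abs(L - S), []
    while J <= L + S:
        values.append(J)
        J += 1
    return values

# Carbon ground state (the triplet ³P state): S = 1, L = 1.
print(multiplicity(1))                                   # 3
print([str(j) for j in allowed_J(1, 1)])                 # ['0', '1', '2']

# Nitrogen ground state (the quartet ⁴S state): S = 3/2, L = 0,
# so only J = 3/2 occurs even though the multiplicity is 4.
print(multiplicity(Fraction(3, 2)))                      # 4
print([str(j) for j in allowed_J(0, Fraction(3, 2))])    # ['3/2']
```

The nitrogen case reproduces the point made above: when S > L, the number of levels (here one) falls short of the multiplicity (four).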
Slitless spectroscopy is astronomical spectroscopy done without a small slit to allow only light from a small region to be diffracted. It works best in sparsely populated fields, as it spreads each point source out into its spectrum, and crowded fields will be too confused to be useful. It also faces the problem that for extended sources, nearby emission lines will overlap. The Crossley telescope utilized a slitless spectrograph that was originally employed by Nicholas Mayall. | https://en.wikipedia.org/wiki?curid=11024738 |
Evolved gas analysis (EGA) is a method used to study the gas evolved from a heated sample that undergoes decomposition or desorption. It is possible either simply to detect evolved gases, using evolved gas detection (EGD), or to analyse explicitly which gases are evolved, using evolved gas analysis (EGA). For the latter, different analytical methods can be employed, such as mass spectrometry, Fourier-transform spectroscopy, gas chromatography, or optical in-situ evolved gas analysis. By coupling the thermal analysis instrument, e.g. TGA (thermogravimetry) or DSC (differential scanning calorimetry), with a fast quadrupole mass spectrometer (QMS), the detection of gas separation and the identification of the separated components are possible in exact time correlation with the other thermal analysis signals. DSC/TGA-QMS or TGA-QMS yields information on the composition (mass numbers of elements and molecules) of the evolved gases. It allows fast and easy interpretation of atomic/inorganic vapors and standard gases like H₂, H₂O, CO₂, etc.; because of fragmentation, the interpretation of organic molecules is sometimes difficult. The combination with an FTIR (Fourier-transform infrared) spectrometer has become popular, especially in the polymer-producing, chemical and pharmaceutical industries. DSC/TGA-FTIR or TGA-FTIR yields information on the composition (absorption bands) of the evolved gases (bonding conditions). The advantage is easy interpretation (via spectral databases) of organic vapors without fragmentation; symmetrical molecules cannot be detected. | https://en.wikipedia.org/wiki?curid=11026728 |
Evolved gas analysis An EGA instrument named the Thermal and Evolved-Gas Analyzer was flown on the Phoenix Lander probe that reached Mars in May 2008. Its purpose was to study Martian soil samples. | https://en.wikipedia.org/wiki?curid=11026728 |
Fluvioglacial landform Fluvioglacial landforms are landforms molded by glacial meltwater. The discharge of glacial streams, both over the surface (supraglacial) and beneath the ice sheet (subglacial), is higher in the warmer summer months. As subglacial water often flows under pressure, it has a high velocity and is very turbulent. This high velocity enables it to transport a large amount of material that would not normally be transported by a regular stream of similar size; boulders thus transported may remain as glacial erratics in the post-glacial terrain. As this material is transported, it makes contact with the underside of the glacier and erodes it vertically from below by abrasion. This erosion creates subglacial valleys. When there is a decrease in the discharge of the glacial streams, deposition occurs, and this is responsible for large groupings of landforms such as eskers and drumlins in glaciated areas. Pro-glacial lakes leave their own passive impress on the glaciated landscape, recognizable by the flatness of the former lakebed and raised terraces that mark former shorelines. More active massive scouring occurs when the ice dams of pro-glacial lakes fail catastrophically in a glacial lake outburst flood; the results can be seen, for example, in the Channeled Scablands in the U.S. state of Washington, created by the cataclysmic Missoula Floods that swept periodically across eastern Washington and down the Columbia River Plateau during the Pleistocene epoch | https://en.wikipedia.org/wiki?curid=11034502 |
Fluvioglacial landform The Shonkin Sag along the northern edge of the Highwood Mountains, Montana, is a channel formed by the Missouri River and glacial meltwater pouring from Glacial Lake Great Falls; it is one of the most famous prehistoric meltwater channels in the world. | https://en.wikipedia.org/wiki?curid=11034502 |
ProCell (Programmable cell chip: culturing and manipulation of living cells with real-time reaction monitoring) is a research project at DTU Nanotech and DTU Informatics funded by the Danish Agency for Science, Technology and Innovation under the Programme Commission on Strategic Growth Technologies, with co-funding from DTU and industrial as well as academic partners. In the ProCell project, a new device is being built for the culturing and monitoring of living cells in real time. ProCell will contribute to the transformation of the life sciences by allowing (i) automatic manipulation of cells based on their observed behavior using embedded computer controllers, (ii) completion of complex cell experiments in a shorter time (weeks rather than years), and (iii) simulation of "in vivo" conditions by "in vitro" experiments, thus reducing the need for animal experiments. In particular, ProCell will be applied to toxicity studies in the pharmaceutical industry and to cancer and stem cell research. | https://en.wikipedia.org/wiki?curid=11040192 |
Hopi Buttes volcanic field is a monogenetic volcanic field located on the Colorado Plateau, mostly on the Navajo Reservation around the town of Dilkon in northeastern Arizona, north of Holbrook. The volcanic field covers an area of approximately and contains about 300 maars and diatremes. The erosional exposure of the deposits varies, with those in the eastern portion exhibiting shallowly eroded maar deposits and those in the western portion the more deeply eroded feeder diatremes. The maars resulted from the explosive interaction of hot diatreme material with the groundwater system and consist of a mixture of volcanic tuff and lacustrine sediments of the Miocene–Pliocene Bidahochi Formation. In the western portion of the field, the buttes consist of the feeder diatremes of monchiquite and nepheline syenite magmas. Most of the volcanic activity occurred between 8.5 and 6 million years ago, with the most recent dated at 4.2 million years ago. | https://en.wikipedia.org/wiki?curid=11045360 |
Belgian marble is the name given to limestone extracted in Wallonia, southern Belgium. It is quarried around the cities of Namur, Dinant, Tournai, Basècles, Theux, and Mazy/Golzinne. The rock is actually not a true marble (a metamorphic rock) but a type of limestone (a calcareous sedimentary rock). Belgian marbles are available in solid dark greys or blacks, and in polychromes of red, grey, and/or pink; after polishing, slabs with several colors exhibit natural decorative patterns. Several named varieties of Belgian marble exist. Belgian marble has been quarried, cut, and finished as a building stone, stone cladding, and stone veneer since the Ancient Roman era, in Roman Gaul and Rome, such as in the Basilica of Junius Bassus. It has been used in important European religious and secular buildings since the Renaissance, including the Palazzo Pitti and the Palace of Versailles. | https://en.wikipedia.org/wiki?curid=11045543 |
Encyclopedia of Analytical Chemistry The "Encyclopedia of Analytical Chemistry" is an English-language multivolume encyclopedia published by John Wiley & Sons. It is a comprehensive analytical chemistry reference, covering all aspects from theory and instrumentation through applications and techniques. Containing over 600 articles and over 6500 illustrations, the 15-volume print edition was published in 2000. The encyclopedia has also been available online since the end of 2006. Online access to the complete encyclopedia requires a subscription or one-time purchase, but individual articles can be accessed by pay-per-view. Free sample articles are also available. | https://en.wikipedia.org/wiki?curid=11051981 |
Encyclopedia of Reagents for Organic Synthesis The "Encyclopedia of Reagents for Organic Synthesis" is published in print and online by John Wiley & Sons Ltd. The online version is also known as e-EROS. The encyclopedia describes the reagents used in organic chemistry. The eight-volume print version includes 3500 alphabetically arranged articles, and the online version is regularly updated to include new reagents and catalysts. | https://en.wikipedia.org/wiki?curid=11052312 |
Cap carbonate Cap carbonates are layers of distinctively textured carbonate rocks (either limestone or dolomite) that occur at the uppermost layer of sedimentary sequences reflecting major glaciations in the geological record. Cap carbonates are found on most continents. They are typically 3–30 meters thick, laminated structures. They are depleted in ¹³C compared to other carbonates. The progression of late Neoproterozoic glaciations portrayed by substantial δ¹³C deviations in cap carbonates suggests a runaway ice–albedo feedback. There are several different hypotheses for cap carbonate formation. In one, physical stratification results in a strong carbon isotopic gradient in the ocean. Massive carbonates precipitate when postglacial upwelling carries alkalinity and isotopically light carbon to the continents. In this model, cap carbonates are the by-product of continental flooding. The short-lived change in carbon isotopic composition is the foundation for this theory. In the snowball Earth episode, the surface ocean of Earth is covered by sea ice that separates the ocean and the atmospheric CO₂ reservoirs. Atmospheric CO₂ then builds up to ~100,000 ppm and triggers rapid deglaciation and melting of the sea ice, which reconnects the ocean and the atmosphere and supplies alkalinity to the ocean. The transport of carbon dioxide from the atmosphere to the ocean then leads to carbonate precipitation. This is caused by the mixing of upwelling, isotopically depleted, alkaline bottom water with calcium-rich surface water | https://en.wikipedia.org/wiki?curid=11052882 |
Cap carbonate A third theory for cap carbonate formation is that methane hydrate destabilization results in the formation of cap carbonate and strongly negative carbon anomalies. The unusual fabrics within the cap carbonates are similar to carbonate fabrics from cold methane seeps. Experiments have been performed to see whether such massive abiotic carbonate formation is possible in extreme environments. | https://en.wikipedia.org/wiki?curid=11052882 |
Rudolf Wegscheider (October 18, 1859 – January 8, 1935) was an Austrian chemist. Wegscheider studied chemistry and was the founder of the Austrian School of Chemistry. He taught at the University of Vienna, and from 1902 to 1931 he was departmental Chair. He was the chairman of the association of Austrian chemists from 1904 to 1929. R. Wegscheider introduced the principle of detailed balance for chemical kinetics. | https://en.wikipedia.org/wiki?curid=11061801 |
Multispectral Scanner The Multispectral Scanner (MSS) is one of the Earth observation sensors introduced in the Landsat program. An MSS was placed aboard each of the first five Landsat satellites. | https://en.wikipedia.org/wiki?curid=11065056 |
Cancer Minor (constellation) Cancer Minor (Latin for "lesser crab") was a constellation composed of a few stars in Gemini adjacent to Cancer. The constellation was introduced in 1612 (or 1613) by Petrus Plancius. The 5th-magnitude stars constituting Cancer Minor were HIP 36616 and 68, 74, 81 and 85 Geminorum, forming a faint natural arrow-shaped asterism. It is found only on a few 17th-century Dutch celestial globes and in the atlas of Andreas Cellarius. It was no longer used after the 18th century. | https://en.wikipedia.org/wiki?curid=11071105 |
Type II sensory fiber (group Aβ) is a type of sensory fiber, the second of the two main groups of touch receptors. The responses of different type Aβ fibers to stimuli can be subdivided based on their adaptation properties, traditionally into rapidly adapting (RA) or slowly adapting (SA) neurons. Type II sensory fibers are slowly adapting (SA), meaning that even when there is no change in touch, they keep responding to stimuli and firing action potentials. In the body, type II sensory fibers belong to pseudounipolar neurons. The most notable examples are neurons with Merkel cell–neurite complexes on their dendrites (which sense static touch) and Ruffini endings (which sense stretch on the skin and over-extension inside joints). Under pathological conditions they may become hyper-excitable, so that stimuli that would usually elicit sensations of tactile touch instead cause pain. These changes are in part induced by PGE2, which is produced by COX1, and type II fibers with free nerve endings are likely to be the subdivision of fibers that carries out this function. (group Aα) is another type of sensory fiber, which participates in the sensation of body position (proprioception). In each muscle there are 10–100 tiny muscle-like pockets called muscle spindles. The type II fibers (also known as secondary fibers) connect to nuclear chain fibers and static nuclear bag fibers in muscle spindles, but not to dynamic nuclear bag fibers. The typical innervation of a muscle spindle consists of one type Ia fiber and two type II fibers | https://en.wikipedia.org/wiki?curid=11078164 |
Type II sensory fiber The type Ia fiber has "annulospiral" endings around the middle parts of the intrafusal fibers, whereas type II fibers have "flower spray" endings, which may be spray-shaped or annular, spreading in narrow bands on both sides of the chain or bag fiber. It is thought that the Ia fibers signal the rate of change in muscle length, and the type II fibers signal the length of the muscle itself (which is later used for forming the perception of the body in space). | https://en.wikipedia.org/wiki?curid=11078164 |
Qiudong Wang is a Professor at the Department of Mathematics, the University of Arizona. In 1982 he received a B.S. at Nanjing University and in 1994 a Ph.D. at the University of Cincinnati. Wang is best known for his 1991 paper "The global solution of the n-body problem", in which he generalised Karl F. Sundman's results from 1912 to a system of more than three bodies. However, L. K. Babadzanjanz claims to have made the same generalization earlier, in 1979. | https://en.wikipedia.org/wiki?curid=11084814 |
The central science Chemistry is often called the central science because of its role in connecting the physical sciences, which include chemistry, with the life sciences and applied sciences such as medicine and engineering. The nature of this relationship is one of the main topics in the philosophy of chemistry and in scientometrics. The phrase was popularized by its use in a textbook by Theodore L. Brown and H. Eugene LeMay, titled "Chemistry: The Central Science", which was first published in 1977, with a thirteenth edition published in 2014. The central role of chemistry can be seen in the systematic and hierarchical classification of the sciences by Auguste Comte, in which each discipline provides a more general framework for the area it precedes (mathematics → astronomy → physics → chemistry → physiology and medicine → social sciences). Balaban and Klein have more recently proposed a diagram showing the partial ordering of sciences, in which chemistry may be argued to be “the central science”, since it provides a significant degree of branching. In forming these connections, the lower field cannot be fully reduced to the higher ones. It is recognized that the lower fields possess emergent ideas and concepts that do not exist in the higher fields of science. Thus chemistry is built on an understanding of the laws of physics that govern particles such as atoms, protons, neutrons, and electrons, and of thermodynamics, although it has been shown that chemistry has not been “fully 'reduced' to quantum mechanics” | https://en.wikipedia.org/wiki?curid=11092324 |
The central science Concepts such as the periodicity of the elements and chemical bonds in chemistry are emergent in that they are more than the underlying forces that are defined by physics. In the same way, biology cannot be fully reduced to chemistry despite the fact that the machinery that is responsible for life is composed of molecules. For instance, the machinery of evolution may be described in terms of chemistry by the understanding that it is a mutation in the order of genetic base pairs in the DNA of an organism. However, chemistry cannot fully describe the process since it does not contain concepts such as natural selection that are responsible for driving evolution. Chemistry is fundamental to biology since it provides a methodology for studying and understanding the molecules that compose cells. Connections made by chemistry are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. Chemistry and physics are both needed in the areas of physical chemistry, nuclear chemistry, and theoretical chemistry. Chemistry and biology intersect in the areas of biochemistry, medicinal chemistry, molecular biology, chemical biology, molecular genetics, and immunochemistry. Chemistry and the earth sciences intersect in areas like geochemistry and hydrology. | https://en.wikipedia.org/wiki?curid=11092324 |
Asymmetric bacterium Asymmetric bacteria are bacteria that undergo "non-symmetrical" life cycles. This especially includes those that differentiate temporally, such as prosthecate bacteria. Cell division asymmetries have appeared alongside the evolution of complex developmental processes. While bacteria were historically considered symmetric simple cells, this idea has been overturned by novel technology and observation techniques. However, asymmetric bacteria remain difficult to detect. Asymmetrical growth aids in determining the age of bacteria, because it gives rise to an "old pole", or region of inert cell wall material found at the ends of a rod-shaped bacterial cell. Following the "old pole" of the cell wall material allows an observer to create a bacterial lineage. Bacteria exhibit three different types of asymmetry: conditional asymmetry, reproductive asymmetry, and morphological asymmetry. Conditional asymmetry is well defined in the case of endospore formation, which is triggered by stressful environmental conditions such as increased heat, pH change, and nutrient depletion. This type of asymmetry is usually seen in Bacilli and Clostridia. Reproductive asymmetry is classically linked to bacterial budding, where a mother cell concentrates cell wall material to one area and a daughter cell begins to bud from that thickening. Cell growth which gives rise to reproductive asymmetry occurs in three phases: stalk elongation, daughter cell elongation, and septum formation | https://en.wikipedia.org/wiki?curid=11094525 |
Asymmetric bacterium Morphological asymmetry is classified by polar elongation. In this type of asymmetrical growth, the daughter cell receives most of the new cell wall material. | https://en.wikipedia.org/wiki?curid=11094525 |
Ligand dependent pathway There are two types of pathway for the substitution of ligands in a complex. The ligand-dependent pathway is one in which the chemical properties of the ligand affect the rate of substitution. Alternatively, there is the ligand-independent pathway, in which the ligand does not have an effect. This distinction is of importance in inorganic chemistry and the study of complex ions. | https://en.wikipedia.org/wiki?curid=11103761 |
Condosity is a comparative measurement of the electrical conductivity of a solution. The condosity of any given solution is defined as the molar concentration of a sodium chloride (NaCl) solution that has the same specific electrical conductance as the solution under test. By way of example, for a 2 molar potassium chloride (KCl) solution, the condosity would be expected to be somewhat greater than 2.0, because the potassium ion is a better conductor than the sodium ion. The measurement is sometimes used in biological systems to provide an assessment of the properties of bodily or cellular liquids, or the properties of solutes in the physical environment. When measuring the properties of bodily fluids such as urine, condosity is expressed in units of millimoles per litre (mmol/l). | https://en.wikipedia.org/wiki?curid=11105645 |
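As a sketch of how the definition can be applied in practice, the snippet below linearly interpolates a test solution's specific conductance against an NaCl calibration table to return the matching NaCl molarity (the condosity). The calibration values used here are illustrative placeholders, not reference conductivity data.

```python
def condosity(kappa, nacl_table):
    """Return the molarity of the NaCl solution whose specific conductance
    matches kappa, by linear interpolation in a calibration table of
    ascending (molarity, conductance) pairs."""
    for (m0, k0), (m1, k1) in zip(nacl_table, nacl_table[1:]):
        if k0 <= kappa <= k1:
            return m0 + (m1 - m0) * (kappa - k0) / (k1 - k0)
    raise ValueError("conductance outside calibration range")

# Illustrative (made-up) NaCl calibration points: (mol/l, S/m)
table = [(0.0, 0.0), (0.5, 4.0), (1.0, 7.5), (2.0, 14.0)]
```

With this table, a test solution of conductance 10.75 S/m has a condosity of 1.5 M, since 10.75 lies halfway between the tabulated conductances at 1.0 M and 2.0 M.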
Arsenical Arsenicals are chemical compounds that contain arsenic. In a military context, the term arsenical refers to toxic arsenic compounds that are used as chemical warfare agents. These include blister agents, blood agents, and vomiting agents. | https://en.wikipedia.org/wiki?curid=11106681 |
Division on Dynamical Astronomy The Division on Dynamical Astronomy (DDA) is a branch of the American Astronomical Society that focuses on the advancement of all aspects of dynamical astronomy, including celestial mechanics, solar system dynamics, stellar dynamics, the dynamics of the interstellar medium, and galactic dynamics, and on the coordination of such research with other branches of science. It awards the annual Brouwer Award, which was established to recognize outstanding contributions to the field of dynamical astronomy, including celestial mechanics, astrometry, geophysics, stellar systems, and galactic and extragalactic dynamics. The Division also awards the Vera Rubin Early Career Prize, for promise of continued excellence, to an astronomer no more than 10 years past receipt of their doctorate. | https://en.wikipedia.org/wiki?curid=11111551 |
NGC 1260 is a spiral or lenticular galaxy in the constellation Perseus. It was discovered by astronomer Guillaume Bigourdan on October 19, 1884. NGC 1260 is a member of the Perseus Cluster and forms a tight pair with the galaxy PGC 12230. In 2006, it was home to the second-brightest supernova in the observable universe, supernova SN 2006gy. | https://en.wikipedia.org/wiki?curid=11112810 |
Jet noise In aeroacoustics, jet noise is the field that focuses on the noise generated by high-velocity jets and the turbulent eddies produced by shearing flow. Such noise is known as broadband noise and extends well beyond the range of human hearing (100 kHz and higher). Jet noise is also responsible for some of the loudest sounds ever produced by mankind. The primary sources of jet noise for a high-speed air jet (one whose exhaust velocity exceeds about 100 m/s; 360 km/h; 225 mph) are "jet mixing noise" and, for supersonic flow, shock-associated noise. Acoustic sources within the "jet pipe" also contribute to the noise, mainly at lower speeds; these include combustion noise and sounds produced by the interaction of a turbulent stream with fans, compressors, and turbine systems. The jet mixing sound is created by the turbulent mixing of a jet with the ambient fluid, in most cases air. The mixing initially occurs in an annular shear layer, which grows with distance from the nozzle. The mixing region generally fills the entire jet at four or five diameters from the nozzle. The high-frequency components of the sound originate mainly close to the nozzle, where the dimensions of the turbulent eddies are small. Further down the jet, where the eddy size is similar to the jet diameter, is where the lower-frequency sound begins. In supersonic, or choked, jets there are cells through which the flow continuously expands and contracts | https://en.wikipedia.org/wiki?curid=11121146 |
Jet noise Several of these "shock cells" can be seen extending up to ten jet diameters from the nozzle; they are responsible for two additional components of jet noise: screech tones and broadband shock-associated noise. Screech is produced by a feedback mechanism in which a disturbance convecting in the shear layer generates sound as it traverses the standing system of shock waves in the jet. Even though screech is a side effect of the jet's flight, it can be suppressed by an appropriate nozzle design. Aircraft noise is also sometimes called jet noise when it emanates from jet aircraft, regardless of the mechanism of noise production. | https://en.wikipedia.org/wiki?curid=11121146 |
Direct methods (crystallography) In crystallography, direct methods are a family of methods for estimating the phases of the Fourier transform of the scattering density from the corresponding magnitudes. The methods generally exploit constraints or statistical correlations between the phases of different Fourier components that result from the fact that the scattering density must be a positive real number. In two dimensions it is relatively easy to solve the phase problem directly, but not so in three dimensions. The key step was taken by Hauptman and Karle, who developed a practical method to employ the Sayre equation, for which they were awarded the 1985 Nobel Prize in Chemistry. The Nobel Prize citation was "for their outstanding achievements in the development of direct methods for the determination of crystal structures." At present, direct methods are the preferred method for phasing crystals of small molecules having up to 1000 atoms in the asymmetric unit. However, they are generally not feasible by themselves for larger molecules such as proteins. Several software packages implement direct methods. | https://en.wikipedia.org/wiki?curid=11125044 |
Christian Hee Hwass (1731–1803) was a Danish malacologist who is remembered for his work in conchology. Although born in Denmark, Hwass did most of his important work in France. He moved to Paris in 1780 and later to Auteuil (1794). In France, he collaborated with the famous scientists Jean-Baptiste Lamarck (1744–1829) and Heinrich Christian Friedrich Schumacher (1757–1830), a fellow Dane on a research trip in Paris during the 1780s. Hwass is remembered for amassing a large shell collection that included numerous rare specimens. Many European scientists used the collection as a reference for their own publications. Hwass's best-known written work was the 1792 publication of the "Encyclopédie Méthodique". Although his friend Jean Guillaume Bruguière (1750–1798) is often credited as the author of the encyclopedia, the majority of the work was done by Hwass. | https://en.wikipedia.org/wiki?curid=11131531 |
Polarization spectroscopy comprises a set of spectroscopic techniques based on the polarization properties of light (not necessarily visible light; UV, X-ray, infrared, or any other frequency range of electromagnetic radiation). By analyzing the polarization properties of light, inferences can be made about the medium that emitted the light (or the medium the light passes through or is scattered by). Alternatively, a source of polarized light may be used to probe a medium; in this case, the changes in the light's polarization (compared to that of the incident light) allow one to infer the properties of the medium. In general, any kind of anisotropy in the medium results in some sort of change in polarization. Such an anisotropy can be either inherent to the medium (e.g., in the case of a crystalline substance) or imposed externally (e.g., in the presence of a magnetic field in a plasma). | https://en.wikipedia.org/wiki?curid=11133153 |
Acousto-optic deflector An acousto-optic deflector (AOD) spatially controls an optical beam. In the operation of an acousto-optic deflector, the power driving the acoustic transducer is kept on at a constant level, while the acoustic frequency is varied to deflect the beam to different angular positions. The acousto-optic deflector makes use of the acoustic-frequency-dependent diffraction angle, where the change in the deflection angle Δθ as a function of the change in acoustic frequency Δf is given by Δθ = (λ/v) Δf, where λ is the optical wavelength and v is the velocity of the acoustic wave. AOM technology has made practical the Bose–Einstein condensation for which the 2001 Nobel Prize in Physics was awarded to Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman. Another application of acousto-optic deflection is the optical trapping of small molecules. AODs are essentially the same as acousto-optic modulators (AOMs). In both an AOM and an AOD, the amplitude and frequency of the different orders are adjusted as light is diffracted. | https://en.wikipedia.org/wiki?curid=11146921 |
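The deflection relation Δθ = (λ/v) Δf is easy to evaluate numerically. The sketch below does so; the wavelength, frequency sweep, and acoustic velocity are illustrative values chosen for the example, not parameters of any particular device.

```python
def deflection_change(wavelength, delta_f, acoustic_velocity):
    """Change in diffraction angle (radians) for a change delta_f in
    acoustic frequency: d(theta) = (lambda / v) * d(f)."""
    return wavelength * delta_f / acoustic_velocity

# Illustrative numbers: 633 nm light, a 50 MHz frequency sweep,
# and a 650 m/s acoustic wave velocity
dtheta = deflection_change(633e-9, 50e6, 650.0)  # ~0.049 rad
```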
Purple Earth hypothesis The "Purple Earth hypothesis" is an astrobiological hypothesis that the life forms of early Earth were retinal-based rather than chlorophyll-based, making Earth appear purple rather than green. An example of retinal-based organisms that exist today is the group of photosynthetic microbes collectively called Haloarchaea. Many Haloarchaea contain the retinal protein, bacteriorhodopsin, in their purple membrane, which carries out light-driven proton pumping, generating a proton-motive gradient across the cell membrane and driving ATP synthesis. The haloarchaeal purple membrane constitutes one of the simplest known bioenergetic systems for harvesting light energy. Retinal-containing purple membrane exhibits a single light absorption peak centered in the green-yellow, energy-rich region of the solar spectrum, allowing transmission of red and blue light and resulting in a deep purple color. Chlorophyll pigments, in contrast, absorb red and blue light but little or no green light, which results in the characteristic green color of plants, cyanobacteria, and photosynthetic membranes. Microorganisms with purple and green pigments frequently co-exist in stratified communities where they may utilize complementary regions of the solar spectrum | https://en.wikipedia.org/wiki?curid=11147646 |
Purple Earth hypothesis The simplicity of haloarchaeal retinal pigments in comparison to the more complex chlorophyll-based photosynthetic membrane, their association with isoprenoid lipids in the cell membrane, as well as the discovery of archaeal membrane components in ancient sediments on the early Earth are consistent with an early appearance of life forms with purple membrane prior to green photosynthesis. Co-existence of purple and green pigment containing microorganisms in many environments suggests their co-evolution. Astrobiologists have suggested that retinal pigments may serve as remote biosignatures in exoplanet research. | https://en.wikipedia.org/wiki?curid=11147646 |
FLUKA (FLUktuierende KAskade) is a closed-source, semi-integrated Monte Carlo simulation package for the interaction and transport of particles and nuclei in matter, with limited scalability. FLUKA has many applications in particle physics, high-energy experimental physics and engineering, shielding, detector and telescope design, cosmic ray studies, dosimetry, medical physics, and radiobiology. A recent line of development concerns hadron therapy. FLUKA is available in the form of a pre-compiled object library for a number of computer platforms. Source code is available subject to the conditions specified in the license. Alternatives to FLUKA are GEANT4, MCNP, and PHITS. FLUKA is developed in the Fortran language. Under Linux, the g77 compiler is at present necessary to build and run user programs. A 64-bit version compiled with GNU Fortran has been available since 2011 (for versions >4.5). A graphical user interface for running FLUKA, named Flair, has been developed using Python and is available at the project's web site. The software is sponsored and copyrighted by INFN and CERN. Early versions of the FLUKA hadronic event generator were implemented in other codes (in particular GEANT3) and should be referenced as such (e.g. GEANT-FLUKA) and not as FLUKA. The hadronic generator in GEANT3 has not been developed since 1993 and cannot be compared with the present stand-alone FLUKA. The FLUKA software code is used by Epcard, which is a software program for simulating radiation exposure on airline flights. | https://en.wikipedia.org/wiki?curid=11157848 |
Quantum instrument In physics, a quantum instrument is a mathematical abstraction of a quantum measurement, capturing both the classical and quantum outputs. It combines the concepts of measurement and quantum operation. It can be equivalently understood as a quantum channel that takes as input a quantum system and has as its output two systems: a classical system containing the outcome of the measurement and a quantum system containing the post-measurement state. Let X be a countable set describing the outcomes of a measurement, and let {M_x}_{x∈X} denote a collection of trace-non-increasing completely positive maps, such that the sum of all M_x is trace-preserving, i.e. Σ_{x∈X} tr[M_x(ρ)] = tr[ρ] for all positive operators ρ. Now for describing a quantum measurement by an instrument I, the maps M_x are used to model the mapping from an input state ρ to the output state of a measurement conditioned on a classical measurement outcome x. Therefore, the probability of measuring a specific outcome x on a state ρ is given by p(x) = tr[M_x(ρ)]. The state after a measurement with the specific outcome x is given by ρ_x = M_x(ρ)/tr[M_x(ρ)]. If the measurement outcomes are recorded in a classical register, whose states are modeled by a set of orthonormal projections |x⟩⟨x| on the Hilbert space H_X, then the action of an instrument I is given by a quantum channel I: B(H_1) → B(H_2 ⊗ H_X) with I(ρ) = Σ_{x∈X} M_x(ρ) ⊗ |x⟩⟨x|. Here H_1 and H_2 are the Hilbert spaces corresponding to the input and the output systems of the instrument | https://en.wikipedia.org/wiki?curid=11158400 |
Quantum instrument A quantum instrument is an example of a quantum operation in which an "outcome" formula_9 indicating which operator acted on the state is recorded in a classical register. An expanded development of quantum instruments is given in quantum channel. | https://en.wikipedia.org/wiki?curid=11158400 |
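As a concrete sketch of these definitions, the snippet below implements a quantum instrument for a projective qubit measurement in the computational basis, where each map is M_x(ρ) = P_x ρ P_x. This specific choice of maps is one illustrative example; instruments in general allow arbitrary trace-non-increasing completely positive maps.

```python
import numpy as np

# Projectors P_0 = |0><0| and P_1 = |1><1| define the maps M_x(rho) = P_x rho P_x
P = [np.array([[1, 0], [0, 0]], dtype=complex),
     np.array([[0, 0], [0, 1]], dtype=complex)]

def instrument(rho):
    """Return {outcome x: (p(x), post-measurement state rho_x)}."""
    outcomes = {}
    for x, Px in enumerate(P):
        sigma = Px @ rho @ Px               # trace-non-increasing CP map M_x(rho)
        p = sigma.trace().real              # p(x) = tr[M_x(rho)]
        if p > 0:
            outcomes[x] = (p, sigma / p)    # rho_x = M_x(rho) / tr[M_x(rho)]
    return outcomes

plus = np.full((2, 2), 0.5, dtype=complex)  # the state |+><+|
result = instrument(plus)
```

For the input state |+⟩⟨+| each outcome occurs with probability 1/2, and the conditional post-measurement states are the basis projectors themselves.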
RADOM-7 RADOM is a Bulgarian Liulin-type spectrometry–dosimetry instrument, designed to precisely measure cosmic radiation around the Moon. It is installed on the Indian satellite Chandrayaan-1. Three other instruments of this type have been deployed on the International Space Station. All Liulin-type instruments are designed and built by the Solar-Terrestrial Influences Laboratory at the Bulgarian Academy of Sciences. | https://en.wikipedia.org/wiki?curid=11191703 |
Julius Wilhelm Gintl (November 12, 1804 – December 22, 1883) was an Austrian physicist. He was notable as the developer of an early form of duplex electrical telegraph, which allowed two messages to be transmitted on a single wire, in opposite directions. This "duplex" communication was an early specific case of the general practice of multiplexing. Gintl's method would be developed to economic viability by J. B. Stearns, and the refined method used in Edison's implementation of a quadruplex telegraph. | https://en.wikipedia.org/wiki?curid=11193275 |
Anindya Sinha Anindya (Rana) Sinha is an Indian primatologist. He is a professor at the National Institute of Advanced Studies (NIAS), India. After obtaining an undergraduate degree in botany from the University of Calcutta in 1983, he went on to earn a postgraduate degree from the same university in 1985, specializing in cytogenetics. He is on the executive board of the Nature Conservation Foundation, India. His research is mostly centered on cognition and consciousness in the bonnet macaque ("Macaca radiata"), but he has also been involved in many genetics projects on Indian primates. He is also involved with the Biology Olympiad as the leader of the Indian team. He is the son of the Indian director and film-maker Tapan Sinha and the actress and singer Arundhati Devi. In 2009, he was chosen as a TED Fellow. | https://en.wikipedia.org/wiki?curid=11199808 |
Speleogenesis is the origin and development of caves, the primary process that determines essential features of the hydrogeology of karst and guides its evolution. It often deals with the development of caves through limestone, caused by the presence of water with carbon dioxide dissolved within it, producing carbonic acid which permits the dissociation of the calcium carbonate in the limestone. The majority of limestone caves are created by calcium carbonate dissolution by the solvent action of meteoric waters circulating through the rock. In the presence of carbon-dioxide-saturated water, calcium carbonate reacts to form the soluble calcium bicarbonate: CaCO₃ + CO₂ + H₂O → Ca(HCO₃)₂. As meteoric waters precipitate they dissolve atmospheric carbon dioxide to form a dilute carbonic acid solution, which builds up in permeable fissures, bedding planes, joints, and faults within limestone rocks. The exposed limestone then reacts to become calcium bicarbonate, which dissolves in the water and is removed from the fault as the solution flows away. Phreatic passages develop in conditions of complete water-fill, meaning that ceilings and walls may be eroded as readily as floors. The form is generally that of an ellipse along the host fissure, whilst more circular forms generally indicate faster solvent flow and deep pockets are often indicative of slower flow. Vadose passages develop where the water has a free surface (i.e., in the vadose zone), and are varieties of entrenched, canyon-like channels as found with surface rivers | https://en.wikipedia.org/wiki?curid=11214534 |
Speleogenesis It is common to see a younger canyon entrenched in the floor of a phreatic passage, signifying a lowering of the water table. | https://en.wikipedia.org/wiki?curid=11214534 |
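The dissolution reaction CaCO₃ + CO₂ + H₂O → Ca(HCO₃)₂ lends itself to a quick stoichiometric estimate. This is a minimal sketch with rounded atomic masses; the figures are illustrative and not taken from the article.

```python
# Rough stoichiometry of limestone dissolution: one mole of dissolved CO2
# removes one mole of CaCO3 from the rock. Atomic masses are rounded.
M = {"Ca": 40.08, "C": 12.01, "O": 16.00, "H": 1.008}

def molar_mass(formula):
    """Molar mass in g/mol for a formula given as {element: count}."""
    return sum(M[el] * n for el, n in formula.items())

caco3 = molar_mass({"Ca": 1, "C": 1, "O": 3})   # calcite, ~100 g/mol
co2 = molar_mass({"C": 1, "O": 2})              # ~44 g/mol

# Grams of limestone dissolved per gram of CO2 consumed by the reaction.
grams_limestone_per_gram_co2 = caco3 / co2
print(round(grams_limestone_per_gram_co2, 2))   # ~2.27
```

So each gram of dissolved CO₂ can remove roughly 2.3 g of limestone, which is why slowly circulating, CO₂-rich meteoric water is such an effective cave-forming agent over geological time.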
Appetein or APC- is a processed granulated plasma and serum blend ingredient for animal feeds, mostly used for young animals. The Appetein name is patented and belongs to the company APC. | https://en.wikipedia.org/wiki?curid=11217735 |
National Institute of Agricultural Botany The National Institute of Agricultural Botany (NIAB) is a plant science research company based in Cambridge, UK. NIAB was founded in 1919 by Sir Lawrence Weaver, celebrating its centenary in 2019. The original Huntingdon Road HQ was opened in 1921 by King George V and Queen Mary. NIAB operates 11 regional centres throughout England; a 12th centre is expected to open at Cirencester (Gloucestershire) in 2020. | https://en.wikipedia.org/wiki?curid=11235396 |
Pedicel (botany) A pedicel is a stem that attaches a single flower to the inflorescence; such inflorescences are described as pedicellate. In the absence of a pedicel, the flowers are described as sessile. The term is also applied to the stem of the infructescence. The word "pedicel" is derived from the Latin "pediculus", meaning "little foot". The stem or branch from the main stem of the inflorescence that holds a group of pedicels is called a peduncle. A pedicel may be associated with a bract or bracts. In Halloween types of pumpkin or squash plants, the shape of the pedicel has received particular attention because plant breeders are trying to optimize the size and shape of the pedicel for the best "lid" for a "jack-o'-lantern". | https://en.wikipedia.org/wiki?curid=11235593 |
North American Mesoscale Model The North American Mesoscale Model (NAM) is a numerical weather prediction model run by the National Centers for Environmental Prediction for short-term weather forecasting. Currently, the Weather Research and Forecasting Non-hydrostatic Mesoscale Model (WRF-NMM) is run as the NAM; thus, the three names (NAM, WRF, and NMM) typically refer to the same model output. The WRF replaced the Eta model on June 13, 2006. The model is run four times a day (00, 06, 12, 18 UTC) out to 84 hours. It is currently run with 12 km horizontal resolution and with three-hour temporal resolution, providing finer detail than other operational forecast models. The NAM ensemble, known as the Short Range Ensemble Forecast (SREF), runs out to 87 hours. | https://en.wikipedia.org/wiki?curid=11238257 |
NGC 3344 is a relatively isolated barred spiral galaxy located 22.5 million light years away in the constellation Leo Minor. This galaxy belongs to the group known as the Leo spur, which is a branch of the Virgo Supercluster. NGC 3344 has the morphological classification (R)SAB(r)bc, which indicates it is a weakly barred spiral galaxy that exhibits rings and moderate to loosely wound spiral arms. There is both an inner and an outer ring, with the prominent arms radiating outward from the inner ring and the slightly elliptical bar being situated inside. At the center of the bar is an HII nucleus with an angular diameter of about 3″. | https://en.wikipedia.org/wiki?curid=11250389 |
Offshore survey is a specific discipline of hydrographic survey primarily concerned with the description of the condition of the seabed and the condition of the subsea oilfield infrastructure that interacts with it. | https://en.wikipedia.org/wiki?curid=11251777 |
Centre de données astronomiques de Strasbourg The Centre de Données astronomiques de Strasbourg (CDS; English translation: "Strasbourg Astronomical Data Center") is a data hub which collects and distributes astronomical information. It was established in 1972 under the name "Centre de Données Stellaires". The on-line services currently provided by the CDS include: | https://en.wikipedia.org/wiki?curid=11268399 |
Dynamic aperture In acoustics, dynamic aperture is analogous to aperture in photography. The arrays in side-scan sonar can be programmed to transmit just a few elements at a time or all the elements at once. The more elements transmitting, the narrower the beam and the better the resolution. The ratio of the imaging depth to the aperture size is known as the F-number. Dynamic aperture is the practice of keeping this number constant by growing the aperture with the imaging depth until the physical aperture cannot be increased. A modern medical ultrasound machine has a typical F-number of 0.5. Side-scan sonar systems produce images by forming angular "beams"; beam width is determined by the length of the sonar array, and longer arrays with narrower beams resolve finer detail and provide finer spatial resolution. | https://en.wikipedia.org/wiki?curid=11277734 |
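The constant-F-number idea can be sketched in a few lines: grow the receive aperture in proportion to the imaging depth until the whole physical array is in use. Element pitch, array size, and F-number below are illustrative values, not taken from the article.

```python
def active_elements(depth_m, f_number=0.5, pitch_m=0.3e-3, n_elements=128):
    """Elements needed so that depth / aperture stays equal to f_number."""
    desired_aperture_m = depth_m / f_number        # F = depth / aperture
    n = round(desired_aperture_m / pitch_m)        # aperture -> element count
    return min(max(n, 1), n_elements)              # clamp to the physical array

# The active aperture grows with depth, then saturates at the full array,
# at which point the F-number can no longer be held constant.
for depth_mm in (5, 10, 20, 40):
    print(depth_mm, "mm ->", active_elements(depth_mm * 1e-3), "elements")
```

With these example numbers, imaging at 5 mm uses 33 elements, while beyond about 19 mm the full 128-element array is active and resolution degrades with further depth.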
Cryophorus A cryophorus is a glass container containing liquid water and water vapor. It is used in physics courses to demonstrate rapid freezing by evaporation. A typical cryophorus has a bulb at one end connected to a tube of the same material. When the liquid water is manipulated into the bulbed end and the other end is submerged into a freezing mixture (such as liquid nitrogen), the gas pressure drops as it is cooled. The liquid water begins to evaporate, producing more water vapor. Evaporation causes the water to cool rapidly to its freezing point and it solidifies suddenly. Wollaston's cryophorus was a precursor to the modern heat pipe. The cryophorus was first described by William Hyde Wollaston in an 1813 paper titled, "On a method of freezing at a distance." | https://en.wikipedia.org/wiki?curid=11282791 |
Hamburg/ESO Survey The Hamburg/ESO Survey is an astrometric star catalogue published by the University of Hamburg. The catalog contains stars between magnitudes 13 and 18 covering the Southern extragalactic sky. The stated goals of the catalog are | https://en.wikipedia.org/wiki?curid=11293812 |
Frederic Rousseau is a Flemish Belgian molecular biologist and researcher at the KU Leuven (Leuven, Belgium). Together with Joost Schymkowitz he is group leader at the VIB Switch Laboratory, KU Leuven. His research interest is in essential cellular processes where functional regulation is governed by protein conformational switches that have to be actively controlled to ensure cell viability. He obtained a PhD at the University of Cambridge (Cambridge, United Kingdom) in 2001. He did postdoctoral work at the EMBL in Heidelberg, Germany, from 2001 until 2003. Rousseau has been a VIB group leader since 2003. | https://en.wikipedia.org/wiki?curid=11305204 |
Joost Schymkowitz is a Belgian molecular biologist and researcher at the KU Leuven (Leuven, Belgium). Together with Frederic Rousseau he is group leader at the VIB Switch Laboratory, KU Leuven. His research interest is in essential cellular processes where functional regulation is governed by protein conformational switches that have to be actively controlled to ensure cell viability. He obtained a PhD at the University of Cambridge (Cambridge, United Kingdom) in 2001. He did postdoctoral work at the EMBL in Heidelberg, Germany, from 2001 until 2003. Schymkowitz has been a VIB group leader since 2003. | https://en.wikipedia.org/wiki?curid=11305248 |
Biobased economy Biobased economy, bioeconomy or biotechonomy refers to economic activity involving the use of biotechnology in the production of (bio-based) goods, services, or energy from biological material (or biomass) as the primary resource base. An important aspect of the bioeconomy is understanding mechanisms and processes at the genetic, molecular, and genomic levels, and applying this understanding to creating or improving industrial processes, developing new products and services, and producing new energy. The terms are widely used by regional development agencies, national and international organizations, and biotechnology companies. They are closely linked to the evolution of the biotechnology industry and the capacity to study, understand, and manipulate genetic material that has been possible due to scientific research and technological development. This includes the application of scientific and technological developments to agriculture, health, chemical, and energy industries. The term 'biotechonomy' was used by Juan Enríquez and Rodrigo Martinez at the Genomics Seminar in the 1997 AAAS meeting. An excerpt of this paper was published in "Science". Enríquez and Martinez' 2002 Harvard Business School working paper, "Biotechonomy 1.0: A Rough Map of Biodata Flow", showed the global flow of genetic material into and out of the three largest public genetic databases: GenBank, EMBL and DDBJ | https://en.wikipedia.org/wiki?curid=11339988 |
Biobased economy The authors then hypothesized about the economic impact that such data flows might have on patent creation, evolution of biotech startups and licensing fees. An adaptation of this paper was published in "Wired" magazine in 2003. The term 'bioeconomy' became popular from the mid-2000s with its adoption by the European Union and Organisation for Economic Co-operation and Development as a policy agenda and framework to promote the use of biotechnology to develop new products, markets, and uses of biomass. Since then, both the EU (2012) and OECD (2006) have created dedicated bioeconomy strategies, as have an increasing number of countries around the world. Often these strategies conflate the bioeconomy with the term 'bio-based economy'. For example, since 2005 the Netherlands has sought to promote the creation of a biobased economy. Pilot plants have been started, e.g. in Lelystad (Zeafuels), and a centralised organisation exists (Interdepartementaal programma biobased economy), with supporting research (Food & Biobased Research) being conducted. Other European countries have also developed and implemented bioeconomy or bio-based economy policy strategies and frameworks. In 2012, US President Barack Obama announced intentions to encourage biological manufacturing methods with a National Bioeconomy Blueprint. The biobased economy uses first-generation biomass (crops), second-generation biomass (crop refuge), and third-generation biomass (seaweed, algae) | https://en.wikipedia.org/wiki?curid=11339988 |
Biobased economy Several methods of processing are then used (in biorefineries) to get the most out of the biomass. These include techniques such as anaerobic digestion, which is generally used to produce biogas; fermentation of sugars, which produces ethanol; pyrolysis, which is used to produce pyrolysis oil (which is solidified biogas); and torrefaction, which is used to create biomass coal. Biomass coal and biogas are then burnt for energy production, while ethanol can be used as a vehicle fuel, as well as for other purposes such as skincare products. For economic reasons, the processing of the biomass is done according to a specific pattern (a process called cascading); this pattern depends on the types of biomass used. The practice of finding the most suitable pattern is known as biorefining. Organisms ranging from bacteria and yeasts to plants are used for production by enzymatic catalysis. Genetically modified bacteria have been used to produce insulin, and artemisinic acid has been made in engineered yeast. Some bioplastics (based on polyhydroxybutyrate or polyhydroxyalkanoates) are produced from sugar using genetically modified microbes. Genetically modified organisms are also used for the production of biofuels, a type of carbon-neutral fuel. Research is also being done towards CO2 fixation using a synthetic metabolic pathway. By genetically modifying E | https://en.wikipedia.org/wiki?curid=11339988 |
Biobased economy coli bacteria so as to allow them to consume CO2, the bacterium may provide the infrastructure for the future renewable production of food and green fuels. One of the organisms (Ideonella sakaiensis) that is able to break down PET (a plastic) into other substances has been genetically modified to break down PET even faster and also to break down PEF. Once plastics (which are normally non-biodegradable) are broken down and recycled into other substances (i.e. biomatter in the case of Tenebrio molitor larvae), they can be used as an input for other animals. Genetically modified crops are also used. Genetically modified energy crops, for instance, may provide additional advantages such as reduced associated costs (i.e. costs during the manufacturing process) and less water use. One example is trees that have been genetically modified either to have less lignin or to express lignin with chemically labile bonds. With genetically modified crops, however, there are still some challenges involved (hurdles to regulatory approvals, market adoption and public acceptance). | https://en.wikipedia.org/wiki?curid=11339988 |
Microturbulence is a form of turbulence that varies over small distance scales. (Large-scale turbulence is called macroturbulence.) Microturbulence is one of several mechanisms that can cause broadening of the absorption lines in the stellar spectrum. Stellar microturbulence varies with the effective temperature and the surface gravity. The microturbulent velocity is defined as the microscale non-thermal component of the gas velocity in the region of spectral line formation. Convection is the mechanism believed to be responsible for the observed turbulent velocity field, both in low-mass stars and massive stars. When examined by a spectroscope, the velocity of the convective gas along the line of sight produces Doppler shifts in the absorption bands. It is the distribution of these velocities along the line of sight that produces the microturbulence broadening of the absorption lines in low-mass stars that have convective envelopes. In massive stars convection can be present only in small regions below the surface; these sub-surface convection zones can excite turbulence at the stellar surface through the emission of acoustic and gravity waves. The strength of the microturbulence (symbolized by ξ, in units of km/s) can be determined by comparing the broadening of strong lines versus weak lines. Microturbulence plays a critical role in energy transport during magnetic nuclear fusion experiments, such as in tokamaks. | https://en.wikipedia.org/wiki?curid=11341364 |
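The effect of ξ on line width can be illustrated with the standard Doppler-width formula, in which the microturbulent velocity adds in quadrature to the thermal speed: Δλ = (λ/c)·√(2kT/m + ξ²). A hedged sketch with illustrative solar-like values (an Fe line at 500 nm, T = 5800 K); the numbers are for demonstration, not from the article.

```python
import math

k_B = 1.380649e-23               # Boltzmann constant, J/K
c = 2.99792458e8                 # speed of light, m/s
m_Fe = 55.845 * 1.66053907e-27   # mass of an iron atom, kg

def doppler_width_nm(lambda_nm, T_K, xi_kms):
    """Doppler width (nm): thermal and microturbulent speeds add in quadrature."""
    v_squared = 2 * k_B * T_K / m_Fe + (xi_kms * 1e3) ** 2
    return lambda_nm * math.sqrt(v_squared) / c

w_thermal = doppler_width_nm(500.0, 5800.0, 0.0)  # thermal broadening only
w_total = doppler_width_nm(500.0, 5800.0, 1.0)    # with xi = 1 km/s
assert w_total > w_thermal  # microturbulence broadens the line further
```

Because the two speeds add in quadrature, a 1 km/s microturbulent velocity noticeably widens an iron line (thermal speed ~1.3 km/s) but would barely affect a hydrogen line, whose thermal speed is far larger.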
Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular-level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine. Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure), function (see: protein function) and properties, and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering. During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that led to the creation of the field of biomolecular engineering | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as "research at the interface of chemical engineering and biology with an emphasis at the molecular level". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized Mab for breast cancer treatment, became the first drug designed by a biomolecular engineering approach and was approved by the U.S. FDA. Also, "Biomolecular Engineering" was a former name of the journal "New Biotechnology". Bio-inspired technologies of the future can help explain biomolecular engineering. Looking at the prediction of Moore's law, quantum- and biology-based processors are "big" future technologies. With the use of biomolecular engineering, the way our processors work can be manipulated so that they function in the same sense that a biological cell works. Biomolecular engineering has the potential to become one of the most important scientific disciplines because of its advancements in the analysis of gene expression patterns as well as the purposeful manipulation of many important biomolecules to improve functionality. Research in this field may lead to new drug discoveries, improved therapies, and advancement in new bioprocess technology. With the increasing knowledge of biomolecules, the rate of finding new high-value molecules, including but not limited to antibodies, enzymes, vaccines, and therapeutic peptides, will continue to accelerate | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering will produce new designs for therapeutic drugs and high-value biomolecules for the treatment or prevention of cancers, genetic diseases, and other types of metabolic diseases. Also, there is anticipation of industrial enzymes engineered to have desirable properties for process improvement, as well as the manufacturing of high-value biomolecular products at a much lower production cost. Using recombinant technology, new antibiotics that are active against resistant strains will also be produced. Biomolecular engineering deals with the manipulation of many key biomolecules. These include, but are not limited to, proteins, carbohydrates, nucleic acids, and lipids. These molecules are the basic building blocks of life, and by controlling, creating, and manipulating their form and function there are many new avenues and advantages available to society. Since every biomolecule is different, there are a number of techniques used to manipulate each one respectively. Proteins are polymers that are made up of amino acid chains linked with peptide bonds. They have four distinct levels of structure: primary, secondary, tertiary, and quaternary. Primary structure refers to the amino acid backbone sequence. Secondary structure focuses on minor conformations that develop as a result of the hydrogen bonding between the amino acid chain. If most of the protein contains intermolecular hydrogen bonds it is said to be fibrillar, and the majority of its secondary structure will be beta sheets | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering However, if the majority of the orientation contains intramolecular hydrogen bonds, then the protein is referred to as globular and mostly consists of alpha helices. There are also conformations that consist of a mix of alpha helices and beta sheets, as well as beta helices with alpha sheets. The tertiary structure of proteins deals with their folding process and how the overall molecule is arranged. Finally, quaternary structure is a group of tertiary proteins coming together and binding. With all of these levels, proteins have a wide variety of places in which they can be manipulated and adjusted. Techniques are used to affect the amino acid sequence of the protein (site-directed mutagenesis), the folding and conformation of the protein, or the folding of a single tertiary protein within a quaternary protein matrix. Proteins that are the main focus of manipulation are typically enzymes, the proteins that act as catalysts for biochemical reactions. By manipulating these catalysts, the reaction rates, products, and effects can be controlled. Enzymes and proteins are so important to the biological field and research that there are specific divisions of engineering focusing only on proteins and enzymes. Carbohydrates are another important class of biomolecule. These are polymers, called polysaccharides, which are made up of chains of simple sugars connected via glycosidic bonds | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering These monosaccharides consist of a five- to six-carbon ring that contains carbon, hydrogen, and oxygen, typically in a 1:2:1 ratio, respectively. Common monosaccharides are glucose, fructose, and ribose. When linked together, monosaccharides can form disaccharides, oligosaccharides, and polysaccharides; the nomenclature is dependent on the number of monosaccharides linked together. Common disaccharides, two monosaccharides joined together, are sucrose, maltose, and lactose. Important polysaccharides, links of many monosaccharides, are cellulose, starch, and chitin. Cellulose is a polysaccharide made up of beta 1-4 linkages between repeat glucose monomers. It is the most abundant source of sugar in nature and is a major part of the paper industry. Starch is also a polysaccharide made up of glucose monomers; however, they are connected via an alpha 1-4 linkage instead of beta. Starches, particularly amylose, are important in many industries, including the paper, cosmetic, and food industries. Chitin is a derivative of cellulose, possessing an acetamide group instead of an -OH on one of its carbons. When the acetamide group is deacetylated, the polymer chain is called chitosan. Both of these cellulose derivatives are a major source of research for the biomedical and food industries. They have been shown to assist with blood clotting, have antimicrobial properties, and have dietary applications. A lot of engineering and research is focused on the degree of deacetylation that provides the most effective result for specific applications | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering Nucleic acids are macromolecules, comprising DNA and RNA, which are biopolymers consisting of chains of nucleotides. These two molecules are the genetic code and template that make life possible. Manipulation of these molecules and structures causes major changes in function and expression of other macromolecules. Nucleosides are glycosylamines containing a nucleobase bound to either ribose or deoxyribose sugar via a beta-glycosidic linkage. The sequence of the bases determines the genetic code. Nucleotides are nucleosides that are phosphorylated by specific kinases via a phosphodiester bond. Nucleotides are the repeating structural units of nucleic acids. Each nucleotide is made of a nitrogenous base, a pentose (ribose for RNA or deoxyribose for DNA), and three phosphate groups. See site-directed mutagenesis, recombinant DNA, and ELISA. Lipids are biomolecules that are made up of glycerol derivatives bonded with fatty acid chains. Glycerol is a simple polyol that has the formula C₃H₅(OH)₃. Fatty acids are long carbon chains that have a carboxylic acid group at the end. The carbon chains can be either saturated with hydrogen, where every carbon bond is occupied by a hydrogen atom or a single bond to another carbon in the chain, or unsaturated, where there are double bonds between the carbon atoms in the chain. Common fatty acids include lauric acid, stearic acid, and oleic acid. The study and engineering of lipids typically focuses on the manipulation of lipid membranes and encapsulation | https://en.wikipedia.org/wiki?curid=11344743 |
Biomolecular engineering Cellular membranes and other biological membranes typically consist of a phospholipid bilayer membrane, or a derivative thereof. Along with the study of cellular membranes, lipids are also important molecules for energy storage. By utilizing encapsulation properties and thermodynamic characteristics, lipids become significant assets in structure and energy control when engineering molecules. Recombinant DNA is a DNA biomolecule that contains genetic sequences that are not native to the organism's genome. Using recombinant techniques, it is possible to insert, delete, or alter a DNA sequence precisely without depending on the location of restriction sites. Recombinant DNA is used for a wide range of applications. The traditional method for creating recombinant DNA typically involves the use of plasmids in the host bacteria. The plasmid contains a genetic sequence corresponding to the recognition site of a restriction endonuclease, such as EcoRI. After foreign DNA fragments, which have also been cut with the same restriction endonuclease, have been inserted into the host cell, the restriction endonuclease gene is expressed by applying heat or by introducing a biomolecule, such as arabinose. Upon expression, the enzyme cleaves the plasmid at its corresponding recognition site, creating sticky ends on the plasmid. A ligase then joins the sticky ends to the corresponding sticky ends of the foreign DNA fragments, creating a recombinant DNA plasmid | https://en.wikipedia.org/wiki?curid=11344743 |
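The plasmid-cutting step can be illustrated with a toy digestion at the EcoRI recognition site (GAATTC), which cuts between G and A on each strand, leaving 5′ AATT overhangs ("sticky ends"). The sequence below is made up for demonstration; this sketch only tracks the top strand.

```python
ECORI_SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts after the first base: G^AATTC

def digest(seq):
    """Cut seq at every EcoRI recognition site, returning top-strand fragments."""
    fragments, start = [], 0
    pos = seq.find(ECORI_SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(ECORI_SITE, pos + 1)
    fragments.append(seq[start:])
    return fragments

plasmid = "ATCGGAATTCTTAAGGGAATTCCG"  # hypothetical sequence with two sites
print(digest(plasmid))  # ['ATCGG', 'AATTCTTAAGGG', 'AATTCCG']
```

Each internal fragment starts with the AATT overhang left by the cut; because any fragment cut by the same enzyme carries a complementary overhang, a ligase can join them, which is the basis of the plasmid assembly described above.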
Biomolecular engineering Advances in genetic engineering have made the modification of genes in microbes quite efficient, allowing constructs to be made in about a week's time. It has also made it possible to modify the organism's genome itself. Specifically, genes from the bacteriophage lambda are used in recombination. This mechanism, known as recombineering, utilizes the three proteins Exo, Beta, and Gam, which are created by the genes exo, bet, and gam respectively. Exo is a double-stranded DNA exonuclease with 5’ to 3’ activity. It cuts the double-stranded DNA, leaving 3’ overhangs. Beta is a protein that binds to single-stranded DNA and assists homologous recombination by promoting annealing between the homology regions of the inserted DNA and the chromosomal DNA. Gam functions to protect the DNA insert from being destroyed by native nucleases within the cell. Recombinant DNA can be engineered for a wide variety of purposes. The techniques utilized allow for specific modification of genes, making it possible to modify any biomolecule. It can be engineered for laboratory purposes, where it can be used to analyze genes in a given organism. In the pharmaceutical industry, proteins can be modified using recombination techniques. Some of these proteins include human insulin. Recombinant insulin is synthesized by inserting the human insulin gene into "E. coli", which then produces insulin for human use. Other proteins, such as human growth hormone, factor VIII, and hepatitis B vaccine, are produced using similar means | https://en.wikipedia.org/wiki?curid=11344743 |