Dataset schema: id (int64, 39 to 79M); url (string, 32 to 168 chars); text (string, 7 to 145k chars); source (string, 2 to 105 chars); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
44,745,151
https://en.wikipedia.org/wiki/Automotive%20Dealership%20Institute
Automotive Dealership Institute is an Arizona-based, licensed training program that offers classroom and online instruction in management, finance, and insurance for the auto industry. History Automotive Dealership Institute was founded in December 2004. The institute offers management training for automotive dealerships. Alan Algan is Executive Director, and Robert W. Serum is the Chancellor. References 2004 establishments in Arizona Automotive industry in the United States Transport education Vocational education in the United States
Automotive Dealership Institute
[ "Physics" ]
89
[ "Physical systems", "Transport", "Transport education" ]
44,747,619
https://en.wikipedia.org/wiki/Chemiresistor
A chemiresistor is a material that changes its electrical resistance in response to changes in the nearby chemical environment. Chemiresistors are a class of chemical sensors that rely on the direct chemical interaction between the sensing material and the analyte. The sensing material and the analyte can interact by covalent bonding, hydrogen bonding, or molecular recognition. Several different materials have chemiresistor properties: semiconducting metal oxides, some conductive polymers, and nanomaterials like graphene, carbon nanotubes and nanoparticles. Typically these materials are used as partially selective sensors in devices like electronic tongues or electronic noses. A basic chemiresistor consists of a sensing material that bridges the gap between two electrodes or coats a set of interdigitated electrodes. The resistance between the electrodes can be easily measured. The sensing material has an inherent resistance that can be modulated by the presence or absence of the analyte. During exposure, analytes interact with the sensing material. These interactions cause changes in the resistance reading. In some chemiresistors the resistance changes simply indicate the presence of analyte. In others, the resistance changes are proportional to the amount of analyte present; this allows for the amount of analyte present to be measured. History As far back as 1965 there are reports of semiconductor materials exhibiting electrical conductivities that are strongly affected by ambient gases and vapours. However, it was not until 1985 that Wohltjen and Snow coined the term chemiresistor. The chemiresistive material they investigated was copper phthalocyanine, and they demonstrated that its resistivity decreased in the presence of ammonia vapour at room temperature. In recent years chemiresistor technology has been used to develop promising sensors for many applications, including conductive polymer sensors for secondhand smoke, carbon nanotube sensors for gaseous ammonia, and metal oxide sensors for hydrogen gas. The ability of chemiresistors to provide accurate real-time information about the environment through small devices that require minimal electricity makes them an appealing addition to the internet of things. Types of chemiresistor sensors Device architectures Chemiresistors can be made by coating an interdigitated electrode with a thin film or by using a thin film or other sensing material to bridge the single gap between two electrodes. Electrodes are typically made of conductive metals such as gold and chromium which make good ohmic contact with thin films. In both architectures, the sensing material controls the conductance between the two electrodes; however, each device architecture has its own advantages and disadvantages. Interdigitated electrodes allow for a greater amount of the film's surface area to be in contact with the electrode. This allows for more electrical connections to be made and increases the overall conductivity of the system. Interdigitated electrodes with finger sizes and finger spacing on the order of microns are difficult to manufacture and require the use of photolithography. Larger features are easier to fabricate and can be manufactured using techniques such as thermal evaporation. Both interdigitated electrode and single-gap systems can be arranged in parallel to allow for the detection of multiple analytes by one device. 
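Where the resistance change is proportional to the amount of analyte, the readout reduces to a linear calibration. A minimal sketch of this (all constants are assumptions for illustration, not values from the article):

```python
# Minimal chemiresistor readout sketch, assuming a device whose fractional
# resistance change is linear in analyte concentration (the second kind of
# device described above). R0 and SENSITIVITY are assumed values.
R0 = 10_000.0          # baseline resistance, ohms (assumed)
SENSITIVITY = 0.002    # fractional resistance change per ppm (assumed)

def concentration_ppm(r_measured: float) -> float:
    """Estimate analyte concentration from a measured resistance."""
    relative_change = (r_measured - R0) / R0
    return relative_change / SENSITIVITY

print(concentration_ppm(10_400.0))  # 4% change -> 20.0 ppm under these assumptions
```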
Sensing materials Semiconducting metal oxides Metal oxide chemiresistor sensors were first commercialized in 1970 in a carbon monoxide detector that used powdered SnO2. However, there are many other metal oxides that have chemiresistive properties. Metal oxide sensors are primarily gas sensors, and they can sense both oxidizing and reducing gases. This makes them ideal for use in industrial situations where gases used in manufacturing can pose a risk to worker safety. Sensors made from metal oxides require high temperatures (200 °C or higher) to operate because, in order for the resistivity to change, an activation energy must be overcome. Graphene In comparison to the other materials, graphene chemiresistor sensors are relatively new but have shown excellent sensitivity. Graphene is an allotrope of carbon that consists of a single layer of graphite. It has been used in sensors to detect vapour-phase molecules, pH, proteins, bacteria, and simulated chemical warfare agents. Carbon nanotubes The first published report of nanotubes being used as chemiresistors was made in 2000. Since then there has been research into chemiresistors and chemically sensitive field effect transistors fabricated from individual single-walled nanotubes, bundles of single-walled nanotubes, bundles of multi-walled nanotubes, and carbon nanotube–polymer mixtures. It has been shown that a chemical species can alter the resistance of a bundle of single-walled carbon nanotubes through multiple mechanisms. Carbon nanotubes are useful sensing materials because they have low detection limits and quick response times; however, bare carbon nanotube sensors are not very selective. They can respond to the presence of many different gases, from gaseous ammonia to diesel fumes. Carbon nanotube sensors can be made more selective by using a polymer as a barrier, doping the nanotubes with heteroatoms, or adding functional groups to the surface of the nanotubes. Nanoparticles Many different nanoparticles of varying size, structure and composition have been incorporated into chemiresistor sensors. The most commonly used are thin films of gold nanoparticles coated with self-assembled monolayers (SAMs) of organic molecules. The SAM is critical in defining some of the nanoparticle assembly’s properties. Firstly, the stability of the gold nanoparticles depends upon the integrity of the SAM, which prevents them from sintering together. Secondly, the SAM of organic molecules defines the separation between the nanoparticles, e.g. longer molecules cause the nanoparticles to have a wider average separation. The width of this separation defines the barrier that electrons must tunnel through when a voltage is applied and electric current flows. Thus, by defining the average distance between individual nanoparticles, the SAM also defines the electrical resistivity of the nanoparticle assembly. Finally, the SAMs form a matrix around the nanoparticles that chemical species can diffuse into. As new chemical species enter the matrix, they change the inter-particle separation, which in turn affects the electrical resistance. Analytes diffuse into the SAMs at proportions defined by their partition coefficient, and this characterizes the selectivity and sensitivity of the chemiresistor material. Conductive polymers Conductive polymers such as polyaniline and polypyrrole can be used as sensing materials when the target interacts directly with the polymer chain, resulting in a change in conductivity of the polymer. 
These types of systems lack selectivity due to the wide range of target molecules that can interact with the polymer. Molecularly imprinted polymers can add selectivity to conductive polymer chemiresistors. A molecularly imprinted polymer is made by polymerizing monomers around a target molecule and then removing the target molecule from the polymer, leaving behind cavities matching the size and shape of the target molecule. Molecularly imprinting the conductive polymer increases the sensitivity of the chemiresistor by selecting for the target's general size and shape as well as its ability to interact with the chain of the conductive polymer. References See also Chemical field-effect transistor Materials Sensors
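The SAM-defined tunneling barrier described above suggests a simple exponential model: film resistance grows roughly as exp(βd) with inter-particle separation d, so analyte uptake that swells the matrix raises the resistance. A hedged sketch with assumed parameters (β and the swelling law are illustrative, not from the article):

```python
import math

BETA = 1.0   # tunneling decay constant, 1/angstrom (assumed)
R0 = 1.0e5   # baseline film resistance, ohms (assumed)

def relative_response(swelling_angstrom: float) -> float:
    """Fractional resistance change dR/R0 when analyte absorption widens
    the average inter-particle separation by `swelling_angstrom`
    (R ~ R0 * exp(BETA * swelling))."""
    return math.exp(BETA * swelling_angstrom) - 1.0

def swelling_from_uptake(concentration_ppm: float, partition_coeff: float = 50.0,
                         swell_per_ppm: float = 1e-4) -> float:
    """Toy uptake model: swelling proportional to the analyte's partition
    coefficient into the SAM matrix (an assumption for illustration)."""
    return swell_per_ppm * partition_coeff * concentration_ppm

for c in (10, 100, 1000):  # ppm
    print(f"{c:>5} ppm -> dR/R0 = {relative_response(swelling_from_uptake(c)):.3f}")
```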
Chemiresistor
[ "Physics", "Technology", "Engineering" ]
1,513
[ "Materials", "Sensors", "Matter", "Measuring instruments" ]
44,750,391
https://en.wikipedia.org/wiki/Griffiths%20group
In mathematics, more specifically in algebraic geometry, the Griffiths group of a projective complex manifold X measures the difference between homological equivalence and algebraic equivalence, which are two important equivalence relations of algebraic cycles. More precisely, it is defined as $\operatorname{Griff}^k(X) := Z^k(X)_{\mathrm{hom}} / Z^k(X)_{\mathrm{alg}}$, where $Z^k(X)$ denotes the group of algebraic cycles of some fixed codimension k and the subscripts indicate the groups that are homologically trivial, respectively algebraically equivalent to zero. This group was introduced by Phillip Griffiths, who showed that for a general quintic in $\mathbb{P}^4$ (projective 4-space), the group is not a torsion group. Notes References Algebraic geometry
Griffiths group
[ "Mathematics" ]
122
[ "Fields of abstract algebra", "Algebraic geometry" ]
44,750,399
https://en.wikipedia.org/wiki/Tri-level%20sync
Tri-level sync is an analogue video synchronization pulse primarily used for the locking of high-definition video signals (genlock). It is preferred in HD environments over black and burst, as timing jitter is reduced due to the nature of its higher frequency. It also benefits from having no DC content, as the pulses are in both polarities. Synchronization Modern real-time multi-source HD facilities have many pieces of equipment that all output HD-SDI video. If this baseband video is to be mixed, switched or luma keyed with any other sources, then they will need to be synchronous, i.e. the first pixel of the first line must be transmitted at the same time (within a few microseconds). This then allows the switcher to cut, mix or key these sources together with a minimal amount of delay (~1 HD video line, i.e. 1/(1125×25) seconds for 50i video). This synchronization is done by supplying each piece of equipment with either a tri-level sync or black-and-burst input. There are video switchers that do not require synchronous sources, but these operate with a much bigger delay. Waveform The main pulse definition is as follows: a negative-going pulse of 300 mV lasting 40 sample clocks, followed by a positive-going pulse of 300 mV lasting 40 sample clocks. The allowed rise/fall time for each of the transitions is 4 sample clocks. This is with a clock rate of 74.25 MHz. References Synchronization Film and video technology Broadcast engineering Television terminology
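The pulse shape can be generated directly from the numbers above. A hedged sketch (whether the 40-clock lobes include the transitions is an assumption here; the definition only bounds each rise/fall at 4 clocks):

```python
import numpy as np

FCLK = 74.25e6   # sample clock rate, Hz (from the definition above)
LOBE = 40        # sample clocks per 300 mV lobe (from the definition above)
RAMP = 4         # allowed rise/fall time per transition, in clocks
A = 0.3          # lobe amplitude, volts (300 mV)

def trilevel_sync() -> np.ndarray:
    """Idealized tri-level sync pulse: blanking -> -300 mV -> +300 mV -> blanking,
    with linear RAMP-clock transitions (an assumption; only the rise/fall
    bound is stated above)."""
    return np.concatenate([
        np.linspace(0.0, -A, RAMP, endpoint=False),  # fall into negative lobe
        np.full(LOBE - RAMP, -A),                    # hold negative lobe
        np.linspace(-A, A, RAMP, endpoint=False),    # swing to positive lobe
        np.full(LOBE - RAMP, A),                     # hold positive lobe
        np.linspace(A, 0.0, RAMP),                   # return to blanking level
    ])

pulse = trilevel_sync()
print(f"{pulse.size} clocks, {pulse.size / FCLK * 1e6:.2f} microseconds")
```

The equal positive and negative lobes are what give the near-zero DC content noted above.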
Tri-level sync
[ "Engineering" ]
334
[ "Broadcast engineering", "Electronic engineering", "Telecommunications engineering", "Synchronization" ]
44,750,726
https://en.wikipedia.org/wiki/Design%20space%20verification
Design space verification is defined by the European Medicines Agency as the verification that material inputs and processes are able to scale to commercial manufacturing levels while maintaining a standard of quality. It is therefore difficult to conduct design space verification while not operating at target commercial levels, and verification should be conducted over the manufacturing lifecycle. Changes in manufacturing output within the design space should not present any risks. Should the manufacturing load exceed the boundaries defined as normal operating ranges, unanticipated scale-dependent issues can occur. Design space verification is a part of process validation as defined by the EMA in conjunction with the FDA. Its purpose is to guarantee end product quality within a range of manufacturing boundaries. The effects of scale-up activities should be fully understood by the manufacturer. Most initial design space conclusions are based upon laboratory testing or pilot batches, with scale-up effects being inferred by experimentation or based on statistical evidence, simulations, or studies. Ongoing design space verification should be dependent upon the results of an assessment of the risk involved with scale-up activities, more specifically how scaling up production affects scale-dependent variables. Design space verification is much more focused in scope than overall process validation. Design space verification specifically aims to confirm output quality within a given operating range. This allows for flexibility in operating levels while guaranteeing production quality, and allows for changes in production quantities without necessitating a reevaluation of the production process. References External links NSF Health Sciences BioPharm International Quality management Formal methods Drug manufacturing
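As a toy illustration of confirming output quality within a given operating range, the sketch below checks hypothetical batch parameters against a design space and the tighter normal operating ranges; all parameter names and limits are invented for illustration:

```python
# All parameter names and limits below are hypothetical.
DESIGN_SPACE = {"temperature_C": (20.0, 40.0), "pressure_bar": (1.0, 3.0)}
NORMAL_OPERATING_RANGES = {"temperature_C": (25.0, 35.0), "pressure_bar": (1.5, 2.5)}

def out_of_range(params: dict, ranges: dict) -> dict:
    """Return the parameters falling outside the given ranges."""
    return {name: value for name, value in params.items()
            if not (ranges[name][0] <= value <= ranges[name][1])}

batch = {"temperature_C": 38.0, "pressure_bar": 2.0}
print("outside normal operating ranges:", out_of_range(batch, NORMAL_OPERATING_RANGES))
print("outside design space:", out_of_range(batch, DESIGN_SPACE))
```

A batch outside the normal operating ranges but still inside the design space is exactly the situation flagged above as a potential source of unanticipated scale-dependent issues.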
Design space verification
[ "Engineering" ]
299
[ "Software engineering", "Formal methods" ]
44,752,216
https://en.wikipedia.org/wiki/Elena%20Aprile
Elena Aprile (born March 12, 1954) is an Italian-American experimental particle physicist. She has been a professor of physics at Columbia University since 1986. She is the founder and spokesperson of the XENON Dark Matter Experiment. Aprile is well known for her work with noble liquid detectors and for her contributions to particle astrophysics in the search for dark matter. Education and academic career Aprile studied physics at the University of Naples and completed her master's thesis at CERN under the supervision of Professor Carlo Rubbia. After receiving her Laurea degree in 1978, she enrolled at the University of Geneva, from which she received her Ph.D. in physics in 1982. She moved to Harvard University in 1983 as a postdoctoral researcher in Carlo Rubbia's group. Aprile joined the faculty of Columbia University in 1986, attaining her full professorship in 2001. From 2003 to 2009, Aprile served as co-director of the Columbia Astrophysics Laboratory. Research Aprile is a specialist in noble liquid detectors and their application in particle physics and astrophysics. She began working on liquid argon detectors as a graduate student at CERN, continuing her research as a postdoctoral fellow at Harvard. At Columbia she investigated the properties of noble liquids for radiation spectroscopy and imaging in astrophysics. This work led to the realization of the first liquid xenon time projection chamber (LXeTPC) as a Compton telescope for MeV gamma rays. From 1996 to 2001, Aprile was spokesperson of the NASA-sponsored Liquid Xenon Gamma-Ray Imaging Telescope (LXeGRIT) project, leading the first engineering test of the telescope in a near-space environment and subsequent science campaigns with long-duration balloon flights. LXeGRIT used a liquid xenon time projection chamber as a Compton telescope for imaging cosmic sources in the 0.15 to 10 MeV energy band. A total of about 36 hours of data were gathered from two long-duration flights in 1999 and 2000, at an average altitude of 39 km. Since 2001, Aprile's research has focused on particle astrophysics, specifically the direct detection of dark matter with liquid xenon. Aprile is the founder and spokesperson of the XENON dark matter experiment, which aims to discover WIMPs as they scatter off xenon atoms in massive yet ultra-low background liquid xenon detectors operated deep underground. Awards Aprile received the National Science Foundation Career Award in 1991 and the Japan Society for the Promotion of Science Award in 1999. She has been a Fellow of the American Physical Society since 2000. In 2005 she received the medal of Ufficiale della Repubblica Italiana from the Italian President Carlo Azeglio Ciampi. Asteroid 268686 Elenaaprile, discovered by Italian amateur astronomer Silvano Casulli in 2006, was named in her honor. The official naming citation was published by the Minor Planet Center on 4 November 2017. In 2019 she received the Lancelot M. Berkeley–New York Community Trust Prize for Meritorious Work in Astronomy from the American Astronomical Society. In 2020 she was selected as Margaret Burbidge Visiting Professor of Physics at UC San Diego, and elected to the American Academy of Arts and Sciences. 
In 2021 she was elected to the National Academy of Sciences. References External links Columbia University faculty homepage XENON1T homepage XENON Columbia homepage LXeGRIT homepage Studio 360 story Oral History Interview of Elena Aprile by David Zierler on April 30, 2021, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD USA 1954 births Living people Italian expatriates in Switzerland Scientists from Milan Experimental physicists People associated with CERN 20th-century Italian physicists Particle physicists Columbia University faculty Italian women physicists Women astrophysicists Italian astrophysicists 21st-century Italian physicists Italian emigrants to the United States Fellows of the American Physical Society Fellows of the American Academy of Arts and Sciences Members of the United States National Academy of Sciences 21st-century Italian women scientists
Elena Aprile
[ "Physics" ]
816
[ "Particle physicists", "Particle physics" ]
21,912,073
https://en.wikipedia.org/wiki/Neutron%20capture%20nucleosynthesis
Neutron capture nucleosynthesis describes two nucleosynthesis pathways: the r-process and the s-process, for rapid and slow neutron captures, respectively. R-process describes neutron capture in a region of high neutron flux, such as during supernova nucleosynthesis after core-collapse, and yields neutron-rich nuclides. S-process describes neutron capture that is slow relative to the rate of beta decay, as for stellar nucleosynthesis in some stars, and yields nuclei with stable nuclear shells. Each process is responsible for roughly half of the observed abundances of elements heavier than iron. The importance of neutron capture to the observed abundance of the chemical elements was first described in 1957 in the B2FH paper. References Further reading capture nucleosynthesis Nucleosynthesis
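The r/s distinction above is a timescale comparison; here is a toy sketch of the criterion (the example timescales are illustrative, not measured values):

```python
def capture_regime(tau_capture_s: float, tau_beta_s: float) -> str:
    """Classify neutron capture as slow (s-process) or rapid (r-process)
    by comparing the mean time between captures with the beta-decay
    lifetime of the intermediate nuclide."""
    return "s-process" if tau_capture_s > tau_beta_s else "r-process"

# Illustrative timescales: captures spaced about a year apart in a star,
# versus milliseconds apart in a supernova's high neutron flux.
print(capture_regime(tau_capture_s=3.15e7, tau_beta_s=60.0))  # s-process
print(capture_regime(tau_capture_s=1e-3, tau_beta_s=60.0))    # r-process
```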
Neutron capture nucleosynthesis
[ "Physics", "Chemistry" ]
170
[ "Nuclear fission", "Astrophysics", "Nucleosynthesis", "Nuclear chemistry stubs", "Nuclear physics", "Nuclear fusion" ]
24,884,635
https://en.wikipedia.org/wiki/Tiamulin
Tiamulin (previously thiamutilin) is a pleuromutilin antibiotic drug that is used in veterinary medicine particularly for pigs and poultry. Tiamulin is a diterpene antimicrobial with a pleuromutilin chemical structure similar to that of valnemulin. References Pleuromutilin antibiotics Secondary alcohols Ketones Carboxylate esters Thioethers Cyclopentanes Diethylamino compounds Vinyl compounds
Tiamulin
[ "Chemistry" ]
98
[ "Ketones", "Functional groups" ]
24,891,245
https://en.wikipedia.org/wiki/Dental%20compomer
Dental compomers, also known as polyacid-modified resin composite, are used in dentistry as a filling material. They were introduced in the early 1990s as a hybrid of two other dental materials, dental composites and glass ionomer cement, in an effort to combine their desirable properties: the aesthetics of dental composites (they are white and closely mimic tooth tissue, so can camouflage into a tooth very well) and the fluoride-releasing ability of glass ionomer cements (which helps to prevent further tooth decay). History Compomers were introduced in the early 1990s. Previously available restorative materials included dental amalgam, glass ionomer cement, resin modified glass ionomer cement and dental composites. Composition Compomers are resin-based materials like dental composites, and the components are largely the same. The setting reaction is similarly a polymerisation process of resin monomers (e.g. urethane dimethacrylate) which have been modified by polyacid groups, and is induced by free radicals released from a photoinitiator such as camphorquinone. To induce the release of these free radicals, the photoinitiator must be exposed to a specific wavelength of light, blue light in the case of camphorquinone. There is a second, less significant acid-base setting reaction which takes place after the light-cured polymerisation reaction; this setting reaction occurs as the compomer absorbs water from the oral environment. Also in compomer is fluoroaluminosilicate glass which, when broken down by hydrogen ions through an acid-base reaction, releases fluoride. This process requires water absorbed from the oral environment. To aid water absorption and fluoride release, some of the resins in the compomer matrix are more hydrophilic (e.g. glycerol dimethacrylate). The source of the hydrogen ions that break the fluoroaluminosilicate glass particles apart are certain resin monomers that have a carboxyl group attached. Some compomers instead source their hydrogen ions from a methacrylated polycarboxylic acid copolymer that is similarly used in some resin modified glass ionomer cements. Properties Aesthetics Compomers are tooth coloured materials, and so their aesthetics can immediately be seen as better than those of dental amalgams. It has been shown that ratings in various aesthetic areas are better for compomers than resin modified glass ionomer cements. Compomers are also available in various non-natural colours from various dental companies for use in deciduous teeth. Compomers and resin-modified glass ionomers have better aesthetics than conventional glass ionomer cements. Fluoride release Compomers and glass ionomer cements can release fluoride. This property can be useful in cases where a patient has a higher risk of experiencing tooth decay in the future. Fluoride is a mineral which helps strengthen our teeth and protects them from decay, and it is found in many dental products including toothpaste. Compomers and glass ionomer cements are able to release fluoride over extended periods, and this may help to reduce the risk of a tooth decaying further. However, such a property does not negate the need for excellent oral hygiene to prevent oral disease. Compomers are recommended for patients at medium risk of developing dental caries. 
There is conflicting evidence regarding the amount of fluoride compomers can release: Powers, Wataha and Chen (2017) state that compomers do not release as much fluoride as glass ionomer cements because they have a lower concentration of fluoroaluminosilicate glass particles; there is supporting evidence to suggest compomers only release 10% of that of glass ionomer cement. On the other hand, Richard van Noort (2013) states that, due to recent developments, modern compomers are now capable of releasing the same amount of fluoride over the lifetime of the restoration as glass ionomer cements. Emerging evidence has shown that compomers and glass ionomer cements are able to absorb fluoride from the oral environment when their own fluoride stores are depleted, a process described as 'recharging'. The material can then release this stored fluoride when the fluoride concentration in the oral environment falls, thus exposing the teeth to fluoride for longer. This recharging ability is not as effective in compomers as it is in glass ionomer cements. Nevertheless, this can further reduce the risk of tooth decay. There is evidence to show compomers have no advantage over an amalgam restoration with a fluoride releasing bonding agent, which releases mercury and fluoride. Polymerisation shrinkage Compomers undergo some shrinkage during the setting reaction, and the extent of this polymerisation shrinkage is similar to that of dental composites. Water uptake Compomers absorb water more rapidly than dental composites due to the addition of hydrophilic resin monomers within the matrix (see Composition section above). As such, water equilibrium is reached within days rather than weeks, months or even years in the case of dental composite materials. This property has the advantage of compensating for the polymerisation shrinkage during the setting reaction, thus reducing any gap that develops at the cavity margins. However, it can also cause fracture of all-ceramic crowns when compomer is used as the luting cement. Therefore, it is not recommended to use the luting version of compomer for cementing all-ceramic crowns. More information on luting compomer can be found below. Mechanical properties Compomers have poorer mechanical properties than dental composites, with a lower compressive, flexural and tensile strength. Therefore, compomers are not an ideal material for load bearing restorations. In terms of wear resistance, compomers wear less quickly than glass ionomer and resin modified glass ionomer cements, but do not perform as well as dental composites. Clinical application Handling Handling and ease of use of compomers are generally seen as good by dental professionals. Compomers are available in both normal and flowable forms, with the manufacturers of the flowable compomers claiming that they have the ability to shape to the cavity without the need for hand instruments. Adhesion to tooth tissue It is important to note that compomers do not bond to tooth tissue like glass ionomer cements; this is the same issue with dental composites. It is therefore essential to use bonding agents to aid adhesion of the compomer to tooth. Finishing and polishing The process of finishing and polishing compomers is similar to that of dental composites. After finishing and polishing, compomers have a similar surface roughness to dental composites. 
Indications for use As a restorative material, compomers are limited to low-stress bearing situations (proximal and cervical restorations) due to their mechanical properties and wear resistance as detailed in the Properties section above. Compomers can be used as a cavity lining material to provide pulpal protection. Compomers are notably used in paediatric dentistry. Possible uses include: As a restorative material, particularly for Class I and II cavities Fissure sealants For cementation of orthodontic bands Survival rate Studies have shown compomers to have high survival rates 2-4 years following placement. Some issues that were identified 2-3 years after placement include discolouration around the restoration margins and loss of marginal integrity. Compomer luting cement Composition A powder and liquid are mixed together to form the luting cement. The powder contains fluoroaluminosilicate glass particles, sodium fluoride, and self-cured and light-cured initiators. The liquid contains poly-acid modified monomers and water. The carboxylic acid groups in the methacrylate-carboxylic acid monomer help with adhesion. Properties The advantages of compomer luting cement are listed below: Retentive High bond strength High compressive strength High flexural strength High fracture toughness Low solubility Sustained fluoride release with the potential to act as a fluoride reservoir (recharges when it becomes depleted of fluoride, see 'Fluoride release' in Properties section above for more details) The compressive and tensile strength of compomer cements are comparable to that of glass ionomer, resin-modified glass ionomer, and zinc polycarboxylate cements. Indications for use The use of the luting version of compomer is not recommended for all-ceramic crowns, nor as a core or filling material. See 'Water uptake' in Properties section above for more details. Compomer luting cement can, however, be used for cast alloy and ceramic-metal restorations. See also Dental restorative materials Dental composite Glass ionomer cement References Composite materials Dental materials
Dental compomer
[ "Physics" ]
1,872
[ "Materials", "Composite materials", "Dental materials", "Matter" ]
33,506,152
https://en.wikipedia.org/wiki/Phytoextraction%20process
Phytoextraction is a subprocess of phytoremediation in which plants remove dangerous elements or compounds from soil or water, most usually heavy metals, metals that have a high density and may be toxic to organisms even at relatively low concentrations. The heavy metals that plants extract are toxic to the plants as well, and the plants used for phytoextraction are known hyperaccumulators that sequester extremely large amounts of heavy metals in their tissues. Phytoextraction can also be performed by plants that uptake lower levels of pollutants, but due to their high growth rate and biomass production, may remove a considerable amount of contaminants from the soil. Heavy metals and biological systems Heavy metals can be a major problem for any biological organism as they may be reactive with a number of chemicals essential to biological processes. They can also break apart other molecules into even more reactive species (such as reactive oxygen species), which also disrupt biological processes. These reactions deplete the concentration of important molecules and also produce dangerously reactive molecules such as the radicals O• and OH•. Non-hyperaccumulators also absorb some concentration of heavy metals, as many heavy metals are chemically similar to other metals that are essential to the plants' life. The process For a plant to extract a heavy metal from water or soil, five things need to happen. The metal must dissolve in something the plant roots can absorb. The plant roots must absorb the heavy metal. The plant must chelate the metal to both protect itself and make the metal more mobile (this can also happen before the metal is absorbed). Chelation is a process by which a metal is surrounded and chemically bonded to an organic compound. The plant moves the chelated metal to a place to safely store it. Finally, the plant must adapt to any damage the metals cause during transportation and storage. Dissolution In their normal states, metals cannot be taken into any organism. They must be dissolved as an ion in solution to be mobile in an organism. Once the metal is mobile, it can either be directly transported over the root cell wall by a specific metal transporter or carried over by a specific agent. The plant roots mediate this process by secreting compounds that capture the metal in the rhizosphere and then transport the metal over the cell wall. Some examples are phytosiderophores, organic acids, or carboxylates. If the metal is chelated at this point, then the plant does not need to chelate it later, and the chelator serves as a case to conceal the metal from the rest of the plant. This is a way for a hyper-accumulator to protect itself from the toxic effects of poisonous metals. Root absorption The first thing that happens when a metal is absorbed is it binds to the root cell wall. The metal is then transported into the root. Some plants then store the metal through chelation or sequestration. Many specific transition metal ligands contributing to metal detoxification and transport are up-regulated in plants when metals are available in the rhizosphere. At this point the metal can be alone or already sequestered by a chelating agent or other compound. To get to the xylem, the metal must then pass through the root symplasm. Root-to-shoot transport The systems that transport and store heavy metals are the most critical systems in a hyper-accumulator because the heavy metals will damage the plant before they are stored. 
The root-to-shoot transport of heavy metals is strongly regulated by gene expression. The genes that code for metal transport systems in plants have been identified. These genes are expressed in both hyper-accumulating and non-hyper-accumulating plants. There is a large body of evidence that genes known to code for the transport systems of heavy metals are constantly over-expressed in hyper-accumulating plants when they are exposed to heavy metals. This genetic evidence suggests that hyper-accumulators overdevelop their metal transport systems. This may be to speed up the root-to-shoot process, limiting the amount of time the metal is exposed to the plant systems before it is stored. Cadmium accumulation has been reviewed. These transporters are known as heavy metal transporting ATPases (HMAs). One of the most well-documented HMAs is HMA4, which belongs to the Zn/Co/Cd/Pb HMA subclass and is localized at xylem parenchyma plasma membranes. HMA4 is upregulated when plants are exposed to high levels of Cd and Zn, but it is downregulated in its non-hyperaccumulating relatives. Also, when the expression of HMA4 is increased, there is a correlated increase in the expression of genes belonging to the ZIP (Zinc regulated transporter Iron regulated transporter Proteins) family. This suggests that the root-to-shoot transport system acts as a driving force of the hyper-accumulation by creating a metal deficiency response in roots. Storage Systems that transport and store heavy metals are the most critical systems in a hyper-accumulator, because heavy metals damage the plant before they are stored. Often in hyperaccumulators the heavy metals are stored in the leaves. How phytoextraction can be useful For plants There are several theories to explain why it would be beneficial for a plant to do this. For example, the "elemental defence" hypothesis assumes that predators may avoid eating hyperaccumulators because of the heavy metals. So far, scientists have not been able to determine a correlation. In 2002 a study was done by the Department of Pharmacology at Bangabandhu Sheikh Mujib Medical University in Bangladesh that used water hyacinth to remove arsenic from water. This study showed that water could be completely purified of arsenic in a few hours and that the plant could then be used for animal feed, firewood, and many other practical purposes. Since water hyacinth is invasive, it is inexpensive to grow and extremely practical for this purpose. See also Biomining References Ecological restoration Bioremediation
Phytoextraction process
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
1,266
[ "Ecological restoration", "Phytoremediation plants", "Biodegradation", "Environmental engineering", "Ecological techniques", "Bioremediation", "Environmental soil science" ]
33,509,814
https://en.wikipedia.org/wiki/Inverse%20electron-demand%20Diels%E2%80%93Alder%20reaction
The inverse electron demand Diels–Alder reaction, or DAINV or IEDDA, is an organic chemical reaction in which two new chemical bonds and a six-membered ring are formed. It is related to the Diels–Alder reaction, but unlike the Diels–Alder (or DA) reaction, the DAINV is a cycloaddition between an electron-rich dienophile and an electron-poor diene. During a DAINV reaction, three pi-bonds are broken, and two sigma bonds and one new pi-bond are formed. A prototypical DAINV reaction is shown on the right. DAINV reactions often involve heteroatoms, and can be used to form heterocyclic compounds. This makes the DAINV reaction particularly useful in natural product syntheses, where the target compounds often contain heterocycles. Recently, the DAINV reaction has been used to synthesize a drug transport system which targets prostate cancer. History The Diels–Alder reaction was first reported in 1928 by Otto Diels and Kurt Alder; they were awarded the Nobel Prize in Chemistry for their work in 1950. Since that time, use of the Diels–Alder reaction has become widespread. Conversely, DAINV lacks both a clear date of inception and the comparative prominence of the standard Diels–Alder reaction, because of the difficulty chemists had in differentiating normal from inverse electron-demand Diels–Alder reactions before the advent of modern computational methods. Much of the work in this area is attributed to Dale Boger, though other authors have published numerous papers on the subject. Mechanism Formal mechanism The mechanism of the DAINV reaction is controversial. While it is accepted as a formal [4+2] cycloaddition, it is not well understood whether or not the reaction is truly concerted. The accepted view is that most DAINV reactions occur via an asynchronous mechanism. The reaction proceeds via a single transition state, but not all bonds are formed or broken at the same time, as would be the case in a concerted mechanism. The formal DAINV mechanism for the reaction of acrolein and methyl vinyl ether is shown in the figure to the right. Though not entirely accurate, it provides a useful model for the reaction. During the course of the reaction, three pi-bonds (labeled with red) are broken, and three new bonds are formed (labeled in blue): two sigma bonds and one new pi-bond. Transition state Like the standard DA, DAINV reactions proceed via a single boat transition state, despite not being concerted. The single boat transition state is a simplification, but DFT calculations suggest that the time difference in bond scission and formation is minimal, and that despite potential asynchronicity, the reaction is concerted, with relevant bonds being either partially broken or partially formed at some point during the reaction. The near synchronicity of the DAINV means it can be treated similarly to the standard Diels–Alder reaction. The reaction can be modeled using a closed, boat-like transition state, with all bonds being in the process of forming or breaking at some given point, and therefore must obey the Woodward–Hoffmann general selection rules. This means that for a three component, six electron system, all components must interact in a suprafacial manner (or one suprafacial and two antarafacial). With all components being suprafacial, the allowed transition state is boat-like; a chair-like transition state would result in three two-electron antarafacial components. The chair-like case is thermally disallowed by the Woodward–Hoffmann rules. 
Molecular orbital theory Standard DA reactions In the standard Diels-Alder reaction, there are two components: the diene, which is electron rich, and the dienophile, which is electron poor. The relative electron richness and electron deficiency of the reactants can best be described visually, in a molecular orbital diagram. In the standard Diels–Alder, the electron-rich diene has molecular orbitals that are higher in energy than the orbitals of the electron-poor dienophile. This difference in relative orbital energies means that, of the frontier molecular orbitals, the HOMO of the diene (HOMO(diene)) and the LUMO of the dienophile (LUMO(dienophile)) are more similar in energy than the HOMO(dienophile) and the LUMO(diene). The strongest orbital interaction is between the most similar frontier molecular orbitals: HOMO(diene) and LUMO(dienophile). [4+2] dimerization reactions Dimerization reactions are neither normally nor inversely accelerated, and are usually low yielding. In this case, two monomers react in a DA fashion. Because the orbital energies are identical, there is no preference for interaction of the HOMO or the LUMO of either the diene or dienophile. The low yield of dimerization reactions is explained by second-order perturbation theory. The LUMO and HOMO of each species are farther apart in energy in a dimerization than in either a normally or inversely accelerated Diels–Alder. This means that the orbitals interact less, and there is a lower thermodynamic drive for dimerization. Diels–Alder with inverse electron demand In the dimerization reactions, the diene and dienophile were equally electron rich (or equally electron poor). If the diene becomes any less electron rich, or the dienophile any more so, the possible [4+2] cycloaddition reaction will then be a DAINV reaction. In the DAINV reaction, the LUMO(diene) and HOMO(dienophile) are closer in energy than the HOMO(diene) and LUMO(dienophile). Thus, the LUMO(diene) and HOMO(dienophile) are the frontier orbitals that interact the most strongly, and result in the most energetically favourable bond formation. Regiochemistry and stereochemistry of DAINV Regiochemistry Regiochemistry in DAINV reactions can be reliably predicted in many cases. This can be done one of two ways: either by electrostatic (charge) control, or by orbital control. To predict the regiochemistry via charge control, one must consider the resonance forms of the reactants. These resonance forms can be used to assign partial charges to each of the atoms. Partially negative atoms on the diene will bond to partially positive atoms on the dienophile, and vice versa. Predicting the regiochemistry of the reaction via orbital control requires one to calculate the relative orbital coefficients on each atom of the reactants. The HOMO of the dienophile reacts with the LUMO of the diene. The relative orbital size on each atom is represented by orbital coefficients in frontier molecular orbital (FMO) theory. Orbitals will align to maximize the bonding interactions, and minimize the anti-bonding interactions. Alder–Stein principle The Alder–Stein principle states that the stereochemistry of the reactants is maintained in the stereochemistry of the products during a Diels–Alder reaction. This means that groups which were cis in relation to one another in the starting materials will be syn to one another in the product, and groups that were trans to one another in the starting material will be anti in the product. 
The Alder–Stein principle has no bearing on the relative orientation of groups on the two starting materials. One cannot predict, via this principle, whether a substituent on the diene will be syn or anti to a substituent on the dienophile. The Alder–Stein principle is only consistent across the self-same starting materials. The relationship is only valid for the groups on the diene alone, or the groups on the dienophile alone. The relative orientation of groups between the two reactants can be predicted by the endo selection rule. Endo selection rule Similarly to the standard Diels–Alder reaction, the DAINV also obeys a general endo selection rule. In the standard Diels–Alder, it is known that electron-withdrawing groups on the dienophile will approach endo, with respect to the diene. The exact cause of this selectivity is still debated, but the most accepted view is that endo approach maximizes secondary orbital overlap. The DAINV favors an endo orientation of electron-donating substituents on the dienophile. Since all Diels–Alder reactions proceed through a boat transition state, there is an "inside" and an "outside" of the transition state (inside and outside the "boat"). The substituents on the dienophile are considered "endo" if they are inside the boat, and "exo" if they are on the outside. The exo pathway would be favored by sterics, so a different explanation is needed to justify the general predominance of endo products. Frontier molecular orbital theory can be used to explain this outcome. When the substituents of the dienophile are exo, there is no interaction between those substituents and the diene. However, when the dienophile substituents are endo, there is considerable orbital overlap with the diene. In the case of DAINV, the overlap of the orbitals of the dienophile's substituents with the orbitals of the diene creates a favorable bonding interaction, stabilizing the transition state relative to the exo transition state. The reaction with the lower activation energy will proceed at a greater rate. Common dienes The dienes used in inverse electron-demand Diels–Alder reactions are relatively electron-deficient species, compared to the standard Diels–Alder, where the diene is electron rich. These electron-poor species have lower molecular orbital energies than their standard DA counterparts. This lowered energy results from the inclusion of either A) electron-withdrawing groups or B) electronegative heteroatoms. Aromatic compounds such as triazines and tetrazines can also react in DAINV reactions. Other common classes of dienes are oxo- and aza-butadienes. The key quality of a good DAINV diene is a significantly lowered HOMO and LUMO, as compared to standard DA dienes. Below is a table showing a few commonly used DAINV dienes, their HOMO and LUMO energies, and some standard DA dienes, along with their respective MO energies. Common dienophiles The dienophiles used in inverse electron demand Diels-Alder reactions are, unlike in the standard DA, very electron rich, containing one or more electron-donating groups. This results in higher orbital energies, and thus more orbital overlap with the LUMO of the diene. Common classes of dienophiles for the DAINV reaction include vinyl ethers and vinyl acetals, imines, enamines, alkynes and highly strained olefins. The most important consideration in choice of dienophile is its relative orbital energies. Both HOMO and LUMO impact the rate and selectivity of the reaction. 
A table of common DAINV dienophiles, standard DA dienophiles, and their respective MO energies can be seen below. A second table shows how electron richness in the dienophiles affects the rate of reaction with a very electron poor diene, namely hexachlorocyclopentadiene. The more electron rich the dienophile is, the higher the rate of the reaction will be. This is very clear when comparing the relative rates of reaction for styrene and the less electron rich p-nitrostyrene; the more electron rich styrene reacts roughly 40% faster than p-nitrostyrene. Scope and applications DAINV reactions provide a pathway to a rich library of synthetic targets, and have been utilized to form many highly functionalized systems, including selectively protected sugars, an important contribution to the field of sugar chemistry. In addition, DAINV reactions can produce an array of different products from a single starting material, such as tetrazine. DAINV reactions have been utilized for the synthesis of several natural products, including (-)-CC-1065, a parent compound in the Duocarmycin series, which found use as an anticancer treatment. Several drug candidates in this series have progressed into clinical trials. The DAINV reaction was used to synthesise the PDE-I and PDE-II sections of (-)-CC-1065. The first reaction in the sequence is a DAINV reaction between the tetrazine and vinyl acetal, followed by a retro-Diels–Alder reaction to afford a 1,2-diazine product. After several more steps, an intramolecular DAINV reaction occurs, followed again by a retro Diels-Alder in situ, to afford an indoline product. This indoline is converted into either PDE-I or PDE-II in a few synthetic steps. A DAINV reaction between 2,3,4,5-tetrachlorothiophene-1,1-dioxide (diene) and a 4,7-dihydroisoindole derivative (dienophile) afforded a new precursor for tetranaphthoporphyrins (TNP) bearing perchlorinated aromatic rings. This precursor can be transformed into the corresponding porphyrins by Lewis acid-catalyzed condensation with aromatic aldehydes and further oxidation by DDQ. Polychlorination of the TNP system has a profound favorable effect on its solubility. Heavy aggregation and poor solubility of the parent tetranaphthoporphyrins severely degrade the usefulness of this potentially very valuable porphyrin family. Thus, the observed effect of polychlorination is very welcome. Besides the effect on solubility, polychlorination also turned out to substantially improve the stability of these compounds towards photooxidation, which has been known to be another serious drawback of tetranaphthoporphyrins. See also Diels–Alder reaction Cycloaddition Pericyclic reaction Molecular orbital theory Boger pyridine synthesis External links Organic Syntheses Preparation References Cycloadditions Name reactions
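The frontier-molecular-orbital argument in the sections above reduces to comparing two energy gaps. A small sketch of that comparison, with illustrative (assumed) orbital energies in eV:

```python
def classify_diels_alder(homo_diene: float, lumo_diene: float,
                         homo_dienophile: float, lumo_dienophile: float) -> str:
    """Classify a [4+2] pair by its narrower frontier-orbital gap:
    the dominant interaction is HOMO(diene)-LUMO(dienophile) for a
    normal DA and HOMO(dienophile)-LUMO(diene) for DAINV."""
    normal_gap = lumo_dienophile - homo_diene
    inverse_gap = lumo_diene - homo_dienophile
    if abs(normal_gap - inverse_gap) < 1e-9:
        return "neutral (dimerization-like)"
    return "normal DA" if normal_gap < inverse_gap else "inverse electron demand (DAINV)"

# Electron-poor diene (e.g. a tetrazine) with an electron-rich vinyl ether;
# these energies are assumptions for illustration, not measured values.
print(classify_diels_alder(homo_diene=-9.5, lumo_diene=-1.5,
                           homo_dienophile=-9.0, lumo_dienophile=1.5))
```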
Inverse electron-demand Diels–Alder reaction
[ "Chemistry" ]
3,017
[ "Name reactions" ]
33,509,906
https://en.wikipedia.org/wiki/Superparamagnetic%20iron%E2%80%93platinum%20particles
Superparamagnetic iron platinum particles (SIPPs) are nanoparticles that have been reported as magnetic resonance imaging contrast agents. These are, however, investigational agents which have not yet been tried in humans. References Magnetic resonance imaging MRI contrast agents
Superparamagnetic iron–platinum particles
[ "Chemistry" ]
54
[ "Magnetic resonance imaging", "Nuclear chemistry stubs", "Nuclear magnetic resonance", "Nuclear magnetic resonance stubs" ]
33,510,850
https://en.wikipedia.org/wiki/Windowpane%20oyster
The windowpane oyster (Placuna placenta) is a bivalve marine mollusk in the family of Placunidae. It is edible, but valued more for its shell (and its rather small pearls). The oyster's shells have been used for thousands of years as a glass substitute because of their durability and translucence. More recently, they have been used in the manufacture of decorative items such as chandeliers and lampshades; in this use, the shell is known as the capiz shell (kapis). Capiz shells are also used as raw materials for glue, chalk and varnish. Distribution extends from the shallows of the Gulf of Aden to around the Philippines, where it is abundant in the eponymous province of Capiz. The mollusks are found in muddy or sandy shores, in bays, coves and lagoons to a depth of about . Populations have been in decline because of destructive methods of fishing and gathering such as trawling, dredging, blast fishing and surface-supplied diving. In the Philippines, fisheries are now regulated through permits, quotas, size limits and protected habitats. In spite of this, resources continue to be depleted. The nearly flat shells of the capiz can grow to over in diameter, reaching maturity between . The shell is secured by a V-shaped ligament. Males and females are distinguished by the color of the gonads. Fertilization is external and larvae are free-swimming like plankton for 14 days or attached to surfaces via byssal thread during metamorphosis, eventually settling on the bottom. They consume plankton filtered from the water passing through their slightly opened shell; the oyster's shell closes when the bivalve is above water during low tide. The capiz industry of Samal, Philippines Aside from being abundant in the province of Capiz, capiz shells are also abundant in the island of Samal in the Philippines, where 500 tons of capiz shells are harvested every other year. The capiz shells found around the island are harvested and transformed into various decorative products. As late as 2005, the residents of the island were trained to sustain the industry. However, the transfer of institutional knowledge to new generations to maintain the industry is in danger of being lost. See also Oyster Parol Bahay na bato Philippine arts References Placunidae Commercial molluscs Molluscs described in 1758 Taxa named by Carl Linnaeus Oysters Edible molluscs Philippine handicrafts Building materials Windows
Windowpane oyster
[ "Physics", "Engineering" ]
512
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
33,511,110
https://en.wikipedia.org/wiki/Nanofoundry
A nanofoundry is considered to be a foundry that performs on a scale similar to nanotechnology. This concept makes it similar to the role that the nanofactory would play, because it is considered to be a factory that operates on that same scale. The closest thing that nature has to a nanofoundry is the simple biological cell. Summary General information In silico biology attempts to duplicate nature by creating a virtual cell with the complete cycle of metabolism. The idea of creating an artificial cell along with working nanofoundries is highlighted in the phenomenon of bioconvergence, which may advance us from the Information Age to the "Nanotechnology Age." Nanofoundries and artificial cells are creating a world where health care, the very definition of "medicine", along with life itself is entering a state of transition. This phenomenon is directly in parallel with changes in the procedures used in agriculture and in managing our bioresources, ultimately leading up to the de facto equivalent of bio-engineering entire ecosystems from scratch. The latest area of research, nanomachining, involves using both micro- and nano-focused ion beams. Preliminary studies have indicated that tissues can be successfully grown on three-dimensional structures while using ion beams on a substrate. At larger scales, materials that appear to be smooth are still abrasive; at the nanoscale, however, atoms rub off one at a time. This creates new challenges for researchers who build devices that are only 10 atoms wide. Current progress One of the first nanofoundries has been set up at the University of Madras in Chennai, Tamil Nadu, India. Knowledge about nanotechnology would be converted into useful consumer goods through the usage of nanofoundries. Scientists do not want nanotechnology to be confined to publishing research papers in journals when it could be useful for creating nanotechnology-enhanced consumer products that would be beneficial in our 21st-century society. By converting the nanotechnology curriculum of the major universities into a more industry-oriented format, the technology becomes more practical for employers as well as consumers. The ability to grow more complex structures with a high aspect ratio allows for drug-release devices, biosensors, nanoreactors, and countless other discoveries. In the decades to come, researchers will scramble to construct the world's first nuclear nanobeam complex. This facility would offer state-of-the-art facilities to a wide range of disciplines, including the conventional sciences. Commercial manufacturing could easily be scaled up thanks to nanofoundries. Nanofactories will most likely use metal nanoparticles instead of the glass, plastic or rare earth minerals that are currently used to make most of our products. References Manufacturing plants Nanotechnology
Nanofoundry
[ "Materials_science", "Engineering" ]
561
[ "Nanotechnology", "Materials science" ]
33,518,864
https://en.wikipedia.org/wiki/KIVA%20%28software%29
KIVA is a family of Fortran-based computational fluid dynamics software developed by Los Alamos National Laboratory (LANL). The software predicts complex fuel and air flows as well as ignition, combustion, and pollutant-formation processes in engines. The KIVA models have been used to understand combustion chemistry processes, such as auto-ignition of fuels, and to optimize diesel engines for high efficiency and low emissions. General Motors has used KIVA in the development of direct-injection, stratified charge gasoline engines as well as the fast burn, homogeneous-charge gasoline engine. Cummins reduced development time and cost by 10%–15% using KIVA to develop its high-efficiency 2007 ISB 6.7-L diesel engine that was able to meet 2010 emission standards in 2007. At the same time, the company realized a more robust design and improved fuel economy while meeting all environmental and customer constraints. History LANL's Computational Fluid Dynamics expertise hails from the very beginning of the Manhattan Project in the 1940s. When the United States found itself in the midst of the first energy crisis in the 1970s, this core Laboratory capability transformed into KIVA, an internal combustion engine modeling tool designed to help make automotive engines more fuel-efficient and cleaner-burning. A "kiva" is actually a round Pueblo ceremonial chamber that is set underground and entered from above by means of a ladder through its roof; drawing on LANL's southwestern heritage, an analogy is made with the typical engine cylinder in which the entrance and exit of gases is achieved through valves set in the cylinder. The first public release of KIVA was made in 1985 through the National Energy Software Center (NESC) at Argonne National Laboratory, which served at the time as the official distribution hub for Department of Energy-sponsored software. Distribution of KIVA continued through the Energy Science and Technology Software Center (ESTSC) in Oak Ridge, Tennessee until 2008, when distribution of multiple versions of KIVA returned to LANL's Technology Transfer Division (TT). KIVA is used by hundreds of institutions worldwide, including the Big Three U.S. auto makers, Cummins, Caterpillar, and various federal laboratories. Overview Fuel economy is heavily dependent upon engine efficiency, which in turn depends to a large degree on how fuel is burned within the cylinders of the engine. Higher in-cylinder pressures and temperatures lead to increased fuel economy, but they also create more difficulty in controlling the combustion process. Poorly controlled and incomplete combustion can cause higher levels of emissions and lower engine efficiencies. In order to optimize combustion processes, engine designers have traditionally undertaken manual engine modifications, conducted testing, and analyzed the results. This iterative process is painstakingly slow and costly, and does not lend itself to identifying the optimal engine design specifications. In response to these problems, Los Alamos National Laboratory scientists developed KIVA, an advanced computational fluid dynamics (CFD) modeling code that accurately simulates the in-cylinder processes of engines. KIVA, a transient, three-dimensional, multiphase, multicomponent code for the analysis of chemically reacting flows with sprays, has been under development at LANL for decades. The code uses an Arbitrary Lagrangian Eulerian (ALE) methodology on a staggered grid, and discretizes space using the finite volume method. 
The code uses implicit time-advancement, with the exception of the advective terms, which are cast in an explicit but second-order monotonicity-preserving manner. Also, the convection calculations can be subcycled in the desired regions to avoid restricting the time step due to Courant conditions. KIVA's functionality extends from low speeds to supersonic flows for both laminar and turbulent regimes. Transport and chemical reactions for an arbitrary number of species are provided. A stochastic particle method is used to calculate evaporating liquid sprays, including the effects of droplet collisions, agglomeration, and aerodynamic breakup. Although specifically designed for simulating internal combustion engines, the modularity of the code facilitates easy modifications for solving a variety of hydrodynamics problems involving chemical reactions. The versatility and range of features have made KIVA programs attractive to a variety of non-engine applications as well; these range from convection towers to modeling silicon dioxide condensation in high pressure oxidation chambers. Other applications have included the analysis of flows in automotive catalytic converters, power plant smokestack cleaning, pyrolytic treatment of biomass, design of fire suppression systems, Pulsed Detonation Engines (PDEs), stationary burners, aerosol dispersion, and design of heating, ventilation, and air conditioning systems. The code has found widespread application in the automotive industry. Versions KIVA-3V KIVA-3V is the most mature version of KIVA still maintained and distributed through LANL; it is an improved version of the earlier Federal Laboratory Consortium Excellence in Technology Transfer Award-winning KIVA3 (1993), extended to model vertical or canted valves in the cylinder head of a gasoline or diesel engine. KIVA3, in turn, was based on the earlier KIVA2 (1989) and used the same numerical solution procedure and solved the same types of equations. KIVA-3V uses a block-structured mesh with connectivity defined through indirect addressing. The departure from a single rectangular structure in logical space allows complex geometries to be modeled with significantly greater efficiency because large regions of deactivated cells are no longer necessary. Cell-face boundary conditions permit greater flexibility and simplification in the application of boundary conditions. KIVA-3V also contains a number of significant improvements over its predecessors. New features enhanced the robustness, efficiency, and usefulness of the overall program for engine modeling. Automatic restart of the cycle with a reduced timestep in case of iteration limit or temperature overflow effectively reduced code crashes. A new option provided automatic deactivation of a port region when it is closed from the cylinder and reactivation when it communicates with the cylinder. Extensions to the particle-based liquid wall film model made the model more complete, and a split injection option was also added. A new subroutine monitors the liquid and gaseous fuel phases, and energy balance data and emissions are monitored and printed. In addition, new features were added to the LANL-developed grid generator, K3PREP, and the KIVA graphics post processor, K3POST. KIVA-4 KIVA-4 is maintained and distributed through LANL. While KIVA-4 maintains the full generality of KIVA-3V, it adds the capability of computing with unstructured grids.
Unstructured grids can be generated more easily than structured grids for complex geometries. The unstructured grids may be composed of a variety of elements, including hexahedra, prisms, pyramids, and tetrahedra. However, the numerical accuracy diminishes when the grid is not composed of hexahedra. KIVA-4 was developed to work with the many geometries accommodated within KIVA-3V, which include 2D axisymmetric, 2D planar, 3D axisymmetric sector geometries, and full 3D geometries. KIVA-4 also features a multicomponent fuel evaporation algorithm. Many of the numerical algorithms in KIVA-3V generalize properly to unstructured meshes; however, fundamental changes were needed in the solution of the pressure equation and the fluxing of momentum. In addition, KIVA-4 loops over cell faces to compute diffusion terms. KIVA-4mpi More recently, LANL researchers developed KIVA-4mpi, a parallel version of KIVA-4 and the most advanced version of KIVA maintained and distributed by LANL. KIVA-4mpi also solves chemically reacting, turbulent, multi-phase viscous flows, but does this on multiple computer processors with a distributed computational domain (grid). KIVA-4mpi's internal combustion engine modeling capabilities are the same as those of KIVA-4, and are based on the KIVA-4 unstructured grid code. The software is well suited for modeling internal combustion engines on multiple processors using the Message Passing Interface (MPI). On August 9, 2011, LANL honored the authors of KIVA-4mpi with the Distinguished Copyright Award for demonstrating a breadth of commercial applications, potential to create economic value, and the highest level of technical excellence. KIVA-EXEC KIVA-EXEC is a free, reduced-functionality, executable-only trial version of KIVA-4. KIVA-EXEC offers the performance of Los Alamos National Laboratory's KIVA-4 code, but with a 45,000-cell limitation. KIVA-EXEC is intended for beginners who do not need or intend to modify the source code. KIVA videos KIVA4 slant valve Cubit scalloped bowl 4 Valve KIVA-4 mpi 4 Valve FEARCE, new FEM-based LANL T-3 software, 2018 (David Carrington and Jiajia Waters) Alternative software Advanced Simulation Library (open source: AGPL) COMSOL Multiphysics CLAWPACK Code Saturne (GPL) Coolfluid (LGPLv3) deal.II FEATool Multiphysics FreeCFD Gerris Flow Solver Nektar++ OpenFVM SU2 code (LGPL) References External links Free, personal-use Linux-compatible (Red Hat recommended) KIVA-EXEC download KIVA-4 User’s Manual KIVA-3V manual (0.4MB, searchable within READER.) KIVA-3V manual (1.5MB, searchable within READER.) KIVA-3 manual (2.2 MB, searchable within READER.) KIVA-II manual (9.0MB, scanned document, not searchable.) Los Alamos National Laboratory’s Fluid Dynamics and Solid Mechanics Group Los Alamos National Laboratory Technology Transfer Division Fortran software Physics software Industrial software Computational fluid dynamics Finite element software for Linux
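The Courant restriction mentioned in the overview, which KIVA's convection subcycling is designed to work around, can be illustrated with a short calculation. The following is a minimal Python sketch, assuming a 1-D uniform grid and an acoustic CFL limit; the function names and numbers are illustrative only and do not come from the KIVA source.

```python
import math

# Sketch of the Courant (CFL) limit on an explicit convection step:
# dt <= cfl * dx / max(|u| + c) over all cells.

def max_convective_dt(velocities, sound_speeds, dx, cfl=0.5):
    """Largest stable explicit convection step on a 1-D uniform grid."""
    fastest = max(abs(u) + c for u, c in zip(velocities, sound_speeds))
    return cfl * dx / fastest

def subcycles_needed(dt_global, velocities, sound_speeds, dx, cfl=0.5):
    """Number of convection subcycles that fit inside a larger global step."""
    dt_conv = max_convective_dt(velocities, sound_speeds, dx, cfl)
    return max(1, math.ceil(dt_global / dt_conv))

u = [10.0, 45.0, 80.0]      # m/s, cell-centred gas velocities (invented)
c = [340.0, 350.0, 360.0]   # m/s, local sound speeds (invented)
print(subcycles_needed(1e-4, u, c, dx=1e-3))   # 88 subcycles for dx = 1 mm
```

Subcycling only the convection terms in this way lets the rest of the solve keep the larger implicit step, which is the efficiency argument sketched above.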
KIVA (software)
[ "Physics", "Chemistry", "Technology" ]
2,088
[ "Computational fluid dynamics", "Physics software", "Computational physics", "Industrial software", "Industrial computing", "Fluid dynamics" ]
35,089,957
https://en.wikipedia.org/wiki/The%20Biology%20of%20the%20Cell%20Surface
The Biology of the Cell Surface is a book by American biologist Ernest Everett Just. It was published by P. Blakiston’s Son & Co in 1939. Just began writing the book in 1934 in Naples and finished it in France, shortly before being sent to a prisoner-of-war camp. He considered the book to be his "crowning achievement". The book examined the role of the cell surface in embryology, development and evolution, and presented a critique of gene theory, particularly the views of Jacques Loeb. Sapp suggests that "Just’s theorizing on the cell cortex [in this work] was unsurpassed". References External links The Biology of the Cell Surface - full text Biodiversity Heritage Library 1939 non-fiction books 1939 in biology Cell biology Biology books
The Biology of the Cell Surface
[ "Biology" ]
164
[ "Cell biology" ]
35,090,132
https://en.wikipedia.org/wiki/World%20Hydrogen%20Energy%20Conference
World Hydrogen Energy Conference (WHEC) is a series of international events covering the complex issues of utilizing hydrogen as an energy carrier. These include methods of production of hydrogen, materials for hydrogen storage, infrastructure development, and hydrogen utilization technologies, particularly fuel cell system applications. WHEC is held every two years at different locations around the world and, in combination with the World Hydrogen Technology Conventions (WHTC), is organized with the support of the International Association for Hydrogen Energy. References Recurring events established in 1976 Academic conferences
World Hydrogen Energy Conference
[ "Engineering" ]
108
[ "Energy organizations", "Hydrogen economy organizations" ]
35,093,912
https://en.wikipedia.org/wiki/Non-Archimedean%20geometry
In mathematics, non-Archimedean geometry is any of a number of forms of geometry in which the axiom of Archimedes is negated. An example of such a geometry is the Dehn plane. Non-Archimedean geometries may, as the example indicates, have properties significantly different from Euclidean geometry. There are two senses in which the term may be used, referring to geometries over fields which violate one of the two senses of the Archimedean property (i.e. with respect to order or magnitude). Geometry over a non-Archimedean ordered field The first sense of the term is the geometry over a non-Archimedean ordered field, or a subset thereof. The aforementioned Dehn plane takes the self-product of the finite portion of a certain non-Archimedean ordered field based on the field of rational functions. In this geometry, there are significant differences from Euclidean geometry; in particular, there are infinitely many parallels to a straight line through a point—so the parallel postulate fails—but the sum of the angles of a triangle is still a straight angle. Intuitively, in such a space, the points on a line cannot be described by the real numbers or a subset thereof, and there exist segments of "infinite" or "infinitesimal" length. Geometry over a non-Archimedean valued field The second sense of the term is the metric geometry over a non-Archimedean valued field, or ultrametric space. In such a space, even more contradictions to Euclidean geometry result. For example, all triangles are isosceles, and overlapping balls nest. An example of such a space is the p-adic numbers. Intuitively, in such a space, distances fail to "add up" or "accumulate". References Fields of geometry
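To make the second sense concrete, the following Python sketch (an illustration added here, not drawn from any cited source) computes the p-adic absolute value on the rationals and checks the strong triangle inequality |x + y|_p ≤ max(|x|_p, |y|_p) that forces every triangle to be isosceles.

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """Exponent of the prime p in x; infinity for x = 0."""
    if x == 0:
        return float("inf")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_abs(x, p):
    """Non-Archimedean absolute value |x|_p = p^(-v_p(x))."""
    return 0.0 if x == 0 else p ** (-p_adic_valuation(x, p))

p = 5
x, y = Fraction(25), Fraction(1, 5)
# Ultrametric (strong triangle) inequality; when |x|_p != |y|_p it is an
# equality, so the triangle with vertices 0, x, x + y has two equal sides.
assert p_adic_abs(x + y, p) <= max(p_adic_abs(x, p), p_adic_abs(y, p))
print(p_adic_abs(x, p), p_adic_abs(y, p), p_adic_abs(x + y, p))  # 0.04 5 5
```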
Non-Archimedean geometry
[ "Mathematics" ]
380
[ "Fields of geometry", "Geometry" ]
35,095,526
https://en.wikipedia.org/wiki/Gable%20CAD
Gable CAD, or Gable 4D Series, was a British architectural computer-aided design package initially developed in the early 1980s. History Gable CAD was developed at the University of Sheffield in the mid-1980s under the leadership of Professor Bryan Lawson. It was spun out into Gable CAD Systems Limited (incorporated in 1984) and retained links with the university until its demise in 1996 when a court order was made for compulsory winding up. An early building information modeling application, Gable CAD was an advanced 2D and 3D design package with different modules, and was operated via a Windows-style interface and mouse running on UNIX. It was possible to create detailed 3D models and then generate 2D drawings or rendered visualisations from the data. The assets of the company were acquired by Auxer in 1997 and aimed to complete the conversion of Gable CAD to Windows NT but this does not appear to have ever been released. References Computer-aided design software Data modeling Building information modeling
Gable CAD
[ "Engineering" ]
190
[ "Data modeling", "Data engineering", "Building information modeling", "Building engineering" ]
29,527,111
https://en.wikipedia.org/wiki/NGC%204452
NGC 4452 is an edge-on lenticular galaxy that is part of the Virgo Cluster. This galaxy was first seen by William Herschel in 1784 with his telescope. NGC 4452 is so thin that it is actually difficult to determine what type of disk galaxy it is. Its lack of a visible dust lane indicates that it is a low-dust lenticular galaxy, although it is still possible that a face-on view would reveal spiral structure. The unusual stellar line segment spans about 35,000 light-years from end to end. Near NGC 4452's center is a slight bulge of stars, while hundreds of background galaxies are visible far in the distance. Galaxies that appear this thin are rare mostly because Earth must reside (nearly) in the extrapolated planes of their thin galactic disks. Galaxies that actually are this thin are relatively common; for example, the Milky Way Galaxy is thought to be about this thin. NGC 4452 appears to be very similar to galaxy IC 335, an edge-on galaxy in the Fornax Cluster, in the constellation Fornax. References External links An Extraordinarily Slender Galaxy – ESA/Hubble Picture of the week Virgo Cluster Lenticular galaxies Virgo (constellation)
NGC 4452
[ "Astronomy" ]
270
[ "Virgo (constellation)", "Constellations" ]
29,528,690
https://en.wikipedia.org/wiki/List%20of%20types%20of%20numbers
Numbers can be classified according to how they are represented or according to the properties that they have. Main types Natural numbers (ℕ): The counting numbers {1, 2, 3, ...} are commonly called natural numbers; however, other definitions include 0, so that the non-negative integers {0, 1, 2, 3, ...} are also called natural numbers. Natural numbers including 0 are also sometimes called whole numbers. Integers (ℤ): Positive and negative counting numbers, as well as zero: {..., −3, −2, −1, 0, 1, 2, 3, ...}. Rational numbers (ℚ): Numbers that can be expressed as a ratio of an integer to a non-zero integer. All integers are rational, but there are rational numbers that are not integers, such as 1/2. Real numbers (ℝ): Numbers that correspond to points along a line. They can be positive, negative, or zero. All rational numbers are real, but the converse is not true. Irrational numbers: Real numbers that are not rational. Imaginary numbers: Numbers that equal the product of a real number and the imaginary unit i, where i² = −1. The number 0 is both real and imaginary. Complex numbers (ℂ): Includes real numbers, imaginary numbers, and sums and differences of real and imaginary numbers. Hypercomplex numbers include various number-system extensions: quaternions (ℍ), octonions (𝕆), sedenions (𝕊), trigintaduonions (𝕋), and other hypercomplex numbers of dimensions 64 and greater. Less common variants include bicomplex numbers, coquaternions, and biquaternions. p-adic numbers: Various number systems constructed using limits of rational numbers, according to notions of "limit" different from the one used to construct the real numbers. Number representations Decimal: The standard Hindu–Arabic numeral system using base ten. Binary: The base-two numeral system used by computers, with digits 0 and 1. Ternary: The base-three numeral system with 0, 1, and 2 as digits. Quaternary: The base-four numeral system with 0, 1, 2, and 3 as digits. Hexadecimal: Base 16, widely used by computer system designers and programmers, as it provides a more human-friendly representation of binary-coded values. Octal: Base 8, occasionally used by computer system designers and programmers. Duodecimal: Base 12, a numeral system that is convenient because of the many factors of 12. Sexagesimal: Base 60, first used by the ancient Sumerians in the 3rd millennium BC and passed down to the ancient Babylonians. See positional notation for information on other bases. Roman numerals: The numeral system of ancient Rome, still occasionally used today, mostly in situations that do not require arithmetic operations. Tally marks: Usually used for counting things that increase by small amounts and do not change very quickly. Fractions: A representation of a non-integer as a ratio of two integers. These include improper fractions as well as mixed numbers. Continued fraction: An expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing this other number as the sum of its integer part and another reciprocal, and so on. Scientific notation: A method for writing very small and very large numbers using powers of 10. When used in science, such a number also conveys the precision of measurement using significant figures. Knuth's up-arrow notation and Conway chained arrow notation: Notations that allow the concise representation of some extremely large integers such as Graham's number. Signed numbers Positive numbers: Real numbers that are greater than zero.
Negative numbers: Real numbers that are less than zero. Because zero itself has no sign, neither the positive numbers nor the negative numbers include zero. When zero is a possibility, the following terms are often used: Non-negative numbers: Real numbers that are greater than or equal to zero. Thus a non-negative number is either zero or positive. Non-positive numbers: Real numbers that are less than or equal to zero. Thus a non-positive number is either zero or negative. Types of integer Even and odd numbers: An integer is even if it is a multiple of 2, and is odd otherwise. Prime number: A positive integer with exactly two positive divisors: itself and 1. The primes form an infinite sequence 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, ... Composite number: A positive integer that can be factored into a product of smaller positive integers. Every integer greater than one is either prime or composite. Polygonal numbers: These are numbers that can be represented as dots that are arranged in the shape of a regular polygon, including Triangular numbers, Square numbers, Pentagonal numbers, Hexagonal numbers, Heptagonal numbers, Octagonal numbers, Nonagonal numbers, Decagonal numbers, Hendecagonal numbers, and Dodecagonal numbers. There are many other famous integer sequences, such as the sequence of Fibonacci numbers, the sequence of factorials, the sequence of perfect numbers, and so forth, many of which are enumerated in the On-Line Encyclopedia of Integer Sequences. Algebraic numbers Algebraic number: Any number that is the root of a non-zero polynomial with rational coefficients. Transcendental number: Any real or complex number that is not algebraic. Examples include and . Trigonometric number: Any number that is the sine or cosine of a rational multiple of . Quadratic surd: A root of a quadratic equation with rational coefficients. Such a number is algebraic and can be expressed as the sum of a rational number and the square root of a rational number. Constructible number: A number representing a length that can be constructed using a compass and straightedge. Constructible numbers form a subfield of the field of algebraic numbers, and include the quadratic surds. Algebraic integer: A root of a monic polynomial with integer coefficients. Non-standard numbers Transfinite numbers: Numbers that are greater than any natural number. Ordinal numbers: Finite and infinite numbers used to describe the order type of well-ordered sets. Cardinal numbers: Finite and infinite numbers used to describe the cardinalities of sets. Infinitesimals: These are smaller than any positive real number, but are nonetheless greater than zero. These were used in the initial development of calculus, and are used in synthetic differential geometry. Hyperreal numbers: The numbers used in non-standard analysis. These include infinite and infinitesimal numbers which possess certain properties of the real numbers. Surreal numbers: A number system that includes the hyperreal numbers as well as the ordinals. Fuzzy numbers: A generalization of the real numbers, in which each element is a connected set of possible values with weights. Computability and definability Computable number: A real number whose digits can be computed by some algorithm. Period: A number which can be computed as the integral of some algebraic function over an algebraic domain. Definable number: A real number that can be defined uniquely using a first-order formula with one free variable in the language of set theory. 
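As an illustration of the continued-fraction representation listed under Number representations above, the following minimal Python sketch (added for illustration; the example value is arbitrary) splits off integer parts and reciprocals until the expansion terminates, which it always does for a rational number.

```python
from fractions import Fraction

def continued_fraction(x: Fraction):
    """Return the (finite) continued-fraction terms of a rational number."""
    terms = []
    while True:
        a = x.numerator // x.denominator   # integer part (floor)
        terms.append(a)
        frac = x - a                       # fractional remainder
        if frac == 0:
            return terms
        x = 1 / frac                       # reciprocal of the remainder

print(continued_fraction(Fraction(415, 93)))   # [4, 2, 6, 7]
```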
See also Almost integer Scalar (mathematics) References Mathematics-related lists Types
List of types of numbers
[ "Mathematics" ]
1,521
[ "Arithmetic", "Mathematical objects", "Numbers", "Number-related lists" ]
29,530,278
https://en.wikipedia.org/wiki/ACS%20Catalysis
ACS Catalysis is a monthly peer-reviewed scientific journal established in 2011 by the American Chemical Society. The journal covers research on all aspects of heterogeneous, homogeneous, and biocatalysis. The editor-in-chief is Cathleen Crudden, who assumed the position in early 2021. The journal received the Association of American Publishers’ PROSE Award for "Best New Journal in Science, Technology & Medicine" in 2013. Types of content The journal publishes the following types of articles: Letters, Articles, Reviews, Perspectives, and Viewpoints. Reviews, Perspectives, and Viewpoints appear mostly on invitation. Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Current Contents/Physical, Chemical & Earth Sciences Ei Compendex Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2023 impact factor of 11.3. References External links Catalysis Monthly journals Academic journals established in 2011 English-language journals Catalysis
ACS Catalysis
[ "Chemistry" ]
207
[ "Catalysis", "Chemical kinetics" ]
29,536,434
https://en.wikipedia.org/wiki/Acoustic%20resonance%20technology
Acoustic resonance technology (ART) is an acoustic inspection technology developed by Det Norske Veritas (DNV) over a 20-year period. ART exploits the phenomenon of half-wave resonance, whereby a suitably excited resonant target (such as a pipeline wall) exhibits longitudinal resonances at certain frequencies characteristic of the target's thickness. Knowing the speed of sound in the target material, the half-wave resonant frequencies can be used to calculate the target's thickness. ART differs from traditional ultrasonic testing: although both are forms of nondestructive testing based on acoustics, ART generally uses lower frequencies and has a wider bandwidth. This has enabled its use in gaseous environments without a liquid couplant. Det Norske Veritas has licensed the technology for use in on-shore water pipes worldwide to Breivoll Inspection Technologies AS. Breivoll has proven the efficiency of the technology in assessing the condition of metallic water pipes, both with and without coating. The company has since 2008 successfully developed a method to enter and inspect water mains, and is a world leader in its market. ART has also been used in field tests at Gassco's Kårstø facility. In 2012, DNV's ART activities were spun out into a subsidiary, HalfWave, and were further developed through investment by Shell Technology Ventures, Chevron Technology Ventures, and Energy Ventures. In 2020, Halfwave was acquired by Previan, who shared the technology between two of its business units: NDT Global, for its In-Line Inspection (ILI) solutions, and TSC Subsea, for subsea applications. TSC Subsea has a long history of developing subsea inspection robotics, deploying multiple non-destructive testing (NDT) techniques. Since the merger, substantial enhancements have been made to the ART technology. As a result, Acoustic Resonance Technology has proven able to penetrate thick, attenuative subsea coatings of more than 100 mm (4 inches). TSC Subsea has successfully deployed ART with its Artemis solution to inspect subsea pipelines, flowlines, and flexible and rigid risers down to water depths of 3,000 m (10,000 ft). Most recently, many companies have been researching the ability of artificial intelligence (AI) to perform acoustic resonance testing in order to eliminate the subjectivity that can arise through manual ART. By plotting the points of a frequency spectrogram and comparing bodies' spectrograms to one another, an AI model can identify even slight changes in a material. In industries such as aerospace and defence, the ability of AI to identify these minuscule anomalies is critical, as missing them can have dire consequences. Companies such as RESONIKS have worked towards patenting specific AI models which are extensively trained in locating structural defects. Main features Uses lower frequencies than ultrasonic testing Effective in gases and liquids (i.e. requires no liquid couplant) Can be used to characterize multi-layered media (e.g. pipelines with coatings) Can penetrate coatings Can measure inside and outside metal loss RUV (resonance ultrasonic vibrations) In a closely related technique, the presence of cracks in a solid structure can be detected by looking for differences in resonance frequency, bandwidth and resonance amplitude compared to a nominally identical but non-cracked structure.
This technique, called RUV (Resonance Ultrasonic Vibrations), has been developed for use in the photovoltaics industry by a group of researchers from the University of South Florida, Ultrasonic Technologies Inc. (Florida, US), and Isofoton S.A. (Spain). The method was able to detect mm-size cracks in as-cut and processed silicon wafers, as well as finished solar cells, with a total test time of under 2 seconds per wafer. References Quality Nondestructive testing Casting (manufacturing) Ultrasound
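The thickness relation underlying ART, as described at the start of this article, can be sketched in a few lines. The Python sketch below assumes the textbook half-wave relation f_n = n·v/(2t), so successive resonances are spaced by v/(2t); the material values are illustrative and are not taken from any DNV, Breivoll, or TSC specification.

```python
# Half-wave resonance: a wall of thickness t resonates at f_n = n * v / (2t),
# so the spacing df between successive resonances gives t = v / (2 * df).

def thickness_from_resonances(resonant_freqs_hz, sound_speed_m_s):
    """Estimate wall thickness from successive resonant frequencies."""
    spacings = [b - a for a, b in zip(resonant_freqs_hz, resonant_freqs_hz[1:])]
    df = sum(spacings) / len(spacings)    # mean spacing between resonances
    return sound_speed_m_s / (2.0 * df)

# A steel wall (longitudinal sound speed ~5,900 m/s) showing resonances
# spaced ~295 kHz apart would be about 10 mm thick:
freqs = [295e3, 590e3, 885e3, 1180e3]
print(thickness_from_resonances(freqs, 5900.0))   # ~0.01 m
```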
Acoustic resonance technology
[ "Materials_science" ]
802
[ "Nondestructive testing", "Materials testing" ]
36,149,098
https://en.wikipedia.org/wiki/Triple-resonance%20nuclear%20magnetic%20resonance%20spectroscopy
Triple resonance experiments are a set of multi-dimensional nuclear magnetic resonance spectroscopy (NMR) experiments that link three types of atomic nuclei, most typically consisting of 1H, 15N and 13C. These experiments are often used to assign specific resonance signals to specific atoms in an isotopically-enriched protein. The technique was first described in papers by Ad Bax, Mitsuhiko Ikura and Lewis Kay in 1990, and further experiments were then added to the suite of experiments. Many of these experiments have since become the standard set of experiments used for sequential assignment of NMR resonances in the determination of protein structure by NMR. They are now an integral part of solution NMR study of proteins, and they may also be used in solid-state NMR. Background There are two main methods of determining protein structure on the atomic level. The first of these is by X-ray crystallography, starting in 1958 when the crystal structure of myoglobin was determined. The second method is by NMR, which began in the 1980s when Kurt Wüthrich outlined the framework for NMR structure determination of proteins and solved the structure of small globular proteins. The early method of structural determination of proteins by NMR relied on proton-based homonuclear NMR spectroscopy, in which the size of the protein that may be determined is limited to ~10 kDa. This limitation is due to the need to assign NMR signals from the large number of nuclei in the protein – in larger proteins, the greater number of nuclei results in overcrowding of resonances, and the increasing size of the protein also broadens the signals, making resonance assignment difficult. These problems may be alleviated by using heteronuclear NMR spectroscopy, which allows the proton spectrum to be edited with respect to the 15N and 13C chemical shifts, and also reduces the overlap of resonances by increasing the number of dimensions of the spectrum. In 1990, Ad Bax and coworkers developed the triple resonance technology and experiments on proteins isotopically labelled with 15N and 13C, with the result that the spectra are dramatically simplified, greatly facilitating the process of resonance assignment, and increasing the size of the protein that may be determined by NMR. These triple resonance experiments utilize the relatively large magnetic couplings between certain pairs of nuclei to establish their connectivity. Specifically, the 1JNH, 1JCH, 1JCC, and 1JCN couplings are used to establish the scalar connectivity pathway between nuclei. The magnetization transfer process takes place through multiple, efficient one-bond magnetization transfer steps, rather than a single step through the smaller and variable 3JHH couplings. The relatively large size and good uniformity of the one-bond couplings allowed the design of efficient magnetization transfer schemes that are effectively uniform across a given protein, nearly independent of conformation. Triple resonance experiments involving 31P may also be used for nucleic acid studies. Suite of experiments These experiments are typically named by the nuclei (H, N, and C) involved in the experiment. CO refers to the carbonyl carbon, while CA and CB refer to Cα and Cβ respectively; similarly HA and HB refer to Hα and Hβ. The nuclei in the name are ordered in the same sequence as in the path of magnetization transfer; those nuclei placed within parentheses are involved in the magnetization transfer pathway but are not recorded.
For reasons of sensitivity, these experiments generally start on a proton and end on a proton, typically via INEPT and reverse INEPT steps. Therefore, many of these experiments are what may be called "out-and-back" experiments where, although not indicated in the name, the magnetization is transferred back to the starting proton for signal acquisition. Some of the experiments are used in tandem for the resonance assignment of a protein; for example, HNCACB may be used together with CBCA(CO)NH as a pair of experiments. Not all of these experiments need to be recorded for sequential assignment (it can be done with as few as two); however, extra pairs of experiments are useful for independent assessment of the correctness of the assignment, and the redundancy of information may be necessary when there is ambiguity in the assignments. Other experiments are also necessary to fully assign the side chain resonances. TROSY versions of many of these experiments exist for improvement in sensitivity. Triple resonance experiments can also be used in sequence-specific backbone resonance assignment of magic angle spinning NMR spectra in solid-state NMR. A large number of triple-resonance NMR experiments have been created, and the list of experiments below is not meant to be exhaustive. HNCO The experiment provides the connectivities between the amide of a residue and the carbonyl carbon of the preceding residue. It is the most sensitive of the triple resonance experiments. The sidechain carboxamides of asparagine and glutamine are also visible in this experiment. Additionally, the guanidino group of arginine, which has a similar coupling constant to the carboxamide group, may also appear in this spectrum. This experiment is sometimes used together with HN(CA)CO. HN(CA)CO Here, the amide resonance of a residue is correlated with the carbonyl carbon of the same residue, as well as that of the preceding residue. The intra-residue resonances are usually stronger than the inter-residue ones. HN(CO)CA This experiment correlates the resonances of the amide of a residue with the Cα of the preceding residue. This experiment is often used together with HNCA. HNCA This experiment correlates the chemical shift of the amide of a residue with the Cα of the same residue as well as that of the preceding residue. Each strip gives two peaks, the inter- and intra-residue Cα peaks. The peak from the preceding Cα may be identified from the HN(CO)CA experiment, which gives only the inter-residue Cα. CBCA(CO)NH CBCA(CO)NH, or alternatively HN(CO)CACB, correlates the resonances of the amide of a residue with the Cα and Cβ of the preceding residue. Two peaks corresponding to the Cα and Cβ are therefore visible for each residue. This experiment is normally used together with HNCACB. The sidechain carboxamides of glutamine and asparagine also appear in the spectra of this experiment. CBCA(CO)NH is sometimes more precisely called (HBHA)CBCA(CO)NH as it starts with aliphatic protons and ends on an amide proton, and is therefore not an out-and-back experiment like HN(CO)CACB. HNCACB HNCACB, or alternatively CBCANH, correlates the chemical shift of the amide of a residue with the Cα and Cβ of the same residue as well as those of the preceding residue. In each strip, four peaks may be visible – two from the same residue and two from the preceding residue. Peaks from the preceding residue are usually weaker, and may be identified using CBCA(CO)NH. In this experiment, the Cα and Cβ peaks are in opposite phase, i.e.
if Cα appears as a positive peak, then Cβ will be negative, making identification of Cα and Cβ straightforward. The extra information of Cβ from the CBCA(CO)NH/HNCACB set of experiments makes identification of residue type easier than HN(CO)CA/HNCA; however, HNCACB is a less sensitive experiment and may be unsuitable for some proteins. The CBCANH experiment is less suitable for larger proteins as it is more susceptible to the line-width problem than HNCACB. CBCACO(CA)HA This experiment provides the connectivities between the Cα and Cβ with the carbonyl carbon and Hα atoms within the same residue. The sidechain carboxyl groups of aspartate and glutamate may appear weakly in this spectrum. CC(CO)NH This experiment provides connectivities between the amide of a residue and the aliphatic carbon atoms of the preceding residue. H(CCO)NH This experiment provides connectivities between the amide of a residue and the hydrogen atoms attached to the aliphatic carbons of the preceding residue. HBHA(CO)NH This experiment correlates the amide resonance to the Hα and Hβ of the preceding residue. Sequential assignment Pairs of experiments are normally used for sequential assignment, for example, the HNCACB and CBCA(CO)NH pair, or HNCA and HN(CO)CA. The spectra are normally analyzed as strips of peaks, and strips from the pair of experiments may be presented together side by side or as an overlay of two spectra. In the HNCACB spectrum, four peaks are usually present in each strip: the Cα and Cβ of one residue as well as those of its preceding residue. The peaks from the preceding residue can be identified from the CBCA(CO)NH experiment. Each strip of peaks can therefore be linked to the next strip of peaks from an adjoining residue, allowing the strips to be connected sequentially. The residue type can be identified from the chemical shifts of the peaks; some, such as serine, threonine, glycine and alanine, are much easier to identify than others. The resonances can then be assigned by comparing the sequence of peaks with the amino acid sequence of the protein. References External links Triple resonance experiments for proteins Introduction to 3D Triple Resonance Experiments Protein NMR – A Practical Guide Protein structure Nuclear magnetic resonance spectroscopy
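The sequential-assignment walk described above can be caricatured in code. The following Python sketch is an illustration only: the chemical-shift values and the matching tolerance are invented, and real assignment software must also resolve ambiguous matches, which this sketch does not attempt.

```python
# Each HNCACB strip carries the residue's own (Ca, Cb) shifts plus those of
# the preceding residue (the latter identified via CBCA(CO)NH), so strips can
# be chained by matching shifts within a tolerance. Values in ppm, invented.

def shifts_match(own, prev, tol=0.2):
    """True when every comparable (Ca, Cb) pair agrees within tol ppm."""
    return all(o is None or p is None or abs(o - p) <= tol
               for o, p in zip(own, prev))

def link_strips(strips, tol=0.2):
    """strips: {id: {'own': (ca, cb), 'prev': (ca, cb)}}; returns successor map."""
    return {i: j
            for i, a in strips.items()
            for j, b in strips.items()
            if i != j and shifts_match(a["own"], b["prev"], tol)}

strips = {
    "G12": {"own": (45.2, None), "prev": (58.1, 63.0)},  # glycine has no Cb
    "A13": {"own": (52.4, 19.1), "prev": (45.3, None)},
    "L14": {"own": (55.0, 42.3), "prev": (52.5, 19.0)},
}
print(link_strips(strips))   # {'G12': 'A13', 'A13': 'L14'}
```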
Triple-resonance nuclear magnetic resonance spectroscopy
[ "Physics", "Chemistry" ]
2,000
[ "Nuclear magnetic resonance", "Spectrum (physical sciences)", "Nuclear magnetic resonance spectroscopy", "Structural biology", "Protein structure", "Spectroscopy" ]
26,288,711
https://en.wikipedia.org/wiki/Coulomb%27s%20law
Coulomb's inverse-square law, or simply Coulomb's law, is an experimental law of physics that calculates the amount of force between two electrically charged particles at rest. This electric force is conventionally called the electrostatic force or Coulomb force. Although the law was known earlier, it was first published in 1785 by French physicist Charles-Augustin de Coulomb. Coulomb's law was essential to the development of the theory of electromagnetism and maybe even its starting point, as it allowed meaningful discussions of the amount of electric charge in a particle. The law states that the magnitude, or absolute value, of the attractive or repulsive electrostatic force between two point charges is directly proportional to the product of the magnitudes of their charges and inversely proportional to the square of the distance between them. Coulomb discovered that bodies with like electrical charges repel, and showed that oppositely charged bodies attract, according to an inverse-square law: $|F| = k_e \frac{|q_1||q_2|}{r^2}$. Here, $k_e$ is a constant, $q_1$ and $q_2$ are the quantities of each charge, and the scalar $r$ is the distance between the charges. The force is along the straight line joining the two charges. If the charges have the same sign, the electrostatic force between them makes them repel; if they have different signs, the force between them makes them attract. Being an inverse-square law, the law is similar to Isaac Newton's inverse-square law of universal gravitation, but gravitational forces always make things attract, while electrostatic forces make charges attract or repel. Also, gravitational forces are much weaker than electrostatic forces. Coulomb's law can be used to derive Gauss's law, and vice versa. In the case of a single point charge at rest, the two laws are equivalent, expressing the same physical law in different ways. The law has been tested extensively, and observations have upheld the law on the scale from $10^{-16}$ m to $10^{8}$ m. History Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers and pieces of paper. Thales of Miletus made the first recorded description of static electricity around 600 BC, when he noticed that friction could make a piece of amber attract small objects. In 1600, English scientist William Gilbert made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the Neo-Latin word electricus ("of amber" or "like amber", from ēlektron, the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646. Early investigators of the 18th century who suspected that the electrical force diminished with distance as the force of gravity did (i.e., as the inverse square of the distance) included Daniel Bernoulli and Alessandro Volta, both of whom measured the force between plates of a capacitor, and Franz Aepinus who supposed the inverse-square law in 1758. Based on experiments with electrically charged spheres, Joseph Priestley of England was among the first to propose that electrical force followed an inverse-square law, similar to Newton's law of universal gravitation. However, he did not generalize or elaborate on this.
In 1767, he conjectured that the force between charges varied as the inverse square of the distance. In 1769, Scottish physicist John Robison announced that, according to his measurements, the force of repulsion between two spheres with charges of the same sign varied as $x^{-2.06}$. In the early 1770s, the dependence of the force between charged bodies upon both distance and charge had already been discovered, but not published, by Henry Cavendish of England. In his notes, Cavendish wrote, "We may therefore conclude that the electric attraction and repulsion must be inversely as some power of the distance between that of the 2 + 1/50th and that of the 2 − 1/50th, and there is no reason to think that it differs at all from the inverse duplicate ratio". Finally, in 1785, the French physicist Charles-Augustin de Coulomb published his first three reports of electricity and magnetism where he stated his law. This publication was essential to the development of the theory of electromagnetism. He used a torsion balance to study the repulsion and attraction forces of charged particles, and determined that the magnitude of the electric force between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls and derive his inverse-square proportionality law. Mathematical form Coulomb's law states that the electrostatic force $\mathbf{F}_1$ experienced by a charge $q_1$ at position $\mathbf{r}_1$, in the vicinity of another charge $q_2$ at position $\mathbf{r}_2$, in a vacuum is equal to $\mathbf{F}_1 = \frac{q_1 q_2}{4\pi\varepsilon_0} \frac{\hat{\mathbf{r}}_{12}}{|\mathbf{r}_{12}|^2}$, where $\mathbf{r}_{12} = \mathbf{r}_1 - \mathbf{r}_2$ is the displacement vector between the charges, $\hat{\mathbf{r}}_{12}$ a unit vector pointing from $q_2$ to $q_1$, and $\varepsilon_0$ the electric constant. Here, boldface is used for vector quantities. The electrostatic force $\mathbf{F}_2$ experienced by $q_2$, according to Newton's third law, is $\mathbf{F}_2 = -\mathbf{F}_1$. If both charges have the same sign (like charges) then the product $q_1 q_2$ is positive and the direction of the force on $q_1$ is given by $\hat{\mathbf{r}}_{12}$; the charges repel each other. If the charges have opposite signs then the product $q_1 q_2$ is negative and the direction of the force on $q_1$ is $-\hat{\mathbf{r}}_{12}$; the charges attract each other. System of discrete charges The law of superposition allows Coulomb's law to be extended to include any number of point charges. The force acting on a point charge due to a system of point charges is simply the vector addition of the individual forces acting alone on that point charge due to each one of the charges. The resulting force vector is parallel to the electric field vector at that point, with that point charge removed. The force $\mathbf{F}$ on a small charge $q$ at position $\mathbf{r}$, due to a system of $n$ discrete charges in vacuum, is $\mathbf{F}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \sum_{i=1}^{n} q_i \frac{\hat{\mathbf{R}}_i}{|\mathbf{R}_i|^2}$, where $q_i$ is the magnitude of the $i$th charge, $\mathbf{R}_i = \mathbf{r} - \mathbf{r}_i$ is the vector from its position to $\mathbf{r}$, and $\hat{\mathbf{R}}_i$ is the unit vector in its direction. Continuous charge distribution In this case, the principle of linear superposition is also used.
For a continuous charge distribution, an integral over the region containing the charge is equivalent to an infinite summation, treating each infinitesimal element of space as a point charge $dq$. The distribution of charge is usually linear, surface or volumetric. For a linear charge distribution (a good approximation for charge in a wire), where $\lambda(\mathbf{r}')$ gives the charge per unit length at position $\mathbf{r}'$ and $d\ell'$ is an infinitesimal element of length, $dq' = \lambda(\mathbf{r}')\,d\ell'$. For a surface charge distribution (a good approximation for charge on a plate in a parallel plate capacitor), where $\sigma(\mathbf{r}')$ gives the charge per unit area at position $\mathbf{r}'$ and $dA'$ is an infinitesimal element of area, $dq' = \sigma(\mathbf{r}')\,dA'$. For a volume charge distribution (such as charge within a bulk metal), where $\rho(\mathbf{r}')$ gives the charge per unit volume at position $\mathbf{r}'$ and $dV'$ is an infinitesimal element of volume, $dq' = \rho(\mathbf{r}')\,dV'$. The force on a small test charge $q$ at position $\mathbf{r}$ in vacuum is given by the integral over the distribution of charge: $\mathbf{F}(\mathbf{r}) = \frac{q}{4\pi\varepsilon_0} \int dq'\,\frac{\mathbf{r} - \mathbf{r}'}{|\mathbf{r} - \mathbf{r}'|^3}$. The "continuous charge" version of Coulomb's law is never supposed to be applied to locations for which $\mathbf{r} = \mathbf{r}'$, because that location would directly overlap with the location of a charged particle (e.g. electron or proton), which is not a valid location to analyze the electric field or potential classically. Charge is always discrete in reality, and the "continuous charge" assumption is just an approximation that is not supposed to allow $\mathbf{r} = \mathbf{r}'$ to be analyzed. Coulomb constant The constant of proportionality $k_e$ in Coulomb's law, $k_e = \frac{1}{4\pi\varepsilon_0}$, is a consequence of historical choices for units. The constant $\varepsilon_0$ is the vacuum electric permittivity. Using the CODATA 2022 recommended value for $\varepsilon_0$, the Coulomb constant is $k_e \approx 8.9875517862 \times 10^{9}\ \mathrm{N\,m^2\,C^{-2}}$. Limitations There are three conditions to be fulfilled for the validity of Coulomb's inverse square law: The charges must have a spherically symmetric distribution (e.g. be point charges, or a charged metal sphere). The charges must not overlap (e.g. they must be distinct point charges). The charges must be stationary with respect to a nonaccelerating frame of reference. The last of these is known as the electrostatic approximation. When movement takes place, an extra factor is introduced, which alters the force produced on the two objects. This extra part of the force is called the magnetic force. For slow movement, the magnetic force is minimal and Coulomb's law can still be considered approximately correct. A more accurate approximation in this case is, however, the Weber force. When the charges are moving more quickly in relation to each other or accelerations occur, Maxwell's equations and Einstein's theory of relativity must be taken into consideration. Electric field An electric field is a vector field that associates to each point in space the Coulomb force experienced by a unit test charge. The strength and direction of the Coulomb force on a test charge $q_t$ depends on the electric field $\mathbf{E}$ established by other charges that it finds itself in, such that $\mathbf{F} = q_t\mathbf{E}$. In the simplest case, the field is considered to be generated solely by a single source point charge. More generally, the field can be generated by a distribution of charges that contribute to the overall field by the principle of superposition. If the field is generated by a positive source point charge $q$, the direction of the electric field points along lines directed radially outwards from it, i.e. in the direction that a positive point test charge would move if placed in the field. For a negative point source charge, the direction is radially inwards. The magnitude of the electric field can be derived from Coulomb's law.
By choosing one of the point charges to be the source, and the other to be the test charge, it follows from Coulomb's law that the magnitude of the electric field created by a single source point charge $Q$ at a certain distance $r$ from it in vacuum is given by $|\mathbf{E}| = \frac{1}{4\pi\varepsilon_0}\frac{|Q|}{r^2}$. A system of $n$ discrete charges $q_i$ stationed at $\mathbf{r}_i$ produces an electric field whose magnitude and direction is, by superposition, $\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\sum_{i=1}^{n} q_i \frac{\hat{\mathbf{R}}_i}{|\mathbf{R}_i|^2}$, with $\mathbf{R}_i = \mathbf{r} - \mathbf{r}_i$. Atomic forces Coulomb's law holds even within atoms, correctly describing the force between the positively charged atomic nucleus and each of the negatively charged electrons. This simple law also correctly accounts for the forces that bind atoms together to form molecules and for the forces that bind atoms and molecules together to form solids and liquids. Generally, as the distance between ions increases, the force of attraction, and binding energy, approach zero and ionic bonding is less favorable. As the magnitude of opposing charges increases, energy increases and ionic bonding is more favorable. Relation to Gauss's law Deriving Gauss's law from Coulomb's law Deriving Coulomb's law from Gauss's law Strictly speaking, Coulomb's law cannot be derived from Gauss's law alone, since Gauss's law does not give any information regarding the curl of $\mathbf{E}$ (see Helmholtz decomposition and Faraday's law). However, Coulomb's law can be proven from Gauss's law if it is assumed, in addition, that the electric field from a point charge is spherically symmetric (this assumption, like Coulomb's law itself, is exactly true if the charge is stationary, and approximately true if the charge is in motion). In relativity Coulomb's law can be used to gain insight into the form of the magnetic field generated by moving charges, since by special relativity, in certain cases the magnetic field can be shown to be a transformation of forces caused by the electric field. When no acceleration is involved in a particle's history, Coulomb's law can be assumed on any test particle in its own inertial frame, supported by symmetry arguments in solving Maxwell's equations, shown above. Coulomb's law can be expanded to moving test particles to be of the same form. This assumption is supported by the Lorentz force law which, unlike Coulomb's law, is not limited to stationary test charges. Considering the charge to be invariant of observer, the electric and magnetic fields of a uniformly moving point charge can hence be derived by the Lorentz transformation of the four-force on the test charge in the charge's frame of reference given by Coulomb's law, and attributing magnetic and electric fields by their definitions given by the form of the Lorentz force. The fields hence found for uniformly moving point charges are given by: $\mathbf{E} = \frac{q}{4\pi\varepsilon_0}\,\frac{1-\beta^2}{(1-\beta^2\sin^2\theta)^{3/2}}\,\frac{\hat{\mathbf{r}}}{r^2}, \qquad \mathbf{B} = \frac{\mathbf{v}\times\mathbf{E}}{c^2},$ where $q$ is the charge of the point source, $\mathbf{r}$ is the position vector from the point source to the point in space, $\mathbf{v}$ is the velocity vector of the charged particle, $\beta$ is the ratio of the speed of the charged particle divided by the speed of light, and $\theta$ is the angle between $\mathbf{r}$ and $\mathbf{v}$. This form of solution need not obey Newton's third law, as is the case in the framework of special relativity (yet without violating relativistic energy-momentum conservation). Note that the expression for the electric field reduces to Coulomb's law for non-relativistic speeds of the point charge, and that the magnetic field in the non-relativistic limit (approximating $\beta \ll 1$) can be applied to electric currents to get the Biot–Savart law.
These solutions, when expressed in retarded time also correspond to the general solution of Maxwell's equations given by solutions of Liénard–Wiechert potential, due to the validity of Coulomb's law within its specific range of application. Also note that the spherical symmetry for gauss law on stationary charges is not valid for moving charges owing to the breaking of symmetry by the specification of direction of velocity in the problem. Agreement with Maxwell's equations can also be manually verified for the above two equations. Coulomb potential Quantum field theory The Coulomb potential admits continuum states (with E > 0), describing electron-proton scattering, as well as discrete bound states, representing the hydrogen atom. It can also be derived within the non-relativistic limit between two charged particles, as follows: Under Born approximation, in non-relativistic quantum mechanics, the scattering amplitude is: This is to be compared to the: where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with Comparing with the QM scattering, we have to discard the as they arise due to differing normalizations of momentum eigenstate in QFT compared to QM and obtain: where Fourier transforming both sides, solving the integral and taking at the end will yield as the Coulomb potential. However, the equivalent results of the classical Born derivations for the Coulomb problem are thought to be strictly accidental. The Coulomb potential, and its derivation, can be seen as a special case of the Yukawa potential, which is the case where the exchanged boson – the photon – has no rest mass. Verification It is possible to verify Coulomb's law with a simple experiment. Consider two small spheres of mass and same-sign charge , hanging from two ropes of negligible mass of length . The forces acting on each sphere are three: the weight , the rope tension and the electric force . In the equilibrium state: and Dividing () by (): Let be the distance between the charged spheres; the repulsion force between them , assuming Coulomb's law is correct, is equal to so: If we now discharge one of the spheres, and we put it in contact with the charged sphere, each one of them acquires a charge . In the equilibrium state, the distance between the charges will be and the repulsion force between them will be: We know that and: Dividing () by (), we get: Measuring the angles and and the distance between the charges and is sufficient to verify that the equality is true taking into account the experimental error. In practice, angles can be difficult to measure, so if the length of the ropes is sufficiently great, the angles will be small enough to make the following approximation: Using this approximation, the relationship () becomes the much simpler expression: In this way, the verification is limited to measuring the distance between the charges and checking that the division approximates the theoretical value. See also Biot–Savart law Darwin Lagrangian Electromagnetic force Gauss's law Method of image charges Molecular modelling Newton's law of universal gravitation, which uses a similar structure, but for mass instead of charge Static forces and virtual-particle exchange Casimir effect References Spavieri, G., Gillies, G. T., & Rodriguez, M. (2004). 
Physical implications of Coulomb’s Law. Metrologia, 41(5), S159–S170. doi:10.1088/0026-1394/41/5/s06 Related reading External links Coulomb's Law on Project PHYSNET Electricity and the Atom —a chapter from an online textbook A maze game for teaching Coulomb's law—a game created by the Molecular Workbench software Electric Charges, Polarization, Electric Force, Coulomb's Law Walter Lewin, 8.02 Electricity and Magnetism, Spring 2002: Lecture 1 (video). MIT OpenCourseWare. License: Creative Commons Attribution-Noncommercial-Share Alike. Electromagnetism Electrostatics Eponymous laws of physics Force Scientific laws
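The vector form of the law and the superposition principle translate directly into code. The following Python sketch assumes SI units and an approximate value of the Coulomb constant; the charge layout is invented for illustration.

```python
import math

K_E = 8.9875e9  # Coulomb constant, N m^2 / C^2 (approximate)

def coulomb_force(q1, r1, q2, r2):
    """Force on charge q1 at r1 due to q2 at r2 (positions as 3-tuples)."""
    d = tuple(a - b for a, b in zip(r1, r2))      # displacement r1 - r2
    dist = math.sqrt(sum(c * c for c in d))
    scale = K_E * q1 * q2 / dist**3               # k q1 q2 / r^2, times d/r
    return tuple(scale * c for c in d)

def net_force(q, r, sources):
    """Superposition: vector sum of pairwise Coulomb forces on (q, r)."""
    forces = [coulomb_force(q, r, qi, ri) for qi, ri in sources]
    return tuple(sum(c) for c in zip(*forces))

# Two like 1 uC charges 1 m apart repel with ~8.99e-3 N:
print(coulomb_force(1e-6, (0, 0, 0), 1e-6, (1, 0, 0)))  # ~(-8.99e-3, 0, 0)
print(net_force(1e-6, (0, 0, 0), [(1e-6, (1, 0, 0)), (-1e-6, (0, 1, 0))]))
```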
Coulomb's law
[ "Physics", "Mathematics" ]
3,739
[ "Physical phenomena", "Force", "Electromagnetism", "Physical quantities", "Quantity", "Mass", "Mathematical objects", "Classical mechanics", "Equations", "Scientific laws", "Fundamental interactions", "Wikipedia categories named after physical quantities", "Matter" ]
26,292,477
https://en.wikipedia.org/wiki/Krichevsky%E2%80%93Trofimov%20estimator
In information theory, given an unknown stationary source with alphabet A and a sample w from the source, the Krichevsky–Trofimov (KT) estimator produces an estimate pi(w) of the probability of each symbol i ∈ A. This estimator is optimal in the sense that it minimizes the worst-case regret asymptotically. For a binary alphabet and a string w with m zeroes and n ones, the KT estimator pi(w) is defined as: $p_0(w) = \frac{m + 1/2}{m + n + 1}, \qquad p_1(w) = \frac{n + 1/2}{m + n + 1}.$ This corresponds to the posterior mean of a Beta–Bernoulli posterior distribution with prior $\mathrm{Beta}(1/2, 1/2)$. For the general case the estimate is made using a Dirichlet–Categorical distribution. See also Rule of succession Bayesian inference using conjugate priors for the categorical distribution References Information theory Data compression
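A minimal Python sketch of the binary estimator, added for illustration; it follows the add-one-half form given above.

```python
def kt_predict(m_zeros: int, n_ones: int) -> float:
    """P(next symbol = 1) under the Krichevsky-Trofimov estimator."""
    return (n_ones + 0.5) / (m_zeros + n_ones + 1.0)

def kt_sequence_probability(w: str) -> float:
    """KT probability of a whole 0/1 string, multiplying the predictions."""
    p, m, n = 1.0, 0, 0
    for bit in w:
        p_one = kt_predict(m, n)
        p *= p_one if bit == "1" else (1.0 - p_one)
        if bit == "1":
            n += 1
        else:
            m += 1
    return p

print(kt_predict(3, 1))                 # 0.3
print(kt_sequence_probability("0011"))  # 3/128 = 0.0234375
```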
Krichevsky–Trofimov estimator
[ "Mathematics", "Technology", "Engineering" ]
166
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
26,294,679
https://en.wikipedia.org/wiki/Mastic%20%28plant%20resin%29
Mastic is a resin obtained from the mastic tree (Pistacia lentiscus). It is also known as tears of Chios, being traditionally produced on the island Chios, and, like other natural resins, is produced in "tears" or droplets. Mastic is excreted by the resin glands of certain trees and dries into pieces of brittle, translucent resin. When chewed, the resin softens and becomes a bright white and opaque gum. The flavor is bitter at first, but after some chewing, it releases a refreshing flavor similar to pine and cedar. History Chios mastic gum has been used as a traditional medicine over the last 2,500 years. The word mastic is derived indirectly from the Greek name of the resin, which may be related to the Greek verb meaning "to gnash the teeth", the root that also underlies the English word masticate. The first mention of actual mastic 'tears' was by Hippocrates. Hippocrates used mastic for the prevention of digestive problems and colds, and as a breath freshener. Romans used mastic along with honey, pepper, and egg in the spiced wine conditum paradoxum. Under the Byzantine Empire, the mastic trade became the Emperor's monopoly. In the Ottoman Empire, the Sultan gathered the finest mastic crop to send to his harem. During the Ottoman rule of Chios, mastic was worth its weight in gold. The penalty for stealing mastic was execution by order of the sultans. In the Chios Massacre of 1822, the people of the Mastichochoria region were spared by the sultan to provide mastic to him and his harem. Sakız Adası, the Turkish name for the island of Chios, means 'gum island'. The mastic villages are fortress-like, out of sight from the sea, surrounded by high walls and with no doors at street level (meaning that the villages were entered only by ladders), in order to protect the sap from invaders. Although the liqueur is much younger, it is still tied up with Greek history. Digestive liqueurs, similar to Mastichato (Mastika), but made with grapes, were known as Greek elixirs before the French Revolution. The production of mastic was threatened by the Chios forest fire that destroyed some mastic groves in August 2012. Cultivation Producing the mastic resin is a whole-year process for the local growers. The harvest is known as kentos and takes place from the beginning of July to the beginning of October. First, the area around the trees is cleared and sprinkled with inert calcium carbonate. Then, every 4–5 days, 5–10 incisions are made in the bark of each tree to release the resin. As these clear drops hang from the tree and sparkle in the sunlight, they are said to resemble crystalline teardrops; for this reason, the mastic resin is known as the "tears of Chios". It takes about 15–20 days for the first resin crystals to harden and fall to the ground. The farmers then collect the pieces of dry mastic and wash them in natural spring water, and spend most of the winter cleaning and separating the tears from the sand. This cleaning process is performed by hand and is regulated by the legislative framework of the Mastic Growers' Association. In addition to mastic, mastic oil is also produced. Mastichochoria There are twenty-four mastichochoria, or mastic villages, on the island of Chios dedicated to the cultivation and production of mastic. The designation "Masticha Chiou" ("Khios mastic") is protected by a European Union protected designation of origin (PDO). The island's mastic production is controlled by a co-operative.
Founded in 1938, the Chios Gum Mastic Growers Association, abbreviated CGMGA, is a secondary cooperative organisation and acts as the collective representative organ of twenty primary cooperatives founded in the twenty-four mastic villages. It has the exclusive management of natural Chios mastiha in Greece and abroad. The Chios Mastic Museum offers a permanent exhibition about mastic production on the island, explaining its history and cultivation techniques as well as demonstrating its different uses today. Turkey Traditionally there has also been limited production of mastic on the Çeşme peninsula, on the Turkish coast eight nautical miles from Chios, which has similar ecological conditions suitable for mastic production. The Turkish Foundation for Combating Soil Erosion, for Reforestation and the Protection of Natural Habitats (TEMA) has led an effort to protect the native Turkish mastic trees and to plant new ones on the Çeşme peninsula to revive viable commercial production of the product. As part of this project, which was expected to last through 2016, over 3,000 mastic tree saplings were planted between 2008 and October 2011 on over 368 acres (149 hectares) of dedicated farm land provided by the Izmir Institute of Technology. Uses Culinary In the Eastern Mediterranean, mastic is commonly used in brioches, ice cream, and other desserts. In Syria and Palestine, mastic is added to booza (Levantine ice cream), and in Turkey, mastic is widely used in desserts such as Turkish delight and dondurma, in puddings such as sütlaç, salep, tavuk göğsü and mamelika, and in soft drinks. Mastic syrup is added to Turkish coffee on the Aegean coast. In Greece, mastic is used in liqueurs such as Mastika (or Mastichato), in a spoon sweet known as a "submarine", in beverages, chewing gum, sweets, desserts, breads and cheese. It is also used to stabilise loukoumi and ice cream. In the Maghreb, mastic is used mainly in cakes, sweets, and pastries and as a stabilizer in meringue and nougat. In Morocco, mastic is used in the preparation of smoked foods. One of the earliest uses of mastic was as chewing gum. Mastic (מסטיק) is the colloquial Hebrew word for chewing gum. In religion Some scholars identify the bakha mentioned in the Bible with the mastic plant. Bakha appears to be derived from a Hebrew root meaning 'weeping', and is thought to refer to the "tears" of resin secreted by the mastic plant. Ancient Jewish halachic sources indicate mastic as a treatment for bad breath: "Mastic is not chewed on Shabbat. When [is it forbidden to chew mastic on Shabbat]? When the intention is medicinal. If it is used for bad breath, it is permissible." Mastic is an essential ingredient of chrism, the holy oil used for anointing by the Eastern Orthodox Churches. Medicinal Traditional use Ancient Greek physicians such as Hippocrates, Dioscorides, Galenus, and Theophrastus recommended it for a range of gastrointestinal disorders. During the 15th century, Andrés Laguna, a prominent Spanish physician and botanist, utilized mastic gum to treat pyorrhea and advocated its use in dental care formulations, including infusions and concoctions for toothpaste and breath fresheners. He also recommended the use of the tree's twigs as toothpicks. Beyond its oral health applications, mastic gum was applied as a beauty enhancer for the skin and used to alleviate menstrual discomfort. It was also utilized to mask the unpleasant odors associated with chronic mercury exposure. 
Current research In February 2016, the European Medicines Agency (EMA) published its final assessment of Pistacia lentiscus L. resin. The EMA concluded that the available clinical studies, though numerous, were too small and methodologically weak to support a "well-established use" designation for mastic resin. These studies primarily investigated its oral (as a sole agent) and cutaneous applications (in combination with other products). Despite these shortcomings, the EMA found that these studies did not raise any significant safety concerns, thus supporting the traditional use of mastic. The assessment highlighted that mastic has been part of traditional and folk medicine for more than 30 years in several countries such as Iraq, Turkey, Japan, South Korea, the United States, and particularly, within the European Union, in Greece. Considering this long-standing use, the EMA deemed the requirements for traditional medicinal products according to Directive 2001/83/EC to be fulfilled for the medicinal use of powdered mastic. The EMA reports also note the antimicrobial activity of mastic in non-clinical in vitro studies and its particular effectiveness against Helicobacter pylori. Based on these findings, the EMA approved the use of powdered mastic as a traditional herbal medicinal product for two indications: the treatment of mild dyspeptic disorders in adults and the elderly, and the symptomatic treatment of minor skin inflammations and as an aid in healing minor wounds. The agency stipulated that, due to the lack of sufficient data, the use of mastic in children and during pregnancy and lactation is not recommended. Other uses Mastic is used in some varnishes. Mastic varnish was used to protect and preserve photographic negatives. Mastic is also used in perfumes, cosmetics, soap, body oils, and body lotion. In ancient Egypt, mastic was used in embalming. In its hardened form, mastic can be used, like frankincense or Boswellia resin, to produce incense. See also Gum arabic Mastika (liqueur with mastic aroma) Megilp (art medium) References External links "Can This Ancient Greek Medicine Cure Humanity?"—Opinion piece in The New York Times Chios Greek cuisine Greek products with protected designation of origin Greek words and phrases Incense material Medicinal plants Mediterranean cuisine Natural gums Resins Spices Tree tapping
Mastic (plant resin)
[ "Physics" ]
2,033
[ "Resins", "Unsolved problems in physics", "Incense material", "Materials", "Amorphous solids", "Matter" ]
26,294,691
https://en.wikipedia.org/wiki/Water%20activity
In food science, water activity (aw) of a food is the ratio of its vapor pressure to the vapor pressure of pure water at the same temperature, both taken at equilibrium. Pure water has a water activity of one. Put another way, aw is the equilibrium relative humidity (ERH) expressed as a fraction instead of as a percentage. As temperature increases, aw typically increases, except in some products with crystalline salt or sugar. Water migrates from areas of high aw to areas of low aw. For example, if honey (aw ≈ 0.6) is exposed to humid air (aw ≈ 0.7), the honey absorbs water from the air. If salami (aw ≈ 0.87) is exposed to dry air (aw ≈ 0.5), the salami dries out, which could preserve it or spoil it. Substances with lower aw tend to support fewer microorganisms, since these are desiccated by the water migration. Water activity is not simply a function of water concentration in food. The water in food has a tendency to evaporate, but the water vapor in the surrounding air has a tendency to condense into the food. When the two tendencies are in balance, and the air and food are stable, the air's relative humidity (expressed as a fraction instead of as a percentage) is taken to be the water activity, aw. Thus, water activity is both the thermodynamic activity of water as solvent and the relative humidity of the surrounding air at equilibrium. Formula Water activity is defined as aw = p/p0, where p is the partial water vapor pressure in equilibrium with the solution, and p0 is the (partial) vapor pressure of pure water at the same temperature. An alternate definition is aw = γw·xw, where γw is the activity coefficient of water and xw is the mole fraction of water in the aqueous fraction. Relationship to relative humidity: the relative humidity (RH) of air in equilibrium with a sample is also called the equilibrium relative humidity (ERH) and is usually given as a percentage; it equals the water activity according to RH (%) = 100 × aw. The estimated mold-free shelf life (MFSL) in days at 21 °C can be predicted from water activity by an empirical exponential relationship. Uses Water activity is an important characteristic for food product design and food safety. Food product design Food designers use water activity to formulate shelf-stable food. If a product is kept below a certain water activity, then mold growth is inhibited. This results in a longer shelf life. Water activity values can also help limit moisture migration within a food product made with different ingredients. If raisins of a higher water activity are packaged with bran flakes of a lower water activity, the water from the raisins migrates to the bran flakes over time, making the raisins hard and the bran flakes soggy. Food formulators use water activity to predict how much moisture migration affects their product. Food safety Water activity is used in many cases as a critical control point for Hazard Analysis and Critical Control Points (HACCP) programs. Samples of the food product are periodically taken from the production area and tested to ensure water activity values are within a specified range for food quality and safety. Measurements can be made in as little as five minutes, and are made regularly in most major food production facilities. For many years, researchers tried to equate bacterial growth potential with water content. They found that the values were not universal, but specific to each food product. W. J. Scott first established in 1953 that bacterial growth correlates with water activity, not water content. It is firmly established that growth of bacteria is inhibited at specific water activity values.
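The definitions above reduce to a few one-line computations. A minimal sketch follows (the helper names are illustrative, not from the source; the honey and salami figures are the ones quoted earlier):

```python
# Illustrative helpers (function names are not from the source) for the
# definitions above; the honey and salami figures are the ones quoted earlier.

def water_activity(p_vapor, p_pure):
    """a_w = p / p0, both vapor pressures taken at the same temperature."""
    return p_vapor / p_pure

def erh_percent(a_w):
    """Equilibrium relative humidity in percent: RH = 100 * a_w."""
    return 100.0 * a_w

def migration_direction(a_w_food, rh_air_percent):
    """Water migrates from high a_w to low a_w."""
    a_air = rh_air_percent / 100.0
    if a_w_food < a_air:
        return "food absorbs water from the air"
    if a_w_food > a_air:
        return "food dries out"
    return "equilibrium"

print(migration_direction(0.6, 70))   # honey in humid air: absorbs water
print(migration_direction(0.87, 50))  # salami in dry air: dries out
```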
The U.S. Food and Drug Administration (FDA) regulations for intermediate-moisture foods are based on these values. Lowering the water activity of a food product should not be seen as a kill step. Studies in powdered milk show that viable cells can exist at much lower water activity values, but that they never grow. Over time, bacterial levels decline. Measurement Water activity values are obtained with a resistive electrolytic hygrometer, a capacitance hygrometer, or a dew point hygrometer. Resistive electrolytic hygrometers Resistive electrolytic hygrometers use a sensing element in the form of a liquid electrolyte held between two small glass rods by capillary force. The electrolyte changes resistance as it absorbs or loses water vapor. The resistance is directly proportional to the relative humidity of the air, and therefore also to the water activity of the sample once vapor–liquid equilibrium is established. This relation can be checked by verification or calibration using saturated salt-water mixtures, which provide a well-defined and reproducible air humidity in the measurement chamber. The sensor has none of the inherent hysteresis known from capacitance hygrometers, and it does not require regular cleaning, because its surface is not the actual sensing element. Volatiles can, in principle, influence the measurement, especially those that dissociate in the electrolyte and thereby change its resistance. Such influences can be avoided by using chemical protection filters that absorb the volatile compounds before they reach the sensor. Capacitance hygrometers Capacitance hygrometers consist of two charged plates separated by a polymer membrane dielectric. As the membrane adsorbs water, its ability to hold a charge increases, and the capacitance is measured. This value is roughly proportional to the water activity, as determined by a sensor-specific calibration. Capacitance hygrometers are not affected by most volatile chemicals and can be much smaller than the alternative sensors. They do not require cleaning, but are less accurate than dew point hygrometers (±0.015 aw). They should have regular calibration checks and can be affected by residual water in the polymer membrane (hysteresis). Dew point hygrometers The temperature at which dew forms on a clean surface is directly related to the vapor pressure of the air. Dew point hygrometers work by placing a mirror over a closed sample chamber. The mirror is cooled until the dew point temperature is detected by means of an optical sensor. This temperature is then used to find the relative humidity of the chamber using psychrometric charts. This method is theoretically the most accurate (±0.003 aw) and often the fastest. The sensor requires cleaning if debris accumulates on the mirror. Equilibration With either method, vapor–liquid equilibrium must be established in the sample chamber. This takes place over time, or it can be aided by the addition of a fan in the chamber. Thermal equilibrium must also be achieved, unless the sample temperature is measured. Moisture content Water activity is related to water content in a non-linear relationship known as a moisture sorption isotherm curve. These isotherms are substance- and temperature-specific. Isotherms can be used to help predict product stability over time in different storage conditions. Use in humidity control There is net evaporation from a solution with a water activity greater than the relative humidity of its surroundings. 
There is net absorption of water by a solution with a water activity less than the relative humidity of its surroundings. Therefore, in an enclosed space, an aqueous solution can be used to regulate humidity. Selected aw values Solar planets habitability Water is necessary for all forms of life presently known on Earth. Without water, microbial activity is not possible. Even if some micro-organisms can be preserved in the dry state (e.g., after freeze-drying), their growth is not possible without water. Micro-organisms also require sufficient space to develop. In highly compacted bentonite and deep clay formations, microbial activity is limited by the lack of space, and the transport of nutrients towards bacteria and the elimination of toxins produced by their metabolism are controlled by diffusion in the pore water. So, space and water restrictions are two limiting factors of microbial activity in deep sediments. Early biotic diagenesis of sediments just below the ocean floor, driven by microbial activity (e.g., of sulfate-reducing bacteria), ends when the degree of compaction becomes too great to allow the development of microbial life. At the surface of planets and in their atmospheres, space restrictions do not apply; therefore, the ultimate limiting factor is water availability and thus the water activity. Most extremophile micro-organisms require sufficient water to be active. The threshold of water activity for their development is around 0.6. The same rule should also apply to planets other than Earth. After the tantalizing detection of phosphine (PH3) in the atmosphere of Venus, and in the absence of a known and plausible chemical mechanism to explain the formation of this molecule, the presence of micro-organisms in suspension in Venus's atmosphere was suspected, and the hypothesis of the microbial formation of phosphine was formulated by Greaves et al. (2020) from Cardiff University, who envisaged the possibility of a liveable window in the Venusian clouds at an altitude with an acceptable temperature range for microbial life. Hallsworth et al. (2021) from the School of Biological Sciences at Queen's University Belfast have studied the conditions required to support the life of extremophile micro-organisms in the high-altitude clouds of the Venus atmosphere, where favorable temperature conditions might prevail. Besides the presence of sulfuric acid in the clouds, which already represents a major challenge for the survival of most micro-organisms, they came to the conclusion that the atmosphere of Venus is much too dry to host microbial life. Indeed, Hallsworth et al. (2021) determined a water activity of ≤ 0.004, two orders of magnitude below the 0.585 limit for known extremophiles. So, with a water activity in the Venus clouds 100 times lower than the threshold of 0.6 known from Earth conditions, the hypothesis envisaged by Greaves et al. (2020) to explain a biotic origin of phosphine in the Venus atmosphere is ruled out. Direct measurements of the Venusian atmosphere by space probes point to very harsh conditions, likely making Venus an uninhabitable world, even for the most extreme forms of life known on Earth. The extremely low water activity of the desiccated Venusian atmosphere represents the limiting factor for life there, much more severe than the infernal conditions of temperature and pressure or the presence of sulfuric acid. 
Astrobiologists presently consider that more favorable conditions could be encountered in the clouds of Jupiter, where a sufficient water activity could prevail in the atmosphere provided that other conditions necessary for life are also met in the same environment (a sufficient supply of nutrients and energy in a non-toxic medium). References Further reading External links Isotopic effect Measurement Why to measure water activity?, Syntilab How to measure water activity?, Syntilab Food science Thermodynamic properties Water
Water activity
[ "Physics", "Chemistry", "Mathematics", "Environmental_science" ]
2,269
[ "Thermodynamic properties", "Hydrology", "Physical quantities", "Quantity", "Thermodynamics", "Water" ]
26,295,401
https://en.wikipedia.org/wiki/Angstr%C3%B6mquelle%20Karlsruhe
ANKA (abbreviation for „Angströmquelle Karlsruhe“) is a synchrotron light source facility at the Karlsruhe Institute of Technology (KIT) in Karlsruhe, Germany. The KIT runs ANKA as a national synchrotron light source and as a large-scale user facility for the international science community. As a large-scale machine of the performance category LK II of the Helmholtz Association of German Research Centres, ANKA is part of a national and European infrastructure offering research services to scientific and commercial users for their purposes in research and development. The facility was opened to external users in 2003. History In 1997, the decision was made to build the large-scale facility ANKA on the premises of the former Research Center Karlsruhe. By the end of 1998, the outer structure of ANKA had been erected, and in 1999 the first electrons were injected into the storage ring. After a few more years of machine and laboratory development, ANKA opened its doors in March 2003 to users from the science community and industry, initially featuring seven beamlines: six analytical beamlines and one for the generation of microstructures using X-ray lithography. Since then, further improvements and extensions have been continuously developed and implemented: currently 15 beamlines are in operation, and three more are in the process of installation. The machine itself saw the implementation of several updated generations of its insertion devices (undulators and wigglers), which were in part developed at ANKA. Moreover, a fully developed infrastructure supports users at ANKA; for example, fully equipped user apartments on the premises of the KIT Campus North can be booked by external ANKA users. Technical details ANKA features a storage ring with a circumference of 110.4 m (120.7 yards) that stores electrons at an energy of 2.5 GeV. For this purpose, electrons (90 keV) are generated by a triode gun and pre-accelerated to 500 MeV via a racetrack microtron (53 MeV) and a booster. The actual working energy is finally reached in the storage ring, where the electrons circulate at almost the speed of light. The storage ring contains an ultra-high vacuum of 10⁻⁹ mbar. The synchrotron light is generated by the 16 bending magnets that constantly deflect the electrons and keep them on a closed orbit at the center of the vacuum chamber. In addition, wigglers and undulators (specialized magnet arrays with alternating polarity) are used to deflect the electrons onto a sinusoidal course along which they emit synchrotron radiation. A special feature of the ANKA synchrotron configuration is the superconducting SCU15 undulator, which, like its predecessor SCU14, was co-developed at the ANKA facility. This new undulator not only generates synchrotron light of enhanced brilliance, but also offers a much more variable radiation spectrum that is easily adjustable to the respective research requirements. 
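The "almost the speed of light" statement can be made quantitative with a back-of-the-envelope calculation. The sketch below is not from ANKA documentation; it uses only the 2.5 GeV beam energy quoted above and the electron rest energy (~0.511 MeV):

```python
# Back-of-the-envelope check, not from ANKA documentation, that 2.5 GeV
# electrons indeed circulate at "almost the speed of light". Only the beam
# energy quoted above and the electron rest energy (~0.511 MeV) are used.
E_beam_GeV = 2.5
m_e_GeV = 0.000511                      # electron rest energy

gamma = E_beam_GeV / m_e_GeV            # Lorentz factor, about 4.9e3
beta = (1.0 - 1.0 / gamma**2) ** 0.5    # v/c
print(f"gamma = {gamma:.0f}, v/c = {beta:.10f}")
# gamma = 4892, v/c = 0.9999999791
```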
ANKA beamlines and their applications

Imaging methods
- IMAGE: use of X-rays for 2D and 3D imaging procedures, static as well as dynamic (under installation)
- MPI-MF: operated by the Max Planck Institute for Intelligent Systems, specialized in in-situ analyses of interfaces and thin films
- NANO: high-resolution in-situ X-ray diffraction (in the final phase of installation)
- PDIFF: analysis using Debye–Scherrer powder diffraction (examination and identification of crystalline substances in powdered samples)
- SCD: analysis of X-ray diffraction on single crystals
- TOPO-TOMO: topography, microradiology and microtomography using polychromatic light and X-rays

Spectroscopy
- FLUO: X-ray fluorescence spectroscopy; non-destructive qualitative and quantitative identification of the elemental composition of a sample
- INE: installed and operated by the KIT Institute for Nuclear Waste Disposal for actinide research
- IR1: infrared spectroscopy and infrared ellipsometry, including terahertz radiation
- IR2: infrared spectroscopy and infrared microscopy, including terahertz radiation
- SUL-X: absorption, fluorescence and diffraction analysis as part of the synchrotron environmental laboratory
- UV-CD12: operated by the KIT Institute for Biological Interfaces; UV circular dichroism spectroscopy (structural analysis of biological substances)
- WERA: soft X-ray analysis, operated by the KIT Institute for Solid-State Physics
- XAS: X-ray absorption spectroscopy, XANES (chemical composition of a sample) and EXAFS (number, distance and type of neighboring atoms, also in non-crystalline form)

Microfabrication
- LIGA I, II, III: deep X-ray lithography according to the LIGA process developed at the KIT; the three beamlines differ in the available energy range

Access for scientific users Besides the scientists at ANKA and IPS who contribute to the development of the synchrotron and its components, external users in particular have the opportunity to use the radiation generated at ANKA for their own research projects. Users from the international science community are coordinated by ANKA's user office. Twice a year, proposals for beamtime at ANKA are collected via an online application procedure. The actual beamtime is then allocated by an international scientific committee that evaluates the submitted proposals. On the premises of KIT Campus North, a guest house provides accommodation for external users for the duration of their project at ANKA. More information on the allocation of beamtime can be found on the web pages of the user office. External links Official Website User office Website ANKA Commercial Services Synchrotron radiation facilities
Angströmquelle Karlsruhe
[ "Materials_science" ]
1,156
[ "Materials testing", "Synchrotron radiation facilities" ]
26,296,965
https://en.wikipedia.org/wiki/Chandra%E2%80%93Toueg%20consensus%20algorithm
The Chandra–Toueg consensus algorithm, published by Tushar Deepak Chandra and Sam Toueg in 1996, is an algorithm for solving consensus in a network of unreliable processes equipped with an eventually strong failure detector. The failure detector is an abstract version of timeouts; it signals to each process when other processes may have crashed. An eventually strong failure detector is one that, after some initial period of confusion, never identifies some specific non-faulty process as having failed, and, at the same time, eventually identifies all faulty processes as failed (where a faulty process is a process which eventually fails or crashes and a non-faulty process never fails). The Chandra–Toueg consensus algorithm assumes that the number of faulty processes, denoted by f, is less than n/2 (i.e., a minority): it assumes f < n/2, where n is the total number of processes. The algorithm The algorithm proceeds in rounds and uses a rotating coordinator: in each round r, the process whose identity is given by r mod n is chosen as the coordinator. Each process keeps track of its current preferred decision value (initially equal to the input of the process) and the last round in which it changed its decision value (the value's timestamp). The actions carried out in each round are: All processes send (r, preference, timestamp) to the coordinator. The coordinator waits to receive messages from at least half of the processes (including itself). It then chooses as its preference a value with the most recent timestamp among those sent. The coordinator sends (r, preference) to all processes. Each process waits (1) to receive (r, preference) from the coordinator, or (2) for its failure detector to identify the coordinator as crashed. In the first case, it sets its own preference to the coordinator's preference and responds with ack(r). In the second case, it sends nack(r) to the coordinator. The coordinator waits to receive ack(r) or nack(r) from a majority of processes. If it receives ack(r) from a majority, it sends decide(preference) to all processes. Any process that receives decide(preference) for the first time relays decide(preference) to all processes, then decides preference and terminates. Note that this algorithm is used to decide on only one value. Correctness Problem definition An algorithm which "solves" the consensus problem must ensure the following properties: termination: all processes decide on a value; agreement: all processes decide on the same value; and validity: all processes decide on a value that was some process's input value. Assumptions Before arguing that the Chandra–Toueg consensus algorithm satisfies the three properties above, recall that this algorithm requires n = 2f + 1 processes, at most f of which are faulty. Furthermore, note that this algorithm assumes the existence of an eventually strong failure detector (which is accessible and can be used to detect the crash of a node). An eventually strong failure detector is one that, after some initial period of confusion, never identifies some specific non-faulty (or correct) process as having failed, and, at the same time, eventually identifies all faulty processes as failed. Proof of correctness Termination holds because eventually the failure detector stops suspecting some non-faulty process p and eventually p becomes the coordinator. If the algorithm has not terminated before this occurs in some round r, then every non-faulty process in round r waits to receive p's preference and responds with ack(r). 
This allows p to collect enough acknowledgments to send decide(preference), causing every process to terminate. Alternatively, it may be that some faulty coordinator sends decide to only a few processes; but if any of these processes is non-faulty, it broadcasts the decision to all the remaining processes, causing them to decide and terminate. Validity follows from the fact that every preference starts out as some process's input; there is nothing in the protocol that generates new preferences. Agreement is potentially the most difficult property to achieve. It could conceivably happen that a coordinator, in one round r, sends a decide message for some value v that propagates to only a few processes before some other coordinator, in a later round r', sends a decide message for some other value v'. To show that this does not occur, observe that before the first coordinator can send decide(v), it must have received ack(r) from a majority of processes; but then, when any later coordinator polls a majority of processes, that later majority will overlap the earlier one, and v will be the most recent value. So any two coordinators that send out decide messages send out the same value. References Chandra, Tushar Deepak; Toueg, Sam. "Unreliable failure detectors for reliable distributed systems." Journal of the ACM 43(2):225–267, 1996. Course notes at Yale on failure detectors. Distributed algorithms Fault-tolerant computer systems
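The round structure described above can be summarized in a short, single-threaded sketch. This is an illustration only: a real deployment needs actual message passing, crash handling, and the failure detector, all of which are abstracted away here, and every name is illustrative rather than taken from a reference implementation.

```python
# Minimal single-threaded sketch of one Chandra-Toueg round per iteration.
# Crashes, nack(r) handling, and the failure detector are abstracted away.

def chandra_toueg_rounds(inputs, max_rounds=10):
    n = len(inputs)
    # Each process tracks its preference and the round it last changed it (timestamp).
    state = [{"pref": v, "ts": 0} for v in inputs]

    for r in range(max_rounds):
        coord = r % n  # rotating coordinator for round r

        # Phase 1: every process sends (r, preference, timestamp) to the
        # coordinator, which waits for messages from at least half of them.
        received = [(state[p]["pref"], state[p]["ts"]) for p in range(n)][: n // 2 + 1]

        # The coordinator adopts a preference carrying the most recent timestamp.
        pref = max(received, key=lambda m: m[1])[0]

        # Phases 2-3: the coordinator broadcasts (r, pref); with no crashes or
        # false suspicions, every process adopts it and answers ack(r).
        acks = 0
        for p in range(n):
            state[p] = {"pref": pref, "ts": r + 1}
            acks += 1

        # Phase 4: on ack(r) from a majority, the coordinator sends
        # decide(pref), which processes relay and then act on.
        if acks > n // 2:
            return [pref] * n

    return [None] * n

print(chandra_toueg_rounds([0, 1, 1]))  # -> [0, 0, 0]; the round-0 coordinator
# (process 0) wins the timestamp tie with its own input, satisfying validity.
```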
Chandra–Toueg consensus algorithm
[ "Technology", "Engineering" ]
1,018
[ "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
26,298,245
https://en.wikipedia.org/wiki/Safe%20household%20water%20storage
Safe household water storage is a critical component of a Household Water Treatment and Safe Storage (HWTS) system being promoted by the World Health Organization (WHO) worldwide in areas that do not have piped drinking water. In these areas, it is not uncommon for drinking water to be stored in a pot, jar, crock or other container in the home. Even if this drinking water was of acceptable microbiological quality initially, it can become contaminated by dirty hands and utensils, such as dirty dippers and cups. Drinking water containers with "narrow dispensers are key" to keeping water from being contaminated while being stored in the home. All types of safe household water storage must be used with water from known clean sources or with water having received prior efficacious treatment. Examples of containers Solar Cookers International (SCI) has incorporated the safe household water storage container in its water pasteurization programs in Kenya. The containers are part of a safe water package that consists of a CooKit solar cooker, a black pot, a Water Pasteurization Indicator (WAPI), and a safe household water storage container. The containers are handmade out of clay by local artisans. Their design incorporates a small opening at the top to help prevent children from dipping cups and possibly dirty hands into the drinking water. There is a spigot at the bottom. "Unfortunately, the spigot is almost as expensive as the container itself." In total, each costs about KSh 450/=, or about US$6.00. The unglazed clay container helps to keep the water naturally somewhat cool in dry climates, because a very small amount of the water is absorbed by the container and then evaporates. Background The United Nations' Millennium Declaration, adopted by its General Assembly in September 2000, set Millennium Development Goals (MDG) with the purpose of significantly reducing the proportion of people in the world living in extreme poverty. Resolution 19 specifically states with respect to drinking water, "To halve, by the year 2015...the proportion of the world's people who are unable to reach or to afford safe drinking water". In 2009 the United Nations published The Millennium Development Goals Report, which states: "The world is well on its way to meeting the drinking water target, though some countries still face enormous challenges." One way that the World Health Organization (WHO) has supported the safe drinking water goal is with its Household Water Treatment and Safe Storage (HWTS) program, which targets people who are not connected to community water systems. Its website states that improved HWTS techniques can dramatically improve drinking water quality and reduce diarrhoeal diseases for those who must rely on unsafe water supplies. It notes that there are 1.6 million diarrhoeal deaths per year related to unsafe water, sanitation, and hygiene, mostly of children under 5 years old. See also Point-of-use water treatment, which discusses a variety of water treatment methods which may be used by households to improve water quality. Self-supply of water and sanitation UN-Water is a mechanism of the United Nations with the purpose of supporting water-related efforts. References External links World Health Organization (WHO): Household Water Treatment and Safe Storage (HWTS) program. WHO/SDE/WSH/02.07 report: Managing water in the home: accelerated health gains from improved water supply prepared by Professor Mark D. 
Sobsey, School of Public Health, University of North Carolina, Chapel Hill, North Carolina, USA, report covers many aspects of a HWTS system including storage, treatment, social/economic aspects, and monitoring. The portion of the above report most pertinent to this article is Chapter 4: Storage and treatment of household water. WHO: Other related drinking water quality programs. World Health Organization United Nations Development Programme Appropriate technology Water treatment Water supply infrastructure Drinking water Containers
Safe household water storage
[ "Chemistry", "Engineering", "Environmental_science" ]
793
[ "Water treatment", "Environmental engineering", "Water technology", "Water pollution" ]
28,149,071
https://en.wikipedia.org/wiki/Maxwell%27s%20thermodynamic%20surface
Maxwell’s thermodynamic surface is an 1874 sculpture made by Scottish physicist James Clerk Maxwell (1831–1879). The model provides a three-dimensional plot of the various states of a fictitious substance with water-like properties, with coordinates volume (x), entropy (y), and energy (z). It was based on the American scientist Josiah Willard Gibbs' graphical thermodynamics papers of 1873. The model, in Maxwell's words, allowed "the principal features of known substances [to] be represented on a convenient scale." Construction of the model Gibbs' papers defined what Gibbs called the "thermodynamic surface," which expressed the relationship between the volume, entropy, and energy of a substance at different temperatures and pressures. However, Gibbs did not include any diagrams of this surface. After receiving reprints of Gibbs' papers, Maxwell recognized the insight afforded by Gibbs' new point of view and set about constructing physical three-dimensional models of the surface. This reflected Maxwell's talent as a strong visual thinker and prefigured modern scientific visualization techniques. Maxwell sculpted the original model in clay and made several plaster casts of it, sending one to Gibbs as a gift and keeping two in his laboratory at Cambridge University. Maxwell's copy is on display at the Cavendish Laboratory of Cambridge University, while Gibbs' copy is on display at the Sloane Physics Laboratory of Yale University, where Gibbs held a professorship. Two copies reside at the National Museum of Scotland, one via Peter Tait and the other via George Chrystal. Another was sent to Thomas Andrews. A number of historic photographs were taken of these plaster casts during the middle of the twentieth century – including one by James Pickands II, published in 1942 – and these photographs exposed a wider range of people to Maxwell's visualization approach. Uses of the model As explained by Gibbs and appreciated by Maxwell, the advantage of a U-V-S (energy-volume-entropy) surface over the usual P-V-T (pressure-volume-temperature) surface was that it made it possible to explain geometrically how sharp, discontinuous phase transitions emerge from a purely continuous and smooth state function U(S, V); Maxwell's surface demonstrated the generic behaviour for a substance that can exist in solid, liquid, and gaseous phases. The basic geometrical operation involved simply placing a tangent plane (such as a flat sheet of glass) on the surface and rolling it around, observing where it touches the surface. Using this operation, it was possible to explain phase coexistence and the triple point, to identify the boundary between absolutely stable and metastable phases (e.g., superheating and supercooling) and the spinodal boundary between metastable and unstable phases, and to illustrate the critical point. Maxwell drew lines of equal pressure (isopiestics) and of equal temperature (isothermals) on his plaster cast by placing it in the sunlight, and "tracing the curve when the rays just grazed the surface." He sent sketches of these lines to a number of colleagues. For example, his letter to Thomas Andrews of 15 July 1875 included sketches of these lines. Maxwell provided a more detailed explanation and a clearer drawing of the lines in the revised version of his book Theory of Heat, and a version of this drawing appeared on a 2005 US postage stamp in honour of Gibbs. 
As well as being on display in two countries, Maxwell's model lives on in the literature of thermodynamics, and books on the subject often mention it, though not always with complete historical accuracy. For example, the thermodynamic surface represented by the sculpture is often reported to be that of water, contrary to Maxwell's own statement. Related models Maxwell's model was not the first plaster model of a thermodynamic surface: in 1871, even before Gibbs' papers, James Thomson had constructed a plaster pressure-volume-temperature plot, based on data for carbon dioxide collected by Thomas Andrews. Around 1900, the Dutch scientist Heike Kamerlingh Onnes, together with his student Johannes Petrus Kuenen and his assistant Zaalberg van Zelst, continued Maxwell's work by constructing their own plaster thermodynamic surface models. These models were based on accurate experimental data obtained in their laboratory, and were accompanied by specialised tools for drawing the lines of equal pressure. See also History of thermodynamics Process leading from Gibbs' drawings to Maxwell's thermodynamic surface References External links Photograph of one of the two Cambridge copies in the Museum at the Cavendish Laboratory; for better readable legends to go with the axes, see here Thermodynamic Case Study: Gibbs' Thermodynamic Graphical Method at Virginia Tech's Laboratory for Scientific Visual Analysis Maxwell’s thermodynamic surface at the "Encyclopedia of Human Thermodynamics" 1874 in science 1874 sculptures Historical scientific instruments Sculptures in England Sculptures in the United States Scottish sculpture Numerical function drawing Scientific models Thermodynamics Works by James Clerk Maxwell
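The tangent-plane construction described above has a simple one-dimensional analogue: on a fixed slice of the surface, the coexistence (common-tangent) region is where the energy curve lies above its lower convex envelope. The following sketch uses a toy double-well curve, not data from Maxwell's model, and all names and constants are illustrative:

```python
import numpy as np

# Toy, fixed-slice analogue of the tangent construction: where a non-convex
# "energy" curve lies above its lower convex envelope, the flat envelope
# segment is the common tangent joining two coexisting phases. The curve
# below is an illustrative double well, not data from Maxwell's model.
v = np.linspace(0.5, 4.0, 2000)
f = (v - 1.0)**2 * (v - 3.0)**2 + 0.1 * v

def lower_convex_envelope(x, y):
    """Return indices of the lower convex hull of the graph (monotone chain)."""
    hull = []
    for i in range(len(x)):
        # Pop the last hull point while it lies on or above the chord from
        # the point before it to the new point (a non-convex turn).
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (x[a] - x[o]) * (y[i] - y[o]) - (y[a] - y[o]) * (x[i] - x[o])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

hull = lower_convex_envelope(v, f)
# The coexistence (common-tangent) region shows up as the widest gap
# between consecutive hull points along the volume axis.
gaps = np.diff(v[hull])
j = int(np.argmax(gaps))
print(f"coexisting 'phases' near v = {v[hull][j]:.2f} and v = {v[hull][j + 1]:.2f}")
```

On the full U(S, V) surface the same idea applies with a rolling tangent plane instead of a tangent line, which is exactly the operation Maxwell performed with a sheet of glass.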
Maxwell's thermodynamic surface
[ "Physics", "Chemistry", "Mathematics" ]
1,046
[ "Thermodynamics", "Dynamical systems" ]
28,150,288
https://en.wikipedia.org/wiki/Peptide%20spectral%20library
A peptide spectral library is a curated, annotated and non-redundant collection/database of LC-MS/MS peptide spectra. One essential use of a peptide spectral library is to serve as a set of consensus templates supporting the identification of peptides and proteins, based on the correlation between the templates and experimental spectra. One potential application of peptide spectral libraries is the identification of new, currently unknown mass spectra. Here, the spectra from the library are compared to the new spectra, and if a match is found, the unknown spectrum can be assigned the identity of the known peptide in the library. Spectral libraries have been used for small-molecule mass spectrum identification since the 1980s. In the early years of shotgun proteomics, pioneering investigations suggested that a similar approach might be applicable in shotgun proteomics for peptide/protein identification. Shotgun proteomics Modern tandem mass spectrometry (MS) instruments combine a fast duty cycle, exquisite sensitivity, and unprecedented mass accuracy, making tandem mass spectrometry an ideal match for large-scale protein identification and quantification in complex biological systems. In a shotgun proteomics approach, proteins in a complex mixture are digested by proteolytic enzymes such as trypsin. Subsequently, one or more chromatographic separations are applied to resolve the resulting peptides, which are then ionized and analyzed in a mass spectrometer. To acquire tandem mass spectra, a particular peptide precursor is isolated and fragmented in a mass spectrometer; the mass spectrum corresponding to the fragments of the peptide precursor is recorded. Tandem mass spectra contain specific information regarding the sequence of the peptide precursor, which can aid the identification of the peptide/protein. Protein identification via sequence database searching Sequence database searching is currently widely used for mass-spectrum-based protein identification. In this approach, a protein sequence database is used to calculate all putative peptide candidates in the given setting (proteolytic enzymes, miscleavages, post-translational modifications). The sequence search engines use various heuristics to predict the fragmentation pattern of each peptide candidate. Such predicted patterns are used as templates to find a sufficiently close match within experimental mass spectra, which serves as the basis for peptide/protein identification. Many tools have been developed for this practice, which have enabled many past discoveries, e.g. SEQUEST, Mascot. Shortcomings of the sequence database searching workflow Due to the complex nature of peptide fragmentation in a mass spectrometer, predicted fragmentation patterns fall short of reproducing experimental mass spectra, especially the relative intensities among distinct fragments. Thus, sequence database searching suffers from limited specificity. Sequence database searching also demands a vast search space, which still cannot cover all possibilities of peptide variation (e.g., post-translational modifications), and therefore exhibits limited efficiency. The search process is sometimes slow and requires costly high-performance computers. In addition, the nature of sequence database searching disconnects research discoveries made by different groups or at different times. Advantages and limitations First, a greatly reduced search space will decrease the searching time. 
Second, by taking full advantage of all spectral features including relative fragment intensities, neutral losses from fragments and various additional specific fragments, the process of spectra searching will be more specific, and it will generally provide better discrimination between true and false matches. Spectral library searching is not applicable in a situation where the discovery of novel peptides or proteins is the goal. However, more and more high-quality mass spectra are being acquired by the collective contribution of the scientific community, which will continuously expand the coverage of peptide spectral libraries. Research community-focused libraries For a peptide spectral library, to reach a maximal coverage is a long-term goal, even with the support of scientific community and ever-growing proteomic technologies. However, the optimization for a particular module of the peptide spectra library is a more manageable goal, e.g. the proteins in a particular organelle or relevant to a particular biological phenotype. For example, a researcher studying the mitochondrial proteome will likely focus on analyses within protein modules within the mitochondria. The research community focused peptide spectral library supports targeted research in a comprehensive fashion for a particular research community. References External links COPa Library SpectraST Peptide therapeutics Mass spectrometry software
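The core comparison step in a spectral library search can be illustrated with a normalized dot product (cosine similarity) between binned spectra, the style of scoring used by library-search tools such as SpectraST. The sketch below is a minimal illustration; the bin width, square-root scaling, and peak lists are illustrative choices, not any specific engine's parameters.

```python
import numpy as np

# Bin two spectra onto a common m/z grid and score them with a normalized
# dot product (cosine similarity). All parameters here are illustrative.

def bin_spectrum(mz, intensity, bin_width=1.0, mz_max=2000.0):
    bins = np.zeros(int(mz_max / bin_width))
    for m, i in zip(mz, intensity):
        idx = int(m / bin_width)
        if idx < len(bins):
            bins[idx] += np.sqrt(i)   # square root damps dominant peaks
    return bins

def spectral_match(query, library):
    q, ref = bin_spectrum(*query), bin_spectrum(*library)
    denom = np.linalg.norm(q) * np.linalg.norm(ref)
    return float(q @ ref / denom) if denom else 0.0

library_entry = ([175.1, 263.1, 376.2, 489.3], [30.0, 100.0, 80.0, 45.0])
unknown = ([175.2, 263.0, 376.1, 489.2], [25.0, 90.0, 85.0, 50.0])
print(f"dot-product score: {spectral_match(unknown, library_entry):.3f}")
# a score close to 1.0 suggests the unknown spectrum matches the library peptide
```

Because the library spectrum carries measured relative intensities rather than heuristic predictions, this comparison is more discriminating than matching against a theoretically predicted fragmentation pattern.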
Peptide spectral library
[ "Physics", "Chemistry" ]
871
[ "Mass spectrometry software", "Mass spectrometry", "Spectrum (physical sciences)", "Chemistry software" ]
28,152,615
https://en.wikipedia.org/wiki/Static%20forces%20and%20virtual-particle%20exchange
Static force fields are fields, such as a simple electric, magnetic or gravitational fields, that exist without excitations. The most common approximation method that physicists use for scattering calculations can be interpreted as static forces arising from the interactions between two bodies mediated by virtual particles, particles that exist for only a short time determined by the uncertainty principle. The virtual particles, also known as force carriers, are bosons, with different bosons associated with each force. The virtual-particle description of static forces is capable of identifying the spatial form of the forces, such as the inverse-square behavior in Newton's law of universal gravitation and in Coulomb's law. It is also able to predict whether the forces are attractive or repulsive for like bodies. The path integral formulation is the natural language for describing force carriers. This article uses the path integral formulation to describe the force carriers for spin 0, 1, and 2 fields. Pions, photons, and gravitons fall into these respective categories. There are limits to the validity of the virtual particle picture. The virtual-particle formulation is derived from a method known as perturbation theory which is an approximation assuming interactions are not too strong, and was intended for scattering problems, not bound states such as atoms. For the strong force binding quarks into nucleons at low energies, perturbation theory has never been shown to yield results in accord with experiments, thus, the validity of the "force-mediating particle" picture is questionable. Similarly, for bound states the method fails. In these cases, the physical interpretation must be re-examined. As an example, the calculations of atomic structure in atomic physics or of molecular structure in quantum chemistry could not easily be repeated, if at all, using the "force-mediating particle" picture. Use of the "force-mediating particle" picture (FMPP) is unnecessary in nonrelativistic quantum mechanics, and Coulomb's law is used as given in atomic physics and quantum chemistry to calculate both bound and scattering states. A non-perturbative relativistic quantum theory, in which Lorentz invariance is preserved, is achievable by evaluating Coulomb's law as a 4-space interaction using the 3-space position vector of a reference electron obeying Dirac's equation and the quantum trajectory of a second electron which depends only on the scaled time. The quantum trajectory of each electron in an ensemble is inferred from the Dirac current for each electron by setting it equal to a velocity field times a quantum density, calculating a position field from the time integral of the velocity field, and finally calculating a quantum trajectory from the expectation value of the position field. The quantum trajectories are of course spin dependent, and the theory can be validated by checking that Pauli's exclusion principle is obeyed for a collection of fermions. Classical forces The force exerted by one mass on another and the force exerted by one charge on another are strikingly similar. Both fall off as the square of the distance between the bodies. Both are proportional to the product of properties of the bodies, mass in the case of gravitation and charge in the case of electrostatics. They also have a striking difference. Two masses attract each other, while two like charges repel each other. In both cases, the bodies appear to act on each other over a distance. 
The concept of field was invented to mediate the interaction among bodies, thus eliminating the need for action at a distance. The gravitational force is mediated by the gravitational field and the Coulomb force is mediated by the electromagnetic field. Gravitational force The gravitational force on a mass m₂ exerted by another mass m₁ is

F = −G m₁ m₂ r̂ / r²,

where G is the Newtonian constant of gravitation, r is the distance between the masses, and r̂ is the unit vector from mass m₁ to mass m₂. The force can also be written

F = m₂ g,

where g is the gravitational field described by the field equation

∇·g = −4πG ρₘ,

where ρₘ is the mass density at each point in space. Coulomb force The electrostatic Coulomb force on a charge q₂ exerted by a charge q₁ is (SI units)

F = q₁ q₂ r̂ / (4πε₀ r²),

where ε₀ is the vacuum permittivity, r is the separation of the two charges, and r̂ is a unit vector in the direction from charge q₁ to charge q₂. The Coulomb force can also be written in terms of an electrostatic field:

F = q₂ E,

where

∇·E = ρq / ε₀,

ρq being the charge density at each point in space. Virtual-particle exchange In perturbation theory, forces are generated by the exchange of virtual particles. The mechanics of virtual-particle exchange is best described with the path integral formulation of quantum mechanics. There are insights that can be obtained, however, without going into the machinery of path integrals, such as why classical gravitational and electrostatic forces fall off as the inverse square of the distance between bodies. Path-integral formulation of virtual-particle exchange A virtual particle is created by a disturbance to the vacuum state, and the virtual particle is destroyed when it is absorbed back into the vacuum state by another disturbance. The disturbances are imagined to be due to bodies that interact with the virtual particle's field. Probability amplitude Using natural units, ħ = c = 1, the probability amplitude for the creation, propagation, and destruction of a virtual particle is given, in the path integral formulation, by

Z = ⟨0| exp(−iĤT) |0⟩ = exp(−iΔE T) ∝ ∫ Dφ exp(iΔS[φ]),

where Ĥ is the Hamiltonian operator, T is the elapsed time, ΔE is the energy change due to the disturbance, ΔS is the change in action due to the disturbance, φ is the field of the virtual particle, and the integral is over all paths; the classical action is given by

S[φ] = ∫ d⁴x 𝓛[φ(x)],

where 𝓛 is the Lagrangian density. Here, the spacetime metric is given by

η_μν = diag(1, −1, −1, −1).

The path integral often can be converted to the form

Z = ∫ Dφ exp( i ∫ d⁴x [ ½ φ Ô φ + J φ ] ),

where Ô is a differential operator with φ and J functions of spacetime. The first term in the argument represents the free particle and the second term represents the disturbance to the field from an external source such as a charge or a mass. The integral can be written (see Common integrals in quantum field theory)

Z ∝ exp(iW(J)),

where

W(J) = −½ ∫ d⁴x d⁴y J(x) D(x − y) J(y)

is the change in the action due to the disturbances, and the propagator D(x − y) is the solution of

Ô D(x − y) = δ⁴(x − y).

Energy of interaction We assume that there are two point disturbances representing two bodies and that the disturbances are motionless and constant in time. The disturbances can be written

J(x) = J₁ δ³(x − x₁) + J₂ δ³(x − x₂),

where the delta functions are in space, the disturbances are located at x₁ and x₂, and the coefficients J₁ and J₂ are the strengths of the disturbances. If we neglect self-interactions of the disturbances then W becomes

W = −J₁ J₂ ∫ d⁴x d⁴y δ³(x − x₁) D(x − y) δ³(y − x₂),

which can be written

W = −J₁ J₂ T ∫ d³k/(2π)³ D(k)|₍k⁰₌₀₎ exp(i k·(x₁ − x₂)).

Here D(k) is the Fourier transform of D(x − y). Finally, the change in energy due to the static disturbances of the vacuum is

ΔE = −W/T = J₁ J₂ ∫ d³k/(2π)³ D(k)|₍k⁰₌₀₎ exp(i k·(x₁ − x₂)).

If this quantity is negative, the force is attractive. If it is positive, the force is repulsive. Examples of static, motionless, interacting currents are the Yukawa potential, the Coulomb potential in a vacuum, and the Coulomb potential in a simple plasma or electron gas. 
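The final integral can be checked numerically for the spin-0 propagator treated in the Yukawa example of the next section, where D(k) at k⁰ = 0 is −1/(k² + m²). The following is an illustrative sketch in natural units; the source strengths J₁, J₂ and the mass m are arbitrary choices, not values from the text:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check (illustrative source strengths and mass, natural units)
# that the static spin-0 exchange integral reproduces the Yukawa form:
#   E(r) = J1*J2 * Int d^3k/(2*pi)^3 D(k) exp(i k.r),  D(k) = -1/(k^2 + m^2)
#        = -(J1*J2 / (2*pi**2 * r)) * Int_0^inf k*sin(k*r)/(k^2 + m^2) dk
#        = -(J1*J2 / (4*pi*r)) * exp(-m*r)
J1 = J2 = 1.0
m = 1.0

def interaction_energy(r):
    # quad with weight='sin' handles the oscillatory infinite-range integral
    val, _ = quad(lambda k: k / (k**2 + m**2), 0, np.inf, weight='sin', wvar=r)
    return -J1 * J2 * val / (2 * np.pi**2 * r)

for r in (0.5, 1.0, 2.0):
    closed = -J1 * J2 * np.exp(-m * r) / (4 * np.pi * r)
    print(f"r = {r}: numeric = {interaction_energy(r):+.6f}, closed form = {closed:+.6f}")
```

In the m → 0 limit the same integral gives −J₁J₂/(4πr), whose gradient is the inverse-square force noted at the start of the article.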
The expression for the interaction energy can be generalized to the situation in which the point particles are moving, but the motion is slow compared with the speed of light. Examples are the Darwin interaction in a vacuum and in a plasma. Finally, the expression for the interaction energy can be generalized to situations in which the disturbances are not point particles, but are possibly line charges, tubes of charges, or current vortices. Examples include: two line charges embedded in a plasma or electron gas, Coulomb potential between two current loops embedded in a magnetic field, and the magnetic interaction between current loops in a simple plasma or electron gas. As seen from the Coulomb interaction between tubes of charge example, shown below, these more complicated geometries can lead to such exotic phenomena as fractional quantum numbers. Selected examples Yukawa potential: the force between two nucleons in an atomic nucleus Consider the spin-0 Lagrangian density The equation of motion for this Lagrangian is the Klein–Gordon equation If we add a disturbance the probability amplitude becomes If we integrate by parts and neglect boundary terms at infinity the probability amplitude becomes With the amplitude in this form it can be seen that the propagator is the solution of From this it can be seen that The energy due to the static disturbances becomes (see ) with which is attractive and has a range of Yukawa proposed that this field describes the force between two nucleons in an atomic nucleus. It allowed him to predict both the range and the mass of the particle, now known as the pion, associated with this field. Electrostatics Coulomb potential in vacuum Consider the spin-1 Proca Lagrangian with a disturbance where charge is conserved and we choose the Lorenz gauge Moreover, we assume that there is only a time-like component to the disturbance. In ordinary language, this means that there is a charge at the points of disturbance, but there are no electric currents. If we follow the same procedure as we did with the Yukawa potential we find that which implies and This yields for the timelike propagator and which has the opposite sign to the Yukawa case. In the limit of zero photon mass, the Lagrangian reduces to the Lagrangian for electromagnetism Therefore the energy reduces to the potential energy for the Coulomb force and the coefficients and are proportional to the electric charge. Unlike the Yukawa case, like bodies, in this electrostatic case, repel each other. Coulomb potential in a simple plasma or electron gas Plasma waves The dispersion relation for plasma waves is where is the angular frequency of the wave, is the plasma frequency, is the magnitude of the electron charge, is the electron mass, is the electron temperature (the Boltzmann constant equal to one), and is a factor that varies with frequency from one to three. At high frequencies, on the order of the plasma frequency, the compression of the electron fluid is an adiabatic process and is equal to three. At low frequencies, the compression is an isothermal process and is equal to one. Retardation effects have been neglected in obtaining the plasma-wave dispersion relation. For low frequencies, the dispersion relation becomes where is the Debye number, which is the inverse of the Debye length. This suggests that the propagator is In fact, if the retardation effects are not neglected, then the dispersion relation is which does indeed yield the guessed propagator. 
This propagator is the same as the massive Coulomb propagator with the mass equal to the inverse Debye length. The interaction energy is therefore The Coulomb potential is screened on length scales of a Debye length. Plasmons In a quantum electron gas, plasma waves are known as plasmons. Debye screening is replaced with Thomas–Fermi screening to yield where the inverse of the Thomas–Fermi screening length is and is the Fermi energy This expression can be derived from the chemical potential for an electron gas and from Poisson's equation. The chemical potential for an electron gas near equilibrium is constant and given by where is the electric potential. Linearizing the Fermi energy to first order in the density fluctuation and combining with Poisson's equation yields the screening length. The force carrier is the quantum version of the plasma wave. Two line charges embedded in a plasma or electron gas We consider a line of charge with axis in the z direction embedded in an electron gas where is the distance in the xy-plane from the line of charge, is the width of the material in the z direction. The superscript 2 indicates that the Dirac delta function is in two dimensions. The propagator is where is either the inverse Debye–Hückel screening length or the inverse Thomas–Fermi screening length. The interaction energy is where and are Bessel functions and is the distance between the two line charges. In obtaining the interaction energy we made use of the integrals (see ) and For , we have Coulomb potential between two current loops embedded in a magnetic field Interaction energy for vortices We consider a charge density in tube with axis along a magnetic field embedded in an electron gas where is the distance from the guiding center, is the width of the material in the direction of the magnetic field where the cyclotron frequency is (Gaussian units) and is the speed of the particle about the magnetic field, and B is the magnitude of the magnetic field. The speed formula comes from setting the classical kinetic energy equal to the spacing between Landau levels in the quantum treatment of a charged particle in a magnetic field. In this geometry, the interaction energy can be written where is the distance between the centers of the current loops and is a Bessel function of the first kind. In obtaining the interaction energy we made use of the integral Electric field due to a density perturbation The chemical potential near equilibrium, is given by where is the potential energy of an electron in an electric potential and and are the number of particles in the electron gas in the absence of and in the presence of an electrostatic potential, respectively. The density fluctuation is then where is the area of the material in the plane perpendicular to the magnetic field. Poisson's equation yields where The propagator is then and the interaction energy becomes where in the second equality (Gaussian units) we assume that the vortices had the same energy and the electron charge. In analogy with plasmons, the force carrier is the quantum version of the upper hybrid oscillation which is a longitudinal plasma wave that propagates perpendicular to the magnetic field. Currents with angular momentum Delta function currents Unlike classical currents, quantum current loops can have various values of the Larmor radius for a given energy. Landau levels, the energy states of a charged particle in the presence of a magnetic field, are multiply degenerate. 
The current loops correspond to angular momentum states of the charged particle that may have the same energy. Specifically, the charge density is peaked around radii of where is the angular momentum quantum number. When we recover the classical situation in which the electron orbits the magnetic field at the Larmor radius. If currents of two angular momenta and interact, and we assume the charge densities are delta functions at radius , then the interaction energy is The interaction energy for is given in Figure 1 for various values of . The energy for two different values is given in Figure 2. Quasiparticles For large values of angular momentum, the energy can have local minima at distances other than zero and infinity. It can be numerically verified that the minima occur at This suggests that the pair of particles that are bound and separated by a distance act as a single quasiparticle with angular momentum . If we scale the lengths as , then the interaction energy becomes where The value of at which the energy is a minimum, , is independent of the ratio . However, the value of the energy at the minimum depends on the ratio. The lowest energy minimum occurs when When the ratio differs from 1, then the energy minimum is higher (Figure 3). Therefore, for even values of the total angular momentum, the lowest energy occurs when (Figure 4) or where the total angular momentum is written as When the total angular momentum is odd, the minima cannot occur for The lowest energy states for odd total angular momentum occur when or and which also appear as series for the filling factor in the fractional quantum Hall effect. Charge density spread over a wave function The charge density is not actually concentrated in a delta function. The charge is spread over a wave function. In that case, the electron density is The interaction energy becomes where is a confluent hypergeometric function or Kummer function. In obtaining the interaction energy we have used the integral As with delta function charges, the value of at which the energy is a local minimum depends only on the total angular momentum, not on the angular momenta of the individual currents. Also, as with the delta function charges, the energy at the minimum increases as the ratio of angular momenta varies from one. Therefore, the series and appear as well in the case of charges spread by the wave function. The Laughlin wavefunction is an ansatz for the quasiparticle wavefunction. If the expectation value of the interaction energy is taken over a Laughlin wavefunction, these series are also preserved. Magnetostatics Darwin interaction in a vacuum A moving charged particle can generate a magnetic field that affects the motion of another charged particle. The static version of this effect is called the Darwin interaction. To calculate this, consider the electrical currents in space generated by a moving charge with a comparable expression for . The Fourier transform of this current is The current can be decomposed into a transverse and a longitudinal part (see Helmholtz decomposition). The hat indicates a unit vector. The last term disappears because which results from charge conservation. Here vanishes because we are considering static forces. With the current in this form, the energy of interaction can be written The propagator equation for the Proca Lagrangian is The spacelike solution is which yields where . The integral evaluates to which reduces to in the limit of small . 
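For reference, the small-mass result can be compared with the standard Darwin Lagrangian of classical electrodynamics (Gaussian units), quoted here as the familiar textbook form rather than as the stripped expression above:

\[
L_{\text{int}} = -\frac{q_1 q_2}{r}
+ \frac{q_1 q_2}{2 c^2 r}
\left[\mathbf{v}_1 \cdot \mathbf{v}_2
+ (\mathbf{v}_1 \cdot \hat{\mathbf{r}})(\mathbf{v}_2 \cdot \hat{\mathbf{r}})\right],
\]

whose second term is the magnetic (Darwin) part of the interaction.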
The interaction energy is the negative of the interaction Lagrangian. For two like particles traveling in the same direction, the interaction is attractive, which is the opposite of the Coulomb interaction. Darwin interaction in plasma In a plasma, the dispersion relation for an electromagnetic wave is which implies Here is the plasma frequency. The interaction energy is therefore Magnetic interaction between current loops in a simple plasma or electron gas Interaction energy Consider a tube of current rotating in a magnetic field embedded in a simple plasma or electron gas. The current, which lies in the plane perpendicular to the magnetic field, is defined as where and is the unit vector in the direction of the magnetic field. Here indicates the dimension of the material in the direction of the magnetic field. The transverse current, perpendicular to the wave vector, drives the transverse wave. The energy of interaction is where is the distance between the centers of the current loops and is a Bessel function of the first kind. In obtaining the interaction energy we made use of the integrals and A current in a plasma confined to the plane perpendicular to the magnetic field generates an extraordinary wave. This wave generates Hall currents that interact and modify the electromagnetic field. The dispersion relation for extraordinary waves is which gives for the propagator where in analogy with the Darwin propagator. Here, the upper hybrid frequency is given by the cyclotron frequency is given by (Gaussian units) and the plasma frequency (Gaussian units) Here is the electron density, is the magnitude of the electron charge, and is the electron mass. The interaction energy becomes, for like currents, Limit of small distance between current loops In the limit that the distance between current loops is small, where and and and are modified Bessel functions. We have assumed that the two currents have the same charge and speed. We have made use of the integral For small the integral becomes For large the integral becomes Relation to the quantum Hall effect The screening wavenumber can be written (Gaussian units) where is the fine-structure constant and the filling factor is and is the number of electrons in the material and is the area of the material perpendicular to the magnetic field. This parameter is important in the quantum Hall effect and the fractional quantum Hall effect. The filling factor is the fraction of occupied Landau states at the ground state energy. For cases of interest in the quantum Hall effect, is small. In that case, the interaction energy is where (Gaussian units) is the interaction energy for zero filling factor. We have set the classical kinetic energy to the quantum energy Gravitation A gravitational disturbance is generated by the stress–energy tensor; consequently, the Lagrangian for the gravitational field is spin-2. If the disturbances are at rest, then the only component of the stress–energy tensor that persists is the component. If we use the same trick of giving the graviton some mass and then taking the mass to zero at the end of the calculation, the propagator becomes and which is once again attractive rather than repulsive. The coefficients are proportional to the masses of the disturbances. In the limit of small graviton mass, we recover the inverse-square behavior of Newton's law. Unlike the electrostatic case, however, taking the small-mass limit of the boson does not yield the correct result. 
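A sketch of where the mismatch arises, under standard conventions (the stripped propagators above cannot be checked directly, so this is quoted as the textbook account): for conserved, static sources only the \(T^{00}\) components contribute, and the massless spin-2 propagator carries the tensor structure

\[
\tfrac{1}{2}\left(\eta_{\mu\alpha}\eta_{\nu\beta} + \eta_{\mu\beta}\eta_{\nu\alpha} - \eta_{\mu\nu}\eta_{\alpha\beta}\right),
\]

while the massive propagator replaces the trace coefficient \(-\tfrac{1}{2}\) by \(-\tfrac{1}{3}\) (momentum-dependent terms drop against conserved sources). Contracting with \(T^{00}\,T'^{00}\) gives

\[
\frac{E_{m\to 0}}{E_{m=0}} = \frac{1 - \tfrac{1}{3}}{1 - \tfrac{1}{2}} = \frac{4}{3},
\]

the van Dam–Veltman–Zakharov discontinuity.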
A more rigorous treatment yields a factor of one in the energy rather than 4/3. References Quantum field theory
Static forces and virtual-particle exchange
[ "Physics" ]
4,186
[ "Quantum field theory", "Quantum mechanics" ]
28,157,027
https://en.wikipedia.org/wiki/Crystallographic%20image%20processing
Crystallographic image processing (CIP) is traditionally understood as being a set of key steps in the determination of the atomic structure of crystalline matter from high-resolution electron microscopy (HREM) images obtained in a transmission electron microscope (TEM) that is run in the parallel illumination mode. The term was created in the research group of Sven Hovmöller at Stockholm University during the early 1980s and rapidly became a label for the "3D crystal structure from 2D transmission/projection images" approach. Since the late 1990s, analogous and complementary image processing techniques directed towards goals that are either complementary to or entirely beyond the scope of the original conception of CIP have been developed independently by members of the computational symmetry/geometry, scanning transmission electron microscopy, scanning probe microscopy, and applied crystallography communities. HREM image contrasts and crystal potential reconstruction methods Many-beam HREM images of extremely thin samples are only directly interpretable in terms of a projected crystal structure if they have been recorded under special conditions, i.e. the so-called Scherzer defocus. In that case the positions of the atom columns appear as black blobs in the image (when the spherical aberration coefficient of the objective lens is positive, as is always the case for uncorrected TEMs). Difficulties for the interpretation of HREM images arise for other defocus values because the transfer properties of the objective lens alter the image contrast as a function of the defocus. Hence atom columns which appear at one defocus value as dark blobs can turn into white blobs at a different defocus, and vice versa. In addition to the objective lens defocus (which can easily be changed by the TEM operator), the thickness of the crystal under investigation also has a significant influence on the image contrast. These two factors often mix and yield HREM images which cannot be straightforwardly interpreted as a projected structure. If the structure is unknown, so that image simulation techniques cannot be applied beforehand, image interpretation is even more complicated. Nowadays two approaches are available to overcome this problem: one is the exit-wave function reconstruction method, which requires several HREM images of the same area at different defocus values; the other is crystallographic image processing (CIP), which processes only a single HREM image. Exit-wave function reconstruction provides an amplitude and phase image of the (effective) projected crystal potential over the whole field of view. The crystal potential reconstructed in this way is corrected for aberration and delocalisation, and is also not affected by possible transfer gaps, since several images with different defocus values are processed. CIP, on the other hand, considers only one image and applies corrections to the averaged image amplitudes and phases. The result of the latter is a pseudo-potential map of one projected unit cell. The result can be further improved by crystal tilt compensation and a search for the most likely projected symmetry. In conclusion, one can say that the exit-wave function reconstruction method has most advantages for determining the (aperiodic) atomic structure of defects and small clusters, while CIP is the method of choice if the periodic structure is the focus of the investigation or when defocus series of HREM images cannot be obtained, e.g. due to beam damage of the sample. 
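The defocus dependence described above can be made concrete with the phase contrast transfer function (CTF). In one common convention, the lens imposes sin χ(k) on the transferred frequencies, with χ(k) = πλΔf k² + (π/2)C_s λ³k⁴ (λ the electron wavelength, Δf the defocus, C_s the spherical aberration coefficient). The minimal Python sketch below evaluates this function; the numerical values are illustrative assumptions for a 300 kV instrument, not the parameters of any particular microscope, and sign conventions for Δf differ between texts.

import numpy as np

def ctf(k, wavelength, defocus, cs):
    # Phase contrast transfer function sin(chi) in one common convention.
    # k: spatial frequency (1/m); wavelength, defocus, cs in meters.
    chi = (np.pi * wavelength * defocus * k**2
           + 0.5 * np.pi * cs * wavelength**3 * k**4)
    return np.sin(chi)

# Illustrative values: ~300 kV electrons (lambda ~ 1.97 pm), Cs = 1.2 mm.
lam, cs = 1.97e-12, 1.2e-3
# One common definition of the (extended) Scherzer defocus:
scherzer = np.sqrt(1.5 * cs * lam)
k = np.linspace(0.0, 1.0e10, 500)  # spatial frequencies up to 10 per nm
print(f"Scherzer defocus ~ {scherzer * 1e9:.0f} nm")
print(ctf(k[::100], lam, scherzer, cs))

The black-blob/white-blob contrast reversals mentioned above correspond to sign changes of sin χ as the defocus is varied.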
However, a recent study on the catalyst-related material Cs0.5[Nb2.5W2.5O14] shows the advantages when both methods are linked in one study. Brief history of crystallographic image processing Aaron Klug suggested in 1979 that a technique originally developed for the structure determination of membrane proteins could also be used for the structure determination of inorganic crystals. This idea was picked up by the research group of Sven Hovmöller, which proved that the metal framework partial structure of the K8−xNb16−xW12+xO80 heavy-metal oxide could be determined from single HREM images recorded at Scherzer defocus. (Within the weak-phase object approximation, Scherzer defocus ensures a maximal contribution to the image from elastically scattered electrons that were scattered just once, while contributions of doubly elastically scattered electrons to the image are optimally suppressed.) In later years the methods became more sophisticated, so that non-Scherzer images could also be processed. One of the most impressive applications at that time was the determination of the complete structure of the complex compound Ti11Se4, which had been inaccessible to X-ray crystallography. Since CIP on single HREM images works smoothly only for layer structures with at least one short (3 to 5 Å) crystal axis, the method was extended to also work with data from different crystal orientations (= atomic resolution electron tomography). This approach was used in 1990 to reconstruct the 3D structure of the mineral staurolite HFe2Al9Si4O24 and more recently to determine the structures of the huge quasicrystal approximant phase ν-AlCrFe and the structures of the complex zeolites TNU-9 and IM-5. As mentioned below in the section on crystallographic processing of images that were recorded from 2D periodic arrays with other types of microscopes, the CIP techniques have been taken up since 2009 by members of the scanning transmission electron microscopy, scanning probe microscopy, and applied crystallography communities. Contemporary robotics and computer vision researchers also deal with the topic of "computational symmetry", but have so far failed to utilize the spatial distribution of site symmetries that result from crystallographic origin conventions. In addition, a well-known statistician noted in his comments on "Symmetry as a continuous feature" that symmetry groups possess inclusion relations (are not disjoint, in other words), so that conclusions about which symmetry is most likely present in an image need to be based on "geometric inferences". Such inferences are deeply rooted in information theory, where one is not trying to model empirical data, but rather to extract and model the information content of the data. The key difference between geometric inference and all kinds of traditional statistical inference is that the former merely states the co-existence of a set of definitive (and exact geometrical) constraints and noise, whereby the noise is nothing but an unknown characteristic of the measurement device and the data processing operations. From this it follows that "in comparing two" (or more) "geometric models we must take into account the fact that the noise is identical (but unknown) and has the same characteristic for both" (all) "models". Because many of these approaches use linear approximations, the level of random noise needs to be low to moderate; in other words, the measuring devices must be very well corrected for all kinds of known systematic errors. 
These kinds of ideas have, however, only been taken up by a tiny minority of researchers within the computational symmetry and scanning probe microscopy / applied crystallography communities. It is fair to say that the members of the computational symmetry community are doing crystallographic image processing under a different name and without utilization of its full mathematical framework (e.g. ignoring the proper choice of the origin of a unit cell and preferring direct-space analyses). Frequently, they are working with artificially created 2D periodic patterns, e.g. wallpapers, textiles, or building decoration in the Moorish/Arabic/Islamic tradition. The goals of these researchers are often related to the identification of point and translation symmetries by computational means and the subsequent classification of patterns into groups. Since their patterns were artificially created, they do not need to obey all of the restrictions that nature typically imposes on long-range periodically ordered arrays of atoms or molecules. Computational geometry takes a broader view of this issue and concluded as early as 1991 that the problem of testing approximate point symmetries in noisy images is in general NP-hard, and later on that it is also NP-complete. For restricted versions of this problem, there exist polynomial-time algorithms that solve the corresponding optimization problems for a few point symmetries in 2D. Crystallographic image processing of high-resolution TEM images The principal steps for solving the structure of an inorganic crystal from HREM images by CIP are as follows. Selecting the area of interest and calculation of the Fourier transform (= power spectrum consisting of a 2D periodic array of complex numbers) Determining the defocus value and compensating for the contrast changes imposed by the objective lens (done in Fourier space) Indexing and refining the lattice (done in Fourier space) Extracting amplitudes and phase values at the refined lattice positions (done in Fourier space) Determining the origin of the projected unit cell and determining the projected (plane group) symmetry Imposing constraints of the most likely plane group symmetry on the amplitudes and phases. At this step the image phases are converted into the phases of the structure factors. Calculating the pseudo-potential map by Fourier synthesis with corrected (structure factor) amplitudes and phases (done in real space) Determining 2D (projected) atomic co-ordinates (done in real space) A few computer programs are available which assist in performing the necessary steps of processing. The most popular programs used by materials scientists (electron crystallographers) are CRISP, VEC, and the EDM package. There is also the recently developed crystallographic image processing program EMIA, but so far there do not seem to be reports by users of this program. Structural biologists achieve resolutions of a few ångströms (up from a few nanometers in the past, when samples used to be negatively stained) for membrane-forming proteins in regular two-dimensional arrays, but prefer to use the programs 2dx, EMAN2, and IPLT. These programs are based on the Medical Research Council (MRC) image processing programs and possess additional functionality such as the "unbending" of the image. 
As the name suggests, unbending of the image is conceptually equivalent to "flattening out and relaxing to equilibrium positions" samples that are one building block thick, so that all 2D periodic motifs are as similar as possible and all building blocks of the array possess the same crystallographic orientation with respect to a Cartesian coordinate system that is fixed to the microscope. (The microscope's optical axis typically serves as the z-axis.) Unbending is often necessary when the 2D array of membrane proteins is paracrystalline rather than genuinely crystalline. It was estimated that unbending approximately doubles the spatial resolution with which the shape of molecules can be determined. Inorganic crystals are much stiffer than 2D periodic protein membrane arrays, so there is no need for the unbending of images that were taken from suitably thinned parts of these crystals. Consequently, the CRISP program does not possess the unbending image processing feature, but offers superior performance in the so-called phase origin refinement. The latter feature is particularly important for electron crystallographers, as their samples may possess any space group out of the 230 possible space-group types that exist in three dimensions. The regular arrays of membrane-forming proteins that structural biologists deal with are, on the other hand, restricted to possess one out of only 17 (two-sided/black-white) layer group types (of which there are 46 in total and which are periodic only in 2D) due to the chiral nature of all (naturally occurring) proteins. Different crystallographic settings of four of these layer group types increase the number of possible layer group symmetries of regular arrays of membrane-forming proteins to just 21. All 3D space groups and their subperiodic 2D periodic layer groups (including the above-mentioned 46 two-sided groups) project to just 17 plane space group types, which are genuinely 2D periodic and are sometimes referred to as the wallpaper groups. (Although quite popular, this is a misnomer because wallpapers are not restricted by nature to possess these symmetries.) All individual transmission electron microscopy images are projections from the three-dimensional space of the samples into two dimensions (so that spatial distribution information along the projection direction is unavoidably lost). Projections along prominent (i.e. certain low-index) zone axes of 3D crystals, or along the layer normal of a membrane-forming protein sample, ensure the projection of 3D symmetry into 2D. (Along arbitrary high-index zone axes, and inclined to the layer normal of membrane-forming proteins, there will be no useful projected symmetry in transmission images.) The recovery of 3D structures and their symmetries relies on electron tomography techniques, which use sets of transmission electron microscopy images. The origin refinement part of CIP relies on the definition of the plane symmetry group types as provided by the International Tables for Crystallography, where all symmetry-equivalent positions in the unit cell and their respective site symmetries are listed along with systematic absences in reciprocal space. Besides the plane symmetry groups p1, p3, p3m1, and p31m, all other plane group symmetries are centrosymmetric, so that the origin refinement simplifies to the determination of the correct signs of the amplitudes of the Fourier coefficients. 
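As a toy illustration of this last step, the NumPy sketch below imposes a crude centrosymmetric constraint by snapping every Fourier phase to 0 or π, i.e. choosing a sign for each coefficient. It assumes the phase origin already sits on an inversion center, it operates on all pixels rather than only on refined lattice reflections, and it is not the refinement algorithm of CRISP or any other named package.

import numpy as np

def symmetrize_centrosymmetric(image):
    # FFT -> snap every Fourier phase to 0 or pi (the values allowed once
    # the origin lies on an inversion center) -> inverse FFT.
    F = np.fft.fft2(image)
    amplitude = np.abs(F)
    # Nearest allowed phase: 0 where cos(phase) >= 0, otherwise pi.
    signs = np.where(np.cos(np.angle(F)) >= 0.0, 1.0, -1.0)
    return np.real(np.fft.ifft2(amplitude * signs))

rng = np.random.default_rng(0)
noisy_lattice_image = rng.normal(size=(64, 64))  # placeholder input
pseudo_potential_map = symmetrize_centrosymmetric(noisy_lattice_image)
print(pseudo_potential_map.shape)

A real CIP run would first refine the lattice and the phase origin and apply the constraint only to the extracted reflection list, but the sign determination is the essence of origin refinement for a centrosymmetric plane group.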
When crystallographic image processing is utilized in scanning probe microscopy, the symmetry groups to be considered are just the 17 plane space group types in their 21 possible settings. Crystallographic processing of images that were recorded from 2D periodic arrays with other types of microscopes Because digitized 2D periodic images are, in the information-theoretical approach, just data organized in 2D arrays of pixels, core features of crystallographic image processing can be utilized independently of the type of microscope with which the images/data were recorded. The CIP technique has, accordingly, been applied (on the basis of the 2dx program) to atomic-resolution Z-contrast images of Si-clathrates, as recorded in an aberration-corrected scanning transmission electron microscope. Images of 2D periodic arrays of flat-lying molecules on a substrate, as recorded with scanning tunneling microscopes, were also crystallographically processed utilizing the program CRISP. References External links MRC Image Processing programs, the classical standard for structural biology, free for academic use (Fortran source code) CRISP commercial, but superior for inorganic electron crystallography (for Windows PCs) VEC free for academic use, particularly useful for the analysis of incommensurately modulated structures (for Windows PCs) IPLT open source, structural biology (for Mac PCs, Linux, and a Windows PC demo version) EMAN Vers. 2, open source, structural biology including single particle reconstruction (Linux) 2dx open source, mainly for structural biology (for Mac PCs, Linux) EDM open source, free for non-commercial purposes; a ready-to-go version of EDM is implemented on the elmiX Linux live CD Further reading see also the Wiki on Electron Crystallography Crystallography
Crystallographic image processing
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,027
[ "Crystallography", "Condensed matter physics", "Materials science" ]
28,158,585
https://en.wikipedia.org/wiki/Hellenic%20Trench
The Hellenic Trench (HT) is an oceanic trough located in the forearc of the Hellenic arc, an arcuate archipelago on the southern margin of the Aegean Sea plate, or Aegean Plate, also called Aegea, the basement of the Aegean Sea. The HT begins in the Ionian Sea near the mouth of the Gulf of Corinth and curves to the south, following the margin of the Aegean Sea. It passes close to the south shore of Crete and ends near the island of Rhodes just offshore Anatolia. In the classical theory of its origin, the HT is an oceanic trench containing the Hellenic subduction zone, directly related to the subduction of the African plate under the Eurasian plate. Alternate views, developed later on the basis of additional data, question the classical view, postulating that the HT may be wholly or partially the result of back-arc extension and slab rollback. The "partial" view hypothesizes that the western leg of the HT, from the Ionian Sea east to eastern Crete, exhibits the line of subduction and therefore is an oceanic trench. The "not at all" view, relying on the theory that the subduction line is under or south of the Mediterranean Ridge, questions whether any of the HT is currently subductional. If not, it is merely a legacy, a remnant of a previous subduction zone that has gone elsewhere. North of this subduction, the Adriatic or Apulian Plate subducts under the Balkans. More recently, and more rarely, the terms "North Hellenic Subduction" and "North Hellenic Trench" have been applied there, rendering the HT and the Hellenic Subduction (HS) the "South HT" and "South HS." The distinction is based on a differentiation of the North Hellenides from the South Hellenides. The dividing features are the Gulfs of Patras and Corinth. From their vicinity and southward an extensional regime prevails, while the north remains in a compressional one. The Hellenides are the mountains of Greece, divided into an inner and an outer range. The extensional regime cuts across them transversely, producing four quarters. The South Hellenic Subduction Zone, and the Hellenic Trench, if different (many still consider them to be the same), are located in the southern outer Hellenides. Meanwhile, the deep basins of the Trench and their marine ecologies are the homes of a number of marine mammals, such as cetaceans, some of which are endangered species threatened by maritime traffic in the Eastern Mediterranean. The study of the overall features of the surface of the Earth has been the concern of plate tectonics since the Plate Tectonics Revolution of the 1970s. It was a development of the continental drift theory of Alfred Wegener. These features are often called lineaments. The Hellenic Trench, along with the Hellenic Arc and other related features, is among the lineaments important to the geology primarily of Greece and secondarily of Turkey. Morphology or geomorphology studies the "shapes" (morphai) of the lineaments, while kinematics studies their "motions" (kineseis). Both topics, as typically used in geology articles, do not go beyond plane geometry, trigonometry, elementary algebra, and elementary statistics, which are taught at the high school level. More daunting are the special geologic terms, which are numerous and continue to be coined. This article assumes basic knowledge of mathematics and science, but includes parenthetical clues as to the meaning of the special terms as well as links to articles explaining them. 
Hellenic subduction zone Subduction applied to the trench In subduction one plate dives under another at a convergent plate boundary and the band across this line is termed the subduction, or more rarely subductive, zone. It features an upper plate and a lower plate. The initial line of the subduction, traditionally believed to be located in the trench, and to be at the foot of the margin of the overriding plate, has a direction, the strike. The plate diving down does so at an angle, the dip. The direction of dip is roughly perpendicular, or normal, to the strike (not to be confused with a normal fault). It goes under the highlands raised by the collision, in this case the Hellenic arc. The two plates moving across each other (a dip-slip movement) generate earthquakes, so the subducted part of the plate is basically a seismic zone, called the Wadati–Benioff zone. As it turns out, further research on the Hellenic Trench revealed that the concept of a subductive trench, where subduction is occurring now (if that is what it is), would only strictly apply to the west side; moreover, not all the subduction, wherever it does occur, is due to plate collision. The east side of the trench is not a trench but is a series of ascending scarps of faults where strike slippage is the main movement, due to further complexities discovered later (see this article, below). However, the term "trench" and the concept of "subductive zone" continue to be used of the whole arc, sometimes in questioning quotes, by some sort of analogy, perhaps because the zone once was or could be a convergent subductive plate border. The basis of the analogy is the Hellenic arc, the raised border. It could not have been raised all the way around without a subduction zone all the way around. The search for data revealing possible reasons for the asymmetry is an area of active research. If the problem is in part a matter of definition of terms, then the answer as far as it goes is a matter of redefinition. One redefinition distinguishes the Hellenic Trench from the Hellenic Trough, or Hellenic Subduction Trough. The Trench is only the foredeep of the Hellenic arc on the west side. It is possibly the location of the line of subduction, but the subduction begins with the first flexure of the African plate downward (deformation front), which at least one source places at the Libyan continental margin. The Mediterranean Ridge is in this theory an accretionary complex associated with the subduction; that is, a collection of loose material left over from previous subduction. The term "subduction zone" also includes the slab of the overridden Wadati–Benioff zone. These definitions appear to solve the contradiction of the Hellenic Trench not going far enough around the arc to account for the eastern side. The Trough and the zone go all the way around. Subduction zone geometry Hellenic Arc amphitheater The Hellenic Arc seen on a map or in high-altitude photographs appears to be, if not actually is, an amphitheater, at least a bilaterally symmetrical arc about a N–S axis, with vertex on Crete, opening to the north. The wings of the arc are somewhat flatter than the vertex. The radius has been calculated at , which places the center at about , in the middle of the north Aegean Sea. The parallel trend of the volcanic arc at a radius of seems to give some approximate verification. 
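Such a quoted radius and center amount to fitting a circle through points sampled along the arc. One standard approach is the algebraic least-squares (Kåsa) circle fit, sketched below in Python; the coordinates, the 400 km radius, and the noise level are made-up illustrative values in a hypothetical local planar (km) projection, not actual Hellenic Arc data.

import numpy as np

def fit_circle(x, y):
    # Algebraic least-squares (Kasa) circle fit: solve
    # x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, then convert.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Synthetic half-arc on a circle of radius 400 km centered at the origin,
# with 5 km of scatter; purely illustrative.
theta = np.linspace(1.1 * np.pi, 1.9 * np.pi, 25)
rng = np.random.default_rng(1)
x = 400.0 * np.cos(theta) + rng.normal(0.0, 5.0, theta.size)
y = 400.0 * np.sin(theta) + rng.normal(0.0, 5.0, theta.size)
print(fit_circle(x, y))  # recovers a center near (0, 0) and r near 400

The same fit applied to digitized points along the trench or the volcanic arc would yield the kind of radius and center estimates reported in the literature.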
One might suppose at first glance that some anomalous curvature of the African plate had surrounded the Aegean Sea and was compressing it inward toward a point in the north Aegean, and that one might expect a mountain range to arise there. The western side of the trench has the appropriate faulting, a destructive convergent border in a reverse fault with a dip under the Hellenic Arc perpendicular to the strike. Further investigation in the second half of the 20th century soon quelled any such speculation. A plate-compressive velocity of the Hellenic Arc ought to have been in evidence, given the precision with which GPS can measure geological movement. Instead all investigations began to report a closure of the Hellenic Arc on the coast of Africa (or Nubia as is currently said) at various estimated rates that were far larger than the small rate of convergence of Africa on Eurasia. Aegean Plate extension The expected closure of the Hellenic Arc on the north Aegean turned out to be a vigorous motion in the opposite direction, a theoretical paradox requiring additional geological theory to explain. The final solutions were back-arc extension and slab rollback. As the subducting plate, or slab, rolls under the overriding plate, an arc of highlands is pushed up on the margin of the overriding plate. For reasons still not entirely understood, the back of the arc begins to thin and extend, pushing the arc in a "back" direction projectively across the foredeep. This extension may or may not happen in a subduction, but if it does, the spread is like the expansion of space, applicable everywhere, but only in a given direction. The entire Aegean Plate comes from this extension behind the Hellenic Arc. Circles on the early plate would eventually have become ellipses pointing in the direction of expansion. The Aegean Plate stretches out to the south, becoming thin and shallow, allowing a volcanic arc to break out to the north of the Hellenic Arc, which is moving to the south on the edge of the extension. There are two layers on the overriding plate margin: the contact surface with the subducting plate, and a thinned surface layer moving "back." As it does the trench must move back, "consuming" more plate. The mechanism is that the slab flexes down ("deformation front") further and further back, a phenomenon called "slab roll back." In geologic terminology, the part of the plate rolling under is termed "negatively buoyant," meaning the segment of combined overriding and overridden plates have not found the depth at which they float over the mantle. One study notes that the rollback of the HT is so severe that the negative buoyancy is the major cause of subduction; that is, northward thrusting of the African plate still is present, but the slab has already started to flex long before it gets to the point where thrusting makes a difference. But there are other complexities as well. Hellenic Trough morphology A number of mapping techniques have been applied to research the arc zone, such as seafloor mapping, reflection seismology, and application of the Global Positioning System, which can detect changes of position in millimeters; i.e., geologic movement, good for measuring geologic velocities. The work done so far indicates that the appearance of symmetry is an illusion based on the shape of the forearc; that is, on the raised arc of the margin of the overriding plate. Bathymetric representations of the Hellenic Trench to the south of the arc depict a different shape. 
As far as the major parameters are concerned (fault type, dip, depth, velocity, seismicity, etc.), the subduction zone in the trench is asymmetric, which some consider a unique distinction. The zone begins near the Gulf of Corinth and trends ESE in an arc approximating a straight line. It terminates to the south of Crete in an angular vertex. This leg of the HT contains mainly dip-slip faults (a hanging wall slips up or down over the dip of a footwall). North of its western end, another subduction zone is created by the Adriatic plate diving under the Balkans, which are in the Eurasian plate proper, and not the Aegean Plate. The subduction line between the two is not continuous; there is a gap of about . Between the south end of the Adriatic plate subduction and the north end of the Aegean Plate subduction is the Kephallenia Fault Zone (KFZ), or Kephallenia Transform Fault (KTF), or Cephalonia–Lefkada Transform Fault Zone (CTF). The Aegean Plate slips along the side of the Adriatic plate in a SSW direction. A second leg trends N60E, which is ENE, to the island of Rhodes, where it ceases. There is not a singular vertex. Prior to reaching its end point, the ESE leg has two more vertices, so that the ENE leg is distributed into three ENE lines: the Ptolemy Trench, the Pliny Trench outside of and parallel to it, and the outer Strabo Trench, parallel to the other two. The overall appearance resembles an arc inscribed in a vertex angle, except for the asymmetry. The three trenches fall short of Rhodes, the Strabo Trench going the farthest east. Between it and the Cyprus Trench are the Anaximander Mountains, a submarine range thought to be the subduction arc of the African plate under the Anatolian plate. The Strabo Trench does not connect with it. Instead there is a gap, the Rhodian Basin. On its north boundary is the Rhodian Fault, trending NNE, and making the final connection to the Anatolian Fault. Length of the Hellenic trough The linear distance of around the trough depends on its definition. Various estimates are available. The main requirements for definition are two end points and the shape of the path between them. One source specifying end points of " offshore the island of Zakynthos" and " offshore of the island of Rhodes" offers an arcuate distance of for "the arc," here used loosely. Neither coordinate is on or next to the Hellenic Arc; rather, the line (approximated by the method of small straight segments on the map) to achieve 1200 km must follow the outer edge of the foredeep zone, located toward the midline of the Hellenic Trough. Being further out on the radius of the Arc as a segment of a circle, it has a longer peripheral distance. In this definition "the arc" is both the Hellenic Arc and its foredeep, measured on the outer periphery. The northern end point is more solid, being located on or near the Cephalonia–Lefkada Transform Fault Zone, generally agreed to be the northern edge of the subduction zone. The southern end point is placed arbitrarily in the Rhodes Basin at the end of the Hellenic subduction. No point chosen there would cause significant variation in the 1200 km length. Another source concentrates on the line of subduction, which is an angle at the intersection of two roughly straight lines (see above). The vertex is to the south of Crete. A leg bears to the NW from there and is long. The line is a scarp, though not visible because the trench has filled with sediment. 
A second leg bears to the NE and is long, for a total of , which is also the southern peripheral distance around the Hellenic Arc. The Arc is arcuate; the subduction line, by contrast, consists of straight legs meeting at an angle, another paradox if one assumes a single subduction. The general geologic answer is that the subduction due to the compression of Africa against Eurasia is a different movement from the southward thrusting of the Aegean Plate. There are two different resultants of all the small motion vectors. The subduction is not at 90° to the NW-bearing scarp, but is at 70°–75°. The scarp is believed to be rotating CW away from perpendicularity. Geologic history of the current regime Initially, the trench was considered the surface expression of the African–Eurasian plate collision. Such a view could not be verified because the trench was full of obscuring sediment, and because the arc-shaped Mediterranean Ridge seemed part of the subduction complex. If the strike of the subducting plate is in the Hellenic Trench (often termed "the classical view"), then it is far distant from the accretionary ridge supposed to have been accreted there. Subsequent data, especially earthquake data, made other theories possible. Perhaps the bottom of the trench did not connect with (was decoupled from) the subducting plate at all but was a "pull apart" fault basin in the forearc (the raised chain of highlands and islands), or perhaps it was part of a wrinkle in the foredeep produced by compressional motion of the Aegean Plate against the "backstop" of the Mediterranean Ridge. Or, perhaps it was a normal fault, a "half-graben" produced by extension of the Aegean Plate. In these other theories, the subducting plate would start its subduction under the Mediterranean Ridge, and pass under the Hellenic Trench decoupled from it. However, it cannot be seen under the ridge. Moreover, the Hellenic Arc would not be the forearc, the edge of the Aegean Plate, but this edge would be hidden under the ridge. It would now be necessary to find a reason for the trench. Opinions vary. The search goes on. Historical geology offers reasons for hypothesizing that, in its earlier development, there was one trench traversing what is now the Aegean, and that it contained the subduction zone and the edge of the Eurasian continent. The compressional regime If one imagines all the geologic changes brought about by extension to be reversed, then all the islands descend from an ancestral Hellenic Arc traversing the North Aegean. The Gulf of Patras was closed, as well as the Gulf of Corinth. Lefkada, Ithaki, and Kefalonia were telescoped into a single ancestor. The Adriatic plate and the Ionian Plate (under the Ionian Sea) were one. Zakynthos was in the line of islands at the edge of the future border between the two plates. Greece lacked its current projection into the Aegean; in fact, the Aegean was not there. At this stage, as early as 30 MYA in the Oligocene, the mainland of the Balkans had been formed by successive waves of subduction of the African plate under the Eurasian, called "thrusts" from their thrusting of the Eurasian plate to the NE. The various forearcs, or "thrust sheets," created by this thrusting had moved to the north and had docked against the preceding, closing the ancient seas between them. Each forearc was a complex of folds, or "nappes," raised by compression (or "shortening of the crust"), which had a tendency to fall over, creating tilted layers exposed later in highlands. 
The general hypothesis is that throughout these successive subductions there was only one subduction zone acting continuously to convey (as on a conveyor belt) and emplace (obduct) microcontinents broken from the African slab. Between each microcontinent was a local ocean, which was subducted and closed in turn: in the Cenozoic the Vardar, subducted; the Pindos, subducted; and the eastern Mediterranean, still being subducted. Between the Vardar and the Pindos was the Pelagian microcontinent; between the Pindos and the Mediterranean was the Apulian (or Adriatic) microcontinent, with subducted for the two, amounting to a closure of between Africa and Eurasia. Individual subductions thus varied between oceanic and continental, the current one being oceanic. Up to this point, the Hellenic orogeny was part of the Alpine orogeny. The newly formed Alps connected to the Dinaric Alps, which were continuous with a chain called the Outer Hellenides, the last to form. Each former forearc was its own type of rock, or facies. Mainland Greece thus consists geologically of strips, or isopic zones ("same facies"), or "tectono-stratigraphic units" of distinct rock trending from NW to SE. The regime through the Oligocene, evidenced in the zone structure of Greece, was compressional. The subduction was in the Trench and its forearc was the edge of the overriding plate (the classical model). Subsequently, a superimposed extensional regime moved the subduction and the Trench back, but not necessarily at the same rate, nor did they always necessarily coincide. The former reverse faults were converted to normal, and many new extensional lineaments (tectonic features), such as pull-apart basins, appeared. The extensional regime The start line of the extension was a transform fault that has been called the Eastern Mediterranean North Transform (EMNT). It trended from the SW corner of Anatolia in a NW direction through the future center of the forearc across Central Greece, well north of the future Gulf of Corinth. At some point, the new forces began to pull apart the former strike-slip fault north of Anatolia, merging it with the subduction and pulling out a separate forearc from the previously docked coastal ridge, consisting of strips of the Outer Hellenides in the Ionian and some other zones. CW rotation of the subduction zone Slab rollback moved the subduction zone away from, but not parallel to, the continental coastline. A bathymetric view of the current configuration suggests that an angle was generated on the west by rotating the subduction zone away from the original strike of the EMNT as a baseline in the CW direction about a vertex, or pole, on the coast of Apulia, Italy. A triangle was formed of the base line, the subduction line, and a chord across the arc of the subtended angle. Currently the vertex opposite the base line does not extend as far as the chord. The east leg curves, shortening the west leg. The curvature demonstrates that the east leg is not as rigid as the west. Plate consumption varies slightly over the west leg but falls off sharply over the east. It is hypothesized that the consumption on the east is expressed by short segments cutting across the scarps, which nevertheless have slip vectors aligned with the western vectors over the entire arc in a wheel-spoke pattern; that is, the azimuths of the vectors decrease regularly from west to east. Though often shown crossing the Adriatic on maps, the subduction does not actually do so. The stress of the rotation was too great for the rock. 
The subducting plate broke along the KTF and also along the Pliny–Strabo trench area, forming a parallelogram that slipped outward between the two strike-slip cross-faults. More than one fault was required to release the stress to the east because the velocity of the rotating subduction increases outward along the radius of rotation. Subduction zone structure The western trough The surface expression of the KFZ appears to come to an end on the west at . It is generally agreed that the fault represents the offsetting of the Hellenic Arc from the Hellenides north of the Gulf of Corinth due to Aegean Plate extension. Prior to the offset, the subduction zone of the Adriatic, or Apulian, Plate under the edge of the Balkans was continuous with the Hellenic Trench. One might conclude that the Trench is the location of the subduction and the border of the Aegean Plate, as some have. As it turns out, the Mediterranean Ridge (MR), also arcuate, curves a little more to the north to intersect the KFZ a little further out than the HT. There is evidence that the KFZ projects further into the Abyssal Plain of the Ionian Sea at an angle to the strike of the previously known KFZ. The Plain is the site of the Mesozoic basement that further east is subducted. It is believed the KFZ may extend into it to a depth of as much as . As the KFZ may terminate both the HT and the MR on the north, either may be the location of the subduction. The location of the border between the Aegean Plate and the Ionian Sea Plain is again deferred until more definitive evidence can be obtained. The Hellenic Trench from the intersection with the KFZ to south of Crete consists of a line of deep-sea basins named after surface features and divided from each other by gravity rises. The three major parts of the western trench are as follows. The Zakynthos–Strofades basins The KFZ is on the outer border of an archipelago termed (by some) the Southern Ionian Island Chain. The four main islands are Lefkada, Ithaki, Kefalonia, and Zakynthos. The geographical custom in designating the waters between an island and the mainland is to call them a basin: the Zakynthos Basin (ZB), etc. The Southern Ionians also include the diminutive islands around the larger ones, including the two small islands to the south of Zakynthos, the Strofades. They and Zakynthos are joined by the submarine Zakynthos–Strofades Ridge. The waters around Zakynthos are the ZB; around the Strofades, the SB. The two together are the Zakynthos–Strofades System. Matapan deep The Matapan Deep or Matapan–Vavilov Deep is roughly . The Calypso Deep, located in the Matapan–Vavilov Deep, is roughly deep and is the deepest point in the Mediterranean Sea. Kithera–Antikithera deep The Kithera–Antikithera deep is . The eastern trough 
The ACCOBAMS's Scientific Committee conducts investigations, manages data, and makes recommendations to member countries. Those currently include every state that borders on the Mediterranean. The Hellenic trench region is an ecosystem to sperm whales and other aquatic life and has been used by marine biologists to study the behaviour of various aquatic species. This is the trench where several earthquakes, including the 365 Crete earthquake, occurred. See also Extremes on Earth Santorini Mediterranean Ridge Aegean Sea plate Hellenic arc 365 earthquake Footnotes Citations Reference bibliography Map of downloadable separately. External links Oceanography Extreme points of Earth Ionian Sea Landforms of the Mediterranean Sea Oceanic trenches of the Atlantic Ocean
Hellenic Trench
[ "Physics", "Environmental_science" ]
5,406
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
43,258,042
https://en.wikipedia.org/wiki/Adaptive%20immunity%20in%20jawless%20fish
Jawless vertebrates, which today consist entirely of lampreys and hagfish, have an adaptive immune system (AIS) similar to that found in jawed vertebrates. The cells of the agnathan AIS have roles roughly equivalent to those of B-cells and T-cells, with three lymphocyte lineages identified so far: VLRA (most similar to α/β T cells in its role and pathway of differentiation) VLRB (most similar to B cells) VLRC (most similar to γ/δ T cells) VLRA and VLRB were identified in 2009, while VLRC was discovered in 2013. Instead of immunoglobulins, they use variable lymphocyte receptors. Antigen receptors Jawless vertebrates do not have immunoglobulins (Igs), the key proteins of B-cells and T-cells. However, they do possess a system of leucine-rich repeat (LRR) proteins that make up variable lymphocyte receptors (VLRs). This system can produce roughly the same number of potential receptors as the Ig-based system found in jawed vertebrates. Instead of recombination-activating genes (RAGs), genes coding for VLRs can be altered by a family of cytidine deaminases known as APOBEC, possibly through gene conversion. Cytidine deaminase 1 is associated with the assembly of VLRA and VLRC, and cytidine deaminase 2 appears to assemble VLRB. Evolution The gene expression profiles of lymphocyte-like cells (LLCs) in jawless vertebrates indicate that VLRB+ LLCs and B cells share a common ancestor, and that VLRA+ and VLRC+ LLCs and T cells share a common ancestor. As with B cells and T cells, the development of VLRB+ LLCs is spatially separated from the development of VLRA+ and VLRC+ LLCs. VLRB+ LLCs and B cells develop in hematopoietic tissues: VLRB+ LLCs develop in the typhlosole and kidneys, and B cells develop in bone marrow. VLRA+ and VLRC+ LLCs develop in a thymus-like organ called the thymoid, similar to T cells developing in the thymus. VLRB molecules and B cells can directly bind to antigens, and VLRB-transfected cells secrete VLRB protein products, similar to B cells in jawed vertebrates. VLRA+ LLCs were unable to bind Bacillus anthracis, Escherichia coli, Salmonella typhimurium, or Streptococcus pneumoniae before or after immunization, suggesting that VLRAs require antigen processing like TCRs. However, MHCs or MHC-like molecules that could present processed antigens have not been found in lampreys, and some VLRAs expressed in yeast were able to directly bind to antigens. The antigen binding of VLRCs has not been studied. However, the VLRC gene is close in proximity and sequence to the VLRA gene, and the two are often co-expressed in LLCs, suggesting that both are TCR-like receptors. References Further reading Cyclostomi Immune system
Adaptive immunity in jawless fish
[ "Biology" ]
708
[ "Immune system", "Organ systems" ]
43,263,796
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Kunz%20function
In algebra, the Hilbert–Kunz function of a local ring (R, m) of prime characteristic p is the function where q is a power of p and m[q] is the ideal generated by the q-th powers of elements of the maximal ideal m. The notion was introduced by Ernst Kunz, who used it to characterize a regular ring as a Noetherian ring in which the Frobenius morphism is flat. If d is the dimension of the local ring, Monsky showed that f(q)/(q^d) is c + O(1/q) for some real constant c. This constant, the "Hilbert–Kunz multiplicity", is greater than or equal to 1. Watanabe and Yoshida strengthened some of Kunz's results, showing that in the unmixed case, the ring is regular precisely when c = 1. Hilbert–Kunz functions and multiplicities have been studied for their own sake. Brenner and Trivedi have treated local rings coming from the homogeneous coordinate rings of smooth projective curves, using techniques from algebraic geometry. Han, Monsky, and Teixeira have treated diagonal hypersurfaces and various related hypersurfaces. But there is no known technique for determining the Hilbert–Kunz function or c in general. In particular, the question of whether c is always rational was not settled until recently (by Brenner; it need not be, and indeed can be transcendental). Hochster and Huneke related Hilbert–Kunz multiplicities to "tight closure", and Brenner and Monsky used Hilbert–Kunz functions to show that localization need not preserve tight closure. The question of how c behaves as the characteristic goes to infinity (say, for a hypersurface defined by a polynomial with integer coefficients) has also received attention; once again, open questions abound. A comprehensive overview is to be found in Craig Huneke's article "Hilbert-Kunz multiplicities and the F-signature", arXiv:1409.0467. This article is also found on pages 485–525 of the Springer volume "Commutative Algebra: Expository Papers Dedicated to David Eisenbud on the Occasion of His 65th Birthday", edited by Irena Peeva. References Bibliography E. Kunz, "On Noetherian rings of characteristic p," Am. J. Math., 98 (1976), 999–1013. Ring theory
Hilbert–Kunz function
[ "Mathematics" ]
531
[ "Fields of abstract algebra", "Algebra stubs", "Ring theory", "Algebra" ]
43,264,630
https://en.wikipedia.org/wiki/Physics%20outreach
Physics outreach encompasses facets of science outreach and physics education, and a variety of activities by schools, research institutes, universities, clubs and institutions such as science museums aimed at broadening the audience for physics and raising awareness and understanding of it. While the general public may sometimes be the focus of such activities, physics outreach often centers on developing and providing resources and making presentations to students, educators in other disciplines, and in some cases researchers within different areas of physics. History Ongoing efforts to expand the understanding of physics to a wider audience have been undertaken by individuals and institutions since the early 19th century. Historic works, such as the Dialogue Concerning the Two Chief World Systems and Two New Sciences by Galileo Galilei, sought, to great effect, to present revolutionary knowledge in astronomy, frames of reference, and kinematics in a manner that a general audience could understand. In the mid-1800s, the English physicist and chemist Michael Faraday gave a series of nineteen lectures aimed at young adults, with the hope of conveying scientific phenomena to them. His intentions were to raise awareness, inspire his audience, and generate revenue for the Royal Institution. This series became known as the Christmas lectures and still continues today. By the early 20th century, the public fame of physicists such as Albert Einstein and Marie Curie, and inventions such as radio, led to a growing interest in physics. In 1921, in the United States, the establishment of the Sigma Pi Sigma physics honor society at universities was instrumental in the expanding number of physics presentations and led to the creation of physics clubs open to all students. Museums were an important form of outreach, but most early science museums were generally focused on natural history. Some specialized museums, such as the Cavendish Museum at the University of Cambridge, housed many of the historically important pieces of apparatus that contributed to the major discoveries by Maxwell, Thomson, Rutherford, etc. However, such venues provided little opportunity for hands-on learning or demonstrations. In August 1969, Frank Oppenheimer dedicated his new Exploratorium in San Francisco primarily to interactive science exhibits that demonstrated principles in physics. The Exploratorium published the details of its own exhibits in "Cookbooks" that served as an inspiration to many other museums around the world, and since then it has diversified into many outreach programs. Oppenheimer had researched European science museums while on a Guggenheim Fellowship in 1965. He noted that three museums served as important influences on the Exploratorium: the Palais de la Découverte, which displayed models to teach scientific concepts and employed students as demonstrators, a practice that directly inspired the Exploratorium's much-lauded High School Explainer Program; the South Kensington Museum of Science and Art, which Oppenheimer and his wife visited frequently; and the Deutsches Museum in Munich, the world's largest science museum, which had a number of interactive displays that impressed the Oppenheimers. In the ensuing years, physics outreach, and science outreach more generally, continued to expand and took on new popular forms, including highly successful television shows such as Cosmos: A Personal Voyage, first broadcast in 1980. 
As a form of outreach within the physics education community for teachers and students, in 1997 the US National Science Foundation (NSF) and the US Department of Energy (DOE) established QuarkNet, a professional teacher development program. In 2012, the University of Notre Dame received a $6.1M, five-year grant to support a nationwide expansion of the QuarkNet program. Also in 1997, the European Particle Physics Outreach Group, led by Christopher Llewellyn Smith, FRS, then Director General of CERN, was formed to create a community of scientists, science educators, and communication specialists in science education and public outreach for particle physics. This group became the International Particle Physics Outreach Group (IPPOG) in 2011, after the start-up of the LHC. Innovation Many contemporary initiatives in physics outreach have begun to shift focus, transcending traditional field boundaries and seeking to engage students and the public by integrating elements of aesthetic design and popular culture. The goal has been not only to push physics out of a strictly science education framework but also to draw in professionals and students from other fields to bring their perspectives on physical phenomena. Such work includes artists creating sculptures using ferrofluids, and art photography using high-speed and ultra-high-speed photography. Other efforts, such as the University of Cambridge's Physics at Work program, have created annual events to demonstrate to secondary students the uses of physics in everyday life, as well as a Senior Physics Challenge. Seeing the importance of these initiatives, Cambridge has established a full-time physics outreach organization, an Educational Outreach Office, and has aspirations for a Center of Physics and expanded industrial partnerships that "would include a well equipped core team of outreach officers dedicated to demonstrating the real life applications of physics, showing that physics is an accessible and relevant subject". The French research group La Physique Autrement (Physics Reimagined), of the Laboratoire de Physique des Solides, researches new ways to present modern solid-state physics and to engage the general public. In 2013, Physics Today covered this group in an article entitled "Quantum Physics For Everyone", which discussed how, with the help of designers and unconventional demonstrations, the project sought out, and succeeded in engaging, people who never thought of themselves as interested in science. The Science & Entertainment Exchange was developed by the United States National Academy of Sciences (NAS) to increase public awareness, knowledge, and understanding of science and advanced science technology through its representation in television, film, and other media. It was officially launched in 2008 as a partnership between the NAS and Hollywood. The Exchange is based in Los Angeles, California. Museums and public venues primarily focused on physical phenomena Canada Montreal Science Centre (Montreal, Quebec) displays many hands-on activities involving various physics phenomena. Finland Heureka (Helsinki) is an NPO science center run by the Finnish Science Centre Foundation with a broad spectrum of physics-related exhibits. France Cité des Sciences et de l'Industrie (Paris) is the largest French science museum, and contains permanent exhibits and hands-on experiments. Palais de la Découverte (Paris) contains permanent exhibits and interactive experiments with commentaries by lecturers. It includes a Zeiss planetarium with a 15-metre dome. 
It was created in 1937 by the French Nobel laureate physicist Jean Baptiste Perrin. Musée des Arts et Métiers (Paris) focuses on the preservation of scientific instruments and inventions. Other science museums that are part of the Cultural Center of Science, Technology and Industry (CCSTI) exist all across France: Espace des Sciences (Rennes), La Casemate (Grenoble), and the Cité de l'espace (Toulouse). Germany Deutsches Museum (Munich) is the world's largest science museum. One of the most popular events is the high-voltage demonstration of a Faraday cage as part of its series on electric power. Islamic Republic of Iran Iran Science and Technology Museum (Tehran) is the largest science museum in Iran. By holding varied scientific and educational programs, the museum provides a setting for the creation and spread of scientific thinking in society. One of these programs is the "Physics Show". Netherlands NEMO (Amsterdam) is the largest science center in the Netherlands, with hands-on science exhibitions. United States Exploratorium (San Francisco) is one of the foremost interactive science and art museums in the United States, dedicated to exploring how the world works through interactive exhibits, hands-on experiences and curious exploration. The Exploratorium was opened in 1969, and now attracts over a million visitors annually. The American Museum of Natural History in New York City is both a museum and a research facility with a department in astrophysics. As a natural history museum, it focuses on educating the public about human cultures, the natural world, and the universe, and has many interactive programs and lectures all year round. The Franklin Institute in Philadelphia is one of the oldest centers for science education and research in the United States. Scientific institutions and societies with physics outreach programs Canada The Perimeter Institute for Theoretical Physics, founded in 1999 in Waterloo, Ontario, Canada, is a center for scientific research, training and educational outreach in theoretical physics. Located in Vancouver, British Columbia, TRIUMF is Canada's national laboratory for particle and nuclear physics and accelerator-based science. In addition to its science mission, the laboratory is committed to physics outreach, offering public tours of its facilities, public talks, an artist-in-residence program, student fellowships, and other opportunities. The Canadian Association of Physicists (CAP), or in French Association canadienne des physiciens et physiciennes (ACP), is a Canadian professional society that focuses on creating awareness of physics issues amongst Canadians and Canadian legislators, sponsors physics-related events and outreach, and publishes Physics in Canada. France The French Physics Society has a specific section devoted to outreach and popularization of science. The European Physical Society (EPS) is based in France, but works to promote physics and physicists in Europe. Germany Deutsche Physikalische Gesellschaft (DPG, German Physical Society) is the world's largest organization of physicists. The DPG actively participates in communication between physics and the general public with several popular scientific publications and events such as the "Highlights of Physics", an annual physics festival organized jointly by the DPG and the Federal Ministry of Education and Research. This festival is the largest of its kind in Germany and attracts about 30,000 visitors every year. 
United Kingdom The Institute of Physics is an international charitable institution that aims to advance physics education, research and application. United States American Association for the Advancement of Science American Association of Physics Teachers American Institute of Physics (AIP) has an outreach program focused on advocating science policy to the US Congress and the general public. American Physical Society (APS) has a program dedicated to "Communicating the excitement and importance of physics to everyone." Leonardo, the International Society for the Arts, Sciences and Technology (Leonardo/ISAST), is a nonprofit organization that serves a global network of distinguished scholars, artists, scientists, researchers and thinkers. The institution focuses on interdisciplinary work, creative output and innovation. Its journal Leonardo is published by MIT Press. Media and Internet Media The Big Bang Theory is an American sitcom, created in 2007, that revolves around the lives of scientists at the California Institute of Technology. The show has been widely recognized for popularizing science and was noted by the New York Times as "helping physics and fiction collide". In 2014, the program was the most popular sitcom and the most popular non-sports program on American TV, with an average of 20 million viewers. However, the show has been criticized for sometimes portraying the scientific community inaccurately. C'est pas sorcier is a French educational television program that first aired on November 5, 1994; 20 shows dealt with astronomy and space topics and 13 with physics. Particle Fever is a 2013 documentary film that provides an intimate and accessible view of the first experiments at the Large Hadron Collider from the perspectives of the experimental physicists at CERN who run the experiments, as well as the theoretical physicists who attempt to provide a conceptual framework for the LHC's results. Reviewers praised the film for making theoretical arguments seem comprehensible, for making scientific experiments seem thrilling, and for making particle physicists seem human. Through the Wormhole is an American science documentary television series narrated and hosted by American actor Morgan Freeman that has featured physicists such as Michio Kaku and Brian Cox. Internet MinutePhysics is a series of educational videos created by Henry Reich and disseminated through its YouTube channel. It displays a series of pedagogical short videos about various physics phenomena and theories. The Physics World publication, run by the Institute of Physics, has started explaining scientific concepts through its YouTube channel. Palais de la Découverte in Paris hosts online videos that display various interviews about science, including physics. Unisciel, a French online university, hosts educational videos through its YouTube channel. Veritasium is a series of educational videos created by Derek Muller and disseminated through its YouTube channel. It displays a series of pedagogical short videos about science, including physics. Saint Mary's Physics Demonstrations is an online repository for physics classroom demonstrations. It shows teachers the experiments they can do in class while also hosting videos of those experiments. Periodic Videos is a portal of educational videos explaining the characteristics of each element and supporting topics such as nuclear reactions. The project is sponsored by the University of Nottingham and hosted by Prof. Sir Martyn Poliakoff. 
Prominent individuals Austria Fritjof Capra is an Austrian-born American physicist who attended the University of Vienna, where he earned his Ph.D. in theoretical physics in 1966. He is a founding director of the Center for Ecoliteracy in Berkeley, California, and is on the faculty of Schumacher College. Capra is the author of several books, including The Tao of Physics (1975), and has also done research in Paris and London. France Camille Flammarion was a French astronomer and author of many popular science books. Étienne Klein is a French physicist and philosopher of science involved in outreach efforts about particle and quantum physics. Roland Lehoucq is a French astrophysicist known for his outreach efforts, especially in relationship with fiction and science fiction. Hubert Reeves is a French Canadian astrophysicist and popularizer of science. United Kingdom Brian Cox is a British physicist and musician best known to the public as the presenter of a number of science programs for the BBC. Wendy J. Sadler promotes science and engineering as part of popular culture through Science Made Simple, an educational spin-off company of Cardiff University that reaches students through live presentations. She also trains scientists and engineers to improve their communication skills, enabling them to extend their research to a broader audience. Sadler was the IoP Young Professional Physicist of the Year in 2005. Robert Matthews is a Fellow of the Royal Statistical Society, a Chartered Physicist, a Member of the Institute of Physics, and a Fellow of the Royal Astronomical Society. Matthews is a distinguished science journalist. He is currently anchorman for the science magazine BBC Focus, and a freelance columnist for the Financial Times. In the past, he has been science correspondent for the Sunday Telegraph. United States Richard Feynman was a Nobel Prize-winning theoretical physicist also known as a science popularizer through his books and lectures, ranging from physics topics (quantum physics, nanophysics) to autobiographical essays. George Gamow was a theoretical physicist and cosmologist who also wrote popular books on science, some of which are still in print more than a half-century after their original publication. Brian Greene is a theoretical physicist involved in various outreach activities (books, TV shows). He co-founded the World Science Festival in 2008. Clifford Victor Johnson is a theoretical physicist involved in various outreach activities (blog, TV shows). Michio Kaku is a theoretical physicist, futurist, and communicator and popularizer of physics. He is best known for his three New York Times Best Sellers on physics: Physics of the Impossible (2008), Physics of the Future (2011), and The Future of the Mind (2014). Lawrence M. Krauss is an American theoretical physicist and cosmologist who is Foundation Professor of the School of Earth and Space Exploration at Arizona State University. He is known as an advocate of the public understanding of science, of public policy based on sound empirical data, of scientific skepticism and of science education, and he works to reduce the impact of superstition and religious dogma in pop culture. Don Lincoln is a physicist at Fermi National Accelerator Laboratory. While his research focuses on the Large Hadron Collider, he is known for his efforts to spread public awareness of physics and cosmology. He is the face of the Fermilab YouTube channel, where he has made over 150 videos. 
He is also a frequent contributor to CNN, Forbes, and many other online journals. He is the author of several books, including "Understanding the Universe", published by World Scientific, and "The Large Hadron Collider: The Extraordinary Story of the Higgs Boson and Other Things That Will Blow Your Mind", published by Johns Hopkins University Press. Jennifer Ouellette is the former director of the Science & Entertainment Exchange, an initiative of the National Academy of Sciences (NAS) designed to connect entertainment industry professionals with top scientists and engineers to help the creators of television shows, films, video games, and other productions incorporate science into their work. She is currently a freelance writer contributing to the physics outreach dialogue with articles in a variety of publications such as Physics World, Discover magazine, New Scientist, Physics Today, and The Wall Street Journal. Carl Sagan was an astrophysicist and science popularizer, one of his important contributions being the 1980 television series Cosmos: A Personal Voyage. Neil deGrasse Tyson is an astrophysicist and science communicator who has participated in TV and radio shows and written various outreach books. Jearl Walker is a physics professor at Cleveland State University. He wrote the Amateur Scientist column in Scientific American from 1978 to 1988 and authored the popular science book The Flying Circus of Physics. Funding sources American Physical Society awards grants of up to $10,000 to help APS members develop new physics outreach activities. Institute for Complex Adaptive Matter (ICAM) provides grants and fellowships for physics outreach. Wellcome Trust: while mostly focused on the biological sciences, the Wellcome Trust also touches on physics and encourages physics outreach, aiming to improve biology, chemistry, and physics A levels in the UK. Institute of Physics (IoP): the IoP aims to provide positive and compelling experiences of physics for public audiences through engaging and entertaining activities and events. Its public engagement grant scheme is designed to give financial support of up to £1500 to individuals and organisations running physics-based events and activities in the UK and Ireland. Awards The Kalinga Prize for the Popularization of Science is an award given by UNESCO for exceptional skill in presenting scientific ideas to lay people. The Klopsteg Memorial Award is presented by the American Association of Physics Teachers and given in memory of the physicist Paul E. Klopsteg. The Kelvin Prize is awarded by the Institute of Physics to acknowledge outstanding contributions to the public understanding of physics. The Michael Faraday Prize for communicating science to a UK audience is awarded by the Royal Society. The Prix Jean Perrin for popularization in physics is awarded by the French Physics Society. References Physics education History of physics Science in society Science communication
Physics outreach
[ "Physics" ]
3,780
[ "Applied and interdisciplinary physics", "Physics education" ]
43,264,947
https://en.wikipedia.org/wiki/Micro-spectrophotometry
Microspectrophotometry is the measurement of the spectra of microscopic samples using different wavelengths of electromagnetic radiation (e.g. ultraviolet, visible and near infrared). It is accomplished with microspectrophotometers, cytospectrophotometers, microfluorometers, Raman microspectrophotometers, etc. A microspectrophotometer can be configured to measure the transmittance, absorbance, reflectance, light polarization, or fluorescence (or other types of luminescence such as photoluminescence) of sample areas less than a micrometer in diameter through a modified optical microscope. Applications The main reason to use microspectrophotometry is the ability to measure the optical spectra of samples with a spatial resolution on the micron scale. Optical spectra may be acquired of either microscopic samples or larger samples with a micron-scale spatial resolution. Another reason microspectrophotometry is useful is that measurements are made without destroying the samples. This is important when dealing with stained or unstained histological or cytochemical biological sections, when measuring film thickness in semiconductor integrated circuits, when matching paints and fibers (forensic science), when studying gems and coal (geology), and in paint/ink/color analysis in paint chemistry or art-work. Variations An advantage of the 'microscope spectrometer' is its ability to use microscope apertures to precisely control the area of sample analysis. Flat capillaries can be used for analyzing small liquid samples, up to about 10 microliters in volume. Quartz or mirror-based optics can be used for studying samples from the ultraviolet (UV), down to 200 nm, to the near infrared (NIR), up to 2100 nm. Samples that emit electromagnetic radiation via fluorescence, phosphorescence or photoluminescence when exposed to light can be quantitatively investigated using a variety of excitation and barrier filters. A variety of observations can be made on samples of interest by using different illumination sources such as halogen, xenon, deuterium and mercury lamps. Plane-polarized light can also be used for studying birefringent samples. See also Fluorescence spectroscopy Infrared spectroscopy Microfluorimetry Raman Microspectroscopy Spectrophotometry Ultraviolet-visible Microspectroscopy Ultraviolet External links CRAIC Technologies - The Science of Microspectroscopy Leica - Microphotometry and Microspectrophotometry FBI — Standards and Guidelines - Forensic Science Communications - October 2007 Spectroscopy
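Since transmittance and absorbance measurements are central to most of these instruments, the following minimal sketch shows how a measured transmittance spectrum is converted to absorbance via the Beer-Lambert relation A = -log10(T). The intensity arrays, the dark-count correction, and all numerical values are hypothetical placeholders, not output from any particular instrument.

```python
import numpy as np

# Minimal sketch: converting a measured transmittance spectrum to absorbance
# with the Beer-Lambert relation A = -log10(T).  I_sample, I_reference and
# I_dark are hypothetical detector readings, not data from a real instrument.

wavelengths = np.linspace(200, 2100, 951)          # nm, UV through NIR range
I_dark = np.full_like(wavelengths, 50.0)           # detector dark counts
I_reference = 4000.0 + 500.0 * np.exp(-((wavelengths - 550.0) / 300.0) ** 2)
I_sample = 0.4 * I_reference                       # toy sample, ~40% transmitting

T = (I_sample - I_dark) / (I_reference - I_dark)   # transmittance in [0, 1]
A = -np.log10(np.clip(T, 1e-6, None))              # absorbance; clip avoids log(0)

i = np.argmin(np.abs(wavelengths - 550.0))         # index nearest 550 nm
print(f"T(550 nm) = {T[i]:.3f}, A(550 nm) = {A[i]:.3f}")
```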
Micro-spectrophotometry
[ "Physics", "Chemistry" ]
539
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
40,333,066
https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20Fran%C3%A7aise%20de%20G%C3%A9nie%20des%20Proc%C3%A9d%C3%A9s
The Société Française de Génie des Procédés (French Society of Process Engineers) or SFGP is a French organization for chemical engineers. It is a member of the European Federation of Chemical Engineering, for which it acts as joint secretariat, and of the Fédération Française pour les sciences de la Chimie (FFC). It publishes a technical journal, "Récents progrès en Génie des Procédés", and a newsletter for members, "Procédique" (first published April 1988), and organizes a congress every other year. As of 2014 its membership was in excess of 600. Its history dates back to a congress in 1987, the 1er Congrès Français de Génie des Procédés, and the formation the following year of the Groupe Français de Génie des Procédés (GFGP), which had 340 members in 1989 and was formally transformed into the present organization in 1997. Its mission statement is to: promote process engineering; promote exchanges between academics, trainers and researchers, manufacturers developing and operating processes, engineering companies and suppliers at the national, European and global levels; build a network of experts to respond to societal challenges and the innovation needs of the process industries; and represent the profession to political and institutional decision-makers. References Chemical engineering organizations Chemical industry in France Engineering societies based in France Organizations established in 1997 1997 establishments in France
Société Française de Génie des Procédés
[ "Chemistry", "Engineering" ]
269
[ "Chemical engineering", "Chemical engineering organizations" ]
40,333,498
https://en.wikipedia.org/wiki/Chaplygin%20sleigh
The Chaplygin sleigh is a simple pedagogical example of a nonholonomic system in mechanics, described by Sergey Chaplygin. It consists of a body that slides frictionlessly on a horizontal plane, with a knife edge that constrains its motion so that the knife slides only longitudinally. Because this constraint is nonholonomic, Liouville's theorem does not apply, and although energy is conserved, the motion is dissipative in the sense that phase-space volume is not conserved. The motion is attracted to an equilibrium, in which the sleigh moves without rotation, with the knife edge trailing the center of mass. There are several ways of seeing that the system is nonholonomic. The dimension of the phase space is 5, which is odd. The constraint on the velocity is not derivable from a constraint on the coordinates. The motion of the system can be characterized simply. Let v be the velocity, with positive values indicating that the knife edge trails. Let ω be the angular velocity. Then the equations of motion are dv/dt = aω² and dω/dt = −[ma/(I + ma²)] vω, where a is the distance between the center of mass and the contact point (frequently the front of the sleigh), I is the moment of inertia about the center of mass, and m is the mass. The solutions are ellipses in the v–ω plane: the conserved energy E = [mv² + (I + ma²)ω²]/2 constrains each trajectory to an ellipse, along which the state moves toward the equilibrium ω = 0, v > 0. The equations of motion are symmetric under time reversal, but asymmetric under inversion of the body-fixed axis aligned with the knife edge. In geometric mechanics, the Chaplygin sleigh lives in the special Euclidean group SE(2), since the position and direction are accounted for. The analogue of the Chaplygin sleigh is the torpedo. Here, the body glides frictionlessly through space rather than on the plane. The system is constrained to slide only longitudinally and cannot twist around the direction of motion, similar to how a torpedo moves through water. The dimension of the phase space is 9, since there are 6 positions and 6 velocities coupled with 3 constraints. This system is the natural analogue of the Chaplygin sleigh since it lives in SE(3). References A.M. Bloch, Nonholonomic Mechanics and Control. Springer New York, NY. W.A. Clark, A.M. Bloch, Existence of Invariant Volumes in Nonholonomic Systems. Mechanics
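As an illustration of the dynamics above, here is a minimal numerical sketch (with arbitrary parameter values, not tied to any reference) that integrates the two reduced equations with a classical fourth-order Runge-Kutta step. It checks that the energy stays constant, so the trajectory traces an ellipse in the v–ω plane, and that the state drifts toward the straight-line equilibrium ω = 0, v > 0.

```python
import numpy as np

# Minimal sketch: integrate the reduced Chaplygin-sleigh equations
#     dv/dt = a * w**2
#     dw/dt = -(m * a / (I + m * a**2)) * v * w
# and verify that E = (m*v**2 + (I + m*a**2)*w**2) / 2 is conserved, so each
# trajectory lies on an ellipse in the (v, w) plane while drifting to w = 0.

m, I, a = 1.0, 0.5, 0.3  # mass, moment of inertia about the CM, CM-contact distance

def rhs(state):
    v, w = state
    return np.array([a * w**2, -(m * a / (I + m * a**2)) * v * w])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state):
    v, w = state
    return 0.5 * (m * v**2 + (I + m * a**2) * w**2)

state = np.array([-1.0, 1.5])  # initially sliding knife-edge first and spinning
E0 = energy(state)
for _ in range(20000):         # integrate 20 time units with dt = 1e-3
    state = rk4_step(state, 1e-3)

v, w = state
print(f"final v = {v:.4f}, final w = {w:.2e}")              # w -> 0, v -> +const
print(f"relative energy drift = {abs(energy(state) - E0) / E0:.2e}")
```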
Chaplygin sleigh
[ "Physics", "Engineering" ]
476
[ "Mechanics", "Mechanical engineering" ]
40,335,735
https://en.wikipedia.org/wiki/Topological%20fluid%20dynamics
Topological ideas are relevant to fluid dynamics (including magnetohydrodynamics) at the kinematic level, since any fluid flow involves continuous deformation of any transported scalar or vector field. Problems of stirring and mixing are particularly amenable to topological techniques. Thus, for example, the Thurston–Nielsen classification has been fruitfully applied to the problem of stirring in two dimensions by any number of stirrers following a time-periodic 'stirring protocol' (Boyland, Aref & Stremler 2000). Other studies are concerned with flows having chaotic particle paths, and associated exponential rates of mixing (Ottino 1989). At the dynamic level, the fact that vortex lines are transported by any flow governed by the classical Euler equations implies conservation of any vortical structure within the flow. Such structures are characterised at least in part by the helicity H = ∫ u · (∇ × u) dV of certain sub-regions of the flow field, a topological invariant of the equations. Helicity plays a central role in dynamo theory, the theory of spontaneous generation of magnetic fields in stars and planets (Moffatt 1978, Parker 1979, Krause & Rädler 1980). It is known that, with few exceptions, any statistically homogeneous turbulent flow having nonzero mean helicity in a sufficiently large expanse of conducting fluid will generate a large-scale magnetic field through dynamo action. Such fields themselves exhibit magnetic helicity, reflecting their own topologically nontrivial structure. Much interest attaches to the determination of states of minimum energy, subject to prescribed topology. Many problems of fluid dynamics and magnetohydrodynamics fall within this category. Recent developments in topological fluid dynamics also include applications to magnetic braids in the solar corona, DNA knotting by topoisomerases, polymer entanglement in chemical physics and chaotic behavior in dynamical systems. A mathematical introduction to this subject is given by Arnold & Khesin (1998), and recent survey articles and contributions may be found in Ricca (2009), and Moffatt, Bajer & Kimura (2013). Topology is also crucial to the structure of neutral surfaces in a fluid (such as the ocean) where the equation of state nonlinearly depends on multiple components (e.g. salinity and heat). Fluid parcels remain neutrally buoyant as they move along neutral surfaces, despite variations in salinity or heat. On such surfaces, the salinity and heat are functionally related, but this function is multivalued. The spatial regions within which this function becomes single-valued are those where there is at most one contour of salinity (or heat) per isovalue, which are precisely the regions associated with each edge of the Reeb graph of the salinity (or heat) on the surface (Stanley 2019). References Arnold, V.I. & Khesin, B.A. (1998) Topological Methods in Hydrodynamics. Applied Mathematical Sciences 125, Springer-Verlag. Boyland, P.L., Aref, H. & Stremler, M.A. (2000) Topological fluid mechanics of stirring. J. Fluid Mech. 403, pp. 277–304. Krause, F. & Rädler, K.-H. (1980) Mean-field Magnetohydrodynamic and Dynamo Theory. Pergamon Press, Oxford. Moffatt, H.K. (1978) Magnetic Field Generation in Electrically Conducting Fluids. Cambridge Univ. Press. Moffatt, H.K., Bajer, K., & Kimura, Y. (Eds.) (2013) Topological Fluid Dynamics, Theory and Applications. Kluwer. Ottino, J. (1989) The Kinematics of Mixing: Stretching, Chaos and Transport. Cambridge Univ. Press. Parker, E.N. 
(1979) Cosmical Magnetic Fields: their Origin and their Activity. Oxford Univ. Press. Ricca, R.L. (Ed.) (2009) Lectures on Topological Fluid Mechanics. Springer-CIME Lecture Notes in Mathematics 1973. Springer-Verlag, Heidelberg, Germany. Stanley, G.J. (2019) Neutral surface topology. Ocean Modelling 138, 88–106. Fluid dynamics Topological dynamics
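As a concrete companion to the helicity integral discussed above, the sketch below evaluates H = ∫ u · (∇ × u) dV numerically for the Arnold-Beltrami-Childress (ABC) flow on the periodic box [0, 2π]³. The ABC flow is a Beltrami field (∇ × u = u), so the exact answer is (2π)³(A² + B² + C²), which the spectral computation should reproduce; the amplitudes A, B, C and the grid size are arbitrary choices made for the demonstration.

```python
import numpy as np

# Sketch: numerically evaluate the helicity H = \int u . (curl u) dV for the
# Arnold-Beltrami-Childress (ABC) flow on the periodic box [0, 2*pi]^3.
# Since curl u = u for this flow, H = \int |u|^2 dV = (2*pi)^3 * (A^2+B^2+C^2).

A, B, C = 1.0, 0.7, 0.4
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

u = A * np.sin(Z) + C * np.cos(Y)
v = B * np.sin(X) + A * np.cos(Z)
w = C * np.sin(Y) + B * np.cos(X)

# Spectral derivatives on the periodic grid; with d = 1/n these are the
# integer angular wavenumbers appropriate to a domain of length 2*pi.
k = np.fft.fftfreq(n, d=1.0 / n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")

def deriv(f, K):
    return np.real(np.fft.ifftn(1j * K * np.fft.fftn(f)))

curl_x = deriv(w, KY) - deriv(v, KZ)
curl_y = deriv(u, KZ) - deriv(w, KX)
curl_z = deriv(v, KX) - deriv(u, KY)

cell = (2 * np.pi / n) ** 3
H = np.sum(u * curl_x + v * curl_y + w * curl_z) * cell
H_exact = (2 * np.pi) ** 3 * (A**2 + B**2 + C**2)
print(f"numerical H = {H:.6f}, analytic H = {H_exact:.6f}")
```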
Topological fluid dynamics
[ "Chemistry", "Mathematics", "Engineering" ]
874
[ "Chemical engineering", "Topology", "Piping", "Fluid dynamics", "Topological dynamics", "Dynamical systems" ]
40,339,867
https://en.wikipedia.org/wiki/Rectified%20tesseractic%20honeycomb
In four-dimensional Euclidean geometry, the rectified tesseractic honeycomb is a uniform space-filling tessellation (or honeycomb) in Euclidean 4-space. It is constructed by a rectification of a tesseractic honeycomb, which creates new vertices at the midpoints of all the original edges, rectifies the cells into rectified tesseracts, and adds new 16-cell facets at the original vertices. Its vertex figure is an octahedral prism, {3,4}×{}. It is also called a quarter tesseractic honeycomb since it has half the vertices of the 4-demicubic honeycomb, and a quarter of the vertices of a tesseractic honeycomb. Related honeycombs See also Regular and uniform honeycombs in 4-space: Tesseractic honeycomb Demitesseractic honeycomb 24-cell honeycomb Truncated 24-cell honeycomb Snub 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb Notes References Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318 George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) o4x3o3o4o, o3o3o *b3x4o, x3o3x *b3o4o, x3o3x *b3o *b3o - rittit - O87 Honeycombs (geometry) 5-polytopes
Rectified tesseractic honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
424
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
40,341,221
https://en.wikipedia.org/wiki/Jeewanu
Jeewanu (Sanskrit for "particles of life") are synthetic chemical particles that possess cell-like structure and seem to have some functional properties; that is, they are a model of primitive cells, or protocells. They were first synthesised by Krishna Bahadur (20 January 1926 – 5 August 1994), an Indian chemist, and his team in 1963. Using photochemical reactions, they produced coacervates, microscopic cell-like spheres, from a mixture of simple organic and inorganic compounds. Bahadur named these particles 'Jeewanu' because they exhibit some of the basic properties of a cell, such as the presence of a semipermeable membrane, amino acids, phospholipids and carbohydrates. Further, like living cells, they had several catalytic activities. Jeewanu are cited as models of protocells for the origin of life, and as artificial cells. Etymology Jeewanu is derived from Sanskrit जीव jīvá, meaning "life", and अणु aṇu, meaning "smallest particle", or the "indivisible". In contemporary Hindi, jeewanu also means unicellular organisms such as bacteria. Bahadur specifically used the term to represent the Indian philosophical tradition, not only through the use of Sanskrit but also by inferring ideas on the origin of life from the Vedas. While employing traditional Hindu philosophy, Bahadur attempted to incorporate the advances in cell biology into the concept of abiogenesis. Synthesis In 1954 and 1958 Krishna Bahadur and co-workers published the successful synthesis of amino acids from a mixture of paraformaldehyde, colloidal molybdenum oxide or potassium nitrate, and ferric chloride under sunlight. It appears that this experimental approach was seminal for the assays to produce Jeewanu, which he first reported in 1963 in an obscure Indian journal, Vijnana Parishad Anusandhan Patrika. His detailed syntheses were published in Germany in 1964 in a series of articles. The initial experiment consisted of a sterilised apparatus in which inorganic nitrogenous compounds (such as ammonium phosphate and ammonium molybdate) and organic compounds such as citric acid (C6H8O7), paraformaldehyde (OH(CH2O)nH) and formaldehyde (CH2O), as carbon sources, were mixed with minerals commonly found in living cells. Inorganic substances such as colloidal ferric chloride or molybdenum compounds supposedly acted as cofactors and catalysts. When the apparatus was exposed to sunlight for several days and constantly shaken, microscopic spherical particles were formed. An interesting feature of these particles was that they were enclosed in a semipermeable membrane, like a typical cell membrane. Like living cells, they were reported to contain amino acids, a phospholipid membrane and carbohydrates. In addition, they were claimed to have reproductive capability by budding, much like unicellular organisms, but did not grow on any bacterial culture medium. Bahadur reported that the Jeewanu exhibited various catalytic properties and produced their own peptides by metabolic reactions. Bahadur's later work on the Jeewanu also detected the presence of amino acids in peptide form and sugars in the form of ribose, deoxyribose, fructose and glucose, as well as nucleic acid bases (DNA and RNA building blocks) including adenine, guanine, cytosine, thymine and uracil. Bahadur also reported having detected ATPase-like and peroxidase-like activity. Bahadur stated that, using molybdenum as a cofactor, the Jeewanu showed a capability for reversible photochemical electron transfer, and released a gas mixture of oxygen and hydrogen at a 1:2 ratio. 
Scientific reviews Bahadur's publications were ambivalently received, and the overall attention of the scientific community seemed limited: Krishna Bahadur and his co-workers reported that the Jeewanu are alive (a striking claim), changed their protocols frequently, and documented them somewhat idiosyncratically. Bahadur defined "living units" as "[...] those which grow, multiply, and are metabolically active in a systematic, harmonious, and synchronized manner". In 1967, NASA's Exobiology Division tasked two biologists with reviewing and evaluating the literature so far published by Krishna Bahadur on the synthesis and characteristics of the Jeewanu (not with replicating the experiments). The two NASA biologists did not debate whether these three criteria are an adequate definition of life, but whether the Jeewanu satisfy these criteria. The NASA report concluded that "the evidence presented on these three points is on the whole unconvincing". The report also stated that the postulated existence of these living units had not been proved and "the nature and properties of the Jeewanu remains to be clarified." In the 1980s, the Hungarian chemist Tibor Gánti discussed the Jeewanu at length in his 'chemoton theory', an abstract model of autocatalytic chemical reactions, published first in Hungarian and translated into English in 2003. In the context of self-organizing structures, Gánti considered the Jeewanu a promising model system for understanding the origin and fundamentals of life, and one that had never received due attention. In 2011, a German scientist stated that the Jeewanu story pertains to concepts of life and its beginnings, as well as to possible artificially created cells. Experimental duplication work published in 2013 by Gupta and Rai reported that the particles vary from 0.5 μm to 3.5 μm in diameter and show growth from within, metabolic activities, and "the presence of RNA-like material". The authors stated that the RNA-like material detected in the Jeewanu protocells supports the RNA world hypothesis. See also Abiogenesis Artificial cell Emergence Endocytosis Endosymbiotic theory Entropy and life Evolutionary developmental biology Last universal ancestor Lipid bilayer characterization Lipid bilayer phase behavior Model lipid bilayer Protocell Circus – a film Pseudo-panspermia References Books "Synthesis of Jeewanu, the Protocell." Bahadur, Krishna. (In English) Ram Narain Lal Beni Prasad, New Katra, Allahabad-211002 (U.P) India. ASIN: B0007JHWU0 (1966) "Origin of Life: A Functional Approach." Bahadur K. and Ranganayaki S. Ram Narain Lal Beni Prasad, New Katra, Allahabad-211002 (U.P), India, (1981) External links jeewanu.com Dr. Krishna Bahadur's homepage. Evolutionary biology Membrane biology Origin of life Synthetic biology Prebiotic chemistry
Jeewanu
[ "Chemistry", "Engineering", "Biology" ]
1,396
[ "Synthetic biology", "Evolutionary biology", "Biological engineering", "Origin of life", "Membrane biology", "Prebiotic chemistry", "Bioinformatics", "Molecular genetics", "Molecular biology", "Biological hypotheses" ]
40,341,247
https://en.wikipedia.org/wiki/Tridemorph
Tridemorph is a fungicide used to control Erysiphe graminis. It was developed by BASF in the 1960s, which markets it under the trade name Calixin. The World Health Organization has categorized it as a Class II "moderately hazardous" pesticide because it is believed to be harmful if swallowed and can cause irritation to the skin and eyes. One theory for the cause of the Hollinwell incident is that it might have been caused by inhalation of tridemorph. References External links Fungicides Morpholines
Tridemorph
[ "Chemistry", "Biology" ]
112
[ "Fungicides", "Biocides" ]
41,765,358
https://en.wikipedia.org/wiki/Phi%20Josephson%20junction
A φ Josephson junction (pronounced phi Josephson junction) is a particular type of Josephson junction, which has a non-zero Josephson phase φ across it in the ground state. A π Josephson junction, which has the minimum energy corresponding to the phase of π, is a specific example of it. Introduction The Josephson energy U depends on the superconducting phase difference (Josephson phase) φ periodically, with the period 2π. Therefore, let us focus only on one period, e.g. −π < φ ≤ π. In the ordinary Josephson junction the dependence U(φ) has the minimum at φ = 0. The function U(φ) = (Φ₀Ic/2π)[1 − cos(φ)], where Ic is the critical current of the junction, and Φ₀ is the flux quantum, is a good example of conventional U(φ). Instead, when the Josephson energy has a minimum (or more than one minimum per period) at φ ≠ 0, these minima correspond to the lowest energy states (ground states) of the junction and one speaks about a "φ Josephson junction". Consider two examples. First, consider the junction with the Josephson energy having two minima at ±φ₀ within each period, where φ₀ (such that 0 < φ₀ < π) is some number. For example, this is the case for U(φ) = (Φ₀/2π)[Ic1(1 − cos(φ)) + (Ic2/2)(1 − cos(2φ))], which corresponds to the current-phase relation I(φ) = Ic1 sin(φ) + Ic2 sin(2φ). If Ic1 > 0 and Ic2 < −Ic1/2, the minima of the Josephson energy occur at φ = ±φ₀, where φ₀ = arccos(−Ic1/(2Ic2)). Note that the ground state of such a Josephson junction is doubly degenerate because U(−φ₀) = U(+φ₀). Another example is the junction with the Josephson energy similar to the conventional one, but shifted along the φ-axis, for example U(φ) = (Φ₀Ic/2π)[1 − cos(φ − φ₀)], and the corresponding current-phase relation I(φ) = Ic sin(φ − φ₀). In this case the ground state is φ = φ₀ and it is not degenerate. The above two examples show that the Josephson energy profile in a φ Josephson junction can be rather different, resulting in different physical properties. Often, to distinguish which particular type of the current-phase relation is meant, researchers use different names. At the moment there is no well-accepted terminology. However, some researchers use the terminology after A. Buzdin: Josephson junctions with the doubly degenerate ground state φ = ±φ₀, similar to the first example above, are indeed called φ Josephson junctions, while junctions with a non-degenerate ground state, similar to the second example above, are called φ₀ Josephson junctions. Realization of φ junctions The first indications of φ junction behavior (degenerate ground states or unconventional temperature dependence of the critical current) were reported at the beginning of the 21st century. These junctions were made of d-wave superconductors. The first experimental realization of a controllable φ junction was reported in September 2012 by the group of Edward Goldobin at the University of Tübingen. It is based on a combination of 0 and π segments in one superconducting-insulator-ferromagnetic-superconductor hybrid device and clearly demonstrates two critical currents corresponding to the two junction states ±φ₀. The proposal to construct a φ Josephson junction out of (infinitely) many 0 and π segments appeared in works by R. Mints and coauthors, although at that time the term φ junction was not yet in use. The term φ Josephson junction first appeared in the work of Buzdin and Koshelev, whose idea was similar. Following this idea, it was further proposed to use a combination of only two 0 and π segments. In 2016, a junction based on a nanowire quantum dot was reported by the group of Leo Kouwenhoven at Delft University of Technology. The InSb nanowire has strong spin-orbit coupling, and a magnetic field was applied, leading to the Zeeman effect. 
This combination breaks both inversion and time-reversal symmetries, creating a finite current at zero phase difference. Other theoretically proposed realizations include geometric φ junctions: there is a theoretical prediction that one can construct a so-called geometric φ junction based on a nano-structured d-wave superconductor. As of 2013, this had not been demonstrated experimentally. Properties of φ junctions A φ junction has two critical currents, related to the escape (depinning) of the phase from the two different wells of the Josephson potential. The lowest critical current can be seen experimentally only at low damping (low temperature). Measurements of the critical current can be used to determine the (unknown) state (+φ₀ or −φ₀) of the φ junction. In the case of a φ junction constructed out of 0 and π segments, a magnetic field can be used to change the asymmetry of the Josephson energy profile, up to the point that one of the minima disappears. This makes it possible to prepare the desired state (+φ₀ or −φ₀). Also, an asymmetric periodic Josephson energy potential can be used to construct ratchet-like devices. Long φ junctions allow special types of soliton solutions: the splintered vortices of two types, one carrying the magnetic flux Φ₁ = Φ₀φ₀/π, the other carrying the complementary flux Φ₀ − Φ₁. Here Φ₀ is the magnetic flux quantum. These vortices are the solitons of a double sine-Gordon equation. They were observed in d-wave grain boundary junctions. Applications Similar to π Josephson junctions, φ junctions can be used as a phase battery. The two stable states +φ₀ and −φ₀ can be used to store digital information. To write the desired state one can apply a magnetic field, so that one of the energy minima disappears and the phase has no choice but to go to the remaining one. To read out an unknown state of the φ junction one can apply a bias current with a value between the two critical currents. If the φ junction switches to the voltage state, its state was −φ₀; otherwise, it was +φ₀. The use of φ junctions as a memory cell (1 bit) has already been demonstrated. In the quantum domain the φ junction can be used as a two-level system (qubit). See also Semifluxon Fractional vortices References Superconductivity Josephson effect
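As a quick numerical cross-check of the two-harmonic energy profile discussed in the introduction, the sketch below (with illustrative current values, not parameters of any real device) tabulates U(φ) on a phase grid, locates its local minima, and compares them with the analytic positions ±φ₀ = ±arccos(−Ic1/(2Ic2)).

```python
import numpy as np

# Sketch of the two-harmonic Josephson energy discussed above (the constant
# prefactor Phi_0/2pi is dropped; current values are purely illustrative):
#     U(phi) = Ic1*(1 - cos(phi)) + (Ic2/2)*(1 - cos(2*phi))
# For Ic1 > 0 and Ic2 < -Ic1/2 this develops two degenerate minima at
# phi = +/- arccos(-Ic1 / (2*Ic2)), i.e. a phi junction.

Ic1, Ic2 = 1.0, -0.8  # satisfies Ic2 < -Ic1/2

def U(phi):
    return Ic1 * (1.0 - np.cos(phi)) + 0.5 * Ic2 * (1.0 - np.cos(2.0 * phi))

phi = np.linspace(-np.pi, np.pi, 20001)
u = U(phi)

# Interior grid points lower than both neighbours are local minima.
is_min = (u[1:-1] < u[:-2]) & (u[1:-1] < u[2:])
minima = phi[1:-1][is_min]

phi0 = np.arccos(-Ic1 / (2.0 * Ic2))
print("minima found on grid:", np.round(minima, 4))
print("analytic +/-phi0:    ", np.round([-phi0, phi0], 4))
```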
Phi Josephson junction
[ "Physics", "Materials_science", "Engineering" ]
1,198
[ "Josephson effect", "Physical quantities", "Superconductivity", "Materials science", "Condensed matter physics", "Electrical resistance and conductance" ]
41,766,064
https://en.wikipedia.org/wiki/Film%20coating
A film coating is a thin polymer-based coat that is typically sprayed onto solid pharmaceutical dosage forms, such as tablets, capsules, pellets or granules. Film coating can affect both the appearance and the pharmacokinetics of a dosage form, making it an essential process in producing the final drug product. Film coatings are the most common form of drug coating and are generally applied to orally administered pharmaceuticals. The motivations for applying film coatings to dosage forms range from cosmetic considerations (colour, gloss and branding) to improving the shelf life by providing a protective barrier between the drug and the surrounding environment. These types of film coatings are known as non-functional film coatings. Film coatings may also be used to delay or augment the delivery and uptake of medications, or to delay release and uptake until the medication passes through the stomach. These types of film coatings are known as functional film coatings. Process The conventional method of applying film coatings to oral dosage forms comprises a spraying phase and a drying phase. The spraying phase consists of applying a layer of polymer, plasticizer, colourant, opacifier, solvent, and vehicle to the oral dosage form's core. Once applied, the oral dosage form is dried by passing hot air over it, which typically also removes the solvent. The final result is a thin film coating with the desired plasticizer, colour, opacifier, and vehicle. Properties Non-functional coating Non-functional film coating involves changes made to the aesthetics of the oral dosage form. Such changes affect an oral dosage form's appearance, organoleptic properties and swallowing properties, and provide protection against harsh environmental conditions that can damage the active pharmaceutical ingredient. These changes are made to improve the compliance with, and effects of, the oral dosage form. For instance, the appearance can be changed by changing the colour of the drug, leading to a more appealing product. Changing the swallowing properties can make the product easier to take for those suffering from dysphagia. Finally, adding a film that protects against harsh environmental conditions, such as humidity, oxidation, or light, increases the shelf life of the final product. Functional coating Functional film coating provides the same properties as non-functional film coating, but also has added properties that affect drug release. These changes alter the region of the gastrointestinal tract in which the final drug product is released. See modified-release dosage and enteric coatings. Types Organic solvent-based coating Organic solvents are typically used in film coating to apply protective coatings to the oral dosage form, which aids in increasing the shelf life of the final drug product. This type of film coating can be dangerous due to the potential for toxicity in the final product and flammability during the film coating process. As such, it is essential to have proper safety measures and ventilation in place when film coating. Aqueous coating Aqueous film coating is the most common film coating method currently used. This type of film coating uses water to aid in the film coating process instead of organic solvents. The result is a safer means of film coating, as it avoids the toxic and flammable properties of organic solvents. Aqueous film coating requires the use of water-insoluble polymer mixtures, with the addition of a plasticizer. 
Despite its widespread popularity, aqueous film coating is more time-consuming than organic solvent-based coating, due to the increased time needed for complete evaporation of water. Solvent-free coating Solvent-free film coating is most commonly used for coating heat-sensitive drugs, due to the benefit of not requiring a drying phase. The end result of this type of film coating is an inert film that does not react with the active pharmaceutical ingredients. Methods for creating a solvent-free film coating include injection molding coating, hot-melt coating, and spray congealing. Each method has its own advantages and disadvantages, but the common theme amongst them is the need for very precisely controlled conditions to apply a satisfactory film coating to the oral dosage form. This makes solvent-free film coating relatively inefficient, which has limited its widespread use. References Pharmacy Coatings Polymers
Film coating
[ "Chemistry", "Materials_science" ]
867
[ "Pharmacology", "Pharmacy", "Coatings", "Polymer chemistry", "Polymers" ]
41,767,506
https://en.wikipedia.org/wiki/Structured%20what-if%20technique
The structured what-if technique (SWIFT) is a prospective hazards analysis method that uses structured brainstorming with guidewords and prompts to identify risks, with the aim of being quicker than more intensive methods such as failure mode and effects analysis (FMEA). It is used in various settings, including healthcare. Like other such methods, SWIFT may not be comprehensive. In a healthcare context, SWIFT was found to reveal significant risks, but like similar methods (including healthcare failure mode and effects analysis) it may have limited validity when used in isolation. References Process safety Quality assurance Systems analysis Reliability analysis Reliability engineering
Structured what-if technique
[ "Chemistry", "Engineering" ]
128
[ "Systems engineering", "Reliability analysis", "Reliability engineering", "Safety engineering", "Process safety", "Chemical process engineering" ]
41,768,703
https://en.wikipedia.org/wiki/GLIC
The GLIC receptor is a bacterial (Gloeobacter) Ligand-gated Ion Channel, homologous to the nicotinic acetylcholine receptors. It is a proton-gated channel (the channel opens when it binds a proton, the H+ ion) and is cation-selective (it selectively lets positive ions through). Like the nicotinic acetylcholine receptors, it is a functional pentameric oligomer (the channel normally works as an assembly of five subunits). However, while its eukaryotic homologues are hetero-oligomeric (assembled from different subunits), all bacteria known so far to express LGICs encode a single subunit, indicating that GLIC is functionally homo-oligomeric (assembled from identical subunits). The similarity of its amino-acid sequence to the eukaryotic LGICs is not localized to any single tertiary domain, indicating that GLIC functions similarly to its eukaryotic equivalents. Regardless, the purpose of regulating the threshold for action potential excitation in the nerve signal transmission of multicellular organisms cannot translate to single-celled organisms, so the purpose of bacterial LGICs is not immediately obvious. Structure The open-channel structure was solved by two independent research teams in 2009, at low pH values of 4-4.6 (GLIC being proton-gated). See also Cys-loop receptors Ion channel Receptor (biochemistry) References External links Electrophysiology Ion channels Ionotropic receptors Molecular neuroscience Neurochemistry Protein families
GLIC
[ "Chemistry", "Biology" ]
331
[ "Ionotropic receptors", "Signal transduction", "Protein classification", "Molecular neuroscience", "Molecular biology", "Biochemistry", "Protein families", "Neurochemistry", "Ion channels" ]
46,388,068
https://en.wikipedia.org/wiki/Phage%20P22%20tailspike%20protein
The tailspike protein (P22TSP) of Enterobacteria phage P22 mediates the recognition and adhesion between the bacteriophage and the surface of Salmonella enterica cells. It is anchored within the viral coat and recognizes the O-antigen portion of the lipopolysaccharide (LPS) on the outer membrane of Gram-negative bacteria. It possesses endoglycanase activity, serving to shorten the length of the O-antigen during infection. History The initial interest in tailspike proteins was in the study of the effect of mutations on protein folding. Some mutations affect the folding efficiency of the protein but have no effect on the final native structure. Other mutations have been identified that lead to a temperature-sensitive phenotype. Reconstitution experiments have demonstrated that the in vitro folding process closely mirrors the in vivo folding pathway. It has further been demonstrated that folding yields in vitro decrease strongly with increasing temperature. Function O-antigen binding P22TSP recognizes the O-antigen polysaccharide of LPS serotypes A, B, or D1. The serotypes correspond to S. Typhimurium, S. Enteritidis, and S. Paratyphi A. These carbohydrates share the same main-chain trisaccharide repeating unit alpha-D-mannose-(1→4)-alpha-L-rhamnose-(1→3)-alpha-D-galactose-(1→2), but each has a different 2,6-dideoxyhexose substituent at C-3 of the mannose. In vivo, P22TSP binds as a homotrimer, and one phage particle can carry up to 6 tailspikes. P22TSP can bind multivalently, leading to an essentially irreversible attachment. It was shown that a minimum of two repeating units, or an octasaccharide, is required for binding. The TSP is also capable of binding longer fragments with similar affinity. Endoglycosidase activity P22TSP has endorhamnosidase activity and cleaves the glycosidic bond of the rhamnose group, producing an octasaccharide product. Two aspartic acids and one glutamic acid in the active site have been strongly linked to enzymatic activity. Different biological functions for this cleavage have been proposed: cleavage could facilitate access to the membrane, or allow the phage to find the optimal position for infection. Role in DNA injection It has been demonstrated that cleavage of the O-antigen is necessary for DNA ejection by the phage. It has been proposed that P22TSP binding positions the phage to inject its DNA. Structure P22TSP is a homotrimeric structural protein; each monomer consists of 666 amino acids. It is noncovalently bound to the neck of the viral capsid. It has been crystallized in space group P213, with one monomer in the asymmetric unit. The secondary structure of P22TSP is dominated by a parallel beta helix comprising 13 complete turns. This structure is further characterized as a beta-solenoid domain. P22TSP is composed of two domains, each with a distinct function: an N-terminal domain that serves to bind to the phage particle and a C-terminal domain that interacts with the Salmonella surface. These two domains are connected by a flexible linker. The binding site of P22TSP is located in the central part of the beta helix. A deep cleft is formed by a 60-residue insertion on one side, along with three smaller 5-25 residue insertions on the other. Homologous proteins Several functional homologues of P22TSP have been identified belonging to the bacteriophages HK620 and Sf6. Both of these tailspike proteins also contain right-handed parallel beta-helices and share similar O-antigen binding and cleavage with P22TSP. 
These proteins share 70% sequence identity in their N-terminal domains, but no sequence similarities have been found in the C-terminal domains. Translational applications Carbohydrate-binding scaffolds P22TSP has also been studied due to its high kinetic stability. As it exists and functions in the extracellular environment, it must endure harsh conditions such as highly variable temperatures or high concentrations of protein-degrading enzymes. The kinetic stability of P22TSP derives from its compact beta-solenoid architecture. It was shown that, like other viral fibrous proteins, the P22 tailspike protein possesses high stability against denaturation. This makes P22TSP a promising candidate for use as a thermostable scaffold capable of being tailor-made to recognize heteropolymers. Therapy against Salmonella infection Tailspike proteins have also shown potential for more translational applications, such as fighting bacterial infections. A study demonstrated that orally administered P22TSP markedly reduced Salmonella colonization in a group of chickens. The authors suggest that the endorhamnosidase activity of the free tailspike molecule serves to modify the O-antigen, compromising the LPS structure and thereby preventing the binding of a phage P22-attached tailspike protein. They suggest that this has the potential to be a novel therapy to fight bacterial infections. References Proteins
Phage P22 tailspike protein
[ "Chemistry" ]
1,126
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
46,389,483
https://en.wikipedia.org/wiki/Crossover%20interference
Crossover interference is the term used to refer to the non-random placement of crossovers with respect to each other during meiosis. The term is attributed to Hermann Joseph Muller, who observed that one crossover "interferes with the coincident occurrence of another crossing over in the same pair of chromosomes, and I have accordingly termed this phenomenon 'interference'." Meiotic crossovers (COs) appear to be regulated to ensure that COs on the same chromosome are distributed far apart (crossover interference). In the nematode worm Caenorhabditis elegans, meiotic double-strand breaks (DSBs) outnumber COs. Thus not all DSBs are repaired by recombination processes leading to COs. The RTEL-1 protein is required to prevent excess meiotic COs. In rtel-1 mutants, meiotic CO recombination is significantly increased and crossover interference appears to be absent. RTEL-1 likely acts by promoting synthesis-dependent strand annealing (SDSA), which results in non-crossover (NCO) recombinants instead of COs. Normally, about half of all DSBs are converted into NCOs. RTEL-1 appears to enforce meiotic crossover interference by directing the repair of some DSBs towards NCOs rather than COs. In humans, the recombination rate increases with maternal age. Furthermore, the placement of female recombination events appears to become increasingly deregulated with maternal age, with a larger fraction of events occurring within closer proximity to each other than would be expected under simple models of crossover interference. High negative interference Bacteriophage T4 High negative interference (HNI), in contrast to positive interference, refers to the association of recombination events ordinarily measured over short genomic distances, usually within a gene. Over such short distances there is a positive correlation (negative interference) of recombinational events. As studied in bacteriophage T4, this correlation is greater the shorter the interval between the sites used for detection. HNI is due to multiple exchanges within a short region of the genome during an individual mating event. What is counted as a "single exchange" in a genetic cross involving only distant markers may in reality be a complex event that is distributed over a finite region of the genome. Switching between template DNA strands during DNA synthesis (the SDSA pathway), referred to as copy-choice recombination, was proposed to explain the positive correlation of recombination events within the gene. HNI appears to require fairly precise base complementarity in the regions of the parental genomes where the associated recombination events occur. HIV Each human immunodeficiency virus (HIV) particle contains two single-stranded positive-sense RNA genomes. After infection of a host cell, a DNA copy of the genome is formed by reverse transcription of the RNA genomes. Reverse transcription is accompanied by template switching between the two RNA genome copies (copy-choice recombination). From 5 to 14 recombination events per genome occur at each replication cycle. This recombination exhibits HNI, apparently caused by correlated template switches during minus-strand DNA synthesis. Template-switching recombination appears to be necessary for maintaining genome integrity and as a repair mechanism for salvaging damaged genomes. References External links Cellular processes Molecular genetics
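Interference of either sign is conventionally quantified through the coefficient of coincidence, c = (observed double-crossover frequency)/(expected double-crossover frequency), with interference I = 1 − c; positive interference gives I > 0, while HNI shows up as c well above 1 (I < 0). The sketch below applies this standard formula to made-up progeny counts; the numbers are purely illustrative.

```python
# Sketch: quantifying crossover interference with the coefficient of
# coincidence.  For two adjacent marker intervals A-B and B-C:
#   coincidence c = observed double-CO frequency / (RF_AB * RF_BC)
#   interference I = 1 - c
# I > 0  -> positive interference (a crossover suppresses nearby crossovers)
# I < 0  -> negative interference (crossovers cluster), as in HNI.
# The progeny counts below are hypothetical, for illustration only.

def interference(n_total, n_double, rf_ab, rf_bc):
    expected_double = rf_ab * rf_bc * n_total   # expected double crossovers
    coincidence = n_double / expected_double
    return 1.0 - coincidence

# A cross with 1000 progeny and single-interval recombination frequencies
# of 10% and 20%: expected doubles = 0.1 * 0.2 * 1000 = 20.
print(interference(1000, 8, 0.10, 0.20))    # 0.6  -> positive interference
print(interference(1000, 50, 0.10, 0.20))   # -1.5 -> negative interference
```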
Crossover interference
[ "Chemistry", "Biology" ]
703
[ "Molecular genetics", "Cellular processes", "Molecular biology" ]
37,572,010
https://en.wikipedia.org/wiki/Yerba%20buena
Yerba buena or hierba buena is the Spanish name for a number of aromatic plants, most of which belong to the mint family. Yerba buena translates as "good herb". The specific plant species regarded as yerba buena varies from region to region, depending on what grows wild in the surrounding landscape, or which species is customarily grown in local gardens. Perhaps the most common variation of this plant is spearmint (Mentha spicata). The term has been (and is currently) used to cover a number of aromatic true mints and mint relatives of the genera Clinopodium, Satureja or Micromeria. All plants so named are associated with medicinal properties, and some have culinary value as herbal teas or seasonings as well. Local variants In the western United States, yerba buena most often refers to the species Clinopodium douglasii (synonyms: Satureja douglasii, Micromeria douglasii), but may rarely refer to Eriodictyon californicum, which is more commonly known as yerba santa. In parts of Central America, yerba buena often refers to Eau de Cologne mint, a true mint sometimes called "bergamot mint" with a strong citrus-like aroma that is used medicinally and as a cooking herb and tea. In Cuba and the Philippines, yerba buena generally refers to Mentha nemorosa, a popular plant also known as large apple mint, foxtail mint, hairy mint, woolly mint or, simply, Cuban mint. In Puerto Rico, Clinopodium vimineum (formerly Satureja viminea) is sometimes called yerba buena. In Colombia, yerba buena is known for its many medicinal uses, as it is thought to aid digestion and bile (bilis) activity and to reduce inflammation. The herb is found mostly in the Andean region of the country, in departments such as Cundinamarca and Antioquia. References External links Herbs Herbal teas Plant common names
Yerba buena
[ "Biology" ]
447
[ "Plant common names", "Common names of organisms", "Plants" ]
37,579,128
https://en.wikipedia.org/wiki/Two-dimensional%20polymer
A two-dimensional polymer (2DP) is a sheet-like monomolecular macromolecule consisting of laterally connected repeat units with end groups along all edges. This recent definition of 2DP is based on Hermann Staudinger's polymer concept from the 1920s. According to this, covalent long-chain molecules ("Makromoleküle") do exist and are composed of a sequence of linearly connected repeat units and end groups at both termini. Moving from one dimension to two offers access to features such as increased surface area, porous membranes, and possibly in-plane pi-orbital conjugation for enhanced electronic properties. They are distinct from other families of polymers because 2D polymers can be isolated as multilayer crystals or as individual sheets. The term 2D polymer has also been used more broadly to include linear polymerizations performed at interfaces, layered non-covalent assemblies, or irregularly cross-linked polymers confined to surfaces or layered films. 2D polymers can be organized based on these methods of linking (monomer interaction): covalently linked monomers, coordination polymers and supramolecular polymers. 2D polymers containing pores are also known as porous polymers. Topologically, 2DPs may thus be understood as structures made up of regularly tessellated regular polygons (the repeat units). Figure 1 displays the key features of a linear polymer and a 2DP according to this definition. For usage of the term "2D polymer" in a wider sense, see "History". Covalently-linked polymers There are several examples of covalently linked 2DPs, including the individual layers or sheets of graphite (called graphenes), MoS2, (BN)x and layered covalent organic frameworks. As required by the above definition, these sheets have a periodic internal structure. A well-known example of a 2D polymer is graphene, whose optical, electronic and mechanical properties have been studied in depth. Graphene has a honeycomb lattice of carbon atoms and exhibits semiconducting properties. A potential repeat unit of graphene is a sp2-hybridized carbon atom. Individual sheets can in principle be obtained by exfoliation procedures, though in reality this is a non-trivial enterprise. Molybdenum disulfide can exist as two-dimensional, single or layered polymers in which each Mo(IV) center occupies a trigonal prismatic coordination sphere. Boron nitride is stable in its crystalline hexagonal form, where it has a two-dimensional layered structure similar to graphene. Covalent bonds form between the boron and nitrogen atoms, yet the layers are held together by weak van der Waals interactions, in which the boron atoms are eclipsed over the nitrogen. Two-dimensional covalent organic frameworks (COFs) are one type of microporous coordination polymer that can be fabricated in the 2D plane. The dimensionality and topology of 2D COFs result from both the shape of the monomers and the relative and dimensional orientations of their reactive groups. These materials possess properties desirable in materials chemistry, including thermal stability, tunable porosity, high specific surface area, and the low density of organic material. By careful selection of organic building units, long-range π-orbital overlap parallel to the stacking direction of certain organic frameworks can be achieved.
Many covalent organic frameworks derive their topology from the directionality of their covalent linkages, so small changes in the organic linkers can dramatically affect their mechanical and electronic properties. Even small changes in their structure can induce dramatic changes in the stacking behavior of molecular semiconductors. Porphyrins are an additional class of conjugated, heterocyclic macrocycles. Control of monomer assembly through covalent interactions has also been demonstrated using porphyrins: upon thermal activation of porphyrin building blocks, covalent bonds form to create a conductive polymer, demonstrating a versatile route for the bottom-up construction of electronic circuits. COF synthesis It is possible to synthesize COFs using both dynamic covalent and non-covalent chemistry. The kinetic approach involves a stepwise process of polymerizing a pre-assembled 2D monomer, while thermodynamic control exploits reversible covalent chemistry to allow simultaneous monomer assembly and polymerization. Under thermodynamic control, bond formation and crystallization occur simultaneously. Covalent organic frameworks formed by dynamic covalent bond formation involve chemical reactions carried out reversibly under conditions of equilibrium control. Because the formation of COFs under dynamic covalent control occurs at thermodynamic equilibrium, product distributions depend only on the relative stabilities of the final products. Covalent assembly to form 2D COFs has previously been achieved using boronate esters derived from catechol acetonides in the presence of a Lewis acid (BF3·OEt2). 2D polymerization under kinetic control relies on non-covalent interactions and monomer assembly prior to bond formation. The monomers can be held in a pre-organized position by non-covalent interactions, such as hydrogen bonding or van der Waals forces. Coordination polymers Metal organic frameworks Self-assembly can also be observed in the presence of organic ligands and various metal centers through coordinative bonds or supramolecular interactions. Molecular self-assembly involves association by many weak, reversible interactions to obtain a final structure that represents a thermodynamic minimum. A class of coordination polymers, also known as metal-organic frameworks (MOFs), are metal-ligand compounds that extend "infinitely" into one, two or three dimensions. MOF synthesis The availability of modular metal centers and organic building blocks generates wide synthetic versatility. Applications of MOFs range from industrial use to chemiresistive sensors. The ordered structure of the framework is largely determined by the coordination geometry of the metal and the directionality of the functional groups on the organic linker. Consequently, MOFs have highly defined pore dimensions compared with conventional amorphous nanoporous materials and polymers. Reticular synthesis of MOFs is a recently coined term describing the bottom-up assembly of carefully designed rigid molecular building blocks into prearranged structures held together by strong chemical bonds. The synthesis of two-dimensional MOFs begins with the knowledge of a target "blueprint" or network, followed by identification of the required building blocks for its assembly. By interchanging metal centers and organic ligands, one can fine-tune the electronic and magnetic properties observed in MOFs. There have been recent efforts to synthesize conductive MOFs using triphenylene linkers.
Additionally, MOFs have been utilized as reversible chemiresistive sensors. Supramolecular polymers Supramolecular assembly requires non-covalent interactions directing the formation of 2D polymers by relying on electrostatic interactions such as hydrogen bonding and van der Waals forces. Designing artificial assemblies capable of high selectivity requires correct manipulation of the energetic and stereochemical features of non-covalent forces. Benefits of non-covalent interactions include their reversible nature and their response to external factors such as temperature and concentration. The mechanism of non-covalent polymerization in supramolecular chemistry is highly dependent on the interactions during the self-assembly process. The degree of polymerization depends strongly on temperature and concentration. The mechanisms may be divided into three categories: isodesmic, ring-chain, and cooperative. One example of isodesmic association in supramolecular aggregates is the interaction and assembly of cyanuric acid (CA) and melamine (M) through hydrogen bonding (CA·M, Figure 7). Hydrogen bonding has been used to guide the assembly of molecules into two-dimensional networks, which can then serve as new surface templates and offer an array of pores of sufficient capacity to accommodate large guest molecules. One example of utilizing surface structures through non-covalent assembly uses adsorbed monolayers to create binding sites for target molecules through hydrogen-bonding interactions. Hydrogen bonding has been used to guide the assembly of two different molecules into a 2D honeycomb porous network under ultra-high vacuum (Figure 8). 2D polymers based on DNA have also been reported. Characterization 2DPs, as two-dimensional sheet macromolecules, have a crystal lattice; that is, they consist of monomer units that repeat in two dimensions. Therefore, a clear diffraction pattern from their crystal lattice should be observed as a proof of crystallinity. The internal periodicity is supported by electron microscopy imaging, electron diffraction and Raman-spectroscopic analysis. 2DPs should in principle also be obtainable by, e.g., an interfacial approach, whereby proving the internal structure, however, is more challenging and has not yet been achieved. In 2014 a 2DP was reported that was synthesized from a trifunctional photoreactive anthracene-derived monomer, preorganized in a lamellar crystal and photopolymerized in a [4+4] cycloaddition. Another reported 2DP also involved an anthracene-derived monomer. Applications 2DPs are expected to be superb membrane materials because of their defined pore sizes. Furthermore, they can serve as ultrasensitive pressure sensors, as precisely defined catalyst supports, for surface coatings and patterning, as ultrathin supports for cryo-TEM, and in many other applications. Since 2D polymers provide large surface areas and uniformity in sheets, they have also found useful applications in areas such as selective gas adsorption and separation. Metal organic frameworks have become popular recently due to the variability of structures and topologies, which provide tunable pore structures and electronic properties. There are also ongoing efforts toward the creation of nanocrystals of MOFs and their incorporation into nanodevices. Additionally, metal-organic surfaces have been synthesized with cobalt dithiolene catalysts for efficient hydrogen production through the reduction of water, an important strategy for renewable energy.
Researchers have also synthesized two-dimensional porous covalent organic frameworks for use as storage media for hydrogen, methane and carbon dioxide in clean energy applications. History First attempts to synthesize 2DPs date back to the 1930s, when Gee reported interfacial polymerizations at the air/water interface in which a monolayer of an unsaturated fatty acid derivative was laterally polymerized to give a 2D cross-linked material. Since then, a number of important attempts were reported in terms of cross-linking polymerization of monomers confined to layered templates or various interfaces. These approaches provide easy access to sheet-like polymers. However, the sheets' internal network structures are intrinsically irregular and the term "repeat unit" is not applicable. In organic chemistry, the creation of 2D periodic network structures has been a dream for decades. Another noteworthy approach is "on-surface polymerization", whereby 2DPs with lateral dimensions not exceeding some tens of nanometers have been reported. Laminar crystals are readily available, each layer of which can ideally be regarded as a latent 2DP. There have been a number of attempts to isolate the individual layers by exfoliation techniques. References Polymer chemistry
Two-dimensional polymer
[ "Chemistry", "Materials_science", "Engineering" ]
2,326
[ "Materials science", "Polymer chemistry" ]
37,580,917
https://en.wikipedia.org/wiki/Moving%20load
In structural dynamics, a moving load changes the point at which the load is applied over time. Examples include a vehicle that travels across a bridge and a train moving along a track. Properties In computational models, the load is usually applied as a simple massless force, as an oscillator, or as an inertial force (a mass together with a massless force). Numerous historical reviews of the moving load problem exist, and several publications deal with similar problems; the fundamental monograph on the subject is devoted to massless loads, while inertial loads in numerical models are treated in the later literature. An unexpected property of the differential equations that govern the motion of a mass particle travelling on a string, a Timoshenko beam, or a Mindlin plate is the discontinuity of the mass trajectory near the end of the span (well visible in a string at the speed v = 0.5c, where c is the wave speed). The moving load significantly increases displacements. The critical velocity, at which the growth of displacements is greatest, must be taken into account in engineering projects. Structures that carry moving loads can have finite dimensions or can be infinite and supported periodically or placed on an elastic foundation. Consider a simply supported string of length l, cross-sectional area A, mass density \rho and tensile force N, subjected to a constant force P moving with constant velocity v. The equation of motion of the string under the moving force has the form \rho A \, \partial^2 w/\partial t^2 - N \, \partial^2 w/\partial x^2 = \delta(x - vt)\, P, where w(x,t) is the transverse displacement and \delta is the Dirac delta. The displacement of any point of the simply supported string is given by the sine series w(x,t) = \frac{2P}{\rho A l} \sum_{j=1}^{\infty} \frac{1}{\omega_j^2 - \tilde{\omega}_j^2} \left( \sin \tilde{\omega}_j t - \frac{\tilde{\omega}_j}{\omega_j} \sin \omega_j t \right) \sin \frac{j\pi x}{l} (for \tilde{\omega}_j \neq \omega_j), where \tilde{\omega}_j = j\pi v/l is the excitation frequency and \omega_j = \frac{j\pi}{l} \sqrt{N/(\rho A)} is the natural circular frequency of the string. In the case of an inertial moving load, analytical solutions are unknown. The equation of motion is augmented by a term related to the inertia of the moving load; for a concentrated mass m accompanied by a point force P, \rho A \, \partial^2 w/\partial t^2 - N \, \partial^2 w/\partial x^2 = \delta(x - vt) \left[ P - m \, \frac{d^2 w(vt,t)}{dt^2} \right], where the total acceleration of the mass, \frac{d^2 w(vt,t)}{dt^2} = \left[ \partial^2 w/\partial t^2 + 2v \, \partial^2 w/\partial x \partial t + v^2 \, \partial^2 w/\partial x^2 \right]_{x=vt}, contains convective terms. The last term, because of the complexity of the computations, is often neglected by engineers, so that the load influence is reduced to the massless force term. Sometimes an oscillator is placed at the contact point. Such approaches are acceptable only in a low range of travelling load velocities; in higher ranges both the amplitude and the frequency of vibration differ significantly for the two types of load. The differential equation can be solved in a semi-analytical way only for simple problems. The series determining the solution converges well and 2-3 terms are sufficient in practice. More complex problems can be solved by the finite element method or the space-time finite element method. The discontinuity of the mass trajectory is also well visible in the Timoshenko beam; high shear stiffness emphasizes the phenomenon. The Renaudot approach vs. the Yakushev approach In the Renaudot approach the Dirac delta multiplies the total acceleration of the mass, f(x,t) = \delta(x - vt) \left[ P - m \, \frac{d^2 w(vt,t)}{dt^2} \right]. In the Yakushev approach the time derivative acts on the product of the delta function and the momentum of the mass, f(x,t) = \delta(x - vt)\, P - \frac{d}{dt} \left[ \delta(x - vt)\, m \, \frac{dw(vt,t)}{dt} \right], which, after differentiation, contributes terms in both \delta(x - vt) and its derivative \delta'(x - vt). Massless string under moving inertial load Consider a massless string, which is a particular case of the moving inertial load problem. The first to solve the problem was Smith; the analysis below follows the solution of Fryba. Assuming \rho A = 0, the equation of motion of a string under a moving mass can be put in the form -N \, \partial^2 w/\partial x^2 = \delta(x - vt) \left[ P - m \, \frac{d^2 w(vt,t)}{dt^2} \right]. We impose simply supported boundary conditions and zero initial conditions. To solve this equation we use dimensionless displacements of the string, y = w/w_{st}, and dimensionless time, \tau = vt/l, where w_{st} = Pl/(4N) is the static deflection in the middle of the string. The solution is then given by a series in a dimensionless parameter combining the load mass, its speed and the string tension; for a particular value of this parameter the problem has a closed-form solution. References Mechanical vibrations
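As a rough illustration of the massless-force case above, the following Python sketch integrates the string equation with an explicit central-difference scheme and a point force marched across the span. It is a minimal generic discretization with illustrative parameter values, not the finite element or space-time finite element formulations referred to in the text.

# Sketch: transverse vibration of a simply supported string under a
# constant force P moving at speed v, solved with an explicit
# central-difference scheme. All parameter values are illustrative.
import numpy as np

l, rhoA, N = 1.0, 1.0, 1.0          # length, mass per unit length, tension
P, c = 1.0, np.sqrt(N / rhoA)       # moving force, wave speed
v = 0.5 * c                         # load speed (half the wave speed)
nx = 200                            # number of spatial intervals
dx = l / nx
dt = 0.9 * dx / c                   # CFL-stable time step

w_prev = np.zeros(nx + 1)           # displacement at step n-1
w = np.zeros(nx + 1)                # displacement at step n
t, n_steps = 0.0, int(l / v / dt)   # march until the force leaves the span
max_mid = 0.0

for _ in range(n_steps):
    # distribute the point force onto the nearest grid node
    f = np.zeros(nx + 1)
    j = int(round(v * t / dx))
    if 0 < j < nx:
        f[j] = P / dx
    # central differences for rhoA*w_tt = N*w_xx + f
    w_xx = np.zeros(nx + 1)
    w_xx[1:-1] = (w[2:] - 2 * w[1:-1] + w[:-2]) / dx**2
    w_next = 2 * w - w_prev + dt**2 * (N * w_xx + f) / rhoA
    w_next[0] = w_next[-1] = 0.0    # simply supported (pinned) ends
    max_mid = max(max_mid, w_next[nx // 2])
    w_prev, w, t = w, w_next, t + dt

# compare the peak midspan deflection with the static value Pl/(4N)
print("dynamic amplification at midspan:", max_mid / (P * l / (4 * N)))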
Moving load
[ "Physics", "Engineering" ]
697
[ "Structural engineering", "Mechanics", "Mechanical vibrations" ]
21,921,011
https://en.wikipedia.org/wiki/Retrograde%20condensation
Retrograde condensation occurs when a gas in a tube is compressed beyond the point of condensation, with the effect that the liquid evaporates again; this reversal of the usual behaviour gives the phenomenon its name. Description If the volume of a mixture of two gases kept at a constant temperature below critical conditions is gradually reduced, condensation will start. When a certain volume is reached, the amount of condensate will gradually increase upon further reduction in volume until the gases are liquefied. If the composition of the gases lies between their true and pseudo-critical points, the condensate formed will disappear on continued reduction of volume. This disappearance of the condensate is called retrograde condensation. Because most natural gas found in petroleum reservoirs is not a pure product, when non-associated gas is extracted from a field under supercritical pressure/temperature conditions (i.e., when the pressure in the reservoir decreases below the dew point), condensate liquids may form during the isothermal depressurization, an effect called retrograde condensation. Discovery The Dutch physicist Johannes Kuenen discovered retrograde condensation and published his findings in April 1892 in his Ph.D. thesis, titled "Metingen betreffende het oppervlak van Van der Waals voor mengsels van koolzuur en chloormethyl" (Measurements on the Van der Waals surface for mixtures of carbonic acid and methyl chloride). References Phase transitions Natural gas
Retrograde condensation
[ "Physics", "Chemistry" ]
314
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Statistical mechanics", "Matter" ]
21,921,347
https://en.wikipedia.org/wiki/Virtual%20prototyping
Virtual prototyping is a method in the process of product development. It involves using computer-aided design (CAD), computer-automated design (CAutoD) and computer-aided engineering (CAE) software to validate a design before committing to making a physical prototype. This is done by creating (usually 3D) computer-generated geometric shapes (parts), combining them into an "assembly", and testing different mechanical motions, fit and function. The assembly or individual parts can be opened in CAE software as digital twins to simulate the behavior of the product in the real world. Background The product design and development process used to rely primarily on engineers' experience and judgment in producing an initial concept design. A physical prototype was then constructed and tested in order to evaluate its performance. Without any way to evaluate its performance in advance, the initial prototype was highly unlikely to meet expectations. Engineers usually had to re-design the initial concept multiple times to address weaknesses that were revealed in physical testing. Move towards virtual prototypes Today, manufacturers are under pressure to reduce time to market and optimize products to higher levels of performance and reliability. A much higher number of products are being developed in the form of virtual prototypes, in which engineering simulation software is used to predict performance prior to constructing physical prototypes. Engineers can quickly explore the performance of thousands of design alternatives without investing the time and money required to build physical prototypes. The ability to explore a wide range of design alternatives leads to improvements in performance and design quality. Yet the time required to bring the product to market is usually reduced substantially because virtual prototypes can be produced much faster than physical prototypes. End-to-end prototyping End-to-end prototyping accounts fully for how a product or a component is manufactured and assembled, and it links the consequences of those processes to performance. Early availability of such physically realistic virtual prototypes allows testing and performance confirmation to take place as design decisions are made, accelerating the design activity and providing more insight into the relationship between manufacturing and performance than can be achieved by building and testing physical prototypes. The benefits include reduced costs in both design and manufacturing, as physical prototyping and testing are dramatically reduced or eliminated and lean but robust manufacturing processes are selected. Effects The research firm Aberdeen Group reports that best-in-class manufacturers, who make extensive use of simulation early in the design process, hit revenue, cost, launch date and quality targets for 86% or more of their products. Best-in-class manufacturers of the most complex products get to market 158 days earlier with $1.9 million lower costs than all other manufacturers. Best-in-class manufacturers of the simplest products get to market 21 days earlier with $21,000 lower product development costs. Examples Fisker Automotive used virtual prototyping to design the rear structure and other areas of its Karma plug-in hybrid to ensure the integrity of the fuel tank in a rear-end crash, as required for Federal Motor Vehicle Safety Standards (FMVSS) 301 certification.
Agilent Technologies used virtual prototyping to design cooling systems for the calibration head for a new high-speed oscilloscope. Miele used virtual prototyping to improve the development of its washer-disinfector machines by simulating their operational characteristics early in the design cycle. Several CAE software solutions (for example, Working Model and SimWise) make the benefits of virtual prototyping accessible even to students and small companies, and collections of case studies have been available since 1996. See also Crash simulation Finite element analysis Computer simulation Paper prototyping References Product development Automotive engineering Aerospace engineering
Virtual prototyping
[ "Engineering" ]
735
[ "Automotive engineering", "Mechanical engineering by discipline", "Aerospace engineering" ]
21,923,868
https://en.wikipedia.org/wiki/Oscillating%20gene
In molecular biology, an oscillating gene is a gene that is expressed in a rhythmic pattern or in periodic cycles. Oscillating genes are usually circadian and can be identified by periodic changes in the state of an organism. Circadian rhythms, controlled by oscillating genes, have a period of approximately 24 hours. For example, plant leaves opening and closing at different times of the day and the sleep-wake schedule of animals both involve circadian rhythms. Other periods are also possible, such as 29.5 days resulting from circalunar rhythms or 12.4 hours resulting from circatidal rhythms. Oscillating genes include both core clock component genes and output genes. A core clock component gene is a gene necessary for the functioning of the pacemaker. An output oscillating gene, such as the AVP gene, is rhythmic but not necessary to the pacemaker. History The first recorded observations of oscillating genes come from the marches of Alexander the Great in the fourth century B.C. At this time, one of Alexander's generals, Androsthenes, wrote that the tamarind tree would open its leaves during the day and close them at nightfall. Until 1729, the rhythms associated with oscillating genes were assumed to be "passive responses to a cyclic environment". In 1729, Jean-Jacques d'Ortous de Mairan demonstrated that the rhythms of a plant opening and closing its leaves continued even when the plant was kept where sunlight could not reach it. This was one of the first indications that there was an active element to the oscillations. In 1923, Ingeborg Beling published her paper "Über das Zeitgedächtnis der Bienen" ("On the Time Memory of Bees"), which extended oscillations to animals, specifically bees. In 1971, Ronald Konopka and Seymour Benzer discovered that mutations of the period gene caused changes in the circadian rhythm of flies under constant conditions. They hypothesized that the mutation of the gene was affecting the basic oscillator mechanism. Paul Hardin, Jeffrey Hall, and Michael Rosbash demonstrated that relationship by discovering that within the period gene there was a feedback mechanism that controlled the oscillation. The mid-1990s saw an outpouring of discoveries, with CLOCK, CRY, and others being added to the growing list of oscillating genes. Molecular circadian mechanisms The primary molecular mechanism behind an oscillating gene is best described as a transcription/translation feedback loop. This loop contains both positive regulators, which increase gene expression, and negative regulators, which decrease gene expression. The fundamental elements of these loops are found across different phyla. In the mammalian circadian clock, for example, the transcription factors CLOCK and BMAL1 are the positive regulators. CLOCK and BMAL1 bind to the E-box of oscillating genes, such as Per1, Per2, and Per3 and Cry1 and Cry2, and upregulate their transcription. When the PERs and CRYs form a heterocomplex in the cytoplasm and enter the nucleus again, they inhibit their own transcription. This means that over time the mRNA and protein levels of PERs and CRYs, or any other oscillating gene under this mechanism, will oscillate. There also exists a secondary feedback loop, or 'stabilizing loop', which regulates the cyclic expression of Bmal1. This is driven by two nuclear receptors, REV-ERB and ROR, which suppress and activate Bmal1 transcription, respectively.
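The qualitative behaviour of such a transcription-translation negative feedback loop can be illustrated with a generic Goodwin-type model, in which a protein represses its own transcription. The Python sketch below is an illustrative toy with arbitrary parameter values, not a model of the actual multi-gene PER/CRY/BMAL1 network.

# Sketch: a generic Goodwin-type transcription-translation feedback
# loop, in which a protein ends up repressing its own transcription,
# the same qualitative motif as the PER/CRY loop described above.
def goodwin_step(m, p, r, dt):
    # m: mRNA, p: cytoplasmic protein, r: nuclear repressor
    n = 9.0                              # Hill coefficient (cooperativity)
    dm = 1.0 / (1.0 + r**n) - 0.1 * m    # repressed transcription, decay
    dp = m - 0.1 * p                     # translation, decay
    dr = p - 0.1 * r                     # nuclear import, decay
    return m + dm * dt, p + dp * dt, r + dr * dt

m, p, r = 0.1, 0.1, 0.1
trace = []
for step in range(200000):              # integrate with forward Euler
    m, p, r = goodwin_step(m, p, r, dt=0.01)
    trace.append(m)

# Sustained oscillation shows up as repeated peaks in the mRNA level.
peaks = sum(1 for i in range(1, len(trace) - 1)
            if trace[i - 1] < trace[i] > trace[i + 1])
print("number of mRNA peaks:", peaks)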
In addition to these feedback loops, post-translational modifications also play a role in changing the characteristics of the circadian clock, such as its period. Without any type of feedback repression, the molecular clock would have a period of just a few hours. The casein kinase members CK1ε and CK1δ were both found to be mammalian protein kinases involved in circadian regulation. Mutations in these kinases are associated with familial advanced sleep phase syndrome (FASPS). In general, phosphorylation is necessary for the degradation of PERs via ubiquitin ligases. In contrast, phosphorylation of BMAL1 via CK2 is important for the accumulation of BMAL1. Examples The genes listed in this section are only a small fraction of the vast number of oscillating genes found in nature. These genes were selected because they are some of the most important genes in regulating the circadian rhythm of their respective classification. Mammalian genes Cry1 and Cry2 – Cryptochromes are a class of blue-light-sensitive flavoproteins found in plants and animals. Cry1 and Cry2 encode the proteins CRY1 and CRY2. In Drosophila, CRY1 and CRY2 bind to TIM, the product of a circadian gene that is a component of the transcription-translation negative feedback loop, in a light-dependent fashion and block its function. In mammals, CRY1 and CRY2 are light-independent and function to inhibit the CLOCK-BMAL1 dimer of the circadian clock, which regulates the cycling of Per1 transcription. Bmal1 – Bmal1, also known as ARNTL or Aryl hydrocarbon receptor nuclear translocator-like, encodes a protein that forms a heterodimer with the CLOCK protein. This heterodimer binds to E-box enhancers found in the promoter regions of many genes, such as Cry1 and Cry2 and Per1-3, thereby activating transcription. The resulting proteins translocate back into the nucleus and act as negative regulators by interacting with CLOCK and/or BMAL1, inhibiting transcription. Clock – Clock, also known as Circadian Locomotor Output Cycles Kaput, is a transcription factor in the circadian pacemaker of mammals. It affects both the persistence and the period of circadian rhythms through its interactions with the gene Bmal1. For more information, refer to Bmal1. Per genes – There are three different per genes, also known as Period genes (per1, per2, and per3), that are related by sequence in mice. Transcription levels for mPer1 increase in the late night before subjective dawn, followed by increases in the levels of mPer3 and then mPer2. mPer1 peaks at CT 4-6, mPer3 at CT 4 and 8, and mPer2 at CT 8. mPer1 is necessary for phase shifts induced by light or glutamate release. mPer2 and mPer3 are involved in resetting the circadian clock to environmental light cues. Drosophila genes Clock – The clock gene in Drosophila encodes the CLOCK protein, which forms a heterodimer with the protein CYCLE to control the main oscillating activity of the circadian clock. The heterodimer binds to the E-box promoter region of both per and tim, which activates their respective gene expression. Once protein levels for both PER and TIM have reached a critical point, they too dimerize and interact with the CLOCK-CYCLE heterodimer to prevent it from binding to the E-box and activating transcription. This negative feedback loop is essential for the functioning and timing of the circadian clock. Cycle – The cycle gene encodes the CYCLE protein, which forms a heterodimer with the protein CLOCK.
The heterodimer creates a transcription-translation feedback loop that controls the levels of both PER and TIM. This feedback loop has been shown to be imperative for both the functioning and the timing of the circadian clock in Drosophila. For more information, refer to Clock. Per – The per gene is a clock gene that encodes the PER protein in Drosophila. The protein levels and transcription rates of PER demonstrate robust circadian rhythms that peak around CT 16. PER forms a heterodimer with TIM to control the circadian rhythm. The heterodimer enters the nucleus to inhibit the CLOCK-CYCLE heterodimer, which acts as a transcriptional activator for per and tim. This results in an inhibition of the transcription of per and tim, thereby lowering the respective mRNA and protein levels. For more information, refer to Clock. Timeless – The tim gene encodes the TIM protein, which is critical in circadian regulation in Drosophila. Its protein levels and transcription rates demonstrate a circadian oscillation that peaks at around CT 16. TIM binds to the PER protein to create a heterodimer whose transcription-translation feedback loop controls the periodicity and phase of circadian rhythms. For more information, refer to Per and Clock. Fungal genes Frq – The frq gene, also known as the frequency gene, encodes central components of an oscillatory loop within the circadian clock in Neurospora. In the oscillator's feedback loop, frq gives rise to transcripts that encode two forms of the FRQ protein. Both forms are required for robust rhythmicity throughout the organism. Rhythmic changes in the amount of frq transcript are essential for synchronous activity, and abrupt changes in frq levels reset the clock. Bacterial genes Kai genes – Found in Synechococcus elongatus, these genes are essential components of the cyanobacterial clock, the leading example of bacterial circadian rhythms. Kai proteins regulate genome-wide gene expression. The oscillation of phosphorylation and dephosphorylation of KaiC acts as the pacemaker of the circadian clock. Plant genes CCA1 – The CCA1 gene, also known as Circadian Clock Associated 1, is especially important in maintaining the rhythmicity of plant cellular oscillations. Its overexpression results in the loss of rhythmic expression of clock-controlled genes (CCGs), loss of photoperiod control, and loss of rhythmicity in LHY expression. See the LHY gene below for more information. LHY – The LHY gene, also known as the Late Elongated Hypocotyl gene, is a gene found in plants that encodes components of mutually regulatory negative feedback loops with CCA1, in which overexpression of either results in dampening of both of their expression. This negative feedback loop affects the rhythmicity of multiple outputs, creating a daytime protein complex. Toc1 – Toc1, also known as the Timing of CAB Expression 1 gene, is an oscillating gene found in plants that is known to control the expression of CAB. It has been shown to affect the period of circadian rhythms through its repression of transcription factors; this was found through mutations of toc1 that shortened the period of CAB expression. See also Chronobiology References Molecular biology Chronobiology
Oscillating gene
[ "Chemistry", "Biology" ]
2,234
[ "Biochemistry", "Chronobiology", "Molecular biology" ]
21,924,889
https://en.wikipedia.org/wiki/Nanopunk
Nanopunk refers to an emerging subgenre of science fiction that is still very much in its infancy in comparison with its ancestor genre, cyberpunk, and some of cyberpunk's other derivatives. The genre is especially similar to biopunk, but describes a world where nanites and bio-nanotechnologies are widely in use and nanotechnologies are the predominant technological forces in society. The genre is mainly concerned with the artistic, psychological, and societal impact of nanotechnology, rather than with aspects of the technology itself, which is still in its infancy. Unlike cyberpunk, which is distinguished by a gritty, low-life yet technologically advanced character, nanopunk can have a darker dystopian character that examines the potential risks posed by nanotechnology, as well as a more optimistic outlook that emphasizes its potential uses. Comics M. Rex (1999) features nanites as the source of power for the title character. Scooby Apocalypse (2016–2019) reveals early on that a nanite virus originating from Velma's 'Elysium Project' experiment is the reason behind people becoming monsters. Literature Kathleen Ann Goonan (Queen City Jazz – 1997) and Linda Nagata were some of the earliest writers to feature nanotech as the primary element in their work. Neal Stephenson's The Diamond Age is a coming-of-age story set in a future world in which nanotechnology affects all aspects of life. Some novels of Stanislaw Lem, including Weapon System of the Twenty First Century or The Upside-down Evolution, The Invincible and Peace on Earth, as well as Greg Bear's Blood Music, could also be considered precursors of nanopunk. Michael Crichton's novel Prey (2002) is another example. Another of Crichton's novels, Micro (2011), could also qualify, but it focuses more on the idea of size manipulation and the shrinking of objects than on nanotechnology. Nathan McGrath's Nanopunk (2013) is set in an icebound near-future where almost half the world's population has been wiped out. Alister, a child when "The Big Freeze" began, is now a teenager in a society slowly finding its feet. Unaware of his nano-infection, he sets out to find his lost sister and is joined by Suzie, a militant cyber-activist. Their hacking attracts the attention of secret services and a ruthless private military corporation, and their search becomes a deadly race for survival. Linda Nagata's Tech Heaven (1995) is a futuristic thriller about Katie, a woman whose husband is about to die of injuries sustained in a helicopter crash. Instead of dying, he has his body cryogenically preserved so that he can be reawakened when medical technology is advanced enough to heal him. The problem is that it winds up taking far more than the estimated few years for this to happen. Alastair Reynolds' Chasm City could also be considered nanopunk. Film and television Film Faction (2020 film) Honey, I Shrunk the Kids (1989 film) Osmosis Jones (2001 film) The Day the Earth Stood Still (2008 film) G.I. Joe: The Rise of Cobra (2009 film) Transcendence (2014 film) Ant-Man (2015 film) Television Futurama, "Parasites Lost" (2001) - Fry is infected by parasites that increase his intelligence and health, but ultimately chooses to get rid of them with miniature droids. Justice League, "Tabula Rasa" (2003) - The villain, Amazo, is an android composed of nanites that allow him to mimic abilities. Static Shock, "Hoop Squad" (2004) - The villain, Dr. Odium, is a scientist specializing in nanotechnology who was fired for attempting to experiment on humans.
Doctor Who, "The Doctor Dances" (2005) - Two ships seen in the episode contain nanogenes that can heal wounds. Generator Rex (2010–2013) - Nanites are central to the premise of the series, in which an accident caused them to spread across the world and infect almost all life. Protagonist Rex Salazar is able to control his own nanites and cure the mutations caused by them, and thus works for the government agency Providence, battling nanite mutants (called E.V.O.S). Video games Anarchy Online (2001) Crysis (2007–2013) Deus Ex (2000) Metal Gear Solid series Supreme Ruler 2020 (2008) Red Faction (2001) See also Biopunk Cyberpunk derivatives Kathleen Ann Goonan Nanotechnology in fiction Posthuman Postcyberpunk Societal impact of nanotechnology References Cyberpunk subgenres Postcyberpunk Biopunk Nano 2000s neologisms
Nanopunk
[ "Materials_science", "Technology", "Engineering", "Biology" ]
982
[ "Genetic engineering", "Transhumanism", "Fiction about nanotechnology", "Ethics of science and technology", "Nanotechnology" ]
21,929,114
https://en.wikipedia.org/wiki/SENSOR-Pesticides
Sentinel Event Notification System for Occupational Risks (SENSOR)-Pesticides is a U.S. state-based surveillance program that monitors pesticide-related illness and injury. It is administered by the National Institute for Occupational Safety and Health (NIOSH), and twelve state health agencies participate. NIOSH provides technical support to all participating states. It also provides funding to some states, in conjunction with the US Environmental Protection Agency (US EPA). Pesticide-related illness is a significant occupational health issue, but it is believed to be underreported. Because of this, NIOSH proposed the SENSOR program to track pesticide poisonings. Because workers in many industries are at risk for pesticide exposure, and public concern exists regarding the use of and exposure to pesticides, government and regulatory authorities experience pressure to monitor the health effects associated with them. SENSOR-Pesticides state partners collect case data from several different sources using a standard case definition and set of variables. This information is then forwarded to the program headquarters at NIOSH, where it is compiled and put into a national database. Researchers and government officials from the SENSOR-Pesticides program have published research articles that highlight findings from the data and their implications for environmental and occupational pesticide issues. These issues include eradication of invasive species, pesticide poisoning in schools, birth defects, and residential use of total release foggers, or "bug bombs," which are devices that release an insecticide mist.
The widespread use of pesticides, their release into the environment, and the potential for adverse public health effects due to exposure may raise public concern. Some feel that regulatory authorities have an ethical obligation to track the health effects of such chemicals. In the Handbook of Pesticide Toxicology, Calvert et al. write "[b]ecause society allows pesticides to be disseminated into the environment, society also incurs the obligation to track the health effects of pesticides." Jay Vroom, president of CropLife America, said in a press release that "...our industry has a moral and ethical obligation...to know how these products impact humans." Surveillance of pesticide-related injuries and illnesses is recommended by the American Medical Association, the Council of State and Territorial Epidemiologists (CSTE), the Pew Environmental Health Commission, and the Government Accountability Office. History Beginning in 1987, NIOSH supported the implementation of the Sentinel Event Notification System for Occupational Risks (SENSOR) program in ten state health departments. The objectives of the program were to help state health departments develop and refine reporting systems for certain occupational disorders so that they could conduct and evaluate interventions and prevention efforts. The disorders covered by SENSOR included silicosis, occupational asthma, carpal tunnel syndrome, lead poisoning, and pesticide poisoning. While each participating state health department had previously done surveillance or interventions for some of these occupational illnesses, SENSOR helped the states to develop and refine their reporting systems and programs. The original SENSOR-Pesticides model was based on physician reporting. Each state contacted a select group of sentinel health care professionals on a regular basis to collect information. However, this system was labor-intensive and did not yield many cases. Because different states used different methods for collecting information, their data could not be compiled or compared to analyze for trends. In response, NIOSH, along with other federal agencies (US EPA, National Center for Environmental Health), non-federal agencies (CSTE, Association of Occupational and Environmental Clinics), and state health departments, developed a standard case definition and a set of standardized variables. As of 2013, SENSOR-Pesticides had 12 participating states contributing occupational pesticide-related injury and illness data: California, Florida, Iowa, Louisiana, Michigan, New York, North Carolina, and Washington received federal funding to support surveillance activities, while Nebraska, New Mexico, Oregon, and Texas were unfunded SENSOR-Pesticides program partners. Case definition A case of pesticide-related illness or injury is characterized by an acute onset of symptoms that are temporally related to a pesticide exposure. Cases are classified as occupational if exposure occurs at work, unless the case was a suicide or an attempted suicide. Cases are reportable when there is documentation of new adverse health effects temporally related to a documented pesticide exposure, and either there is consistent evidence of a causal relationship between the pesticide and the health effects based on the known toxicology of the pesticide, or there is not enough information to determine whether there is a causal relationship between the exposure and the health effects. State public health officials rate each case as definite, probable, possible or suspicious.
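The reportability criteria above amount to a simple logical rule, which the following Python sketch expresses as a predicate. The field names and the three-way causal-evidence coding are hypothetical illustrations, not the program's actual standardized variables.

# Sketch: the reportability rule above expressed as a predicate. Field
# names are hypothetical; the real SENSOR-Pesticides variables are
# defined in the program's standardized variable set.
def is_reportable(has_new_health_effects: bool,
                  temporally_related_exposure: bool,
                  causal_evidence: str) -> bool:
    """causal_evidence: 'consistent', 'insufficient', or 'ruled_out'."""
    documented = has_new_health_effects and temporally_related_exposure
    return documented and causal_evidence in ("consistent", "insufficient")

# A case with documented exposure but unresolved causality is reportable:
print(is_reportable(True, True, "insufficient"))   # True
print(is_reportable(True, True, "ruled_out"))      # False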
Illness severity is assigned as low, moderate, severe, or fatal. Data collection All states in the program require physicians to report pesticide-related injuries and illnesses; however, most states collect the majority of their data from workers' compensation claims, poison control centers, and state agencies with jurisdiction over pesticide use, such as state departments of agriculture. When they receive a report, health department officials review the information to determine whether it was pesticide related. If it was, they request medical records and try to interview the patient (or a proxy) and anyone else involved in the incident (e.g. supervisors, applicators, and witnesses). The data is compiled each year and put into a national database. In addition to identifying, classifying, and tabulating pesticide poisoning cases, the states periodically investigate pesticide-related events and develop interventions aimed at particular industries or pesticide hazards. Impact Federal and state-level scientists and researchers with SENSOR-Pesticides have published articles on pesticide exposure events and trends using program data. These articles include MMWR publications and articles in peer-reviewed journals on exposures such as acute pesticide-related illness in youth, agricultural workers, retail workers, migrant farm workers, and flight attendants. Several articles have attracted media attention and motivated legislative or other governmental action. Florida Medfly Eradication Program In response to a Mediterranean fruit fly (also known as "Medfly") outbreak, officials from the Florida Department of Agriculture sprayed pesticides (primarily malathion) and bait over five counties during the spring and summer of 1998. Scientists from the University of Florida's Institute of Food and Agricultural Sciences stated that malathion was being sprayed in a manner that did not pose a significant risk to public health. During the eradication effort, the Florida Department of Health investigated 230 cases of illness that were attributed to the pesticide. Officials from the Florida Department of Health and the SENSOR-Pesticides program published an article in the Centers for Disease Control and Prevention (CDC) Morbidity and Mortality Weekly Report (MMWR) that described these case reports and recommended alternative methods for Medfly control, including exclusion activities at ports of entry to prevent importation, more rapid detection through increased sentinel trapping densities, and the release of sterile male flies to interrupt the reproductive cycle. The United States Department of Agriculture (USDA) incorporated these suggestions into their 2001 Environmental Impact Statement on the Fruit Fly Cooperative Control Program. These impact statements guide the USDA's development of insect control strategies and decisions. Pesticides in schools Researchers from the SENSOR-Pesticides program published an article in 2005 in the Journal of the American Medical Association (JAMA) on pesticide poisoning in schools. The article, which included data collected by SENSOR, described illnesses in students and school employees associated with pesticide exposures. The study found that rates of pesticide-related illnesses in children rose significantly from 1998 to 2002 and called for a reduction in pesticide use to prevent pesticide-related illness on or near school grounds.
The article generated media coverage and drew attention to pesticide safety in schools and to safer alternatives to pesticides through integrated pest management (IPM). "[T]he study does provide evidence that using pesticides at schools is not innocuous and that there are better ways to use pesticides," said study co-author Dr. Geoffrey Calvert. Officials in organizations supporting the pesticide industry, such as CropLife America and RISE (Responsible Industry for a Sound Environment, a trade association representing pesticide manufacturers and suppliers), reacted strongly to the report, calling it “alarmist” and “incomplete” in its health reporting. CropLife America president Jay Vroom claimed that the report was “written without context about the proper use of pesticides in schools and [did] not mention the positive public health protections they provide" and stated that pesticide use in schools is "well regulated" and can be managed so that the risk is low. RISE president Allen James faulted the article for relying on unverified reports and said that evidence suggested that such incidents were extremely rare. The increased awareness of pesticide use in schools influenced parents and other stakeholders in numerous states to call for the adoption of integrated pest management programs. According to the National Pest Management Association, three more states passed IPM rules or laws between October 2005 and October 2008. Birth defects in Florida and North Carolina In February 2005, three infants were born with birth defects to migrant farmworkers within eight weeks of each other in Collier County, Florida. Because one of the mothers had worked in North Carolina and the other two worked in Florida, neither state's health department attributed the cluster to pesticide exposure at first. However, when they presented their findings at the annual SENSOR-Pesticides workshop in 2006, they realized that all three mothers worked for the same tomato grower during the period of organogenesis while pregnant, and that they may have been exposed to pesticides. The state health departments reported the cluster to their respective state agricultural departments. The Florida Department of Agriculture and Consumer Services inspected the grower's farms in Florida and fined the company $111,200 for violations they discovered; the North Carolina Department of Agriculture and Consumer Services conducted a similar inspection of farms in North Carolina and fined the company $184,500. After the investigation, North Carolina Governor Mike Easley assembled the “Governor’s Task Force on Preventing Agricultural Pesticide Exposure.” It presented its findings in April 2008, which caused the state legislature to pass anti-retaliation and recordkeeping laws, training mandates to protect the health of agricultural workers, and funding for improved surveillance. In Florida, the state legislature added ten new pesticide inspectors to the Florida Department of Agriculture and Consumer Services. Total release foggers Total release foggers (TRFs), or "bug bombs," release a fog of insecticide to kill bugs in a room and coat surfaces with a chemical so the insects do not return. It is estimated that 50 million TRFs are used in the US annually. SENSOR-Pesticides federal and state staff, along with officials from the California Department of Pesticide Regulation (CDPR), published an article in the CDC MMWR that called attention to injuries and illnesses resulting from use of total release foggers. 
The New York State Department of Environmental Conservation (DEC) published a press release in response, stating that the state would restrict their use. DEC Commissioner Pete Grannis announced that the department would move to classify foggers as a restricted-use product in New York State, meaning that only certified pesticide applicators would be able to obtain them. In March 2010, US EPA announced required label changes on indoor TRF products that reflect the label change recommendations made in the MMWR article. References External links SENSOR-Pesticides Program Case Definition for Acute Pesticide-Related Illness and Injury Standardized Variables Chemical safety Environmental effects of pesticides Toxic effects of pesticides Environmental monitoring Epidemiology National Institute for Occupational Safety and Health Occupational safety and health Pesticides in the United States Pesticide organizations Pesticide regulation in the United States
SENSOR-Pesticides
[ "Chemistry", "Environmental_science" ]
2,689
[ "Chemical accident", "nan", "Epidemiology", "Chemical safety", "Environmental social science" ]
39,033,457
https://en.wikipedia.org/wiki/Quake%20Global
Quake Global is a technology company that provides products for asset tracking, location, and analytics. Quake Global designs and implements facility- or campus-level asset tracking products, typically based around RFID and/or BLE technology. In 2012, Quake Global received an Inc. 5000 rank of 82 in the San Diego Metro Area. The company is based in San Diego. Company Quake Global was founded in 1998. The company provides products and services for asset tracking using RFID technology. The company also designs and implements machine-to-machine (M2M) communication devices that are intended to utilize cellular or satellite networks to manage devices globally. They service organizations in various fields such as healthcare and law enforcement. In 2010, Quake Global purchased the transportation-tracking company Stellar Satellite Communications from Orbcomm. In 2012, Quake acquired Odin, a radio-frequency identification (RFID) company. The company used this acquisition to enter the healthcare sector, and it marked Quake's introduction to the RFID market. The acquisition also gave Quake Global additional market share in RF M2M technology. Companies in more than 150 different countries utilize Quake's products and services, and the company holds seventeen patents. In 2012, Quake Global partnered with SkyWave Global Communications to create QPRO. This technology was implemented in the Russian Federation's Inmarsat satellite network. QPRO is designed for fast transmission of data. Quake's other clients include Volvo, Komatsu, Caterpillar, Hitachi, Faria and Bell Equipment. In 2019, Quake Global acquired Skynet Healthcare Technologies, which enabled Quake Global to add BLE active technology to the passive RFID system already used in its healthcare markets, as well as to enter the senior living market. Polina Braunstein is the president and CEO of Quake Global. She took the post in 2003. In 2009, Braunstein was a finalist for the Ernst & Young Entrepreneur of the Year Award. Born in Russia, she received a Master of Science in business and industrial engineering from Novocherkassk State Polytechnic University. Braunstein was named a Top Influential nominee in 2011. Awards and recognition Quake Global reported revenue of $26.2 million in 2011 and a three-year growth rate of 30%. In 2008 and 2009, it was listed on the Deloitte & Touche Technology Fast 500, which chronicles the fastest-growing private technology companies. References Technology companies of the United States Modems Wireless Companies based in San Diego
Quake Global
[ "Engineering" ]
490
[ "Wireless", "Telecommunications engineering" ]
39,033,720
https://en.wikipedia.org/wiki/Stochastic%20cellular%20automaton
Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains are an important extension of cellular automata. Cellular automata are discrete-time dynamical systems of interacting entities whose state is discrete. The state of the collection of entities is updated at each discrete time step according to some simple homogeneous rule. All entities' states are updated in parallel (synchronously). Stochastic cellular automata are CA whose updating rule is stochastic, which means the new entities' states are chosen according to some probability distributions; the result is a discrete-time random dynamical system. Despite the simplicity of the updating rules, complex behaviour such as self-organization may emerge from the spatial interaction between the entities. As a mathematical object, a PCA may be considered in the framework of stochastic processes as an interacting particle system in discrete time. PCA as Markov stochastic processes As a discrete-time Markov process, a PCA is defined on a product space (cartesian product) E = \prod_{k \in G} S_k, where G is a finite or infinite graph, like \mathbb{Z}, and where S_k is a finite space, like for instance S_k = \{-1, +1\} or S_k = \{0, 1\}. The transition probability has a product form P(d\sigma \mid \eta) = \otimes_{k \in G} p_k(d\sigma_k \mid \eta), where \eta \in E and p_k(d\sigma_k \mid \eta) is a probability distribution on S_k. In general some locality is required: p_k(d\sigma_k \mid \eta) = p_k(d\sigma_k \mid \eta_{V_k}), where \eta_{V_k} = (\eta_j)_{j \in V_k} with V_k a finite neighbourhood of k. Examples of stochastic cellular automata Majority cellular automaton There is a version of the majority cellular automaton with probabilistic updating rules; see Toom's rule. Relation to lattice random fields PCA may be used to simulate the Ising model of ferromagnetism in statistical mechanics. Some categories of models have been studied from a statistical mechanics point of view. Cellular Potts model There is a strong connection between probabilistic cellular automata and the cellular Potts model, in particular when it is implemented in parallel. Non-Markovian generalization The Galves–Löcherbach model is an example of a generalized PCA with a non-Markovian aspect. References Cellular automata Lattice models Self-organization Complex systems theory Spatial processes Markov models
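A minimal simulation makes the synchronous stochastic update concrete. The Python sketch below implements a one-dimensional PCA with a noisy three-cell majority rule; the rule, the noise level, and all other parameters are chosen purely for illustration.

# Sketch: a one-dimensional probabilistic cellular automaton. Each cell
# copies the majority of its three-cell neighbourhood, but with
# probability eps it flips to the opposite state, a noisy majority
# rule chosen for illustration (periodic boundary conditions).
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_steps, eps = 100, 50, 0.1
state = rng.integers(0, 2, n_cells)        # random initial configuration

for _ in range(n_steps):
    left = np.roll(state, 1)
    right = np.roll(state, -1)
    majority = (left + state + right >= 2).astype(int)
    # synchronous update: every cell draws its new state independently
    noise = rng.random(n_cells) < eps
    state = np.where(noise, 1 - majority, majority)

print("fraction of ones after", n_steps, "steps:", state.mean())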
Stochastic cellular automaton
[ "Physics", "Materials_science", "Mathematics" ]
461
[ "Self-organization", "Recreational mathematics", "Lattice models", "Computational physics", "Cellular automata", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
39,034,242
https://en.wikipedia.org/wiki/GP5%20chip
The GP5 is a co-processor accelerator built to accelerate discrete belief propagation on factor graphs and other large-scale tensor product operations for machine learning. It is related to the Google Tensor Processing Unit, which it anticipated by a number of years. It is designed to run as a co-processor with another controller (such as a CPU (x86) or an ARM/MIPS/Tensilica core). It was developed as the culmination of DARPA's Analog Logic program. The GP5 has a fairly exotic architecture, resembling neither a GPU nor a DSP, and leverages massive fine-grained and coarse-grained parallelism. It is deeply pipelined. The different algorithmic tasks involved in performing belief propagation updates are performed by independent, heterogeneous compute units. The performance of the chip is governed by the structure of the machine learning workload being evaluated. In typical cases, the GP5 is roughly 100 times faster and 100 times more energy efficient than a single core of a modern Core i7 performing a comparable task. It is roughly 10 times faster and 1000 times more energy efficient than a state-of-the-art GPU. It is roughly 1000 times faster and 10 times more energy efficient than a state-of-the-art ARM processor. It was benchmarked on typical machine learning and inference workloads that included protein side-chain folding, turbo error correction decoding, stereo vision, signal noise reduction, and others. Analog Devices, Inc. acquired the intellectual property for the GP5 when it acquired Lyric Semiconductor, Inc. in 2011. References External links Dimple Lyric Semiconductor, now Analog Devices: Lyric Labs Darpa's Analog Logic program Integrated circuits Microtechnology
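As a rough illustration of the kind of workload described above — belief propagation updates expressed as tensor-product operations — here is a minimal, generic sum-product message update for a single pairwise factor. The factor values and message are made-up toy numbers, and this is a textbook update, not the GP5's actual instruction set or API.

```python
import numpy as np

def factor_to_variable_message(factor, incoming):
    """Sum-product message from a pairwise factor to variable x1.

    factor:   2-D array F[x0, x1] holding the factor's potentials.
    incoming: message (1-D array over x0) arriving from variable x0.
    The update is a small tensor contraction, the operation a BP
    accelerator evaluates in bulk:  m(x1) = sum_x0 F[x0, x1] * in(x0).
    """
    msg = incoming @ factor          # contract over x0
    return msg / msg.sum()           # normalize for numerical stability

# Toy example: a soft "agreement" factor between two binary variables.
F = np.array([[0.9, 0.1],
              [0.1, 0.9]])
m_in = np.array([0.8, 0.2])          # current belief message about x0
print(factor_to_variable_message(F, m_in))
```

Real factor graphs repeat this contraction across thousands of factors and variables per iteration, which is the fine- and coarse-grained parallelism the article refers to.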
GP5 chip
[ "Materials_science", "Technology", "Engineering" ]
352
[ "Integrated circuits", "Computer engineering", "Microtechnology", "Materials science" ]
39,036,556
https://en.wikipedia.org/wiki/Photon%20counting
Photon counting is a technique in which individual photons are counted using a single-photon detector (SPD). A single-photon detector emits a pulse of signal for each detected photon. The counting efficiency is determined by the quantum efficiency and the system's electronic losses. Many photodetectors can be configured to detect individual photons, each with relative advantages and disadvantages. Common types include photomultipliers, Geiger counters, single-photon avalanche diodes, superconducting nanowire single-photon detectors, transition edge sensors, and scintillation counters. Charge-coupled devices can also be used. Advantages Photon counting eliminates gain noise, where the proportionality constant between analog signal out and number of photons varies randomly. Thus, the excess noise factor of a photon-counting detector is unity, and the achievable signal-to-noise ratio for a fixed number of photons is generally higher than for the same detector without photon counting. Photon counting can improve temporal resolution. In a conventional detector, multiple arriving photons generate overlapping impulse responses, limiting temporal resolution to approximately the fall time of the detector. However, if it is known that a single photon was detected, the center of the impulse response can be evaluated to precisely determine its arrival time. Using time-correlated single-photon counting (TCSPC), temporal resolution of less than 25 ps has been demonstrated using detectors with a fall time more than 20 times greater. Disadvantages Single-photon detectors are typically limited to detecting one photon at a time and may require time between detection events to reset. Photons that arrive during this interval may not be detected. Therefore, the maximum light intensity that can be accurately measured is typically low. Measurements composed of small numbers of photons intrinsically have a low signal-to-noise ratio caused by the randomly varying numbers of emitted photons. This effect is less pronounced in conventional detectors that can concurrently detect large numbers of photons. Because of the lower maximum signal level, either the signal-to-noise ratio will be lower or the exposure time longer than for conventional detection. Applications Single-photon detection is useful in fields such as: Fiber-optic communication Quantum information science Quantum encryption Medical imaging Light detection and ranging DNA sequencing Astrophysics Materials science Medicine In radiology, one of the major disadvantages of X-ray imaging modalities is the negative effects of ionising radiation. Although the risk from small exposures (as used in most medical imaging) is thought to be small, the radiation protection principle of "as low as reasonably practicable" (ALARP) is always applied. One way of reducing exposures is to make X-ray detectors as efficient as possible, so that lower doses can be used for a given diagnostic image quality. Photon counting detectors could help, due to their ability to reject noise more easily. Photon counting is analogous to color photography, where each photon's differing energy affects the output, as compared to charge integration, which considers only the intensity of the signal, as in black and white photography. Photon-counting mammography was introduced commercially in 2003. Although such systems are not widespread, some evidence supports their ability to produce comparable images at an approximately 40% lower dose than other digital mammography systems with flat panel detectors. 
Spectral imaging technology was subsequently developed to discriminate between photon energies, with the possibility of further improving image quality and distinguishing tissue types. Photon-counting computed tomography is another area of interest, which is rapidly evolving and is approaching clinical feasibility. Fluorescence-lifetime imaging microscopy Time-correlated single-photon counting (TCSPC) precisely records the arrival times of individual photons, enabling measurement of picosecond time-scale differences in the arrival times of photons generated by fluorescence, phosphorescence or other chemical processes that emit light, providing additional molecular information about samples. The use of TCSPC enables relatively slow detectors to measure extremely minute time differences that would be obscured by overlapping impulse responses if multiple photons were incident concurrently. LIDAR Some pulse LIDAR systems operate in single photon counting mode using TCSPC to achieve higher resolution. Infrared photon-counting technologies for LIDAR are advancing rapidly. Measured quantities The number of photons observed per unit time is the photon flux. The photon flux per unit area is the photon irradiance if the photons are incident on a surface, or photon exitance if the emission of photons from a broad-area source is being considered. The flux per unit solid angle is the photon intensity. The flux per unit source area per unit solid angle is photon radiance. The corresponding SI units are s−1 for photon flux, s−1·m−2 for photon irradiance and exitance, s−1·sr−1 for photon intensity, and s−1·m−2·sr−1 for photon radiance. See also Single-photon source Visible Light Photon Counter Transition edge sensor Superconducting nanowire single-photon detector Time-correlated single photon counting Oversampled binary image sensor References Optical metrology Photonics Particle detectors
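The shot-noise limit mentioned under Disadvantages can be illustrated numerically: photon arrivals follow Poisson statistics, so the signal-to-noise ratio of an ideal counting measurement is √N for N detected photons. The sketch below simulates this; the trial count and mean photon numbers are arbitrary assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def shot_noise_snr(mean_photons, trials=100_000):
    """Simulate an ideal photon-counting measurement.

    Counts are Poisson distributed, so the measured SNR (mean/std)
    should approach sqrt(N) -- the 'randomly varying numbers of
    emitted photons' the article describes.
    """
    counts = rng.poisson(mean_photons, size=trials)
    return counts.mean() / counts.std()

for n in (10, 100, 10_000):
    print(n, shot_noise_snr(n), np.sqrt(n))  # simulated SNR vs. sqrt(N)
```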
Photon counting
[ "Technology", "Engineering" ]
992
[ "Particle detectors", "Measuring instruments" ]
39,039,959
https://en.wikipedia.org/wiki/Aluminium%20granules
Aluminium granules are fine spherical aggregates of aluminium. Manufacture Aluminium granules are manufactured by melting primary or secondary aluminium, which is then blown in air or vacuum, or cast in sand and then sieved. Another method is casting molten aluminium in water. Granules versus powders Aluminium granules have been found to be safer and more economical than atomized aluminium powder. Aluminium granules carry a lower explosion risk both in production and in use of the product itself. Advantages The density of aluminium granules ranges from 1.0 to 1.8 g/cm3 and is much higher than that of aluminium powder. See also Aluminium powder References Aluminium alloys Deoxidizers
Aluminium granules
[ "Chemistry", "Materials_science" ]
134
[ "Deoxidizers", "Alloys", "Metallurgy", "Aluminium alloys" ]
39,040,085
https://en.wikipedia.org/wiki/Jessiko
Jessiko is a long autonomous robot fish developed by French start-up Robotswim, located in Palaiseau, in the Paris area. History The Jessiko project was launched in 2005 by Christophe Tiraby, a French engineer. In March 2009, Christophe Tiraby founded the Robotswim company to industrialize and commercialize Jessiko. In November 2009, "Jessiko robot fish" won the Great Prize of Innovation of Paris City in the Design category. Jessiko was first exhibited at the Innorobo robotics fair in March 2011. From May 12 to August 12, Jessiko was exhibited at the France Pavilion of the Expo 2012 Yeosu World Fair in South Korea. On this occasion, Robotswim implemented the first school of robot fish operating in permanent conditions. Versions Five prototype versions were developed (V0 to V4); the industrial version, commercialized since 2012, is V5. Use Jessiko was marketed as a luxury decoration for businesses such as hotels, restaurants, and museums. Tiraby expressed hope that one day it would be common to find his invention in household ponds and swimming pools. References See also Biomechatronics External links Robotswim Youtube channel Robots of France Electromechanical engineering Robotic animals Underwater robots Fish and humans
Jessiko
[ "Engineering", "Biology" ]
248
[ "Animals", "Electromechanical engineering", "Mechanical engineering by discipline", "Robotic animals", "Electrical engineering" ]
39,040,452
https://en.wikipedia.org/wiki/King%20Abdullah%20City%20for%20Atomic%20and%20Renewable%20Energy
The King Abdullah City for Atomic and Renewable Energy (K.A.C.A.R.E.) is a scientific research and governmental entity in the Kingdom of Saudi Arabia and is chaired by the Minister of Energy. K.A.C.A.R.E. was founded in 2010 with a mandate to develop nuclear and renewable energy in Saudi Arabia. It is headquartered in Riyadh city. K.A.C.A.R.E. conducts research and focuses on collaborating with government agencies, scientific institutions, and international partners. Leadership H.R.H. Abdulaziz bin Salman – President (2022 – present) H.E Dr. Khalid ben Saleh Al-Sultan – President (2018 – 2022) H.E Dr. Hashim ben Abdullah Yamani - President (2010 – 2018) H.E Dr. Walid ben Hussain Abu Alfaraj - Vice President (2010 – 2018) H.E Dr. Khalid ben Mohammed Al Sulaiman – Vice President for Renewable Energy (2010 – 2014) Mission K.A.C.A.R.E. works on proposing a national policy for nuclear and renewable energy and implementing this strategic plan. It also establishes and manages projects to achieve its objectives, including using nuclear and renewable energy sources to achieve a sustainable national energy mix, as well as research and development centers to promote sustainable development within the Saudi Arabian economy. K.A.C.A.R.E. also specializes in achieving the following: Proposing the national policy for the development of an efficient and balanced national energy sector that contributes to the development of the local economy, as well as the development of the strategies and plans necessary for its implementation. Qualification of the national workforce Providing renewable energy data to the private sector in order to select the areas with the best solar radiation for the establishment of renewable energy projects in the Kingdom. Supporting joint research programs between the Kingdom and international scientific institutions to keep abreast of the continuous scientific development in nuclear and renewable energy technologies. Conducting feasibility studies for nuclear reactors for peaceful uses. Raising awareness of the importance of atomic and renewable energy for the future of the national economy Partnership and cooperation with the largest international suppliers of nuclear technology to familiarize them with the objectives of the components of the "Saudi National Atomic Energy Project" in the Kingdom, especially in the field of large nuclear reactors, and the Kingdom's aim to enter into the peaceful use of nuclear energy to produce electricity and desalinated water. Projects and Initiatives Saudi National Atomic Energy Project (SNEAP): The National Atomic Energy Project was established in 2017 and consists of four main components: 1. Large Nuclear Power Plants (LNPP): These are reactors with an electric capacity of 1,200-1,600 megawatts of power per reactor, which contribute to supporting the base load in the grid throughout the year. 2. Small Module Reactors (SMR): These reactors enable the Kingdom to own and develop atomic energy technologies and build them in places isolated from the electrical grid, which suits its water desalination requirements and various thermal applications in the petrochemical industries. Small Module Reactors consist of HTGR reactors and SMART technology reactors. 3. 
Nuclear Fuel Cycle (NFC): It represents the Kingdom's first step on the path to self-sufficiency in the production of nuclear fuel, which will contribute to qualifying a national workforce competent in the exploration and production of uranium, and to using the experience gained in this project to develop the Kingdom's natural uranium resources. 4. Nuclear & Radiological Regulatory Commission (NRRC): The Regulator is an independent body that monitors and supervises the implementation of all components of the "Saudi National Atomic Energy Project" in Saudi Arabia to ensure the highest levels of safety aimed at protecting individuals, society, the environment and nuclear installations from ionizing radiation and radiation activities in the Kingdom. K.A.C.A.R.E.'s Renewable Energy Initiatives K.A.C.A.R.E. has three initiatives linked to the "National Industrial Development and Logistics Program", which is one of the programs to realize the Kingdom's Vision 2030. (1) National Data Centers for Renewable Energy Initiative: The National Data Center for Renewable Energy provides data for studies, research and projects to serve a wide range of users, such as investors, researchers, technology developers and others. The National Data Center also provides simulation, modeling and forecasting tools for renewable energy, and gives a picture of the state of the renewable energy sector in the Kingdom and its growth rates, using data intelligence. (2) Renewable Energy Technologies Localization Initiative: The Renewable Energy Technologies Localization Program aims at increasing the local content of the renewable energy technologies sector by accelerating the growth of the local private sector and supporting local companies to develop products, applications and services in the field of renewable energy. The empowerment of the local private sector is achieved through the establishment of joint venture projects led by the private sector in accordance with international best practices, as well as the standard studies carried out by K.A.C.A.R.E., through applying the principle of cost sharing between the government and the local private sector. (3) Human Capacity Building Initiative: K.A.C.A.R.E. cooperates with various stakeholders within the Kingdom and with international institutions to further develop human capital in line with the labor market. The main objectives of the capacity building initiative are: Attracting manpower and improving employment Supporting the development of the environmental education system Supporting the localization of technology and the transfer of knowledge through the development of the human workforce. See also Energy in Saudi Arabia Nuclear energy in Saudi Arabia Nuclear program of Saudi Arabia References K.A.CARE website Vision 2030 External links King Abdullah City for Atomic and Renewable Energy official website The Kingdom's Sustainable Energy Portal for Developers & Investors 2010 establishments in Saudi Arabia Government agencies established in 2010 Economy of Saudi Arabia Energy in Saudi Arabia Nuclear technology in Saudi Arabia Government of Saudi Arabia Renewable energy organizations
King Abdullah City for Atomic and Renewable Energy
[ "Engineering" ]
1,229
[ "Renewable energy organizations", "Energy organizations" ]
31,947,931
https://en.wikipedia.org/wiki/Problem%20book
Problem books are textbooks, usually at advanced undergraduate or post-graduate level, in which the material is organized as a series of problems, each with a complete solution given. Problem books are distinct from workbooks in that the problems are designed as a primary means of teaching, not merely for practice on material learned elsewhere. Problem books are found most often in the mathematical and physical sciences; they have a strong tradition within the Russian educational system. At some American universities, problem books are associated with departmental preliminary or candidacy examinations for the Ph.D. degree. Such books may exemplify decades of actual examinations and, when published, are studied by graduate students at other institutions. Other problem books are specific to graduate fields of study. While certain problem books are collected, written, or edited by worthy but little-known toilers, others are done by renowned scholars and researchers. The casebook for law and other non-technical fields can provide a similar function. Notable problem books in mathematics George Pólya and Gábor Szegő (1925) Problems and Theorems in Analysis () Paul Halmos (1982) A Hilbert Space Problem Book () Frederick Mosteller (1965,1987) Fifty Challenging Problems in Probability with Solutions () Arthur Engel (1997) Problem-Solving Strategies () Notable problem books in physics V. V. Batygin and I. N. Toptygin (1964,1978) Problems in Electrodynamics (ASIN B003X6BPSE) I. E. Irodov (1981) Problems in General Physics () Kyriakos Tamvakis (2005) Problems and Solutions in Quantum Mechanics () A.P. Lightman, W.H. Press, R.H. Price, and S.A. Teukolsky (1979) Problem Book in Relativity and Gravitation () W-H. Steeb (2006) Problems And Solutions in Quantum Computing And Quantum Information () Notable problem books in physics based on candidacy examinations J.A. Cronin, D.F. Greenberg, and V.L. Telegdi (1967,1979) University of Chicago Graduate Problems in Physics with Solutions () Nathan Newbury, John Ruhl, Suzanne Staggs, Stephen Thorsett, and Michael Newman. (1991) Princeton Problems in Physics with Solutions () References External links Problem Book in Relativity and Gravitation (online) Physics education Mathematics education
Problem book
[ "Physics" ]
493
[ "Applied and interdisciplinary physics", "Physics education" ]
31,948,434
https://en.wikipedia.org/wiki/Hydrogen%20ion%20cluster
A hydrogen molecular ion cluster or hydrogen cluster ion is a positively charged cluster of hydrogen molecules. The hydrogen molecular ion (H2+) and trihydrogen ion (H3+) are well defined molecular species. However hydrogen also forms singly charged clusters (Hn+) with n up to 120. Experiments Hydrogen ion clusters can be formed in liquid helium or with lesser cluster size in pure hydrogen. is far more common than higher even numbered clusters. is stable in solid hydrogen. The positive charge is balanced by a solvated electron. It is formed when ionizing radiation impinges on solid hydrogen, and so is formed in radioactive solid tritium. In natural hydrogen treated with radiation, the positive charge transfers to HD molecules, in preference to , with the ultimate most stable arrangement being HD(HD)+HD. can migrate through solid hydrogen by linking a hydrogen molecule at one end and losing it at the other: + → + . This migration stops once an HD molecule is added resulting in a lower energy level. HD or is added in preference over . Clampitt and Gowland found clusters with an odd number of hydrogen atoms and later showed that was relatively stable. formed the core of this cluster with six molecules surrounding it. Hiroka studied the stability of the odd numbered clusters in gas up to . Bae determined that was especially stable amongst the odd numbered clusters. Kirchner discovered even numbered atomic clusters in gas at lower concentrations than the odd numbered atom clusters. was twenty times less abundant than . , and were detected at lesser amounts than . Kurosaki and Takayanagi showed that is much more stable than other even clusters and showed antiprismatic symmetry of order 4 ( molecular symmetry). This turnstile structured molecule was computationally found to be more energetically stable than a ring of five hydrogen atoms around a proton. Negative hydrogen clusters have not been found to exist. is theoretically unstable, but in theory is bound at 0.003 eV. Decay in the free gas state decays by giving off H atoms and molecules. Different energies of decay occur with levels averaging at 0.038 eV and peaking at 0.14 eV. Formation Hydrogen molecular ion clusters can be formed through different kinds of ionizing radiation. High energy electrons capable of ionizing the material can perform this task. When hydrogen dissolved in liquid helium is irradiated with electrons, their energy must be sufficient to ionize helium to produce significant hydrogen clusters. Irradiation of solid hydrogen by gamma rays or X-rays also produces . Positive ion clusters are also formed when compressed hydrogen expands through a nozzle. Kirchner's theory for the formation of even numbered clusters was that neutral molecules reacted with the ion (or other odd clusters) to make . Properties Solvation of in solid hydrogen had little effect on its spectrum. Use SRI International studied solid ionic hydrogen fuel. They believed that a solid containing and H− ions could be manufactured. If it could be made it would have a higher energy than other rocket fuels with only 2% concentration of ions. However they could not contain the H− in a stable way, but determined that other negative ions would do as well. This theoretical impulse exceeds that of solid and liquid fuel rockets. SRI developed a cluster ion gun that could make positive and negative ion clusters at a current of 500 pA. Nuclear fusion using ion clusters can impact far more atoms than single ions in one hit. This concept is called cluster ion fusion (CIF). 
Lithium deuteride (LiD) is a potential starter material for generating the ions. References Hydrogen Cations
Hydrogen ion cluster
[ "Physics", "Chemistry" ]
715
[ "Cations", "Ions", "Matter" ]
31,950,045
https://en.wikipedia.org/wiki/Wood%20method
The Wood method, also known as the Merchant–Rankine–Wood method, is a structural analysis method which was developed to determine estimates for the effective buckling length of a compressed member included in building frames, in both sway and non-sway buckling modes. It is named after R. H. Wood. According to this method, the ratio between the critical buckling length and the real length of a column is determined based on two redistribution coefficients, $\eta_1$ and $\eta_2$, which are mapped to a ratio between the effective buckling length of a compressed member and its real length. The redistribution coefficients are obtained through the following expressions: $\eta_1=\frac{K_c+K_1}{K_c+K_1+K_{11}+K_{12}}$ and $\eta_2=\frac{K_c+K_2}{K_c+K_2+K_{21}+K_{22}}$, where $K_c$ is the stiffness coefficient of the column under consideration, $K_1$ and $K_2$ are the stiffness coefficients for the adjacent lengths of column, and $K_{ij}$ are the effective stiffness coefficients of the beams framing into the two ends of the column. Although this method was included in ENV 1993-1-1:1992, it is absent from EN 1993-1-1. See also EN 1993 Merchant–Rankine method Horne method References Structural analysis
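The distribution factors above, together with approximate closed-form fits for the buckling-length ratio commonly quoted alongside the Wood alignment charts (e.g. in ENV 1993-1-1 Annex E), can be evaluated directly. The sketch below is illustrative only; the exact fit coefficients should be treated as an assumption rather than a normative statement of the code, and the input stiffnesses are made-up values.

```python
def distribution_factors(Kc, K1, K2, K11, K12, K21, K22):
    """Redistribution (distribution) factors at the two ends of a column.

    Kc       : stiffness coefficient of the column being checked (I/L)
    K1, K2   : stiffness coefficients of the adjacent column lengths
    K11..K22 : effective stiffness coefficients of the beams at each end
    """
    eta1 = (Kc + K1) / (Kc + K1 + K11 + K12)
    eta2 = (Kc + K2) / (Kc + K2 + K21 + K22)
    return eta1, eta2

def buckling_length_ratio(eta1, eta2, sway=False):
    """Approximate fits for l_cr / L as a function of (eta1, eta2)."""
    if sway:
        num = 1 - 0.2 * (eta1 + eta2) - 0.12 * eta1 * eta2
        den = 1 - 0.8 * (eta1 + eta2) + 0.6 * eta1 * eta2
        return (num / den) ** 0.5
    return 0.5 + 0.14 * (eta1 + eta2) + 0.055 * (eta1 + eta2) ** 2

# Toy column restrained by one beam at each end (all stiffnesses invented).
eta1, eta2 = distribution_factors(Kc=1.0, K1=0.0, K2=0.0,
                                  K11=2.0, K12=0.0, K21=2.0, K22=0.0)
print(buckling_length_ratio(eta1, eta2, sway=False))  # between 0.5 and 1
print(buckling_length_ratio(eta1, eta2, sway=True))   # >= 1 in sway mode
```

Note how the two limits behave as expected: a non-sway member has an effective length between 0.5L and L, while sway buckling always yields a ratio of at least 1.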
Wood method
[ "Engineering" ]
182
[ "Structural engineering", "Structural analysis", "Mechanical engineering", "Aerospace engineering" ]
31,950,365
https://en.wikipedia.org/wiki/Circle%20packing%20in%20an%20isosceles%20right%20triangle
Circle packing in a right isosceles triangle is a packing problem where the objective is to pack unit circles into the smallest possible isosceles right triangle. Minimum solutions (lengths shown are length of leg) are shown in the table below. Solutions to the equivalent problem of maximizing the minimum distance between points in an isosceles right triangle were known to be optimal for and were extended up to . In 2011 a heuristic algorithm found 18 improvements on previously known optima, the smallest of which was for . References Circle packing
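The feasibility side of the problem is straightforward to check by computer: a candidate packing is valid if every circle centre keeps at least one radius of clearance from all three sides and the centres are pairwise at least one diameter apart. A small sketch follows; the coordinate convention (legs on the coordinate axes) is an assumption made for illustration.

```python
import numpy as np

def is_valid_packing(centers, leg):
    """Check that unit circles at `centers` fit inside the right isosceles
    triangle with vertices (0, 0), (leg, 0) and (0, leg)."""
    c = np.asarray(centers, dtype=float)
    # Each centre must be at least 1 away from the two legs (the axes) ...
    if (c < 1).any():
        return False
    # ... and from the hypotenuse x + y = leg (distance = (leg - x - y)/sqrt(2)).
    if ((leg - c.sum(axis=1)) / np.sqrt(2) < 1).any():
        return False
    # Circles must not overlap: centre-to-centre distance >= 2.
    for i in range(len(c)):
        for j in range(i + 1, len(c)):
            if np.hypot(*(c[i] - c[j])) < 2:
                return False
    return True

# n = 1: one inscribed unit circle needs a leg of 2 + sqrt(2).
print(is_valid_packing([(1.0, 1.0)], 2 + np.sqrt(2)))  # True
```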
Circle packing in an isosceles right triangle
[ "Mathematics" ]
109
[ "Geometry problems", "Circle packing", "Mathematical problems", "Packing problems" ]
30,960,051
https://en.wikipedia.org/wiki/Sengkang%20Riverside%20Park
Sengkang Riverside Park is a riverine park located at Anchorvale Street and Fernvale Street abutting Sungei Punggol, Singapore. The park consists of three open-space land parcels and is also home to a constructed wetland. The Sengkang Sports Centre is located just adjacent to the riverine park, connected by a floating wetland, the Sengkang Floating Wetland. History The 21-hectare park was opened to the public in November 2008. The park is situated alongside the Sungei Punggol. The integration of urban planning and protection of Singapore's water resources has earned Sengkang Riverside Park an ABC Waters Certification. Sengkang Floating Wetland On 7 November 2010, Prime Minister Lee Hsien Loong officially opened the floating wetland, which is designed with a "fruitful" theme. The wetland is about half the size of a football field, and connects the Sengkang Riverside Park with the Sengkang Sports Centre. The floating wetland helps to collect and filter rainwater naturally through its aquatic plants. The wetland also acts as a habitat for fish, birds and other wildlife. Since the completion of the wetland, more birds and dragonflies have been noticed in the area. Facilities Visitor centre The Visitor Centre is a covered meeting point, located at the central part of the park. The sheltered centre provides visitors to the park with shelter from bad weather. Public toilets and vending machines are also available here. Fruit Tree Trail Sengkang Riverside Park is also unique for its Fruit Tree Trail, which consists of 16 different fruit trees, some of which are rare and cannot be found in local supermarkets. The trail runs along the perimeter of the constructed wetlands. The 16 different fruit trees that can be found in the trail are Mangosteen Tree, Ordeal Tree, Custard Apple, Pomelo, Lime, Weeping Tea Tree, Island Lychee, Mango, Pond Apple Tree, Asam Tree, Java Olive Tree, Elephant Apple, Fish Killer Tree, Starfruit, Pig's Mango and Wine Palm. Others Visitors to the park can use the available facilities to cycle and exercise in the park. There are various resting points throughout the park and along the tracks. The Civic Event Lawn at the park provides a venue for events to be hosted. Travel The park can be reached via the Sengkang LRT line at Farmway LRT station & Kupang LRT station. Visitors can also choose to travel there by car; the park's car park is located in front of the Visitor Centre. See also List of parks in Singapore References External links National Parks Board, Singapore Parks in Singapore Sengkang Constructed wetlands Tourist attractions in North-East Region, Singapore Wetlands of Singapore
Sengkang Riverside Park
[ "Chemistry", "Engineering", "Biology" ]
561
[ "Bioremediation", "Constructed wetlands", "Environmental engineering" ]
30,962,412
https://en.wikipedia.org/wiki/Human-transcriptome%20DataBase%20for%20Alternative%20Splicing
The Human-transcriptome DataBase for Alternative Splicing (H-DBAS) is a database of alternatively spliced human transcripts based on H-Invitational. See also Alternative splicing References External links https://web.archive.org/web/20110208034608/http://jbirc.jbic.or.jp/h-dbas/. Biological databases Gene expression Spliceosome RNA splicing
Human-transcriptome DataBase for Alternative Splicing
[ "Chemistry", "Biology" ]
98
[ "Gene expression", "Bioinformatics", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Biological databases" ]
30,967,636
https://en.wikipedia.org/wiki/Genome%20evolution
Genome evolution is the process by which a genome changes in structure (sequence) or size over time. The study of genome evolution involves multiple fields such as structural analysis of the genome, the study of genomic parasites, gene and ancient genome duplications, polyploidy, and comparative genomics. Genome evolution is a constantly changing and evolving field due to the steadily growing number of sequenced genomes, both prokaryotic and eukaryotic, available to the scientific community and the public at large. History Since the first sequenced genomes became available in the late 1970s, scientists have been using comparative genomics to study the differences and similarities between various genomes. Genome sequencing has progressed over time to include more and more complex genomes, including the eventual sequencing of the entire human genome in 2001. By comparing genomes of both close relatives and distant ancestors, the stark differences and similarities between species began to emerge, as well as the mechanisms by which genomes are able to evolve over time. Prokaryotic and eukaryotic genomes Prokaryotes Prokaryotic genomes have two main mechanisms of evolution: mutation and horizontal gene transfer. A third mechanism, sexual reproduction, is prominent in eukaryotes and also occurs in bacteria. Prokaryotes can acquire novel genetic material through the process of bacterial conjugation, in which both plasmids and whole chromosomes can be passed between organisms. An often cited example of this process is the transfer of antibiotic resistance utilizing plasmid DNA. Another mechanism of genome evolution is provided by transduction, whereby bacteriophages introduce new DNA into a bacterial genome. The main mechanism of sexual interaction is natural genetic transformation, which involves the transfer of DNA from one prokaryotic cell to another through the intervening medium. Transformation is a common mode of DNA transfer, and at least 67 prokaryotic species are known to be competent for transformation. Genome evolution in bacteria is well understood because of the thousands of completely sequenced bacterial genomes available. Genetic changes may lead to either increases or decreases of genomic complexity due to adaptive genome streamlining and purifying selection. In general, free-living bacteria have evolved larger genomes with more genes so they can adapt more easily to changing environmental conditions. By contrast, most parasitic bacteria have reduced genomes, as their hosts supply many if not most nutrients, so that their genome does not need to encode enzymes that produce these nutrients. Eukaryotes Eukaryotic genomes are generally larger than those of prokaryotes. While the E. coli genome is roughly 4.6 Mb in length, the human genome is much larger, with a size of approximately 3.2 Gb. The eukaryotic genome is linear and can be composed of multiple chromosomes, packaged in the nucleus of the cell. The non-coding portions of the gene, known as introns, which are largely not present in prokaryotes, are removed by RNA splicing before translation of the protein can occur. Eukaryotic genomes evolve over time through many mechanisms, including sexual reproduction, which introduces much greater genetic diversity to the offspring than the usual prokaryotic process of replication, in which the offspring are theoretically genetic clones of the parental cell. Genome size Genome size is usually measured in base pairs (or bases in single-stranded DNA or RNA). 
The C-value is another measure of genome size. Research on prokaryotic genomes shows that there is a significant positive correlation between the C-value of prokaryotes and the number of genes that compose the genome. This indicates that gene number is the main factor influencing the size of the prokaryotic genome. In eukaryotic organisms, a paradox is observed, namely that the number of genes that make up the genome does not correlate with genome size. In other words, the genome size is much larger than would be expected given the total number of protein-coding genes. Genome size can increase by duplication, insertion, or polyploidization. Recombination can lead to either DNA loss or gain. Genomes can also shrink because of deletions. A famous example of such gene decay is the genome of Mycobacterium leprae, the causative agent of leprosy. M. leprae has lost many once-functional genes over time due to the formation of pseudogenes. This is evident in comparison with its closest relative, Mycobacterium tuberculosis. M. leprae lives and replicates inside a host, and due to this arrangement it no longer needs many of the genes it once carried, which allowed it to live and prosper outside the host. Thus, over time, these genes have lost their function through mechanisms such as mutation, causing them to become pseudogenes. It is beneficial to an organism to rid itself of non-essential genes because it makes replicating its DNA much faster and requires less energy. An example of increasing genome size over time is seen in filamentous plant pathogens. These plant-pathogen genomes have been growing larger over the years due to repeat-driven expansion. The repeat-rich regions contain genes coding for host interaction proteins. With the addition of more and more repeats to these regions, the pathogens increase the possibility of developing new virulence factors through mutation and other forms of genetic recombination. In this way it is beneficial for these plant pathogens to have larger genomes. Chromosomal evolution The evolution of genomes can be impressively shown by the change of chromosome number and structure over time. For instance, the ancestral chromosomes corresponding to chimpanzee chromosomes 2A and 2B fused to produce human chromosome 2. Similarly, the chromosomes of more distantly related species show chromosomes that have been broken up into more parts over the course of evolution. This can be demonstrated by fluorescence in situ hybridization. Mechanisms Gene duplication Gene duplication is the process by which a region of DNA coding for a gene is duplicated. This can occur as the result of an error in recombination or through a retrotransposition event. Duplicate genes are often immune to the selective pressure under which genes normally exist. As a result, a large number of mutations may accumulate in the duplicate gene code. This may render the gene non-functional or in some cases confer some benefit to the organism. Whole genome duplication Similar to gene duplication, whole genome duplication is the process by which an organism's entire genetic information is copied once or multiple times, which is known as polyploidy. This may provide an evolutionary benefit to the organism by supplying it with multiple copies of a gene, thus creating a greater possibility of functional and selectively favored genes. 
However, tests for enhanced rate and innovation in teleost fishes with duplicated genomes, compared with their close relatives the holostean fishes (without duplicated genomes), found that there was little difference between them for the first 150 million years of their evolution. In 1997, Wolfe & Shields gave evidence for an ancient duplication of the Saccharomyces cerevisiae (yeast) genome. It was initially noted that this yeast genome contained many individual gene duplications. Wolfe & Shields hypothesized that this was actually the result of an entire genome duplication in the yeast's distant evolutionary history. They found 32 pairs of homologous chromosomal regions, accounting for over half of the yeast's genome. They also noted that although homologs were present, they were often located on different chromosomes. Based on these observations, they determined that Saccharomyces cerevisiae underwent a whole genome duplication soon after its evolutionary split from Kluyveromyces, a genus of ascomycetous yeasts. Over time, many of the duplicate genes were deleted and rendered non-functional. A number of chromosomal rearrangements broke the original duplicate chromosomes into the current manifestation of homologous chromosomal regions. This idea was further solidified by looking at the genome of yeast's close relative Ashbya gossypii. Whole genome duplication is common in fungi as well as plant species. An example of extreme genome duplication is represented by the Common Cordgrass (Spartina anglica), which is a dodecaploid, meaning that it contains 12 sets of chromosomes, in stark contrast to the human diploid structure in which each individual has only two sets of 23 chromosomes. Transposable elements Transposable elements are regions of DNA that can be inserted into the genetic code through one of two mechanisms. These mechanisms work similarly to "cut-and-paste" and "copy-and-paste" functionalities in word processing programs. The "cut-and-paste" mechanism works by excising DNA from one place in the genome and inserting it into another location in the code. The "copy-and-paste" mechanism works by making a genetic copy or copies of a specific region of DNA and inserting these copies elsewhere in the code. The most common transposable element in the human genome is the Alu sequence, which is present in the genome over one million times. Mutation Spontaneous mutations often occur, causing various changes in the genome. Mutations can either change the identity of one or more nucleotides, or result in the addition or deletion of one or more nucleotide bases. Such changes can lead to a frameshift mutation, causing the entire code to be read in a different order from the original, often resulting in a protein becoming non-functional. A mutation in a promoter region, enhancer region or transcription factor binding region can also result in either a loss of function, or an up- or downregulation in the transcription of the gene targeted by these regulatory elements. Mutations are constantly occurring in an organism's genome and can have a negative, positive or neutral effect (no effect at all). Pseudogenes Often a result of spontaneous mutation, pseudogenes are dysfunctional genes derived from previously functional gene relatives. There are many mechanisms by which a functional gene can become a pseudogene, including the deletion or insertion of one or multiple nucleotides. 
This can result in a shift of the reading frame, causing the gene to no longer code for the expected protein, introduce a premature stop codon, or create a mutation in the promoter region. Often cited examples of pseudogenes within the human genome include the once functional olfactory gene families. Over time, many olfactory genes in the human genome became pseudogenes and were no longer able to produce functional proteins, explaining the poor sense of smell humans possess in comparison to their mammalian relatives. Similarly, bacterial pseudogenes commonly arise from adaptation of free-living bacteria to parasitic lifestyles, so that many metabolic genes become superfluous as these species become adapted to their host. Once a parasite obtains nutrients (such as amino acids or vitamins) from its host, it has no need to produce these nutrients itself and often loses the genes to make them. Exon shuffling Exon shuffling is a mechanism by which new genes are created. This can occur when two or more exons from different genes are combined or when exons are duplicated. Exon shuffling results in new genes by altering the current intron-exon structure. This can occur by any of the following processes: transposon-mediated shuffling, sexual recombination or non-homologous recombination (also called illegitimate recombination). Exon shuffling may introduce new genes into the genome that can be either selected against and deleted or selectively favored and conserved. Genome reduction and gene loss Many species exhibit genome reduction when subsets of their genes are no longer needed. This typically happens when organisms adapt to a parasitic lifestyle, e.g. when their nutrients are supplied by a host. As a consequence, they lose the genes needed to produce these nutrients. In many cases, there are both free-living and parasitic species that can be compared and their lost genes identified. Good examples are the genomes of Mycobacterium tuberculosis and Mycobacterium leprae, the latter of which has a dramatically reduced genome. Another striking example is provided by endosymbiont species. For instance, Polynucleobacter necessarius was first described as a cytoplasmic endosymbiont of the ciliate Euplotes aediculatus. The latter species dies soon after being cured of the endosymbiont. In the few cases in which P. necessarius is not present, a different and rarer bacterium apparently supplies the same function. No attempt to grow symbiotic P. necessarius outside their hosts has yet been successful, strongly suggesting that the relationship is obligate for both partners. Yet, closely related free-living relatives of P. necessarius have been identified. The endosymbionts have a significantly reduced genome when compared to their free-living relatives (1.56 Mbp vs. 2.16 Mbp). Speciation A major question of evolutionary biology is how genomes change to create new species. Speciation requires changes in behavior, morphology, physiology, or metabolism (or combinations thereof). The evolution of genomes during speciation has been studied only very recently with the availability of next-generation sequencing technologies. For instance, cichlid fish in African lakes differ both morphologically and in their behavior. The genomes of 5 species have revealed that both the sequences and the expression patterns of many genes have changed quickly over a relatively short period of time (100,000 to several million years). 
Notably, 20% of duplicate gene pairs have gained a completely new tissue-specific expression pattern, indicating that these genes also obtained new functions. Given that gene expression is driven by short regulatory sequences, this demonstrates that relatively few mutations are required to drive speciation. The cichlid genomes also showed increased evolutionary rates in microRNAs, which are involved in gene expression. Gene expression Mutations can lead to changed gene function or, probably more often, to changed gene expression patterns. In fact, a study on 12 animal species provided strong evidence that tissue-specific gene expression was largely conserved between orthologs in different species. However, paralogs within the same species often have a different expression pattern. That is, after duplication of genes they often change their expression pattern, for instance by getting expressed in another tissue and thereby adopting new roles. Composition of nucleotides (GC content) The genetic code is made up of sequences of four nucleotide bases: adenine, guanine, cytosine and thymine, commonly referred to as A, G, C, and T. The GC-content is the percentage of G & C bases within a genome. GC-content varies greatly between different organisms. Gene coding regions have been shown to have a higher GC-content, and the longer the gene is, the greater the percentage of G and C bases that are present. A higher GC-content confers a benefit because a guanine–cytosine pairing is made up of three hydrogen bonds while an adenine–thymine pairing is made up of only two. The three hydrogen bonds thus give greater stability to the DNA strand, so it is not surprising that important genes often have a higher GC-content than other parts of an organism's genome. For this reason, many species living at very high temperatures, such as in the ecosystems surrounding hydrothermal vents, have a very high GC-content. High GC-content is also seen in regulatory sequences such as promoters, which signal the start of a gene. Many promoters contain CpG islands, areas of the genome where a cytosine nucleotide occurs next to a guanine nucleotide at a greater frequency. It has also been shown that a broad distribution of GC-content between species within a genus indicates a more ancient ancestry: since the species have had more time to evolve, their GC-content has diverged further apart. Evolving translation of genetic code Amino acids are encoded by codons three bases long, and both glycine and alanine are characterized by codons with guanine and cytosine bases at the first two codon positions. This GC pairing gives more stability to the DNA structure. It has been hypothesized that as the first organisms evolved in a high-heat and high-pressure environment, they needed the stability of these GC pairings in their genetic code. De novo origin of genes Novel genes can arise from non-coding DNA. De novo origin of (protein-coding) genes only requires two features, namely the generation of an open reading frame and the creation of a transcription factor binding site. For instance, Levine and colleagues reported the origin of five new genes in the D. melanogaster genome from noncoding DNA. Subsequently, de novo origin of genes has also been shown in other organisms such as yeast, rice and humans. For instance, Wu et al. (2011) reported 60 putative de novo human-specific genes, all of which are short, consisting of a single exon (except one). In bacteria, 'grounded' prophages (i.e. 
integrated phage that cannot produce new phage) are buffer zones which would tolerate variations, thereby increasing the probability of de novo gene formation. These grounded prophages and other such genetic elements are sites where genes could be acquired through horizontal gene transfer (HGT). Origin of life and the first genomes In order to understand how the genome arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of the genome under plausible prebiotic conditions. According to the RNA world hypothesis, free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form the original RNA genome. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for reliable information transfer, and thus Darwinian natural selection and evolution. Nam et al. demonstrated the direct condensation of nucleobases with ribose to give ribonucleosides in aqueous microdroplets, a key step leading to formation of the RNA genome. Also, a plausible prebiotic process for synthesizing pyrimidine and purine ribonucleotides leading to genome formation using wet-dry cycles was presented by Becker et al. See also De novo gene birth Exon shuffling Gene fusion Gene duplication Horizontal gene transfer Mobile genetic elements References Extended evolutionary synthesis Genomics Molecular evolution
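The GC-content section above reduces to simple base counting, so a short sketch may be useful. The function names and the example sequence below are invented for illustration.

```python
def gc_content(seq):
    """Percentage of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return 100 * (seq.count("G") + seq.count("C")) / len(seq)

def find_cpg(seq):
    """0-based positions of CpG dinucleotides (a C directly followed by a G)."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

toy_promoter = "TTACGCGGCGCGTAGCGATCG"
print(gc_content(toy_promoter))  # GC-rich, as the article describes for promoters
print(find_cpg(toy_promoter))    # clustered CpG sites, as in a CpG island
```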
Genome evolution
[ "Chemistry", "Biology" ]
3,858
[ "Evolutionary processes", "Molecular evolution", "Molecular biology" ]
47,882,363
https://en.wikipedia.org/wiki/Universal%20space
In mathematics, a universal space is a certain metric space that contains all metric spaces whose dimension is bounded by some fixed constant. A similar definition exists in topological dynamics. Definition Given a class $\mathcal{C}$ of topological spaces, $\mathbb{U}\in\mathcal{C}$ is universal for $\mathcal{C}$ if each member of $\mathcal{C}$ embeds in $\mathbb{U}$. Menger stated and proved the case $d=1$ of the following theorem. The theorem in full generality was proven by Nöbeling. Theorem: The $(2d+1)$-dimensional cube $[0,1]^{2d+1}$ is universal for the class of compact metric spaces whose Lebesgue covering dimension is less than $d+1$. Nöbeling went further and proved: Theorem: The subspace of $[0,1]^{2d+1}$ consisting of the set of points, at most $d$ of whose coordinates are rational, is universal for the class of separable metric spaces whose Lebesgue covering dimension is less than $d+1$. The last theorem was generalized by Lipscomb to the class of metric spaces of weight $\alpha$, $\alpha>\aleph_0$: There exists a one-dimensional metric space $J_\alpha$ such that the subspace of $J_\alpha^{2d+1}$ consisting of the set of points, at most $d$ of whose coordinates are "rational" (suitably defined), is universal for the class of metric spaces whose Lebesgue covering dimension is less than $d+1$ and whose weight is less than $\alpha$. Universal spaces in topological dynamics Consider the category of topological dynamical systems $(X,T)$ consisting of a compact metric space $X$ and a homeomorphism $T:X\to X$. The topological dynamical system $(X,T)$ is called minimal if it has no proper non-empty closed $T$-invariant subsets. It is called infinite if $|X|=\infty$. A topological dynamical system $(Y,S)$ is called a factor of $(X,T)$ if there exists a continuous surjective mapping $\varphi:X\to Y$ which is equivariant, i.e. $\varphi(Tx)=S\varphi(x)$ for all $x\in X$. Similarly to the definition above, given a class $\mathcal{C}$ of topological dynamical systems, $(\mathbb{U},\mathbb{T})\in\mathcal{C}$ is universal for $\mathcal{C}$ if each member of $\mathcal{C}$ embeds in $(\mathbb{U},\mathbb{T})$ through an equivariant continuous mapping. Lindenstrauss proved the following theorem: Theorem: Let $D\in\mathbb{N}$. The compact metric topological dynamical system $(([0,1]^D)^{\mathbb{Z}},\sigma)$, where $\sigma$ is the shift homeomorphism, is universal for the class of compact metric topological dynamical systems whose mean dimension is strictly less than $D/36$ and which possess an infinite minimal factor. In the same article Lindenstrauss asked what is the largest constant $c$ such that a compact metric topological dynamical system whose mean dimension is strictly less than $cD$ and which possesses an infinite minimal factor embeds into $(([0,1]^D)^{\mathbb{Z}},\sigma)$. The results above imply $c\geq 1/36$. The question was answered by Lindenstrauss and Tsukamoto, who showed that $c\leq 1/2$, and Gutman and Tsukamoto, who showed that $c\geq 1/2$. Thus the answer is $c=1/2$. See also Universal property Urysohn universal space Mean dimension References Mathematical terminology Topology Dimension theory Topological dynamics
Universal space
[ "Physics", "Mathematics" ]
504
[ "Topology", "Space", "nan", "Geometry", "Spacetime", "Topological dynamics", "Dynamical systems" ]
47,885,527
https://en.wikipedia.org/wiki/HD%20137388
HD 137388 is an orange-hued star in the southern constellation of Apus. It has the proper name Karaka, after the native New Zealand karaka tree. The name was assigned by representatives of New Zealand in the IAU's NameExoWorlds contest. The star is too faint to be visible to the naked eye, having an apparent visual magnitude of 8.70. It is located at a distance of 132 light years from the Sun based on parallax. The star is drifting further away with a radial velocity of +26 km/s, having come as close as some 1.2 million years ago. It has an absolute magnitude of 5.75. The stellar classification of HD 137388 is K2IV, matching that of an evolving subgiant star. However, in 2011 Dumusque and colleagues found a class of K0/K1V, suggesting it is instead a K-type main-sequence star. It is around three billion years old and is spinning with a projected rotational velocity of 2.2 km/s. The star shows a magnetic activity cycle, similar to the solar cycle. It has 93% of the mass of the Sun and 86% of the Sun's radius. Based on the abundance of iron in the spectrum, it is a high metallicity star with a greater abundance of heavy elements compared to the Sun. The star is radiating 53% of the luminosity of the Sun from its photosphere at an effective temperature of 5,297 K. Planetary system Radial velocity studies indicate that it has a planet, originally named HD 137388 b (mass 0.223 MJ, period 330d). It orbits at a typical distance of 0.89 AU with an eccentricity of 0.36, completely overlapping the star's habitable zone. The planet was officially designated Kererū, the Māori name of the New Zealand pigeon, by the IAU in the same contest that named its parent star. References K-type subgiants K-type main-sequence stars Planetary systems with one confirmed planet Apus J15353994-8012164 Durchmusterung objects 137388 076351 Karaka Giant planets in the habitable zone
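The quoted luminosity can be sanity-checked against the quoted radius and effective temperature through the Stefan–Boltzmann relation L/L⊙ = (R/R⊙)²·(Teff/Teff,⊙)⁴; the nominal solar effective temperature of 5772 K used below is an assumed value, not taken from the article.

```python
# Consistency check: L/Lsun = (R/Rsun)^2 * (Teff/Teff_sun)^4
r = 0.86           # stellar radius in solar units (from the article)
teff = 5297.0      # effective temperature in K (from the article)
teff_sun = 5772.0  # nominal solar effective temperature (assumed)

luminosity = r**2 * (teff / teff_sun)**4
print(f"L = {luminosity:.2f} Lsun")  # ~0.52, consistent with the quoted 53%
```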
HD 137388
[ "Astronomy" ]
463
[ "Apus", "Constellations" ]
47,886,556
https://en.wikipedia.org/wiki/Sarcodon%20pakaraimensis
Sarcodon pakaraimensis is a species of tooth fungus in the family Bankeraceae. Found in Guyana, where it grows in mixed Pakaraimaea–Dicymbe forest, it was described as new to science in 2015. It is differentiated from other Sarcodon species by its smooth to pitted, pinkish-gray cap that stains black, its hollow stipe, and the pink staining reaction of injured flesh. Its spores measure 5–7 μm long by 5–9 μm wide. Fresh spores produce a dark reddish-brown spore print, which tends to lighten to yellowish brown as it dries. Molecular analysis of DNA sequences shows the fungus to be closely related to S. umbilicatus. The specific epithet pakaraimensis refers to the Pakaraima Mountains—the type locality. References External links Fungi described in 2015 Fungi of Guyana pakaraimensis Fungus species
Sarcodon pakaraimensis
[ "Biology" ]
186
[ "Fungi", "Fungus species" ]
47,887,859
https://en.wikipedia.org/wiki/Cycle%20of%20quantification/qualification
Cycle of quantification/qualification (Cq) is a parameter used in real-time polymerase chain reaction techniques, indicating the cycle number where a PCR amplification curve meets a predefined mathematical criterion. A Cq may be used for quantification of the target sequence or to determine whether the target sequence is present or not. Two criteria to determine the Cq are used by different thermocyclers: The threshold cycle (Ct) is the number of cycles required for the fluorescent signal to cross a given threshold value. Usually, the threshold is set above the baseline, at about 10 times the standard deviation of the noise of the baseline, to avoid random effects on the Ct. However, the threshold shouldn't be set much higher than that, to avoid reduced reproducibility due to uncontrolled factors. The crossing point (Cp) and take-off point (TOP) are the cycle value at the maximum of the second derivative of the amplification curve. References Molecular biology Laboratory techniques Polymerase chain reaction Real-time technology
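The threshold-cycle criterion lends itself to a small worked example: find the first cycle at which the fluorescence curve crosses the threshold, and interpolate between the bracketing cycles to obtain a fractional Ct. The function name, the toy sigmoidal curve and the threshold value below are all invented for illustration.

```python
import numpy as np

def threshold_cycle(fluorescence, threshold):
    """Fractional threshold cycle (Ct) of an amplification curve.

    fluorescence: per-cycle readings (index 0 corresponds to cycle 1).
    Returns the cycle at which the signal first crosses the threshold,
    linearly interpolated between the two bracketing cycles.
    """
    f = np.asarray(fluorescence, dtype=float)
    above = np.nonzero(f >= threshold)[0]
    if len(above) == 0:
        return None                      # target not detected
    i = above[0]
    if i == 0:
        return 1.0                       # already above threshold at cycle 1
    frac = (threshold - f[i - 1]) / (f[i] - f[i - 1])
    return i + frac                      # cycle i is index i-1, so Ct = i + frac

# Toy sigmoidal curve: flat baseline, exponential growth, then plateau.
cycles = np.arange(1, 41)
curve = 1 / (1 + np.exp(-(cycles - 24) / 1.5))
print(threshold_cycle(curve, threshold=0.1))  # ~20.7 for this toy curve
```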
Cycle of quantification/qualification
[ "Chemistry", "Technology", "Biology" ]
214
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction", "nan", "Molecular biology", "Biochemistry" ]
47,888,776
https://en.wikipedia.org/wiki/Advanced%20Simulation%20Library
Advanced Simulation Library (ASL) is a free and open-source hardware-accelerated multiphysics simulation platform. It enables users to write customized numerical solvers in C++ and deploy them on a variety of massively parallel architectures, ranging from inexpensive FPGAs, DSPs and GPUs up to heterogeneous clusters and supercomputers. Its internal computational engine is written in OpenCL and utilizes matrix-free solution techniques. ASL implements a variety of modern numerical methods, including the level-set method, the lattice Boltzmann method and the immersed boundary method. The mesh-free, immersed boundary approach allows users to move from CAD directly to simulation, reducing pre-processing efforts and the number of potential errors. ASL can be used to model various coupled physical and chemical phenomena, especially in the field of computational fluid dynamics. It is distributed under the free GNU Affero General Public License with an optional commercial license (which is based on the permissive MIT License). History Advanced Simulation Library is being developed by Avtech Scientific, an Israeli company. Its source code was released to the community on 14 May 2015, whose members packaged it for scientific sections of all major Linux distributions shortly thereafter. Subsequently, Khronos Group acknowledged the significance of ASL and listed it on its website among OpenCL-based resources. Application areas Computational fluid dynamics Computer-assisted surgery Virtual sensing Industrial process data validation and reconciliation Multidisciplinary design optimization Design space exploration Computer-aided engineering Crystallography Microfluidics Advantages and disadvantages Advantages C++ API (no OpenCL knowledge required) Mesh-free, immersed boundary approach allows users to move from CAD directly to computations, reducing pre-processing effort Dynamic compilation enables an additional layer of optimization at run-time (i.e. for the specific parameter set the application was provided with) Automatic hardware acceleration and parallelization of applications Deployment of the same program on a variety of parallel architectures - GPU, APU, FPGA, DSP, multicore CPUs Ability to deal with complex boundaries Ability to incorporate microscopic interactions Availability of the source code Disadvantages Absence of detailed documentation (besides the Developer Guide generated from the source code comments) Not all OpenCL drivers are mature enough for the library Features ASL provides a range of features to solve a number of problems - from complex fluid flows involving chemical reactions, turbulence and heat transfer, to solid mechanics and elasticity. Interfacing: VTK/ParaView, MATLAB (export). 
import file formats: .stl .vtp .vtk .vti .mnc .dcm export file formats: .vti .mat Geometry: flexible and complex geometry using simple rectangular grid mesh-free, immersed boundary approach generation and manipulation of geometric primitives Implemented phenomena: Transport processes multicomponent transport processes compressible and incompressible fluid flow Chemical reactions electrode reactions Elasticity homogeneous isotropic elasticity homogeneous isotropic poroelasticity Interface tracking evolution of an interface evolution of an interface with crystallographic kinetics Uses ACTIVE - Active Constraints Technologies for Ill-defined or Volatile Environments (European FP7 Project) References Computational chemistry software Computer-aided engineering software for Linux Software using the GNU Affero General Public License Computational fluid dynamics Free science software Open Source computer aided engineering applications Articles containing video clips Libraries C++ numerical libraries Free software programmed in C++
Advanced Simulation Library
[ "Physics", "Chemistry" ]
688
[ "Computational chemistry software", "Chemistry software", "Computational fluid dynamics", "Computational physics", "Computational chemistry", "Fluid dynamics" ]
47,889,207
https://en.wikipedia.org/wiki/Helium-3%20surface%20spin%20echo
Helium-3 surface spin echo (HeSE) is an inelastic scattering technique in surface science that has been used to measure microscopic dynamics at well-defined surfaces in ultra-high vacuum. The information available from HeSE complements and extends that available from other inelastic scattering techniques such as neutron spin echo and traditional helium-4 atom scattering (HAS). Principles The experimental principles of the HeSE experiment are analogous to those of neutron spin echo, differing in details such as the nature of the probe/sample interactions that give rise to scattering. In outline, a polarized 3He beam is created by a supersonic expansion followed by a spin-filtering stage (polariser). The helium scatters from the experimental sample and is detected at the end of the beamline after another spin-filtering stage (analyser). Before and after the scattering process, the beam passes through magnetic fields that precess the probe spins in the usual sense of a spin echo experiment. The raw data of the experiment are the spin-resolved scattered helium intensities as a function of the incoming magnetic field integral, outgoing field integral and any other variable parameters relevant to specific experiments, such as surface orientation and temperature. In the most general kind of scattering-with-precession experiment, the data can be used to construct the 2D 'wavelength intensity matrix' for the surface scattering process, i.e. the probability that a helium atom of a certain incoming wavelength scatters into a state with a certain outgoing wavelength. Conventional 'spin echo' measurements are a common special case of the more general scattering-with-precession measurements, in which the incoming and outgoing magnetic field integrals are constrained to be equal. The polarization of the outgoing beam is measured as a function of the precession field integral by measuring the intensity of the outgoing beam resolved into different spin states. The spin echo case is referred to as a type of 'tilted projection measurement'. Spin echo measurements are an appropriate tilted projection for quasi-elastic measurements of surface dynamics because the raw data are closely related to the intermediate scattering function (ISF), which in many cases can be interpreted in terms of standard dynamical signatures. Applications The surface processes that HeSE can measure can be broadly divided into elastic, quasielastic and inelastic processes. Measurements in which the predominant signal is elastically scattered include standard helium diffraction and the measurement of selective adsorption resonances. Quasielastic measurements generally correspond to measurements of microscopic surface diffusion in which the Doppler-like energy gain and loss of the helium atoms is small compared to the beam energy. More strongly inelastic measurements can provide information about energy loss channels on the surface such as surface phonons. Microscopic diffusion HeSE has been used to study the diffusion rates and mechanisms of atoms and molecules ('adsorbates') at surfaces. A non-exhaustive list of the research themes associated with HeSE diffusion measurements include: nuclear quantum effects in the surface diffusion of atomic hydrogen; benchmarking the adsorbate/surface free energy landscape; energy exchange ('friction') between adsorbates and the surface; pairwise and many-body inter-adsorbate interactions. 
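In the quasi-elastic regime described above, the measured polarization decays with spin echo time essentially as the intermediate scattering function, and adsorbate jump diffusion is commonly interpreted with the Chudley–Elliott model. The Python sketch below is illustrative only — the lattice constant, residence time and spin echo time are made-up values, and a 1D nearest-neighbour jump model is assumed for simplicity:

```python
import numpy as np

# Made-up parameters for nearest-neighbour hopping on a 1D lattice.
a = 2.55e-10     # lattice constant (m)
tau = 1.0e-11    # mean residence time between jumps (s)

dK = np.linspace(0.0, 2 * np.pi / a, 200)   # momentum transfer (1/m)

# Chudley-Elliott dephasing rate for jumps of +/- a with equal probability:
# alpha(dK) = (1/tau) * sum_j p_j * (1 - cos(dK . l_j)) = (2/tau) * sin^2(dK*a/2)
alpha = (2.0 / tau) * np.sin(dK * a / 2.0) ** 2

# The quasi-elastic ISF then decays exponentially in spin echo time t:
# I(dK, t) / I(dK, 0) = exp(-alpha(dK) * t)
t = 1.0e-12      # an example spin echo time (s)
isf = np.exp(-alpha * t)

print(f"max dephasing rate {alpha.max():.2e} 1/s; "
      f"ISF at t = {t:.0e} s spans {isf.min():.2f}-{isf.max():.2f}")
```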
Selective adsorption resonances HeSE has been used to construct empirical helium-surface scattering potentials through the measurement of selective adsorption resonances (bound state resonances) on the clean LiF(001) surface and the hydrogenated Si(111) surface. References Scientific techniques Surface science
Helium-3 surface spin echo
[ "Physics", "Chemistry", "Materials_science" ]
719
[ "Condensed matter physics", "Surface science" ]
47,889,743
https://en.wikipedia.org/wiki/Conductive%20elastomer
A conductive elastomer is a form of elastomer, often natural rubber or another rubber substitute, that is manufactured to conduct electricity. This is commonly accomplished by distributing carbon or other conductive particles throughout the raw material prior to setting it. Carbon black and silica are common additives used to induce conductivity in elastomers. Silica has been studied more than other additives due to its low cost; however, its conductivity is also lower. These additives can not only enable conductivity but can also improve the mechanical properties of the elastomer. Conductive elastomers are often pressure-sensitive, with their conductivity varying with the amount of pressure put on them, and can be used to make pressure sensors. Other uses of conductive elastomers include conductive flexible seals and gaskets, and conductive mats used to prevent electrostatic damage to electronic devices. These elastomers also have uses in the energy industry, where they could be used to make flexible solar cells or stretchable devices for converting mechanical energy to electrical energy. Making solar cells and various sensors able to stretch and bend would allow them to be incorporated into wearable electronics. Recently, there has also been a focus on the preparation of elastomers that do not lose conductivity upon stretching. A novel approach for the design of an elastomer that actually increases in conductivity with strain has recently been published. See also Metal rubber Magnetorheological elastomer Elastomeric connector References Elastomers Composite materials
Conductive elastomer
[ "Physics", "Chemistry" ]
316
[ "Synthetic materials", "Composite materials", "Elastomers", "Materials", "Matter" ]
47,894,032
https://en.wikipedia.org/wiki/Blasting%20mat
A blasting mat is a mat usually made of sliced-up rubber tires bound together with ropes, cables or chains. They are used during rock blasting to contain the blast, prevent flying rocks and suppress dust. Use Blasting mats are used when explosives are detonated in places such as quarries or construction sites. The mats are placed over the blasting area to contain the blast, suppress noise and dust as well as prevent high velocity rock fragments called fly rock (or flyrock) from damaging structures, people or the environment in proximity to the blast site. The amount of fly rock can be reduced by proper drilling in the bedrock for the explosives, but in practice it is hard to avoid. Mats can be used singly or in layers depending on the size of the blast charge, the type of mat and the amount of protection needed. They can be used horizontally on the ground or vertically hanging from cranes or attached to structures. In the vertical capacity the mats are sometimes referred to as blasting curtains. When used in blasting tunnels the mats can be placed in patterns designed to let the mats stabilize each other and to direct the discharge from the explosion out of the tunnel. To prevent mats from being displaced by the explosion, they can be covered with layers of soil or anchored to the ground. Anchoring the mats is also essential when the blasting is done on an incline where the mats may slide down from the rock face. Blasting mats are often used in combination with blasting blankets as an additional layer over the mats. The blankets are larger than the mats and are designed to retain the fragments that have managed to pass through the mat. Blasting blankets are used for both horizontal and vertical blasting. Blasting blankets consist of a strong, fine-mesh net or industrial felt from paper mills. Both mats and blankets are designed to let air and gasses from the explosion pass through the cover and retain fragments. Knowledge of the proper use of blasting mats is required in order to obtain a blaster's certificate issued by organizations such as WorkSafeBC. Blasting mats made from used tires can serve a double purpose as road stabilizers, or road surface, in locations where roads leading to the blasting site are unstable or nonexistent, or in areas where the surface needs to be protected from heavy machinery. Materials A number of materials are used for making blasting mats and new materials with different properties are constantly being introduced into this field. The most common materials are strips of old tires held together by steel cables, mats woven from manila rope or wire cables, logs or conveyor belts. Layers of wire netting can also be used. Several methods of assembling a blasting mat are patented. Blasting mats made from rope woven on wires were first used during the construction of the IRT Third Avenue Line in New York City in the early 1900s. They were used to protect the surrounding buildings and were favored since they prevented fly rock but vented gasses. Mats made from recycled tires can be fashioned in a number of ways depending on how the tires are cut. Some examples are tread mats, sidewall mats and mats from non-flattened sections of tires. Manufacturing The manufacturing of blasting mats is integrated with the mining industry and many companies specialize in making mats, especially those made from recycled tires.
Military use When charges are used to dig foxholes, an improvised blasting mat made from whole tires tied together with rope is recommended in the A Soldier's Handbook (United States) to reduce noise and fly rock. A tarp may also be used as a blasting blanket. Accidents Over the years, a number of incidents with fatal outcomes have been caused by fly rock. In most of these, blasting mats were not used or they were placed over the blasting face in an incorrect manner. Such an incident occurred in August 2015 in Cape Ray, Newfoundland and Labrador, when a fly rock travelled about from the blast site and crashed through the kitchen ceiling of a nearby house. Although designed to prevent accidents, as blasting mats weigh between , they have also caused injuries when falling on workers on construction sites. Blasting mats must be thoroughly inspected by a blaster before each use to ensure there are no blown-out segments or broken cables. Blasting mats will deteriorate with each use to the point where they become ineffective for their intended purpose. Only trained, experienced and adequately supervised crews should be used in the placement of these devices over a loaded shot. A common complaint is accidental breakage of bus wires or leg wires, or pinching off of the non-electric tubes, which may result in the misfire of the shot. References Explosives engineering Explosion protection Military engineering Mining engineering Mining equipment Mine safety Vehicle recycling Improvisation Recycling by product
Blasting mat
[ "Chemistry", "Engineering" ]
924
[ "Explosion protection", "Mining engineering", "Mining equipment", "Explosives engineering", "Combustion engineering", "Construction", "Military engineering", "Explosions" ]
43,269,014
https://en.wikipedia.org/wiki/G%C3%B6del%20logic
In mathematical logic, a Gödel logic, sometimes referred to as Dummett logic or Gödel–Dummett logic, is a member of a family of finite- or infinite-valued logics in which the sets of truth values V are closed subsets of the unit interval [0,1] containing both 0 and 1. Different such sets V in general determine different Gödel logics. The concept is named after Kurt Gödel. In 1959, Michael Dummett showed that infinite-valued propositional Gödel logic can be axiomatised by adding the axiom schema (A → B) ∨ (B → A), the linearity axiom, to intuitionistic propositional logic. See also Intermediate logic References Set theory Mathematical logic Formal methods
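For orientation, the truth functions standardly paired with such a truth-value set V are the Gödel (minimum t-norm) semantics; this is supplied from the general fuzzy-logic literature rather than recovered from the text above:

```latex
\[
v(A \wedge B) = \min(v(A), v(B)), \qquad
v(A \vee B) = \max(v(A), v(B)),
\]
\[
v(A \to B) =
\begin{cases}
1 & \text{if } v(A) \le v(B),\\
v(B) & \text{otherwise,}
\end{cases}
\qquad
v(\neg A) = v(A \to \bot) =
\begin{cases}
1 & \text{if } v(A) = 0,\\
0 & \text{otherwise.}
\end{cases}
\]
```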
Gödel logic
[ "Mathematics", "Engineering" ]
140
[ "Set theory", "Mathematical logic", "Software engineering", "Mathematical logic stubs", "Formal methods" ]
43,270,387
https://en.wikipedia.org/wiki/Formvar
Formvar refers to any of several thermoplastic resins that are polyvinyl formals, which are polymers formed from polyvinyl alcohol and formaldehyde as copolymers with polyvinyl acetate. They are typically used as coatings, adhesives, and molding materials. "Formvar" used to be the registered trade name of the polyvinyl formal resin produced by Monsanto Chemical Company in St. Louis, Missouri. That manufacturing unit was sold and formvar is now distributed under the name "Vinylec". Applications Formvar is used in many different applications, such as wire insulation, coatings for musical instruments, magnetic tape backing, and support films for electron microscopy. Formvar is also used as a main ingredient for special adhesives in structural applications such as the aircraft industry. Magnet wire The major application of formvar resins is as electrical insulation for magnet wire. It is combined with other "wire enamels" which are then coated onto copper wire and cured in an oven to create a crosslinked film coating. Transmission electron microscopy Most specimens used in transmission electron microscopy (TEM) need to be supported by a thin electron-transparent film to hold the sample in place. Formvar films are a common choice of film grid for TEM. Formvar is favored because it allows users to utilize grids with lower mesh rating. Physical characteristics Formvar resin has a high softening point and strong electric insulation properties. It is also very flexible, water-insoluble, and resistant to abrasion. Formvar is also halogen free. Formvar resins are combustible and can cause dust explosions. For this reason exposure to heat, sparks, and flame should be avoided. Formvar is most commonly dissolved in ethylene dichloride, chloroform, and dioxane. References External links Industrial Glues Vinyl polymers Dielectrics Adhesives
Formvar
[ "Physics" ]
397
[ "Materials", "Dielectrics", "Matter" ]
36,160,226
https://en.wikipedia.org/wiki/Octant%20%28solid%20geometry%29
An octant in solid geometry is one of the eight divisions of a Euclidean three-dimensional coordinate system defined by the signs of the coordinates. It is analogous to the two-dimensional quadrant and the one-dimensional ray. The generalization of an octant is called orthant or hyperoctant. Naming and numbering A convention for naming an octant is to give its list of signs, e.g. (+,−,−) or (−,+,−). Octant (+,+,+) is sometimes referred to as the first octant, although similar ordinal name descriptors are not defined for the other seven octants. The advantages of using the (±,±,±) notation are its unambiguousness, and extensibility for higher dimensions. The following table shows the sign tuples together with likely ways to enumerate them. A binary enumeration with − as 1 can be easily generalized across dimensions. A binary enumeration with + as 1 defines the same order as balanced ternary. The Roman enumeration of the quadrants is in Gray code order, so the corresponding Gray code is also shown for the octants. Verbal descriptions are ambiguous, because they depend on the representation of the coordinate system. In the two depicted representations of a right-hand coordinate system, the first octant could be called right-back-top or right-top-front respectively. See also Orthant Octant (plane geometry) Octree Spherical octant, the intersection of an octant of space and a sphere Trirectangular tetrahedron References Euclidean solid geometry
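The binary and Gray-code enumerations described above are straightforward to generate programmatically. The Python sketch below is illustrative (not taken from any source); it indexes the eight sign tuples by treating − as binary 1 and derives the standard reflected Gray code as n XOR (n >> 1):

```python
# Enumerate the eight octants by sign tuple. Treating '-' as binary 1
# (one of the schemes described above), (+,+,+) gets index 0 and
# (-,-,-) gets index 7; the same scheme generalizes to orthants in
# any number of dimensions.
for n in range(8):
    bits = [(n >> k) & 1 for k in (2, 1, 0)]   # sign bits for x, y, z
    signs = ','.join('-' if b else '+' for b in bits)
    gray = n ^ (n >> 1)                        # reflected Gray code index
    print(f"({signs}): binary {n:03b}, Gray {gray:03b}")
```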
Octant (solid geometry)
[ "Physics" ]
338
[ "Spacetime", "Space", "Euclidean solid geometry" ]
36,160,752
https://en.wikipedia.org/wiki/Morley%20centers
In plane geometry, the Morley centers are two special points associated with a triangle. Both of them are triangle centers. One of them, called the first Morley center (or simply, the Morley center), is designated as X(356) in Clark Kimberling's Encyclopedia of Triangle Centers, while the other point, called the second Morley center (or the 1st Morley–Taylor–Marr center), is designated as X(357). The two points are also related to Morley's trisector theorem, which was discovered by Frank Morley around 1899. Definitions Let DEF be the triangle formed by the intersections of the adjacent angle trisectors of triangle ABC. DEF is called the Morley triangle of ABC. Morley's trisector theorem states that the Morley triangle of any triangle is always an equilateral triangle. First Morley center Let DEF be the Morley triangle of ABC. The centroid of DEF is called the first Morley center of ABC. Second Morley center Let DEF be the Morley triangle of ABC. Then the lines AD, BE and CF, joining the vertices of ABC to the corresponding vertices of the Morley triangle, are concurrent. The point of concurrence is called the second Morley center of triangle ABC. Trilinear coordinates First Morley center The trilinear coordinates of the first Morley center of triangle ABC are cos(A/3) + 2 cos(B/3) cos(C/3) : cos(B/3) + 2 cos(C/3) cos(A/3) : cos(C/3) + 2 cos(A/3) cos(B/3). Second Morley center The trilinear coordinates of the second Morley center are sec(A/3) : sec(B/3) : sec(C/3). References Triangle centers
Morley centers
[ "Physics", "Mathematics" ]
253
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
35,098,864
https://en.wikipedia.org/wiki/Narasimhan%E2%80%93Seshadri%20theorem
In mathematics, the Narasimhan–Seshadri theorem, proved by M. S. Narasimhan and C. S. Seshadri in 1965, says that a holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible projective unitary representation of the fundamental group. The main case to understand is that of topologically trivial bundles, i.e. those of degree zero (and the other cases are a minor technical extension of this case). This case of the Narasimhan–Seshadri theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it comes from an irreducible unitary representation of the fundamental group of the Riemann surface. Donaldson (1983) gave another proof using differential geometry, and showed that the stable vector bundles have an essentially unique unitary connection of constant (scalar) curvature. In the degree zero case, Donaldson's version of the theorem says that a degree zero holomorphic vector bundle over a Riemann surface is stable if and only if it admits a flat unitary connection compatible with its holomorphic structure. Then the fundamental group representation appearing in the original statement is just the monodromy representation of this flat unitary connection. See also Nonabelian Hodge correspondence Kobayashi–Hitchin correspondence Stable vector bundle References Riemann surfaces Theorems in analysis
Narasimhan–Seshadri theorem
[ "Mathematics" ]
270
[ "Mathematical analysis", "Theorems in mathematical analysis", "Mathematical theorems", "Mathematical problems" ]
35,099,255
https://en.wikipedia.org/wiki/Fenchel%E2%80%93Moreau%20theorem
In convex analysis, the Fenchel–Moreau theorem (named after Werner Fenchel and Jean Jacques Moreau) or Fenchel biconjugation theorem (or just biconjugation theorem) is a theorem which gives necessary and sufficient conditions for a function to be equal to its biconjugate. This is in contrast to the general property that f** ≤ f holds for any function f. This can be seen as a generalization of the bipolar theorem. It is used in duality theory to prove strong duality (via the perturbation function). Statement Let X be a Hausdorff locally convex space. For any extended real-valued function f : X → [−∞, +∞] it follows that f = f** if and only if one of the following is true: f is a proper, lower semi-continuous, and convex function; f ≡ +∞; or f ≡ −∞. References Convex analysis Theorems in analysis Theorems involving convexity
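As a quick illustration of the theorem — a standard textbook example rather than part of the article itself — the absolute-value function on the real line is proper, convex and lower semi-continuous, and biconjugation recovers it exactly:

```latex
\[
f(x) = |x|, \qquad
f^*(y) = \sup_{x \in \mathbb{R}} \{xy - |x|\} =
\begin{cases}
0 & \text{if } |y| \le 1,\\
+\infty & \text{otherwise,}
\end{cases}
\]
\[
f^{**}(x) = \sup_{y \in \mathbb{R}} \{xy - f^*(y)\}
          = \sup_{|y| \le 1} xy = |x| = f(x).
\]
```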
Fenchel–Moreau theorem
[ "Mathematics" ]
169
[ "Mathematical analysis", "Theorems in mathematical analysis", "Mathematical theorems", "Mathematical problems" ]
35,100,531
https://en.wikipedia.org/wiki/Virtual%20Cell
Virtual Cell (VCell) is an open-source software platform for modeling and simulation of living organisms, primarily cells. It has been designed to be a tool for a wide range of scientists, from experimental cell biologists to theoretical biophysicists. Concept Virtual Cell is an advanced software platform for modeling and simulating reaction kinetics, membrane transport and diffusion in the complex geometries of cells and multicellular tissues. VCell models have a hierarchical tree structure. The trunk level is the "Physiology", consisting of compartments, species and chemical reactions, and reaction rates that are functions of concentrations. Given initial concentrations of species, VCell can calculate how these concentrations change over time. How these numerical simulations are performed is determined through a number of "Applications", which specify whether simulations will be deterministic or stochastic, and spatial or compartmental; multiple "Applications" can also specify initial concentrations, diffusion coefficients, flow rates and a variety of modeling assumptions. Thus "Applications" can be viewed as computational experiments to test ideas about the physiological system. Each "Application" corresponds to a mathematical description, which is automatically translated into the VCell Math Description Language. Multiple "Simulations", including parameter scans and changes in solver specifications, can be run within each "Application". Models can range from the simple to the highly complex, and can represent a mixture of experimental data and purely theoretical assumptions. The Virtual Cell can be used as a distributed application over the Internet or as a standalone application. The graphical user interface allows construction of complex models in biologically relevant terms: compartment dimensions and shape, molecular characteristics, and interaction parameters. VCell converts the biological description into an equivalent mathematical system of differential equations. Users can switch back-and-forth between the schematic biological view and the mathematical view in the common graphical interface. Indeed, if users desire, they can manipulate the mathematical description directly, bypassing the schematic view. VCell allows users a choice of numerical solvers to translate the mathematical description into software code which is executed to perform the simulations. The results can be displayed on-line, or they can be downloaded to the user's computer in a wide variety of export formats. The Virtual Cell license allows free access to all members of the scientific community. Users may save their models in the VCell DataBase, which is maintained on servers at U. Connecticut. The VCell Database uses an access control system with permissions to allow users to keep their models private, share them with select collaborators or make them public. The VCell website maintains a searchable list of models that are public and associated with research publications. Features VCell supports the following features: Within the "Physiology", models can be specified as reaction networks or reaction rules. Simulations can be chosen to either resolve variations of concentrations over space (spatial simulations) or assume concentrations constant across compartments (compartmental simulations). For spatial simulations, geometries can be specified by analytic geometry equations, derived from a combination of simple shapes or derived from imported images, such as 3D confocal microscope stacks.
Utilities for 3D segmentation of image data into regions such as the nucleus, mitochondria, cytosol and extracellular space are provided. Simulations can be based on either integration of differential equations without use of random numbers (deterministic simulations) or be based on random events (stochastic simulations). Simulations can be run using a variety of solvers including: 6 ordinary differential equation (ODE) solvers, 2 partial differential equation (PDE) solvers, 4 non-spatial stochastic solvers and Smoldyn for stochastic spatial simulations. VCell also offers a hybrid deterministic/stochastic spatial solver for situations where some species are present in low copy number and others are present in high copy number. Most recently, a network-free solver, NFSim, was made available for stochastic simulation of large combinatorially complex rule-based models. Most solvers can be run locally; all solvers can be run remotely on VCell servers. For compartmental deterministic models, the best parameter values to fit experimental data can be estimated using algorithms developed by the COPASI software system. These tools are available in VCell. Models and simulation setups (so-called Applications) can be stored in local files as Virtual Cell Markup Language (VCML) or stored remotely in the VCell database. Models can be imported and exported as Systems Biology Markup Language (SBML). Biological pathways can be imported as Biological Pathway Exchange (BioPAX) to build and annotate models. Biological and related data sources VCell allows users integrated access to a variety of sources to help build and annotate models: Models stored in the VCell database can be made accessible by their authors to some users (shared) or all users (public). VCell can import models from the BioModels Database. Biological pathways can be imported from Pathway Commons. Model elements can be annotated with IDs from PubMed, UniProt (proteins), KEGG (reactions and species), GeneOntology (reactions and species), Reactome (reactions and species) and ChEBI (mostly small molecules). Development The Virtual Cell is being developed at the R. D Berlin Center for Cell Analysis and Modeling at the University of Connecticut Health Center. The team is primarily funded through research grants through the National Institutes of Health. See also List of systems biology modeling software References External links VCell home page VCell Download VCell user forum VCell Models in publications Tutorial about using VCell on YouTube. Source code on GitHub Mathematical and theoretical biology Systems biology Numerical software Numerical differential equations Free science software
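The deterministic, compartmental case — translating a reaction scheme into ordinary differential equations for species concentrations and integrating them — can be mimicked in a few lines of standard Python. The sketch below is not VCell code and uses none of VCell's API; the reversible reaction and its rate constants are made-up:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical reversible reaction A <-> B with mass-action kinetics,
# the kind of "Physiology" a compartmental deterministic Application
# would translate into ODEs.
kf, kr = 2.0, 0.5          # forward/reverse rate constants (1/s), made-up

def rhs(t, y):
    a, b = y
    rate = kf * a - kr * b  # net forward flux
    return [-rate, rate]

sol = solve_ivp(rhs, t_span=(0.0, 5.0), y0=[1.0, 0.0])

# At equilibrium, B/A should approach kf/kr.
a_end, b_end = sol.y[:, -1]
print(f"A = {a_end:.4f}, B = {b_end:.4f}, "
      f"B/A = {b_end / a_end:.2f} (kf/kr = {kf / kr:.2f})")
```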
Virtual Cell
[ "Mathematics", "Biology" ]
1,162
[ "Mathematical and theoretical biology", "Applied mathematics", "Mathematical software", "Numerical software", "Systems biology" ]
35,102,687
https://en.wikipedia.org/wiki/Metasymplectic%20space
In mathematics, a metasymplectic space, introduced by and , is a Tits building of type F4 (a specific generalized incidence structure). The four types of vertices are called points, lines, planes, and symplecta. References Incidence geometry
Metasymplectic space
[ "Mathematics" ]
53
[ "Incidence geometry", "Combinatorics" ]
35,109,206
https://en.wikipedia.org/wiki/Plastoglobulin
Plastoglobulins are a family of proteins prominently found in lipid globules in the plastids of flowering plants. The family shows sequence similarities to the PAP/fibrillin family. PGL and similar proteins can be found in most algae, cyanobacteria and plants, but in no other life forms, which suggests a role for PGL in oxygenic photosynthesis. Research A group from the University of Maryland conducted research in which they inactivated the genes (pgl1 and pgl2) encoding plastoglobulin-like proteins in Synechocystis. The results show a decrease in chlorophyll content, as well as a lower photosystem I (PSI)/PSII ratio in the mutants. However, the concentrations of carotenoid and myxoxanthophyll per chlorophyll increased. These mutants showed no observable change in growth rates under low light, but grew more slowly under moderate light. The concentration of the two PGL proteins (PGL1 and PGL2) in the wild type increased under more intense light; this led to the conclusion that PGL plays a role in the repair of photo-oxidative damage. Structure and function The sequence of Plastoglobulin-1 has been elucidated in Pisum sativum (garden pea); it was found to be synthesised as a 358-residue pro-peptide containing a 47-residue transit peptide for localisation to the chloroplast; the transit peptide is cleaved to produce the mature plastoglobulin. Plastoglobulins are known to interact with each other to form a coat on lipid globules that either recruits or maintains receptors for attachment to the thylakoid membrane, or for transport of lipids across the thylakoid membrane. References Plant proteins
Plastoglobulin
[ "Chemistry" ]
397
[ "Biochemistry stubs", "Protein stubs" ]
35,109,416
https://en.wikipedia.org/wiki/Eady%20model
The Eady model is an atmospheric model for baroclinic instability first posed by British meteorologist Eric Eady in 1949 based on his PhD work at Imperial College London. Assumptions The Eady model makes several assumptions about the state of the atmosphere. First, the model assumes the atmosphere is composed of fluid obeying quasi-geostrophic motion. Second, the model assumes a constant Coriolis parameter. The model also assumes a constant static stability parameter and that fluctuations in the density of the air are small (obeys the Boussinesq approximation). Structurally, the model is bounded by two flat layers or "rigid lids": one layer representing the Earth's surface and the other the tropopause at fixed height H. To simplify numerical solutions, the model also assumes rigid walls meridionally at y = -L and y = L. Lastly, the model assumes that there is constant shear in the zonal component of the mean state wind; the mean state zonal wind varies linearly with altitude. Equations Starting with the quasi-geostrophic equations, applying the Eady model assumptions, and linearizing yields the linearized differential equations governing the time evolution of the state of the atmosphere in the Eady model: q = ∂²ψ/∂x² + ∂²ψ/∂y² + (f₀²/N²) ∂²ψ/∂z², (∂/∂t + Λz ∂/∂x) q = 0 for 0 < z < H, (∂/∂t) ∂ψ/∂z − Λ ∂ψ/∂x = 0, (∂/∂t + ΛH ∂/∂x) ∂ψ/∂z − Λ ∂ψ/∂x = 0, where ψ denotes the streamfunction (which can be used to derive all other variables from quasi-geostrophic theory, e.g. the meridional wind v = ∂ψ/∂x and the temperature T, which is proportional to ∂ψ/∂z), z denotes altitude, y denotes latitude, q denotes the quasi-geostrophic potential vorticity, ū = Λz denotes the mean zonal wind, f₀ denotes the Coriolis parameter, taken as a constant, N denotes the static stability parameter, Λ denotes the zonal wind shear, and H denotes the tropopause height. The third equation is valid at z = 0 and the fourth is valid at z = H. Results The Eady model yields stable and unstable modes. Unstable modes have height, vorticity, vertical velocity, and several other atmospheric parameters with contours that tilt westward with height, though temperature contours tilt eastward with height for unstable modes. A poleward heat flux is observed in unstable modes, yielding the positive feedback necessary for cyclogenesis. Low pressure, high vorticity regions are then "stretched", and high pressure and low vorticity regions are "squashed", yielding higher and lower vorticity, respectively. In contrast, the opposite is observed in decaying modes: height, vorticity, etc. contours tilt eastward with height, except temperature which tilts westward with height. An equatorward heat flux is induced, decreasing potential vorticity and pressure anomalies and yielding cyclolysis. Making Fourier decompositions on the linearized Eady model equations and solving for the dispersion relation for the Eady model system allows one to solve for the growth rate of the modes (the imaginary component of the frequency). This yields a growth rate that increases with increasing wavenumber for small wavenumbers, reaching a maximum growth rate at roughly κ ≈ 1.6/L_R, where κ is the wavenumber and L_R = NH/f₀ is the Rossby radius of deformation. As wavenumber increases from here, growth rate decreases, reaching zero growth rate around κ ≈ 2.4/L_R. Beyond here, modes will not grow under the Eady model, so too large of wavenumbers (too small of scales) do not yield unstable modes in the Eady model. See also Quasi-geostrophic equations Charney Model Cyclogenesis References Holton, James R. Introduction To Dynamic Meteorology 4th Ed. Chapter 8 Eady, E.T. (1949), Long Waves and Cyclone Waves. Tellus, 1: 33-52. https://doi.org/10.1111/j.2153-3490.1949.tb01265.x Chapter 6.
The Eady problem Atmospheric dynamics Fluid dynamic instabilities
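The Eady dispersion relation yields a closed-form growth rate that is easy to evaluate numerically. The sketch below is illustrative: the shear, Coriolis, stability and height values are made-up mid-latitude numbers, and the growth-rate expression is the classic textbook Eady result rather than anything quoted from the article above:

```python
import numpy as np

# Illustrative (made-up) mid-latitude parameters.
f0 = 1.0e-4        # Coriolis parameter (1/s)
N = 1.0e-2         # buoyancy (static stability) frequency (1/s)
H = 1.0e4          # tropopause height (m)
shear = 3.0e-3     # zonal wind shear Lambda (1/s)

# Non-dimensional wavenumber mu = k * N * H / f0 (meridional wavenumber l = 0).
mu = np.linspace(0.01, 3.0, 300)
x = mu / 2.0

# Textbook Eady growth rate sigma = (f0*Lambda/N) * sqrt((coth x - x)(x - tanh x)),
# real (unstable) only where coth x > x, i.e. mu below about 2.4.
inside = (1.0 / np.tanh(x) - x) * (x - np.tanh(x))
sigma = (f0 * shear / N) * np.sqrt(np.clip(inside, 0.0, None))

i = np.argmax(sigma)
k = mu[i] * f0 / (N * H)   # dimensional wavenumber of fastest growth
print(f"fastest growth at mu = {mu[i]:.2f} "
      f"(wavelength {2 * np.pi / k / 1e3:.0f} km), "
      f"e-folding time {1 / sigma[i] / 3600:.1f} h")
```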
Eady model
[ "Chemistry" ]
798
[ "Atmospheric dynamics", "Fluid dynamic instabilities", "Fluid dynamics" ]
35,112,066
https://en.wikipedia.org/wiki/Equivariant%20index%20theorem
In differential geometry, the equivariant index theorem, of which there are several variants, computes the (graded) trace of an element of a compact Lie group acting in a given setting in terms of an integral over the fixed points of the element. If the element is neutral, then the theorem reduces to the usual index theorem. Classical formulas such as the Atiyah–Bott formula are special cases of the theorem. Statement Let E → M be a Clifford module bundle. Assume a compact Lie group G acts on both E and M so that the projection E → M is equivariant. Let E be given a connection that is compatible with the action of G. Finally, let D be a Dirac operator on E associated to the given data. In particular, D commutes with G and thus the kernel of D is a finite-dimensional representation of G. The equivariant index of E is a virtual character given by taking the supertrace: ind(g, D) = Str(g|ker D) = Tr(g|ker D⁺) − Tr(g|ker D⁻). See also Equivariant topological K-theory Kawasaki's Riemann–Roch formula References Berline, Nicole; Getzler, E.; Vergne, Michèle (2004), Heat Kernels and Dirac Operators, Berlin, New York: Springer-Verlag Differential geometry
Equivariant index theorem
[ "Mathematics" ]
257
[ "Theorems in differential geometry", "Theorems in geometry" ]
28,162,705
https://en.wikipedia.org/wiki/Acorn%20tube
An acorn tube, or acorn valve, refers to any member of a family of VHF/UHF vacuum tubes introduced just before World War II. They were named for their resemblance to an acorn, specifically due to the glass cap at one end of the tube that looks similar to the cap of an acorn. Acorn tubes found widespread use in radios and radar systems. High-frequency performance is limited by (1) parasitic lead inductance and capacitance and skin effect, and (2) electron transit time (the time required to travel from cathode to anode). Transit time effects are complicated, but one simple effect is the phase margin; another one is input conductance, also known as grid loading. At extremely high frequencies, electrons arriving at the grid may become out of phase with those departing towards the anode. This imbalance of charge causes the grid to exhibit a reactance that is much less than its low-frequency "open circuit" characteristic. Acorn tubes, as well as lighthouse tubes and Nuvistors, attempt to minimize this effect by spacing the cathode, grid(s) and anode as closely together as possible. The original range included about half a dozen tubes, designed to work in the VHF range. The 955 is a triode. The 954 and 956 types are sharp and remote cut-off pentodes, respectively, all with indirect 6.3 V, 150 mA heaters. Types 957, 958 and 959 are for portable equipment and have 1.25 V NiCd battery heaters. The 957 is a medium-μ signal triode, the 958 is a transmitting triode with dual, paralleled filaments for increased emission, and the 959 is a sharp cut-off pentode like the 954. The 957 and 959 draw 50 mA heater current, the 958 twice as much. In 1942, the 958A with tightened emission specifications was introduced after it turned out that 958s with excessively high emission kept working after the filament power was turned off, the filament still being sufficiently heated by the anode current alone. After the introduction of the miniature 7-pin base, the 954, 955 and 956 were made available with this base as 9001, 9002 and 9003. Other acorn tubes include: American: EIA: 5731 RETMA: 6F4, 6L4 European: British: A40, A41, AP4, AT4, HA1, ZA1, ZA2 Continental: 4671, 4672, 4674, 4675, 4676, 4695 Mullard-Philips: D1C, D2C, D1F, D2F, D3F, D11F, D12F, E1C, E1F, E2F Russian: 6С1Ж, 6Ж1Ж, 6К1Ж Japanese: UN954, UN955 Larger, higher-power types such as the 316A, 368A, 388A, and 703A triodes and the 713A and 717A pentodes were referred to as Doorknob tubes. The introduction of the EF50 was the first serious competition for the acorn design, and replaced the acorns in many roles, especially post-war when millions of surplus EF50s were dumped on the market. See also 955 acorn triode References Vacuum tubes Acorns
Acorn tube
[ "Physics" ]
731
[ "Vacuum tubes", "Vacuum", "Matter" ]
28,167,194
https://en.wikipedia.org/wiki/Value-driven%20maintenance
Value-driven maintenance (VDM) is a maintenance management methodology. History VDM was developed by Mark Haarman and Guy Delahay, both former chairmen of the Dutch Maintenance Association (NVDO), in their book entitled Value Driven Maintenance, New Faith in Maintenance, published by Mainnovation in 2004. Value drivers in maintenance In financial literature, value (net present value) is defined as the sum of all free future cash flows, discounted to the present date. A cash flow is the difference between income and expenditure. It is not the difference between turnover and costs, because this is easy to manipulate through accounting. Some companies may use creative lease, depreciation and reservation techniques to keep book profits artificially high or low; this does not always contribute to shareholder value. Recent stock market scandals are an example of what may happen as a result of this. The second part of the definition concerns the knowledge that the value of a cash flow is time-related, given the term "present value". Future cash flows must be corrected or discounted to today. Managing by value necessitates maximizing future cash flows. Managing by value obliges companies to search for new free cash flows. It is no longer enough for a company to go on doing what it is already doing. Once the concept of value is understood, this concept can be translated into maintenance. Within VDM, there are four axes along which maintenance can contribute to value creation within a company. The axes are also called the four value drivers. Asset utilization The first value driver, asset utilization, seeks to increase the technical availability of technical equipment. With higher technical availability, it is possible to produce and sell more products with the same invested capital, generating more income while the fixed costs remain the same. In other words, the free cash flows increase, which automatically means value creation. Maintenance can increase technical availability by preventing unwanted breakdowns, scheduling plant maintenance in a smarter way and performing repairs and inspections faster. A point to note is that higher technical availability produces value not only in growth markets. In markets where demand is stable or declining, greater availability can also create value. By making a plant more efficient, the number of shifts can be reduced or it may even be possible to close down sister plants. At corporate level, this does not generate more turnover, but it does significantly reduce costs, which is another way of creating value. Safety, health, and environment An increasingly important value driver for maintenance is safety, health, and environment (SHE) in VDM terminology. Compliance with legal directives covering SHE creates value in two ways. Firstly, it avoids the imposition of government penalties for breaches of legislation. Secondly, a good SHE policy has a positive effect on retention of the License to Operate. This is something else that has value, because it increases the likelihood of future cash flows actually materializing. Without a License to Operate, there will be no future cash flows and thus no value. A problem here is that this reasoning does not take the other error type into account (a lower false-safety-trip rate may come with a higher accident rate). Putting a value on human life is very controversial, and the VDM theory is therefore, like ALARP logic, dangerous to apply and may not be accepted by legislation worldwide (especially in the US).
The importance of the SHE value driver becomes apparent when looking at the recent incident with the BP Oil Spill in the Gulf of Mexico; the Deepwater Horizon oil spill. Poor maintenance is believed to be the cause of one of the biggest oil spills in history, causing massive damage to the environment. Total accumulated consequence costs (both clean-up costs and loss of company value) are estimated at 12 billion. Cost control Even though maintenance is not a cost center in itself, it does consume significant quantities of money. The maintenance budget consists mainly of wage and training costs of technicians, managers and indirect personnel, the costs of spare parts and tools and the costs of contracted personnel and outsourced work. Savings on the maintenance budget automatically generate free cash flows in the future and, by consequence, value. The savings are achievable by having a smarter Preventive maintenance program, higher technician productivity, lower procurement prices for materials and services and the right ratio of the number of technicians, managers and indirect personnel. Controlling maintenance costs is called "Cost Control" in VDM. Resource allocation Finally, maintenance can create value through the smarter management of resources. This is called "Resource Allocation" in VDM. It is not about the consumption of resources, because that is already covered in the Cost Control value driver. Within VDM, a distinction is made between four types of resources: technicians, spare parts, contractors and knowledge. One needs only to think of the cash flows freed up as a result of smarter management, or of savings on warehouses, logistical employees, insurance and the avoidance of obsolete and surplus spare parts. In practice, it often turns out that the value potential of smart management of spare parts by far exceeds the value potential of the other resources. A natural tension exists between the four value drivers. The theory states that a maintenance manager should be continually searching for the right balance between these value drivers. It is important to know how they compare. One can determine this instinctively but many corporate managers require a financial validation because, for example, it will be a decisive factor in investment decisions or because stakeholders (e.g. financial manager, production manager) are not prepared to accept a judgement based on instinct. This is a situation where one should use the VDM formula. Formula The VDM formula is derived from the net present value formula and can be used to calculate the value of maintenance. The VDM formula is: PVmaintenance = Σt { FSHE,t × (CFAU,t + CFCC,t + CFRA,t + CFSHE,t) / (1+r)^t } where: PVmaintenance = present value potential of maintenance FSHE,t = SHE factor in year t CFAU,t = future free cash flow in year t from Asset Utilization CFCC,t = future free cash flow in year t from Cost Control CFRA,t = future free cash flow in year t from Resource Allocation CFSHE,t = future free cash flow in year t from Safety, Health & Environment r = Discount rate SHE stands out prominently in the formula. This factor shows how great the probability is that the License to Operate will be retained in the coming years and that the expected cash flows from all four value drivers will actually be attainable in the future. Consequently, the SHE factor is a probability factor with a value between 0 and 1.
A SHE factor of 0 means 0% probability of retention of the License to Operate, for example in a fictitious case where a company decides to stop performing all maintenance for cost reasons. The free cash flow that this creates on Cost Control will be enormous. But because the company fails to satisfy SHE laws and thus loses its License to Operate, this cash flow will not create any value. The VDM model can be simplified in certain situations. Assume that we work with a perpetuity. This means there will indefinitely be a free cash flow that is the same year after year and that the SHE factor is constant; the VDM formula thus becomes: PVmaintenance = FSHE × (CFAU + CFCC + CFRA + CFSHE) / r See also Condition Based Maintenance Corrective maintenance Planned maintenance Reliability Centered Maintenance References Maintenance
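The VDM present-value formula is straightforward to evaluate. The Python sketch below is illustrative only — all cash-flow figures, the SHE factor and the discount rate are invented — and checks the general discounted sum against the perpetuity simplification:

```python
def vdm_present_value(cash_flows, she_factors, r):
    """General VDM formula: the sum of SHE-weighted free cash flows,
    discounted to the present. cash_flows[t] is a tuple
    (CF_AU, CF_CC, CF_RA, CF_SHE) for year t+1."""
    return sum(
        she * sum(cf) / (1.0 + r) ** (t + 1)
        for t, (she, cf) in enumerate(zip(she_factors, cash_flows))
    )

# Made-up example: constant yearly cash flows and a constant SHE factor.
yearly_cf = (120.0, 80.0, 40.0, 10.0)   # AU, CC, RA, SHE value drivers
she = 0.95
r = 0.08

# A long truncated sum approaches the perpetuity formula FSHE * CF / r.
pv_general = vdm_present_value([yearly_cf] * 500, [she] * 500, r)
pv_perpetuity = she * sum(yearly_cf) / r
print(f"general: {pv_general:.1f}, perpetuity: {pv_perpetuity:.1f}")
```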
Value-driven maintenance
[ "Engineering" ]
1,538
[ "Maintenance", "Mechanical engineering" ]
28,168,060
https://en.wikipedia.org/wiki/Cargo%20control%20room
The cargo control room, CCR, or cargo office of a tankship is where the person in charge (PIC) can monitor and control the loading and unloading of the ship's liquid cargo. Prevalent on automated vessels, the CCR may be in its own room, or located on the ship's bridge. Among other things, the equipment in the CCR may allow the person in charge to control cargo and stripping pumps, control and monitor valve positions, and monitor cargo tank liquid levels. History Cargo control rooms began to appear on U.S.-flag tankers in the mid-1960s. Prior to this time, valves were operated manually on deck by reach rods and liquid levels were monitored by a roving watch consisting of the mate and seamen on watch. The use of computers in the cargo control room began in the 1980s. As technology developed, computerized systems began to centralize tasks such as cargo control per se, tank level monitoring, and real-time computation of hull stress information in the cargo control room. Characteristics The design and layout of an individual cargo control room is determined by the ship's design, owner's requirements and the capabilities of the shipyard in which the ship is built. Modern cargo control rooms offer some or all of these components: main cargo pump and stripping pump control, valve control, tank level monitoring, and auxiliary functions. Main cargo pumps and stripping pumps are used to discharge cargo from the ship. From the cargo control room, the person in charge of the discharge can typically turn pumps on and off, set pump speeds, and monitor pipeline pressures on the suction- and discharge-sides of pumps. By actuating cargo valves, the person in charge can control where cargo is pumped from, where it is pumped to, and in systems that use throttle valves, can control the relative flow rates of cargo through the valves. Modern cargo control rooms allow the person in charge to remotely control some or all of the valves in the cargo system and monitor the state of all valves. Valve indicators are typically laid out on a "mimic panel" which displays the cargo system piping, valves and pumps in a schematic diagram. Tank level monitoring is another key functionality often provided in modern cargo control rooms. One aspect of tank level monitoring is overfill alarms, which sound throughout the ship when cargo levels exceed the ship's design specifications. Many systems allow the person in charge to monitor tank levels at all times. Tank level monitoring allows the person in charge to take early action to avoid oil spills, especially when loading the ship. Tank level information is often sent to computers that calculate hull stresses such as shear forces and bending moments. Various other functions are available in some cargo control rooms. Many offer the person in charge additional monitoring and control systems, the ability to monitor inert gas systems, and tank pressures. Modern cargo control rooms typically allow the person in charge to control ballast pumps and valves, and monitor oil content of ballast water by the use of oily water separators. In cases where ships carry specialty products, specialized monitoring systems are available in the cargo control room. See also Oil tanker Chemical tanker Notes References External links In Alaska: An Oil Tanker Sails NK Rules For Centralized Cargo Monitoring and Control Systems Petroleum technology Tankers
Cargo control room
[ "Chemistry", "Engineering" ]
662
[ "Petroleum engineering", "Petroleum technology" ]
29,545,659
https://en.wikipedia.org/wiki/Bell%E2%80%93Evans%E2%80%93Polanyi%20principle
In physical chemistry, the Evans–Polanyi principle (also referred to as the Bell–Evans–Polanyi principle, Brønsted–Evans–Polanyi principle, or Evans–Polanyi–Semenov principle) observes that the difference in activation energy between two reactions of the same family is proportional to the difference of their enthalpy of reaction. This relationship can be expressed as Ea = E0 + α ΔH, where E0 is the activation energy of a reference reaction of the same class, ΔH is the enthalpy of reaction, and α characterizes the position of the transition state along the reaction coordinate (such that 0 ≤ α ≤ 1). The Evans–Polanyi model is a linear energy relationship that serves as an efficient way to calculate activation energy of many reactions within a distinct family. The activation energy may be used to characterize the kinetic rate parameter of a given reaction through application of the Arrhenius equation. The Evans–Polanyi model assumes that the pre-exponential factor of the Arrhenius equation and the position of the transition state along the reaction coordinate are the same for all reactions belonging to a particular reaction family. See also Hammond's postulate Free-energy relationship Brønsted catalysis equation Catalytic resonance theory References Notes Enthalpy Thermochemistry
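Combining the Evans–Polanyi relation with the Arrhenius equation gives a rate constant for each member of a reaction family from a single calibration. In the Python sketch below, the reference activation energy, the transition-state position α, the pre-exponential factor and the enthalpies are all made-up illustrative values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def evans_polanyi_rate(dH, E0=100e3, alpha=0.4, A=1e13, T=298.15):
    """Activation energy from the Evans-Polanyi relation Ea = E0 + alpha*dH,
    then the Arrhenius rate constant k = A * exp(-Ea / (R*T)).
    Energies in J/mol; all parameter values here are made-up."""
    Ea = E0 + alpha * dH
    return A * math.exp(-Ea / (R * T))

# Within a family, more exothermic reactions (dH < 0) are faster.
for dH in (-40e3, 0.0, 40e3):
    print(f"dH = {dH / 1e3:+.0f} kJ/mol -> k = {evans_polanyi_rate(dH):.3e} 1/s")
```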
Bell–Evans–Polanyi principle
[ "Physics", "Chemistry", "Mathematics" ]
258
[ "Thermodynamic properties", "Thermochemistry", "Physical quantities", "Quantity", "Enthalpy" ]
29,548,278
https://en.wikipedia.org/wiki/Minimum%20design%20metal%20temperature
MDMT (minimum design metal temperature) is one of the design conditions for pressure vessel engineering calculations, design and manufacturing according to the ASME Boiler and Pressure Vessel Code. Each pressure vessel that conforms to the ASME code has its own MDMT, and this temperature is stamped on the vessel nameplate. The precise definition can sometimes be a little elaborate, but in simple terms the MDMT is a temperature arbitrarily selected by the user according to the type of fluid and the temperature range the vessel is going to handle. The so-called arbitrary MDMT must be lower than or equal to the CET (which is an environmental or "process" property, see below) and must be higher than or equal to the (MDMT)M (which is a material property). Critical exposure temperature (CET) is the lowest anticipated temperature to which the vessel will be subjected, taking into consideration lowest operating temperature, operational upsets, autorefrigeration, atmospheric temperature, and any other sources of cooling. In some cases it may be the lowest temperature at which significant stresses will occur and not the lowest possible temperature. (MDMT)M is the lowest temperature permitted according to the metallurgy of the vessel fabrication materials and the thickness of the vessel component, that is, according to the low temperature embrittlement range and the Charpy impact test requirements per temperature and thickness, for each one of the vessel's components. References ASME, Boiler and Pressure Vessel Code Dennis R. Moss, Pressure Vessel Design Manual, 1997 (2nd ed.) Pressure vessels Threshold temperatures
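The two inequalities governing the selection, (MDMT)M ≤ MDMT ≤ CET, can be encoded as a simple design check. The sketch below is illustrative only — the temperatures are invented, and a real verification would follow the detailed ASME rules rather than this bare comparison:

```python
def mdmt_is_acceptable(mdmt_c, cet_c, mdmt_material_c):
    """Check the two constraints described above: the selected MDMT must not
    exceed the CET and must not fall below the material-limited (MDMT)M.
    All temperatures in degrees C; values passed in are hypothetical."""
    return mdmt_material_c <= mdmt_c <= cet_c

# Made-up example values.
print(mdmt_is_acceptable(mdmt_c=-20.0, cet_c=-15.0, mdmt_material_c=-29.0))  # True
print(mdmt_is_acceptable(mdmt_c=-10.0, cet_c=-15.0, mdmt_material_c=-29.0))  # False: above CET
```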
Minimum design metal temperature
[ "Physics", "Chemistry", "Engineering" ]
316
[ "Structural engineering", "Physical phenomena", "Phase transitions", "Chemical equipment", "Threshold temperatures", "Physical systems", "Hydraulics", "Mechanical engineering", "Mechanical engineering stubs", "Pressure vessels" ]
45,004,055
https://en.wikipedia.org/wiki/Empirical%20valence%20bond
In theoretical chemistry, the Empirical Valence Bond (EVB) approach is an approximation for calculating free energies of a chemical reaction in the condensed phase. It was first developed by Israeli chemist Arieh Warshel, and was inspired by the way Marcus theory uses potential surfaces to calculate the probability of electron transfer. Where most methods for reaction free-energy calculations require at least some part of the modeled system to be treated using quantum mechanics, EVB uses a calibrated Hamiltonian to approximate the potential energy surface of a reaction. For a simple 1-step reaction, that typically means that a reaction is modeled using 2 states. These states are valence bond descriptions of the reactants and products of the reaction. The function that gives the ground-state energy then becomes Eg = ½(ε1 + ε2) − ½√((ε1 − ε2)² + 4H12²), where ε1 and ε2 are the energies of the valence bond descriptions of the reactant and product state respectively, and H12 is the coupling parameter. The ε1 and ε2 potentials are usually modeled using force field descriptions. H12 is a bit trickier, as it needs to be parameterized using a reference reaction. This reference reaction can be experimental, typically from a reaction in water or other solvents. Alternatively, quantum chemical calculations can be used for calibration. Free energy calculations To obtain free-energies from the created ground state energy potential one needs to perform sampling. This can be done using sampling methods like molecular dynamics or Monte Carlo simulations at different states along the reaction coordinates. Typically this is done using a free energy perturbation / umbrella sampling approach. Application EVB has been successfully applied to calculating reaction free energies of enzymes. More recently it has been looked at as a tool to study enzyme evolution and to assist in enzyme design. Software Molaris Q RAPTOR (Rapid Approach for Proton Transport and Other Reactions) See also Electron equivalent Gregory A. Voth References Physical chemistry Reaction mechanisms
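A toy version of the two-state construction can be written in a few lines. The Python sketch below is illustrative rather than a calibrated EVB model: the harmonic diabatic potentials and the constant coupling are made-up, whereas real applications use force-field diabats and a fitted, often geometry-dependent, H12:

```python
import numpy as np

# Reaction coordinate and two made-up harmonic diabatic states
# (reactant and product valence bond energies, arbitrary units).
x = np.linspace(-2.0, 2.0, 400)
eps1 = 0.5 * (x + 1.0) ** 2          # reactant state
eps2 = 0.5 * (x - 1.0) ** 2 + 0.2    # product state, slightly endothermic
H12 = 0.15                            # constant coupling (made-up value)

# EVB ground-state surface from the 2x2 secular equation:
# Eg = (eps1 + eps2)/2 - sqrt(((eps1 - eps2)/2)**2 + H12**2)
Eg = 0.5 * (eps1 + eps2) - np.sqrt(0.25 * (eps1 - eps2) ** 2 + H12 ** 2)

# The coupled barrier sits below the diabatic crossing point by ~H12.
crossing = np.min(np.maximum(eps1, eps2))       # lowest point of upper envelope
between_wells = (x > -0.8) & (x < 0.8)
barrier = Eg[between_wells].max()
print(f"diabatic crossing: {crossing:.3f}, EVB barrier: {barrier:.3f}")
```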
Empirical valence bond
[ "Physics", "Chemistry" ]
364
[ "Reaction mechanisms", "Applied and interdisciplinary physics", "Physical organic chemistry", "nan", "Chemical kinetics", "Physical chemistry", "Physical chemistry stubs" ]
45,004,618
https://en.wikipedia.org/wiki/Hydrodenitrogenation
Hydrodenitrogenation (HDN) is an industrial process for the removal of nitrogen from petroleum. Organonitrogen compounds, even though they occur at low levels, are undesirable because they cause poisoning of downstream catalysts. Furthermore, upon combustion, organonitrogen compounds generate NOx, a pollutant. HDN is effected as part of general hydroprocessing, which traditionally focuses on hydrodesulfurization (HDS) because sulfur compounds are even more problematic. To some extent, hydrodeoxygenation (HDO) is also effected. Typical organonitrogen compounds in petroleum include quinolines and porphyrins and their derivatives. The total nitrogen content is typically less than 1% and the targeted levels are in the ppm range. As described in organic geochemistry, organonitrogen compounds are derivatives or degradation products of the compounds in the living matter that comprised the precursor to fossil fuels. In HDN, the organonitrogen compounds are treated at high temperatures with hydrogen in the presence of a catalyst, the net transformation being: R3N + 3 H2 → 3 RH + NH3 The catalysts generally consist of cobalt and nickel as well as molybdenum disulfide or less often tungsten disulfide supported on alumina. The precise composition of the catalyst, i.e. the Co/Ni and Mo/W ratios, is tuned for particular feedstocks. A wide variety of catalyst compositions have been considered, including metal phosphides. References Oil refining Chemical processes Natural gas technology
Hydrodenitrogenation
[ "Chemistry" ]
327
[ "Petroleum technology", "Chemical processes", "Natural gas technology", "Oil refining", "nan", "Chemical process engineering" ]