Dataset schema (per record): id (int64, range 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, range 3 to 32.2k); subcategories (list, length 0 to 27)
45,449,629
https://en.wikipedia.org/wiki/Annual%20Review%20of%20Fluid%20Mechanics
Annual Review of Fluid Mechanics is a peer-reviewed scientific journal covering research on fluid mechanics. It is published once a year by Annual Reviews and the editors are Parviz Moin and Howard Stone. As of 2023, Annual Review of Fluid Mechanics is being published as open access, under the Subscribe to Open model. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 25.4, ranking it first out of 40 journals in "Physics, Fluids and Plasmas" and first out of 170 journals in the category "Mechanics".

History
The Annual Review of Fluid Mechanics was first published in 1969 by the nonprofit publisher Annual Reviews. Its inaugural editor was William R. Sears. Taking after the Annual Review of Biochemistry, each volume typically begins with a prefatory chapter in which a notable scientist in the field reflects on their career and accomplishments. As of 2020, it was published both in print and electronically. Some of its articles are available online in advance of the volume's publication date. It defines its scope as covering significant developments in the field of fluid mechanics, including its history and foundations, non-Newtonian fluids, rheology, incompressible and compressible flow, plasma flow, flow stability, multiphase flow, mixing and transport of heat, control of fluid flow, combustion, turbulence, shock waves, and explosions. It is abstracted and indexed in Scopus, Science Citation Index Expanded, PASCAL, Inspec, GEOBASE, and Academic Search, among others.

Editorial processes
The Annual Review of Fluid Mechanics is helmed by the editor or the co-editors. The editor is assisted by the editorial committee, which includes associate editors, regular members, and occasionally guest editors. Guest members participate at the invitation of the editor, and serve terms of one year. All other members of the editorial committee are appointed by the Annual Reviews board of directors and serve five-year terms. The editorial committee determines which topics should be included in each volume and solicits reviews from qualified authors. Unsolicited manuscripts are not accepted. Peer review of accepted manuscripts is undertaken by the editorial committee.

Editors of volumes
Dates indicate publication years in which someone was credited as a lead editor or co-editor of a journal volume. The planning process for a volume begins well before the volume appears, so appointment to the position of lead editor generally occurred prior to the first year shown here. An editor who has retired or died may be credited as a lead editor of a volume that they helped to plan, even if it is published after their retirement or death.
William R. Sears (1969)
Milton Van Dyke, Walter G. Vincenti, and John V. Wehausen (1970–1976)
Van Dyke, Wehausen, and John L. Lumley (1977–1986)
Van Dyke, Lumley, and Helen L. Reed (1987–2000)
Lumley, Reed, and Stephen H. Davis (2001)
Lumley, Davis, and Parviz Moin (2002)
Davis and Moin (2003–2021)
Moin and Howard A. Stone (2021–2025)
Stone and Jonathan B. Freund (2025–present)

Current editorial committee
As of 2022, the editorial committee consists of the co-editors and the following members: Jonathan B. Freund, Dennice F. Gayme, Anne Juel, Daniel Livescu, Beverley J. McKeon, Geoff Vallis, and Roberto Zenit.

See also
List of fluid mechanics journals

References

Academic journals established in 1969 English-language journals Fluid dynamics journals Annual journals Fluid Mechanics
Annual Review of Fluid Mechanics
[ "Chemistry" ]
729
[ "Fluid dynamics journals", "Fluid dynamics" ]
45,449,941
https://en.wikipedia.org/wiki/G.%20Marius%20Clore
G. Marius Clore MAE, FRSC, FMedSci, FRS is a British-born, Anglo-American molecular biophysicist and structural biologist. He was born in London, U.K., and is a dual U.S./U.K. citizen. He is a Member of the National Academy of Sciences, a Fellow of the Royal Society, a Fellow of the Academy of Medical Sciences, a Fellow of the American Academy of Arts and Sciences, an NIH Distinguished Investigator, and the Chief of the Molecular and Structural Biophysics Section in the Laboratory of Chemical Physics of the National Institute of Diabetes and Digestive and Kidney Diseases at the U.S. National Institutes of Health. He is known for his foundational work in three-dimensional protein and nucleic acid structure determination by biomolecular NMR spectroscopy, for advancing experimental approaches to the study of large macromolecules and their complexes by NMR, and for developing NMR-based methods to study rare conformational states in protein-nucleic acid and protein-protein recognition. Clore's discovery of previously undetectable, functionally significant, rare transient states of macromolecules has yielded fundamental new insights into the mechanisms of important biological processes, in particular the significance of weak interactions and the mechanisms whereby the opposing constraints of speed and specificity are optimized. Further, Clore's work opens up a new era of pharmacology and drug design, as it is now possible to target structures and conformations that have heretofore been unseen.

Biography
Clore received his undergraduate degree with first class honours in biochemistry from University College London in 1976 and his medical degree from UCL Medical School in 1979. After completing house physician and house surgeon appointments at University College Hospital and St Charles' Hospital (part of the St. Mary's Hospital group), respectively, he was a member of the scientific staff of the Medical Research Council National Institute for Medical Research from 1980 to 1984. He received his PhD in physical biochemistry from the National Institute for Medical Research in 1982. He was awarded a joint Lister Institute Research Fellowship from the Lister Institute of Preventive Medicine, which he held from 1982 to 1984 at the Medical Research Council. In 1984 he joined the Max Planck Institute for Biochemistry in Martinsried, Germany, where he headed the Biological NMR department from 1984 to 1988. In 1988, Clore was recruited to the National Institutes of Health (NIH) Laboratory of Chemical Physics (National Institute of Diabetes and Digestive and Kidney Diseases) in Bethesda, Maryland, U.S., where he interacted closely in the late 1980s and early 1990s with NIH colleagues Ad Bax, Angela Gronenborn and Dennis Torchia on the development of multidimensional heteronuclear NMR spectroscopy and a structural biology effort aimed at proteins involved in the pathogenesis of HIV/AIDS. He has remained at the NIH ever since and is currently an NIH Distinguished Investigator and Chief of the Section on Molecular and Structural Biophysics at the NIH. He is an elected Member of the United States National Academy of Sciences, a Fellow of the Royal Society, a Fellow of the Academy of Medical Sciences, a Fellow of the American Academy of Arts and Sciences, and a Foreign Member of the Academia Europaea (Biochemistry and Molecular Biology Section).
Clore's citation upon election to the Royal Society reads:

Research

3D structure determination in solution by NMR
Clore played a pivotal role in the development of three- and four-dimensional NMR spectroscopy, the use of residual dipolar couplings for structure determination, the development of simulated annealing and restrained molecular dynamics for three-dimensional protein and nucleic acid structure determination, the solution NMR structure determination of large protein complexes, the development of the combined use of NMR and small-angle X-ray scattering in solution structure determination, and the analysis and characterization of protein dynamics by NMR. Clore's work on complexes of all the cytoplasmic components of the bacterial phosphotransferase system (PTS) led to significant insights into how signal transduction proteins recognize multiple, structurally dissimilar partners by generating similar binding surfaces from completely different structural elements and exploiting side chain conformational plasticity. Clore is also one of the main authors of the very widely used XPLOR-NIH NMR structure determination program.

Detection and visualization of excited and sparsely-populated states
Clore's recent work has focused on developing new NMR methods (such as paramagnetic relaxation enhancement, dark state exchange saturation transfer spectroscopy and lifetime line broadening) to detect, characterize and visualize the structure and dynamics of sparsely-populated states of macromolecules, which are important in macromolecular interactions but invisible to conventional structural and biophysical techniques. Examples include the direct demonstration of rotation-coupled sliding and intermolecular translocation as mechanisms whereby sequence-specific DNA binding proteins locate their target site(s) within an overwhelming sea of non-specific DNA sequences; the detection, visualization and characterization of encounter complexes in protein-protein association; the analysis of the synergistic effects of conformational selection and induced fit in protein-ligand interactions; and the uncovering of "dark", spectroscopically invisible states in interactions of NMR-visible proteins and polypeptides (including intrinsically disordered states) with very large megadalton macromolecular assemblies. The latter includes an atomic-resolution view of the dynamics of the amyloid-β aggregation process, and the demonstration of intrinsic unfoldase/foldase activity of the macromolecular machine GroEL. These various techniques have also been used to uncover the kinetic pathway of pre-nucleation transient oligomerization events and associated structures involving the protein encoded by huntingtin exon-1, which may provide a potential avenue for therapeutic intervention in Huntington's disease, a fatal autosomal dominant, neurodegenerative condition.

Scientific impact
Clore is one of the most highly cited scientists in the fields of molecular biophysics, structural biology, biomolecular NMR and chemistry, with over 550 published scientific articles and an h-index (number of papers cited h or more times) of 144. Clore is also one of only five NIH scientists to have been elected to both the United States National Academy of Sciences and the Royal Society, the other four being Julius Axelrod, Francis Collins, Harold Varmus and Ad Bax.

Personal life
Marius Clore was educated at the Lycée Français Charles de Gaulle in Kensington, London, University College London and UCL Medical School.
Marius Clore's father was the film producer Leon Clore, whose credits include The French Lieutenant's Woman.

Awards and honors
2024: Elected Fellow of the UK Academy of Medical Sciences
2021: Murray Goodman Memorial Prize
2021: Honorary Doctorate of Science (DSc) from University College London
2021: Royal Society of Chemistry Khorana Prize
2020: Elected Fellow of the Royal Society
2020: Biophysical Society Innovation Award
2015: Elected Foreign Member of the Academia Europaea
2014: Elected Member of the United States National Academy of Sciences (Biophysics and Computational Biology section)
2012: Biochemical Society 2013 Centenary Award (previously known as the Jubilee Medal) and Sir Frederick Gowland Hopkins Memorial Lecture (U.K.)
2011: Royal Society of Chemistry Centenary Prize
2011: Elected Fellow of the International Society of Magnetic Resonance
2010: Elected Fellow of the American Academy of Arts and Sciences
2010: Hillebrand Award of the Chemical Society of Washington
2009: Elected Fellow of the Biophysical Society
2003: Elected Member of the Lister Institute of Preventive Medicine (U.K.)
2001: Original member, Institute for Scientific Information (ISI) Highly Cited Researchers Database (in Biology & Biochemistry and Chemistry sections)
1999: Elected Fellow of the American Association for the Advancement of Science
1993: DuPont-Merck Young Investigator Award of the Protein Society
1990: Elected Fellow of the Royal Society of Chemistry (FRSC) (U.K.)

References

External links
G. Marius Clore personal homepage
Oral history interview transcript with Marius Clore on 23 March 2020, American Institute of Physics, Niels Bohr Library & Archives
Listing on the United States National Academy of Sciences web site
Listing on the Royal Society web site
Listing on the Academy of Medical Sciences (United Kingdom) web site
Listing on the American Academy of Arts and Sciences web site
Listing on the Academia Europaea web site
Listing on NIDDK/NIH web site
Listing on NIH Intramural Research Program web site
G. Marius Clore ORCID iD
Marius Clore on Landmark Article in the Journal of Magnetic Resonance
Marius Clore Lecture on "Transient Prenucleation Oligomerization of Huntingtin" at the ICMRBS Webinar on Emerging Topics in Biomolecular Resonance (4/15/2021)
List of University College London Honorary Graduates

Fellows of the Royal Society Fellows of the Academy of Medical Sciences (United Kingdom) Members of the United States National Academy of Sciences Fellows of the American Academy of Arts and Sciences Members of Academia Europaea Fellows of the Royal Society of Chemistry Fellows of the American Association for the Advancement of Science American scientists English scientists American biophysicists English biophysicists American biochemists English biochemists 20th-century English medical doctors 21st-century English medical doctors 21st-century biochemists Computational chemists Nuclear magnetic resonance National Institutes of Health people National Institutes of Health faculty Alumni of University College London Alumni of the UCL Medical School Alumni of the University of London Living people English emigrants to the United States Year of birth missing (living people)
G. Marius Clore
[ "Physics", "Chemistry" ]
1,982
[ "Nuclear magnetic resonance", "Computational chemists", "Computational chemistry", "Theoretical chemists", "Nuclear physics" ]
42,164,522
https://en.wikipedia.org/wiki/Suru%C3%A7%20Water%20Tunnel
Suruç Water Tunnel is a water supply tunnel located in the Suruç district of Şanlıurfa Province, southeastern Turkey. The purpose of the tunnel is to provide irrigation for the Suruç Valley from Atatürk Dam. With its length of , it is the country's longest tunnel.

Technical features
The water tunnel was commissioned by the State Hydraulic Works (DSI) on December 25, 2008. Ilci Construction Inc. was contracted for the building of the water tunnel. The construction works, at an altitude of AMSL, began on March 18, 2009. The excavation of the water tunnel was carried out with a tunnel boring machine (TBM), which is long and has a cutting shield of diameter. The TBM was transported from Italy on 300 trailers, and its assembly was completed after twelve months, on August 21, 2010. Synchronised with the progress of excavation, the inner walls of the tunnel were lined with thick precast concrete hexagons. The average daily progress of the excavation works was between . The water tunnel has an average downhill slope of 0.49% through the Gaziantep Formation, of Eocene and Oligocene geological age.

Economics
The construction cost about 2 billion. As part of the Southeastern Anatolia Project, it supplies water to agricultural land covering an area of about in the Suruç Valley and to 134 populated places in and around Suruç. With its inner diameter of , the water tunnel has a discharge capacity of , which makes it bigger than many rivers in Turkey. It is expected that the project will create jobs for at least 190,000 people in the region. With irrigation by the Suruç Water Tunnel, over 8,000 farmers will be able to produce more profitable agricultural products. Minister of Forestry and Water Affairs Veysel Eroğlu stated that its contribution to the country's economy will be as much as 270 million annually.

See also
List of long tunnels by type

References

Water tunnels Water supply and sanitation in Turkey Tunnels in Turkey Tunnels completed in 2014 Buildings and structures in Şanlıurfa Province Southeastern Anatolia Project Irrigation in Turkey 2014 establishments in Turkey
Suruç Water Tunnel
[ "Engineering" ]
425
[ "Southeastern Anatolia Project", "Irrigation projects" ]
42,167,253
https://en.wikipedia.org/wiki/Ecomechatronics
Ecomechatronics is an engineering approach to developing and applying mechatronic technology in order to reduce the ecological impact and total cost of ownership of machines. It builds upon the integrative approach of mechatronics, but not with the aim of only improving the functionality of a machine. Mechatronics is the multidisciplinary field of science and engineering that merges mechanics, electronics, control theory, and computer science to improve and optimize product design and manufacturing. In ecomechatronics, additionally, functionality should go hand in hand with efficient use of, and limited impact on, resources. Machine improvements are targeted in three key areas: energy efficiency, performance and user comfort (noise & vibrations).

Description
Among policy makers and manufacturing industries there is a growing awareness of the scarcity of resources and the need for sustainable development. This results in new regulations with respect to the design of machines (e.g. the European Ecodesign Directive 2009/125/EC) and in a paradigm shift in the global machines market: "instead of maximum profit from minimum capital, maximum added value must be generated from minimal resources". Manufacturing industries increasingly require high performance machines that use resources (energy, consumables) economically in a human-centered production. Machine building companies and original equipment manufacturers are thus urged to respond to this market demand with a new generation of high performance machines with higher energy efficiency and user comfort. A reduction of energy consumption lowers energy costs and reduces environmental impact. Typically more than 80% of the total-life-cycle impact of a machine is attributed to its energy consumption during the use phase. Therefore, improving a machine's energy efficiency is the most effective way of reducing its environmental impact. Performance quantifies how well a machine executes its function and is typically related to productivity, precision and availability. User comfort is related to the exposure of operators and the environment to noise & vibrations due to machine operation. Since energy efficiency, performance and noise & vibrations are coupled in a machine, they need to be addressed in an integrated way in the design phase. As an example of the interrelation between the three key areas: with increasing machine speed the machine's productivity typically increases, but energy consumption increases as well, and machine vibrations may grow such that machine accuracy (e.g. positioning accuracy) and availability (due to downtime and maintenance) decrease. Ecomechatronical design deals with the trade-off between these key areas.

Approach
Ecomechatronics impacts the way mechatronic systems and machines are being designed and implemented. Therefore, the transformation to a new generation of machines concerns knowledge institutes, original equipment manufacturers, CAE software suppliers, machine builders and industrial machine owners. The fact that about 80% of the environmental impact of a machine is determined by its design puts emphasis on making the right technological design choices. A model-based, multidisciplinary design approach is required in order to address the energy efficiency, performance and user comfort of a machine in an integrated way. The key enabling technologies can be categorized into machine components, machine design methods & tools, and machine control. A few examples are listed below per category.
Machine components
Energy efficient electrical motors: cf. energy efficiency classes of electric motors, ecodesign requirements for electric motors
Variable frequency drives: variable motor speed enables energy reduction with respect to fixed speed applications
Variable hydraulic pumps: energy reduction by adapting to required pressure and flow (e.g. variable displacement pump, load sensing pump)
Energy storage technologies: electrical (battery, capacitor, supercapacitor), hydraulic (accumulator), kinetic energy (flywheel), pneumatic, magnetic (superconducting magnetic energy storage)

Design methods & tools
Energetic simulations: using energetic machine models and empirical data (e.g. energy efficiency maps) to estimate the machine's energy consumption in the design phase (see the sketch after the references below)
Energy demand optimization: e.g. load leveling in order to avoid peaks in power demand
Hybridization: applying at least one other, intermediate energy form in order to reduce primary power source consumption, e.g. in vehicles with internal combustion engines (see hybrid vehicle drivetrain)
Vibro-acoustic analysis: study of the noise & vibrations signature of a machine in order to localize and differentiate between their root causes
Multibody modeling: simulation of the interaction forces and displacements of coupled rigid bodies, e.g. to assess the effect of vibration dampers on a mechanical structure
Active vibration damping: e.g. use of piezoelectric bearings for active control of machine vibrations
Rapid control prototyping: provides a fast and inexpensive way for control and signal processing engineers to verify designs early and evaluate design tradeoffs

Machine control
Energy consumption minimization: control signals are optimized for minimum energy consumption
Energy management of energy storage systems: controlling the power flows and state-of-charge of an energy storage system with the aim of achieving maximum energy benefit and maximum system lifespan
Model-based control: taking advantage of system models to improve the outcome (accuracy, reaction time, ...) of the controlled system
(Self-)learning control: control self-adapting to the system and its changing environment, reducing the need for control parameter tuning and adaptation by the control engineer
Optimal machine control: the control of the system is regarded as an optimization problem to which the control rules are considered the optimal solution (see optimal control)

Applications
Some examples of ecomechatronical system applications are:
Komatsu PC200-8 Hybrid: the world's first hybrid excavator has an energy storage system based on supercapacitors. The energy recuperation in the hydraulic drive line during braking results in a significant improvement of fuel economy.
Hybrid bus: different hybrid bus types have been commercialized (e.g. the ExquiCity bus by Van Hool), using fuel cells or a diesel engine as a primary energy source and batteries and/or supercapacitors as energy storage systems.
Hybrid tram vehicle: hybridization in tram vehicles enables energy recuperation as well as mobility without overhead lines, as applied in e.g. some of the Combino Supra tram vehicles by Siemens Transportation Systems. The system uses a combination of traction batteries and supercapacitors.

See also
Mechatronics
Efficient energy use
Automation
Noise control & vibration isolation
Ecodesign

References
"Energy related life cycle impact and cost reduction opportunities in machine design: the laser cutting case," T. Devoldere et al., Proceedings of the 15th CIRP International Conference on Life Cycle Engineering, 2008
"More efficient machines through model-based design," W. Symens, Presentation at the Model-Driven Development Day, May 9, 2012, 's-Hertogenbosch, The Netherlands
"Towards a mechatronic compiler," H. Van Brussel, Presentation at ACCM Workshop on Mechatronic Design 2012, November 30, 2012, Linz, Austria

Learning control of production machines Embedded systems Energy conservation Industrial ecology
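As a concrete illustration of the "energetic simulations" method referenced above, the sketch below integrates a machine's electrical energy demand over a duty cycle using an assumed motor efficiency map. All names and numbers (the efficiency curve, rated power, and duty-cycle profile) are illustrative assumptions, not data from any real machine or vendor.

```python
# Minimal sketch of a design-phase energetic simulation: estimate electrical
# energy use over a duty cycle from an assumed motor efficiency map.
# All values are illustrative, not vendor data.

def motor_efficiency(load_fraction: float) -> float:
    """Toy efficiency map: electric motors are least efficient at low load."""
    return 0.55 + 0.40 * min(load_fraction, 1.0)  # 55%..95%

RATED_KW = 15.0  # assumed rated mechanical power of the motor
# duty cycle as (duration in seconds, mechanical power demand in kW)
duty_cycle = [(10, 3.0), (45, 12.0), (20, 7.5), (15, 1.5)]

energy_kwh = 0.0
for duration_s, mech_kw in duty_cycle:
    eta = motor_efficiency(mech_kw / RATED_KW)
    electrical_kw = mech_kw / eta            # losses inflate electrical demand
    energy_kwh += electrical_kw * duration_s / 3600.0

print(f"Estimated electrical energy per cycle: {energy_kwh:.3f} kWh")
```

In a real design study the efficiency map would come from measured data or component datasheets, and the same loop structure lets a designer compare component choices (e.g. a variable frequency drive versus a fixed-speed motor) before any hardware exists.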
Ecomechatronics
[ "Chemistry", "Technology", "Engineering" ]
1,429
[ "Computer engineering", "Embedded systems", "Computer systems", "Industrial engineering", "Computer science", "Environmental engineering", "Industrial ecology" ]
48,459,845
https://en.wikipedia.org/wiki/High-frequency%20impulse-measurement
HFIM, an acronym for high-frequency impulse measurement, is a measurement technique in acoustics in which structure-borne sound signals are detected and processed with particular emphasis on short-lived signals, as these are indicative of crack formation in a solid body, mostly steel. The basic idea is to use mathematical signal processing methods such as Fourier analysis in combination with suitable computer hardware to allow for real-time measurement of acoustic signal amplitudes as well as their distribution in frequency space. The main benefit of this technique is the enhanced signal-to-noise ratio when it comes to separating acoustic emission from a certain source from other, unwanted contamination by any kind of noise. The technique is therefore mostly applied in industrial production processes, e.g. cold forming or machining, where 100 percent quality control is required, or in condition monitoring, e.g. for quantifying tool wear.

Physical basics
High-frequency impulse measurement is an algorithm for obtaining frequency information of any structure- or air-borne sound source on the basis of discrete signal transformations. This is mostly done using Fourier series to quantify the distribution of the energy content of a sound signal in frequency space. On the software side, the tool used for this is the fast Fourier transform (FFT) implementation of this mathematical transformation. In combination with specific hardware, this allows frequency information to be obtained directly, so that it is accessible in-line, e.g. during a production process. Contrary to classical, off-line frequency analysis methods, the signal is not unfolded before transformation but is fed directly into the FFT computation. Single events, such as cracks, are hence depicted as extremely short-lived signals covering the entire frequency range (the Fourier transform of a single impulse is a signal covering the entire observed frequency space). Therefore, such single events are easily separable from other noises, even if those are much more energetic.

Applications
Because of its in-line capabilities, HFIM is mostly applied in industrial production processes with high quality standards, e.g. for auto parts that are relevant to the crash behavior of a car:
Cold forming: In cold forming applications, HFIM is mostly used to detect cracks during the forming process. Since such cracks are largely due to stress in the manufactured part, the spontaneous formation of a crack is accompanied by a very sharp, impulse-like signal in the HFIM process landscape which can easily be separated from other noise. HFIM is therefore a standard technology for crack detection in the automotive sector worldwide.
Machining: In many machining applications, HFIM is used either to monitor the status of tool wear, and hence enable predictive maintenance, or to prevent chatter.
Plastic injection molding: Here, HFIM is used to monitor the status of the molds, which are usually very complex. In particular, the breaking off of small pins or other parts of the mold can be detected in-line.
Welding: In contrast to most classical monitoring systems for the welding process, which usually measure currents or voltages on the welding device, HFIM measures the energy acting directly on the welded workpiece. That allows for detection of various weld imperfections such as burn-through.
There are also several applications of HFIM devices in materials science laboratories where the exact timing of crack formation is relevant, for instance when determining the plasticity of a new kind of steel.

References
S. Barteldes, F. Walther, W. Holweger: Wälzlagerdiagnose und Detektion von White Etching Cracks mit Barkhausen-Rauschen und Hochfrequenz-Impuls-Messung. In: AKIDA. 10. Aachener Kolloquium für Instandhaltung, Diagnose und Anlagenüberwachung. (= Aachener Schriften zur Rohstoff- und Entsorgungstechnik des Instituts für Maschinentechnik der Rohstoffindustrie. Band 84). Zillekens, Stolberg 2014, S. 435 ff.
D. Hülsbusch, F. Walther: Damage detection and fatigue strength estimation of carbon fibre reinforced polymers (CFRP) using combined electrical and high-frequency impulse measurements. In: 6th International Symposium on NDT in Aerospace, 12-14th November 2014, Madrid, Spain.
A. Ujma, B. Walder: Werkzeugwartung zur rechten Zeit. In: Kunststoffe. Ausgabe 2/2013, Carl Hanser Verlag.
F. Özkan, D. Hülsbusch, F. Walther: High-frequency impulse measurements (HFIM) for damage detection and fatigue strength estimation of carbon fiber reinforced polymers (CFRP). In: Materials Science and Engineering. Darmstadt, Sept. 2014, S. 23–25.

External links and further reading
Website of QASS, a German company and manufacturer of HFIM measurement devices (incl. photos, videos and further links)
Talk by Dr. Peter-Christian Zinn concerning different applications of HFIM in the context of Smart Factories.

Nondestructive testing Acoustics Materials science
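A minimal sketch of the idea described under "Physical basics" follows: compute frame-wise FFTs of a structure-borne sound signal and flag frames whose energy spreads into bands that steady machine noise does not occupy, as a crack-like impulse would. The synthetic signal, sample rate, frame length, and thresholds are all assumptions for illustration; this is not QASS's implementation.

```python
# Sketch: flag short-lived broadband events (crack-like impulses) by
# frame-wise FFT. A single impulse spreads energy over the entire frequency
# range, so it stands out in bands narrowband machine noise leaves empty.
import numpy as np

rng = np.random.default_rng(0)
fs = 100_000                                   # sample rate (Hz), assumed
t = np.arange(fs) / fs                         # 1 s of synthetic signal
signal = 0.5 * np.sin(2 * np.pi * 8_000 * t)   # narrowband machine noise
signal += 0.01 * rng.standard_normal(fs)       # broadband background noise
signal[50_000] += 2.0                          # one short-lived impulse ("crack")

frame = 1024
freqs = np.fft.rfftfreq(frame, d=1 / fs)
high = freqs > 20_000                          # band above the machine tone
energies = []
for i in range(len(signal) // frame):
    spectrum = np.fft.rfft(signal[i * frame:(i + 1) * frame])
    energies.append(np.sum(np.abs(spectrum[high]) ** 2))
energies = np.array(energies)

threshold = 5 * np.median(energies)            # illustrative decision rule
for i in np.flatnonzero(energies > threshold):
    print(f"impulse-like event near t = {i * frame / fs:.3f} s")
```

Even though the sinusoidal machine noise is far more energetic than the impulse, the impulse is the only event that raises the high-band energy, which is the separability property the article describes.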
High-frequency impulse-measurement
[ "Physics", "Materials_science", "Engineering" ]
1,069
[ "Applied and interdisciplinary physics", "Classical mechanics", "Materials science", "Acoustics", "Nondestructive testing", "Materials testing", "nan" ]
48,466,142
https://en.wikipedia.org/wiki/SPOLD
The Society for the Promotion of LCA Development (SPOLD) was an association of companies that wanted to create a file format to help turn life-cycle assessment (LCA) software into a better management tool. The SPOLD format, created for this purpose, was meant to be implemented in LCA software so that more reliable inventory data could be exchanged. The original SPOLD format was created in 1997 and replaced by a newer version in 1999. The SPOLD format was in turn superseded by the ecoSPOLD format, which was integrated with LCA software. SPOLD was discontinued in 2001.

References

Environmental impact assessment Industrial ecology Sustainability organizations
SPOLD
[ "Chemistry", "Engineering" ]
144
[ "Industrial ecology", "Industrial engineering", "Environmental engineering" ]
50,821,758
https://en.wikipedia.org/wiki/Vibrational%20spectroscopy%20of%20linear%20molecules
To determine the vibrational spectroscopy of linear molecules, the rotation and vibration of linear molecules are taken into account to predict which vibrational (normal) modes are active in the infrared spectrum and the Raman spectrum.

Degrees of freedom
The location of a molecule in 3-dimensional space can be described by a set of coordinates: each atom is assigned x, y, and z coordinates and can move in all three directions. The degrees of freedom are the total number of variables used to completely define the motion of a molecule. For N atoms in a molecule moving in 3-D space, there are 3N total motions because each atom has 3 degrees of freedom.

Vibrational modes
The 3N degrees of freedom of an N-atom molecule constitute translations, rotations, and vibrations. For non-linear molecules, there are 3 degrees of freedom for translational motion (along the x, y, and z directions) and 3 degrees of freedom for rotational motion (rotations Rx, Ry, and Rz). Linear molecules are defined as possessing bond angles of 180°, so there are 3 degrees of freedom for translational motion but only 2 degrees of freedom for rotational motion, because rotation about the molecular axis leaves the molecule unchanged. Subtracting the translational and rotational degrees of freedom gives the number of vibrational modes:
Number of vibrational degrees of freedom for nonlinear molecules: 3N − 6
Number of vibrational degrees of freedom for linear molecules: 3N − 5

Symmetry of vibrational modes
All 3N degrees of freedom have symmetry relationships consistent with the irreducible representations of the molecule's point group. A linear molecule is characterized as possessing a bond angle of 180° with either a C∞v or D∞h symmetry point group. Each point group has a character table that represents all of the possible symmetries of that molecule; for linear molecules these are the C∞v and D∞h tables. However, these two character tables have an infinite number of irreducible representations, so it is necessary to lower the symmetry to a subgroup that has related representations whose characters are the same for the shared operations in the two groups. A property that transforms as one representation in a group will transform as its correlated representation in a subgroup. Therefore, C∞v is correlated to C2v, and D∞h to D2h. Once the point group of the linear molecule is determined and the correlated symmetry is identified, all symmetry element operations associated with that correlated symmetry's point group are performed for each atom to deduce the reducible representation of the 3N Cartesian displacement vectors. From the right side of the character table, the non-vibrational degrees of freedom, rotational (Rx and Ry) and translational (x, y, and z), are subtracted: Γvib = Γ3N − Γrot − Γtrans. This yields Γvib, which is used to find the correct normal modes in the original symmetry (either C∞v or D∞h) via the correlation between the groups. Each vibrational mode can then be identified as either IR or Raman active.

Vibrational spectroscopy
A vibration will be active in the IR if there is a change in the dipole moment of the molecule and if it has the same symmetry as one of the x, y, z coordinates. To determine which modes are IR active, the irreducible representations corresponding to x, y, and z are checked against the reducible representation of Γvib.
An IR mode is active if the same irreducible representation is present in both. Furthermore, a vibration will be Raman active if there is a change in the polarizability of the molecule and if it has the same symmetry as one of the direct products of the x, y, z coordinates. To determine which modes are Raman active, the irreducible representations corresponding to xy, xz, yz, x2, y2, and z2 are checked against the reducible representation of Γvib. A Raman mode is active if the same irreducible representation is present in both.

Example: Carbon dioxide, CO2
1. Assign the point group: D∞h
2. Determine the group-subgroup point group: D2h
3. Find the number of normal (vibrational) modes or degrees of freedom using the equation: 3N − 5 = 3(3) − 5 = 4
4. Derive the reducible representation Γ3N. For the operations E, C2(z), C2(y), C2(x), i, σ(xy), σ(xz), σ(yz), with the molecular axis along z, the characters are 9, −3, −1, −1, −3, 1, 3, 3.
5. Decompose the reducible representation into irreducible components: Γ3N = Ag + B2g + B3g + 2B1u + 2B2u + 2B3u
6. Solve for the irreducible representations corresponding to the normal modes with the subgroup character table:
Γ3N = Ag + B2g + B3g + 2B1u + 2B2u + 2B3u
Γrot = B2g + B3g
Γtrans = B1u + B2u + B3u
Γvib = Γ3N − Γrot − Γtrans
Γvib = Ag + B1u + B2u + B3u
7. Use the correlation table to find the normal modes for the original point group:
v1 = Ag = Σg+ (symmetric stretch)
v2 = B1u = Σu+ (antisymmetric stretch)
v3 = B2u = Πu (bend)
v4 = B3u = Πu (bend)
8. Label whether the modes are IR active or Raman active:
v1 = Raman active
v2 = IR active
v3 = IR active
v4 = IR active

References

Vibrational spectroscopy
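Steps 4 to 6 above can be checked mechanically with the standard reduction formula n_i = (1/h) Σ_R χ(R) χ_i(R). The sketch below does this for CO2 in D2h; the character table and the Γ3N characters simply restate what the worked example uses, and the code is an illustrative check, not part of the original article.

```python
# Sketch of steps 4-6 for CO2 in the D2h subgroup: decompose the reducible
# representation Γ3N with the reduction formula n_i = (1/h) Σ_R χ(R) χ_i(R),
# then subtract the rotational and translational representations.
from collections import Counter

OPS = ["E", "C2z", "C2y", "C2x", "i", "s_xy", "s_xz", "s_yz"]  # h = 8

# D2h character table (all irreps are one-dimensional)
D2H = {
    "Ag":  [1,  1,  1,  1,  1,  1,  1,  1],
    "B1g": [1,  1, -1, -1,  1,  1, -1, -1],
    "B2g": [1, -1,  1, -1,  1, -1,  1, -1],
    "B3g": [1, -1, -1,  1,  1, -1, -1,  1],
    "Au":  [1,  1,  1,  1, -1, -1, -1, -1],
    "B1u": [1,  1, -1, -1, -1, -1,  1,  1],
    "B2u": [1, -1,  1, -1, -1,  1, -1,  1],
    "B3u": [1, -1, -1,  1, -1,  1,  1, -1],
}

# Γ3N for CO2 (molecular axis along z): characters of the 9 Cartesian
# displacement vectors under each operation (step 4)
gamma_3n = [9, -3, -1, -1, -3, 1, 3, 3]

def reduce_rep(chars):
    """Number of times each irrep appears in a reducible representation."""
    h = len(OPS)
    counts = Counter()
    for irrep, row in D2H.items():
        n = sum(c * x for c, x in zip(row, chars)) // h  # exact by group theory
        if n:
            counts[irrep] = n
    return counts

gamma = reduce_rep(gamma_3n)                        # step 5
for irrep in ["B2g", "B3g"]:                        # Γrot = B2g + B3g
    gamma[irrep] -= 1
for irrep in ["B1u", "B2u", "B3u"]:                 # Γtrans = B1u + B2u + B3u
    gamma[irrep] -= 1
print({k: v for k, v in gamma.items() if v > 0})    # Γvib (step 6)
# -> {'Ag': 1, 'B1u': 1, 'B2u': 1, 'B3u': 1}
```

The printed result reproduces Γvib = Ag + B1u + B2u + B3u, which correlates back to Σg+ + Σu+ + 2Πu in D∞h as in step 7.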
Vibrational spectroscopy of linear molecules
[ "Physics", "Chemistry" ]
1,157
[ "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
37,972,292
https://en.wikipedia.org/wiki/Florbetapir%20%2818F%29
Florbetapir (18F), sold under the brand name Amyvid, is a PET scanning radiopharmaceutical compound containing the radionuclide fluorine-18 that was approved for use in the United States in 2012 as a diagnostic tool for Alzheimer's disease. Florbetapir, like Pittsburgh compound B (PiB), binds to beta-amyloid; however, fluorine-18 has a half-life of 109.75 minutes, in contrast to the 20-minute half-life of the carbon-11 in PiB. The longer half-life allows the tracer to accumulate significantly more in the brains of people with AD, particularly in the regions known to be associated with beta-amyloid deposits.

Development
Since the disease was first described by Alois Alzheimer in 1906, the only certain way to determine whether a person indeed had the disease was to perform a biopsy on the patient's brain to find distinctive spots that show the buildup of amyloid plaque. Doctors must diagnose the disease in patients with memory loss and dementia based on symptoms, and as many as 20% of patients diagnosed with the disease are found, after examination of the brain following death, not to have had the condition. Other diagnostic tools, such as analysis of cerebrospinal fluid, magnetic resonance imaging scans looking for brain shrinkage, and PET scans looking at how glucose is used in the brain, had all been unreliable. The development of florbetapir built on research done by William Klunk and Chester Mathis, who had developed a substance they called Pittsburgh compound B as a means of detecting amyloid plaque, after analyzing 400 prospective compounds and developing 300 variations of the substance that they had discovered might work. In 2002, a study performed in Sweden on Alzheimer's patients was able to detect the plaque in PET brain scans. Later studies on a control group member without the disease did not find plaque, confirming the reliability of the compound in diagnosis. While the tool worked, Pittsburgh compound B relies on the use of carbon-11, a radioactive isotope with a half-life of 20 minutes that requires the immediate use of the material prepared in a cyclotron. Avid Radiopharmaceuticals was established in July 2005 with the goal of finding a compound that could be injected into the body, would cross the blood–brain barrier, and would attach itself to amyloid protein deposits in the brain. Avid raised $500,000 from BioAdvance, a medically oriented venture capital firm in Pennsylvania, as seed funding toward the development of a biological marker. Once they found a candidate compound, they attached to it the positron-emitting fluorine-18, a radioactive isotope with a half-life over five times longer (109.75 minutes) that is used in PET scans and can last for as long as a day when prepared in the morning by a cyclotron. The compound had been developed and patented by the University of Pennsylvania and was licensed by Avid. Initial tests in 2007 on a patient at Johns Hopkins Hospital in Baltimore previously diagnosed with symptoms of Alzheimer's disease detected plaque in a PET scan in areas where it is typically found in the brain. Further tests found that the scans detected plaque in patients with Alzheimer's, didn't find it in those without the diagnosis, and found intermediate amounts in patients with early signs of dementia. The tests found amyloid plaque in 20% of test patients over age 60 who had been in the normal range but had performed worse than a control group on tests of mental acuity.
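The practical force of the half-life comparison above is simple exponential-decay arithmetic: after a realistic delay between tracer synthesis and imaging, far more fluorine-18 activity survives than carbon-11 activity would. A minimal sketch, assuming a 90-minute delay (an illustrative figure, not from the article):

```python
# Remaining activity fraction A/A0 = 2**(-t / half_life) for the two tracers.
def remaining_fraction(minutes: float, half_life_min: float) -> float:
    return 2.0 ** (-minutes / half_life_min)

delay = 90.0  # assumed minutes between tracer synthesis and the PET scan
print(f"F-18 (t1/2=109.75 min): {remaining_fraction(delay, 109.75):.1%}")  # ~56.6%
print(f"C-11 (t1/2=20 min):     {remaining_fraction(delay, 20.0):.2%}")    # ~4.42%
```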
Validation by autopsy
In order to confirm whether the tracer was accurate in detecting Alzheimer's, an advisory committee at the Food and Drug Administration demanded that the team of Avid, Bayer and General Electric perform a study to test their method. Avid established a study with a group of 35 hospice patients, some of whom had been diagnosed with dementia and others who had no memory problems. The participants and their families agreed that they would undergo the PET scans and would have their brains autopsied after their death by pathologists. After the study was conducted, Avid received confirmation in May 2010 that the results of the test were successful in distinguishing between those with Alzheimer's and those without the disease. In results presented in July 2010, the company showed that for 34 out of the 35 hospice patients who had been scanned, the initial scan results were confirmed when pathologists counted plaque under a microscope and when a computerized scan of the plaque was performed on material from the autopsied brain. The findings required review by the FDA to confirm their reliability as a means of diagnosing the disease. Once confirmed, the technique provided a means to reliably diagnose and monitor the progress of Alzheimer's and allowed potential pharmaceutical treatments to be evaluated. In January 2011, Avid reported on the results of further studies conducted based on 152 test subjects who had agreed to receive the company's PET scans and to have their brains analyzed after death for definitive determination of the presence of amyloid plaques. Of the patients included in the study, 29 who died had autopsies performed on their brains, and in all but one the brain autopsy results matched the diagnosis based on the PET scan taken before death. Avid's technique is being used to test the efficacy of Alzheimer's disease treatments being developed by other pharmaceutical firms, as a means of determining the ability of the drugs to reduce the buildup of amyloid protein in the brains of living subjects.

Approval by FDA
In January 2011, an FDA advisory committee unanimously recommended that Avid's PET scan technique be approved for use. The advisory committee included a qualification requiring Avid to develop clear guidelines establishing when the tests had spotted enough of the amyloid plaque to make a diagnosis of Alzheimer's.

Acquisition by Eli Lilly
Eli Lilly and Company announced in November 2010 that they would acquire Avid for $800 million.

References

Alzheimer's disease Radiopharmaceuticals Pyridines Fluoroethyl ethers Drugs developed by Eli Lilly and Company
Florbetapir (18F)
[ "Chemistry" ]
1,238
[ "Chemicals in medicine", "Radiopharmaceuticals", "Medicinal radiochemistry" ]
37,973,155
https://en.wikipedia.org/wiki/Glass%20in%20green%20buildings
A green design concept is to facilitate sustainable use of resources (energy, water and other materials) throughout the complete life cycle of the building, including its construction. Glass is a useful material that has advantages such as transparency, natural day-lighting, a view of the sky, and acoustic control, depending on the glazing solution used. Glass is a wholly recyclable material, and it is beloved by architects as well as designers. Glass can play a role in accomplishing greater indoor environmental quality and, when used carefully, can improve energy efficiency; however, a measured approach needs to be taken to ensure the building loads are not excessively increased by solar gain. The intent of a green building design is to curtail the demand on non-renewable resources, amplify the efficiency with which these resources are used, and augment the reuse, recycling, and consumption of renewable resources.

Double glazed glass
Architects use high-performance double-glazed glass, which is laminated or coated, to moderate interior temperatures by controlling heat loss and gain. The coating filters the heat-producing aspects of solar rays. Such glass is used extensively in green buildings in tropical climates as well as the Middle East.

Solar control glass
Solar control glass can be an eye-catching characteristic of a building whilst at the same time diminishing, or even eradicating, the need for an air-conditioning system, reducing the running costs of the building and saving energy. Solar control glass is suited to any situation where unwanted solar heat gain is likely to be a problem, e.g. large façades, glass walkways, atria and conservatories.

See also
Building-integrated photovoltaics
Curtain wall (architecture)
Deep energy retrofit
Energy neutral design
Environmental design
Green building and wood
Insulated glazing
Low-energy house
Passive house
Passive solar building design
Photovoltaics
Quadruple glazing
Rooftop solar power
Solar shingle
Solar water heating
Sustainable design
Zero-energy building

References

External links
Guidelines for use of Glass in Green Buildings Glass in Buildings Glassisgreen Glass in Architecture Glass in Building Glass and Building Regulations Australia standards for Glass in Buildings CODE OF PRACTICE FOR USE OF GLASS IN BUILDINGS Deutsches - Glass in building

Glass architecture Sustainable building
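To make the double-glazing discussion above concrete: steady-state conduction heat loss through a glazed area is commonly estimated as Q = U × A × ΔT. The U-values, area, and temperature difference below are typical textbook figures assumed for illustration, not data from this article.

```python
# Illustrative comparison of conduction heat loss through glazing,
# Q = U * A * delta_T. U-values are typical published figures (assumed),
# not data from this article.
U_VALUES = {  # W/(m^2*K)
    "single glazing": 5.8,
    "double glazing": 2.8,
    "coated (low-E) double glazing": 1.8,
}
area_m2 = 10.0   # assumed glazed area
delta_t = 20.0   # assumed indoor-outdoor temperature difference (K)

for glazing, u in U_VALUES.items():
    q_watts = u * area_m2 * delta_t
    print(f"{glazing:>30}: {q_watts:6.0f} W")
```

The roughly factor-of-two reduction from single to double glazing is what lets coated double glazing "moderate interior temperatures", though in hot climates the solar gain term, not conduction, usually dominates the load calculation.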
Glass in green buildings
[ "Materials_science", "Engineering" ]
458
[ "Glass engineering and science", "Sustainable building", "Building engineering", "Glass architecture", "Construction" ]
37,974,009
https://en.wikipedia.org/wiki/Bebionic
Bebionic is a commercial prosthetic hand designed to enable amputees to perform everyday activities, such as eating, drinking, writing, typing, turning a key in a lock and picking up small objects. The first version of the Bebionic hand was launched at the World Congress and Orthopädie & Reha-Technik trade show, Leipzig, Germany, in May 2010. Designed in the United Kingdom, the Bebionic hand is manufactured by RSL Steeper and is available worldwide. Since February 2, 2017, Bebionic has been owned by Ottobock.

Technical specification

Dimensions
The Bebionic hand is available in either a large or a medium size. The following table shows the dimensions of the two different prostheses.

Grips
The Bebionic hand offers fourteen different gripping options, of which the user can select a total of eight, allowing the user to perform a relatively wide variety of tasks. This number of grips can be achieved because the thumb can take two different positions according to the user's needs: the lateral position and the opposition position. In the lateral position, the thumb is parallel to the fingers of the hand, allowing holds such as pointing the finger. In the opposition position, the thumb is opposite the palm, allowing grips that can grasp, pinch or hold objects. On the back of the hand there is a button that allows the wearer of the prosthesis to choose the grip. This button offers the possibility to choose between two programs, the primary and the secondary. Each of these programs allows two different grip types. To switch from one to the other, the user must apply an OPEN OPEN signal, i.e. they must send another OPEN signal after fully opening the hand. In total, with the two different thumb positions, we get 2 × 2 × 2 = 8 different grips, each with a specific name:
Tripod: This is possible when the thumb is in the opposition position. The index and middle fingers are then in contact with the thumb. The other two remaining fingers continue to close until they reach the palm of the hand and therefore feel a resistance. It is a fairly common grip, since it allows its user to hold a variety of everyday objects such as a fork or a pen.
Pinch: This also happens when the thumb is in the opposition position, but the thumb must be manually repositioned by a technician so that only the index finger meets the thumb when the hand is closed. Indeed, the thumb is equipped with an adjustment device that allows it to be repositioned according to the desired grips. The pivot is equipped with a screw that, once slightly loosened, allows a small movement of the thumb. Smaller objects, such as a coin, can then be handled.

Bebionic 2.0
In September 2011, the second-generation Bebionic prosthetic hand saw improvements to speed, accuracy, grip and durability, later becoming available in different size options, making the prosthetic available to a broader range of patients. Since its initial upgrade, patients fitted with the Bebionic 2.0 have seen many improvements in the form of high-capacity 2200 mAh split-cell internal batteries for increased usage time, gains in natural range of motion, increased accuracy in touch sensitivity sensors, and numerous software upgrades, all of which have played a major role in providing a higher quality of life for those that benefit from this technology.

Patients
In 2008, Jonathan Metz from West Hartford, Connecticut got his arm wedged in his basement furnace.
Trapped in his own basement for three days, he had no alternative but to self-amputate his arm. Since receiving a Bebionic prosthetic hand in 2010, his life has dramatically improved. In 2012, Kingston upon Hull resident Mike Swainger was the first person to receive a bionic hand on the NHS. In 2015, Nicky Ashwell, a 26-year-old London-based woman who was born without a right hand, received Bebionic's prosthetic hand. In 2017, Margarita Gracheva from Serpukhov had her hands cut off by her husband. After six months of rehabilitation, dozens of concerned viewers of Andrey Malakhov's live program on the Russia 1 TV channel helped raise money for a Bebionic prosthetic hand.

Pop culture
In the world of science fiction, the Bebionic hand has been compared to the artificial hands of fictional characters such as The Terminator and Luke Skywalker from Star Wars.

References

External links
Bebionic Website on ottobock.com

Prosthetics Bionics
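The 2 × 2 × 2 = 8 grip-selection logic described in the Grips section can be sketched as a small state machine. Only "tripod" and "pinch" grip names come from the article; every other grip name in the table below is a hypothetical placeholder, and the real device maps grips through clinician-configured programs.

```python
# Illustrative sketch of the Bebionic grip-selection logic described above:
# thumb position (2) x program (2) x OPEN-OPEN slot (2) = 8 grips.
# Grip names other than "tripod" and "pinch" are hypothetical placeholders.
from dataclasses import dataclass

GRIP_TABLE = {  # (thumb, program, slot) -> grip name
    ("opposed", "primary", 0): "tripod",
    ("opposed", "primary", 1): "power",            # placeholder
    ("opposed", "secondary", 0): "pinch",
    ("opposed", "secondary", 1): "precision open",  # placeholder
    ("lateral", "primary", 0): "key grip",          # placeholder
    ("lateral", "primary", 1): "finger point",      # placeholder
    ("lateral", "secondary", 0): "mouse grip",      # placeholder
    ("lateral", "secondary", 1): "relaxed hand",    # placeholder
}

@dataclass
class BebionicState:
    thumb: str = "opposed"    # repositioned manually: "opposed" or "lateral"
    program: str = "primary"  # toggled by the button on the back of the hand
    slot: int = 0             # toggled by an OPEN OPEN signal

    def press_button(self):
        self.program = "secondary" if self.program == "primary" else "primary"

    def open_open(self):
        # a second OPEN signal after the hand is fully open switches grips
        self.slot = 1 - self.slot

    def current_grip(self) -> str:
        return GRIP_TABLE[(self.thumb, self.program, self.slot)]

hand = BebionicState()
print(hand.current_grip())  # tripod
hand.open_open()
print(hand.current_grip())  # power (placeholder)
```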
Bebionic
[ "Engineering", "Biology" ]
950
[ "Bionics" ]
37,974,207
https://en.wikipedia.org/wiki/Il%20Galateo
Galateo: The Rules of Polite Behavior by the Florentine Giovanni della Casa (1503–56) was published in Venice in 1558. A guide to what one should do and avoid in ordinary social life, this courtesy book of the Renaissance explores subjects such as dress, table manners, and conversation. It became so popular that the title, which refers to the name of one of the author's distinguished friends, entered the Italian language as a general term for social etiquette. Della Casa did not live to see his manuscript's widespread and lasting success, which arrived shortly after its publication. It was translated into French (1562), English (1576), Latin (1580), Spanish (1585), and German (1587), and has been read and studied in every generation. Della Casa's work set the foundation for modern etiquette writers and authorities on manners, such as "Miss Manners" Judith Martin, Amy Vanderbilt, and Emily Post.

Context
In the twentieth century, scholars usually situated Galateo among the courtesy books and conduct manuals that were very popular during the Renaissance. In addition to Castiglione's celebrated Courtier, other important Italian treatises and dialogues include Alessandro Piccolomini's Moral institutione (1560), Luigi Cornaro's Treatise on the Sober Life (1558–1565), and Stefano Guazzo's Art of Civil Conversation (1579). In recent years, attention has turned to the humor and dramatic flair of Della Casa's book. It has been argued that the style sheds light on Shakespeare's comedies. When it first appeared in Robert Peterson's English translation in 1576, it would have been available in book stalls in Shakespeare's London. Stephen Greenblatt, author of Will in the World, writes, "To understand the culture out of which Shakespeare is writing, it helps to read Renaissance courtesy manuals like Baldassare Castiglione's famous Book of the Courtier (1528) or, still better, Giovanni della Casa's Galateo or, The Rules of Polite Behavior (1558, available in a delightful new translation by M.F. Rusnak). It is fine for gentlemen and ladies to make jokes, della Casa writes, for everyone likes people who are funny, and a genuine witticism produces "joy, laughter, and a kind of astonishment." But mockery has its risks. It is perilously easy to cross a social and moral line of no return." Distinguished historians argue that Galateo should be read in the context of international European politics, and some contend that the work expresses an attempt to distinguish Italian excellence. "During the half-century when Italy fell prey to foreign invasion (1494–1559) and was overrun by French, Spanish and German armies, the Italian ruling classes were battered by - as they often envisaged them - "barbarians". In their humiliation and laboured responses, Italian writers took to reflecting on ideals, such as the ideal literary language, the ideal cardinal, ideal building types, and the ideal general or field commander. But in delineating the rules of conduct, dress and conversation for the perfect gentleman, they were saying, in effect, "We are the ones who know how to cut the best figure in Europe". A skilled writer in Latin, Della Casa followed Erasmus in presenting a harmonious and simple morality based on Aristotle's Nicomachean Ethics and notion of the mean, as well as other classical sources. His treatise also reveals an obsession with graceful conduct and self-fashioning during the time of Michelangelo and Titian: "A man must not be content with doing good things, but he must also study to do them gracefully.
Grace is nothing other than that luster which shines from the appropriateness of things that are suitably ordered and well arranged one with the other and together." The work has been edited in this light by such distinguished Italian scholars as Stefano Prandi, Emanuela Scarpa, and Giorgio Manganelli. The work may be read in the context of what Norbert Elias called the "civilizing process." It is generally agreed that, given the popularity and impact of Galateo, the cultural elite of the Italian Renaissance taught Europe how to behave. Giulio Ferroni argues that Della Casa "proposes a closed and oppressive conformity, made of caution and hypocrisy, hostile to every manifestation of liberty and originality." Others contend, on the contrary, that the work represents ambivalence, self-control, and a modern understanding of the individual in a society based on civility, intercultural competence and social networking.

Content
Della Casa addresses gentlemanly citizens who wish to convey a winning and attractive image. With a casual style and dry humor, he writes about everyday concerns, from posture to telling jokes to table manners. "Our manners are attractive when we regard others' pleasure and not our own delight," Della Casa writes. Unlike Baldassare Castiglione's The Book of the Courtier, the rules of polite behavior in Galateo are not directed to ideal men in a Renaissance court. Instead, Della Casa observes the ordinary habits of people who do not realize that clipping one's nails in public is bad. "One should not annoy others with such stuff as dreams, especially since most dreams are by and large idiotic," he advises. Valentina D'Urso, Professor of Psychology and author of Le Buone Maniere, writes, "The founding father of this literary genre, [Galateo] is an extraordinary read, lively and passionate. One doesn't know whether to admire more its rich style or the wisdom of the practical words of advice."

Language and style
The work was preceded by a short treatise on the same subject in Latin, De officiis inter tenuiores et potentiores amicos (1546). Latin at the time was the language of learned society, and Della Casa was a first-rate classicist and public speaker. The treatise opens with a Latinate conciossiacosaché, which gained Galateo a reputation for being pedantic and labored. However, Giuseppe Baretti and poets such as Giacomo Leopardi ranked Della Casa alongside Machiavelli as a master of Italian prose style. "Una delle prose più eleganti e più attiche del secolo decimosesto" (one of the most elegant and Attic prose works of the sixteenth century), Leopardi said. Della Casa's Galateo is, in the words of scholar E. H. Wilkins, "still valuable…for the pleasant ease with which most of it is written, and for its common sense, its plentiful humor, and its general amenity." Della Casa frequently alludes to Dante and more often to Boccaccio, whose Decameron he evidently knew very well and whose style he imitates. Several comments on language in Galateo reflect the Tuscan language model proposed about the same time by Della Casa's friend Pietro Bembo.

Summary of Galateo
In the first chapter it is said that a gentleman should be at all times courteous, pleasant, and beautiful in manners. Although good manners may not appear as important as liberality, constancy, or magnanimity, they are nonetheless a virtue for achieving the esteem of others. One must not mention, do, or think anything that invokes images in the mind that are dirty or disreputable.
One should not reveal by one's gestures that one has just returned from the bathroom, should not blow one's nose and then look into the handkerchief, and should avoid spitting and yawning. Della Casa tells his reader that outward appearance is very important, so clothes must be tailored and conform to prevailing custom, reflecting one's social status. In Chapter 7, Della Casa deals with a pivotal subject: conversation. Della Casa says to talk about topics of interest to all present and to show respect to everyone, avoiding anything that is base or petty. Chapter 14 discusses being in places with other people, starting with types of ceremonies, false flatteries, and fawning behavior. Another matter is whether the ceremonies are directed at us: never refuse, because it could be taken as a sign of arrogance. Della Casa returns to illustrate the customs of conversation and public speaking. Language should, as much as possible, be "orderly and well-expressed" so that the listener is able to understand what the speaker intends. In addition to the clarity of the words used, it is also important that they sound pleasant. Before talking about any topic, it is good to have thought it out. It is not polite to interrupt someone while talking, nor to help him find his words. In the last three chapters, the author writes about behaviour in general: actions should be appropriate and done with grace. A gentleman should never run, or walk too slowly. Della Casa then turns to behavior at the table, such as not scratching, not eating like a pig, not using a toothpick or sharing food. In Della Casa's vision, slight slips of decorum become taboo.

Publication history and reception
It was probably first drafted during his stay at the Abbey of Saint Eustace at Nervesa, near Treviso, between 1551 and 1555. Galateo was first published in Venice and was edited by Erasmus Gemini in 1558. The first separate publication appeared in Milan a year later. The Vatican manuscript (formerly Parraciani Ricci), in Latin with autograph corrections, was edited and published by Gennaro Barbarisi in 1990. The manuscript contains neither the title nor the division into chapters. Many variants in the first edition are attributed to Erasmus Gemini. The Spanish Galateo of Lucas Gracián Dantisco was very influential in the seventeenth century. In the Enlightenment, the letters of Lord Chesterfield show the influence of Galateo, as does a self-help manuscript of George Washington. The first American edition was published in Baltimore in 1811, with a short appendix on how to slice and serve meats.

Editions and translations
Giovanni Della Casa, Galateo overo de' costumi, edited by Emanuela Scarpa, Franco Cosimo Panini Editore, Modena 1990 (based on the 1558 edition).
Giovanni Della Casa, Galateo, Galatheo, ò vero de' costumi, edited by Gennaro Barbarisi, Marsilio, Venezia 1991 (based on the manuscript).
Giovanni Della Casa, Galateo. Translated by R. S. Pine-Coffin, Penguin Books, 1950s.
Giovanni Della Casa, Galateo: A Renaissance Treatise on Manners. Translated by Konrad Eisenbichler and Kenneth R. Bartlett. Centre for Reformation and Renaissance Studies, 1986, 2009.
Giovanni Della Casa, Galateo: The Rules of Polite Behavior. Edited and translated by M. F. Rusnak. University of Chicago Press, 2013.

Notes

References

External links
Digitized book in English
Folger Shakespeare Library's Edition of Galateo
Complete etext Liber Liber

Italian books Italian literature Renaissance literature 1558 books Etiquette
Il Galateo
[ "Biology" ]
2,315
[ "Etiquette", "Behavior", "Human behavior" ]
37,975,023
https://en.wikipedia.org/wiki/Satellite%20crop%20monitoring
Satellite crop monitoring is a technology that enables real-time monitoring of crop vegetation indices through spectral analysis of high-resolution satellite images of fields and crops, making it possible to track both positive and negative dynamics of crop development. Differences in the vegetation index reveal uneven development within a single crop, indicating the need for additional agricultural work in particular field zones; for this reason, satellite crop monitoring is counted among the methods of precision agriculture. The technology allows online crop monitoring across fields located in different areas, regions, and countries, and even on different continents. Its advantage is a high level of automation in assessing the condition of sown areas and presenting the results as an interactive map that can be read by different groups of users. Users of satellite crop monitoring technology include: agronomists and the management of agricultural companies (crop vegetation control, crop yield forecasting, optimization of management decisions); business owners (estimating business prospects, making reasoned decisions on capital investment, informing management decisions); investors and investment analysts (estimating investment potential, making investment decisions, producing sustainable forecasts); insurance brokers (data collection, verification of client claims, calculation of rate scales and insurance premiums); agricultural machinery producers (integration of crop monitoring solutions with the on-board computers of agricultural machinery, functional development); and state and sectoral organisations engaged in agriculture, food security, and ecological problems. Advantages and Benefits Economic Benefits Reduced input costs through precise application Improved yield optimization Better resource allocation Enhanced profit margins Reduced crop loss More efficient farm management Environmental Benefits Reduced chemical usage Optimized water consumption Lower environmental impact Better soil conservation Reduced carbon footprint Enhanced biodiversity protection Operational Benefits Real-time monitoring capability Remote field assessment Reduced manual inspection needs Better decision-making support Improved timing of interventions Enhanced record-keeping See also Normalized Difference Vegetation Index Precision agriculture Remote sensing Satellite imaging References External links FAO International Efficient Agriculture Solutions and Standards Association Satellite Crop Monitoring Review Ministry of Agriculture of China E-agriculture Earth observation satellites Biogeography Remote sensing Environmental monitoring Satellite imagery
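The vegetation-index tracking described above is typically built on indices such as NDVI, computed per pixel from the red and near-infrared bands. A minimal sketch follows; the reflectance values and the simple "negative change" flag are illustrative assumptions, not data from the article.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    # Guard against division by zero over water/shadow pixels.
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom > 0)

# Toy reflectance grids for the same field on two acquisition dates.
nir_t1 = np.array([[0.45, 0.50], [0.48, 0.20]])
red_t1 = np.array([[0.08, 0.07], [0.09, 0.15]])
nir_t2 = np.array([[0.55, 0.58], [0.52, 0.18]])
red_t2 = np.array([[0.06, 0.05], [0.08, 0.16]])

change = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
print(change < 0)  # True cells flag zones with negative crop-development dynamics

In a real workflow the two grids would come from georeferenced imagery, and the flagged zones would be drawn on the interactive map the article describes.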
Satellite crop monitoring
[ "Biology" ]
395
[ "Biogeography" ]
37,975,092
https://en.wikipedia.org/wiki/C11H14FNO
The molecular formula C11H14FNO (molar mass: 195.233 g/mol, exact mass: 195.1059 u) may refer to: 4-Fluoroethcathinone (4-FEC) 3-Fluorophenmetrazine (3-FPM) Molecular formulas
C11H14FNO
[ "Physics", "Chemistry" ]
82
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
53,658,651
https://en.wikipedia.org/wiki/Magnetic%20drug%20delivery
Magnetic nanoparticle-based drug delivery is a means in which magnetic particles such as iron oxide nanoparticles form a component of a delivery vehicle for magnetic drug delivery, owing to the simplicity with which the particles can be drawn to a target site by an external magnetic field. Magnetic nanoparticles can impart imaging and controlled-release capabilities to drug delivery materials such as micelles, liposomes, and polymers. Synopsis Molecular magnets (single-molecule magnets) are a platform that incorporates insoluble (toxic) drugs into biocompatible carrier materials without adding magnetic iron oxide nanoparticles, which might adversely affect patients susceptible to iron overdose. The drawbacks of conventional magnetic drug delivery methods can be overcome by switching from typical iron oxide nanoparticles to particles based on molecular magnets, such as the Fe(salen)-based "anticancer nanomagnet", which has demonstrated cancer-fighting ability. However, insoluble drugs including Fe(salen) also have some inherent drawbacks, such as poor water solubility, loss of magnetic activity in solvents, and potential cytotoxicity when accumulated in tissues and organs. As an alternative approach to magnetic drug delivery, a "non-iron oxide"-based smart delivery platform has recently been developed by self-assembly of Fe(salen) drugs into nano-cargoes encapsulated by a smart polymer, exhibiting bio-safe multifunctional magnetic capabilities, including MRI, magnetic field- and pH-responsive heat-releasing hyperthermia effects, and controlled release. References Drug delivery devices Magnetic devices
Magnetic drug delivery
[ "Chemistry" ]
338
[ "Pharmacology", "Drug delivery devices" ]
53,663,058
https://en.wikipedia.org/wiki/Q-slope
The Q-slope method for rock slope engineering and rock mass classification was developed by Barton and Bar. It expresses the quality of the rock mass for slope stability using the Q-slope value, from which long-term stable, reinforcement-free slope angles can be derived. The Q-slope value can be determined with: Q-slope = (RQD/Jn) × (Jr/Ja)O × (Jwice/SRFslope). Q-slope utilizes similar parameters to the Q-system, which has been used for over 40 years in the design of ground support for tunnels and underground excavations. The first four parameters, RQD (rock quality designation), Jn (joint set number), Jr (joint roughness number) and Ja (joint alteration number), are the same as in the Q-system. However, the frictional resistance pair Jr and Ja can apply, when needed, to the individual sides of a potentially unstable wedge. Simply applied orientation factors (O), like (Jr/Ja)1 × 0.7 for joint set J1 and (Jr/Ja)2 × 0.9 for joint set J2, provide estimates of overall whole-wedge frictional resistance reduction, if appropriate. The Q-system term Jw is replaced with Jwice, which takes into account a wider range of environmental conditions appropriate to rock slopes, which are exposed to the environment indefinitely. These conditions range from erosive intense rainfall to ice wedging, as may occur seasonally at opposite ends of the rock-type and regional spectrum. There are also slope-relevant SRF (strength reduction factor) categories. Multiplication of these terms results in the Q-slope value, which can range from 0.001 (exceptionally poor) to 1000 (exceptionally good) for different rock masses. A simple formula for the steepest slope angle (β), in degrees, not requiring reinforcement or support is given by: β = 20 log10(Q-slope) + 65°. Q-slope is intended for use in reinforcement-free site access road cuts, roads or railway cuttings, or individual benches in open cast mines. It is based on over 500 case studies of slopes ranging from 35 to 90 degrees in fresh hard rock as well as in weak, weathered and saprolitic rock. Q-slope has also been applied in slopes with interbedded strata, in faulted rocks and fault zones, and in alpine and Arctic environments, which are susceptible to freeze-thaw and ice wedging. Rock slope design techniques have been derived using Q-slope and geophysical survey data, primarily based on Vp (P-wave velocity). Q-slope has been applied in conjunction with remote sensing (aerial photogrammetry) to assess slope stability in hazardous and 'out-of-reach' natural and excavated slopes. Q-slope is not intended as a substitute for conventional and more detailed slope stability analyses, where these are warranted. Q-slope has been correlated with other rock mass classifications including BQ, RHRS, and SMR. See also Slope failure Rockfall SMR classification References Rock mechanics Slope landforms Soil mechanics Rock mass classification Geotechnical engineering
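A minimal sketch of the two relations above, using the forms published by Barton and Bar; the parameter values in the example are illustrative only, not a worked case from the article.

import math

def q_slope(rqd, jn, jr_ja_o, jwice, srf_slope):
    """Q-slope = (RQD/Jn) * (Jr/Ja)_O * (Jwice/SRF_slope)."""
    return (rqd / jn) * jr_ja_o * (jwice / srf_slope)

def steepest_unsupported_angle(q):
    """Steepest long-term stable, reinforcement-free slope angle in degrees:
    beta = 20*log10(Q-slope) + 65."""
    return 20.0 * math.log10(q) + 65.0

# Illustrative: fair-quality rock, one joint set governing stability,
# with the 0.7 orientation factor applied to (Jr/Ja) for that set.
q = q_slope(rqd=80, jn=9, jr_ja_o=(1.5 / 2.0) * 0.7, jwice=0.7, srf_slope=2.5)
print(round(q, 2), round(steepest_unsupported_angle(q), 1))  # ~1.31 -> ~67.3 degrees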
Q-slope
[ "Physics", "Engineering" ]
607
[ "Soil mechanics", "Civil engineering", "Applied and interdisciplinary physics", "Geotechnical engineering" ]
53,667,030
https://en.wikipedia.org/wiki/Notch%20tensile%20strength
The notch tensile strength (NTS) of a material is the value given by performing a standard tensile strength test on a notched specimen of the material. The ratio between the NTS and the tensile strength is called the notch strength ratio (NSR). See also Charpy impact test References Physical quantities Fracture mechanics Materials testing Elasticity (physics)
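Since the two strengths carry the same units, the NSR reduces to a simple quotient. A minimal sketch with illustrative values; the interpretive comment follows the common convention that NSR below 1 indicates notch sensitivity, which is an added assumption rather than a statement from the article.

def notch_strength_ratio(nts_mpa, ts_mpa):
    """NSR = notch tensile strength / unnotched tensile strength."""
    return nts_mpa / ts_mpa

# Illustrative values in MPa; NSR < 1 is commonly read as notch-sensitive,
# NSR >= 1 as notch-insensitive or notch-strengthened.
print(round(notch_strength_ratio(380.0, 450.0), 2))  # 0.84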
Notch tensile strength
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
75
[ "Structural engineering", "Physical phenomena", "Materials science stubs", "Physical quantities", "Fracture mechanics", "Elasticity (physics)", "Deformation (mechanics)", "Quantity", "Materials science", "Materials testing", "Materials degradation", "Physical properties" ]
39,414,609
https://en.wikipedia.org/wiki/Aircraft%20bridge
Aircraft bridges, including taxiway bridges and runway bridges, bring aircraft traffic over motorways, railways, and waterways. Construction Aircraft bridges must be designed to support the heaviest aircraft that may cross them, or that will cross them in the future. In 1963, a taxiway bridge at O'Hare International Airport, one of the busiest airports in the world, was planned to handle future aircraft weighing , but aircraft weights doubled within two years of its construction. Currently, the largest passenger aircraft in the world, the Airbus A380, has a maximum take-off weight (MTOW) of . The largest Boeing planes, i.e. the current "Project Ozark" versions of the Boeing 747-8, are approaching an MTOW of greater than . Aircraft bridges must be designed for the substantial forces exerted by aircraft braking, which affect the lateral load in substructure design. A braking force of 70 percent of the live load is assumed in two recent taxiway bridge designs. Furthermore, "deck design is more apt to be controlled by punching shear than flexure due to the heavy wheel loads." Taxiway bridges are unusually wide relative to their length, and aircraft loading cannot be assumed to be distributed evenly to a bridge superstructure's web, so different modeling is required in these bridges' structural design. In cold climates, provisions for anti-icing must be made. In the U.S., regulations of the Federal Aviation Administration must be met, and there are various other differences from typical bridges covered by AASHTO standards. A major issue is that closing an airport for construction, even temporarily, is impossible. Major alternatives considered for construction of a taxiway bridge in 2008 were: use of precast, prestressed concrete I-girders use of precast, prestressed concrete box girders use of steel girders cast-in-place, post-tensioned concrete box girder bridge. Finite element analysis has been advocated for, or applied in, taxiway bridge design since at least 1963. List of taxiway bridges, runway bridges, and related tunnels Taxiway bridges and runway bridges are bridges at airports to bring airplane taxiways and runways across motorways, railroads, or waterways. A taxiway bridge must be designed to carry the weight of the maximum size airplanes crossing and perhaps stopping directly upon it. A runway bridge is similar but may have different stresses. Alternatively, a motorway may be brought by tunnel underneath one or more runways and taxiways. Examples include: Part of the taxiway and one runway of Allegheny County Airport in Pittsburgh, Pennsylvania is built on a bridge over Pennsylvania Route 885 and two sets of tracks of the Union Railroad (Pittsburgh). At Amsterdam Schiphol Airport, the Schiphol tunnel takes the A4 motorway underneath an airplane runway and two taxiways. In 2003, a sixth runway was added at quite some distance west of the rest of the airport, with use of two connecting taxiway bridges crossing the A5 motorway and the , respectively. Athens International Airport has two taxiways running on two bridges over the A64 motorway. Five taxiway bridges, Beijing Capital Airport. Chung Cheung Road and South Runway Road in Chek Lap Kok. Eastern Vehicular Tunnel in Chek Lap Kok. A 1967-built steel girder taxiway bridge at O'Hare International Airport crosses over Interstate 190 between Terminals 3 and 5. In 1963, the weight thought to be necessary for the 1967-built bridge was . By 1969, aircraft weights had doubled. It was a 4-span welded steel girder bridge with a concrete deck, long and wide. 
Maximum stress for the bridge was found to occur when an aircraft was 6 feet off the centerline. Copenhagen Airport has one runway and one taxiway running over the Denmark 221 road. Düsseldorf International Airport has the approach end of runway 23L, and the last taxiway off the same runway (05R), above a railway line. The Düsseldorf Airport Station offers a very good view of passing aircraft. Fort Lauderdale-Hollywood International Airport has US Route 1 and an active railroad running under a runway and taxiway. Fu Hsing North Road in Taipei. Haneda Airport's runway 04/22 over the Bayshore Route (Shuto Expressway); runway 16R/34L over , the Tokyo Monorail and the Keikyu Airport Line; two taxiway bridges to runway 05/23 (D runway). Hartsfield-Jackson Atlanta International Airport's Runway 10/28 crosses over Interstate 285. Heathrow Airport's main entranceway to Tunnel Road East runs under a runway and two taxiways. HKIA APM between the main Terminal One and the Midfield Concourse. Hua Hin Airport's runway crosses over Phet Kasem Road (Thailand Route 4) and the Southern Railway Line. Indira Gandhi International Airport in Delhi operates two elevated taxiways. At Indianapolis International Airport, a taxiway bridge was planned to connect a future fourth runway across Interstate 70. During 2002-04, the Indiana Department of Transportation realigned I-70 to accommodate this. John Glenn Columbus International Airport in Columbus, Ohio has a $10.5 million taxiway bridge called the Port Columbus Airport Crossover Taxiway. Kai Tak Taxiway Bridge No. 3, a fast-track design-and-build contract awarded in 1993, at Hong Kong's Kai Tak Airport, which closed in 1998. It is now repurposed as a road connecting Kai Tak Cruise Terminal with Shing Fung Road and Shing Cheung Road, and is now named . The northwestern end of Kai Tak Airport's first-generation runway 13/31 crossed Choi Hung Road (then part of Clear Water Bay Road) and the Kai Tak Nullah. The Kai Tak Nullah has several bridges across to the northeast apron. Kai Tak Tunnel in New Kowloon. At Los Angeles International Airport, a tunnel was completed in 1953 allowing Sepulveda Boulevard to revert to a straight alignment and pass beneath the two runways; it was the first tunnel of its kind. At Manchester Airport in the United Kingdom, the A538 road runs in a pair of twin-bore tunnels underneath the southern ends of both runways. Macau International Airport off Taipa has taxiway bridges linking its island runway. Molde Airport, Norway, has a proper road tunnel under the runway. B Runway (16L/34R) of Narita Airport over . Oakland International Airport has an underpass for Ron Cowan Parkway below a taxiway connecting the commercial runways to the general aviation North Field. The Orlando International Airport authority, planning for a future high-speed rail line, invested in extra length for its taxiway bridges over its southern airport access road. Taxiway bridge over Interstate 73, Piedmont Triad International Airport in Greensboro, North Carolina. Sandane, Trondheim and Tromsø airports in Norway have such bridges. S. 188th Street runs under a runway and a taxiway of Seattle-Tacoma International Airport. Singapore Changi Airport has two taxiway bridges spanning Airport Boulevard. These bridges required shields installed on either side to protect the road from the jet blast. Having planned for the Airbus A380 since the 1990s, the airport spent $60 million in total on modifications to support the aircraft. 
The $35 million Taxiway Sierra Underpass reconstruction at Sky Harbor International Airport in Phoenix, Arizona included a $13 million five-span, cast-in-place, post-tensioned concrete box girder bridge. The airport also has the Taxiway Tango Underpass. Soekarno–Hatta International Airport has two taxiway bridges located in the southwest corner of the airport connecting the north and the south runway; a third taxiway bridge, located in the northeast corner, was under construction, scheduled for completion in 2018. Runway 17R/35L at the defunct Stapleton International Airport in Denver crossed over Interstate 70. The third runway of Stockholm Arlanda Airport is reached from the main terminal area by two taxiway bridges constructed to handle the heaviest and largest airplanes in service. Taxiway B Bridge, Tampa International Airport. References External links Port Columbus International Airport Cross-Over Taxiway Bridge Project Bridges by mode of traffic Airport engineering
Aircraft bridge
[ "Engineering" ]
1,641
[ "Airport engineering" ]
39,415,735
https://en.wikipedia.org/wiki/Origin%20and%20occurrence%20of%20fluorine
Fluorine is relatively rare in the universe compared to other elements of nearby atomic weight. On Earth, fluorine is essentially found only in mineral compounds because of its reactivity. The main commercial source, fluorite, is a common mineral. In the universe At 400 ppb, fluorine is estimated to be the 24th most common element in the universe. It is unusually rare for a light element (elements tend to be more common the lighter they are). All of the elements from atomic number 6 (carbon) to atomic number 12 (magnesium) are hundreds or thousands of times more common than fluorine, except for 11 (sodium). One science writer described fluorine as a "shack amongst mansions" in terms of abundance. Fluorine is so rare because it is not a product of the usual nuclear fusion processes in stars. Any fluorine created within stars is rapidly eliminated through strong nuclear fusion reactions: either with hydrogen to form oxygen and helium, or with helium to make neon and hydrogen. The presence of fluorine at all, outside of its temporary existence in stars, is somewhat of a mystery, given the need to escape these fluorine-destroying reactions. Three theoretical solutions to the mystery exist: In type II supernovae, atoms of neon could be hit by neutrinos during the explosion and converted to fluorine. In Wolf-Rayet stars (blue stars over 40 times heavier than the Sun), a strong stellar wind could blow the fluorine out of the star before hydrogen or helium could destroy it. Finally, in asymptotic giant branch stars (a type of red giant), fusion reactions occur in pulses and convection could lift fluorine out of the inner star. Only the red giant hypothesis has supporting evidence from observations: fluorine cations have been found in planetary nebulae. In space, fluorine commonly combines with hydrogen to form hydrogen fluoride. (This compound has been suggested as a tracer to enable tracking reservoirs of hydrogen in the universe.) In addition to HF, monatomic fluorine has been observed in the interstellar medium. Fluorine cations have been seen in planetary nebulae and in stars, including the Sun. On Earth Fluorine is the thirteenth most common element in Earth's crust, comprising between 600 and 700 ppm of the crust by mass. Because of its reactivity, it is essentially only found in compounds. Commercial sources Three minerals exist that are industrially relevant sources of fluorine: fluorite, fluorapatite, and cryolite. Fluorite Fluorite (CaF2), also called fluorspar, is the main source of commercial fluorine. Fluorite is a colorful mineral associated with hydrothermal deposits. It is common and found worldwide. China supplies more than half of the world's demand, and Mexico is the second-largest producer in the world. The United States produced most of the world's fluorite in the early 20th century, but its last mine, in Illinois, shut down in 1995. Canada also exited production in the 1990s. The United Kingdom has declining fluorite mining and has been a net importer since the 1980s. Fluorapatite Fluorapatite (Ca5(PO4)3F) is mined along with other apatites for its phosphate content and is used mostly for the production of fertilizers. Most of the Earth's fluorine is bound in this mineral, but because the percentage within the mineral is low (3.5%), the fluorine is discarded as waste. Only in the United States is there significant recovery. There, the hexafluorosilicates produced as byproducts are used to supply water fluoridation. 
Cryolite Cryolite (Na3AlF6) is the least abundant of the three major fluorine-containing minerals, but is a concentrated source of fluorine. It was formerly used directly in aluminium production. However, the main commercial mine, on the west coast of Greenland, closed in 1987. Minor occurrences Several other minerals, such as the gemstone topaz, contain fluoride. Fluoride is not significant in seawater or brines, unlike the other halides, because the alkaline earth fluorides precipitate out of water. Commercially insignificant quantities of organofluorines have been observed in volcanic eruptions and in geothermal springs. Their ultimate origin (from biological sources or geological formation) is unclear. The possibility of small amounts of gaseous fluorine within crystals has been debated for many years. One form of fluorite, antozonite, has a smell suggestive of fluorine when crushed. The mineral also has a dark black color, perhaps from free calcium (not bonded to fluoride). In 2012, a study reported detection of trace quantities (0.04% by weight) of diatomic fluorine in antozonite. It was suggested that radiation from small amounts of uranium within the crystals had caused the free fluorine defects. Citations Indexed references Fluorine Geochemistry
Origin and occurrence of fluorine
[ "Chemistry" ]
1,067
[ "nan" ]
39,417,871
https://en.wikipedia.org/wiki/Ahuroa%20Gas%20Storage%20Facility
The Ahuroa Gas Storage Facility is an underground natural gas storage facility situated at Ahuroa in the Taranaki region of New Zealand, owned by Flex Gas, a subsidiary of First Gas. Flex Gas is the trading name of Gas Services New Zealand. The stored gas is used to supply the Stratford Power Station and other major users of gas when needed during periods of peak demand. The facility can store up to 18 PJ of gas, with injection rates up to 65 terajoules per day and withdrawal rates of up to 65 terajoules per day. Flex Gas plans to expand the facility to 65 TJ per day injection and withdrawal by 2021. History The Tariki / Ahuroa field was discovered in 1986. Construction of wellsite facilities began in 1995 and production commenced in 1996. The facility was in turn owned by Fletcher Challenge, Shell and Swift Energy. In 2008, when the field was largely depleted, it was acquired by Origin Energy as part of the Tariki / Ahuroa / Waihapa / Ngaere assets. Gas injection into storage began in 2008 and the surface facility was constructed by Contact Energy in 2009 and 2010. The facility was officially opened in 2011 with a development cost of $177m. In 2017, Contact Energy sold the gas storage facility to Gas Services New Zealand. See also Oil and gas industry in New Zealand References Natural gas storage Natural gas in New Zealand
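Back-of-envelope arithmetic implied by the figures quoted above: the stated capacity and maximum withdrawal rate set an upper bound on how long the facility could sustain full-rate supply. This is a rough sketch only and ignores real-world constraints such as pressure-dependent deliverability.

capacity_tj = 18_000        # 18 PJ expressed in terajoules
withdrawal_tj_per_day = 65  # quoted maximum withdrawal rate

print(round(capacity_tj / withdrawal_tj_per_day))  # ~277 days at full drawdown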
Ahuroa Gas Storage Facility
[ "Chemistry" ]
286
[ "Natural gas storage", "Natural gas technology" ]
39,421,502
https://en.wikipedia.org/wiki/Peter%20Hansen%20House%20%28Manti%2C%20Utah%29
The Peter Hansen House, located at 247 S. 200 East in Manti, Utah, was built in 1875. It is historically significant as an example of Scandinavian-American folk architecture. It was built by the Danish-born brickmason Peter Hansen, who immigrated in the 1860s. As brick was rare in Manti before the 1880s, it is believed that Hansen fired the bricks for this house in a kiln on the property. The house was sold for $500 in 1882. It was listed on the National Register of Historic Places in 1983. References Pair-houses Houses on the National Register of Historic Places in Utah Houses completed in 1875 Houses in Sanpete County, Utah National Register of Historic Places in Sanpete County, Utah
Peter Hansen House (Manti, Utah)
[ "Engineering" ]
146
[ "Pair-houses", "Architecture" ]
36,546,734
https://en.wikipedia.org/wiki/Materials%20%26%20Design
Materials & Design is a peer-reviewed open access scientific journal published by Elsevier. It covers research on the practical applications of engineering materials including materials processing. Article formats are regular, express, and review articles (typically commissioned by the editors). The editor-in-chief is Alexander M. Korsunsky (Trinity College, Oxford). The journal was established in 1978 as the International Journal of Materials in Engineering Applications and obtained its current title in 1980. Abstracting and indexing The journal is abstracted and indexed by: Current Contents/Engineering, Computing & Technology Inspec Materials Science Citation Index Metals Abstracts Physics Abstracts Scopus According to the Journal Citation Reports, the journal has a 2022 impact factor of 8.4. References External links Materials science journals Elsevier academic journals Academic journals established in 1978 English-language journals
Materials & Design
[ "Materials_science", "Engineering" ]
169
[ "Materials science journals", "Materials science" ]
36,549,071
https://en.wikipedia.org/wiki/Vortex%20sheet
In fluid mechanics, a vortex sheet is a surface across which there is a discontinuity in fluid velocity, such as in slippage of one layer of fluid over another. While the tangential components of the flow velocity are discontinuous across the vortex sheet, the normal component of the flow velocity is continuous. The discontinuity in the tangential velocity means the flow has infinite vorticity on a vortex sheet. At high Reynolds numbers, vortex sheets tend to be unstable. In particular, they may exhibit Kelvin–Helmholtz instability. The formulation of the vortex sheet equation of motion is given in terms of a complex coordinate $z = x + iy$. The sheet is described parametrically by $z = z(s, t)$, where $s$ is the arclength between the coordinate $z$ and a reference point, and $t$ is time. Let $\gamma(s, t)$ denote the strength of the sheet, that is, the jump in the tangential velocity across it. Then the velocity field induced by the sheet is $u - iv = \frac{1}{2\pi i}\,\mathrm{p.v.}\!\int \frac{\gamma(s', t)\,ds'}{z - z(s', t)}$. The integral in the above equation is a Cauchy principal value integral. We now define $\Gamma(s, t)$ as the integrated sheet strength, or circulation, between a point with arc length $s$ and the reference material point in the sheet. As a consequence of Kelvin's circulation theorem, in the absence of external forces on the sheet, the circulation between any two material points in the sheet remains conserved, so $d\Gamma/dt = 0$ for material points. The equation of motion of the sheet can be rewritten in terms of $z$ and $\Gamma$ by a change of variable, in which the parameter $s$ is replaced by $\Gamma$. That is, $\frac{\partial \bar{z}(\Gamma, t)}{\partial t} = \frac{1}{2\pi i}\,\mathrm{p.v.}\!\int \frac{d\Gamma'}{z(\Gamma, t) - z(\Gamma', t)}$. This nonlinear integro-differential equation is called the Birkhoff–Rott equation. It describes the evolution of the vortex sheet given initial conditions. Greater detail on vortex sheets can be found in the textbook by Saffman (1977). Diffusion of a vortex sheet Once formed, a vortex sheet will diffuse due to viscous action. Consider a planar unidirectional flow at $t = 0$ given by $u(y, 0) = U\,\mathrm{sgn}(y)$, implying the presence of a vortex sheet at $y = 0$. The velocity discontinuity smooths out according to $u(y, t) = U\,\mathrm{erf}\!\left(\frac{y}{2\sqrt{\nu t}}\right)$, where $\nu$ is the kinematic viscosity. The only non-zero vorticity component is in the $z$ direction, given by $\omega_z = -\partial u/\partial y = -\frac{U}{\sqrt{\pi\nu t}}\,e^{-y^2/(4\nu t)}$. Vortex sheet with periodic boundaries A flat vortex sheet with periodic boundaries in the streamwise direction can be used to model a temporal free shear layer at high Reynolds number. Let us assume that the interval between the periodic boundaries is of length $L$. Then the equation of motion of the vortex sheet reduces to $\frac{\partial \bar{z}(\Gamma, t)}{\partial t} = \frac{1}{2Li}\,\mathrm{p.v.}\!\int \cot\!\left[\frac{\pi\,(z(\Gamma, t) - z(\Gamma', t))}{L}\right] d\Gamma'$. Note that the integral in the above equation is a Cauchy principal value integral. The initial condition for a flat vortex sheet with constant strength $\gamma_0$ is $z(\Gamma, 0) = \Gamma/\gamma_0$. The flat vortex sheet is an equilibrium solution. However, it is unstable to infinitesimal periodic disturbances of the form $\sum_k A_k(t)\,e^{2\pi i k \Gamma/(\gamma_0 L)}$. Linear theory shows that the Fourier coefficient $A_k$ grows exponentially at a rate proportional to the wavenumber $k$. That is, the higher the wavenumber of a Fourier mode, the faster it grows. However, a linear theory cannot be extended much beyond the initial state. If nonlinear interactions are taken into account, asymptotic analysis suggests that for large $k$ and finite $t < t_c$, where $t_c$ is a critical value, the Fourier coefficient decays exponentially in $k$. The vortex sheet solution is expected to lose analyticity at the critical time $t_c$. See Moore (1979), and Meiron, Baker and Orszag (1983). The vortex sheet solution as given by the Birkhoff–Rott equation cannot go beyond the critical time. The spontaneous loss of analyticity in a vortex sheet is a consequence of mathematical modeling, since a real fluid with viscosity, however small, will never develop a singularity. Viscosity acts as a smoothing or regularization parameter in a real fluid. 
There have been extensive studies of vortex sheets, most of them by discrete or point vortex approximation, with or without desingularization. Using a point vortex approximation and delta-regularization, Krasny (1986) obtained a smooth roll-up of a vortex sheet into a double-branched spiral. Since point vortices are inherently chaotic, a Fourier filter is necessary to control the growth of round-off errors. Continuous approximation of a vortex sheet by vortex panels with arcwise diffusion of circulation density also shows that the sheet rolls up into a double-branched spiral. In many engineering and physical applications the growth of a temporal free shear layer is of interest. The thickness of a free shear layer is usually measured by the momentum thickness, which is defined as $\theta = \frac{1}{4}\int_{-\infty}^{\infty}\left(1 - \frac{\bar{u}^2}{U^2}\right) dy$, where $\bar{u}(y, t)$ is the mean streamwise velocity and $U$ is the freestream velocity. Momentum thickness has the dimension of length, and the non-dimensional momentum thickness is given by $\theta/L$. Momentum thickness can be used to measure the thickness of a vortex layer. See also Vortex ring Burgers vortex sheet References Fluid dynamics Fluid dynamic instabilities
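A minimal numerical sketch in the spirit of the point-vortex studies cited above: one period of the sheet is discretized into N markers of equal circulation, and the singular periodic Birkhoff–Rott kernel is smoothed with a Krasny-style delta parameter. All values (N, delta, time step, initial disturbance) are illustrative, and a forward Euler step stands in for the higher-order integrators and Fourier filtering used in the literature.

import numpy as np

def sheet_velocity(x, y, delta=0.25):
    """Velocities of N markers on a period-1 vortex sheet, each carrying
    circulation 1/N; delta**2 in the denominator regularizes the kernel,
    so coincident points contribute zero instead of a singularity."""
    dx = x[:, None] - x[None, :]
    dy = y[:, None] - y[None, :]
    den = np.cosh(2 * np.pi * dy) - np.cos(2 * np.pi * dx) + delta**2
    u = -np.sinh(2 * np.pi * dy) / den   # periodic point-vortex kernel, real form
    v = np.sin(2 * np.pi * dx) / den
    n = len(x)
    return u.sum(axis=1) / (2 * n), v.sum(axis=1) / (2 * n)

n, dt, steps = 200, 0.01, 100
gamma = np.arange(n) / n                      # circulation parameter in [0, 1)
x = gamma + 0.01 * np.sin(2 * np.pi * gamma)  # small sinusoidal disturbance
y = -0.01 * np.sin(2 * np.pi * gamma)

for _ in range(steps):                        # forward Euler keeps the sketch short
    u, v = sheet_velocity(x, y)
    x, y = x + dt * u, y + dt * v

print(float(y.min()), float(y.max()))         # growing amplitude: onset of roll-up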
Vortex sheet
[ "Chemistry", "Engineering" ]
930
[ "Piping", "Chemical engineering", "Fluid dynamic instabilities", "Fluid dynamics" ]
40,744,502
https://en.wikipedia.org/wiki/High%20contrast%20grating
In physics, a high contrast grating is a single-layer, near-wavelength grating structure in which the grating material has a large contrast in index of refraction with its surroundings. The term near-wavelength refers to the grating period, which has a value between one optical wavelength in the grating material and that in its surrounding materials. High contrast gratings have many distinct attributes that are not found in conventional gratings. These features include broadband ultra-high reflectivity, broadband ultra-high transmission, and very high quality factor resonance, for an optical beam at surface-normal or oblique incidence to the grating surface. The high reflectivity grating can be ultrathin, less than 0.15 optical wavelength thick. The reflection and transmission phase of the optical beam through the high contrast grating can be engineered to cover a full 2π range while maintaining a high reflection or transmission coefficient. History The concept of the high contrast grating took off with a report on a broadband high reflectivity reflector for surface-normal incident light (the ratio between the wavelength bandwidth with a reflectivity larger than 0.99 and the central wavelength is greater than 30%) in 2004 by Constance J. Chang-Hasnain et al., which was demonstrated experimentally in the same year. The key idea is to have the high-refractive-index material surrounded entirely by low-refractive-index material. High contrast gratings were subsequently applied as highly reflective mirrors in vertical-cavity surface-emitting lasers, as well as in monolithic, continuously wavelength-tunable vertical-cavity surface-emitting lasers. The properties of high contrast gratings have been rapidly explored since then. The following lists some relevant examples: In 2008, a single layer of high contrast grating was demonstrated as a high quality factor cavity. In 2009, hollow-core waveguides using high contrast gratings were proposed, followed by experimental demonstration in 2012. This experiment was the first to show a high contrast grating reflecting an optical beam propagating in the direction parallel to the gratings, which is a major distinction from a photonic crystal or distributed Bragg reflector. In 2010, planar, single-layer lenses and focusing reflectors with high focusing power, using a high contrast grating with spatially varying grating dimensions, were proposed and demonstrated. Some literature refers to high contrast gratings as photonic crystal slabs or photonic crystal membranes. Principle of operation Fully rigorous electromagnetic solutions exist for gratings, but they tend to involve heavy mathematical formalism. A simple analytical formalism to explain the various properties of high contrast gratings has been developed. A computational program based on this analytical solution has also been developed to solve the electromagnetic properties of high contrast gratings, named the High Contrast Grating Solver. The following provides a brief overview of the operation principle of the high contrast grating. The grating bars can be considered as merely a periodic array of waveguides, with the wave being guided along the grating thickness direction. Upon plane wave incidence, depending on wavelength and grating dimensions, only a few waveguide-array modes are excited. Due to the large index contrast and near-wavelength dimensions, there exists a wide wavelength range where only two waveguide-array modes have real propagation constants in the z direction and, hence, carry energy. 
The two waveguide-array modes then depart from the grating input plane and propagate downward to the grating exiting plane, and then reflect back up. After propagating through the grating thickness, each propagating mode accumulates a different phase. At the exiting plane, owing to a strong mismatch with the exiting plane wave, the waveguide modes not only reflect back onto themselves but also couple into each other. As the modes propagate and return to the input plane, similar mode coupling occurs. Following the modes through one round trip, the reflectivity solution can be attained. The two modes interfere at the input and exiting planes of the high contrast grating, leading to various distinct properties. Applications High contrast gratings have been employed in many optoelectronic devices. They have been incorporated as the mirrors for vertical-cavity surface-emitting lasers. The light weight of the high contrast grating enables fast microelectromechanical actuation for wavelength tuning. The reflection phase of the high contrast grating is engineered to control the emission wavelength of vertical-cavity surface-emitting lasers. By locally changing each grating dimension while keeping the thickness the same, planar, single-layer lenses and focusing reflectors with high focusing power have been obtained. Besides its high reflectivity, the high contrast grating has been designed as a high quality factor resonator. Low-loss hollow-core waveguides are made with high contrast gratings with high reflectivity at oblique incident angles. Applications such as slow light and optical switching can be built on the hollow-core waveguide by using the special phase response and resonance properties of high contrast gratings. A high contrast grating can effectively manipulate light propagation – directing light from surface-normal incidence into an in-plane index-guided waveguide and vice versa. References Waves
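The near-wavelength condition defined at the start of this article pins the period between the wavelength inside the high-index grating material and the wavelength in the low-index surroundings. A minimal sketch of that bookkeeping; the material values are illustrative assumptions, not taken from the article.

def is_near_wavelength(period_nm, wavelength_nm, n_grating, n_surround):
    """True if the period lies between the wavelength in the grating
    material (lambda/n_grating) and in the surroundings (lambda/n_surround)."""
    return wavelength_nm / n_grating < period_nm < wavelength_nm / n_surround

# Illustrative: silicon bars (n ~ 3.48) in air at 1550 nm.
print(is_near_wavelength(700, 1550, 3.48, 1.0))   # True: ~445 nm < 700 nm < 1550 nm
print(is_near_wavelength(2000, 1550, 3.48, 1.0))  # False: period too large (diffractive)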
High contrast grating
[ "Physics" ]
1,052
[ "Waves", "Physical phenomena", "Motion (physics)" ]
40,744,735
https://en.wikipedia.org/wiki/Stokes%27%20paradox
In the science of fluid flow, Stokes' paradox is the phenomenon that there can be no creeping flow of a fluid around a disk in two dimensions; or, equivalently, the fact that there is no non-trivial steady-state solution for the Stokes equations around an infinitely long cylinder. This is opposed to the 3-dimensional case, where Stokes' method provides a solution to the problem of flow around a sphere. Stokes' paradox was resolved by Carl Wilhelm Oseen in 1910, by introducing the Oseen equations, which improve upon the Stokes equations by adding convective acceleration. Derivation The velocity vector $(v_x, v_y)$ of the fluid may be written in terms of the stream function $\psi$ as $v_x = \partial\psi/\partial y$, $v_y = -\partial\psi/\partial x$. The stream function in a Stokes flow problem satisfies the biharmonic equation $\nabla^4\psi = 0$. By regarding the $(x, y)$-plane as the complex plane, the problem may be dealt with using methods of complex analysis. In this approach, $\psi$ is either the real or imaginary part of $\bar{z}f(z) + g(z)$. Here $z = x + iy$, where $i$ is the imaginary unit, $\bar{z} = x - iy$, and $f(z)$ and $g(z)$ are holomorphic functions outside of the disk. We will take the real part without loss of generality. Now the function $u$, defined by $u = v_x + iv_y$, is introduced. $u$ can be written as $u = -2i\,\partial\psi/\partial\bar{z}$ (using the Wirtinger derivatives). This is calculated to be equal to $u = -i\left(f(z) + z\overline{f'(z)} + \overline{g'(z)}\right)$. Without loss of generality, the disk may be assumed to be the unit disk, consisting of all complex numbers z of absolute value smaller or equal to 1. The boundary conditions are: $u \to u_\infty$ as $|z| \to \infty$, and $u = 0$ whenever $|z| = 1$. By representing the functions as Laurent series, $f(z) = \sum_{n=-\infty}^{\infty} a_n z^n$ and $g'(z) = \sum_{n=-\infty}^{\infty} b_n z^n$, the first condition implies $a_n = 0$ for all $n \geq 2$ and $b_n = 0$ for all $n \geq 1$, so that $u$ remains bounded at infinity. Using the polar form of $z$ results in $z^n = |z|^n e^{in\theta}$. After deriving the series form of $u$, substituting $|z| = 1$ into it, and changing some indices, the second boundary condition translates to $\sum_n e^{in\theta}\left(a_n + (2 - n)\,\overline{a_{2-n}} + \overline{b_{-n}}\right) = 0$. Since the complex trigonometric functions $e^{in\theta}$ compose a linearly independent set, it follows that all coefficients in the series are zero. Examining these conditions for every $n$ after taking into account the condition at infinity shows that $f$ and $g'$ are necessarily of the form $f(z) = az + b$ and $g'(z) = c$, where $a$ is an imaginary number (opposite to its own complex conjugate), and $b$ and $c$ are complex numbers. Substituting this into $u$ gives the result that $u = -i(b + \bar{c})$ globally, compelling both $u$ and $u_\infty$ to be zero. Therefore, there can be no motion – the only solution is that the cylinder is at rest relative to all points of the fluid. Resolution The paradox is caused by the limited validity of Stokes' approximation, as explained in Oseen's criticism: the validity of Stokes' equations relies on the Reynolds number being small, and this condition cannot hold for arbitrarily large distances from the cylinder. A correct solution for a cylinder was derived using Oseen's equations, and the same equations lead to an improved approximation of the drag force on a sphere. Unsteady-state flow around a circular cylinder In contrast to Stokes' paradox, there exists an unsteady-state solution of the same problem, which models a fluid flow moving around a circular cylinder with small Reynolds number. This solution can be given by an explicit formula in terms of the vorticity of the flow's vector field. Formula of the Stokes flow around a circular cylinder The vorticity of the Stokes flow is given by an explicit relation involving the Fourier coefficients of the vorticity's expansion by polar angle, defined on the exterior of the cylinder, together with the direct and inverse special Weber transforms; the initial vorticity must satisfy the no-slip boundary condition. The special Weber transform has a non-trivial kernel, but the no-slip condition implies orthogonality of the vorticity flow to the kernel. 
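To complement the complex-variable derivation of the steady-state paradox above, here is a hedged sketch of the same dead end in real polar coordinates: after discarding the terms of the general biharmonic solution that violate the free-stream condition, SymPy shows that no choice of the remaining constants satisfies both no-slip conditions for a non-zero stream speed U. This is an alternative check, not the derivation used in this article.

import sympy as sp

r, theta, U, A, B = sp.symbols('r theta U A B', real=True)

# The general biharmonic stream function proportional to sin(theta) is
# (A/r + B*r + C*r*ln(r) + D*r**3)*sin(theta); the far-field requirement
# psi -> U*r*sin(theta) forces C = D = 0, leaving:
f = A / r + B * r
psi = f * sp.sin(theta)

# Sanity check: psi is biharmonic (scalar Laplacian in polar coordinates).
lap = lambda g: sp.diff(g, r, 2) + sp.diff(g, r) / r + sp.diff(g, theta, 2) / r**2
assert sp.simplify(lap(lap(psi))) == 0

# No-slip on the unit cylinder (psi = 0 and dpsi/dr = 0 at r = 1) plus the
# free-stream requirement B = U:
print(sp.solve([f.subs(r, 1), sp.diff(f, r).subs(r, 1), B - U], [A, B, U], dict=True))
# -> [{A: 0, B: 0, U: 0}]: only the quiescent flow works, i.e. Stokes' paradox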
Derivation Special Weber's transform The special Weber transform is an important tool in solving problems of hydrodynamics. It is an integral transform whose kernel is built from $J_p$ and $Y_p$, the Bessel functions of the first and second kind respectively. For certain orders it has a non-trivial kernel consisting of power functions of the radius. The inverse transform is given by a similar integral formula. Due to the non-triviality of the kernel, the inversion identity holds in general only for functions which are orthogonal to the kernel of the transform in $L_2$ with infinitesimal element $r\,dr$. No-slip condition and Biot–Savart law In the exterior of the disc, the Biot–Savart law restores the velocity field induced by the vorticity, assuming zero total circulation and a given constant velocity at infinity. The no-slip condition on the boundary leads to relations for the Fourier coefficients of the vorticity, involving the Kronecker delta and the Cartesian components of the velocity at infinity. In particular, from the no-slip condition follows the orthogonality of the vorticity to the kernel of the Weber transform. Vorticity flow and its boundary condition For Stokes flow the vorticity satisfies the vorticity (diffusion) equation, which can be written in terms of the Fourier coefficients of the expansion by polar angle. From the no-slip condition, after integrating by parts, one obtains a Robin boundary condition for the vorticity. The solution of the resulting boundary-value problem can then be expressed via the Weber integral above. Remark The formula for the vorticity gives another explanation of Stokes' paradox. The functions in the kernel of the Weber transform generate the stationary solutions of the vorticity equation with Robin-type boundary condition. From the arguments above, any Stokes vorticity flow with no-slip boundary condition must be orthogonal to these stationary solutions, which is possible only for the trivial flow. See also Oseen's approximation Stokes' law References Fluid dynamics Equations of fluid dynamics
Stokes' paradox
[ "Physics", "Chemistry", "Engineering" ]
1,100
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Piping", "Fluid dynamics" ]
40,748,127
https://en.wikipedia.org/wiki/Bond%20hardening
Bond hardening is a process of creating a new chemical bond by strong laser fields, an effect opposite to bond softening. However, it is not opposite in the sense that the bond becomes stronger, but in the sense that the molecule enters a state that is diametrically opposite to the bond-softened state. Such states require laser pulses of high intensity, in the range of 10^13–10^15 W/cm^2, and they disappear once the pulse is gone. Theory Bond hardening and bond softening share the same theoretical basis, which is described under the latter entry. Briefly, the ground and the first excited energy curves of the H2+ ion are dressed in photons. The laser field perturbs the curves and turns their crossings into anticrossings. Bond softening occurs on the lower branches of the anticrossings, and bond hardening happens if the molecule is excited to the upper branches – see Fig. 1. To trap the molecule in the bond-hardened state, the anticrossing gap can be neither too small nor too large. If it is too small, the system can undergo a diabatic transition to the lower branch of the anticrossing and dissociate via bond softening. If the gap is too large, the upper branch becomes shallow or even repulsive, and the system can also dissociate. This means that bound bond-hardened states can exist only in a relatively narrow range of laser intensities, which makes them difficult to observe. Experimental search for bond hardening When the existence of bond softening was experimentally verified in 1990, attention turned to bond hardening. Rather noisy photoelectron spectra reported in the early 1990s implied bond hardening occurring at the 1-photon and 3-photon anticrossings. These reports were received with great interest because bond hardening could explain the apparent stabilization of the molecular bond in strong laser fields accompanied by a collective ejection of several electrons. However, instead of more convincing evidence, new negative results relegated bond hardening to a remote theoretical possibility. Only at the end of the decade was the reality of bond hardening established, in an experiment where the laser pulse duration was varied by chirping. Conclusive evidence The results of the chirp experiment are shown in Fig. 2 in the form of a map. The central "crater" of the map is a signature of bond hardening. To appreciate the uniqueness of this signature requires explaining other features of the map. The horizontal axis of the map gives the time-of-flight (TOF) of ions produced in ionization and fragmentation of molecular hydrogen exposed to intense laser pulses. The left panel reveals several proton peaks; the right panel shows a relatively uninteresting, single peak of the molecular hydrogen ion. The vertical axis gives the grating position of the compressor in a chirped pulse amplifier of the Ti:Sapphire laser used in the experiment. The grating position controls the pulse duration, which is shortest (42 fs) for the zero position and increases in both directions. While the stretched pulses are also chirped, it is not the chirp but the pulse duration that matters in this experiment, as corroborated by the symmetry of the map with respect to the zero position line. The pulse energy is kept constant; therefore the shortest pulses are also the most intense, producing the most ions at the zero position. Kinetic energy variation The proton TOF spectra allow one to measure the kinetic energy release (KER) in the dissociation process. 
Protons ejected towards the detector have shorter TOFs than protons ejected away from the detector, because the latter have to be turned back by an external electric field applied to the interaction region. This forward-backward symmetry is reflected in the symmetry of the proton map with respect to zero KER (1.27 μs TOF). The most energetic protons come from the Coulomb explosion of the molecule, where the laser field completely strips H2 of its electrons and the two bare protons repel each other with a strong Coulombic force, unimpeded by any chemical bond. The stripping process is not instantaneous but occurs in a stepwise fashion on the rising edge of the laser pulse. The shorter the laser pulse, the quicker the stripping process, and the less time there is for the molecule to dissociate before the Coulomb force attains its full strength. Therefore, the KER is highest for the shortest pulses, as demonstrated by the outer curving "lobes" in Fig. 2. The second pair of proton peaks (1 eV KER) comes from bond softening of the H2+ ion, which dissociates into a proton and a neutral hydrogen atom (undetected). The dissociation starts at the 3-photon gap and proceeds to the 2ω limit (the lower blue arrow in Fig. 1). Since both the initial and the final energies of this process are fixed by the 1.55 eV photon energy, the KER is also constant, producing the two vertical lines in Fig. 2. The lowest energy protons are produced by the bond hardening process, which also starts at the 3-photon gap but proceeds to the 1ω limit (the lower red trough in Fig. 1). Since the initial and the final energies are also fixed here, the KER should also be constant, but clearly it is not, as the round shape of the central "crater" in Fig. 2 demonstrates. To explain this variation, the dynamics of the H2+ states need to be considered. Dynamics of bond hardening The H2+ ion is created on the leading edge of the laser pulse in the multiphoton ionization process. Since the equilibrium internuclear separation for the neutral molecule is smaller than for the ionized one, the ionic nuclear wave packet finds itself on the repulsive side of the ground state potential well and starts to cross it (see Fig. 3a). In the few femtoseconds it takes the wave packet to cross the potential well, the laser intensity is still modest and the 3-photon gap is small, allowing the wave packet to cross it diabatically. At large internuclear separations, the gentle slope of the potential well slowly turns the wave packet back, so when the packet returns to the 3-photon gap, the laser intensity is significantly higher and the gap is wide open, trapping the wave packet in a bond-hardened state, which lasts throughout the highest intensities (Fig. 3b). When the laser intensity falls, the bond-hardened energy curve returns to its original shape, flexing up, lifting the wave packet and releasing about half of it to the 1ω limit (Fig. 3c). The faster the intensity falls, the higher the wave packet is lifted and the more energy it gains, which explains why the KER of the "crater" in Fig. 2 is highest at the shortest laser pulse. This energy gain, however, is not induced by the rising edge of the laser pulse, as one would naively expect, but by the falling edge. Fractional photon energy Note that the maximum energy gain of the nuclear wave packet is about ħω and continuously decreases with the pulse duration. Does this mean we can have a fraction of a photon? There are two valid answers to this puzzling proposition. 
Breakdown of the photon model One can say that the photon is not a particle but a mere quantum of energy that is usually exchanged in integer multiples of ħω, but not always, as is the case in the above experiment. From this point of view, photons are quasiparticles, akin to phonons and plasmons, in a sense less "real" than electrons and protons. Before dismissing this view as unscientific, it is worth recalling the words of Willis Lamb, who won a Nobel prize in the area of quantum electrodynamics: There is no such thing as a photon. Only a comedy of errors and historical accidents led to its popularity among physicists and optical scientists. Dynamic Raman effect Alternatively, one can save the photon concept by recalling that the laser field is very strong and the pulse is very short. Indeed, the electric field in the laser pulse is so strong that during the process depicted in Fig. 3 about a hundred photon absorptions and stimulated emissions can take place. And since the pulse is short, it has a sufficiently wide bandwidth to accommodate absorption of photons that are more energetic than the re-emitted ones, giving the net result of a fraction of ħω. Effectively, we have a kind of dynamic Raman effect. Zero-photon dissociation An even more striking challenge to the photon concept comes from the zero-photon dissociation (ZPD) process, where nominally no photons are absorbed but some energy is still extracted from the laser field. To demonstrate this process, molecular hydrogen was exposed to 250 fs pulses of the 3rd harmonic of a Ti:Sapphire laser. Since the photon energy was 3 times higher, the spacing of the energy curves shown in Fig. 1 was 3 times larger, replacing the 3-photon crossing with a 1-photon one, as shown in Fig. 4. As before, the laser field changed the crossing to an anticrossing, bond softening was induced on its lower branch, and bond hardening trapped a part of the vibrational wave packet on the upper branch. With increasing laser intensity the anticrossing gap widened, lifting the wave packet to the 0ω limit and dissociating the molecule with very small KER. The experimental signature of the ZPD was a proton peak at zero KER. Moreover, the probability of a proton being promoted to this peak was found to be independent of the laser intensity, which confirms that it is induced by a zero-photon process, because the probability of multiphoton processes is proportional to the intensity, I, raised to the number of photons absorbed, giving I^0 = const. See also Conical intersections of energy surfaces in polyatomic molecules share many similarities with the simpler mechanism of bond hardening and bond softening in diatomic molecules. References Molecular physics Quantum chemistry Photochemistry
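The photon energies quoted in this article (1.55 eV for the Ti:Sapphire fundamental, three times that for its third harmonic) follow from E = hc/λ; a quick arithmetic sketch, assuming the usual 800 nm fundamental:

H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy E = hc / lambda."""
    return H_C_EV_NM / wavelength_nm

print(round(photon_energy_ev(800.0), 2))      # ~1.55 eV, Ti:Sapphire fundamental
print(round(photon_energy_ev(800.0 / 3), 2))  # ~4.65 eV, 3rd harmonic used for ZPD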
Bond hardening
[ "Physics", "Chemistry" ]
2,070
[ "Quantum chemistry", "Molecular physics", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", " and optical physics" ]
40,750,313
https://en.wikipedia.org/wiki/RFEM
RFEM is a 3D finite element analysis software package running under Microsoft Windows operating systems. RFEM can be used for structural analysis and design of steel, concrete, timber, glass, membrane and tensile structures, as well as for plant and mechanical engineering, dynamic analysis, and analysis of steel joints. The Web Services API allows users to create desktop or web-based applications by controlling all objects included in RFEM. Through the provided libraries and functions, users can develop their own design checks, efficient modeling of parametric structures, and optimization and automation processes using the programming languages Python and C#. RFEM is used by more than 13,000 companies, 130,000 users and many universities in 132 countries. As part of the research project "Thermal Imaging and Structural Analysis of Sandstone Monuments in Angkor", RFEM was used to create numerical models and for structural analysis. BIM Integration RFEM offers numerous interfaces for data exchange within the BIM process. All relevant building data is digitally maintained within a three-dimensional model, which is then used throughout all of the planning stages. As a result, the various CAD and structural analysis programs use the same model, which is directly transferred between the programs. Besides direct interfaces to Autodesk AutoCAD, Autodesk Revit Structure, Autodesk Structural Detailing, Bentley Systems applications (ISM), Tekla Structures, Rhino and Grasshopper, RFEM has interfaces for Industry Foundation Classes, CIS/2 and others. Materials and cross-section libraries, load generation RFEM's library of materials includes various types of concrete, metal, timber, glass, foil, gas and soil. RFEM's cross-section library includes rolled, built-up, thin-walled, and thick-walled cross-sections for steel, concrete, and timber. In addition, arbitrary open and closed thin-walled or massive cross-sections are available from the program RSECTION. With tools integrated in RFEM, wind, snow, opening and other loads can be generated, and surface loads can be converted into member loads. Through integrated CFD wind simulation using the RWIND program, wind loads can also be generated. References Building information modeling Computer-aided design software Computer-aided design software for Windows Finite element software
RFEM
[ "Engineering" ]
459
[ "Building engineering", "Building information modeling" ]
40,751,227
https://en.wikipedia.org/wiki/European-Mediterranean%20Seismological%20Centre
The European-Mediterranean Seismological Centre (EMSC) is an international, non-governmental and not-for-profit organisation. The European-Mediterranean region is prone to destructive earthquakes. When an earthquake occurs, a scientific organisation is needed to determine, as quickly as possible, the characteristics of the seismic event. The European-Mediterranean Seismological Centre (EMSC) receives seismological data from more than 65 national seismological agencies, mostly in the Euro-Mediterranean region. The most relevant earthquake parameters, such as the earthquake location and magnitude, together with the shaking felt by the population, are available within one hour of the earthquake onset. History The European-Mediterranean Seismological Centre (EMSC) is a not-for-profit organisation with 84 member institutes from 55 different countries. The centre was established in 1975 at the request of the European Seismological Commission (ESC). The EMSC became operational on 1 January 1975, at the Institut de Physique du Globe de Strasbourg. It received its final statute in 1983. In 1987, the EMSC was appointed by the Council of Europe as the main organisation to provide the European Alert System under the Open Partial Agreement (OPA) on Major Hazards. In 1993, the EMSC statute and organisation were amended. Its headquarters moved to the Laboratoire de Détection et de Géophysique (LDG) within the Département Analyse, Surveillance, Environnement (DASE) of the French Atomic Energy Commission (CEA), in Bruyères-le-Châtel (Essonne, France). As an international, non-governmental and non-profit organisation, the EMSC also focuses on promoting seismological research within and beyond its community. Hence, the EMSC is involved in many European (FP7 and H2020) projects: FP7 projects: NERA VERCE MARsite REAKT H2020 projects: EPOS-IP IMPROVER CARISMAND ENVRIplus ARISE2 Other projects: SIGMA RELEMR ARISTOTLE Objectives and activities The main scientific objectives of the EMSC are: To establish and operate a system for rapid determination of the European and Mediterranean earthquake epicentres (location of major earthquakes within approximately one hour). The EMSC, acting as the central authority, is responsible for transmitting these results immediately to the appropriate international authorities and to the members, in order to meet the needs of protection of society, scientific progress and general information. To determine the main source parameters (epicentre coordinates, depth, magnitude, focal mechanisms, etc.) of major seismic events located within the European-Mediterranean region, and to dispatch the corresponding results widely. To collect the data and make them available to other international, regional or national data centres such as the International Seismological Centre (ISC), the United States National Earthquake Information Center (NEIC), etc. To encourage scientific cooperation among European and Mediterranean countries in the field of earthquake research, and to develop studies of general interest such as epicentre location methods, construction of local and regional travel-time tables, and magnitude determination. To promote seismological data exchange between laboratories in the European-Mediterranean area. To afford detailed studies of specific events. To build a European seismological data bank. 
To improve the observational systems in the European-Mediterranean region through a critical examination of the seismological coverage, and suggest methods in order to improve the quality of observations and their transmission to EMSC. Specific approaches Flashsourcing EMSC has developed a new approach based on internet traffic analysis: when an earthquake occurs, witnesses rush on the EMSC website to look for further explanation of the event. Therefore, they create a surge in the website traffic which can indicate that an earthquake just occurred, even before receiving data provided by national seismological institutes. By identifying the geographical origin of the website's visitors, the area where the earthquake was felt is mapped within a couple of minutes of its occurrence. This technique is named flashsourcing. Citizen seismology Citizens are a primary source of information in the real-time earthquake detections. EMSC involves them in earthquake response by collecting in-situ information (e.g., questionnaires, pictures, videos) on the earthquake impact directly from the earthquake eyewitnesses. Consequently, by involving the citizens in the response, the EMSC paves the way for an efficient strategy to raise seismic risk awareness. References Transforming Earthquake Detection? Transforming Earthquake Detection and Science through Citizen Seismology External links Citizen Seismology Earthquake and seismic risk mitigation Seismological observatories, organisations and projects Seismic networks Organizations based in Île-de-France
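The flashsourcing approach described above is essentially surge detection on web-server traffic. A minimal sketch of that logic in Python (the threshold, function names, and data shapes are illustrative assumptions, not EMSC's actual system):

```python
from collections import Counter

def detect_felt_event(hits_per_minute, baseline, threshold=5.0):
    """Flag a minute as a candidate felt event when traffic exceeds
    `threshold` times the historical baseline for that minute."""
    return [t for t, n in hits_per_minute.items() if n > threshold * baseline]

def map_felt_area(visitor_locations):
    """Count geolocated visitors per area; the surge regions approximate
    the felt area of the earthquake."""
    return Counter(visitor_locations)

# Hypothetical usage: minute-by-minute hit counts and visitor geolocations
hits = {"12:00": 40, "12:01": 38, "12:02": 310}    # surge at 12:02
print(detect_felt_event(hits, baseline=40))         # ['12:02']
print(map_felt_area(["Athens", "Athens", "Patras"]).most_common(1))
```

The design choice mirrors the prose: the surge itself signals that an event occurred, while the geographic distribution of the surging visitors maps where it was felt, before any instrumental data arrive.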
European-Mediterranean Seismological Centre
[ "Engineering" ]
962
[ "Structural engineering", "Earthquake and seismic risk mitigation" ]
40,751,355
https://en.wikipedia.org/wiki/Coalbed%20methane%20in%20the%20United%20States
The 2017 production of coalbed methane in the United States was 0.98 trillion cubic feet (TCF), 3.6 percent of all US dry gas production that year. The 2017 production was down from the peak of 1.97 TCF in 2008. Most coalbed methane production came from the Rocky Mountain states of Colorado, Wyoming, and New Mexico. Coalbed methane reserve estimates vary; however, a 1997 estimate from the U.S. Geological Survey predicts more than 700 trillion cubic feet of methane within the US. At a natural gas price of US$6.05 per million Btu (US$5.73/GJ), that volume is worth US$4.37 trillion. At least 100 TCF of it is economically viable to produce. The EIA reports 2017 reserves at 11,878 billion cubic feet (BCF), or 11.878 trillion cubic feet, which at a market price of US$2.97 per million Btu as of May 14, 2021, are worth approximately US$36.2 billion. History Coalbed methane grew out of venting methane from coal seams. Some coal beds have long been known to be "gassy," and as a safety measure, boreholes were drilled into the seams from the surface, and the methane allowed to vent before mining. Methane produced in connection with coal mining is usually called "coal mine methane." Federal support Coalbed methane received a major push from the US federal government in the late 1970s. Federal price controls were discouraging natural gas drilling by keeping natural gas prices below market levels; at the same time, the government wanted to encourage more gas production. The US Department of Energy funded research into a number of unconventional gas sources, including coalbed methane. Coalbed methane was exempted from federal price controls, and was also given a federal tax credit. Start of coalbed methane in Alabama Coalbed methane as a resource apart from coal mining began in the early 1980s in the Black Warrior Basin of northern Alabama. The American Public Gas Association, under a U.S. Department of Energy grant, funded a three-well research program in 1980 to produce coalbed methane at Pleasant Grove, Alabama. This program is the first aimed at commercial recovery of gas rather than mine degasification. It is also the first attempt to produce from more than one coal seam in the same wellbore. The coalbed methane wells were drilled on the lawn of the Pleasant Grove courthouse. The gas was of sufficient quality to be ducted into the kitchens of domestic users after minor processing, including odorization as a safety measure. The Pleasant Grove Field, which was established in July 1980 at a ceremony attended by U.S. Senators, Congressmen and officials of the Administration, was Alabama's first coal degasification field. John Gustavson, a Boulder geologist, testified on the results in front of the State Oil and Gas Board of Alabama, which in 1983 established the nation's first comprehensive rules and regulations governing coalbed methane. These rules have served as a model for other states. Areas of coalbed methane production Black Warrior Basin, Alabama Powder River Basin, Wyoming Raton Basin, Colorado and New Mexico San Juan Basin, Colorado and New Mexico Produced water The produced water brought to the surface as a byproduct of gas extraction varies greatly in quality from area to area, but may contain undesirably high concentrations of dissolved substances such as salts and total dissolved solids (TDS). Because of the large amounts of water produced, economic water handling is an important factor in the economics of coalbed wells. 
CBM water from some areas, such as the San Juan Basin of Colorado and New Mexico, is of too poor quality to gain a National Pollutant Discharge Elimination System (NPDES) permit required for discharge to a surface stream, and must be disposed of to a federally licensed Class II disposal well, which injects produced water into saline aquifers below the base of potentially usable water. In 2008, coalbed methane production resulted in 1.23 billion barrels of produced water, of which 371 million barrels (30 percent) were discharged to surface streams under NPDES permits. Almost all the surface-discharged water was from three areas: the Black Warrior Basin of Alabama, the Powder River Basin of Wyoming, and the Raton Basin of Colorado and New Mexico. Powder River Basin Not all coalbed methane produced water is saline or otherwise undesirable. Water from coalbed methane wells in the Powder River Basin of Wyoming, USA, commonly meets federal drinking water standards, and is widely used in the area to water livestock. Its use for irrigation is limited by its relatively high sodium adsorption ratio. Black Warrior Basin A large amount of coalbed methane produced water from the Black Warrior Basin has less than 3,000 parts per million (ppm) total dissolved solids (TDS), and is discharged to surface streams under NPDES permits. However, coal beds in other parts of the basin contain greater than 10,000 ppm TDS, and are classed as saline. Power generation In 2012, the Aspen Skiing Company built a 3-megawatt methane-to-electricity plant in Somerset, Colorado at Oxbow Carbon's Elk Creek Mine. Funding methane capture with carbon offsets The Southern Ute Indian Tribe's methane capture project has reduced greenhouse gas emissions by the equivalent of about 379,000 metric tons of carbon dioxide between 2009 and 2017. Conventional coal bed methane production wells were not economically feasible in this location due to the low volume of seepage. The project delivers its gas to natural gas pipelines, and generates additional revenue through the sale of carbon offsets. Regulation Because coalbed methane wells are natural gas wells, most of the permitting and regulation of coalbed methane is done by state governments. Federal regulations apply to the two most common methods of handling CBM produced water. If the water is discharged to a surface stream, it must be done under an NPDES permit or a federally compliant state equivalent. If the water is disposed of by underground injection, it must be to a Class II disposal well. The environmental impacts of CBM development are considered by various governmental bodies during the permitting process and operation, which provide opportunities for public comment and intervention. Operators are required to obtain building permits for roads, pipelines and structures, obtain wastewater (produced water) discharge permits, and prepare Environmental Impact Statements. As with other natural resource utilization activities, the application and effectiveness of environmental laws, regulation, and enforcement vary with location. Violations of applicable laws and regulations are addressed through regulatory bodies and criminal and civil judicial proceedings. See also Natural gas in the United States Four Corners Methane Hot Spot References External links Coalbed Gas - News and Publications - US Geological Survey Coalbed Methane Outreach Program - EPA Kansas Geological Survey guide to Coalbed Methane Methane Coal in the United States Unconventional gas Natural gas in the United States
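The valuations quoted in the opening section follow from straightforward unit arithmetic. A small sketch of the calculation (the heat-content factor of about 1.03 MMBtu per thousand cubic feet is an assumed typical value for dry gas, so the results only approximate the quoted figures):

```python
MMBTU_PER_MCF = 1.03  # assumed typical heat content of dry natural gas

def gas_value_usd(volume_tcf, price_per_mmbtu):
    """Value of a gas volume (in trillion cubic feet) at a price in US$/MMBtu."""
    mcf = volume_tcf * 1e9  # 1 TCF = 1e9 Mcf (thousand cubic feet)
    return mcf * MMBTU_PER_MCF * price_per_mmbtu

print(gas_value_usd(700, 6.05) / 1e12)    # ~4.4  trillion US$ (quoted: 4.37)
print(gas_value_usd(11.878, 2.97) / 1e9)  # ~36   billion US$  (quoted: 36.2)
```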
Coalbed methane in the United States
[ "Chemistry" ]
1,384
[ "Greenhouse gases", "Methane" ]
40,752,010
https://en.wikipedia.org/wiki/Lagrangian%20%28field%20theory%29
Lagrangian field theory is a formalism in classical field theory. It is the field-theoretic analogue of Lagrangian mechanics. Lagrangian mechanics is used to analyze the motion of a system of discrete particles each with a finite number of degrees of freedom. Lagrangian field theory applies to continua and fields, which have an infinite number of degrees of freedom. One motivation for the development of the Lagrangian formalism on fields, and more generally, for classical field theory, is to provide a clear mathematical foundation for quantum field theory, which is infamously beset by formal difficulties that make it unacceptable as a mathematical theory. The Lagrangians presented here are identical to their quantum equivalents, but, in treating the fields as classical fields, instead of being quantized, one can provide definitions and obtain solutions with properties compatible with the conventional formal approach to the mathematics of partial differential equations. This enables the formulation of solutions on spaces with well-characterized properties, such as Sobolev spaces. It enables various theorems to be provided, ranging from proofs of existence to the uniform convergence of formal series to the general settings of potential theory. In addition, insight and clarity is obtained by generalizations to Riemannian manifolds and fiber bundles, allowing the geometric structure to be clearly discerned and disentangled from the corresponding equations of motion. A clearer view of the geometric structure has in turn allowed highly abstract theorems from geometry to be used to gain insight, ranging from the Chern–Gauss–Bonnet theorem and the Riemann–Roch theorem to the Atiyah–Singer index theorem and Chern–Simons theory. Overview In field theory, the independent variable is replaced by an event in spacetime $(x, y, z, t)$, or more generally still by a point s on a Riemannian manifold. The dependent variables are replaced by the value of a field at that point in spacetime, $\varphi(x, y, z, t)$, so that the equations of motion are obtained by means of an action principle, written as $\delta \mathcal{S} / \delta \varphi_i = 0$, where the action, $\mathcal{S}$, is a functional of the dependent variables $\varphi_i(s)$, their derivatives and s itself: $\mathcal{S}[\varphi_i] = \int \mathcal{L}\left(\varphi_i(s), \{\partial \varphi_i(s)/\partial s^\alpha\}, \{s^\alpha\}\right) \mathrm{d}^n s$, where the braces denote the whole collection of derivatives and coordinates; and s = {sα} denotes the set of n independent variables of the system, including the time variable, and is indexed by α = 1, 2, 3, ..., n. The calligraphic typeface, $\mathcal{L}$, is used to denote the density, and $\mathrm{d}^n s$ is the volume form of the field function, i.e., the measure of the domain of the field function. In mathematical formulations, it is common to express the Lagrangian as a function on a fiber bundle, wherein the Euler–Lagrange equations can be interpreted as specifying the geodesics on the fiber bundle. Abraham and Marsden's textbook provided the first comprehensive description of classical mechanics in terms of modern geometrical ideas, i.e., in terms of tangent manifolds, symplectic manifolds and contact geometry. Bleecker's textbook provided a comprehensive presentation of field theories in physics in terms of gauge invariant fiber bundles. Such formulations were known or suspected long before. Jost continues with a geometric presentation, clarifying the relation between Hamiltonian and Lagrangian forms, describing spin manifolds from first principles, etc. Current research focuses on non-rigid affine structures (sometimes called "quantum structures"), wherein one replaces occurrences of vector spaces by tensor algebras. 
This research is motivated by the breakthrough understanding of quantum groups as affine Lie algebras (Lie groups are, in a sense "rigid", as they are determined by their Lie algebra. When reformulated on a tensor algebra, they become "floppy", having infinite degrees of freedom; see e.g., Virasoro algebra.) Definitions In Lagrangian field theory, the Lagrangian as a function of generalized coordinates is replaced by a Lagrangian density, a function of the fields in the system and their derivatives, and possibly the space and time coordinates themselves. In field theory, the independent variable t is replaced by an event in spacetime or still more generally by a point s on a manifold. Often, a "Lagrangian density" is simply referred to as a "Lagrangian". Scalar fields For one scalar field , the Lagrangian density will take the form: For many scalar fields In mathematical formulations, the scalar fields are understood to be coordinates on a fiber bundle, and the derivatives of the field are understood to be sections of the jet bundle. Vector fields, tensor fields, spinor fields The above can be generalized for vector fields, tensor fields, and spinor fields. In physics, fermions are described by spinor fields. Bosons are described by tensor fields, which include scalar and vector fields as special cases. For example, if there are real-valued scalar fields, , then the field manifold is . If the field is a real vector field, then the field manifold is isomorphic to . Action The time integral of the Lagrangian is called the action denoted by . In field theory, a distinction is occasionally made between the Lagrangian , of which the time integral is the action and the Lagrangian density , which one integrates over all spacetime to get the action: The spatial volume integral of the Lagrangian density is the Lagrangian; in 3D, The action is often referred to as the "action functional", in that it is a function of the fields (and their derivatives). Volume form In the presence of gravity or when using general curvilinear coordinates, the Lagrangian density will include a factor of . This ensures that the action is invariant under general coordinate transformations. In mathematical literature, spacetime is taken to be a Riemannian manifold and the integral then becomes the volume form Here, the is the wedge product and is the square root of the determinant of the metric tensor on . For flat spacetime (e.g., Minkowski spacetime), the unit volume is one, i.e. and so it is commonly omitted, when discussing field theory in flat spacetime. Likewise, the use of the wedge-product symbols offers no additional insight over the ordinary concept of a volume in multivariate calculus, and so these are likewise dropped. Some older textbooks, e.g., Landau and Lifschitz write for the volume form, since the minus sign is appropriate for metric tensors with signature (+−−−) or (−+++) (since the determinant is negative, in either case). When discussing field theory on general Riemannian manifolds, the volume form is usually written in the abbreviated notation where is the Hodge star. That is, and so Not infrequently, the notation above is considered to be entirely superfluous, and is frequently seen. Do not be misled: the volume form is implicitly present in the integral above, even if it is not explicitly written. Euler–Lagrange equations The Euler–Lagrange equations describe the geodesic flow of the field as a function of time. 
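In conventional notation, the objects defined above take the following standard forms (a sketch for one scalar field $\varphi$; signature and unit conventions vary between textbooks):

```latex
% Lagrangian density for one scalar field, and the action over spacetime
\mathcal{L}\left(\varphi,\, \partial_\mu \varphi,\, x\right), \qquad
\mathcal{S}[\varphi] = \int \mathcal{L}\, \mathrm{d}^4 x

% The Lagrangian is the spatial integral of the density (3D case)
L(t) = \int \mathcal{L}\, \mathrm{d}^3 x

% With gravity or general curvilinear coordinates, the invariant volume form carries \sqrt{-g}
\mathcal{S} = \int \mathcal{L}\, \sqrt{-g}\, \mathrm{d}^4 x

% Stationarity of the action, \delta S = 0, yields the Euler–Lagrange equations
\partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \varphi)} \right)
  - \frac{\partial \mathcal{L}}{\partial \varphi} = 0
```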
Taking the variation with respect to , one obtains Solving, with respect to the boundary conditions, one obtains the Euler–Lagrange equations: Examples A large variety of physical systems have been formulated in terms of Lagrangians over fields. Below is a sampling of some of the most common ones found in physics textbooks on field theory. Newtonian gravity The Lagrangian density for Newtonian gravity is: where is the gravitational potential, is the mass density, and in m3·kg−1·s−2 is the gravitational constant. The density has units of J·m−3. Here the interaction term involves a continuous mass density ρ in kg·m−3. This is necessary because using a point source for a field would result in mathematical difficulties. This Lagrangian can be written in the form of , with the providing a kinetic term, and the interaction the potential term. See also Nordström's theory of gravitation for how this could be modified to deal with changes over time. This form is reprised in the next example of a scalar field theory. The variation of the integral with respect to is: After integrating by parts, discarding the total integral, and dividing out by the formula becomes: which is equivalent to: which yields Gauss's law for gravity. Scalar field theory The Lagrangian for a scalar field moving in a potential can be written as It is not at all an accident that the scalar theory resembles the undergraduate textbook Lagrangian for the kinetic term of a free point particle written as . The scalar theory is the field-theory generalization of a particle moving in a potential. When the is the Mexican hat potential, the resulting fields are termed the Higgs fields. Sigma model Lagrangian The sigma model describes the motion of a scalar point particle constrained to move on a Riemannian manifold, such as a circle or a sphere. It generalizes the case of scalar and vector fields, that is, fields constrained to move on a flat manifold. The Lagrangian is commonly written in one of three equivalent forms: where the is the differential. An equivalent expression is with the Riemannian metric on the manifold of the field; i.e. the fields are just local coordinates on the coordinate chart of the manifold. A third common form is with and , the Lie group SU(N). This group can be replaced by any Lie group, or, more generally, by a symmetric space. The trace is just the Killing form in hiding; the Killing form provides a quadratic form on the field manifold, the lagrangian is then just the pullback of this form. Alternately, the Lagrangian can also be seen as the pullback of the Maurer–Cartan form to the base spacetime. In general, sigma models exhibit topological soliton solutions. The most famous and well-studied of these is the Skyrmion, which serves as a model of the nucleon that has withstood the test of time. Electromagnetism in special relativity Consider a point particle, a charged particle, interacting with the electromagnetic field. The interaction terms are replaced by terms involving a continuous charge density ρ in A·s·m−3 and current density in A·m−2. The resulting Lagrangian density for the electromagnetic field is: Varying this with respect to , we get which yields Gauss' law. Varying instead with respect to , we get which yields Ampère's law. Using tensor notation, we can write all this more compactly. The term is actually the inner product of two four-vectors. We package the charge density into the current 4-vector and the potential into the potential 4-vector. 
These two new vectors are We can then write the interaction term as Additionally, we can package the E and B fields into what is known as the electromagnetic tensor . We define this tensor as The term we are looking out for turns out to be We have made use of the Minkowski metric to raise the indices on the EMF tensor. In this notation, Maxwell's equations are where ε is the Levi-Civita tensor. So the Lagrange density for electromagnetism in special relativity written in terms of Lorentz vectors and tensors is In this notation it is apparent that classical electromagnetism is a Lorentz-invariant theory. By the equivalence principle, it becomes simple to extend the notion of electromagnetism to curved spacetime. Electromagnetism and the Yang–Mills equations Using differential forms, the electromagnetic action S in vacuum on a (pseudo-) Riemannian manifold can be written (using natural units, ) as Here, A stands for the electromagnetic potential 1-form, J is the current 1-form, is the field strength 2-form and the star denotes the Hodge star operator. This is exactly the same Lagrangian as in the section above, except that the treatment here is coordinate-free; expanding the integrand into a basis yields the identical, lengthy expression. Note that with forms, an additional integration measure is not necessary because forms have coordinate differentials built in. Variation of the action leads to These are Maxwell's equations for the electromagnetic potential. Substituting immediately yields the equation for the fields, because is an exact form. The A field can be understood to be the affine connection on a U(1)-fiber bundle. That is, classical electrodynamics, all of its effects and equations, can be completely understood in terms of a circle bundle over Minkowski spacetime. The Yang–Mills equations can be written in exactly the same form as above, by replacing the Lie group U(1) of electromagnetism by an arbitrary Lie group. In the Standard model, it is conventionally taken to be although the general case is of general interest. In all cases, there is no need for any quantization to be performed. Although the Yang–Mills equations are historically rooted in quantum field theory, the above equations are purely classical. Chern–Simons functional In the same vein as the above, one can consider the action in one dimension less, i.e. in a contact geometry setting. This gives the Chern–Simons functional. It is written as Chern–Simons theory was deeply explored in physics, as a toy model for a broad range of geometric phenomena that one might expect to find in a grand unified theory. Ginzburg–Landau Lagrangian The Lagrangian density for Ginzburg–Landau theory combines the Lagrangian for the scalar field theory with the Lagrangian for the Yang–Mills action. It may be written as: where is a section of a vector bundle with fiber . The corresponds to the order parameter in a superconductor; equivalently, it corresponds to the Higgs field, after noting that the second term is the famous "Sombrero hat" potential. The field is the (non-Abelian) gauge field, i.e. the Yang–Mills field and is its field-strength. The Euler–Lagrange equations for the Ginzburg–Landau functional are the Yang–Mills equations and where is the Hodge star operator, i.e. the fully antisymmetric tensor. These equations are closely related to the Yang–Mills–Higgs equations. Another closely related Lagrangian is found in Seiberg–Witten theory. 
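The example Lagrangians surveyed so far have well-known textbook forms; the following sketch restates them in conventional notation (metric signature $(+,-,-,-)$ assumed; overall factors differ between references):

```latex
% Newtonian gravity: varying \Phi gives Gauss's law for gravity, \nabla^2 \Phi = 4\pi G \rho
\mathcal{L}(\mathbf{x}, t) = -\rho(\mathbf{x}, t)\,\Phi(\mathbf{x}, t)
  - \frac{1}{8\pi G} \left( \nabla \Phi(\mathbf{x}, t) \right)^2

% Scalar field moving in a potential V
\mathcal{L} = \tfrac{1}{2}\, \partial_\mu \varphi\, \partial^\mu \varphi - V(\varphi)

% Electromagnetism: F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, and the
% Euler–Lagrange equations give Maxwell's equations \partial_\mu F^{\mu\nu} = \mu_0 J^\nu
\mathcal{L} = -\frac{1}{4\mu_0} F_{\mu\nu} F^{\mu\nu} - A_\mu J^\mu
```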
Dirac Lagrangian The Lagrangian density for a Dirac field is: where is a Dirac spinor, is its Dirac adjoint, and is Feynman slash notation for . There is no particular need to focus on Dirac spinors in the classical theory. The Weyl spinors provide a more general foundation; they can be constructed directly from the Clifford algebra of spacetime; the construction works in any number of dimensions, and the Dirac spinors appear as a special case. Weyl spinors have the additional advantage that they can be used in a vielbein for the metric on a Riemannian manifold; this enables the concept of a spin structure, which, roughly speaking, is a way of formulating spinors consistently in a curved spacetime. Quantum electrodynamic Lagrangian The Lagrangian density for QED combines the Lagrangian for the Dirac field together with the Lagrangian for electrodynamics in a gauge-invariant way. It is: where is the electromagnetic tensor, D is the gauge covariant derivative, and is Feynman notation for with where is the electromagnetic four-potential. Although the word "quantum" appears in the above, this is a historical artifact. The definition of the Dirac field requires no quantization whatsoever, it can be written as a purely classical field of anti-commuting Weyl spinors constructed from first principles from a Clifford algebra. The full gauge-invariant classical formulation is given in Bleecker. Quantum chromodynamic Lagrangian The Lagrangian density for quantum chromodynamics combines the Lagrangian for one or more massive Dirac spinors with the Lagrangian for the Yang–Mills action, which describes the dynamics of a gauge field; the combined Lagrangian is gauge invariant. It may be written as: where D is the QCD gauge covariant derivative, n = 1, 2, ...6 counts the quark types, and is the gluon field strength tensor. As for the electrodynamics case above, the appearance of the word "quantum" above only acknowledges its historical development. The Lagrangian and its gauge invariance can be formulated and treated in a purely classical fashion. Einstein gravity The Lagrange density for general relativity in the presence of matter fields is where is the cosmological constant, is the curvature scalar, which is the Ricci tensor contracted with the metric tensor, and the Ricci tensor is the Riemann tensor contracted with a Kronecker delta. The integral of is known as the Einstein–Hilbert action. The Riemann tensor is the tidal force tensor, and is constructed out of Christoffel symbols and derivatives of Christoffel symbols, which define the metric connection on spacetime. The gravitational field itself was historically ascribed to the metric tensor; the modern view is that the connection is "more fundamental". This is due to the understanding that one can write connections with non-zero torsion. These alter the metric without altering the geometry one bit. As to the actual "direction in which gravity points" (e.g. on the surface of the Earth, it points down), this comes from the Riemann tensor: it is the thing that describes the "gravitational force field" that moving bodies feel and react to. (This last statement must be qualified: there is no "force field" per se; moving bodies follow geodesics on the manifold described by the connection. They move in a "straight line".) The Lagrangian for general relativity can also be written in a form that makes it manifestly similar to the Yang–Mills equations. This is called the Einstein–Yang–Mills action principle. 
This is done by noting that most of differential geometry works "just fine" on bundles with an affine connection and arbitrary Lie group. Then, plugging in SO(3,1) for that symmetry group, i.e. for the frame fields, one obtains the equations above. Substituting this Lagrangian into the Euler–Lagrange equation and taking the metric tensor as the field, we obtain the Einstein field equations is the energy momentum tensor and is defined by where is the determinant of the metric tensor when regarded as a matrix. Generally, in general relativity, the integration measure of the action of Lagrange density is . This makes the integral coordinate independent, as the root of the metric determinant is equivalent to the Jacobian determinant. The minus sign is a consequence of the metric signature (the determinant by itself is negative). This is an example of the volume form, previously discussed, becoming manifest in non-flat spacetime. Electromagnetism in general relativity The Lagrange density of electromagnetism in general relativity also contains the Einstein–Hilbert action from above. The pure electromagnetic Lagrangian is precisely a matter Lagrangian . The Lagrangian is This Lagrangian is obtained by simply replacing the Minkowski metric in the above flat Lagrangian with a more general (possibly curved) metric . We can generate the Einstein Field Equations in the presence of an EM field using this lagrangian. The energy-momentum tensor is It can be shown that this energy momentum tensor is traceless, i.e. that If we take the trace of both sides of the Einstein Field Equations, we obtain So the tracelessness of the energy momentum tensor implies that the curvature scalar in an electromagnetic field vanishes. The Einstein equations are then Additionally, Maxwell's equations are where is the covariant derivative. For free space, we can set the current tensor equal to zero, . Solving both Einstein and Maxwell's equations around a spherically symmetric mass distribution in free space leads to the Reissner–Nordström charged black hole, with the defining line element (written in natural units and with charge ): One possible way of unifying the electromagnetic and gravitational Lagrangians (by using a fifth dimension) is given by Kaluza–Klein theory. Effectively, one constructs an affine bundle, just as for the Yang–Mills equations given earlier, and then considers the action separately on the 4-dimensional and the 1-dimensional parts. Such factorizations, such as the fact that the 7-sphere can be written as a product of the 4-sphere and the 3-sphere, or that the 11-sphere is a product of the 4-sphere and the 7-sphere, accounted for much of the early excitement that a theory of everything had been found. Unfortunately, the 7-sphere proved not large enough to enclose all of the Standard model, dashing these hopes. Additional examples The BF model Lagrangian, short for "Background Field", describes a system with trivial dynamics, when written on a flat spacetime manifold. On a topologically non-trivial spacetime, the system will have non-trivial classical solutions, which may be interpreted as solitons or instantons. A variety of extensions exist, forming the foundations for topological field theories. 
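Likewise, the fermionic and gravitational examples above are conventionally written as follows (a sketch in natural units; sign and coupling conventions vary):

```latex
% Dirac field, with \bar\psi = \psi^\dagger \gamma^0
\mathcal{L} = \bar{\psi}\,(i \gamma^\mu \partial_\mu - m)\,\psi

% QED: the gauge covariant derivative D_\mu couples \psi to the four-potential A_\mu
\mathcal{L}_{\mathrm{QED}} = \bar{\psi}\,(i \gamma^\mu D_\mu - m)\,\psi
  - \tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}

% QCD: n runs over quark types, G^a_{\mu\nu} is the gluon field strength tensor
\mathcal{L}_{\mathrm{QCD}} = \sum_n \bar{\psi}_n\,(i \gamma^\mu D_\mu - m_n)\,\psi_n
  - \tfrac{1}{4} G^a_{\mu\nu} G_a^{\mu\nu}

% Einstein gravity with matter: R is the curvature scalar, \kappa = 8\pi G/c^4
\mathcal{L} = \frac{\sqrt{-g}}{2\kappa}\,(R - 2\Lambda) + \mathcal{L}_{\mathrm{matter}}
```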
See also Calculus of variations Covariant classical field theory Euler–Lagrange equation Functional derivative Functional integral Generalized coordinates Hamiltonian mechanics Hamiltonian field theory Kinetic term Lagrangian and Eulerian coordinates Lagrangian mechanics Lagrangian point Lagrangian system Noether's theorem Onsager–Machlup function Principle of least action Scalar field theory Notes Citations Mathematical physics Classical field theory Calculus of variations Quantum field theory
Lagrangian (field theory)
[ "Physics", "Mathematics" ]
4,576
[ "Quantum field theory", "Applied mathematics", "Theoretical physics", "Quantum mechanics", "Classical field theory", "Mathematical physics" ]
55,166,788
https://en.wikipedia.org/wiki/Direct%20methods%20%28electron%20microscopy%29
In crystallography, direct methods is a set of techniques used for structure determination using diffraction data and a priori information. It is a solution to the crystallographic phase problem, where phase information is lost during a diffraction measurement. Direct methods provides a method of estimating the phase information by establishing statistical relationships between the recorded amplitude information and phases of strong reflections. Background Phase Problem In electron diffraction, a diffraction pattern is produced by the interaction of the electron beam and the crystal potential. The real space and reciprocal space information about a crystal structure can be related through the Fourier transform relationships shown below, where is in real space and corresponds to the crystal potential, and is its Fourier transform in reciprocal space. The vectors and are position vectors in real and reciprocal space, respectively. , also known as the structure factor, is the Fourier transform of a three-dimensional periodic function (i.e. the periodic crystal potential), and it defines the intensity measured during a diffraction experiment. can also be written in a polar form , where is a specific reflection in reciprocal space. has an amplitude term (i.e. ) and a phase term (i.e. ). The phase term contains the position information in this form. During a diffraction experiment, the intensity of the reflections are measured as : This is a straightforward method of obtaining the amplitude term of the structure factor. However, the phase term, which contains position information from the crystal potential, is lost. Analogously, for electron diffraction performed in a transmission electron microscope, the exit wave function of the electron beam from the crystal in real and reciprocal space can be written respectively as: Where and are amplitude terms, the exponential terms are phase terms, and is a reciprocal space vector. When a diffraction pattern is measured, only the intensities can be extracted. A measurement obtains a statistical average of the moduli: Here, it is also clear that the phase terms are lost upon measurement in an electron diffraction experiment. This is referred to as the crystallographic phase problem. History In 1952, David Sayre introduced the Sayre equation, a construct that related the known phases of certain diffracted beams to estimate the unknown phase of another diffracted beam. In the same issue of Acta Crystallographica, Cochran and Zachariasen also independently derived relationships between the signs of different structure factors. Later advancements were done by other scientists, including Hauptman and Karle, leading to the awarding of the Nobel Prize in Chemistry (1985) to Hauptman and Karle for their development of direct methods for the determination of crystal structures. Comparison to X-Ray Direct Methods The majority of direct methods was developed for X-ray diffraction. However, electron diffraction has advantages in several applications. Electron diffraction is a powerful technique for analyzing and characterizing nano- and micron-sized particles, molecules, and proteins. While electron diffraction is often dynamical and more complex to understand compared to X-ray diffraction, which is usually kinematical, there are specific cases (detailed later) that have sufficient conditions for applying direct methods for structure determination. 
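In standard notation, the Fourier-transform relationships and the polar form referred to in the Background section read as follows (a sketch matching the prose above; $\Phi$ is the crystal potential and $F$ the structure factor):

```latex
% Crystal potential and structure factor as a Fourier pair
F(\mathbf{k}) = \int \Phi(\mathbf{r})\, e^{-2\pi i\, \mathbf{k} \cdot \mathbf{r}}\, \mathrm{d}\mathbf{r},
\qquad
\Phi(\mathbf{r}) = \sum_{\mathbf{k}} F(\mathbf{k})\, e^{\, 2\pi i\, \mathbf{k} \cdot \mathbf{r}}

% Polar form: a measurement retains the amplitude |F| but loses the phase \varphi
F(\mathbf{k}) = |F(\mathbf{k})|\, e^{\, i \varphi(\mathbf{k})},
\qquad
I(\mathbf{k}) \propto |F(\mathbf{k})|^2
```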
Theory Unitary Sayre Equation The Sayre equation was developed under certain assumptions taken from information about the crystal structure, specifically that all atoms considered are identical and there is a minimum distance between atoms. Called the "Squaring Method," a key concept of the Sayre equation is that squaring the electron-density function (for X-ray diffraction) or crystal potential function (for electron diffraction) results in a function that resembles the original un-squared function of identical and resolved peaks. By doing so, it reinforces atom-like features of the crystal. Consider the structure factor in the following form, where is the atomic scattering factor for each atom at position , and is the position of atom : This can be converted to the unitary structure factor by dividing by N (the number of atoms) and : This can be alternatively rewritten in real and reciprocal space as: This equation is a variation of the Sayre equation. Based on this equation, if the phases of and are known, then the phase of is known. Triplet Phase Relationship The triplet phase relationship is an equation directly relating two known phases of diffracted beams to the unknown phase of another. This relationship can be easily derived via the Sayre equation, but it may also be demonstrated through statistical relationships between the diffracted beams, as shown here. For randomly distributed atoms, the following holds true: Meaning that if: Then: In the above equation, and the moduli are known on the right hand side. The only unknown terms are contained in the cosine term that includes the phases. The central limit theorem can be applied here, which establishes that distributions tend to be Gaussian in form. By combining the terms of the known moduli, a distribution function can be written that is dependent on the phases: This distribution is known as the Cochran distribution. The standard deviation for this Gaussian function scales with the reciprocal of the unitary structure factors. If they are large, then the sum in the cosine term must be: This is called the triplet phase relationship (). If the phases and are known, then the phase can be estimated. Tangent Formula The tangent formula was first derived in 1955 by Jerome Karle and Herbert Hauptman. It related the amplitudes and phases of known diffracted beams to the unknown phase of another. Here, it is derived using the Cochran distribution. The most probable value of can be found by taking the derivative of the above equation, which gives a variant of the tangent formula: Practical Considerations The basis behind the phase problem is that phase information is more important than amplitude information when recovering an image. This is because the phase term of the structure factor contains the positions. However, the phase information does not need to be retrieved completely accurately. Often even with errors in the phases, a complete structure determination is possible. Likewise, amplitude errors will not severely impact the accuracy of the structure determination. Sufficient Conditions In order to apply direct methods to a set of data for successful structure determination, there must be reasonable sufficient conditions satisfied by the experimental conditions or sample properties. Outlined here are several cases. Kinematical Diffraction One of the reasons direct methods was originally developed for analyzing X-ray diffraction is because almost all X-ray diffraction is kinematical. 
While most electron diffraction is dynamical, which is more difficult to interpret, there are instances in which mostly kinematical scattering intensities can be measured. One specific example is surface diffraction in plan view orientation. When analyzing the surface of a sample in plan view, the sample is often tilted off a zone axis in order to isolate the diffracted beams of the surface from those of the bulk. Achieving kinematical conditions is difficult in most cases—it requires very thin samples to minimize dynamical diffraction. Statistical Kinematical Diffraction Even though most cases of electron diffraction are dynamical, it is still possible to achieve scattering that is statistically kinematical in nature. This is what enables the analysis of amorphous and biological materials, where dynamical scattering from random phases add up to be nearly kinematical. Furthermore, as explained earlier, it is not critical to retrieve phase information completely accurately. Errors in the phase information are tolerable. Recalling the Cochran distribution and considering a logarithm of that distribution: In the above distribution, contains normalization terms, terms are the experimental intensities, and contains both of these for simplicity. Here, the most probable phases will maximize the function . If the intensities are sufficiently high and the sum in the cosine term remains , then will also be large, thereby maximizing . With a narrow distribution such as this, the scattering data will be statistically within the realm of kinematical consideration. Intensity Mapping Consider two scattered beams with different intensities. The magnitude of their intensities will then have to be related to the amplitude of their corresponding scattering factors by the relationship: Let ) be a function that relates the intensity to the phase for the same beam, where contains normalization terms: Then, the distribution of values will be directly related to the values of . That is, when the product is large or small, will also be large and small. So, the observed intensities can be used to reasonably estimate the phases for diffracted beams. The observed intensity can be related to the structure factor more formally using the Blackman formula. Other cases to consider for intensity mapping are specific diffraction experiments, including powder diffraction and precession electron diffraction. Specifically, precession electron diffraction produces a quasi-kinematical diffraction pattern that can be used adequately in direct methods. Dominated Scattering In some cases, scattering from a sample can be dominated by one type of atom. Therefore, the exit wave from the sample will also be dominated by that atom type. For example, the exit wave and intensity of a sample dominated by channeling can be written in reciprocal space in the form: is the Fourier transform of , which is complex and represents the shape of an atom, given by the channeling states (e.g. 1s, 2s, etc.). is real in reciprocal space and complex in the object plane. If , a conjugate symmetric function, is substituted for , then it is feasible to retrieve atom-like features from the object plane: In the object plane, the Fourier transform of will be a real and symmetric pseudoatom () at the atomic column positions. will satisfy atomistic constraints as long as they are reasonably small and well-separated, thereby satisfying some constraints required for implementing direct methods. 
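For reference, the central statistical relations of the Theory section above are conventionally written as follows (a sketch using unitary and normalized structure factors $U$ and $E$; normalization details vary between treatments):

```latex
% Sayre equation (squaring method): strong reflections reinforce one another
U_{\mathbf{h}} \;\propto\; \sum_{\mathbf{k}} U_{\mathbf{k}}\, U_{\mathbf{h}-\mathbf{k}}

% Triplet phase relationship for strong reflections
\varphi_{\mathbf{h}} \;\approx\; \varphi_{\mathbf{k}} + \varphi_{\mathbf{h}-\mathbf{k}} \pmod{2\pi}

% Tangent formula: most probable phase of reflection h from known pairs (k, h-k)
\tan \varphi_{\mathbf{h}} \;\approx\;
\frac{\sum_{\mathbf{k}} |E_{\mathbf{k}} E_{\mathbf{h}-\mathbf{k}}|\, \sin(\varphi_{\mathbf{k}} + \varphi_{\mathbf{h}-\mathbf{k}})}
     {\sum_{\mathbf{k}} |E_{\mathbf{k}} E_{\mathbf{h}-\mathbf{k}}|\, \cos(\varphi_{\mathbf{k}} + \varphi_{\mathbf{h}-\mathbf{k}})}
```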
Implementation Direct methods is a set of routines for structure determination. In order to successfully solve for a structure, several algorithms have been developed for direct methods. A selection of these is explained below. Gerchberg–Saxton The Gerchberg–Saxton algorithm was originally developed by Gerchberg and Saxton to solve for the phase of wave functions with intensities known in both the diffraction and imaging planes. However, it has been generalized for any information in real or reciprocal space. Detailed here is a generalization using electron diffraction information. One can successively impose real-space and reciprocal-space constraints on an initial estimate until it converges to a feasible solution. Constraints Constraints can be physical or statistical. For instance, the fact that the data is produced by a scattering experiment in a transmission electron microscope imposes several constraints, including atomicity, bond lengths, symmetry, and interference. Constraints may also be statistical in origin, as shown earlier with the Cochran distribution and the triplet phase relationship. According to Combettes, image recovery problems can be considered as a convex feasibility problem. This idea was adapted by Marks et al. to the crystallographic phase problem. With a feasible set approach, constraints can be considered convex (highly convergent) or non-convex (weakly convergent). Imposing these constraints with the algorithm detailed earlier can converge towards unique or non-unique solutions, depending on the convexity of the constraints. Examples Direct methods with electron diffraction datasets have been used to solve for a variety of structures. As mentioned earlier, surfaces are one of the cases in electron diffraction where scattering is kinematical. As such, many surface structures have been solved for by both X-ray and electron diffraction direct methods, including many of the silicon, magnesium oxide, germanium, copper, and strontium titanate surfaces. More recently, automated three-dimensional electron diffraction methods have been developed, such as automated diffraction tomography and rotation electron diffraction. These techniques have been used to obtain data for structure solution through direct methods and have been applied to zeolites, thermoelectrics, oxides, metal-organic frameworks, organic compounds, and intermetallics. In some of these cases, the structures were solved in combination with X-ray diffraction data, making them complementary techniques. In addition, some success has been found using direct methods for structure determination with the cryo-electron microscopy technique Microcrystal Electron Diffraction (MicroED). MicroED has been used for a variety of materials, including crystal fragments, proteins, and enzymes. Software DIRDIF DIRDIF is a computer program for structure determination using the Patterson function and direct methods applied to difference structure factors. It was first released by Paul Beurkens and his colleagues at the University of Nijmegen in 1999. It is written in Fortran and was most recently updated in 2008. It can be used for structures with heavy atoms, structures of molecules with partly known geometries, and for certain special case structures. Detailed information can be found at its website: http://www.xtal.science.ru.nl/dirdif/software/dirdif.html. EDM Electron Direct Methods is a set of programs developed at Northwestern University by Professor Laurence Marks. 
First released in 2004, its most recent release was version 3.1 in 2010. Written in C++, C, and Fortran 77, EDM is capable of performing image processing of high resolution electron microscopy images and diffraction patterns and direct methods. It has a standard GNU license and is free to use or modify for non-commercial purposes. It uses a feasible set approach and genetic algorithm search for solving structures using direct methods, and it also has high-resolution transmission electron microscopy image simulation capabilities. More information can be found at the website: http://www.numis.northwestern.edu/edm/index.shtml. The code is no longer being developed. OASIS OASIS was first written by several scientists from the Chinese Academy of Sciences in Fortran 77. The most recent release is version 4.2 in 2012. It is a program for direct methods phasing of protein structures. The acronym OASIS stands for two of its applications: phasing One-wavelength Anomalous Scattering or Single Isomorphous Substitution protein data. It reduces the phase problem to a sign problem by locating the atomic sites of anomalous scatterers or heavy atom substitutions. More details can be found at the website: http://cryst.iphy.ac.cn/Project/IPCAS1.0/user_guide/oasis.html. SIR The SIR (seminvariants representation) suite of programs was developed for solving the crystal structures of small molecules. SIR is updated and released frequently, with the first release in 1988 and the latest release in 2014. It is capable of both ab initio and non-ab-initio direct methods. The program is written in Fortran and C++ and is free for academic use. SIR can be used for the crystal structure determination of small-to-medium-sized molecules and proteins from either X-ray or electron diffraction data. More information can be found at its website: http://www.ba.ic.cnr.it/softwareic/sir2014/. See also Crystallography Transmission electron microscopy Diffraction Precession electron diffraction Dynamical diffraction Electron crystallography Electron diffraction Microcrystal Electron Diffraction References Electron microscopy
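The Gerchberg–Saxton-style iteration described in the Implementation section alternates between reciprocal-space and real-space constraints. A minimal NumPy sketch (the constraints here, measured Fourier amplitudes plus a non-negativity/support condition, are illustrative assumptions; production codes such as EDM are far more elaborate):

```python
import numpy as np

def gerchberg_saxton(measured_amps, support, n_iter=200, seed=0):
    """Alternate projections: impose the measured |F(k)| in reciprocal space,
    then atomicity-like constraints (support, non-negativity) in real space."""
    rng = np.random.default_rng(seed)
    phases = np.exp(2j * np.pi * rng.random(measured_amps.shape))
    estimate = np.fft.ifft2(measured_amps * phases).real
    for _ in range(n_iter):
        F = np.fft.fft2(estimate)
        # Reciprocal-space constraint: keep current phases, reset amplitudes
        F = measured_amps * np.exp(1j * np.angle(F))
        estimate = np.fft.ifft2(F).real
        # Real-space constraints: confine density to the support, clip negatives
        estimate = np.where(support, np.clip(estimate, 0, None), 0.0)
    return estimate

# Hypothetical usage: recover a sparse "atom map" from its Fourier amplitudes
true_map = np.zeros((32, 32))
true_map[8, 8] = true_map[20, 14] = 1.0
amps = np.abs(np.fft.fft2(true_map))
recovered = gerchberg_saxton(amps, support=np.ones((32, 32), bool))
```

Whether such an iteration converges to a unique solution depends, as noted above, on the convexity of the imposed constraint sets.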
Direct methods (electron microscopy)
[ "Chemistry" ]
3,189
[ "Electron", "Electron microscopy", "Microscopy" ]
55,167,350
https://en.wikipedia.org/wiki/SOX17
SRY-box 17 is a protein that in humans is encoded by the SOX17 gene. Regulation at the human SOX17 locus The gene encodes a member of the SOX (SRY-related HMG-box) family of transcription factors and is located on chromosome 8q11.23. Its gene body is isolated within a CTCF loop domain. Approximately 230 kb upstream of SOX17, a tissue-specific differentially (hypo-)methylated region (DMR) has been identified, which consists of SOX17 regulatory elements. In particular, the DMR bears the most distal definitive endoderm-specific enhancer at the SOX17 locus. SOX17 itself has recently been defined as a so-called topologically insulated gene (TIG). TIGs, by definition, are single protein-coding genes (PCGs) within CTCF loop domains; they are mainly enriched in developmental regulators and are suggested to be very tightly controlled via their 3D loop-domain architecture. Function in development SOX17 is involved in the regulation of vertebrate embryonic development and in the determination of the endodermal cell fate. The encoded protein acts downstream of TGF beta signaling (Activin) and canonical WNT signaling (Wnt3a). In particular, the correct phosphorylation of SMAD2/3 within the respective cell-cycle phase (early G1) is crucial for the activation of cardinal endodermal genes (e.g. SOX17) and for further entry into the definitive endodermal lineage. Besides that, perturbation of the SOX17 centromeric CTCF boundary in early definitive endoderm differentiation leads to massive developmental failure and a so-called mes-endodermal-like trapped cell state, which can be rescued by ectopic SOX17 expression. In Xenopus gastrulae, it has been shown that SOX17 modifies Wnt responses: the genomic specificity of Wnt/β-catenin transcription is determined through functional interactions between SOX17 and β-catenin/Tcf transcriptional complexes. References Further reading MacCarthy CM, Malik V, Wu G, et al., & Velychko S (September 2022). "Enhancing Sox/Oct cooperativity induces higher-grade developmental reset". bioRxiv. Transcription factors
SOX17
[ "Chemistry", "Biology" ]
480
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
55,178,071
https://en.wikipedia.org/wiki/River%20bank%20erosion%20along%20the%20Ganges%20in%20Malda%20and%20Murshidabad%20districts
River bank erosion along the Ganges in Malda and Murshidabad districts focuses on river bank erosion along the main channel of the Ganges in the Malda and Murshidabad districts of West Bengal, India. Overview The Ganges is a long river carrying a huge discharge of 70,000 m3/s. However, the river bank erosion problems are restricted to a few places. Floods and erosion pose a serious problem in the lower Ganges region, particularly in West Bengal. The Ganges enters West Bengal after wandering around the Rajmahal hills in Jharkhand. After flowing through Malda district, it enters Murshidabad district, where it splits into two river channels – the Bhagirathi flows south through West Bengal and the Padma flows east into Bangladesh. River bank erosion is a common problem in river channels in the deltaic tracts and is widespread throughout the course of the Ganges in West Bengal. Official reports show that on average 8 km2 of land is engulfed annually by the river in West Bengal. The Ganges forms one of the major river systems in India. From the Gangotri Glacier, it traverses a distance of 2,525 km to the Bay of Bengal. The river carries millions of tonnes of sediment load and deposits it in the plains. The sediment deposition creates many severe problems, such as the decrease of river depth. The Ganges is a meandering river, and the Farakka Barrage has disrupted the dynamic equilibrium of the river and hindered its natural oscillation within its meandering belt, which is about 10 km wide in Malda and Murshidabad districts. The river has a general tendency to shift towards the left bank upstream of the Farakka Barrage and towards the right bank downstream of it. River bank failure is caused by factors such as soil stratification of the river bank, the presence of a hard rocky area (Rajmahal), a high load of sediment, the difficulty of dredging, and the construction of the Farakka Barrage as an obstruction to the natural river flow. The rivers in Murshidabad district have been continuously and actively changing their meandering geometry since the second half of the twentieth century, but the scale of river bank erosion has increased since the construction of the Farakka Barrage. More than 200 km2 of fertile land had been completely wiped out by 2004 in Malda district. In Murshidabad district, the Ganges eroded 356 km2 of fertile land and displaced around 80,000 people in the period 1988–1994. Malda district In the early decades of the twentieth century, the Ganges flowed in a south-easterly course between Rajmahal and Farakka, but later in the century it formed a large meander to accommodate the additional water because of the barrage construction. Furthermore, nearly 64 crore (640 million) tonnes of silt is accumulated annually on the river bed. All these lead to massive erosion of the left bank. During the period 1969-1999, 4.5 lakh (450,000) people were affected by left bank erosion of the Ganges in Malda district, upstream of the Farakka Barrage. 22 mouzas in Manickchak, Kaliachak I and Kaliachak II CD Blocks have gone into the river. Other affected areas are in Kaliachak III, Ratua I and Ratua II CD Blocks. The worst-hit areas lie on the left bank of the river stretch between Bhutnidiara and Panchanandapur in the Kaliachak II block. As late as the 1960s, Panchanandapur was a flourishing river-port and trading centre. It had the block headquarters, a high school, a sugar mill and a regular weekly market where traders used to come by large boats from Rajmahal, Sahebganj, Dhuliyan and other towns. 
After being hit by river bank erosion much of what was there at Panchanandapur has shifted to Chethrumahajantola. The Ganga Bhangan Pratirodh Action Nagarik Committee’s survey revealed a loss of 750 km2 area in Kaliachak and Manikchak. 60 primary schools, 14 high schools, coveted mango orchards have gone leaving 40,000 affected families. During the period 1990-2001 Hiranandapur, Manikchak, Gopalpur of Manikchak CD Block and Kakribondha Jhaubona of Kaliachak II CD Block were badly affected by river bank erosion. In 2004-05 large scale erosion took place in Kakribondha Jhaubona and Panchanandapur-I gram panchayats of Kaliachak II CD Block and Dakshin Chandipur, Manikchak, and Dharampur gram panchayets of Manikchak CD Block. Kakribondha Jhaubona, a gram panchayat, was totally lost by river bank erosion. The affected persons and their administrative responsibilities were merged with Bangitola gram panchayet administration. River bank failures occur in two phases. Pre-flood bank failure occurs because of the high pressure of increasing water on the bank walls. During floods the area is submerged and water seeps into the weak soil. After the floods, the banks collapse in chunks. Every monsoon a large number people are affected by river bank erosion. They become landless and lose their livelihood. It creates neo-refugees with many social problems. Sometimes, poverty leads to increase in crime. The consequences of floods are of the short range as economic recovery is possible, but effects of the slow and steady disaster of river bank erosion are of permanent nature, where the entire socio-economic structure is damaged and the affected population has to move and settle somewhere else. People seriously affected by river bank erosion in Malda have migrated in search of work to as far as Gujarat and Maharashtra. At Byculla, Mumbai, there is a whole colony of erosion affected people of Malda, where they are often branded as Bangladeshi infiltrators, as they have lost not only their belongings but also their documents in the erosion. Such is the tragedy of these neo-refugees in their own country. In the remote past, the Ganges used to flow past Gauda, 40 km downstream from Rajmahal. Over a long period, the river shifted westward and now it tends to come to its earlier position. Therefore, the whole belt up to Gauda is risk zone for river bank erosion. A group of experts has suggested the pressure on the left bank be reduced by diverting flow from the eroding channel. Alternatively, it is possible that in one devastating flood the Ganges will merge with Kalindri in the eastern side and the combined flow will merge with Mahananda at Nimasarai Ghat of Malda and afterwards the collective flow will merge with Ganges/ Padma in Godagari Ghat of Bangladesh. The Ganges has numerous abandoned channels in the area. Murshidabad district As of 2013, an estimated 2.4 million people reside along the banks of the Ganges alone in Murshidabad district. The main channel of the Ganges has a bankline of 94 km along its right bank from downstream of Farakka Barrage to Jalangi. Severe erosion occurs all along this bank. From a little above Nimtita, about 20 km downstream from Farakka, the Ganges flows along the international boundary with Bangladesh in the left bank. The following blocks have to face the brunt of erosion year after year: Farakka, Samserganj, Suti I, Suti II, Raghunathganj II, Lalgola, Bhagawangola I, Bhagawangola II, Raninagar I, Raninagar II and Jalangi. 
According to government reports between 1931 and 1977, 26769 hectares of lands have been eroded and many villages have been fully submerged. Thousands of people have lost their dwellings. Between 1988 and 1994, 206.60 square km. land was eroded displacing 14,236 families. During 1952-53 the old Dhuliyan town was completely washed away by the river. Dhuliyan and its adjoining areas were greatly affected in mid 1970s when about 50,000 people became homeless. The encroaching river wiped out 50 mouzas and engulfed about 10,000 hectares of fertile land. In August 2020, this region again faced erosion which washed away dwelling places, temples, schools, litchi and mango orchards and agricultural lands along the right bank nearly after 50 years. It affected namely Dhanghora, Dhusaripara and Natun Shibpur villages of Samserganj block. In September-October 2022, Pratapganj and Maheshtola areas of Samserganj were the new victim of river bank erosion. Five houses, one temple and several bighas of land were washed away by the eroding river. According to the Report on Impact of the Farakka Barrage on the Human Fabric: "People in Murshidabad had been experiencing erosion for the last two centuries but the ravages caused by the mighty Padma at Akheriganj in 1989 and 1990 surpassed all previous records. Akheriganj disappeared from the map destroying 2,766 houses, leaving 23,394 persons homeless many of whom migrated to the newly emerged Nirmal char along the opposite bank…. This area has lost its school, college, places of worship, panchayat office to the raging Padma…. Original Akheriganj of nearly 20,000 inhabitants has gone into the river around 1994." "Jalangi situated 50 km east of Baharampur district headquarter has suffered tremendously in 1994-95. At Jalangi Bazaar severe erosion started in September 1995 engulfing nearly 400 metre width of land within a week and then high built up homestead land thereby destroying Jalangi High School, Gram Panchayat Office, Thana and innumerable buildings rendering nearly 12000 people homeless." "As per official estimate, till 1992-94 more than 10,000 hectares of chars (flood plain sediment island) have developed in main places, which have become inaccessible from the Indian side but can be reached easily from Bangladesh. The erosion wiped away boundary posts at many places creating border dispute. In Parliament when this issue was raised the House was assured that the boundary was fixed on the map even though the river has shifted". "One typical example is that of Nirmal char built by eroding Akheriganj. Here a population of 20,000 lives in an area of 50 sq.km. From here Rajshahi city of Bangladesh can be reached within 45 minutes on road whereas to come to the mainland of India one has to cross the mighty Padma which will take more than three hours. Moreover, the basic infrastructure provided here is too poor and the people’s plight is further heightened by negligence of the mainland administration. Since there is no primary health centre, people go to Rajshahi for treatment. The concept of international border is very much flexible here due to basic problems of living. Instances of fighting for harvesting with Bangladeshi cultivators have been reported again and again apart from the usual problem of allotting created land to the rightful owners. Once again, the question of Bangladeshi infiltrators, the recent fiasco over ISI agents have increased in this district due to these char areas." 
"Downstream of Jangipur Barrage the river Ganga/Padma is swinging away close to river Bhagirathi at Fazilpur leaving only 1.34 km. in width. In 1996, this distance was 2.86 km. If Ganga/Padma actually merges with Bhagirathi due to the natural tendency, it will lead to flood and catastrophe in the entire Bhagirathi basin. Bhagirathi water remains at a higher elevation than the river Ganga/Padma during lean season and if they merge the water of the feeder canal will flow through Padma to Bangladesh defeating the very purpose of the Farakka Project." References Erosion Hydrology Ganges Malda district Murshidabad district
River bank erosion along the Ganges in Malda and Murshidabad districts
[ "Chemistry", "Engineering", "Environmental_science" ]
2,464
[ "Hydrology", "Environmental engineering" ]
55,178,112
https://en.wikipedia.org/wiki/Angle%20bracket%20%28fastener%29
An angle bracket or angle brace or angle cleat is an L-shaped fastener used to join two parts, generally at a 90-degree angle. It is typically made of metal, but it can also be made of wood or plastic. Angle brackets feature holes in them for screws. A typical example of use is a shelf bracket for mounting a shelf on a wall. In general, angle brackets have a wide range of applications and are used, among other things, in building construction, in mechanical engineering, or to join two pieces of furniture. Retailers also use names like corner brace, corner bracket brace, shelf bracket, or L bracket. When the holes are enlarged to allow adjustment, the names angle stretcher plate or angle shrinkage are used. Types Angle brackets are available in different sizes, varying in length, width and angle. See also Shelf supports have many variations, including angle brackets References Fasteners Furniture components
Angle bracket (fastener)
[ "Technology", "Engineering" ]
182
[ "Construction", "Furniture components", "Fasteners", "Components" ]
56,608,952
https://en.wikipedia.org/wiki/Algebraic%20representation
In mathematics, an algebraic representation of a group G on a k-algebra A is a linear representation $\pi \colon G \to GL(A)$ such that, for each g in G, $\pi(g)$ is an algebra automorphism. Equipped with such a representation, the algebra A is then called a G-algebra. For example, if V is a linear representation of a group G, then the representation put on the tensor algebra $T(V)$ (written out below) is an algebraic representation of G. If A is a commutative G-algebra, then $\operatorname{Spec}(A)$ is an affine G-scheme.
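Written out, the tensor-algebra example takes the following shape (a standard construction; the degreewise diagonal action shown here is the usual convention, not notation taken from this article's source):

```latex
% G acts degreewise on the tensor algebra T(V) = \bigoplus_{n \ge 0} V^{\otimes n}:
\pi(g)\,(v_1 \otimes \cdots \otimes v_n) = (g \cdot v_1) \otimes \cdots \otimes (g \cdot v_n),
\qquad \pi(g)\,1 = 1 .
% Each \pi(g) is linear and multiplicative, hence an algebra automorphism:
\pi(g)\,(a \otimes b) = \pi(g)a \otimes \pi(g)b ,
\qquad a \in V^{\otimes m},\ b \in V^{\otimes n} .
```

See also Algebraic character References Claudio Procesi (2007) Lie Groups: an approach through invariants and representation, Springer. Lie groups Representation theory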
Algebraic representation
[ "Mathematics" ]
133
[ "Lie groups", "Mathematical structures", "Algebra stubs", "Fields of abstract algebra", "Algebraic structures", "Representation theory", "Algebra" ]
56,616,299
https://en.wikipedia.org/wiki/Kramers%E2%80%93Moyal%20expansion
In stochastic processes, the Kramers–Moyal expansion refers to a Taylor series expansion of the master equation, named after Hans Kramers and José Enrique Moyal. In many textbooks, the expansion is used only to derive the Fokker–Planck equation, and never used again. In general, continuous stochastic processes are essentially all Markovian, and so Fokker–Planck equations are sufficient for studying them. The higher-order terms of the Kramers–Moyal expansion come into play only when the process is jumpy. This usually means it is a Poisson-like process. For a real stochastic process, one can compute its central moment functions from experimental data on the process, from which one can then compute its Kramers–Moyal coefficients, and thus empirically measure its Kolmogorov forward and backward equations. This is implemented as a Python package. Statement Start with the integro-differential master equation $$\frac{\partial p(x,t)}{\partial t} = \int \left[\, W(x \mid x')\, p(x',t) - W(x' \mid x)\, p(x,t) \,\right] dx'$$ where $W(x \mid x')$ is the transition probability function, and $p(x,t)$ is the probability density at time $t$. The Kramers–Moyal expansion transforms the above into an infinite-order partial differential equation $$\frac{\partial p(x,t)}{\partial t} = \sum_{n=1}^{\infty} \left(-\frac{\partial}{\partial x}\right)^{\!n} \left[ D_n(x,t)\, p(x,t) \right]$$ and also $$\frac{\partial p(x,t \mid x_0,t_0)}{\partial t_0} = -\sum_{n=1}^{\infty} D_n(x_0,t_0)\, \frac{\partial^n}{\partial x_0^n}\, p(x,t \mid x_0,t_0)$$ where $D_n(x,t)$ are the Kramers–Moyal coefficients, defined by $$D_n(x,t) = \frac{1}{n!} \lim_{\tau \to 0} \frac{1}{\tau}\, \mu_n(x,t,\tau)$$ and $\mu_n$ are the central moment functions, defined by $$\mu_n(x,t,\tau) = \int (x'-x)^n\, p(x', t+\tau \mid x, t)\, dx' = \left\langle \left[ x(t+\tau) - x(t) \right]^n \right\rangle \Big|_{x(t)=x}.$$ The Fokker–Planck equation is obtained by keeping only the first two terms of the series, $$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\left[ D_1(x,t)\, p(x,t) \right] + \frac{\partial^2}{\partial x^2}\left[ D_2(x,t)\, p(x,t) \right],$$ in which $D_1$ is the drift and $D_2$ is the diffusion coefficient. Also, the moments, assuming they exist, evolve as $$\frac{d\langle x^m \rangle}{dt} = \sum_{n=1}^{m} \frac{m!}{(m-n)!} \left\langle x^{m-n}\, D_n(x,t) \right\rangle,$$ where angled brackets mean taking the expectation: $\langle f(x) \rangle = \int f(x)\, p(x,t)\, dx$. n-dimensional version The above version is the one-dimensional version. It generalizes to n dimensions. (Section 4.7) Proof In usual probability, where the probability density does not change, the moments of a probability density function determine the probability density itself by a Fourier transform (details may be found at the characteristic function page): $$\varphi(k) = \left\langle e^{ikx} \right\rangle = \sum_{n=0}^{\infty} \frac{(ik)^n}{n!} \langle x^n \rangle, \qquad p(x) = \frac{1}{2\pi} \int e^{-ikx}\, \varphi(k)\, dk.$$ Similarly, $$p(x', t+\tau \mid x, t) = \sum_{n=0}^{\infty} \frac{\mu_n(x,t,\tau)}{n!} \left(-\frac{\partial}{\partial x'}\right)^{\!n} \delta(x'-x).$$ Now we need to integrate away the Dirac delta function. Fixing a small $\tau$, we have by the Chapman–Kolmogorov equation, $$p(x, t+\tau) = \int p(x, t+\tau \mid x', t)\, p(x', t)\, dx' = \sum_{n=0}^{\infty} \left(-\frac{\partial}{\partial x}\right)^{\!n} \left[ \frac{\mu_n(x,t,\tau)}{n!}\, p(x,t) \right].$$ The $n = 0$ term is just $p(x,t)$, so taking the derivative with respect to time, $$\frac{\partial p(x,t)}{\partial t} = \lim_{\tau \to 0} \frac{p(x,t+\tau) - p(x,t)}{\tau} = \sum_{n=1}^{\infty} \left(-\frac{\partial}{\partial x}\right)^{\!n} \left[ D_n(x,t)\, p(x,t) \right].$$ The same computation applied to the transition probability, conditioning on the earlier time, gives the other equation. Forward and backward equations The equation can be recast into a linear operator form, using the idea of the infinitesimal generator. Define the linear operator $$\mathcal{A}\, f = \sum_{n=1}^{\infty} \left(-\frac{\partial}{\partial x}\right)^{\!n} \left[ D_n(x,t)\, f \right],$$ then the equation above states $$\partial_t\, p = \mathcal{A}\, p.$$ In this form, the equations are precisely in the form of a general Kolmogorov forward equation. The backward equation then states that $$-\partial_{t_0}\, p(x,t \mid x_0,t_0) = \mathcal{A}^{\dagger}\, p(x,t \mid x_0,t_0),$$ where $\mathcal{A}^{\dagger}$ is the Hermitian adjoint of $\mathcal{A}$. Computing the Kramers–Moyal coefficients By definition, $$D_n(x,t) = \frac{1}{n!} \lim_{\tau \to 0} \frac{1}{\tau}\, \mu_n(x,t,\tau).$$ This definition works because $\mu_n(x,t,0) = 0$ for $n \geq 1$, as those are the central moments of the Dirac delta function. Since the even central moments are nonnegative, we have $D_{2n}(x,t) \geq 0$ for all $n$. When the stochastic process is the Markov process $$dx = f(x,t)\, dt + g(x,t)\, dW_t,$$ we can directly solve for $p(x, t+\tau \mid x_0, t)$ as approximated by a normal distribution with mean $x_0 + f(x_0,t)\,\tau$ and variance $g(x_0,t)^2\,\tau$. This then allows us to compute the central moments, and so $$D_1 = f(x,t), \qquad D_2 = \tfrac{1}{2}\, g(x,t)^2, \qquad D_n = 0 \ \text{for } n \geq 3.$$ This then gives us the 1-dimensional Fokker–Planck equation: $$\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left[ f(x,t)\, p \right] + \frac{1}{2}\, \frac{\partial^2}{\partial x^2}\left[ g(x,t)^2\, p \right].$$
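The empirical route described in the introduction, estimating $\mu_n$ from data and dividing by $n!\,\tau$, can be sketched in a few lines of NumPy. This is a minimal illustration written for this article, not the Python package alluded to above; the binning scheme, bin count, and the Ornstein–Uhlenbeck test process are all illustrative choices:

```python
import numpy as np
from math import factorial

def km_coefficients(x, dt, n_max=2, bins=40):
    """Estimate Kramers-Moyal coefficients D_n(x) from a 1-D time series.

    The conditional moments <[x(t+dt) - x(t)]^n | x(t)> are estimated by
    binning the starting points x(t); dividing by n! * dt gives the
    finite-time-step approximation of D_n at each bin centre.
    """
    dx = np.diff(x)                      # increments x(t+dt) - x(t)
    x0 = x[:-1]                          # starting points x(t)
    edges = np.linspace(x0.min(), x0.max(), bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x0, edges) - 1, 0, bins - 1)
    D = np.full((n_max, bins), np.nan)
    for b in range(bins):
        sel = idx == b
        if sel.sum() < 50:               # skip sparsely populated bins
            continue
        for n in range(1, n_max + 1):
            D[n - 1, b] = (dx[sel] ** n).mean() / (factorial(n) * dt)
    return centres, D

# Sanity check on a simulated Ornstein-Uhlenbeck process
# dx = -x dt + sqrt(2) dW, whose exact coefficients are
# D_1(x) = -x and D_2(x) = 1.
rng = np.random.default_rng(0)
dt, steps = 1e-3, 200_000
x = np.zeros(steps)
for i in range(steps - 1):
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(2 * dt) * rng.standard_normal()

centres, D = km_coefficients(x, dt)      # D[0] ~ -centres, D[1] ~ 1
```

Up to sampling noise and finite-$\tau$ bias, the recovered profiles approach the exact drift and diffusion of the simulated process.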
Pawula theorem Pawula theorem states that either the sequence $D_1, D_2, D_3, \ldots$ becomes zero at the third term, or all its even terms are positive. Proof By the Cauchy–Schwarz inequality, the central moment functions satisfy $\mu_{m+n}^2 \leq \mu_{2m}\, \mu_{2n}$. So, taking the limit, we have $D_{m+n}^2 \leq \frac{(2m)!\,(2n)!}{((m+n)!)^2}\, D_{2m}\, D_{2n}$. If $D_{m+n} \neq 0$ for some $m, n \geq 1$, then $D_{2m} \neq 0$ and $D_{2n} \neq 0$. In particular, writing $2n = (n-1) + (n+1)$ for $n \geq 2$ shows that $D_{2n} \neq 0$ implies $D_{2n+2} \neq 0$. So the existence of any nonzero coefficient of order $\geq 3$ implies the existence of nonzero coefficients of arbitrarily large order. Also, since the even central moments are nonnegative, any nonzero even-order coefficient is positive. So the existence of any nonzero coefficient of order $\geq 3$ implies all coefficients of even order are positive. Interpretation Let the truncated operator $\mathcal{A}_N$ be defined such that $\mathcal{A}_N\, p = \sum_{n=1}^{N} \left(-\frac{\partial}{\partial x}\right)^{\!n} \left[ D_n(x,t)\, p \right]$. The probability density evolves by $\partial_t p = \mathcal{A}_N\, p$. Different orders of $N$ give different levels of approximation: $N = 0$: the probability density does not evolve; $N = 1$: it evolves by deterministic drift only; $N = 2$: it evolves by drift and Brownian motion (Fokker–Planck equation); $N = \infty$: the fully exact equation. Pawula theorem means that if truncating to the second term is not exact, that is, if $D_n \neq 0$ for some $n \geq 3$, then truncating to any finite term is still not exact. Usually, this means that for any finite truncation $N \geq 3$, there exists a probability density function that can become negative during its evolution (and thus fail to be a probability density function). However, this doesn't mean that Kramers–Moyal expansions truncated at other choices of $N$ are useless. Though the solution must have negative values at least for sufficiently small times, the resulting approximate probability density may still be better than the $N = 2$ approximation. References Statistical mechanics Stochastic calculus
Kramers–Moyal expansion
[ "Physics" ]
896
[ "Statistical mechanics" ]
57,050,741
https://en.wikipedia.org/wiki/Exogenous%20ketone
Exogenous ketones are a class of ketone bodies that are ingested in the form of nutritional supplements or foods. This class of ketone bodies refers mainly to β-hydroxybutyrate (BHB). The body can make BHB endogenously, via the liver, in response to starvation, ketogenic diets, or prolonged exercise, leading to ketosis. However, with the introduction of exogenous ketone supplements, it is possible to provide a user with an instant supply of ketones even if the body is not in a state of ketosis before ingestion. However, drinking exogenous ketones will not trigger fat burning the way a ketogenic diet does. Most supplements rely on β-hydroxybutyrate as the source of exogenous ketone bodies. It is the most common exogenous ketone body because of its efficient energy conversion and ease of synthesis. In the body, BHB can be converted to acetoacetic acid. It is this acetoacetic acid that enters the energy pathway via beta-ketothiolase, becoming two acetyl-CoA molecules. The acetyl-CoA is then able to enter the Krebs cycle to generate ATP. The remaining BHB molecules that are not converted to acetoacetic acid are converted to acetone through the acetoacetate decarboxylase waste mechanism. Structure Acetoacetate is produced in the mitochondria of liver cells by the addition of an acetyl group from acetyl-CoA. This creates 3-hydroxy-3-methylglutaryl-CoA, which loses an acetyl group, becoming acetoacetate. β-Hydroxybutyrate (BHB) is also synthesized within liver cells; this is accomplished through the metabolism of fatty acids. Through a series of reactions, acetoacetate is first produced, and it is this acetoacetate that is reduced to β-hydroxybutyrate, catalyzed by the β-hydroxybutyrate dehydrogenase enzyme. Although β-hydroxybutyrate is technically not a ketone, due to the structure of the molecule (it carries a hydroxyl group rather than a ketone carbonyl, and is chemically an acid), BHB acts like a ketone, providing the body with energy in the absence of glucose. In fact, β-hydroxybutyrate is the most abundant ketone-like molecule in the blood during ketosis. Acetone is an organic compound with the formula (CH3)2CO and is one of the simplest and smallest ketones. It is synthesized from the breakdown of acetoacetate in ketotic individuals within the liver. Types Ketone salts Ketone salts are usually a synthetic compound of beta-hydroxybutyric acid (βHB) bonded to sodium, potassium, magnesium, and/or calcium to offset the acidic nature of βHB alone. Most ketone salts are racemic, which means only half of the βHB is bioavailable, resulting in double the salt load per unit of D-BHB and even lower bioavailability. Ketone esters There are multiple molecules that qualify as a "ketone ester." The most researched ketone ester, or ketone monoester, is called D-beta-hydroxybutyrate/R 1,3-butanediol monoester, a naturally derived compound produced through a fermentation process. It was created by Dr. Richard Veech and Todd King at the NIH, and then commercialized by companies including KetoneAid and TDeltaS, and previously by HVMN, a Silicon Valley–based technology company. This monoester links the same beta-hydroxybutyric acid found in ketone salts to D 1,3-butanediol (also called R 1,3-butanediol) instead of bases (salts). The first part of the metabolism of this monoester takes place in the digestive system (fast release), with the remaining portion taking place in the liver (slow release). 
The metabolic structure of D-beta-hydroxybutyrate/R 1,3-butanediol monoester is similar to that of MCT C8 oil, but many times stronger and without GI issues. R 1,3-Butanediol (also known as D 1,3-butanediol) This is the non-racemic form of 1,3-butanediol. It should not be confused with the similarly named 1,4-butanediol, which converts to γ-hydroxybutyric acid, also known as GHB. It will significantly raise blood ketone levels, to about 60% of the level achieved with the ketone monoester. It has only been tested once for sports performance, and the paper concluded that "TTF [time to fatigue] was not significantly different". Other ketone esters Technically there are other ketone esters, such as acetoacetate bound to D/L 1,3-butanediol (racemic). This diester has mostly been tested with deep-sea divers. It is not commercially available. Another ketone ester, also referred to as a ketone di-ester, is a bond of C6 or C8 with R 1,3-butanediol. It is recommended to be consumed with food and is commercialized by Juvenescence Labs. Effects The consumption of ketone bodies results in several effects, ranging from reduced glucose utilization in peripheral tissues to anti-lipolytic effects on adipose tissue and reduced proteolysis in skeletal muscle. In addition to this, ketone bodies serve as signaling molecules that regulate gene expression and adaptive responses. When exogenous ketone bodies are ingested, an acute, nutritional exogenous ketosis is produced. Blood In human blood, ketone ester consumption delivers a >50% higher plasma concentration of D-β-hydroxybutyrate (D-βHB), an isoform of regular β-HB, than ketone salt consumption. In terms of efficacy, blood D-βHB concentrations are higher when using ketone esters instead of ketone salts (KE = 2.8±0.2 mM; KS = 1.0± mM). This is due to the fact that the KE supplement contains >99% of the D-βHB isoform, while the KS supplement contains ~50% of the L-βHB isoform, which is metabolized much more slowly than the D-βHB isoform. Also, ketone salt supplements slightly raise the blood pH level. This is mainly due to the conjugate base action of βHB (βHB−), which fully dissociates within the blood; this mildly raises the blood and urine pH, which is further increased as the kidneys excrete the excess cations (Na+, Ca2+, K+). Ketone esters reduce the blood pH because KE hydrolysis provides β-HB along with butanediol. These two undergo hepatic metabolism, forming a keto-acid. Hormones Exogenous ketones lower blood glucose concentrations. Even when carbohydrate stores are plentiful, ketones lower blood glucose because they limit hepatic gluconeogenesis and increase peripheral glucose uptake. They have also been known to reduce hunger and the desire to eat. This is shown by decreased levels of the hunger hormone ghrelin. In addition, it has been surmised that exogenous ketones may stimulate insulin secretion. Following exposure to exogenous ketones, small amounts of secreted insulin have been reported in animals. However, because insulin has also been shown to increase in subjects who took an exogenous ketone supplement and dextrose drink, in addition to those who only took the exogenous supplement, more research is needed on the effects of ketone supplements on insulin. See also Acetoacetate Acetone Ketone Ketone bodies Ketosis β-hydroxybutyrate (β-HB) References Dietary supplements
Exogenous ketone
[ "Chemistry" ]
1,700
[ "Ketones", "Functional groups" ]
57,052,173
https://en.wikipedia.org/wiki/Radiation-induced%20lumbar%20plexopathy
Radiation-induced lumbar plexopathy (RILP) or radiation-induced lumbosacral plexopathy (RILSP) is nerve damage in the pelvis and lower spine area caused by therapeutic radiation treatments. RILP is a rare side effect of external beam radiation therapy and of both interstitial and intracavity brachytherapy radiation implants. RILP is a symptom of pelvic radiation disease. In general terms, such nerve damage may present in stages, earlier as demyelination and later as complications of chronic radiation fibrosis. RILP occurs as a result of radiation therapy administered to treat lymphoma or cancers within the abdomen or pelvic area, such as cervical, ovarian, bladder, kidney, pancreatic, prostate, testicular, colorectal, colon, rectal or anal cancer. The lumbosacral plexus area is radiosensitive, and radiation plexopathy can occur after exposure to mean or maximum radiation levels of 50–60 Gy, with a significant difference in rates noted within that range. Signs and symptoms Lumbosacral plexopathy is characterized by any of the following symptoms, usually bilateral and symmetrical, though unilateral presentation is known: lower limb dysaesthesia (abnormal sensations of touch or feeling); lower limb weakness; lower limb numbness; lower limb paresthesia (e.g., foot drop, muscle atrophy); and lower limb pain. Symptoms typically follow a step-wise progression with periods of stability in between, weakness often appearing years later. Weakness frequently presents in the lower leg muscle groups. Symptoms are usually irreversible. Initial onset of symptoms may occur as early as 2 to 3 months after radiotherapy. The median onset is approximately 5 years, but onset can be highly variable, occurring as late as 2–3 decades after radiation therapy. One case study recorded the initial onset occurring 36 years post treatment. Cause The treatment's ionizing radiation is an activation mechanism for apoptosis (cell death) within the targeted cancer, but it can also impact nearby healthy radiosensitive tissues, like the lumbosacral plexus. The occurrence and severity of RILP is related to the magnitude of ionizing radiation, and the radiosensitivity of peripheral nerves may be further aggravated when radiation is combined with chemotherapy, such as taxanes and platinum drugs, during treatment. Pathophysiology The pathophysiological process behind radiation's RILP nerve damage has been discussed since the 1960s and is still without a precise definition. Consensus does exist on a progression of RILP symptoms, with a stepping (a time delay) between two periods of plexopathy onset, the first from radiation injury and the later from fibrosis. Proposed mechanisms of the early nerve damage include microvascular damage (ischemia) of the vessels supplying the myelin, radiation damage of the myelin itself, and oxygen free radical cell damage. The delayed nerve damage is attributed to compression neuropathy and a late fibro-atrophic ischemia from retractile fibrosis. Diagnosis The more common source of lumbar plexopathy is direct or secondary tumor involvement of the plexus, with MRI being the typical confirmation tool. Tumors typically present with enhancement of nerve roots and T2-weighted hyperintensity. The differential consideration of RILP requires taking a medical history and a neurologic examination. RILP's neurological symptoms can mimic other nerve disorders. People may present with pure lower motor neuron syndrome, a symptom of amyotrophic lateral sclerosis (ALS). 
RILP may also be misdiagnosed as leptomeningeal metastasis, which often shows nodular MRI enhancement of the cauda equina nerve roots or increased CSF protein content. Other differential diagnoses to consider are chronic inflammatory demyelinating polyradiculoneuropathy, neoplastic lumbosacral plexopathy, paraneoplastic neuronopathy, diabetic lumbosacral plexopathy, degenerative disk disease (osteoporosis of the spine), osteoarthritis of the spine, lumbar spinal stenosis, post-infectious plexopathy, carcinomatous meningitis (CM), mononeuritis multiplex, and chemotherapy-induced plexopathy. The testing to resolve a RILP diagnosis involves blood serum analysis, X-rays, EMG, MRI and cerebrospinal fluid analysis. Prevention Since RILP's neurological changes are typically irreversible and a curative strategy has yet to be defined, prevention is the best approach. Treating the primary cancer remains an obvious requirement, but lower levels of lumbar plexus radiation dosing will minimize or eliminate RILP. One method to reduce the lumbosacral plexus' dosing is to include it with the other at-risk organs that are spared from radiation. Key to prevention is resolving the lack of clinical evidence linking radiation treatments to the onset of neurological problems. That relationship is hidden by RILP's low toxicity rate, the lack of a large monitored population and the lack of data pooling across multiple institutions. Management Treatment of RILP is primarily supportive, addressing mental, physiological and social aspects, with consideration of any aggravating (synergistic) neurological factors. To prevent compounding existing RILP symptoms and to minimize further progression: remove co-morbidity factors (control diabetes and hypertension; avoid excessive alcohol use); avoid any local trauma in the irradiated volume; control acute edema; control acute inflammation (pharmaceuticals that may be effective are corticosteroids such as dexamethasone); and avoid stretching a plexus immobilized by fibrosis, e.g., by carrying heavy loads or making extensive movements, which may cause sudden neurological decompensation. The effect on the person with the condition depends upon the type of impairment. Handicaps may include physical challenges and bowel and/or bladder dysfunction, and may occur in multiple settings of work and home. Physical and occupational therapy are important elements in maintaining mobility and use of the lower extremities, along with assistive aids such as ankle-foot orthotics (AFOs), canes, walkers, etc. Sensory re-education techniques may be necessary for balance, and lymphedema management may be required. Pharmaceuticals that may be effective for RILP's neuropathic pain are tricyclic antidepressants (TCAs; amitriptyline); antiepileptics or anticonvulsants (gabapentin, pregabalin, carbamazepine, valproic acid); serotonin–norepinephrine re-uptake inhibitors (duloxetine) to preserve normal norepinephrine and serotonin levels; analgesic drugs (pregabalin, methadone); opiates, which may be used singly or to potentiate the concomitant use of TCAs; and antiarrhythmics (mexiletine) for muscle stiffness. Non-pharmaceutical RILP considerations are acupuncture, massage and transcutaneous electrical nerve stimulation (TENS) for pain; benzodiazepines may be used for paraesthesia and quinine for cramps. Functional impairment and residual pain can lead to social isolation. 
Cancer support groups are valuable resources to learn about the syndrome and therapeutic options, and are a means to voice emotions related to having cancer and surviving it. Outcomes With increasing cancer treatment survival rates, the quality of life of survivors has become a public health priority. The effects of RILP can be debilitating. With no effective treatment to control radiation damage's progressive nature, limb dysfunction is the likely result. Radiation damage's outcome is related to its initial onset time. Acute symptoms, occurring in the first few days, have the most favorable outcomes, likely diminishing within a few weeks. Early-delayed symptoms, occurring within the first months, typically include myelopathy. These issues frequently resolve without treatment. Late-delayed symptoms, occurring several months or years after treatment, may also include myelopathy, but their severity is more likely to worsen, resulting in permanent paralysis. Significant neurologic morbidity is typical, with a very slow neurologic recovery. Epidemiology An exact occurrence rate has not been established. Literature on the topic is sparse. Clinical occurrences of RILP are rare, affecting between 0.3 and 1.3% of those treated with abdominal or pelvic radiation. The incidence rate is variable, dependent upon the irradiated zone, dosage level and method of delivery. For example, when alternative dosing levels were compared, higher rates were observed, from 12 to 23%, with the higher RILP rates occurring at higher dosages. History As of 1977, lumbosacral neuropathy arising from radiation therapy had been rarely reported. One of the earliest cases was in 1948. The incidence rate of peripheral neuropathy has been demonstrated to decrease when lower therapeutic radiation dosing levels are used. A similar nerve injury, radiation-induced brachial plexopathy (RIBP), may occur secondary to breast radiation therapy. Studies on RIBP have observed the brachial plexus' radiosensitivity: injury was observed after dosages of 40 Gy in 20 fractions, and RIBP significantly increased with doses greater than 2 Gy per fraction. RIBP is more common than lumbosacral radiculoplexopathy, and its clinical history demonstrates the effect of reduced dosing levels: RIBP occurrence rates were in the 60% range in the 1960s, when 60 Gy treatments were applied in 5 Gy fractions, while RIBP occurrences in the 2010s approach 1%, with 50 Gy treatments applied in 3 Gy fractions. RILP occurrence rates are estimated at 0.3% to 1.3%, though the actual rate is likely higher. The soft tissue damage leading to RILP is more commonly seen with exposure levels over 50 Gy, though it has occurred with as little as 30 Gy. A major step toward reducing RILP occurrences is limiting the lumbosacral plexus' dosing level when treating pelvic malignancies, keeping the mean dose to < 45 Gy. One approach to reduced levels, mapping the plexus along with other organs at risk, was clinically evaluated during the 2010s. Clinical evidence of cause and effect for the prevention and management of radiation-induced polyneuropathy is limited. In 2011, the Radiation Oncology Institute (ROI) announced the National Radiation Oncology Registry (NROR). ROI and Massachusetts General Hospital would initially focus the NROR on prostate cancer, collecting efficacy and side-effect information (like radiation-induced neuropathy, RILP) from people treated with radiotherapy. 
In 2013, the American Society for Radiation Oncology (ASTRO) joined the effort, and the number of data collection sites increased to 30 for a 1-year pilot project. Pitfalls of medical data collection arose, with only 14 sites able to provide data and all of those requiring significant manual entry efforts. The first NROR project conclusion was that future registries would need to cope with big data analytics. In 2015, ASTRO, the National Cancer Institute and the American Association of Physicists in Medicine sponsored a Big Data Workshop at the National Institutes of Health. Research Experimental approaches for RILP treatment and management include: hyperbaric oxygen (HBO), which has had mixed results restoring nerve function, some studies showing benefit, others not; anticoagulant therapy (warfarin, heparin), which has been tried for ischemia and capillary restoration, some trials without clear benefit, others with improved motor function; PENTOCLO therapy, a combination of pentoxifylline (PTX), vitamin E and clodronate, a bisphosphonate, with the PTX for inflammation, vitamin E as a scavenger for the oxygen free radicals that can lead to fibrosis, and clodronate, which may inhibit myelin nerve destruction; myofascial release, which may reduce the compressive effects of fibroses, freeing trapped nerves; and mobilization of injured limbs via exoskeletal systems or hybrid assistive devices, which can provide the mobility lost to nerve damage, offering a workaround until new medical therapies, e.g. tissue engineering, can repair peripheral nerve injury. See also Radiation poisoning Radiation therapy ICD-10-CM World Health Organization's Code G62.82: Radiation-induced polyneuropathy ICD-11-MMS (2018 version) World Health Organization's Code 8B92.0: Post radiation lumbosacral plexopathy References Peripheral nervous system disorders Radiation health effects Radiation therapy
Radiation-induced lumbar plexopathy
[ "Chemistry", "Materials_science" ]
2,638
[ "Radiation effects", "Radiation health effects", "Radioactivity" ]
57,056,707
https://en.wikipedia.org/wiki/Odilorhabdin
Odilorhabdins are a class of natural antibacterial agents produced by the bacterium Xenorhabdus nematophila. Odilorhabdins act against both Gram-positive and Gram-negative pathogens, and were shown to eliminate infections in mouse models. Mechanism of action Odilorhabdins are ribosome-targeting agents that interfere with the pathogen's protein synthesis. They bind to the small ribosomal subunit at a site not exploited by previous antibiotics and induce miscoding and premature stop codon bypass. Odilorhabdins were shown to act particularly against carbapenem-resistant members of the bacterial family Enterobacteriaceae, and thus have potential against pathogens with antimicrobial resistance. Discovery The discovery of odilorhabdins was announced in 2013 and formally described in 2018 by researchers at the University of Illinois at Chicago and Nosopharm. To identify the antibiotic, the Nosopharm researchers tested 80 cultured bacterial strains for antimicrobial properties and then isolated the active compounds. References Protein synthesis inhibitor antibiotics Antimicrobial peptides
Odilorhabdin
[ "Chemistry" ]
227
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
46,781,249
https://en.wikipedia.org/wiki/Differential%20graded%20module
In algebra, a differential graded module, or dg-module, is a $\mathbb{Z}$-graded module together with a differential; i.e., a square-zero graded endomorphism of the module of degree 1 or −1, depending on the convention. In other words, it is a chain complex having a structure of a module, while a differential graded algebra is a chain complex with a structure of an algebra. In view of the module variant of the Dold–Kan correspondence, the notion of an $\mathbb{N}$-graded dg-module is equivalent to that of a simplicial module; "equivalent" in the categorical sense; see below. The Dold–Kan correspondence Given a commutative ring R, by definition, the category of simplicial modules, denoted by sModR, is the category of simplicial objects in the category of R-modules. Then sModR can be identified with the category of differential graded modules which vanish in negative degrees via the Dold–Kan correspondence.
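Concretely, in the homological (degree −1) convention, the data of a dg-module is the following (a sketch of the standard definition, not notation from this article's source):

```latex
M = \bigoplus_{n \in \mathbb{Z}} M_n ,
\qquad d_n \colon M_n \longrightarrow M_{n-1},
\qquad d_{n-1} \circ d_n = 0 .
% The cohomological convention instead takes d of degree +1,
% with d^{\,n+1} \circ d^{\,n} = 0.
```

See also Differential graded Lie algebra Notes References Henri Cartan, Samuel Eilenberg, Homological algebra Available online. Abstract algebra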
Differential graded module
[ "Mathematics" ]
232
[ "Abstract algebra", "Algebra stubs", "Algebra" ]
46,782,092
https://en.wikipedia.org/wiki/Resistance%20paper
Resistance paper, also known as conductive paper and by the trade name Teledeltos paper, is paper impregnated or coated with a conductive substance such that the paper exhibits a uniform and known surface resistivity. Resistance paper and conductive ink were commonly used as an analog two-dimensional electromagnetic field solver. Teledeltos paper is a particular type of resistance paper.
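Its use as a field solver rests on the fact that the potential on a uniform resistive sheet obeys Laplace's equation, so voltages probed on the paper reproduce the electrostatic potential of the modelled geometry. A minimal digital analogue of such an experiment is sketched below; the electrode positions, grid size and iteration count are arbitrary illustrative choices:

```python
import numpy as np

# Potential V on a uniform resistive sheet obeys Laplace's equation,
# which is why painted electrodes plus a voltmeter probe map fields.
# Model the sheet as a grid and relax with Jacobi iteration.
n = 61
v = np.zeros((n, n))

for _ in range(5000):
    # average of the four neighbours (discrete Laplace equation)
    v[1:-1, 1:-1] = 0.25 * (v[:-2, 1:-1] + v[2:, 1:-1] +
                            v[1:-1, :-2] + v[1:-1, 2:])
    # insulating paper edges: zero normal current (Neumann condition)
    v[0, :], v[-1, :] = v[1, :], v[-2, :]
    v[:, 0], v[:, -1] = v[:, 1], v[:, -2]
    # two painted electrodes (positions are illustrative assumptions)
    v[20:41, 0] = 1.0    # 1 V electrode on the left edge
    v[20:41, -1] = 0.0   # grounded electrode on the right edge

# v now approximates the potential a probe would read on the paper;
# tracing level sets of v corresponds to tracing equipotential lines.
```

References Analog computers Electrical resistance and conductance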
Resistance paper
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
88
[ "Materials science stubs", "Physical quantities", "Quantity", "Materials science", "Wikipedia categories named after physical quantities", "Electromagnetism stubs", "Electrical resistance and conductance" ]
46,783,337
https://en.wikipedia.org/wiki/Spectro-Polarimetric%20High-Contrast%20Exoplanet%20Research
Spectro-Polarimetric High-contrast Exoplanet REsearch (VLT-SPHERE) is an adaptive optics system and coronagraphic facility at the Very Large Telescope (VLT). It provides direct imaging as well as spectroscopic and polarimetric characterization of exoplanet systems. The instrument operates in the visible and near infrared, achieving exquisite image quality and contrast over a small field of view around bright targets. Results from SPHERE complement those from other planet finder projects, which include HARPS, CoRoT, and the Kepler Mission. The instrument was installed on Unit Telescope "Melipal" (UT3) and achieved first light in May 2014. At the time of installation, it was the latest of a series of second-generation VLT instruments such as X-shooter, KMOS and MUSE. Science goals Direct imaging of exoplanets is extremely challenging. The brightness contrast between the planet and its host star typically ranges from 10⁻⁶ for hot young giant planets emitting significant amounts of near-infrared light, to 10⁻⁹ for rocky planets seen exclusively through reflected light. The angular separation between the planet and its host star is also very small: for a planet ~10 AU from its host and tens of parsecs away, the separation would be only a few tenths of an arcsec. SPHERE is representative of a second generation of instruments devoted to direct high-contrast imaging of exoplanets. These instruments combine extreme adaptive optics with high-efficiency coronagraphs to correct for atmospheric turbulence at high cadence and attenuate the glare from the host star. In addition, SPHERE employs differential imaging to exploit differences between planetary and stellar light in terms of color or polarization. Other high-contrast imaging systems that are operational include Project 1640 at the Palomar Observatory and the Gemini Planet Imager at the Gemini South Telescope. The Large Binocular Telescope, equipped with a less advanced adaptive optics system, has successfully imaged a variety of extrasolar planets. SPHERE is targeted towards direct detection of Jupiter-sized and larger planets separated from their host stars by 5 AU or more. Detecting and characterizing a large number of such planets should offer insight into planetary migration, the hypothetical process whereby hot Jupiters, which theory indicates cannot have formed as close to their host stars as they are found, migrate inwards from where they were formed in the protoplanetary disk. It is also hypothesized that massive distant planets should be numerous; the results from SPHERE should clarify the extent to which the currently observed preponderance of closely orbiting hot Jupiters represents observational bias. SPHERE observations will focus on the following types of targets: nearby young stellar associations, which may also offer opportunities to detect low-mass planets; stars with known planets, in particular those with long-term residuals appearing in regression analysis of their radial velocity curves, which could indicate the presence of more distant companions; the nearest stars, which would allow detecting targets with the smallest orbits, including those which shine only by reflected light; and stars with ages in the 100 Myr to 1 Gyr range. In these young systems, even the smaller planets will still be hot and radiating copiously in the infrared, enabling lower detectable masses. 
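The quoted "few tenths of an arcsec" follows from the small-angle relation that defines the parsec: a separation of a AU viewed from d parsecs subtends a/d arcseconds. A quick check, with illustrative distances:

```python
# Angular separation (arcsec) = physical separation (AU) / distance (pc);
# this small-angle relation is exactly how the parsec is defined.
a_au = 10                        # planet ~10 AU from its host star
for d_pc in (20, 30, 50):        # "tens of parsecs" away
    print(f"{a_au} AU at {d_pc} pc -> {a_au / d_pc:.2f} arcsec")
# 0.50, 0.33 and 0.20 arcsec: "a few tenths of an arcsec"
```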
SPHERE's high contrast capabilities should also enable it to be used in the study of protoplanetary discs, brown dwarfs and evolved massive stars, and to a lesser extent, in investigations of the Solar System and extragalactic targets. Results from SPHERE complement those of detection projects that use other detection methods such as radial velocity measurements and photometric transits. These projects include HARPS, CoRoT, and the Kepler Mission. Instrument description SPHERE is installed on ESO's VLT Unit Telescope 3 at the Nasmyth focus. It comprises the following subsystems: The Common Path and Infrastructure (CPI) is the main optical bench. It receives direct light from the telescope, and passes on stabilized, adaptive optics-corrected, and coronagraph-filtered beams to the three sub-instruments. One of its core components is the SAXO adaptive optics system, which corrects for atmospheric turbulence 1380 times per second. The Integral Field Spectrograph (IFS) covers a 1.73" x 1.73" field of view, translating the spectral data into a three-dimensional (x, y, λ) data cube. The Infrared Dual-band Imager and Spectrograph (IRDIS) has a field of view of 11" x 12.5" with a pixel scale of 12.25 mas (milliarcseconds). IRDIS can provide classical imaging. Alternatively, it can be configured to provide simultaneous dual-band imaging using two different narrow bandpass filters targeting different spectral features, or it can be configured to provide simultaneous imaging from two crossed polarizers. When operating in long slit spectroscopy mode (LSS), a coronagraphic slit replaces the coronagraph mask. The Zurich Imaging Polarimeter (ZIMPOL) is a high contrast imaging polarimeter operating at visual and infrared wavelengths, capable of achieving <30 mas resolution. It is also capable of diffraction-limited classical imaging. Science results Early results have validated the power of the SPHERE instrument, as well as producing results that challenge existing theory. SPHERE announced its first planet, HD 131399Ab, in 2016, but another study showed that this was in fact a background star. Finally, in July 2017, the SPHERE consortium announced the detection of a planet, HIP 65426 b, around HIP 65426. The planet appears to have a very dusty atmosphere filled with thick cloud, and it orbits a hot, young star that rotates surprisingly fast. SPHERE was used to search for a brown dwarf expected to be orbiting the eclipsing binary V471 Tauri. Careful measurements of eclipse timings had shown that they were not regular, but these irregularities could be explained by assuming that there was a brown dwarf perturbing the stars' orbits. Surprisingly, although the hypothetical brown dwarf should have been easily resolvable by SPHERE, no such companion was imaged. It would appear that the conventional explanation for the odd behavior of V471 Tauri is wrong. Several alternative explanations for the orbital timing variations have been proposed, including, for example, the possibility that the effects might be due to magnetic field variations in the primary member of the binary pair resulting in regular changes in the shape of the star via the Applegate mechanism. Another early SPHERE result is the first image of the spiral protoplanetary disk in HD 100453. The global spiral pattern is a rare phenomenon in circumstellar disks that is likely caused by the gravitational attraction of a massive body orbiting the star, such as another star or a giant planet. 
This disk is the first to have the perturbing companion imaged, providing a test for spiral arm generation theories. The images also reveal a gap extending from the edge of the coronagraphic mask to about the distance of Uranus' orbit in our own solar system. SPHERE was used to capture the first confirmed image of a newborn planet, reported in a June 2018 publication. The young planet, PDS 70b, was seen forming in the protoplanetary disk around the star PDS 70. In July 2020, SPHERE directly imaged two gas giants in orbit around the star TYC 8998-760-1. Performance improvements Several projects have been proposed to improve the performance of the SPHERE instrument: HiRISE (High-Resolution Imaging and Spectroscopy of Exoplanets) has been implemented as a visitor instrument since July 2023. It combines SPHERE with the upgraded CRIRES high-resolution spectrograph, using optical fibers, to improve the characterization of exoplanets detected by SPHERE. The SPHERE+ project aims at upgrading the SAXO adaptive optics system of SPHERE and adding a medium-resolution IFS. The main science goals are the detection of young giant planets at closer separations from bright stars and around fainter stars, and their more detailed spectral characterization. This project is currently under active development, with an ongoing design study. A more exploratory concept proposed in 2017 was the combination of SPHERE with the ESPRESSO spectrograph in the visible to attempt the detection of the Proxima Cen b planet in reflected light. This concept has been abandoned in favor of a dedicated instrument called RISTRETTO, to be installed as a visitor instrument on the VLT. References External links SPHERE - Spectro-Polarimetric High-contrast Exoplanet REsearch Telescope instruments Astronomical imaging Exoplanet search projects Infrared spectroscopy Optical devices
Spectro-Polarimetric High-Contrast Exoplanet Research
[ "Physics", "Chemistry", "Materials_science", "Astronomy", "Engineering" ]
1,744
[ "Glass engineering and science", "Exoplanet search projects", "Spectrum (physical sciences)", "Optical devices", "Telescope instruments", "Infrared spectroscopy", "Astronomical instruments", "Astronomy projects", "Spectroscopy" ]
49,731,496
https://en.wikipedia.org/wiki/Chemical%20cycling
Chemical cycling describes systems of repeated circulation of chemicals between other compounds, states and materials, and back to their original state, that occur in space and on many objects in space, including the Earth. Active chemical cycling is known to occur in stars, many planets and natural satellites. Chemical cycling plays a large role in sustaining planetary atmospheres, liquids and biological processes, and can greatly influence weather and climate. Some chemical cycles release renewable energy; others may give rise to complex chemical reactions, organic compounds and prebiotic chemistry. On terrestrial bodies such as the Earth, chemical cycles involving the lithosphere are known as geochemical cycles. Ongoing geochemical cycles are one of the main attributes of geologically active worlds. A chemical cycle involving a biosphere is known as a biogeochemical cycle. The Sun, other stars and star systems In most hydrogen-fusing stars, including the Sun, a chemical cycle involved in stellar nucleosynthesis occurs which is known as the carbon-nitrogen-oxygen (CNO) cycle. In addition to this cycle, stars also have a helium cycle. Various cycles involving gas and dust have been found to occur in galaxies. Venus The majority of known chemical cycles on Venus involve its dense atmosphere and compounds of carbon and sulphur, the most significant being a strong carbon dioxide cycle. The lack of a complete carbon cycle, including a geochemical carbon cycle, is thought to be a cause of its runaway greenhouse effect, due to the lack of a substantial carbon sink. Sulphur cycles, including sulphur oxide cycles, also occur: sulphur oxide in the upper atmosphere results in the presence of sulfuric acid, which in turn returns to oxides through photolysis. Indications also suggest an ozone cycle on Venus similar to Earth's. Earth A number of different types of chemical and geochemical cycles occur on Earth. Biogeochemical cycles play an important role in sustaining the biosphere. Notable active chemical cycles on Earth include: Carbon cycle – consisting of an atmospheric carbon cycle (and carbon dioxide cycle), terrestrial biological carbon cycle, oceanic carbon cycle and geological carbon cycle Nitrogen cycle – which converts nitrogen between its forms through fixation, ammonification, nitrification, and denitrification Oxygen cycle – a biogeochemical cycle circulating oxygen between the atmosphere, biosphere (the global sum of all ecosystems), and the lithosphere Ozone–oxygen cycle – continually regenerates ozone in the atmosphere and converts ultraviolet radiation (UV) into heat Water cycle – moves water continuously on, above and below the surface, shifting between states of liquid, solution, ice and vapour Methane cycle – moves methane between geological and biogeochemical sources and reactions in the atmosphere Hydrogen cycle – a biogeochemical cycle brought about by a combination of biological and abiological processes Phosphorus cycle – the movement of phosphorus through the lithosphere, hydrosphere, and biosphere Sulfur cycle – a biogeochemical process resulting from the mineralization of organic sulfur, oxidation, reduction and incorporation into organic compounds Carbonate–silicate cycle – transforms silicate rocks to carbonate rocks by weathering and sedimentation, and transforms carbonate rocks back into silicates by metamorphism and magmatism. 
Rock cycle – switches rock between its three forms: sedimentary, metamorphic, and igneous Mercury cycle – a biogeochemical process in which naturally occurring mercury is bioaccumulated before recombining with sulfur and returning to geological sources as sediments Other chemical cycles include a hydrogen peroxide cycle. Mars Recent evidence suggests that chemical cycles similar to Earth's occur on a lesser scale on Mars, facilitated by the thin atmosphere, including carbon dioxide (and possibly carbon), water, sulphur, methane, oxygen, ozone, and nitrogen cycles. Many studies point to significantly more active chemical cycles on Mars in the past; however, the faint young Sun paradox has proved problematic in determining the chemical cycles involved in early climate models of the planet. Jupiter Jupiter, like all the gas giants, has an atmospheric methane cycle. Recent studies indicate a hydrological cycle of water-ammonia vastly different from the type operating on terrestrial planets like Earth, and also a cycle of hydrogen sulfide. Significant chemical cycles exist on Jupiter's moons. Recent evidence points to Europa possessing several active cycles, most notably a water cycle. Other studies suggest an oxygen cycle and a radiation-induced carbon dioxide cycle. Io and Europa appear to have radiolytic sulphur cycles involving their lithospheres. In addition, Europa is thought to have a sulfur dioxide cycle. The Io plasma torus also contributes to a sulphur cycle on Jupiter and Ganymede. Studies also imply active oxygen cycles on Ganymede, and oxygen and radiolytic carbon dioxide cycles on Callisto. Saturn In addition to Saturn's methane cycle, some studies suggest an ammonia cycle induced by photolysis, similar to Jupiter's. The cycles of its moons are of particular interest. Observations by Cassini–Huygens of Titan's atmosphere and its interactions with the liquid mantle give rise to several active chemical cycles, including methane, hydrocarbon, hydrogen, and carbon cycles. Enceladus has an active hydrological cycle, a silicate cycle and possibly a nitrogen cycle. Uranus Uranus has an active methane cycle. Methane is converted to hydrocarbons through photolysis; these condense and, as they are heated, release methane, which rises to the upper atmosphere. Studies by Grundy et al. (2006) indicate that active carbon dioxide cycles operate on Titania, Umbriel, Ariel and Oberon through the ongoing sublimation and deposition of carbon dioxide, though some is lost to space over long periods of time. Neptune Neptune's internal heat and convection drive cycles of methane, carbon, and a combination of other volatiles within Triton's lithosphere. Models predicted the presence of seasonal nitrogen cycles on the moon Triton; however, this has not been supported by observations to date. Pluto-Charon system Models predict a seasonal nitrogen cycle on Pluto, and observations by New Horizons appear to support this. References Biogeochemical cycle Geochemistry Planetary science
Chemical cycling
[ "Chemistry", "Astronomy" ]
1,255
[ "Biogeochemical cycle", "Biogeochemistry", "nan", "Planetary science", "Astronomical sub-disciplines" ]
49,732,031
https://en.wikipedia.org/wiki/Transition%20metal%20NHC%20complex
In coordination chemistry, a transition metal NHC complex is a metal complex containing one or more N-heterocyclic carbene (NHC) ligands. Such compounds are the subject of much research, in part because of prospective applications in homogeneous catalysis. One such success is the second-generation Grubbs catalyst. Historically, N-heterocyclic carbenes were thought to mimic the properties of tertiary phosphines. Many steric and electronic differences exist between the two ligand classes. Compared to phosphine ligands, the cone angle of NHC ligands is more complex: the imidazole ring of the NHC ligand is angled away from the metal center, yet the substituents at the 1,3 positions of the imidazole ring are angled towards it. The presence of the ligand inside the metal coordination sphere affects the metal's reactivity. In terms of electronic effects, NHCs are often stronger sigma donors. Synthesis From free NHCs The popularization of NHC ligands can be traced to Arduengo, who reported the deprotonation of the dimesitylimidazolium cation to give IMes. IMes is a free NHC that can be used as a ligand. Other NHCs have been isolated as free ligands. Aside from IMes, another important NHC ligand is IPr, which features diisopropylphenyl groups in place of the mesityl groups. NHCs with saturated backbones include SIMes and SIPr. Transmetallation of silver-NHC reagents Usually, transition metal NHC complexes are prepared less directly. A popular method entails transmetallation from silver-NHC complexes. Such reagents are generated by the reaction of silver(I) oxide with the imidazolium salt. Other methods A third method involves decarboxylation of NHC-carboxylates. In this approach, N-methylimidazoles react with methyl formate to give the zwitterionic N,N'-dimethylimidazolium-2-carboxylate. This zwitterion decarboxylates in the presence of metal ions to give N,N'-dimethylimidazolidene-based NHC complexes. See also Palladium–NHC complex References Organometallic chemistry Transition metals Coordination complexes Coordination chemistry
Transition metal NHC complex
[ "Chemistry" ]
492
[ "Organometallic chemistry", "Coordination chemistry", "Coordination complexes" ]
49,733,699
https://en.wikipedia.org/wiki/Ion%20transporter%20superfamily
The ion transporter (IT) superfamily is a superfamily of secondary carriers that transport charged substrates. Families As of early 2016, the currently recognized and functionally defined families that make up the IT superfamily include: 2.A.8 - The Gluconate:H+ Symporter (GntP) Family 2.A.11 - The Citrate-Mg2+:H+ (CitM) Citrate-Ca2+:H+ (CitH) Symporter (CitMHS) Family 2.A.13 - The C4-Dicarboxylate Uptake (Dcu) Family 2.A.14 - The Lactate Permease (LctP) Family 2.A.34 - The NhaB Na+:H+ Antiporter (NhaB) Family 2.A.35 - The NhaC Na+:H+ Antiporter (NhaC) Family 2.A.45 - The Arsenite-Antimonite (ArsB) Efflux Family 2.A.47 - The Divalent Anion:Na+ Symporter (DASS) Family 2.A.61 - The C4-dicarboxylate Uptake C (DcuC) Family 2.A.62 - The NhaD Na+:H+ Antiporter (NhaD) Family 2.A.68 - The p-Aminobenzoyl-glutamate Transporter (AbgT) Family 2.A.94 - The Phosphate Permease (Pho1) Family 2.A.101 - The Malonate Uptake (MatC) Family 2.A.111 - The Na+/H+ Antiporter-E (NhaE) Family 2.A.118 - The Basic Amino Acid Antiporter (ArcD) Family See also Ion transporters Sodium-Proton antiporter Arsenite-Antimonite efflux Amino acid transporter Solute carrier family Transporter Classification Database Membrane protein References Solute carrier family Protein superfamilies
Ion transporter superfamily
[ "Biology" ]
428
[ "Protein superfamilies", "Protein classification" ]
49,734,242
https://en.wikipedia.org/wiki/Metabolite%20damage
Metabolite damage can occur through enzyme promiscuity or spontaneous chemical reactions. Many metabolites are chemically reactive and unstable and can react with other cell components or undergo unwanted modifications. Enzymatically or chemically damaged metabolites are always useless and often toxic. To prevent the toxicity that can occur from the accumulation of damaged metabolites, organisms have damage-control systems that: reconvert damaged metabolites to their original, undamaged form (damage repair); convert a potentially harmful metabolite to a benign one (damage pre-emption); or prevent damage from happening by limiting the build-up of reactive, but non-damaged, metabolites that can lead to harmful products (directed overflow). Damage-control systems can involve one or more specific enzymes. Types of damage Similarly to DNA and proteins, metabolites are prone to damage, which can occur chemically or through enzyme promiscuity. Much less is known about metabolite damage than about DNA and protein damage, in part due to the huge variety and number of damage-prone metabolites. Chemical damage Many metabolites are chemically reactive and unstable, and thus prone to chemical damage. In general, any reaction that occurs in vitro under physiological conditions can also occur in vivo. Some metabolites are so reactive that their half-life in a cell is measured in minutes. For example, the glycolytic intermediate 1,3-bisphosphoglyceric acid has a half-life of 27 minutes in vivo. Typical types of chemical damage reactions that can occur to metabolites are racemization, rearrangement, elimination, photodissociation, addition, and condensation. Enzymatic damage Although enzymes are generally specific towards their substrate, enzymatic side activities (enzyme promiscuity) can lead to toxic or useless products. These side reactions proceed at much lower rates than the corresponding normal physiological reactions, but the build-up of damaged metabolites can still be significant over time. For example, the mitochondrial malate dehydrogenase reduces alpha-ketoglutarate to L-2-hydroxyglutarate 10⁷ times less efficiently than its regular substrate oxaloacetate, but L-2-hydroxyglutarate can still accumulate to several grams per day in a human adult. Damage control Metabolite damage-control systems fall into three different categories: Damage repair Damage repair is the conversion of a damaged metabolite back to its original state via one or more enzymatic reactions; the concept is similar to DNA repair and protein repair. For example, the promiscuous activity of malate dehydrogenase causes reduction of alpha-ketoglutarate to L-2-hydroxyglutarate. This compound is a dead-end metabolite and is not a substrate for any other enzyme in central metabolism, and its accumulation in humans causes L-2-hydroxyglutaric aciduria. The repair enzyme L-2-hydroxyglutarate dehydrogenase oxidizes L-2-hydroxyglutarate back to alpha-ketoglutarate, thus repairing this metabolite. In humans, L-2-hydroxyglutarate dehydrogenase uses FAD as the cofactor, while the E. coli enzyme reduces molecular oxygen. Damage pre-emption Pre-emption prevents damage from happening. This is done either by converting reactive metabolites to less harmful ones, or by speeding up an insufficiently fast chemical reaction. The reactive metabolite can be either a side product or a normal, but highly reactive, intermediate. For example, a side activity of Rubisco yields small amounts of xylulose-1,5-bisphosphate, which can inhibit Rubisco activity. 
The CbbY enzyme dephosphorylates xylulose-1,5-bisphosphate to the natural metabolite xylulose-5-phosphate, thereby preventing inhibition of Rubisco. Directed overflow Directed overflow is a special case of damage pre-emption, where an excess of a normal, but reactive, metabolite could lead to toxic products. Preventing this excess is thus pre-emption of potential damage. The first two intermediates in riboflavin biosynthesis are highly reactive and can spontaneously break down to 5-phosphoribosylamine and Maillard reaction products, which are highly reactive and harmful. The enzyme COG3236 hydrolyzes these first two intermediates into two less harmful products, thus preventing the harm they would otherwise cause. Disease In humans, L-2-hydroxyglutaric aciduria was the first disease linked to a missing metabolite repair enzyme. Mutations in the L2HGDH gene cause accumulation of L-2-hydroxyglutarate, which is a structural analog of glutamate and alpha-ketoglutarate and presumably inhibits other enzymes or transporters. Systems biology Metabolic network modelling aims at reproducing cellular metabolism in silico. Metabolite damage and repair create cellular energy costs, and consequently need to be incorporated into genome-scale metabolic models so that these models can more effectively guide metabolic engineering design. In addition, genes encoding so-far unrecognized metabolite damage-control systems may constitute a significant fraction of the many conserved genes of unknown function found in the genomes of all organisms. Synthetic biology / metabolic engineering When an alien pathway is installed in a host ('chassis') organism, and even when a native pathway is massively upregulated, reactive intermediates may accumulate to levels that negatively impact viability, growth, and flux through the pathway, because a matching damage-control system is absent or has been overwhelmed. Engineering damage-control systems may thus be needed to support synthetic biology and metabolic engineering projects. See also Metabolomics Systems biology Metabolic flux analysis Metabolic engineering Synthetic biology References External links MINE database of enzymatic damage Blog article about metabolite damage and repair mechanisms Metabolism
Metabolite damage
[ "Chemistry", "Biology" ]
1,250
[ "Biochemistry", "Metabolism", "Cellular processes" ]
49,736,384
https://en.wikipedia.org/wiki/OneSubsea
OneSubsea is an SLB company, headquartered in Oslo, Norway, and Houston, Texas, United States. The company is a subsea supplier for the subsea oil and gas market. As of August 2024, the company is the world's largest in terms of installed subsea Christmas trees. History In November 2012, Cameron International and Schlumberger announced that they were forming a joint venture called OneSubsea. Cameron would manage OneSubsea with a 60% interest, with Schlumberger holding 40%. In January 2015, Helix, OneSubsea and Schlumberger formed the Subsea Services Alliance to develop technologies and deliver equipment and services to optimize the value chain of subsea well intervention systems. In July 2015, Subsea 7 and OneSubsea entered into an agreement to form a non-incorporated alliance. The alliance was formed to focus on subsea production systems (SPS) and subsea processing systems; subsea umbilicals, risers and flowlines (SURF) systems; and life-of-field services. In August 2015, OneSubsea was awarded a contract to supply subsea processing systems for Shell's Stones development in the Gulf of Mexico. Schlumberger announced in August 2015 that it was acquiring Cameron and OneSubsea for $14.8 billion. Schlumberger announced in August 2022 that OneSubsea was going into a joint venture with Aker Solutions and Subsea7. The ownership of OneSubsea is 70% with Schlumberger, 20% with Aker Solutions, and Subsea7 owning the remaining 10%. In July 2024, it was announced that OneSubsea had been awarded the contract for the front-end design of an all-electric subsea tree project from Equinor called Fram Sør. See also List of oilfield service companies List of companies of Norway Oil industry Wellhead References Energy engineering and contractor companies Oilfield services companies Manufacturing companies based in Houston Multinational companies headquartered in the United States Offshore engineering
OneSubsea
[ "Engineering" ]
414
[ "Construction", "Engineering companies", "Energy engineering and contractor companies", "Offshore engineering" ]
49,742,837
https://en.wikipedia.org/wiki/Sodium%20MRI
Sodium MRI (also known as 23Na-MRI) is a specialised magnetic resonance imaging technique that uses strong magnetic fields, magnetic field gradients, and radio waves to generate images of the distribution of sodium in the body, as opposed to more common forms of MRI that utilise protons (hydrogen atoms) present in water (1H-MRI). Like the proton, sodium is naturally abundant in the body, and thus can be imaged directly without the need for contrast agents or hyperpolarization. Furthermore, sodium ions play a role in important biological processes via their contribution to concentration and electrochemical gradients across cellular membranes, making sodium of interest as an imaging target in health and disease. In contrast to conventional proton MRI, sodium MRI is complicated by the low concentrations of sodium nuclei in biological tissues (10–45 mM) relative to the concentration of H2O molecules, and by the lower gyromagnetic ratio of the 23Na nucleus as compared to a 1H nucleus. This causes low NMR sensitivity, meaning that a stronger magnetic field is required to obtain equivalent spatial resolution. The quadrupolar 23Na nucleus also has a faster transverse relaxation rate and multiple quantum coherences as compared to the 1H nucleus, requiring specialized, high-performance MRI sequences to capture information before the contrast used to image the body is lost. Biological significance Tissue sodium concentration (TSC) is tightly regulated by healthy cells and is altered by energy status and cellular integrity, making it an effective marker for disease states. Cells maintain a low intracellular Na+ concentration by actively pumping Na+ ions out via the Na+/K+ ATPase pump. Any challenge to the cell's metabolism which lowers ATP supply or compromises the cell's membrane integrity will drastically increase intracellular Na+ concentrations. After exhaustive exercise, for example, 23Na MRI can detect Na+ levels in tissues rising sharply, and can even visualize a sodium-rich meal in a patient's stomach. Malignant tumors in particular alter their metabolism drastically, often to account for hypoxic intratumor conditions, leading to a decrease in cytosolic pH. To compensate, Na+ ions from the extracellular space are exchanged for protons via the Na+/H+ antiporter, the loss of which often attenuates cancer growth. Therefore, 23Na MRI is a useful clinical tool for detecting a number of disease states, including heart disease and cancer, as well as for monitoring therapy. For example, 23Na MRI has been shown to measure cellularity in ovarian cancer. Tissue damage in stroke patients can also be evaluated using 23Na MRI, with one study showing that a TSC 50% higher than that of healthy brain tissue is consistent with complete infarction, and can therefore be used to determine tissue viability and treatment options for the patient. Tumor malignancy can also be evaluated based on the increases in TSC of rapidly proliferating cells. Malignant tumors have approximately 50–60% increased TSC relative to that of healthy tissues – however, it cannot be determined whether these increases are due to changes in extracellular volume, intracellular sodium content or neovascularization. Another interesting use of 23Na MRI is in evaluating multiple sclerosis, wherein accumulation of sodium in axons can lead to axon degeneration. Preliminary studies have shown that there is a positive correlation between elevated TSC and disability.
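The sensitivity penalty described above can be quantified with a standard back-of-the-envelope estimate. As a minimal sketch (not part of the original article; the gyromagnetic ratios and spin quantum numbers are commonly tabulated values, assumed here), the intrinsic receptivity of a nucleus at a fixed field is often approximated as proportional to natural abundance times γ³I(I+1):

```python
# Rough comparison of intrinsic NMR sensitivity of 23Na vs 1H.
# Assumed, commonly tabulated values (not taken from the article):
#   gamma/2pi in MHz/T, spin quantum number I, natural abundance.
nuclei = {
    "1H":   {"gamma_MHz_per_T": 42.577, "I": 0.5, "abundance": 0.99985},
    "23Na": {"gamma_MHz_per_T": 11.262, "I": 1.5, "abundance": 1.0},
}

def receptivity(nuc):
    """Relative receptivity ~ abundance * gamma^3 * I*(I+1) at fixed B0."""
    g, I, ab = nuc["gamma_MHz_per_T"], nuc["I"], nuc["abundance"]
    return ab * g**3 * I * (I + 1)

rel = receptivity(nuclei["23Na"]) / receptivity(nuclei["1H"])
print(f"23Na sensitivity relative to 1H (equal spin counts): {rel:.3f}")
# ~0.093, i.e. roughly 9% -- before accounting for the much lower
# tissue sodium concentration relative to water protons.
```

This reproduces the well-known result that 23Na yields roughly 9% of the proton signal per nucleus, before the far lower tissue sodium concentration is taken into account.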
Uses in prostate cancer Recently, work has been undertaken to assess the utility of using sodium MRI to characterize prostate cancer lesions in men. In this study, patients were imaged with sodium MRI prior to surgical removal of the prostate. TSC was extracted from the images and compared to the Gleason score of imaged lesions. This work showed statistically significant increases in TSC as prostate cancer increased in aggressiveness. This preliminary study suggests that sodium MRI can accurately characterize the stage of prostate cancer, and points to its potential use for better management of patients with prostate cancer and their staging into treatment schemes. Advantages 23Na MRI measures cellular metabolic rate as well as disease-related change in tissues and organs. Scan times have improved from 45 minutes to only 15 minutes at 1.5 T. In cartilage, negatively charged proteoglycans bind positively charged sodium ions; as proteoglycan degrades, the associated sodium level also decreases, so a decrease in signal observed by sodium MRI can be used for monitoring of proteoglycan degeneration in cartilage. See also Functional imaging 23Na Hyperpolarized carbon-13 MRI References Magnetic resonance imaging
Sodium MRI
[ "Chemistry" ]
952
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
43,610,211
https://en.wikipedia.org/wiki/Parflange%20F37
The Parflange F37 system is a technology from the hydraulics area of Parker-Hannifin which allows a non-welded flange connection of hydraulic tubes and pipes. Processing Depending on the application, the connection is made by either flaring or retaining ring technology. Flaring Technology After putting an F37 flange onto a seamless hydraulic tube or pipe, the tube end is flared to 37°, which also explains the name of this technology. The flaring is done by a special orbital flaring process, which compresses the surface of the pipe end, producing an excellent sealing surface. An insert made of carbon steel or stainless steel is then placed into the flared pipe end. The insert is soft-sealed by an O-ring on the pipe side. To be sealed against a flat counterpart (e.g. manifold or block), the insert has a groove on the front side for a so-called "F37-Seal" made of polyurethane or, optionally, an O-ring or bonded seal made of carbon steel or stainless steel with a nitrile rubber or FKM sealing lip. Alternatively, the front side of the insert can be flat. For a pipe-to-pipe connection, a special insert design with soft-sealed cones on both sides to fit between two flared pipe ends is available. Afterwards, the flange is positioned on the pipe end and connected to a hydraulic component or another pipe having a similar flange and corresponding insert. Retaining Ring Technology For the retaining ring connection, a groove is machined onto the pipe end to hold the retaining ring. The retaining ring is a segmented stainless steel ring covered by a stainless steel spring and is used to fix the flange. For assembly, the retaining ring flange is placed onto the machined pipe end. The retaining ring is widened to slide onto the pipe end and snap into the previously machined groove. The inside contour of the retaining ring flange covers the retaining ring from the outside. The sealing of the Parflange F37 retaining ring connection is done by a bonded seal on the face side of the pipe end or, alternatively, by a pipe seal carrier ("PSC"). The pipe seal carrier has soft seals (O-rings or F37-Seals) on both sides. On one side, the pipe seal carrier has a centering aid to improve assembly. Functionality Flaring Technology By flaring the pipe end, the flange gains a supporting surface and is able to build up force to ensure a stable connection. Primarily, the insert has a sealing function. With its O-ring on the pipe-end side, the sealing against the pipe is achieved. The sealing against the connecting part is done by the F37-Seal or a bonded seal. If the connecting part has a soft seal on the face side, an insert with a flat face has to be used. For the connection of two pipes, an insert with cones on both sides, which are soft-sealed by O-rings, can be used as well. Simultaneously, the insert stabilises the connection. The pressure achieved by tightening the flange bolts is spread over the larger contact surface of the insert, increasing the solidity of the connection. Retaining Ring Technology The special inside contour of the retaining ring flange covers the retaining ring, which is installed onto the machined pipe end. A form-closed connection results from the tightening of the flange, which is sealed by a bonded seal or pipe seal carrier on the face side. Application The Parflange F37 system is used to connect hydraulic tubes, pipes and components without welding.
Depending on pipe and flange size, the F37 system is approved for pressure ratings up to 420 bar (6000 psi, or 42 MPa). It is commonly used in the shipbuilding, offshore and heavy machinery industries for moving and controlling, for example, cranes and elevators. Furthermore, the Parflange F37 technology allows tubes and pipes from 16 to 273 millimetres outside diameter (1/2" to 10" flange size) to be connected. Approvals The F37 system is approved by leading classification societies. The flange hole patterns are according to ISO 6162-1/SAE J 518 Code 61 (3000 psi/210 bar), ISO 6162-2/SAE J518 Code 62 (6000 psi/420 bar) and ISO 6164 (400 bar). Other information Advantages of Parflange F37 compared to welded flange connections lie mainly in savings of time and cost. No costly inspection of welds (e.g. by X-ray examination) or post-weld acid cleaning is needed, making the connection also more environmentally friendly and safer than welding. Compared to welding, no weld stress corrosion is possible, resulting in a longer service life of the pipe connection and a reduction of service costs. References Fluid Markt 2008 (edition 2008), Measurement Technology Equipment, Tube Fittings (Verlag Moderne Industrie) Fluid (edition 10/2012), Economic and high pressure resistant - flanges for compact, quick installed hydraulic connections (Verlag Moderne Industrie) Fluid (edition 05/2013), From design to installation - hydraulic lines according to customer needs (Verlag Moderne Industrie) Parflange F37 for pipe and tube connections, catalog 4162-4 (edition 03/2013), Parker Hannifin Corporation External links Animated Parflange F37 Flaring Connection Animated Parflange F37 Retaining Ring Connection Hydraulics
Parflange F37
[ "Physics", "Chemistry" ]
1,141
[ "Physical systems", "Hydraulics", "Fluid dynamics" ]
43,610,449
https://en.wikipedia.org/wiki/UK%20Native%20Seed%20Hub
The UK Native Seed Hub (UKNSH) is a project of the Royal Botanic Gardens, Kew's Millennium Seed Bank Partnership growing and distributing seeds of UK native plant species. It is in part a response to the 2010 report Making Space for Nature by Sir John Lawton. The project, located at Wakehurst Place, in West Sussex, in the High Weald of southern England, is dedicated to enhancing the resilience and coherence of the UK's ecological networks by improving the quality, quantity, and diversity of UK seed species available for use in conservation, rehabilitation, and restoration projects. The UKNSH makes available to conservation and restoration projects high quality Millennium Seed Bank seed collections, some of which are of species that are not available on the commercial seed market, and some of which are local provenance collections of species already available. As part of the Royal Botanic Gardens, the UKNSH is a nonprofit organization which provides seeds under a license agreement, ensuring use of the seed only for projects that directly support UK biodiversity and at a charge that only recoups the financial cost of recollection to replace seeds in the bank. The provision of seed may be accompanied by technical training, advice and research that enable users of the seed and other commercial seed suppliers to improve the knowledge, use, and storage of native seed in the UK. History In 2011 the Esmée Fairbairn Foundation gave £750,000 to the Royal Botanic Gardens, Kew to establish the UKNSH as part of the foundation's 50th birthday celebrations. The funding was expected to establish the project over a period of four years. Production began at the Wakehurst Place nursery in 2011. In 2012 seed production moved to the new production beds that are on display to the public at Wakehurst Place. Production focused on regenerating grassland species such as Campanula rotundifolia (harebell) and Genista tinctoria (dyer's greenweed) from seed in the Millennium Seed Bank's collections. In May 2014 suitable seed from the Millennium Seed Bank's collections was made available via the UKNSH online seed list, making it possible for legitimate conservation projects to request seed from the available UKNSH collections. The Seed Hub The Seed Hub itself is the Wakehurst Place-based production site, with a capacity of 28 beds on about a hectare of land in total, close to the Millennium Seed Bank building and the Visitor Centre (and shown on the visitor's map). Construction of the site began in 2011 and it is maintained by Kew's horticultural staff. Seed production is focused on species that are difficult to obtain on the commercial market, due to harvesting, germination, or processing difficulties. Species are often regenerated to create a large collection of seeds of a particular UK provenance, such as the South Downs' Primula veris (cowslip), or from a particular environmental habitat. The site is also open to the public as part of a visit to Wakehurst, and provides a useful environment to observe and photograph plant pollinators. The Seed List To support UK conservation and restoration projects the UKNSH makes the Millennium Seed Bank's high quality, UK origin seeds available to legitimate initiatives aiming to improve the UK's ecological network. A few of these collections are supplied from the production beds at Wakehurst Place, with some being made available from the suitable collections in the seed bank at the MSBP. Seed is only provided from appropriate collections in the MSBP when the collection is sufficiently large.
Where seed numbers are not sufficient to make a collection available for distribution, the species may be bulked up on the production beds. All seed provided is "F1" generation, in that it is the offspring of plants grown from seeds collected directly from the wild. The seed list operates in such a way that the provision of seed to conservation projects will never completely deplete the collection in the bank. Support The UKNSH also provides a range of training for seed producers, collectors, and users. This support ensures that seed users understand the importance of high quality seed of UK origin for UK conservation work, as well as providing them with the skills needed to use the seed. The support also ensures that seeds of appropriate species and provenance are used for the project's location and habitat. Support covers the entire range of seed handling from harvesting, through processing, testing, and storage, to distribution and sowing. The aim of this support is to improve best practice in UK native seed use. The UKNSH provides advice and services through consultancy to enable the continuation of the project beyond the four-year funding of the Esmée Fairbairn Foundation. References External links Official UK Native Seed Hub website Community seed banks Conservation projects Ecological restoration Native plant societies Rare breed conservation Royal Botanic Gardens, Kew Seed associations Tourist attractions in West Sussex Agricultural organisations based in the United Kingdom
UK Native Seed Hub
[ "Chemistry", "Engineering" ]
987
[ "Ecological restoration", "Environmental engineering" ]
43,611,277
https://en.wikipedia.org/wiki/Regulatory%20macrophages
Regulatory macrophages (Mregs) represent a subset of anti-inflammatory macrophages. In general, macrophages are a very dynamic and plastic cell type and can be divided into two main groups: classically activated macrophages (M1) and alternatively activated macrophages (M2). The M2 group can be further divided into the sub-groups M2a, M2b, M2c, and M2d. Typically the M2 cells have anti-inflammatory and regulatory properties and produce many different anti-inflammatory cytokines such as IL-4, IL-33, IL-10, IL-1RA, and TGF-β. M2 cells can also secrete angiogenic and chemotactic factors. These cells can be distinguished based on the different expression levels of various surface proteins and the secretion of different effector molecules. M2a, mainly known as alternatively activated macrophages, are macrophages associated with tissue healing due to the production of components of extracellular matrix. M2a cells are induced by IL-4 and IL-13. M2b, generally referred to as regulatory macrophages (Mregs), are characterized by secreting large amounts of IL-10 and small amounts of IL-12. M2c, also known as deactivated macrophages, secrete large amounts of IL-10 and TGF-β. M2c are induced by glucocorticoids and TGF-β. M2d are pro-angiogenic cells that secrete IL-10, TGF-β, and vascular endothelial growth factor and are induced by IL-6 and A2 adenosine receptor (A2R) agonists. Mreg origin and induction Mregs can arise following innate or adaptive immune responses. Mregs were first described after FcγR ligation by IgG complexes in the presence of pathogen-associated molecular patterns (e.g. lipopolysaccharide or lipoteichoic acid) acting through Toll-like receptors. Coculture of macrophages with regulatory T cells (Tregs) caused differentiation of macrophages toward the Mreg phenotype. A similar effect was provoked by the interaction of macrophages with B1 B cells. Mregs can even arise following stress responses. Activation of the hypothalamic-pituitary-adrenal axis leads to production of glucocorticoids that cause decreased production of IL-12 by macrophages. Many cell types including monocytes, M1, and M2 can, in a specific microenvironment, differentiate to Mregs. Induction of Mregs is strongly linked with the interaction of Fc receptors located on the surface of Mregs with Fc fragments of antibodies. It has been shown that anti-TNF monoclonal antibodies interacting with the Fcγ receptor of Mregs induce differentiation of Mregs through activation of the STAT3 signaling pathway. Some pathogens can promote the transformation of cells into Mregs as an immune evasion mechanism. Two signals are needed for Mreg induction. The first signal is stimulation by M-CSF, GM-CSF, PGE2, adenosine, glucocorticoids, or apoptotic cells. The second signal can be stimulation with cytokines or Toll-like receptor ligands. The first signal promotes the differentiation of monocytes to macrophages and the second signal promotes immunosuppressive functions. In vitro, M-CSF, IFNγ, and LPS are used for the induction of Mregs. Other cells such as eosinophils and innate lymphoid cells type 2 (ILC2) can promote M2 polarization by cytokine secretion. IL-9 can function as a growth factor for ILC2 and thereby assist in the induction of Mregs. Another cytokine that helps the induction of Mregs is IL-35, which is produced by Tregs. Characterization and determination of Mregs Surprisingly, Mregs are biochemically more similar to classically activated macrophages than to alternatively activated macrophages.
The difference between M1 macrophages and Mregs is, inter alia, that Mregs secrete high levels of IL-10 and simultaneously low levels of IL-12. Out of all macrophages, Mregs show the highest expression of MHC II molecules and co-stimulatory molecules (CD80/CD86), which differentiates them from the alternatively activated macrophages, which show a very low expression of these molecules. Mregs also differ from alternatively activated macrophages by producing high levels of nitric oxide and having low arginase activity. Lastly, they differ in the expression of FIZZ1 (resistin-like molecule alpha) and YM1, which are differentiation markers present on alternatively activated macrophages. Mregs are recognized by the expression of PD-L1, CD206, CD80/CD86, HLA-DR, and DHRS9 (dehydrogenase/reductase 9). DHRS9 has been recognized as a stable marker for Mregs in humans. Biochemical and functional characterization of Mregs The physiological role of Mregs is to dampen the immune response and immunopathology. Unlike classically activated macrophages, Mregs produce low levels of IL-12, which is important because IL-12 induces differentiation of naïve helper T cells to Th1 cells, which produce high levels of IFNγ. Mregs do not contribute to the production of extracellular matrix because they express low levels of arginase. Mregs show up-regulation of IL-10, TGFβ, PGE2, iNOS, and IDO, and down-regulation of IL-1β, IL-6, IL-12, and TNF-α. By secreting TGF-β they help with the induction of Tregs, and by producing IL-10 they contribute to the induction of tolerance and regulatory cell types. Mregs can directly inhibit the proliferation of activated T cells. It has been shown that Mregs co-cultured with T cells have a negative effect on the ability of T cells to secrete IL-2 and IFN-γ. Mregs can also inhibit the arginase activity of alternatively activated macrophages and the proliferation of fibroblasts, and can promote angiogenesis. The use of Mregs is widely studied as a potential cell-based immunosuppressive therapy after organ transplantation. Mregs could potentially solve the problems (susceptibility to infectious diseases and cancer) associated with current post-transplant therapy. Since Mregs still produce nitric oxide, they may, when appropriately stimulated, be more suitable than current treatments. References Immune system Macrophages
Regulatory macrophages
[ "Biology" ]
1,419
[ "Immune system", "Organ systems" ]
43,613,625
https://en.wikipedia.org/wiki/%C5%81ukasiewicz%E2%80%93Moisil%20algebra
Łukasiewicz–Moisil algebras (LMn algebras) were introduced in the 1940s by Grigore Moisil (initially under the name of Łukasiewicz algebras) in the hope of giving algebraic semantics for the n-valued Łukasiewicz logic. However, in 1956 Alan Rose discovered that for n ≥ 5, the Łukasiewicz–Moisil algebra does not model the Łukasiewicz logic. A faithful model for the ℵ0-valued (infinitely-many-valued) Łukasiewicz–Tarski logic was provided by C. C. Chang's MV-algebra, introduced in 1958. For the axiomatically more complicated (finite) n-valued Łukasiewicz logics, suitable algebras were published in 1977 by Revaz Grigolia and called MVn-algebras. MVn-algebras are a subclass of LMn-algebras, and the inclusion is strict for n ≥ 5. In 1982 Roberto Cignoli published some additional constraints that, added to LMn-algebras, produce proper models for n-valued Łukasiewicz logic; Cignoli called his discovery proper Łukasiewicz algebras. Moisil, however, published in 1964 a logic to match his algebra (in the general n ≥ 5 case), now called Moisil logic. After coming into contact with Zadeh's fuzzy logic, in 1968 Moisil also introduced an infinitely-many-valued logic variant and its corresponding LMθ algebras. Although the Łukasiewicz implication cannot be defined in an LMn algebra for n ≥ 5, the Heyting implication can be, i.e. LMn algebras are Heyting algebras; as a result, Moisil logics can also be developed (from a purely logical standpoint) in the framework of Brouwer's intuitionistic logic. Definition An LMn algebra is a De Morgan algebra (a notion also introduced by Moisil) with n−1 additional unary "modal" operations ∇j, i.e. an algebra of signature (A, ∨, ∧, ¬, {∇j}j∈J, 0, 1), where J = {1, 2, ..., n−1}. (Some sources denote the additional operators as ∇j^n to emphasize that they depend on the order n of the algebra.) The additional unary operators ∇j must satisfy the following axioms for all x, y ∈ A and j, k ∈ J: (1) ∇j(x ∨ y) = ∇jx ∨ ∇jy; (2) ∇jx ∨ ¬∇jx = 1; (3) ∇j(∇kx) = ∇kx; (4) ∇j¬x = ¬∇n−jx; (5) ∇1x ≤ ∇2x ≤ ... ≤ ∇n−1x; (6) if ∇jx = ∇jy for all j ∈ J, then x = y. (The adjective "modal" is related to the [ultimately failed] program of Tarski and Łukasiewicz to axiomatize modal logic using many-valued logic.) Elementary properties The duals of some of the above axioms follow as properties, for example ∇j(x ∧ y) = ∇jx ∧ ∇jy and ∇jx ∧ ¬∇jx = 0. Additionally: ∇j0 = 0 and ∇j1 = 1. In other words, the unary "modal" operations are lattice endomorphisms. Examples LM2 algebras are the Boolean algebras. The canonical Łukasiewicz algebras that Moisil had in mind were over the set Łn = {0, 1/(n−1), 2/(n−1), ..., 1}, with negation ¬x = 1 − x, conjunction x ∧ y = min(x, y) and disjunction x ∨ y = max(x, y), and the unary "modal" operators ∇jx = 0 if x ≤ (n−1−j)/(n−1) and ∇jx = 1 otherwise. If B is a Boolean algebra, then the algebra over the set B[2] ≝ {(x, y) ∈ B×B | x ≤ y} with the lattice operations defined pointwise and with ¬(x, y) ≝ (¬y, ¬x), and with the unary "modal" operators ∇2(x, y) ≝ (y, y) and ∇1(x, y) = ¬∇2¬(x, y) = (x, x) [derived by axiom 4] is a three-valued Łukasiewicz algebra. Representation Moisil proved that every LMn algebra can be embedded in a direct product (of copies) of the canonical algebra Łn. As a corollary, every LMn algebra is a subdirect product of subalgebras of Łn. The Heyting implication can be defined as x ⇒ y = y ∨ ⋀j∈J (¬∇jx ∨ ∇jy). Antonio Monteiro showed that for every monadic Boolean algebra one can construct a trivalent Łukasiewicz algebra (by taking certain equivalence classes) and that any trivalent Łukasiewicz algebra is isomorphic to a Łukasiewicz algebra thus derived from a monadic Boolean algebra.
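To make the canonical construction concrete, the following minimal sketch (not from the original article) enumerates the three-element canonical algebra Ł3 = {0, 1/2, 1} and machine-checks a few of the axioms; the definition of ∇j used here is the threshold form reconstructed in the Examples section above.

```python
from fractions import Fraction
from itertools import product

# Canonical three-valued Lukasiewicz-Moisil algebra L3 over {0, 1/2, 1}.
n = 3
L = [Fraction(0), Fraction(1, 2), Fraction(1)]
J = range(1, n)  # modal indices 1..n-1

neg = lambda x: 1 - x
join, meet = max, min

def nabla(j, x):
    # nabla_j x = 0 if x <= (n-1-j)/(n-1), else 1  (reconstructed definition)
    return Fraction(0) if x <= Fraction(n - 1 - j, n - 1) else Fraction(1)

# Axiom 4: nabla_j(neg x) == neg(nabla_{n-j} x)
assert all(nabla(j, neg(x)) == neg(nabla(n - j, x)) for j in J for x in L)

# Axiom 6 (determination principle): equal modal images imply equal elements.
for x, y in product(L, repeat=2):
    if all(nabla(j, x) == nabla(j, y) for j in J):
        assert x == y

# The modal operators are lattice endomorphisms (preserve join and meet).
assert all(nabla(j, join(x, y)) == join(nabla(j, x), nabla(j, y)) and
           nabla(j, meet(x, y)) == meet(nabla(j, x), nabla(j, y))
           for j in J for x, y in product(L, repeat=2))
print("L3 satisfies the checked LM3 axioms")
```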
Cignoli summarizes the importance of this result as: "Since it was shown by Halmos that monadic Boolean algebras are the algebraic counterpart of classical first order monadic calculus, Monteiro considered that the representation of three-valued Łukasiewicz algebras into monadic Boolean algebras gives a proof of the consistency of Łukasiewicz three-valued logic relative to classical logic." References Further reading Boicescu, V., Filipoiu, A., Georgescu, G., Rudeanu, S.: Łukasiewicz–Moisil Algebras. North-Holland, Amsterdam (1991) Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz–Moisil algebras—II. Discrete Math. 202, 113–134 (1999) Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz–Moisil algebras—III. Unpublished manuscript Iorgulescu, A.: Connections between MVn-algebras and n-valued Łukasiewicz–Moisil algebras—IV. J. Univers. Comput. Sci. 6, 139–154 (2000) R. Cignoli, Algebras de Moisil de orden n, Ph.D. Thesis, Universidad Nacional del Sur, Bahía Blanca, 1969 http://projecteuclid.org/download/pdf_1/euclid.ndjfl/1093635424 Algebraic logic Ockham algebras
Łukasiewicz–Moisil algebra
[ "Mathematics" ]
1,204
[ "Mathematical structures", "Mathematical logic", "Fields of abstract algebra", "Algebraic logic", "Algebraic structures", "Ockham algebras" ]
43,616,041
https://en.wikipedia.org/wiki/Juggling%20robot
A juggling robot is a robot designed to be able to successfully carry out bounce or toss juggling. Robots capable of juggling are designed and built both to increase and test understanding and theories of human movement, juggling, and robotics. Juggling robots may include sensors to guide arm/hand movement or may rely on physical methods such as tracks or funnels to guide prop movement. Since true juggling requires more props than hands, many robots described as capable of juggling are not. Bounce juggling A toss juggling robot that can do more than a two-ball column has only recently been built. However, Claude Shannon built the first juggling robot, a 3-ball bounce juggler, from an Erector Set, in the 1970s. "Bounce juggling is easier to accomplish than is toss juggling because the balls are grabbed at the top of their trajectories, when they are moving the slowest." Shannon's machine, decorated as and named W. C. Fields, used grooved cups/tracks instead of sensors or feedback, and corrected throwing errors through these tracks on its hands. Shannon also devised a juggling theorem. By 1992, Christopher G. Atkeson and Stefan K. Schaal of the Georgia Institute of Technology had built a similar 5-ball bounce juggling robot. In 1989 Martin Bühler and Daniel E. Koditschek produced a juggler with one rotating bar, moving one way then the other, that bounces two props in a fountain of indefinite duration. Toss juggling Sakaguchi et al. (1991) and Miyazaki (1993) produced a one-armed two-ball fountain juggler with a two-degrees-of-freedom arm and an unactuated funnel-shaped hand. Kizaki and Namiki (2012) developed a high-speed hand-arm system with actuated fingers that is able to repeatedly juggle two balls in a fountain pattern. Ploeger et al. (2020) achieved stable two-ball juggling in a column pattern for 33 minutes on a four-degrees-of-freedom robotic arm with a funnel-shaped hand using a learning-based approach. By 2011 students at the Department of Control Engineering at Prague's Czech Technical University had built a 5-ball cascade juggling robot whose arms have both vertical and horizontal motion, whose hands are ring-shaped, and which contains a basket that provides the initial throws and relaunches any failed catches. Disney Research is developing a robot capable of pass juggling with the goal of being able to provide more physical interaction between visitors and mechanized characters. Contact juggling Contact juggling appears to be less common among robots, as it is with people. However, in 2010 undergraduates at Northwestern University developed a robot capable of rolling a grooved disk from the center, over the edge, and to the center of the other side of a figure-eight shaped track capable of rotation. See also Bipedal robot IEEE Motion capture Negative feedback Servomechanism References External links Mason, Matt (1996). "A Survey Of Robotic Juggling And Dynamic Manipulation", Juggling.org. Juggling Robots
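Shannon's juggling theorem, mentioned above, is usually stated as (F + D)H = (V + D)N, where F is a ball's flight time, D the dwell time in a hand, V the time a hand stays vacant, H the number of hands, and N the number of balls. As a minimal sketch (the timing values below are invented purely for illustration), the identity fixes the vacant time of a two-handed three-ball cascade once the other quantities are chosen:

```python
# Shannon's juggling theorem: (F + D) * H == (V + D) * N
# F: flight time, D: dwell time, V: vacant time, H: hands, N: balls.
# The numeric values are hypothetical, chosen only to illustrate the identity.
F, D = 0.5, 0.2   # seconds (assumed)
H, N = 2, 3       # two hands, three balls

V = (F + D) * H / N - D   # solve the theorem for the vacant time
assert abs((F + D) * H - (V + D) * N) < 1e-12
print(f"vacant time per hand: {V:.4f} s")  # ~0.2667 s
```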
Juggling robot
[ "Physics", "Technology" ]
639
[ "Physical systems", "Machines", "Robots" ]
43,619,610
https://en.wikipedia.org/wiki/Process%20network%20synthesis
Process network synthesis (PNS) is a method to represent a process structure as a directed bipartite graph. Process network synthesis uses the P-graph method to create a process structure. The scientific aim of this method is to find optimum structures. Process network synthesis uses the bipartite P-graph method and employs combinatorial rules to find all feasible network solutions (the maximal structure), linking raw materials to the desired products of the given problem. With a branch-and-bound optimisation routine and a defined target value, an optimum structure can be generated that optimises the chosen target function. Process network synthesis was originally developed to solve chemical process engineering problems. The target value as well as the structure can be changed depending on the field of application; many more fields of application have thus followed (a minimal sketch of the maximal-structure idea is given after this article). Applications At Pannon University, the software tools PNS Editor and PNS Studio were programmed to generate the maximal structure of processes. This software includes the P-graph method and the MSG, SSG and ABB branch-and-bound algorithms to detect optimum structures within the maximal structure of available process flows. PNS is used in different applications where it can be used to find optimum process structures, such as: Process engineering: chemical process designs and the synthesis of chemical processes, applied in different case studies. Optimum energy technology networks for regional and urban energy systems: in the case of regional and urban energy planning, the financially most feasible solution for resource systems is selected as the target value. With this setting, material and energy flows, energy demand and cost of technologies are considered, and the optimum technology network can be found. Simultaneously, the robustness of technologies with respect to price changes and limitations in resource availability can be identified. Evacuation routes in buildings: the aim is to find optimal routes to evacuate buildings depending on specific side constraints. Transportation routes: in this research area, transportation routes with minimum cost and lowest environmental impact can be identified. References External links P-Graph wiki P-graph method Engineering management Chemical engineering
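The combinatorial core of PNS – building the maximal structure that links raw materials to products – can be sketched in a few lines. The following toy example is a hedged illustration only: the unit names and materials are invented, and real MSG/SSG implementations (e.g. in PNS Studio) handle far more structural detail than this two-pass pruning.

```python
# Minimal sketch of maximal-structure generation in the spirit of the
# P-graph framework described above. Problem data are invented for
# illustration; each operating unit maps input materials to outputs.
units = {
    "gasifier":  ({"biomass"}, {"syngas"}),
    "reformer":  ({"natural_gas"}, {"syngas"}),
    "synthesis": ({"syngas"}, {"methanol"}),
    "dead_end":  ({"unobtainium"}, {"methanol"}),  # inputs never producible
}
raw_materials = {"biomass", "natural_gas"}
products = {"methanol"}

# Forward pass: materials producible from the raw materials.
producible, changed = set(raw_materials), True
while changed:
    changed = False
    for name, (ins, outs) in units.items():
        if ins <= producible and not outs <= producible:
            producible |= outs
            changed = True

# Backward pass: keep only units that can feed the desired products.
needed, kept, changed = set(products), set(), True
while changed:
    changed = False
    for name, (ins, outs) in units.items():
        if name not in kept and ins <= producible and outs & needed:
            kept.add(name)
            needed |= ins
            changed = True

print("maximal structure:", sorted(kept))  # ['gasifier', 'reformer', 'synthesis']
```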
Process network synthesis
[ "Chemistry", "Engineering" ]
399
[ "Chemical engineering", "Engineering economics", "Engineering management", "nan" ]
28,574,558
https://en.wikipedia.org/wiki/Crystal%20structure%20of%20boron-rich%20metal%20borides%20%28data%20page%29
This article contains crystal structure data used in the article crystal structure of boron-rich metal borides. Table I Chemical composition can be calculated as Y0.62Al0.71B14. Table II Table III a The number n in the atom designation Bn,n refers to the nth B12 icosahedron to which the atom belongs. Si6.n and B6.n belong to the B12Si3 unit. b,c,d The Si and B sites are in the same interstice, which is assumed to be fully occupied by Si and B atoms together, with occupancies of Occ.(Si) and Occ.(B), respectively, where Occ.(Si) + Occ.(B) = 1. The position of the boron atom was adjusted independently by fixing the thermal parameters at the same value as for the Si atom in the same interstice. e The temperature factor is fixed at this value. f Equivalent isotropic temperature factor, calculated from the relation Beq = (4/3)(a²β11 + b²β22 + c²β33). Table IVa Structure data for homologous compounds. The sum of those values was fixed at 1.0. Table IVb Table IVc Table Va The sum of those values was fixed at 1.0. Table Vb Table VI a Obtained by structure analysis. Table VII Table VIII a Anisotropic thermal factors are applied to Sc sites, and Ueq (one-third of the trace of the orthogonalized Uij tensor) is listed in these columns. Table IX a Anisotropic thermal factors are applied to Sc sites, and Ueq (one-third of the trace of the orthogonalized Uij tensor) is listed in these columns. Table X a Anisotropic thermal factors are applied to Sc sites, and Ueq (one-third of the trace of the orthogonalized Uij tensor) is listed in these columns. References Borides Crystallography
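The equivalent isotropic temperature factor quoted in footnote f to Table III is a simple weighted sum of the anisotropic parameters. A minimal sketch of the computation (the numerical inputs are placeholders, since the table values themselves are not reproduced in this text):

```python
# Equivalent isotropic temperature factor from the relation quoted above:
#   Beq = (4/3) * (a^2 * beta11 + b^2 * beta22 + c^2 * beta33)
# Lattice constants and beta_ij below are placeholders, not table values.
def b_eq(a, b, c, beta11, beta22, beta33):
    """Equivalent isotropic B-factor for an orthogonal cell (lengths in angstroms)."""
    return (4.0 / 3.0) * (a**2 * beta11 + b**2 * beta22 + c**2 * beta33)

print(f"Beq = {b_eq(5.82, 10.41, 8.19, 0.002, 0.001, 0.0015):.3f} A^2")
```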
Crystal structure of boron-rich metal borides (data page)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
425
[ "Crystallography", "Condensed matter physics", "Materials science" ]
28,574,607
https://en.wikipedia.org/wiki/Crystal%20structure%20of%20boron-rich%20metal%20borides
Metals, and specifically rare-earth elements, form numerous chemical complexes with boron. Their crystal structure and chemical bonding depend strongly on the metal element M and on its atomic ratio to boron. When the B/M ratio exceeds 12, boron atoms form B12 icosahedra which are linked into a three-dimensional boron framework, and the metal atoms reside in the voids of this framework. Those icosahedra are basic structural units of most allotropes of boron and boron-rich rare-earth borides. In such borides, metal atoms donate electrons to the boron polyhedra, and thus these compounds are regarded as electron-deficient solids. The crystal structures of many boron-rich borides can be attributed to certain types including MgAlB14, YB66, REB41Si1.2, B4C and other, more complex types such as RExB12C0.33Si3.0. Some of these formulas, for example B4C, YB66 and MgAlB14, historically reflect idealized structures, whereas the experimentally determined composition is nonstoichiometric and corresponds to fractional indexes. Boron-rich borides are usually characterized by large and complex unit cells, which can contain more than 1500 atomic sites and feature extended structures shaped as "tubes" and large modular polyhedra ("superpolyhedra"). Many of those sites have partial occupancy, meaning that the probability to find them occupied with a certain atom is smaller than one and thus that only some of them are filled with atoms. Scandium is distinguished among the rare-earth elements in that it forms numerous borides with uncommon structure types; this property of scandium is attributed to its relatively small atomic and ionic radii. Crystals of the specific rare-earth boride YB66 are used as X-ray monochromators for selecting X-rays with certain energies (in the 1–2 keV range) out of synchrotron radiation. Other rare-earth borides may find application as thermoelectric materials, owing to their low thermal conductivity; the latter originates from their complex, "amorphous-like", crystal structure. Metal borides In metal borides, the bonding of boron varies depending on the atomic ratio B/M. Diborides have B/M = 2, as in the well-known superconductor MgB2; they crystallize in a hexagonal AlB2-type layered structure. Hexaborides have B/M = 6 and form a three-dimensional boron framework based on a boron octahedron (Fig. 1a). Tetraborides, i.e. B/M = 4, are mixtures of diboride and hexaboride structures. The cuboctahedron (Fig. 1b) is the structural unit of dodecaborides, which have a cubic lattice and B/M = 12. When the composition ratio exceeds 12, boron forms B12 icosahedra (Fig. 1c) which are linked into a three-dimensional boron framework, and the metal atoms reside in the voids of this framework. This complex bonding behavior originates from the fact that boron has only three valence electrons; this hinders tetrahedral bonding as in diamond or hexagonal bonding as in graphite. Instead, boron atoms form polyhedra. For example, three boron atoms make up a triangle in which they share two electrons to complete the so-called three-center bond. Boron polyhedra, such as the B6 octahedron, the B12 cuboctahedron and the B12 icosahedron, lack two valence electrons per polyhedron to complete the polyhedron-based framework structure. Metal atoms need to donate two electrons per boron polyhedron to form boron-rich metal borides. Thus, boron compounds are often regarded as electron-deficient solids. The covalent nature of the bonding in metal borides also gives these compounds their hardness and chemical inertness.
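The two-electron deficit per polyhedron mentioned above can be rationalized by a simple skeletal-electron count in the spirit of Wade's rules; this bookkeeping is a common textbook argument rather than a derivation taken from this article:

```python
# Electron bookkeeping for a B12 icosahedron, in the spirit of Wade's rules.
# Illustrative back-of-the-envelope count, not data from the article.
n_vertices = 12
valence_electrons = 3 * n_vertices          # boron has 3 valence electrons
exo_bond_electrons = 1 * n_vertices         # one electron per external 2c-2e bond
skeletal_available = valence_electrons - exo_bond_electrons   # 24
skeletal_required = 2 * (n_vertices + 1)    # closo cluster: n+1 pairs = 26
print("deficit per icosahedron:", skeletal_required - skeletal_available)  # 2
```

The same two-electron shortfall is what the metal atoms supply in boron-rich metal borides.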
Icosahedral B12 compounds include α-rhombohedral boron (B13C2), β-rhombohedral boron (MeBx, 23≤x), α-tetragonal boron (B48B2C2), β-tetragonal boron (β-AlB12), AlB10 or AlC4B24, YB25, YB50, YB66, NaB15 or MgAlB14, γ-AlB12, BeB3 and SiB6. YB25 and YB50 decompose without melting, which hinders their growth as single crystals by the floating zone method. However, addition of a small amount of Si solves this problem and results in single crystals with the stoichiometry of YB41Si1.2. This stabilization technique allowed the synthesis of some other boron-rich rare-earth borides. Albert and Hillebrecht reviewed binary and selected ternary boron compounds containing main-group elements, namely, borides of the alkali and alkaline-earth metals, aluminum borides and compounds of boron and the nonmetals C, Si, Ge, N, P, As, O, S and Se. They, however, excluded the icosahedron-based rare-earth borides described here. Note that rare-earth elements have d- and f-electrons, which complicates the chemical and physical properties of their borides. Werheit et al. reviewed Raman spectra of numerous icosahedron-based boron compounds. Figure 2 shows a relationship between the ionic radius of trivalent rare-earth ions and the composition of some rare-earth borides. Note that scandium has many unique boron compounds, as shown in figure 2, because of its much smaller ionic radius compared with other rare-earth elements. In understanding the crystal structures of rare-earth borides, it is important to keep in mind the concept of partial site occupancy, that is, some atoms in the unit cells described below can take several possible positions with a given statistical probability. Thus, with the given statistical probability, some of the partial-occupancy sites in such a unit cell are empty, and the remaining sites are occupied. REAlB14 and REB25 Compounds that were historically given the formulae REAlB14 and REB25 have the MgAlB14 structure with an orthorhombic symmetry and space group Imma (No. 74). In this structure, rare-earth atoms enter the Mg site. Aluminium sites are empty for REB25. Both metal sites of the REAlB14 structure have partial occupancies of about 60–70%, which shows that the compounds are actually non-stoichiometric. The REB25 formula merely reflects the average atomic ratio [B]/[RE] = 25. Yttrium borides form both YAlB14 and YB25 structures. Experiments have confirmed that the borides based on rare-earth elements from Tb to Lu can have the REAlB14 structure. A subset of these borides, which contains rare-earth elements from Gd to Er, can also crystallize in the REB25 structure. Korsukova et al. analyzed the YAlB14 crystal structure using a single crystal grown by the high-temperature solution-growth method. The lattice constants were deduced as a = 0.58212(3), b = 1.04130(8) and c = 0.81947(6) nm, and the atomic coordinates and site occupancies are summarized in table I. Figure 3 shows the crystal structure of YAlB14 viewed along the x-axis. The large black spheres are Y atoms, the small blue spheres are Al atoms and the small green spheres are the bridging boron sites; B12 clusters are depicted as the green icosahedra. The boron framework of YAlB14 is one of the simplest among icosahedron-based borides – it consists of only one kind of icosahedron and one bridging boron site. The bridging boron site is tetrahedrally coordinated by four boron atoms. Those atoms are another boron atom in the counter bridge site and three equatorial boron atoms of one of three B12 icosahedra.
Aluminium atoms are separated by 0.2911 nm and are arranged in lines parallel to the x-axis, whereas yttrium atoms are separated by 0.3405 nm. Both the Y atoms and the B12 icosahedra form zigzags along the x-axis. The bridging boron atoms connect three equatorial boron atoms of three icosahedra, and those icosahedra make up a network parallel to the (101) crystal plane (x-z plane in the figure). The bonding distance between the bridging boron and the equatorial boron atoms is 0.1755 nm, which is typical for the strong covalent B-B bond (bond length 0.17–0.18 nm); thus, the bridging boron atoms strengthen the individual network planes. On the other hand, the large distance between the boron atoms within the bridge (0.2041 nm) suggests weaker interaction, and thus the bridging sites contribute little to the bonding between the network planes. The boron framework of YAlB14 needs the donation of four electrons from metal elements: two electrons for a B12 icosahedron and one electron for each of the two bridging boron atoms, to support their tetrahedral coordination. The actual chemical composition of YAlB14, determined by the structure analysis, is Y0.62Al0.71B14, as described in table I. If both metal elements are trivalent ions, then 3.99 electrons can be transferred to the boron framework, which is very close to the required value of 4. However, because the bonding between the bridging boron atoms is weaker than in a typical B-B covalent bond, less than 2 electrons are donated to this bond, and the metal atoms need not be trivalent. On the other hand, the electron transfer from metal atoms to the boron framework implies that not only strong covalent B-B bonding within the framework but also ionic interaction between metal atoms and the framework contribute to the YAlB14 phase stabilization. REB66-type borides In addition to yttrium, a wide range of rare-earth elements from Nd to Lu, except for Eu, can form REB66 compounds. Seybolt discovered the compound YB66 in 1960 and its structure was solved by Richards and Kasper in 1969. They reported that YB66 has a face-centered cubic structure with space group Fm-3c (No. 226) and lattice constant a = 2.3440(6) nm. There are 13 boron sites B1–B13 and one yttrium site. The B1 sites form one icosahedron and the B2–B9 sites make up another icosahedron. These icosahedra arrange in a thirteen-icosahedron unit (B12)12B12, which is shown in figure 4a and is called a supericosahedron. The icosahedron formed by the B1 site atoms is located at the center of the supericosahedron. The supericosahedron is one of the basic units of the boron framework of YB66. There are two types of supericosahedra: one occupies the cubic face centers and the other, which is rotated by 90°, is located at the center of the cell and at the cell edges. Thus, there are eight supericosahedra (1248 boron atoms) in the unit cell. Another structure unit of YB66, shown in figure 4b, is the B80 cluster of 80 boron sites formed by the B10 to B13 sites. All those 80 sites are partially occupied and in total contain only about 42 boron atoms. The B80 cluster is located at the body center of each octant of the unit cell, i.e., at the 8a position (1/4, 1/4, 1/4); thus, there are eight such clusters (336 boron atoms) per unit cell. Two independent structure analyses came to the same conclusion that the total number of boron atoms in the unit cell is 1584. The boron framework structure of YB66 is shown in figure 5a.
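Both head-counts used above – the ≈4 electrons donated per formula unit of Y0.62Al0.71B14 and the 1584 framework boron atoms per YB66 unit cell – reduce to one-line arithmetic. The sketch below simply replays the numbers quoted in the text:

```python
# Replaying the electron and atom counts quoted in the text above.

# YAlB14: electrons donated to the boron framework, assuming trivalent metals
# and the refined composition Y0.62 Al0.71 B14.
donated = 3 * (0.62 + 0.71)
print(f"YAlB14 donated electrons: {donated:.2f} (framework requires 4)")  # 3.99

# YB66: boron atoms per unit cell from the structural units given above.
supericosahedron = 13 * 12        # (B12)12B12 unit: 156 atoms
b80_occupied = 42                 # ~42 atoms actually occupy the 80 B80 sites
total_boron = 8 * supericosahedron + 8 * b80_occupied
print(f"YB66 boron atoms per cell: {total_boron}")  # 1248 + 336 = 1584
```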
To indicate relative orientations of the supericosahedra, a schematic drawing is shown in figure 5b, where the supericosahedra and the B80 clusters are depicted by light green and dark green spheres, respectively; at the top surface of the unit cell, the relative orientations of the supericosahedra are indicated by arrows. There are 48 yttrium sites ((0.0563, 1/4, 1/4) for YB62) in the unit cell. Richards and Kasper fixed the Y site occupancy to 0.5, which resulted in 24 Y atoms in the unit cell and the chemical composition of YB66. As shown in figure 6, Y sites form a pair separated by only 0.264 nm in YB62. This pair is aligned normal to the plane formed by four supericosahedra. The Y site occupancy of 0.5 implies that the pair always has one Y atom and one empty site. Slack et al. reported that the total number of boron atoms in the unit cell, calculated from the measured values of density, chemical composition and lattice constant, is 1628 ± 4, which is larger than the value 1584 obtained from the structural analysis. The number of B atoms in the unit cell remains nearly constant when the chemical composition changes from YB56 to YB66. On the other hand, the total number of yttrium atoms per unit cell varies and is, for example, ~26.3 for YB62 (see right table). If the total number of Y atoms stayed at or below 24, then it would be possible that each Y pair accommodates one Y atom (partial occupancy). However, the experimental value of 26.3 significantly exceeds 24, and thus both pair sites might be occupied. In this case, because of the small separation between the two Y atoms, they must be repelled by the Coulomb force. To clarify this point, split Y sites were introduced in the structure analysis, resulting in a better agreement with the experiment. The Y site distances and occupancies are presented in the left table. There are twenty Y pair sites with one Y atom and three pairs with two Y atoms; there is also one empty Y pair (partial occupancy = 0). The separation of 0.340 nm for the Y2 pair site (two Y atoms in the pair site) is much larger than the separation of 0.254 nm for the Y1 pair site (one Y atom in the pair site), as expected. The total number of Y atoms in the unit cell is 26.3, exactly as measured. Both cases are compared in figure 7. The larger separation for the Y2 pair site is clear as compared with that for the Y1 pair site. In the case of the Y2 pair, some neighboring boron sites that belong to the B80 cluster must be unoccupied because they are too close to the Y2 site. Splitting the Y site yields the right number of Y atoms in the unit cell, but not of B atoms. Not only must the occupation of the B sites in the B80 cluster depend strongly on whether the Y site is in the Y1 or the Y2 state, but the positions of the occupied B sites must also be affected by the state of the Y site. Atomic coordinates and site occupancies are summarized in table II. REB41Si1.2 Similar to yttrium, rare-earth metals from Gd to Lu can form REB41Si1.2-type borides. The first such compound was synthesized by solid-state reaction and its structure was deduced as YB50. X-ray powder diffraction (XRD) and electron diffraction indicated that YB50 has an orthorhombic structure with lattice constants a = 1.66251(9), b = 1.76198 and c = 0.94797(3) nm. The space group was assigned as P21212.
Because of the close similarity in lattice constants and space group, one might expect that YB50 has the γ-AlB12-type orthorhombic structure, whose lattice constants and space group are a = 1.6573(4), b = 1.7510(3) and c = 1.0144(1) nm and P21212. YB50 decomposes at ~1750 °C without melting, which hinders growth of single crystals from the melt. A small addition of silicon allowed YB50 to melt without decomposition, and so enabled single-crystal growth from the melt and single-crystal structure analysis. The structure analysis indicated that YB41Si1.2 does not have the γ-AlB12-type lattice but rather a rare orthorhombic crystal structure (space group: Pbam, No. 55) with lattice constants of a = 1.674(1) nm, b = 1.7667(1) nm and c = 0.9511(7) nm. There are 58 independent atomic sites in the unit cell. Three of them are occupied by either B or Si atoms (mixed-occupancy sites), one is a Si bridge site and one is the Y site. Of the remaining 53 boron sites, 48 form icosahedra and 5 are bridging sites. Atomic coordinates and site occupancies are summarized in table III. The boron framework of YB41Si1.2 consists of five B12 icosahedra (I1–I5) and a B12Si3 polyhedron, shown in figure 8a. An unusual linkage is depicted in figure 8b, where two B12-I5 icosahedra connect via two B atoms of each icosahedron, forming an imperfect square. The boron framework of YB41Si1.2 can be described as a layered structure in which two boron networks (figures 9a,b) stack along the z-axis. One boron network consists of the three icosahedra I1, I2 and I3 and is located in the z = 0 plane; the other network consists of the icosahedron I5 and the B12Si3 polyhedron and lies at z = 0.5. The icosahedron I4 bridges these networks, and thus its height along the z-axis is 0.25. The I4 icosahedra link the two networks along the c-axis and therefore form an infinite chain of icosahedra along this axis, as shown in figure 10. The unusually short distances (0.4733 and 0.4788 nm) between the neighboring icosahedra in this direction result in the relatively small c-axis lattice constant of 0.95110(7) nm in this compound – other borides with a similar icosahedral chain have this value larger than 1.0 nm. However, the bonding distances between the apex B atoms (0.1619 and 0.1674 nm) of neighboring I4 icosahedra are usual for the metal borides considered here. Another unusual feature of YB41Si1.2 is the 100% occupancy of the Y site. In most icosahedron-based metal borides, metal sites have rather low site occupancy, for example, about 50% for YB66 and 60–70% for REAlB14. When the Y site is replaced by other rare-earth elements, REB41Si1.2 can exhibit antiferromagnetic-like ordering because of this high site occupancy. Homologous icosahedron-based rare-earth borides The rare-earth borides REB15.5CN, REB22C2N and REB28.5C4 are homologous to B4C, i.e. they have a similar crystal structure. The latter has a structure typical of icosahedron-based borides, as shown in figure 11a. There, B12 icosahedra form a rhombohedral lattice unit (space group: R-3m (No. 166), lattice constants: a = 0.56 nm and c = 1.212 nm) surrounding a C-B-C chain that resides at the center of the lattice unit, and both C atoms bridge the neighboring three icosahedra. This structure is layered: as shown in figure 11b, B12 icosahedra and bridging carbons form a network plane that spreads parallel to the c-plane and stacks along the c-axis. These homologous compounds have two basic structure units – the B12 icosahedron and the B6 octahedron.
The network plane of the B4C structure can be periodically replaced by a B6 octahedron layer, so that replacement of every third, fourth and fifth layer corresponds to REB15.5CN, REB22C2N and REB28.5C4, respectively. The B6 octahedron is smaller than the B12 icosahedron; therefore, rare-earth elements can reside in the space created by the replacement. The stacking sequences of B4C, REB15.5CN, REB22C2N and REB28.5C4 are shown in figures 12a, b, c and d, respectively. High-resolution transmission electron microscopy (HRTEM) lattice images of the latter three compounds, added to Fig. 12, confirm the stacking sequence of each compound. The symbols 3T, 12R and 15R in brackets indicate the number of layers necessary to complete the stacking sequence, with T and R referring to trigonal and rhombohedral. Thus, REB22C2N and REB28.5C4 have rather large c-lattice constants. Because of their small size, the B6 octahedra cannot interconnect. Instead, they bond to the B12 icosahedra in the neighboring layer, and this decreases the bonding strength in the c-plane. Nitrogen atoms strengthen the bonding in the c-plane by bridging three icosahedra, like the C atoms in the C-B-C chain. Figure 13 depicts the c-plane network, revealing the alternate bridging of the boron icosahedra by N and C atoms. Decreasing the number of B6 octahedra diminishes the role of nitrogen because the C-B-C chains start bridging the icosahedra. On the other hand, in MgB9N the B6 octahedron layers and the B12 icosahedron layers stack alternately and there are no C-B-C chains; thus only N atoms bridge the B12 icosahedra. However, REB9N compounds have not been identified yet. Sc, Y, Ho, Er, Tm and Lu are confirmed to form REB15.5CN-type compounds. Single-crystal structure analysis yielded trigonal symmetry for ScB15.5CN (space group P-3m1 (No. 164) with a = 0.5568(2) and c = 1.0756(2) nm), and the deduced atomic coordinates are summarized in table IVa. REB22C2N was synthesized for Y, Ho, Er, Tm and Lu. The crystal structure, solved for the representative compound YB22C2N, is trigonal with space group R-3m (No. 166); it has six formula units in the unit cell and lattice constants a = b = 0.5623(0) nm and c = 4.4785(3) nm. Atomic coordinates of YB22C2N are summarized in table IVb. Y, Ho, Er, Tm and Lu also form REB28.5C4, which has a trigonal crystal structure with space group R-3m (No. 166). Lattice constants of the representative compound YB28.5C4 are a = b = 0.56457(9) nm and c = 5.68873(13) nm, and there are six formula units in the unit cell. Structure data of YB28.5C4 are summarized in table IVc. RExB12C0.33Si3.0 Initially these were described as ternary RE-B-Si compounds, but later carbon was included to improve the structure description, resulting in a quaternary RE-B-C-Si composition. RExB12C0.33Si3.0 (RE = Y and Gd–Lu) have a unique crystal structure with two units – a cluster of B12 icosahedra and a Si8 ethane-like complex – and one bonding configuration (B12)3≡Si-C≡(B12)3. A representative compound of this group is YxB12C0.33Si3.0 (x = 0.68). It has a trigonal crystal structure with space group R-3m (No. 166) and lattice constants a = b = 1.00841(4) nm, c = 1.64714(5) nm, α = β = 90° and γ = 120°. The crystal has a layered structure. Figure 15 shows a network of boron icosahedra that spreads parallel to the (001) plane, each icosahedron connecting with four neighbors through B1–B1 bonds. The C3 and Si3 site atoms strengthen the network by bridging the boron icosahedra.
In contrast to other boron-rich icosahedral compounds, the boron icosahedra from different layers are not directly bonded. The icosahedra within one layer are linked through Si8 ethane-like clusters with (B12)3≡Si-C≡(B12)3 bonds, as shown in figures 16a and b. There are eight atomic sites in the unit cell: one yttrium site Y, four boron sites B1–B4, one carbon site C3 and three silicon sites Si1–Si3. Atomic coordinates, site occupancy and isotropic displacement factors are listed in table Va; 68% of the Y sites are randomly occupied and the remaining Y sites are vacant. All boron sites and the Si1 and Si2 sites are fully occupied. The C3 and Si3 sites can be occupied by either carbon or silicon atoms (mixed occupancy) with a probability of about 50%. Their separation is only 0.413 Å, and thus either the C3 or the Si3 sites, but not both, are occupied. These sites form Si-C pairs, but not Si-Si or C-C pairs. The distances between the C3 and Si3 sites and the surrounding sites for YxB12C0.33Si3.0 are summarized in table Vb, and the overall crystal structure is shown in figure 14. Salvador et al. reported an isotypic terbium compound, Tb3–xC2Si8(B12)3. Most parts of the crystal structure are the same as those described above; however, its bonding configuration was deduced as (B12)3≡C-C≡(B12)3 instead of (B12)3≡Si-C≡(B12)3. The authors intentionally added carbon to grow single crystals, whereas the previous crystals were accidentally contaminated by carbon during their growth. Thus, a higher carbon concentration was achieved. The existence of both bonding schemes, (B12)3≡Si-C≡(B12)3 and (B12)3≡C-C≡(B12)3, suggests a carbon site occupancy of 50–100%. On the other hand, the (B12)3≡Si-Si≡(B12)3 bonding scheme is unlikely because the Si-Si distance would be too short, suggesting that the minimum carbon occupancy at the site is 50%. Some B atoms may replace C atoms at the C3 site, as previously assigned to the B site. However, carbon occupation is more likely because the site is tetrahedrally coordinated, whereas B occupation of the site would need an extra electron to complete the tetrahedral bonding. Thus, carbon is indispensable for this group of compounds. Scandium compounds Scandium has the smallest atomic and ionic (3+) radii (1.62 and 0.885 Å, respectively) among the rare-earth elements. It forms several icosahedron-based borides which are not found for other rare-earth elements; however, most of them are ternary Sc-B-C compounds. There are many boron-rich phases in the boron-rich corner of the Sc-B-C phase diagram, as shown in figure 17. A slight variation of the composition can produce ScB19, ScB17C0.25, ScB15C0.8 and ScB15C1.6; their crystal structures are unusual for borides and are very different from each other. ScB19+xSiy ScB19+xSiy has a tetragonal crystal structure with space group P41212 (No. 92) or P43212 and lattice constants of a, b = 1.03081(2) and c = 1.42589(3) nm; it is isotypic to the α-AlB12 structure type. There are 28 atomic sites in the unit cell, which are assigned to 3 scandium atoms, 24 boron atoms and one silicon atom. Atomic coordinates, site occupancies and isotropic displacement factors are listed in table VI. The boron framework of ScB19+xSiy is based on one B12 icosahedron and one B22 unit. This unit can be observed in β-tetragonal boron and is a modification of the B20 unit of α-AlB12 (or the B19 unit in early reports). The B20 unit is a twinned icosahedron made from the B13 to B22 sites, with two vacant sites and one B atom (B23) bridging both sides of the unit. The twinned icosahedron is shown in figure 18a.
B23 was treated as an isolated atom in the early reports; it is bonded to each twinned icosahedron through B18 and to another icosahedron through the B5 site. If the twinned icosahedra were independent, without twinning, then B23 would be a bridge site linking three icosahedra. However, because of the twinning, B23 shifts closer to the twinned icosahedra than to the other icosahedron; thus B23 is currently treated as a member of the twinned icosahedra. In ScB19+xSiy, the two B24 sites, which correspond to the vacant sites in the B20 unit, are partially occupied; thus, the unit should be referred to as a B22 cluster, which is occupied by about 20.6 boron atoms. Scandium atoms occupy 3 of the 5 Al sites of α-AlB12; that is, Sc1, Sc2 and Sc3 correspond to the Al4, Al1 and Al2 sites of α-AlB12, respectively. The Al3 and Al5 sites are empty for ScB19+xSiy, and the Si site links two B22 units. This phase also exists without silicon. Figure 19a shows the network of boron icosahedra in the boron framework of ScB19+xSiy. In this network, 4 icosahedra form a supertetrahedron (figure 18b); one of its edges is parallel to the a-axis, and the icosahedra on this edge make up a chain along the a-axis. The opposite edge of the supertetrahedron is parallel to the b-axis and the icosahedra on this edge form a chain along the b-axis. As shown in figure 19, there are wide tunnels surrounded by the icosahedron arrangement along the a- and b-axes. The tunnels are filled by the B22 units, which strongly bond to the surrounding icosahedra; the connection of the B22 units is helical and runs along the c-axis, as shown in figure 19b. Scandium atoms occupy the voids in the boron network, as shown in figure 19c, and the Si atoms bridge the B22 units. ScB17C0.25 A very small amount of carbon is sufficient to stabilize "ScB17C0.25". This compound has a broad composition range, namely ScB16.5+xC0.2+y with x ≤ 2.2 and y ≤ 0.44. ScB17C0.25 has a hexagonal crystal structure with space group P6/mmm (No. 191) and lattice constants a = b = 1.45501(15) nm and c = 0.84543(16) nm. There are 19 atomic sites in the unit cell, which are assigned to one scandium site Sc, 14 boron sites B1–B14 having 100% occupancy, two boron-carbon mixed-occupancy sites B/C15 and B/C16, and two partial-occupancy boron sites B17 and B18. Atomic coordinates, site occupancies and isotropic displacement factors are listed in table VII. Although a very small amount of carbon (less than 2 wt%) plays an important role in the phase stability, carbon does not have its own sites but shares the two interstitial sites B/C15 and B/C16 with boron. There are two inequivalent B12 icosahedra, I1 and I2, which are constructed from the B1–B5 and B8–B12 sites, respectively. A "tube" is another characteristic structure unit of ScB17C0.25. It extends along the c-axis and consists of the B13, B14, B17 and B18 sites, where B13 and B14 form 6-membered rings. The B17 and B18 sites also form 6-membered rings; however, their mutual distances (0.985 Å for B17 and 0.955 Å for B18) are too short for a simultaneous occupation of neighboring sites. Therefore, boron atoms occupy second-neighbor sites, forming a triangle. The occupancies of the B17 and B18 sites should be 50%, but the structure analysis suggests larger values. The crystal structure viewed along the a-axis is shown in figure 20, which suggests that ScB17C0.25 is a layered material. Two layers, respectively constructed from the icosahedra I1 and I2, alternately stack along the c-axis. However, the ScB17C0.25 crystal is not layered.
For example, during arc-melting, ScB17C0.25 needle crystals grow vigorously along the c-axis, which never happens in layered compounds. The crystal structure viewed along the c-axis is shown in figure 21a. The icosahedra I1 and I2 form a ring centered on the "tube" shown in figure 21b, which probably governs the properties of the ScB17C0.25 crystal. The B/C15 and B/C16 mixed-occupancy sites interconnect the rings. A structural similarity can be seen between ScB17C0.25 and BeB3. Figures 22a and b present HRTEM lattice images and electron diffraction patterns taken along the [0001] and [110] crystalline directions, respectively. The HRTEM lattice image of figure 22a reproduces well the (a, b) plane of the crystal structure shown in figure 21a, with clearly visible rings formed by the icosahedra I1 and I2 and centered on the "tube". Figure 22b proves that ScB17C0.25 does not have a layered character, but rather that its c-axis direction is built up from the ring-like and tubular structures. Sc0.83–xB10.0–yC0.17+ySi0.083–z Sc0.83–xB10.0–yC0.17+ySi0.083–z (x = 0.030, y = 0.36 and z = 0.026) has a cubic crystal structure with space group F-43m (No. 216) and lattice constant a = 2.03085(5) nm. This compound was initially identified as ScB15C0.8 (phase I in the Sc-B-C phase diagram of figure 17). A small amount of Si was added during the floating zone crystal growth, and thus this phase is a quaternary compound. Its rare cubic structure has 26 sites in the unit cell: three Sc sites, two Si sites, one C site and 20 B sites; 4 of the 20 B sites are boron-carbon mixed-occupancy sites. Atomic coordinates, site occupancies and isotropic displacement factors are listed in table VIII. In the unit cell, there are three independent icosahedra, I1, I2 and I3, and a B10 polyhedron, which are formed by the B1–B4, B5–B8, B9–B13 and B14–B17 sites, respectively. The B10 polyhedron had not been observed previously; it is shown in figure 23. The icosahedron I2 has a boron-carbon mixed-occupancy site, B,C6, whose occupancy is B/C = 0.58/0.42. The remaining 3 boron-carbon mixed-occupancy sites are bridge sites; the C and Si sites are also bridge sites. The unit cell contains more than 1000 atoms and is built up from large structure units such as two supertetrahedra, T(1) and T(2), and one superoctahedron, O(1). As shown in figure 24a, T(1) consists of 4 icosahedra I(1) which have no direct bonding but are bridged by four B,C20 atoms. These atoms also form a tetrahedron centered on the Si2 sites. The supertetrahedron T(2), which consists of 4 icosahedra I(2), is the same as that shown in figure 18b; its mixed-occupancy B,C6 sites bond directly with each other. The superoctahedron O(1) consists of 6 icosahedra I(3) and the bridge sites B,C18, C1 and Si1; here Si1 and C1 exhibit a tetrahedral arrangement at the center of O(1). The B10 polyhedra also arrange octahedrally, without a central atom, as shown in figure 24c, where the B,C19 atoms bridge the B10 polyhedra to form the octahedral supercluster of B10 polyhedra. Using these large polyhedra, the crystal structure of Sc0.83–xB10.0–yC0.17+ySi0.083–z can be described as shown in figure 25. Owing to the crystal symmetry, the tetrahedral coordination between these superstructure units is again a key factor. The supertetrahedron T(1) lies at the body center and at the edge centers of the unit cell. The superoctahedra O(1) are located at the body center (0.25, 0.25, 0.25) of each quarter of the unit cell. They coordinate tetrahedrally around T(1), forming a giant tetrahedron.
The supertetrahedra T(2) are located at the symmetry-related positions (0.25, 0.25, 0.75); they also form a giant tetrahedron surrounding T(1). The edges of both giant tetrahedra orthogonally cross each other at their centers; at those edge centers, each B10 polyhedron bridges all the superstructure clusters T(1), T(2) and O(1). The superoctahedron built of B10 polyhedra is located at each cubic face center. Scandium atoms reside in the voids of the boron framework. Four Sc1 atoms form a tetrahedral arrangement inside the B10-polyhedron-based superoctahedron. Sc2 atoms sit between the B10-polyhedron-based superoctahedron and the O(1) superoctahedron. Three Sc3 atoms form a triangle and are surrounded by three B10 polyhedra, a supertetrahedron T(1) and a superoctahedron O(1). ScB14–xCx (x = 1.1) and ScB15C1.6 ScB14–xCx has an orthorhombic crystal structure with space group Imma (No. 74) and lattice constants a = 0.56829(2), b = 0.80375(3) and c = 1.00488(4) nm. The crystal structure of ScB14–xCx is isotypic to that of MgAlB14, where Sc occupies the Mg site, the Al site is empty, and the boron bridge site is a B/C mixed-occupancy site with an occupancy of B/C = 0.45/0.55. The occupancy of the Sc site in flux-grown single crystals is 0.964(4), i.e. almost 1. Solid-state powder-reaction growth resulted in a lower Sc-site occupancy and the chemical composition ScB15C1.6. The B-C bonding distance of 0.1796(3) nm between the B/C bridge sites is rather long compared with that of an ordinary B-C covalent bond (0.15–0.16 nm), which suggests weak bonding between the B/C bridge sites. Sc4.5–xB57–y+zC3.5–z Sc4.5–xB57–y+zC3.5–z (x = 0.27, y = 1.1, z = 0.2) has an orthorhombic crystal structure with space group Pbam (No. 55) and lattice constants a = 1.73040(6), b = 1.60738(6) and c = 1.44829(6) nm. This phase is indicated as ScB12.5C0.8 (phase IV) in the phase diagram of figure 17. This rare orthorhombic structure has 78 atomic positions in the unit cell: seven partially occupied Sc sites, four C sites, and 66 B sites, including three partially occupied sites and one B/C mixed-occupancy site. Atomic coordinates, site occupancies and isotropic displacement factors are listed in table IX. The unit cell contains more than 500 atoms. In the crystal structure, there are six structurally independent icosahedra, I1–I6, which are constructed from the B1–B12, B13–B24, B25–B32, B33–B40, B41–B44 and B45–B56 sites, respectively; the B57–B62 sites form a B8 polyhedron. The Sc4.5–xB57–y+zC3.5–z crystal structure is layered, as shown in figure 26. This structure has been described in terms of two kinds of boron icosahedron layers, L1 and L2. L1 consists of the icosahedra I3, I4 and I5 and the C65 "dimer", and L2 consists of the icosahedra I2 and I6. I1 is sandwiched between L1 and L2, and the B8 polyhedron is sandwiched by L2. An alternative description is based on the same B12(B12)12 supericosahedron as in the YB66 structure. In the YB66 crystal structure, the supericosahedra form a 3-dimensional boron framework, as shown in figure 5. In this framework, the neighboring supericosahedra are rotated 90° with respect to each other. In contrast, in Sc4.5–xB57–y+zC3.5–z the supericosahedra form a 2-dimensional network in which the 90° rotation relation is broken because of the orthorhombic symmetry. The planar projections of the supericosahedron connections in Sc4.5–xB57–y+zC3.5–z and YB66 are shown in figures 27a and b, respectively.
In the YB66 crystal structure, the neighboring 2-dimensional supericosahedron connections are out of phase with respect to the rotational relation of the supericosahedra. This allows 3-dimensional stacking of the 2-dimensional supericosahedron connections while maintaining the cubic symmetry. The B80 boron cluster occupies the large space between four supericosahedra, as described in the REB66 section. On the other hand, the 2-dimensional supericosahedron networks in the Sc4.5–xB57–y+zC3.5–z crystal structure stack in phase along the z-axis. Instead of the B80 cluster, a pair of I2 icosahedra fills the open space within the supericosahedron network, as shown in figure 28, where the icosahedron I2 is colored in yellow. All Sc atoms except for Sc3 reside in the large spaces between the supericosahedron networks, and the Sc3 atom occupies a void in the network, as shown in figure 26. Because of the small size of the Sc atom, the occupancies of the Sc1–Sc5 sites exceed 95%, while those of the Sc6 and Sc7 sites are approximately 90% and 61%, respectively (see table IX). Sc3.67–xB41.4–y–zC0.67+zSi0.33–w Sc3.67–xB41.4–y–zC0.67+zSi0.33–w (x = 0.52, y = 1.42, z = 1.17 and w = 0.02) has a hexagonal crystal structure with space group P-6m2 (No. 187) and lattice constants a = b = 1.43055(8) and c = 2.37477(13) nm. Single crystals of this compound were obtained as an intergrowth phase in a float-zoned single crystal of Sc0.83–xB10.0–yC0.17+ySi0.083–z. This phase is not described in the phase diagram of figure 17 because it is a quaternary compound. Its hexagonal structure is rare and has 79 atomic positions in the unit cell: eight partially occupied Sc sites, 62 B sites, two C sites, two Si sites and six B/C sites. Six of the B sites and one of the two Si sites have partial occupancies. The associated atomic coordinates, site occupancies and isotropic displacement factors are listed in table X. There are seven structurally independent icosahedra, I1–I7, which are formed by the B1–B8, B9–B12, B13–B20, B/C21–B24, B/C25–B29, B30–B37 and B/C38–B42 sites, respectively; the B43–B46 sites form the B9 polyhedron and the B47–B53 sites construct the B10 polyhedron. The B54–B59 sites form the irregularly shaped B16 polyhedron, in which only 10.7 boron atoms are available because most of the sites are too close to each other to be occupied simultaneously. Ten bridging sites, C60–B69, interconnect the polyhedral units or other bridging sites to form a 3-dimensional boron framework structure. One description of the crystal structure uses three pillar-like units that extend along the c-axis; however, this results in undesired overlaps between those three pillar-like units. An alternative is to define two pillar-like structure units. Figure 29 shows the boron framework structure of Sc3.67–xB41.4–y–zC0.67+zSi0.33–w viewed along the c-axis, where the pillar-like units P1 and P2 are colored in dark green and light green, respectively, and are bridged by the yellow icosahedra I4 and I7. These pillar-like units P1 and P2 are shown in figures 30a and b, respectively. P1 consists of the icosahedra I1 and I3, the irregularly shaped B16 polyhedron and other bridge-site atoms, where two superoctahedra can be seen above and below the B16 polyhedron. Each superoctahedron is formed by three icosahedra I1 and three icosahedra I3, and is the same as the superoctahedron O(1) shown in figure 24a. The P2 unit consists of the icosahedra I2, I5 and I6, the B10 polyhedron and other bridge-site atoms. Eight Sc sites, with occupancies between 0.49 (Sc8) and 0.98 (Sc1), are spread over the boron framework.
As described above, this hexagonal phase originates from a cubic phase, and thus one may expect similar structural elements in these phases. There is an obvious relation between the hexagonal ab-plane and the cubic (111) plane. Figures 31a and b show the hexagonal (001) and the cubic (111) planes, respectively. The two network structures are almost the same, which allows intergrowth of the hexagonal phase in the cubic phase. Applications The diversity of the crystal structures of rare-earth borides results in unusual physical properties and potential applications in thermopower generation. The thermal conductivity of boron-icosahedra-based compounds is low because of their complex crystal structure, a property favored in thermoelectric materials. On the other hand, these compounds exhibit very low (variable-range-hopping-type) p-type electrical conductivity. Increasing the conductivity is a key issue for thermoelectric applications of these borides. YB66 is used as a soft-X-ray monochromator for dispersing 1–2 keV synchrotron radiation at some synchrotron radiation facilities. Contrary to thermoelectric applications, high thermal conductivity is desirable for synchrotron radiation monochromators. YB66 exhibits low, amorphous-like thermal conductivity. However, transition-metal doping approximately doubles the thermal conductivity in YNb0.3B62 as compared to undoped YB66. Notes References Borides Crystallography
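The trade-off described in the applications paragraph can be summarized with the standard thermoelectric figure of merit ZT = S²σT/κ, which rewards low thermal conductivity κ but is throttled by low electrical conductivity σ. A small illustrative Python sketch; the numerical values are hypothetical placeholders, not measured data for any boride:

# Thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa
def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

# Hypothetical numbers chosen only to show the sensitivity to sigma:
S, kappa, T = 200e-6, 2.0, 1000.0      # V/K, W/(m K), K
for sigma in (1e2, 1e4):               # S/m: hopping-like vs. improved conductivity
    print(f"sigma = {sigma:.0e} S/m -> ZT = {figure_of_merit(S, sigma, kappa, T):.3f}")
# Raising sigma by two orders of magnitude raises ZT by the same factor,
# which is why increasing the electrical conductivity is the key issue.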
Crystal structure of boron-rich metal borides
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
10,836
[ "Crystallography", "Condensed matter physics", "Materials science" ]
45,453,779
https://en.wikipedia.org/wiki/Bargellini%20reaction
The Bargellini reaction is a chemical reaction discovered in 1906 by the Italian chemist Guido Bargellini. The original reaction was a mixture of the reagents phenol, chloroform, and acetone in the presence of a sodium hydroxide solution. Prior to Bargellini's research, the product attributed to this multi-component reaction (MCR) had been described as a phenol derivative in the chemistry texts of the time. However, Bargellini demonstrated that a carboxylic acid derivative was actually the correct structure. Later, organic chemists used the reaction as a general method of organic synthesis for highly hindered or bulky morpholinones or piperazinones from ketones (particularly acetone) and either β-amino alcohols or diamines. History Guido Bargellini was a disciple of Hermann Emil Louis Fischer, the German chemist and Nobel laureate famous for the eponymous Fischer esterification reaction. Bargellini did his post-doctoral lab research in Fischer's laboratory. He spent most of his career as a chemist at the University of Rome. His interest in coumarins, a recently isolated class of compounds at the time, led Bargellini to experiment with multi-component reactions (MCRs) between phenols, chloroform, and acetone in a solution of sodium hydroxide. He discovered that the product was a carboxylic acid rather than the phenol previously reported. In 1894, Link, a German chemist, had published the reaction in Chemisches Zentralblatt and patented it. However, he wrote that the product was either a ketone or a phenol; specifically, he claimed it was a "hydroxyphenyl hydroxyisopropyl keton" or "hydroxyisobutyrylphenol." When Bargellini conducted the same experiment and began testing the product, its chemical properties could not be those of a ketone or a phenol. Instead, he was certain it was a carboxylic acid, specifically an "α-phenoxyisobutyric acid." Link himself performed experiments in 1900 that proved his original claim was erroneous, yet the assignment was never corrected. Since Bargellini correctly identified the product, its structure and its properties, and then published his results in the Gazzetta Chimica Italiana, the reaction was named after him. The reaction's utility in organic synthesis, and later in the pharmaceutical industry, has also made it historically significant. Since the reaction is relatively easy to perform (the reagents being readily available), many other almost identical reactions were named in the decades after. This discovery led the way for new transformations, the presently established Bargellini-type reactions, which have been of great importance, specifically in the pharmaceutical industry. It also paved the way for later name reactions, like the Jocic–Reeve reaction and the Corey–Link reaction. The Jocic–Reeve and Corey–Link reactions are almost always featured together with the Bargellini reaction in an MCR. The reaction itself has been modified several times to increase efficiency or produce a modified product. The adaptability of the reaction is one of its greatest strengths. No decade has gone by without an important addition or twist of the reaction taking place. In the author's own words, "The first phase in the reaction is probably the formation of acetonechloroform--(which may, indeed, be used in place of the chloroform), this being then acted on by sodium hydroxide in presence of acetone, yielding α-hydroxyisobutyric acid, which, with the phenol, gives α-phenoxyisobutyric acid.
The chloroform may also be replaced by bromoform, bromal, chloral, or carbon tetrachloride or tetrabromide." Most textbooks describe the reaction as a way to make morpholinones or piperazinones, but its use extends much further than that. One hundred years later, the Bargellini reaction itself was used for the condensation of coumarins, an ironic twist in the history of the reaction, since coumarins were Bargellini's primary compounds of interest and his own name reaction produced them. Reactions and reaction mechanisms The original Bargellini reaction (1906): Reaction mechanism for the original Bargellini reaction (1906): The present-day Bargellini reaction is used for the synthesis of hindered morpholinones or piperazinones from ketones (primarily acetone) and 2-amino-2-methylpropan-1-ol (a β-amino alcohol) or 1,2-diaminopropanes (diamines). The solvent used is dichloromethane (DCM), also known as methylene chloride, with a benzyltriethylammonium chloride catalyst. The solvent and catalyst are frequently changed when using different reagents. Diamines tend to give higher product yields than β-amino alcohols, as shown in the two possible scenarios below: Reaction mechanism for the Bargellini reaction: The reaction mechanism proceeds when a sterically accessible ketone, usually acetone, is added to a solution of chloroform (trichloromethane) under strongly basic conditions, creating a trichloromethide anion by deprotonation. This forms the corresponding trichloromethyl carbinol or alkoxide, in a manner similar to the Grignard reaction. This trihalogenated product then undergoes base-induced intramolecular etherification to give a gem-dichloro epoxide. The amine can then attack the oxirane, with formation of a tertiary carbocation-like center and elimination of one chlorine atom in an SN1-type nucleophilic substitution. The nucleophilic intermediate is highly reactive and regioselective at the α-carbon, resulting in the formation of an α-substituted carboxylic acid chloride. The final step occurs by nucleophilic acyl substitution and solvolysis, where the amino or hydroxyl group attacks the acid chloride, forming the corresponding heterocycle. The end product is a carboxylic acid derivative (primarily lactones and amides). References External links https://pubchem.ncbi.nlm.nih.gov/ http://www.synarchive.com/named-reactions/Bargellini_Reaction Name reactions
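Since the overall transformation consumes one equivalent each of phenol, chloroform and acetone, the stoichiometry can be sanity-checked with a simple atom balance. A small Python sketch follows; the four NaOH equivalents and the byproduct split (NaCl and water, with the acid isolated as its sodium salt) are assumptions chosen to make the balance work, not figures taken from Bargellini's paper:

from collections import Counter

# Atom counts per molecule (element -> count).
species = {
    "phenol":     {"C": 6, "H": 6, "O": 1},
    "chloroform": {"C": 1, "H": 1, "Cl": 3},
    "acetone":    {"C": 3, "H": 6, "O": 1},
    "NaOH":       {"Na": 1, "O": 1, "H": 1},
    "Na alpha-phenoxyisobutyrate": {"C": 10, "H": 11, "O": 3, "Na": 1},
    "NaCl":       {"Na": 1, "Cl": 1},
    "H2O":        {"H": 2, "O": 1},
}

def side(*terms):
    total = Counter()
    for coeff, name in terms:
        for element, n in species[name].items():
            total[element] += coeff * n
    return total

lhs = side((1, "phenol"), (1, "chloroform"), (1, "acetone"), (4, "NaOH"))
rhs = side((1, "Na alpha-phenoxyisobutyrate"), (3, "NaCl"), (3, "H2O"))
print(lhs == rhs, dict(lhs))  # True -> the proposed stoichiometry balances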
Bargellini reaction
[ "Chemistry" ]
1,362
[ "Coupling reactions", "Name reactions", "Organic reactions" ]
45,456,073
https://en.wikipedia.org/wiki/Squirmer
The squirmer is a model for a spherical microswimmer swimming in Stokes flow. The squirmer model was introduced by James Lighthill in 1952 and refined and used to model Paramecium by John Blake in 1971. Blake used the squirmer model to describe the flow generated by a carpet of beating short filaments called cilia on the surface of Paramecium. Today, the squirmer is a standard model for the study of self-propelled particles, such as Janus particles, in Stokes flow. Velocity field in particle frame Here we give the flow field of a squirmer in the case of a non-deformable axisymmetric spherical squirmer of radius $R$. These expressions are given in a spherical coordinate system $(r,\theta)$: $u_r(r,\theta)=\frac{2}{3}\left(\frac{R^3}{r^3}-1\right)B_1P_1(\cos\theta)+\sum_{n=2}^{\infty}\left(\frac{R^{n+2}}{r^{n+2}}-\frac{R^n}{r^n}\right)B_nP_n(\cos\theta)$ and $u_\theta(r,\theta)=\frac{2}{3}\left(\frac{R^3}{2r^3}+1\right)B_1V_1(\cos\theta)+\sum_{n=2}^{\infty}\left(\frac{n}{2}\frac{R^{n+2}}{r^{n+2}}+\left(1-\frac{n}{2}\right)\frac{R^n}{r^n}\right)B_nV_n(\cos\theta)$. Here $B_n$ are constant coefficients, $P_n$ are Legendre polynomials, and $V_n(\cos\theta)=\frac{2}{n(n+1)}\sin\theta\,P_n'(\cos\theta)$. One finds $V_1(\cos\theta)=\sin\theta$. The expressions above are in the frame of the moving particle. At the interface one finds $u_r(R,\theta)=0$ and $u_\theta(R,\theta)=\sum_{n=1}^{\infty}B_nV_n(\cos\theta)$. Swimming speed and lab frame By using the Lorentz reciprocal theorem, one finds the velocity vector of the particle $\mathbf{U}=\frac{2}{3}B_1\mathbf{e}_z$. The flow in a fixed lab frame is given by $\mathbf{u}^L=\mathbf{u}+\mathbf{U}$: $u_r^L(r,\theta)=\frac{2}{3}\frac{R^3}{r^3}B_1P_1(\cos\theta)+\sum_{n=2}^{\infty}\left(\frac{R^{n+2}}{r^{n+2}}-\frac{R^n}{r^n}\right)B_nP_n(\cos\theta)$ and $u_\theta^L(r,\theta)=\frac{1}{3}\frac{R^3}{r^3}B_1V_1(\cos\theta)+\sum_{n=2}^{\infty}\left(\frac{n}{2}\frac{R^{n+2}}{r^{n+2}}+\left(1-\frac{n}{2}\right)\frac{R^n}{r^n}\right)B_nV_n(\cos\theta)$, with swimming speed $U=\frac{2}{3}|B_1|$. Note that $\lim_{r\to\infty}\mathbf{u}^L=0$ and $u_r^L(R,\theta)=U\cos\theta$. Structure of the flow and squirmer parameter The series above are often truncated at $n=2$ in the study of far-field flow, $r\gg R$. Within that approximation, $u_\theta(R,\theta)=B_1\left(\sin\theta+\frac{\beta}{2}\sin 2\theta\right)$, with squirmer parameter $\beta=B_2/|B_1|$. The first mode ($B_1$) characterizes a hydrodynamic source dipole with decay $\propto 1/r^3$ (and with that the swimming speed $U$). The second mode ($B_2$) corresponds to a hydrodynamic stresslet or force dipole with decay $\propto 1/r^2$. Thus, $\beta$ gives the ratio of both contributions and the direction of the force dipole. $\beta$ is used to categorize microswimmers into pushers ($\beta<0$), pullers ($\beta>0$) and neutral swimmers ($\beta=0$). The above figures show the velocity field in the lab frame and in the particle-fixed frame. The hydrodynamic dipole and quadrupole fields of the squirmer model result from surface stresses, due to beating cilia on microorganisms, or chemical reactions or thermal non-equilibrium on Janus particles. The squirmer is force-free. By contrast, the velocity field of a passive particle results from an external force; its far field corresponds to a "stokeslet" or hydrodynamic monopole. A force-free passive particle doesn't move and doesn't create any flow field. See also Protist locomotion References Fluid dynamics
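To make the truncated model concrete, the following Python sketch evaluates the n ≤ 2 lab-frame field written above and classifies the swimmer by the sign of β; the values of B1 and B2 are arbitrary illustrative choices, not fitted to any organism:

import numpy as np

R = 1.0               # squirmer radius
B1, B2 = 1.0, -2.0    # illustrative mode amplitudes
beta = B2 / abs(B1)   # squirmer parameter

def lab_frame_velocity(r, theta):
    """Lab-frame (u_r, u_theta) of the squirmer model truncated at n = 2."""
    P1, P2 = np.cos(theta), 0.5 * (3 * np.cos(theta)**2 - 1)
    V1, V2 = np.sin(theta), 0.5 * np.sin(2 * theta)
    u_r = (2/3) * (R**3 / r**3) * B1 * P1 \
          + (R**4 / r**4 - R**2 / r**2) * B2 * P2
    u_t = (1/3) * (R**3 / r**3) * B1 * V1 \
          + (R**4 / r**4) * B2 * V2          # for n = 2: n/2 = 1, (1 - n/2) = 0
    return u_r, u_t

U = (2/3) * B1   # swimming speed
kind = "pusher" if beta < 0 else "puller" if beta > 0 else "neutral"
print(f"U = {U:.3f}, beta = {beta:.1f} -> {kind}")
print(lab_frame_velocity(5.0, np.pi / 3))  # far field: the 1/r^2 stresslet dominates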
Squirmer
[ "Chemistry", "Engineering" ]
498
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
42,177,410
https://en.wikipedia.org/wiki/DNA%20base%20flipping
DNA base flipping, or nucleotide flipping, is a mechanism in which a single nucleotide base, or nucleobase, is rotated outside the nucleic acid double helix. This occurs when a nucleic acid-processing enzyme needs access to the base to perform work on it, such as its excision for replacement with another base during DNA repair. It was first observed in 1994 using X-ray crystallography in a methyltransferase enzyme catalyzing methylation of a cytosine base in DNA. Since then, it has been shown to be used by different enzymes in many biological processes such as DNA methylation, various DNA repair mechanisms, and DNA replication. It can also occur in RNA double helices or in the DNA:RNA intermediates formed during RNA transcription. DNA base flipping occurs by breaking the hydrogen bonds between the bases and unstacking the base from its neighbors. This could occur through an active process, where an enzyme binds to the DNA and then facilitates rotation of the base, or a passive process, where the base rotates out spontaneously, and this state is recognized and bound by an enzyme. It can be detected using X-ray crystallography, NMR spectroscopy, fluorescence spectroscopy, or hybridization probes. Discovery Base flipping was first observed in 1994 when researchers Klimasauskas, Kumar, Roberts, and Cheng used X-ray crystallography to view an intermediate step in the chemical reaction of a methyltransferase bound to DNA. The methyltransferase they used was the C5-cytosine methyltransferase from Haemophilus haemolyticus (M. HhaI). This enzyme recognizes a specific sequence of the DNA (5'-GCGC-3') and methylates the first cytosine base of the sequence at its C5 position. Upon crystallization of the M. HhaI-DNA complex, they saw that the target cytosine base was rotated completely out of the double helix and was positioned in the active site of M. HhaI. It was held in place by numerous interactions between M. HhaI and the DNA. The authors theorized that base flipping was a mechanism used by many other enzymes, such as helicases, recombination enzymes, RNA polymerases, DNA polymerases, and Type II topoisomerases. Much research has been done in the years subsequent to this discovery, and base flipping has been found to be a mechanism used in many of the biological processes the authors suggested. Mechanism DNA nucleotides are held together by hydrogen bonds, which are relatively weak and can be easily broken. Base flipping occurs on a millisecond timescale by breaking the hydrogen bonds between bases and unstacking the base from its neighbors. The base is rotated out of the double helix by 180 degrees, typically via the major groove, and into the active site of an enzyme. This opening leads to small conformational changes in the DNA backbone, which are quickly stabilized by increased enzyme-DNA interactions. Studies looking at the free-energy profiles of base flipping have shown that the free-energy barrier to flipping can be lowered by 17 kcal/mol for M.HhaI in the closed conformation. There are two mechanisms of DNA base flipping: active and passive. In the active mechanism, an enzyme binds to the DNA and then actively rotates the base, while in the passive mechanism a damaged base rotates out spontaneously first, and is then recognized and bound by the enzyme. Research has demonstrated both mechanisms: uracil-DNA glycosylase follows the passive mechanism and Tn10 transposase follows the active mechanism.
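To give a feel for the 17 kcal/mol figure quoted above: if base flipping is treated as a simple activated, Arrhenius-like process (a simplifying assumption for illustration, not a claim from the cited study), lowering the barrier by ΔΔG accelerates the rate by roughly exp(ΔΔG/RT). A short Python estimate:

import math

R_gas = 1.987e-3   # gas constant in kcal/(mol K)
T = 298.0          # temperature in K
ddG = 17.0         # barrier reduction reported for M.HhaI, kcal/mol

speedup = math.exp(ddG / (R_gas * T))
print(f"exp(ddG/RT) ~ {speedup:.1e}")   # ~3e12-fold rate enhancement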
Furthermore, studies have shown that DNA base flipping is used by many different enzymes in a variety of biological processes such as DNA methylation, various DNA repair mechanisms, RNA transcription and DNA replication. Biological processes DNA modification and repair DNA can have mutations that cause a base in the DNA strand to be damaged. To ensure the genetic integrity of the DNA, enzymes need to repair any damage. There are many types of DNA repair. Base excision repair utilizes base flipping to flip the damaged base out of the double helix and into the specificity pocket of a glycosylase, which hydrolyzes the glycosidic bond and removes the base. DNA glycosylases interact with DNA, flipping bases to determine a mismatch. An example of base excision repair occurs when a cytosine base is deaminated and becomes a uracil base. This causes a U:G mispair, which is detected by uracil DNA glycosylase. The uracil base is flipped out into the glycosylase active pocket, where it is removed from the DNA strand. Base flipping is used to repair mutations such as 8-oxoguanine (oxoG) and thymine dimers created by UV radiation. Replication, transcription and recombination DNA replication and RNA transcription both make use of base flipping. DNA polymerase is an enzyme that carries out replication. It can be thought of as a hand that grips the single-stranded DNA template. As the template passes across the palm region of the polymerase, the template bases are flipped out of the helix and away from the dNTP binding site. During transcription, RNA polymerase catalyzes RNA synthesis. During the initiation phase, two bases in the -10 element flip out from the helix and into two pockets in RNA polymerase. These new interactions stabilize the -10 element and promote separation, or melting, of the DNA strands. Base flipping occurs during the latter stages of recombination. RecA is a protein that promotes strand invasion during homologous recombination. Base flipping has been proposed as the mechanism by which RecA enables a single strand to recognize homology in duplex DNA. Other studies indicate that it is also involved in V(D)J recombination. DNA methylation DNA methylation is the process in which a methyl group is added to either a cytosine or an adenine. This process causes the activation or inactivation of gene expression, thereby resulting in gene regulation in eukaryotic cells. The DNA methylation process is also known to be involved in the formation of certain types of cancer. In order for this chemical modification to occur, it is necessary that the target base flip out of the DNA double helix to allow the methyltransferases to catalyze the reaction. Target recognition by restriction endonucleases Restriction endonucleases, also known as restriction enzymes, are enzymes that cleave the sugar-phosphate backbone of DNA at specific nucleotide sequences that are usually four to six nucleotides long. Studies performed by Horton and colleagues have shown that the mechanism by which these enzymes cleave DNA involves base flipping, as well as bending of the DNA and expansion of the minor groove. In 2006, Horton and colleagues presented X-ray crystallographic evidence showing that the restriction endonuclease HinP1I utilizes base flipping in order to recognize its target sequence. This enzyme is known to cleave DNA at the palindromic tetranucleotide sequence G↓CGC.
Experimental approaches for detection X-ray crystallography X-ray crystallography is a technique that measures the angles and intensities of X-ray beams diffracted by crystalline atoms in order to determine the atomic and molecular structure of the crystal of interest. Crystallographers are then able to produce a three-dimensional picture in which the positions of the atoms, the chemical bonds, and other important characteristics can be determined. Klimasauskas and colleagues used this technique to observe the first base-flipping phenomenon; their experimental procedure involved several steps: purification, crystallization, data collection, and structure determination and refinement. During purification, Haemophilus haemolyticus methyltransferase was overexpressed and purified using a high-salt back-extraction step to selectively solubilize M.HhaI, followed by fast protein liquid chromatography (FPLC), as done previously by Kumar and colleagues. The authors utilized a Mono-Q anion exchange column to remove the small quantity of proteinaceous materials and unwanted DNA prior to the crystallization step. Once M.HhaI was successfully purified, crystals were grown by the hanging-drop vapor diffusion technique at a temperature of 16 °C from a solution containing the complex. The authors were then able to collect the X-ray data according to a technique used by Cheng and colleagues in 1993. This technique involved the measurement of the diffraction intensities on a FAST detector, where the exposure times for a 0.1° rotation were 5 or 10 seconds. For the structure determination and refinement, Klimasauskas and colleagues used molecular replacement with the refined apo structure described by Cheng and colleagues in 1993 as the search model, and the programs X-PLOR, MERLOT, and TRNSUM were used to solve the rotation and translation functions. This part of the study involves the use of a variety of software and computer algorithms to solve the structure and characteristics of the crystal of interest. NMR spectroscopy NMR spectroscopy is a technique that has been used over the years to study important dynamic aspects of base flipping. This technique allows researchers to determine the physical and chemical properties of atoms and other molecules by utilizing the magnetic properties of atomic nuclei. In addition, NMR can provide a variety of information including structure, reaction states, the chemical environment of the molecules, and dynamics. In work on enzyme-induced base flipping by the HhaI methyltransferase, researchers utilized NMR spectroscopy as follows: two 5-fluorocytosine residues were incorporated, at the target and the reference positions, into the DNA substrate so that 19F chemical shift analysis could be performed. Once the 19F chemical shifts were evaluated, it was concluded that the DNA complexes existed with multiple forms of the target 5-fluorocytosine along the base-flipping pathway. Fluorescence spectroscopy Fluorescence spectroscopy is a technique that is used to assay a sample using a fluorescent probe. DNA nucleotides themselves are not good candidates for this technique because they do not readily re-emit light upon light excitation. A fluorescent marker is needed to detect base flipping. 2-Aminopurine is a base that is structurally similar to adenine, but is very fluorescent when flipped out from the DNA duplex.
It is commonly used to detect base flipping; it has an excitation at 305–320 nm and an emission at 370 nm, so that it is well separated from the excitations of proteins and DNA. Other fluorescent probes used to study DNA base flipping are 6MAP (4-amino-6-methyl-7(8H)-pteridone) and Pyrrolo-C (3-[β-D-2-ribofuranosyl]-6-methylpyrrolo[2,3-d]pyrimidin-2(3H)-one). Time-resolved fluorescence spectroscopy is also employed to provide a more detailed picture of the extent of base flipping as well as of the conformational dynamics occurring during base flipping. Hybridization probing Hybridization probes can be used to detect base flipping. This technique uses a molecule with a sequence complementary to the sequence to be detected, such that it binds to a single strand of the DNA or RNA. Several hybridization probes have been used to detect base flipping. Potassium permanganate is used to detect thymine residues that have been flipped out by cytosine-C5 and adenine-N6 methyltransferases. Chloroacetaldehyde is used to detect cytosine residues flipped out by the HhaI DNA cytosine-5 methyltransferase (M. HhaI). See also DNA repair Base excision repair DNA replication RNA transcription DNA methylation DNA methyltransferase Genetic recombination Homologous recombination DNA Epigenetics Epigenomics References Molecular biology Base flipping
DNA base flipping
[ "Chemistry", "Biology" ]
2,499
[ "Biochemistry", "Molecular biology" ]
42,181,888
https://en.wikipedia.org/wiki/Index%20of%20home%20automation%20articles
This is a list of home automation topics on Wikipedia. Home automation is the residential extension of building automation. It is automation of the home, housework or household activity. Home automation may include centralized control of lighting, HVAC (heating, ventilation and air conditioning), appliances, security locks of gates and doors and other systems, to provide improved convenience, comfort, energy efficiency and security. Home automation topics 0-9 6LoWPAN A Alarm.com, Inc. AlertMe AllJoyn Arduino B Belkin Wemo Bluetooth LE (BLE) Brillo (Project Brillo) Bticino Bus SCS Building automation C Connected Device C-Bus (protocol) CHAIN (industry standard) Clipsal C-Bus Comparison of domestic robots Control4 D Daintree Networks Dishwasher Domestic robot Dynalite E ESP32 ESP8266 Ember (company) European Home Systems Protocol Extron Electronics G Generalized Automation Language GreenPeak Technologies H Home Assistant (home automation software) Home automation Home automation for the elderly and disabled HomeLink Wireless Control System HomeOS HomeRF Honeywell, Inc. I Indoor positioning system Internet of Things Insteon Intelligent Home Control IoBridge iSmartAlarm IEEE 802.15.4 L Lagotek Lawn mower Lighting control system LinuxMCE LonWorks List of home automation topics List of home automation software List of network buses M Marata Vision Matter (standard) MCU (Micro Controller Unit) MiWi Mobile device Mobile Internet device N Nest Labs NodeMCU O OpenHAN Openpicus OpenTherm R Responsive architecture Robotic lawn mower Rotimatic S SM4All Smart device Smart environment Smart grid Smartlabs Smart lock Stardraw T Timer Thread (network protocol) U Universal Home API Universal powerline bus V Vacuum cleaner W Web of Things Washing machine Window blind X X10 (industry standard) X10 Firecracker XAP Home Automation protocol XPL Protocol Z Z-Wave Zigbee See also Home automation List of home automation topics List of home automation software List of home appliances Building automation Connected Devices Robotics References Home automation Building engineering
Index of home automation articles
[ "Technology", "Engineering" ]
430
[ "Home automation", "Civil engineering", "Building engineering", "Architecture" ]
42,181,961
https://en.wikipedia.org/wiki/National%20Institute%20of%20Radiological%20Sciences
The National Institute of Radiological Sciences (NIRS) is a radiation research institute in Japan. The NIRS was established in 1957 as Japan's only institute of radiology. The NIRS maintains various ion accelerators in order to study the effects of radiation on the human body and the medical uses of radiation. History The National Institute of Radiological Sciences hospital, established in 1961, is a research hospital with a basic focus on radiation therapy. In 1993, the HIMAC (Heavy Ion Medical Accelerator in Chiba) of the NIRS was launched, and in 1997 the Research Center for Charged Particle Therapy was opened as one of the leading medical centers using carbon-ion therapy. On April 1, 2016, the Japan Atomic Energy Agency (JAEA) transferred some of its laboratories to the NIRS, and the NIRS body was renamed the National Institutes for Quantum and Radiological Science and Technology (QST), which includes the existing laboratories of the NIRS; the NIRS is currently a radiological research division of the QST. Organizational structure Auditing and Compliance Office (Headquarters) Department of Planning and Management Department of General Affairs Research, Development and Support Center Research Center for Charged Particle Therapy Hospital (QST [Quantum and Radiological Science and Technology] Hospital; formerly NIRS Hospital) Molecular Imaging Center Research Center for Radiation Protection Research Center for Radiation Emergency Medicine Radiation Emergency Medical Assistance Team Center for Human Resources Development International Open Laboratory Medical Exposure Research Project Fukushima Project Headquarters Notes External links NIRS official website (in English and Japanese) Independent Administrative Institutions of Japan 1957 establishments in Japan International research institutes Nuclear research institutes Medical research institutes in Japan Biological research institutes Nuclear medicine organizations
National Institute of Radiological Sciences
[ "Engineering" ]
326
[ "Nuclear research institutes", "Nuclear medicine organizations", "Nuclear organizations" ]
42,182,392
https://en.wikipedia.org/wiki/Titanocene%20Y
Titanocene Y, also known as bis[(p-methoxybenzyl)cyclopentadienyl]titanium(IV) dichloride or dichloridobis(η5-(p-methoxybenzyl)cyclopentadienyl)titanium, is an organotitanium compound that has been investigated for use as an anticancer drug. Discovery Titanocene dichloride has been known as a potential anticancer drug since the late 1970s. After initial clinical trials against breast and renal-cell cancer were performed with this compound, the search for improved derivatives started. In particular, lipophilic titanocene dichloride derivatives derived from fulvenes were synthesised in structural diversity, and this led to the development of bis[(p-methoxybenzyl)cyclopentadienyl]titanium(IV) dichloride, which became better known in the literature under its trivial name of Titanocene Y. Mechanism of action Titanocene Y is a cytotoxic, apoptosis-inducing and anti-angiogenic drug candidate targeting renal-cell cancer and other solid tumors. The compound is transported via serum albumin selectively into cancer cells and targets their DNA by coordinating strongly to phosphate groups. Additionally, Titanocene Y is able to induce apoptosis via the FAS receptor pathway. Encouragingly, Titanocene Y breaks platinum resistance in human colon and human lung cancer cells, which might make it attractive as a cytotoxic component of future 2nd- or 3rd-line cancer treatments. Animal testing Titanocene Y was tested extensively in vivo; it showed promising results against xenografted human epidermoid carcinoma and prostate cancer, while the best results were reached against breast and renal-cell cancer. Titanocene Y can be given to mice in high dosages, and it shows generally mild toxicity in the form of diarrhea. Titanocene Y is not patent protected and would therefore benefit from non-commercial sponsoring to develop it into a cytotoxic drug candidate for the treatment of advanced renal-cell cancer – an area in need of better therapies. References External links Titanocenes Chloro complexes Titanium(IV) compounds Half sandwich compounds
Titanocene Y
[ "Chemistry" ]
467
[ "Organometallic chemistry", "Half sandwich compounds" ]
42,182,591
https://en.wikipedia.org/wiki/C18H24N2O3
The molecular formula C18H24N2O3 (molar mass: 316.401 g/mol) may refer to: Atizoram RTI-160 Molecular formulas
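The stated molar mass can be verified directly from standard atomic weights; a short Python check (atomic weights are the conventional IUPAC values):

# Verify the molar mass of C18H24N2O3 from standard atomic weights (g/mol).
atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
formula = {"C": 18, "H": 24, "N": 2, "O": 3}
molar_mass = sum(atomic_weight[el] * n for el, n in formula.items())
print(f"{molar_mass:.3f} g/mol")  # 316.401, matching the value quoted above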
C18H24N2O3
[ "Physics", "Chemistry" ]
56
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
42,183,700
https://en.wikipedia.org/wiki/Strontium%20barium%20niobate
Strontium barium niobate is the chemical compound SrxBa1−xNb2O6 for 0.32 ≤ x ≤ 0.82. Strontium barium niobate is a ferroelectric material commonly used in single-crystal form in electro-optics, acousto-optics, and photorefractive non-linear optics for its photorefractive properties. It is one of the few tetragonal tungsten bronze compounds without volatile elements, making it a useful system for probing structure-property relations. Strontium barium niobate is a normal ferroelectric for barium-rich compositions and becomes a relaxor ferroelectric with increasing strontium content. This has been attributed to positional disorder of the A-site cations alongside incommensurate oxygen octahedral tilting. Strontium barium niobate is also one of numerous ceramic materials that are known to exhibit abnormal grain growth, in which certain grains grow very large within a matrix of finer equiaxed grains. This abnormal grain growth (AGG) has significant consequences for the dielectric and electronic performance of strontium barium niobate. References Strontium compounds Barium compounds Niobates
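The solid-solution formula translates directly into a composition-dependent formula weight. A tiny Python sketch evaluating SrxBa1−xNb2O6 across the stated stability range; the atomic weights are standard values and the sample x values are arbitrary points inside the 0.32–0.82 window:

# Formula weight of SrxBa(1-x)Nb2O6 as a function of composition x.
W = {"Sr": 87.62, "Ba": 137.327, "Nb": 92.906, "O": 15.999}

def formula_weight(x):
    assert 0.32 <= x <= 0.82, "outside the reported composition range"
    return x * W["Sr"] + (1 - x) * W["Ba"] + 2 * W["Nb"] + 6 * W["O"]

for x in (0.32, 0.61, 0.82):
    print(f"x = {x:.2f}: {formula_weight(x):.1f} g/mol")
# The heavier Ba is replaced by the lighter Sr, so the weight falls as x grows.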
Strontium barium niobate
[ "Chemistry" ]
263
[ "Inorganic compounds", "Inorganic compound stubs" ]
40,754,509
https://en.wikipedia.org/wiki/Endocannabinoid%20transporter
The endocannabinoid transporters (eCBTs) are transport proteins for the endocannabinoids. Most neurotransmitters are water-soluble and require transmembrane proteins to transport them across the cell membrane. The endocannabinoids (anandamide, AEA, and 2-arachidonoylglycerol, 2-AG), on the other hand, are non-charged lipids that readily cross lipid membranes. However, since the endocannabinoids are water-immiscible, protein transporters have been described that act as carriers to solubilize and transport the endocannabinoids through the aqueous cytoplasm. These include the heat shock proteins (Hsp70s) and the fatty acid-binding proteins (FABPs) for anandamide. FABPs such as FABP1, FABP3, FABP5, and FABP7 have been shown to bind endocannabinoids. FABP inhibitors attenuate the breakdown of anandamide by the enzyme fatty acid amide hydrolase (FAAH) in cell culture. One of these inhibitors (SB-FI-26), isolated from a virtual library of a million compounds, belongs to a class of compounds (named the "truxilloids") that act as anti-nociceptive agents with mild anti-inflammatory activity in mice. These truxillic acids and their derivatives have been known to have anti-inflammatory and anti-nociceptive effects in mice and are active components of a Chinese herbal medicine ((−)-incarvillateine, from Incarvillea sinensis) used to treat rheumatism and pain in humans. The blockade of anandamide transport may, at least in part, be the mechanism through which these compounds exert their anti-nociceptive effects. Studies have found the involvement of cholesterol in the membrane uptake and transport of anandamide. Cholesterol stimulates both the insertion of anandamide into synthetic lipid monolayers and bilayers and its transport across bilayer membranes, suggesting that, besides putative anandamide protein transporters, cholesterol could be an important component of the anandamide transport machinery; a related observation is the cholesterol-dependent modulation of CB1 cannabinoid receptors in nerve cells. The catalytic efficiency (i.e., the ratio between the maximal velocity and the Michaelis–Menten constant) of the AEA membrane transporter (AMT) is almost doubled under these conditions compared with control cells, demonstrating that, among the proteins of the “endocannabinoid system,” only CB1 and AMT critically depend on membrane cholesterol content, an observation that may have important implications for the role of CB1 in protecting nerve cells against (endo)cannabinoid-induced apoptosis. This may be one reason why the use of drugs to lower cholesterol is tied to a higher depression risk, and why cholesterol levels correlate with increased death rates from suicide and other violent causes. Activation of CB1 enhances AMT activity through increased nitric oxide synthase (NOS) activity and a subsequent increase in NO production, whereas AMT activity is instead reduced by activation of the CB2 cannabinoid receptor, which inhibits NOS and NO release, also suggesting that the distribution of these receptors may drive directional AEA transport through the blood–brain barrier and other endothelial cells. As reviewed in 2016, "Many of the AMT (EMT) proposals have fallen by the wayside." To date, a transmembrane protein transporter has not been identified. See also Endocannabinoid reuptake inhibitor Endocannabinoid enhancer Endocannabinoid system References Trans Neurochemistry
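The catalytic efficiency mentioned above is simply the Michaelis–Menten ratio Vmax/Km, which a few lines of Python make concrete; the kinetic values below are hypothetical placeholders chosen only to illustrate an "almost doubled" ratio, not measurements from any study:

def catalytic_efficiency(vmax, km):
    """Michaelis-Menten catalytic efficiency: Vmax / Km."""
    return vmax / km

# Hypothetical illustrative values (arbitrary units), not experimental data:
control  = catalytic_efficiency(vmax=10.0, km=2.0)    # -> 5.0
enriched = catalytic_efficiency(vmax=12.0, km=1.25)   # -> 9.6
print(f"efficiency ratio = {enriched / control:.2f}x")  # ~1.9x, "almost doubled"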
Endocannabinoid transporter
[ "Chemistry", "Biology" ]
790
[ "Biochemistry", "Neurochemistry" ]
40,759,252
https://en.wikipedia.org/wiki/Indolyl-3-acryloylglycine
Indolyl-3-acryloylglycine, also known as trans-indolyl-3-acryloylglycine, or IAG for short, is a compound consisting of an indole group attached to an acrylic acid moiety, which is in turn attached to a glycine molecule. This compound has been shown to isomerize when exposed to light. It is likely an intermediate in the metabolism of tryptophan, being synthesized from tryptophan via indolepropionic acid and indoleacrylic acid (IAcrA). It is also likely that IAcrA is converted into IAG in the gut wall. It may also be produced by certain elements of the mammalian gut microbiota via phenylalanine ammonia-lyase. Identifiable in the urine by high-performance liquid chromatography, it may be a biomarker for autism spectrum disorders, as suggested by the research of Paul Shattock and other researchers from Australia. These researchers have reported that urinary levels of IAG are much higher in autistic children than in controls; however, other researchers have found no association between IAG concentrations in the urine and autism. Its excretion in the urine may also be changed in Hartnup disease and celiac disease, as well as in photodermatosis, muscular dystrophy, and liver cirrhosis. References Indoles Biomarkers Amino acids
Indolyl-3-acryloylglycine
[ "Chemistry", "Biology" ]
309
[ "Amino acids", "Biomolecules by chemical classification", "Biomarkers" ]
40,761,507
https://en.wikipedia.org/wiki/EHA101
EHA101 was one of the first and most widely used Agrobacterium helper plasmids for plant gene transfer. Created in 1985 in the laboratory of Mary-Dell Chilton at Washington University in St. Louis, it was named after the graduate student who constructed it: the EH stands for "Elizabeth Hood" and the A for "Agrobacterium". The EHA101 helper strain is a derivative of A281, the hypervirulent A. tumefaciens strain that causes large, fast-growing tumors on solanaceous plants. This strain is used for moving genes of interest into many hundreds of species of plants all over the world. For recalcitrant crops such as maize, wheat, and rice, the EHA helper strains are often employed for gene transfer. These strains are efficient at promoting T-DNA transfer because of the hypervirulence of their vir genes, suggesting that a higher success rate can be achieved on these "hard to transform" crops or cultivars. The chromosomal background of EHA101 is C58C1, a cured nopaline strain. The helper strains were derived from A281, which is A136(pTiBo542). A281 was genetically engineered through a double-crossover, site-directed deletion to yield EHA101, a T-DNA-deleted strain useful for target gene transfer into plants. EHA101 is resistant to kanamycin by way of an npt I gene in place of the T-DNA. The parent strain, A281, does not show antibiotic resistances at higher levels than normal A. tumefaciens strains. Moreover, other transconjugant strains in the C58C1 background do not show these increased resistances to antibiotics. Therefore, these characteristics are not simply a manifestation of the chromosomal background, but most likely an interaction of this Ti plasmid and the C58 chromosomal background. The npt I gene in place of the T-DNA in EHA101 requires that binary plasmids put into the strain encode a drug resistance other than kanamycin. Strains EHA105 was generated from EHA101 through site-directed deletion of the kanamycin resistance gene from the Ti plasmid; otherwise the strains are identical. This latter strain has been useful to plant biotechnologists who use kanamycin as a selectable marker on their binary plasmids. References Plasmids Molecular biology techniques
EHA101
[ "Chemistry", "Biology" ]
531
[ "Plasmids", "Molecular biology techniques", "Bacteria", "Molecular biology" ]
40,762,191
https://en.wikipedia.org/wiki/Protein%20superfamily
A protein superfamily is the largest grouping (clade) of proteins for which common ancestry can be inferred (see homology). Usually this common ancestry is inferred from structural alignment and mechanistic similarity, even if no sequence similarity is evident. Sequence homology can then be deduced even if not apparent (due to low sequence similarity). Superfamilies typically contain several protein families which show sequence similarity within each family. The term protein clan is commonly used for protease and glycosyl hydrolases superfamilies based on the MEROPS and CAZy classification systems. Identification Superfamilies of proteins are identified using a number of methods. Closely related members can be identified by different methods to those needed to group the most evolutionarily divergent members. Sequence similarity Historically, the similarity of different amino acid sequences has been the most common method of inferring homology. Sequence similarity is considered a good predictor of relatedness, since similar sequences are more likely the result of gene duplication and divergent evolution, rather than the result of convergent evolution. Amino acid sequence is typically more conserved than DNA sequence (due to the degenerate genetic code), so it is a more sensitive detection method. Since some of the amino acids have similar properties (e.g., charge, hydrophobicity, size), conservative mutations that interchange them are often neutral to function. The most conserved sequence regions of a protein often correspond to functionally important regions like catalytic sites and binding sites, since these regions are less tolerant to sequence changes. Using sequence similarity to infer homology has several limitations. There is no minimum level of sequence similarity guaranteed to produce identical structures. Over long periods of evolution, related proteins may show no detectable sequence similarity to one another. Sequences with many insertions and deletions can also sometimes be difficult to align and so identify the homologous sequence regions. In the PA clan of proteases, for example, not a single residue is conserved through the superfamily, not even those in the catalytic triad. Conversely, the individual families that make up a superfamily are defined on the basis of their sequence alignment, for example the C04 protease family within the PA clan. Nevertheless, sequence similarity is the most commonly used form of evidence to infer relatedness, since the number of known sequences vastly outnumbers the number of known tertiary structures. In the absence of structural information, sequence similarity constrains the limits of which proteins can be assigned to a superfamily. Structural similarity Structure is much more evolutionarily conserved than sequence, such that proteins with highly similar structures can have entirely different sequences. Over very long evolutionary timescales, very few residues show detectable amino acid sequence conservation, however secondary structural elements and tertiary structural motifs are highly conserved. Some protein dynamics and conformational changes of the protein structure may also be conserved, as is seen in the serpin superfamily. Consequently, protein tertiary structure can be used to detect homology between proteins even when no evidence of relatedness remains in their sequences. Structural alignment programs, such as DALI, use the 3D structure of a protein of interest to find proteins with similar folds. 
However, on rare occasions, related proteins may evolve to be structurally dissimilar and relatedness can only be inferred by other methods. Mechanistic similarity The catalytic mechanism of enzymes within a superfamily is commonly conserved, although substrate specificity may be significantly different. Catalytic residues also tend to occur in the same order in the protein sequence. For the families within the PA clan of proteases, although there has been divergent evolution of the catalytic triad residues used to perform catalysis, all members use a similar mechanism to perform covalent, nucleophilic catalysis on proteins, peptides or amino acids. However, mechanism alone is not sufficient to infer relatedness. Some catalytic mechanisms have been convergently evolved multiple times independently, and so form separate superfamilies, and some superfamilies display a range of different (though often chemically similar) mechanisms. Evolutionary significance Protein superfamilies represent the current limits of our ability to identify common ancestry. They are the largest evolutionary grouping based on direct evidence that is currently possible. They are therefore amongst the most ancient evolutionary events currently studied. Some superfamilies have members present in all kingdoms of life, indicating that the last common ancestor of that superfamily was in the last universal common ancestor of all life (LUCA). Superfamily members may be in different species, with the ancestral protein being the form of the protein that existed in the ancestral species (orthology). Conversely, the proteins may be in the same species, but evolved from a single protein whose gene was duplicated in the genome (paralogy). Diversification A majority of proteins contain multiple domains. Between 66% and 80% of eukaryotic proteins have multiple domains, while about 40–60% of prokaryotic proteins have multiple domains. Over time, many of the superfamilies of domains have mixed together. In fact, it is very rare to find “consistently isolated superfamilies”. When domains do combine, the N- to C-terminal domain order (the "domain architecture") is typically well conserved. Additionally, the number of domain combinations seen in nature is small compared to the number of possibilities, suggesting that selection acts on all combinations. Examples α/β hydrolase superfamily Members share an α/β sheet containing 8 strands connected by helices, with catalytic triad residues in the same order; activities include proteases, lipases, peroxidases, esterases, epoxide hydrolases and dehalogenases. Alkaline phosphatase superfamily Members share an αβα sandwich structure as well as performing common promiscuous reactions by a common mechanism. Globin superfamily Members share an 8-alpha-helix globular globin fold. Immunoglobulin superfamily Members share a sandwich-like structure of two sheets of antiparallel β strands (Ig-fold), and are involved in recognition, binding, and adhesion. PA clan Members share a chymotrypsin-like double β-barrel fold and similar proteolysis mechanisms but sequence identity of <10%. The clan contains both cysteine and serine proteases (different nucleophiles). Ras superfamily Members share a common catalytic G domain of a 6-strand β sheet surrounded by 5 α-helices. RSH superfamily Members share the capability to hydrolyze and/or synthesize ppGpp alarmones in the stringent response.
Serpin superfamily Members share a high-energy, stressed fold that can undergo a large conformational change, typically used to inhibit serine and cysteine proteases by disrupting their structure. TIM barrel superfamily Members share a large α8β8 barrel structure. It is one of the most common protein folds and the monophyly of this superfamily is still contested. Protein superfamily resources Several biological databases document protein superfamilies and protein folds, for example: Pfam - Protein families database of alignments and HMMs PROSITE - Database of protein domains, families and functional sites PIRSF - SuperFamily Classification System PASS2 - Protein Alignment as Structural Superfamilies v2 SUPERFAMILY - Library of HMMs representing superfamilies and database of (superfamily and family) annotations for all completely sequenced organisms SCOP and CATH - Classifications of protein structures into superfamilies, families and domains Similarly there are algorithms that search the PDB for proteins with structural homology to a target structure, for example: DALI - Structural alignment based on a distance alignment matrix method See also Structural alignment Protein domains Protein family Protein subfamily Protein mimetic Protein structure Homology (biology) Interolog List of gene families SUPERFAMILY CATH References External links Molecular evolution Protein classification
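As an illustration of the percent-identity measure used throughout the sequence-similarity discussion above, the following is a minimal Python sketch; the aligned fragments are invented for illustration and are not real proteins, and real classification pipelines (such as those behind family databases) use statistical alignment scores rather than raw identity.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two pre-aligned sequences ('-' marks a gap)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("aligned sequences must have equal length")
    # Keep only columns where both sequences have a residue (gaps excluded).
    aligned = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in aligned)
    return 100.0 * matches / len(aligned)

# Toy aligned fragments (invented for illustration):
print(percent_identity("MKT-AYIAKQR", "MKSLAYLAKQR"))  # 80.0
```

Identity at this level would suggest family membership; superfamily relationships, as the text notes, often persist below any detectable sequence identity.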
Protein superfamily
[ "Chemistry", "Biology" ]
1,629
[ "Evolutionary processes", "Molecular evolution", "Protein classification", "Molecular biology", "Protein families", "Protein superfamilies" ]
48,471,223
https://en.wikipedia.org/wiki/Drug%20class
A drug class is a group of medications and other compounds that share similar chemical structures, act through the same mechanism of action (i.e., binding to the same biological target), have similar modes of action, and/or are used to treat similar diseases. The FDA has long worked to classify and license new medications. Its Center for Drug Evaluation and Research categorizes these medications based on both their chemical and therapeutic classes. In several major drug classification systems, these four types of classification are organized into a hierarchy. For example, fibrates are a chemical class of drugs (amphipathic carboxylic acids) that share the same mechanism of action (PPAR agonist), the same mode of action (reducing blood triglyceride levels), and are used to prevent and treat the same disease (atherosclerosis). However, not all PPAR agonists are fibrates, not all triglyceride-lowering agents are PPAR agonists, and not all drugs used to treat atherosclerosis lower triglycerides. A drug class is typically defined by a prototype drug, the most important and usually the first-developed drug within the class, used as a reference for comparison. Comprehensive systems Anatomical Therapeutic Chemical Classification System (ATC) – Combines classification by organ system and therapeutic, pharmacological, and chemical properties into five levels. Systematized Nomenclature of Medicine (SNOMED) – includes a section devoted to drug classification Chemical class This type of categorisation of drugs is from a chemical perspective and categorises them by their chemical structure. Examples of drug classes that are based on chemical structures include: Analgesic Benzodiazepine Cannabinoid Cardiac glycoside Fibrate Gabapentinoid Steroid Thiazide diuretic Triptan β-lactam antibiotic Mechanism of action This type of categorisation is from a pharmacological perspective and categorises them by their biological target. Drug classes that share a common molecular mechanism of action modulate the activity of a specific biological target. The definition of a mechanism of action also includes the type of activity at that biological target. For receptors, these activities include agonist, antagonist, inverse agonist, or modulator. Enzyme target mechanisms include activator or inhibitor. Ion channel modulators include opener or blocker. The following are specific examples of drug classes whose definition is based on a specific mechanism of action: 5-alpha-reductase inhibitor ACE inhibitor Alpha-adrenergic agonist Angiotensin II receptor antagonist Beta blocker Cholinergic Dopaminergic GABAergic Incretin mimetic Nonsteroidal anti-inflammatory drug – cyclooxygenase inhibitor Proton-pump inhibitor Renin inhibitor Selective glucocorticoid receptor modulator Serotonergic Statin – HMG-CoA reductase inhibitor Mode of action This type of categorisation of drugs is from a biological perspective and categorises them by the anatomical or functional change they induce. Drug classes that are defined by common modes of action (i.e. the functional or anatomical change they induce) include: Antifungals Antimicrobials Antithrombotics Bronchodilator Chronotrope (positive or negative) Decongestant Diuretic or Antidiuretic Inotrope (positive or negative) Therapeutic class This type of categorisation of drugs is from a medical perspective and categorises them by the pathology they are used to treat.
Drug classes that are defined by their therapeutic use (the pathology they are intended to treat) include: Analgesics Antibiotic Anticancer Anticoagulant Antidepressant Antidiabetic Antiepileptic Antipsychotic Antispasmodic Antiviral Cardiovascular Depressant Sedative Stimulant Amalgamated classes Some drug classes have been amalgamated from these three principles to meet practical needs. The class of nonsteroidal anti-inflammatory drugs (NSAIDs) is one such example. Strictly speaking, and also historically, the wider class of anti-inflammatory drugs also comprises steroidal anti-inflammatory drugs. These drugs were in fact the predominant anti-inflammatories during the decade leading up to the introduction of the term "nonsteroidal anti-inflammatory drugs." Because of the disastrous reputation that the corticosteroids had acquired in the 1950s, the new term, which signalled that an anti-inflammatory drug was not a steroid, rapidly gained currency. The drug class of "nonsteroidal anti-inflammatory drugs" (NSAIDs) is thus composed of one element ("anti-inflammatory") that designates the mechanism of action, and one element ("nonsteroidal") that separates it from other drugs with that same mechanism of action. Similarly, one might argue that the class of disease-modifying anti-rheumatic drugs (DMARDs) is composed of one element ("disease-modifying") that, albeit vaguely, designates a mechanism of action, and one element ("anti-rheumatic drug") that indicates its therapeutic use. Disease-modifying antirheumatic drug (DMARD) Nonsteroidal anti-inflammatory drug (NSAID) Other systems of classification Other systems of drug classification exist, for example the Biopharmaceutics Classification System, which classifies drugs by their solubility and intestinal permeability. Legal classification For the Canadian legal classification, see Controlled Drugs and Substances Act For the UK legal classification, see Drugs controlled by the UK Misuse of Drugs Act For the US legal classification, see Controlled Substances Act Pregnancy category is defined using a variety of systems by different jurisdictions References External links Pharmacodynamics Medicinal chemistry Pharmacological classification systems
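The five-level ATC hierarchy mentioned above has a fixed code layout (one letter, two digits, one letter, one letter, two digits), so a complete code can be split mechanically. A minimal Python sketch follows; the example code C10AB05 is assumed here to be fenofibrate, one of the fibrates used as the running example above, and should be treated as illustrative.

```python
def parse_atc(code: str) -> dict:
    """Split a complete WHO ATC code into its five hierarchical levels."""
    if len(code) != 7:
        raise ValueError("a complete ATC code has 7 characters, e.g. 'C10AB05'")
    return {
        "level 1 (anatomical main group)": code[0],
        "level 2 (therapeutic subgroup)": code[:3],
        "level 3 (pharmacological subgroup)": code[:4],
        "level 4 (chemical subgroup)": code[:5],
        "level 5 (chemical substance)": code,
    }

# Assumed example: C10AB05 (fenofibrate; C = cardiovascular system,
# C10 = lipid-modifying agents, C10AB = fibrates).
print(parse_atc("C10AB05"))
```

Reading the levels off the code itself is what lets the same substance be grouped chemically, pharmacologically, and therapeutically at once, as the hierarchy described above intends.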
Drug class
[ "Chemistry", "Biology" ]
1,208
[ "Pharmacology", "Pharmacological classification systems", "Pharmacodynamics", "nan", "Medicinal chemistry", "Biochemistry" ]
53,675,110
https://en.wikipedia.org/wiki/Noise%20%28spectral%20phenomenon%29
Noise is any type of random, troublesome, problematic, or unwanted signal. Acoustic noise may mar an aesthetic experience, such as a performance in a concert hall. It may also be a medical issue inherent in the biology of hearing. In technology, noise is an unwanted signal in a device or apparatus, commonly of an electrical nature. The nature of noise is much studied in mathematics and is a prominent topic in statistics. This article provides a survey of specific topics linked to their primary articles. Acoustic noise In transportation Aircraft noise Jet noise, caused by high-velocity jets and turbulent eddies Noise and vibration on maritime vessels Noise, vibration, and harshness, quality criteria for vehicles Traffic noise, including roadway noise and train noise Other acoustic noise Artificial noise, in spectator sports Background noise, in acoustics, any sound other than the monitored one Comfort noise, used in telecommunications to fill silent gaps Grey noise, random noise with a psychoacoustically adjusted spectrum Industrial noise, relevant to hearing damage and industrial hygiene Noise pollution, which negatively affects the quality of life Noise in biology Cellular noise, in biology, random variability between cells Developmental noise, variations among living beings with the same genome Neuronal noise, in neuroscience Synaptic noise, in neuroscience Transcriptional noise, in biochemistry, errors in genetic transcription Noise in computer graphics Noise in computer graphics refers to various pseudo-random functions used to create textures, including: Gradient noise, created by interpolation of a lattice of pseudorandom gradients Perlin noise, a type of gradient noise developed in 1983 Simplex noise, a method for constructing an n-dimensional noise function comparable to Perlin noise Simulation noise, a function that creates a divergence-free field Value noise, created by interpolation of a lattice of pseudorandom values; differs from gradient noise Wavelet noise, an alternative to Perlin noise which reduces problems of aliasing and detail loss Worley noise, a noise function introduced by Steven Worley in 1996 Noise in electronics and radio Noise (signal processing), various types of interference Noise (electronics), related to electronic circuitry Ground noise, appearing at the ground terminal of audio equipment Image noise, related to digital photography Noise (radio), interference related to radio signals Atmospheric noise, radio noise caused by lightning Cosmic noise, radio noise from outside the Earth's atmosphere Noise (video), "snow" on video or television pictures Noise in mathematics Any one of many statistical types or colors of noise, such as White noise, which has constant power spectral density Gaussian noise, with a probability density function equal to that of the normal distribution Pink noise, with spectral density inversely proportional to frequency Brownian noise or "brown" noise, with spectral density inversely proportional to the square of frequency Pseudorandom noise, in cryptography, an artificial signal that can pass for random Statistical noise, a colloquialism for recognized amounts of unexplained variation in a sample Shot noise, noise which can be modeled by a Poisson process Noise-based logic, where logic values are different stochastic processes Noise print, a statistical signature of ambient noise, used in its suppression Other types of noise Electrochemical noise, electrical fluctuations in electrolysis, corrosion, etc.
Phonon noise, in materials science Seismic noise, random tremors of the ground Measures of noise intensity Noise figure, the ratio of the output noise power to that attributable to thermal noise Ambient noise level, the background sound pressure level at a given location Noise power, with several related meanings Noise spectral density, N0, measured in watts per hertz Noise temperature, the temperature that would produce equivalent semiconductor noise See also Noise (disambiguation) Broad-concept articles Noise (electronics) Mechanical vibrations
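The spectral definitions listed above (white noise flat, Brownian noise falling as 1/f²) can be checked numerically. The following is a minimal NumPy sketch, not tied to any source cited in this article: Brownian noise is obtained by cumulatively summing white noise, and the log-log slope of the estimated power spectral density then comes out near 0 for white and near -2 for Brownian. Pink (1/f) noise requires more careful spectral shaping and is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

white = rng.standard_normal(n)   # flat power spectral density
brown = np.cumsum(white)         # integrated white noise -> 1/f^2 spectrum

# Rough PSD estimate from a single FFT (good enough to see the slopes).
freqs = np.fft.rfftfreq(n, d=1.0)[1:]
psd_white = np.abs(np.fft.rfft(white))[1:] ** 2
psd_brown = np.abs(np.fft.rfft(brown))[1:] ** 2

# Log-log slope: ~0 for white noise, ~-2 for Brownian noise.
print(np.polyfit(np.log(freqs), np.log(psd_white), 1)[0])
print(np.polyfit(np.log(freqs), np.log(psd_brown), 1)[0])
```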
Noise (spectral phenomenon)
[ "Physics", "Engineering" ]
749
[ "Structural engineering", "Mechanics", "Mechanical vibrations" ]
50,830,497
https://en.wikipedia.org/wiki/EcoDemonstrator
The ecoDemonstrator Program is a Boeing flight test research program, which has used a series of specially modified aircraft to develop and test aviation technologies designed to improve fuel economy and reduce the noise and ecological footprint of airliners. Starting in 2012, several aircraft have tested a total of over 250 technologies as of 2024; half remain in further development, but nearly a third have been implemented commercially, such as iPad apps for pilot real-time information to reduce fuel use and emissions; custom approach paths to reduce community noise; and cameras for ground navigation and collision avoidance. Boeing's named airliner technology programs started in 2001 with the Quiet Technology Demonstrator, and have continued, through the ecoDemonstrator, to the ecoDemonstrator Explorer program announced in 2023. Quiet Technology Demonstrator program The ecoDemonstrator program followed the joint Rolls-Royce and Boeing Quiet Technology Demonstrator (QTD) program, which ran in 2001, 2005 and 2018 to develop a quieter engine using chevrons on the rear of the nacelle and exhaust nozzles, as well as an acoustically enhanced inlet liner. In 2001 an American Airlines Boeing 777-200ER with Rolls-Royce Trent 800 engines was used for the flight tests. Much testing was carried out at Glasgow Industrial Airport, Montana, the airport of Boeing's subsidiary, Montana Aviation Research Company (MARCO). The tests were successful, demonstrating better noise reduction than predicted and leading to redesign of wing leading edge de-icing holes to eliminate whistling, a modification which was immediately applied on the 777 production line. Once the QTD2 program began, this program started to be referred to as QTD1. The resulting design changes were demonstrated in the 2005 Quiet Technology Demonstrator Two (QTD2) program in which a new Boeing 777-300ER, fitted with General Electric GE90-115B engines, was used for a three-week trial, again mainly at Boeing's flight test centre at Glasgow Industrial Airport. As well as the modifications, the aircraft was equipped with extensive sound measurement equipment, and microphone arrays were laid out around the airfield. The chevrons have since been adopted on the Boeing 737 MAX series, 747-8 and 787 Dreamliner aircraft. Also tested on the QTD2 were streamlined toboggan fairings on the main landing gear to reduce noise. In 2018 a new design of engine inlet liner was flight tested in a successor program, Quiet Technology Demonstrator 3 (QTD3), using acoustic arrays at Moses Lake, Washington. The NASA-designed inlet was installed in the right-hand nacelle of one of Boeing's two 737 MAX 7 prototypes, powered by CFM International LEAP 1B engines. The testing took place between 27 July and 6 August. QTD aircraft summary ecoDemonstrator program The ecoDemonstrator program was formally launched in 2011, in partnership with American Airlines and the FAA. The first ecoDemonstrator aircraft, a Boeing 737-800, operated during 2012. Since then a different aircraft has been used each year, excepting 2013 and 2017 and a single aircraft from 2022 to 2024, with testing operations lasting from a few weeks to over six months. The testing is usually done in collaboration with many industry partners, including NASA, the FAA, airlines, makers of engines, equipment and software, and academic institutions. The results of the tests are rarely publicised, respecting the confidentiality of the industrial partners. 
As of 2024 the program has tested over 250 technologies, of which 28% have been implemented, 52% are still under development, and 20% "provided helpful learnings" and were abandoned. The 2022–2024 aircraft, the ninth in the program, wore a special 10th anniversary livery. Participating aircraft 2012: Boeing 737-800 This was a new aircraft destined for American Airlines and painted in their livery. With this, the first ecoDemonstrator, Boeing tested laminar flow technology for winglets, improving fuel efficiency by 1.8 percent. This fed directly into the design of the winglets used on the subsequent 737 MAX series. The aircraft tested other technologies, including: variable area fan nozzle to optimize engine efficiency regenerative hydrogen fuel cell for aircraft electrical power adaptive outer wing trailing edges for greater take-off lift and decreased drag in cruise active engine vibration control flightpath optimization for operational efficiency carpet made from recycled materials sustainable aviation fuel (SAF). 2014: Boeing 787-8 The fourth production 787, a Boeing test airframe, was employed as the second ecoDemonstrator. It conducted 35 projects including: use of a 15% blend of SAF by both engines for nine flights acoustic ceramic matrix composite nozzle for weight and noise reduction aerodynamic and flight control improvements advanced wing coatings to reduce ice accumulation software applications and connectivity technologies that can improve flight planning, fuel-load optimization, in-flight routing, and landing touchscreen displays on the flight deck wireless sensors to reduce wiring, reduce weight and save fuel outer wing access doors made from recycled 787 carbon fibre development of the Airborne Spacing for Terminal Arrival Routes (ASTAR) system to reduce spacing between aircraft on approach to airports. 2015: Boeing 757-200 This aircraft served with United Airlines for 23 years before being used by Boeing for the ecoDemonstrator program. The aircraft was painted in the TUI Group livery as a mark of their collaboration in the project, particularly in the environmental efficiency aspects. NASA's Langley Research Center was also a major participant as part of its Environmentally Responsible Aviation (ERA) project. At the end of the testing period the aircraft was disassembled for recycling, in conjunction with the Aircraft Fleet Recycling Association and the aircraft lessor Stifel. Around 90% of materials were reused or recycled. Among the 20 technologies explored were: improvement of airflow with insect shields and anti-bug coatings on one wing active flow control over the vertical tail with the aim of increasing efficiency and reducing its size cabin food cart that converts to a waste cart green diesel fuel testing.
Technologies explored included: smaller, lighter-weight thrust reverser Safran electrical power distribution system use of 100% biofuel – the first commercial airliner to be entirely powered by SAF. The engines were not modified in any way 3D printed titanium tail fin cap using waste material and reducing weight synthetic ILS using GPS giving increased reliability and potentially allowing reduced separation of aircraft on approach wake riding, involving flying closely behind another aircraft to give a fuel efficiency increase of up to 10% LIDAR clear-air turbulence detector SOCAS – Surface Operations and Collision Avoidance System, merging radar and video images for obstacle detection FLYHT Aerospace Solutions’ Automated Flight Information Reporting System (AFIRS) for tracking, distress and data-streaming from flight data recorders. 2019: Boeing 777-200 This airliner had served Air China since 2001 before Boeing purchased it to join the ecoDemonstrator program. During testing, the aircraft visited Frankfurt, Germany, as several experiments were sponsored by German organisations including the German Aerospace Center (DLR), Diehl Aerospace, and Fraport. Among the 50 projects trialled were: recyclable cabin carpet tiles moisture-absorbent toilet floor made from recycled carbon fibre chromate-free primer for aluminium parts to reduce manufacturing health risks sharing digital information between air traffic control (ATC), the flight deck and an airline's operations center to optimize routing efficiency and safety a connected electronic flight bag (EFB) application to provide re-routing information connected galleys, lavatories, and cabin temperature and humidity sensors cameras for an outside view for passengers. 2020: Boeing 787-10 This new aircraft for Etihad Airways was used for just a few weeks between August and October 2020, with testing mainly carried out at Boeing's Glasgow Industrial Airport, Montana. The program included: noise measurement with over 1400 sensors for internal and external measurements noise reduction including Safran undercarriage modifications SAF testing with blends of 30% to 50% sanitisation methods for the COVID-19 pandemic digital text-based ATC routing communications. 2021: Boeing 737 MAX 9 This 5-month program was conducted with a new airframe originally destined for Corendon Dutch Airlines but painted in a special Alaska Airlines livery with ecoDemonstrator stickers. In October 2021 the aircraft flew from Seattle to Glasgow, Scotland, for the United Nations COP26 Climate Change Conference, bringing executives from Boeing and Alaska Airlines and fuelled by a 50% SAF blend. The testing program included: low profile anti-collision light for weight and drag reduction and increased visibility modernised ATC communications including the Inmarsat IRIS satellite communications system halon-free fire extinguishing (ground testing only) noise reduction engine nacelles including testing at Glasgow Industrial Airport, Montana cabin walls made from recycled material 50% SAF blend atmospheric greenhouse gas measurement system integration for airliners passenger air vent designs to create an air curtain between seat rows. 2022: Boeing 777-200ER The aircraft was originally delivered to Singapore Airlines in 2002, and flew most recently for Surinam Airways. It wore a livery celebrating the 10th anniversary of the ecoDemonstrator program. Boeing implied that this aircraft would operate as the ecoDemonstrator test aircraft until 2024.
The company stated that the six-month 2022 program would demonstrate 30 new technologies, among which were: the use of a 30% SAF blend disinfection of water from sinks for reuse in toilet flushing weight reduction through 3D printed parts noise reduction techniques vortex generators which retract during cruise head-worn head-up display enhanced vision system fire-fighting system that does not use halon environmentally-friendly galley cooler refrigerant. 2023 In April 2023 Boeing announced that the 777-200ER would be testing 19 technologies during the year, including: cargo hold wall panels made from recycled and sustainable materials fibre-optic fuel quantity sensors compatible with SAF smart airport maps by Boeing subsidiary Jeppesen for active airport taxiing monitoring for EFBs all flights to use SAF in the highest available blend. Between 25 June and 29 June 2023 the aircraft operated from London Stansted Airport, performing flights over The Netherlands, Belgium, Germany and the Czech Republic, subsequently returning to its base at Seattle. No announcement was made about the purpose of these flights. In December 2023, in cooperation with Nav Canada, the aircraft taxied from stand to runway at Vancouver International Airport using only digital communications via the EFB, with no voice contact with ATC. 2024 In May 2024 Boeing announced that the aircraft would be flying its program with a 30/70 mix of SAF and conventional fuel, starting later the same month. Among the 36 technologies to be tested would be: cabin seat sensors to detect if a passenger leaves their seat during taxiing, takeoff and landing touchless lavatory and more efficient galleys to increase efficiency and reduce food waste single-engine taxiing and digital taxi clearances steeper, continuous landing approaches for noise, fuel and emissions reduction recyclable, lighter and more durable cabin flooring and ceiling panels improved cabin insulation and bulkhead and galley acoustic panels OLED display screens integrated into cabin structure. Also tested were enhancements of the Jeppesen EFB which included in-flight fuel saving recommendations and prediction of taxi times using historical and real-time data. In September 2024 Boeing announced that during this round of testing it had performed 85 hours of ground testing and 15 hours of flight testing, covering 10 technologies, and highlighted the fuel and noise reduction benefits of modified approaches, some of which had already been adopted by the ICAO. On 9 September the aircraft flew from Seattle to Victorville, California. ecoDemonstrator aircraft summary Most information from Planespotters.net. All aircraft apart from the 2022 777 had ecoDemonstrator stickers applied to the fuselage or engine nacelles, at least one retaining them for some time after its participation in the program ended. ecoDemonstrator Explorer program In April 2023 Boeing announced a new program, ecoDemonstrator Explorer, using "platforms that will focus on short-term testing of a specific technology". Projects First project The first ecoDemonstrator Explorer was a 787-10, using the aircraft's technologies, along with coordination with the air navigation service providers (ANSPs) of the US, Japan, Singapore and Thailand, to optimise routings for the greatest possible efficiency across variables such as weather, air traffic and airspace closures. This is the basis of an international form of trajectory-based operations (TBO), already part of the US FAA's national NextGen project.
The ANSPs will coordinate to streamline the flow of traffic through multiple national jurisdictions. The test flights will use the highest available blends of SAF in the process. Boeing expects that the fuel burn could be reduced by up to 10%. In June 2023 787-10 N8290V (a Boeing test registration) was used for the first Explorer test/demonstration flights. The aircraft, built in 2021 for Vietnam Airlines but not taken up, was unmarked except for basic Boeing logos and “ecoDemonstrator EXPLORER” stickers. It left Seattle on 11 June, flying first to Tokyo (Narita). From there it flew to Singapore (Changi) on 13 June, then on to Bangkok (Suvarnabhumi) on 14 June, and returned direct to Seattle (Everett) on 16 June. The Civil Aviation Authority of Singapore (CAAS) stated that this was part of a three-year programme. Boeing's chief pilot for product development stated that the TBO system, using technologies already in use aboard many modern airliners, allows pilots and air traffic controllers to submit trial route-change requests and preview the cascading effects on their own and other aircraft's flights, all the way through to airport gate availability, in order to judge whether the changes are likely to be approved. In October 2023 it was announced that the ANSPs of China, Indonesia, Japan, New Zealand, the Philippines, Singapore, Thailand, and the USA would create a Pathfinder project to demonstrate TBO across the region within four years. Separately, the ANSPs of Indonesia, New Zealand and Singapore, along with the Civil Air Navigation Services Organisation (CANSO) and IATA, agreed to implement within a year a similar Free Route Operations (FRTO) project to provide routings between defined city pairs. Second project On 12 October 2023 Boeing announced a second ecoDemonstrator Explorer project. It evaluated the environmental characteristics of SAF using a new 737 MAX 10 destined for United Airlines. Registered N27602, it made its first flight on 14 September 2023 and wore a special livery with "ecoDemonstrator EXPLORER" titles and "The Future is SAF" markings on the nacelles. It flew running on SAF from one fuel tank, alternating with conventional fuel from another tank. The emissions from the CFM International LEAP 1B engines were sampled by the NASA Douglas DC-8 Airborne Science Lab, registered N817NA, which flew behind the test aircraft. The characteristics of the contrails produced were evaluated. Also collaborating on the project were the FAA, GE Aerospace, and the DLR. NASA stated that the test results will be released to the public. Test flights started on 12 October 2023, based at Everett Paine Field. Eleven flights were performed, eight over Montana and three over the Pacific off the coast of Oregon, at a series of constant altitudes. The average duration of each flight was around five hours, generally flying an extended racetrack pattern. At the end of the final test flight on 1 November, the Explorer aircraft returned to Everett while the DC-8 flew back to Plant 42, Palmdale, California. ecoDemonstrator Explorer aircraft summary Footnotes See also Boeing Truss-Braced Wing Boeing X-66 Environmental impact of aviation References External links Boeing: Environment Boeing: ecoDemonstrator Aircraft noise reduction Boeing Green vehicles Aviation and the environment 2010s United States experimental aircraft Research projects Research and development in the United States Aerodynamics 2011 establishments Noise control
EcoDemonstrator
[ "Chemistry", "Engineering" ]
3,503
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
50,839,786
https://en.wikipedia.org/wiki/Gulf%20Island%20Fabrication
Gulf Island Fabrication is an American manufacturer of specialized structures and marine vessels used in the energy sector. The company builds offshore oil and gas platforms, ships and also foundations for offshore wind turbines. It also provides maintenance and marine repair services in-shop and out in the field. The company has built some of the largest offshore platforms in the world. The company's headquarters are located in Houston, Texas, and its seven building yards are in Louisiana and Texas. Gulf Island Fabrication and Bechtel are partners. The company was founded by Alden “Doc” Laborde, a World War II Navy commander who later worked in the offshore oil and gas industry. In 1985, the company took over a bankrupt rival named Delta Fabrication. The company became publicly listed in 1997, offering 2,000,000 shares at $15 per share for a total offering of $30 million. The company diversified its revenue into shipbuilding and expanded by taking over LeeVac Shipyards at the beginning of 2016. The acquisition provided about $112 million of incremental contract backlog during the industry downturn. References External links Official site of Gulf Island Fabrication Louisiana builder is hard at work on R.I.'s offshore wind turbines Construction and civil engineering companies of the United States Energy engineering and contractor companies
Gulf Island Fabrication
[ "Engineering" ]
404
[ "Energy engineering and contractor companies", "Engineering companies" ]
50,841,650
https://en.wikipedia.org/wiki/ALPHA%20experiment
The Antihydrogen Laser Physics Apparatus (ALPHA), also known as AD-5, is an experiment at CERN's Antiproton Decelerator, designed to trap antihydrogen in a magnetic trap in order to study its atomic spectra. The ultimate goal of the experiment is to test CPT symmetry by comparing the respective spectra of hydrogen and antihydrogen. Scientists taking part in ALPHA include former members of the ATHENA experiment (AD-1), the first to produce cold antihydrogen in 2002. On 27 September 2023, ALPHA collaborators published findings suggesting that antimatter interacts with gravity in a way similar to regular matter, supporting a prediction of the weak equivalence principle. Experimental setup Working with antimatter presents several experimental challenges. Magnetic traps, wherein neutral atoms are trapped using their magnetic moments, are required to keep antimatter from annihilating with matter, but are notoriously weak. Only atoms with kinetic energies equivalent to less than one kelvin may be trapped. The ATHENA and ATRAP (AD-2) projects produced antihydrogen by merging cold plasmas of positrons and antiprotons. While this method has been quite successful, it creates antimatter atoms with kinetic energies too large to be trapped. Moreover, to do laser spectroscopy on these antimatter atoms, they need to be in their ground state, something that does not appear to be the case for the majority of antimatter atoms created with this technique. Antiprotons are received from the Antiproton Decelerator and are 'mixed' with positrons from a specially designed positron accumulator in a versatile Penning trap. The central region where the mixing and thus antihydrogen formation takes place is surrounded by a superconducting octupole magnet and two axially separated short solenoid "mirror coils" to form a "minimum-B" magnetic trap. Once trapped, antihydrogen can be subjected to study, and the measurements compared to those of hydrogen. Antihydrogen detection In order to detect trapped antihydrogen, ALPHA also includes a 'silicon vertex detector': a cylindrical detector composed of three layers of silicon strips. Each strip acts as a detector for the charged particles passing through. By recording how the strips are excited, ALPHA can reconstruct the traces of particles traveling through the detector. When an antiproton annihilates, the process typically results in the emission of 3 or 4 charged pions. By reconstructing their traces through the detector, the location of the annihilation can be determined. These traces are quite distinct from those of cosmic rays, which are also detected but, owing to their high energy, pass straight through the detector. To confirm successful trapping, the ALPHA magnet that creates the minimum-B field was designed to allow rapid and repeated de-energizing. The decay of current during de-energization has a characteristic duration of 9 ms, orders of magnitude faster than similar systems. In theory, the fast turn-off speed and the ability to suppress false cosmic-ray signals allow ALPHA to detect the release of single antihydrogen atoms during de-energization. Cooling antihydrogen One of the main challenges of working with antihydrogen is cooling it enough to be able to trap it. Antiprotons and positrons are not easily cooled to cryogenic temperatures, so ALPHA has implemented a well-known technique from atomic physics, evaporative cooling. State-of-the-art minimum-B traps such as the one ALPHA uses have depths of order 1 kelvin.
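The trap depth quoted in kelvin converts directly to an energy scale through E = kB·T, which makes clear how cold the anti-atoms must be to stay confined. A short sketch, for comparison with the "less than one kelvin" criterion above:

```python
k_B = 1.380649e-23   # J/K, Boltzmann constant

def trap_depth_energy(depth_kelvin: float) -> tuple[float, float]:
    """Energy corresponding to a trap depth expressed as a temperature."""
    e_joule = k_B * depth_kelvin
    e_ev = e_joule / 1.602176634e-19   # convert joules to electronvolts
    return e_joule, e_ev

e_j, e_ev = trap_depth_energy(1.0)
print(f"1 K trap depth ~ {e_j:.2e} J ~ {e_ev:.2e} eV")  # ~1.4e-23 J, ~8.6e-5 eV
```

An energy budget of roughly a ten-thousandth of an electronvolt is why only a tiny fraction of freshly formed antihydrogen atoms can be held, and why evaporative and laser cooling matter so much.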
Results A preliminary experiment conducted in 2013 found that the gravitational mass of antihydrogen atoms was between −65 and 110 times their inertial mass, leaving considerable room for refinement using larger numbers of colder antihydrogen atoms. ALPHA has also succeeded in laser cooling antihydrogen atoms, a technique first demonstrated on normal matter in 1978. On 27 September 2023, the ALPHA team published a paper supporting the prediction that the gravitational interaction of antimatter is similar to that of regular matter. For the weak equivalence principle of general relativity to be correct, the two substances must display identical gravitational properties. The findings rule out a 'repulsive [antigravity]', as previously theorized by some in the field. Collaborators The ALPHA collaboration includes a number of institutions. References External links Record for ALPHA experiment on INSPIRE-HEP Antimatter CERN experiments Particle experiments
ALPHA experiment
[ "Physics" ]
934
[ "Antimatter", "Matter" ]
50,841,676
https://en.wikipedia.org/wiki/AEgIS%20experiment
AEgIS (Antimatter Experiment: gravity, Interferometry, Spectroscopy), AD-6, is an experiment at the Antiproton Decelerator facility at CERN. Its primary goal is to directly measure, with significant precision, the effect of Earth's gravitational field on antihydrogen atoms. Indirect bounds, which assume that principles such as the universality of free fall, the weak equivalence principle, or CPT symmetry are also valid for antimatter, constrain any anomalous gravitational behavior to a level at which only precision measurements can provide answers. Conversely, antimatter experiments with sufficient precision are essential to validate these fundamental assumptions. AEgIS was originally proposed in 2007. Construction of the main apparatus was completed in 2012. Since 2014, two laser systems with tunable wavelengths (few-picometer precision), synchronized to the nanosecond for specific atomic excitation, have been successfully commissioned. AEgIS experimental setup and physics AEgIS will attempt to determine whether gravity affects antimatter in the same way it affects normal matter by testing its effect on an antihydrogen beam. The planned experimental setup uses a Moiré deflectometer to measure the vertical displacement of a beam of cold antihydrogen atoms traveling in Earth's gravitational field. In the first phase of the experiment (running until 2018), antiprotons from the Antiproton Decelerator (AD) with a kinetic energy of 5.3 MeV had to pass through a series of aluminum foils which acted as so-called degraders, slowing down a fraction of the fast antiprotons to a few keV. The slow antiprotons were then further cooled by merging them with extra cold trapped electrons (electron cooling) and finally trapped inside a Malmberg–Penning trap. An intense radioactive β+ source (22Na) was used to produce positrons, which were accumulated in a Surko-type storage trap at low pressure (3×10−8 mbar). These positrons were implanted into a nano-structured porous silicon target in order to efficiently form positronium (Ps), even at cryogenic temperatures in ultra-high vacuum (UHV). A cloud of positronium emerging from the target was then excited to a Rydberg level of n=16/17 by using laser-induced two-step optical transitions. Inside the Malmberg–Penning trap, the charge exchange reaction between cold antiprotons and Rydberg Ps took place, leading to the formation of Rydberg antihydrogen with high efficiency in the form of a 4π pulse. (The ALPHA experiment has since published, in the 27 October 2023 issue of Nature, the result that antihydrogen falls under gravity.) In the second phase of the AEgIS experiment, starting from 2021 after AEgIS was successfully connected to the new antiproton deceleration and storage ring ELENA, the Rydberg antihydrogen atoms will be channeled into a beam, which will then pass through a series of matter gratings, the central piece of a Moiré deflectometer. The antihydrogen atoms will ultimately hit the surface of a position- and time-resolving detector, where they will annihilate. Areas behind the gratings are shadowed, while those behind the slits are not. The annihilation locations reproduce a periodic pattern of light and shadowed areas. This pattern is highly sensitive to small vertical displacements of the anti-atoms during their horizontal flight; the Earth's gravitational force on antihydrogen can thus be determined.
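To see the scale of displacement such a deflectometer must resolve, simple kinematics suffices: a horizontally launched atom falls Δy = ½gt² during a flight time t. The numbers below are illustrative round values chosen for the sketch, not actual AEgIS beam parameters.

```python
g = 9.81      # m/s^2, gravitational acceleration (assumed the same for antimatter)
v = 500.0     # m/s, illustrative horizontal beam speed (assumed)
L = 1.0       # m, illustrative horizontal flight distance (assumed)

t = L / v                  # time of flight between source and detector
dy = 0.5 * g * t ** 2      # free-fall drop accumulated over the flight
print(f"drop over {L} m at {v} m/s: {dy * 1e6:.1f} micrometres")  # ~19.6 um
```

Displacements of tens of micrometres are why a grating-based Moiré pattern, rather than direct imaging of the beam, is used to extract the deflection.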
AEgIS collaboration The AEgIS collaboration comprises a number of institutions. See also Antiproton Decelerator GBAR experiment ALPHA experiment References External links Record for AEgIS experiment on INSPIRE-HEP AEgIS official home page Particle experiments CERN experiments Antimatter
AEgIS experiment
[ "Physics" ]
818
[ "Antimatter", "Matter" ]
50,847,184
https://en.wikipedia.org/wiki/Fourphit
Fourphit, also known as 4-isothiocyanato-PCP, is an irreversible dopamine transporter (DAT) blocker and a reversible NMDA receptor antagonist. It blocks the binding of methylphenidate to the DAT in vitro, though apparently not in vivo. In any case, the drug reduces the stimulant-like effects of cocaine in animals, whilst producing mostly negligible behavioral effects itself. Fourphit is an acylating derivative of phencyclidine (PCP) and a positional isomer of metaphit (3-isothiocyanato-PCP). See also RTI-76 p-ISOCOC Methocinnamox References Arylcyclohexylamines Covalent inhibitors Dopamine reuptake inhibitors Dissociative drugs Isothiocyanates NMDA receptor antagonists Piperidines
Fourphit
[ "Chemistry" ]
193
[ "Isothiocyanates", "Functional groups" ]
50,848,399
https://en.wikipedia.org/wiki/Segesterone
Segesterone (, ), also known as 17α-hydroxy-16-methylene-19-norprogesterone or as 17α-deacetylnestorone, is a steroidal progestin of the 19-norprogesterone group that was never marketed. An acetate ester, segesterone acetate, better known as nestorone or elcometrine, is marketed for clinical use. Segesterone acetate produces segesterone as a metabolite. References Tertiary alcohols Ketones Norpregnanes Progestogens Vinylidene compounds
Segesterone
[ "Chemistry" ]
132
[ "Ketones", "Functional groups" ]
50,849,684
https://en.wikipedia.org/wiki/Steroid%20ester
A steroid ester is an ester of a steroid. They include androgen esters, estrogen esters, progestogen esters, and corticosteroid esters. Steroid esters may be naturally occurring/endogenous like DHEA sulfate or synthetic like estradiol valerate. Esterification is useful because it can often convert the parent steroid into a prodrug of itself with altered chemical properties such as improved metabolic stability, water solubility, and/or lipophilicity. This, in turn, can enhance pharmacokinetics, for instance by improving the steroid's bioavailability and/or conferring depot activity and hence an extended duration with intramuscular or subcutaneous injection. Esterification of steroids with fatty acids was developed to prolong the duration of effect of steroid hormones. By 1957, more than 500 steroid esters had been synthesized, most frequently of androgens. The longer the fatty acid chain, up to a certain optimal length, the longer the duration when prepared as an oil solution and injected. Across a chain-length range of 6 to 12 carbon atoms, a length of 9 or 10 carbon atoms (nonanoate or decanoate ester) was found to be optimal in rodents in the case of testosterone esters. Fatty acid esters increase the lipophilicity of steroids, with longer fatty acids resulting in greater lipophilicity. The greater solubility in oil allows the steroid esters to be dissolved in a smaller oil volume, thereby allowing for larger doses with intramuscular injection. In addition, the greater the lipophilicity of the steroid, as measured by the octanol/water partition coefficient (logP), the slower its release from the oily depot at the injection site and the longer its duration. Steroid esters can also be prepared as crystalline aqueous suspensions. Aqueous suspensions of steroid crystals result in prolongation of duration with intramuscular injection similarly to oil solutions. The duration is longer than that of oil solutions, intermediate between oil solutions and subcutaneous pellet implants. The sizes of crystals in suspensions vary and can range from 0.1 μm to some hundreds of μm. The duration of crystalline steroid suspensions increases directly with the size of the crystals. However, crystalline suspensions have an irritating effect in the body, and intramuscular injections of crystalline steroid suspensions result in painful local reactions. These reactions worsen with larger crystals, and for this reason, crystal sizes must be limited to minimize local reactions. Particle sizes of more than 300 μm in the case of estradiol benzoate by intramuscular injection have been found to be too painful for use. In some cases, crystalline steroid suspensions are used not for prolongation of effect, but because the solubility of the steroid results in this preparation being the only practical way to deliver the steroid in a reasonable injection volume. Examples include cortisone acetate and hydrocortisone and its esters. A requirement of long-lasting crystalline steroid administration is that the steroid be sufficiently water-insoluble, so that it dissolves slowly and thereby attains a prolonged therapeutic effect. The crystals in suspensions can sometimes clump together or aggregate and grow in size. This can be avoided by careful formulation. Crystalline suspensions of steroids are prepared either by precipitation or by dispersing finely divided material in an aqueous suspension medium. Desired particle size can be achieved by grinding, for instance through the use of an atomizer.
Adolf Butenandt reported in 1932 that estrone benzoate in oil solution had a prolonged duration with injection in animals. No such prolongation of action occurred if it was given by intravenous injection. Estradiol benzoate was synthesized in 1933 and was marketed for use the same year. Sulfur-based esters Certain sulfur-based steroid esters have a sulfamate or sulfonamide moiety as the ester, typically at the C3 and/or C17β positions. Like many other steroid esters, they are prodrugs. Unlike other steroid esters however, they bypass first-pass metabolism with oral administration and have high oral bioavailability and potency, abolished first-pass hepatic impact, and long elimination half-lives and durations of action. They are under development for potential clinical use. Examples include the estradiol esters estradiol sulfamate (E2MATE; also a potent steroid sulfatase inhibitor) and EC508 (estradiol 17β-(1-(4-(aminosulfonyl)benzoyl)-L-proline)), the testosterone ester EC586 (testosterone 17β-(1-((5-(aminosulfonyl)-2-pyridinyl)carbonyl)-L-proline)), and sulfonamide esters of levonorgestrel and etonogestrel. See also List of steroid esters Steroid sulfate References Further reading Prodrugs Steroid esters
Steroid ester
[ "Chemistry" ]
1,094
[ "Chemicals in medicine", "Prodrugs" ]
50,850,373
https://en.wikipedia.org/wiki/Low%20field%20magnetoresistance
Colossal magnetoresistance (CMR) is a property of many perovskite oxides. However, the requirement of a large external magnetic field hinders potential applications. On one hand, researchers have sought the physical mechanisms behind the origin of CMR. On the other hand, they have tried to find alternative ways to further improve the CMR effect. Large magnetoresistance at relatively low magnetic fields has been reported in doped LaMnO3 polycrystalline samples, rather than single crystals. Spin-polarized tunneling and spin-dependent scattering across large-angle boundaries are responsible for this low-field magnetoresistance (LFMR). In order to obtain LFMR in epitaxial thin films (single-crystal-like materials), epitaxial strain has been used. Wang and Li reported an enhancement of the magnetoresistance in 5- to 15-nm-thick Pr0.67Sr0.33MnO3 films using out-of-plane tensile strain. In a conventional strain-engineering framework, epitaxial strain is only effective below the critical thickness, which is usually less than a few tens of nanometers. Tuning electron transport by epitaxial strain has therefore only been achieved in ultrathin layers, because epitaxial strain relaxes in relatively thick films. Vertically aligned heteroepitaxial nanoscaffolding films have been proposed to generate strain in thick films. A vertical lattice strain as large as 2% has been achieved in La0.7Sr0.3MnO3:MgO vertical nanocomposites. The magnetoresistance, magnetic anisotropy, and magnetization can be tuned by the vertical strain in films over a few hundred nanometers thick. References Thin films
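Magnetoresistance figures in this literature are usually quoted as the fractional change in resistance between zero field and applied field. A minimal helper follows, with invented example values that are not data from the studies mentioned above:

```python
def mr_ratio(r_field: float, r_zero: float) -> float:
    """Magnetoresistance as a fractional change: (R(H) - R(0)) / R(0)."""
    return (r_field - r_zero) / r_zero

# Invented illustrative values for a manganite film at low field:
print(f"{mr_ratio(r_field=80.0, r_zero=100.0):+.0%}")  # -20% (negative MR)
```

Note that some CMR papers instead normalize by R(H), which inflates the quoted percentage, so comparisons between reports need to check which convention is in use.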
Low field magnetoresistance
[ "Materials_science", "Mathematics", "Engineering" ]
368
[ "Nanotechnology", "Planes (geometry)", "Thin films", "Materials science" ]
36,555,808
https://en.wikipedia.org/wiki/C24H26N2O6
The molecular formula C24H26N2O6 (molar mass: 438.473 g/mol, exact mass: 438.1791 u) may refer to: JTE-907 Suxibuzone Molecular formulas
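The molar mass quoted above can be reproduced from standard atomic weights. The following is a small self-contained sketch; the parser handles only flat formulas like this one (no parentheses or hydrates).

```python
import re

ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol

def molar_mass(formula: str) -> float:
    """Molar mass in g/mol of a simple molecular formula without parentheses."""
    total = 0.0
    for symbol, count in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        total += ATOMIC_WEIGHTS[symbol] * (int(count) if count else 1)
    return total

print(round(molar_mass("C24H26N2O6"), 2))  # 438.48, consistent with 438.473 g/mol
```

The small difference from the quoted value comes from the precision of the atomic weights used.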
C24H26N2O6
[ "Physics", "Chemistry" ]
68
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,556,244
https://en.wikipedia.org/wiki/Double%20ionization
Double ionization is the formation of doubly charged ions when neutral atoms or molecules are exposed to laser radiation or to charged particles such as electrons, positrons or heavy ions. Double ionization is usually less probable than single-electron ionization. Two types of double ionization are distinguished: sequential and non-sequential. Sequential double ionization Sequential double ionization is a process of formation of doubly charged ions consisting of two single-electron ionization events: the first electron is removed from a neutral atom/molecule (leaving a singly charged ion in the ground state or an excited state), followed by detachment of the second electron from the ion. Non-sequential double ionization Non-sequential double ionization is a process whose mechanism differs in some respect from the sequential one. For example, both electrons may leave the system simultaneously (as in alkaline earth atoms, see below), or the second electron's liberation may be assisted by the first electron (as in noble gas atoms, see below). The phenomenon of non-sequential double ionization was experimentally discovered by Suran and Zapesochny for alkaline earth atoms as early as 1975. Despite extensive studies, the details of double ionization in alkaline earth atoms remain unknown. It is supposed that double ionization in this case is realized by transitions of both electrons through the spectrum of autoionizing atomic states, located between the first and second ionization potentials. For noble gas atoms, non-sequential double ionization was first observed by L'Huillier. Interest in this phenomenon grew rapidly after it was rediscovered in infrared fields and at higher intensities. Multiple ionization has also been observed. The mechanism of non-sequential double ionization in noble gas atoms differs from the one in alkaline earth atoms. For noble gas atoms in infrared laser fields, following one-electron ionization, the liberated electron can recollide with the parent ion. This electron acts as an "atomic antenna", absorbing energy from the laser field between ionization and recollision and depositing it into the parent ion. Inelastic scattering on the parent ion results in further collisional excitation and/or ionization. This mechanism is known as the three-step model of non-sequential double ionization, which is also closely related to the three-step model of high harmonic generation. The dynamics of double ionization within the three-step model depend strongly on the laser field intensity. The maximum energy (in atomic units) gained by the recolliding electron from the laser field is 3.17Up, where Up = E²/(4ω²) is the ponderomotive energy, E is the laser field strength, and ω is the laser frequency. Even when 3.17Up is far below the ionization potential Ip, experiments have observed correlated ionization. As opposed to the high-intensity regime (3.17Up > Ip), in the low-intensity regime (3.17Up < Ip) the assistance of the laser field during the recollision is vital. Classical and quantum analysis of the low-intensity regime demonstrates the following two ways of electron ejection after the recollision: First, the two electrons can be freed with little time delay compared to the quarter-cycle of the driving laser field. Second, the time delay between the ejection of the first and the second electron can be of the order of the quarter-cycle of the driving field. In these two cases, the electrons appear in different quadrants of the correlated spectrum.
If, following the recollision, the electrons are ejected nearly simultaneously, their parallel momenta have equal signs, and both electrons are driven by the laser field in the same direction toward the detector. If, after the recollision, the electrons are ejected with a substantial delay (a quarter-cycle or more), they end up going in opposite directions. These two types of dynamics produce distinctly different correlated spectra. See also List of laser articles Nonlinear optics Photoionization Ionization High harmonic generation Above threshold ionization References Atomic, molecular, and optical physics Quantum mechanics Nonlinear optics
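The recollision energy scale 3.17Up above is easy to evaluate with the widely used practical form Up[eV] ≈ 9.33×10⁻¹⁴ · I[W/cm²] · λ[μm]². A short sketch follows; the intensity and wavelength are illustrative values typical of such experiments, not taken from any specific study cited here.

```python
def ponderomotive_energy_ev(intensity_w_cm2: float, wavelength_um: float) -> float:
    """Ponderomotive energy Up in eV for a linearly polarized laser field.

    Uses the common practical form Up ~ 9.33e-14 * I[W/cm^2] * lambda[um]^2.
    """
    return 9.33e-14 * intensity_w_cm2 * wavelength_um ** 2

intensity = 1.0e14   # W/cm^2, illustrative infrared intensity (assumed)
wavelength = 0.8     # um, typical Ti:sapphire wavelength
up = ponderomotive_energy_ev(intensity, wavelength)
print(f"Up = {up:.2f} eV; max recollision energy 3.17*Up = {3.17 * up:.1f} eV")
```

At these values 3.17Up comes out near 19 eV, comparable to noble-gas ionization potentials, which is exactly the regime where the recollision-driven mechanism discussed above operates.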
Double ionization
[ "Physics", "Chemistry" ]
814
[ "Theoretical physics", "Quantum mechanics", " molecular", "Atomic", " and optical physics" ]
36,559,727
https://en.wikipedia.org/wiki/Horton%20sphere
A Horton sphere (sometimes spelled Hortonsphere), also referred to as a spherical tank or simply a sphere, is a spherical pressure vessel used for industrial-scale storage of liquefied gases. Examples of materials that can be stored in Horton spheres are liquefied petroleum gas (LPG), liquefied natural gas (LNG), and anhydrous ammonia. History The Horton sphere is named after Horace Ebenezer Horton (1843–1912), who founded and financed a bridge design and construction firm in about 1860; this firm merged with others to form the Chicago Bridge & Iron Company (CB&I) in 1889, a bridge-building firm that constructed the first bulk liquid storage tanks in the late nineteenth and early twentieth centuries. CB&I built the first field-erected spherical pressure vessels in the world at the Port Arthur, Texas refinery in 1923, and subsequently claimed 'Hortonsphere' as a registered trademark. G. T. Horton was issued a patent on 23 September 1947 describing how to make the welded steel support columns resistant to the thermal expansion and wind loading of the sphere. Because of their distinctive form, some have become the subject of conservation campaigns, such as that at Poughkeepsie, New York. Construction and use Initially, Horton spheres were constructed by riveting together separate wrought iron or steel plates, but from the 1940s they were of welded construction. The plates are formed in roller plants and cut to patterns. Today, spherical tanks are designed to codes such as ASME VIII, PD 5500, or EN 13445. The spherical geometry minimizes both the mechanical stress imposed on the tank walls by the internal pressure and the heat transfer through the walls. This makes spherical tanks the optimal solution for the storage of large amounts of liquefied gases, where liquefaction is achieved by pressurization, cryogenic refrigeration, or a combination thereof. Minimization of heat transfer is due to the sphere being the solid figure with the minimum surface area per unit volume. This is an advantage because it reduces the production of boil-off gas from both pressurized and refrigerated liquefied gases. Spherical tanks are used extensively for LPG and associated gases, such as propane, propylene, butane, and butadiene. They can be used for cryogenic storage of LNG, methane, ethane, ethylene, hydrogen, oxygen, nitrogen, etc. Support is usually provided by legs attached to the sphere at its equator. Legs are typically braced together with diagonal rods to provide lateral support against wind and seismic loads. Legs are fireproofed if the stored material is flammable. Pressure relief valves are installed at the top, from where level instrumentation is also accessed. Liquid inlet and outlet connections are at the bottom of the sphere. Bunds are usually provided around the tanks or tank clusters to contain potential leakage. However, if the gas is prone to boiling liquid expanding vapor explosions (BLEVE), spills should be directed away from the leaking tank. Horton spheres have also been used as space chambers, hyperbaric chambers, environmental chambers, vacuum vessels, process vessels, test vessels, containment vessels and surge vessels. Spherical tanks are a distinctive feature of certain sea-going gas carriers. See also Storage tank References Petroleum production Storage tanks
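The stress advantage mentioned above follows from thin-wall pressure vessel theory: for internal pressure p, radius r, and wall thickness t, the membrane stress in a sphere is pr/(2t), half the hoop stress pr/t of a cylinder of the same radius and thickness. A sketch with illustrative numbers, not taken from any particular tank design:

```python
def sphere_membrane_stress(p: float, r: float, t: float) -> float:
    """Thin-wall membrane stress in a spherical shell: p*r / (2*t), in Pa."""
    return p * r / (2 * t)

def cylinder_hoop_stress(p: float, r: float, t: float) -> float:
    """Thin-wall hoop stress in a cylindrical shell: p*r / t, in Pa."""
    return p * r / t

p = 1.6e6   # Pa (~16 bar), illustrative LPG storage pressure (assumed)
r = 8.0     # m, illustrative sphere radius (assumed)
t = 0.04    # m, illustrative wall thickness (assumed)

print(sphere_membrane_stress(p, r, t) / 1e6, "MPa in the sphere wall")      # 160.0
print(cylinder_hoop_stress(p, r, t) / 1e6, "MPa in an equal-size cylinder")  # 320.0
```

Halving the governing stress allows a thinner wall for the same pressure rating, which, together with the minimum surface-to-volume ratio, is why the spherical form is favoured for large liquefied-gas tanks.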
Horton sphere
[ "Chemistry", "Engineering" ]
680
[ "Chemical equipment", "Storage tanks" ]
37,979,747
https://en.wikipedia.org/wiki/Firewall%20%28physics%29
A black hole firewall is a hypothetical phenomenon in which an observer falling into a black hole encounters high-energy quanta at (or near) the event horizon. The "firewall" phenomenon was proposed in 2012 by physicists Ahmed Almheiri, Donald Marolf, Joseph Polchinski, and James Sully as a possible solution to an apparent inconsistency in black hole complementarity. The proposal is sometimes referred to as the AMPS firewall, an acronym for the names of the authors of the 2012 paper. The potential inconsistency highlighted by AMPS had been pointed out earlier by Samir Mathur, who used the argument in favour of the fuzzball proposal. The use of a firewall to resolve this inconsistency remains controversial, with physicists divided as to the solution to the paradox. The motivating paradox According to quantum field theory in curved spacetime, a single emission of Hawking radiation involves two mutually entangled particles. The outgoing particle escapes and is emitted as a quantum of Hawking radiation; the infalling particle is swallowed by the black hole. Assume that a black hole formed a finite time in the past and will fully evaporate away in some finite time in the future. Then, it will only emit a finite amount of information encoded within its Hawking radiation. For an old black hole that has crossed the half-way point of evaporation, general arguments from quantum-information theory by Page and Lubkin suggest that the new Hawking radiation must be entangled with the old Hawking radiation. However, since the new Hawking radiation must also be entangled with degrees of freedom behind the horizon, this creates a paradox: a principle called "monogamy of entanglement" requires that, like any quantum system, the outgoing particle cannot be fully entangled with two independent systems at the same time; yet here the outgoing particle appears to be entangled with both the infalling particle and, independently, with past Hawking radiation. AMPS initially argued that to resolve the paradox physicists may eventually be forced to give up one of three time-tested principles: Einstein's equivalence principle, unitarity, or existing quantum field theory. However, it is now accepted that an additional tacit assumption in the monogamy paradox was that of locality. A common view is that theories of quantum gravity do not obey exact locality, which leads to a resolution of the paradox. On the other hand, some physicists argue that such violations of locality cannot resolve the paradox. The "firewall" resolution to the paradox Some scientists suggest that the entanglement between the infalling particle and the outgoing particle must somehow be immediately broken. Breaking this entanglement would release large amounts of energy, thus creating a searing "black hole firewall" at the black hole event horizon. This resolution requires a violation of Einstein's equivalence principle, which states that free fall is indistinguishable from floating in empty space. This violation has been characterized as "outrageous"; theoretical physicist Raphael Bousso has complained that "a firewall simply can't appear in empty space, any more than a brick wall can suddenly appear in an empty field and smack you in the face." Non-firewall resolutions to the paradox Some scientists suggest that there is in fact no entanglement between the emitted particle and previous Hawking radiation. This resolution would require black hole information loss, a controversial violation of unitarity.
Others, such as Steve Giddings, suggest modifying quantum field theory so that entanglement would be gradually lost as the outgoing and infalling particles separate, resulting in a more gradual release of energy inside the black hole, and consequently no firewall. The Papadodimas–Raju proposal posited that the interior of the black hole was described by the same degrees of freedom as the Hawking radiation. This resolves the monogamy paradox by identifying the two systems that the late Hawking radiation is entangled with. Since, in this proposal, these systems are the same, there is no contradiction with the monogamy of entanglement. Along similar lines, Juan Maldacena and Leonard Susskind suggested in the ER=EPR proposal that the outgoing and infalling particles are somehow connected by wormholes, and therefore are not independent systems. The fuzzball picture resolves the dilemma by replacing the 'no-hair' vacuum with a stringy quantum state, thus explicitly coupling any outgoing Hawking radiation with the formation history of the black hole. Stephen Hawking received widespread mainstream media coverage in January 2014 with an informal proposal to replace the event horizon of a black hole with an "apparent horizon" where infalling matter is suspended and then released; however, some scientists have expressed confusion about what precisely is being proposed and how the proposal would solve the paradox. Characteristics and detection The firewall would exist at the black hole's event horizon, and would be invisible to observers outside the event horizon. Matter passing through the event horizon into the black hole would immediately be "burned to a crisp" by an arbitrarily hot "seething maelstrom of particles" at the firewall. In a merger of two black holes, the characteristics of a firewall (if any) may leave a mark on the outgoing gravitational radiation as "echoes" when waves bounce in the vicinity of the fuzzy event horizon. The expected quantity of such echoes is theoretically unclear, as physicists do not currently have a good physical model of firewalls. In 2016, cosmologist Niayesh Afshordi and others argued there were tentative signs of some such echo in the data from the first black hole merger detected by LIGO; more recent work has argued there is no statistically significant evidence for such echoes in the data. See also Black hole information paradox Black hole thermodynamics Photon sphere Gravitational time dilation Magnetospheric eternally collapsing object References Black holes Quantum gravity Quantum mechanical entropy Theorems in general relativity
Firewall (physics)
[ "Physics", "Astronomy", "Mathematics" ]
1,231
[ "Physical phenomena", "Black holes", "Physical quantities", "Equations of physics", "Theorems in general relativity", "Unsolved problems in physics", "Astrophysics", "Quantum gravity", "Theorems in mathematical physics", "Entropy", "Density", "Quantum mechanical entropy", "Stellar phenomena"...
37,979,764
https://en.wikipedia.org/wiki/A.%20Hari%20Reddi
A. Hari Reddi (born October 20, 1942) is a University of California Distinguished Professor and holder of the Lawrence J. Ellison Endowed Chair in Musculoskeletal Molecular Biology at the University of California, Davis. His research played an indispensable role in the identification, isolation and purification of bone morphogenetic proteins (BMPs) that are involved in bone formation and repair. The molecular mechanism of bone induction studied by Professor Reddi led to the conceptual advance in tissue engineering that morphogens, in the form of metabologens bound to an insoluble extracellular matrix scaffold, act in collaboration to stimulate stem cells to form cartilage and bone. The Reddi laboratory has also made important discoveries unraveling the role of the extracellular matrix in bone and cartilage tissue regeneration and repair. Professor Reddi was previously the Virginia M. and William A. Percy Chair and Professor in Orthopaedic Surgery, Professor of Biological Chemistry, and Professor of Oncology at the Johns Hopkins University School of Medicine. He was also previously a faculty member at the University of Chicago and a senior scientist at the National Institutes of Health. Research Professor Reddi discovered that bone induction is a sequential multistep cascade involving chemotaxis, mitosis, and differentiation. Early studies in his laboratory at the University of Chicago and National Institutes of Health unraveled the sequence of events involved in bone matrix-induced bone morphogenesis. Using a battery of in vitro and in vivo bioassays for bone formation, a systematic study was undertaken in his laboratory to isolate and purify putative bone morphogenetic proteins. Reddi and colleagues were the first to identify BMPs as pleiotropic regulators, acting in a concentration-dependent manner. They demonstrated first that BMPs bind the extracellular matrix, are present at the apical ectodermal ridge in the developing limb bud, are chemotactic for human monocytes, and have neurotropic potential. His laboratory pioneered the use of BMPs in regenerative orthopedics and dentistry. Professor Reddi's h-index is 109 with over 300 peer-reviewed manuscripts. Education and Mentors Hari Reddi received his PhD from the University of Delhi in reproductive endocrinology under the mentorship of M.R.N. Prasad. Reddi did postdoctoral work with Howard Guy Williams-Ashman at the Johns Hopkins University School of Medicine. Reddi was also a student of Charles Brenton Huggins, who shared the 1966 Nobel Prize in Physiology or Medicine with Peyton Rous and was recognized for his work on the endocrine regulation of cancer. International Conference of Bone Morphogenetic Proteins Reddi is the founder of the International Conference on Bone Morphogenetic Proteins (BMPs). He organized the first conference at the Johns Hopkins University School of Medicine in 1994. The conference is held every two years, rotating between the United States and an international venue. Awards and Honors 1991 Elizabeth Winston Lanier Kappa Delta Award by the American Academy of Orthopaedic Surgeons 1997 Inaugural winner of the Marshall Urist Award by the Orthopaedic Research Society 1999 Nicolas Andry Lifetime Achievement Award by The Association of Bone and Joint Surgeons 2015 Elected Member of the National Academy of Inventors References Biochemistry educators Bone morphogenetic protein Tissue engineering
A. Hari Reddi
[ "Chemistry", "Engineering", "Biology" ]
672
[ "Biological engineering", "Cloning", "Chemical engineering", "Tissue engineering", "Biochemistry educators", "Biochemistry", "Medical technology" ]
37,982,021
https://en.wikipedia.org/wiki/Automatic%20trip
An automatic trip is an action performed by some system, usually a safety instrumented system, programmable logic controller, or distributed control system, to put an industrial process into a safe state. It is triggered by a monitored parameter entering a pre-determined unsafe state. A trip is usually preceded by an alarm, giving a process operator a chance to correct the condition and prevent the trip, since trips are typically costly because of lost production. References Safety engineering
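To make the alarm-then-trip sequence concrete, here is a minimal sketch assuming hypothetical threshold values and a simple polling loop; real safety instrumented systems implement this in certified PLC logic with voting, deadbands, and manual-reset interlocks, not in application code.

```python
# Illustrative sketch of alarm-then-trip logic (hypothetical thresholds).
# A real safety instrumented system runs certified PLC logic; this only
# shows the idea of an alarm limit preceding the trip limit.

ALARM_LIMIT = 8.5   # pressure (bar) at which the operator is alerted
TRIP_LIMIT = 10.0   # pressure (bar) at which the process is shut down

def evaluate(pressure_bar: float) -> str:
    """Return the action for one scan of the monitored parameter."""
    if pressure_bar >= TRIP_LIMIT:
        return "TRIP"    # drive the process to its safe state
    if pressure_bar >= ALARM_LIMIT:
        return "ALARM"   # give the operator a chance to intervene
    return "NORMAL"

for reading in (7.9, 8.7, 9.6, 10.2):
    print(reading, evaluate(reading))
# NORMAL -> ALARM -> ALARM -> TRIP: the alarm precedes the costly trip.
```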
Automatic trip
[ "Engineering" ]
91
[ "Safety engineering", "Systems engineering" ]
33,939,070
https://en.wikipedia.org/wiki/Biobank%20ethics
Biobank ethics refers to the ethics pertaining to all aspects of biobanks. The issues examined in the field of biobank ethics are special cases of clinical research ethics. Overview of issues Many of the leading controversial issues related to biobanking can be framed as pairs: a point on which there is consensus, and an aspect of that same point for which there is no consensus. Privacy for research participants There is broad consensus that when a person donates a specimen for research then that person has a right to privacy thereafter. To this end, researchers balance the need for specimens to be anonymous or de-identified from protected health information with the need to have access to data about the specimen so that researchers can use the sample without knowing the identity of the donor. In the United States, for example, the Office for Human Research Protections often promotes a traditional system wherein data which could identify a participant is coded, and a key which could decipher the identities, should special circumstances outside of usual research require it, is stored elsewhere, away from the data. Complications arise in many situations, such as when the identity of the donor is released anyway or when the researchers want to contact the donor of the sample. Donor identities could become known if the data and decipher key are insecure; more likely, with rich datasets, the identities of donors could be determined from a few pieces of information which, before the advent of computer communication, were thought to pose no risk to anonymity. Among the concerns which participants in biobanks have expressed are giving personal information to researchers and having data used against them somehow. Scientists have demonstrated that in many cases where participants' names were removed from data, the data still contained enough information to make identification of the participants possible. This is because the historical methods of protecting confidentiality and anonymity became obsolete when radically more detailed databases became available. Another problem is that even small amounts of genetic data, such as a record of 100 single nucleotide polymorphisms, can uniquely identify anyone. There have been problems deciding what safeguards should be in place for storing medical research data. In response, some researchers have made efforts to describe what constitutes sufficient security and to recognize what seemingly anonymized information can be used to identify donors. Ownership of specimens When a person donates a specimen to a researcher, it is not easy to describe what the participant is donating because ownership of the specimen represents more rights than physical control over the specimen. The specimens themselves have commercial value, as can research products made from them. Fundamental research benefits all sectors, including government, non-profit, and commercial, but these sectors will not benefit equally. Specimens may be subject to biological patenting, or research results from specimen experimentation may lead to the development of products which some entity will own. The extent to which a specimen donor should be able to restrict the way their specimen is used is a matter of debate. Some researchers make the argument that the specimens and data should be publicly owned. 
Other researchers say that by calling for donations and branding the process as altruistic, the entities organizing biobank research are circumventing difficult ethical questions which participants and researchers ought to address. Return of results There is broad consensus that participants in clinical research have a right to know the results of a study in which they participated so that they can check the extent to which their participation delivered beneficial results to their community. The right to justice in the Belmont Report is a part of this idea. Despite the consensus that researchers should return some information to communities, there is no universally recognized authoritative policy on how researchers should return results to communities, and the views and practices of researchers in the field vary widely. Returning results can be problematic for many reasons, such as increased difficulty of tracking participants who donated a sample as time passes, the conflict with the participant's right to privacy, the inability of researchers to meaningfully explain scientific results to participants, general disinterest of participants in study results, and deciding what constitutes a return of results. If genetic testing is done, then researchers may obtain health information about participants, but in many cases there is no plan in place for giving participants information derived from their samples. Informed consent Because donating a specimen involves consideration of many issues, different people will have different levels of understanding of what they are doing when they donate a specimen. Since it is difficult to explain every issue to everyone, problems of giving informed consent arise when researchers take samples. A special informed consent problem happened historically with biobanks. Prior to the advent of biobanks, researchers would ask specimen donors for consent to participate in a single study, and give participants information about that study. In a biobank system, a researcher may have many specimens collected over many years and, long after the donors gave their samples, may want to conduct a new study using those samples but have no good way to give donors information about that study and collect their consent. This problem was first brought to wide attention by an article published in 1995. Many people have the opportunity to donate samples to medical research in the course of their regular medical care, but there are ethical problems in having one's own doctor request specimens. Most participants are willing to provide consent for biospecimen donation, and attitudes toward disease-specific or related biobanks are generally favorable. Donors to biobanks frequently do not have a good understanding of the concept of a biobank or the implications of donating a specimen to one. Researchers support biobanking despite risk to participants because the benefit is high, it pays respect to people's wishes to involve themselves in research, current practices and culture support this kind of research, and consensus is that the risk of participation is low. References Biobanks Medical ethics Research ethics Bioethics
Biobank ethics
[ "Technology", "Biology" ]
1,169
[ "Bioethics", "Research ethics", "Bioinformatics", "Ethics of science and technology", "Biobanks" ]
33,942,954
https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20the%20European%20Union
Genetic engineering in the European Union has varying degrees of regulation. Regulation History Until the 1990s, Europe's regulation was less strict than in the United States, one turning point being cited as the export of the United States' first GM-containing soy harvest in 1996. The GM soy made up about 2% of the total harvest at the time, and EuroCommerce and European food retailers required that it be separated. Although the European Commission (EC) did eventually relent, its acceptance was conditioned on sale as processed products and never as seed, and the episode sparked American concerns that Europe would soon become a tighter regulatory environment. The Clinton Administration was widely urged to harmonize standards in its impending second term to guarantee an open European market. In 1998, the use of MON810, a Bt-expressing maize conferring resistance to the European corn borer, was approved for commercial cultivation in Europe. Shortly thereafter, the EU enacted a de facto moratorium on new approvals of GMOs pending new regulatory laws passed in 2003. Those new laws provided the EU with possibly the most stringent GMO regulations in the world. The European Food Safety Authority (EFSA) was created in 2002 with the primary goal of preventing future food crises in Europe. All GMOs, along with irradiated food, are considered "new food" and subject to extensive, case-by-case, science-based food evaluation by the EFSA. The criteria for authorization fall into four broad categories: "safety", "freedom of choice", "labelling" and "traceability". The EFSA reports to the European Commission (EC), which then drafts a proposal for granting or refusing the authorisation. This proposal is submitted to the Section on GM Food and Feed of the Standing Committee on the Food Chain and Animal Health; if accepted, it will be adopted by the EC or passed on to the Council of Agricultural Ministers. Once in the Council it has three months to reach a qualified majority for or against the proposal; if no majority is reached, the proposal is passed back to the EC, which will then adopt the proposal. However, even after authorization, individual EU member states can ban individual varieties under a 'safeguard clause' if there are "justifiable reasons" that the variety might cause harm to humans or the environment. The member state must then supply sufficient evidence that this is the case. The commission is obliged to investigate these cases and either overturn the original registrations or request the country to withdraw its temporary restriction. The laws of the EU also required that member nations establish coexistence regulations. In many cases, national coexistence regulations include minimum distances between fields of GM crops and non-GM crops. The distances for GM maize from non-GM maize for the six largest biotechnology countries are: France – 50 metres, Britain – 110 metres for grain maize and 80 for silage maize, Netherlands – 25 metres in general and 250 for organic or GM-free fields, Sweden – 15–50 metres, Finland – data not available, and Germany – 150 metres and 300 from organic fields. Larger minimum distance requirements discriminate against adoption of GM crops by smaller farms. In 2006, the World Trade Organization concluded that the EU moratorium, which had been in effect from 1999 to 2004, had violated international trade rules. The moratorium had not affected previously approved crops. The only crop authorised for cultivation before the moratorium was Monsanto's MON 810. 
The next approval for cultivation was the Amflora potato for industrial applications in 2010, which was grown in Germany, Sweden and the Czech Republic that year. The slow pace of approval was criticized as endangering European food safety, although as of 2012, the EU had authorized the use of 48 genetically modified organisms. Most of these were for use in animal feed (it was reported in 2012 that the EU imports about 30 million tons a year of GM crops for animal consumption), food, or food additives. Of these, 26 were varieties of maize. In July 2012, the EU gave approval for an Irish trial cultivation of potatoes resistant to the blight that caused the Great Irish Famine. The safeguard clause mentioned above has been applied by many member states in various circumstances, and in April 2011 there were 22 active bans in place across six member states: Austria, France, Germany, Luxembourg, Greece, and Hungary. However, on review many of these have been considered scientifically unjustified. In January 2005, the Hungarian government announced a ban on importing and planting of genetically modified maize seeds, which was subsequently authorized by the EU. In February 2008, the French government used the safeguard clause to ban the cultivation of MON810 after Senator Jean-François Le Grand, chairman of a committee set up to evaluate biotechnology, said there were "serious doubts" about the safety of the product (although this ban was declared illegal in 2011 by the European Court of Justice and the French Conseil d'État). The French farm ministry reinstated the ban in 2012, but this was rejected by the EFSA. In 2009 German Federal Minister Ilse Aigner announced an immediate halt to cultivation and marketing of MON810 maize under the safeguard clause. In March 2010, Bulgaria imposed a complete ban on genetically modified crop growing either commercially or for trials. The cabinet of Boyko Borisov initially imposed a five-year moratorium, but later extended this to a permanent ban after widespread public protests against the introduction of genetically modified crops in the country. In January 2013, Poland's government placed a ban on Monsanto's GM corn, MON 810. It launched a communication campaign with farmers, announcing that it would now be strictly monitoring farms for GM corn crops. Poland is the eighth EU member to ban the production of GMOs even though they have been approved by the European Food Safety Authority. The EU is not officially against the use of GM crops in laboratory research and is working to regulate the field. In 2012, the European Food Safety Authority (EFSA) Panel on Genetically Modified Organisms (GMO) released a "Scientific opinion addressing the safety assessment of plants developed through cisgenesis and intragenesis" in response to a request from the European Commission. The opinion was that while "the frequency of unintended changes may differ between breeding techniques and their occurrence cannot be predicted and needs to be assessed case by case", "similar hazards can be associated with cisgenic and conventionally bred plants, while novel hazards can be associated with intragenic and transgenic plants." In other words, cisgenic approaches, which introduce genes from the same species, should be considered similar in risk to conventional breeding approaches, whilst transgenic plants can come with new hazards. 
In 2014, a panel of experts set up by the UK Biotechnology and Biological Sciences Research Council argued that "A regulatory system based on the characteristics of a novel crop, by whatever method it has been produced, would provide a more effective and robust regulation than current EU processes, which consider new crop varieties differently depending on the method used to produce them." They said that new forms of "genome editing" allow targeting specific sites and making precise changes in the DNA of crops. In the future it would become increasingly difficult if not impossible to tell which method has been used (conventional breeding or genetic engineering) to produce a novel crop. They proposed that the existing EU regulatory system should be replaced with a more logical system like that used for new medicines. In 2015, Germany, Poland, France, Scotland and several other member states opted out of cultivating GMO crops in their territory. A Eurobarometer survey has indicated that the "level of concern" about genetically engineered food in Europe decreased significantly, from 69% in 2010 to 27% in 2019. In 2022, around one quarter (26%) of EU citizens indicated the presence of genetically modified ingredients in food or drinks as a concern, while a smaller proportion (8%) cited the use of new biotechnology in food production, i.e. genome editing. Labeling and traceability The regulations concerning the import and sale of GMOs for human and animal consumption grown outside the EU involve providing freedom of choice to the farmers and consumers. All food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. On two occasions, GMOs unapproved by the EC have arrived in the EU and been forced to return to their port of origin. The first was in 2006, when a shipment of rice from America containing an experimental GMO variety (LLRice601) not meant for commercialisation arrived at Rotterdam. The second was in 2009, when trace amounts of a GMO maize approved in the US were found in a "non-GM" soy flour cargo. The coexistence of GM and non-GM crops has raised significant concern in many European countries, and so EU law also requires that all GM food be traceable to its origin, and that all food with GM content greater than 0.9% be labelled. Due to high demand from European consumers for freedom of choice between GM and non-GM foods, EU regulations require measures to avoid mixing of foods and feed produced from GM crops and conventional or organic crops, which can be done via isolation distances or biological containment strategies. (Unlike the US, European countries require labeling of GM food.) European research programs such as Co-Extra, Transcontainer, and SIGMEA are investigating appropriate tools and rules for traceability. The OECD has introduced a "unique identifier" which is given to any GMO when it is approved, which must be forwarded at every stage of processing. Such measures are generally not used in North America because they are very costly and the industry recognizes no safety-related reason to employ them. The EC has issued guidelines to allow the co-existence of GM and non-GM crops through buffer zones (where no GM crops are grown). These are regulated by individual countries, and vary from 15 metres in Sweden to 800 metres in Luxembourg. 
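As an illustration of the 0.9% rule described above, here is a minimal sketch with a made-up ingredient list; real compliance additionally depends on whether the GMO is approved and on adventitious-presence provisions not modelled here, and the threshold applies per ingredient.

```python
# Minimal sketch of the EU 0.9% labelling threshold (illustrative only).
# Real rules also distinguish approved vs unapproved GMOs and whether the
# presence is adventitious or technically unavoidable.

LABEL_THRESHOLD = 0.009  # 0.9% per ingredient, expressed as a fraction

def needs_gm_label(gm_fraction_by_ingredient: dict[str, float]) -> bool:
    """True if any ingredient exceeds the 0.9% approved-GMO threshold."""
    return any(f > LABEL_THRESHOLD
               for f in gm_fraction_by_ingredient.values())

print(needs_gm_label({"soy flour": 0.004, "maize starch": 0.012}))  # True
print(needs_gm_label({"soy flour": 0.004}))                         # False
```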
Scope In its regulations the European Union considers genetically modified organisms only to be food and feed for all practical purposes, in contrast to the definition of genetically modified organisms, which also encompasses animals. Approach The EU uses the precautionary principle, demanding a pre-market authorisation for any GMO to enter the market and a post-market environmental monitoring. Both the European Food Safety Authority (EFSA) and the member states author a risk assessment. This assessment must show that the food or feed is safe for human and animal health and the environment "under its intended conditions of use". As of 2010, the EU treats all genetically modified crops (GMO crops), along with irradiated food, as "new food". They are subject to extensive, case-by-case, science-based food evaluation by the European Food Safety Authority (EFSA). This agency reports to the European Commission, which then drafts proposals for granting or refusing authorisation. Each proposal is submitted to the "Section on GM Food and Feed of the Standing Committee on the Food Chain and Animal Health". If accepted, it is either adopted by the EC or passed on to the Council of Agricultural Ministers. The council has three months to reach a qualified majority for or against the proposal. If no majority is reached, the proposal is passed back to the EC, which then adopts the proposal. The EFSA uses independent scientific research to advise the European Commission on how to regulate different foods in order to protect consumers and the environment. For GMOs, the EFSA's risk assessment includes molecular characterization, potential toxicity and potential environmental impact. Each GMO must be reassessed every 10 years. In addition, applicants who wish to cultivate or process GMOs must provide a detailed surveillance plan for after authorization. This ensures that the EFSA will know if risk to consumers or the environment heightens and that they can then act to lower the risk or deauthorize the GMO. In total, 49 GMO crops, consisting of eight GM cottons, 28 GM maizes, three GM oilseed rapes, seven GM soybeans, one GM sugar beet, one GM bacterial biomass, and one GM yeast biomass, have been authorised. Review of authorisation Member States of the EU may invoke a safeguard clause to temporarily restrict or prohibit use and/or sale of a GMO crop within their territory if they have justifiable reasons to consider that an approved GMO crop may be a risk to human health or the environment. The EC is obliged to investigate, and either overturn the original registrations or ask the country to withdraw its temporary restriction. By 2012, seven countries had submitted safeguard clauses. The EC investigated and rejected those from six countries ("...the scientific evidence currently available did not invalidate the original risk assessments for the products in question...") and one, the UK, withdrew. Import rules The EC Directorate-general for agriculture and rural development states that the regulations concerning the import and sale of GMOs for human and animal consumption grown outside the EU provide freedom of choice to farmers and consumers. All food (including processed food) or feed which contains greater than 0.9% of approved GMOs must be labelled. As of 2010, GMOs unapproved by the EC had been found twice and returned to their port of origin: first in 2006, when a shipment of rice from the U.S. 
containing an experimental GMO variety (LLRice601) not meant for commercialisation arrived at Rotterdam, the second time in 2009, when trace amounts of a GMO maize approved in the US were found in a non-GM soy flour cargo. In 2012, the EU imported about 30 million tons of GM crops for animal consumption. Adoption of GMO crops Spain has been the largest producer of GM crops in Europe, with the GM maize planted in 2013 equaling 20% of Spain's maize production. Smaller amounts were produced in the Czech Republic, Slovakia, Portugal, Romania and Poland. France and Germany are the major opponents of genetically modified food in Europe, although Germany has approved Amflora, a potato modified to produce higher levels of starch for industrial purposes. In addition to France and Germany, other European countries that placed bans on the cultivation and sale of GMOs include Austria, Hungary, Greece, and Luxembourg. Poland has also tried to institute a ban, with backlash from the European Commission. Bulgaria effectively banned cultivation of genetically modified organisms on 18 March 2010. In 2010, Austria, Bulgaria, Cyprus, Hungary, Ireland, Latvia, Lithuania, Malta, Slovenia and the Netherlands wrote a joint paper requesting that individual countries should have the right to decide whether to cultivate GM crops. By the year 2010, the only GMO food crop with approval for cultivation in Europe was MON 810, a Bt-expressing maize conferring resistance to the European corn borer that gained approval in 1998. In March 2010 a second GMO, a potato called Amflora, was approved for cultivation for industrial applications in the EU by the European Commission and was grown in Germany, Sweden and the Czech Republic that year. Amflora was withdrawn from the EU market in 2012, and in 2013 its approval was annulled by an EU court. Fearing that gene flow could occur between related crops, the EC issued new guidelines in 2010 regarding the co-existence of GM and non-GM crops. Co-existence is regulated by the use of buffer zones and isolation distances between the GM and non-GM crops. The guidelines are not binding and each Member State can implement its own regulations, which has resulted in buffer zones ranging from 15 metres (Sweden) to 800 metres (Luxembourg). Member States may also designate GM-free zones, effectively allowing them to ban cultivation of GM crops in their territory without invoking a safeguard clause. Implementation in the Member States and in Switzerland Bulgaria In October 2015, Bulgaria announced it had opted out of growing genetically modified crops, effectively banning the cultivation of different types of GMO corn and soybeans. France France adopted the EU laws on growing GMOs in 2007 and was fined €10 million by the European Court of Justice for the six-year delay in implementing the laws. In February 2008, the French government used the safeguard clause to ban the cultivation of MON 810 after Senator Jean-François Le Grand, chairman of a committee to evaluate biotechnology, said there were "serious doubts" about the safety of the product. Twelve scientists and two economists on the committee accused Le Grand of misrepresenting the report and said they did not have "serious doubts", although questions remained concerning the impact of Bt-maize on health and the environment. The EFSA reviewed studies the French government had submitted to back up its claim, and concluded that there was no new evidence to undermine its prior safety findings and considered the decision "scientifically unfounded". 
The High Council for Biotechnology subcommittee dealing with economic, ethical and social aspects recommended an additional "GMO-free" label for anything containing less than 0.1% GMO, which was due to come into effect in late 2010. In 2011, the European Court of Justice and the French Conseil d'État ruled that the French farm ministry ban of MON 810 was illegal, as it failed "to give proof of the existence of a particularly high level of risk for the health and the environment". On 17 September 2015, the French government announced it would effectively continue to ban GMO crops by enacting an "opt-out" provision, previously agreed to for the 28 EU member states in March 2015, by asking the European Commission for France to extend the GMO ban on nine additional strains of maize. The policy announcement was made simultaneously by the French farm and environment ministries. Germany In April 2009, German Federal Minister Ilse Aigner announced an immediate halt to cultivation and marketing of MON 810 maize under the safeguard clause. The ban was based on "expert opinion" that suggested there were reasonable grounds to believe that MON 810 maize presents a danger to the environment. Three French scientists reviewing the scientific evidence used to justify the ban concluded that it did not use a case-by-case approach, confused potential hazards with proven risks and ignored the meta-knowledge on Bt-expressing maize, instead focusing on selected individual studies. In August 2015, Germany announced its intention to ban genetically modified crops. Northern Ireland In September 2015, Northern Ireland announced a ban on genetically modified crops. Romania Romania grew GM soybeans in 1999, increasing the crop's yield by 30%, permitting the export of excess product. When the country joined the European Union in 2007 it was no longer allowed to grow the GM crop, resulting in the total area planted in soybeans dropping by 70%. The next year, this produced a trade deficit of €117.4m for purchase of replacement products. Romanian farmers have been very much in favour of relegalisation of GM soy. Switzerland In 1992, Switzerland voted in favour of the introduction of an article about assisted reproductive technologies and genetic engineering in the Swiss Federal Constitution. In 1995, Switzerland introduced regulations requiring labelling of food containing genetically modified organisms. It was one of the first countries to introduce labelling requirements for GMOs. In 2003, the Federal Assembly adopted the "Federal Act on Non-Human Gene Technology". A moratorium on genetically modified organisms in Swiss agriculture, introduced by a federal popular initiative, was in force from 2005 to 2010. Later, the Swiss parliament extended this moratorium to 2013. Between 2007 and 2011, the Swiss Government funded thirty projects to investigate the risks and benefits of GMOs. These projects concluded that there were no clear health or environmental dangers associated with planting GMOs. However, they also concluded that there was little economic incentive for farmers to adopt GMOs in Switzerland. The Swiss parliament then extended the moratorium to 2017, and then to 2021. As of 2016, six cantons (Bern, Fribourg, Geneva, Jura, Ticino and Vaud) have introduced laws against genetically modified organisms in agriculture. More than one hundred communes have declared themselves free of genetically modified organisms. The cantons of Switzerland perform tests to assess the presence of genetically modified organisms in foodstuffs. 
In 2008, 3% of the tested samples contained detectable amounts of GMOs. In 2012, 12.1% of the samples analysed contained detectable amounts of GMOs (including 2.4% of GMOs forbidden in Switzerland). All the samples tested (except one) contained less than 0.9% of GMOs, which is the threshold that imposes labelling indicating the presence of GMOs in food. Scotland In August 2015, the Scottish government announced that it would "shortly submit a request that Scotland is excluded from any European consents for the cultivation of GM crops, including the variety of genetically modified maize already approved and six other GM crops that are awaiting authorisation". See also Regulation of genetic engineering European Food Safety Authority Notes and references External links European Food and Safety Authority EU Register of authorised GMOs – European Commission Regulation of genetically modified organisms European Union and agriculture Food safety in the European Union European Union regulations
Genetically modified food in the European Union
[ "Engineering", "Biology" ]
4,323
[ "Regulation of genetically modified organisms", "Genetic engineering", "Regulation of biotechnologies" ]
33,948,767
https://en.wikipedia.org/wiki/Biomarkers%20of%20aging
Biomarkers of aging are biomarkers that could predict functional capacity at some later age better than chronological age. Stated another way, biomarkers of aging would give the true "biological age", which may be different from the chronological age. Validated biomarkers of aging would allow for testing interventions to extend lifespan, because changes in the biomarkers would be observable throughout the lifespan of the organism. Although maximum lifespan would be a means of validating biomarkers of aging, it would not be a practical means for long-lived species such as humans because longitudinal studies would take far too much time. Ideally, biomarkers of aging should assay the biological process of aging and not a predisposition to disease, should cause a minimal amount of trauma to assay in the organism, and should be reproducibly measurable during a short interval compared to the lifespan of the organism. An assemblage of biomarker data for an organism could be termed its "ageotype". Although graying of hair increases with age, hair graying cannot be called a biomarker of aging. Similarly, skin wrinkles and other common changes seen with aging are not better indicators of future functionality than chronological age. Biogerontologists have continued efforts to find and validate biomarkers of aging, but success thus far has been limited. Levels of CD4 and CD8 memory T cells and naive T cells have been used to give good predictions of the expected lifespan of middle-aged mice. Advances in big data analysis have allowed new types of "aging clocks" to be developed. The epigenetic clock is a promising biomarker of aging and can accurately predict human chronological age. Basic blood biochemistry and cell counts can also be used to accurately predict chronological age. Further studies of the hematological clock on large datasets from South Korean, Canadian, and Eastern European populations demonstrated that biomarkers of aging may be population-specific and predictive of mortality. It is also possible to predict human chronological age using the transcriptomic clock. Epigenetic marks Loss of histones A new epigenetic mark found in studies of aging cells is the loss of histones. Most evidence shows that loss of histones is linked to cell division. In aging, dividing yeast, MNase-seq (micrococcal nuclease sequencing) showed a loss of ~50% of nucleosomes. Proper histone dosage is important in yeast, as shown by the extended lifespans of strains that overexpress histones. A consequence of histone loss in yeast is the amplification of transcription. In younger cells, genes that are most induced with age have specific chromatin structures, such as fuzzy nuclear positioning, lack of a nucleosome depleted region (NDR) at the promoter, weak chromatin phasing, a higher frequency of TATA elements, and higher occupancy of repressive chromatin factors. In older cells, however, nucleosome loss at the promoters of these same genes is more prevalent, leading to their higher transcription. This phenomenon is not only seen in yeast, but has also been seen in aging worms, during aging of human diploid primary fibroblasts, and in senescent human cells. In human primary fibroblasts, reduced synthesis of new histones was seen to be a consequence of shortened telomeres that activate the DNA damage response. Loss of core histones may be a general epigenetic mark of aging across many organisms. 
Histone variants In addition to the core histones, H2A, H2B, H3, and H4, there are other versions of the histone proteins that can be significantly different in their sequence and are important for regulating chromatin dynamics. Histone H3.3 is a variant of histone H3 that is incorporated into the genome independent of replication. It is the major form of histone H3 seen in the chromatin of senescent human cells, and it appears that excess H3.3 can drive senescence. There are multiple variants of histone H2; the one most notably implicated in aging is macroH2A. The function of macroH2A has generally been assumed to be transcriptional silencing; most recently, it has been suggested that macroH2A is important in repressing transcription at Senescence-Associated Heterochromatin Foci (SAHF). Chromatin that contains macroH2A is impervious to ATP-dependent remodeling proteins and to the binding of transcription factors. Histone modifications Increased acetylation of histones contributes to chromatin taking a more euchromatic state as an organism ages, similar to the increased transcription seen due to the loss of histones. There is also a reduction in the levels of H3K56ac during aging and an increase in the levels of H4K16ac. Increased H4K16ac in old yeast cells is associated with the decline in levels of the HDAC Sir2, which can increase the life span when overexpressed. Methylation of histones has been tied to life span regulation in many organisms, specifically H3K4me3, an activating mark, and H3K27me3, a repressing mark. In C. elegans, the loss of any of the three Trithorax proteins that catalyze the trimethylation of H3K4, namely WDR-5 and the methyltransferases SET-2 and ASH-2, lowers the levels of H3K4me3 and increases lifespan. Loss of the enzyme that demethylates H3K4me3, RB-2, increases H3K4me3 levels in C. elegans and decreases their life spans. In the rhesus macaque brain prefrontal cortex, H3K4me2 increases at promoters and enhancers during postnatal development and aging. These increases reflect progressively more active and transcriptionally accessible (or open) chromatin structures that are often associated with stress responses such as the DNA damage response. These changes may form an epigenetic memory of stresses and damages experienced by the organism as it develops and ages. UTX-1, a H3K27me3 demethylase, plays a critical role in the aging of C. elegans: increased utx-1 expression correlates with a decrease in H3K27me3 and a decrease in lifespan. Utx-1 knockdowns showed an increase in lifespan. Changes in H3K27me3 levels also affect aging cells in Drosophila and humans. DNA methylation Methylation of DNA is a common modification in mammalian cells. The cytosine base is methylated and becomes 5-methylcytosine, most often when in the CpG context. Hypermethylation of CpG islands is associated with transcriptional repression and hypomethylation of these sites is associated with transcriptional activation. Many studies have shown that there is a loss of DNA methylation during aging in many species, such as rats, mice, cows, hamsters, and humans. It has also been shown that DNMT1 and DNMT3a decrease with aging and DNMT3b increases. Hypomethylation of DNA can lower genomic stability, induce the reactivation of transposable elements, and cause the loss of imprinting, all of which can contribute to cancer progression and pathogenesis. An illustrative sketch of how such methylation changes can be turned into a methylation-based age predictor follows this section. 
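The DNA methylation changes described above are the raw material for the "epigenetic clocks" mentioned earlier in this article: penalized regressions that map CpG methylation fractions to a predicted age. The following is a minimal sketch on synthetic data, assuming scikit-learn is available; the CpG count, simulated coefficients, and data are invented for illustration and do not reproduce any published clock.

```python
# Minimal sketch of an epigenetic-clock-style age predictor on synthetic
# data: an elastic-net regression from CpG methylation fractions to age.
# Real clocks (e.g. Horvath-type) are trained on thousands of samples and
# hundreds of CpGs; everything below is simulated for illustration.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpgs = 300, 200
age = rng.uniform(0, 90, n_samples)

# Simulate methylation: a few CpGs drift with age, the rest are noise.
beta = np.zeros(n_cpgs)
beta[:10] = rng.normal(0, 0.004, 10)            # age-associated CpGs
meth = np.clip(0.5 + np.outer(age, beta)
               + rng.normal(0, 0.05, (n_samples, n_cpgs)), 0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(meth, age, random_state=0)
clock = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X_tr, y_tr)
pred = clock.predict(X_te)
print("median absolute error (years):",
      round(float(np.median(np.abs(pred - y_te))), 1))
# "Age acceleration" would be the residual: predicted minus chronological age.
```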
Immune biomarkers Recent data suggest that an increased frequency of senescent CD8+ T cells in the peripheral blood is associated with the development of hyperglycemia from a pre-diabetic state, suggesting that senescence plays a role in metabolic aging. Senescent CD8+ T cells could be utilized as a biomarker to signal the transition from pre-diabetes to overt hyperglycemia. Recently, Hashimoto and coworkers profiled thousands of circulating immune cells from supercentenarians at single-cell resolution. They identified a unique increase in cytotoxic CD4 T cells in these supercentenarians. Generally, CD4 T-cells have helper, but not cytotoxic, functions under physiological conditions; however, single-cell profiling of these supercentenarians' T-cell receptors revealed accumulations of cytotoxic CD4 T-cells arising through clonal expansion. The conversion of helper CD4 T-cells to a cytotoxic variety might be an adaptation to the late stage of aging, aiding in fighting infections and potentially enhancing tumor surveillance. Applications of aging biomarkers The main mechanisms identified as potential biomarkers of aging are DNA methylation, loss of histones, and histone modification. The potential uses for biomarkers of aging are wide-ranging: identifying a physical parameter of biological aging would allow humans to estimate true biological age and to predict mortality and morbidity. The change in the physical biomarker should be proportional to the change in the age of the organism. Thus, after establishing a biomarker of aging, researchers would be able to pursue research on extending lifespans and on timelines for the emergence of potential genetic diseases. One application would be identification of the biological age of a person. DNA methylation clocks use the methylation state of DNA at different stages of life to estimate an age. DNA methylation is the methylation of the cytosine base, most often in the CpG context. Hypermethylation of such regions is associated with decreased transcriptional activity, and hypomethylation with the opposite. In other words, the more "tightly" held the DNA region, the more stable and "younger" the cell. Estimated DNA methylation age is close to zero for embryonic tissues; it can be used to determine acceleration of age, and the results can be reproduced in chimpanzee tissue. More recently, biomarkers of aging have been used in multiple clinical trials to measure slowing or reversing of age-related decline or biological aging. The Biomarkers of Aging Consortium is currently examining the application of these biomarkers to identify longevity interventions and ways to validate them. Moreover, open-source resources, such as the R package methylCIPHER and the Python package pyaging, are available to the public as hubs for several biomarkers of aging. See also Epigenetic clock Hallmarks of aging Biomarker (medicine) Senescence References External links Biomarkers of Aging News Advisory National Institute on Aging Biogerontology Physiology Senescence Biomarkers
Biomarkers of aging
[ "Chemistry", "Biology" ]
2,214
[ "Biomarkers", "Physiology", "Senescence", "Cellular processes", "Metabolism" ]
33,949,413
https://en.wikipedia.org/wiki/Hypoxia%20in%20fish
Fish are exposed to large oxygen fluctuations in their aquatic environment since the inherent properties of water can result in marked spatial and temporal differences in the concentration of oxygen (see oxygenation and underwater). Fish respond to hypoxia with varied behavioral, physiological, and cellular responses to maintain homeostasis and organism function in an oxygen-depleted environment. The biggest challenge fish face when exposed to low oxygen conditions is maintaining metabolic energy balance, as 95% of the oxygen consumed by fish is used for ATP production, releasing the chemical energy of nutrients through the mitochondrial electron transport chain. Therefore, hypoxia survival requires a coordinated response to secure more oxygen from the depleted environment and counteract the metabolic consequences of decreased ATP production at the mitochondria. Hypoxia tolerance A fish's hypoxia tolerance can be represented in different ways. A commonly used representation is the critical O2 tension (Pcrit), which is the lowest water O2 tension (PO2) at which a fish can maintain a stable O2 consumption rate (MO2). A fish with a lower Pcrit is therefore thought to be more hypoxia-tolerant than a fish with a higher Pcrit. But while Pcrit is often used to represent hypoxia tolerance, it more accurately represents the ability to take up environmental O2 at hypoxic PO2s and does not incorporate the significant contributions of anaerobic glycolysis and metabolic suppression to hypoxia tolerance (see below). Pcrit is nevertheless closely tied to a fish's hypoxia tolerance, in part because some fish prioritize their use of aerobic metabolism over anaerobic metabolism and metabolic suppression. It therefore remains a widely used hypoxia tolerance metric; an illustrative sketch of estimating Pcrit from respirometry data appears at the end of this section. A fish's hypoxia tolerance can also be represented as the amount of time it can spend at a particular hypoxic PO2 before it loses dorsal-ventral equilibrium (called time-to-LOE), or the PO2 at which it loses equilibrium when PO2 is decreased from normoxia to anoxia at some set rate (called PO2-of-LOE). A higher time-to-LOE value or a lower PO2-of-LOE value therefore implies enhanced hypoxia tolerance. In either case, LOE is a more holistic representation of overall hypoxia tolerance because it incorporates all contributors to hypoxia tolerance, including aerobic metabolism, anaerobic metabolism and metabolic suppression. Oxygen sensing Oxygen sensing structures In mammals there are several structures that have been implicated as oxygen sensing structures; however, all of these structures are situated to detect aortic or internal hypoxia, since mammals rarely encounter environmental hypoxia. These structures include the type I cells of the carotid body, the neuroepithelial bodies of the lungs as well as some central and peripheral neurons and vascular smooth muscle cells. In fish, the neuroepithelial cells (NEC) have been implicated as the major oxygen sensing cells. NEC have been found in all teleost fish studied to date, and are likely a highly conserved structure within many taxa of fish. NEC are also found in all four gill arches within several different structures, such as along the filaments, at the ends of the gill rakers and throughout the lamellae. Two separate neural pathways, motor and sensory nerve fibre pathways, have been identified within the zebrafish gill arches. Since neuroepithelial cells are distributed throughout the gills, they are often ideally situated to detect both arterial and environmental oxygen. 
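Returning to Pcrit as defined in the hypoxia tolerance section above, one common way to estimate it from respirometry data is a two-segment ("broken-stick") fit: a sloped segment where the fish conforms to ambient oxygen, and a flat segment where it regulates. The following is a minimal sketch on synthetic data; the measurements, units, and the simple grid search are illustrative, not a specific published protocol.

```python
# Illustrative broken-stick estimate of Pcrit from MO2-vs-PO2 respirometry
# data: below Pcrit, MO2 rises with PO2 (conforming); above it, MO2 is
# roughly constant (regulating). Data here are synthetic.
import numpy as np

po2 = np.array([2, 4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)   # kPa
mo2 = np.array([1.1, 2.0, 3.2, 4.1, 4.9, 5.0, 5.1, 4.9, 5.0, 5.1])  # mg O2/kg/h

def sse_two_segments(breakpoint: float) -> float:
    """Sum of squared errors for a sloped segment below the breakpoint
    and a flat segment above it."""
    low, high = po2 <= breakpoint, po2 > breakpoint
    err = 0.0
    if low.sum() >= 2:
        slope, intercept = np.polyfit(po2[low], mo2[low], 1)
        err += float(np.sum((mo2[low] - (slope * po2[low] + intercept)) ** 2))
    err += float(np.sum((mo2[high] - mo2[high].mean()) ** 2))
    return err

candidates = np.linspace(4, 16, 200)
pcrit = candidates[np.argmin([sse_two_segments(b) for b in candidates])]
print(f"estimated Pcrit ~ {pcrit:.1f} kPa")
```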
Mechanisms of neurotransmitter release in neuroepithelial cells Neuroepithelial cells (NEC) are thought to be neuron-like chemoreceptor cells because they rely on membrane potential changes for the release of neurotransmitters and signal transmission onto nearby cells. Once NEC of the zebrafish gills come in contact with either environmental or aortic hypoxia, an outward K+ "leak" channel is inhibited. It remains unclear how these K+ channels are inhibited by a shortage of oxygen, because no direct oxygen-sensing binding sites are yet known, only whole-cell and ion-channel responses to hypoxia. K+ "leak" channels are two-pore-domain ion channels that are open at the resting membrane potential of the cell and play a major role in setting the equilibrium resting membrane potential of the cell. Once this "leak" channel is closed, the K+ is no longer able to freely flow out of the cell, and the membrane potential of the NEC increases; the cell becomes depolarized (the sketch at the end of this section illustrates this effect with the Goldman–Hodgkin–Katz equation). This depolarization causes voltage-gated Ca2+ channels to open, and extracellular Ca2+ flows down its concentration gradient into the cell, causing the intracellular Ca2+ concentration to greatly increase. Once the Ca2+ is inside the cell, it binds to the vesicle release machinery and facilitates binding of the v-SNARE complex on the vesicle to the t-SNARE complex on the NEC cell membrane, which initiates the release of neurotransmitters into the synaptic cleft. Signal transduction up to higher brain centres If the post-synaptic cell is a sensory neuron, then an increased firing rate in that neuron will transmit the signal to the central nervous system for integration. If, instead, the post-synaptic cell is a connective pillar cell or a vascular smooth muscle cell, the serotonin will cause vasoconstriction, previously unused lamellae will be recruited through the opening of more capillary beds, and the total surface area for gas exchange per lamella will be increased. In fish, the hypoxic signal is carried up to the brain for processing by the glossopharyngeal (cranial nerve IX) and vagus (cranial nerve X) nerves. The first branchial arch is innervated by the glossopharyngeal nerve (cranial nerve IX); however, all four arches are innervated by the vagus nerve (cranial nerve X). Both the glossopharyngeal and vagus nerves carry sensory nerve fibres into the brain and central nervous system. Locations of oxygen sensors Through studies using mammalian model organisms, there are two main hypotheses for the location of oxygen sensing in chemoreceptor cells: the membrane hypothesis and the mitochondrial hypothesis. The membrane hypothesis was proposed for the carotid body in mice, and it predicts that oxygen sensing is an ion-balance-initiated process. The mitochondrial hypothesis was also proposed for the carotid body of mice, but it relies on the levels of oxidative phosphorylation and/or reactive oxygen species (ROS) production as a cue for hypoxia. Specifically, the oxygen-sensitive K+ currents are inhibited by H2O2 and NADPH oxidase activation. There is evidence for both of these hypotheses depending on the species used for the study. For the neuroepithelial cells in the zebrafish gills, there is strong evidence supporting the "membrane hypothesis" due to their capacity to respond to hypoxia after removal of the contents of the cell. However, there is no evidence against multiple sites for oxygen sensing in organisms. 
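The depolarization step described above, in which closing a K+ leak conductance raises the membrane potential, can be illustrated with the Goldman–Hodgkin–Katz voltage equation. In the minimal sketch below, the ion concentrations and permeability ratios are generic textbook-style values, not measurements from fish neuroepithelial cells.

```python
# GHK voltage equation: closing a K+ leak channel (lower P_K) depolarizes.
# Concentrations (mM) and relative permeabilities are generic illustrative
# values, not measurements from fish neuroepithelial cells.
import math

R, T, F = 8.314, 298.0, 96485.0  # J/(mol K), K, C/mol
K_out, K_in = 5.0, 140.0
Na_out, Na_in = 145.0, 10.0
Cl_out, Cl_in = 110.0, 10.0

def ghk_mv(p_k: float, p_na: float = 0.05, p_cl: float = 0.45) -> float:
    """Membrane potential (mV) from relative permeabilities."""
    num = p_k * K_out + p_na * Na_out + p_cl * Cl_in
    den = p_k * K_in + p_na * Na_in + p_cl * Cl_out
    return 1000.0 * (R * T / F) * math.log(num / den)

print(f"resting (P_K = 1.0): {ghk_mv(1.0):.1f} mV")      # ~ -62 mV
print(f"K+ leak inhibited (P_K = 0.2): {ghk_mv(0.2):.1f} mV")  # ~ -47 mV
# The less-negative value with reduced P_K is the depolarization that opens
# voltage-gated Ca2+ channels and triggers neurotransmitter release.
```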
Acute responses to hypoxia Many hypoxic environments never reach the level of anoxia and most fish are able to cope with this stress using different physiological and behavioural strategies. Fish that use air breathing organs (ABO) tend to live in environments with highly variable oxygen content and rely on aerial respiration during times when there is not enough oxygen to support water-breathing. Though all teleosts have some form of swim bladder, many of them are not capable of breathing air, and they rely on aquatic surface respiration as a supply of more oxygenated water at the surface of the water. However, many species of teleost fish are obligate water breathers and do not display either of these surface respiratory behaviours. Typically, acute hypoxia causes hyperventilation, bradycardia and an elevation in gill vascular resistance in teleosts. However, the benefit of these changes in blood pressure to oxygen uptake has not been supported in a recent study of the rainbow trout. It is possible that the acute hypoxia response is simply a stress response, and the advantages found in early studies may only result after acclimatization to the environment. Behavioral responses Hypoxia can modify normal behavior. Parental behaviour meant to provide oxygen to the eggs is often affected by hypoxia. For example, fanning behavior (swimming on the spot near the eggs to create a flow of water over them, and thus a constant supply of oxygen) is often increased when oxygen is less available. This has been documented in sticklebacks, gobies, and clownfishes, among others. Gobies may also increase the size of the openings in the nest they build, even though this may increase the risk of predation on the eggs. Rainbow cichlids often move their young fry closer to the water surface, where oxygen is more available, during hypoxic episodes. Behavioural adaptations meant to survive when oxygen is scarce include reduced activity levels, aquatic surface respiration, and air breathing. Reduced activity levels As oxygen levels decrease, fish may at first increase movements in an attempt to escape the hypoxic zone, but eventually they greatly reduce their activity levels, thus reducing their energetic (and therefore oxygen) demands. Atlantic herring show this exact pattern. Other examples of fishes that reduce their activity levels under hypoxia include the common sole, the guppy, the small-spotted catshark, and the viviparous eelpout. Some sharks that ram-ventilate their gills may understandably increase their swimming speeds under hypoxia, to bring more water to the gills. Aquatic surface respiration In response to decreasing dissolved oxygen level in the environment, fish swim up to the surface of the water column and ventilate at the top layer of the water where it contains relatively higher level of dissolved oxygen, a behavior called aquatic surface respiration (ASR). Oxygen diffuses into water from air and therefore the top layer of water in contact with air contains more oxygen. This is true only in stagnant water; in running water all layers are mixed together and oxygen levels are the same throughout the water column. One environment where ASR often takes place is tidepools, particularly at night. Separation from the sea at low tide means that water is not renewed, fish crowding within the pool means that oxygen is quickly depleted, and absence of light at night means that there is no photosynthesis to replenish the oxygen. 
Examples of tidepool species that perform ASR include the tidepool sculpin, the three-spined stickleback, and the mummichog. But ASR is not limited to the intertidal environment. Most tropical and temperate fish species living in stagnant waters engage in ASR during hypoxia. One study looked at 26 species representing eight families of non-air-breathing fishes from the North American Great Plains, and found that all but four of them performed ASR during hypoxia. Another study looked at 24 species of tropical fish common to the pet trade, from tetras to barbs to cichlids, and found that all of them performed ASR. An unusual situation in which ASR is performed is during winter, in lakes covered by ice, at the interface between water and ice or near air bubbles trapped underneath the ice. Some species may show morphological adaptations, such as a flat head and an upturned mouth, that allow them to perform ASR without breaking the water surface (which would make them more visible to aerial predators). One example is the mummichog, whose upturned mouth suggests surface feeding, but whose feeding habits are not particularly restricted to the surface. In the tambaqui, a South American species, exposure to hypoxia induces within hours the development of additional blood vessels inside the lower lip, enhancing its ability to take up oxygen during ASR. Swimming upside down may also help fishes perform ASR, as in some upside-down catfish. Some species may hold an air bubble within the mouth during ASR. This may assist buoyancy as well as increase the oxygen content of the water passing over the bubble on its way to the gills. Another way to reduce buoyancy costs is to perform ASR on rocks or plants that provide support near the water surface. ASR significantly affects the survival of fish during severe hypoxia. In the shortfin molly, for example, survival was approximately four times higher in individuals able to perform ASR than in fish prevented from performing ASR during their exposure to extreme hypoxia. ASR may be performed more often when the need for oxygen is higher. In the sailfin molly, gestating females (this species is a livebearer) spend about 50% of their time in ASR, compared with only 15% in non-gestating females under the same low levels of oxygen. Aerial respiration (air breathing) Aerial respiration is the 'gulping' of air at the surface of water to directly extract oxygen from the atmosphere. Aerial respiration evolved in fish that were exposed to more frequent hypoxia; species that engage in aerial respiration also tend to be more hypoxia-tolerant than those that do not breathe air during hypoxia. There are two main types of air-breathing fish: facultative and non-facultative (obligate). Under normoxic conditions, facultative fish can survive without having to breathe air at the surface of the water. However, non-facultative fish must respire at the surface even at normal dissolved oxygen levels because their gills cannot extract enough oxygen from the water. Many air-breathing freshwater teleosts use ABOs to effectively extract oxygen from air while maintaining the functions of the gills. ABOs are modified gastrointestinal tracts, gas bladders, and labyrinth organs; they are highly vascularized and provide an additional method of extracting oxygen from the air. Fish also use ABOs to store the retained oxygen. 
Predation risk associated with ASR and aerial respiration Both ASR and aerial respiration require fish to travel to the top of the water column, and this behaviour increases the risk of predation by aerial predators or by other piscivores living near the surface of the water. To cope with the increased predation risk upon surfacing, some fish perform ASR or aerial respiration in schools to 'dilute' the predation risk. When fish can visually detect the presence of their aerial predators, they simply refrain from surfacing, or prefer to surface in areas where they can be detected less easily (e.g. turbid, shaded areas). Gill remodelling in hypoxia Gill remodelling happens in only a few species of fish, and it involves the buildup or removal of an inter-lamellar cell mass (ILCM). As a response to hypoxia, some fish are able to remodel their gills to increase respiratory surface area, with some species such as goldfish doubling their lamellar surface areas in as little as 8 hours. The increased respiratory surface area comes as a trade-off against increased metabolic costs, because the gills are a very important site for many processes including respiratory gas exchange, acid-base regulation, nitrogen excretion, osmoregulation, hormone regulation, metabolism, and environmental sensing. The crucian carp is one species able to remodel its gill filaments in response to hypoxia. Its inter-lamellar cells have high rates of mitotic activity, which are influenced by both hypoxia and temperature. In cold (15 °C) water the crucian carp has more ILCM, but when the temperature is increased to 25 °C the ILCM is removed, just as it would be in hypoxic conditions. The same transition in gill morphology occurred in the goldfish when the temperature was raised from 7.5 °C to 15 °C. This difference may be due to the temperature regimes that these fish are typically found in, or there could be an underlying protective mechanism to prevent a loss of ion balance in stressful temperatures. Temperature also affects the speed at which the gills can be remodelled: for example, at 20 °C in hypoxia, the crucian carp can completely remove its ILCM in 6 hours, whereas at 8 °C the same process takes 3–7 days. The ILCM is likely removed by apoptosis, but it is possible that when the fish is faced with the double stress of hypoxia at high temperature, the lamellae may be lost by physical degradation. Covering the gill lamellae may protect species like the crucian carp from parasites and environmental toxins during normoxia by limiting their surface area for inward diffusion while still maintaining oxygen transport due to an extremely high hemoglobin oxygen binding affinity. The naked carp, a closely related species native to the high-altitude Lake Qinghai, is also able to remodel its gills in response to hypoxic conditions. In response to oxygen levels 95% lower than normoxic conditions, apoptosis of the ILCM increases lamellar surface area by up to 60% after just 24 hours. However, this comes at a significant osmoregulatory cost, reducing sodium and chloride levels in the cytoplasm by over 10%. The morphological response to hypoxia by the naked (scaleless) carp is the fastest respiratory surface remodelling reported in vertebrates thus far. Oxygen uptake Fish exhibit a wide range of tactics to counteract aquatic hypoxia, but when escape from the hypoxic stress is not possible, maintaining oxygen extraction and delivery becomes an essential component of survival. 
With the exception of the Antarctic icefish, which does not, fish use hemoglobin (Hb) within their red blood cells to chemically bind and deliver 95% of the oxygen extracted from the environment to the working tissues. Maintaining oxygen extraction and delivery to the tissues allows continued activity under hypoxic stress and is in part determined by modifications in two different blood parameters: hematocrit and the binding properties of hemoglobin. Hematocrit In general, hematocrit is the fraction of the blood occupied by red blood cells (RBCs) in circulation and is highly variable among fish species. Active fish, like the blue marlin, tend to have higher hematocrits, whereas less active fish, such as the starry flounder, exhibit lower hematocrits. Hematocrit may be increased in response to both short-term (acute) and long-term (chronic) hypoxia exposure, and the increase results in a rise in the total amount of oxygen the blood can carry, also known as the oxygen carrying capacity of the blood. Acute changes in hematocrit are the result of circulating stress hormones (see catecholamines) activating receptors on the spleen that cause the release of RBCs into circulation. During chronic hypoxia exposure, the mechanism used to increase hematocrit is independent of the spleen and results from hormonal stimulation of the kidney by erythropoietin (EPO). Increasing hematocrit in response to erythropoietin is observed after approximately one week and is therefore likely under the genetic control of hypoxia inducible factor (HIF). While increasing hematocrit means that the blood can carry a larger total amount of oxygen, a possible advantage during hypoxia, increasing the number of RBCs in the blood can also lead to certain disadvantages. First, a higher hematocrit results in more viscous blood (especially in cold water), increasing the amount of energy the cardiac system requires to pump the blood through the system. Second, depending on the transit time of the blood across the branchial arch and the diffusion rate of oxygen, an increased hematocrit may result in less efficient transfer of oxygen from the environment to the blood. Changing the binding affinity of hemoglobin An alternative mechanism to preserve O2 delivery in the face of low ambient oxygen is to increase the oxygen affinity of the blood. The oxygen content of the blood is related to PaO2 and is illustrated using an oxygen equilibrium curve (OEC). Fish hemoglobins, with the exception of those of the agnathans, are tetramers that exhibit cooperativity of O2 binding and have sigmoidal OECs. The binding affinity of hemoglobin to oxygen is estimated using a measurement called P50 (the partial pressure of oxygen at which hemoglobin is 50% bound with oxygen) and can be extremely variable. If the hemoglobin has a weak affinity for oxygen, it is said to have a high P50, which constrains the environments a fish can inhabit to those with relatively high environmental PO2. Conversely, fish hemoglobins with a low P50 bind strongly to oxygen and are of obvious advantage when attempting to extract oxygen from hypoxic or variable-PO2 environments. The use of high affinity (low P50) hemoglobins results in reduced ventilatory and therefore energetic requirements when facing hypoxic insult. 
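The P50 concept just described is commonly summarized with the Hill equation, which produces the sigmoidal OEC mentioned above. The following sketch assumes a Hill coefficient of 2.8, a typical value for cooperative tetrameric hemoglobins rather than a measured fish value, and uses arbitrary illustrative pressures.

```python
def hb_saturation(po2, p50, n=2.8):
    """Fractional Hb-O2 saturation from the Hill equation.

    po2 and p50 must be in the same units (e.g. torr); n is the Hill
    coefficient describing cooperativity of O2 binding.
    """
    return po2**n / (po2**n + p50**n)

# Hypoxic water at PO2 = 30 torr, two hypothetical hemoglobins:
low_affinity = hb_saturation(30, p50=35)    # high P50 -> ~39% saturated
high_affinity = hb_saturation(30, p50=15)   # low P50  -> ~87% saturated
print(f"{low_affinity:.2f} vs {high_affinity:.2f}")
```

A pH-dependent P50 could be added to the same function to model the Bohr effect discussed below, which is one way the allosteric modulation of affinity can be explored numerically.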
The oxygen binding affinity of hemoglobin (Hb-O2) is regulated through a suite of allosteric modulators; the principal modulators used for controlling Hb-O2 affinity under hypoxic insult are: Increasing RBC pH Reducing inorganic phosphate interactions pH and inorganic phosphates (Pi) In rainbow trout, as well as a variety of other teleosts, increased RBC pH stems from the activation of the β-adrenergic Na+/H+ exchange protein (β-NHE) on the RBC membrane via circulating catecholamines. This process causes the internal pH of the RBC to increase through the outward movement of H+ and the inward movement of Na+. The net consequence of alkalizing the RBC is an increase in Hb-O2 affinity via the Bohr effect. The net influx of Na+ ions and the compensatory activation of Na+/K+-ATPase to maintain ionic equilibrium within the RBC result in a steady decline in cellular ATP, also serving to increase Hb-O2 affinity. As a further result of inward Na+ movement, the osmolarity of the RBC increases, causing osmotic influx of water and cell swelling. The dilution of the cell contents causes further spatial separation of hemoglobin from the inorganic phosphates and again serves to increase Hb-O2 affinity. Intertidal hypoxia-tolerant triplefin fishes (family Tripterygiidae) seem to take advantage of intracellular acidosis, appearing to bypass traditional oxidative phosphorylation and directly drive mitochondrial ATP synthesis using the cytosolic pool of protons that likely accumulates in hypoxia (via lactic acidosis and ATP hydrolysis). Changing Hb isoforms Nearly all animals have more than one kind of Hb present in the RBC. Multiple Hb isoforms (see isoforms) are particularly common in ectotherms, but especially in fish that are required to cope with both fluctuating temperature and oxygen availability. Hbs isolated from the European eel can be separated into anodic and cathodic isoforms. The anodic isoforms have low oxygen affinities (high P50) and marked Bohr effects, while the cathodic isoforms lack significant pH effects and are therefore thought to confer hypoxia tolerance. Several species of African cichlids raised from early-stage development under either hypoxic or normoxic conditions were contrasted in an attempt to compare Hb isoforms. They demonstrated there were Hb isoforms specific to the hypoxia-raised individuals. Metabolic challenge To deal with decreased ATP production through the electron transport chain, fish must activate anaerobic means of energy production (see anaerobic metabolism) while suppressing metabolic demands. The ability to decrease energy demand by metabolic suppression is essential to ensure hypoxic survival due to the limited efficiency of anaerobic ATP production. Switch from aerobic to anaerobic metabolism Aerobic respiration, in which oxygen is used as the terminal electron acceptor, is crucial to all water-breathing fish. When fish are deprived of oxygen, they require other ways to produce ATP. Thus, a switch from aerobic metabolism to anaerobic metabolism occurs at the onset of hypoxia. Glycolysis and substrate-level phosphorylation are used as alternative pathways for ATP production. However, these pathways are much less efficient than aerobic metabolism. For example, when using the same substrate, the total yield of ATP in anaerobic metabolism is 15 times lower than in aerobic metabolism (roughly 2 ATP per glucose from glycolysis versus about 30 from complete aerobic oxidation). This level of ATP production is not sufficient to maintain a high metabolic rate; therefore, the only survival strategy for fish is to alter their metabolic demands. 
Metabolic suppression Metabolic suppression is the regulated and reversible reduction of metabolic rate below basal metabolic rate (called standard metabolic rate in ectothermic animals). This reduces the fish's rate of ATP use, which prolongs its survival time at severely hypoxic sub-Pcrit PO2s by reducing the rate at which the fish's finite anaerobic fuel stores (glycogen) are used. Metabolic suppression also reduces the accumulation rate of deleterious anaerobic end-products (lactate and protons), which delays their negative impact on the fish. The mechanisms that fish use to suppress metabolic rate occur at behavioral, physiological, and biochemical levels. Behaviorally, metabolic rate can be lowered through reduced locomotion, feeding, courtship, and mating. Physiologically, metabolic rate can be lowered through reduced growth, digestion, gonad development, and ventilation efforts. And biochemically, metabolic rate can be further lowered below standard metabolic rate through reduced gluconeogenesis, protein synthesis and degradation rates, and ion pumping across cellular membranes. Reductions in these processes lower ATP use rates, but it remains unclear whether metabolic suppression is induced through an initial reduction in ATP use or ATP supply. The prevalence of metabolic suppression among fish species has not been thoroughly explored. This is partly because the metabolic rates of hypoxia-exposed fish, including suppressed metabolic rates, can only be accurately measured using direct calorimetry, and this technique is seldom used for fish. The few studies that have used calorimetry reveal that some fish species employ metabolic suppression in hypoxia/anoxia (e.g., goldfish, tilapia, European eel) while others do not (e.g., rainbow trout, zebrafish). The species that employ metabolic suppression are more hypoxia-tolerant than the species that do not, which suggests that metabolic suppression enhances hypoxia tolerance. Consistent with this, differences in hypoxia tolerance among isolated threespine stickleback populations appear to result from differences in the use of metabolic suppression, with the more tolerant populations employing metabolic suppression. Fish that are capable of hypoxia-induced metabolic suppression reduce their metabolic rates by 30% to 80% relative to standard metabolic rates. All else being equal, a fish that suppresses its metabolic rate by 70% draws down its glycogen store at 30% of the standard rate and so extends its survival time roughly 3.3-fold. Because this is not a complete cessation of metabolic rate, metabolic suppression can only prolong hypoxic survival, not sustain it indefinitely. If the hypoxic exposure lasts sufficiently long, the fish will succumb to a depletion of its glycogen stores and/or the over-accumulation of deleterious anaerobic end-products. Furthermore, the severely limited energetic scope that comes with a metabolically suppressed state means that the fish is unable to complete critical tasks such as predator avoidance and reproduction. Perhaps for these reasons, goldfish prioritize their use of aerobic metabolism in most hypoxic environments, reserving metabolic suppression for the extreme case of anoxia. Energy conservation In addition to a reduction in the rate of protein synthesis, it appears that some species of hypoxia-tolerant fish conserve energy through ion channel arrest, as proposed in Hochachka's hypothesis. This hypothesis makes two predictions: Hypoxia-tolerant animals naturally have low membrane permeabilities Membrane permeability decreases even more during hypoxic conditions (ion channel arrest) The first prediction holds true. 
When membrane permeability to Na+ and K+ ions was compared between reptiles and mammals, reptile membranes were discovered to be five times less leaky. The second prediction has been more difficult to prove experimentally; however, indirect measures have shown a decrease in Na+/K+-ATPase activity in eel and trout hepatocytes during hypoxic conditions. Results seem to be tissue-specific, as crucian carp exposed to hypoxia do not undergo a reduction in Na+/K+-ATPase activity in their brain. Although evidence is limited, ion channel arrest enables organisms to maintain ion concentration gradients and membrane potentials without consuming large amounts of ATP. Enhanced glycogen stores The limiting factor for fish undergoing hypoxia is the availability of fermentable substrate for anaerobic metabolism; once substrate runs out, ATP production ceases. Endogenous glycogen is present in tissue as a long-term energy storage molecule. It can be converted into glucose and subsequently used as the starting material in glycolysis. A key adaptation to long-term survival during hypoxia is the ability of an organism to store large amounts of glycogen. Many hypoxia-tolerant species, such as carp, goldfish, killifish, and oscar, have very large glycogen contents (300–2000 μmol glucosyl units/g) in their tissues, compared with hypoxia-sensitive fish such as rainbow trout, which contain only about 100 μmol glucosyl units/g. The more glycogen stored in a tissue, the greater the capacity of that tissue to undergo glycolysis and produce ATP. Tolerance of waste products When anaerobic pathways are turned on, glycogen stores are depleted and acidic waste products accumulate; this acceleration of glycolysis under oxygen shortage is known as the Pasteur effect. A challenge hypoxia-tolerant fish face is how to produce ATP anaerobically without creating a significant Pasteur effect. Along with a reduction in metabolism, some fish have adapted traits to avoid the accumulation of lactate. For example, the crucian carp, a highly hypoxia-tolerant fish, has evolved to survive months in anoxic waters. A key adaptation is the ability to convert lactate to ethanol in the muscle and excrete it out of the gills. Although this process is energetically costly, it is crucial to their survival in hypoxic waters. Gene expression changes DNA microarray studies done on different fish species exposed to low-oxygen conditions have shown that, at the genetic level, fish respond to hypoxia by changing the expression of genes involved in oxygen transport, ATP production, and protein synthesis. In the liver of mudsuckers exposed to hypoxia there were changes in the expression of genes involved in heme metabolism such as hemopexin, heme oxygenase 1, and ferritin. Changes in the sequestration and metabolism of iron may suggest hypoxia-induced erythropoiesis and an increased demand for hemoglobin synthesis, leading to increased oxygen uptake and transport. Increased expression of myoglobin, which is normally only found in muscle tissue, has also been observed after hypoxia exposure in the gills of zebrafish and in non-muscle tissue of the common carp, suggesting increased oxygen transport throughout fish tissues. Microarray studies done on fish species exposed to hypoxia typically show a metabolic switch, that is, a decrease in the expression of genes involved in aerobic metabolism and an increase in the expression of genes involved in anaerobic metabolism. 
Zebrafish embryos exposed to hypoxia decreased the expression of genes involved in the citric acid cycle, including succinate dehydrogenase, malate dehydrogenase, and citrate synthase, and increased the expression of genes involved in glycolysis such as phosphoglycerate mutase, enolase, aldolase, and lactate dehydrogenase. A decrease in protein synthesis is an important response to hypoxia to decrease ATP demand for whole-organism metabolic suppression. Decreases in the expression of genes involved in protein synthesis, such as elongation factor-2 and several ribosomal proteins, have been shown in the muscle of the mudsucker and the gills of adult zebrafish after hypoxia exposure. Research in mammals has implicated hypoxia inducible factor (HIF) as a key regulator of gene expression changes in response to hypoxia. However, a direct link between fish HIFs and gene expression changes in response to hypoxia has yet to be found. Phylogenetic analysis of available fish, tetrapod, and bird HIF-α and -β sequences shows that the isoforms of both subunits present in mammals are also represented in fish. Within fish, HIF sequences group close together and are distinct from tetrapod and bird sequences. In addition, amino acid analysis of available fish HIF-α and -β sequences reveals that they contain all functional domains shown to be important for mammalian HIF function, including the basic helix-loop-helix (bHLH) domain, the Per-ARNT-Sim (PAS) domain, and the oxygen-dependent degradation domain (ODD), which renders the HIF-α subunit sensitive to oxygen levels. The evolutionary similarity between HIF sequences in fish, tetrapods, and birds, as well as the conservation of important functional domains, suggests that HIF function and regulation are similar between fish and mammalian species. There is also evidence of novel HIF mechanisms present in fish that are not found in mammals. In mammals, HIF-α protein is continuously synthesized and regulated post-translationally by changing oxygen conditions, but it has been shown in different fish species that HIF-α mRNA levels are also responsive to hypoxia. In the hypoxia-tolerant grass carp, substantial increases in HIF-1α and HIF-3α mRNA were observed in all tissues after hypoxia exposure. Likewise, mRNA levels of HIF-1α and HIF-2α were hypoxia-responsive in the ovaries of the Atlantic croaker during both short- and long-term hypoxia. See also Algal bloom Eutrophication Fish kill Hypoxia (environmental) References Aquatic ecology Chemical oceanography Environmental science Water quality indicators Oxygen
Hypoxia in fish
[ "Chemistry", "Biology", "Environmental_science" ]
7,252
[ "Water pollution", "Chemical oceanography", "Water quality indicators", "Ecosystems", "nan", "Aquatic ecology" ]
30,099,659
https://en.wikipedia.org/wiki/Preconsolidation%20pressure
Preconsolidation pressure is the maximum effective vertical overburden stress that a particular soil sample has sustained in the past. This quantity is important in geotechnical engineering, particularly for finding the expected settlement of foundations and embankments. Alternative names for the preconsolidation pressure are preconsolidation stress, pre-compression stress, pre-compaction stress, and preload stress. A soil is called overconsolidated if the current effective stress acting on the soil is less than the historical maximum. The preconsolidation pressure can help determine the largest overburden pressure that can be exerted on a soil without irrecoverable volume change. This type of volume change is important for understanding shrinkage behavior, crack and structure formation, and resistance to shearing stresses. Previous stresses and other changes in a soil's history are preserved within the soil's structure. If a soil is loaded beyond this point, it is unable to sustain the increased load and its structure will break down. This breakdown can cause a number of different things depending on the type of soil and its geologic history. Preconsolidation pressure cannot be measured directly, but can be estimated using a number of different strategies. Samples taken from the field are subjected to a variety of tests, like the constant rate of strain test (CRS) or the incremental loading test (IL). These tests can be costly due to expensive equipment and the long period of time they require. Each sample must be undisturbed and can only undergo one test with satisfactory results. It is important to execute these tests precisely to ensure an accurate resulting plot. There are various methods for determining the preconsolidation pressure from lab data. The data is usually arranged on a semilog plot of the effective stress (frequently represented as σ'vc) versus the void ratio. This graph is commonly called the e-log p curve or the consolidation curve. Methods The preconsolidation pressure can be estimated in a number of different ways but not measured directly. It is useful to know the range of expected values depending on the type of soil being analyzed. For example, in samples with natural moisture content at the liquid limit (liquidity index of 1), preconsolidation ranges between about 0.1 and 0.8 tsf, depending on soil sensitivity (defined as the ratio of undisturbed peak undrained shear strength to totally remolded undrained shear strength). For natural moisture at the plastic limit (liquidity index equal to zero), preconsolidation ranges from about 12 to 25 tsf. See Atterberg limits for information about soil properties like liquidity index and liquid limit. Arthur Casagrande's graphical method Using a consolidation curve (a numerical sketch of this construction is given at the end of this article):(Casagrande 1936) Choose by eye the point of maximum curvature on the consolidation curve. Draw a horizontal line from this point. Draw a line tangent to the curve at the point found in part 1. Bisect the angle made from the horizontal line in part 2 and the tangent line in part 3. Extend the "straight portion" of the virgin compression curve (high effective stress, low void ratio: almost vertical on the right of the graph) up to the bisector line in part 4. The point where the lines in part 4 and part 5 intersect is the preconsolidation pressure. Gregory et al. proposed an analytical method to calculate preconsolidation stress that avoids subjective interpretations of the location of the maximum curvature point (i.e. the minimum radius of curvature). Tomás et al. 
used this method to calculate the preconsolidation pressure of 139 undisturbed soil samples to generate preconsolidation pressure maps of the Vega Baja of the Segura (Spain). Estimation of the "most probable" preconsolidation pressure Using a consolidation curve, intersect the horizontal portion of the recompression curve and a line tangent to the compression curve. This point is within the range of probable preconsolidation pressures. It can be used in calculations that require less accuracy or if a rough estimate is all that is required. See "Modeling Volume Change and Mechanical Properties with Hydraulic Models," from the Soil Science Society of America (link in references) for a more involved mathematical model based on Casagrande's method combining principles from soil mechanics and hydraulics. Profiling of overconsolidation ratio in clays by field vane The field vane (FV) has traditionally been utilized to obtain profiles of undrained shear strength in soft to medium clays. After some 40 years of experience with FV results, it has been suggested that empirical correction factors be applied to the FV data to account for the effects of strain rate, anisotropy, and disturbance on measured shear strengths. As an additional use of the device, the FV may be calibrated at each site to develop profiles of overconsolidation ratio (OCR) with depth by OCR = αvane (su(FV)/σ′v0), where αvane = 22 (PI)^−0.48 (PI in %). Mechanisms causing preconsolidation Various different factors can cause a soil to approach its preconsolidation pressure: Change in total stress due to removal of overburden can cause preconsolidation in a soil. For example, removal of structures or of glacial ice would cause a change in total stress that would have this effect. Change in pore water pressure: A change in water table elevation, artesian pressures, deep pumping or flow into tunnels, and desiccation due to surface drying or plant life can bring a soil to its preconsolidation pressure. Change in soil structure due to aging (secondary compression): Over time, soil will continue to consolidate even after the excess pore water pressures generated by loading have dissipated. Environmental changes: Changes in pH, temperature, and salt concentration can cause a soil to approach its preconsolidation pressure. Chemical weathering: Different types of chemical weathering can cause preconsolidation. Precipitation, cementing agents, and ion exchange are a few examples. Uses Preconsolidation pressure is used in many calculations of soil properties essential for structural analysis and soil mechanics. One of the primary uses is to predict the settlement of a structure after loading. This is required for any construction project such as new buildings, bridges, large roads, and railroad tracks. All of these require site evaluation before construction. Preparing a site for construction requires an initial compression of the soil to prepare for the foundation to be added. It is important to know the preconsolidation pressure because it will help to determine the amount of loading that is appropriate for the site. It will also help to determine whether recompression needs to be considered: after excavation, if conditions allow, the soil can exhibit volumetric expansion (recompression) due to the removal of load. See also Geotechnical engineering Compaction (geology) Soil compaction Settlement (soils) Soil Mechanics Notes References Soil mechanics
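Casagrande's construction lends itself to a numerical approximation. The sketch below is a simplified illustration, assuming a densely sampled, smooth e-log(σ′) curve whose last few points lie on the virgin compression line; the function and variable names are hypothetical, and the graphical procedure on lab plots remains the standard practice.

```python
import numpy as np

def casagrande_preconsolidation(stress, void_ratio):
    """Estimate preconsolidation pressure from consolidation-test data
    (effective stress and void ratio arrays) via Casagrande's construction."""
    x = np.log10(np.asarray(stress, dtype=float))   # semilog abscissa
    y = np.asarray(void_ratio, dtype=float)

    # Step 1: point of maximum curvature of the consolidation curve.
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    curvature = np.abs(d2y) / (1.0 + dy**2) ** 1.5
    i = int(np.argmax(curvature))

    # Steps 2-4: bisect the angle between the horizontal and the tangent.
    theta_tangent = np.arctan(dy[i])                # the horizontal has angle 0
    m_bisector = np.tan(theta_tangent / 2.0)

    # Step 5: extend the straight virgin portion (fit the last three points).
    m_virgin, b_virgin = np.polyfit(x[-3:], y[-3:], 1)

    # Step 6: intersect the bisector with the virgin compression line.
    b_bisector = y[i] - m_bisector * x[i]
    x_pc = (b_virgin - b_bisector) / (m_bisector - m_virgin)
    return 10.0 ** x_pc        # preconsolidation pressure, same units as input
```

The result carries the units of the input stresses, so data entered in tsf yields a preconsolidation pressure in tsf.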
Preconsolidation pressure
[ "Physics" ]
1,423
[ "Soil mechanics", "Applied and interdisciplinary physics" ]
30,100,941
https://en.wikipedia.org/wiki/Spectrum%20pooling
Spectrum pooling is a spectrum management strategy in which multiple radio spectrum users can coexist within a single allocation of radio spectrum space. One use of this technique is for primary users of a spectrum allocation to rent out unused parts of their allocation to secondary users. Spectrum pooling schemes generally require cognitive radio techniques to implement them. References Radio spectrum
Spectrum pooling
[ "Physics" ]
75
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
30,102,234
https://en.wikipedia.org/wiki/Marine%20botany
Marine botany is the study of flowering vascular plant species and marine algae that live in shallow seawater of the open ocean and the littoral zone, along shorelines of the intertidal zone, coastal wetlands, and low-salinity brackish water of estuaries. It is a branch of marine biology and botany. Marine Plant Classifications There are five kingdoms into which present-day classifications group organisms: the Monera, Protista, Plantae, Fungi, and Animalia. The Monera Fewer than 2,000 of the roughly 100,000 known species of bacteria occur in the marine environment. Although this group of species is small, they play a tremendous role in energy transfer, mineral cycles, and organic turnover. The Monera differ from the four other kingdoms in that "members of the Monera have a prokaryotic cytology in which the cells lack membrane-bound organelles such as chloroplasts, mitochondria, nuclei, and complex flagella." The bacteria can be divided into two major subkingdoms: Eubacteria and Archaebacteria. Eubacteria The Eubacteria include the only bacteria that contain chlorophyll a; these are placed in the divisions Cyanobacteria and Prochlorophyta. Characteristics of Eubacteria: They do not have any membrane-bound organelles. Most are enclosed by a cellular wall. Archaebacteria Archaebacteria are a type of single-cell organism and have a number of characteristics not seen in more "modern" cell types. These characteristics include: Unique cell membrane chemistry Unique gene transcription Capable of methanogenesis Differences in ribosomal RNA Types of Archaebacteria: Thermoproteota: Extremely heat-tolerant "Euryarchaeota": Able to survive in very salty habitats "Korarchaeota": The oldest lineage of archaebacteria Archaebacteria vs. Eubacteria While both are prokaryotic, these organisms exist in different biological domains because of how genetically different they are. Some believe archaebacteria are among the oldest forms of life on Earth, while eubacteria arose later in evolutionary history. While eubacteria are found in almost all environments, archaebacteria have been pushed to only the most extreme environments. These extreme environments include high-salinity lakes, thermal hot springs, and deep regions within the Earth's crust. Other differences include: While most eubacteria are susceptible to antibiotics, archaebacteria are not. Archaebacteria typically do not infect humans. While eubacteria have the ability to form spores to survive adverse conditions, archaebacteria do not have this ability. Kingdom Protist The Protist kingdom contains species that have been categorized due to the simplicity of their structure; most are unicellular. These include protozoa, algae, and slime molds. In marine ecosystems, macroalgae and microalgae make up a large portion of the photosynthetic organisms found. The algae can then be further categorized based on these characteristics: Storage products Photosynthetic pigments Chloroplast structure Inclusions of the cell Cell wall structure Flagella structure Cell division Life history The algae in the Protist kingdom can be placed into three different categories of macroalgae/seaweeds: Phaeophyta, Rhodophyta, and Chlorophyta. The microalgae in these marine environments can be categorized into four varieties: Pyrrhophyta, Chrysophyta, Euglenophyta, and Cryptophyta. Examples of the types of organisms found in the Protist Kingdom are red, green, and brown algae. 
Kingdom Plantae The kingdom Plantae includes the angiosperms: plants that produce seeds or flowers as part of their reproductive system. About 0.085% of the 300,000 angiosperms believed to exist can be found in marine-like environments. Some examples of plants in this kingdom are mosses, ferns, seagrasses, mangroves, and salt marsh plants; the last three form the three major communities of angiosperms in marine waters. Seagrasses are recognized as among the most important members of marine communities. They are the only truly submerged marine angiosperms and can help determine the state of an ecosystem. Seagrass helps identify the conditions of an ecosystem, as the presence of this plant aids the environment by stabilizing the water's bottom, providing shelter and food for animals, and maintaining water quality. Marine ecology Marine ecology and marine botany include: Benthic zone Coral reef Kelp forests Mangroves Phytoplankton Salt marsh Sea grass Seaweed See also Aquatic plants Aquatic ecology "Aquatic Botany", a scientific journal Phycology, the study of algae Index: Marine botany Marine primary production References Biological oceanography Aquatic ecology Seaweeds Branches of botany Oceanographical terminology
Marine botany
[ "Biology" ]
1,046
[ "Branches of botany", "Algae", "Ecosystems", "Seaweeds", "Aquatic ecology" ]
30,102,322
https://en.wikipedia.org/wiki/Dipolar%20compound
In organic chemistry, a dipolar compound or simply dipole is an electrically neutral molecule carrying a positive and a negative charge in at least one canonical description. In most dipolar compounds the charges are delocalized. Unlike salts, dipolar compounds have charges on separate atoms, not on positive and negative ions that make up the compound. Dipolar compounds exhibit a dipole moment and can be represented by resonance structures. Contributing structures containing charged atoms are denoted as zwitterionic. Some dipolar compounds can have an uncharged canonical form. Types of dipolar compounds 1,2-Dipolar compounds have the opposite charges on adjacent atoms. 1,3-Dipolar compounds have the charges separated over three atoms; they are reactants in 1,3-dipolar cycloadditions. 1,4-Dipoles, 1,5-dipoles, and so on also exist. Examples See also Zwitterion Ylide 1,3-dipole 1,3-Dipolar cycloaddition Betaine References Organic chemistry
Dipolar compound
[ "Chemistry" ]
220
[ "nan" ]
30,107,409
https://en.wikipedia.org/wiki/Barium%20azide
Barium azide is an inorganic azide with the formula Ba(N3)2. It is a barium salt of hydrazoic acid. Like all azides, it is explosive. It is less sensitive to mechanical shock than lead azide. Preparation Barium azide may be prepared by reacting sodium azide with a soluble barium salt such as barium nitrate: Ba(NO3)2 + 2 NaN3 → Ba(N3)2 + 2 NaNO3 Uses Barium azide can be used to make azides of magnesium, sodium, potassium, lithium, rubidium and zinc with their respective sulfates. It can also be used as a source for high-purity nitrogen by heating: Ba(N3)2 → Ba + 3 N2 This reaction liberates metallic barium, which is used as a getter in vacuum applications. See also Calcium azide Sodium azide Hydrazoic acid References Azides Barium compounds Explosive chemicals Inorganic compounds
Barium azide
[ "Chemistry" ]
156
[ "Explosive chemicals", "Azides", "Inorganic compounds" ]
30,109,018
https://en.wikipedia.org/wiki/Vibration%20galvanometer
A vibration galvanometer is a type of mirror galvanometer, usually with a coil suspended in the gap of a magnet or with a permanent magnet suspended in the field of an electromagnet. The natural oscillation frequency of the moving parts is carefully tuned to a specific frequency, commonly 50 or 60 Hz; higher frequencies up to 1 kHz are possible. Since the frequency depends on the mass of the moving elements, high-frequency vibration galvanometers are very small, with light coils and mirrors. The tuning of the vibration galvanometer is done by adjusting the tension of the suspension spring (see the resonance sketch below). The vibration galvanometer is used for detecting alternating currents at the frequency of its natural resonance. The most common application is as a null-indicating instrument in AC bridge circuits and current comparators. The sharp resonance of the vibration galvanometer makes it very sensitive to changes in the measured current frequency, and it can be used as an accurate tuning device. Frequency display The frequency-sensitive behaviour of the galvanometer allows its use as a crude frequency meter, commonly used for adjusting the speed of AC generator sets. The meter is constructed as a number of moving-iron galvanometers sharing the same excitation coil. As each is tuned to a slightly different frequency, only one of them will resonate at a time, according to the input frequency. The vibrating elements are conveniently constructed as a single iron 'comb' of individual reeds, each of a different length. Their range is typically from around 45–55 Hz (for a 50 Hz base frequency), with around 2 Hz resolution between each. As it is the identity of the vibrating reed (and thus the frequency) that is of interest, rather than its amplitude, these instruments are not calibrated for amplitude. They are often viewed end-on, as the clearest viewpoint for the whole comb. See also Reed receiver References Galvanometers
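The tuning described above follows the ordinary resonance formula for a spring-restored mass, f = sqrt(k/m) / 2π. The sketch below uses made-up stiffness and mass values purely to illustrate why lighter moving parts resonate at higher frequencies and why adjusting the suspension tension (effective stiffness) retunes the instrument.

```python
import math

def natural_frequency(stiffness, mass):
    """Natural frequency (Hz) of a spring-mass resonator: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

# Hypothetical 1 g moving element tuned near 50 Hz with stiffness ~98.7 N/m.
f_mains = natural_frequency(stiffness=98.7, mass=0.001)     # ~50 Hz

# A 100x lighter element with the same spring resonates at 10x the frequency.
f_light = natural_frequency(stiffness=98.7, mass=0.00001)   # ~500 Hz

print(f"{f_mains:.0f} Hz, {f_light:.0f} Hz")
```

Increasing the spring tension raises the effective stiffness, and therefore the resonant frequency, which is how the instrument is tuned in practice.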
Vibration galvanometer
[ "Technology", "Engineering" ]
386
[ "Galvanometers", "Measuring instruments" ]
30,109,143
https://en.wikipedia.org/wiki/Penetrant%20%28mechanical%2C%20electrical%2C%20or%20structural%29
Penetrants, or penetrating items, are the mechanical, electrical or structural items that pass through an opening in a wall or floor, such as pipes, electrical conduits, ducting, electrical cables and cable trays, or structural steel beams and columns. When these items pierce a wall or floor assembly, they create a space between the penetrant and the surrounding structure which can become an avenue for the spread of fire between rooms or floors. Building codes require a firestop to seal the openings around penetrants. Passive fire protection Building engineering Firestops
Penetrant (mechanical, electrical, or structural)
[ "Engineering" ]
117
[ "Building engineering", "Civil engineering", "Architecture" ]
30,109,665
https://en.wikipedia.org/wiki/Logico-linguistic%20modeling
Logico-linguistic modeling is a method for building knowledge-based systems with a learning capability using conceptual models from soft systems methodology, modal predicate logic, and logic programming languages such as Prolog. Overview Logico-linguistic modeling is a six-stage method developed primarily for building knowledge-based systems (KBS), but it also has application in manual decision support systems and information source analysis. Logico-linguistic models have a superficial similarity to John F. Sowa's conceptual graphs; both use bubble-style diagrams, both are concerned with concepts, both can be expressed in logic, and both can be used in artificial intelligence. However, logico-linguistic models are very different in both logical form and in their method of construction. Logico-linguistic modeling was developed in order to solve theoretical problems found in the soft systems method for information system design. The main thrust of the research into logico-linguistic modeling has been to show how soft systems methodology (SSM), a method of systems analysis, can be extended into artificial intelligence. Background SSM employs three modeling devices, i.e. rich pictures, root definitions, and conceptual models of human activity systems. The root definitions and conceptual models are built by stakeholders themselves in an iterative debate organized by a facilitator. The strengths of this method lie, firstly, in its flexibility, the fact that it can address any problem situation, and, secondly, in the fact that the solution belongs to the people in the organization and is not imposed by an outside analyst. Information requirements analysis (IRA) took the basic SSM method a stage further and showed how the conceptual models could be developed into a detailed information system design. IRA calls for the addition of two modeling devices: "Information Categories", which show the required information inputs and outputs from the activities identified in an expanded conceptual model; and the "Maltese Cross", a matrix which shows the inputs and outputs from the information categories and shows where new information processing procedures are required. A completed Maltese Cross is sufficient for the detailed design of a transaction processing system. The initial impetus to the development of logico-linguistic modeling was a concern with the theoretical problem of how an information system can have a connection to the physical world. This is a problem in both IRA and more established methods (such as SSADM) because none base their information system design on models of the physical world. IRA designs are based on a notional conceptual model and SSADM is based on models of the movement of documents. The solution to these problems provided a formula that was not limited to the design of transaction processing systems but could be used for the design of KBS with learning capability. The six stages of logico-linguistic modeling The logico-linguistic modeling method comprises six stages. 1. Systems analysis In the first stage logico-linguistic modeling uses SSM for systems analysis. This stage seeks to structure the problem in the client organization by identifying stakeholders, modelling organizational objectives and discussing possible solutions. At this stage it is not assumed that a KBS will be a solution, and logico-linguistic modeling often produces solutions that do not require a computerized KBS. Expert systems tend to capture the expertise of individuals in different organizations on the same topic. 
By contrast, a KBS produced by logico-linguistic modeling seeks to capture the expertise of individuals in the same organization on different topics. The emphasis is on the elicitation of organizational or group knowledge rather than individual experts. In logico-linguistic modeling the stakeholders become the experts. The end point of this stage is an SSM-style conceptual model, such as figure 1. 2. Language creation According to the theory behind logico-linguistic modeling, the SSM conceptual model building process is a Wittgensteinian language-game in which the stakeholders build a language to describe the problem situation. The logico-linguistic model expresses this language as a set of definitions, see figure 2. 3. Knowledge elicitation After the model of the language has been built, putative knowledge about the real world can be added by the stakeholders. Traditional SSM conceptual models contain only one logical connective (a necessary condition). In order to represent causal sequences, "sufficient conditions" and "necessary and sufficient conditions" are also required. In logico-linguistic modeling this deficiency is remedied by two additional types of connective. The outcome of stage three is an empirical model, see figure 3. 4. Knowledge representation Modal predicate logic (a combination of modal logic and predicate logic) is used as the formal method of knowledge representation. The connectives from the language model are logically true (indicated by the "L" modal operator) and connectives added at the knowledge elicitation stage are possibly true (indicated by the "M" modal operator). Before proceeding to stage 5, the models are expressed in logical formulae. 5. Computer code Formulae in predicate logic translate easily into the Prolog artificial intelligence language. The modality is expressed by two different types of Prolog rules: rules taken from the language creation stage of the model building process are treated as incorrigible, while rules from the knowledge elicitation stage are marked as hypothetical (a sketch of this distinction is given at the end of this article). The system is not confined to decision support but has a built-in learning capability. 6. Verification A knowledge-based system built using this method verifies itself. Verification takes place when the KBS is used by the clients. It is an ongoing process that continues throughout the life of the system. If the stakeholder beliefs about the real world are mistaken, this will be brought out by the addition of Prolog facts that conflict with the hypothetical rules. It operates in accordance with the classic principle of falsifiability found in the philosophy of science. Applications Knowledge-based computer systems Logico-linguistic modeling has been used to produce fully operational computerized knowledge-based systems, such as one for the management of diabetes patients in a hospital out-patients department. Manual decision support In other projects the move into Prolog was considered unnecessary because the printed logico-linguistic models provided an easy-to-use guide to decision making, for example, a system for mortgage loan approval. Information source analysis In some cases a KBS could not be built because the organization did not have all the knowledge needed to support all their activities. In these cases logico-linguistic modeling showed shortcomings in the supply of information and where more was needed. 
For example, logico-linguistic modeling revealed such shortcomings in a planning department in a telecoms company. Criticism While logico-linguistic modeling overcomes the problems found in SSM's transition from conceptual model to computer code, it does so at the expense of increased complexity in the stakeholder-constructed models. The benefits of this complexity are questionable, and this modeling method may be much harder to use than other methods. This contention has been exemplified by subsequent research: an attempt by researchers to model buying decisions across twelve companies using logico-linguistic modeling required simplification of the models and removal of the modal elements. See also Argument map Cognitive map Concept map Fuzzy cognitive map Knowledge representation and reasoning Rhetorical structure theory Semantic network References Further reading Gregory, Frank Hutson (1993) "A logical analysis of soft systems modelling: implications for information system design and knowledge based system design". PhD thesis, University of Warwick. Knowledge representation Systems analysis Modal logic
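The incorrigible/hypothetical rule distinction from stage 5 and the falsification-driven verification of stage 6 can be sketched outside Prolog as well. The Python fragment below is a hypothetical illustration, not the published system: the rule and fact names are invented, rules carry their modal status ("L" for logically true, "M" for possibly true), and a hypothetical rule is simply retracted when an observed fact contradicts its conclusion.

```python
# Modal status: "L" rules come from the language model (incorrigible);
# "M" rules come from knowledge elicitation (hypothetical, falsifiable).
rules = [
    {"mode": "L", "if": {"order_received"}, "then": "order_recorded"},
    {"mode": "M", "if": {"order_recorded"}, "then": "invoice_sent"},
]

def forward_chain(facts, rules):
    """Naive forward chaining: derive everything the rules support."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule["if"] <= derived and rule["then"] not in derived:
                derived.add(rule["then"])
                changed = True
    return derived

def verify(observations, rules):
    """Stage-6-style verification: retract hypothetical ("M") rules whose
    conclusion is contradicted by an observed negative fact."""
    return [rule for rule in rules
            if rule["mode"] == "L" or ("not_" + rule["then"]) not in observations]

# An observation contradicting the hypothetical rule falsifies and removes it.
rules = verify({"order_received", "not_invoice_sent"}, rules)
print(forward_chain({"order_received"}, rules))
# {'order_received', 'order_recorded'} -- the falsified rule no longer fires
```

In a real Prolog implementation, the same effect would come from asserting facts and letting conflicts with hypothetical rules surface during use, as the verification stage describes.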
Logico-linguistic modeling
[ "Mathematics" ]
1,481
[ "Mathematical logic", "Modal logic" ]
56,620,190
https://en.wikipedia.org/wiki/Clay%20panel
Clay panel or clay board (also known as loam panel, clay wallboard, clay building board, or clay building panel) is a panel made of clay with some additives. The clay is mixed with sand, water, and fiber, typically wood fiber, and sometimes other additives like starch. Most often this means using high-cellulose waste fibres. To improve breaking resistance, clay boards are often reinforced on the back with an embedded hessian skin or similar embedding. Clay panels allow the building material loam to be used in dry construction as well. Clay wallboards are a sustainable alternative to gypsum plasterboards, suitable for drywall applications for interior walls and ceilings. They can be applied to either timber or metal studwork. Usually the application of clay boards is completed with a clay finishing plaster. Constructional properties The boards have fire-retardant properties and medium levels of acoustic insulation. Due to the clay component, they have the ability to absorb large amounts of humidity through the plaster, helping to protect vulnerable buildings from the excess moisture generated by modern living. Production Clay panels are available from different manufacturers in different designs. The main component is clay or loam. This is either reinforced by a reed mat or stabilised by straw or wood fibres (sawdust), as with clay bricks. Further vegetable or mineral aggregates may also be contained. The boards are not heat-treated, so the positive characteristics of the clay are retained in their entirety. Processing Clay panels are used for the dry construction of wall and ceiling claddings and facing shells. The boards are mounted on a steel-profile or wooden frame construction by means of screws or nails. For ceiling claddings, washers must be used, depending on the type of clay panel. The panels can be cut with standard tools such as a cutter knife, a jigsaw or a circular saw. The joints of the clay boards often have tongue and groove for easier processing. The joints must be reinforced with a jute fabric or glass-fibre fabric and filled with a fine clay plaster mortar. The clay board is mounted in vertical or horizontal orientation. The clay wall or ceiling can then be treated with a clay plaster or painted directly with clay paint. Combination with heating and cooling Clay is an ideal construction material for combination with heating and cooling. Before the development of drywall panels made of clay, however, wall heating elements could only be laid in clay plaster. Meanwhile, some manufacturers offer clay building boards with integrated heating and cooling pipes. This makes the installation of heating and cooling in dry construction on walls and ceilings much easier. Applications Interior wall and ceiling board Area separation wall board Backer board and underlayment Substrates for coatings and insulated systems Clay Climate Systems See also Enviroboard Magnesium oxide wallboard References Building materials Sustainable building Sustainable architecture Building biology Composite materials Passive fire protection Wallcoverings
Clay panel
[ "Physics", "Engineering", "Environmental_science" ]
589
[ "Sustainable building", "Sustainable architecture", "Building engineering", "Composite materials", "Construction", "Materials", "Building materials", "Environmental social science", "Building biology", "Matter", "Architecture" ]
56,621,861
https://en.wikipedia.org/wiki/Floating-point%20error%20mitigation
Floating-point error mitigation is the minimization of errors caused by the fact that real numbers cannot, in general, be accurately represented in a fixed space. By definition, floating-point error cannot be eliminated, and, at best, can only be managed. Huberto M. Sierra noted this problem in his 1956 patent "Floating Decimal Point Arithmetic Control Means for Calculator". The Z1, developed by Konrad Zuse in 1936, was the first computer with floating-point arithmetic and was thus susceptible to floating-point error. Early computers, however, with operation times measured in milliseconds, could not solve large, complex problems and thus were seldom plagued with floating-point error. Today, however, with supercomputer system performance measured in petaflops, floating-point error is a major concern for computational problem solvers. The following sections describe the strengths and weaknesses of various means of mitigating floating-point error. Numerical error analysis Though not the primary focus of numerical analysis, numerical error analysis exists for the analysis and minimization of floating-point rounding error. Monte Carlo arithmetic Error analysis by Monte Carlo arithmetic is accomplished by repeatedly injecting small errors into an algorithm's data values and determining the relative effect on the results. Extension of precision Extension of precision is the use of larger representations of real values than the one initially considered. The IEEE 754 standard defines precision as the number of digits available to represent real numbers. A programming language can include single precision (32 bits), double precision (64 bits), and quadruple precision (128 bits). While extension of precision makes the effects of error less likely or less important, the true accuracy of the results is still unknown. Variable-length arithmetic Variable-length arithmetic represents numbers as a string of digits of variable length, limited only by the memory available. Variable-length arithmetic operations are considerably slower than fixed-length format floating-point instructions. When high performance is not a requirement, but high precision is, variable-length arithmetic can prove useful, though the actual accuracy of the result may not be known. Use of the error term of a floating-point operation The floating-point algorithm known as TwoSum or 2Sum, due to Knuth and Møller, and its simpler but restricted version FastTwoSum or Fast2Sum (3 operations instead of 6), allow one to get the (exact) error term of a floating-point addition rounded to nearest (a sketch in code is given below). One can also obtain the (exact) error term of a floating-point multiplication rounded to nearest in 2 operations with a fused multiply–add (FMA), or in 17 operations if the FMA is not available (with an algorithm due to Dekker). These error terms can be used in algorithms in order to improve the accuracy of the final result, e.g. with floating-point expansions or compensated algorithms. Operations giving the result of a floating-point addition or multiplication rounded to nearest with its error term (but slightly differing from the algorithms mentioned above) have been standardized and recommended in the IEEE 754-2019 standard. Choice of a different radix Changing the radix, in particular from binary to decimal, can help to reduce the error and better control the rounding in some applications, such as financial applications. 
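The 2Sum and Fast2Sum algorithms referred to in the error-term section above are short enough to state directly. Python floats are IEEE 754 binary64 with round-to-nearest, so the standard error-free transformations apply; Fast2Sum additionally requires |a| >= |b|.

```python
def two_sum(a, b):
    """Knuth/Moller 2Sum: returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    b_virtual = s - a
    a_virtual = s - b_virtual
    b_roundoff = b - b_virtual
    a_roundoff = a - a_virtual
    return s, a_roundoff + b_roundoff

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: same contract in 3 operations, requires |a| >= |b|."""
    s = a + b
    return s, b - (s - a)

s, e = two_sum(1.0, 2.0**-60)
print(s, e)   # 1.0 8.673617379884035e-19: the small addend survives in the error term
```

Compensated algorithms such as Kahan summation accumulate these error terms separately to improve the accuracy of long sums.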
Interval arithmetic Interval arithmetic is a mathematical technique used to put bounds on rounding errors and measurement errors in mathematical computation. Values are intervals, which can be represented in various ways, such as: inf-sup: a lower bound and an upper bound on the true value; mid-rad: an approximation and an error bound (called the midpoint and radius of the interval); triplex: an approximation, a lower bound, and an upper bound on the error. "Instead of using a single floating-point number as approximation for the value of a real variable in the mathematical model under investigation, interval arithmetic acknowledges limited precision by associating with the variable a set of reals as possible values. For ease of storage and computation, these sets are restricted to intervals." The evaluation of an interval arithmetic expression may provide a large range of values, and may seriously overestimate the true error boundaries. A minimal inf-sup sketch is given at the end of this article. Gustafson's unums Unums ("Universal Numbers") are an extension of variable-length arithmetic proposed by John Gustafson. Unums have variable-length fields for the exponent and significand lengths, and error information is carried in a single bit, the ubit, representing possible error in the least significant bit of the significand (ULP). The efficacy of unums has been questioned by William Kahan. Bounded floating point Bounded floating point is a method proposed and patented by Alan Jorgensen. The data structure includes the standard IEEE 754 data structure and interpretation, as well as information about the error between the true real value represented and the value stored by the floating-point representation. Bounded floating point has been criticized as being derivative of Gustafson's work on unums and interval arithmetic. References Floating point Computer arithmetic Error
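As a concrete illustration of the inf-sup representation described above, the minimal sketch below widens each computed bound outward by one ulp using math.nextafter (Python 3.9+) instead of true directed rounding modes, so its intervals are valid but slightly wider than a real interval library would produce.

```python
import math

class Interval:
    """Closed interval [lo, hi] with conservatively outward-rounded arithmetic."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

# 0.1 has no exact binary64 representation; the interval brackets the true 1.1.
print(Interval(1.0, 1.0) + Interval(0.1, 0.1))
```

The width of the result after a long computation gives a rigorous, if sometimes pessimistic, bound on the accumulated rounding error, which is the overestimation issue noted above.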
Floating-point error mitigation
[ "Mathematics" ]
1,026
[ "Computer arithmetic", "Arithmetic" ]
56,632,242
https://en.wikipedia.org/wiki/EzMol
Ezmol, stylized EzMol, is a web server for molecular modelling.

About

EzMol is a molecular modelling web server for the visualisation of protein molecules. It has a limited selection of visualisation options covering the most common requirements of molecular visualisation, enabling the rapid production of images through a wizard-style interface, without the use of command-line syntax. It is developed and maintained by Professor Michael Sternberg's group at The Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, and was published in the Journal of Molecular Biology in 2018.

References

External links

Official promotional video

Computational chemistry
Computational chemistry software
Molecular modelling software
EzMol
[ "Chemistry" ]
138
[ "Molecular modelling software", "Molecular physics", "Computational chemistry software", "Chemistry software", "Theoretical chemistry", "Computational chemistry", "Molecular modelling", "Molecular physics stubs" ]
52,290,066
https://en.wikipedia.org/wiki/Square-difference-free%20set
In mathematics, a square-difference-free set is a set of natural numbers, no two of which differ by a square number. Hillel Furstenberg and András Sárközy proved in the late 1970s the Furstenberg–Sárközy theorem of additive number theory, showing that, in a certain sense, these sets cannot be very large. In the game of subtract a square, the positions where the next player loses form a square-difference-free set. Another square-difference-free set is obtained by doubling the Moser–de Bruijn sequence.

The best known upper bound on the size of a square-difference-free set of numbers up to n is only slightly sublinear, but the largest known sets of this form are significantly smaller, of size n^c for an exponent c ≈ 0.733. Closing the gap between these upper and lower bounds remains an open problem. The sublinear size bounds on square-difference-free sets can be generalized to sets where certain other polynomials are forbidden as differences between pairs of elements.

Example

An example of a set with no square differences arises in the game of subtract a square, invented by Richard A. Epstein and first described in 1966 by Solomon W. Golomb. In this game, two players take turns removing coins from a pile of coins; the player who removes the last coin wins. In each turn, the player can only remove a nonzero square number of coins from the pile. Any position in this game can be described by an integer, its number of coins. The non-negative integers can be partitioned into "cold" positions, in which the player who is about to move is losing, and "hot" positions, in which the player who is about to move can win by moving to a cold position. No two cold positions can differ by a square, because if they did then a player faced with the larger of the two positions could move to the smaller position and win. Thus, the cold positions form a set with no square difference:

0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, 39, 44, ...

These positions can be generated by a greedy algorithm in which the cold positions are generated in numerical order, at each step selecting the smallest number that does not have a square difference with any previously selected number; a short sketch of this generator is given below. As Golomb observed, the cold positions are infinite, and more strongly the number of cold positions up to n is at least proportional to √n. For, if there were fewer cold positions, there would not be enough of them to supply a winning move to each hot position. The Furstenberg–Sárközy theorem shows, however, that the cold positions are less frequent than hot positions: for every ε > 0, and for all large enough n, the proportion of cold positions up to n is at most ε. That is, when faced with a starting position in the range from 1 to n, the first player can win from most of these positions. Numerical evidence suggests that the actual number of cold positions up to n is approximately n^0.7.
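The greedy generation of cold positions just described is easy to put into code. The following Python sketch is illustrative only (the function names and the bound of 50 are arbitrary choices, not from the sources); it also re-derives the same set directly from the game rules, which makes the equivalence of the two descriptions concrete.

```python
import math

def is_square(d):
    """True when the non-negative integer d is a perfect square."""
    r = math.isqrt(d)
    return r * r == d

def greedy_cold_positions(limit):
    """Greedily build a square-difference-free set: scan 0, 1, 2, ... and
    keep n unless it differs from an already-kept number by a square."""
    cold = []
    for n in range(limit):
        if not any(is_square(n - c) for c in cold):
            cold.append(n)
    return cold

def cold_positions_from_game(limit):
    """Label each pile size of subtract a square: a position is cold iff
    every square-sized removal leads to a hot position."""
    is_cold = []
    for n in range(limit):
        moves = (n - k * k for k in range(1, math.isqrt(n) + 1))
        is_cold.append(not any(is_cold[m] for m in moves))
    return [n for n, cold in enumerate(is_cold) if cold]

# Both constructions agree, reproducing 0, 2, 5, 7, 10, 12, 15, 17, 20, 22, 34, ...
assert greedy_cold_positions(50) == cold_positions_from_game(50)
print(greedy_cold_positions(50))
```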
Upper bounds

According to the Furstenberg–Sárközy theorem, if S is a square-difference-free set, then the natural density of S is zero. That is, for every ε > 0, and for all sufficiently large n, the fraction of the numbers up to n that are in S is less than ε. Equivalently, every set of natural numbers with positive upper density contains two numbers whose difference is a square, and more strongly contains infinitely many such pairs. The Furstenberg–Sárközy theorem was conjectured by László Lovász, and proved independently in the late 1970s by Hillel Furstenberg and András Sárközy, after whom it is named. Since their work, several other proofs of the same result have been published, generally either simplifying the previous proofs or strengthening the bounds on how sparse a square-difference-free set must be.

The best upper bound currently known is due to Thomas Bloom and James Maynard, who show that a square-difference-free set can include at most O(n/(log n)^(c log log log n)) of the integers from 1 to n, where c is some absolute constant. Most of the proofs that establish quantitative upper bounds use Fourier analysis or ergodic theory, although neither is necessary to prove the weaker result that every square-difference-free set has zero density.

Lower bounds

Paul Erdős conjectured that every square-difference-free set has at most √n (log n)^k elements up to n, for some constant k, but this was disproved by Sárközy, who proved that denser sequences exist. Sárközy weakened Erdős's conjecture to suggest that, for every ε > 0, every square-difference-free set has at most n^(1/2 + ε) elements up to n. This, in turn, was disproved by Imre Z. Ruzsa, who found square-difference-free sets with up to n^0.733077 elements.

Ruzsa's construction chooses a square-free integer b as the radix of the base-b notation for the integers, such that there exists a large set R of numbers from 0 to b − 1, no two of whose differences are squares modulo b. He then chooses his square-difference-free set to be the numbers that, in base-b notation, have members of R in their even digit positions. The digits in odd positions of these numbers can be arbitrary. Ruzsa found a seven-element set R modulo b = 65, giving the stated bound. Subsequently, Ruzsa's construction has been improved by using a different base, b = 205, to give square-difference-free sets of size n^0.7334. When applied to the base b = 2, the same construction generates the Moser–de Bruijn sequence multiplied by two, a square-difference-free set with approximately √n elements up to n; a brute-force check of this set is sketched at the end of this article. This is too sparse to provide nontrivial lower bounds on the Furstenberg–Sárközy theorem, but the same sequence has other notable mathematical properties. Based on these results, it has been conjectured that for every ε > 0 and every sufficiently large n there exist square-difference-free subsets of the numbers from 1 to n with n^(1 − ε) elements. That is, if this conjecture is true, the exponent of one in the upper bounds for the Furstenberg–Sárközy theorem cannot be lowered. As an alternative possibility, the exponent 3/4 has been identified as "a natural limitation to Ruzsa's construction" and another candidate for the true maximum growth rate of these sets.

Generalization to other polynomials

The upper bound of the Furstenberg–Sárközy theorem can be generalized from sets that avoid square differences to sets that avoid differences in p(ℤ), the values at integers of a polynomial p with integer coefficients, as long as the values of p include an integer multiple of every integer. The condition on multiples of integers is necessary for this result, because if there is an integer k whose multiples do not appear in p(ℤ), then the multiples of k would form a set of nonzero density with no differences in p(ℤ).

References

Additive number theory
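As a complement to the lower-bound discussion above, the following Python sketch constructs the doubled Moser–de Bruijn sequence (the numbers whose base-4 representation uses only the digits 0 and 2) and verifies by brute force that no two of its members differ by a perfect square. The function names and the search bound are illustrative choices, not from the sources.

```python
import math

def doubled_moser_de_bruijn(limit):
    """Numbers below limit whose base-4 digits are all 0 or 2,
    i.e. the Moser–de Bruijn sequence multiplied by two."""
    members = []
    for n in range(limit):
        m = n
        while m > 0 and m % 4 in (0, 2):
            m //= 4
        if m == 0:  # every base-4 digit of n was 0 or 2
            members.append(n)
    return members

def is_square_difference_free(s):
    """Brute-force check over all pairs of the sorted list s."""
    for i, a in enumerate(s):
        for b in s[i + 1:]:
            d = b - a
            if math.isqrt(d) ** 2 == d:
                return False
    return True

members = doubled_moser_de_bruijn(5000)
print(members[:8])                         # [0, 2, 8, 10, 32, 34, 40, 42]
print(is_square_difference_free(members))  # True
```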
Square-difference-free set
[ "Mathematics" ]
1,349
[ "Unsolved problems in mathematics", "Mathematical problems", "Unsolved problems in number theory", "Number theory" ]
52,294,072
https://en.wikipedia.org/wiki/Anta%20capital
An anta capital is the crowning portion of an anta, the front edge of a supporting wall in Greek temple architecture. The anta is generally crowned by a stone block designed to spread the load from the superstructure (entablature) it supports, called an "anta capital" when it is structural, or sometimes a "pilaster capital" if it is only decorative, as was often the case during the Roman period.

In order not to protrude unduly from the wall, these anta capitals usually display a rather flat surface, so that the capital has more or less a brick-shaped structure overall. The anta capital can be more or less decorated depending on the artistic order it belongs to, with designs, at least in ancient Greek architecture, often quite different from the design of the column capitals it stands next to. This difference disappeared in Roman times, when anta or pilaster capitals came to have designs very similar to those of the column capitals.

Doric anta capital

The Doric anta capital was designed in continuity with the wall cornice. It is characterized by a broad neck, a beak molding, and an abacus. The decoration is usually very sparse, except for capitals displaying a transition with the Ionic order.

Ionic anta capital

The Ionic anta capital is very different in that it is very rich in moldings. It remains, however, essentially brick-shaped. The Ionic anta capitals of the Erechtheion take the shape of highly decorated brick-shaped capitals, with designs essentially in continuity with wall cornices, plus some additional horizontal moldings.

Some temples in Ionia tend to have a very different design of anta capital, flat at the front but with volutes on the sides, giving them the shape of sofas, hence the name they sometimes take of "sofa capitals". In this case the sides of the capital broaden upward, in a shape reminiscent of a couch or sofa. These capitals can also be described as pilaster capitals, which, strictly speaking, are normally decorative rather than structural components.

In India, an anta capital of a quasi-Ionic type was discovered and dated to the 3rd century BCE. It has a central flame palmette motif framed by Ionic volutes and placed between horizontal rows of decorative motifs. It is thought that its creation was due to the influence of the neighboring Seleucid Empire, or of a nearby Hellenistic city such as Ai-Khanoum.

Corinthian anta capital

Corinthian anta capitals tend to be much closer in design to the capitals of the columns, although often with a flattened composition: during the Greek period, acanthus leaves are crowned by a central motif, such as a palmette, itself bracketed by volutes. This design was widely adopted in India for Indo-Corinthian capitals.

During the Greek period, anta capitals had designs different from those of column capitals, but during Roman and later times this difference disappeared, and both column and anta capitals had the same types of designs. At the same time, decorative pilaster designs multiplied during Roman times, so that many Corinthian anta capital designs are actually purely decorative pilaster designs.

See also

Antae temple

Notes

References

Attribution

Architectural elements
Ancient Greek architecture
Anta capital
[ "Technology", "Engineering" ]
671
[ "Building engineering", "Architectural elements", "Components", "Architecture" ]