id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
70,534,081 | https://en.wikipedia.org/wiki/Michael%20Start | Michael Start (born 27 October 1960) is a British automata maker and restorer. He trained in Technical Horology at Hackney College in London, and now specialises in the conservation and restoration of antique automata, with a focus on 19th-century automata.
Michael Start is co-founder of "The House of Automata". Together with his wife, Maria Start, he restores and deals in antique automata. Founded in London, the business is now based in the north of Scotland, where it operates from a workshop studio.
Media and television
Michael Start has worked as a consultant to the media, advising on automata and horology for the screen. In 2011, Start designed the mechanism for the automaton featured in the Martin Scorsese film Hugo.
Michael Start features as an expert on Salvage Hunters: The Restorers, produced by Quest and Discovery Channel. He and his wife Maria appear in series 3, 4 and 5 as experts in automata restoration.
References
External links
The House of Automata
The House of Automata - YouTube
Salvage Hunters on Quest discovery+
Michael Start
Automata Convention
1960 births
Living people
Automata (mechanical)
Conservator-restorers
21st-century British people | Michael Start | [
"Engineering"
] | 283 | [
"Automata (mechanical)",
"Automation"
] |
64,638,887 | https://en.wikipedia.org/wiki/Tower%20crane%20anti-collision%20system | A tower crane anti-collision system is an operator support system for tower cranes on construction sites. It helps an operator to anticipate the risk of contact between the moving parts of a tower crane and other tower cranes and structures. In the event that a collision becomes imminent, the system can send a command to the crane's control system, ordering it to slow down or stop. An anti-collision system can describe an isolated system installed on an individual tower crane. It can also describe a site-wide coordinated system, installed on many tower cranes in close proximity.
History
Developments in tower crane design and the increasing complexity of construction sites in the 1970s and 1980s led to an increase in the number and proximity of tower cranes on construction sites. This increased the risk of collisions between cranes, particularly when their operating areas overlapped.
The first tower crane anti-collision systems were developed in France in 1985 by SMIE.
A Ministry of Labour directive issued in 1987 made anti-collision systems compulsory on all tower cranes in France.
In 2011, Hong Kong introduced a "Code of Practice for the Safe Use of Tower Cranes" and Singapore introduced a "Workplace Safety and Health (Construction) Regulation". Both required the provision of an anti-collision system where more than one tower crane is in use.
In 2015, Luxembourg required automatic devices to be installed to avoid the risk of collision between tower cranes.
Collision avoidance with structures and other tower cranes
Various sensors are used to measure the position, velocity and angle of each tower crane’s moving parts. These sensors can be part of the anti-collision system or the crane. This information is sent via radio link to a computer and a display in the operator’s cabin. Several features commonly found across tower crane anti-collision systems use this data.
Zoning
Anti-collision systems allow prohibited zones to be defined. These are areas (such as schools, transport links, electrical power lines and areas beyond the site boundary) where the crane is not allowed to operate.
Situational awareness
The operator's cabin hosts a display showing the tower crane's position, movement and operating area. Where the tower crane’s operating area overlaps with other cranes or prohibited zones these are also displayed. The system alerts the operator when the crane is approaching a prohibited area or another crane.
Tower crane control
Anti-collision systems are often connected to the tower crane’s control system. This allows the anti-collision system to automatically slow down and stop the crane if there is a risk of an accident. The operator is then prevented from moving the crane towards the danger and can only move it away.
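The slow/stop decision described above can be sketched in a few lines of code. The following is a simplified, hypothetical illustration (the function names, thresholds and circular-zone geometry are invented for this sketch, not taken from any vendor's system) of how a measured slew angle and trolley radius might be turned into a command:

```python
import math

# Hypothetical sketch of anti-collision zoning logic; real systems use
# certified controllers, full 3D kinematics and crane-to-crane radio links.
SLOW_RADIUS = 10.0  # m: begin slowing when a prohibited zone is this close
STOP_RADIUS = 3.0   # m: command a stop before contact becomes possible

def hook_position(base_x, base_y, slew_deg, trolley_radius):
    """Project the trolley/hook position onto the ground plane."""
    rad = math.radians(slew_deg)
    return (base_x + trolley_radius * math.cos(rad),
            base_y + trolley_radius * math.sin(rad))

def command(hook, prohibited_zones):
    """Return 'stop', 'slow' or 'normal' from gaps to circular zones."""
    for zx, zy, zr in prohibited_zones:  # each zone: center x, y and radius
        gap = math.hypot(hook[0] - zx, hook[1] - zy) - zr
        if gap <= STOP_RADIUS:
            return "stop"
        if gap <= SLOW_RADIUS:
            return "slow"
    return "normal"

# Example: crane at the origin, jib slewed to 40 degrees, trolley 30 m out,
# approaching a prohibited zone centered at (40, 25) with an 8 m radius.
hook = hook_position(0.0, 0.0, 40.0, 30.0)
print(command(hook, [(40.0, 25.0, 8.0)]))  # -> 'slow'
```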
Supervisory system
A supervisory system is a typical feature of an anti-collision system that covers an entire construction site. It allows a site supervisor to have a complete view of tower crane operations on a construction site. It also allows for centralised configuration and maintenance of the system.
Fail-safe operation
If a fault occurs on a tower crane's anti-collision system, or it is bypassed, other tower cranes will be prevented from operating within the volume of the faulty system.
Collision avoidance with other vehicles
Anti-collision lights are required on tower cranes operating in or near to airfield flight paths. Three red flashing lights are positioned on each end and the top of the crane. They provide a visual warning to aircraft pilots.
Limitations
Tower crane anti-collision systems do not prevent collisions with mobile construction equipment such as mobile cranes and aerial work platforms.
Standards
A draft standard setting out the functional requirements of tower crane anti-collision devices and systems, BS EN 17076 ("Anti-collision devices and systems for tower cranes. Safety characteristics and requirements"), is open for comment.
See also
Crane (machine) - Tower crane
Anti-collision light
References
Cranes (machines)
Construction | Tower crane anti-collision system | [
"Engineering"
] | 757 | [
"Construction",
"Engineering vehicles",
"Cranes (machines)"
] |
64,643,019 | https://en.wikipedia.org/wiki/Ralph%20N.%20Adams | Ralph Norman "Buzz" Adams (August 26, 1924 – November 28, 2002) was a distinguished bioanalytical chemist at the University of Kansas. The Adams Institute and Adams Professorship at the university are named after him.
Background and career
Adams was born in Atlantic City, New Jersey, in 1924. He was drafted into the Army Air Corps in World War II, flying bombers in the Pacific theater. Upon his return, he studied chemistry at Rutgers University, graduating in 1950, followed by Ph.D. studies at Princeton University under N. Howell Furman. After two years on the faculty at Princeton, Adams became a professor at KU in 1955. Adams' research initially focused on solid electrodes and electrochemical cell reactions. In later years, his research group changed direction and studied how electrical signaling in the brain underlies neurological disorders such as schizophrenia.
Awards
1963 - Guggenheim Fellowship
1982 - Higuchi Award for Basic Science
1985 - Reilly Award
1996 - Oesper Award
References
1924 births
Electrochemists
Rutgers University alumni
Princeton University
University of Kansas
2002 deaths
United States Army Air Forces bomber pilots of World War II | Ralph N. Adams | [
"Chemistry"
] | 222 | [
"Electrochemistry",
"Electrochemists"
] |
64,645,289 | https://en.wikipedia.org/wiki/Viedma%20ripening | Viedma ripening or attrition-enhanced deracemization is a chiral symmetry breaking phenomenon observed in solid/liquid mixtures of enantiomorphous (racemic conglomerate) crystals that are subjected to comminution. It can be classified in the wider area of spontaneous symmetry breaking phenomena observed in chemistry and physics.
It was discovered in 2005 by geologist Cristobal Viedma, who used glass beads and a magnetic stirrer to enable particle breakage of a racemic mixture of enantiomorphous sodium chlorate crystals in contact with their saturated solution in water. A sigmoidal (autocatalytic) increase in the solid-phase enantiomeric excess of the mixture was obtained, eventually leading to homochirality, i.e. the complete disappearance of one of the chiral species. Since the original discovery, Viedma ripening has been observed in a variety of intrinsically chiral organic compounds that exhibit conglomerate crystallization and are able to inter-convert in the liquid via racemization reactions. It is also regarded as a potential new technique to separate enantiomers of chiral molecules in the pharmaceutical and fine chemical industries (chiral resolution).
Mechanism
The exact interplay of the mechanisms leading to deracemization in Viedma ripening is a subject of ongoing scientific debate. It is, however, currently believed that for intrinsically chiral molecules, deracemization occurs via a combination of various phenomena:
Crystal growth and dissolution due to the particle-size dependence of solubility (i.e. Ostwald ripening)
Enantiospecific cluster aggregation to larger particles of the same chirality
Particle breakage
Racemization
Two key assumptions often invoked to explain the mechanism are that: a) small fragments generated by breakage of each enantiomeric crystal population can maintain their chirality, even when they are smaller than the critical radius for nucleation (and are thus expected to dissolve), and b) small chiral fragments can undergo enantiospecific aggregation to larger particles of the same chirality. Using these two assumptions, it can be shown mathematically that any stochastic, even immeasurable, asymmetry of one enantiomeric crystal population over the other can be amplified to homochirality in a random manner.
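The amplification argument can be made concrete with a deliberately minimal toy iteration. The sketch below is an illustration only, not one of the published kinetic models: enantiospecific aggregation is caricatured as growth of each solid population proportional to the square of its own mass fraction, with normalization standing in for the racemizing, achiral supply of dissolved material. The symmetric state is then unstable and any immeasurably small excess grows sigmoidally toward homochirality:

```python
# Toy model of attrition-enhanced deracemization (illustrative only):
# quadratic self-feedback makes the racemic state x = 1/2 unstable.
def amplify(x, steps=60):
    """Iterate the solid-phase mass fraction x of one enantiomorph."""
    history = [x]
    for _ in range(steps):
        x = x**2 / (x**2 + (1 - x) ** 2)  # normalized quadratic feedback
        history.append(x)
    return history

ee0 = 1e-4                          # immeasurably small initial excess
traj = amplify(0.5 * (1 + ee0))     # start just above the symmetric point
print(traj[0], traj[10], traj[-1])  # 0.50005 -> ~0.55 -> 1.0 (homochirality)
```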
Implications for the origin of life
In principle, the molecules required for the generation of life, i.e. the amino acids that combine to form proteins and the sugars that form DNA molecules, are all chiral and are thus able to adopt two mirror-image forms (often described as left- and right-handed), which from a chemical perspective are equally likely to exist. However, all biologically-relevant molecules known on earth are of a single handedness, even though their mirror images are also capable of forming similar molecules. The reason for the prevalence of homochirality in living organisms is currently unknown and is often connected to the origin of life itself. Whether homochirality emerged before or after life is also unknown, but many researchers believe that homochirality could have been the result of amplification of extremely small chiral asymmetries.
Since Viedma ripening has been observed in biologically-relevant molecules, such as chiral amino acids, it has been put forward by some as a possible contributing mechanism for chiral amplification in a prebiotic world.
See also
Crystallization
Racemic mixture
Chiral symmetry breaking
Racemization
Ostwald ripening
Solubility equilibrium
Chiral resolution
Enantioselective synthesis
Stereoselectivity
Spontaneous absolute asymmetric synthesis
References
Chirality
Racemic mixtures
Stereochemistry
Amino acids | Viedma ripening | [
"Physics",
"Chemistry",
"Biology"
] | 762 | [
"Biomolecules by chemical classification",
"Pharmacology",
"Racemic mixtures",
"Origin of life",
"Biochemistry",
"Stereochemistry",
"Chirality",
"Amino acids",
"Space",
"Chemical mixtures",
"nan",
"Asymmetry",
"Biological hypotheses",
"Spacetime",
"Symmetry"
] |
64,646,818 | https://en.wikipedia.org/wiki/Genda%20Gu | Genda Gu is a condensed matter physicist at the Brookhaven National Laboratory. In his research, Gu specializes in the synthesis of large, high quality crystals for the production of superconductors. He works in the Brookhaven Laboratory's crystal growth lab, and as an adjunct professor at Stony Brook University. In 2012, Gu became a fellow of the American Physical Society.
Gu obtained his PhD from the Harbin Institute of Technology.
References
Condensed matter physicists
Fellows of the American Physical Society
Brookhaven National Laboratory staff
Year of birth missing (living people)
Living people | Genda Gu | [
"Physics",
"Materials_science"
] | 115 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
64,647,451 | https://en.wikipedia.org/wiki/2020%20California%20Proposition%2014 | California Proposition 14 is a citizen-initiated ballot measure that appeared on the ballot in the 2020 California elections, for November 3, 2020. It authorizes state bonds to be issued worth $5.5 billion, which will fund the California Institute for Regenerative Medicine (CIRM), which serves as the state's center for stem cell research, and enable it to continue its operations. This measure passed with 51% of the vote.
Background
Robert N. Klein II, motivated by the suffering of family members from autoimmune diseases, launched a citizen initiative known as Proposition 71 in 2004, which created a state-funded center for stem cell research, the California Institute for Regenerative Medicine (CIRM). Based in San Francisco, the CIRM is responsible for making grants and loans to stem cell research initiatives focused on developing treatment methods and completing research for clinical trials. Proposition 71 was approved by 59% of California voters and authorized $3 billion in bonds to fund the CIRM, in addition to creating a 29-member Governing Board as an Independent Citizens' Oversight Committee (ICOC). By 2020, $2.75 billion of the original $3 billion had been used or earmarked for funding of basic research, infrastructure, education, and clinical translational studies. For this reason, Klein spearheaded Proposition 14 to authorize an additional $5.5 billion in bonds for the CIRM to support additional grants and operations. Research areas of focus at CIRM include stem cell based research to mitigate or cure serious illnesses and chronic diseases such as cancer, heart disease, kidney disease, respiratory illnesses including COVID-19, diabetes, HIV/AIDS, paralysis, and blindness. A dedicated $1.5 billion of the Proposition 14 funding is set aside for research on diseases of the brain and central nervous system, including cancer, autism, dementia, Parkinson's disease, and Alzheimer's disease.
Changes to the CIRM program and governance proposed in Proposition 14 include an increased focus on improving patient access to stem cell treatments by expanding sites and facilities for human trials, a requirement that income earned from CIRM agreements be used to reduce the cost of stem cell treatments for patients, an increase of the ICOC from 29 members to 35 members, and the hiring of 15 full-time employees whose roles are dedicated to improving patient access to stem cell-derived therapeutics and treatments. The estimated fiscal impact of Proposition 14 comprises the initial $5.5 billion in bonds plus $2.5 billion in interest, for an annual debt payment of about $310 million over roughly 25 years. Proposition 14 appropriates money from the general fund in order to fully pay the bond debt service.
Support
In addition to Klein, this measure is supported by the Regents of the University of California. It was also endorsed by Governor Gavin Newsom and The Modesto Bee.
Proponents argue that biomedical research is crucial, particularly in light of the COVID-19 pandemic. Proponents of Proposition 14 have raised more than $13.4 million in campaign funds.
Opposition
As with Proposition 71, opposition to Proposition 14 includes many across the political spectrum including the Bakersfield Californian, California Nurses Association, California Catholic Conference, California Republican Party, Center for Genetics and Society, Friends Committee on Legislation of California, Green Party of California, Howard Jarvis Taxpayers Association, Libertarian Party of California, Los Angeles Times, Orange County Register, Peace and Freedom Party, Right to Life of Central California, San Bernardino Sun, San Francisco Chronicle, San Jose Mercury News, Scholl Institute of Bioethics, and CIRM board member Jeff Sheehy.
However, there was no significant organized opposition to Proposition 14, and the "No on Proposition 14" committee raised only $250.
References
2020 California ballot propositions
Stem cell research | 2020 California Proposition 14 | [
"Chemistry",
"Biology"
] | 795 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
77,760,475 | https://en.wikipedia.org/wiki/Aberration-Corrected%20Transmission%20Electron%20Microscopy | Aberration-Corrected Transmission Electron Microscopy (AC-TEM) is the general term for the use of electron microscopes in which electron-optical components are introduced to reduce the aberrations that would otherwise limit the resolution of images. Historically, electron microscopes had quite severe aberrations, and until about the start of the 21st century the resolution was quite limited, at best able to image the atomic structure of materials so long as the atoms were far enough apart. Theoretical methods of correcting the aberrations existed for some time, but could not be implemented in practice. Around the turn of the century the electron-optical components were coupled with computer control of the lenses and their alignment; this was the breakthrough that led to significant improvements in both resolution and the clarity of the images. As of 2024, correction of geometric aberrations is standard in many commercial electron microscopes. They are extensively used in many different areas of science.
History
Early Theoretical Work
Scherzer's theorem is a theorem in the field of electron microscopy. It states that there is a limit of resolution for electron lenses because of unavoidable aberrations.
German physicist Otto Scherzer found in 1936 that the electromagnetic lenses, which are used in electron microscopes to focus the electron beam, entail unavoidable imaging errors. These aberrations are of spherical and chromatic nature, that is, the spherical aberration coefficient Cs and the chromatic aberration coefficient Cc are always positive.
Scherzer solved the system of Laplace equations for electromagnetic potentials assuming the following conditions:
electromagnetic fields are rotationally symmetric,
electromagnetic fields are static,
there are no space charges.
He showed that under these conditions the aberrations that emerge degrade the resolution of an electron microscope up to one hundred times the wavelength of the electron. He concluded that the aberrations cannot be fixed with a combination of rotationally symmetrical lenses.
In his original paper, Scherzer summarized: "Chromatic and spherical aberration are unavoidable errors of the space charge-free electron lens. In principle, distortion (strain and twist) and (all types of) coma can be eliminated. Due to the inevitability of spherical aberration, there is a practical, but not a fundamental, limit to the resolving power of the electron microscope."
The resolution limit given by Scherzer's theorem can be overcome by breaking one of the three conditions above. Giving up rotational symmetry in electron lenses helps in correcting spherical aberrations. A correction of the chromatic aberration can be achieved with time-dependent, i.e. non-static, electromagnetic fields (for example in particle accelerators).
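The practical impact of the theorem is often summarized by the Scherzer point-resolution estimate d ≈ 0.66 Cs^(1/4) λ^(3/4) (prefactors near 0.6-0.7 appear in the literature, depending on the exact criterion). A short sketch of the arithmetic with illustrative values, assuming this form of the estimate:

```python
import math

# Scherzer point resolution d ~ 0.66 * Cs^(1/4) * lambda^(3/4), with a
# relativistically corrected electron wavelength. Values are illustrative.
H = 6.626e-34    # Planck constant, J*s
M0 = 9.109e-31   # electron rest mass, kg
E = 1.602e-19    # elementary charge, C
C = 2.998e8      # speed of light, m/s

def electron_wavelength(volts):
    """De Broglie wavelength of beam electrons, with relativistic correction."""
    return H / math.sqrt(2 * M0 * E * volts * (1 + E * volts / (2 * M0 * C**2)))

def scherzer_resolution(cs_meters, volts):
    return 0.66 * cs_meters**0.25 * electron_wavelength(volts) ** 0.75

lam = electron_wavelength(300e3)        # ~1.97 pm at 300 kV
d = scherzer_resolution(1e-3, 300e3)    # Cs = 1 mm, a typical uncorrected value
print(f"lambda = {lam * 1e12:.2f} pm, d = {d * 1e9:.2f} nm")  # ~0.20 nm
```

This reproduces the familiar roughly 0.2 nm point resolution of uncorrected 300 kV instruments; aberration correctors improve on this by effectively reducing Cs.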
Prototypes
The benefit of the scanning transmission electron microscope (STEM) and its potential for high-resolution imaging had been investigated by Albert Crewe. He investigated the need for a brighter electron source in the microscope, positing that cold field emission guns would be feasible. Through this and other iterations, Crewe was able to improve the resolution of the STEM from 30 Ångströms (Å) down to 2.5 Å. Crewe's work made it possible to visualize individual atoms for the first time.
Crewe filed patents for electron aberration correctors, but could never get functioning prototypes.
In the early efforts to correct aberrations, low voltage electrostatic correctors were explored. These correctors used electrostatic lenses to manipulate the electron beam. The advantage of low voltage systems was their reduced chromatic aberration, as the energy spread of the electrons was lower at reduced voltages. Researchers found that by carefully designing these electrostatic elements, they could correct some of the spherical and chromatic aberrations that plagued early electron microscopes. These early correctors were crucial in understanding the behavior of electron optics and provided a stepping stone toward more sophisticated correction techniques.
Phase plate and similar ideas
The design parameters and functional requirements for phase plates were thoroughly examined in the context of their application as spherical aberration correctors. In particular, emphasis was placed on developing a programmable, electrostatic phase plate, highlighting its potential for precise control and adaptability in correcting aberrations.
First demonstrations
The first demonstration of aberration correction in TEM mode was achieved by Harald Rose and Maximilian Haider in 1998 using a hexapole corrector, and in STEM mode by Ondrej Krivanek and Niklas Dellby in 1999 using a quadrupole/octupole corrector. As the electron-optical resolution improved, it became apparent that the mechanical stability of the microscopes also needed to improve to keep pace. Many aberration-corrected microscopes heavily employ sound and temperature insulation, usually in an enclosure surrounding the microscope.
Early Commercial Products
Nion
Ondrej Krivanek and Niklas Dellby founded Nion in the late 1990s, initially as a collaboration with IBM. Their first products were spherical aberration correctors for existing STEMs. Later on, they designed an aberration-corrected microscope from scratch, the UltraSTEM 1.
CEOS
The approach to aberration correction used by Rose and Haider formed the basis of the company CEOS. They produced modular correctors which could be incorporated into microscopes produced by other vendors, which led to commercial products from FEI, JEOL, Hitachi, and Zeiss.
TEAM Project
The Transmission Electron Aberration-Corrected Microscope (TEAM) project was a collaborative effort between Lawrence Berkeley National Laboratory (LBNL), Argonne National Laboratory (ANL), Brookhaven National Laboratory, Oak Ridge National Laboratory, and the University of Illinois, Urbana-Champaign, with the technical goal of reaching a spatial resolution of 0.05 nanometers and smooth sample translation and tilt, while allowing for a variety of in-situ experiments.
The TEAM project resulted in several microscopes. The first was ACAT at Argonne National Laboratory in Illinois, which had the first chromatic aberration corrector, followed by TEAM 0.5 and TEAM I at the Molecular Foundry in California; the project concluded in 2009. Both TEAM microscopes are S/TEMs (they can be used in both TEM mode and STEM mode) that correct for both spherical aberration and chromatic aberration. The TEAM microscopes are managed by the National Center for Electron Microscopy, a facility of the Molecular Foundry at LBNL, and ACAT by the Center for Nanoscale Materials at ANL.
Other
Several other aberration correctors have been designed and used in electron microscopes, such as one by Takayanagi. Similar correctors have also been used at much lower energies, for example in LEEM instruments.
Present State
In their modern state, resolutions of about 0.1 nm are fairly routine in microscopes around the world. This is true for both standard higher-voltage electron microscopes as well as a few ones specially designed to operate at lower electron energies. An important offshoot of the improved optical resolution is a companion improvement in the mechanical stability. Exploiting these improvements, significantly better identification of chemical contents of materials has become possible, as well as their atomic structure. This has had a major impact on our understanding across multiple fields of study.
Applications
There is a significant difference in the usage of AC-TEM across various fields. Although aberration correction exists for STEMs, the number of electrons needed to form useful images is far greater than biological samples can withstand before being destroyed by radiation damage. Life science studies therefore still rely heavily on conventional TEMs, which form a full image with their electron beam (similar to a conventional light microscope).
Physical Sciences
AC-TEM has been used extensively in the physical sciences, in part because samples there are far more resistant to radiation damage. Its use has ranged across chemistry, materials science and physics.
Life Sciences
Aberration correction has yet to be used significantly in the life sciences, due to the generally low atomic-weight contrast in biological systems and the increased radiation damage. However, side benefits such as improved mechanical stability and detectors have significantly improved data collection quality.
References
Crystallography
Electron microscopy | Aberration-Corrected Transmission Electron Microscopy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,652 | [
"Electron",
"Electron microscopy",
"Materials science",
"Crystallography",
"Condensed matter physics",
"Microscopy"
] |
62,401,003 | https://en.wikipedia.org/wiki/Cell%20cycle%20regulated%20Methyltransferase | CcrM (or M.CcrMI) is an orphan DNA methyltransferase that is involved in controlling gene expression in most Alphaproteobacteria. This enzyme modifies DNA by catalyzing the transfer of a methyl group from the S-adenosyl-L-methionine (SAM) substrate to the N6 position of an adenine base in the sequence 5'-GANTC-3' with high specificity. In some lineages, such as SAR11, the homologous enzymes possess 5'-GAWTC-3' specificity. In Caulobacter crescentus, CcrM is produced at the end of the replication cycle, when CcrM recognition sites are hemimethylated, and rapidly methylates the DNA. CcrM is essential in other Alphaproteobacteria but its role there is not yet determined. CcrM is a highly specific methyltransferase with a novel DNA recognition mechanism.
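For illustration only (the toy sequence and helper below are invented, not taken from the literature), the two recognition motifs translate directly into IUPAC-style pattern matching, with N standing for any base and W for A or T:

```python
import re

# Count CcrM target sites on the forward strand of a DNA sequence.
GANTC = re.compile(r"GA[ACGT]TC")  # N = any base
GAWTC = re.compile(r"GA[AT]TC")    # W = A or T

def count_sites(seq, pattern):
    """Non-overlapping matches on the forward strand (toy helper)."""
    return sum(1 for _ in pattern.finditer(seq.upper()))

seq = "ccGAATCggGACTCttGATTCaaGAGTC"  # invented toy sequence
print(count_sites(seq, GANTC))  # 4: GAATC, GACTC, GATTC, GAGTC
print(count_sites(seq, GAWTC))  # 2: only GAATC and GATTC
```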
CcrM role in cell cycle regulation
Methylation is an epigenetic modification that, in eukaryotes, regulates processes such as cell differentiation and embryogenesis, while in prokaryotes it can have a role in self-recognition, protecting the DNA from being cleaved by the restriction endonuclease system, or in gene regulation. The first function is controlled by the restriction-methylation system, while the second is carried out by orphan MTases such as Dam and CcrM.
CcrM's role has been characterized in the model organism Caulobacter crescentus, which is suitable for the study of the cell cycle and epigenetics as it divides asymmetrically, generating different progeny, a stalked and a swarmer cell, with different phenotypes and gene regulation. The swarmer cell has a single flagellum and polar pili and is characterized by its mobility, while the stalked cell has a stalk and is fixed to the substrate. The stalked cell enters the S-phase immediately, while the swarmer cell stays in the G1-phase and differentiates into a stalked cell before entering the S-phase again.
The stalked cell in S phase replicates its DNA in a semiconservative manner, producing two hemimethylated DNA double strands that are rapidly methylated by the methyltransferase CcrM, which is only produced at the end of the S phase. The enzyme methylates more than 4,000 5'-GANTC-3' sites in around 20 minutes and is then degraded by the Lon protease. This fast methylation plays an important role in the transcriptional control of several genes and controls cell differentiation. CcrM expression is regulated by the CtrA master regulator, and in addition several 5'-GANTC-3' methylation sites regulate CcrM expression, which only occurs at the end of the S phase when these sites are hemimethylated. In this process CtrA regulates the expression of CcrM and more than 1,000 genes in the pre-divisional state, and SciP prevents the activation of CcrM transcription in non-replicative cells.
CcrM role in Alphaproteobacteria
Orphan MTases are common in bacteria and archaea. CcrM is found in almost every group of Alphaproteobacteria, except in Rickettsiales and Magnetococcales, and homologs can be found in Campylobacterota and Gammaproteobacteria. Alphaproteobacteria are organisms with different life stages, from free-living to substrate-associated; some of them are intracellular pathogens of plants, animals and even humans, and in those groups the CcrMs must have an important role in cell cycle progression.
CcrM misregulation has been shown to produce severe loss of cell cycle regulation and differentiation in various Alphaproteobacteria: C. crescentus, the plant symbiont Sinorhizobium meliloti and the human pathogen Brucella abortus. The ccrM gene has also proven to be essential for the viability of various Alphaproteobacteria.
Structure and DNA recognition mechanism
CcrM is a type II DNA methyltransferase that transfers a methyl group from the methyl donor SAM to the N6 of an adenine at 5'-GANTC-3' recognition sites of hemimethylated DNA. Based on the order of the conserved motifs that form the SAM-binding site, the active site and the target recognition domain in its sequence, it can be classified as a β-class adenine N6 methyltransferase. CcrM homologs in Alphaproteobacteria have an 80-residue C-terminal domain whose function is not well characterized.
CcrM is characterized by a high degree of sequence discrimination, showing a very high specificity for GANTC sites over AANTC sites, and is able to recognize and methylate this sequence in both double- and single-stranded DNA. The structure of CcrM in complex with dsDNA has been resolved, showing that the enzyme has a novel DNA interaction mechanism: it opens a bubble at the DNA recognition site (the concerted mechanism of most methyltransferases relies instead on flipping the target base out of the duplex) and interacts with the DNA as a homodimer with differential monomer interactions.
CcrM is a highly efficient enzyme, capable of methylating a large number of 5'-GANTC-3' sites in a short time. However, whether the enzyme is processive (binding to the DNA and methylating several sites before dissociating) or distributive (dissociating from the DNA after each methylation) is still under discussion. Early reports indicated the latter, but more recent characterization of CcrM indicates that it is a processive enzyme.
References
DNA
Gene expression
Alphaproteobacteria | Cell cycle regulated Methyltransferase | [
"Chemistry",
"Biology"
] | 1,184 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
62,403,302 | https://en.wikipedia.org/wiki/Functional%20ultrasound%20imaging | Functional ultrasound imaging (fUS) is a medical ultrasound imaging technique for detecting or measuring changes in neural activities or metabolism, such as brain activity loci, typically through measuring hemodynamic (blood flow) changes. It is an extension of Doppler ultrasonography.
Background
Brain activation can be directly measured by imaging electrical activity of neurons using voltage-sensitive dyes, calcium imaging, electroencephalography, or magnetoencephalography. It can also be indirectly measured hemodynamically, that is, by detecting changes in blood flow in the neurovascular systems through functional magnetic resonance imaging (fMRI), positron emission tomography (PET), Functional near-infrared spectroscopy (fNIRS), or Doppler ultrasonography, etc.
Optics-based methods generally provide the highest spatial and temporal resolutions; however, due to scattering, they are limited to measuring regions close to the surface. Thus, they are often used on animal models after partially removing or thinning the skull to allow light to penetrate into brain tissue.
fMRI, which measures the blood-oxygen-level-dependent (BOLD) signal, and PET were long the only techniques capable of imaging brain activation in depth. The BOLD signal increases when neuronal activation exceeds oxygen consumption, as blood flow then increases significantly, resulting in cerebral blood volume (CBV) changes. This relationship between neuronal activity and blood flow is called neurovascular coupling. In-depth, noninvasive imaging of cerebral hemodynamic responses by fMRI paved the way for major discoveries in neuroscience, and the technique is applicable to humans.
However, fMRI also has limitations. First, the cost and size of MRI machines can be prohibitive. Second, achieving high spatial resolution with fMRI necessarily decreases its temporal resolution and/or signal-to-noise ratio. As a result, it is hard to image the fine spatial details of transient events such as epileptic seizures. Finally, fMRI is not appropriate for all clinical applications. For example, fMRI is rarely performed on infants, because infants do not stay still inside MRI machines.
Like fMRI, Doppler-based functional ultrasound relies on neurovascular coupling and is thus limited by the spatiotemporal features of that coupling, specifically cerebral blood volume (CBV) changes. CBV is a pertinent parameter for functional imaging that is already used by other modalities such as intrinsic optical imaging or CBV-weighted fMRI. The spatiotemporal extent of the CBV response has been extensively studied. The spatial resolution of the sensory-evoked CBV response can go down to the cortical column (~100 μm). Temporally, the CBV impulse response function has been measured to typically start at ~0.3 s and peak at ~1 s in response to ultrashort stimuli (300 μs), which is much slower than the underlying electrical activity.
Conventional Doppler based approaches
Hemodynamic changes in the brain are often used as a surrogate indicator of neuronal activity to map the loci of brain activity. The major part of the hemodynamic response occurs in small vessels; however, conventional Doppler ultrasound is not sensitive enough to detect blood flow in such small vessels.
Functional transcranial Doppler (fTCD)
Ultrasound Doppler imaging can be used to obtain basic functional measurements of brain activity using blood flow. In functional transcranial Doppler sonography, a low frequency (1-3 MHz) transducer is used through the temporal bone window with a conventional pulse Doppler mode to estimate blood flow at a single focal location. The temporal profile of blood velocity is usually acquired in main large arteries such as the middle cerebral artery (MCA). The peak velocity is compared between rest and task conditions or between right and left sides when studying lateralization.
The temporal window is the thinnest lateral area of the skull, and it is mostly hairless. It is often used for fTCD.
Power Doppler
Power Doppler is a Doppler sequence that measures the ultrasonic energy backscattered from red blood cells in each pixel of the image. It provides no information on blood velocity but is proportional to blood volume within the pixel. However, conventional power Doppler imaging lacks sensitivity to detect small arterioles/venules and thus is unable to provide local neurofunctional information through neurovascular coupling.
Ultrasensitive Doppler imaging
Functional ultrasound imaging was pioneered at ESPCI by Mickaël Tanter's team, following work on ultrafast imaging and ultrafast Doppler.
Principles
Ultrasensitive Doppler relies on ultrafast imaging scanners able to acquire images at thousands of frames per second (~1 kHz), thus boosting the SNR of power Doppler without any contrast agents. Instead of the line-by-line acquisition of conventional ultrasound devices, ultrafast ultrasound takes advantage of successive tilted plane-wave transmissions that are afterward coherently compounded to form images at high frame rates. Coherent compound beamforming consists of the recombination of backscattered echoes from different illuminations, performed on the acoustic pressure field at various angles (as opposed to the acoustic intensity for the incoherent case). All images are added coherently to obtain a final compounded image. This addition is performed without taking the envelope of the beamformed signals or applying any other nonlinear operation, to ensure a coherent summation. As a result, coherent adding of several echo waves leads to cancellation of out-of-phase waveforms, narrowing the point spread function (PSF) and thus increasing spatial resolution. A theoretical model demonstrates that the gain in sensitivity of the ultrasensitive Doppler method comes from the combination of the high signal-to-noise ratio (SNR) of the gray-scale images, due to the synthetic compounding of backscattered echoes, and the extensive signal averaging enabled by the high temporal resolution of ultrafast frame rates. Sensitivity was recently further improved using multiple plane-wave transmissions and advanced spatiotemporal clutter filters for better discrimination between low blood flow and tissue motion. Ultrasound researchers have been using ultrafast imaging research platforms with parallel acquisition of channels and custom sequence programming to investigate ultrasensitive Doppler/fUS modalities. A custom real-time high-performance GPU beamforming code with a high data transfer rate (several GB/s) must then be implemented to perform imaging at a high frame rate. Acquisitions can easily produce gigabytes of data, depending on acquisition duration.
Ultrasensitive Doppler has a typical 50-200 μm spatial resolution depending on the ultrasound frequency used. It features temporal resolution ~10 ms, can image the full depth of the brain, and can provide 3D angiography.
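A schematic of the processing chain described above is sketched below; the array shapes, the random stand-in for beamformed data and the clutter-filter cutoff are all invented for the sketch (a real scanner beamforms raw channel data and often chooses the cutoff adaptively):

```python
import numpy as np

# Coherent plane-wave compounding followed by power Doppler (schematic).
rng = np.random.default_rng(0)
n_frames, n_angles, nz, nx = 100, 9, 32, 32

# Complex (IQ) beamformed images, one per tilted plane-wave transmission.
iq = rng.standard_normal((n_frames, n_angles, nz, nx)) \
   + 1j * rng.standard_normal((n_frames, n_angles, nz, nx))

# Coherent compounding: sum echoes across angles BEFORE envelope detection,
# so out-of-phase waveforms cancel and the point spread function narrows.
compounded = iq.sum(axis=1)                   # (n_frames, nz, nx)

# Spatiotemporal (SVD) clutter filter: the highest-energy singular components
# capture slowly varying tissue motion; removing them keeps blood echoes.
casorati = compounded.reshape(n_frames, -1)   # space-time Casorati matrix
U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
s[:10] = 0                                    # illustrative tissue cutoff
blood = (U * s) @ Vh

# Power Doppler: time-averaged backscattered energy per pixel,
# proportional to blood volume within the pixel.
power_doppler = (np.abs(blood) ** 2).mean(axis=0).reshape(nz, nx)
print(power_doppler.shape)                    # (32, 32)
```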
Functional ultrasound imaging
This signal boost enables the sensitivity required to map, in small arterioles, the subtle blood variations (down to 1 mm/s) related to neuronal activity. By applying an external stimulus, such as a sensory, auditory or visual stimulation, it is then possible to construct a map of brain activation from the ultrasensitive Doppler movie.
Functional ultrasound (fUS) indirectly measures cerebral blood volume, which provides an effect size close to 20%; fUS is thus considerably more sensitive than fMRI, whose BOLD response is typically only a few percent. Correlation maps or statistical parametric maps can be constructed to highlight the activated areas. fUS has been shown to have a spatial resolution on the order of 100 μm at 15 MHz in ferrets and is sensitive enough to perform single-trial detection in awake primates. Other fMRI-like modalities such as functional connectivity can also be implemented.
Commercial scanners with specialized hardware and software are enabling fUS to expand rapidly beyond ultrasound research labs into the neuroscience community.
4D functional ultrasound imaging
4D functional ultrasound imaging (4D fUS) means fUS imaging of a 3D region of the brain over time.
Some researchers conducted 4D fUS of whole-brain activity in rodents. Currently, two different technological solutions are proposed for the acquisition of 3D and 4D fUS data, each with its own advantages and drawbacks. The first is a tomographic approach based on motorized translation of linear probes. This approach proved to be a successful method for several applications such as 3D retinotopic mapping in the rodent brain and 3D tonotopic mapping of the auditory system in ferrets. The second approach relies on high frequency 2D matrix array transducer technology coupled with a high channel count electronic system for fast 3D imaging. To counterbalance the intrinsically poor sensitivity of matrix elements, they devised a 3D multiplane-wave scheme with 3D spatiotemporal encoding of transmit signals using Hadamard coefficients. For each transmission, the backscattered signals containing mixed echoes from the different plane waves are decoded using the summation of echoes from successive receptions with appropriate Hadamard coefficients. This summation enables the synthetic building of echoes from a virtual individual plane wave transmission with a higher amplitude. Finally, they perform coherent compounding beamforming of decoded echoes to produce 3D ultrasonic images and apply a spatiotemporal clutter filter separating blood flow from tissue motion to compute a power Doppler volume, which is proportional to the cerebral blood volume.
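The Hadamard encoding/decoding step can be illustrated with a toy example (the dimensions and signals are invented; a real matrix probe additionally involves 3D delays and beamforming):

```python
import numpy as np
from scipy.linalg import hadamard

N = 4                    # plane waves per encoded group (power of 2)
H = hadamard(N)          # entries +/-1, satisfying H @ H.T = N * I

# Echoes that individual plane-wave transmissions would produce (unknown in
# practice; this is what the decoding should recover, with amplitude gain).
rng = np.random.default_rng(1)
single_wave_echoes = rng.standard_normal((N, 1024))

# Each encoded transmission fires all N plane waves with +/-1 polarities,
# so each reception is a Hadamard-weighted mix of the single-wave echoes.
received = H @ single_wave_echoes

# Decoding: sum receptions with the matching Hadamard coefficients. Each
# virtual single-wave echo is synthesized from N transmissions, improving
# SNR compared with firing each plane wave once at the same frame rate.
decoded = (H.T @ received) / N
print(np.allclose(decoded, single_wave_echoes))  # True
```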
Applications
Preclinical
fUS is useful for monitoring cerebral function across the whole brain, which is important for understanding how the brain works on a large scale under normal or pathological conditions. The ability of fUS to image cerebral blood volume at high spatiotemporal resolution and with high sensitivity could be of great interest for applications in which fMRI reaches its limits, such as imaging of epilepsy-induced changes in blood volume. fUS can be applied in chronic studies in animal models through a thinned skull or a small cranial window, or directly through the skull in mice.
Brain activity mapping
Tonotopics or retinotopics maps can be constructed by mapping the response of frequency-varying sounds or moving visual targets.
Functional connectivity / resting state
When no stimulus is applied, fUS can be used to study functional connectivity in the resting state. The method has been demonstrated in rats and awake mice and can be used for pharmacological studies of candidate drugs. Seed-based maps, independent component analysis of resting-state modes, or functional connectivity matrices between atlas-based regions of interest can be constructed at high resolution.
Awake fUS imaging
Using dedicated ultralight probes, it is possible to perform freely-moving experiments in rats or mice. The small size of the probes and the electromagnetic compatibility of fUS mean that it can also be used easily in head-fixed setups for mice or in electrophysiology chambers in primates.
Clinical
Neonates
Thanks to its portability, fUS has also been used clinically in awake neonates. Functional ultrasound imaging can be applied to neonatal brain imaging in a non-invasive manner through the fontanel window. Ultrasound is already routinely performed on neonates, which means that current procedures do not have to be changed. High-quality angiographic images could help diagnose vascular diseases such as perinatal ischemia or ventricular hemorrhage.
Adults / intraoperative
For adults, the method can be used during neurosurgery to guide the surgeon through the vasculature and to monitor the patient's brain function prior to tumor resection.
See also
Functional magnetic resonance imaging (fMRI)
Functional neuroimaging
References
Ultrasound
Magnetic resonance imaging | Functional ultrasound imaging | [
"Chemistry"
] | 2,373 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
67,619,861 | https://en.wikipedia.org/wiki/TEOS-10 | TEOS-10 (Thermodynamic Equation of Seawater - 2010) is the international standard for the use and calculation of the thermodynamic properties of seawater, humid air and ice. It supersedes the former standard EOS-80 (Equation of State of Seawater 1980). TEOS-10 is used by oceanographers and climate scientists to calculate and model properties of the oceans such as heat content in an internationally comparable way.
History
TEOS-10 was developed by the SCOR (Scientific Committee on Oceanic Research) / IAPSO (International Association for the Physical Sciences of the Oceans) Working Group 127, which was chaired by Trevor McDougall. It was approved as the official description of the thermodynamic properties of seawater, humid air and ice in 2009 by the Intergovernmental Oceanographic Commission (IOC) and in 2011 by the International Union of Geodesy and Geophysics (IUGG).
Physical basis
TEOS-10 is based on thermodynamic potentials. Fluids like humid air and liquid water are therefore described in TEOS-10 by the Helmholtz energy F(m,T,V) or the specific Helmholtz energy f(T,ρ) = F(m,T,m/ρ)/m. The Helmholtz energy has a unique value across phase boundaries. For the calculation of the thermodynamic properties of seawater and ice, TEOS-10 uses the specific Gibbs potential g(T,P) = G/m, with G = F + pV, because pressure is a more easily measurable property than density in a geophysical context. Gibbs energies are multivalued around phase boundaries and need to be defined for each phase separately.
The thermodynamic potential functions are determined by a set of adjustable parameters which are tuned to fit experimental data and theoretical laws of physics like the ideal gas equation. Since absolute energy and entropy cannot be directly measured, arbitrary reference states for liquid water, seawater and dry air in TEOS-10 are defined in a way that
internal energy and entropy of liquid water at the solid-liquid-gas triple point are zero,
entropy and enthalpy of seawater are zero at SA (Absolute Salinity) = 35.16504 g/kg, T (Temperature) = 273.15 K, p (pressure) = 101325 Pa,
entropy and enthalpy of dry air are zero at T (Temperature) = 273.15 K, p (pressure) = 101325 Pa.
Included thermodynamic properties
TEOS-10 covers all thermodynamic properties of liquid water, seawater, ice, water vapour and humid air within their particular ranges of validity as well as their mutual equilibrium composites such as sea ice or cloudy (wet and icy) air.
Additionally, TEOS-10 covers derived properties, for example the potential temperature and Conservative Temperature, the buoyancy frequency, the planetary vorticity and the Montgomery and Cunningham geostrophic streamfunctions. A complete list of featured properties can be found in the TEOS-10 Manual.
The handling of salinity was one of the novelties in TEOS-10. It defines the relationship between Reference Salinity and Practical Salinity, Chlorinity or Absolute Salinity, and accounts for regionally different chemical compositions by adding a regionally variable correction δSA. TEOS-10 is valid for Vienna Standard Mean Ocean Water, which fixes the hydrogen- and oxygen-isotope composition of the water; this composition affects the triple point and therefore the phase transitions of water.
Software packages
TEOS-10 includes the Gibbs Seawater (GSW) Oceanographic Toolbox which is available as open source software in MATLAB, Fortran, Python, C, C++, R, Julia and PHP. While TEOS-10 is generally expressed in basic SI-units, the GSW package uses input and output data in commonly used oceanographic units (such as g/kg for Absolute Salinity SA and dbar for pressure p).
In addition to the GSW Oceanographic Toolbox, the Seawater-Ice-Air (SIA) Library is available for Fortran and VBA (for the use in Excel), and covers the thermodynamic properties of seawater, ice and (moist) air. In contrast to the GSW Toolbox, the SIA-Library exclusively uses basic SI-units.
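A minimal usage sketch of the Python GSW implementation follows; the input values are arbitrary, and while the function names below follow the published toolbox, the installed version's documentation should be checked:

```python
import gsw  # GSW-Python, the community implementation of TEOS-10

SP, t, p = 35.0, 10.0, 1000.0   # Practical Salinity, in-situ temperature (deg C),
                                # sea pressure (dbar)
lon, lat = -30.0, 45.0          # location, needed for the regional delta-SA term

SA = gsw.SA_from_SP(SP, p, lon, lat)  # Absolute Salinity, g/kg
CT = gsw.CT_from_t(SA, t, p)          # Conservative Temperature, deg C
rho = gsw.rho(SA, CT, p)              # in-situ density, kg/m^3

# Absolute pressure from sea pressure (see the EOS-80 comparison below):
P_abs = 101325 + 10000 * p            # Pa
print(SA, CT, rho, P_abs)
```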
Differences between TEOS-10 and EOS-80
EOS-80 (Equation of State of Seawater -1980) uses Practical Salinity measured on the PSS-78 (Practical Salinity Scale of 1978) scale that itself is based on measurements of temperature, pressure and electrical conductivity. Thus, EOS-80 did not account for different chemical compositions of seawater.
EOS-80 consisted of separate equations for density, sound speed, freezing temperature and heat capacity but did not provide expressions for entropy or chemical potentials. Therefore, it was not a complete and consistent description of the thermodynamic properties of seawater. Inconsistencies in EOS-80 appear for example in the heat content at high pressure, depending on which equation is used for the calculation. Furthermore, EOS-80 was not consistent with meteorological equations while TEOS-10 is valid for humid air as well as for seawater.
EOS-80 provided expressions for potential temperature, which removes the effect of pressure on temperature but not for Conservative Temperature, which is a direct measure for potential enthalpy and therefore heat content.
TEOS-10 uses the current standard for temperature scales, ITS-90 (International Temperature Scale of 1990), while EOS-80 used IPTS-68 (International Practical Temperature Scale of 1968). The SIA library of TEOS-10 includes implementations for converting outdated scales into the current ones.
TEOS-10 was derived using absolute pressure P, while EOS-80 used the pressure relative to the sea surface, p_sea. They can be converted by P/Pa = 101325 + 10000 · (p_sea/dbar) (see atmospheric pressure).
External links
TEOS-10 Website
The Gibbs-Seawater (GSW) Oceanographic Toolbox functions
TEOS-10 Primer
TEOS-10 Manual
References
International standards
Thermodynamics | TEOS-10 | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,326 | [
"Thermodynamics",
"Dynamical systems"
] |
67,620,089 | https://en.wikipedia.org/wiki/PG5%20%28molecule%29 | PG5 is the largest stable synthetic molecule ever made. PG5 was designed by an organic chemistry research group at the Swiss Federal Institute of Technology (ETH) in Zürich.
Properties
PG5 has a molecular mass of about 200 MDa or 200,000,000 g/mol. It has roughly 20 million atoms and a diameter of roughly 10 nm. Its length is up to a few micrometers. It is similar in size to a tobacco mosaic virus with comparable length and diameter. PG5 was shown to be resistant against attempts to flatten its structure.
References
Polymers
Dihydroxybenzoic acids
Amides
Dendrimers | PG5 (molecule) | [
"Chemistry",
"Materials_science"
] | 128 | [
"Functional groups",
"Dendrimers",
"Polymer chemistry",
"Polymers",
"Amides"
] |
73,418,856 | https://en.wikipedia.org/wiki/Osmium%28II%29%20chloride | Osmium(II) chloride or osmium dichloride is an inorganic compound composed of osmium and chlorine with the chemical formula OsCl2.
Synthesis
Osmium(II) chloride can be prepared by disproportionation of osmium(III) chloride at 500 °C in vacuum:
2 OsCl3 → OsCl2 + OsCl4
Physical properties
Osmium(II) chloride is a hygroscopic dark brown solid that is insoluble in water.
It is soluble in ethanol and ether.
Chemical properties
Osmium(II) chloride does not react with hydrochloric acid and sulfuric acid.
It reacts with CO at 220 °C.
Uses
Osmium(II) chloride can be used for the catalytic production of trialkylamines.
References
Osmium compounds
Chlorides
Platinum group halides | Osmium(II) chloride | [
"Chemistry"
] | 158 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
73,420,346 | https://en.wikipedia.org/wiki/Concrete%20frame | A concrete frame, also known as a concrete skeleton, is a structure composed of interconnected beams, columns, and slabs that is used to support larger structures. Due to their low production cost, concrete frames are often used when building dams, bridges, and buildings. Reinforced concrete frame structures are common because the higher tensile strength of steel rebar makes up for concrete's weakness in tension.
Concept
Reinforced concrete frames consist of beams and columns connected by rigid joints. With the beams and columns cast in a single operation so that they act in unison, reinforced concrete frames resist gravity and lateral loads through bending of the beams and columns. Common subtypes of this frame include: nonductile reinforced concrete frames with or without infill walls, ductile reinforced concrete frames with or without infill walls, and nonductile reinforced concrete frames with reinforced infill walls.
Masonry infills
The most prevalent type is the RC frame with concrete infill walls, commonly referred to as a dual system; these are typically used in earthquake-prone areas such as Turkey, Colombia, and Greece.
Advantages
Concrete is among the most fire-resistant building materials, and concrete frames can be stronger, safer, less costly, and more energy efficient than steel buildings.
See also
Steel frame
Reinforced concrete
Framing (construction)
References
Concrete | Concrete frame | [
"Engineering"
] | 417 | [
"Structural engineering",
"Concrete"
] |
73,422,141 | https://en.wikipedia.org/wiki/C7H16O2 | {{DISPLAYTITLE:C7H16O2}}
The molecular formula C7H16O2 (molar mass: 132.203 g/mol) may refer to:
Prenderol
2-Methyl-2-propyl-1,3-propanediol | C7H16O2 | [
"Chemistry"
] | 62 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
73,422,453 | https://en.wikipedia.org/wiki/Topological%20Yang%E2%80%93Mills%20theory | In gauge theory, topological Yang–Mills theory, also known as the theta term or θ-term, is a gauge-invariant term which can be added to the action of four-dimensional field theories, first introduced by Edward Witten. It does not change the classical equations of motion, and its effects are only seen at the quantum level, having important consequences for CPT symmetry.
Action
Spacetime and field content
The most common setting is on four-dimensional, flat spacetime (Minkowski space).
As a gauge theory, the theory has a gauge symmetry under the action of a gauge group, a Lie group G, with associated Lie algebra 𝔤 through the usual correspondence.
The field content is the gauge field A, also known in geometry as the connection. It is a 1-form valued in the Lie algebra 𝔤.
Action
In this setting the theta term action is, in one common normalization,
S_θ = (θ / 16π²) ∫ d⁴x ⟨F_μν, F̃^μν⟩
where
F_μν is the field strength tensor, also known in geometry as the curvature tensor. It is defined as F_μν = ∂_μ A_ν − ∂_ν A_μ + [A_μ, A_ν], up to some choice of convention: the commutator sometimes appears with a scalar prefactor of i or of g, a coupling constant.
F̃^μν is the dual field strength, defined F̃^μν = (1/2) ε^μνρσ F_ρσ.
ε^μνρσ is the totally antisymmetric symbol, or alternating tensor. In a more general geometric setting it is the volume form, and the dual field strength F̃ is the Hodge dual of the field strength F.
θ is the theta-angle, a real parameter.
⟨·,·⟩ is an invariant, symmetric bilinear form on 𝔤. It is denoted tr, as it is often the trace when 𝔤 is taken in some representation. Concretely, this is often the adjoint representation, and in this setting ⟨·,·⟩ is the Killing form.
As a total derivative
The action can be written, using tr(F ∧ F) = d CS(A) in the same normalization, as
S_θ = (θ / 8π²) ∫ d CS(A)
where CS(A) = tr(A ∧ dA + (2/3) A ∧ A ∧ A) is the Chern–Simons 3-form.
Classically, this means the theta term does not contribute to the classical equations of motion.
Properties of the quantum theory
CP violation
Chiral anomaly
See also
Yang–Mills theory
References
External links
nLab
Quantum field theory | Topological Yang–Mills theory | [
"Physics"
] | 385 | [
"Quantum field theory",
"Quantum mechanics"
] |
73,426,352 | https://en.wikipedia.org/wiki/Osmium%28III%29%20chloride | Osmium(III) chloride is an inorganic chemical compound of osmium and chlorine with the chemical formula OsCl3.
Synthesis
Osmium(III) chloride can be made by a reaction of chlorine with osmium:
2 Os + 3 Cl2 → 2 OsCl3
It can also be made by heating osmium(IV) chloride:
2 OsCl4 → 2 OsCl3 + Cl2
Physical properties
Osmium(III) chloride forms black-brown crystals.
Osmium(III) chloride also forms a hydrate, which has dark green crystals.
Uses and reactions
Osmium(III) chloride hydrate is used as a precursor material for the production of dichlorodihydridoosmium complex compounds and other compounds.
It is the precursor to a variety of arene complexes.
References
Osmium compounds
Chlorides
Metal halides | Osmium(III) chloride | [
"Chemistry"
] | 152 | [
"Chlorides",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
73,426,861 | https://en.wikipedia.org/wiki/Tegileridine | Tegileridine is a drug which acts as a μ-opioid receptor agonist. It is closely related to compounds such as oliceridine, TRV734, and SHR9352, and shares a similar profile as a biased agonist selective for activation of the G-protein signalling pathway over β-arrestin 2 recruitment.
In January 2024, tegileridine was approved in China for the treatment of moderate to severe pain after abdominal surgery.
See also
PZM21
SR-17018
References
Mu-opioid receptor agonists
Pyridines
Spiro compounds
Oxygen heterocycles
Amines
Ethoxy compounds
Drugs developed by Jiangsu Hengrui | Tegileridine | [
"Chemistry"
] | 145 | [
"Pharmacology",
"Functional groups",
"Medicinal chemistry stubs",
"Amines",
"Organic compounds",
"Pharmacology stubs",
"Bases (chemistry)",
"Spiro compounds"
] |
73,430,543 | https://en.wikipedia.org/wiki/Xenon%20octafluoride | Xenon octafluoride is a chemical compound of xenon and fluorine with the chemical formula XeF8. It is still a hypothetical compound: XeF8 is reported to be unstable even under pressures reaching 200 GPa.
History
The compound was initially predicted in 1933 by Linus Pauling, among other noble gas compounds, but, unlike the other xenon fluorides, it could probably never be synthesized. This appears to be due to the steric hindrance of the fluorine atoms around the xenon atom. However, scientists continue to try to synthesize it.
Potential synthesis
The formation of xenon octafluoride from the elements has been calculated to be endothermic:
Xe + 4 F2 → XeF8
References
Xenon(VIII) compounds
Fluorides
Nonmetal halides
Hypothetical chemical compounds | Xenon octafluoride | [
"Chemistry"
] | 162 | [
"Hypotheses in chemistry",
"Salts",
"Theoretical chemistry",
"Hypothetical chemical compounds",
"Fluorides"
] |
58,058,216 | https://en.wikipedia.org/wiki/Dimension%20of%20a%20scheme | In algebraic geometry, the dimension of a scheme is a generalization of the dimension of an algebraic variety. Scheme theory emphasizes the relative point of view and, accordingly, the relative dimension of a morphism of schemes is also important.
Definition
By definition, the dimension of a scheme X is the dimension of the underlying topological space: the supremum of the lengths ℓ of chains of irreducible closed subsets:
$$\emptyset \neq Z_0 \subsetneq Z_1 \subsetneq \cdots \subsetneq Z_\ell \subseteq X.$$
In particular, if $X = \operatorname{Spec} A$ is an affine scheme, then such chains correspond to chains of prime ideals (inclusion reversed) and so the dimension of X is precisely the Krull dimension of A.
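For instance (a standard worked example): in the affine plane $X = \operatorname{Spec} k[x, y]$ over a field $k$, the chain of prime ideals
$$(0) \subsetneq (x) \subsetneq (x, y)$$
corresponds, reversing inclusions, to the chain of irreducible closed subsets
$$V(x, y) \subsetneq V(x) \subsetneq X,$$
a point inside a line inside the plane; no longer chain exists, so $\dim X = 2$, agreeing with the Krull dimension of $k[x, y]$.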
If Y is an irreducible closed subset of a scheme X, then the codimension of Y in X is the supremum of the lengths ℓ of chains of irreducible closed subsets:
$$Y = Z_0 \subsetneq Z_1 \subsetneq \cdots \subsetneq Z_\ell \subseteq X.$$
An irreducible subset of X is an irreducible component of X if and only if its codimension in X is zero. If $X = \operatorname{Spec} A$ is affine, then the codimension of Y in X is precisely the height of the prime ideal defining Y in X.
Examples
If a finite-dimensional vector space V over a field is viewed as a scheme over the field, then the dimension of the scheme V is the same as the vector-space dimension of V.
Let $X = \operatorname{Spec} k[x, y, z]/(xy, xz)$, k a field. Then it has dimension 2 (since it contains the hyperplane $H = \{x = 0\}$ as an irreducible component). If x is a closed point of X, then $\dim_x X$ is 2 if x lies in H and is 1 if it is in $X \setminus H$. Thus, $\dim_x X$ for closed points x can vary.
Let $X$ be an algebraic pre-variety; i.e., an integral scheme of finite type over a field $k$. Then the dimension of $X$ is the transcendence degree of the function field $k(X)$ of $X$ over $k$. Also, if $U$ is a nonempty open subset of $X$, then $\dim U = \dim X$.
Let R be a discrete valuation ring and $X = \mathbb{A}^1_R = \operatorname{Spec} R[t]$ the affine line over it. Let $\pi : X \to \operatorname{Spec} R$ be the projection. $\operatorname{Spec} R$ consists of 2 points: the closed point $s$, corresponding to the maximal ideal, and the open (generic) point $\eta$, corresponding to the zero ideal. Then the fibers $\pi^{-1}(s), \pi^{-1}(\eta)$ are closed and open, respectively. We note that $\pi^{-1}(\eta)$ has dimension one, while its closure $X$ has dimension two and $\pi^{-1}(\eta)$ is dense in $X$. Thus, the dimension of the closure of an open subset can be strictly bigger than that of the open set.
Continuing the same example, let $\mathfrak{m}$ be the maximal ideal of R and $\pi$ a generator. We note that $R[t]$ has height-two and height-one maximal ideals; namely, $\mathfrak{m}_1 = (\mathfrak{m}, t)$ and the kernel $\mathfrak{m}_2$ of the homomorphism $R[t] \to K,\ t \mapsto \pi^{-1}$, where $K$ is the field of fractions of $R$. The ideal $\mathfrak{m}_2$ is maximal since $R[t]/\mathfrak{m}_2 \simeq K$, the field of fractions of R. Also, $\mathfrak{m}_2$ has height one by Krull's principal ideal theorem and $\mathfrak{m}_1$ has height two since $\mathfrak{m}_1 \supsetneq \mathfrak{m} R[t] \supsetneq 0$. Consequently, the closed points of X can have codimension 1 as well as codimension 2,
while X is irreducible.
Equidimensional scheme
An equidimensional scheme (or, pure dimensional scheme) is a scheme all of whose irreducible components are of the same dimension (implicitly assuming the dimensions are all well-defined).
Examples
All irreducible schemes are equidimensional.
In affine space, the union of a line and a point not on the line is not equidimensional. In general, if two closed subschemes of some scheme, neither containing the other, have unequal dimensions, then their union is not equidimensional.
If a scheme is smooth (for instance, étale) over Spec k for some field k, then every connected component (which is then in fact an irreducible component) is equidimensional.
Relative dimension
Let $f : X \to Y$ be a morphism locally of finite type between two schemes $X$ and $Y$. The relative dimension of $f$ at a point $y \in Y$ is the dimension of the fiber $f^{-1}(y)$. If all the nonempty fibers are purely of the same dimension $n$, then one says that $f$ is of relative dimension $n$.
See also
Kleiman's theorem
Glossary of scheme theory
Equidimensional ring
Notes
References
External links
Algebraic geometry | Dimension of a scheme | [
"Mathematics"
] | 790 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
58,062,999 | https://en.wikipedia.org/wiki/Amenamevir | Amenamevir (trade name Amenalief) is an antiviral drug used for the treatment of shingles (herpes zoster).
It acts as an inhibitor of the zoster virus's helicase–primase complex. Amenamevir was approved in Japan for the treatment of shingles in 2017.
See also
Pritelivir
References
Antiviral drugs
Oxadiazoles
Sulfones | Amenamevir | [
"Chemistry",
"Biology"
] | 91 | [
"Antiviral drugs",
"Sulfones",
"Biocides",
"Functional groups"
] |
66,146,272 | https://en.wikipedia.org/wiki/Elabela | ELABELA (ELA, Apela, Toddler) is a hormonal peptide that in humans is encoded by the APELA gene. Elabela is one of two endogenous ligands for the G-protein-coupled APLNR receptor.
Ela is secreted by certain cell types including human embryonic stem cells. It is widely expressed in various developing organs such as the blastocyst, placenta, heart, kidney, and endothelium, and circulates in human plasma.
Discovery
Elabela is a micropeptide that was identified in 2013 by Professor Bruno Reversade's team.
Biosynthesis
Elabela gene encodes a pre-proprotein of 54 amino acids, with a signal peptide in the N-terminal region. After translocation into the endoplasmic reticulum and cleavage of the signal peptide, the proprotein of 32 amino acids may generate several active fragments.
Physiological functions
The sites of APLNR receptor expression are linked to the different functions played by Elabela in the organism. Despite that, Elabela is capable of signaling independently of APLNR in human embryonic stem cells and certain cancer cell lines including OVISE.
Embryonic pluripotency
The Elabela protein is synthesized, processed and secreted by undifferentiated human embryonic stem cells but not mouse embryonic stem cells. In humans it is under the direct regulation of POU5F1 (a.k.a. OCT4) and NANOG.
Through autocrine and paracrine signalling, endogenous Elabela entrains the PI3K/AKT/mTOR pathway to maintain pluripotency and self-renewal.
Vascular
Elabela is expressed by midline tissues (such as the notochord in zebrafish and neural tube in mammals) during organogenesis.
There it serves as a chemoattractant to angioblasts expressing APLNR at their cell surface. This participates in the formation of the first and secondary vessels of the vascular system.
Cardiac
The ELABELA -APLNR signaling axis is required for formation of the coronary vessels of the heart in mice through the sinus venosus progenitors.
Pre-eclampsia
ELA is secreted into the bloodstream by the developing placenta. Pregnant mice lacking Ela exhibit pre-eclampsia-like symptoms, characterized by proteinuria and gestational hypertension.
Infusion of exogenous ELA normalizes blood pressure and prevents intrauterine growth retardation in pups born to Ela knockout mothers. ELA increases the invasiveness of trophoblast-like cells, suggesting that it may enhance placental development to prevent eclampsia.
Therapeutics
Several mimetics of ELA have been developed for therapeutic purposes. Amgen has created a camel antibody and a small molecule agonist capable of mimicking the function of ELA towards its cognate receptor APLNR.
The latter has entered phase 1 clinical trials for heart failure and acute kidney disease. Bristol Myers Squibb has also created its own small molecule agonist of APLNR.
An opinion published in the Lancet in 2019 suggested that ELABELA could be used to treat intrauterine growth restriction and maternal morbidity linked to eclampsia.
References
Genes
Peptides
Ligands
Hormones | Elabela | [
"Chemistry"
] | 705 | [
"Biomolecules by chemical classification",
"Ligands",
"Coordination chemistry",
"Molecular biology",
"Peptides"
] |
66,147,490 | https://en.wikipedia.org/wiki/Bound%20state%20in%20the%20continuum | A bound state in the continuum (BIC) is an eigenstate of some particular quantum system with the following properties:
Energy lies in the continuous spectrum of propagating modes of the surrounding space;
The state does not interact with any of the states of the continuum (it cannot emit and cannot be excited by any wave that came from infinity);
Energy is real and the Q factor is infinite if there is no absorption in the system.
BICs are observed in electronic, photonic, acoustic systems, and are a general phenomenon exhibited by systems in which wave physics applies.
Bound states in the forbidden zone, where there are no finite solutions at infinity, are widely known (atoms, quantum dots, defects in semiconductors). For solutions in a continuum that are associated with this continuum, resonant states are known, which decay (lose energy) over time. They can be excited, for example, by an incident wave with the same energy. The bound states in the continuum have real energy eigenvalues and therefore do not interact with the states of the continuous spectrum and cannot decay.
Classification of BICs by mechanism of occurrence
Source:
BICs arising when solving the inverse problem
Wigner-von Neumann's BIC (Potential engineering)
The wave function of one of the continuum states is modified to be normalizable and the corresponding potential is selected for it.
Hopping rate engineering
In the tight-binding approximation, the hopping rates are modified so that the state becomes localized.
Boundary shape engineering
Sources for BICs of different types (e.g. the Fabry–Perot type) are replaced by scatterers so as to create a BIC of the same type.
BICs arising due to parameter tuning
Fabry-Perot BICs
For resonant structures, the reflection coefficient near resonance can reach unity. Two such structures can be arranged in such a way that they radiate in antiphase and compensate each other.
Friedrich-Wintgen BICs
Two modes of the same symmetry of one and the same structure approach each other when the parameters of the structure are changed, and at some point an anti-crossing occurs. In this case, BIC is formed on one of the branches, since the modes as if compensate each other, being in antiphase and radiating into the same radiation channel.
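The Friedrich–Wintgen mechanism is often illustrated with a standard two-mode effective non-Hermitian Hamiltonian (a textbook sketch; the parameterization below is illustrative):
$$H = \begin{pmatrix} \omega_1 & \kappa \\ \kappa & \omega_2 \end{pmatrix} - i\begin{pmatrix} \gamma_1 & \sqrt{\gamma_1\gamma_2} \\ \sqrt{\gamma_1\gamma_2} & \gamma_2 \end{pmatrix},$$
where $\omega_{1,2}$ are the mode frequencies, $\gamma_{1,2}$ their decay rates into a common radiation channel, and $\kappa$ the near-field coupling. One eigenvalue becomes purely real (a BIC) when
$$\kappa\,(\gamma_1 - \gamma_2) = \sqrt{\gamma_1\gamma_2}\,(\omega_1 - \omega_2),$$
a condition that is generically reached by tuning a single parameter near the avoided crossing.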
Single-resonance parametric BICs
Occur when a single mode can be represented as a sum of contributions, each of which varies with the structure parameters. At some point, destructive interference of all contributions occurs.
Symmetry-protected BICs
Arise when the symmetry of the eigenstate differs from any of the possible symmetries of propagating modes in the continuum.
Separable BICs
Arise when the eigenvalue problem is solved by the method of separation of variables, and the wave function is represented, for example, as $\psi(x, y) = \psi_x(x)\,\psi_y(y)$, where both multipliers correspond to localized states, with the total energy lying in the continuum.
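A minimal illustration (an assumed toy model): for a separable Hamiltonian $H = H_x + H_y$ in which each one-dimensional part has bound states $E_0 < E_1 < 0$ and a continuum starting at $0$, the continuum of $H$ starts at $E_0$ (one coordinate bound, the other propagating). The product state
$$\psi(x, y) = \phi_1(x)\,\phi_1(y), \qquad E = 2E_1,$$
is square-integrable, yet lies inside the continuum whenever $2E_1 > E_0$: a separable BIC.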
Wigner-Von Neumann BICs
Bound states in the continuum were first predicted in 1929 by Eugene Wigner and John von Neumann. Two potentials were described, in which BICs appear for two different reasons.
In this work, a spherically symmetric wave function is first chosen so as to be quadratically integrable over the entire space. Then a potential is chosen such that this wave function corresponds to zero energy.
The potential is spherically symmetric, then the wave equation will be written as follows (in units where $\hbar^2/2m = 1$):
$$\nabla^2\psi + (E - V)\psi = 0;$$
the angle derivatives disappear, since we limit ourselves to considering only spherically symmetric wave functions:
$$\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{d\psi}{dr}\right) + (E - V)\psi = 0.$$
For $E = 0$ to be the eigenvalue for the spherically symmetric wave function $\psi$, the potential must be
$$V = \frac{1}{\psi}\,\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{d\psi}{dr}\right).$$
We obtain the specific values of $\psi$ and $V$ for which the BIC will be observed.
First case
Let us consider a trial spherically symmetric function $\psi(r)$. For the integral $\int |\psi|^2\,dV$ to be finite, the behavior of $\psi$ as $r \to \infty$ and as $r \to 0$ must be constrained, and regularity at $r = 0$ fixes the remaining freedom; together these conditions determine the admissible $\psi$.
Substituting the resulting $\psi$ into the formula above, the potential is obtained (discarding an irrelevant constant multiplier).
The eigenfunction and the potential curve are shown in the figure. It seems that the electron will simply roll off the potential and the energy will belong to the solid spectrum, but there is a stationary orbit with .
In the original work the following interpretation is given (the considerations belong to Leo Szilard): this behavior can be understood from an analogy with classical mechanics. The motion of a material point in the potential $V(r)$ is described by the equation
$$\ddot{r} = -\frac{dV}{dr}.$$
It is easy to see that as $r \to \infty$ the potential falls off steeply, so the point accelerates and escapes to infinity in a finite time. The stationary solution means that the point returns from infinity again, that it is as if it is reflected from there and starts oscillating. The fact that $\psi$ tends to zero at large $r$ follows from the fact that the point rolls down a large potential slide and has an enormous speed, and therefore a short dwell time at large distances. And since the whole oscillatory process (from the center to infinity and back) is periodic, it is logical that this quantum mechanical problem has a stationary solution.
Second case
Let's move on to the second example, which can no longer be interpreted from such considerations.
First of all, we take the function $\psi_0 = \frac{\sin r}{r}$ with energy $E = 0$, which solves the wave equation for the constant potential $V = -1$. These are divergent spherical waves; since the energy is greater than the potential, the classical kinetic energy remains positive. The wave function belongs to a continuous spectrum, the integral $\int |\psi_0|^2\,dV$ diverges. Let's try to change the wave function so that the quadratic integral converges and the potential varies near $-1$.
Consider the following ansatz:
$$\psi(r) = \frac{\sin r}{r}\,f(r).$$
If the function $f(r)$ is continuous, and at $r \to \infty$ decays fast enough (as $r^{-1}$ or faster), then the integral is finite. The potential would then be equal (with the corrected arithmetical error in the original article) to
$$V(r) = -1 + \frac{f''}{f} + \frac{2f'}{f}\cot r.$$
In order for the potential to remain near $-1$, and at $r \to \infty$ to tend to $-1$, we must make the functions $f''/f$ and $(f'/f)\cot r$ small and tending to zero at $r \to \infty$.
In the first case, $f'$ also should vanish where $\sin r$ vanishes, namely for $r = k\pi$; this is the case when $f$ is a function of $g(r) = 2r - \sin 2r$ (for which $g' = 4\sin^2 r$), or any other function of this expression.
Let us assume $f = \frac{1}{1 + A^2 g(r)^2}$, where $A$ is arbitrary (here $f$ tends to zero as $r^{-2}$ when $r \to \infty$). Then
$$\psi(r) = \frac{\sin r}{r}\cdot\frac{1}{1 + A^2\,(2r - \sin 2r)^2}.$$
The expression for the potential is cumbersome, but the graphs show that for $r \to \infty$ the potential tends to $-1$.
Furthermore, it turns out that for any $\epsilon > 0$ one can choose such an $A$ that the potential is between $-1 - \epsilon$ and $-1 + \epsilon$.
We can see that the potential oscillates with period $\pi$ and the wave function oscillates with period $2\pi$. It turns out that all the waves reflected from the "humps" of such a potential are in phase, and the function is localized in the center, being reflected from the potential by a mechanism similar to reflection from a Bragg mirror.
Notes
Literature
Waves
Quantum optics | Bound state in the continuum | [
"Physics"
] | 1,361 | [
"Physical phenomena",
"Quantum optics",
"Quantum mechanics",
"Waves",
"Motion (physics)"
] |
66,146,272 | https://en.wikipedia.org/wiki/FAM237A | FAM237A is a protein coding gene which encodes a protein of the same name. Within Homo sapiens, FAM237A is believed to be primarily expressed within the brain, with moderate heart and lesser testes expression. FAM237A is hypothesized to act as a specific activator of receptor GPR83.
Gene
FAM237A is alternatively known as HCG1657980 and LOC200726. Homo sapiens FAM237A's sequence resides on chromosome 2’s + strand, and extends from bases 207486904 to 207514174. Homo sapiens FAM237A sequence contains 13 exons unspliced.
Transcripts
Homo sapiens FAM237A is predicted to produce six unique transcripts, of which four are spliced.
Proteins
Homo sapiens FAM237A is associated with three unnamed protein isoforms. FAM237A's most-researched isoform is 181 amino acids long, and is predicted to contain a transmembrane domain. FAM237A's second protein isoform is predicted to be 417 amino acids long; it contains a transmembrane domain and an upstream open reading frame. The last protein isoform of FAM237A is made up of 158 amino acids and contains a transmembrane domain; this isoform is predicted to localize within the membrane. Several databases, including NCBI, only recognize FAM237A's 181 amino acid isoform. Given the relative abundance of literature surrounding it, the remainder of this page's findings only discuss FAM237A's 181 amino acid isoform.
The theoretical molecular weight of this isoform is 20.56 kDa. Its theoretical isoelectric point is 8.96. Homo sapiens FAM237A's amino acid composition is predicted to be relatively standard. It notably contains a repeated LFWD motif at amino acids 90 and 97.
FAM237A's transmembrane domain is generally predicted to reside on amino acids 14-32 within the protein. However, structure prediction tool Phyre2 predicts that the protein's transmembrane domain resides on amino acids 91–106.
Regulation
Three promoters of Homo sapiens FAM237A are predicted: GXP_8991091, GXP_7539237, and GXP_8991092. Of these, GXP_8991091 has the greatest predicted tissue expression levels.
AceView predicts that Homo sapiens FAM237A is localized to membranes. However, this is disputed, with protein localization prediction resource Hum-mPLoc predicting that Homo sapiens FAM237A is expressed within the nucleus and resource PSORT II predicting ER localization, with lesser chances of expression within the mitochondria and Golgi apparatus.
An abundance of predicted phosphorylation sites reside on Homo sapiens FAM237A's sequence. Homo sapiens FAM237A contains two predicted fatty acid addition sites at amino acids 18 and 26; these sites overlap with one of the FAM237A's predicted transmembrane sequences. Homo sapiens FAM237A is additionally predicted to contain two sites of ubiquitination at amino acids 179 and 181 on its sequence. These ubiquitination sites are predicted to perfectly overlap two acetylation sites.
Homology
Homo sapiens FAM237A has one predicted paralog: FAM237B. FAM237B has 21.6% predicted identity with FAM237A.
FAM237A has orthologs in a broad range of vertebrate organisms, including other mammals, Reptilia, Actinopterygii, and Aves. Based upon BLAST analysis, the gene is not found in invertebrates. The only reptiles in which FAM237A is found are predicted to be of the suborder Cryptodira, based upon BLAST searches.
Function
Information regarding FAM237A's function is limited; however, FAM237A is predicted to be a specific activator of GPR83, which is implicated in energy metabolism, dietary patterns, and reward signaling. GPR83 is additionally suspected to be correlated with immune system function.
References
Genes
Proteins | FAM237A | [
"Chemistry"
] | 931 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
66,153,199 | https://en.wikipedia.org/wiki/NV-5440 | NV-5440 is a drug which acts as both a non-specific inhibitor of the glucose transporters and also a selective inhibitor of mTORC1, with no significant action at the related mTORC2 subtype. Compounds of this type have potential application in the treatment of cancer, and it is also used for research into the links between calorie restriction and longevity.
References
Enzyme inhibitors
N-benzoylpiperazines | NV-5440 | [
"Chemistry"
] | 91 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
66,155,057 | https://en.wikipedia.org/wiki/Electrified%20reef | An electric reef (also electrified reef) is an artificial reef made from biorock, limestone that forms rapidly in seawater on a metal structure from dissolved minerals in the presence of a small electric current. The first reefs of this type were created by Wolf Hilbertz and Thomas J. Goreau in the 1980s. By 2011 there were examples in over 20 countries.
History
Artificial reefs have been built since the 1950s using materials including sunken ships and concrete blocks. While artificial reefs have been effective at boosting fish populations and are valuable areas for benthic organisms and other marine life (e.g. sponges) to colonise, they are less viable for coral restoration due to the slow growth of corals and their susceptibility to environmental changes.
In the 1970s, whilst studying how seashells and reefs grow, Wolf Hilbertz discovered a simple method of creating limestone from minerals dissolved in seawater, which he called biorock. Together with Thomas J. Goreau he realised that this process could be adapted to rapidly create artificial coral reefs during the 1980s. Using the names "Sea-ment" and "sea cement", the process was publicised in the 1992 futurology book titled The Millennial Project.
With others, Hilbertz and Goreau made expeditions to the Saya de Malha bank in 1997 and 2002 where they grew an artificial island around steel structures anchored to the sea floor using this process. In the Maldives, 80% of the electric reefs survived the 1998 warming which killed 95% of the natural reef corals.
Goreau continued the work after Hilbertz's death in 2007. By 2011 there were electric reef projects installed in over 20 countries. In 2012, both Goreau and Robert K. Trench published works on how the process could generate building materials as well as restore damaged ecosystems.
Construction process
The base of an electrified reef is a welded, electrically conductive frame, often made from construction-grade rebar or wire mesh, which is submerged and attached to the seafloor and to which a small electric current is applied. The frame (the cathode) and a much smaller metal plate (the anode), placed at a suitable distance from the frame, initiate the electrolytic reaction.
Calcium, magnesium, bicarbonate and other dissolved ions naturally found in seawater recombine and precipitate out of the water onto the cathode, chiefly as calcium carbonate and magnesium hydroxide. The exact composition of the minerals within the crystal formation depends on their abundance, the climatic conditions and the voltage used. The structure takes on a whitish appearance within days.
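The underlying electrochemistry can be sketched as follows (standard mineral-accretion reactions; the stoichiometry shown is an illustrative outline): reduction of water at the cathode raises the local pH,
$$2\,H_2O + 2\,e^- \to H_2 + 2\,OH^-,$$
and the resulting alkalinity drives precipitation of the dissolved ions onto the frame:
$$Ca^{2+} + HCO_3^- + OH^- \to CaCO_3 + H_2O, \qquad Mg^{2+} + 2\,OH^- \to Mg(OH)_2.$$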
This electric field, together with shade and protection offered by the metal/limestone frame soon attracts colonizing marine life, including fish, crabs, clams, octopus, lobster and sea urchins. Once the structure is in place and minerals begin to coat the surface divers transplant coral fragments from other reefs to the frame which soon bond to the newly accreted mineral substrate.
Because of the availability of evolved oxygen at the cathode and the electrochemically facilitated accretion of dissolved ions such as bicarbonate, they start to grow, some three to five times faster than normal and soon the reef takes on the appearance and utility of a natural reef ecosystem.
As shore protection
Shorelines are increasingly susceptible to beach erosion and loss due to climate change, which is resulting in rising sea levels and increasingly frequent, more powerful storms. Large structures such as breakwaters, constructed to reflect waves to prevent erosion, are problematic and can in fact contribute to further beach erosion: the force of waves at the structure is effectively doubled by the reversal of the wave direction vector, and the reflected wave carries sand from the structure's base back out to sea, causing the structure to fail over time.
Electrified reefs used for shore protection mimic the effect of a natural reef, which prevents erosion by dissipating wave energy and causing waves to break before they impact the shore. In nature, large reefs have been shown to dissipate up to 97% of incident wave energy. Shore-protection reefs are based around the same open mesh frameworks as those used for coral restoration. Skeletons of dead coral and algae from the reef are then deposited and help grow beaches. Because these reefs mimic the properties of natural reefs, they avoid some of the challenges rigid breakwaters face in storm dissipation, and their self-healing qualities help the structures survive extreme storms as long as the electricity supply remains in operation.
In Turks and Caicos trials of electrified reefs of coastal protection survived the two worst hurricanes in the history of the islands, which occurred three days apart and damaged or destroyed 80% of the buildings on the island. Sand was observed to build up around the bases of the reef structure.
In Maldives in 1997, shore protection reefs helped save several buildings, including a hotel, that had risked washing away due to severe beach erosion. The 50-meter-long shore protection reef stabilized and ultimately reversed erosion in several years, even allowing the beach to survive a tsunami in 2004.
Distribution
Electric reef projects had been installed in over 20 countries, in the Caribbean, Indian Ocean, Pacific and Southeast Asia. Projects are located in French Polynesia, Indonesia, Maldives, Mexico, Panama, Papua New Guinea, Seychelles, the Philippines, Thailand and on one of the most remote and unexplored reef areas of the world, the Saya de Malha Bank in the Indian Ocean.
Indonesia has the most reef projects, with sites near over half a dozen islands, including the world's two largest reef restoration projects: Pemuteran with the Karang Lestari and the Gili islands with the Gili Eco Trust.
Non-coral reef projects have been conducted in places such as Barataria Bay, Galveston, seagrasses in the Mediterranean, oyster reefs and salt marshes in New York City, in Port Aransas, and in St. Croix.
Effectiveness
Electrolysis of electric reefs enhances coral growth, reproduction and ability to resist environmental stress. Coral species typically found on healthy reefs gain a major advantage over the weedy organisms that often overgrow them on stressed reefs.
Biorock can enable coral growth and regrowth even in the presence of environmental stress such as rising ocean temperatures, diseases, and nutrient, sediment, and other types of pollution. Biorock represents the only known method that can sustain and grow natural coral species using only basic conducting elements, typically of a common metal such as steel.
The process accelerates growth on coral reefs by as much as fivefold and restoration of physical damage by as much as 20 times, and the rate of growth can be varied by altering the amount of current flowing into the structure.
In one study, Porites colonies with and without an electric field were compared for 6 months after which time the current to the electric reef was eliminated. Growth differences were significant only during the first 4 months with longitudinal growth being relatively high in the presence of the field. The treatment corals survived at a higher rate.
On Vabbinfaru island in the Maldives, a 12-meter, 2 ton steel cage called the Lotus was secured on the sea floor. As of 2012, coral was so abundant on the structure that the cage is difficult to discern. The 1998 El Nino killed 98% of the reef around Vabbinfaru. Abdul Azeez, who led the Vabbinfaru project, said coral growth on the structure is up to five times that of elsewhere. A smaller prototype device was in place during the 1998 warming event and more than 80% of its corals survived, compared to just 2% elsewhere. However, power is no longer supplied to the project, leaving it vulnerable to the next round of bleaching.
Drawbacks
Electric reefs require electrical power to maintain them. In Maldives, several electric reefs successfully survived a 1998 bleaching event that killed off nearly all local wild coral, however after being depowered they were killed by the bleaching event of 2016.
A study conducted in the Bahamas in 2015 showed that the electric field deterred sharks, specifically the bull shark and the Caribbean reef shark, from swimming and feeding in the area. The electric field is believed to affect sharks because of their electroreception abilities, however species with similar capabilities such as the bar jack and Bermuda chub did not appear to be affected by the electric field.
See also
Gili Eco Trust
References
Further reading
"Changes in zooxanthellae density, morphology, and mitotic index in hermatypic corals and anemones exposed to cyanide", 2003
Goreau + Hilbertz: "Marine Ecosystem Restoration: Costs and benefits for coral reefs", World Resource Review, 2005
Vaccarella, R. + Goreau: "Applicazione della elettrodeposizione nel recupero die mattes di Posidonia oceanica", 2008
Goreau + Hilbertz, "Bottom-Up Community-Based Coral Reef and Fisheries Restoration in Indonesia, Panama, and Palau", 2008
Goreau + Hilbertz, "Reef Restoration as a Fisheries Management Tool", UK 2008, on GCRA website
Strömberg + Lundälv + Goreau: "Suitability of Mineral Accretion as a Rehabilitation Method for Cold-Water Coral Reefs", 2010
"Effect of severe hurricanes on Biorock Coral Reef Restoration Projects in Grand Turk, Turks and Caicos Islands", 2010
Goreau, T. J.: "Coral Reef and Fisheries Habitat Restoration in the Coral Triangle", Indonesia 2010
External links
Wolf Hilbertz website
Global Coral Reef Alliance
Biorock.net
CCell Supplier of equipment
Marine biology | Electrified reef | [
"Biology"
] | 1,928 | [
"Marine biology"
] |
54,822,651 | https://en.wikipedia.org/wiki/Glycan%20array | Glycan arrays, like those offered by the Consortium for Functional Glycomics (CFG), National Center for Functional Glycomics (NCFG) and Z Biotech, LLC, contain carbohydrate compounds that can be screened with lectins, antibodies or cell receptors to define carbohydrate specificity and identify ligands. Glycan array screening works in much the same way as other microarray technologies used, for instance, to study gene expression (DNA microarrays) or protein interactions (protein microarrays).
Glycan arrays are composed of various oligosaccharides and/or polysaccharides immobilised on a solid support in a spatially-defined arrangement. This technology provides the means of studying glycan-protein interactions in a high-throughput environment. These natural or synthetic (see carbohydrate synthesis) glycans are then incubated with any glycan-binding protein such as lectins, cell surface receptors or possibly a whole organism such as a virus. Binding is quantified using fluorescence-based detection methods. Certain types of glycan microarrays can even be re-used for multiple samples using a method called Microwave Assisted Wet-Erase.
Applications
Glycan arrays have been used to characterize previously unknown biochemical interactions. For example, photo-generated glycan arrays have been used to characterize the immunogenic properties of a tetrasaccharide found on the surface of anthrax spores. Hence, glycan array technology can be used to study the specificity of host-pathogen interactions.
Early on, glycan arrays proved useful in determining the binding specificity of influenza A virus hemagglutinin for host receptors and in distinguishing between different strains of flu (including avian from mammalian). This was shown with CFG arrays as well as customised arrays.
Cross-platform benchmarks highlighted the effect of glycan presentation and spacing on binding.
Glycan arrays can be combined with other techniques such as surface plasmon resonance (SPR) to refine the characterisation of glycan binding. For example, this combination demonstrated the calcium-dependent heparin binding of Annexin A1, a protein involved in several biological processes including inflammation, apoptosis and membrane trafficking.
References
Microarrays
Glycobiology
Glycomics
Carbohydrates | Glycan array | [
"Chemistry",
"Materials_science",
"Biology"
] | 516 | [
"Biochemistry methods",
"Biomolecules by chemical classification",
"Carbohydrates",
"Genetics techniques",
"Microtechnology",
"Microarrays",
"Organic compounds",
"Glycomics",
"Bioinformatics",
"Molecular biology techniques",
"Carbohydrate chemistry",
"Biochemistry",
"Glycobiology"
] |
53,537,603 | https://en.wikipedia.org/wiki/Compulsion%20loop | A compulsion loop, reward loop or core loop is a habitual chain of activities that a user may feel compelled to repeat. Typically, this loop is designed to create a neurochemical reward in the user such as the release of dopamine.
Compulsion loops are deliberately used in video game design as an extrinsic motivation for players, but may also result from other activities that create such loops, intentionally or not, such as gambling addiction and Internet addiction disorder.
Basis
The understanding of the motivations of compulsion loops came out of experiments performed on laboratory animals in an operant conditioning chamber, or "Skinner box", where the animals are given both positive and negative stimuli for performing certain actions, such as providing food by pressing a lever. Besides demonstrating that animals would prefer positive rewards and thus learned to trigger the correct lever, B. F. Skinner found that the effects of random rewards and variable time between rewards also became a factor towards how quickly the animals learned the rules of the positive reinforcement system. Ongoing research has shown that dopamine, synthesized in the animal brain, is a key neurotransmitter involved in this process; disabling the ability for receptors to react to dopamine in animal studies can impact how rapidly the animals can be conditioned.
Applying these principles to gaming, a compulsion loop creates a three-part cycle: the anticipation of receiving some reward, the activity that must be completed to receive that reward, and the act of finally obtaining the reward. From a neuroscience aspect, it is believed that the anticipation phase is where dopamine is created by the human brain, while it is released upon obtaining the reward. Dopamine creates feelings of pleasure in the brain and drives motivation, and while the neurotransmitter itself is not addictive, can lead to addictive behavior as the user desires to experience the further dopamine release.
In game design
A core or compulsion loop is any repetitive gameplay cycle that is designed to keep the player engaged with the game. Players perform an action, are rewarded, another possibility opens and the cycle repeats. A compulsion loop may be distinguished further from a core loop; while many games have a core loop of activities that a player may repeat over and over again, such as combat within a role-playing game, a compulsion loop is particularly designed to guide the player into anticipation for the potential reward from specific activities. The compulsion loop can be strengthened by adding a variable ratio schedule, where each response has a chance of producing a reward. Another strategy is an avoidance schedule, where the players work to postpone a negative consequence. Without a meaningful reward, the player may eventually no longer engage with the game, causing extinction of the player population for a game. Particularly for freemium titles, where players can opt to spend real-world money for in-game boosts, extinction is undesirable, so the game is designed around a near-perpetual compulsion loop alongside frequent addition of new content.
Compulsion loops in video games can be established through several means. One common approach is to show the player a "baseline" for how powerful the player-character could become, such as starting the game in an advanced power state and shortly stripping the character of those advancements and having the player rebuild the character to that state, or to show them a powerful non-player character that their starting character could eventually build towards. Another approach is through the difficulty curve of the game, making enemies stronger as the player-characters advance deeper into the game, and requiring the player to spend time to improve the character whether through new gear, abilities, or the player's own performance to progress. In multiplayer games, players may also be simply driven by envy towards other players that have more powerful characters. Some loops can rely on the concept of withdrawal, in that the player may get to a state in the game they are content with, but by some means, the game shows the player a potential of where they could be by improving, and anticipating the player to feel like they are lacking something and will return to engage in the game.
A well-known example of a compulsion loop in video games is the Monster Hunter series by Capcom. Players take the role of a monster hunter, using a variety of weapons and armor to slay or trap the creatures. By doing so, they gain monster parts and other loot that can be used to craft new weapons and equipment that is typically stronger than their previous gear. The loop presents itself that players use their current equipment to hunt monsters with a given difficulty level that provide parts that can be used to craft improved equipment. This then lets them face more difficult monsters that provide parts for even better gear. This is aided by the random nature of the drops (a variable ratio), sometimes requiring players to repeat quests several times to get the right parts. A compulsion loop may involve two or different gameplay modes that feed each other. For example, in Cult of the Lamb, one half of the game is a roguelike hack-n-slash system which the player can use to gather resources, which are then used in the game's other half, a settlement management simulation. By advancing the settlement, the player can unlock more powerful weapons and abilities in the hack-and-slash and reach more difficult areas required to obtain rarer resources needed for further settlement advancement.
Another type of compulsion loop are offered through many games in the form of a loot box or similar term, depending on the game. Loot boxes are earned progressively by continuing to play the game; this may be as a reward for winning a match, purchasable through in-game currency that one earns in game, or through microtransactions with real-world funds. Loot boxes contain a fixed number of randomly chosen in-game items, with at least one guaranteed to be of a higher rarity than the others. For many games, these items are simply customization options for the player's avatar that has no direct impact on gameplay, but they may also include gameplay-related items, or additional in-game currency. Loot boxes work under the psychology principle of variable rate reinforcement, which causes dopamine production at higher rates due to the unpredictable nature of the reward in contrast to fixed rewards. In many games, opening a loot box is accompanied by visuals and audios to heighten the excitement and further this response. Overall, a loot box system can encourage the player to continue to play the game, and potentially spend real-world funds to gain loot boxes immediately. Controversy arose in 2018 around loot boxes with several experts, governments, and concerned citizens fearing that loot boxes could lead to gambling, particularly in youth, and some governments took step to ban loot box practices that involved real-world funds.
Compulsion loops can be used as a replacement for game content, especially in grinding and freemium game experience models. The opposite of rewarding predictable, tedious and repetitive tasks is an action-contingent reward system, where players overcome game challenges with clear signals of progress.
Psychological effects
Encouraging players to return to the game world can lead to video game addiction if care is not taken. Internet addiction disorder can also result from a compulsion loop created by users in checking email, websites, and social media to see the results of their actions. A common concern related to compulsion loops in video games is the potential for violent video games to lead to violent behavior; even though the American Psychological Association (APA) asserted in 2019 that there is no direct connection between violent video games and real-world violent behavior, some still fear that compulsion loops in these types of games can help to reinforce violent tendencies.
References
Behaviorism
Video game terminology | Compulsion loop | [
"Technology",
"Biology"
] | 1,557 | [
"Computing terminology",
"Video game terminology",
"Behavior",
"Behaviorism"
] |
53,543,792 | https://en.wikipedia.org/wiki/Switching%20Kalman%20filter | The switching Kalman filtering (SKF) method is a variant of the Kalman filter. In its generalised form, it is often attributed to Kevin P. Murphy, but related switching state-space models had already been in use earlier.
Applications
Applications of the switching Kalman filter include: Brain–computer interfaces and neural decoding, real-time decoding for continuous neural-prosthetic control, and sensorimotor learning in humans.
It also has application in econometrics, signal processing, tracking, computer vision, etc. It is an alternative to the Kalman filter when the system's state has a discrete component. The additional error when using a Kalman filter instead of a Switching Kalman filter may be quantified in terms of the switching system's parameters. For example, when an industrial plant has "multiple discrete modes of behaviour, each of which having a linear (Gaussian) dynamics".
Model
There are several variants of the SKF discussed in the literature.
Special case
In the simpler case, switching state-space models are defined based on a switching variable which evolves independently of the hidden variable. The probabilistic model of this variant of the SKF is the following:
$$P(s_{1:T}, x_{1:T}, y_{1:T}) = P(s_1)\,P(x_1)\prod_{t=2}^{T} P(s_t \mid s_{t-1})\,P(x_t \mid x_{t-1})\prod_{t=1}^{T} P(y_t \mid x_t, s_t).$$
The hidden variables include not only the continuous $x_t$, but also a discrete *switch* (or switching) variable $s_t$. The dynamics of the switch variable are defined by the term $P(s_t \mid s_{t-1})$. The probability model of the observation $y_t$ can depend on both $x_t$ and $s_t$.
The switch variable can take its values from a finite set $S = \{1, \dots, K\}$. This changes the joint distribution of $(x_t, y_t)$, which is a separate multivariate Gaussian distribution in case of each value of $s_t$.
General case
In more generalised variants, the switch variable affects the dynamics of the continuous state $x_t$, e.g. through $P(x_t \mid x_{t-1}, s_t)$.
The filtering and smoothing procedures for the general cases are discussed in the references.
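A minimal sketch of one approximate filtering step in the general setting, using a moment-matching (GPB1-style) collapse of the regime-conditioned posteriors; the function names and the $(F, Q, H, R)$ regime parameterization are illustrative assumptions, not a reference implementation:
```python
import numpy as np

def kf_step(mu, P, y, F, Q, H, R):
    """One Kalman predict/update; returns the posterior mean, covariance
    and the Gaussian likelihood of the observation y under this regime."""
    mu_p = F @ mu                        # predicted state mean
    P_p = F @ P @ F.T + Q                # predicted state covariance
    S = H @ P_p @ H.T + R                # innovation covariance
    K = P_p @ H.T @ np.linalg.inv(S)     # Kalman gain
    r = y - H @ mu_p                     # innovation
    mu_n = mu_p + K @ r
    P_n = (np.eye(len(mu)) - K @ H) @ P_p
    lik = np.exp(-0.5 * r @ np.linalg.solve(S, r)) \
          / np.sqrt((2.0 * np.pi) ** len(y) * np.linalg.det(S))
    return mu_n, P_n, lik

def skf_step(mu, P, w, y, models, Pi):
    """One switching-KF step. models[k] = (F, Q, H, R) for regime k,
    Pi[i, j] = P(s_t = j | s_t-1 = i), w = current switch posterior."""
    K = len(models)
    mus, Ps, liks = [], [], np.zeros(K)
    for k, (F, Q, H, R) in enumerate(models):
        m, C, liks[k] = kf_step(mu, P, y, F, Q, H, R)
        mus.append(m); Ps.append(C)
    w = (Pi.T @ w) * liks                # predict switch, reweight by evidence
    w = w / w.sum()
    # collapse the K-component Gaussian mixture to one moment-matched Gaussian
    mu_c = sum(w[k] * mus[k] for k in range(K))
    P_c = sum(w[k] * (Ps[k] + np.outer(mus[k] - mu_c, mus[k] - mu_c))
              for k in range(K))
    return mu_c, P_c, w
```
Exact inference is intractable because the number of Gaussian mixture components grows exponentially with time, which is why collapsing approximations of this kind (GPB, IMM) are standard.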
References
Control theory
Nonlinear filters
Linear filters
Signal estimation
Stochastic differential equations
Robot control
Markov models | Switching Kalman filter | [
"Mathematics",
"Engineering"
] | 393 | [
"Robotics engineering",
"Applied mathematics",
"Control theory",
"Robot control",
"Dynamical systems"
] |
53,545,555 | https://en.wikipedia.org/wiki/Outage%20probability | In information theory, the outage probability of a communication channel is the probability that a given information rate is not supported because of variable channel capacity. Outage probability is defined as the probability that the information rate is less than the required threshold information rate. It is the probability that an outage will occur within a specified time period.
Slow-fading channel
For example, the channel capacity for a slow-fading channel is $C = \log_2(1 + |h|^2\,\mathrm{SNR})$, where $h$ is the fading coefficient and SNR is the signal-to-noise ratio without fading. As $C$ is random, no constant rate is available. There is a chance that the information rate may fall below the required threshold level. For a slow-fading channel, the outage probability is $P(C < r) = P\big(\log_2(1 + |h|^2\,\mathrm{SNR}) < r\big)$, where $r$ is the required threshold information rate.
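Under the common additional assumption of Rayleigh fading, $|h|^2$ is exponentially distributed and the outage probability has the closed form $1 - \exp\!\big(-(2^r - 1)/\mathrm{SNR}\big)$. A minimal sketch checking this numerically (the SNR and rate values below are arbitrary illustrations):
```python
import numpy as np

rng = np.random.default_rng(0)
snr = 10.0                # linear SNR without fading (assumed value)
r = 2.0                   # required threshold rate, bits/s/Hz (assumed value)

h2 = rng.exponential(1.0, size=1_000_000)    # Rayleigh fading: |h|^2 ~ Exp(1)
c = np.log2(1.0 + h2 * snr)                  # instantaneous channel capacity
p_out_mc = np.mean(c < r)                    # Monte Carlo outage estimate

p_out_exact = 1.0 - np.exp(-(2.0**r - 1.0) / snr)
print(p_out_mc, p_out_exact)                 # both are approximately 0.259
```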
See also
Shannon–Hartley theorem
Fading channel
References
Information theory | Outage probability | [
"Mathematics",
"Technology",
"Engineering"
] | 193 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
53,545,809 | https://en.wikipedia.org/wiki/Motolimod | Motolimod (VTX-2337) is a drug which acts as a potent and selective agonist of toll-like receptor 8 (TLR8), a receptor involved in the regulation of the immune system. It is used to stimulate the immune system, and has potential application as an adjuvant therapy in cancer chemotherapy, although clinical trials have shown only modest benefits. It also worsens neuropathic pain in animal models and has been used to research the potential of targeting TLR8 in some kinds of chronic pain syndromes.
See also
Imiquimod
Vesatolimod
References
Nitrogen heterocycles
Amides
Amines | Motolimod | [
"Chemistry"
] | 135 | [
"Amines",
"Amides",
"Bases (chemistry)",
"Functional groups"
] |
53,547,112 | https://en.wikipedia.org/wiki/Samuelson%E2%80%93Berkowitz%20algorithm | In mathematics, the Samuelson–Berkowitz algorithm efficiently computes the characteristic polynomial of an $n \times n$ matrix whose entries may be elements of any unital commutative ring. Unlike the Faddeev–LeVerrier algorithm, it performs no divisions, so may be applied to a wider range of algebraic structures.
Description of the algorithm
The Samuelson–Berkowitz algorithm applied to a matrix $A_0$ produces a vector whose entries are the coefficients of the characteristic polynomial of $A_0$. It computes this coefficient vector recursively as the product of a Toeplitz matrix and the coefficient vector of a principal submatrix.
Let $A_0$ be an $n \times n$ matrix partitioned so that
$$A_0 = \begin{pmatrix} a_{1,1} & R \\ C & A_1 \end{pmatrix}.$$
The first principal submatrix of $A_0$ is the $(n-1) \times (n-1)$ matrix $A_1$. Associate with $A_0$ the $(n+1) \times n$ Toeplitz matrix $T_0$
defined by
$$T_{0_{i,j}} = 1 \quad \text{if } i - j \text{ is } 0,$$
$$T_{0_{i,j}} = -a_{1,1} \quad \text{if } i - j \text{ is } 1,$$
and in general
$$T_{0_{i,j}} = -R\,A_1^{\,i-j-2}\,C \quad \text{for } i - j \geq 2, \qquad T_{0_{i,j}} = 0 \quad \text{for } i < j.$$
That is, all super diagonals of $T_0$ consist of zeros, the main diagonal consists of ones, the first subdiagonal consists of $-a_{1,1}$ and the $k$th subdiagonal
consists of $-R\,A_1^{\,k-2}\,C$.
The algorithm is then applied recursively to $A_1$, producing the Toeplitz matrix $T_1$ times the characteristic polynomial of $A_2$, etc. Finally, the characteristic polynomial of the $1 \times 1$ matrix $A_{n-1}$ is simply $T_{n-1}$. The Samuelson–Berkowitz algorithm then states that the vector $v$ defined by
$$v = T_0\,T_1\,T_2 \cdots T_{n-1}$$
contains the coefficients of the characteristic polynomial of $A_0$.
Because each of the Toeplitz matrices $T_i$ may be computed independently, the algorithm is highly parallelizable.
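A compact, division-free sketch of this recursion in plain Python (so the entries may be ints, Fractions, or any type supporting ring operations; the helper names are illustrative):
```python
def berkowitz_charpoly(A):
    # Coefficients of det(x*I - A), highest degree first, computed
    # without any divisions (Samuelson-Berkowitz). A is a list of rows.
    n = len(A)
    p = [1]                                    # charpoly of the empty matrix
    for k in range(1, n + 1):
        i0 = n - k                             # trailing k x k principal submatrix
        a = A[i0][i0]
        R = A[i0][i0 + 1:]                     # row to the right of a
        C = [A[i][i0] for i in range(i0 + 1, n)]    # column below a
        B = [row[i0 + 1:] for row in A[i0 + 1:]]    # remaining block
        # first column of the (k+1) x k Toeplitz matrix:
        # [1, -a, -R C, -R B C, -R B^2 C, ...]
        col = [1, -a]
        v = C
        for _ in range(k - 1):
            col.append(-sum(r * x for r, x in zip(R, v)))
            v = [sum(b * x for b, x in zip(row, v)) for row in B]
        # multiply the Toeplitz matrix into the running coefficient vector
        p = [sum(col[i - j] * p[j] for j in range(min(i, k - 1) + 1))
             for i in range(k + 1)]
    return p

print(berkowitz_charpoly([[2, 1], [3, 4]]))    # [1, -6, 5], i.e. x^2 - 6x + 5
```
On the 2 × 2 example the recursion reproduces $x^2 - 6x + 5$, matching the trace and determinant of the matrix.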
References
Linear algebra
Polynomials
Numerical linear algebra | Samuelson–Berkowitz algorithm | [
"Mathematics"
] | 281 | [
"Linear algebra",
"Polynomials",
"Algebra"
] |
76,380,163 | https://en.wikipedia.org/wiki/Taylor%E2%80%93Maccoll%20flow | Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, who described the flow in 1933, guided by an earlier work of Theodore von Kármán.
Mathematical description
Consider a steady supersonic flow past a solid cone that has a semi-vertical angle $\chi$. A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., for a supersonic flow past a wedge, then the incoming stream would have deflected through the wedge angle upon crossing the shock wave, so that streamlines behind the shock wave would be parallel to the wedge sides. Such a simple turnover of streamlines is not possible for the three-dimensional case. After passing through the shock wave, the streamlines are curved and only asymptotically do they approach the generators of the cone. The curving of streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to those increments/decrements effected at the shock wave.
The direction and magnitude of the velocity immediately behind the oblique shock wave is given by the weak branch of the shock polar. This particularly suggests that for each value of the incoming Mach number $M_1$, there exists a maximum value of $\chi$ beyond which the shock polar does not provide a solution, in which case the conical shock wave will have detached from the solid surface (see Mach reflection). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although when $\chi$ is close to its maximum value, it can be subsonic. The supersonic flow behind the shock wave will become subsonic as it evolves downstream.
Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. This particularly means that the entropy jump across the shock wave is also constant throughout. In this case, the flow behind the shock wave is a potential flow. Hence we can introduce the velocity potential $\varphi$ such that $\mathbf{v} = \nabla\varphi$. Since the problem does not have any length scale and is clearly axisymmetric, the velocity field and the pressure field will turn out to be functions of the polar angle $\theta$ only (the origin of the spherical coordinates is taken to be located at the vertex). This means that we have
$$v_r = v_r(\theta), \qquad v_\theta = v_\theta(\theta), \qquad p = p(\theta).$$
The steady potential flow is governed by the equation
$$\left(c^2\delta_{ij} - v_i v_j\right)\frac{\partial v_j}{\partial x_i} = 0,$$
where the sound speed $c = c(v)$ is expressed as a function of the velocity magnitude $v$ only. Substituting the above assumed form for the velocity field into the governing equation, we obtain the general Taylor–Maccoll equation
$$c^2\left(2 v_r + v_\theta\cot\theta + \frac{dv_\theta}{d\theta}\right) = v_\theta\left(v_r v_\theta + v_\theta\frac{dv_\theta}{d\theta}\right), \qquad v_\theta = \frac{dv_r}{d\theta},$$
where the last relation expresses the irrotationality of the flow.
The equation is simplified greatly for a polytropic gas, for which
$$c^2 = (\gamma - 1)\left(h_0 - \frac{v^2}{2}\right),$$
where $\gamma$ is the specific heat ratio and $h_0$ is the stagnation enthalpy. Introducing this formula into the general Taylor–Maccoll equation and introducing a non-dimensional function $w(\theta) = v_r/v_{\max}$, where $v_{\max} = \sqrt{2h_0}$ (the speed of the potential flow when it flows out into a vacuum), we obtain, for the polytropic gas, the Taylor–Maccoll equation
$$\frac{\gamma - 1}{2}\left[1 - w^2 - \left(\frac{dw}{d\theta}\right)^2\right]\left[2w + \frac{dw}{d\theta}\cot\theta + \frac{d^2 w}{d\theta^2}\right] - \frac{dw}{d\theta}\left[w\,\frac{dw}{d\theta} + \frac{dw}{d\theta}\,\frac{d^2 w}{d\theta^2}\right] = 0.$$
The equation must satisfy the boundary condition $dw/d\theta = 0$ at $\theta = \chi$ (no penetration on the solid surface) and also must correspond to conditions behind the shock wave at $\theta = \beta$, where $\beta$ is the half-angle of the shock cone, which must be determined as part of the solution for a given incoming flow Mach number $M_1$ and $\gamma$. The Taylor–Maccoll equation has no known explicit solution and it is integrated numerically.
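The numerical integration can be sketched as follows (assuming a calorically perfect gas with $\gamma = 1.4$; the function names, tolerances, and the printed value are illustrative, and the post-shock state is taken from the standard oblique-shock relations): starting from the state just behind a shock of angle $\beta$, the equation is marched inward in $\theta$ until $v_\theta$ vanishes, which identifies the cone surface.
```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.4

def taylor_maccoll(theta, y):
    # y = [w, w'] with w = v_r / v_max and w' = v_theta / v_max
    w, wp = y
    a2 = 0.5 * (gamma - 1.0) * (1.0 - w * w - wp * wp)   # (c / v_max)^2
    d2w = (w * wp * wp - a2 * (2.0 * w + wp / np.tan(theta))) / (a2 - wp * wp)
    return [wp, d2w]

def cone_half_angle(M1, beta):
    # flow deflection just behind the shock (theta-beta-M relation)
    delta = np.arctan(2.0 / np.tan(beta) * (M1**2 * np.sin(beta)**2 - 1.0)
                      / (M1**2 * (gamma + np.cos(2.0 * beta)) + 2.0))
    Mn2 = np.sqrt((1.0 + 0.5 * (gamma - 1.0) * (M1 * np.sin(beta))**2)
                  / (gamma * (M1 * np.sin(beta))**2 - 0.5 * (gamma - 1.0)))
    M2 = Mn2 / np.sin(beta - delta)                      # Mach number behind shock
    V = (1.0 + 2.0 / ((gamma - 1.0) * M2**2)) ** -0.5    # |v| / v_max behind shock
    y0 = [V * np.cos(beta - delta), -V * np.sin(beta - delta)]
    # march from the shock toward the axis; the cone surface is where v_theta = 0
    thetas = np.linspace(beta, 0.01, 2000)
    sol = solve_ivp(taylor_maccoll, (beta, 0.01), y0, t_eval=thetas, rtol=1e-8)
    i = int(np.argmax(sol.y[1] >= 0.0))                  # first point past the cone
    return np.degrees(sol.t[i])

print(cone_half_angle(M1=2.0, beta=np.radians(40.0)))    # roughly 25 degrees
```
For a given $M_1$, the shock angle $\beta$ corresponding to a prescribed cone angle $\chi$ is found, conversely, by iterating this procedure until the recovered cone angle matches $\chi$.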
Kármán–Moore solution
When the cone angle is very small, the flow is nearly parallel everywhere, in which case an exact solution can be found, as shown by Theodore von Kármán and Norton B. Moore in 1932. The solution is more apparent in the cylindrical coordinates $(\rho, x)$ (the $\rho$ here is the radial distance from the $x$-axis, and not the density). If $U$ is the speed of the incoming flow, then we write $\mathbf{v} = U\hat{\mathbf{e}}_x + \nabla\varphi$, where $\varphi$ is a small correction and satisfies
$$(M^2 - 1)\frac{\partial^2\varphi}{\partial x^2} = \frac{\partial^2\varphi}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial\varphi}{\partial \rho},$$
where $M$ is the Mach number of the incoming flow. We expect the velocity components to depend only on the ratio $\rho/x$, i.e., to be homogeneous of degree zero in cylindrical coordinates, which means that we must have $\varphi = U x f(\eta)$, where $\eta = \rho/x$ is a self-similar coordinate. The governing equation reduces to
$$\eta\left(\beta^2\eta^2 - 1\right) f'' = f', \qquad \beta^2 \equiv M^2 - 1.$$
On the surface of the cone $\rho = x\tan\chi$, we must have $v_\rho/v_x = \tan\chi \approx \chi$ (the slip condition) and consequently $f'(\chi) \approx \chi$.
In the small-angle approximation, the weak shock cone is given by $x = \beta\rho$. The trivial solution $\varphi = 0$ describes the uniform flow upstream of the shock cone, whereas the non-trivial solution satisfying the boundary condition on the solid surface behind the shock wave is given by
$$f(\eta) = -\chi^2\left[\operatorname{arccosh}\frac{1}{\beta\eta} - \sqrt{1 - \beta^2\eta^2}\,\right].$$
We therefore have
$$\varphi = -U\chi^2\left[x\,\operatorname{arccosh}\frac{x}{\beta\rho} - \sqrt{x^2 - \beta^2\rho^2}\,\right],$$
exhibiting a logarithmic singularity as $\rho \to 0$. The velocity components are given by
$$v_x = U - U\chi^2\operatorname{arccosh}\frac{x}{\beta\rho}, \qquad v_\rho = \frac{U\chi^2}{\rho}\sqrt{x^2 - \beta^2\rho^2}.$$
The pressure on the surface of the cone is found to be $p - p_\infty = \rho_\infty U^2\chi^2\left[\ln\frac{2}{\beta\chi} - \frac{1}{2}\right]$ (in this formula, $\rho_\infty$ is the density of the incoming gas).
See also
Kármán–Moore theory
References
Fluid dynamics | Taylor–Maccoll flow | [
"Chemistry",
"Engineering"
] | 957 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
56,343,400 | https://en.wikipedia.org/wiki/Polynomial%20functor%20%28type%20theory%29 | In type theory, a polynomial functor (or container functor) is a kind of endofunctor of a category of types that is intimately related to the concept of inductive and coinductive types. Specifically, all W-types (resp. M-types) are (isomorphic to) initial algebras (resp. final coalgebras) of such functors.
Polynomial functors have been studied in the more general setting of a pretopos with Σ-types; this article deals only with the applications of this concept inside the category of types of a Martin-Löf style type theory.
Definition
Let $\mathcal{U}$ be a universe of types, let $A : \mathcal{U}$, and let $B : A \to \mathcal{U}$ be a family of types indexed by $A$. The pair $(A, B)$ is sometimes called a signature or a container. The polynomial functor associated to the container $(A, B)$ is defined as follows:
$$P(X) :\equiv \sum_{a : A}\big(B(a) \to X\big).$$
Any functor naturally isomorphic to $P$ is called a container functor. The action of $P$ on functions $f : X \to Y$ is defined by
$$P(f)(a, g) :\equiv \big(a,\; f \circ g\big).$$
Note that this assignment is only truly functorial in extensional type theories (see #Properties).
Properties
In intensional type theories, such functions are not truly functors, because the universe type is not strictly a category (the field of homotopy type theory is dedicated to exploring how the universe type behaves more like a higher category). However, it is functorial up to propositional equalities, that is, the following identity types are inhabited:
$$\mathrm{Id}\big(P(g) \circ P(f),\; P(g \circ f)\big) \qquad\text{and}\qquad \mathrm{Id}\big(P(\mathrm{id}_X),\; \mathrm{id}_{P(X)}\big)$$
for any functions $f : X \to Y$ and $g : Y \to Z$ and any type $X$, where $\mathrm{id}_X$ is the identity function on the type $X$.
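A standard concrete example, stated here for orientation: the list type arises from the container whose shapes are the natural numbers and whose positions are the finite types,
$$\mathrm{List}(X) \simeq \sum_{n : \mathbb{N}}\big(\mathrm{Fin}(n) \to X\big),$$
a list being a length $n$ together with an assignment of an element of $X$ to each of the $n$ positions.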
Inline citations
References
External links
An extensive collection of Notes on Polynomial Functors
Type theory | Polynomial functor (type theory) | [
"Mathematics"
] | 333 | [
"Type theory",
"Mathematical logic",
"Mathematical structures",
"Mathematical objects"
] |
56,343,495 | https://en.wikipedia.org/wiki/Gas%20chromatography%E2%80%93vacuum%20ultraviolet%20spectroscopy | Gas chromatography–vacuum ultraviolet spectroscopy (GC-VUV) is a universal detection technique for gas chromatography. VUV detection provides both qualitative and quantitative spectral information for most gas phase compounds.
GC-VUV spectral data is three-dimensional (time, absorbance, wavelength) and specific to chemical structure. Nearly all compounds absorb in the vacuum ultraviolet region of the electromagnetic spectrum with the exception of carrier gases hydrogen, helium, and argon. The high energy, short wavelength VUV photons probe electronic transitions in almost all chemical bonds including ground state to excited state. The result is spectral "fingerprints" that are specific to individual compound structure and can be readily identified by the VUV library.
Unique VUV spectra enable closely related compounds such as structural isomers to be clearly differentiated. VUV detectors complement mass spectrometry, which struggles with characterizing constitutional isomers and compounds with low mass quantitation ions. VUV spectra can also be used to deconvolve analyte co-elution, resulting in an accurate quantitative representation of individual analyte contribution to the original response. This capability lends itself to significantly reducing GC runtimes through flow rate-enhanced chromatographic compression.
VUV spectroscopy follows the simple linear relationship between absorbance and concentration described by the Beer-Lambert Law, resulting in more accurate retention time-based identification. VUV absorbance spectra also exhibit feature similarity within compound classes, meaning VUV detectors can rapidly perform compound class characterization in complex samples through compound spectral shape and retention index information. Advances in technology reduce the typical group analysis data processing time from 15–30 minutes to <1 minute per sample.
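For reference, the Beer-Lambert Law relates measured absorbance to concentration linearly (standard notation):
$$A(\lambda) = \varepsilon(\lambda)\,\ell\,c,$$
where $A$ is the absorbance at wavelength $\lambda$, $\varepsilon$ the molar absorptivity, $\ell$ the optical path length of the flow cell, and $c$ the analyte concentration; the additivity over analytes, $A_{\mathrm{tot}} = \sum_i \varepsilon_i\,\ell\,c_i$, is what makes the spectral deconvolution described later a linear problem.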
History
The first benchtop detector was introduced in 2014 with detection capabilities between 120 and 240 nm. This portion of the ultraviolet spectrum had historically been restricted to bright source synchrotron facilities due to significant background absorption challenges inherent to working within the wavelength range. Further detector platform development has extended the wavelength detection range out from 120 to 430 nm.
How it works
VUV detectors for gas chromatography detectors
VUV detectors are compatible with most gas chromatography (GC) manufacturers. The detectors can be connected through a heated transfer line inserted through a punch-out in the GC oven casing. A makeup flow of carrier gas is introduced at the end of the transfer line. Analytes arrive in the flow cell and are exposed to VUV light from a deuterium lamp. Specially coated reflective optics paired with a back-thinned charge-coupled device (CCD) enable the collection of high-quality VUV absorption data. Figure 1 shows a schematic of the analyte path from GC to VUV detector.
VUV spectral identification
Gas phase species absorb and display unique spectra between 120 – 240 nm where high energy σ→σ*, n→σ*, π→π*, n → π* electronic transitions can be excited and probed. VUV spectra reflect the absorbance cross section of compounds and are specific to their electronic structure and functional group arrangement. The ability of VUV detectors to produce spectra for most compounds results in universal and highly selective compound identification. VUV spectroscopy data is highly characteristic while also providing quantitative information. Many commonly used GC detectors such as the electron capture detector (ECD), flame ionization detector (FID), and thermal conductivity detector (TCD) produce quantitative but not qualitative detail. Gas chromatography–mass spectrometry (GC-MS) generates qualitative and quantitative data but has difficulty characterizing labile and low mass compounds, as well as differentiating between isomers. GC-VUV complements MS by overcoming its limitations and providing a secondary method of confirmation. It also offers a single-instrument alternative to the use of multiple detectors for qualitative and quantitative analysis.
Naphthols, xylenes, and cis- and trans- fatty acids are compounds that are prohibitively difficult to distinguish according to their electron ionization mass spectral profiles. Xylenes present the additional challenge of natural co-elution that makes separating their isoforms problematic. Figure 2 shows the distinct VUV spectra of m-, p-, and o-xylene. These compounds can be differentiated despite their only difference being the position of two methyl groups around a benzene ring. The spectral differences of these isomers enable their co-elution to be resolved through spectral deconvolution.
Fatty acid screening and profiling is an application that commonly requires the use of multiple detectors to achieve quantitative and qualitative results. FID is a quantitative detector that is suitable for routine screening when guided by retention index information. GC-MS has traditionally been used for qualitative compound profiling, but falls short where isobaric analytes are prevalent. It especially struggles with differentiating cis and trans fatty acid isomers. Electron impact ionization can also cause double bond migration and lead to ambiguous fatty acid structural data.
Determining cis and trans fatty acid distribution in oils and fats is important in assessing their potential health impacts. VUV spectra of trans-containing fatty acid methyl ester (FAME) isomers typically found in butter and vegetable oils are shown in Figure 3. These trans-containing isomers separate chromatographically from cis-containing isomers and have the tendency to co-elute with each other and, in some cases, with select C20:1 isomers. GC-VUV is not only able to differentiate the C18:3 FAME variants, but is also capable of telling cis isomers apart from trans isomers. Degrees of unsaturation such as C20:1 vs. C18:3 can additionally be distinguished. Previous work has demonstrated how distinct VUV spectra enable straightforward deconvolution and accurate quantitation of cis and trans FAME isomers.
Chromatographic compression and spectral deconvolution
Unique VUV absorbance spectra not only enable unambiguous compound identification, but also allow GC run times to be deliberately shortened. VUV detectors operate at ambient pressure and are thus not flow-rate limited. GC run times can be reduced by increasing the GC column flow and oven temperature program rates.
Flow rate-enhanced chromatographic compression utilizes VUV spectral deconvolution to resolve any co-elution that may result from shortening GC runtimes. VUV absorption is additive, meaning that overlapping peaks give a spectrum that corresponds to the sum absorbance of each compound. The individual contribution of each analyte can be determined if the VUV spectra for co-eluting compounds are stored in the VUV library. The ability to differentiate coeluting analyte spectra and use them to deconvolve the overlapping signals is demonstrated in Figure 4. The individual spectra of terpenes limonene and p-Cymene are shown in Panel A along with the summed absorbance of the selected retention time window (blue region in Panel B) and the fit with VUV library spectra. The R2 >0.999 fit result confirms their identities, and enables the deconvolution of these and other terpenes analyzed by GC-VUV as featured in Panel B.
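Because VUV absorbance is additive, deconvolution of co-eluting peaks can be posed as an ordinary linear least-squares fit of library spectra to the observed summed spectrum. The sketch below uses hypothetical Gaussian-shaped "library" spectra and synthetic noise; it illustrates the principle only and is not the proprietary fitting routine described above.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(125, 240, 200)            # common wavelength grid, nm

# Hypothetical Gaussian-shaped "library" spectra for two analytes.
spec_a = np.exp(-((wavelengths - 180.0) / 12.0) ** 2)
spec_b = np.exp(-((wavelengths - 200.0) / 15.0) ** 2)
lib = np.column_stack([spec_a, spec_b])

# Additivity: the observed spectrum of a co-eluting pair is the weighted
# sum of the individual spectra (plus detector noise).
true_weights = np.array([0.7, 0.3])
observed = lib @ true_weights + rng.normal(0.0, 1e-3, wavelengths.size)

# Ordinary least squares recovers each analyte's contribution.
weights, *_ = np.linalg.lstsq(lib, observed, rcond=None)
fit = lib @ weights
r2 = 1.0 - np.sum((observed - fit) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"recovered weights: {weights.round(3)}, R^2 = {r2:.5f}")
```

A goodness-of-fit value close to 1, as in the terpene example above, indicates that the chosen library spectra account for the summed absorbance.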
Testing for the presence of residual solvents in Active Pharmaceutical Ingredients (APIs) is critical for patient safety and commonly follows United States Pharmacopeia (USP) Method <467> guidelines, or more broadly, International Council for Harmonization (ICH) Guideline Q3C(R6). The gas chromatography (GC) runtime suggested by USP Method 467 is approximately 60 min. A generic method for residual solvent analysis by GC-MS describes conditions that include a runtime of approximately 30 minutes. A GC-VUV and static headspace method was developed using a chromatographic compression strategy that resulted in a GC runtime of 8 minutes. The GC-VUV method uses a flow rate of 4 mL/min and an oven ramp of 35 °C (held for 1 min), followed by an increase to 245 °C at a rate of 30 °C/min.
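The 8-minute runtime follows directly from the stated oven program, as the short check below shows (a 1-minute hold plus a 30 °C/min ramp from 35 °C to 245 °C).

```python
# Oven-program arithmetic for the stated GC-VUV residual solvents method.
hold = 1.0                      # initial hold at 35 C, minutes
ramp = (245.0 - 35.0) / 30.0    # (final - start) / ramp rate = 7.0 minutes
print(hold + ramp)              # -> 8.0 minutes total GC runtime
```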
Figure 5 compares the results when the general conditions of the GC-MS method were followed against the GC-VUV method run with Class 2 residual solvents. Tetralin eluted at approximately 35 minutes using the GC-MS method conditions, whereas the analyte had a retention time of less than 7 minutes when the GC-VUV method was applied. The co-elution of m- and p-xylene occurred in both GC-MS and GC-VUV method runs. VUV software matched the analyte absorbance of both isomers with VUV library spectra (Figure 2) to deconvolve the overlapping signals as displayed in Figure 6. Goodness of fit information ensures that the correct compound assignment takes place during the post-run data analysis.
The flow rate-enhanced chromatographic compression strategy has been applied to a diverse set of applications since the development of the GC-VUV method for residual solvents analysis. The fast GC-VUV approach reduced GC runtimes for terpene analysis from 30 minutes to 9 minutes (the deconvolution of monoterpene isomers is shown in Figure 4). It has also been demonstrated that GC runtimes as short as 14 minutes can be used for PIONA compound analysis of gasoline samples. Typical GC separation times range between 1 – 2 hours using alternative methods.
Compound class characterization
GC-VUV can be used for bulk compositional analysis because compounds share spectral shape characteristics within a class. Proprietary software applies fitting procedures to quickly determine the relative contribution of each compound category present in a sample. Retention index information is used to limit the amount of VUV library searching and fitting performed for each analyte, enabling the automated data processing routine to be completed quickly. Compound class or specific compound concentrations can be reported as either mass or volume percent.
GC-VUV bulk compound characterization was first applied to the analysis of paraffin, isoparaffin, olefin, naphthene, and aromatic (PIONA) hydrocarbons in gasoline streams. It is suitable for use with finished gasoline, reformate, reformer feed, FCC, light naphtha, and heavy naphtha samples. A typical chromatographic analysis is displayed in Figure 7. The inset shows how the analyte spectral response is fit with VUV library spectra for the selected time slice. A report detailing the carbon number breakdown within each PIONA compound class, as well as the relative mass or volume percent of classes, is shown. A table with mass % and carbon number data from a gasoline sample can be seen in Figure 8. Compound class characterization utilizes a method known as time interval deconvolution (TID), which has recently been applied to the analysis of terpenes.
References
Gas chromatography
Spectroscopy | Gas chromatography–vacuum ultraviolet spectroscopy | [
"Physics",
"Chemistry"
] | 2,261 | [
"Chromatography",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy",
"Gas chromatography"
] |
56,343,589 | https://en.wikipedia.org/wiki/SAMV%20%28algorithm%29 | SAMV (iterative sparse asymptotic minimum variance) is a parameter-free superresolution algorithm for the linear inverse problem in spectral estimation, direction-of-arrival (DOA) estimation and tomographic reconstruction with applications in signal processing, medical imaging and remote sensing. The name was coined in 2013 to emphasize its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environments (e.g., limited number of snapshots and low signal-to-noise ratio). Applications include synthetic-aperture radar, computed tomography scan, and magnetic resonance imaging (MRI).
Definition
The formulation of the SAMV algorithm is given as an inverse problem in the context of DOA estimation. Suppose an $M$-element uniform linear array (ULA) receives $K$ narrow band signals emitted from sources located at locations $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_K)$, respectively. The sensors in the ULA accumulate $N$ snapshots over a specific time. The $M \times 1$ dimensional snapshot vectors are

$\mathbf{y}(n) = \mathbf{A}\mathbf{x}(n) + \mathbf{e}(n), \quad n = 1, \ldots, N,$

where $\mathbf{A} = [\mathbf{a}(\theta_1), \ldots, \mathbf{a}(\theta_K)]$ is the steering matrix, $\mathbf{x}(n) = [x_1(n), \ldots, x_K(n)]^T$ contains the source waveforms, and $\mathbf{e}(n)$ is the noise term. Assume that $\mathbf{E}[\mathbf{e}(n)\mathbf{e}^H(\bar{n})] = \sigma \mathbf{I}_M \delta_{n,\bar{n}}$, where $\delta_{n,\bar{n}}$ is the Dirac delta and it equals 1 only if $n = \bar{n}$ and 0 otherwise. Also assume that $\mathbf{e}(n)$ and $\mathbf{x}(n)$ are independent, and that $\mathbf{E}[\mathbf{x}(n)\mathbf{x}^H(\bar{n})] = \mathbf{P}\delta_{n,\bar{n}}$, where $\mathbf{P} = \operatorname{diag}(p_1, \ldots, p_K)$. Let $\mathbf{p} = [p_1, \ldots, p_K, \sigma]^T$ be a vector containing the unknown signal powers and noise variance.

The covariance matrix of $\mathbf{y}(n)$ that contains all information about $\mathbf{p}$ is

$\mathbf{R} = \mathbf{A}\mathbf{P}\mathbf{A}^H + \sigma\mathbf{I}.$

This covariance matrix can be traditionally estimated by the sample covariance matrix $\mathbf{R}_N = \mathbf{Y}\mathbf{Y}^H / N$, where $\mathbf{Y} = [\mathbf{y}(1), \ldots, \mathbf{y}(N)]$. After applying the vectorization operator to the matrix $\mathbf{R}$, the obtained vector $\mathbf{r}(\mathbf{p}) = \operatorname{vec}(\mathbf{R})$ is linearly related to the unknown parameter $\mathbf{p}$ as

$\mathbf{r}(\mathbf{p}) = \operatorname{vec}(\mathbf{R}) = \mathbf{S}\mathbf{p},$

where $\mathbf{S} = [\mathbf{S}_1, \bar{\mathbf{a}}_{K+1}]$, $\mathbf{S}_1 = [\bar{\mathbf{a}}_1, \ldots, \bar{\mathbf{a}}_K]$, $\bar{\mathbf{a}}_k = \mathbf{a}^*_k \otimes \mathbf{a}_k$, $\bar{\mathbf{a}}_{K+1} = \operatorname{vec}(\mathbf{I}_M)$, and $\otimes$ is the Kronecker product.
SAMV algorithm
To estimate the parameter $\mathbf{p}$ from the statistic $\mathbf{r}_N = \operatorname{vec}(\mathbf{R}_N)$, we develop a series of iterative SAMV approaches based on the asymptotically minimum variance criterion. From AMV theory, the covariance matrix $\operatorname{Cov}^{\text{Alg}}_{\mathbf{p}}$ of an arbitrary consistent estimator of $\mathbf{p}$ based on the second-order statistic $\mathbf{r}_N$ is bounded by the real symmetric positive definite matrix

$\operatorname{Cov}^{\text{Alg}}_{\mathbf{p}} \geq \left[\mathbf{S}_d^H \mathbf{C}_r^{-1} \mathbf{S}_d\right]^{-1},$

where $\mathbf{S}_d = \mathrm{d}\mathbf{r}(\mathbf{p})/\mathrm{d}\mathbf{p}$. In addition, this lower bound is attained by the covariance matrix of the asymptotic distribution of $\hat{\mathbf{p}}$ obtained by minimizing

$f(\mathbf{p}) = \left[\mathbf{r}_N - \mathbf{r}(\mathbf{p})\right]^H \mathbf{C}_r^{-1} \left[\mathbf{r}_N - \mathbf{r}(\mathbf{p})\right],$

where $\mathbf{C}_r = \frac{1}{N}\left(\mathbf{R}^T \otimes \mathbf{R}\right)$.

Therefore, the estimate of $\mathbf{p}$ can be obtained iteratively.

The $\{\hat{p}_k\}_{k=1}^{K}$ and $\hat{\sigma}$ that minimize $f(\mathbf{p})$ can be computed as follows. Assume $\hat{p}_k^{(i)}$ and $\hat{\sigma}^{(i)}$ have been approximated to a certain degree in the $i$th iteration; they can be refined at the $(i+1)$th iteration by

$\hat{p}_k^{(i+1)} = \frac{\mathbf{a}_k^H \mathbf{R}^{-1(i)} \mathbf{R}_N \mathbf{R}^{-1(i)} \mathbf{a}_k}{\left(\mathbf{a}_k^H \mathbf{R}^{-1(i)} \mathbf{a}_k\right)^2} + \hat{p}_k^{(i)} - \frac{1}{\mathbf{a}_k^H \mathbf{R}^{-1(i)} \mathbf{a}_k}, \quad k = 1, \ldots, K,$

$\hat{\sigma}^{(i+1)} = \frac{\operatorname{Tr}\left(\mathbf{R}^{-2(i)} \mathbf{R}_N\right)}{\operatorname{Tr}\left(\mathbf{R}^{-2(i)}\right)},$

where the estimate of $\mathbf{R}$ at the $i$th iteration is given by $\mathbf{R}^{(i)} = \mathbf{A}\mathbf{P}^{(i)}\mathbf{A}^H + \hat{\sigma}^{(i)}\mathbf{I}$ with $\mathbf{P}^{(i)} = \operatorname{diag}(\hat{p}_1^{(i)}, \ldots, \hat{p}_K^{(i)})$.
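A minimal NumPy sketch of these power and noise updates is given below, assuming a half-wavelength-spaced ULA steering dictionary over a DOA scan grid. The periodogram-style initialization, the nonnegativity clipping of the powers, and the fixed iteration count are illustrative implementation choices, not part of the formulation above.

```python
import numpy as np

def samv(Y, grid, n_iter=30):
    """Sketch of the SAMV power/noise updates for a ULA (illustrative).

    Y    : M x N complex matrix of array snapshots.
    grid : candidate DOAs in radians defining the steering dictionary.
    """
    M, N = Y.shape
    RN = Y @ Y.conj().T / N                              # sample covariance R_N
    m = np.arange(M)[:, None]
    A = np.exp(1j * np.pi * m * np.sin(grid)[None, :])   # half-wavelength ULA

    # Periodogram-style initialization of the grid powers and noise.
    p = np.real(np.einsum('mk,mn,nk->k', A.conj(), RN, A)) / M**2
    sigma = np.real(np.trace(RN)) / M

    for _ in range(n_iter):
        R = (A * p) @ A.conj().T + sigma * np.eye(M)     # R = A P A^H + sigma I
        Ri = np.linalg.inv(R)
        RiA = Ri @ A
        d = np.real(np.einsum('mk,mk->k', A.conj(), RiA))          # a_k^H R^-1 a_k
        n_ = np.real(np.einsum('mk,mn,nk->k', RiA.conj(), RN, RiA))
        p = np.maximum(n_ / d**2 + p - 1.0 / d, 0.0)     # power update (clipped >= 0)
        Ri2 = Ri @ Ri
        sigma = np.real(np.trace(Ri2 @ RN) / np.trace(Ri2))        # noise update
    return p
```

The update lines correspond directly to the two refinement equations above; convergence tolerance and grid design are application-dependent choices.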
Beyond scanning grid accuracy
The resolution of most compressed sensing based source localization techniques is limited by the fineness of the direction grid that covers the location parameter space. In the sparse signal recovery model, the sparsity of the true signal depends on the distance between adjacent elements in the overcomplete dictionary, so the difficulty of choosing the optimum overcomplete dictionary arises. The computational complexity is directly proportional to the fineness of the direction grid, and a highly dense grid is not computationally practical. To overcome this resolution limitation imposed by the grid, the grid-free SAMV-SML (iterative Sparse Asymptotic Minimum Variance - Stochastic Maximum Likelihood) has been proposed, which refines the location estimates by iteratively minimizing a stochastic maximum likelihood cost function with respect to a single scalar parameter $\theta_k$.
Application to range-Doppler imaging
A typical application of the SAMV algorithm is the SISO radar/sonar range-Doppler imaging problem. This imaging problem is a single-snapshot application, and algorithms compatible with single-snapshot estimation are included, i.e., matched filter (MF, similar to the periodogram or backprojection, which is often efficiently implemented as a fast Fourier transform (FFT)), IAA, and a variant of the SAMV algorithm (SAMV-0). The simulation conditions are as follows: a -element polyphase pulse compression P3 code is employed as the transmitted pulse, and a total of nine moving targets are simulated. Of all the moving targets, three are of dB power and the remaining six are of dB power. The received signals are assumed to be contaminated with uniform white Gaussian noise of dB power.
The matched filter detection result suffers from severe smearing and leakage effects both in the Doppler and range domains, hence it is impossible to distinguish the dB targets. In contrast, the IAA algorithm offers enhanced imaging results with observable target range estimates and Doppler frequencies. The SAMV-0 approach provides a highly sparse result and eliminates the smearing effects completely, but it misses the weak targets.
Open source implementation
An open-source MATLAB implementation of the SAMV algorithm is available for download.
See also
Radon transform
MUSIC, a popular parametric superresolution method
References
Signal estimation
Fourier analysis
Frequency-domain analysis
Trigonometry
Wave mechanics
Medical imaging
Inverse problems
Multidimensional signal processing
Signal processing
Tomography | SAMV (algorithm) | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 1,015 | [
"Physical phenomena",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Computer engineering",
"Signal processing",
"Frequency-domain analysis",
"Applied mathematics",
"Classical mechanics",
"Waves",
"Wave mechanics",
"Inverse problems"
] |
56,343,748 | https://en.wikipedia.org/wiki/Laser%20welding%20of%20polymers | Laser welding of polymers is a set of methods used to join polymeric components through the use of a laser. It can be performed using CO2 lasers, Nd:YAG lasers, Diode lasers and Fiber lasers.
When a laser encounters the surface of plastics, it can be reflected, absorbed or penetrate through the thickness of a component. Laser welding of plastics is based on the energy absorption of laser radiation, which can be reinforced by additives and fillers.
Laser welding techniques include:
Direct laser welding
Laser surface heating,
Through transmission laser welding
Intermediate film welding.
Because of high joining speeds, low residual stresses and excellent weld appearances, laser welding processes have been widely used for automotive and medical applications.
Laser sources
The types of lasers used in the welding of polymers include CO2 lasers, Nd:YAG lasers, diode lasers and fiber lasers. CO2 lasers are mostly applied to weld thin films and thin plastics because most plastics have high energy absorption coefficients at the CO2 wavelength. Nd:YAG lasers and diode lasers produce shorter-wavelength radiation, which transmits through several millimeters of unpigmented polymer. They are used in the transmission laser welding techniques.
Carbon dioxide lasers
Carbon dioxide lasers have a wavelength of 10.6 μm, which is rapidly absorbed by most polymers. Because of the high energy absorption coefficients, processing of plastics using CO2 lasers can be done rapidly with low laser powers. This type of laser can be used in direct welding of polymers or in cutting. However, the penetration of CO2 lasers is less than 0.5 mm, so they are mostly applicable to the welding of thin films and to surface heating. Because the beam cannot be transmitted by silica fiber, it is commonly delivered by mirrors.
Nd:YAG lasers
Nd:YAG lasers have a wavelength in the range of 0.9 - 1.1 μm, with 1064 nm being the most common. These lasers provide a high beam quality allowing for small spot sizes. This type of beam can be delivered via fiber optic cable.
Diode lasers
The wavelength of diode lasers is typically in the 780 - 980 nm range. Compared with Nd:YAG and CO2 lasers, diode lasers have a distinct advantage in energy efficiency. The high-energy light can penetrate a few millimeters into semicrystalline plastics and further into unpigmented amorphous plastics. Diode lasers can be either fiber-delivered or local to the weld location. Their relatively small size makes it possible to assemble arrays for larger footprints.
Fiber laser
Fiber lasers typically exhibit wavelengths ranging from 1000 to 3500 nm. The expanded range of wavelengths has allowed for the development of through transmission welding without additional absorbing additives.
Equipment
Equipment setups may vary greatly in design and complexity. However, five components are included in most machines:
generator/power supply
control interface
actuator
lower fixture
upper fixture.
Generator/Power supply
This component transforms the received voltage and frequency into the voltage, current and frequency required by the laser source. Diode lasers and fiber lasers are the two most commonly used systems for laser welding.
Control interface
The control interface is the interface between operator and machine used to monitor operation of the system. It is built from logic circuits that report machine status and welding parameters to the operator. Depending on the laser mode, the control interface varies which parameters can be changed.
Actuator
This component is a press activated by pneumatic and electrical power. It presses the part held in the upper fixture against the components in the lower fixture and applies pre-determined loads during the welding process. Displacement controls are added to actuators to monitor their movements precisely.
Lower fixture
The lower fixture is a jig structure that locates the lower part of a joint. It provides location and alignment that ensure components are welded to tight tolerances.
Upper fixture
The upper fixture is the most complicated and important component in the whole system. The laser beam is delivered through this component to heat the parts being welded. The design of the upper fixture often varies with the laser source and heating mode. For example, when a YAG laser or a diode laser is used as the heat source, optical fibers are often employed to provide mobility while the part being welded remains stationary.
Laser interaction with polymers
There are three types of interactions that can occur between laser radiation and plastics:
reflection,
absorption
transmission
The extent of each interaction depends upon material properties, laser wavelength, laser intensity and beam speed.
Reflection
Reflection of incident laser radiation is typically on the order of 5 to 10% in most polymers, which is low compared with absorption and transmission. At normal incidence, the fraction of reflection (R) can be determined by the Fresnel equation

$R = \left(\frac{n_p - n_a}{n_p + n_a}\right)^2$

where $n_p$ is the index of refraction of the plastic and $n_a$ is the index of refraction of air (~1).
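As a quick numeric check of this normal-incidence Fresnel expression, with hypothetical refractive indices typical of commodity plastics:

```python
# Normal-incidence Fresnel reflectance at an air/polymer interface.
n_air = 1.0
for n_p in (1.49, 1.59):                    # hypothetical typical polymer indices
    R = ((n_p - n_air) / (n_p + n_air)) ** 2
    print(f"n_p = {n_p}: R = {R:.1%}")      # a few percent at normal incidence
```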
Transmission
Transmission of laser energy through certain polymers allows for processes such as through transmission welding. When the laser beam travels through an interface between different media, it is refracted unless its path is perpendicular to the surface. This effect must be considered when the laser travels through multiple layers to reach the joint region.
Internal scattering occurs when the laser passes through the thickness of semicrystalline plastics, where the crystalline and amorphous phases have different indices of refraction. Scattering can also occur in crystalline and amorphous plastics containing reinforcement such as glass fiber and certain colorants and additives. In transmission laser welding, such effects can reduce the effective laser energy reaching the joint area and limit the thickness of components.
Absorption
Laser absorption can occur at the surface of plastics or during transmission through the thickness. The amount of laser energy absorbed by a polymer is a function of the laser wavelength, polymer absorptivity, polymer crystallinity, and additives (i.e. composite reinforcements, pigments, etc.). Absorption at the surface can proceed in two ways: photolytic and pyrolytic.
The photolytic process occurs at short wavelengths (less than 350 nm, i.e. ultraviolet (UV)), where the photon energy is sufficient to break chemical bonds.
The pyrolytic process occurs at longer wavelengths (greater than 0.35 μm). This process generates heat, which can be used for welding and cutting purposes.
The heat distribution within a laser welded polymer is dictated by the Bouguer–Lambert law of absorption,

$I(z) = I(0)\,e^{-Kz}$

where $I(z)$ is the laser intensity at depth $z$, $I(0)$ is the laser intensity at the surface, and $K$ is the absorption constant.
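A short sketch of this exponential decay, using a hypothetical absorption constant, also shows the characteristic optical penetration depth 1/K at which the intensity falls to 1/e of its surface value:

```python
import math

# Bouguer-Lambert decay of laser intensity with depth: I(z) = I(0) exp(-K z).
I0 = 1.0   # surface intensity (normalized)
K = 3.0    # absorption constant, mm^-1 (hypothetical)

for z in (0.1, 0.5, 1.0):   # depth into the part, mm
    print(f"z = {z} mm: I/I0 = {I0 * math.exp(-K * z):.3f}")

print(f"penetration depth 1/K = {1.0 / K:.2f} mm")  # intensity falls to 1/e here
```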
Effect of additives
Polymers often have secondary elements added to them for various reasons (e.g., strength, color, absorption). These elements can have a profound effect on the laser interaction with the polymer component. Some common additives and their effects on laser welding are described below.
Reinforcements
Various fibers are added to polymeric materials to create higher strength composites. Some typical fiber materials include glass, carbon fiber, and wood. When the laser beam interacts with these materials it can be scattered or absorbed, changing the optical properties from those of the base polymer. In laser transmission welding, a transparent material with reinforcement may absorb or dilute the energy beam, affecting the quality of the weld. High glass fiber content increases scattering within the plastic and raises the laser energy input needed to weld a given thickness.
Colorants
Colorants (pigments) are added to polymers for various reasons including aesthetics and functional requirements (such as optics). Certain color additives, such as titanium dioxide, can have a negative impact on the laser weldability of a polymer. Titanium dioxide provides a white coloring to polymers but also scatters laser energy, making welding difficult. Another color additive, carbon black, is a very effective energy absorber and is often added to create welds. By controlling the concentration of carbon black within the absorbing polymer it is possible to control the effective area of the laser weld.
Laser application configurations
The laser beam energy can be delivered to the required areas through a variety of configurations. The four most common approaches include:
contour heating,
simultaneous heating,
quasi-simultaneous heating, and
masked heating.
Contour heating
In the contour heating (laser scanning or laser moving) technique, a laser beam of fixed dimension passes over the desired area to create a continuous weld seam. The laser source is manipulated by a galvanic mirror or a robotic system to scan at a fast rate. The benefit of contour heating is that the weld can be performed with a single laser source, which can be reprogrammed for different applications; however, due to the localized heating area, uneven contact between welding components can occur and form weld voids. The important parameters for this technique include: laser wavelength, laser power, traverse speed, and polymer properties.
Simultaneous heating
In the simultaneous heating approach, a beam spot of appropriate size is used to irradiate the entire weld area without the need for relative movement between the work piece and the laser source. For creating a weld with a large area, multiple laser sources can be combined to melt the selected region simultaneously. This approach can be adopted to substitute ultrasonic welding in the case of welding components sensitive to vibration. Key processing parameters for this approach include: laser wavelength, laser power, heating time, clamp pressure, cooling time, and polymer properties.
Quasi-simultaneous heating (QSLW)
In quasi-simultaneous heating, the work area is irradiated using scanning mirrors. The mirrors raster the laser beam over the entire work area rapidly, creating a simultaneously melted region. Some of the important parameters for this technique include: laser wavelength, laser power, heating time, cooling time, and polymer properties.
Masked heating
Masked heating is a process of laser line scanning over a region covered by a mask, which ensures that only the selected areas are heated as the laser passes. Masks can be made of laser-cut steel or other materials that effectively block the laser radiation. This approach is capable of creating micro-scale welds on components with complex geometries. Key processing parameters for this approach include: laser wavelength, laser power, heating time, clamp pressure, cooling time, and polymer properties.
Laser welding techniques
Depending on different interactions between laser and thermoplastics, four different laser welding techniques have been developed for plastic joining. CO2 lasers have good surface absorption for most thermoplastics, hence they are applied for direct laser welding and laser surface heating. Through transmission laser welding and intermediate film welding require the deep penetration of laser beam, so YAG lasers and diode lasers are the most common sources for these techniques.
Direct laser welding
Similar to laser welding of metals, in direct laser welding the surface of the polymer is heated to create a melt zone that joins two components together. This approach can be used to create butt joints and lap joints with complete penetration. Laser wavelengths between 2 and 10.6 μm are used for this process due to their high absorptivity in polymers.
Laser surface heating
Laser surface heating is similar to non-contact hot plate welding: mirrors are placed between the components to direct the laser and create a molten surface layer. The exposure duration is usually between 2-10 s. The mirrors are then retracted and the components are pressed together to form a joint. Process parameters for laser surface heating include the laser output, wavelength, heating time, change-over time, and forging pressure and time.
Through transmission laser welding (TTLW)
Through transmission laser welding of polymers is a method of creating a joint at the interface between two polymer components with different transparencies to the laser wavelength. The upper component is transparent to laser wavelengths between 0.8 μm and 1.05 μm, and the lower component is either opaque in nature or modified by the addition of colorants which promote the absorption of laser radiation. A typical colorant is carbon black, which absorbs across most of the electromagnetic spectrum. When the joint is irradiated by the laser, the transparent layer passes the light with minimal loss while the opaque layer absorbs the laser energy and heats up.
The two components are held by the lower fixture to control alignment and a small clamping force is added to the upper part to form intimate contact. A melt layer is then created at the interface between the two components, composed of a mixture of two plastic materials.
There are four different modes of transmission laser welding: scanning mode, simultaneous, quasi-simultaneous, and mask heating.
Many benefits can be obtained from transmission laser welding, such as fast welding speed, flexibility, good cosmetic properties and low residual stresses. From a processing perspective, laser welding can be performed on pre-assembled parts, reducing the need for complex fixtures; however, this method is not suitable for plastics with high crystallinity due to refraction and geometric limitations.
Intermediate film welding
Intermediate film welding is a method to join incompatible plastic components by using an intermediate film between them. Similar to transmission welding, laser radiation passes through the transparent components and melts the intermediate layers to create a joint. This film can be made of an opaque thermoplastic, solvent, viscous fluid, or other substances that heat up upon exposure to laser energy. The combination of intermediate films and adhesion promoters is able to join incompatible thermoplastics together. The thin layer then generates the heat required to fuse the system together.
Applications
Automotive applications
The black bodies of car keys are welded by the through transmission laser welding (TTLW) technique, in which laser radiation transmits through the upper component and forms a joint at the interface. Carbon black is added to the lower part of the key to absorb laser radiation. The black color of the upper part comes from a dye, which makes the component appear black while remaining transparent to laser radiation.
Other applications of laser welding in automotive industry include brake fluid reservoirs and lighting components.
Medical applications
Laser welding of plastics is applied to weld medical devices such as IV bags. Joints of high geometrical complexity can be produced by laser welding without particulate formation. This is critical for patient safety when welding techniques are applied to produce IV bags containing blood. In addition, flashes generated during welding can cause blood turbulence and destroy blood platelets. Good control of the laser power avoids flash formation and thus protects the blood cells from damage.
References
Polymers
Laser applications
Welding | Laser welding of polymers | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,937 | [
"Polymers",
"Welding",
"Mechanical engineering",
"Polymer chemistry"
] |
52,044,083 | https://en.wikipedia.org/wiki/Cryoimmunotherapy | Cryoimmunotherapy, also referred to as cryoimmunology, is an oncological treatment for various cancers that combines cryoablation of tumor with immunotherapy treatment. In-vivo cryoablation of a tumor, alone, can induce an immunostimulatory, systemic anti-tumor response, resulting in a cancer vaccine—the abscopal effect. Thus, cryoablation of tumors is a way of achieving autologous, in-vivo tumor lysate vaccine and treat metastatic disease. However, cryoablation alone may produce an insufficient immune response, depending on various factors, such as high freeze rate. Combining cryotherapy with immunotherapy enhances the immunostimulating response and has synergistic effects for cancer treatment.
Although cryoablation and immunotherapy have been used successfully in oncological clinical practice for over 100 years, and can treat metastatic disease with curative intent, they have been largely ignored in modern practice. Only recently has cryoimmunotherapy been revived as a treatment for all stages of disease.
History
Immunological effects resulting from the cryoablation of tumors were first observed in the 1960s. Since the 1960s, Tanaka treated metastatic breast cancer patients with cryotherapy and reported cryoimmunological reactions resulting from it. In the 1970s, a systemic immunological response from local cryoablation of prostate cancer was also clinically observed. In the 1980s, Tanaka, of Japan, continued to advance the clinical practice of cryoimmunology with combination treatments including cryochemotherapy and cryoimmunotherapy. In 1997, Russian scientists confirmed the efficacy of cryoimmunotherapy in inhibiting metastases in advanced cancer. In the 2000s, China embraced cryoablation treatment for cancer and has since led the practice, with cryoimmunotherapy treatments available to cancer patients in numerous hospitals and medical clinics throughout the country. In the 2010s, American researchers and medical professionals started to explore cryoimmunotherapy for systemic treatment of cancer.
Mechanisms of actions
Cryoablation of tumor induces necrosis of tumor cells. The immunotherapeutic effect of cryoablation of tumor is the result of the release of intracellular tumor antigens from within the necrotized tumor cells. The released tumor antigens help activate anti-tumor T cells, which destroy remaining malignant cells. Thus, cryoablation of tumor elicits a systemic anti-tumor immunologic response.
The immunostimulation resulting from cryoablation alone may not be sufficient to induce sustained, systemic regression of metastases, but it can be synergized with the combination of immunotherapy treatments and vaccine adjuvants.
Various adjuvant immunotherapy and chemotherapy treatments can be combined with cryoablation to sustain systemic anti-tumor response with regression of metastases, including:
Injection of immunomodulating drugs (e.g., therapeutic antibodies) and vaccine adjuvants (e.g., saponins) directly into the cryoablated, necrotized tumor lysate immediately after cryoablation
Administration of autologous immune enhancement therapy, including dendritic cell therapy and CIK cell therapy
See also
Combinatorial ablation and immunotherapy
Photoimmunotherapy
References
External links
The Great Prostate Hoax: How Big Medicine Hijacked...
Immunologic Response to Cryoablation of Breast Cancer
Modern Cryosurgery for Cancer
Percutaneous Cryotherapy of Renal Cell Carcinoma Under an Open MRI System
Tumor Ablation: Effects on Systemic and Local Anti-Tumor Immunity and on Other Tumor-Microenvironment Interactions
Basics of Cryosurgery
Cryosurgery: A Practical Manual
Dermatological Cryosurgery and Cryotherapy
The Abscopal Effect and the Prospect of Using Cancer Against Itself
Tumor Ablation: Principles and Practice
Cryoimmunologie: Cryoimmunology: colloque
Metastatic Bone Disease: An Integrated Approach to Patient Care
Musculoskeletal Cancer Surgery: Treatment of Sarcomas and Allied Diseases
Prospects for cryo-immunotherapy in cases of metastasizing carcinoma of the prostate
Therapy
Cancer
Cryobiology | Cryoimmunotherapy | [
"Physics",
"Chemistry",
"Biology"
] | 929 | [
"Biochemistry",
"Physical phenomena",
"Phase transitions",
"Cryobiology"
] |
52,046,282 | https://en.wikipedia.org/wiki/Stable%20salt%20reactor | The stable salt reactor (SSR) is a nuclear reactor design under development by Moltex Energy Canada Inc. and its subsidiary Moltex Energy USA LLC, based in Canada, the United States, and the United Kingdom, as well as MoltexFLEX Ltd., based in the United Kingdom.
The SSR design being developed by Moltex Energy Canada Inc. is the Stable Salt Reactor - Wasteburner (SSR-W), which incorporates elements of the molten salt reactor, and aims to have improved safety characteristics (intrinsically safe) and economics (LCOE of $45/MWh USD or less) over traditional light water reactors.
SSRs, which are protected by robust patents, are being designed so that they will not need expensive containment structures and components to mitigate radioactive releases in accident scenarios. The design would preclude the type of widespread radiological contamination that occurred in the Chernobyl or Fukushima accidents, because any hazardous isotopes that might otherwise become airborne would be chemically bound to the coolant. Additionally, the modular design would allow factory production of components and delivery to site by standard road transportation, reducing costs and construction timescales.
The fuel design is a hybrid between light water reactor fuel assemblies and traditional molten salt reactor approaches, in which the fuel is mixed with the coolant. The liquid salt fuel mixture is contained within fuel assemblies that are very similar to current light water reactor technology. The fuel assemblies are then submerged in a pool of liquid salt coolant.
Moltex Energy Canada Inc. plans to deploy the SSR-W and associated waste recycling facility in New Brunswick, Canada in partnership with NB Power. The company has support and funding from the Canadian federal government, the government of New Brunswick, NB Power, Ontario Power Generation, ARPA-E, IDOM, SNC Lavalin.
Technology
The basic unit of the reactor core is the fuel assembly. In the SSR-W, each assembly contains nearly 300 fuel tubes of 10 mm diameter, filled to a height of 1.8 m with fuel salt. The tubes have "diving bell" gas vents at the top to allow fission gases to escape. The assemblies are loaded vertically into the core, with fresh assemblies entering through an airlock and being inserted into the core by a fuelling machine.
Fuel and materials
The fuel in the SSR is two-thirds sodium chloride (table salt) and one-third mixed lanthanide/actinide trichlorides. Fuel for the initial reactors is planned to come from converted spent nuclear fuel from existing conventional reactors. In the UK, the fuel could come from stocks of civil plutonium dioxide from PUREX reprocessing, downblended and converted to chloride, with impurities added to reduce proliferation concerns.
Trichlorides are more thermodynamically stable than the corresponding fluoride salts, and can therefore be maintained in a strongly reducing state by contact with sacrificial nuclear-grade zirconium metal added as a coating on, or an insert within, the fuel tube of the SSR-W. As a result, using this patented approach, the fuel tube can be made from standard nuclear-certified steel without risk of corrosion. Since the reactor operates in the fast spectrum, the tubes will be exposed to very high neutron flux and so will suffer high levels of radiation damage, estimated at 100–200 dpa over the tube life. The highly neutron-damage-tolerant steel PE16 will therefore be used for the tubes. Other steels with fast-neutron tolerance (such as T9, NF616 and 15-15Ti) could also be used depending on local supply chain capabilities.
The average power density in the SSR-W fuel salt is 150 kW/L, which allows a large temperature margin below the boiling point of the salt.
Coolants
The coolant in the SSR-W reactor tank is a chloride-based salt. The coolant also contains an agent to reduce its redox potential, making it virtually non-corrosive to standard types of steel. The reactor tank, support structures and heat exchangers can therefore be built with standard 316L stainless steel.
The coolant salt is circulated through the reactor core by three pumps attached to the heat exchangers in each module. Flow rates are modest (approximately 1 m/s) with a resulting low requirement for pump power. Redundant engineering would allow operation to continue in the event of a pump failure.
Safety
SSRs are designed with intrinsic safety characteristics being the first line of defence. No operator or active system is required to maintain the reactor in a safe and stable state. The following are primary intrinsic safety features of the SSR.
Reactivity control
As the SSR is self-controlling, no mechanical control is required. This is made possible by the combination of a high negative temperature coefficient of reactivity and the ability to continually extract heat from the fuel tubes. As heat is taken out of the system the temperature drops, causing the reactivity to go up. Conversely, when the reactor heats up, reactivity decreases. This provides security against all overpower scenarios, such as a reactivity insertion accident. For the SSR-W, diverse and redundant safety is also provided by an array of gravitationally driven boron carbide control rods.
Non-volatile radioactive material
Use of molten salt fuel with the appropriate chemistry eliminates the hazardous volatile iodine and caesium, making multi-layered containment unnecessary to prevent airborne radioactive plumes in severe accident scenarios. For the SSR-W, the noble gases xenon and krypton would leave the reactor core in normal operation, but would be trapped until their radioactive isotopes decay, so there would be very little that could be released in an accident.
No high pressures
In a water-cooled reactor, high internal pressures provide a driving force for dispersion of radioactive materials in the event of an accident. In contrast, molten salt fuels and coolants have boiling points far above the SSR's operating temperature. So, its core runs at atmospheric pressure. Physical separation of the steam-generating system from the radioactive core, by means of a secondary coolant loop, eliminates high pressure within the reactor. High pressures within fuel tubes are also avoided by venting off fission gases into the surrounding coolant salt.
Low chemical reactivity
Zirconium in pressurized water reactors and sodium in fast reactors both create the potential for severe explosion and fire risks. No chemically reactive materials are used in the SSR.
Decay heat removal
Immediately after a nuclear reactor shuts down, almost 7% of its previous operating power continues to be generated from the decay of short half-life fission products. In conventional reactors, removing this decay heat passively is challenging because of the reactors' low operating temperatures. An SSR operates at much higher temperatures, so this heat can be rapidly transferred away from the core. In the event of a reactor shutdown and failure of all active heat-removal systems in the SSR, decay heat from the core would dissipate into air-cooling ducts around the perimeter of the tank that operate continually. This is known as the Emergency Heat Removal System. The main heat-transfer mechanism is radiative. Radiative heat transfer rises steeply with temperature, so it is negligible at operating temperatures but sufficient under higher-temperature accident conditions. The reactor components are not damaged during this process and the plant can be restarted afterwards.
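As a rough illustration of how quickly decay heat falls after shutdown (not part of the SSR design documentation), the classical Way-Wigner correlation estimates the decay-heat fraction as 0.0622[t^-0.2 - (t + t0)^-0.2] for a time t after shutdown following an operating period t0:

```python
def decay_heat_fraction(t, t_op=3.15e7):
    """Way-Wigner estimate of decay heat as a fraction of operating power.

    t    : time after shutdown, seconds
    t_op : prior operating time, seconds (~1 year here, hypothetical)
    """
    return 0.0622 * (t ** -0.2 - (t + t_op) ** -0.2)

for t in (1, 60, 3600, 86400):   # 1 s, 1 min, 1 h, 1 day after shutdown
    print(f"t = {t:>6} s: P/P0 = {decay_heat_fraction(t):.3%}")
```

One second after shutdown this gives roughly 6%, consistent with the "almost 7%" figure above, falling to about 1% within an hour.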
Consumption of nuclear waste
Most countries that use nuclear power plan to store spent nuclear fuel deep underground until its radioactivity has reduced to levels similar to that of natural uranium. As the SSR-W consumes nuclear waste, the countries could use them to reduce the volume of waste that ends up in long-term storage.
Operating in the fast spectrum, the SSR-W is effective at transmuting long-lived actinides into more stable isotopes. Today’s reactors that are fuelled by reprocessed spent fuel need very high-purity plutonium to form a stable pellet. The SSR-W can have any level of lanthanide and actinide contamination in its fuel, so long as it can still go critical. This low level of purity greatly simplifies the recycling method for existing waste.
The well established recycling method is based on pyroprocessing. A 2016 report by the Canadian Nuclear Laboratories on recycling of CANDU fuel estimates that pyroprocessing would cost about half as much as more conventional reprocessing. Pyroprocessing for the SSR-W uses only one third of the steps of conventional pyroprocessing, which will make it even cheaper. It is potentially competitive with the cost of manufacturing fresh fuel from mined uranium.
Waste from the SSR-W will take the form of solid salt in tubes. This can be vitrified and stored underground for over 100,000 years, as is planned today, or it can be recycled. In that case, fission products would be separated out and safely stored at ground level for the several hundred years needed for them to decay to radioactivity levels similar to that of uranium ore. The troublesome long-lived actinides and the remaining fuel would go back into the reactor, where they could be burned and transmuted into more-stable isotopes.
Other stable salt reactor designs
Stable salt reactor technology is highly flexible and can be adapted to several different reactor designs. The use of molten salt fuel in standard fuel assemblies allows stable salt versions of many of the large variety of nuclear reactors considered for development worldwide. The industry’s focus today however is to allow rapid development and roll out of low-cost reactors.
Another design now in development, by MoltexFLEX Ltd., is the FLEX reactor, a thermal spectrum SSR fuelled by low-enriched uranium (around 6%). The FLEX reactor may be more suited to nations without an existing nuclear fleet and concerns about waste. It is moderated with graphite as part of the fuel assembly and has significant peaking plant capabilities.
Moltex Energy Canada Inc., Moltex Energy USA LLC and MoltexFLEX Ltd. have also conceptualized a thorium breeding version of the SSR (SSR-Th). This reactor would contain thorium in the coolant salt, which could breed new fuel. Thorium is an abundant fuel source that can provide energy security to nations that do not have their own uranium reserves.
With this range of reactor options and the large global reserves of uranium and thorium available, SSRs could fuel the planet for several thousand years.
Economics
The capital cost of the SSR-W was estimated at $1,950/kW USD by an independent UK nuclear engineering firm. For comparison, the capital cost of a modern pulverized coal power station in the United States is $3,250/kW and the cost of large-scale nuclear is $5,500/kW. Further reductions to this cost are expected for modular factory-based construction.
This low capital cost results in a levelised cost of electricity (LCOE) of $44.64/MWh USD with substantial potential for further reductions, because of the greater simplicity and intrinsic safety of the SSR.
Given the pre-commercial nature of the technology, the figures for capital cost and LCOE are estimates, and may increase or decrease during completion of the development and licensing processes.
The International Energy Agency predicts that nuclear power will maintain a constant small role in the global energy supply, with a market opportunity of 219 GWe up to 2040. With the improved economics of the SSR, Moltex Energy predicts that it has the potential to access a market of over 1,300 GWe by 2040.
Development
The fundamental patent on the use of unpumped molten salt fuel was granted to Moltex Energy Ltd in 2014, and further implementation-related patents have been applied for and granted since.
The SSR-W has completed Vendor Design Review Phase 1 review with the Canadian Nuclear Safety Commission. Both the US and Canadian governments are supporting development of elements of the SSR technology.
Moltex Energy Canada Inc. plans to build, by the early 2030s, a demonstration SSR-W at the Point Lepreau nuclear power plant site in Canada under an agreement signed with NB Power.
Recognition
As well as the selection for development support by the US and Canadian governments noted above, the SSR has been identified as a leading SMR technology by a 2020 Tractebel analysis, and the SSR-W was selected as one of two SMR candidates for further progression by NB Power, out of a field of 90 candidates. It was also selected as part of the UK government's Phase 1 Advanced Modular Reactor competition but was not selected for the Phase 2 part of the funding.
References
External links
Stable Salt Reactor Technology Introduction, YouTube video
Modular Stable Salt Reactors – a simpler way to use molten salt fuel – Ian Scott Moltex Energy
Nuclear power reactor types
Nuclear power
Molten salt reactors | Stable salt reactor | [
"Physics"
] | 2,628 | [
"Power (physics)",
"Physical quantities",
"Nuclear power"
] |
52,048,187 | https://en.wikipedia.org/wiki/Twister%20sister%20ribozyme | The twister sister ribozyme (TS) is an RNA structure that catalyzes its own cleavage at a specific site. In other words, it is a self-cleaving ribozyme. The twister sister ribozyme was discovered by a bioinformatics strategy as an RNA Associated with Genes Associated with Twister and Hammerhead ribozymes, or RAGATH.
The twister sister ribozyme has a possible structural similarity to twister ribozymes. Some striking similarities were noted, but also surprising differences, such as the absence of the two pseudoknot interactions found in the twister ribozyme. The exact nature of the structural relationship between twister and twister sister ribozymes, if any, has not been determined.
Discovery
The twister sister ribozyme was discovered through a bioinformatic search. The study searched for conserved RNA structures near known twister and hammerhead ribozymes, as well as near certain protein-coding genes, based on the observation that many ribozymes are located close to one another and to such genetic elements. The self-cleaving activity of 15 conserved RNA motifs found in these regions was then tested. Three of the 15 motifs showed self-cleaving activity: the twister sister ribozyme, the pistol ribozyme and the hatchet ribozyme.
Structure
The crystal structures of the pre-catalytic state of the twister sister ribozymes were solved by two research groups independently.
The structure of a three-way junctional twister sister ribozyme is composed of two coaxially stacked helical sections connected by a three-way junction and two tertiary contacts. The active site, containing the scissile phosphate, is located in a loop with quasihelical character in one coaxially base-stacked helix. Five divalent metal ions coordinate to RNA ligands; one of them is directly bound to C54 O2' near the scissile phosphate and exchanges inner-sphere water molecules with the RNA ligands.
The crystal structure of a four-way junctional twister sister ribozyme differs from the three-way junctional one in terms of long-range interactions and active site structure. The active site of a four-way junctional twister sister is splayed apart, with an interaction between a guanine and the scissile phosphate. In addition, there are seven divalent metal ions in this ribozyme.
So far, only the pre-catalytic conformation of twister sister ribozymes is known. Understanding the transition state is needed to explain the relationship between the twister and twister sister ribozymes, as well as the structural differences between the active sites of the three-way and four-way junctional twister sister ribozymes.
Catalytic mechanism
Generally, nucleolytic ribozymes cleave a specific phosphodiester linkage by an SN2 mechanism. The O2' acts as a nucleophile to attack the adjacent phosphorus, with O5' as the leaving group. The catalytic products are a cyclic 2',3'-phosphate and a 5'-hydroxyl.
The catalytic activity of twister sister increases with pH and depends on divalent metal ions. The cleavage rate increases 10-fold with each unit increase in pH and reaches a plateau near pH 7, which indicates that the 2'-hydroxyl group of the cytidine near the active site is fully deprotonated at pH 7 in the ribozyme. However, the structural basis for the catalytic activity is still under investigation.
The three-way junctional twister sister is a metalloenzyme. An inner-sphere water of the divalent metal ion bound to C54 O2' acts as a general base to deprotonate the 2'-hydroxyl group, making it a stronger nucleophile, but the general acid that stabilizes the oxyanion leaving group remains unknown. This mechanism is supported by the exponential correlation between catalytic activity and the pKa of the hydrated metal ion.
For the four-way junctional twister sister, Ren and coworkers found that a guanine with an amino group is likely to play a role in catalysis, because G5 mutations result in very low catalytic activity. However, it remains unclear whether this guanine directly participates in catalysis, as it is not absolutely conserved. The formation of a pseudoknot in the four-way junctional TS was found to be highly Mg2+-dependent by SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experiments.
References
RNA
Ribozymes | Twister sister ribozyme | [
"Chemistry"
] | 966 | [
"Catalysis",
"Ribozymes"
] |
52,048,251 | https://en.wikipedia.org/wiki/Pistol%20ribozyme | The pistol ribozyme is an RNA structure that catalyzes its own cleavage at a specific site. In other words, it is a self-cleaving ribozyme. The pistol ribozyme was discovered through comparative genomic analysis.
Subsequent biochemical analysis determined further characteristics of the ribozyme. This understanding was advanced by an atomic-resolution crystal structure of a pistol ribozyme.
Discovery
The pistol ribozyme was discovered by a bioinformatics strategy as an RNA Associated with Genes Associated with Twister and Hammerhead ribozymes, or RAGATH.
Physical Properties
Comparative analysis of 501 unique pistol ribozyme sequences from ribozyme-associated gene classes and bacterial DNA sequences was performed to establish a consensus of the physical properties of the pistol ribozyme.
Sequences
10 nucleotides were found to be highly conserved among pistol ribozymes: G5, A19, A20, A21, A31, A32, A33, G40, C41, and G42. Mutation of any of these nucleotides disrupts the secondary structure, which in turn disrupts catalytic ability. The scissile bond was determined to be between G53 and U54, located in the junction connecting P2 and P3. Although the identity of these two nucleotides may vary, the length of the junction remains highly conserved.
Secondary Structure
The secondary structure of the pistol ribozyme is highly conserved. There are three Watson-Crick base-paired stems, P1, P2, and P3, which are connected by loops. A pseudoknot interaction exists between the loop of P1 and the junction connecting P2 and P3.
Catalytic Activity
Mechanism
The mechanism of the pistol ribozyme was deduced by identifying the products of the self-cleaving reaction. Mass spectrometry showed that the products contain 5'-hydroxyl and 2',3'-cyclic phosphate functional groups. The reaction mechanism was concluded to involve nucleophilic attack by the 2'-OH of G53 on the phosphate connecting G53 and U54. The process involves a trigonal bipyramidal penta-coordinated phosphorus center. N1 of G40 acts as a general base, activating the 2'-OH nucleophile of G53. G32 acts as a general acid, neutralizing the developing negative charge on the intermediate.
Kinetics
Under physiological pH and magnesium ion concentration, the rate constant of the pistol ribozyme self-cleaving reaction was observed to be > 10 min−1. Under optimal conditions (pH 7.0 - 9.0 and magnesium concentrations above 50 mM), the rate constant was > 100 min−1. As magnesium concentration increases, the rate of reaction increases but begins to plateau around 50 mM.
Metal Ions Specificity
Self-cleaving reactions were observed in the presence of 0.1 mM of various monovalent and divalent metal ions such as magnesium, manganese, calcium, cobalt, nickel, cadmium, barium, sodium, and lithium. This implies that the pistol ribozyme has no strict specificity for the metal ion required in catalysis.
References
RNA
Ribozymes | Pistol ribozyme | [
"Chemistry"
] | 670 | [
"Catalysis",
"Ribozymes"
] |
52,048,266 | https://en.wikipedia.org/wiki/Hatchet%20ribozyme | Background: The hatchet ribozyme is an RNA structure that catalyzes its own cleavage at a specific site. In other words, it is a self-cleaving ribozyme. Hatchet ribozymes were discovered by a bioinformatics strategy as RNAs Associated with Genes Associated with Twister and Hammerhead ribozymes, or RAGATH.
Subsequent biochemical analysis supports the conclusion of a ribozyme function, and determined further characteristics of the chemical reaction catalyzed by the ribozyme.
Nucleolytic ribozymes are small RNAs that adopt compact folds capable of site-specific cleavage/ligation reactions. 14 unique nucleolytic ribozymes have been identified to date, including recently discovered twister, pistol, twister-sister, and hatchet ribozymes that were identified based on application of comparative sequence and structural algorithms.
The consensus sequence and secondary structure of this class include 13 highly conserved and numerous other modestly conserved nucleotides interspersed among bulges linking four base-paired substructures. A representative hatchet ribozyme requires divalent cations such as Mg2+ to promote RNA strand scission, with a maximum rate constant of ~4/min. As with all other small self-cleaving ribozymes discovered to date, hatchet ribozymes employ a general mechanism for catalysis consisting of nucleophilic attack of a ribose 2'-oxygen atom on the adjacent phosphorus center. Kinetic characteristics of the reaction demonstrate that members of this ribozyme class have an essential requirement for divalent metal cations and that they have a complex active site which employs multiple catalytic strategies to accelerate RNA cleavage by internal phosphoester transfer.
Mechanism
Nucleolytic ribozymes like the hatchet ribozyme adopt an SN2-like mechanism that results in site-specific phosphodiester bond cleavage. An activated 2'-OH of the ribose 5' to the scissile phosphate adopts an in-line alignment to attack the adjacent to-be-cleaved P-O5' phosphodiester bond, resulting in the formation of 2',3'-cyclic phosphate and 5'-OH groups. X-ray crystallographic structural studies of the hammerhead, hairpin, GlmS, hepatitis delta virus (HDV), Varkud satellite, and pistol ribozymes have defined the overall RNA fold, the catalytic pocket arrangement, the in-line alignment, and the key residues that contribute to the cleavage reaction. The cleavage site is located at the 5' end of the consensus secondary motif.
In addition, removal of the nucleophilic hydroxyl renders the ribozyme inactive, as cleavage cannot proceed. More specifically, if the ribose 2'-OH is replaced with a 2'-H (i.e., 2'-deoxyribose), there is no nucleophile available to attack the adjacent phosphate group. No phosphoester transfer can then occur, which eliminates the ribozyme's enzymatic cleavage ability.
Secondary Structure
In 2019, researchers crystallized a 2.1 Å product structure of the hatchet ribozyme. The consensus sequence is depicted in the image to the right. Most hatchet ribozymes, and ribozymes in general, adopt a P0 configuration. P0 is an additional hairpin loop located at the 5' end of the cleavage site, though it does not contribute to catalytic activity or functionality. This contrasts with hammerhead ribozymes, which have a short consensus sequence near P1, at the 5' end, that promotes high-speed catalytic activity. About 90% of the sequence is conserved and similar to other ribozymes in this class.
Based on the RNA sequence, the DNA sequence coding for the hatchet ribozyme is as follows (5'→3'); in DNA, uracil is replaced by thymine.
TTAGCAAGAATGACTATAGTCACTG TTTGTACACCCCGAATAGATTAGAA GCCTAATCATAATCACGTCTGCAAT TTTGGTACA
Due to this sequence construct, self-catalyzed cleavage leaves an 8-nucleotide residue upstream of the 3' end of the RNA.
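The T-for-U substitution described above can be verified mechanically; the snippet below simply transcribes the listed DNA coding sequence back into RNA.

```python
# Transcribe the listed DNA coding sequence into RNA (T -> U).
dna = ("TTAGCAAGAATGACTATAGTCACTG"
       "TTTGTACACCCCGAATAGATTAGAA"
       "GCCTAATCATAATCACGTCTGCAAT"
       "TTTGGTACA")
print(dna.replace("T", "U"))
```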
Tertiary Structure
Each ribozyme may have different motifs and thus different tertiary structures:
The tertiary structure of the hatchet ribozyme with the HT-UUCG motif forms through dimerization. The dimer is formed through swapping of the 3' ends of the pairing strands, and is in equilibrium with the dimer formed by the HT-GAAA product; the RNA sequence therefore shifts between monomer and dimer configurations. Two molecules of the HT-GAAA ribozyme can form a pseudosymmetric dimer, with both monomers exhibiting relatively well-defined electron density. The tertiary fold consists of four stem substructures which stack upon each other, forming helical and loop structures called P1, P2, P3, and P4 and L1, L2, and L3, respectively. The cleavage site is positioned at the junction of P1 and P2, adjacent to P3 and L2. P1 is composed of three or six base pairs roughly 40% and 60% of the time, respectively, in its natural state, suggesting that length corresponds to catalytic function.
There is also a conserved palindromic sequence between bases U70' and A67', which likely triggers the formation of the dimer through Watson-Crick base-pairing interactions.
The tertiary structure is further stabilized by long-range interactions between its loops.
Effect of pH and Mg2+
Ribozyme catalysis experiments were initiated by the addition of MgCl2 and stopped for measurement at each time point by the addition of a stop solution containing urea and EDTA.
When the kobs values measured at pH 7.5 are plotted against increasing concentrations of Mg2+, there is a sharp increase in ribozyme function that plateaus as the concentration approaches 10 mM. The steep slope observed at lower Mg2+ concentrations suggests that more than one metal ion is necessary for each RNA to achieve maximal ribozyme activity. Moreover, this suggests that the construct requires higher-than-normal physiological concentrations of Mg2+ to become completely saturated with Mg2+ as the cofactor. It is possible that native unimolecular constructs, also carrying P0, might achieve saturation at concentrations of Mg2+ closer to normal physiological levels.
The effect of pH on the ribozyme rate constant in reactions containing 10 mM Mg2+ was also experimentally measured. pH-dependent ribozyme activity increases linearly with a slope of 1 until reaching a plateau of kobs ≈ 4/min near a pH value of 7.5. Higher pH values have the same catalytic effect, while more acidic pHs begin to denature the ribozyme and thus reduce catalytic function. Both the pH dependency and the maximum rate constant have interesting implications for the possible catalytic strategies used by this ribozyme class.
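The saturation behavior described in the two paragraphs above is the kind of data commonly summarized with a Hill-type fit. The sketch below shows how such a fit could be performed in Python; the data points are hypothetical placeholders chosen to mimic the described plateau near 10 mM, not measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hill-type saturation model: kobs = kmax * [Mg]^n / (K^n + [Mg]^n).
def hill(mg, kmax, K, n):
    return kmax * mg**n / (K**n + mg**n)

mg_mM = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])  # hypothetical, mM
kobs = np.array([0.05, 0.4, 1.8, 3.2, 3.9, 4.0])    # hypothetical, per min

(kmax, K, n), _ = curve_fit(hill, mg_mM, kobs, p0=[4.0, 1.0, 1.5])
# A fitted Hill coefficient n > 1 would be consistent with the inference
# that more than one Mg2+ ion is needed for full activity.
print(f"kmax = {kmax:.2f}/min, K = {K:.2f} mM, n = {n:.2f}")
```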
The effects of various mono- and divalent metal ions on hatchet ribozyme activity
The Hatchet ribozyme construct remains completely inactive when incubated in the absence of Mg2+ in reactions containing only other monovalent cations at 1 M (Na+, K+, Rb+, Li+, Cs+), 2.5 M (Na+, K+), or 3 M (Li+). In contrast, other divalent metal ions such as Mn2+, Co2+, Zn2+, and Cd2+ support ribozyme function with varying levels of efficiency. Furthermore, two metal ions (Zn2+, Cd2+) function only at low concentrations, and three metal ions (Ba2+, Ni2+, and Cu2+) inhibit activity at 0.5 mM, even when Mg2+ is present. These results indicate that hatchet ribozymes are relatively restrictive in their use of cations to promote catalysis, perhaps indicating that one or more specialized binding sites that accommodate a limited number of divalent cations are present in the RNA structure or perhaps even at the active site. Inhibition by certain divalent metal ions could be due to the displacement of critical Mg2+ ions or by general disruption of RNA folding.
Significance/Applications
One standard application is to use flanking self-cleaving ribozymes to generate precisely cut-out sequences of functional RNA molecules (e.g., shRNA, saiRNA, sgRNA). This is especially useful for in vivo expression of gene editing systems (e.g., CRISPR/Cas sgRNA) and inhibitory systems.
Another method is in vivo transcription of siRNA. This design uses multiple self-cleaving ribozymes, all transcribed from the same gene. After cleavage, both parts of the precursor siRNA (siRNA 1 and 2) can form a double strand and act as intended.
Lastly, when combining self-cleaving ribozymes with protein-coding sequences, it is important to note that the self-cleaving mechanism of the ribozymes will modify the mRNA. A 5' ribozyme will modify the 5' end of the downstream pre-mRNA, preventing the cell from creating a 5' cap. This decreases the stability of the pre-mRNA and prevents it from becoming fully functional mature mRNA. On the other side, a 3' ribozyme will prevent polyadenylation of the upstream pre-mRNA, again decreasing stability and preventing maturation. Both interfere with translation as well.
References
RNA
Ribozymes | Hatchet ribozyme | [
"Chemistry"
] | 2,089 | [
"Catalysis",
"Ribozymes"
] |
52,048,594 | https://en.wikipedia.org/wiki/RAGATH%20RNA%20motifs | RNAs Associated with Genes Associated with Twister and Hammerhead ribozymes (RAGATH) refers to a bioinformatics strategy that was devised to find self-cleaving ribozymes in bacteria. It also refers to candidate RNAs, or RAGATH RNA motifs, discovered using this strategy.
With the discovery of the twister ribozyme, it was recognized that many genetic elements in bacteria are often located near twister ribozymes and also near the previously discovered hammerhead ribozymes. These genetic elements include several gene classes, many of which are characteristic of Mu-like phages. The nearby elements also include other twister and hammerhead ribozymes; in other words, twister and hammerhead ribozymes are often located in bacteria near other twister or hammerhead ribozymes.
Given these observations, researchers hypothesized that other classes of self-cleaving ribozymes would also associate with these genetic elements. Therefore, searches were conducted on the non-coding regions nearby to the associated genetic elements to find conserved RNA structures using a previously established method. Such RNA structures would then be candidates as self-cleaving ribozymes.
Using this method, previously unknown self-cleaving ribozyme classes were found: the twister sister, pistol and hatchet ribozymes. Unusual examples of hammerhead and HDV ribozymes were also found. Twelve additional conserved RNA structures did not appear to function as ribozymes, and the biological and biochemical functions of these RNAs remain unknown. All conserved RNAs were named "RAGATH RNA motifs", and the unsolved RNAs are numbered RAGATH-4 through RAGATH-15. Additional RAGATH motifs that did not self-cleave in vitro were also later published.
RAGATH-18 RNAs contain a predicted kink turn. This particular example of a kink turn was studied to better understand how kink turn structures relate to their sequences.
References
RNA | RAGATH RNA motifs | [
"Chemistry"
] | 422 | [
"Catalysis",
"Ribozymes"
] |
61,162,823 | https://en.wikipedia.org/wiki/C18H16N2O3 | The molecular formula C18H16N2O3 (molar mass: 308.3329 g/mol) may refer to:
Amfonelic acid
Citrus Red 2
Roquinimex
Molecular formulas | C18H16N2O3 | [
"Physics",
"Chemistry"
] | 61 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,170,148 | https://en.wikipedia.org/wiki/C1377H2208N382O442S17 | The molecular formula C1377H2208N382O442S17 (molar mass: 31731.9 g/mol) may refer to:
Asparaginase
Pegaspargase
Molecular formulas | C1377H2208N382O442S17 | [
"Physics",
"Chemistry"
] | 72 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,171,196 | https://en.wikipedia.org/wiki/C17H18N2O3S | The molecular formula C17H18N2O3S (molar mass: 330.40 g/mol, exact mass: 330.1038 u) may refer to:
Atibeprone
SB-205384
Molecular formulas | C17H18N2O3S | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,171,217 | https://en.wikipedia.org/wiki/C65H82N2O18S2 | The molecular formula C65H82N2O18S2 (molar mass: 1243.49 g/mol) may refer to:
Atracurium besilate
Cisatracurium besilate
Molecular formulas | C65H82N2O18S2 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,171,888 | https://en.wikipedia.org/wiki/C30H46O2 | The molecular formula C30H46O2 (molar mass: 438.70 g/mol) may refer to:
Momordicinin
Ganoderol A
Molecular formulas | C30H46O2 | [
"Physics",
"Chemistry"
] | 53 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
61,173,505 | https://en.wikipedia.org/wiki/Cederbaum%27s%20maximum%20flow%20theorem | Cederbaum's theorem defines hypothetical analog electrical networks which will automatically produce a solution to the minimum s–t cut problem. Alternatively, simulation of such a network will also produce a solution to the minimum s–t cut problem. This article gives basic definitions, a statement of the theorem and a proof of the theorem. The presentation in this article closely follows the presentation of the theorem in the original publication.
Definitions
Definitions in this article are consistent in all respects with those given in a discussion of the maximum-flow minimum-cut theorem.
Flow graph
Cederbaum's theorem applies to a particular type of directed graph: G = (V, E). V is the set of nodes. E is the set of directed edges: E ⊆ V × V.
A positive weight w_ab is associated with each edge (a, b) ∈ E. Two of the nodes must be s and t: s ∈ V and t ∈ V.
Flow
Flow, f_ab, is a positive quantity associated with each edge in the graph. Flow is constrained by the weight of the associated edge, 0 ≤ f_ab ≤ w_ab, and by the conservation of flow at each vertex, as described in the discussion of the maximum-flow minimum-cut theorem.
Current
Current is defined as a map from each edge pair to the real numbers. Current maps from the voltage to a range that is determined by the weights of the respective forward and reverse edges. Each edge pair is the tuple consisting of the forward and reverse edges for a given pair of vertices. The currents in the forward and reverse directions between a pair of nodes are the additive inverses of one another: i_ab = −i_ba. Current is conserved at each interior node in the network. The net current at the s and t nodes is non-zero. The net current at the s node is defined as the input current. For N(s), the set of neighbors of the node s: I_in = Σ_{b ∈ N(s)} i_sb.
Voltage
Voltage is defined as a mapping from the set of edge pairs to the real numbers. Voltage is directly analogous to electrical voltage in an electrical network. The voltages in the forward and reverse directions between a pair of nodes are the additive inverses of one another: v_ab = −v_ba. The input voltage is the sum of the voltages over a set of edges, P_st, that form a path between the s and t nodes: v_in = Σ_{(a,b) ∈ P_st} v_ab.
s–t cut
An s–t cut is a partition of the graph into two parts, each containing one of either s or t. Where V_s ∪ V_t = V, V_s ∩ V_t = ∅, s ∈ V_s and t ∈ V_t, the s–t cut is (V_s, V_t). The s–t cut set is the set of edges that start in V_s and end in V_t. The minimum s–t cut is the s–t cut whose cut set has the minimum weight. Formally, the cut set is defined as: X_C = {(a, b) ∈ E : a ∈ V_s, b ∈ V_t}.
Electrical network
An electrical network is a model that is derived from a flow graph. Each resistive element in the electrical network corresponds to an edge pair in the flow graph. The positive and negative terminals of the electrical network are the nodes corresponding to the s and t terminals of the graph, respectively. The voltage state of the model becomes binary in the limit as the input voltage difference approaches infinity. The behavior of the electrical network is defined by Kirchhoff's voltage and current laws. Voltages add to zero around all closed loops and currents add to zero at all nodes.
Resistive element
A resistive element in the context of this theorem is a component of the electrical network that corresponds to an edge pair in the flow graph.
iv characteristic
The i–v characteristic is the relationship between current and voltage. The requirements are:
(i) Current and voltage are continuous functions with respect to one another.
(ii) Current and voltage are non-decreasing functions with respect to one another.
(iii) The range of the current is limited by the weights of the forward and reverse edges corresponding to the resistive element: −w_ba ≤ i_ab ≤ w_ab. The current range may be inclusive or exclusive of the endpoints. The domain of the voltage is exclusive of the maximum and minimum currents.
Statement of theorem
The limit of the current I_in between the input terminals of the electrical network, as the input voltage v_in approaches infinity, is equal to the weight of the minimum cut set X_C: lim I_in = w(X_C) as v_in → ∞.
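Using the notation reconstructed above, the statement can be summarized in display form; this is a sketch of the formalism rather than a quotation from the original publication:

```latex
% I_in: input current; v_in: input voltage; X_C: minimum s-t cut set.
\[
  \lim_{v_{\mathrm{in}} \to \infty} I_{\mathrm{in}}(v_{\mathrm{in}})
  \;=\; w(X_C)
  \;=\; \sum_{(a,b) \in X_C} w_{ab}.
\]
```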
Proof
Claim 1: The current at any resistive element in the electrical network, in either direction, is always less than or equal to the maximum flow at the corresponding edge in the graph. Therefore, the maximum current through the electrical network is less than or equal to the weight of the minimum cut of the flow graph: I_in ≤ w(X_C).
Claim 2: As the input voltage approaches infinity, there exists at least one cut set such that the voltage across the cut set approaches infinity.
This implies that: lim I_in ≥ w(X_C) as v_in → ∞.
Given claims 1 and 2 above: lim I_in = w(X_C) as v_in → ∞.
Related Topics
The existence and uniqueness of a solution to the equations of an electrical network composed of monotone resistive elements was established by Duffin.
Application
Cederbaum's maximum flow theorem is the basis for the Simcut algorithm.
References
Combinatorial optimization
Theorems in graph theory
Network flow problem | Cederbaum's maximum flow theorem | [
"Mathematics"
] | 930 | [
"Theorems in graph theory",
"Theorems in discrete mathematics"
] |
74,791,569 | https://en.wikipedia.org/wiki/Cephalotaxus%20alkaloids | Cephalotaxus alkaloids are natural products characterized by a pentacyclic structure.
Occurrence
These alkaloids are commonly found in the plum yew species (Cephalotaxus), especially in the Japanese plum-yew.
Representative
The most important representative is cephalotaxin. Other notable representatives include harringtonine and homoharringtonine.
Properties
Harringtonine and homoharringtonine exhibit antitumor activity and act against murine leukemia cells. They serve as inhibitors of protein and DNA biosynthesis and are used against acute myelocytic leukemia.
References
Alkaloids
Heterocyclic compounds with 5 rings
Nitrogen heterocycles
Oxygen heterocycles | Cephalotaxus alkaloids | [
"Chemistry"
] | 145 | [
"Organic compounds",
"Biomolecules by chemical classification",
"Natural products",
"Alkaloids"
] |
74,797,491 | https://en.wikipedia.org/wiki/Atg8ylation | Atg8ylation is a process of conjugation of mammalian ATG8 proteins (mATG8s) to proteins or membranes. The process is akin to the ubiquitylation of diverse substrates by ubiquitin. There are six principal mATG8s: LC3A, LC3B, LC3C, GABARAP, GABARAPL1 and GABARAPL2. Together, they comprise a sub-class of ubiquitin-like molecules characterized by two N-terminal α-helices added to the ubiquitin core, which serve a dual role of forming a docking site for interacting proteins containing ATG8-interaction motifs and enhancing mATG8’s affinity for membranes.
Membrane atg8ylation
Membrane atg8ylation is a response to membrane stress, damage, and remodeling inputs. This process is best appreciated by analogy to ubiquitylation, considering that atg8ylation is to membranes what ubiquitylation is to proteins. Membrane atg8ylation occurs via covalent modification by mATG8s of the membrane phospholipids phosphatidylethanolamine and phosphatidylserine. The conjugation cascade that activates mATG8s and results in membrane atg8ylation is biochemically similar to protein ubiquitylation, as both systems require ATP, E1, E2 and E3 ligases. The specific factors leading to atg8ylation include two enzymatic cascades with ATG12-ATG5 and mATG8-phosphatidylethanolamine (PE) conjugates as their end products. The ATG12-ATG5 protein-protein conjugate combines with additional proteins such as ATG16L1 or TECPR1 to form E3 ligases that spatially guide the formation of the protein-lipid conjugate, resulting in atg8ylation of specific membrane domains.
The specialization of atg8ylation for membranes is ensured by the two extra (relative to ubiquitin) α-helices at the N-terminus of mATG8s, with concealed affinities for membranes realized during atg8ylation, and by the intrinsic membrane affinities of the atg8ylation cascade E2 component ATG3 as well as the E3 components ATG16L1 or TECPR1.
Principles of membrane atg8ylation
Mammalian membranes that undergo atg8ylation include: canonical autophagosomes, phagosomes harboring phagocytosed pathogens or microbial products, perturbed or signaling endosomes, damaged lysosomes, exocytic compartments releasing exosomes, endoplasmic reticulum (ER) during its piecemeal ESCRT-dependent lysosomal degradation, and lipid droplets. The delimiting membrane of lipid droplets modified by LC3B is not a full lipid bilayer but a monolayer of phospholipids surrounding neutral lipid core. The lipid droplet atg8ylation illustrates the principle that any cellular membrane may undergo atg8ylation including double membranes of autophagosomes (double lipid bilayer), single membranes (single lipid bilayer) of phagosomes and endosomes, and a phospholipid monolayer (hemilayer) surrounding lipid droplets.
During canonical autophagy, which includes atg8ylation of growing phagophores, WIPI2, an effector of phosphatidylinositol-3-phosphate (a stress-signaling phosphoinositide phospholipid) and a known interactor of ATG16L1, helps dock the E3 ligase ATG12-ATG5/ATG16L1 to the phosphatidylinositol-3-phosphate-marked membranes. This presents activated mATG8s for conjugation to the phospholipid phosphatidylethanolamine embedded within the target membrane.
During noncanonical atg8ylation of stressed, damaged or remodeling membranes other than autophagosomes, the E3 ligases are recruited to target membranes by a variety of mechanisms. This includes docking of the ATG12-ATG5/ATG16L1 E3 ligase on vacuolar compartments including phagosomes, endosomes and lysosomes via binding of ATG16L1 to the vacuolar-type ATPase (v-ATPase). This binding is stimulated when the lumenal pH of the vacuole is perturbed. In other instances, the ATG12-ATG5/TECPR1 E3 ligase docks to stressed membranes via TECPR1, which recognizes the cytofacially displayed sphingomyelin misplaced and exposed on perturbed membranes.
Manifestations of membrane atg8ylation
Atg8ylation is an important aspect of canonical autophagy. The initial stages of autophagy morphologically detectable as crescent phagophores do occur independently of all principal mATG8s. Phagophore formation proceeds in cells defective for mATG8 lipidation. However, the size of autophagosomes is smaller without atg8ylation. Further, the quality of autophagosomal membranes, such as membrane permeability, are adversely affected. The known effects of atg8ylation on autophagosomal membranes include membrane remodeling, kinetic effects, selective cargo sequestration into autophagosomes, and effects on autophagosome-lysosome fusion. Atg8ylation is important for ESCRT-dependent sealing of nascent autophagosomes and for their maintenance in an impervious state.
The non-autophagic processes dependent on atg8ylation include: LAP (LC3-associated phagocytosis), LANDO (LC3-associated endocytosis), LC3-associated micropinocytosis (LAM), CASM (conjugation of ATG8 to single membranes), alternatively referred to as SMAC (single membrane ATG8 conjugation), and 'vATPase-ATG16L1 axis xenophagy', known under the acronym VAIL (V-ATPase-ATG16L1-induced LC3 lipidation). Many of the physiological and disease-associated effects of atg8ylation are manifested via these noncanonical processes or through canonical autophagy.
References
Post-translational modification | Atg8ylation | [
"Chemistry"
] | 1,408 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
69,043,896 | https://en.wikipedia.org/wiki/Sibelektroterm | Sibelektroterm () is a manufacturing company in Kirovsky District of Novosibirsk, Russia. It was founded in 1945. The enterprise is a developer and manufacturer of electrometallurgical equipment.
Production
The plant produces electric furnaces, gas distribution and mining equipment, agricultural equipment, etc.
Partnerships
The company collaborates with the Budker Institute of Nuclear Physics. In 2020, Sibelektroterm completed several complicated orders for the research work of this scientific organization. In addition, in November 2020, the company won a tender for the supply of magnetic cores for the Siberian Ring Photon Source (SKIF), which has been under construction in Koltsovo since August 2021.
According to an article published in Kontinent Sibir Online in 2021, 80% of the customers of Sibelektroterm's products were Kazakhstan plants.
References
Manufacturing companies based in Novosibirsk
Kirovsky District, Novosibirsk
Manufacturing companies established in 1945
Metallurgical facilities | Sibelektroterm | [
"Chemistry",
"Materials_science"
] | 206 | [
"Metallurgy",
"Metallurgical facilities"
] |
69,045,082 | https://en.wikipedia.org/wiki/Germanium%20tetrabromide | Germanium tetrabromide is the inorganic compound with the formula GeBr4. It is a colorless solid that melts near room temperature. It can be formed by treating solid germanium with bromine, or by treating a germanium-copper mixture with bromine:

Ge + 2 Br2 → GeBr4

From this reaction, GeBr4 has a heat of formation of 83.3 kcal/mol.
The compound is liquid at 25 °C, and forms an interlocking liquid structure. From room temperature down to −60 °C the structure takes on a cubic α form, whereas at lower temperatures it takes on a monoclinic β form.
References
Germanium(IV) compounds
Bromides | Germanium tetrabromide | [
"Chemistry"
] | 139 | [
"Bromides",
"Inorganic compounds",
"Inorganic compound stubs",
"Salts"
] |
69,046,334 | https://en.wikipedia.org/wiki/Dialane | Dialane is an unstable compound of aluminium and hydrogen with formula Al2H6. Dialane is unstable in that it reacts with itself to form a polymer, aluminium hydride. Isolated molecules can be stabilised and studied in solid hydrogen.
References
Aluminium compounds
Hydrogen compounds | Dialane | [
"Chemistry"
] | 56 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
69,051,638 | https://en.wikipedia.org/wiki/O-Acetylbufotenine | O-Acetylbufotenine, or bufotenine O-acetate, also known as 5-acetoxy-N,N-dimethyltryptamine (5-AcO-DMT) or O-acetyl-N,N-dimethylserotonin, is a synthetic tryptamine derivative and putative serotonergic psychedelic. It is the O-acetylated analogue of the naturally occurring peripherally selective serotonergic tryptamine bufotenine (5-hydroxy-N,N-dimethyltrypamine or N,N-dimethylserotonin) and is thought to act as a centrally penetrant prodrug of bufotenine.
Bufotenine has low lipophilicity, limitedly crosses the blood–brain barrier in animals, does not produce psychedelic-like effects in animals except at very high doses or administered directly into the brain, and produces inconsistent and weak psychedelic effects accompanied by pronounced peripheral side effects in humans. O-Acetylbufotenine, which is much more lipophilic than bufotenine due to its acetyl group, was developed in an attempt to overcome bufotenine's limitations and allow for the drug to efficiently cross the blood–brain barrier. In contrast to peripherally administered bufotenine, O-acetylbufotenine readily enters the brain in animals and produces robust psychedelic-like effects. In addition, O-acetylbufotenine was more potent than N,N-dimethyltryptamine (DMT) or 5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT; O-methylbufotenine) in animals. However, the effects of O-acetylbufotenine in humans have not been assessed or reported. Alexander Shulgin speculated about O-acetylbufotenine in TiHKAL but did not personally synthesize or test it.
O-Acetylbufotenine is thought to be a prodrug of bufotenine, which is a non-selective agonist of many of the serotonin receptors, including of the serotonin 5-HT2A receptor (the activation of which is associated with psychedelic effects). However, O-acetylbufotenine has also unexpectedly been found to act directly as an agonist of certain serotonin receptors, including of the serotonin 5-HT1A and 5-HT1D receptors.
The O-acetyl substitution of O-acetylbufotenine is expected to be cleaved quite rapidly in vivo, which may hinder the ability of O-acetylbufotenine to cross the blood–brain barrier and deliver bufotenine into the central nervous system. Because of this, other O-acyl derivatives of bufotenine besides O-acetylbufotenine have been developed and studied. One such analogue, O-pivalylbufotenine, has been assessed and has likewise been shown to produce psychedelic-like effects in animals.
O-Acetylbufotenine was first described in the scientific literature by 1968.
See also
4-AcO-DMT
5-EtO-DMT
5-HO-AMT
5-HO-DiPT
O,O′-Diacetyldopamine
O,O′-Dipivaloyldopamine
α-Methyltryptophan
Neurotransmitter prodrug
References
Acetate esters
Designer drugs
Neurotransmitter precursors
Prodrugs
Psychedelic tryptamines
Serotonin receptor agonists
Tertiary amines | O-Acetylbufotenine | [
"Chemistry"
] | 781 | [
"Chemicals in medicine",
"Prodrugs"
] |
64,649,692 | https://en.wikipedia.org/wiki/Fill%20and%20finish | In the pharmaceutical industry, fill and finish (also referred to as fill finish, fill-finish or fill/finish) is the process of filling vials with vaccine, biological and pharmaceutical Drug Substances (DS) and finishing the process of packaging the medicine for distribution. Many vaccine manufacturers use third parties to fill and finish their vaccines.
The fill/finish process is a common bottleneck in the manufacturing and deployment of vaccines.
To address this problem, in 2013 the U.S. federal government created the Fill Finish Manufacturing Network, a network of third-party provider contracts intended to perform fill and finish operations for vaccines against future infectious diseases. As part of its response to the COVID-19 pandemic, the UK government has provided financial support for fill and finish operations.
References
Vaccination
Manufacturing
Drug manufacturing
Packaging | Fill and finish | [
"Engineering",
"Biology"
] | 165 | [
"Vaccination",
"Manufacturing",
"Mechanical engineering"
] |
64,649,725 | https://en.wikipedia.org/wiki/Clozapine%20N-oxide | Clozapine N-oxide (CNO) is a synthetic drug used mainly in biomedical research as a ligand to activate Designer Receptors Exclusively Activated by Designer Drugs (DREADDs). Although initially believed to be biologically inert, it has been shown not to enter the brain after administration and to reverse-metabolize in peripheral tissues to form clozapine. Clozapine can bind to a number of different serotonergic, dopaminergic and adrenergic receptors within the brain. These off-target effects mean that behavioral data obtained with the CNO-DREADD system have to be interpreted with caution.
Alternatives to CNO with more affinity, more inert character, and faster kinetics include Compound 21 (C21) and deschloroclozapine (DCZ).
References
Neuroscience
Pharmacology | Clozapine N-oxide | [
"Chemistry",
"Biology"
] | 174 | [
"Pharmacology",
"Neuroscience",
"Medicinal chemistry"
] |
64,657,529 | https://en.wikipedia.org/wiki/Wei-Ta%20Fang | Wei-Ta Fang is a Taiwanese wetland scientist and environmental educator. He is a distinguished professor, vice dean of the College of Science, and director of the Graduate Institute of Sustainability Management and Environmental Education at National Taiwan Normal University, as well as president of the Society of Wetland Scientists (SWS) Asia Chapter.
Early life and education
Fang was born in Kaohsiung, Taiwan on 14 February 1966. He earned a Bachelor of Arts in land economics and administration from National Taipei University in 1989. He completed a master's degree in environmental planning (MEP) from Arizona State University, in 1994, followed by a second master's degree in landscape architecture in design studies (MDes.S.) from the Harvard Graduate School of Design in 2001. He obtained a Ph.D. from the Department of Ecosystem Science and Management, Texas A&M University in 2005.
Career
He served as a specialist in the Taipei Land Management Bureau in 1991 and 1992, and as a senior specialist in charge of environmental education and environmental impact assessments (EIAs) at the headquarters of Taiwan's Environmental Protection Administration (EPA) from 1994 to 2006. He was also co-principal investigator (co-PI) for the National Environmental Literacy Survey in Taiwan between 2012 and 2020. He currently serves as distinguished professor, vice dean of the College of Science, and director of the Graduate Institute of Sustainability Management and Environmental Education at National Taiwan Normal University, and is president of the Society of Wetland Scientists (SWS) Asia Chapter and of the Taiwan Wetland Society. In March 2016 he was appointed visiting research fellow by the director, Dr. Xingyuan He, at the Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, in Changchun, China. His current research focuses on using psychological, social, and physical environmental characteristics to predict pro-environmental behavioral change, for example among children with excessive smartphone usage during the COVID-19 pandemic.
Publications
《The Living Environmental Education: Sound Science toward a Cleaner, Safer, and Healthier Future》
《Tourism in Emerging Economies: The Way We Green, Sustainable, and Healthy》
《Envisioning Environmental Literacy: Action and Outreach》
《Determinants of pro-environmental behavior among excessive smartphone usage children and moderate smartphone usage children in Taiwan》
《Ecotourism: Environment, Health, and Education》
References
Living people
1966 births
Environmental scientists
Harvard Graduate School of Design alumni
Scientists from Kaohsiung
National Taipei University alumni
Academic staff of the National Taiwan Normal University
Arizona State University alumni
Texas A&M University alumni
Taiwanese expatriates in the United States | Wei-Ta Fang | [
"Environmental_science"
] | 518 | [
"Environmental scientists"
] |
64,659,151 | https://en.wikipedia.org/wiki/Independence%20complex | The independence complex of a graph is a mathematical object describing the independent sets of the graph. Formally, the independence complex of an undirected graph G, denoted by I(G), is an abstract simplicial complex (that is, a family of finite sets closed under the operation of taking subsets), formed by the sets of vertices in the independent sets of G. Any subset of an independent set is itself an independent set, so I(G) is indeed closed under taking subsets.
Every independent set in a graph is a clique in its complement graph, and vice versa. Therefore, the independence complex of a graph equals the clique complex of its complement graph, and vice versa.
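Since any subset of an independent set is itself independent, the faces of I(G) can be enumerated by brute force for small graphs. The following Python sketch (function and variable names are illustrative, and the enumeration is exponential, so it is for illustration only) lists all faces of I(G) from an edge list:

```python
from itertools import combinations

def independence_complex(vertices, edges):
    """Return every independent set of the graph (V, E) - these sets are
    exactly the faces of the independence complex I(G)."""
    edge_set = {frozenset(e) for e in edges}
    def is_independent(subset):
        return all(frozenset(p) not in edge_set
                   for p in combinations(subset, 2))
    return [set(c)
            for r in range(len(vertices) + 1)
            for c in combinations(vertices, r)
            if is_independent(c)]

# The 4-cycle a-b-c-d: the maximal faces of its independence complex
# are {a, c} and {b, d}.
print(independence_complex("abcd",
                           [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```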
Homology groups
Several authors studied the relations between the properties of a graph G = (V, E), and the homology groups of its independence complex I(G). In particular, several properties related to the dominating sets in G guarantee that some reduced homology groups of I(G) are trivial.
1. The total domination number of G, denoted γt(G), is the minimum cardinality of a total dominating set of G - a set S such that every vertex of V is adjacent to a vertex of S. If γt(G) is sufficiently large relative to k, then the k-th reduced homology group of I(G) is trivial.
2. The total domination number of a subset A of V in G, denoted γ(A, G), is the minimum cardinality of a set S such that every vertex of A is adjacent to a vertex of S. The independence domination number of G, denoted iγ(G), is the maximum, over all independent sets A in G, of γ(A, G). If iγ(G) is sufficiently large relative to k, then the k-th reduced homology group of I(G) is trivial.
3. The domination number of G, denoted γ(G), is the minimum cardinality of a dominating set of G - a set S such that every vertex of V \ S is adjacent to a vertex of S. Note that γ(G) ≤ γt(G). If G is a chordal graph and γ(G) is sufficiently large relative to k, then the k-th reduced homology group of I(G) is trivial.
4. The induced matching number of G is the largest cardinality of an induced matching in G - a matching M that includes every edge of G connecting any two vertices matched by M. If there exists a subset A of V for which a suitable condition combining its domination in G with the induced matching number holds, then the corresponding reduced homology groups of I(G) are trivial. This is a generalization of both properties 1 and 2 above.
5. The non-dominating independence complex of G, denoted I'(G), is the abstract simplicial complex of the independent sets that are not dominating sets of G. Obviously I'(G) is contained in I(G); denote the inclusion map by i. If G is a chordal graph, then the map induced by i on reduced homology is zero in all dimensions. This is a generalization of property 3 above.
6. The fractional star-domination number of G, denoted γ*s(G), is the minimum size of a fractional star-dominating set in G. If γ*s(G) is sufficiently large relative to k, then the k-th reduced homology group of I(G) is trivial.
Related concepts
Meshulam's game is a game played on a graph G, that can be used to calculate a lower bound on the homological connectivity of the independence complex of G.
The matching complex of a graph G, denoted M(G), is an abstract simplicial complex of the matchings in G. It is the independence complex of the line graph of G.
The (m,n)-chessboard complex is the matching complex of the complete bipartite graph Km,n. It is the abstract simplicial complex of all sets of positions on an m-by-n chessboard on which rooks can be placed without any two of them threatening each other.
The clique complex of G is the independence complex of the complement graph of G.
See also
Rainbow-independent set
References
Simplicial sets
Simplicial homology
Graph theory | Independence complex | [
"Mathematics"
] | 734 | [
"Graph theory objects",
"Graph theory",
"Basic concepts in set theory",
"Families of sets",
"Mathematical relations",
"Simplicial sets"
] |
72,008,050 | https://en.wikipedia.org/wiki/Ana%20Celia%20Mota | Ana Celia Mota (born 1935) is a retired Argentine-American condensed matter physicist specializing in phenomena at ultracold temperatures, including superfluids and superconductors. She is a professor emerita at ETH Zurich in Switzerland.
Education and career
Mota was born in 1935 in Argentina, and is a US citizen. She studied physics at the Balseiro Institute in Argentina, where she earned a licenciate in 1960, and became a doctoral student of John C. Wheatley. Her research with him concerned the heat capacity of liquid Helium-3.
After earning her doctorate in 1967, she worked for eight years in the Department of Physics and Institute for Pure and Applied Physical Sciences at the University of California, San Diego, and then for five more years at the University of Cologne, before joining ETH Zurich in 1980. At ETH Zurich, she was Senior Researcher in the Laboratory of Solid State Physics, professor, and director of a research group on low-temperature physics.
Recognition
Mota was named a Fellow of the American Physical Society (APS) in 1994, after a nomination from the APS Division of Condensed Matter Physics, "for work on superfluidity and superconductivity at ultra-low temperatures".
References
1935 births
Living people
Argentine physicists
Argentine women physicists
American physicists
American women physicists
Swiss physicists
Swiss women physicists
Condensed matter physicists
Fellows of the American Physical Society
Academic staff of ETH Zurich | Ana Celia Mota | [
"Physics",
"Materials_science"
] | 300 | [
"Condensed matter physicists",
"Condensed matter physics"
] |
72,010,909 | https://en.wikipedia.org/wiki/Fuzzy%20differential%20equation | Fuzzy differential equations are a generalization of ordinary differential equations in mathematics, defined as differential inclusions whose right-hand sides are the level sets of a fuzzy map - upper hemicontinuous, convex and compact-valued:

x'(t) ∈ [F(t, x(t))]α for all α ∈ [0, 1].
First order fuzzy differential equation
A first order fuzzy differential equation with real constant or variable coefficients has the form

x'(t) + a(t) x(t) = f(t),

where a(t) is a real continuous function and f(t) is a fuzzy continuous function,
such that x(t0) = x0 for a fuzzy initial value x0.
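For intuition, a fuzzy initial value problem is often solved level-wise: each α-cut of the solution is an interval whose endpoints obey ordinary ODEs. The Python sketch below (all names and values are illustrative assumptions) Euler-integrates the endpoints for the simple case x'(t) = λx(t) with λ > 0, where the endpoint equations decouple; for λ < 0 or general right-hand sides the endpoint equations couple and the choice of Hukuhara differentiability matters.

```python
def euler_alpha_cut(lower0, upper0, lam, t_end, steps):
    """Euler-integrate both endpoints of one alpha-cut of the solution of
    x'(t) = lam * x(t); valid endpoint-wise for lam > 0."""
    dt = t_end / steps
    lo, hi = lower0, upper0
    for _ in range(steps):
        lo += dt * lam * lo
        hi += dt * lam * hi
    return lo, hi

# Triangular fuzzy initial value (0.5, 1.0, 1.5): the alpha-cut endpoints
# are 0.5 + 0.5*alpha and 1.5 - 0.5*alpha.
for alpha in (0.0, 0.5, 1.0):
    print(alpha, euler_alpha_cut(0.5 + 0.5 * alpha, 1.5 - 0.5 * alpha,
                                 lam=1.0, t_end=1.0, steps=1000))
```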
Linear systems of fuzzy differential equations
A system of equations of the form

xi'(t) = ai1(t) x1(t) + … + ain(t) xn(t) + fi(t), for i = 1, …, n,

where the aij(t) are real functions and the fi(t) are fuzzy functions.
Fuzzy partial differential equations
A fuzzy differential equation with a partial differential operator L is

L u(x) = f(x) for all x in the domain,

where f is a fuzzy function.
Fuzzy fractional differential equation
A fuzzy differential equation with a fractional differential operator is

Dq x(t) = f(t, x(t)) for all t,

where q is a rational number giving the order of the fractional derivative.
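The fractional operator is not specified here; a common choice in the fuzzy fractional literature is the Caputo derivative (assumed in the display below; the Riemann–Liouville derivative is also used). For 0 < q < 1 it reads:

```latex
\[
  D^{q} x(t) \;=\; \frac{1}{\Gamma(1-q)}
  \int_{0}^{t} (t-s)^{-q}\, x'(s)\, \mathrm{d}s,
  \qquad 0 < q < 1.
\]
```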
References
Fuzzy logic
Differential equations | Fuzzy differential equation | [
"Mathematics"
] | 136 | [
"Mathematical objects",
"Differential equations",
"Equations"
] |
72,012,955 | https://en.wikipedia.org/wiki/Ganga%20Water%20Supply%20Scheme | Ganga Water Lift Project is a multi-phase drinking water project in Bihar, India. It is an ambitious project of the Chief Minister of Bihar, Nitish Kumar, to supply safe drinking water to water-stressed towns such as Gaya, Rajgir and Nawada, located in the southern part of the state, through a pipeline that lifts water from the Ganga river near Hathidah Ghat in Mokama in Patna district. The initially approved cost of the first phase was later revised upward. The Government of Bihar approved the first phase of the Ganga Water Lift Scheme (GWLS) of the Water Resource Department (WRD) in December 2019. The Ganga Water Lift Project is part of Nitish Kumar's 'Jal-Jivan-Hariyali Abhiyan', which aims to minimize the adverse effects of climate change.
Project details
The total length of the pipeline supplying Ganga water to the three towns is 190.90 km. The water is lifted near Hathidah Ghat in Mokama, and the pipeline runs alongside national and state highways. The main pipeline runs from Hathidah to Giriyak via Sarmera and Barbigha; from Giriyak, one branch goes to Rajgir and another to Nawada. Water from the Ganga is brought to the Motnaje water treatment plant in Nawada district through a pipeline. In Gaya, the urban development & housing department (UDHD) will ensure supply of water to households through the pipeline. The public health & engineering department (PHED) will be responsible for Ganga water supply in Nawada. The length of the pipeline on the Hathidah-Motnaje-Tetar-Abgilla route is 150 km.
A third pipeline goes to Manpur (near Gaya) via Vanganga, Tapovan and Jethia. A major water storage point has been constructed near Manpur, and similar storage points are to be constructed for the other towns. The project is being completed in three phases: Ganga water will be supplied to Gaya, Bodhgaya and Rajgir in the first phase, and Nawada town will be covered in the second phase.
Hyderabad-based infrastructure firm Megha Engineering & Infrastructures Ltd (MEIL) completed phase 1 of Ganga Water Lift Project in 2022.
See also
Kaleshwaram Lift Irrigation Project
Colorado River Aqueduct
References
Water treatment | Ganga Water Supply Scheme | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 497 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
72,013,014 | https://en.wikipedia.org/wiki/1%2C1%27-Ferrocenediisocyanate | 1,1'-Ferrocenediisocyanate (1,1'-diisocyanatoferrocene) is the organoiron compound with the formula Fe(C5H4NCO)2. It is the simplest diisocyanate derivative of ferrocene. It can be synthesized by the Curtius rearrangement of the diacyl azide, using several protocols starting from 1,1'-ferrocenedicarboxylic acid. The compound is useful as an intermediate in the synthesis of 1,1'-diaminoferrocene by hydrolysis of the isocyanates. Various poly(siloxane–urethane) crosslinked polymers can be formed by reaction with siloxane-diols. These compounds are of interest as electrochemically active polymers that might have good mechanical properties at low temperature.
References
Ferrocenes
Cyclopentadienyl complexes
Isocyanates | 1,1'-Ferrocenediisocyanate | [
"Chemistry"
] | 193 | [
"Isocyanates",
"Cyclopentadienyl complexes",
"Functional groups",
"Organic compounds",
"Organometallic chemistry",
"Organic compound stubs",
"Organic chemistry stubs"
] |
72,013,280 | https://en.wikipedia.org/wiki/Familial%20natural%20short%20sleep | Familial natural short sleep is a rare, genetic, typically inherited trait where an individual sleeps for fewer hours than average without suffering from daytime sleepiness or other consequences of sleep deprivation. This process is entirely natural in this kind of individual, and it is caused by certain genetic mutations. A person with this trait is known as a "natural short sleeper".
This condition is not to be confused with intentional sleep deprivation, which leaves symptoms such as irritability or temporarily impaired cognitive abilities in people who are predisposed to sleep a normal amount of time but not in people with FNSS.
This sleep type is not considered to be a genetic disorder nor are there any known harmful effects to overall health associated with it; therefore it is considered to be a genetic, benign trait.
Presentation
Signs
Individuals with this trait are known for their lifelong ability to sleep for less time than average people, usually 4 to 6 hours each night (compared with the average sleep time of 8 hours), while waking up feeling relatively well rested. They also show a notable absence of any of the consequences of sleep deprivation, which an average person could not avoid on the sleep schedule that is typical for people with FNSS.
Another common trait among people with familial natural short sleep is an increased ability to recall memories. Other common traits include an outgoing personality, high productivity, a lower-than-average body mass index (possibly due to a faster metabolism), higher resilience and heightened pain tolerance. All of these traits are slightly more pronounced in people with natural short sleep than in people with natural normal sleep, essentially making them slightly more efficient than average people.
Onset
This condition is life-long, meaning that a natural short sleeper has naturally slept for a shorter time than average for most, if not all, of their lives.
Inheritance
This trait is inherited in an autosomal dominant manner, which means that for a person to be a natural short sleeper, they must have at least one copy of a mutation related to this condition; this mutation must have been either inherited or have arisen from a spontaneous genetic error. A carrier of a mutation associated with FNSS has a 50% chance of transmitting the mutation to each of their offspring.
Complications
This condition has no known health complications associated with it.
A study done in 2001 showed that natural short sleepers are more prone to subclinical hypomania, a temporary mental state, most common during adolescence, characterized by racing thoughts, abnormally high focus on goal-directed activities, unusually euphoric mood, and a perceived lack of need for sleep.
Genetics
Early research, particularly from the lab of Ying-Hui Fu, named several mutations as causing heritable short sleep in studied families. These mutations implicated the genes DEC2/BHLHE41, ADRB1, NPSR1, and GRM1. However, subsequent biobank research showed that other carriers of these mutations or of different high-impact mutations in the same genes do not exhibit any reduction in sleep duration. This indicates that the short sleeper phenotype in the original case reports had a different basis.
Current genome-wide association studies suggest that sleep behaviors such as sleep length are highly polygenic, with most heritability explained by variants with small effects. The largest non-pathogenic genetic effect on sleep duration found to date is a change of 2.44 or 3.24 minutes associated with variation in the PAX8 gene.
Diagnosis
Diagnosis is usually not necessary, as this trait is not considered a disorder in and of itself, however, there are various methods one's doctor can use to diagnose the condition, including but not limited to the use of questionnaires such as the morningness-eveningness questionnaire, the Munich chronotype questionnaire, etc. Clinical diagnostic methods for the condition include electroencephalograms, delta-power analyses, and genetic testing.
Differential diagnosis
There are other conditions similar to this specific trait that share some characteristics between each other, these include:
Advanced sleep phase syndrome, this is a rare condition affecting the circadian rhythm in which individuals have an early sleep onset and equally early sleep awakening that is part of their regular sleep schedule. While both sleep traits are similar in the sense of early awakening, patients with FASP typically spend the same amount of time (8 hours) sleeping as an average person, while patients with FNSS do not. Another difference between the two is that early sleep onset is not a feature shown by people with familial natural short sleep. Like familial natural short sleep, it has the tendency to be hereditary.
Delayed sleep phase syndrome, this is a more common circadian rhythm condition (estimated to affect around 16% of adolescents in the U.S.) characterized by late sleep onset and equally late sleep awakening. While both sleep traits are similar in the sense of late sleep-onset, individuals with FNSS do not suffer from late sleep awakening. Unlike FNSS, this condition is not highly heritable, but it does seem to have at least some genetic component linked to it.
List of conditions that may be confused with FNSS include:
Insomnia, this is a common sleep disorder which can be acute or chronic and is characterized by an individual's difficulty to fall asleep, this usually leads to them to stay up late involuntarily which shortens their sleep time. While insomnia and FNSS share some common features (late sleep onset, for example), those with insomnia do suffer from the consequences associated with sleep deprivation, something people with FNSS do not suffer from, as they actually have a resistance against them.
Prevalence
It is estimated that approximately 1 to 3 percent of the population has the trait. In the U.S., natural short sleepers are a small part of a larger group comprising 30–35% of the population who sleep less than recommended.
Familial natural short sleep and Alzheimer's disease
For some unknown reason, individuals with this condition (and their associated mutations) might be genetically protected against neurodegenerative disorders, mainly those that cause dementia, such as Alzheimer's disease.
Ying-Hui Fu conducted a study using mouse models genetically engineered to carry both mutations associated with natural short sleep and mutations associated with an increased risk of dementia; the results showed that mice with both FNSS and dementia mutations did not show as many symptoms of dementia as their counterparts predisposed to dementia alone. The mice carrying both the Alzheimer's and short-sleep gene mutations also had smaller amounts of Aβ plaque deposits in their hippocampi and brain cortices than those carrying only the Alzheimer's mutations. The FNSS-related mutations used in the study were DEC2-P384R and NPSR1-Y206H, and the Alzheimer's disease-related models were PS19 and 5XFAD.
See also
Sleep apnea
Sleep epigenetics
Fatal familial insomnia
Hypersomnia
Sleep paralysis
Sleep walking
Parasomnia
References
Sleep physiology | Familial natural short sleep | [
"Biology"
] | 1,478 | [
"Behavior",
"Sleep physiology",
"Sleep"
] |
72,016,563 | https://en.wikipedia.org/wiki/Sarcomyxa%20edulis | Sarcomyxa edulis is a species of fungus in the family Sarcomyxaceae. Fruit bodies grow as ochraceous to ochraceous-brown, overlapping fan- or oyster-shaped caps on the wood of deciduous trees. The gills on the underside are closely spaced, ochraceous, and have an adnate attachment to the stipe. Spores are smooth, amyloid, and measure 4.5–6 by 1–2 μm.
The species was previously confused with the greenish-capped Sarcomyxa serotina, which is bitter-tasting. Sarcomyxa edulis is mild-tasting and edible. In Japan, where it is called mukitake, it is considered "one of the most delicious edible mushrooms", and a system has recently been developed to cultivate the mushroom in plastic greenhouses. In China, it is called "元蘑/yuanmo", "黄蘑/huangmo", or "冻蘑/dongmo", and is considered a delicacy rich in nutrition. "Generally, it grows on the fallen woods of broad-leaved trees in remote mountains and old forests, but not all broad-leaved trees are suitable for its growth, and the rotten basswood is very easy to grow S. edulis". "S. edulis is distributed in provinces of Hebei, Heilongjiang, Jilin, Shanxi, Guangxi, northern Shaanxi, Sichuan" in China, and China already has high-yield cultivation techniques.
Sarcomyxa edulis is known to occur in China, Japan, and the Russian Far East.
References
Fungi described in 2003
Fungi of Asia
Edible fungi
Fungi in cultivation
Fungus species | Sarcomyxa edulis | [
"Biology"
] | 358 | [
"Fungi",
"Fungus species"
] |
72,021,555 | https://en.wikipedia.org/wiki/Sonja%20Lapajne%20Oblak | Sonja Lapajne Oblak (July 15, 1906 – September 29, 1993) was a Slovenian architect. She was the first Slovenian woman to graduate as a civil engineer from the Faculty of Technology in Ljubljana and Slovenia's first female urban planner. She was a member of the Partisans during the Second World War, and she survived incarceration in the Ravensbruck concentration camp before playing a part in rebuilding Yugoslavia in the postwar period.
Early life and education
Sonja Lapajne was born in Šentvid pri Ljubljani in 1906 and baptized Zofija-Sonja. Her parents were Antonija and Živko Lapajne; her father was a prominent medical doctor specialising in the treatment of tuberculosis and public health work, including running a hygiene institute with an interest in eugenics after the First World War.
She became the first Slovenian woman to graduate as a civil engineer from the Faculty of Technology in Ljubljana in 1932.
Career
From 1934 to 1943 she worked as a structural engineer for the technical department of the royal administration of the province of Drava Banate in Ljubljana, supervising the construction of buildings planned by the state at the time. She collaborated with prominent architects of the era, including Jože Plečnik, Emil Navinšek, Vinko Glanz, and Edvard Ravnikar. In 1936 she performed the structural calculations for the world's first corridor-free, reinforced concrete school building, designed by the architect Navinšek.
She worked on the static calculations for the construction of reinforced concrete buildings and supervised their creation. Buildings she worked on included Ljublijana's Gimnazija Bezigrad High School, the Gallery of Modern Art and the National and University Library, and the King Hotel in Rogaška Slatina.
Second World War
In 1941 she joined the Yugoslav Partisans, a resistance movement against the Axis forces during the Second World War, led by the Communist Party of Yugoslavia (KPJ) under the leadership of Josip Broz Tito. By 1943 she was the party secretary of the Liberation Front, but she was captured and imprisoned by the Italians. After the Italian capitulation, she was interned in the German Ravensbrück concentration camp, where she remained until the end of the war.
Postwar career
Lapajne Oblak became Slovenia's first female urban planner. After the war she worked in leading construction companies in Yugoslavia and as an urban planner. In the 1950s, she was involved in the development plan for the Mura Valley region in northeastern Slovenia. Until her retirement in 1969, she was the director of the Institute for Architecture, Urban Planning, and Civil Engineering in Ljubljana.
Sonja Lapajne Oblak died in 1993 in Ljubljana.
Awards
Order of Merits for the People with Golden Star (1976)
Order of the Republic with Silver Wreath (1973)
Order of Brotherhood and Unity, 2nd class
Order of Labour, 2nd class
Order of Merits for the People, 3rd class (1946)
Commemorative Medal of the Partisans of 1941
Commemoration
Sonja Lapajne Oblak was featured in the exhibition To the Fore: Female Pioneers of Slovenian Architecture, Civil Engineering, and Design at the DESSA Gallery, Ljubljana, in 2017.
In 2018 she was featured In the Foreground: Pioneering Women of Slovenian Architecture, Construction and Design, organised by the Slovenian Academy of Sciences, an outdoor exhibition along the promenade on the Krakov Embankment. The other 19 women featured were Darinka Battelino, Alenka Kham Pičman, Janja Lap, Dana Pajnič, Lidija Podbregar, Barbara Rot, Olga Rusanova, Erna Tomšič, Mojca Vogelnik, Vladimira Bratuž, Majda Dobravec Lajovic, Mgada Fornazarič Kocmut, Marta Ivanšek, Nives Kalin Vehovar, Juta Krulc, Seta Mušič, Dušana Šantel Kanoni, Gizela Šuklje and Branka Tancig Novak.
References
1906 births
1993 deaths
Slovenian architects
Slovene Partisans
Slovenian women architects
20th-century Slovenian architects
20th-century architects
Yugoslav architects
Urban planners
Slovenian urban planners
Yugoslav urban planners
Women urban planners
Civil engineers
Female resistance members of World War II
Slovenian women engineers
Slovenian engineers | Sonja Lapajne Oblak | [
"Engineering"
] | 854 | [
"Civil engineering",
"Civil engineers"
] |
77,767,272 | https://en.wikipedia.org/wiki/Lundstrom%20Stones | The Lundstrom Stones or Lundstrom Walking Stones (previously known as the Loon Stones) are a pair of American natural lifting stones located in Charlestown, New Hampshire. They are used as a test of physical strength and endurance.
History
The pair of stones were found by blacksmith and stone-lifter John Lundstrom of North Reading, Massachusetts, who often competed in farmer's walk-type events during the late 1970s and early 1980s as a member of Clan Anderson. At the time, the combined weight of the heaviest stones used in Highland games in both the United States and Canada was not challenging enough, and Lundstrom searched through local quarries for something heavier.
Specifications
In 1983, after searching along the rock-strewn channel of the East Branch of the Pemigewasset River, Lundstrom found two near-elliptical stones which he thought would suit the new challenge. One of them had a smooth surface and the other was rough, giving them a unique appearance. After drilling and forging, the two stones were fitted with steel rods connected by chains to a pair of iron gripping rings. The poundage of each stone was engraved on its side.
The objective is to pick up the two stones by their rings, stand upright, and then walk them as far as possible in farmer's walk style before the grip gives out. Walking them has been a staple event at the Loon Mountain Highland Games of New Hampshire, the Quechee games, the Festival at Fort 4, and New England Stone Lifting.
Since the death of Lundstrom in 2013, Robert Troupe acts as the custodian of the stones.
World records
All-time world record – by Hafþór Júlíus Björnsson (2015)
→ Former record holders include Benedikt Magnússon, Gerard Benderoth and Stefán Sölvi Pétursson.
Master's (40y+) record – by John Lundstrom (2011)
Women's record – by Chloe Brennan (2023)
References
Notes:
1983 establishments in New Hampshire
1983 in strength athletics
Charlestown, New Hampshire
Stones | Lundstrom Stones | [
"Physics"
] | 438 | [
"Stones",
"Physical objects",
"Matter"
] |
77,770,542 | https://en.wikipedia.org/wiki/John%20William%20Strutt%2C%20Lord%20Rayleigh%20Medal%20and%20Prize | The John William Strutt, Lord Rayleigh Medal and Prize is an award of the UK-based Institute of Physics (IOP) for "distinguished contributions to theoretical (including mathematical and computational) physics". The award, named in honour of Lord Rayleigh, consists of a medal with £1,000 and a certificate.
The John William Strutt, Lord Rayleigh Medal and Prize (established in 2008) should not be confused with the Rayleigh Medal, which was established by the Institute of Acoustics in 1970.
Recipients
References
Awards of the Institute of Physics
Awards established in 2008
Physics awards
Science and technology awards | John William Strutt, Lord Rayleigh Medal and Prize | [
"Technology"
] | 126 | [
"Science and technology awards",
"Physics awards"
] |
77,771,525 | https://en.wikipedia.org/wiki/Stachydrine | Stachydrine, also known as proline betaine, is a naturally occurring alkaloid found in citrus, caper, chestnuts, alfalfa, Leonurus japonicus, Maclura tricuspidata, Stachys arvensis and Arisaema heterophyllum. It has been studied for its potential health benefits.
References
Zwitterions
Alkaloids
Pyrrolidines
Quaternary ammonium compounds
Carboxylic acids | Stachydrine | [
"Physics",
"Chemistry"
] | 99 | [
"Biomolecules by chemical classification",
"Matter",
"Natural products",
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Zwitterions",
"Ions",
"Alkaloids"
] |
77,773,349 | https://en.wikipedia.org/wiki/Bloomage | Bloomage, also known as Bloomage Biotech, is a biomaterial company based in China. Bloomage primarily specializes in hyaluronic acid and other bioactive substances products. It is listed on the Shanghai Stock Exchange.
History
In 2000, Bloomage was established and began to mass-produce hyaluronic acid using microbial fermentation.
Bloomage's first plant in Jinan was completed and put into production in 2005.
Bloomage was licensed by the US FDA and established a US subsidiary in 2012.
Products
Bloomage specializes in the production of hyaluronic acid by microbial fermentation. It also focuses on pharmaceutical-, cosmetic-, and food-grade application products.
Other than hyaluronic acid, Bloomage also manufactures recombinant collagen, ergothioneine, ectoine, GABA, PDRN, among others. The company also produces skin fillers and various cosmetic products.
References
External links
Biomaterials
2000 establishments in China
Chemical companies of China | Bloomage | [
"Physics",
"Biology"
] | 207 | [
"Biomaterials",
"Materials",
"Matter",
"Medical technology"
] |
67,630,778 | https://en.wikipedia.org/wiki/Generalized%20suffix%20array | In computer science, a generalized suffix array (GSA) is a suffix array containing all suffixes for a set of strings. Given the set of strings of total length , it is a lexicographically sorted array of all suffixes of each string in . It is primarily used in bioinformatics and string processing.
Functionality
The functionality of a generalized suffix array for a collection or set of strings is as follows:
It is a lexicographically sorted array of all suffixes of each string in the set .
In the array, each suffix is represented by an integer pair which denotes the suffix starting from position in .
In the case where different strings in have identical suffixes, in the generalized suffix array, those suffixes will occupy consecutive positions. However, for convenience, the exception can be made where repeats will not be listed.
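A minimal Python sketch illustrates the structure just described. This is an illustration only, not one of the cited construction algorithms: the function name build_gsa and the quadratic-time sorting approach are assumptions made for this example.

```python
# Naive generalized suffix array (GSA) builder: an illustrative sketch,
# not one of the cited construction algorithms.

def build_gsa(strings):
    """Return the GSA of `strings` as a list of (i, j) pairs, where the
    pair (i, j) denotes the suffix strings[i][j:]."""
    pairs = [(i, j) for i, s in enumerate(strings) for j in range(len(s))]
    # Sort pairs by the text of the suffix they denote; identical suffixes
    # from different strings end up in consecutive positions, as noted above.
    pairs.sort(key=lambda p: strings[p[0]][p[1]:])
    return pairs

texts = ["abab", "aab"]
for i, j in build_gsa(texts):
    print((i, j), texts[i][j:])
```

Sorting full suffix slices like this is quadratic in the worst case; the algorithms listed in the next section achieve much better bounds.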
A generalized suffix array can be generated for a generalized suffix tree. Compared to a generalized suffix tree, a generalized suffix array requires more time to construct but uses less space.
Construction Algorithms and Implementations
Algorithms and tools for constructing a generalized suffix array include:
Fei Shi's (1996) algorithm which runs in worst case time and space, where is the sum of the lengths of all strings in and the length of the longest string in . This includes sorting, searching and finding the longest common prefixes.
The external generalized enhanced suffix array, or eGSA, construction algorithm, which specializes in external memory construction, is particularly useful when the size of the input collection or data structure is larger than the amount of available internal memory.
gsufsort is an open-source, fast, portable and lightweight tool for the construction of generalized suffix arrays and related data structures (like the Burrows–Wheeler transform or the LCP array)
Mnemonist, a collection of data structures implemented in JavaScript, contains an implementation of a generalized suffix tree and can be found publicly on npm and GitHub.
Solving the Pattern Matching Problem
Generalized suffix arrays can be used to solve the pattern matching problem:
Given a pattern and a text , find all occurrences of in .
Using the generalized suffix array of , the suffixes that have as a prefix must first be found.
Since is a lexicographically sorted array of the suffixes of , all such suffixes will appear in consecutive positions within . Importantly, because the array is sorted, these suffixes can be identified easily using binary search.
Using binary search, first find the smallest index in such that contains as a prefix, or determine that no such suffix is present. In the case where the suffix is not found, does not occur in . Otherwise, find the largest index which contains as a prefix. The elements in the range indicate the starting positions of the occurrences of in .
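A hedged Python sketch of this binary search follows, reusing the hypothetical build_gsa from the earlier example. The truncated-suffix key and the function name are implementation choices made for this illustration, and the key= parameter of bisect requires Python 3.10+.

```python
import bisect

def find_occurrences(strings, gsa, pattern):
    """Return the (string index, position) pairs where `pattern` occurs,
    given a generalized suffix array `gsa` built by build_gsa above."""
    m = len(pattern)
    # Compare only the first m characters of each suffix, so suffixes that
    # have `pattern` as a prefix compare equal to `pattern` itself.
    key = lambda p: strings[p[0]][p[1]:p[1] + m]
    lo = bisect.bisect_left(gsa, pattern, key=key)   # smallest matching index
    hi = bisect.bisect_right(gsa, pattern, key=key)  # one past the largest
    return gsa[lo:hi]

texts = ["abab", "aab"]
print(find_occurrences(texts, build_gsa(texts), "ab"))
# [(0, 2), (1, 1), (0, 0)]: the three occurrences of "ab"
```

Each probe of the binary search compares at most len(pattern) characters, matching the cost analysis below.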
Binary search on takes comparisons. is compared with a suffix to determine their lexicographic order in each comparison that is done. Thus, this requires comparing at most characters. Note that an LCP array is not required, but will offer the benefit of a lower running time.
The runtime of the algorithm is . By comparison, solving this problem using suffix trees takes time. Note that with a generalized suffix array, the space required is smaller compared to a suffix tree, since the algorithm only requires space for words and the space to store the string. As mentioned above, by optionally keeping track of LCP information, which will use slightly more space, the running time of the algorithm can be improved to .
Other Applications
A generalized suffix array can be utilized to compute the longest common subsequence of all the strings in a set or collection. A naive implementation would compute the longest common subsequence of all the strings in the set in .
A generalized suffix array can be utilized to find the longest previous factor array, a concept central to text compression techniques and to the detection of motifs and repeats.
See also
Suffix Tree
Suffix Array
Generalized Suffix Tree
Pattern matching problem
Bioinformatics
References
External links
Generalized enhanced suffix array construction in external memory
Arrays
Computer science suffixes
Substring indices | Generalized suffix array | [
"Technology"
] | 815 | [
"Computer science",
"Computer science suffixes"
] |
67,633,513 | https://en.wikipedia.org/wiki/Nanoneuroscience | Nanoneuroscience is an interdisciplinary field that integrates nanotechnology and neuroscience. One of its main goals is to gain a detailed understanding of how the nervous system operates and, thus, how neurons organize themselves in the brain. Consequently, creating drugs and devices that are able to cross the blood brain barrier (BBB) is essential to allow for detailed imaging and diagnoses. The blood brain barrier functions as a highly specialized semipermeable membrane surrounding the brain, preventing harmful molecules that may be dissolved in the circulating blood from entering the central nervous system.
The two main hurdles for drug-delivering molecules to access the brain are size (they must have a molecular weight < 400 Da) and lipid solubility. Physicians hope to circumvent difficulties in accessing the central nervous system through viral gene therapy. This often involves direct injection into the patient's brain or cerebral spinal fluid. The drawback of this therapy is that it is invasive and carries a high risk factor due to the necessity of surgery for the treatment to be administered. Because of this, only 3.6% of clinical trials in this field have progressed to stage III since the concept of gene therapy was developed in the 1980s.
Another proposed way to cross the BBB is through temporary intentional disruption of the barrier. This method was first inspired by certain pathological conditions that were discovered to break down this barrier by themselves, such as Alzheimer's disease, Parkinson's disease, stroke, and seizure conditions.
Nanoparticles
Nanoparticles are unique from macromolecules because their surface properties are dependent on their size, allowing for strategic manipulation of these properties (or, "programming") by scientists that would not be possible otherwise. Likewise, nanoparticle shape can also be varied to give a different set of characteristics based on the surface area to volume ratio of the particle.
Nanoparticles have promising therapeutic effects when treating neurodegenerative diseases. Oxygen reactive polymer (ORP) is a nano-platform programmed to react with oxygen and has been shown to detect and reduce the presence of reactive oxygen species (ROS) formed immediately after traumatic brain injuries. Nanoparticles have also been employed as a "neuroprotective" measure, as is the case with Alzheimer's disease and stroke models. Alzheimer's disease results in toxic aggregates of the amyloid beta protein formed in the brain. In one study, gold nanoparticles were programmed to attach themselves to these aggregates and were successful in breaking them up. Likewise, with ischemic stroke models, cells in the affected region of the brain undergo apoptosis, dramatically reducing blood flow to important parts of the brain and often resulting in death or severe mental and physical changes. Platinum nanoparticles have been shown to act as ROS scavengers, serving as "biological antioxidants" and significantly reducing oxidation in the brain as a result of stroke. Nanoparticles can also lead to neurotoxicity and cause permanent BBB damage, either from brain oedema or from unrelated molecules crossing the BBB and causing brain damage. This shows that further long-term in vivo studies are needed to gain enough understanding to allow for successful clinical trials.
One of the most common nano-based drug delivery platforms is liposome-based delivery. They are both lipid-soluble and nano-scale and thus are permitted through a fully functioning BBB. Additionally, lipids themselves are biological molecules, making them highly biocompatible, which in turn lowers the risk of cell toxicity. The bilayer that is formed allows the molecule to fully encapsulate any drug, protecting it while it is travelling through the body. One drawback to shielding the drug from the outside cells is that it no longer has specificity, and requires coupling to extra antibodies to be able to target a biological site. Due to their low stability, liposome-based nanoparticles for drug delivery have a short shelf life.
Targeted therapy using magnetic nanoparticles (MNPs) is also a popular topic of research and has led to several stage III clinical trials. Invasiveness is not an issue here because a magnetic force can be applied from the outside of a patient's body to interact and direct the MNPs. This strategy has been proven successful in delivering brain-derived neurotropic factor, a naturally occurring gene thought to promote neurorehabilitation, across the BBB.
Nano-imaging tools
The visualization of neuronal activity is of key importance in neuroscience. Nano-imaging tools with nanoscale resolution, such as the optical imaging techniques PALM and STORM, help visualize nanoscale objects within cells. So far, these imaging tools have revealed the dynamic behavior and organization of the actin cytoskeleton inside cells, which will assist in understanding how neurons probe their environment during neuronal outgrowth and in response to injury, and how they differentiate axonal processes; they have also enabled characterization of receptor clustering and stoichiometry at the plasma membrane inside synapses, which is critical for understanding how synapses respond to changes in neuronal activity. Past works focused on devices for stimulation or inhibition of neural activity, but the crucial aspect is the ability of a device to simultaneously monitor neural activity. The major aspect to be improved in nano-imaging tools is the effective collection of light, as a major problem is that biological tissues are dispersive media that do not allow straightforward propagation and control of light. These devices use nanoneedles and nanowires for probing and stimulation.
Nanowires
Nanowires are artificial nano- or micro-sized "needles" that can provide high-fidelity electrophysiological recordings if used as microscopic electrodes for neuronal recordings. Nanowires are attractive because they are highly functional structures that offer unique electronic properties, which are affected by biological/chemical species adsorbed on their surface, mostly the conductivity. This variance in conductivity depending on the chemical species present allows enhanced sensing performance. Nanowires are also able to act as non-invasive and highly local probes. This versatility makes nanowires optimal for interfacing with neurons, since the contact length along the axon (or the dendrite projection crossing a nanowire) is just about 20 nm.
References
Nanotechnology
Neurotechnology
Interdisciplinary subfields of medicine | Nanoneuroscience | [
"Materials_science",
"Engineering"
] | 1,311 | [
"Nanotechnology",
"Materials science"
] |
67,638,170 | https://en.wikipedia.org/wiki/Weather%20of%202021 | The following is a list of weather events that occurred in 2021. The year began with La Niña conditions. There were several natural disasters around the world from various types of weather, including blizzards, cold waves, droughts, heat waves, tornadoes, and tropical cyclones. In December, powerful Typhoon Rai moved through the southern Philippines, killing 410 people and becoming the deadliest single weather event of the year. The costliest event of the year, and the costliest natural disaster on record in the United States, was a North American cold wave in February 2021, which caused $196.4 billion (USD) in damage; the freezing temperatures and widespread power outages in Texas killed hundreds of people. Another significant natural disaster was Hurricane Ida, which struck southeastern Louisiana and later flooded the Northeastern United States, resulting in $70 billion (USD) in damage. December saw two record-breaking tornado outbreaks, only four days apart. In Europe, the European Severe Storms Laboratory documented 1,482 weather-related injuries and 568 weather-related fatalities. The National Oceanic and Atmospheric Administration documented 796 weather-related fatalities and at least 1,327 weather-related injuries in the United States and the territories of the United States.
Global conditions
The year began with La Niña conditions that developed the previous year. This was reflected in cooler than normal sea surface temperatures in the south Pacific Ocean. However, conditions were unlike typical La Niña events, with above normal temperatures in the United States in January, but colder than normal temperatures in February. By March and April, the La Niña conditions had begun to weaken. On May 13, the National Oceanic and Atmospheric Administration (NOAA) assessed that the El Niño–Southern Oscillation (ENSO) transitioned into its neutral phase. However, following cooler than normal temperatures in the tropical eastern Pacific Ocean, NOAA declared that the global weather conditions shifted back to La Niña by October.
Deadliest events
Weather summaries by type
Cold waves and winter storms
In January, at least 70 people in Japan died while removing snow, in incidents related to a blizzard that dropped of snowfall. At least 1,500 people were stranded on a highway.
In February, extreme cold affected much of North America. During much of the winter, a high pressure system existed over southeastern Canada and Greenland, while lower than normal pressure existed over northeastern Asia into Alaska. A winter storm left more than 9 million people without power from northern Mexico to the northeastern United States; nearly half of the power outages were in Texas. There were 172 deaths in the United States. The system is estimated to have cost over $196.5 billion (2021 USD) in damage, including at least $195 billion in the United States and over $1.5 billion in Mexico, making it the costliest winter storm on record, as well as the costliest natural disaster recorded in the United States. It is also the deadliest winter storm in North America since the 1993 Storm of the Century, which killed 318 people. Another winter storm added on to the effects, leading to 29 deaths and $2 billion in damage, and caused 4 million power outages.
At the same time, a cold wave impacted Greece. The cold wave resulted in 3 deaths and gave Greece its heaviest snowfall since 2008. Temperatures dropped as low as .
In March, a record-breaking blizzard affected the Rocky Mountains. Although no one died, the system caused $75 million in damage. Cheyenne, Wyoming saw its largest two-day snowfall on record. It also became Denver's fourth largest blizzard. The storm caused car crashes that resulted in 22 injuries.
Droughts
A drought in western North America began in 2020 and continued into 2021. A 20-month period from January 2020 to August 2021 recorded the least rainfall since 1895. Lake Powell hit record low levels in July 2021, and due to Lake Mead dropping so low, water restrictions were imposed. By mid-August 2021, Iowa was facing extreme drought. Drought also affected over 85% of Mexico.
Floods
In March, a multi-day rain event caused significant flooding across many parts of Eastern and Central Australia from March 17–21, described as a 1-in-100-year event. Comboyne, New South Wales reported a four-day total of .
Significant flooding occurred along the Mid North Coast and in Central Australia. The Manning River at Taree equalled its 1929 record, Wingham, New South Wales saw its highest levels since 1978, the Gwydir River was 0.2 m short of its 1955 record, and the Mehi River in Moree, New South Wales was 0.4 m below its 1955 peak. One man died after his car lost control in Mona Vale, New South Wales, and a bodyboarder who disappeared off the Coffs Harbour seashore is presumed dead. Two more fatalities were confirmed on the 24th, and a woman who went missing on the 26th was later discovered. In addition, floods in Hawaii left a person missing, caused $49 million in damage, and caused 1,300 power outages. Haiku recorded of rain, and parts of the state received . David Ige declared a state of emergency due to the floods.
In July, a storm system stalled over Germany, producing torrential rainfall and flash flooding. With at least 184 deaths, the floods are the deadliest natural disaster in Germany since the North Sea flood of 1962. There were also 42 deaths in Belgium. Then, floods in Henan resulted in at least 302 deaths. Most of the deaths and damage were in Zhengzhou. At the end of the month, floods in Afghanistan caused 113 deaths.
On August 21, severe flash flooding impacted Middle Tennessee. The state set a 24-hour precipitation record of , and the flooding resulted in 20 deaths. The death toll was initially 22, but was lowered when more accurate counts were published. The flooding also affected Kentucky, but to a much lesser extent.
On September 1 and 2, major flash flooding affected the Northeastern United States due to the remnants of Hurricane Ida. This caused 55 deaths and around $20 billion in damage. Before the storm, the Weather Prediction Center issued a high risk for flash flooding. This became New York's 9th wettest tropical cyclone on record. New York City got its first flash flood emergency. Between 8:51 p.m. and 9:51 p.m. on September 1, New York City saw of rain, its wettest hour on record.
The 2021 Pacific Northwest floods comprise a series of floods that affected British Columbia, Canada, and parts of neighboring Washington state in the United States in November and December. The flooding and numerous mass wasting events were caused by a Pineapple Express, a type of atmospheric river, which brought heavy rain to parts of southern British Columbia and the northwestern United States. The natural disaster prompted a state of emergency for the province of British Columbia. Damage was at least $2.5 billion. That same month, floods in South Asia caused 41 fatalities; over 11,000 people were displaced in India due to BOB 05's rainfall impact.
At the end of the year into 2022, Malaysia experienced intense floods. 54 people died, and the floods caused over $4.77 billion in damage. The flooding was fueled by Tropical Depression 29W. Comparisons were drawn to the floods seven years prior, and it was declared a once-in-100-year event. This became the deadliest tropical cyclone in Malaysia since Tropical Storm Greg of 1996.
Heat waves
A winter heat wave affected much of Eurasia in February. Sweden saw its highest ever February temperature at . Beijing also surpassed its February heat record by over five degrees when it hit .
An extreme heat wave affected much of Western North America from late June through mid-July 2021, resulting in the highest temperature ever measured in Canada at . The heat wave killed 229 people and caused $8.9 billion in damage in the United States alone. Over 600 Canadians died, making it the deadliest weather event in the history of Canada. The heat wave broke an all-time high temperature record in Washington and tied one in Oregon.
Tornadoes
There were 1,374 preliminary filtered reports of tornadoes in the United States in 2021, of which at least 1,278 were confirmed. Worldwide, 151 tornado-related deaths were confirmed with 104 in the United States, 28 in China, six in the Czech Republic, four in Russia, three in Italy, two in India and one each in Canada, New Zealand, Indonesia, and Turkey. The year started well below average with the lowest amount of tornado reports through the first two months in the past 16 years and remained below-average for most of the year due to inactivity during April, June, September, and November. Despite this, several intense outbreaks occurred in March, May, July, August, and October. May, for the first time ever, had no tornadoes above EF2 status. The year ended on a destructive note, however, as December was incredibly active, more than doubling the previous record, which pushed 2021 above average. Additionally, 2021 had the most tornado fatalities in the United States since 2011. Almost all of the fatalities were due to the Tornado outbreak of December 10–11, 2021. The 2021 Western Kentucky tornado becomes the longest tracked tornado in December, and the tornado outbreak becomes the deadliest in December. The December 2021 Midwest derecho and tornado outbreak brought the first December tornadoes on record to Minnesota. This made December 2021 the most active December for tornadoes on record. In addition, 2021 saw the 2nd highest confirmed number of tornadoes in both Pennsylvania and New Jersey.
Tropical and subtropical cyclones
In the Southern Hemisphere, there were two tropical cyclones that formed in late December and persisted into January 2021 – the remnants of Tropical Storm Chalane over southern Africa, and a tropical depression east of Madagascar that would soon become Tropical Storm Danilo. In April, Cyclone Seroja produced deadly flooding in Indonesia and East Timor, killing at least 272 people. Also in the month, Typhoon Surigae in the northwest Pacific Ocean became the strongest Northern Hemisphere tropical cyclone to form before the month of May; it attained 10-minute maximum sustained winds of , according to the Japan Meteorological Agency, or one-minute sustained wind of according to the Joint Typhoon Warning Center. In May, the Eastern Pacific basin had its earliest tropical storm on record, with Tropical Storm Andres. Also in May, Cyclone Tauktae tied a cyclone in 1998 to become the strongest cyclone to strike Gujarat, with sustained winds of ; Tauktae killed at least 118 people in India, with another 66 deaths after Barge P305 sank near Heera oil field, off the coast of Mumbai. In August, Hurricane Ida struck the U.S. state of Louisiana with sustained winds of , tying 2020's Hurricane Laura and the 1856 Last Island hurricane as the strongest on record to hit the state. Throughout the United States, damage from Ida was estimated at US$64.5 billion. In December, Typhoon Rai struck the eastern Philippines, which killed 410 people.
Wildfires
In June, the taiga forests in Siberia and the Far East region of Russia were hit by unprecedented wildfires, following record-breaking heat and drought. For the first time in recorded history, wildfire smoke reached the North Pole. In July and August, Turkey experienced its worst ever wildfire season. The fires caused 9 deaths.
Africa was also hit by wildfires. Across Algeria, wildfires killed 90 people. On April 18, a wildfire affected Table Mountain National Park and Cape Town in South Africa. The fires injured 5 firefighters.
North America was hit extremely hard by wildfires in 2021. The United States saw 5.6 million acres burn and Canada saw 10.34 million acres burn. The season was predicted to be severe as early as April 2021 due to record drought. Unhealthy air from the fires spread as far as New Hampshire. One particularly severe wildfire was the Lytton wildfire, which caused 2 deaths and destroyed 90% of Lytton, British Columbia. Then, in July, the Dixie Fire became the largest single wildfire in California's history. Suppression costs alone were $637 million. When the cause was determined, PG&E pled guilty to 85 felonies. Oregon also saw a massive wildfire, the Bootleg Fire, which became the third largest in state history and is believed to have created a fire tornado. Wildfire activity persisted into December. On December 15, the December 2021 Midwest derecho and tornado outbreak caused strong, dry winds across Kansas, leading to wildfires that killed two. On December 30, the Marshall Fire became the most destructive fire in Colorado history, causing over $513 million in damage. The fire was extinguished by January 1, 2022, due to heavy snow.
Timeline
This is a timeline of weather events during 2021.
January
December 30, 2020 – January 3, 2021 – The New Year's North American winter storm killed one person and caused 119,000 power outages. The storm caused $35 million (2021 USD) in damage across the United States and Canada, per Aon.
January 1–6 – Cyclone Imogen caused $10 million (2021 USD) in damage across Australia.
January 7–15 – Storm Filomena killed five people and caused $2.2 billion (2021 USD) in damage across Portugal, Spain, Gibraltar, Andorra, France, Morocco, Italy, Vatican City, San Marino, Greece, Turkey, and Ukraine.
January 14–25 – Cyclone Eloise killed 27 people, with 11 missing, and caused $10 million (2021 USD) in damage across Madagascar, Mozambique, Malawi, Zimbabwe, South Africa, and Eswatini.
January 26 – An EF3 tornado hit Fultondale, Alabama, killing one person and injuring 30 others.
January 26 – February 5 – Cyclone Ana killed one person, with five missing, and caused $1 million (2021 USD) in damage in Fiji.
January 31 – February 3 – The 2021 Groundhog Day nor'easter killed 7 people, knocked out power for over 500,000 people, and caused $1.85 billion (2021 USD) in damage.
February
February 1–7 – The 2021 Wooroloo bushfire in Australia burned 27,000 acres and 86 buildings and injured eight people.
February 6 – Four skiers were killed and four others were injured in an avalanche in Millcreek Canyon, Utah, United States.
February 6–22 – A cold wave, in addition to winter storms Uri and Viola, killed at least 278 people, caused power outages for millions of people across the United States, and caused $198.6 billion (2021 USD) in damage. This cold wave also led to the 2021 Texas power crisis, which resulted in 210 to 702 deaths.
February 7 – The Chamoli disaster was triggered by a rock and ice avalanche. The flood resulted in 83 deaths and 121 missing.
February 8 – Twenty-four workers died in a flooded illegally-run textile workshop in a private house in Tangier, Morocco, which occurred as a result of intense rains that hit the region. Ten others were rescued and hospitalized.
February 10–12 – An ice storm across the United States killed 12 people and caused over $75 million (2021 USD) in damage. The first ice storm warning ever issued for Richmond, Virginia was due to this storm.
February 11 – The Met Office reported an overnight temperature of −22.9 °C in Braemar, Aberdeenshire, the coldest weather in the UK since 1995.
February 13 – A series of severe weather-related incidents in Northern Italy left four people dead and 25 others injured.
February 13–16 – A cold wave in Greece killed three people.
February 15 – A tornado in Brunswick County, North Carolina, associated with Winter Storm Uri, killed three people and injured ten others.
February 16–23 – Tropical Storm Dujuan, known in the Philippines as Severe Tropical Storm Auring, killed one person, with four missing, and caused $3.29 million (2021 USD) in damage across Palau and the Philippines.
February 27 – March 8 – Cyclone Niran caused 70,000 power outages and caused $200 million (2021 USD) in damage across Queensland, New Caledonia, and Vanuatu.
March
March 4–17 – The March 2021 North American blizzard occurred, causing $75 million (2021 USD) in damage. The blizzard caused over 54,000 to lose power, and several areas received some of their heaviest late-season snowfall on record.
March 16–18 – A tornado outbreak in the Southeastern United States and Southern Plains resulted in one non-tornadic fatality and six injuries from 51 tornadoes. 25 of those 51 tornadoes occurred in Alabama, which locally refer to this outbreak as the Saint Patrick's Day tornado outbreak of 2021.
March 24–28 – A tornado outbreak in the Southern United States resulted in 14 fatalities (7 direct tornadic, 1 indirect tornadic and 8 non-tornadic) and 37+ injuries from 43 tornadoes. The Storm Prediction Center (SPC) issued its second high-risk outlook for the month of March, as well as the second high-risk outlook for 2021 on March 25 when the bulk of activity was expected. Two tornado emergencies were issued during this outbreak by the National Weather Service.
March 25 – An EF3 tornado during the tornado outbreak sequence of March 24–28 killed six people and injured ten others in Ohatchee, Alabama.
March 26 – An EF4 tornado during the tornado outbreak sequence of March 24–28, 2021 in Newnan, Georgia killed one person indirectly and caused $20.5 million in damage.
April
April 3–12 – Cyclone Seroja killed 272 people and caused $490.7 million (2021 USD) in damage. The cyclone brought historic flooding and landslides to portions of southern Indonesia and East Timor and later made landfall in Western Australia's Mid West region, becoming the first to do so since 1999.
April 9–11 – An EF3 tornado in Louisiana killed one person during a tornado outbreak. The system also caused two deaths due to straight-line winds in Louisiana and Florida.
April 12 – May 2 – Typhoon Surigae, known in the Philippines as Typhoon Bising, killed ten people, with eight missing, and caused about $10.74 million (2021 USD) in damage across the Caroline Islands, Palau, Sulawesi, the Philippines, Taiwan, Ryukyu Islands, Kuril Islands, Russian Far East, and Alaska. 63 cities experienced power interruptions; however, power was restored in 54 of those cities. Typhoon Surigae became a category 5 super typhoon and was the strongest pre-May typhoon on record.
April 15 – A severe nor'wester, locally named kalboishakhi, brought severe thunderstorms, rain, and high winds to Bangladesh, particularly Rajshahi, Rangpur, and Dhaka, with devastating effect and loss of life.
April 18 – The Cape Town fire occurs destroying Mostert's Mill in Cape Town, South Africa, after a fire spreads from Table Mountain.
April 21 – Whiteout conditions along Interstate 41 resulted in one person being killed in an 80-vehicle crash.
May
May 2–4 – A tornado outbreak occurred in the Southeastern United States and the Great Plains, resulting in 97 tornadoes that caused $1.3 billion (2021 USD) in damage and ten injuries. There were also four non-tornadic fatalities.
May 9 – A landslide at a clandestine gold mine in Siguiri, Guinea, kills at least fifteen miners.
May 14–19 – Cyclone Tauktae killed 174 people, with 81 missing, and caused $2.12 billion (2021 USD) in damage in India, Sri Lanka, Maldives, and Pakistan.
May 16 – Floods in Texas and Louisiana killed 5 people.
May 20 – July 23 – The Johnson Fire, in New Mexico, burned 88,918 acres.
May 22 – The Gansu ultramarathon disaster occurred, with 21 people dying from hypothermia when high winds and freezing rain struck a long-distance race in Jingtai, Gansu, China.
May 23–28 – Cyclone Yaas killed 20 people and caused $2.84 billion (2021 USD) in damage across Bangladesh, India (Andaman and Nicobar Islands, Bihar, Jharkhand, Madhya Pradesh, Odisha, Uttar Pradesh, West Bengal), and Nepal. The total damages in West Bengal, the most heavily impacted Indian state from Yaas, were estimated to be around ₹20 thousand crore (US$2.76 billion).
May 29–30 – Many cities in the Northeastern United States set record low high temperatures. New York City saw a high of , while Philadelphia had a high of , both becoming the coldest high for the day. Albany, New York recorded a high of on May 29 and on May 30, both breaking records. The storm system also dumped up to just outside New York City. Nearly an inch of snow fell on Mount Snow in Vermont. Due to the rain in New York City, two games between the New York Mets and Atlanta Braves were postponed. Rain in Washington DC also forced a game between the Milwaukee Brewers and Washington Nationals to be postponed.
May 29 – June 6 – Tropical Storm Choi-wan, known in the Philippines as Tropical Storm Dante, killed 11 people, with 2 missing, and caused $6.39 million (2021 USD) in damage in Palau, the Philippines, Taiwan and Japan.
June
June–October – Wildfires in Algeria killed 90 people.
June 11–13 – Tropical Storm Koguma killed one person, with two missing, and caused $9.87 million (2021 USD) in damage across South China, Vietnam and Indochina.
June 11 – Lake Mead dropped to its lowest water level ever recorded due to the 2020–21 North American drought.
June 18-19 – A storm complex resulted in one fatality due to flooding in Indiana, caused a hailstorm resulting in $1.9 billion in damage, and spawned 7 tornadoes.
June 18–20 – Tropical Storm Dolores killed three people and caused $50 million (2021 USD) in damage in Mexico.
June 19–23 – Tropical Storm Claudette killed 14 people and caused $375 million (2021 USD) in damage in the United States.
June 20–21 – A tornado outbreak in Canada killed one person due to an EF2 tornado in Quebec.
June 24 – A rare, powerful and deadly IF4 tornado passed through several villages in southeastern Czech Republic, causing catastrophic damage, killing six people and injuring 200 others. The tornado caused 15+ billion CZK (~693.9 million USD) in damage and is the strongest tornado ever recorded on the International Fujita scale.
June 25–30 – Hurricane Enrique killed two people and caused more than $50 million (2021 USD) in damage in Mexico.
June 25 – July 7 – The 2021 Western North America heat wave results in 914 confirmed deaths with up to 1,408+ deaths estimated. Damage totals are $8.9 billion in the United States alone.
June 29 – The temperature reached in Lytton, British Columbia, breaking the all-time record for hottest temperature ever recorded in Canada for the third day in a row. The temperature reached in Lytton on June 28 and on June 27, both records. These record high temperatures were a result of the 2021 Western North America heat wave.
June 30 – The Lytton wildfire killed 2 people and burned 206,926 acres. The wildfire was a result of the 2021 Western North America heat wave.
June 30 – Newark, New Jersey set its all-time hottest temperature for June, at .
July
July 1–14 – Hurricane Elsa killed 13 people and caused $1.2 billion (2021 USD) in damage in the Caribbean, the United States, and Canada.
July 3 – The 2021 Atami landslide occurs in Atami, Shizuoka Prefecture, Japan, killing 27. The landslide was a result of heavy rainfall with the city receiving of rainfall in a 48-hour period.
July 3–4 – Several record low highs were set. On July 3, this included in Boston and in Worcester. On July 4, this included Augusta, Maine, with a high of . Record daily precipitation also hit the city, accumulating to .
July 3–5 – A huge wildfire spread through Limassol, Cyprus, killing four people and forcing the evacuation of several villages. It was described as the worst wildfire in the country's history.
July 6 – August 15 – The Bootleg Fire occurred in Oregon, resulting in 413,765 acres being burned and 408 buildings being destroyed.
July 10 – The all-time high temperature of the state of Utah, at , was tied in Saint George. Las Vegas also tied its all-time high temperature, also at .
July 12–25 – The 2021 European floods resulted in 243 deaths and caused $11.8 billion (2021 USD) in damage across Europe.
July 12 – 65 people were killed by lightning strikes in the Indian states of Rajasthan, Uttar Pradesh and Madhya Pradesh, with a single strike killing 16 at Amer Fort near Jaipur.
July 13 – October 25 – The Dixie Fire killed one firefighter, burned 963,309 acres and damaged over 1,300 structures. The Dixie Fire became the largest single (i.e. non-complex) wildfire in California's history, and it was the first fire known to have burned across the crest of the Sierra Nevada. It caused $1.15 billion in damage.
July 15–31 – Typhoon In-fa killed 6 people and resulted in $1 billion (2021 USD) in damage in the Philippines, Ryukyu Islands, Taiwan, China, and North Korea.
July 17–31 – Floods in Henan, China resulted in the deaths of 302 people, with 50 missing, and caused around 82 billion yuan (US$12.7 billion) in damage.
July 18 – Heavy floods in Mumbai, India, caused a landslide that killed 32 people and injured 5 others.
July 22–August – Floods in Maharashtra killed 208 people, with eight missing.
July 26 – A dust storm caused a 20-vehicle pileup on Interstate 15 in the U.S. state of Utah, killing eight people and injuring several others.
July 28–29 – A tornado outbreak across the Great Lakes, Ohio Valley, and Mid-Atlantic killed one person (non-tornadic), injured 13 others, and caused $315 million (2021 USD) in damage.
July 29 – A possible EF0 anticyclonic tornado touched down in Bustleton, Philadelphia, Pennsylvania during the tornado outbreak of July 28–29, 2021.
July 28 – Floods in Islamabad, Pakistan, killed two people after a cloudburst caused flooding in many parts of the federal capital.
July 28 – August 1 – Floods in Afghanistan killed at least 113 people.
August
August 4 – Seventeen people were killed in northern Bangladesh during a lightning strike on a boat celebrating a wedding.
August 11–20 – Tropical Storm Fred killed seven people and caused $1.3 billion (2021 USD) in damage in the Caribbean, the Eastern United States, and Canada.
August 11 – At least 10 people were killed and dozens more trapped under debris after a landslide in a Himalayan district of Himachal Pradesh, India.
August 13–21 – Hurricane Grace killed 16 people and caused $513 million (2021 USD) in damage across the Caribbean and Mexico.
August 15 – Heavy rain in Japan caused a landslide in Okaya, Nagano, leaving 3 people dead after the landslide damaged their house.
August 16–24 – Hurricane Henri killed two people and caused $550 million (2021 USD) in damage in Bermuda, the northeastern United States, and southern Nova Scotia.
August 18 – Flash flooding caused by torrential rains killed at least seven people in Addis Ababa, Ethiopia.
August 21 – Floods in Tennessee killed 20 people and caused $101.11 million in damage. A record of rain in 24 hours was reported in McEwen, Tennessee.
August 25–30 – Hurricane Nora killed three people and caused $125 million (2021 USD) in damage in Western Mexico.
August 25 – September 4 – Hurricane Ida killed 115 people and caused $75.25 billion (2021 USD) in damage, making it the fifth-costliest hurricane on record. The precursor to Ida killed 20 people and left 17 people missing after torrential rains caused landslides in western Venezuela. The hurricane also impacted Colombia, Jamaica, the Cayman Islands, Cuba, the United States, and Canada. In addition, from August 29 to September 2, the Hurricane Ida tornado outbreak killed one person and injured seven others from 35 tornadoes.
September
September 2–7 – The precursor to Tropical Storm Mindy caused 23 deaths and $75 million in damage in Mexico.
September 7–9 – Death Valley set two global heat records. The high of on September 7 was the latest in the year any spot on the globe saw a temperature in the 50s °C. On September 9, the low of became the warmest low on record in September.
September 7–13 – Hurricane Olaf killed one person and caused $10 million (2021 USD) in damage across Western Mexico and the Baja California Peninsula.
September 10 – Two people were killed and nine others were injured after a powerful whirlwind hit Pantelleria, Sicily, Italy.
September 12–18 – Hurricane Nicholas caused over 700,000 power outages, killed 4 people and caused $1 billion (2021 USD) in damage across the Yucatán Peninsula, Tamaulipas, and the Gulf Coast of the United States. A state of emergency was declared by the Governor of Louisiana, John Bel Edwards, in preparation for the hurricane.
October
The hottest October on record occurred in Newark, New Jersey; Washington DC; Milwaukee; Scranton, Pennsylvania; Williamsburg, Virginia; Baltimore; Harrisburg, Pennsylvania; and Syracuse, New York. Toledo also had its wettest October.
October 6 – Five people were killed by flash flooding which occurred in parts of the U.S. states of Alabama and Tennessee, with as much as of rain falling in some areas.
October 13–16 – European Windstorm Ballos killed two people and caused damage across France (Corsica), Italy, Greece, Slovenia, Croatia, Bosnia and Herzegovina, Serbia, Montenegro, North Macedonia, and Albania.
October 19–26 – The October 2021 Northeast Pacific bomb cyclone occurred, killing two people, knocking out power to 370,500 people, and causing $400 million in damage to Russia's Far East, Japan, Alaska, the Western United States, and Western Canada. The bomb cyclone had a minimum central pressure of at its peak, making it the most powerful cyclone recorded in the Northeast Pacific.
October 20–23 – European Windstorm Aurore killed six people, caused 525,000 power outages, and caused more than $100 million (2021 USD) in damage across the United Kingdom, France, Czech Republic, Poland, Netherlands, Germany, and Russia.
October 24 – November 2 – Cyclone Apollo, also known as Medicane Nearchus, killed seven and caused $245 million (2021 USD) in damage across Algeria, Tunisia, Italy (especially Sicily), Malta, Libya, Cyprus, and Turkey.
October 25–28 – The October 2021 nor'easter, which eventually became Tropical Storm Wanda, killed at least two people and caused more than $200 million (2021 USD) in damage across the United States and Canada.
October 27 – An EF1 tornado hit Moss Point, Mississippi, killing one person.
November
November 5–18 – European Windstorm Blas killed nine people and caused damage across Algeria, the Balearic Islands, the east coast of Spain, Southern France, Morocco, Sardinia, and Sicily.
November 6 – An extremely rare EF0 tornado hit Vancouver, British Columbia, Canada and caused significant damage to the areas surrounding the University of British Columbia.
November 6–12 – Floods in South India caused by Depression BOB 05 killed 41 people and caused damage across India and Sri Lanka.
November 14 – December 17 – Floods in the Pacific Northwest killed six people across Southern British Columbia, Canada, and Washington, United States, and caused over $2.5 billion (2021 USD) in damage. Washington Governor Jay Inslee issued a state of emergency on November 15 covering 14 counties in Western Washington, and on November 17, a state of emergency was declared in British Columbia.
November 17 – A tornado moved through Modica, Sicily, killing one person, injuring two others, and severely damaging several homes.
November 21–23 – A series of floods in Atlantic Canada caused damage across that area. The floods prompted a state of emergency to be declared in Inverness and Victoria.
December
December 2–6 – Cyclone Jawad killed two people and caused damage across India (Andhra Pradesh, Odisha, and West Bengal) and Bangladesh.
December 5–9 – European Windstorm Barra killed three people, with one missing, and caused damage and over 59,000 power outages across Ireland and the United Kingdom.
December 9–11 – A winter storm, unofficially named Winter Storm Atticus, impacted the United States and Canada and caused over 500,000 power outages. This winter storm later created the tornado outbreak of December 10–11, 2021, which killed 95 people (89 tornadic and 6 non-tornadic) and injured 672 throughout the United States from 71 tornadoes. During the outbreak, the National Weather Service issued eight tornado emergencies, setting a new record for the most issued during the month of December. The outbreak prompted the Governor of Kentucky, Andy Beshear, to declare a state of emergency for portions of Western Kentucky.
December 10 – A violent, long-tracked EF4 tornado in Western Kentucky killed at least 58 people (57 direct and 1 indirect), injured 515 others, and caused catastrophic damage to numerous towns in Kentucky, including Mayfield, Benton, Dawson Springs, and Bremen.
December 10 – An EF3 tornado in Illinois killed six people, injured one and caused catastrophic damage to an Amazon warehouse in Edwardsville, Illinois.
December 10 – A long-tracked EF3 tornado caused $11.026 million in damage and 34 injuries to portions of Tennessee and Kentucky along its path.
December 11 – An EF3 tornado killed 16 people directly, plus one indirectly, and injured 63 others after hitting Bowling Green, Kentucky, and caused Western Kentucky University to lose power.
December 10–13 – Subtropical Storm Ubá killed 15 people and caused damage across Argentina, Brazil and Uruguay. On 10 December 2021, according to the Brazilian Navy, the system transitioned into a subtropical depression. Subtropical Storm Ubá caused 30 municipalities in Bahia, Brazil, to decree a state of emergency.
December 10–14 – Cyclone Ruby caused over 14,800 power outages and damage across the Solomon Islands and New Caledonia.
December 11–21 – Typhoon Rai, known in the Philippines as Typhoon Odette, killed 410 people, with 80 missing, and caused $1.02 billion (2021 USD) in damage across the Caroline Islands, Palau, the Philippines, the Spratly Islands, Vietnam, South China, Hong Kong and Macau.
December 13–18 – A historic derecho, winter storm, and windstorm across North America killed five people directly and two people indirectly through a wildfire outbreak in Kansas, spawned 117 tornadoes, and caused over 600,000 power outages. This tornado outbreak set the record for the most tornadoes during a December outbreak. The initial winter storm, unofficially named Winter Storm Bankston by The Weather Channel, became a category 3 atmospheric river event, which brought heavy rain and snow to the west coast of the United States. The winter storm caused California's statewide snowpack to increase from 19% of normal to 83% of normal.
December 16–22 – Winter Storm Carmel killed four people and caused damage across Greece, Cyprus, and Israel.
December 16, 2021 – January 19, 2022 – Floods in Malaysia, locally called Banjir Shah Alam, caused by Tropical Depression 29W killed 54 people, with two missing, and caused over $4.77 billion (2021 USD) in damage across Malaysia.
December 24 – present – Floods in Bahia, Brazil killed 21 people and injured over 280 others. As a result of the floods, 72 municipalities of Bahia declared a state of emergency.
December 24, 2021 – January 6, 2022 – Tropical Cyclone Seth killed two people and caused severe flooding in southeastern Queensland.
December 25 – The National Weather Service office in Boquillas, Texas recorded a temperature of , marking the highest temperature ever recorded in the United States on Christmas Day.
December 28 – A temperature of in Kodiak, Alaska became the all-time warmest statewide temperature for the entire month of December.
December 30, 2021 – January 1, 2022 – Grass fires in Boulder County, Colorado killed one person, left two people missing and injured six others. Wind gusts of were reported and the fire destroyed 1,084 structures and caused $513 million (2022 USD) in damage.
See also
2021 in the environment and environmental sciences
Weather of 2020
References
Weather by year
Weather-related lists
2021-related lists | Weather of 2021 | [
"Physics"
] | 7,542 | [
"Weather",
"Physical phenomena",
"Weather by year",
"Weather-related lists"
] |
67,638,248 | https://en.wikipedia.org/wiki/Gallium%28III%29%20sulfate | Gallium(III) sulfate refers to the chemical compound, a salt, with the formula Ga2(SO4)3, or its hydrates Ga2(SO4)3·xH2O. Gallium metal dissolves in sulfuric acid to form solutions containing [Ga(OH2)6]3+ and SO42− ions. The octadecahydrate Ga2(SO4)3·18H2O crystallises from these solutions at room temperature. This hydrate loses water in stages when heated, forming the anhydrate Ga2(SO4)3 above 150 °C and completely above 310 °C. Anhydrous Ga2(SO4)3 is isostructural with iron(III) sulfate, crystallizing in the rhombohedral space group R.
Preparation
Gallium(III) sulfate is prepared from the reaction of hydroxygallium diacetate and sulfuric acid. The two reactants were mixed at 90 °C and left for 2 days, which produced the octadecahydrate. It was then dried in a vacuum for 2 hours, which created the extremely hygroscopic anhydrous form. The overall reaction is below:
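A balanced equation consistent with the reactants and products named above is given here; the stoichiometry is inferred from those species rather than taken from the source:

2 Ga(OH)(CH3COO)2 + 3 H2SO4 → Ga2(SO4)3 + 4 CH3COOH + 2 H2O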
After the production, it was confirmed to be the simple salt, Ga2(SO4)3, by x-ray diffraction.
Properties
When heated over 680 °C, gallium sulfate gives off sulfur trioxide, yielding gallium(III) oxide.
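Written out as an equation, this decomposition is:

Ga2(SO4)3 → Ga2O3 + 3 SO3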
A gallium sulfate solution in water mixed with zinc sulfate can precipitate ZnGa2O4.
Derivatives
Basic gallium sulfate is known with the formula (H3O)Ga3(SO4)2(OH)6.
Double gallium sulfates are known with composition NaGa3(SO4)2(OH)6, KGa3(SO4)2(OH)6, RbGa3(SO4)2(OH)6, NH4Ga3(SO4)2(OH)6. These compounds are isostructural with jarosite and alunite. Jarosite and alunite can contain a small amount of gallium substituted for iron or aluminium. Organic base double gallium sulfates can contain different core structures, these can be chains of [Ga(SO4)3]3-, [Ga(OH)(SO4)2]2- or [Ga(H2O)2(SO4)2]− or sheets of [Ga(H2O)2(SO4)2]− units.
References
Gallium compounds
Sulfates
Catalysts | Gallium(III) sulfate | [
"Chemistry"
] | 540 | [
"Catalysis",
"Catalysts",
"Sulfates",
"Salts",
"Chemical kinetics"
] |
62,409,684 | https://en.wikipedia.org/wiki/Fluoride%20phosphate | The fluoride phosphates or phosphate fluorides are inorganic double salts that contain both fluoride and phosphate anions. In mineralogy, Hey's Chemical Index of Minerals groups these as 22.1. The Nickel-Strunz grouping is 8.BN.
Related mixed anion compounds are the chloride phosphates, the fluoride arsenates and fluoride vanadates.
They are distinct from the fluorophosphates: monofluorophosphate, difluorophosphate and hexafluorophosphate which have fluorine bonds to the phosphorus.
Minerals
Artificial
References
Fluorides
Phosphates
Mixed anion compounds | Fluoride phosphate | [
"Physics",
"Chemistry"
] | 139 | [
"Matter",
"Mixed anion compounds",
"Salts",
"Phosphates",
"Fluorides",
"Ions"
] |
62,410,103 | https://en.wikipedia.org/wiki/Carbonate%20chloride | The carbonate chlorides are double salts containing both carbonate and chloride anions. Quite a few minerals are known. Several artificial compounds have been made. Some complexes have both carbonate and chloride ligands. They are, in turn, part of the family of mixed anion materials.
The carbonate chlorides do not have a bond from chlorine to carbon, however "chlorocarbonate" has also been used to refer to the chloroformates which contain the group ClC(O)O-.
Formation
Natural
Scapolite is produced in nature by metasomatism, where hot high pressure water solutions of carbon dioxide and sodium chloride modify plagioclase.
Chloroartinite is found in Sorel cements exposed to air.
Minerals
In 2016, 27 chloride-containing carbonate minerals were known.
Artificial
Complexes
The "lanthaballs" are lanthanoid atom clusters held together by carbonate and other ligands. They can form chlorides. Examples are [La13(ccnm)6(CO3)14(H2O)6(phen)18] Cl3(CO3)·25H2O where ccnm is carbamoylcyanonitrosomethanide and phen is 1,10-phenanthroline. Praseodymium (Pr) or cerium (Ce) can substitute for lanthanum (La). Other lanthanide cluster compounds include :(H3O)6[Dy76O10(OH)138(OAc)20(L)44(H2O)34]•2CO3•4
Cl2•L•2OAc (nicknamed Dy76) and (H3O)6[Dy48O6(OH)84(OAc)4(L)15(hmp)18(H2O)20]•CO3•14Cl•2H2O (termed Dy48-T) with OAc=acetate, and L=3-furancarboxylate and Hhmp=2,2-bis(hydroxymethyl)propionic acid.
Platinum can form complexes with carbonate and chloride ligands, in addition to an amino acid. Examples include the platinum compound [Pt(gluH)Cl(CO3)]2.2H2O, where gluH = glutamic acid, and Na[Pt(gln)Cl2(CO3)].H2O, where gln = glutamine. Rhodium complexes include Rh2(bipy)2(CO3)2Cl (bipy = bipyridine).
References
Carbonates
Chlorides
Mixed anion compounds
Double salts | Carbonate chloride | [
"Physics",
"Chemistry"
] | 571 | [
"Matter",
"Chlorides",
"Inorganic compounds",
"Mixed anion compounds",
"Double salts",
"Salts",
"Ions"
] |
62,415,050 | https://en.wikipedia.org/wiki/European%20Structural%20Integrity%20Society | The European Structural Integrity Society (ESIS) is an international non-profit engineering scientific society. Its purpose is to create and expand knowledge about all aspects of structural integrity and to disseminate that knowledge. The goal is to improve the safety and performance of structures and components.
History
The origins of the European Structural Integrity Society date back to November 1978, during a summer school in Darmstadt (Germany). At the time, the name was the European Group on Fracture. Between 1979 and 1988 the first technical committees were created; the first had the designation Elasto-Plastic Fracture Mechanics. The initial idea was to create in Europe a counterpart to the ASTM committees. The first president of the European Structural Integrity Society was Dr. L.H. Larsson (European Commission Joint Research Centre). ESIS has a total of 24 technical committees and national groups in each European country.
The current president of ESIS is Prof. Aleksandar Sedmak from the University of Belgrade (Serbia).
Scientific Journals
ESIS is institutionally responsible for the following scientific journals:
Engineering Failure Analysis
Engineering Fracture Mechanics
International Journal of Fatigue
Theoretical and Applied Fracture Mechanics
Procedia Structural Integrity
Organization of International Conferences
ESIS is the organizer or supporter of various international conference series:
ECF, European Conference on Fracture (biennial)
ICSI, International Conference on Structural Integrity (biennial)
IRAS, International Conference on Risk Analysis and Safety of Complex Structures and Components (biennial)
Awards
ESIS, at its events, confers the following awards:
The Griffith Medal
The August-Wöhler Medal
The Award of Merit
Honorary Membership
The Young Scientist Award
Robert Moskovic Award (ESIS TC12)
The August Wöhler Medal Winners
2022: Youshi Hong, Chinese Academy of Sciences, China
2020: Filippo Berto, Sapienza University of Rome, Italy
2016: Paul C. Paris, Washington University in St. Louis, USA
2014: Reinhard Pippan, Austrian Academy of Sciences, Austria
2010: Ashok Saxena, University of Arkansas, USA
2008: Morris Sonsino, Technische Universität Darmstadt, Germany
2006: Robert O. Ritchie, University of California, Berkeley, USA
2004: Leslie Pook, University College London, UK
2002: Michael W. Brown, The University of Sheffield, UK
2000: Darrell F. Socie, University of Illinois, USA
The Award of Merit Winners
2022: José A.F.O. Correia, University of Porto, Portugal
2020: Uwe Zerbst, Federal Institute for Materials Research and Testing, Germany
2018: Filippo Berto, Sapienza University of Rome, Italy
2016: Laszlo Toth, University of Miskolc, Hungary
2014: Wolfgang Dietzel, Helmholtz-Zentrum Hereon, Germany
2010: Jaroslav Pokluda, Brno University of Technology, Czech Republic
2008: Emmanuel Gdoutos, Democritus University of Thrace, Greece
2006: Andrzej Neimitz, Kielce University of Technology, Poland
2004: Keith Miller, University of Sheffield, UK
2002: Dietrich Munz, Karlsruhe Institute of Technology, Germany
2000: Ian Milne, Integrity Management Services, UK
The Robert Moskovic Award Winners
2023: Aleksandar Grbovic, University of Belgrade, Serbia; Marc A. Meyers, University of California, San Diego, USA; Motomichi Koyama, Tohoku University, Japan
2022: Hryhoriy Nykyforchyn, National Academy of Sciences of Ukraine, Ukraine; John Michopoulos, United States Naval Research Laboratory, USA; Grzegorz Lesiuk, Wrocław University of Science and Technology, Poland
2021: Maria Feng, Columbia University, USA; Filippo Berto, Norwegian University of Science and Technology, Norway; Milan Veljkovic, Delft University of Technology, Netherlands
2020: Neil James, Plymouth University, UK; Rui Calçada, University of Porto, Portugal; Vladimir Moskvichev, Russian Academy of Sciences, Russia
2019: Hojjat Adeli, Ohio State University, USA; Alfonso Fernández-Canteli, University of Oviedo, Spain; Aleksandar Sedmak, University of Belgrade, Serbia
References
External links
European Structural Integrity Society Official Website
Engineering organizations
Organizations established in 1978
Scientific organizations established in 1978
Materials science organizations | European Structural Integrity Society | [
"Materials_science",
"Engineering"
] | 879 | [
"Materials science organizations",
"Materials science",
"nan"
] |
62,415,639 | https://en.wikipedia.org/wiki/Cache%20Creek%20Ocean | The Cache Creek Ocean, formerly called Anvil Ocean, is an inferred ancient ocean which existed between western North America and offshore continental terranes between the Devonian and the Middle Jurassic.
Evolution of the concept
First proposed in the 1970s and referred to as the Anvil Ocean, the oceanic crust between the Yukon composite terranes and North America was renamed the Cache Creek Sea in 1987 by Monger and Berg, before being renamed the Cache Creek Ocean by Plafker and Berg in 1994. Other researchers in 1998 proposed the name Slide Mountain Ocean.
The geology of Yukon and the geology of Alaska formed in part due to the accretion of island arcs and continental terranes onto the western margin of North America. Many of these island arcs arrived onshore during and after the Devonian. The Cache Creek Belt (also referred to as the Cache Creek suture zone or Cache Creek terrane) is an extensive area of mélange and oceanic rocks in the Canadian province of British Columbia. Sedimentary rocks contain fossils from the Carboniferous through the Middle Jurassic, and isotopic dating of blueschist gives ages of 230 and 210 million years ago, in the Late Triassic.
The Cache Creek Belt is bordered by the Quesnellia Terrane in the east and by the large Stikinia Terrane in the west. The accretion of the landmasses and the closing of the Cache Creek Ocean likely happened in the Middle Jurassic.
References
Historical oceans
Oceanography
Geology of British Columbia
Devonian North America
Carboniferous North America
Permian North America
Triassic North America
Early Jurassic North America
Middle Jurassic North America | Cache Creek Ocean | [
"Physics",
"Environmental_science"
] | 316 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
62,415,641 | https://en.wikipedia.org/wiki/H4K16ac | H4K16ac is an epigenetic modification to the DNA packaging protein Histone H4. It is a mark that indicates the acetylation at the 16th lysine residue of the histone H4 protein.
H4K16ac is unusual in that it has both transcriptional activation and repression activities.
The loss of H4K20me3 along with a reduction of H4K16ac is a strong indicator of cancer.
Lysine acetylation and deacetylation
Proteins are typically acetylated on lysine residues and this reaction relies on acetyl-coenzyme A as the acetyl group donor. In histone acetylation and deacetylation, histone proteins are acetylated and deacetylated on lysine residues in the N-terminal tail as part of gene regulation. Typically, these reactions are catalyzed by enzymes with histone acetyltransferase (HAT) or histone deacetylase (HDAC) activity, although HATs and HDACs can modify the acetylation status of non-histone proteins as well.
The regulation of transcription factors, effector proteins, molecular chaperones, and cytoskeletal proteins by acetylation and deacetylation is a significant post-translational regulatory mechanism. These regulatory mechanisms are analogous to phosphorylation and dephosphorylation by the action of kinases and phosphatases. Not only can the acetylation state of a protein modify its activity, but there has been recent suggestion that this post-translational modification may also crosstalk with phosphorylation, methylation, ubiquitination, sumoylation, and others for dynamic control of cellular signaling.
In the field of epigenetics, histone acetylation (and deacetylation) have been shown to be important mechanisms in the regulation of gene transcription. Histones, however, are not the only proteins regulated by posttranslational acetylation.
Nomenclature
H4K16ac indicates acetylation of lysine 16 on the histone H4 protein subunit:
H4 = the H4 family of histones
K = the standard abbreviation for lysine
16 = the position of the amino acid residue (counting from the N-terminus)
ac = the acetyl group
Histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H4K16ac.
Epigenetic implications
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodeling complexes are interpreted by the cell and lead to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on the relevance of histone modifications. A look into the data obtained led to the definition of chromatin states based on histone modifications.
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
Importance
Acetylation of histone tails has several effects: among other things, it can block the function of chromatin remodelers, and it neutralizes the positive charge on lysines. Acetylation of histone H4 on lysine 16 (H4K16Ac) is especially important for chromatin structure and function in a variety of eukaryotes and is catalyzed by specific histone lysine acetyltransferases (HATs). H4K16 is particularly interesting because this is the only acetylatable site of the H4 N-terminal tail, and can influence the formation of a compact higher-order chromatin structure. Hypoacetylation of H4K16 appears to cause delayed recruitment of DNA repair proteins to sites of DNA damage in a mouse model of premature aging (Hutchinson–Gilford progeria syndrome). H4K16Ac also has roles in transcriptional activation and the maintenance of euchromatin.
Activation and repression
H4K16ac is unusual in that it is associated with both transcriptional activation and repression. The bromodomain of TIP5, part of NoRC, binds to H4K16ac, and the NoRC complex then silences rDNA with HDACs and DNMTs.
There is also a reduction in the levels of H3K56ac during aging and an increase in the levels of H4K16ac. Increased H4K16ac in old yeast cells is associated with the decline in levels of the HDAC Sir2, which can increase the life span when overexpressed.
Cancer marker
The loss of the repressive H4K20me3 mark defines cancer along with a reduction of the activating H4K16ac mark. It is not clear exactly how the simultaneous loss of a repressive and an activating mark acts as an indicator of cancer, but the reduction happens at repetitive sequences along with a general reduction in DNA methylation.
Methods
The histone mark acetylation can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP-sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look into regions that are nucleosome-free (open chromatin). It uses a hyperactive Tn5 transposase to highlight nucleosome localisation.
See also
Histone acetylation
References
Epigenetics
Post-translational modification | H4K16ac | [
"Chemistry"
] | 1,597 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
62,416,628 | https://en.wikipedia.org/wiki/Design%20system | A design system is a comprehensive set of standards, documentation, and reusable components that guide the development of digital products within an organization. It serves as a single source of truth for designers and developers, ensuring consistency and efficiency across projects. A Design system may comprise, pattern and component libraries; style guides for font, color, spacing, component dimensions, and placement; design languages, coded components, brand languages, and documentation. Design Systems aid in digital product design and development of products such as mobile applications or websites.
A design system serves as a reference to establish a common understanding between design, engineering, and product teams. This understanding ensures smooth communication and collaboration between different teams involved in designing and building a product, and ultimately results in a consistent user experience.
Notable design systems include Lightning Design System (by Salesforce), Material Design (by Google), Carbon Design System (by IBM), and Fluent Design System (by Microsoft).
Advantages
Some of the advantages of a design system are:
Streamlined design to production workflow.
Creates a unified language between and within the cross-functional teams.
Faster builds, through reusable components and shared rationale.
Better products, through more cohesive user experiences and a consistent design language.
Improved maintenance and scalability, through the reduction of design and technical debt.
Stronger focus for product teams, through tackling common problems so teams can concentrate on solving user needs.
Origins
Design systems have been in practice for a long time under different names. They have been significant in the design field since they were created but have undergone many changes and improvements since their origin. The use of systems, or "patterns" as they were called in the 1960s, was first raised at the NATO Software Engineering Conference (a discussion of how software should be developed) by reference to the work of Christopher Alexander, gaining the industry's attention. In the 1970s, Alexander published a book named "A Pattern Language", along with Murray Silverstein and Sara Ishikawa, which discussed the interconnected patterns in architecture in an accessible and democratic way, and that gave birth to what we know today as "design systems".
Interest in the digital field surged again in the latter half of the 1980s, as patterns were applied to software development, which led to the notion of the software design pattern. As patterns are best maintained in a collaborative editing environment, this led to the invention of the first wiki, which later led to the invention of Wikipedia itself. Regular conferences were held, and even back then, patterns were used to build user interfaces. The surge continued well into the 90s, with Jennifer Tidwell's research closing the decade. Scientific interest continued well into the 2000s.
Mainstream interest in pattern languages for UI design surged again with the opening of the Yahoo! Design Pattern Library in 2006 and the simultaneous introduction of the Yahoo! User Interface Library (YUI Library for short). The simultaneous introduction was meant to allow more systematic design than the mere components which the UI library provided.
Google's Material Design in 2014 was the first to be called a "design language" by the firm (the previous version was called "Holo Theme"). Soon, others followed suit.
Technical challenges of large-scale web projects led to the invention of systematic approaches in the 2010s, most notably BEM and Atomic Design. The book about Atomic Design helped popularize the term "Design System" since 2016. The book describes an approach to design layouts of digital products in a component-based way making it future-friendly and easy to update.
Difference between pattern languages and design systems and UI kits
A pattern language allows its patterns to exist in many different shapes and forms – for example, a login form, with an input field for username, password, and buttons to log in, register and retrieve lost password is a pattern, no matter if the buttons are green or purple. Patterns are called patterns exactly because their exact nature might differ, but similarities provide the relationship between them (called a configuration) to remain the same. A design language however always has a set of visual guidelines to contain specific colors and typography. Most design systems allow elements of a design language to be configured (via its patterns) according to need.
A UI kit is simply a set of UI components, with no explicit rules provided on its usage.
Design tokens
A design token is a named variable that stores a specific design attribute, such as a color, typography setting, spacing value, or other design decision. Design tokens serve as a single source of truth for these attributes across an entire brand or system, and provide a wide array of benefits such as abstraction, flexibility, scalability, and consistency to large design systems. Design tokens, which are essentially design decisions expressed in code, also improve collaboration between designers and developers. The concept of design tokens exists within a variety of well-known design systems such as Google's Material Design, Amazon's Style Dictionary, Adobe's Spectrum, and the Atlassian Design System.
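To make the idea concrete, here is a minimal sketch of design tokens in Python; the token names and values are invented for illustration, and real systems typically store tokens in a platform-neutral format such as JSON and generate platform-specific code from them (as Style Dictionary does).

```python
# Minimal sketch of design tokens: named variables holding design decisions.
# All token names and values here are hypothetical, for illustration only.

TOKENS = {
    "color.brand.primary": "#0B5FFF",   # single source of truth for the brand color
    "font.size.body": 16,               # px
    "spacing.base": 8,                  # px; the spacing scale derives from this
}

def button_style(tokens=TOKENS):
    """Components reference tokens, never raw values, so a design change
    (e.g. a rebrand) is made once, at the token level."""
    return {
        "background": tokens["color.brand.primary"],
        "font_size": tokens["font.size.body"],
        "padding": tokens["spacing.base"] * 2,
    }

print(button_style())
```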
The W3C Design Tokens Community Group is working to provide open standards for design tokens.
Summary
A design system comprises various components, patterns, styles, and guidelines that aid in streamlining and optimizing design efforts. The critical factors to consider when creating a design system include the scope and reproducibility of the projects, and the availability of resources and time. If design systems are not appropriately implemented and maintained, they can become disorganized, making the design process less efficient. When implemented well, however, they can simplify work, make the end products more cohesive, and empower designers to address intricate UX challenges.
References
External links
What is a Design System? by Robert Gourley
Design Systems Handbook by Marco Suarez, Jina Anne, Katie Sylor-Miller, Diana Mounter, and Roy Stanfield. (Design Better by InVision)
Post (in French): Why set up a design system?
Design Patterns
Example Design Systems
Product design
Systems architecture | Design system | [
"Engineering"
] | 1,195 | [
"Systems engineering",
"Product design",
"Design",
"Systems architecture"
] |
62,417,498 | https://en.wikipedia.org/wiki/Truthful%20resource%20allocation | Truthful resource allocation is the problem of allocating resources among agents with different valuations over the resources, such that agents are incentivized to reveal their true valuations over the resources.
Model
There are m resources that are assumed to be homogeneous and divisible. Examples are:
Materials, such as wood or metal;
Virtual resources, such as CPU time or computer memory;
Financial resources, such as shares in firms.
There are n agents. Each agent has a function that attributes a numeric value to each "bundle" (combination of resources).
It is often assumed that the agents' value functions are linear, so that if the agent receives a fraction $r_j$ of each resource $j$, then his/her value is the sum $\sum_j r_j \cdot v_j$, where $v_j$ denotes the agent's value for the whole of resource $j$.
Design goals
The goal is to design a truthful mechanism, that will induce the agents to reveal their true value functions, and then calculate an allocation that satisfies some fairness and efficiency objectives. The common efficiency objectives are:
Pareto efficiency (PE);
Utilitarian social welfare: defined as the sum of agents' utilities. An allocation maximizing this sum is called utilitarian or max-sum; it is always PE.
Nash social welfare: defined as the product of agents' utilities. An allocation maximizing this product is called Nash-optimal or max-product or proportionally-fair; it is always PE. When agents have additive utilities, it is equivalent to the competitive equilibrium from equal incomes.
The most common fairness objectives are:
Equal treatment of equals (ETE): if two agents have exactly the same utility function, then they should get exactly the same utility.
Envy-freeness: no agent should envy another agent. It implies ETE.
Egalitarian welfare: the utility of the worst-off agent should be as high as possible.
Trivial algorithms
Two trivial truthful algorithms are:
The equal split algorithm, which gives each agent exactly 1/n of each resource. This allocation is envy-free (and obviously ETE), but usually it is very inefficient.
The serial dictatorship algorithm, which orders the agents arbitrarily and lets each agent in turn take all the resources that he wants from among the remaining ones. This allocation is PE, but usually it is unfair.
It is possible to mix these two mechanisms, and get a truthful mechanism that is partly-fair and partly-efficient. But the ideal mechanism would satisfy all three properties simultaneously: truthfulness, efficiency and fairness.
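The two trivial mechanisms above admit a short sketch, assuming linear additive valuations; this is an illustrative Python rendering with invented names, not code from any of the cited papers.

```python
# Two trivial truthful mechanisms for m divisible resources and n agents,
# assuming linear additive valuations: agent i's value for an allocation is
# sum_j r[i][j] * v[i][j]. Names are illustrative, not from a library.

def equal_split(num_agents, num_resources):
    """Give every agent exactly 1/n of each resource: envy-free, often inefficient."""
    share = 1.0 / num_agents
    return [[share] * num_resources for _ in range(num_agents)]

def serial_dictatorship(valuations):
    """Let agents, in a fixed arbitrary order, take all remaining resources
    they value positively: Pareto-efficient, usually unfair."""
    num_agents = len(valuations)
    num_resources = len(valuations[0])
    remaining = [1.0] * num_resources          # fraction of each resource left
    allocation = [[0.0] * num_resources for _ in range(num_agents)]
    for i in range(num_agents):                # order: agent 0, then 1, ...
        for j in range(num_resources):
            if valuations[i][j] > 0 and remaining[j] > 0:
                allocation[i][j] = remaining[j]
                remaining[j] = 0.0
    return allocation

# Example: two agents, two resources.
vals = [[3, 1],   # agent 0 values both resources positively
        [1, 3]]   # agent 1 values both resources positively
print(equal_split(2, 2))          # [[0.5, 0.5], [0.5, 0.5]]
print(serial_dictatorship(vals))  # agent 0 takes everything: [[1.0, 1.0], [0.0, 0.0]]
```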
At most one object per agent
In a variant of the resource allocation problem, sometimes called one-sided matching or assignment, the total amount of objects allocated to each agent must be at most 1.
When there are 2 agents and 2 objects, the following mechanism satisfies all three properties: if each agent prefers a different object, give each agent his preferred object; if both agents prefer the same object, give each agent 1/2 of each object (this is PE due to the capacity constraints). However, when there are 3 or more agents, it may be impossible to attain all three properties.
Zhou proved that, in the setting where there are 3 or more agents, each agent must get at most 1 object, and each object must be given to at most 1 agent, no truthful mechanism satisfies both PE and ETE.
When there are multiple units of each object (but each agent must still get at most 1 object), there is a weaker impossibility result: no PE and ETE mechanism satisfies Group strategyproofness.
He leaves open the more general resource allocation setting, in which each agent may get more than one object.
There are analogous impossibility results for agents with ordinal utilities:
For agents with strict ordinal utilities, Bogomolnaia and Moulin prove that no mechanism satisfies possible-PE, necessary-truthfulness, and ETE.
For agents with weak ordinal utilities, Katta and Sethuraman prove that no mechanism satisfies possible-PE, possible-truthfulness, and necessary-envy-freeness.
See also: Truthful one-sided matching.
Approximation Algorithms
There are several truthful algorithms that find a constant-factor approximation of the maximum utilitarian or Nash welfare.
Guo and Conitzer studied the special case of n=2 agents. For the case of m=2 resources, they showed a truthful mechanism attaining 0.828 of the maximum utilitarian welfare, and showed an upper bound of 0.841. For the case of many resources, they showed that all truthful mechanisms of the same kind approach 0.5 of the maximum utilitarian welfare. Their mechanisms are complete - they allocate all the resources.
Cole, Gkatzelis and Goel studied mechanisms of a different kind - based on the max-product allocation. For many agents, with valuations that are homogeneous functions, they show a truthful mechanism called Partial Allocation that guarantees to each agent at least 1/e ≈ 0.368 of his/her utility in the max-product allocation. Their mechanism is envy-free when the valuations are additive linear functions. They show that no truthful mechanism can guarantee to all agents more than 0.5 of their max-product utility.
For the special case of n=2 agents, they show a truthful mechanism that attains at least 0.622 of the utilitarian welfare. They also show that the mechanism that runs both the equal-split mechanism and the partial-allocation mechanism, and chooses the outcome with the higher social welfare, is still truthful, since both agents always prefer the same outcome. Moreover, it attains at least 2/3 of the optimal welfare. They also show an algorithm for computing the max-product allocation, and show that the Nash-optimal allocation itself attains at least 0.933 of the utilitarian welfare.
They also show a mechanism called Strong Demand Matching, which is tailored for a setting with many agents and few resources (such as the privatization auction in the Czech Republic). The mechanism guarantees to each agent at least p/(p+1) of the max-product utility, where p is the smallest equilibrium price of a resource when each agent has a unit budget. When there are many more agents than resources, the price of each resource is usually high, so the approximation factor approaches 1. In particular, when there are two resources, this fraction is at least n/(n+1). This mechanism assigns to each agent a fraction of a single resource.
Cheung improved the competitive ratios of previous works:
The ratio for two agents and two resources improved from 0.828 to 5/6 ≈ 0.833 with a complete-allocation mechanism, and strictly more than 5/6 with a partial-allocation mechanism. The upper bound improved from 0.841 to 5/6+ε for a complete-allocation mechanism, and to 0.8644 for a partial mechanism.
The ratio for two agents and many resources improved from 2/3 to 0.67776, by using a weighted average of two mechanisms: partial-allocation, and max (partial-allocation, equal-split).
Related problems
Truthful cake-cutting - a variant of the problem in which there is a single heterogeneous resource ("cake"), and each agent has a personal value-measure over the resource.
Strategic fair division - the study of equilibria of fair division games when the agents act strategically rather than sincerely.
Truthful allocation of two kinds of resources - plentiful and scarce.
Truthful fair division of indivisible items.
Relation between truthful fair division and wagering strategies.
References
Mechanism design
Fair division protocols | Truthful resource allocation | [
"Mathematics"
] | 1,616 | [
"Game theory",
"Mechanism design"
] |
52,055,632 | https://en.wikipedia.org/wiki/Hironaka%20decomposition | In mathematics, a Hironaka decomposition is a representation of an algebra over a field as a finitely generated free module over a polynomial subalgebra or a regular local ring. Such decompositions are named after Heisuke Hironaka, who used this in his unpublished master's thesis at Kyoto University .
Hironaka's criterion, sometimes called miracle flatness, states that a local ring R that is a finitely generated module over a regular Noetherian local ring S is Cohen–Macaulay if and only if it is a free module over S. There is a similar result for rings that are graded over a field rather than local.
Explicit decomposition of an invariant algebra
Let $V$ be a finite-dimensional vector space over an algebraically closed field of characteristic zero, $K$, carrying a representation of a group $G$, and consider the polynomial algebra on $V$, $K[V]$. The algebra $K[V]$ carries a grading with $K[V]_0 = K$, which is inherited by the invariant subalgebra
$K[V]^G = \{ f \in K[V] : g \cdot f = f \text{ for all } g \in G \}$.
A famous result of invariant theory, which provided the answer to Hilbert's fourteenth problem, is that if $G$ is a linearly reductive group and $V$ is a rational representation of $G$, then $K[V]^G$ is finitely-generated. Another important result, due to Noether, is that any finitely-generated graded algebra $R$ with $R_0 = K$ admits a (not necessarily unique) homogeneous system of parameters (HSOP). A HSOP (also termed primary invariants) is a set of homogeneous polynomials, $\theta_1, \dots, \theta_l$, which satisfy two properties:
The $\theta_i$ are algebraically independent.
The zero set of the $\theta_i$, $\{ v \in V : \theta_i(v) = 0 \ \forall i \}$, coincides with the nullcone of $V$.
Importantly, this implies that the algebra can then be expressed as a finitely-generated module over the subalgebra generated by the HSOP, $K[\theta_1, \dots, \theta_l]$. In particular, one may write
$K[V]^G = \sum_k \eta_k \, K[\theta_1, \dots, \theta_l]$,
where the $\eta_k$ are called secondary invariants.
Now if $K[V]^G$ is Cohen–Macaulay, which is the case if $G$ is linearly reductive, then it is a free (and as already stated, finitely-generated) module over any HSOP. Thus, one in fact has a Hironaka decomposition
$K[V]^G = \bigoplus_k \eta_k \, K[\theta_1, \dots, \theta_l]$.
In particular, each element in $K[V]^G$ can be written uniquely as $\sum_k \eta_k f_k$, where $f_k \in K[\theta_1, \dots, \theta_l]$, and the product of any two secondaries is uniquely given by $\eta_i \eta_j = \sum_k \eta_k f_{ijk}$, where $f_{ijk} \in K[\theta_1, \dots, \theta_l]$. This specifies the multiplication in $K[V]^G$ unambiguously.
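As a concrete illustration (a standard textbook example supplied here for exposition, not taken from the sources above), let the cyclic group of order two act on the plane by simultaneous sign change:

```latex
% G = Z/2Z acts on V = K^2 by (x, y) -> (-x, -y); the invariants are the
% polynomials of even total degree, K[x, y]^G = K[x^2, xy, y^2].
% An HSOP is \theta_1 = x^2, \theta_2 = y^2: these are algebraically
% independent and their common zero set {(0, 0)} is the nullcone.
% The secondary invariants are \eta_1 = 1 and \eta_2 = xy, giving the
% Hironaka decomposition
\[
  K[x,y]^{G} = K[x^{2},y^{2}] \oplus xy\,K[x^{2},y^{2}],
\]
% a free module of rank 2 over K[\theta_1, \theta_2], with multiplication
% determined by \eta_2^{2} = (xy)^{2} = \theta_1 \theta_2.
```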
See also
Rees decomposition
Stanley decomposition
References
Commutative algebra | Hironaka decomposition | [
"Mathematics"
] | 470 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
52,058,583 | https://en.wikipedia.org/wiki/Energy%20system | An energy system is a system primarily designed to supply energy-services to end-users. The intent behind energy systems is to minimise energy losses to a negligible level, as well as to ensure the efficient use of energy. The IPCC Fifth Assessment Report defines an energy system as "all components related to the production, conversion, delivery, and use of energy".
Definitions of an energy system vary in what they include. Broader definitions allow for demand-side measures, including daylighting, retrofitted building insulation, and passive solar building design, as well as socio-economic factors, such as aspects of energy demand management and remote work, while narrower definitions do not. Narrower definitions also do not account for the informal economy in traditional biomass that is significant in many developing countries.
The analysis of energy systems thus spans the disciplines of engineering and economics. Merging ideas from both areas to form a coherent description, particularly where macroeconomic dynamics are involved, is challenging.
The concept of an energy system is evolving as new regulations, technologies, and practices enter into service – for example, emissions trading, the development of smart grids, and the greater use of energy demand management, respectively.
Treatment
From a structural perspective, an energy system is like any system and is made up of a set of interacting component parts, located within an environment. These components derive from ideas found in engineering and economics. Taking a process view, an energy system "consists of an integrated set of technical and economic activities operating within a complex societal framework". The identification of the components and behaviors of an energy system depends on the circumstances, the purpose of the analysis, and the questions under investigation. The concept of an energy system is therefore an abstraction which usually precedes some form of computer-based investigation, such as the construction and use of a suitable energy model.
Viewed in engineering terms, an energy system lends itself to representation as a flow network: the vertices map to engineering components like power stations and pipelines and the edges map to the interfaces between these components. This approach allows collections of similar or adjacent components to be aggregated and treated as one to simplify the model. Once described thus, flow network algorithms, such as minimum cost flow, may be applied. The components themselves can be treated as simple dynamical systems in their own right.
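As a sketch of this engineering view (assuming the third-party networkx library; the node names, capacities, and unit costs below are invented for illustration):

```python
# Minimal flow-network representation of an energy system: vertices are
# engineering components, edges are the interfaces between them.
import networkx as nx

G = nx.DiGraph()
# networkx convention: negative demand = supply, positive demand = consumption.
G.add_node("power_station", demand=-100)   # supplies 100 units
G.add_node("substation", demand=0)         # pure transshipment vertex
G.add_node("city", demand=100)             # consumes 100 units

# Edges carry a capacity and a unit cost (e.g. losses or transmission price).
G.add_edge("power_station", "substation", capacity=80, weight=2)
G.add_edge("power_station", "city", capacity=40, weight=5)  # direct but costly
G.add_edge("substation", "city", capacity=80, weight=1)

flow = nx.min_cost_flow(G)  # minimum cost flow over the aggregated network
print(flow)  # routes 80 units via the substation and 20 units directly
```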
Economic modeling
Conversely, relatively pure economic modeling may adopt a sectoral approach with only limited engineering detail present. The sector and sub-sector categories published by the International Energy Agency are often used as a basis for this analysis. A 2009 study of the UK residential energy sector contrasts the use of the technology-rich Markal model with several UK sectoral housing stock models.
Data
International energy statistics are typically broken down by carrier, sector and sub-sector, and country. Energy carriers (or energy products) are further classified as primary energy and secondary (or intermediate) energy and sometimes final (or end-use) energy. Published energy datasets are normally adjusted so that they are internally consistent, meaning that all energy stocks and flows must balance. The IEA regularly publishes energy statistics and energy balances with varying levels of detail and cost and also offers mid-term projections based on this data. The notion of an energy carrier, as used in energy economics, is distinct and different from the definition of energy used in physics.
Scopes
Energy systems can range in scope, from local, municipal, national, and regional, to global, depending on issues under investigation. Researchers may or may not include demand side measures within their definition of an energy system. The Intergovernmental Panel on Climate Change (IPCC) does so, for instance, but covers these measures in separate chapters on transport, buildings, industry, and agriculture.
Household consumption and investment decisions may also be included within the ambit of an energy system. Such considerations are not common because consumer behavior is difficult to characterize, but the trend is to include human factors in models. Household decision-taking may be represented using techniques from bounded rationality and agent-based behavior. The American Association for the Advancement of Science (AAAS) specifically advocates that "more attention should be paid to incorporating behavioral considerations other than price- and income-driven behavior into economic models [of the energy system]".
Energy-services
The concept of an energy-service is central, particularly when defining the purpose of an energy system:
Energy-services can be defined as amenities that are either furnished through energy consumption or could have been thus supplied.
A consideration of energy-services per capita and how such services contribute to human welfare and individual quality of life is paramount to the debate on sustainable energy. People living in poor regions with low levels of energy-services consumption would clearly benefit from greater consumption, but the same is not generally true for those with high levels of consumption.
The notion of energy-services has given rise to energy-service companies (ESCo) who contract to provide energy-services to a client for an extended period. The ESCo is then free to choose the best means to do so, including investments in the thermal performance and HVAC equipment of the buildings in question.
International standards
ISO13600, ISO13601, and ISO13602 form a set of international standards covering technical energy systems (TES). Although withdrawn prior to 2016, these documents provide useful definitions and a framework for formalizing such systems. The standards depict an energy system broken down into supply and demand sectors, linked by the flow of tradable energy commodities (or energywares). Each sector has a set of inputs and outputs, some intentional and some harmful byproducts. Sectors may be further divided into subsectors, each fulfilling a dedicated purpose. The demand sector is ultimately present to supply energyware-based services to consumers (see energy-services).
Energy system redesign and transformation
Energy system design includes the redesigning of energy systems to ensure sustainability of the system and its dependents and for meeting requirements of the Paris Agreement for climate change mitigation. Researchers are designing energy systems models and transformational pathways for renewable energy transitions towards 100% renewable energy, often in the form of peer-reviewed text documents created once by small teams of scientists and published in a journal.
Considerations include the system's intermittency management, air pollution, various risks (such as for human safety, environmental risks, cost risks and feasibility risks), stability for prevention of power outages (including grid dependence or grid-design), resource requirements (including water and rare minerals and recyclability of components), technology/development requirements, costs, feasibility, other affected systems (such as land-use that affects food systems), carbon emissions, available energy quantity and transition-concerning factors (including costs, labor-related issues and speed of deployment).
Energy system design can also consider energy consumption, such as in terms of absolute energy demand, waste and consumption reduction (e.g. via reduced energy-use, increased efficiency and flexible timing), process efficiency enhancement and waste heat recovery. A study noted significant potential for a type of energy systems modelling to "move beyond single disciplinary approaches towards a sophisticated integrated perspective".
See also
Control volume – a concept from mechanics and thermodynamics
Electric power system – a network of electrical components used to generate, transfer, and use electric power
Energy development – the effort to provide societies with sufficient energy under the reduced social and environmental impact
Energy modeling – the process of building computer models of energy systems
Energy industry – the supply-side of the energy sector
Mathematical model – the representation of a system using mathematics and often solved using computers
Object-oriented programming – a computer programming paradigm suited to the representation of energy systems as networks
Network science – the study of complex networks
Open energy system databases – database projects which collect, clean, and republish energy-related datasets
Open energy system models – a review of energy system models that are also open source
Sankey diagram – used to show energy flows through a system
Notes
References
External links
Energy
Energy development
Energy economics
Networks
Energy infrastructure
Systems science | Energy system | [
"Physics",
"Environmental_science"
] | 1,623 | [
"Physical quantities",
"Energy economics",
"Energy (physics)",
"Energy",
"Environmental social science"
] |
73,435,829 | https://en.wikipedia.org/wiki/List%20of%20least-polluted%20cities%20by%20particulate%20matter%20concentration | Below is a list of 526 cities sorted by their annual mean concentration of PM2.5 (μg/m3) in 2022. By default the least polluted cities which have fewest particulates in the air come first. Click on the arrows next to the table's headers to have the most polluted cities ranked first.
Please note that constraints exist in this type of lists. For instance, some places like Africa and South America lack air pollution reporting tools, so their pollution levels are probably not reflected in this list. Moreover, many cities from a certain country are featured in the list may only mean that they have large and wide air pollution monitoring networks, which may or may not be an indicator of heavy pollution.
See also
List of most-polluted cities by particulate matter concentration
List of countries by air pollution
Air quality monitoring
Air purifier
References
Particulates
Pollutants
Visibility
Air pollution
Pollution
Pollution by city
Pollution-related lists | List of least-polluted cities by particulate matter concentration | [
"Physics",
"Chemistry",
"Mathematics"
] | 198 | [
"Visibility",
"Physical quantities",
"Quantity",
"Particulates",
"Particle technology",
"Wikipedia categories named after physical quantities"
] |
73,438,573 | https://en.wikipedia.org/wiki/Dark-field%20X-ray%20microscopy | Dark-field X-ray microscopy (DFXM or DFXRM) is an imaging technique used for multiscale structural characterisation. It is capable of mapping deeply embedded structural elements with nm-resolution using synchrotron X-ray diffraction-based imaging. The technique works by using scattered X-rays to create a high degree of contrast, and by measuring the intensity and spatial distribution of the diffracted beams, it is possible to obtain a three-dimensional map of the sample's structure, orientation, and local strain.
History
The first experimental demonstration of dark-field X-ray microscopy was reported in 2006 by a group at the European Synchrotron Radiation Facility in Grenoble, France. Since then, the technique has been rapidly evolving and has shown great promise in multiscale structural characterization. Its development is largely due to advances in synchrotron X-ray sources, which provide highly collimated and intense beams of X-rays. The development of dark-field X-ray microscopy has been driven by the need for non-destructive imaging of bulk crystalline samples at high resolution, and it continues to be an active area of research today. However, dark-field microscopy, dark-field scanning transmission X-ray microscopy, and soft dark-field X-ray microscopy have long been used to map deeply embedded structural elements.
Principles and instrumentation
In this technique, a synchrotron light source is used to generate an intense and coherent X-ray beam, which is then focused onto the sample using a specialized objective lens. The objective lens acts as a collimator to select and focus the scattered light, which is then detected by the 2D detector to create a diffraction pattern. The specialized objective lens in DFXM, referred to as an X-ray objective lens, is a crucial component of the instrumentation required for the technique. It can be made from different materials such as beryllium, silicon, and diamond, depending on the specific requirements of the experiment. The objective enables one to enlarge or reduce the spatial resolution and field of view within the sample by varying the number of individual lenses, $N$, and adjusting the sample-to-objective and objective-to-detector distances, $d_1$ and $d_2$, correspondingly. The diffraction angle is typically 10–30°.
The sample is positioned at an angle such that the direct beam is blocked by a beam stop or aperture, and the diffracted beams from the sample are allowed to pass through to a detector.
An embedded crystalline element (for example, a grain or domain) of choice is aligned such that the detector is positioned at a Bragg angle that corresponds to a particular diffraction peak of interest, which is determined by the crystal structure of the sample. The objective magnifies the diffracted beam by a factor $M$ and generates an inverted 2D projection of the grain. Through repeated exposures during a 360° rotation of the element around an axis parallel to the diffraction vector, $\vec{Q}$, several 2D projections of the grain are obtained from various angles. A 3D map is then obtained by combining these projections using reconstruction algorithms similar to those developed for CT scanning. If the lattice of the crystalline element exhibits an internal orientation spread, this procedure is repeated for a number of sample tilts, indicated by the angles $\chi$ and $\phi$.
The current implementation of DFXM at beamline ID06 of the ESRF uses a compound refractive lens (CRL) as the objective, giving a spatial resolution of 100 nm and an angular resolution of 0.001°.
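As background (an idealized thin-lens sketch using standard compound-refractive-lens relations, not parameters of the ID06 instrument), the distances $d_1$ and $d_2$ and the number of lenslets $N$ are linked by the usual imaging relations:

```latex
% A stack of N identical parabolic lenslets with apex radius of curvature R
% and refractive-index decrement \delta acts as an objective of focal length
\[
  f_N \approx \frac{R}{2N\delta},
\]
% and, with sample-to-objective distance d_1 and objective-to-detector
% distance d_2, the imaging condition and magnification are
\[
  \frac{1}{d_1} + \frac{1}{d_2} = \frac{1}{f_N},
  \qquad
  M = \frac{d_2}{d_1}.
\]
```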
Applications, limitations and alternatives
Current and potential applications
DFXM has been used for the non-destructive investigation of polycrystalline materials and composites, revealing the 3D microstructure, phases, orientation of individual grains, and local strains. It has also been used for in situ studies of materials recrystallisation, dislocations and other defects, and the deformation and fracture mechanisms in materials, such as metals and composites. DFXM can provide insights into the 3D microstructure and deformation of geological materials such as minerals and rocks, and irradiated materials.
DFXM has the potential to revolutionise the field of nanotechnology by providing non-destructive, high-resolution 3D imaging of nanostructures and nanomaterials. It has been used to investigate the 3D morphology of nanowires and to detect structural defects in nanotubes.
DFXM has shown potential for imaging biological tissues and organs with high contrast and resolution. It has been used to visualize the 3D microstructure of cartilage and bone, as well as to detect early-stage breast cancer in a mouse model.
Limitations
The intense X-ray beams used in DFXM can damage delicate samples, particularly biological specimens. DFXM can suffer from imaging artefacts such as ring artefacts, which can affect image quality and limit interpretation.
The instrumentation required for DFXM is expensive and typically only available at synchrotron facilities, making it inaccessible to many researchers. Although DFXM can achieve high spatial resolution, it is still not as high as the resolution achieved by other imaging techniques such as transmission electron microscopy (TEM) or X-ray crystallography.
Preparation of samples for DFXM imaging can be challenging, especially for samples that are not crystalline. There are also limitations on the sample size that can be imaged as the technique works best with thin samples, typically less than 100 microns thick, due to the attenuation of the X-ray beam by thicker samples. DFXM still suffers from long integration times, which can limit its practical applications. This is due to the low flux density of X-rays emitted by synchrotron sources and the high sensitivity required to detect scattered X-rays.
Alternatives
There are several alternative techniques to DFXM, depending on the application, some of which are:
Differential-aperture X-ray structural microscopy (DAXM): DAXM is a synchrotron X-ray method capable of delivering precise information about the local structure and crystallographic orientation in three dimensions at a spatial resolution of less than one micron. It also provides angular precision and measures local elastic strain with high accuracy in a wide range of materials, including single crystals, polycrystals, composites, and materials with varying properties.
Bragg Coherent diffraction imaging (BCDI): BCDI is an advanced microscopy technique introduced in 2006 to study crystalline nanomaterials' 3D structure. BCDI has applications in diverse areas, including in situ studies of corrosion, probing dissolution processes, and simulating diffraction patterns to understand atomic displacement.
Ptychography is a computational imaging method used in microscopy to generate images by processing multiple coherent interference patterns. It provides advantages such as high-resolution imaging, phase retrieval, and lensless imaging capabilities.
Diffraction Contrast Tomography (DCT): DCT is a method that uses coherent X-rays to generate three-dimensional grain maps of polycrystalline materials. DCT enables visualization of crystallographic information within samples, aiding in the analysis of materials' structural properties, defects, and grain orientations.
Three-dimensional X-ray diffraction (3DXRD): 3DXRD is a synchrotron-based technique that provides information about the crystallographic orientation of individual grains in polycrystalline materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution.
Electron backscatter diffraction (EBSD): EBSD is a scanning electron microscopy (SEM) technique that can be used to map crystallographic orientation and strain at the sample surface at the submicron scale. It works by detecting the diffraction pattern of backscattered electrons, which provides information about the crystal structure of the material. EBSD can be used on a variety of materials, including metals, ceramics, and semiconductors, can be extended to the third dimension (3D EBSD), and can be combined with digital image correlation (EBSD-DIC).
Digital image correlation (DIC): DIC is a non-contact optical method used to measure the displacement and deformation of a material by analysing the digital images captured before and after the application of load. This technique can measure strain with sub-pixel accuracy and is widely used in materials science and engineering.
Transmission electron microscopy (TEM): TEM is a high-resolution imaging technique that provides information about the microstructure and crystallographic orientation of materials. It can be used to study the evolution of microstructure during deformation and recrystallization processes and provides submicron resolution.
Micro-Raman spectroscopy: Micro-Raman spectroscopy is a non-destructive technique that can be used to measure the strain of a material at the submicron scale. It works by illuminating a sample with a laser beam and analysing the scattered light. The frequency shift of the scattered light provides information about the crystal deformation, and thus the strain of the material.
Neutron diffraction: Neutron diffraction is a technique that uses a beam of neutrons to study the structure of materials. It is particularly useful for studying the crystal structure and magnetic properties of materials. Neutron diffraction can provide sub-micron resolution.
References
Further reading
Diffraction
Materials science
Microscopes
Microscopy
Nanotechnology
Scientific techniques | Dark-field X-ray microscopy | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,945 | [
"Applied and interdisciplinary physics",
"Spectrum (physical sciences)",
"Materials science",
"Measuring instruments",
"Diffraction",
"Crystallography",
"Microscopes",
"nan",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
73,439,301 | https://en.wikipedia.org/wiki/Wilma%20Dierkes | Wilma K. Dierkes is a University of Twente Associate Professor and chair of the Elastomer Technology and Engineering group known for her research on elastomer sustainability.
Education
Dierkes completed an undergraduate degree in chemistry at Leibniz University in Hannover, Germany in 1990. After a period working in industry, she returned to study for a PhD in polymer science at University of Twente, completing her doctorate in 2010. She completed postgraduate study in environmental science at Foundation Universitaire Luxembourgeoise, Arlon, Belgium.
Career
Dierkes entered the rubber industry in 1991. She worked on elastomer recycling at the company Rubber Resources in Maastricht. Here, she was responsible for development and technical service, and implemented recycling of production waste. She later joined Degussa working on carbon black research, and Bosch working on windshield wiper development.
She joined the University of Twente, the Netherlands, in 2001. She is currently an associate professor. From 2009 to 2013, she held a visiting professorship at Tampere University of Technology. From 2005 to 2014, Dierkes served as chairman of the Dutch Association of Plastics and Rubber Technologists (VKRT). She is also a founding member of the Female Faculty Network at the University of Twente (FFNT) and has served on its board. She serves on the expert committee for the Recircle Awards, which is composed of "individuals from the global tyre retreading and recycling industries selected according to their independent status and their acknowledged expertise".
Dierkes' most cited works address the topics of recycling of natural rubber based latex products and silica filler technology for application in tire tread compounding. She has advocated for an open source approach to research and development in the tire industry.
Awards and recognition
2013 - Sparks–Thomas award from the ACS Rubber Division
References
Polymer scientists and engineers
Living people
Women materials scientists and engineers
Year of birth missing (living people) | Wilma Dierkes | [
"Chemistry",
"Materials_science",
"Technology"
] | 403 | [
"Polymer scientists and engineers",
"Physical chemists",
"Materials scientists and engineers",
"Polymer chemistry",
"Women materials scientists and engineers",
"Women in science and technology"
] |
73,440,849 | https://en.wikipedia.org/wiki/Pressure%20gain%20combustion | Pressure gain combustion (PGC) is the unsteady state process used in gas turbines in which gas expansion caused by heat release is constrained. First developed in the early 20th century as one of the earliest gas turbine designs, the concept was mostly abandoned following the advent of isobaric jet engines in WWII.
As an alternative to conventional gas turbines, pressure gain combustion prevents the expansion of gas by holding it at constant volume during the reaction, causing an increase in stagnation pressure. The subsequent combustion produces a detonation, rather than the deflagration used in most turbines. Doing so allows for extra work extraction rather than a loss of energy due to pressure loss across the turbine.
Several different variations of turbines use this process, the most prominent being the pulse detonation engine and the rotating detonation engine. In recent years, pressure gain combustion has once again gained relevance and is currently being researched for use in propulsion systems and power generation due to its potential for improved efficiency and performance over conventional turbines.
History
Early history
Gas-powered turbines have been researched since the late 18th century, starting with John Barber's 1791 patent. Over a century later, Ægidius Elling built a turbine in 1903 which generated 11 bhp (8.2 kW), the first gas turbine to produce net positive work. In 1909, the first pressure gain combustion turbine was built by Hans Holzwarth. Initially operating at 200 bhp (147 kW), subsequent improvements to the engine increased its power output to 5000 bhp (3728 kW) by 1939. However, the aptly named Explosion Turbine would lose popularity among engineers and inventors as continuous combustion designs gained traction due to their use in jet engine prototypes.
Renewed Interest
The concept of pulsed propulsion is neither new, nor exclusive to pressure gain combustion. In fact, the German V1 missile utilized a pulse jet operating at 45 Hz. During the space race, NASA's Project Orion concept utilized force from nuclear explosions ignited behind the spacecraft to generate thrust. This process is known as nuclear pulse propulsion and is stylistically similar to the pulse detonation engine.
In the mid-20th century, US aeronautical scientists and engineers were trying to study the properties of detonation waves. To do this, a primitive rotating detonation chamber was created. This development became the basis for the rotating detonation engine, one of the leading PGC engine concepts, although it was largely ignored at the time due to its instability.
However, as gas turbines are becoming more and more optimized, PGC research is now gaining traction in aircraft propulsion, power generation, and even rocket propulsion. In January 2008, a pulse detonation-powered plane completed its first flight as a cooperative project between the Air Force Research Laboratory and Innovative Scientific Solutions, a research and product development company. Currently, various organizations have developed working PGC engines (mostly RDEs), but none have been put to commercial use due to developmental challenges.
Concept & Comparison to Conventional Turbines
Overview of Conventional Turbines
The majority of gas turbines consist of an intake through which atmospheric air enters the turbine. The air is then pressurized through a compressor before mixing with fuel. The air-fuel mixture, also known as the working fluid, is combusted in a deflagration (a combustion reaction propagating at subsonic speed), which causes the mixture to expand in volume while maintaining constant pressure. Finally, the combustion product is ejected out of the exhaust to produce thrust. This process is known as the Brayton Cycle and has been used as the standard method of jet propulsion and turbine design for about a century.
Humphrey Cycle
Contrasting against the Brayton Cycle used in most turbines, Pressure Gain Combustion is based on the Humphrey Cycle. Instead of an isobaric system in which gas volume expands as heat is added to the combustion chamber, the volume of working fluid stays constant as its pressure increases during combustion. While the Brayton Cycle describes a subsonic deflagration, the Humphrey Cycle occurs in a detonation (A combustion reaction propagating at supersonic speed). The reaction occurs so quickly that the mixture doesn't have time to expand, causing a pressure gain, before being ejected through the exhaust to produce thrust. The whole process occurs rapidly, and turbines will produce anywhere from 20 to 200 detonations per second.
Because the working fluid is combusting at a constant volume, there is no pressure loss across the turbine, which increases the net work generated by each cycle. However, since work is done by a series of detonations, rather than a constant reaction generating thrust, the process is naturally more unsteady compared to a conventional turbine.
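For ideal air-standard cycles with a calorically perfect gas, the two cycles can be compared directly; this is a textbook idealization given for orientation rather than a result from the sources above:

```latex
% Ideal air-standard thermal efficiencies, with ratio of specific heats \gamma,
% compressor-inlet temperature T_1, end-of-compression temperature T_2, and
% end-of-heat-addition temperature T_3. Brayton adds heat at constant
% pressure; Humphrey adds it at constant volume:
\[
  \eta_{\mathrm{Brayton}} = 1 - \frac{T_1}{T_2},
  \qquad
  \eta_{\mathrm{Humphrey}}
    = 1 - \gamma \, \frac{T_1}{T_2} \,
      \frac{(T_3/T_2)^{1/\gamma} - 1}{(T_3/T_2) - 1}.
\]
% For the same T_1, T_2, T_3 (and \gamma > 1), the Humphrey efficiency
% exceeds the Brayton one, reflecting the extra work made available by the
% pressure gain of constant-volume heat addition.
```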
Designs & Variations
Pulse Detonation Engine
The simplest modern PGC turbine is the Pulse Detonation Engine. Consisting of almost no moving parts, the PDE is externally similar to a ramjet, a type of jet engine without compressor fans that is viable only at supersonic speeds. First, air enters the intake nozzle and travels directly to the combustion chamber to be mixed with injected fuel. There, the mixture is ignited while the front of the chamber closes, producing a detonation wave which both compresses and combusts the mixture, before the working fluid is ejected at supersonic speeds through the exhaust.
Because of the engine's simplicity and anatomical similarity to ramjets and scramjets, pulse detonation engines can be implemented as a combined-cycle engine, which can improve the performance and reliability of ramjets. Conventional combined-cycle engines have complex moving parts that are essentially rendered useless at high speeds, an issue that PDE/ramjet drives will not have.
Rotating Detonation Engine
Apart from PDEs, there exist multiple other PGC engine concepts, including Resonant Pulse Combustors and Internal Combustion Wave Rotors, to name a few. However, the majority of modern PGC research is concentrated around the rotating detonation engine (RDE), which aims to solve many of the issues encountered by PDEs.
The main drawback of pulse detonation is the intermittent nature of the combustions. Not only is the reaction hard to control, but the intermittent combustion also loses power due to the time it takes to refuel the combustion chamber after purging, in which no thrust is produced. The rotating detonation engine aims to address both these problems. While PDEs involve a series of repeating detonations to ignite batches of air that enter the combustion chamber, RDEs can circumvent this by utilizing a single detonation wave that rotates around the space in between concentric cylinders. A continuous air intake flows through the cylinders, which compresses and combusts as it passes through the rotating detonation wave. This eliminates the need to constantly produce detonations since it only uses a single cyclic detonation, and it allows for a steadier constant flow, instead of the pulsing thrust produced by PDEs.
Applications & Technical Challenges
Propulsion
Modern chemical rockets still utilize deflagration reactions to generate thrust, and these are being optimized increasingly close to their limits. As a result, pressure gain combustion engines, mostly RDEs, have begun to garner significant attention as a possible method of improving rocket performance. Currently, pressure gain rocket engines are being researched by space agencies in multiple countries, including NASA and JAXA, as well as numerous universities and private companies. Detonation propulsion, which is more energy efficient than conventional deflagration reactions, may increase efficiency by 5-10%, which can both reduce rocket mass and increase payload size.
As mentioned previously, pressure gain turbines have also been researched and developed extensively for use in aircraft propulsion. Pressure gain combustion engines can both improve the performance and reduce the complexity of combined ramjet/scramjet engines through their shared design similarities. Furthermore, this may even allow PDE/RDE combined ramjets to be utilized at conditions unsuitable for conventional ramjets. In addition, pressure gain turbojets require significantly less complexity, especially in the compressors, compared to regular turbines. This will not only save resources in manufacturing but also allow for designs to produce higher thrust in smaller engines.
Energy Generation
Apart from nuclear fission, natural gas contains the highest energy density of widely used fuels. As such, to reduce carbon emissions, electricity-generating plants are increasingly turning to gas turbines from crude oil and coal. While conventional turbines generate large amounts of energy more efficiently than other fossil fuels, just as in aerospace, they are beginning to reach their limits.
Similar to its potential use in propulsion, pressure gain combustion turbines can offer an improvement to gas power plants. In addition to better efficiency, RDEs can operate at much higher hydrogen concentrations, further improving performance because of hydrogen's higher energy density compared to petrochemicals. The relative simplicity of RDEs can also improve reliability and ease of maintenance, though that may be counterbalanced by the increased stress put on the engine by the process itself.
Engineering and Implementation Challenges
While PGC offers improved performance and efficiency, there are serious flaws and challenges that researchers were initially unable to solve, preventing the technology from being widely used.
Since PDEs are effectively intermittent explosion drives, the cycle they run on is far more unsteady and harder to control than conventional turbines. This makes PDEs very difficult to integrate into airframes, as the high energy pulsing of the engine can cause the inlet to unstart and stop the reaction, in addition to putting high stress on the nacelle or any other adjacent parts. The noise from the exhaust is also a concern. In testing, the highly energetic detonations produced up to 122 dB at a distance of 3 m in a 20 Hz PDE. For scaled-up commercial units operating at higher power and frequency, noise pollution will be a serious issue if effective damping measures are not implemented.
Moreover, due to the high energy required to initiate detonations, PDEs with shorter combustion chambers will need to utilize deflagration combustion at initial ignition and accelerate pressure waves through a process called Deflagration to Detonation Transition (DDT). This requires placing obstacles in the path of the deflagration wave to induce turbulent flow, which speeds up the wave but requires more complexity in the engine structure.
While RDEs solve many of the problems encountered in PDEs, they are not without flaws. The constant flow of the engine, coupled with the need to sustain the detonation, requires a tremendous intake of air to be rapidly mixed with the fuel over a shorter distance than in most PDEs, which are normally quite elongated. In addition, the stress placed on the engine by the detonation process was simply too great for early designs to withstand. However, advancements in materials science and manufacturing processes have improved the feasibility of RDEs to the point where many organizations now consider research and development worthwhile.
See also
Nuclear Pulse Propulsion
Pulse Jet
Pulse Detonation Engine
Rotating Detonation Engine
Schramjet
References
Combustion | Pressure gain combustion | [
"Chemistry"
] | 2,239 | [
"Combustion"
] |
58,071,309 | https://en.wikipedia.org/wiki/Unitary%20transformation%20%28quantum%20mechanics%29 | In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time.
Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original.
Transformation
A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian $H(t)$ and unitary operator $U(t)$. Under this change, the Hamiltonian transforms as:

$$H \to \tilde{H} = UHU^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger.$$

The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by $\tilde{\psi} = U\psi$. Specifically, if the wave function $\psi(t)$ satisfies the original equation, then $U\psi(t)$ will satisfy the new equation.
Derivation
Recall that by the definition of a unitary matrix, $U^\dagger U = 1$. Beginning with the Schrödinger equation,

$$i\hbar\,\frac{\partial\psi}{\partial t} = H\psi,$$

we can therefore insert the identity at will. In particular, inserting it after $H$ and also premultiplying both sides by $U$, we get

$$i\hbar\, U\frac{\partial\psi}{\partial t} = UHU^\dagger(U\psi). \tag{1}$$

Next, note that by the product rule,

$$i\hbar\,\frac{\partial(U\psi)}{\partial t} = i\hbar\,\frac{\partial U}{\partial t}\psi + i\hbar\, U\frac{\partial\psi}{\partial t}.$$

Inserting another $U^\dagger U$ and rearranging, we get

$$i\hbar\, U\frac{\partial\psi}{\partial t} = i\hbar\,\frac{\partial(U\psi)}{\partial t} - i\hbar\,\frac{\partial U}{\partial t}U^\dagger(U\psi). \tag{2}$$

Finally, combining (1) and (2) above results in the desired transformation:

$$i\hbar\,\frac{\partial(U\psi)}{\partial t} = \left(UHU^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger\right)(U\psi).$$
If we adopt the notation $\tilde{\psi} \equiv U\psi$ to describe the transformed wave function, the equations can be written in a clearer form. For instance, the transformation above can be rewritten as

$$i\hbar\,\frac{\partial\tilde{\psi}}{\partial t} = \left(UHU^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger\right)\tilde{\psi},$$

which can be rewritten in the form of the original Schrödinger equation,

$$i\hbar\,\frac{\partial\tilde{\psi}}{\partial t} = \tilde{H}\tilde{\psi}.$$

The original wave function can be recovered as $\psi = U^\dagger\tilde{\psi}$.
Relation to the interaction picture
Unitary transformations can be seen as a generalization of the interaction (Dirac) picture. In the latter approach, a Hamiltonian is broken into a time-independent part and a time-dependent part,

$$H(t) = H_0 + V(t).$$

In this case, the Schrödinger equation becomes

$$i\hbar\,\frac{\partial\psi_I}{\partial t} = V_I\psi_I, \qquad \text{with } V_I = e^{iH_0 t/\hbar}\,V\,e^{-iH_0 t/\hbar}.$$

The correspondence to a unitary transformation can be shown by choosing $U = e^{iH_0 t/\hbar}$. As a result,

$$U^\dagger = e^{-iH_0 t/\hbar}.$$

Using the notation from above, our transformed Hamiltonian becomes

$$\tilde{H} = U\big(H_0 + V(t)\big)U^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger.$$

First note that since $U$ is a function of $H_0$, the two must commute. Then

$$UH_0U^\dagger = H_0,$$

which takes care of the first term in the transformation, i.e. $\tilde{H} = H_0 + UV(t)U^\dagger + i\hbar\,\frac{\partial U}{\partial t}U^\dagger$. Next use the chain rule to calculate

$$i\hbar\,\frac{\partial U}{\partial t}U^\dagger = i\hbar\left(\frac{iH_0}{\hbar}\right)UU^\dagger = -H_0,$$

which cancels with the other $H_0$. Evidently we are left with $\tilde{H} = UV(t)U^\dagger$, yielding $V_I$ as shown above.

When applying a general unitary transformation, however, it is not necessary that $H$ be broken into parts, or even that $U$ be a function of any part of the Hamiltonian.
Examples
Rotating frame
Consider an atom with two states, ground $|g\rangle$ and excited $|e\rangle$. The atom has a Hamiltonian $H = \hbar\omega|e\rangle\langle e|$, where $\omega$ is the frequency of light associated with the ground-to-excited transition. Now suppose we illuminate the atom with a drive at frequency $\omega_d$ which couples the two states, and that the time-dependent driven Hamiltonian is

$$H = \hbar\omega|e\rangle\langle e| + \hbar\Omega\, e^{i\omega_d t}|g\rangle\langle e| + \hbar\Omega^*\, e^{-i\omega_d t}|e\rangle\langle g|$$

for some complex drive strength $\Omega$. Because of the competing frequency scales ($\omega$, $\omega_d$, and $\Omega$), it is difficult to anticipate the effect of the drive (see driven harmonic motion).

Without a drive, the phase of $|e\rangle$ would oscillate relative to $|g\rangle$. In the Bloch sphere representation of a two-state system, this corresponds to rotation around the z-axis. Conceptually, we can remove this component of the dynamics by entering a rotating frame of reference defined by the unitary transformation $U = e^{i\omega_d t|e\rangle\langle e|}$. Under this transformation, the Hamiltonian becomes

$$\tilde{H} = \hbar(\omega - \omega_d)|e\rangle\langle e| + \hbar\Omega|g\rangle\langle e| + \hbar\Omega^*|e\rangle\langle g|.$$

If the driving frequency is equal to the g-e transition's frequency, $\omega_d = \omega$, resonance will occur and then the equation above reduces to

$$\tilde{H} = \hbar\Omega|g\rangle\langle e| + \hbar\Omega^*|e\rangle\langle g|.$$

From this it is apparent, even without getting into details, that the dynamics will involve an oscillation between the ground and excited states at frequency $\Omega$.

As another limiting case, suppose the drive is far off-resonant, $|\omega - \omega_d| \gg |\Omega|$. We can figure out the dynamics in that case without solving the Schrödinger equation directly. Suppose the system starts in the ground state $|g\rangle$. Initially, the Hamiltonian will populate some component of $|e\rangle$. A small time later, however, it will populate roughly the same amount of $|e\rangle$ but with completely different phase. Thus the effect of an off-resonant drive will tend to cancel itself out. This can also be expressed by saying that an off-resonant drive is rapidly rotating in the frame of the atom.
These concepts can be illustrated with Bloch sphere diagrams in which the arrow represents the state of the atom and a hand represents the drive.
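As a quick numerical illustration of the frame change (a minimal sketch; the frequencies and drive strength below, and the use of $\hbar = 1$, are illustrative assumptions), one can verify that on resonance the transformed Hamiltonian $\tilde{H} = UHU^\dagger + i\hbar\,\dot{U}U^\dagger$ is indeed time-independent:

```python
import numpy as np

# Check that U H U^dag + i*(dU/dt) U^dag is time-independent on resonance
# for the driven two-level atom (hbar = 1; values are assumptions).
omega = omega_d = 5.0
Omega = 0.1

proj_e = np.array([[0, 0], [0, 1]], dtype=complex)   # |e><e|
sig_ge = np.array([[0, 1], [0, 0]], dtype=complex)   # |g><e|

def H(t):
    drive = Omega * np.exp(1j * omega_d * t) * sig_ge
    return omega * proj_e + drive + drive.conj().T

def U(t):                       # exp(i*omega_d*t*|e><e|) is diagonal
    return np.diag([1.0 + 0j, np.exp(1j * omega_d * t)])

def dU(t):                      # its time derivative
    return np.diag([0j, 1j * omega_d * np.exp(1j * omega_d * t)])

for t in (0.0, 0.3, 1.7):       # same constant matrix at every time
    H_tilde = U(t) @ H(t) @ U(t).conj().T + 1j * dU(t) @ U(t).conj().T
    print(np.round(H_tilde.real, 10))
```

Each printed matrix equals $\Omega(|g\rangle\langle e| + |e\rangle\langle g|)$, the resonant rotating-frame Hamiltonian derived above.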
Displaced frame
The example above could also have been analyzed in the interaction picture. The following example, however, is more difficult to analyze without the general formulation of unitary transformations. Consider two harmonic oscillators $\hat{a}$ and $\hat{b}$, between which we would like to engineer a beam splitter interaction,

$$H_{BS} = \hbar g\left(\hat{a}^\dagger\hat{b} + \hat{a}\hat{b}^\dagger\right).$$

This was achieved experimentally with two microwave cavity resonators serving as $\hat{a}$ and $\hat{b}$. Below, we sketch the analysis of a simplified version of this experiment.
In addition to the microwave cavities, the experiment also involved a transmon qubit, $\hat{c}$, coupled to both modes. The qubit is driven simultaneously at two frequencies, $\omega_1$ and $\omega_2$, for which $\omega_1 - \omega_2 = \omega_b - \omega_a$, the difference of the two cavity frequencies.
In addition, there are many fourth-order terms coupling the modes, but most of them can be neglected. In this experiment, two such terms, each accompanied by its Hermitian conjugate (H.c.), become important. We can apply a displacement transformation, $U = e^{\xi(t)\hat{c}^\dagger - \xi^*(t)\hat{c}}$, to mode $\hat{c}$. For carefully chosen amplitudes $\xi(t)$, this transformation will cancel the qubit drive while also displacing the ladder operator, $\hat{c} \to \hat{c} + \xi$. Expanding the resulting expression and dropping the rapidly rotating terms, we are left with the desired Hamiltonian,

$$H_{BS} = \hbar g\left(\hat{a}^\dagger\hat{b} + \hat{a}\hat{b}^\dagger\right).$$
Relation to the Baker–Campbell–Hausdorff formula
It is common for the operators involved in unitary transformations to be written as exponentials of operators, $U = e^{X}$, as seen above. Further, the operators in the exponentials commonly obey the relation $X^\dagger = -X$, so that the transform of an operator $Y$ is $e^{X}Ye^{-X}$. By now introducing the iterated commutator,

$$[X, Y]_n \equiv \underbrace{[X, [X, \cdots [X}_{n\ \text{times}}, Y]]\cdots], \qquad [X, Y]_0 \equiv Y,$$

we can use a special result of the Baker–Campbell–Hausdorff formula to write this transformation compactly as,

$$e^{X}Ye^{-X} = \sum_{n=0}^{\infty}\frac{[X, Y]_n}{n!},$$

or, in long form for completeness,

$$e^{X}Ye^{-X} = Y + [X, Y] + \frac{1}{2!}[X, [X, Y]] + \frac{1}{3!}[X, [X, [X, Y]]] + \cdots$$
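This identity is easy to check numerically. The following minimal sketch (the random matrices and the truncation order are arbitrary assumptions) compares the exact conjugation against the truncated commutator series:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = 0.3 * rng.standard_normal((3, 3))   # scaled small for fast convergence
Y = rng.standard_normal((3, 3))

exact = expm(X) @ Y @ expm(-X)

# Accumulate sum_n [X, Y]_n / n! using the iterated commutator recursion.
series = np.zeros_like(Y)
term, fact = Y.copy(), 1.0
for n in range(12):
    series += term / fact
    term = X @ term - term @ X          # next iterated commutator [X, term]
    fact *= n + 1
print(np.max(np.abs(exact - series)))   # ~1e-12: the series matches
```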
References
Quantum mechanics | Unitary transformation (quantum mechanics) | [
"Physics"
] | 1,314 | [
"Theoretical physics",
"Quantum mechanics"
] |
58,080,546 | https://en.wikipedia.org/wiki/Sewer%20Murders | The Sewer Murders or "Sewage Plant Murders" ( or ) were an unexplained murder series of male adolescents in the Frankfurt Rhine-Main area during the 1970s and 1980s.
Victims
The killings took place from 1976 to 1983. The victims were seven boys and male adolescents aged between 11 and 18 from Frankfurt (likely Baseler Platz at the "Tivoli" arcade) or the Offenbach station area, where some of them may have worked as prostitutes and met the culprit. The boys' hands were tied behind their backs with a rope or cord, and they were then killed, apparently by blunt force. For some, however, death presumably occurred by drowning in the sewerage. Due to long submersion in the sewage and, in part, severe damage to the corpses by screw conveyors, the victims were identified relatively late, and clear signs of blunt force trauma to the head were found on only one of them.
Victim list
7 September 1976: Unidentified male (15–18 years), found in Stangenrod, Giessen. The naked corpse of a young man, only in socks, was found near a footpath in a forest between Atzenhain and Lehnheim during the military manoeuvre "Gordian Shield". The body was heavily mummified with partial skeletonization after a lying time of at least six weeks. A violent skull fracture had been found to be the probable cause of death. Since the identity of the decedent could not be clarified, the police assumed that he may have been a foreigner in transit through West Germany.
23 May 1982: Erik (17), Dreieich, Offenbach. Found in an oblique position behind an inflow. The body had significant damage, such as the right thigh being torn off, the pelvis and skull being smashed, and exposed bones. According to the autopsy report, the corpse was in an advanced state of decomposition with extensive adipocere growth. He was probably lying there for over six months, and the cause of death could no longer be determined.
19 September 1982: Bernd Michel (17–18 years), Darmstadt-Erzhausen. The collecting rake was blocked by a clothed body. Michel was probably still alive when he was thrown into a manhole, and most likely drowned. The identification of the almost unrecognizable corpse was difficult. The young man was around 17 years old and was characterized by a clear overbite. He was a prostitute in Frankfurt.
2 July 1983: Markus Hildebrandt (17 years), Darmstadt-Erzhausen. A tattooed body was discovered in the sump of the Dreieich-Buchschlag sewage plant. According to the Offenbach police, the decedent was washed ashore by a sewage pipe. His hands were handcuffed, but there were no other externally visible injuries. The tattoos on the upper arms showed different motifs and the word "Fuck". Hildebrandt came from Hanau and had been involved in the Frankfurt heroin scene since 1981. Hildebrandt, who had spent much of his youth in congregate care, was in an apprenticeship and lived a "restless life" in Frankfurt. He is said to have occasionally worked as a prostitute. He was last seen in January 1983, accompanied by three men, and allegedly claimed to be travelling to Saarbrücken.
9 September 1983: Fuad Rahou (14 years), Niederrad. The body of the 14-year-old Moroccan boy was found in the Niederrad sewage treatment plant. At first, it was assumed that Rahou had drowned accidentally or inhaled marsh gases. Only later did it become clear that he must have been murdered. Rahou had been reported missing by his parents since 1 September 1983.
11 October 1983: Oliver Tupikas (11 years), Niederrad. The youngest victim was also found in Niederrad's sewage treatment plant, probably pushed down a manhole after being murdered. Traces of legcuffs were found on the body. Oliver had run away from home and had not been seen alive since.
21 June 1989: Daniel Schaub (14 years), Offenbach-Rosenhöhe. Bones and pieces of clothing from the presumably last victim were found in a tributary of the drainage system. The teenager had been missing since 1983.
Possible motive
The criminal psychologist Rudolf Egg suggested that the culprit might be a single person, aged about 50 years, without family ties or friends. It is possible that the culprit himself had been a victim of sexual abuse and may therefore have developed a disturbed relationship with his own homosexuality or with other same-sex people. His inclinations apparently included sadistic bondage. The suspect likely moved from Giessen to Frankfurt at the end of the 1970s and lived out his fetishes in the local milieu. He was also familiar with the area and highly mobile. The fact that he threw his victims into the sewerage after violating them is probably a hint of a deep-rooted hatred.
Modus Operandi
The first murder is believed to have happened at the site where the body was discovered. Only when he resumed killing did the suspect apparently discover that throwing a dead or dying victim down the sewers was a more effective way to get rid of them. The quick disposal of the bodies allowed him to carry out his murders even within the densely populated Frankfurt area, without risk of being caught. The victims were tied up, then the killer abused them and "disposed of them like garbage". For weeks or even months, the bodies decomposed in the sewers. The dead usually remained undetected in the sewage system for a long time until they were eventually flushed into the sewage treatment plants, where they often blocked the screw pumps used to separate the solid particles. The advanced decomposition of the bodies made the identification and the clarification of the factual circumstances much more difficult for the investigation. The first victim, for instance, was identified 2.5 years after discovery.
Investigation
Horst Kropp and the "AG 229" were entrusted with the investigation of sexually motivated murders of young people. For some time, a 40-year-old storeman from Offenbach, who had previously been convicted several times of sexual offences against minors, had been the prime suspect. He was known for enticing homeless teens to his summer house in Riederwald, where he performed sadistic sex games with them. He is said to have acted very brutally during these but bribed his victims with money to keep quiet about what he did to them. Investigators found out that the prime suspect and Markus Hildebrandt had visited the same gay bars in Frankfurt. However, this was not sufficient evidence, as the traces of blood found in the summer house did not match Hildebrandt's. In the home of the suspect, who had known two of the other victims alongside Hildebrandt, police secured a gas pistol and several knives, including a butcher knife, as well as handcuffs. Due to lack of evidence, however, no charges were brought.
See also
List of fugitives from justice who disappeared
List of German serial killers
List of unsolved murders
Literature
Stephan Harbort: Mörderisches Profil: Phänomen Serienkiller, Heyne Verlag, 2006, .
References
External links
Film contribution, Kriminalreport Hessen. Abominable murder series in the Rhine-Main area
Text contribution, Kriminalreport Hessen. Abominable murder series in the Rhine-Main area
Mortuary finds in sewage treatment plant. Aktenzeichen XY, 24. February 1984 from 37:47
1976 murders in Germany
1982 murders in Germany
1983 murders in Germany
1989 murders in Germany
Crimes against sex workers
Fugitives
Serial murders in Germany
Sewerage
Unidentified serial killers
Unsolved murders in Germany
Violence against men in Europe | Sewer Murders | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,614 | [
"Sewerage",
"Environmental engineering",
"Water pollution"
] |
61,186,816 | https://en.wikipedia.org/wiki/Variational%20multiscale%20method | The variational multiscale method (VMS) is a technique used for deriving models and numerical methods for multiscale phenomena. The VMS framework has been mainly applied to design stabilized finite element methods in which stability of the standard Galerkin method is not ensured both in terms of singular perturbation and of compatibility conditions with the finite element spaces.
Stabilized methods have been receiving increasing attention in computational fluid dynamics because they are designed to overcome drawbacks typical of the standard Galerkin method: advection-dominated flow problems and problems in which an arbitrary combination of interpolation functions may yield unstable discretized formulations. The milestone of stabilized methods for this class of problems can be considered the Streamline Upwind Petrov–Galerkin (SUPG) method, designed during the 1980s for convection-dominated flows governed by the incompressible Navier–Stokes equations by Brooks and Hughes. The variational multiscale method was introduced by Hughes in 1995. Broadly speaking, VMS is a technique used to derive mathematical models and numerical methods able to capture multiscale phenomena; in fact, it is usually adopted for problems with huge scale ranges, which are separated into a number of scale groups. The main idea of the method is to design a sum decomposition of the solution as $u = \bar{u} + u'$, where $\bar{u}$ is denoted as the coarse-scale solution and is solved numerically, whereas $u'$ represents the fine-scale solution and is determined analytically, eliminating it from the problem of the coarse-scale equation.
The abstract framework
Abstract Dirichlet problem with variational formulation
Consider an open bounded domain $\Omega \subset \mathbb{R}^d$ with smooth boundary $\Gamma = \partial\Omega$, being $d \geq 1$ the number of space dimensions. Denoting with $\mathcal{L}$ a generic, second order, nonsymmetric differential operator, consider the following boundary value problem:

$$\begin{cases} \mathcal{L}u = f & \text{in } \Omega, \\ u = g & \text{on } \Gamma, \end{cases}$$

being $f : \Omega \to \mathbb{R}$ and $g : \Gamma \to \mathbb{R}$ given functions. Let $\mathcal{H}^1(\Omega)$ be the Hilbert space of square-integrable functions with square-integrable derivatives:

$$\mathcal{H}^1(\Omega) = \left\{ u \in L^2(\Omega) : \nabla u \in [L^2(\Omega)]^d \right\}.$$

Consider the trial solution space $\mathcal{V}_g$ and the weighting function space $\mathcal{V}_0$ defined as follows:

$$\mathcal{V}_g = \{ u \in \mathcal{H}^1(\Omega) : u = g \text{ on } \Gamma \}, \qquad \mathcal{V}_0 = \{ v \in \mathcal{H}^1(\Omega) : v = 0 \text{ on } \Gamma \}.$$

The variational formulation of the boundary value problem defined above reads: find $u \in \mathcal{V}_g$ such that

$$a(u, v) = (f, v) \qquad \forall v \in \mathcal{V}_0,$$

being $a(u, v)$ the bilinear form satisfying $a(u, v) = (\mathcal{L}u, v)$, $(f, v)$ a bounded linear functional on $\mathcal{V}_0$, and $(\cdot, \cdot)$ the $L^2(\Omega)$ inner product. Furthermore, the dual operator $\mathcal{L}^*$ of $\mathcal{L}$ is defined as that differential operator such that $(\mathcal{L}u, v) = (u, \mathcal{L}^*v)$.
Variational multiscale method
In the VMS approach, the function spaces are decomposed through a multiscale direct sum decomposition of both $\mathcal{V}_g$ and $\mathcal{V}_0$ into coarse and fine scale subspaces as:

$$\mathcal{V}_g = \overline{\mathcal{V}}_g \oplus \mathcal{V}'_g$$

and

$$\mathcal{V}_0 = \overline{\mathcal{V}}_0 \oplus \mathcal{V}'_0.$$

Hence, an overlapping sum decomposition is assumed for both $u$ and $v$ as:

$$u = \bar{u} + u', \qquad v = \bar{v} + v',$$

where $\bar{u}$ represents the coarse (resolvable) scales and $u'$ the fine (subgrid) scales, with $\bar{u} \in \overline{\mathcal{V}}_g$, $u' \in \mathcal{V}'_g$, $\bar{v} \in \overline{\mathcal{V}}_0$, and $v' \in \mathcal{V}'_0$. In particular, the fine scale functions are assumed to vanish on the boundary:

$$u' = v' = 0 \quad \text{on } \Gamma.$$

With this in mind, the variational form can be rewritten as

$$a(\bar{u} + u', \bar{v} + v') = (f, \bar{v} + v')$$

and, by using the bilinearity of $a(\cdot, \cdot)$ and the linearity of $(f, \cdot)$,

$$a(\bar{u} + u', \bar{v}) + a(\bar{u} + u', v') = (f, \bar{v}) + (f, v').$$

The last equation yields a coarse scale and a fine scale problem:

$$\begin{cases} a(\bar{u} + u', \bar{v}) = (f, \bar{v}) & \forall \bar{v} \in \overline{\mathcal{V}}_0, \\ a(\bar{u} + u', v') = (f, v') & \forall v' \in \mathcal{V}'_0, \end{cases}$$

or, equivalently, considering that $a(u, v) = (\mathcal{L}u, v)$:

$$\begin{cases} a(\bar{u}, \bar{v}) + (\mathcal{L}u', \bar{v}) = (f, \bar{v}) & \forall \bar{v} \in \overline{\mathcal{V}}_0, \\ (\mathcal{L}u', v') = -(\mathcal{L}\bar{u} - f, v') & \forall v' \in \mathcal{V}'_0. \end{cases}$$

By rearranging the second problem, the corresponding Euler–Lagrange equation reads:

$$\mathcal{L}u' = -(\mathcal{L}\bar{u} - f) \quad \text{in } \Omega,$$

which shows that the fine scale solution $u'$ depends on $\mathcal{L}\bar{u} - f$, the strong residual of the coarse scale equation. The fine scale solution can be expressed in terms of the residual through the Green's function $G(x, y)$:

$$u'(y) = -\int_\Omega G(x, y)\,\big(\mathcal{L}\bar{u} - f\big)(x)\, d\Omega_x \qquad \forall y \in \Omega.$$
Let $\delta$ denote the Dirac delta function; by definition, the Green's function is found by solving

$$\mathcal{L}^*G(x, y) = \delta(x - y), \qquad G = 0 \text{ on } \Gamma.$$

Moreover, it is possible to express $u'$ in terms of a new differential operator $\mathcal{M}$ that approximates the operator $-\mathcal{L}^{-1}$, as

$$u' = \mathcal{M}(\mathcal{L}\bar{u} - f),$$

with $\mathcal{M} \approx -\mathcal{L}^{-1}$. In order to eliminate the explicit dependence of the coarse scale equation on the sub-grid scale terms, considering the definition of the dual operator, the last expression can be substituted in the second term of the coarse scale equation:

$$a(\bar{u}, \bar{v}) + \big(\mathcal{M}(\mathcal{L}\bar{u} - f),\ \mathcal{L}^*\bar{v}\big) = (f, \bar{v}) \qquad \forall \bar{v} \in \overline{\mathcal{V}}_0.$$

Since $\mathcal{M}$ is an approximation of $-\mathcal{L}^{-1}$, the variational multiscale formulation will consist in finding an approximate solution $\tilde{\bar{u}}$ instead of $\bar{u}$. The coarse problem is therefore rewritten as: find $\tilde{\bar{u}} \in \overline{\mathcal{V}}_g$ such that

$$a(\tilde{\bar{u}}, \bar{v}) + \big(\mathcal{M}(\mathcal{L}\tilde{\bar{u}} - f),\ \mathcal{L}^*\bar{v}\big) = (f, \bar{v}) \qquad \forall \bar{v} \in \overline{\mathcal{V}}_0.$$

Introducing the form

$$B(\tilde{\bar{u}}, \bar{v}) = a(\tilde{\bar{u}}, \bar{v}) + \big(\mathcal{M}\mathcal{L}\tilde{\bar{u}},\ \mathcal{L}^*\bar{v}\big)$$

and the functional

$$L(\bar{v}) = (f, \bar{v}) + (\mathcal{M}f,\ \mathcal{L}^*\bar{v}),$$

the VMS formulation of the coarse scale equation is rearranged as: find $\tilde{\bar{u}} \in \overline{\mathcal{V}}_g$ such that

$$B(\tilde{\bar{u}}, \bar{v}) = L(\bar{v}) \qquad \forall \bar{v} \in \overline{\mathcal{V}}_0.$$

Since commonly it is not possible to determine both $\mathcal{M}$ and $G$ exactly, one usually adopts an approximation. In this sense, the coarse scale spaces $\overline{\mathcal{V}}_g$ and $\overline{\mathcal{V}}_0$ are chosen as finite dimensional spaces of functions as:

$$\overline{\mathcal{V}}_g \equiv \mathcal{V}^h_g = \{ u^h \in X^h_r : u^h = g \text{ on } \Gamma \}$$

and

$$\overline{\mathcal{V}}_0 \equiv \mathcal{V}^h_0 = \{ v^h \in X^h_r : v^h = 0 \text{ on } \Gamma \},$$

being $X^h_r$ the finite element space of Lagrangian polynomials of degree $r \geq 1$ over the mesh built in $\Omega$. Note that $\mathcal{V}'_g$ and $\mathcal{V}'_0$ are infinite-dimensional spaces, while $\mathcal{V}^h_g$ and $\mathcal{V}^h_0$ are finite-dimensional spaces.

Let $u^h$ and $v^h$ be respectively approximations of $\bar{u}$ and $\bar{v}$, and let $\tilde{B}$ and $\tilde{L}$ be respectively approximations of $B$ and $L$. The VMS problem with finite element approximation reads: find $u^h \in \mathcal{V}^h_g$ such that

$$\tilde{B}(u^h, v^h) = \tilde{L}(v^h) \qquad \forall v^h \in \mathcal{V}^h_0,$$

or, equivalently:

$$a(u^h, v^h) + \big(\tilde{\mathcal{M}}(\mathcal{L}u^h - f),\ \mathcal{L}^*v^h\big) = (f, v^h) \qquad \forall v^h \in \mathcal{V}^h_0.$$
VMS and stabilized methods
Consider an advection–diffusion problem:

$$\begin{cases} -\mu\Delta u + \boldsymbol{b}\cdot\nabla u = f & \text{in } \Omega, \\ u = 0 & \text{on } \Gamma, \end{cases}$$

where $\mu \in \mathbb{R}$ is the diffusion coefficient, with $\mu > 0$, and $\boldsymbol{b} \in \mathbb{R}^d$ is a given advection field. Let $\mathcal{V} = \mathcal{H}^1_0(\Omega)$, $f \in L^2(\Omega)$ and $\boldsymbol{b} \in [L^\infty(\Omega)]^d$, with $\nabla\cdot\boldsymbol{b} = 0$. Let $\mathcal{L}u = -\mu\Delta u + \boldsymbol{b}\cdot\nabla u$, being $\mathcal{L}_{\text{diff}}\,u = -\mu\Delta u$ and $\mathcal{L}_{\text{adv}}\,u = \boldsymbol{b}\cdot\nabla u$.

The variational form of the problem above reads: find $u \in \mathcal{V}$ such that

$$a(u, v) = (f, v) \qquad \forall v \in \mathcal{V},$$

being

$$a(u, v) = (\mu\nabla u, \nabla v) + (\boldsymbol{b}\cdot\nabla u, v).$$

Consider a finite element approximation in space of the problem above by introducing the space $\mathcal{V}_h \subset \mathcal{V}$ over a grid made of $N_e$ elements $\Omega_k$, with $\Omega = \bigcup_{k=1}^{N_e}\Omega_k$.

The standard Galerkin formulation of this problem reads: find $u_h \in \mathcal{V}_h$ such that

$$a(u_h, v_h) = (f, v_h) \qquad \forall v_h \in \mathcal{V}_h.$$

Consider a strongly consistent stabilization method of the problem above in a finite element framework: find $u_h \in \mathcal{V}_h$ such that

$$a(u_h, v_h) + \mathcal{L}_h(u_h, f; v_h) = (f, v_h) \qquad \forall v_h \in \mathcal{V}_h$$

for a suitable form $\mathcal{L}_h$ that satisfies the consistency condition

$$\mathcal{L}_h(u, f; v_h) = 0 \qquad \forall v_h \in \mathcal{V}_h.$$

The form $\mathcal{L}_h$ can be expressed as

$$\mathcal{L}_h(u_h, f; v_h) = \sum_{k=1}^{N_e}\big(\mathcal{L}u_h - f,\ \tau\,\mathbb{L}v_h\big)_{\Omega_k},$$

being $\mathbb{L}$ a differential operator such as:

$$\mathbb{L} = \begin{cases} \mathcal{L}_{\text{adv}} & \text{SUPG method}, \\ \mathcal{L} & \text{Galerkin/least squares (GLS) method}, \\ -\mathcal{L}^* & \text{multiscale method}, \end{cases}$$

and $\tau$ is the stabilization parameter. A stabilized method with $\mathbb{L} = -\mathcal{L}^*$ is typically referred to as a multiscale stabilized method. In 1995, Thomas J.R. Hughes showed that a stabilized method of multiscale type can be viewed as a sub-grid scale model where the stabilization parameter is equal to

$$\tau = -\tilde{\mathcal{M}}$$

or, in terms of the Green's function, taken elementwise constant,

$$\tilde{G}(x, y) = \tau\,\delta(x - y),$$

which yields the following definition of the fine scale solution:

$$u' = -\tau\,(\mathcal{L}u_h - f).$$
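A minimal one-dimensional sketch can make the role of $\tau$ concrete. The following code (problem data are illustrative assumptions) assembles linear finite elements for $-\mu u'' + b\,u' = 0$ on $(0, 1)$ and adds the SUPG term with the classical "optimal" $\tau$; note that for linear elements the diffusive part of the residual vanishes element-wise, so the stabilization reduces to an added diffusion-like matrix.

```python
import numpy as np

# 1-D SUPG-stabilized linear FEM for -mu u'' + b u' = 0, u(0)=0, u(1)=1.
# Values of mu, b and the mesh size are illustrative assumptions.
mu, b, n = 0.005, 1.0, 20
h = 1.0 / n
Pe = b * h / (2.0 * mu)                                # element Peclet number
tau = h / (2.0 * b) * (1.0 / np.tanh(Pe) - 1.0 / Pe)   # "optimal" 1-D tau

A = np.zeros((n + 1, n + 1))
for e in range(n):                                     # element assembly
    i, j = e, e + 1
    # Galerkin terms for linear elements: diffusion + advection
    K = mu / h * np.array([[1, -1], [-1, 1]]) \
        + b / 2.0 * np.array([[-1, 1], [-1, 1]])
    # SUPG term tau*(b v')(b u'): an added diffusion-like contribution,
    # since the Laplacian of a linear element vanishes element-wise
    K += tau * b * b / h * np.array([[1, -1], [-1, 1]])
    A[np.ix_([i, j], [i, j])] += K

rhs = np.zeros(n + 1)                                  # Dirichlet conditions
A[0, :], A[0, 0], rhs[0] = 0.0, 1.0, 0.0
A[-1, :], A[-1, -1], rhs[-1] = 0.0, 1.0, 1.0
u = np.linalg.solve(A, rhs)
print(u)   # oscillation-free boundary-layer profile (nodally exact in 1-D)
```

Without the SUPG matrix (set `tau = 0`) the same assembly produces the spurious node-to-node oscillations typical of the Galerkin method at high element Péclet numbers.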
Stabilization Parameter Properties
For the 1-d advection–diffusion problem, with an appropriate choice of basis functions and of $\tau$, VMS provides a projection in the approximation space. Further, an adjoint-based expression for the stabilization parameter can be derived, in which $\tau_k$ denotes the element-wise stabilization parameter and $R_k$ the element-wise residual, and in which the weighting function is obtained from the solution of an adjoint problem. In fact, one can show that the $\tau_k$ thus calculated allows one to compute the linear functional of interest exactly.
VMS turbulence modeling for large-eddy simulations of incompressible flows
The idea of VMS turbulence modeling for large eddy simulation (LES) of the incompressible Navier–Stokes equations was introduced by Hughes et al. in 2000; the main idea was to use variational projections in place of the classical filtering techniques.
Incompressible Navier–Stokes equations
Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density $\rho$ in a domain $\Omega \subset \mathbb{R}^3$ with boundary $\partial\Omega = \Gamma_D \cup \Gamma_N$, being $\Gamma_D$ and $\Gamma_N$ portions of the boundary where respectively a Dirichlet and a Neumann boundary condition is applied ($\Gamma_D \cap \Gamma_N = \emptyset$):

$$\begin{cases}
\rho\,\dfrac{\partial\boldsymbol{u}}{\partial t} + \rho\,(\boldsymbol{u}\cdot\nabla)\,\boldsymbol{u} - \nabla\cdot\boldsymbol{\sigma}(\boldsymbol{u}, p) = \boldsymbol{f} & \text{in } \Omega\times(0, T), \\
\nabla\cdot\boldsymbol{u} = 0 & \text{in } \Omega\times(0, T), \\
\boldsymbol{u} = \boldsymbol{g} & \text{on } \Gamma_D\times(0, T), \\
\boldsymbol{\sigma}(\boldsymbol{u}, p)\,\hat{\boldsymbol{n}} = \boldsymbol{h} & \text{on } \Gamma_N\times(0, T), \\
\boldsymbol{u}(0) = \boldsymbol{u}_0 & \text{in } \Omega\times\{0\},
\end{cases}$$

being $\boldsymbol{u}$ the fluid velocity, $p$ the fluid pressure, $\boldsymbol{f}$ a given forcing term, $\hat{\boldsymbol{n}}$ the outward directed unit normal vector to $\Gamma_N$, and $\boldsymbol{\sigma}(\boldsymbol{u}, p)$ the viscous stress tensor defined as:

$$\boldsymbol{\sigma}(\boldsymbol{u}, p) = -p\,\boldsymbol{I} + 2\mu\,\boldsymbol{\epsilon}(\boldsymbol{u}).$$

Let $\mu$ be the dynamic viscosity of the fluid, $\boldsymbol{I}$ the second order identity tensor and $\boldsymbol{\epsilon}(\boldsymbol{u})$ the strain-rate tensor defined as:

$$\boldsymbol{\epsilon}(\boldsymbol{u}) = \frac{1}{2}\left(\nabla\boldsymbol{u} + \nabla\boldsymbol{u}^T\right).$$

The functions $\boldsymbol{g}$ and $\boldsymbol{h}$ are given Dirichlet and Neumann boundary data, while $\boldsymbol{u}_0$ is the initial condition.

Global space time variational formulation

In order to find a variational formulation of the Navier–Stokes equations, consider the following infinite-dimensional spaces:

$$\mathcal{V}_g = \{\boldsymbol{u} \in [\mathcal{H}^1(\Omega)]^3 : \boldsymbol{u} = \boldsymbol{g} \text{ on } \Gamma_D\}, \quad \mathcal{V}_0 = \{\boldsymbol{u} \in [\mathcal{H}^1(\Omega)]^3 : \boldsymbol{u} = \boldsymbol{0} \text{ on } \Gamma_D\}, \quad \mathcal{Q} = L^2(\Omega).$$

Furthermore, let $\boldsymbol{\mathcal{V}} = \mathcal{V}_g \times \mathcal{Q}$ and $\boldsymbol{\mathcal{W}} = \mathcal{V}_0 \times \mathcal{Q}$. The weak form of the unsteady-incompressible Navier–Stokes equations reads: given $\boldsymbol{u}_0$, for all $t \in (0, T)$ find $(\boldsymbol{u}, p) \in \boldsymbol{\mathcal{V}}$ such that

$$\left(\boldsymbol{v}, \rho\frac{\partial\boldsymbol{u}}{\partial t}\right) + a(\boldsymbol{v}, \boldsymbol{u}) + c(\boldsymbol{v}, \boldsymbol{u}, \boldsymbol{u}) - b(\boldsymbol{v}, p) + b(\boldsymbol{u}, q) = (\boldsymbol{v}, \boldsymbol{f}) + (\boldsymbol{v}, \boldsymbol{h})_{\Gamma_N} \qquad \forall(\boldsymbol{v}, q) \in \boldsymbol{\mathcal{W}},$$

where $(\cdot, \cdot)$ represents the $L^2(\Omega)$ inner product and $(\cdot, \cdot)_{\Gamma_N}$ the $L^2(\Gamma_N)$ inner product. Moreover, the bilinear forms $a(\cdot, \cdot)$, $b(\cdot, \cdot)$ and the trilinear form $c(\cdot, \cdot, \cdot)$ are defined as follows:

$$a(\boldsymbol{v}, \boldsymbol{u}) = \big(\nabla\boldsymbol{v},\ 2\mu\,\boldsymbol{\epsilon}(\boldsymbol{u})\big), \qquad b(\boldsymbol{v}, q) = (\nabla\cdot\boldsymbol{v},\ q), \qquad c(\boldsymbol{v}, \boldsymbol{u}, \boldsymbol{w}) = \big(\boldsymbol{v},\ \rho\,(\boldsymbol{u}\cdot\nabla)\,\boldsymbol{w}\big).$$
Finite element method for space discretization and VMS-LES modeling
In order to discretize in space the Navier–Stokes equations, consider the finite element function space

$$X^h_r = \{ u^h \in C^0(\overline{\Omega}) : u^h|_k \in \mathbb{P}_r \ \forall k \in \mathcal{T}_h \}$$

of piecewise Lagrangian polynomials of degree $r \geq 1$ over the domain $\Omega$ triangulated with a mesh $\mathcal{T}_h$ made of tetrahedrons of diameters $h_k$, $\forall k \in \mathcal{T}_h$.

Following the approach shown above, let us introduce a multiscale direct-sum decomposition of the space $\boldsymbol{\mathcal{V}}$ (which represents either $\boldsymbol{\mathcal{V}}$ or $\boldsymbol{\mathcal{W}}$):

$$\boldsymbol{\mathcal{V}} = \boldsymbol{\mathcal{V}}^h \oplus \boldsymbol{\mathcal{V}}',$$

being

$$\boldsymbol{\mathcal{V}}^h = \mathcal{V}^h_g \times \mathcal{Q}^h$$

the finite dimensional function space associated to the coarse scale, and

$$\boldsymbol{\mathcal{V}}' = \mathcal{V}'_g \times \mathcal{Q}'$$

the infinite-dimensional fine scale function space, with

$$\mathcal{V}^h_g = [X^h_r]^3 \cap \mathcal{V}_g,$$

and

$$\mathcal{Q}^h = X^h_r \cap \mathcal{Q}.$$

An overlapping sum decomposition is then defined as:

$$\boldsymbol{u} = \boldsymbol{u}^h + \boldsymbol{u}', \quad p = p^h + p', \qquad \boldsymbol{v} = \boldsymbol{v}^h + \boldsymbol{v}', \quad q = q^h + q'.$$
By using the decomposition above in the variational form of the Navier–Stokes equations, one gets a coarse and a fine scale equation; the fine scale terms appearing in the coarse scale equation are integrated by parts and the fine scale variables are modeled as:

$$\boldsymbol{u}' \approx -\tau_M(\boldsymbol{u}^h)\,\boldsymbol{r}_M(\boldsymbol{u}^h, p^h), \qquad p' \approx -\tau_C(\boldsymbol{u}^h)\,r_C(\boldsymbol{u}^h).$$

In the expressions above, $\boldsymbol{r}_M(\boldsymbol{u}^h, p^h)$ and $r_C(\boldsymbol{u}^h)$ are the residuals of the momentum equation and continuity equation in strong forms defined as:

$$\boldsymbol{r}_M(\boldsymbol{u}^h, p^h) = \rho\frac{\partial\boldsymbol{u}^h}{\partial t} + \rho\,(\boldsymbol{u}^h\cdot\nabla)\,\boldsymbol{u}^h + \nabla p^h - \mu\Delta\boldsymbol{u}^h - \boldsymbol{f}, \qquad r_C(\boldsymbol{u}^h) = \nabla\cdot\boldsymbol{u}^h,$$

while the stabilization parameters are set equal to:

$$\tau_M(\boldsymbol{u}^h) = \left(\frac{\sigma^2\rho^2}{\Delta t^2} + \frac{\rho^2}{h_k^2}|\boldsymbol{u}^h|^2 + \frac{\mu^2}{h_k^4}C_r\right)^{-1/2}, \qquad \tau_C(\boldsymbol{u}^h) = \frac{h_k^2}{\tau_M(\boldsymbol{u}^h)},$$

where $C_r$ is a constant depending on the polynomials' degree $r$, $\sigma$ is a constant equal to the order of the backward differentiation formula (BDF) adopted as temporal integration scheme and $\Delta t$ is the time step. The semi-discrete variational multiscale formulation (VMS-LES) of the incompressible Navier–Stokes equations reads: given $\boldsymbol{u}_0$, find $\boldsymbol{U}^h = \{\boldsymbol{u}^h, p^h\}$ such that

$$A(\boldsymbol{V}^h; \boldsymbol{U}^h) = F(\boldsymbol{V}^h) \qquad \forall\,\boldsymbol{V}^h = \{\boldsymbol{v}^h, q^h\},$$

being

$$A(\boldsymbol{V}^h; \boldsymbol{U}^h) = A^{NS}(\boldsymbol{V}^h; \boldsymbol{U}^h) + A^{VMS}(\boldsymbol{V}^h; \boldsymbol{U}^h)$$

and

$$F(\boldsymbol{V}^h) = (\boldsymbol{v}^h, \boldsymbol{f}) + (\boldsymbol{v}^h, \boldsymbol{h})_{\Gamma_N}.$$

The forms $A^{NS}$ and $A^{VMS}$ are defined as:

$$A^{NS}(\boldsymbol{V}^h; \boldsymbol{U}^h) = \left(\boldsymbol{v}^h, \rho\frac{\partial\boldsymbol{u}^h}{\partial t}\right) + a(\boldsymbol{v}^h, \boldsymbol{u}^h) + c(\boldsymbol{v}^h, \boldsymbol{u}^h, \boldsymbol{u}^h) - b(\boldsymbol{v}^h, p^h) + b(\boldsymbol{u}^h, q^h),$$

$$A^{VMS}(\boldsymbol{V}^h; \boldsymbol{U}^h) = \big(\rho\,(\boldsymbol{u}^h\cdot\nabla)\,\boldsymbol{v}^h + \nabla q^h,\ \tau_M\,\boldsymbol{r}_M\big) + \big(\nabla\cdot\boldsymbol{v}^h,\ \tau_C\,r_C\big) + \big(\rho\,\boldsymbol{u}^h\cdot(\nabla\boldsymbol{v}^h)^T,\ \tau_M\,\boldsymbol{r}_M\big) - \big(\nabla\boldsymbol{v}^h,\ \rho\,\tau_M\,\boldsymbol{r}_M \otimes \tau_M\,\boldsymbol{r}_M\big),$$

with the inner products in $A^{VMS}$ intended elementwise and summed over the mesh elements.
From the expressions above, one can see that:
the form $A^{NS}$ contains the standard terms of the Navier–Stokes equations in variational formulation;
the form $A^{VMS}$ contains four terms:
the first term is the classical SUPG stabilization term;
the second term represents a stabilization term additional to the SUPG one;
the third term is a stabilization term typical of the VMS modeling;
the fourth term is peculiar of the LES modeling, describing the Reynolds cross-stress.
See also
Navier–Stokes equations
Large eddy simulation
Finite element method
Backward differentiation formula
Computational fluid dynamics
Streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations
References
Mathematical modeling
Numerical analysis
Computational fluid dynamics | Variational multiscale method | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,893 | [
"Mathematical modeling",
"Computational fluid dynamics",
"Applied mathematics",
"Computational mathematics",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Approximations",
"Fluid dynamics"
] |
61,186,830 | https://en.wikipedia.org/wiki/Reinforced%20concrete%20structures%20durability | The durability design of reinforced concrete structures has been recently introduced in national and international regulations. It is required that structures are designed to preserve their characteristics during the service life, avoiding premature failure and the need of extraordinary maintenance and restoration works. Considerable efforts have therefore made in the last decades in order to define useful models describing the degradation processes affecting reinforced concrete structures, to be used during the design stage in order to assess the material characteristics and the structural layout of the structure.
Service life of a reinforced concrete structure
Initially, the chemical reactions that normally occur in the cement paste, generate an alkaline environment, bringing the solution in the cement paste pores to pH values around 13. In these conditions, passivation of steel rebar occurs, due to a spontaneous generation of a thin film of oxides able to protect the steel from corrosion. Over time, the thin film can be damaged, and corrosion of steel rebar starts. The corrosion of steel rebar is one of the main causes of premature failure of reinforced concrete structures worldwide, mainly as a consequence of two degradation processes, carbonation and penetration of chlorides. With regard to the corrosion degradation process, a simple and accredited model for the assessment of the service life is the one proposed by Tuutti, in 1982. According to this model, the service life of a reinforced concrete structure can be divided into two distinct phases.
$t_i$, initiation time: from the moment the structure is built to the moment corrosion initiates on the steel rebar. More in particular, it is the time required for aggressive agents (carbon dioxide and chlorides) to penetrate the concrete cover thickness, reach the embedded steel rebar, alter the initial passivation condition on the steel surface and cause corrosion initiation.
$t_p$, propagation time: defined as the time from the onset of active corrosion until an ultimate limit state is reached, i.e. corrosion propagation reaches a limit value corresponding to unacceptable structural damage, such as cracking and detachment of the concrete cover thickness.
The identification of initiation time and propagation time is useful to further identify the main variables and processes influencing the service life of the structure which are specific of each service life phase and of the degradation process considered.
Carbonation-induced corrosion
The initiation time is related to the rate at which carbonation propagates in the concrete cover thickness. Once carbonation reaches the steel surface, altering the local pH value of the environment, the protective thin film of oxides on the steel surface becomes unstable, and corrosion initiates, involving an extended portion of the steel surface. One of the most simplified and accredited models describing the propagation of carbonation in time considers the penetration depth proportional to the square root of time, following the correlation

$$x = K\sqrt{t},$$

where $x$ is the carbonation depth, $t$ is time, and $K$ is the carbonation coefficient. The corrosion onset takes place when the carbonation depth reaches the concrete cover thickness, and the initiation time can therefore be evaluated as

$$t_i = \left(\frac{c}{K}\right)^2,$$

where $c$ is the concrete cover thickness.
$K$ is the key design parameter to assess initiation time in the case of carbonation-induced corrosion. It is expressed in mm/year$^{1/2}$ and depends on the characteristics of concrete and the exposure conditions. The penetration of gaseous CO2 in a porous medium such as concrete occurs via diffusion. The humidity content of concrete is one of the main influencing factors of CO2 diffusion in concrete. If concrete pores are completely and permanently saturated (for instance in submerged structures) CO2 diffusion is prevented. On the other hand, for completely dry concrete, the chemical reaction of carbonation cannot occur. Another influencing factor for the CO2 diffusion rate is concrete porosity. Concrete obtained with a higher w/c ratio, or with an incorrect curing process, presents higher porosity in the hardened state and is therefore subject to a higher carbonation rate. The influencing factors concerning the exposure conditions are the environmental temperature, humidity and concentration of CO2. The carbonation rate is higher for environments with higher humidity and temperature, and increases in polluted environments such as urban centres and inside closed spaces such as tunnels.
To evaluate the propagation time in the case of carbonation-induced corrosion, several models have been proposed. In a simplified but commonly accepted method, the propagation time is evaluated as a function of the corrosion propagation rate. If the corrosion rate is considered constant, $t_p$ can be estimated as:

$$t_p = \frac{x_{\lim}}{v_{corr}},$$

where $x_{\lim}$ is the limit corrosion penetration in steel and $v_{corr}$ is the corrosion propagation rate.

$x_{\lim}$ must be defined as a function of the limit state considered. Generally, for carbonation-induced corrosion, cracking of the concrete cover is considered as the limit state, and in this case an $x_{\lim}$ equal to 100 μm is considered. $v_{corr}$ depends on the environmental factors in proximity of the corrosion process, such as the availability of oxygen and water at concrete cover depth. Oxygen is generally available at the steel surface, except for submerged structures. If pores are constantly fully saturated, a very low amount of oxygen reaches the steel surface and the corrosion rate can be considered negligible. For very dry concretes $v_{corr}$ is negligible due to the absence of water, which prevents the chemical reaction of corrosion. For intermediate concrete humidity content, the corrosion rate increases with increasing humidity content. Since the humidity content in concrete can vary significantly over the year, it is generally not possible to define a constant $v_{corr}$. One possible approach is to consider a mean annual value of $v_{corr}$.
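A minimal numerical sketch of this two-phase service life model follows; the cover depth, carbonation coefficient, limit penetration and corrosion rate are illustrative assumptions, not design values.

```python
# Service-life sketch for carbonation-induced corrosion
# (all input values are illustrative assumptions, not design values).
c = 30.0          # concrete cover thickness, mm
K = 5.0           # carbonation coefficient, mm/year^0.5
x_lim = 0.1       # limit corrosion penetration, mm (100 um, cover cracking)
v_corr = 0.01     # mean annual corrosion propagation rate, mm/year

t_i = (c / K) ** 2        # initiation: carbonation front reaches the rebar
t_p = x_lim / v_corr      # propagation at constant corrosion rate
print(f"initiation  t_i = {t_i:.0f} years")   # 36 years
print(f"propagation t_p = {t_p:.0f} years")   # 10 years
print(f"service life    = {t_i + t_p:.0f} years")
```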
Chloride-induced corrosion
The presence of chlorides at the steel surface, above a certain critical amount, can locally break the protective thin film of oxides on the steel surface, even if the concrete is still alkaline, causing a very localized and aggressive form of corrosion known as pitting. Current regulations forbid the use of chloride-contaminated raw materials, therefore one factor influencing the initiation time is the chloride penetration rate from the environment. This is a complex task, because chloride solutions penetrate concrete through the combination of several transport phenomena, such as diffusion, capillary effect and hydrostatic pressure. Chloride binding is another phenomenon affecting the kinetics of chloride penetration. Part of the total chloride ions can be absorbed or can chemically react with some constituents of the cement paste, leading to a reduction of chlorides in the pore solution (free chlorides, which are still able to penetrate the concrete). The ability of a concrete to bind chlorides is related to the cement type, being higher for blended cements containing silica fume, fly ash or furnace slag.
Being the modelling of chloride penetration in concrete particularly complex, a simplified correlation is generally adopted, which was first proposed by Collepardi in 1972:

$$C(x, t) = C_s\left[1 - \operatorname{erf}\left(\frac{x}{2\sqrt{D\,t}}\right)\right],$$

where $C_s$ is the chloride concentration at the exposed surface, $x$ is the chloride penetration depth, $D$ is the chloride diffusion coefficient, and $t$ is time.

This equation is a solution of Fick's second law of diffusion under the hypotheses that the initial chloride content is zero, that $C_s$ is constant in time on the whole surface, and that $D$ is constant in time and through the concrete cover. With $C_s$ and $D$ known, the equation can be used to evaluate the temporal evolution of the chloride concentration profile in the concrete cover and to evaluate the initiation time as the moment in which the critical chloride threshold ($C_{cr}$) is reached at the depth of the steel rebar.
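As a minimal sketch of how the initiation time follows from this correlation (surface content, threshold, diffusion coefficient and cover depth are illustrative assumptions), one can solve $C(x, t_i) = C_{cr}$ for $t_i$ by bisection:

```python
import math

# Initiation time from C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).
# All input values are illustrative assumptions.
Cs = 0.50     # surface chloride content, % by mass of cement
Ccr = 0.10    # critical chloride threshold, % by mass of cement
D = 15.0      # apparent chloride diffusion coefficient, mm^2/year
x = 45.0      # concrete cover thickness, mm

def C(t):     # chloride content at rebar depth after t years
    return Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

lo, hi = 1e-6, 1000.0          # bracket the initiation time, years
for _ in range(100):           # bisection: C(t) is increasing in t
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if C(mid) < Ccr else (lo, mid)
print(f"initiation time ~ {0.5 * (lo + hi):.1f} years")   # ~41 years
```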
However, there are many critical issues related to the practical use of this model. For existing reinforced concrete structures in chloride-bearing environments, $C_s$ and $D$ can be identified by calculating the best-fit curve for measured chloride concentration profiles. From concrete samples retrieved in the field it is therefore possible to define the values of $C_s$ and $D$ for the evaluation of residual service life.
On the other hand, for new structures it is more complicated to define $C_s$ and $D$. These parameters depend on the exposure conditions, the properties of the concrete such as porosity (and therefore the w/c ratio and curing process) and the type of cement used. Furthermore, for the evaluation of the long-term behaviour of the structure, a critical issue is that $C_s$ and $D$ cannot be considered constant in time, and that chloride penetration can be considered as pure diffusion only for submerged structures.
A further issue is the assessment of the critical chloride threshold $C_{cr}$. There are various influencing factors, such as the potential of the steel rebar and the pH of the solution in the concrete pores. Moreover, pitting corrosion initiation is a phenomenon with a stochastic nature, therefore $C_{cr}$ too can be defined only on a statistical basis.
Corrosion prevention
The durability assessment has been implemented in European design codes since the beginning of the 1990s. Designers are required to include the effects of long-term corrosion of steel rebar during the design stage, in order to avoid unacceptable damage during the service life of the structure. Different approaches are then available for durability design.
Standard approach
It is the standardized method to deal with durability, also known as deem-to-satisfy approach, and provided by current european regulation EN 206. It is required that the designer identifies the environmental exposure conditions and the expected degradation process, assessing the correct exposure class. Once this is defined, design code gives standard prescriptions for w/c ratio, the cement content, and the thickness of the concrete cover.
This approach represents an improvement in the durability design of reinforced concrete structures; it is suitable for the design of ordinary structures built with traditional materials (Portland cement, carbon steel rebar) and with an expected service life of 50 years. Nevertheless, it is considered not completely exhaustive in some cases. The simple prescriptions do not allow the design to be optimized for different parts of the structure with different local exposure conditions, nor do they allow the effects on service life of special measures, such as additional protections, to be considered.
Performance-based approach
Performance-based approaches provide for a real design of durability, based on models describing the evolution in time of degradation processes, and the definition of times at which defined limit states will be reached. To consider the wide variety of service life influencing factors and their variability, performance-based approaches address the problem from a probabilistic or semiprobabilistic point of view.
The performance-based service life model proposed by the European project DuraCrete, and by the FIB Model Code for Service Life Design, is based on a probabilistic approach, similar to the one adopted for structural design. Environmental factors are considered as loads S(t), while material properties such as chloride penetration resistance are considered as resistances R(t). For each degradation process, design equations are set up to evaluate the probability of failure of predefined performances of the structure, where the acceptable probability is selected on the basis of the limit state considered. The degradation processes are still described with the models previously defined for carbonation-induced and chloride-induced corrosion, but to reflect the statistical nature of the problem, the variables are considered as probability distribution curves over time. To assess some of the durability design parameters, the use of accelerated laboratory tests is suggested, such as the so-called Rapid Chloride Migration Test to evaluate the chloride penetration resistance of concrete. Through the application of corrective parameters, the long-term behaviour of the structure in real exposure conditions may be evaluated.
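A minimal sketch of such a probabilistic check follows: the cover depth and diffusion coefficient are sampled from assumed distributions, and the fraction of samples whose chloride initiation time falls short of the design life estimates the probability of failure. All distributions and values are illustrative assumptions.

```python
import math, random

# Monte Carlo sketch of a performance-based durability check.
# Distributions and values are illustrative assumptions.
random.seed(1)
design_life = 50.0     # years
Cs, Ccr = 0.50, 0.10   # surface and critical chloride content
z = 0.9062             # erf^-1(1 - Ccr/Cs) = erf^-1(0.8), precomputed

failures, N = 0, 100_000
for _ in range(N):
    cover = random.gauss(45.0, 5.0)                   # mm
    D = random.lognormvariate(math.log(15.0), 0.3)    # mm^2/year
    t_i = (cover / (2.0 * z)) ** 2 / D   # initiation time from erf solution
    if t_i < design_life:
        failures += 1
print(f"probability of corrosion initiation before {design_life:.0f} y: "
      f"{failures / N:.3f}")
```

The computed fraction would then be compared against the acceptable probability associated with the chosen limit state.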
The use of probabilistic service life models allows a real durability design to be carried out at the design stage of structures. This approach is of particular interest when an extended service life is required (>50 years) or when the environmental exposure conditions are particularly aggressive. However, the applicability of this kind of model is still limited. The main critical issues concern, for instance, the identification of accelerated laboratory tests able to characterize concrete performance, reliable corrective factors to be used for the evaluation of long-term durability performance, and the validation of these models against real long-term durability performance.
See also
Concrete
Concrete degradation
Reinforced concrete
References
Cement
Concrete
Corrosion
Concrete buildings and structures
Reinforced concrete
Structural engineering | Reinforced concrete structures durability | [
"Chemistry",
"Materials_science",
"Engineering"
] | 2,381 | [
"Structural engineering",
"Metallurgy",
"Corrosion",
"Construction",
"Electrochemistry",
"Civil engineering",
"Concrete",
"Materials degradation"
] |
61,190,532 | https://en.wikipedia.org/wiki/C4H7BrO2 | {{DISPLAYTITLE:C4H7BrO2}}
The molecular formula C4H7BrO2 (molar mass: 167.002 g/mol, exact mass: 165.9629 u) may refer to:
2-Bromobutyric acid
Ethyl bromoacetate
Molecular formulas | C4H7BrO2 | [
"Physics",
"Chemistry"
] | 68 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
76,382,714 | https://en.wikipedia.org/wiki/Metal-formaldehyde%20complex | A metal-formaldehyde complex is a coordination complex in which a formaldehyde ligand has two bonds to the metal atom(s) (η2-CH2O). This type of ligand has been reported in both monometallic and bimetallic complexes.
History
Metal-formaldehyde complexes have been reported for tungsten (W), osmium (Os), vanadium (V), rhenium (Re), zirconium (Zr), ruthenium (Ru), and niobium (Nb).
In 1984, Green and coworkers reported the yellow crystalline solid W(PMe3)4(η2-CH2O)H2. It was the result of the addition of methanol to W(PMe3)4(η2-CH2PMe2)H.
W(PMe3)4(η2-CH2O)H2 can be hydrogenated to give W(PMe3)4(MeO)H3, and then further hydrogenated to reform methanol and generate W(PMe3)4H4. In 1986, Green and Parkin demonstrated further reactivities of W(PMe3)4(η2-CH2O)H2. Upon addition of CO or CO2, W(PMe3)4(η2-CH2O)H2 produces fac-W(PMe3)3(CO)3 and W(PMe3)4(κ2-O2CO)H2, respectively, much like its precursor.
W(PMe3)4(η2-CH2O)H2 also reacts with buta-1,3-diene to give W(PMe3)3(η2-CH2O)(η-C4H6).
W(PMe3)4(η2-CH2O)H2 can also be used as a route to further oxometallacycles by the addition of ethylene and rapid cooling to –80°C. The resultant green-colored crystals are composed of W(OCH2CH2CH2)(PMe3)2(η2-C2H4)2, with either both ethylene ligands on the equatorial plane or the ethylene ligand cis- to the ligating oxygen in the axial direction. Further reaction with ethylene produces trans-W(PMe3)4(η2-C2H4)2 and W(PMe3)4(CO)H2.
References
Coordination complexes
Organometallic compounds | Metal-formaldehyde complex | [
"Chemistry"
] | 553 | [
"Inorganic compounds",
"Coordination complexes",
"Organometallic compounds",
"Coordination chemistry",
"Organic compounds",
"Organometallic chemistry"
] |
76,383,140 | https://en.wikipedia.org/wiki/Closure%20of%20tidal%20inlets | In coastal and environmental engineering, the closure of tidal inlets entails the deliberate prevention of the entry of seawater into inland areas through the use of fill material and the construction of barriers. The aim of such closures is usually to safeguard inland regions from flooding, thereby protecting ecological integrity and reducing potential harm to human settlements and agricultural areas.
The complexity of inlet closure varies significantly with the size of the estuary involved. For smaller estuaries, which may naturally dry out at low tide, the process can be relatively straightforward. However, the management of larger estuaries demands a sophisticated blend of technical expertise, encompassing hydrodynamics and sediment transport as well as the mitigation of the potential ecological consequences of such interventions. The development of knowledge around such closures over time reflects a concerted effort to balance flood defence mechanisms with environmental stewardship, leading to the development of both traditional and technologically advanced solutions.
In situations where rivers and inlets pose significant flood risk across large areas, providing protection along the entire length of both banks can be prohibitively expensive. In London, this issue has been addressed by construction of the Thames Barrier, which is only closed during forecasts of extreme water levels in the southern North Sea. In the Netherlands, a number of inlets were closed by fully damming their entrances. Since such dams take many months or years to complete, water exchange between the sea and the inlet continues throughout the construction period. It is only during the final stages that the gap is sufficiently narrowed to limit this exchange, presenting unique construction challenges. As the gap diminishes, significant differences in water levels between the sea and the inlet create very strong currents, potentially reaching several metres per second, through the remaining narrow opening.
Special techniques are required during this critical closure phase to prevent severe erosion of existing defences. Two primary methods are used: the abrupt or sudden closure method, which involves positioning prefabricated caissons during a brief period of slack water, and the gradual closure method, which involves progressively building up the last section of the dam, keeping the crest nearly horizontal to prevent strong currents and erosion along any specific section.
Purpose of a tidal inlet closure
The closure of tidal inlets serves various primary purposes:
Land reclamation
Shortening sea defence length
Creation of fresh water reservoirs
Establishment of tidal energy basins
Development of fixed-level harbour docks
Construction of docks for marine activities
Provision of road or rail connections
Repair of breaches in dikes
Creation of fish ponds.
Historically, the closure of inlets was primarily aimed at land reclamation and water level control in marshy areas, facilitating agricultural development. Such activities necessitated effective management of river and storm surge levels, often requiring ongoing dike maintenance. Secondary purposes, such as tidal energy generation, harbour and construction docks, dams for transportation infrastructure, and fish farming, also emerged but had lesser environmental impact.
In contemporary times, driven by a growing emphasis on quality of life, particularly in industrialised nations, inlet closure projects encompass a broader spectrum of objectives. These may include creating freshwater storage facilities, mitigating water pollution in designated zones, providing recreational amenities, and combating saltwater intrusion or groundwater contamination.
Side effects
Depending on circumstances, various hydrological, environmental, ecological, and economic side effects can be realised by the implementation of a tidal inlet closure, including:
change of tide (amplitude, flows) at the seaward side of the dam
change in bar and gully topography, outside the dam
removal of tides on the inner side of the dam
change in groundwater level in adjoining areas
alteration of drainage capacity for adjoining areas
loss of fish and vegetation species
loss of breeding and feeding areas for water birds
rotting processes during change in vegetation and fauna
stratification of water quality in stagnant reservoir
accumulation of sediments in the reservoir
impact on facilities for shipping
impact on recreation and leisure pursuits
change in professional occupations (fishery, navigation)
social and cultural impacts.
Examples of closure works
Historical closures in the Netherlands
Several towns in the Netherlands bear names ending in "dam," indicating their origin at the site of a dam in a tidal river. Prominent examples include Amsterdam (located at a dam in the Amstel) and Rotterdam (situated at a dam in the Rotte). However, some locations, like Maasdam, have less clear origins. Maasdam, a village situated at the site of a dam on the Maas dating back to before 1300, was the site of the construction of the Grote Hollandse Waard, which was subsequently lost during the devastating St. Elizabeth's Flood of 1421. As a result of the flood, the Maas river is now located far from the village of Maasdam.
One technique widely employed in historical closures was known as opzinken (English: "sinking up"). This method involved sinking fascine mattresses, filling them with sand, and stabilising them with ballast stone. Successive sections were then sunk on top until the dam reached a height where no further mattresses could be placed. This process effectively reduced the flow, allowing the completion of the dam with sand and clay. For instance, the construction of the Sloedam in 1879, as part of the railway to Middelburg, utilised this technique.
Early observations revealed that during closures, the flow velocity within the closure gap increased, leading to soil erosion. Consequently, measures such as bottom protection around the closing gap were implemented, guided primarily by experiential knowledge rather than precise calculations. Until 1953, closing dike breaches in tidal areas posed challenges due to high current velocities. In such instances, new dikes were constructed further inland, albeit a lengthier process, to mitigate closure difficulties. An extreme example occurred after the devastating North Sea flood of 1953, necessitating the closure of breaches at Schelphoek, marking the last major closure in the Netherlands.
Modern dam construction in the Netherlands
In recent times, the construction of larger dams in the Netherlands has been driven by both the necessity to protect the hinterlands and the ambition to create new agricultural lands.
The formation of currents at the mouth of an inlet arises from the tidal actions of filling (high tide) and emptying (ebb tide) of the basin. The speed of these currents is influenced by the tidal range, the tidal curve, the volume of the tidal basin (also known as the storage area), and the size of the flow profile at that location. The tidal range varies along the Dutch coast, being minimal near Den Helder (about 1.5 metres) and maximal off the coast of Zeeland (2 to 3 metres), with the range expanding to 4 to 5 metres in the areas behind the Oosterschelde and Westerschelde.
In tidal basins with loosely packed seabeds, current channels emerge and may shift due to the constantly changing directions and speeds of currents. The strongest flows cause scour in the deepest channels, such as in the Oosterschelde where depths can reach up to 45 metres, while sandbanks form between these channels, occasionally becoming exposed at low tide.
The channel systems that naturally develop in tidal areas are generally in a state of approximate equilibrium, balancing flow velocity and the total flow profile. Conversely, when dike breaches are sealed, this equilibrium is often not yet achieved at the time of closure. For instance, rapid intervention in closing numerous breaches following the 1953 storm surge helped limit erosion. For the construction of a dam at the mouth of an inlet, activities are undertaken to reduce the flow profile, potentially leading to increased flow velocities and subsequent scouring unless pre-emptive measures are taken, such as reinforcing the beds and sides of channels with bottom protection. An exception occurs when the surface area of the tidal basin is preliminarily reduced by compartmentalisation dams.
The procedure for closing a tidal channel can generally be segmented into four phases:
A preparatory phase with a slight reduction in the flow profile (to 80 to 90% of its original size), during which dam sections are constructed in shallow areas and soil protection is placed in the channels.
A sill is then erected, serving as a foundation for the closing dike. This sill can help distribute the dike's pressure on the subsoil and/or act as a filter between the bottom protection and the closing structure. The closure gap at this stage must be wide enough to allow the ebb and flow currents to pass without damaging the sill and the protective measures.
The actual closure, where the final gap is sealed.
The final phase involves constructing the dike over and around the temporary dam.
Under specific circumstances, alternative construction methods may be applied; for instance, during a sand closure, dumping capacity is utilised in such a manner that more material is added per tide than can be removed by the current, typically negating the need for soil protection.
When the Zuiderzee was enclosed in 1932, it was still possible to manage the current with boulder clay, as the tidal difference there was only about 1 metre, preventing excessively high flow velocities in the closure gap that would require alternative materials. Numerous closure methods have been implemented in the Delta area, on both small and large scales, highly dependent on a variety of preconditions. These include hydraulic and soil mechanical prerequisites, as well as available resources such as materials, equipment, labour, finances, and expertise. Post-World War II, the experiences gained from dike repairs in Walcheren in 1945, the closure of the Brielse Maas in 1950, the Braakman in 1952, and the repair of the breaches after the 1953 storm surge significantly influenced the choice of closure methods for the first Delta dams.
Up until the completion of the Brouwersdam in 1971, the choice of closure method was almost entirely based on technical factors. However, environmental and fisheries considerations became equally vital in the selection of closure methods for the Markiezaatskade near Bergen op Zoom, the Philipsdam, Oesterdam, and the storm surge barrier in the Oosterschelde, taking into account factors like the timing of tidal organism mortality and salinity control during closures, which are critical for determining the initial conditions of the newly formed basin.
Closures in Germany
In the north-west of Germany, a series of closure works have been implemented. Initially, the primary aims of these closures were land reclamation and protection against flooding. Subsequently, the focus shifted towards safety and ecological conservation. Closures took place in Meldorf (1978), Nordstrander Bucht (Husum, 1987), and Leyhörn (Greetsiel, 1991).
Around 1975, evolving global perspectives on ecological significance led to a change in the approach to closures. As a result, in northern Germany, several closures were executed differently from their original designs. For instance, while there were plans to completely dam the Leybucht near Greetsiel, only a minor portion was ultimately closed—just enough to meet safety and water management requirements. This made the closure of the remaining area no longer a technical challenge. A discharge sluice and navigation lock were constructed, providing adequate capacity to mitigate currents in the closure gap of the dam.
Closures in South Korea
In the 1960s, South Korea faced a significant shortage of agricultural land, prompting plans for large reclamation projects, including the construction of closure dams. These projects were carried out between 1975 and 1995, incorporating the expertise and experience from the Netherlands. Over time, attitudes towards closure works in South Korea evolved, leading to considerable delays and modifications in the plans for the Hwaong and Saemangeum projects.
Closures in Bangladesh
Creeks have been closed to facilitate the creation of agricultural land and provide protection against floods in Bangladesh for many years. The combination of safeguarding against flooding, the need for agricultural land, and the availability of irrigation water served as the driving forces behind these initiatives. Prior to 1975, such closure works were relatively modest in scale.
The approach to closures in Bangladesh did not significantly differ from practices elsewhere. However, due to the country's low labour costs and high unemployment rates, methods employing extensive local manpower were preferred.
These works primarily utilised a type of locally developed fascine rolls known as mata. The final gaps were closed swiftly within a single tidal cycle. Notably, the Gangrail closure failed twice.
In the years 1977/78, the Madargong creek was closed, safeguarding an agricultural area of 20,000 hectares. At the closure site, the creek spanned a width of 150 metres with a depth of 6 metres below mean sea level. The following year, 1978/79, saw the closure of the Chakamaya Khal, featuring a tidal prism of 10 million cubic metres, a tidal range of 3.3 metres, spanning 210 metres in width and 5 metres in depth.
In 1985, the Feni River was dammed to create an irrigation reservoir covering 1,200 hectares. The project was distinctive in its explicit request for the utilisation of local products and manual labour. The 1,200-metre-wide gap needed to be sealed during a neap tide. On the day of the closure, 12,000 workers placed 10,000 bags within the gap.
In 2020, the Nailan dam, originally constructed in the 1960s, experienced a breach that necessitated repair. At the time, the basin covered an area of 480 hectares, with a tidal range varying from 2.5 to 4 metres (neap tide to spring tide). The breach spanned a width of 500 metres, with a tidal prism of 7 million cubic metres. The closure was accomplished by deploying a substantial quantity of geobags, weighing up to 250 kg, though the majority of the bags in the core were 50 kg. The gap was progressively narrowed to 75 metres, the width of the final closure gap, which was sealed in one tidal cycle during a neap tide. To facilitate this, two rows of palisades were erected in the gap, and bags were used to fill the space between them, effectively creating a cofferdam.
Types of closures
Closing methods can be categorized into two principal groups: gradual closures and sudden closures. Within gradual closures, four distinct methods are identified: horizontal closure without a significant sill, vertical closure, horizontal closure with a sill, and sand closure. Sand closures further differentiate into horizontal and vertical types. Sudden closures are typically achieved through the deployment of (sluice) caissons, often positioned on a sill.
The technology of closure works
The challenge in sealing a sea inlet lies in the phenomenon that as the flow area of the closure gap decreases due to the construction of the dam, the flow speed within this gap increases. This acceleration can become so significant that the material deposited into the gap is immediately washed away, leading to the failure of the closure. Therefore, accurately calculating the flow rate is crucial. Given that the length of the basin is usually small relative to the length of the tidal wave, this calculation can typically be performed using a storage area approach (described in detail in the section on the storage area approach below). This methodology enables the creation of straightforward graphs depicting the velocities within a closure gap throughout the closure process.
Stone closures
Horizontal stone closures
In the technique of horizontal stone closures, stone is deployed from both sides into the closing gap. The stone must be heavy enough to counter the increased velocity that results from the reduced flow profile. An added complication is the creation of turbulent eddies, which lead to further scouring of the seabed. It is therefore critical to lay a foundation of stone prior to commencing the closure. The closure of the Zuiderzee in 1932 vividly illustrated the downstream turbulence at the closing gap. Notably, during the Afsluitdijk closure, boulder clay was utilised in a manner akin to stone, which circumvented the need for costly imports of armourstone.
In the Netherlands, horizontal stone closures have been relatively uncommon due to the high costs associated with armourstone and the prerequisite soil protection. Conversely, in countries where stone is more affordable and soils are less prone to erosion, horizontal stone closures are more frequently employed. A notable instance of this method was the closure of the Saemangeum estuary in South Korea, where a scarcity of heavy stone led to the innovative use of stone packed in steel nets as dumping material. Transporting and deploying the stone within the tight timeframe needed to prevent excessive bottom erosion often poses significant logistical challenges.
Vertical stone closures
From a hydraulic perspective, vertical closures are preferable due to their reduced turbulence and consequent minimisation of soil erosion issues. However, their implementation is more complex. For parts of the dam submerged underwater, stone dumpers (either bottom or side dumpers) can be employed. Yet, this becomes impractical for the final segments due to insufficient navigational depth. Two alternatives exist: the construction of an auxiliary bridge or the use of a cable car.
Auxiliary bridge
An auxiliary bridge allows armourstone to be directly deposited into the closing gap. This method was contemplated for the Delta Works' Oesterdam closure but was ultimately deemed more expensive than sand closure. In the Netherlands, such a technique was applied during the closure of the dike around De Biesbosch polder in 1926, where a temporary bridge facilitated the dumping of materials into the gap using tipping carts propelled by a steam locomotive.
Cable car
Constructing an auxiliary bridge for larger and deeper closing gaps can be exceedingly cumbersome, leading to the preference for cable cars in the Delta Works closures. The first application of a cable car was for the northern gap of the Grevelingendam, serving as a trial to gather insights for subsequent larger closures like the Brouwershavense Gat and the Oosterschelde.
Stone transport via cable involved wagons with independent propulsion, enhancing transport capacity through one-way traffic. The system's design, a collaboration between Rijkswaterstaat and French company Neyrpic, minimized malfunction risks across the network. The 'blondin automoteur continu' type cable car spanned approximately 1200 m, with a continuous track supported by two carrying cables and terminal turntables for wagon transfer. Initially, stone was transported in steel bottom-unloading containers, later supplemented by steel nets, allowing for a dumping rate of 360 tons per hour.
However, the system's loading capacity proved insufficient, prompting a switch to 1 m3 (2500 kg) concrete blocks for subsequent closures (Haringvliet and Brouwersdam). Although planned for the Oosterschelde closure, a policy shift led to the construction of a storm surge barrier instead, foregoing the use of the cable car for this purpose.
Sand closure
Beyond the use of armourstone, closures can also be achieved solely with sand. This method necessitates a substantial dredging capacity. In the Netherlands, sand closures have been successfully implemented in various projects, including the Oesterdam, the Philipsdam, and the construction of the Second Maasvlakte.
Principles of a sand closure
Sand closures involve employing a dumping capacity within the closure gap that introduces more material per tidal cycle than can be removed by the current. Unlike stone closures, the material used here is inherently unstable under the flow velocities encountered. Typically, sand closures do not necessitate soil protection. This, among other reasons, makes sand closure a cost-effective solution when locally sourced sand is utilized. Since 1965, numerous tidal channels have been effectively sealed using sand, aided by the rapidly increasing capabilities of modern sand suction dredgers.
These advancements have enabled quick and voluminous sand delivery for larger closures, tolerating sand losses during the closing phase of up to 20 to 50%. The initial sand closures of tidal channels — including the Ventjagersgatje in 1959 and the southern entrance to the Haringvliet bridge in 1961 — contributed to the development of a basic calculation method for sand closures. Subsequent sand closures provided practical validation for this method, refining predictions of sand losses.
Selected sand closures
Numerous channels have been closed using sand; the examples below illustrate the technique's application and effectiveness.
During the 1972 closure of the Geul, a channel at the mouth of the Oosterschelde between the working islands of Noordland and Neeltje Jans, characterised by a tidal capacity of roughly 30 million cubic metres and a maximum depth of 10 metres below mean sea level (MSL), sand losses were minimised thanks to the employment of high-capacity suction dredging. This strategy achieved a sand extraction rate exceeding 500,000 cubic metres per week, distributed across three suction dredgers. It was also demonstrated that initiating the closure from one side and progressing towards the shallowest part of the gap effectively reduces sand losses, as this keeps the distance over which the sand must be deposited shortest towards the closure's culmination, particularly during periods of maximum flow velocity.
Depositing sand from both sides towards the centre partly accounts for the significant sand losses, approximately 45%, observed during the closure of the Brielse Gat, which has a maximum depth of 2 metres below MSL. Opting for a single sand deposit site reduces sand losses, but necessitates substantial suction capacity and results in a notably wider closure dam to accommodate all discharge pipelines.
Designing sand closures
A defining feature of sand closures is the movement and subsequent loss of the construction material. The principle underpinning a sand closure relies on the production of more sand than what is lost during the process. Sand losses occur daily under average flow conditions through the closing gap, contingent upon the flow dynamics. In the context of "strength and load," the "strength" of a sand closure is represented by its production capacity, while the "load" is the resultant loss. A closure is deemed successful when the production exceeds the loss, leading to a gradual narrowing of the closing gap.
The production capacity, which includes a sufficiently large extraction site for the sand, must surpass the maximum anticipated loss during the closure operation. Consequently, the feasibility study for a (complete) sand closure must initially concentrate on identifying the phase associated with maximum losses. Employing hydraulic boundary conditions, the sand loss for each closure phase can be calculated and plotted against the size of the closing gap; if the available production capacity lies below the peak of the resulting loss curve, a sand closure is not feasible under those conditions.
A sand closure becomes viable if sufficient sand production can be sustained near the closure gap to overcome the phase with the highest losses. The essential criterion is that the average tidal loss remains lower than the production. However, considerable uncertainties exist in both the calculated losses and the anticipated production, necessitating careful attention. The loss curve, as a function of the closing gap area, typically exhibits a single peak. The maximal loss is usually found when the closing gap area is between 0 and 30% of its initial size. Hence, initial loss calculations can be restricted to this range of closing gap sizes.
Notably, the peak sand loss does not occur when the closure gap is almost complete: although flow velocities are then at their highest, the remaining width of the closing gap is small, so the total volume of sand lost per tide remains low. The hydraulic boundary conditions can be determined using a storage/area approach.
In general, sand closures are theoretically feasible for maximum flow velocities up to approximately 2.0 to 2.5 m/s. Beyond these velocities, achieving a sand closure becomes virtually impossible due to the resulting flow rates, which are influenced by the reference flow rate U0 and the discharge coefficient μ. The discharge coefficient μ is affected by both friction and deceleration losses within the closing gap, with friction losses being notably significant due to the large dimensions of the sand dams. Consequently, the choice of gradient measurement distance significantly impacts the discharge coefficient, which exhibits considerable variability. However, this variability diminishes during the crucial final phase of the closure, where a value of 0.9 is recommended as a reasonable upper limit for the discharge coefficient. The actual flow velocity within the closing gap is determined by applying the storage area approach, adjusted by the discharge coefficient.
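The feasibility reasoning above can be sketched numerically. The following Python fragment is illustrative only: the basin and gap dimensions, the production figure, the velocity cap, and the transport constants k and b are invented placeholders rather than values from any real project, and the loss relation is a generic power-law stand-in for a proper sediment transport calculation.

```python
import numpy as np

# Schematic feasibility check for a sand closure: the closure succeeds
# if sand production per tide exceeds the loss per tide at every size
# of the closing gap. All parameter values are illustrative placeholders.

T  = 12.5 * 3600     # tidal period (s)
a  = 2.5             # tidal amplitude (m), i.e. a 5 m range
B  = 5.0e6           # basin storage area (m^2)
W0 = 1000.0          # initial gap width (m)
d  = 5.0             # water depth over the sill in the gap (m)
mu = 0.9             # discharge coefficient, upper limit for the final phase
g  = 9.81
production = 100e3   # sand placed in the gap per tide (m^3), assumed

def gap_velocity(Wg):
    # Continuity estimate: the basin's peak filling discharge B*dh/dt
    # must pass through the wet cross-section of the gap, but the
    # velocity is capped by the weir-controlled limit over the dam.
    u_cont = B * (2 * np.pi * a / T) / (Wg * d)
    return min(u_cont, mu * np.sqrt(2 * g * a))

def loss_per_tide(Wg, k=2.0e-5, b=4):
    # Generic transport-type relation: loss grows with the gap width
    # and with a high power of the velocity (k and b are illustrative).
    return k * Wg * gap_velocity(Wg) ** b * T

widths = np.linspace(0.02, 0.50, 25) * W0   # scan gap sizes, 2-50 %
losses = [loss_per_tide(W) for W in widths]
peak = max(losses)
print(f"peak loss {peak:,.0f} m^3/tide at Wg = "
      f"{widths[losses.index(peak)]:.0f} m: "
      f"{'feasible' if peak < production else 'not feasible'}")
```

With these placeholder numbers the loss curve shows the single peak described above: losses rise as the shrinking gap accelerates the flow, then fall again once the velocity saturates and only a narrow width remains exposed to erosion.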
Sudden closures (caissons)
A sudden closure involves the rapid sealing of a tidal inlet or breach in a dike. This is typically prepared in such a manner that the gap can be entirely closed in one swift action during slack tide. The use of caissons or sluice caissons is common, though other unique methods, such as sandbags or ships, have also been employed. Caissons were initially utilized as an emergency response for sealing dike breaches after the Allied Battle of Walcheren in 1944 and again after the 1953 North Sea flood. This technique has since been refined and applied in the Delta Works projects.
Caisson closure
A caisson closure involves sealing the gap with a caisson, essentially a large concrete box. This method was first applied in the Netherlands for mending dike breaches resulting from Allied assaults on Walcheren in 1944. The following year, at Rammekens, surplus caissons (Phoenix caissons) sourced from England, originally used for constructing the Mulberry harbours post-Normandy landings by Allied troops, were repurposed for dike repairs.
In the aftermath of the 1953 storm disaster, the closure of numerous breaches with caissons was contemplated. Given the uncertainty surrounding the final sizes of the gaps and the time-consuming nature of caisson construction, a decision was made shortly after February 1, 1953, to pre-fabricate a considerable quantity of relatively small caissons. These were strategically employed across various sites, and later, within the Delta Works.
A limited supply of larger Phoenix caissons from the Mulberry harbours was also utilized for sealing a few extensive dike breaches, notably at Ouwerkerk and Schelphoek.
Placing a caisson
To successfully sink a caisson, it is imperative that the flow velocity within the closing gap is minimized; thus, the operation is conducted during slack water. Given the extremely brief period during which the current is genuinely still, the sinking process must commence while the tidal flow remains at a manageable low speed. Past experience with caisson closures has demonstrated that this speed should not exceed 0.3 m/s, which governs the timing of the various phases of the operation.
In practice, flow speeds must reduce to 0.75 m/s at most 30 minutes before slack water and to 0.30 m/s at most 13 minutes before. Considering the sinusoidal nature of tides in the Netherlands, with a cycle of 12.5 hours, the maximum velocity in the closing gap should therefore not surpass 2.5 m/s. This velocity threshold can be ascertained through a storage/basin analysis. Such an analysis for sill heights at MSL -10 m and MSL -12 m indicates that a sill at MSL -12 m is necessary, as the sinking time available at MSL -10 m is insufficient. Consequently, caisson closures are feasible only at considerable channel depths.
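As an illustrative check, assume a purely sinusoidal tidal current in the gap, $u(t) = \hat{u} \sin(2\pi t / T)$ with $T = 12.5\ \text{h} = 750\ \text{min}$ and slack water at $t = 0$; the symbol $\hat{u}$ for the peak velocity is introduced here for convenience. Requiring $u \leq 0.30$ m/s at 13 minutes before slack gives $\hat{u} \leq 0.30 / \sin(2\pi \cdot 13/750) \approx 2.8$ m/s, and requiring $u \leq 0.75$ m/s at 30 minutes before slack gives $\hat{u} \leq 0.75 / \sin(2\pi \cdot 30/750) \approx 3.0$ m/s. Allowing for an operational margin, this is consistent with the 2.5 m/s limit quoted above.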
Sluice caissons
The challenge in sealing larger gaps with caissons lies in the diminishing flow area as more caissons are placed, resulting in significantly increased flow speeds (exceeding the aforementioned 2.5 m/s), complicating the final caisson's proper placement. This issue is addressed through the use of sluice caissons, essentially a box equipped with gates on one side. During installation, these gates are shut to maintain buoyancy, and the opposite side is sealed with wooden boards.
Once each caisson is positioned, the boards are removed, and the gates opened, allowing the tidal current to pass with minimal impedance. This approach ensures that the flow area doesn't drastically reduce, and flow velocities remain manageable, facilitating the placement of subsequent caissons. After all caissons are set, the gates are closed at slack water, completing the closure. Subsequently, sand is sprayed in front of the dam, and the gates along with other movable mechanisms are removed, available for reuse in future closures.
Sluice caissons were first employed in closing the Veerse Gat, and subsequently utilized at the Brouwersdam and the Volkerak. They were also deployed in the closure of the Lauwerszee.
Design of Sluice Caissons
For caisson closures, it is crucial to maintain the largest possible effective flow profile during installation. In addition, the discharge coefficient, which indicates the degree to which the flow is obstructed by the caisson's shape, must be as high as possible.
Flow Area
The flow area of each caisson should be maximised. This can be achieved by:
Ensuring the greatest possible distance between the caisson walls, with steel diagonals providing sufficient torsional stiffness.
Designing the bottom of the caissons to be as thin as possible.
Incorporating ballast spaces within the superstructure of the caisson to add necessary weight, thereby generating sufficient friction between the caisson and the sill.
Discharge Coefficient
Besides the flow area, the discharge coefficient is of paramount importance. Measures to improve the discharge coefficient include:
Streamlining the diagonals between the walls.
Adding extra features to streamline the sill.
Discharge coefficients have been determined for the various sluice caissons designed in the Netherlands.
Special closures
Closure by sinking ships
In exceptional circumstances, typically during emergencies such as dike breaches, efforts are made to seal the breach by manoeuvring a ship into it. Often, this method fails due to the mismatch between the dimensions of the ship and the breach. Instances have been recorded where the ship, once directed into the breach, was then dislodged by the powerful current. Another frequent issue is the incompatibility of the ship's bottom with the seabed of the breach, leading to undermining. The ensuing strong current further erodes the seabed beneath the ship, rendering the closure attempt unsuccessful. A notable exception occurred in 1953, when a dike breach along the Hollandse IJssel was successfully sealed in this way; a monument now commemorates the event.
In Korea, an attempt was made in 1980 to close a tidal inlet using an old oil tanker. Little information is available about the outcome of this attempt, suggesting it may not have been notably successful, especially considering the numerous subsequent closures in Korea that have utilized stone. Later Google Earth imagery indicates that the ship was eventually removed following the dam's closure.
Closure with sandbags
Utilizing sandbags and a significant workforce represents another unique closure method. This approach was employed during the construction of the dam across the Feni river in Bangladesh. At low tide, the riverbed at the closure site was almost completely exposed.
Twelve depots, each containing 100,000 sandbags, were established along the 1,200 m wide closure gap. On the day of the closure, 12,000 workers deployed these bags into the gap over a span of six hours, outpacing the rising tide. By the day's end, the tidal inlet was sealed, albeit only to the water levels typical of neap tides. In the ensuing days, the dam was further augmented with sand to withstand spring tides and, over the next three months, reinforced to resist storm surges up to 10 metres above the dam's base.
Storage area approach
Utilising the tidal prism for velocity calculations in the neck of a tidal inlet
If a tidal basin is relatively short (i.e., its length is minor compared to the tidal wave's length), it is assumed that the basin's water level remains even, merely rising and falling with the tide. Under this assumption, the basin's storage (tidal prism) equals its surface area times the tidal range.
The formula for basin storage then simplifies to:

$P = A_b \cdot H$

in which:
$P$ represents the tidal prism (m³),
$A_b$ signifies the basin area (m²),
$H$ denotes the tidal range at the basin's entrance (m).
This methodology facilitates a reliable estimation of current velocities within the tidal inlet, essential for its eventual closure. Termed the storage area approach, this technique provides a straightforward means to gauge local hydraulic conditions essential for barrier construction.
Within this approach, the water movement in the estuary is modelled neglecting both friction and inertia effects, leading to:

$Q = B \frac{dh}{dt}$

in which $Q$ is the flow rate in the inlet, $B$ is the basin storage area, and $dh/dt$ is the rate of change of the water level.
The basin storage system assumes:
A river discharge $Q_r$, with inflow considered positive,
A flow $Q_g$ through the closure gap, governed by the upstream energy head $H$ and the water level $h$ in the gap (both measured above the sill), along with the gap's drainage characteristics.

For an imperfect (submerged) weir:

$Q_g = \mu W_g\, h \sqrt{2 g (H - h)}$

And for a perfect (free-flowing) weir:

$Q_g = \mu W_g\, \tfrac{2}{3} \sqrt{\tfrac{2}{3} g}\; H^{3/2}$

Symbol meanings are as follows: $\mu$ is the discharge coefficient (-), $W_g$ the width of the closure gap (m), $H$ the upstream energy head above the sill (m), $h$ the water level above the sill in the gap (m), and $g$ the acceleration due to gravity (m/s²).
Combining these yields the basin storage equation, from which graphs of the velocity in the closure gap throughout the closure can be produced. For example, for a tidal amplitude of 2.5 m (therefore a total range of 5 metres), such graphs show the velocity as a function of the ratio of tidal storage area (B) to closure gap width (Wg) and of the sill depth (d'), highlighting the speed differences between vertical closures, horizontal closures, and combinations of the two.
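As an illustration, the storage area approach can be put into a few lines of code. The sketch below integrates the storage equation together with the two weir formulas above; the basin area, gap width, sill depth and time step are invented placeholder values, and the simple wet-depth and energy-head estimates are assumptions made for brevity.

```python
import numpy as np

# Minimal storage-area computation of the velocity in a closure gap:
# forward-Euler integration of the storage equation Q = B*dh/dt with
# the weir formulas above. All parameter values are placeholders.

g, mu = 9.81, 0.9
B  = 10.0e6         # basin storage area (m^2)
Wg = 200.0          # width of the closure gap (m)
d  = 5.0            # sill depth below mean sea level, MSL (m)
a  = 2.5            # tidal amplitude (m), i.e. a 5 m range
T  = 12.5 * 3600.0  # tidal period (s)

def gap_discharge(H_up, h_dn):
    """Flow over the sill (positive towards the basin). H_up and h_dn
    are the upstream head and downstream level above the sill (m);
    the still-water level is used as an estimate of the energy head."""
    up, dn = max(H_up, h_dn), min(H_up, h_dn)
    sign = 1.0 if H_up >= h_dn else -1.0
    if up <= 0.0:
        return 0.0
    if dn > (2.0 / 3.0) * up:      # submerged flow: imperfect weir
        q = mu * Wg * dn * np.sqrt(2.0 * g * (up - dn))
    else:                          # free flow: perfect weir
        q = mu * Wg * (2.0 / 3.0) * np.sqrt((2.0 / 3.0) * g) * up**1.5
    return sign * q

dt, h, u_max = 30.0, 0.0, 0.0      # time step (s); basin level vs MSL (m)
for step in range(int(2 * T / dt)):                 # two tidal cycles
    sea = a * np.sin(2.0 * np.pi * step * dt / T)   # sea level vs MSL (m)
    Q = gap_discharge(sea + d, h + d)               # heads above the sill
    h += Q / B * dt                                 # storage equation
    wet = max(min(sea, h) + d, 0.05)                # wet depth over the sill
    u_max = max(u_max, abs(Q) / (Wg * wet))
print(f"maximum velocity in the gap: {u_max:.2f} m/s")
```

For these placeholder dimensions the peak gap velocity comes out in the range of a few metres per second, which is the quantity the closure designer compares against the limits quoted above for the various closure methods.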
General reference
References
Environmental engineering
Coastal engineering | Closure of tidal inlets | [
"Chemistry",
"Engineering"
] | 6,910 | [
"Coastal engineering",
"Chemical engineering",
"Civil engineering",
"Environmental engineering"
] |
76,384,954 | https://en.wikipedia.org/wiki/List%20of%203D%20printing%20software | This is a list of 3D printing software.
See also
3D printing - or additive manufacturing
3D scanning - replicating objects to 3D models to potentially 3D print
Comparison of computer-aided design software
3D Manufacturing Format - open source file format standard developed and published by the 3MF Consortium
PLaSM - open source scripting language for solid modeling
3D printing processes
Thingiverse - open CAD repository/library for 3D printers, laser cutters, milling machines
MyMiniFactory - 3D printing marketplace
CAD library - 3D repository to download 3D models
Fused filament fabrication - 3D printing process that uses a continuous filament of a thermoplastic material
Qlone - 3D scanning app based on photogrammetry for creation of 3D models on mobile devices that can be 3D printed
Metal injection molding
EnvisionTEC - 3D printing hardware company
Desktop Metal - company focused on 3D metal printing
Slicer (3D printing) - toolpath generation software used in 3D printing
List of computer-aided manufacturing software - for CNC machining
References
3D printing
DIY culture
Industrial design
Industrial processes
Free computer-aided manufacturing software
Computer-aided design software
Autodesk products
3D graphics software
Freeware 3D graphics software | List of 3D printing software | [
"Engineering"
] | 236 | [
"Industrial design",
"Design engineering",
"Design"
] |
76,386,646 | https://en.wikipedia.org/wiki/Modern%20methods%20of%20construction | Modern methods of construction (MMC) is a term used mainly in the UK construction industry to refer to "smart construction" processes designed to improve upon traditional design and construction approaches by focusing on (among other things) component and process standardisation, design for manufacture and assembly (DfMA), prefabrication, preassembly, off-site manufacture (including modular building) and onsite innovations such as additive manufacture (3D printing). While such modern approaches may be applied to infrastructure works (bridges, tunnels, etc.) and to commercial or industrial buildings, MMC has become particularly associated with construction of residential housing. However, several specialist housing businesses established to target this market did not become commercially viable.
History
The MMC term started to enter common industry use in the early 2000s following the publication of the Egan Report, Rethinking Construction, in November 1998. An industry task force chaired by Sir John Egan produced an influential report on the UK construction industry, which did much to drive efficiency improvements in UK construction industry practice during the early years of the 21st century, with its recommendations implemented through initiatives including the Movement for Innovation (M4I) and the Construction Best Practice Programme (CBPP). However, the emergence of some non-traditional methods substantially predated Egan's report; procurement of prefabricated homes, for example, was a UK government response to housing shortages after both World Wars, the CLASP created prefabricated schools in the late 1950s, and the 1964-1970 Labour government engaged in an "Industrialised Building Drive".
MMC has been repeatedly advocated in UK government construction strategy statements including the 2017 Transforming Infrastructure Performance from the Infrastructure and Projects Authority (IPA), the 2019 Construction Sector Deal, the Construction Playbook (2020, 2022), and the IPA's 2021 TIP Roadmap to 2030. The 2022 Playbook and TIP Roadmap also encouraged procurement of construction projects based on product 'platforms' ("Platform Design for Manufacture and Assembly, PDfMA") comprising kits of parts, production processes, knowledge, people and relationships required to deliver all or part of construction projects.
The UK Government has also invested in MMC initiatives and businesses. During the 2010s, as government backing (including via Homes England) for MMC grew, several UK companies (for example, Ilke Homes, L&G Modular Homes, House by Urban Splash, Modulous, Lighthouse and TopHat) were established to develop modular homes as an alternative to traditionally-built residences. From its Knaresborough, Yorkshire factory (opened in 2018, closed in 2023), Ilke Homes delivered two- and three-bedroom 'modular' homes that could be erected in 36 hours. Homes England invested £30m in Ilke Homes in November 2019, and a further £30m in September 2021. Despite a further fund-raising round, raising £100m in December 2022, Ilke Homes went into administration on 30 June 2023, with most of the company's 1,150 staff made redundant, and creditors owed £320m, including £68m owed to Homes England. L&G Modular Homes halted production in May 2023, blaming planning delays and the COVID-19 pandemic for its failure, with the enterprise incurring total losses over seven years of £295m.
In November 2023, Homes England loaned £15m to TopHat, another loss-making MMC housebuilder, to fund construction of a factory in Corby; in March 2024, the factory's opening was postponed and the company announced 70 redundancies. MD Andrew Shepherd left TopHat in May 2024. In August 2024, TopHat faced a winding-up hearing after a petition was filed by Harworth, a Yorkshire based property developer, but settled out of court. Also in August 2024, housebuilder Persimmon wrote off a £25m investment in TopHat it made in 2023, due to "a re-assessment of risks within the modular build sector". In October 2024, having accumulated a loss of around £87m since 2016, TopHat confirmed it was winding down its Derby factory operations, with most staff being made redundant. In its penultimate year of trading, TopHat made an operating loss of £46m on turnover of less than £11m.
In January 2024, following the high-profile failures of Ilke Homes, L&G Modular and House by Urban Splash during 2022 and 2023, the House of Lords Built Environment Committee highlighted that the UK Government needed to take a more coherent approach to addressing barriers affecting adoption of MMC: "If the Government wants the sector to be a success, it needs to take a step back, acquire a better understanding of how it works and the help that it needs, set achievable goals and develop a coherent strategy." Modulous and Lighthouse went into administration in January and March 2024 respectively. In late March 2024, housing minister Lee Rowley told the Lords Committee that the government would be reviewing its MMC policies in light of the crisis in the volumetric house-building sector. He promised "a full update in late spring once we have undertaken further detailed work with the sector". Following the July 2024 general election, a House of Lords Library report was published in August 2024 ahead of a scheduled debate in September 2024; it said the new Labour government would publish a new long-term housing strategy "in the coming months".
Defining MMC
MMC refers to a variety of off-site construction methods:
modular construction: three-dimensional units produced in a factory are transported to site and assembled and connected
non-structural pods: for example, fitted kitchens or bathrooms that can be incorporated into load-bearing structures
panelised systems: flat panel units typically used for walls, ceilings and floors, and made of timber, light steel or concrete, and
sub-assemblies and components such as roof frames and floor cassettes.
During the early 2000s, the Housing Corporation classified a number of offsite manufacturing initiatives. Its classification included volumetric construction (e.g. bathroom and kitchen pods), panellised construction systems, hybrid construction (volumetric units integrated with panellised systems), sub-assemblies and components (e.g. floor and roof cassettes, wiring looms, pre-fabricated plumbing), and site-based MMC approaches.
In 2017, the IPA's Transforming Infrastructure Performance committed the government to "smart construction, using modern methods, including offsite manufacture". It said: "Smart construction (or 'modern methods of construction') offers the opportunity to transition from traditional construction to manufacturing, and unlock the benefits from standard, repeatable processes with components manufactured offsite."
MMC framework
Recognising that terms such as MMC, prefabrication and off-site construction were prone to different interpretations, a Modern Methods of Construction working group was established by the UK's Ministry for Housing, Communities and Local Government (MHCLG) to develop a definition framework. With inputs from Build Offsite, Homes England, National House Building Council (NHBC) and the Royal Institution of Chartered Surveyors (RICS), the Modern Methods of Construction (MMC) framework definition was published in 2019. It was intended to regularise and refine the term MMC by defining the broad spectrum of innovative construction techniques being applied, enabling clients, advisors, lenders and investors, warranty providers, building insurers and valuers to all build a common understanding of the different forms of MMC use. It divides the spectrum of MMC approaches into seven categories, ranging from pre-manufactured three-dimensional primary structural systems to site-based process and labour improvements.
Criticisms
As previously mentioned, while MMC suggests a modern approach, some of its processes - notably prefabrication, but also standardisation of components - were extensively deployed during the 20th century. MMC, particularly in the UK, has been challenging to implement due to the volatility of the UK housing market, while the increasingly globalised nature of the supply chain for products such as panelised cladding systems also creates issues - for example, concerns about working conditions in remote off-site factories, and the de-skilling impacts on traditional capabilities in local communities. Moreover, re-classifying activities as manufacturing rather than construction would also materially impact the headline labour productivity of the construction sector. Also "Off-site factories are essentially transient entities. There is hence no guarantee they will be able to supply replacement components in the future. And the more that manufactured components rely on 'high-precision engineering' the less malleable they are in terms of future adaptation."
After high-profile business failures in the sector (including Ilke Homes), a 2024 study proposed steps to improve public perceptions of MMC and increase industry adoption. Confidence had also been adversely affected by the disjointed nature of MMC organisations, poor communication defining MMC, and unfair value comparisons. The study made a series of recommendations, including standardisation of systems, development of an MMC glossary and rethinking planning policies.
References
Construction
Building engineering
Construction industry of the United Kingdom | Modern methods of construction | [
"Engineering"
] | 1,870 | [
"Construction",
"Civil engineering",
"Architecture",
"Building engineering"
] |
76,386,798 | https://en.wikipedia.org/wiki/Joint%20Evaluated%20Fission%20and%20Fusion | The Joint Evaluated Fission and Fusion (JEFF) organization is an international collaboration for the production of nuclear data. It consists of members of the Nuclear Energy Agency (NEA) of the Organisation for Economic Co-operation and Development (OECD).
JEFF produces the Joint Evaluated Fission and Fusion Nuclear Data Library, which is in the universal ENDF format.
References
Nuclear physics
Organizations with year of establishment missing | Joint Evaluated Fission and Fusion | [
"Physics"
] | 82 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |
76,388,661 | https://en.wikipedia.org/wiki/Metal%20casting%20simulation | Casting process simulation is a computational technique used in industry and metallurgy to model and analyze the metal-casting process. This technology allows engineers to predict and visualize the flow of molten metal, crystallization patterns, and potential defects in the casting before the start of the actual production process. By simulating the casting process, manufacturers can optimize mold design, reduce material consumption, and improve the quality of the final product.
History
The theoretical foundations of heat conduction, critically important for casting simulation, were established by Jean-Baptiste Joseph Fourier at the École polytechnique in Paris. His treatise Théorie analytique de la chaleur (The Analytical Theory of Heat), published in 1822, laid the groundwork for all subsequent calculations of heat conduction and heat transfer in solid materials. Additionally, French physicist and engineer Claude-Louis Navier and Irish mathematician and physicist George Gabriel Stokes provided the foundations of fluid dynamics, which led to the development of the Navier-Stokes equations. Adolf Fick, working in the 19th century at the University of Zurich, developed the fundamental equations describing diffusion, published in 1855.
Simulation in casting began in the 1950s, when V. Pashkis used analog computers to predict the movement of the crystallization front. The first use of digital computers to solve problems related to casting was carried out by Dr K. Fursund in 1962, who considered the penetration of steel into a sand mold. A pioneering work by J. G. Hentzel and J. Keverian in 1965 was the two-dimensional simulation of the crystallization of steel castings, using a program developed by General Electric to simulate heat transfer. In 1968, Ole Vestby was the first to use the finite difference method to program a 2D model that evaluated the temperature distribution during welding.
The 1980s marked a significant increase in research and development activities around the topic of casting process simulation with contributions from various international groups, including J. T. Berry and R. D. Pielke in the United States, E. Niyama in Japan, W. Kurz in Lausanne, and F. Durand in Grenoble. An especially important role in advancing this field was played by Professor P. R. Sahm at the Aachen Foundry Institute. Key milestones of this period were the introduction of the "criterion function" by Hansen and Berry in 1980, the Niyama criterion function for the representation of central porosities in 1982, and the proposal of a criterion function for the detection of hot cracks in steel castings by E. Fehlner and P. N. Hansen in 1984. In the late 1980s, the first capabilities for simulating mold filling were developed.
The 1990s focused on the simulation of stresses and strains in castings with significant contributions from Hattel and Hansen in 1990. This decade also saw efforts to predict microstructures and mechanical properties with the pioneering work of I. Svensson and M. Wessen in Sweden.
Principles of casting simulation
The production of casting is one of the most complex and multifaceted processes in metallurgy, requiring careful control and an understanding of a multitude of physical and chemical phenomena. To effectively manage this process and ensure the high quality of the final products, it is essential to have a deep understanding of the interaction of the various casting parameters. In this context, the mathematical modeling of casting acts as a critically important scientific tool, allowing for detailed analysis and optimization of the casting process based on mathematical principles.
Mathematical modeling of casting is a complex process that involves the formulation and solution of mathematical equations that describe physical phenomena such as thermal conductivity, fluid dynamics, phase transformations, among others. To solve these equations, various numerical analysis methods are applied, among which the finite element method (FEM), finite difference method (FDM), and finite volume method (FVM) hold a special place. Each of these methods has its particular characteristics and is applied depending on the specific modeling tasks and the requirements for precision and efficiency in the calculations.
Finite difference method (FDM): This method is based on differential equations of heat and mass transfer, which are approximated using finite difference relationships. The advantage of the FDM is its simplicity and the ability to simplify the solution of multidimensional problems. However, the method has limitations in modeling the boundaries of complex areas and performs poorly for castings with thin walls.
The finite element method (FEM) and the finite volume method (FVM): Both methods are based on integral equations of heat and mass transfer. They provide a good approximation of the boundaries and allow the use of elements with different discretizations. The main drawbacks are the need for a finite element mesh generator, the complexity of the equations, and large requirements for memory and computing time.
Modifications of the FVM: These methods attempt to combine the simplicity of the FDM with a good approximation of the boundaries of the FEM. They have the potential to improve the approximation of boundaries between different materials and phases.
The analysis of different methods of mathematical modeling of casting processes shows that the finite element method is one of the most reliable and optimal approaches for casting simulation. Despite higher computational resource requirements and complexity in implementation compared to the finite difference method and finite volume method, the FEM provides high accuracy in modeling boundaries, complex geometries, and temperature fields, which is critically important for predicting defects in castings and optimizing casting processes.
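To make the discussion concrete, the following minimal sketch solves Fourier's one-dimensional heat equation with an explicit finite-difference scheme, the kind of discretization underlying FDM-based casting simulation. It is a toy model only: the material constants, geometry and solidus temperature are placeholders, and latent-heat release, mould filling and three-dimensional effects, which real packages must handle, are deliberately omitted.

```python
import numpy as np

# Toy 1-D explicit finite-difference solution of Fourier's heat equation
# dT/dt = alpha * d2T/dx2 for a cooling casting section.

alpha = 1.2e-5                  # thermal diffusivity (m^2/s), illustrative
L, n  = 0.10, 51                # section thickness (m), number of grid nodes
dx    = L / (n - 1)
dt    = 0.4 * dx * dx / alpha   # within the explicit stability limit 0.5*dx^2/alpha
T     = np.full(n, 700.0)       # initial melt temperature (deg C)
T[0] = T[-1] = 20.0             # mould faces held at ambient temperature

t, T_solidus = 0.0, 600.0       # illustrative solidus temperature (deg C)
while T.max() > T_solidus:      # march until the hottest node reaches solidus
    # explicit update: T_i += alpha*dt/dx^2 * (T_{i+1} - 2*T_i + T_{i-1})
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    t += dt
print(f"centre of the section reaches {T_solidus:.0f} deg C after {t:.0f} s")
```

A production code replaces this uniform grid with a body-fitted mesh (or, for FEM/FVM, with elements or control volumes), which is precisely where the trade-offs between the three methods described above arise.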
Application in production
Computer-aided engineering (CAE) systems for casting processes have long been used by foundries around the world as a "virtual foundry workshop," where it is possible to perform and verify any idea that arises in the minds of designers and technologists. The global market for CAE for casting processes can already be considered established.
Within the structure of the company for the development of the technology of a new casting, a computer-aided design department for casting processes is created, responsible for operating CAE systems for casting processes. Calculations are carried out by specialists of the department according to their job instructions, and interaction with other departments is regulated by technological design instructions.
The process begins with the delivery of the 3D model and drawing of the part to the foundry technologists, who coordinate the casting configuration with the machining shop and determine the machining allowances. Then, the technology is developed in the CAE department and transferred to the foundry workshop for experimental castings. The results are monitored, and if necessary, the castings are examined in the central laboratory of the factory. If defects are detected, an adjustment of the model parameters and the technological process is made in the CAE department, after which the technology is tested again in the workshop.
This cycle is repeated until suitable castings are obtained, after which the technology is considered developed and implemented in mass production.
Software and tools
In the modern foundry industry, software for the simulation of casting processes is widely used. Among the multitude of software solutions available, it is worth mentioning the most prominent and widely used products: ProCAST, MAGMASOFT, and PoligonSoft.
ProCAST: a casting process modeling system using the finite element method, which provides the joint solution of temperature, hydrodynamics, and deformation problems, along with unique metallurgical capabilities, for all casting processes and casting alloys. In addition to the main aspects of casting production – filling, crystallization, and porosity prediction, ProCAST is capable of predicting the occurrence of deformations and residual stresses in the casting and can be used to analyze processes such as core making, centrifugal casting, lost wax casting, continuous casting.
PoligonSoft: a casting process modeling system using the finite element method. Applicable for modeling almost any casting technology and any casting alloy. For a long time, PoligonSoft was the only casting process modeling system in the world that included a special model for calculating microporosity. To date, this model can be considered the most stable, and the results obtained with its help can satisfy the most demanding users. In many respects, PoligonSoft can be considered the Russian equivalent of the ProCAST system.
MAGMASOFT: a casting process modeling system using the finite difference method. It allows analyzing thermal processes, mold filling, crystallization, and predicting defects in castings. The program includes modules for different casting technologies and helps optimize casting parameters to improve product quality. MAGMASOFT is an effective tool for increasing the productivity and quality of casting production.
Future trends
The outcome of a casting process simulation reflects the user's knowledge: the operator decides whether the filling system has led to an acceptable result, and suggestions for optimization must also come from the operator. The main difficulty is that all processes occur simultaneously and are interconnected: changing one parameter affects many quality characteristics of the casting.
Autonomous optimization, which began in the late 1980s, uses the simulation tool as a virtual testing ground, changing filling conditions and process parameters to find the optimal solution. This allows evaluating numerous process parameters and their impact on process stability.
It is important to remember that only what can be modeled can be optimized. Optimization does not replace process knowledge or experience. The simulation user must know the objectives and quality criteria necessary to achieve those objectives and formulate specific questions to the program to obtain quantitative solutions.
References
External links
Casting simulation software
The efficiency of multithreaded computing in casting simulation software
Foundries
Metalworking
Software
Casting (manufacturing) | Metal casting simulation | [
"Chemistry",
"Technology",
"Engineering"
] | 1,887 | [
"Foundries",
"Software engineering",
"Computer science",
"nan",
"Metallurgical facilities",
"Software"
] |
54,829,672 | https://en.wikipedia.org/wiki/Cubical%20complex | In mathematics, a cubical complex (also called cubical set and Cartesian complex) is a set composed of points, line segments, squares, cubes, and their n-dimensional counterparts. They are used analogously to simplicial complexes and CW complexes in the computation of the homology of topological spaces. Non-positively curved and CAT(0) cube complexes appear with increasing significance in geometric group theory.
Definitions
With regular cubes
A unit cube (often just called a cube) of dimension $n \geq 0$ is the metric space obtained as the $n$-fold cartesian product of copies of the unit interval $I = [0,1]$.
A face of a unit cube $C = I^n$ is a subset $F \subseteq C$ of the form $F = J_1 \times \cdots \times J_n$, where for all $i$, $J_i$ is either $\{0\}$, $\{1\}$, or $[0,1]$. The dimension of the face $F$ is the number of indices $i$ such that $J_i = [0,1]$; a face of dimension $k$, or $k$-face, is itself naturally a unit cube of dimension $k$, and is sometimes called a subcube of $C$. One can also regard $C$ as a face of dimension $n$.
A cubed complex is a metric polyhedral complex all of whose cells are unit cubes, i.e. it is the quotient of a disjoint union of copies of unit cubes under an equivalence relation generated by a set of isometric identifications of faces. One often reserves the term cubical complex, or cube complex, for such cubed complexes where no two faces of the same cube are identified, i.e. where the boundary of each cube is embedded, and the intersection of two cubes is a face in each cube.
A cube complex is said to be finite-dimensional if the dimension of the cubical cells is bounded. It is locally finite if every cube is contained in only finitely many cubes.
With irregular cubes
An elementary interval is a subset $I \subset \mathbb{R}$ of the form
$I = [\ell, \ell+1]$ or $I = [\ell, \ell]$
for some $\ell \in \mathbb{Z}$. An elementary cube $Q$ is the finite product of elementary intervals, i.e.
$Q = I_1 \times I_2 \times \cdots \times I_d \subset \mathbb{R}^d$
where $I_1, I_2, \ldots, I_d$ are elementary intervals. Equivalently, an elementary cube is any translate of a unit cube $[0,1]^n$ embedded in Euclidean space $\mathbb{R}^d$ (for some $n, d$ with $n \leq d$). A set $X \subseteq \mathbb{R}^d$ is a cubical complex (or cubical set) if it can be written as a union of elementary cubes (or possibly, is homeomorphic to such a set).
Related terminology
Elementary intervals of length 0 (containing a single point) are called degenerate, while those of length 1 are nondegenerate. The dimension of a cube $Q$ is the number of nondegenerate intervals in $Q$, denoted $\dim Q$. The dimension of a cubical complex $X$ is the largest dimension of any cube in $X$.
If $P$ and $Q$ are elementary cubes and $P \subseteq Q$, then $P$ is a face of $Q$. If $P$ is a face of $Q$ and $P \neq Q$, then $P$ is a proper face of $Q$. If $P$ is a face of $Q$ and $\dim P = \dim Q - 1$, then $P$ is a facet or primary face of $Q$.
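These definitions translate directly into a small data structure. The Python sketch below (function names are invented for illustration) stores an elementary cube as a tuple of interval endpoint pairs, each pair being $(\ell, \ell)$ (degenerate) or $(\ell, \ell+1)$ (nondegenerate), and computes its dimension and primary faces exactly as defined above.

```python
def dim(cube):
    """Dimension of the cube = number of nondegenerate intervals."""
    return sum(1 for a, b in cube if a != b)

def primary_faces(cube):
    """Facets: collapse one nondegenerate interval to either endpoint."""
    faces = []
    for i, (a, b) in enumerate(cube):
        if a != b:
            faces.append(cube[:i] + ((a, a),) + cube[i + 1:])
            faces.append(cube[:i] + ((b, b),) + cube[i + 1:])
    return faces

square = ((0, 1), (0, 1))        # the elementary cube [0,1] x [0,1] in R^2
print(dim(square))               # 2
print(primary_faces(square))     # its four edges, each of dimension 1
```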
In algebraic topology
In algebraic topology, cubical complexes are often useful for concrete calculations. In particular, there is a definition of homology for cubical complexes that coincides with the singular homology, but is computable.
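To suggest why this homology is directly computable, the sketch below implements the cubical boundary operator on elementary cubes, using a standard sign convention; chains are stored as dictionaries from cubes to integer coefficients, and the helper names are invented. The final assertion checks the defining identity that the boundary of a boundary vanishes.

```python
def boundary(cube):
    """Cubical boundary of one elementary cube, as a {cube: coeff} chain.
    Each nondegenerate interval contributes its two endpoints with
    opposite signs; signs alternate across nondegenerate coordinates."""
    chain, sign = {}, 1
    for i, (a, b) in enumerate(cube):
        if a != b:
            for end, s in ((b, 1), (a, -1)):
                face = cube[:i] + ((end, end),) + cube[i + 1:]
                chain[face] = chain.get(face, 0) + sign * s
            sign = -sign
    return chain

def boundary_of_chain(chain):
    """Extend the boundary operator linearly to chains."""
    out = {}
    for cube, c in chain.items():
        for face, s in boundary(cube).items():
            out[face] = out.get(face, 0) + c * s
    return {q: v for q, v in out.items() if v != 0}

cube = ((0, 1), (0, 1), (0, 1))                 # a solid 3-cube
assert boundary_of_chain(boundary(cube)) == {}  # boundary of a boundary is zero
print("d(d(Q)) = 0 verified")
```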
In geometric group theory
Groups acting geometrically by isometries on CAT(0) cube complexes provide a wide class of examples of CAT(0) groups.
The Sageev construction can be understood as a higher-dimensional generalization of Bass-Serre theory, where the trees are replaced by CAT(0) cube complexes. Work by Daniel Wise has provided foundational examples of cubulated groups. Agol's theorem that cubulated hyperbolic groups are virtually special has settled the hyperbolic virtually Haken conjecture, which was the only case left of this conjecture after Thurston's geometrization conjecture was proved by Perelman.
CAT(0) cube complexes
Gromov's theorem

By a theorem of Gromov, a finite-dimensional cube complex is non-positively curved (locally CAT(0)) if and only if the link of each of its vertices is a flag simplicial complex, and it is CAT(0) exactly when it is, in addition, simply connected.
Hyperplanes

A midcube of the cube $[0,1]^n$ is the subset obtained by restricting one coordinate to $1/2$. A hyperplane of a cube complex is a connected union of midcubes that intersects each cube in at most one midcube; in a CAT(0) cube complex, every hyperplane separates the complex into exactly two half-spaces.
CAT(0) cube complexes and group actions
The Sageev construction
RAAGs and RACGs
See also
Simplicial complex
Simplicial homology
Abstract cell complex
References
Cubes
Topological spaces
Algebraic topology
Computational topology | Cubical complex | [
"Mathematics"
] | 790 | [
"Computational topology",
"Mathematical structures",
"Computational mathematics",
"Space (mathematics)",
"Algebraic topology",
"Topological spaces",
"Fields of abstract algebra",
"Topology"
] |
66,158,871 | https://en.wikipedia.org/wiki/Environmental%20impacts%20of%20beavers | The beaver is a keystone species, increasing biodiversity in its territory through creation of ponds and wetlands. As wetlands are formed and riparian habitats enlarged, aquatic plants colonize newly available watery habitat. Insect, invertebrate, fish, mammal, and bird diversities are also expanded. Effects of beaver recolonization on native and non-native species in streams where they have been historically absent, particularly dryland streams, is not well-researched.
Effects on stream flows and water quality
Beaver ponds increase stream flows in seasonally dry streams by storing run-off in the rainy season, which raises groundwater tables via percolation from beaver ponds. A study using 12 serial aerial photo mosaics from 1948 to 2002 examined the impact of the return of beavers on open-water area in east-central Alberta, Canada, and found that the mammals were associated with a 9-fold increase in open-water area. Beavers returned to the area in 1954 after a long absence since their extirpation by the fur trade in the 19th century. During drought years, where beavers were present, 60% more open water was available than in those same areas during previous drought periods when beavers were absent. The authors concluded that beavers have a dramatic influence on the creation and maintenance of wetlands even during extreme drought.
From streams in the Maryland coastal plain to Lake Tahoe, beaver ponds have been shown to remove sediment and pollutants, including total suspended solids, total nitrogen, phosphates, carbon, and silicates, thus improving stream water quality. In addition, fecal coliform and streptococci bacteria excreted into streams by grazing cattle are reduced by beaver ponds, where slowing currents lead to settling of the bacteria in bottom sediments.
Following findings that the parasite Giardia lamblia, which causes giardiasis, was putatively carried by beavers, the term "beaver fever" was coined by the American press in the 1970s. Further research has shown that many animals and birds carry this parasite, and the major source of water contamination is by humans. Recent concerns point to domestic animals as a significant vector of giardia, with young calves in dairy herds testing as high as 100% positive for giardia. New Zealand has giardia outbreaks but no beavers, whereas Norway has plenty of beavers but had no giardia outbreaks until recently, when one occurred in a southern part of the country that is densely populated by humans but has no beavers.
In 2011, a Eurasian beaver pair was introduced to a beaver project site in West Devon, consisting of a large enclosure with a long channel and one pond. Within five years, the pair created a complex wetland with an extensive network of channels, 13 ponds and dams. Survey results showed that the created ponds retain a substantial volume of sediment, along with the carbon and nitrogen stored within it. Concentrations of carbon and nitrogen were significantly higher in these ponds than farther upstream of this site. These results indicate that the beavers' activity contributes to reducing the effects of soil erosion and pollution in agricultural landscapes.
Effects on animals
Bird abundance and diversity
Beavers help waterfowl by creating increased areas of water, and in northerly latitudes, they thaw areas of open water, allowing an earlier nesting season. In a study of Wyoming streams and rivers, watercourses with beavers had 75-fold more ducks than those without.
Trumpeter swans (Cygnus buccinator) and Canada geese (Branta canadensis) often depend on beaver lodges as nesting sites. Canada's small trumpeter swan population was observed not to nest on large lakes, preferring instead to nest on the smaller lakes and ponds associated with beaver activity.
Beavers may benefit birds frequenting their ponds in several additional ways. Removal of some pondside trees by beavers increases the density and height of the grass–forb–shrub layer, which enhances waterfowl nesting cover adjacent to ponds. Both forest gaps where trees had been felled by beavers and a "gradual edge" described as a complex transition from pond to forest with intermixed grasses, forbs, saplings, and shrubs are strongly associated with greater migratory bird species richness and abundance. Coppicing of waterside willows and cottonwoods by beavers leads to dense shoot production which provides important cover for birds and the insects on which they feed. Widening of the riparian terrace alongside streams is associated with beaver dams and has been shown to increase riparian bird abundance and diversity, an impact that may be especially important in semiarid climates.
As trees are drowned by rising beaver impoundments, they become ideal nesting sites for woodpeckers, which carve cavities that attract many other bird species, including flycatchers (Empidonax spp.), tree swallows (Tachycineta bicolor), tits (Paridae spp.), wood ducks (Aix sponsa), goldeneyes (Bucephala spp.), mergansers (Mergus spp.), owls (Tytonidae, Strigidae) and American kestrels (Falco sparverius). Piscivores, including herons (Ardea spp.), grebes (Podicipedidae), cormorants (Phalacrocorax ssp.), American bitterns (Botaurus lentiginosa), great egret (Ardea alba), snowy egret (Egretta thula), mergansers, and belted kingfishers (Megaceryle alcyon), use beaver ponds for fishing. Hooded mergansers (Lophodytes cucullatus), green heron (Butorides virescens), great blue heron (Ardea herodias) and belted kingfisher appeared more frequently in New York wetlands where beaver were active than at sites with no beaver activity.
By perennializing streams in arid deserts, beavers can create habitat which increases the abundance and diversity of riparian-dependent species. For example, on the upper San Pedro River in southeastern Arizona, reintroduced beavers have created willow and pool habitat which has extended the range of the endangered Southwestern willow flycatcher (Empidonax traillii extimus), with the southernmost verifiable nest recorded in 2005.
Bats
Beaver modifications to streams in Poland have been associated with increased bat activity. While overall bat activity was increased, Myotis bat species, particularly Myotis daubentonii, activity may be hampered in locations where beaver ponds allow for increased presence of duckweed.
Trout and salmon
Beaver ponds have been shown to have a beneficial effect on trout and salmon populations. Many authors believe that the decline of salmonid fishes is related to the decline in beaver populations. Research in the Stillaguamish River basin in Washington found that extensive loss of beaver ponds resulted in an 89% reduction in coho salmon (Oncorhynchus kisutch) smolt summer production and an almost equally detrimental 86% reduction in critical winter habitat carrying capacity. This study also found that beaver ponds increased smolt salmon production 80 times more than the placement of large woody debris. Swales and Levings had previously shown on the Coldwater River in British Columbia that off-channel beaver ponds were preferentially populated by coho salmon over other salmonids and provided overwintering protection, protection from high summer snowmelt flows and summer coho rearing habitat. Beaver-impounded tidal pools on the Pacific Northwest's Elwha River delta support three times as many juvenile Chinook salmon (Oncorhynchus tshawytscha) as pools without beaver.
The presence of beaver dams has also been shown to increase either the number of fish, their size, or both, in a study of brook trout (Salvelinus fontinalis), rainbow trout (Oncorhynchus mykiss) and brown trout (Salmo trutta) in Sagehen Creek, which flows into the Little Truckee River in the northern Sierra Nevada. These findings are consistent with a study of small streams in Sweden, which found that brown trout were larger in beaver ponds compared with those in riffle sections, and that beaver ponds provide habitat for larger trout in small streams during periods of drought. Similarly, brook trout, coho salmon, and sockeye salmon (Oncorhynchus nerka) were significantly larger in beaver ponds than those in unimpounded stream sections in Colorado and Alaska. In a recent study on a headwater Appalachian stream, brook trout were also larger in beaver ponds.
Most beaver dams do not pose barriers to trout and salmon migration, although they may be restricted seasonally during periods of low stream flows. In a meta-review of studies claiming that beaver dams act as fish passage barriers, Kemp et al. found that 78% of these claims were not supported by any data. In a 2013 study of radiotelemetry-tagged Bonneville cutthroat trout (Oncorhynchus clarki utah) and brook trout (Salvelinus fontinalis) in Utah, both of these fish species crossed beaver dams in both directions, including dams up to high. Rainbow, brown, and brook trout have been shown to cross as many as 14 consecutive beaver dams. Both adults and juveniles of coho salmon, steelhead trout, sea run cutthroat (Oncorhyncus clarki clarki), Dolly Varden trout (Salvelinus malma malma), and sockeye salmon are able to cross beaver dams. In southeast Alaska, coho jumped dams as high as two meters, were found above all beaver dams and had their highest densities in streams with beaver. In Oregon coastal streams, beaver dams are ephemeral and almost all wash out in high winter flows only to be rebuilt every summer. Migration of adult Atlantic salmon (Salmo salar) may be limited by beaver dams, but the presence of juveniles upstream from the dams suggests that the dams are penetrated by parr. Downstream migration of Atlantic salmon smolts was similarly unaffected by beaver dams, even in periods of low flows. Two-year-old Atlantic salmon parr in beaver ponds in eastern Canada showed faster summer growth in length and mass and were in better condition than parr upstream or downstream from the pond.
The importance of winter habitat to salmonids afforded by beaver ponds may be especially important in streams without deep pools or where ice cover makes contact with the bottom of shallow streams. Enos Mills wrote in 1913, "One dry winter the stream ... ran low and froze to the bottom, and the only trout in it that survived were those in the deep holes of beaver ponds." Cutthroat trout and bull trout were noted to overwinter in Montana beaver ponds, brook trout congregated in winter in New Brunswick and Wyoming beaver ponds, and coho salmon in Oregon beaver ponds. In 2011, a meta-analysis of studies of beaver impacts on salmonids found that beaver were a net benefit to salmon and trout populations primarily by improving habitat (building ponds) both for rearing and overwintering and that this conclusion was based over half the time on scientific data. In contrast, the most often cited negative impact of beavers on fishes were barriers to migration, although that conclusion was based on scientific data only 22% of the time. They also found that when beaver dams do present barriers, these are generally short-lived, as the dams are overtopped, blown out, or circumvented by storm surges.
By creating additional channel network complexity, including ponds and marshes laterally separated from the main channel, beavers may play a role in the creation and maintenance of fish biodiversity. In off-mainstem channels restored by beaver on the middle section of Utah's Provo River, native fish species persist even where they have been extirpated from the mainstem channel by competition from introduced non-native fish. Efforts to restore salmonid habitat in the western United States have focused primarily on placing large woody debris in streams to slow flows and create pools for young salmonids. Research in Washington found that average summer smolt production per beaver dam ranges from 527 to 1,174 fish, whereas summer smolt production from a pool formed by instream large woody debris is about 6–15 individuals, suggesting that re-establishing beaver populations would be roughly 80 times more effective.
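As a quick sanity check on that comparison, the short sketch below works out the ratio of the midpoints of the two reported ranges. Treating smolt counts per beaver dam and per woody-debris pool as directly comparable is a simplifying assumption made here for illustration, not something the study itself asserts.

```python
# Rough comparison of summer smolt production per habitat structure,
# using the per-structure ranges reported above for Washington streams.
# Assumption: the two ranges are directly comparable per structure.

beaver_dam_smolts = (527, 1174)  # smolts per beaver dam (low, high)
lwd_pool_smolts = (6, 15)        # smolts per large-woody-debris pool (low, high)

def midpoint(rng):
    """Return the midpoint of a (low, high) range."""
    low, high = rng
    return (low + high) / 2

ratio = midpoint(beaver_dam_smolts) / midpoint(lwd_pool_smolts)
print(f"Midpoint ratio, beaver dam vs. LWD pool: ~{ratio:.0f}x")
# -> ~81x, consistent with the roughly 80-fold figure quoted above
```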
Beaver have been discovered living in brackish water in estuarine tidal marshes where Chinook salmon (Oncorhynchus tshawytscha) densities were five times higher in beaver ponds than in neighboring areas.
Amphibians
A study of mid-elevation beaver-dammed versus undammed lentic streams in Washington's southern Cascades found that the prevalence of slow-developing amphibian populations was 2.7 times higher in the former, because beaver ponds were deeper and had longer hydroperiods. Specifically, slow-developing northern red-legged frogs (Rana aurora) and northwestern salamanders (Ambystoma gracile) were found almost exclusively in beaver-dammed locations, suggesting that these amphibians depend on beaver-engineered microhabitats. In the arid Great Basin of the western and northwestern United States, establishment of beaver ponds has been used successfully as a management strategy to accelerate population growth of the Columbia spotted frog (Rana luteiventris), which also depends on ponds with longer hydroperiods. In a report from Contra Costa County, California, both beaver dams and burrows associated with bank lodges were found to provide refuge microhabitat for the federally listed threatened California red-legged frog (Rana draytonii). Beaver-dammed ponds were also found to provide breeding habitat for R. draytonii adults and rearing habitat for their tadpoles. The report recommended that beaver "be treated as critical to the survival" of California red-legged frogs.
Beaver-engineered wetlands in the Boreal Foothills of west-central Alberta were also found to play a pre-eminent role in establishment of anuran species including the boreal chorus frog (Pseudacris maculata), North American wood frog (Rana sylvatica) and western toad (Anaxyrus boreas). In northern New York, mink frogs (Rana septentrionalis) were more abundant in larger ponds associated with beaver in the Adirondack Mountains, possibly because the colder, deeper water associated with large beaver ponds buffers this heat-intolerant species.
Insects
The fallen trees and stripped bark produced by beaver activity provide popular oviposition sites for the virilis group of Drosophila, including the fruit fly Drosophila montana. Capture of these Drosophila species for research is significantly more successful near beaver residences. The preference of beavers for birch, willow, and alder corresponds with the oviposition site preferences of the Drosophila virilis species group, leading to commensalism between beavers and these species.
Effects on riparian trees and vegetation
Conventional wisdom has held that beavers girdle and fell trees and thereby diminish riparian trees and vegetation, but the opposite appears to be true when studies are conducted over the longer term. In 1987, Beier reported that beavers had caused local extinction of quaking aspen (Populus tremuloides) and black cottonwood (Populus trichocarpa) on 4–5% of stream reaches on the lower Truckee River in the Sierra Nevada mountains; however, willow (Salix spp.) responded by regrowing vigorously in most reaches. He further speculated that without control of beaver populations, aspen and cottonwood could go extinct on the Truckee River. Not only have aspen and cottonwood survived ongoing beaver colonization, but a recent study of ten Sierra Nevada streams in the Lake Tahoe basin using aerial multispectral videography has shown that deciduous, thick herbaceous, and thin herbaceous vegetation are more highly concentrated near beaver dams, whereas coniferous trees are decreased. These findings are consistent with those of Pollock, who reported that in Bridge Creek, a stream in semiarid eastern Oregon, the width of riparian vegetation on stream banks increased several-fold as beaver dams watered previously dry terraces adjacent to the stream.
In a second study of riparian vegetation, based on observations of Bridge Creek over a 17-year period, portions of the study reach were periodically abandoned by beaver following heavy utilization of streamside vegetation, yet within a few years, dense stands of woody plants of greater diversity occupied a larger portion of the floodplain. Although black cottonwood and thinleaf alder did not generally resprout after beaver cutting, they frequently grew from seeds landing on alluvial deposits freshly exposed by beaver activity. Beaver therefore appear to increase riparian vegetation when given enough years to aggrade sediment and raise pond heights sufficiently to create widened, well-watered riparian zones, especially in areas of low summer rainfall. Beavers also play an important role in seed dispersal for the water lily populations that they consume.
The surface of beaver ponds is typically at or near bank-full, so even small increases in stream flows cause the pond to overflow its banks. Thus, high stream flows spread water and nutrients beyond the stream banks to wide riparian zones when beaver dams are present.
Finally, beaver ponds may serve as critical firebreaks in fire-prone areas.
Stream restoration
In the 1930s, the U.S. government put 600 beavers to work alongside the Civilian Conservation Corps in projects to stop soil erosion by streams in Oregon, Washington, Wyoming, and Utah. At the time, each beaver, whose initial cost was about $5, completed work worth an estimated $300. In 2014, a review of beaver dams as stream restoration tools proposed that an ecosystem approach using riparian plants and beaver dams could accelerate the repair of incised, degraded streams compared with physical manipulation of the streams alone.
The province of Alberta published a booklet in 2016 providing information on using beaver for stream restoration.
Utah published a Beaver Management Plan that includes reestablishing beavers in ten streams per year for watershed restoration from 2010 through 2020.
In a pilot study in Washington, the Lands Council is reintroducing beavers to the upper Methow River Valley in the eastern Cascades to evaluate its projection that if suitable habitat were repopulated, 650 trillion gallons of spring runoff would be held back for release in the arid autumn season. Beavers had been nearly exterminated in the Methow watershed by fur trappers by the early 1900s. This project was developed in response to a 2003 Washington Department of Ecology proposal to spend as much as $10 billion on construction of several dams on Columbia River tributaries to retain storm-season runoff. As of January 2016, 240 beavers released into the upper Methow River at 51 sites had built 176 beaver ponds, storing millions of gallons of water in this semiarid eastern region. One beaver that was passive integrated transponder tagged and released in the upper part of the Methow Valley swam to the mouth of the Methow River, then up the Okanogan River almost to the Canada–US border, a remarkably long journey.
In efforts to 'rebeaver' areas where beaver populations have declined, artificial logjams have been placed. Beavers may also be encouraged to build dams through the creation of a "beaver dam analog (BDA)". Initially, these were made by felling fir logs, pounding them upright into the stream bed, and weaving a lattice of willow sticks through the posts, which beavers would then expand. To further minimize labor, newer postless designs have been used, which beavers can still expand into sequential dams in smaller streams.
Beaver ponds as wildlife refugia and firebreaks in wildfires
Beavers and their associated ponds and wetlands may be overlooked as effective wildfire-fighting tools. Eric Collier's 1959 book, Three Against the Wilderness, provides an early description of a string of beaver ponds serving as a firebreak, saving the home of his pioneer family from a wildfire in interior British Columbia. Reduction of fuel loads through beaver removal of riparian trees, increased moisture content in riparian vegetation fed by beaver-raised water tables, and the water held in beaver ponds all act as barriers to wildfires. A study of vegetation after five large wildfires in the western United States found that riparian corridors near beaver ponds were buffered from wildfires when compared with similar riparian corridors without beaver dams. One month after the Sharps Fire burned in Idaho's Blaine County in 2018, a lone surviving green ribbon of riparian vegetation along Baugh Creek was observed (see image), illustrating how a string of beaver ponds resists wildfires, creating an "emerald refuge" for wildlife. After the 2015 Twisp River Fire, ponds built by translocated beavers created firebreaks, as evidenced by burns on one side of the river but not the other. A study of 29 beaver ponds in the Columbia River Basin found that they store an average of 1.1 million gallons of water each, suggesting that beaver ponds may provide a water source for firefighters in remote areas. Lastly, two studies of the Methow River watershed after the 2014 Carlton Complex Fire in north central Washington State have shown that beaver dams reduced the negative impacts of wildfire on sediment runoff, reduced post-wildfire sediment and nutrient loads, and preserved both plant and macroinvertebrate communities.
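To put those storage figures on a common scale, the sketch below multiplies pond counts by the Columbia River Basin per-pond average. Extrapolating that average to other pond populations, such as the 176 Methow ponds mentioned earlier, is an assumption made purely for illustration, since pond sizes vary widely.

```python
# Back-of-the-envelope estimate of water stored by groups of beaver ponds,
# scaling the Columbia River Basin average of ~1.1 million gallons per pond.
# Assumption: that basin average applies to other pond populations.

GALLONS_PER_POND = 1.1e6         # average storage per pond (Columbia Basin study)
GALLONS_PER_ACRE_FOOT = 325_851  # standard US volume conversion

def stored_gallons(num_ponds, per_pond=GALLONS_PER_POND):
    """Total gallons stored across num_ponds, assuming the average holds."""
    return num_ponds * per_pond

for ponds in (29, 176):  # Columbia Basin study ponds; Methow ponds (illustrative)
    gallons = stored_gallons(ponds)
    print(f"{ponds:>3} ponds: ~{gallons / 1e6:.0f} million gallons "
          f"(~{gallons / GALLONS_PER_ACRE_FOOT:.0f} acre-feet)")
```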
Urban beavers
Canada
Several Canadian cities have seen a resurgence in their beaver populations in recent decades. The beaver population in Calgary was approximately 200 in 2016, with the majority located near the Bow and Elbow Rivers. When required, the city of Calgary uses a combination of methods to prevent beaver damage to trees and river parks. Methods of damage prevention include placing mesh wire fencing around tree trunks, planting trees less palatable to beavers near shorelines, placing under-dam drainage systems to control water levels, and placing traps designed to kill instantly, as Alberta Environment and Parks does not allow the relocation of caught beavers to other areas.
Beavers have occasionally wandered into Downtown Ottawa, including Parliament Hill, Major's Hill Park, and Sparks Street. Beavers caught in the urban core of Ottawa by the National Capital Commission's conservation team are typically brought to a wildlife centre and later released near the Ottawa River, close to the Greenbelt. In 2011, the city of Ottawa began trapping beavers taking up residence in a stormwater pond in the Stittsville neighbourhood. Ottawa is situated south of the southern entrance to Gatineau Park. Located on the Quebec side of the Ottawa River, the park is home to one of North America's densest populations of beavers, with more than 1,100 beavers in 272 beaver colonies according to a 2011 aerial inventory of the park. The beaver population at Gatineau Park is monitored by the National Capital Commission in an effort to protect local infrastructure and maintain public safety.
The city of Toronto government and the Toronto and Region Conservation Authority (TRCA) do not keep track of the number of beavers residing in Toronto, although a 2001 estimate places the local beaver population at several hundred. Beavers are commonly found along the shoreline of Lake Ontario and make their way throughout the waterway corridors of the city, most notably the Don, Humber, and Rouge Rivers, the ravine system adjacent to the waterways, and the Toronto Islands. The city of Toronto government has no plans to either control the spread or contain the number of beavers in the city. The city's urban forestry department will occasionally install heavy mesh wire fences around the trunks of trees to prevent them from being damaged by beavers. In 2013, flow devices were installed along the Rouge River to prevent beaver dams from flooding the river; prior to their installation, beavers whose dams caused the river to flood were trapped. In a 2017 TRCA report on local occurrences of fauna in Greater Toronto, beavers were given a score of L4, assigned to species whose populations are secure in the rural portions of Greater Toronto but whose populations in urban areas remain vulnerable to potential long-term decline of their habitats.
Several dozen beavers were estimated to inhabit Vancouver in 2016. Beavers have inhabited Jericho Beach since at least 2000, although they did not move into other areas of Vancouver until later that decade. After an 80-year absence, a beaver was spotted in Stanley Park's Beaver Lake in 2008; by 2016, five beavers inhabited Beaver Lake. In the same year, a pair of beavers built a dam in Hinge Park. The Vancouver Park Board approved a strategy in 2016 that included plans to promote the growth of the beaver population near the Olympic Village.
Beavers in Winnipeg numbered around 100 in 2019 and live along the city's rivers and streams. After receiving complaints of beaver-related damage in 2012, the city of Winnipeg has placed mesh wire fencing around tree trunks along the shore of the Assiniboine River during the winter and laid down traps designed to kill the beavers. Like Alberta, Manitoba's provincial guidelines do not allow the live capture and relocation of beavers. The city employs one contractor ten times a year to manage the beaver population in Winnipeg; the contractor is authorized to remove beavers with a firearm under Manitoba's Wildlife Act.
United States
Several cities in the United States have seen the reintroduction of beavers within their city limits. In Chicago, several beavers have returned and made a home near Lincoln Park's North Pond. The "Lincoln Park beaver" has not been as well received by the Chicago Park District and the Lincoln Park Conservancy, which were concerned over damage to trees in the area. In March 2009, they hired an exterminator to remove a beaver family using live traps, and accidentally killed the mother when she was caught in a snare and drowned. Relocation costs $4,000–$4,500 per animal. Scott Garrow, District Wildlife Biologist with the Illinois Department of Natural Resources, opined that relocating the beavers may be "a waste of time", as beavers recolonizing North Pond in Lincoln Park were recorded in 1994, 2003, 2004, 2008, 2009, 2014, and 2018.
In downtown Martinez, California, a male and a female beaver arrived in Alhambra Creek in 2006. The Martinez beavers built a dam 30 feet wide and at one time 6 feet high, and chewed through half the willows and other creekside landscaping the city had planted as part of its $9.7 million 1999 flood-improvement project. When the City Council wanted to remove the beavers because of fears of flooding, local residents organized to protect them, forming an organization called "Worth a Dam". The resolution included installation of a flow device through the beaver dam so that the pond's water level could not become excessive. Now protected, the beavers have transformed Alhambra Creek from a trickle into multiple dams and beaver ponds, which, in turn, led to the return of steelhead trout and river otter in 2008, and mink in 2009. The Martinez beavers probably originated from the Sacramento-San Joaquin River Delta, which once held the largest concentration of beavers in North America.
After 200 years, a lone beaver returned to New York City in 2007, making its home along the Bronx River, having spent time living at the Bronx Zoo and the Botanical Gardens. Though beaver pelts were once important to the city's economy and a pair of beavers appears on the city's official seal and flag, beavers had not lived in New York City since the early 19th century, when trappers extirpated them completely from the city. The return of "José", named after Representative José Serrano of the Bronx, has been seen as evidence that efforts to restore the river have been successful. In the summer of 2010, a second beaver named "Justin" joined José, doubling the beaver population in New York City. In February 2013, what appeared to be both José and Justin were caught on motion-sensitive cameras at the New York Botanical Garden.
In 1999, Washington, D.C.'s annual Cherry Blossom Festival included a family of beavers that lived in the Tidal Basin. The animals were caught and removed, but not before damaging 14 cherry trees, including some of the largest and oldest trees.
Arctic impacts
Rapidly increasing temperatures in northern latitudes have resulted in increasingly favorable conditions for beavers, as the availability of woody vegetation has increased and melting permafrost has released large volumes of water. Increased open water in ponds may increase warming, and rapid ecological changes may disrupt fish populations and the harvests of indigenous hunter-gatherers. Increases in beavers and beaver dams interact intimately with thermokarst ponds and lakes. Beavers sometimes build dams at the outlets of thermokarst lakes and in the dried beds of thermokarst depressions, as well as in beaded form along streams.
Invasive impacts
In the 1940s, beavers were brought to Tierra del Fuego in southern Chile and Argentina for commercial fur production and introduced near Fagnano Lake. Although the fur enterprise failed, 25 pairs of beavers were released into the wild. Having no natural predators in their new environment, they quickly spread throughout the main island and to other islands in the archipelago, reaching some 100,000 individuals within 50 years. Although they have been considered an invasive species, it has more recently been shown that the beaver has some beneficial ecological effects on native fish and should not be considered wholly detrimental. Although the dominant Lenga beech (Nothofagus pumilio) forest can regenerate from stumps, most of the newly created beaver wetlands are being colonized by the rarer native Antarctic beech (Nothofagus antarctica). It is not known whether the shrubbier Antarctic beech will be succeeded by the originally dominant and larger Lenga beech; the beaver wetlands are, however, readily colonized by non-native plant species. In contrast, areas with introduced beavers were associated with increased populations of the native catadromous puye fish (Galaxias maculatus). Furthermore, the beavers did not seem to have a highly beneficial impact on the exotic brook trout (Salvelinus fontinalis) and rainbow trout (Oncorhynchus mykiss), which have negative impacts on native stream fishes in the Cape Horn Biosphere Reserve, Chile. Beavers have also been found to cross salt water to islands to the north, and they reached the Chilean mainland in the 1990s. On balance, most favour their removal because of their landscape-wide modifications to the Fuegian environment and because biologists want to preserve the unique biota of the region.
See also
Beaver drop
References
Bibliography
Goldfarb, Ben (2019). Eager: The Surprising, Secret Life of Beavers and Why They Matter. Chelsea Green Publishing.
Aquatic ecology
Beavers
Forest ecology
Habitat
Systems ecology
Beavers | Environmental impacts of beavers | [
"Biology",
"Environmental_science"
] | 6,334 | [
"Environmental social science",
"Aquatic ecology",
"Ecosystems",
"Systems ecology"
] |
66,160,682 | https://en.wikipedia.org/wiki/Argonne%20Fast%20Source%20Reactor | Argonne Fast Source Reactor (AFSR) was a research reactor at Argonne National Laboratory's Argonne-West site, a United States Department of Energy facility located in the high desert of southeastern Idaho between Idaho Falls, Idaho, and Arco, Idaho.
History
The Argonne Fast Source Reactor was a tool used to calibrate instruments and to study fast reactor physics, augmenting the Zero Power Plutonium Reactor (ZPPR) research program. Located at Argonne-West, this low-power reactor—designed to operate at a power of only one kilowatt—contributed to an improvement in the techniques and instruments used to measure experimental data.
The AFSR was designed to supplement the existing facilities of the Idaho Division of Argonne National Laboratory. It was designed as a readily available source of both fast and thermal neutrons for use as follows:
developing, testing, calibrating, and standardizing various counters;
preparation of radioactive metallic foils used in the development of counting and radiochemical techniques;
checking out complex experimental systems before operation in other reactors;
and development of potential experiments in the fast reactor field.
In the fall of 1970, this reactor was moved to a new location adjacent to the ZPPR facility at the ANL West site of the NRTS.
The reactor started up on October 29, 1959, and operated through the late 1970s.
Design
AFSR was designed and built in 1958 near EBR-I at the National Reactor Testing Station (NRTS). It had a design power of one kilowatt.
Decommissioning
AFSR operated through the late 1970s. The reactor is now shut down and defueled.
Bibliography
Stacy, Susan M. "Proving the Principle". Idaho Operations Office of the Department of Energy. Idaho Falls, Idaho. DOE/ID-10799. 2000. Retrieved from: https://factsheets.inl.gov/SitePages/Publications.aspx
This article incorporates text from "Proving the Principle" (2000), a public domain work prepared by or on behalf of the US government, which may be found at: https://factsheets.inl.gov/SitePages/Publications.aspx.
See also
Argonne National Laboratory
Idaho National Laboratory
List of nuclear research reactors#United States
References
Nuclear research reactors | Argonne Fast Source Reactor | [
"Physics"
] | 483 | [
"Nuclear and atomic physics stubs",
"Nuclear physics"
] |