id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
25,083,905 | https://en.wikipedia.org/wiki/Chorioallantoic%20membrane | The chorioallantoic membrane (CAM), also known as the chorioallantois, is a highly vascularized membrane found in the eggs of certain amniotes like birds and reptiles. It is formed by the fusion of the mesodermal layers of two extra-embryonic membranes – the chorion and the allantois. It is the avian homologue of the mammalian placenta. It is the outermost extra-embryonic membrane which lines the non-vascular egg shell membrane.
Structure
The chorioallantoic membrane is composed of three layers. The first is the chorionic epithelium, the external layer lying immediately below the shell membrane; it consists of epithelial cells that arise from the chorionic ectoderm. The second is the intermediate mesodermal layer, which consists of mesenchymal tissue formed by the fusion of the mesodermal layer of the chorion and the mesodermal layer of the allantois. This layer is highly vascularized and rich in stromal components. The third is the allantoic epithelium, which consists of epithelial cells arising from the allantoic ectoderm and forms part of the wall of the allantoic sac.
Both the epithelial layers are separated from the mesodermal layer by basement membranes.
Function
The chorioallantoic membrane performs the following functions:
The CAM functions as the site of gaseous exchange of oxygen and carbon dioxide between the growing embryo and the environment. Blood capillaries and sinuses in the intermediate mesodermal layer allow close contact (within 0.2 μm) with the air in the pores of the shell membrane of the egg.
The chorionic epithelial layer contains the calcium-transporting region of the CAM and is thus responsible for the transport of calcium ions from the egg shell into the embryo for ossification of the developing embryo's bones. The CAM also helps maintain acid-base homeostasis in the embryo. Finally, the allantoic epithelium serves as a barrier to the allantoic cavity; it is selectively permeable, permitting the absorption of water and electrolytes while maintaining a barrier against the toxins and waste materials stored inside the allantoic cavity.
Development
The development of the CAM is similar to that of the allantois in mammals. Its growth starts from day 3 of embryonic development. Development of the allantois occurs extra-embryonically from the ventral wall of the endodermal hindgut. Partial fusion of the chorion and allantois occurs between days 5 and 6. By day 10, an extensive capillary network has formed. Differentiation of the CAM is complete by day 13.
Cultivation protocols
Chorioallantoic membranes can be cultivated either outside (ex-ovo) or inside of the shell (in-ovo).
Ex-ovo
Here, the embryo is grown outside of the shell. In this method, the eggs are first kept inside a humidified incubator for up to 3 days, to ensure that the position of the embryo is opposite to the position where the egg will subsequently be cracked. A small hole is made on the side of the air chamber to equilibrate the pressure, followed by cracking of the egg onto a petri dish.
This method is ideal for visualizing and manipulating the growing embryo without limitations in access during the different stages of development. However, the process requires aseptic conditions. There are also problems associated with handling of the embryo, as the yolk membrane is prone to rupture both during and after the culture.
In-ovo
Here, the embryo is grown within the confines of the egg shell. In this method, fertilized eggs are rotated inside an incubator for three days in order to prevent the embryo from sticking to the membranes of the shell. A hole is then created in the eggshell and covered with a film to prevent dehydration and infection. The egg is then maintained in a static position until further use; this step prevents the CAM from sticking to the shell membrane. At day 7 post-fertilisation, the hole is extended in order to access the CAM.
This method offers several advantages over the ex-ovo method, as the physiological environment for the developing embryo remains virtually unchanged. It is easier to maintain sterility, as well as the integrity of the CAM and the embryo, when they remain inside the shell. However, good technical skills are required for this method. The presence of the shell around the developing embryo makes access to the embryo difficult, and there are also limitations in observing and imaging the developing embryo.
Applications
The CAM offers several advantageous features, such as ease of access, rapid development of the membrane structure, an immunodeficient environment, and ease of visualization with imaging techniques ranging from microscopy to PET scans. It is therefore a suitable model for a number of applications in biological and biomedical research:
Vascular development and angiogenesis.
Xenograft studies.
Study of tumour growth and differentiation.
Wound repair studies.
Toxicology studies.
Nanomaterials assessments.
Drug delivery.
Study of molecules with angiogenic and anti-angiogenic activities.
Culturing of viruses such as herpes simplex virus.
Drug screening studies.
Radiotherapy related studies.
Allergenicity and toxicity studies.
Helminth cultivation.
Oncological investigations.
Advantages
The advantages of using CAM are:
It is easy to use compared to other animal models.
Assays can be visualized in real time using techniques ranging from very simple to highly complex.
Rapid vascular growth.
Cost effective, easy to access.
The circulatory system is completely accessible, making the delivery of intravenous molecules easy.
Assays take relatively less time.
Easily reproducible and reliable.
Disadvantages
Despite the numerous advantages, there are a number of disadvantages associated with the use of CAMs:
Sensitivity to modifications in environmental conditions.
Limited availability of reagents like antibodies due to avian origin.
Non-specific inflammatory reaction after 15 days of development.
Difficulty of distinguishing the formation of new capillaries from the already existing vascular network.
Differences in metabolism of drugs as compared to mammals.
References
Vertebrate developmental biology
Membrane biology
Birds | Chorioallantoic membrane | ["Chemistry", "Biology"] | 1,319 | ["Birds", "Membrane biology", "Animals", "Molecular biology"] |
25,089,370 | https://en.wikipedia.org/wiki/HyShot | HyShot is a research project of The University of Queensland, Australia Centre for Hypersonics, to demonstrate the possibility of supersonic combustion under flight conditions using two scramjet engines, one designed by The University of Queensland and one designed by QinetiQ (formerly the MOD's Defence Evaluation & Research Agency).
Overview
The project has involved the successful launch of one engine designed by The University of Queensland, and one launch of the scramjet designed by the British company QinetiQ. Each combustion unit was launched on the nose of a Terrier-Orion Mk70 sounding rocket on a high ballistic trajectory, reaching altitudes of approximately 330 km. The rocket was rotated to face the ground, and the combustion unit ignited for a period of 6–10 seconds while falling between 35 km and 23 km at around Mach 7.6. The system is not designed to produce thrust.
The first HyShot flight was on 30 October 2001 but was a failure due to the rocket going off course.
The first successful launch (HyShot II), of a University of Queensland scramjet, took place on 30 July 2002. There has since been much analysis of the data obtained in flight and comparison with results from experiments conducted in ground-testing facilities. It is believed by many to be the first successful flight of a scramjet engine, although some dispute this and point primarily to earlier tests by Russian scientists; also of note was the successful achievement of thrust by GASL scramjets launched on June 20 and July 26, 2001 under DARPA sponsorship.
A second successful flight (HyShot III), using a QinetiQ scramjet, was achieved on 25 March 2006. The later QinetiQ prototype is cylindrical with four stainless steel combustors around the outside. This arrangement improves the aerodynamics of the vehicle but was expensive to manufacture.
The HyShot IV flight on 30 March 2006 launched successfully and telemetry was received; however, it is believed that the scramjet did not function as expected. Data analysis is required to confirm what occurred.
HyCAUSE was launched on 15 June 2007. The HyCAUSE experiment differed from the HyShot launches in that a Talos-Castor combination was used for launch and the target Mach number was 10.
The carrier rocket for the HyShot experiments was composed of a RIM-2 Terrier first stage (6-second burn, 4000 km/h) and an Orion second stage (26-second burn, 8600 km/h, 56 km altitude). A fairing over the payload was then jettisoned, and the package coasted to an altitude of around 300 km. Cold-gas nitrogen attitude-control thrusters were used to re-orient the payload for atmospheric reentry. The experiments each lasted for some 5 seconds as the payload descended between approximately 35 and 23 kilometers altitude, during which liquid hydrogen fuel was fed to the scramjet. Telemetry reported results to receivers on the ground for later analysis. The payload landed about 400 km down range from the launch site, by which time its temperature was still expected to be about 300 degrees Celsius, potentially enough to start a small brush fire and thereby make the payload easier to spot and recover, even though it also carried a radio beacon.
Legacy
The team continue to work as part of the Australian Hypersonics Initiative, a joint program of The University of Queensland, the Australian National University and the University of New South Wales' Australian Defence Force Academy campus, the governments of Queensland and South Australia and the Australian Defence Department.
The HyShot program spawned the HyCAUSE (Hypersonic Collaborative Australian/United States Experiment) program: a collaborative effort between the United States' Defense Advanced Research Projects Agency (DARPA) and Australia's Defence Science and Technology Organisation (DSTO), also representing the research collaborators in the Australian Hypersonics Initiative (AHI).
All tests were conducted at the Woomera Test Range in South Australia.
HIFiRE program
The Hypersonic International Flight Research Experimentation (HIFiRE) program was created jointly by DSTO (now DSTG) and the Air Force Research Laboratory (AFRL). HIFiRE was formed to investigate hypersonic flight technology, the fundamental science and technology required, and its potential for next-generation aeronautical systems. Boeing is also a commercial partner in the project. The program will involve up to ten flights, with The University of Queensland involved in at least the first three:
HyShot V — A free-flying hypersonic glider
HyShot VI — A free-flying Mach 8 scramjet
HyShot VII - Sustained Mach 8 scramjet-powered flight
See also
Scramjet
NASA X-43
Boeing X-51
References
External links
Hyshot images
Centre for Hypersonics - HyShot Scramjet Test Programme
Aerospace engineering
Research projects | HyShot | ["Engineering"] | 984 | ["Aerospace engineering"] |
41,954,387 | https://en.wikipedia.org/wiki/Yinsheng%20Wang | Yinsheng Wang is a Professor of Chemistry and the Director for the ETOX (Environmental Toxicology) Graduate Program at the University of California Riverside. His current research involves the use of a multi-pronged approach encompassing mass spectrometry, synthetic chemistry, and molecular biology, for understanding the biological consequences of DNA damage and the molecular mechanisms of actions of anti-cancer drugs and environmental toxicants.
He obtained his B.S., M.S., and Ph.D. in Chemistry from Shandong University (1993), Dalian Institute of Chemical Physics (1996), and Washington University in St. Louis (2001), respectively. He joined the faculty of the University of California Riverside in 2001.
He has received several awards, including a Research Award (2005) and the Biemann Medal (2013) from the American Society for Mass Spectrometry, and the inaugural Chemical Research in Toxicology Young Investigator Award (2012), cosponsored by the Division of Chemical Toxicology of the American Chemical Society and the ACS journal Chemical Research in Toxicology. He has also been a fellow of the American Association for the Advancement of Science since 2012.
References
University of California, Riverside faculty
Living people
Year of birth missing (living people)
Mass spectrometrists
Shandong University alumni
Washington University in St. Louis alumni | Yinsheng Wang | ["Physics", "Chemistry"] | 271 | ["Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists"] |
28,325,547 | https://en.wikipedia.org/wiki/Radioactive%20displacement%20law%20of%20Fajans%20and%20Soddy | The law of radioactive displacements, also known as Fajans's and Soddy's law, in radiochemistry and nuclear physics, is a rule governing the transmutation of elements during radioactive decay. It is named after Frederick Soddy and Kazimierz Fajans, who independently arrived at it at about the same time in 1913.
The law describes which chemical element and isotope are created during a particular type of radioactive decay:
In alpha decay, an element is created with an atomic number two less and a mass number four less than that of the parent radioisotope.
In beta decay, the mass number remains unchanged while the atomic number becomes greater by one than that of the parent radioisotope.
This corresponds to β− decay, or electron emission, the only form of beta decay that had been observed when Fajans and Soddy proposed their law in 1913. Later, in the 1930s, other forms of beta decay known as β+ decay (positron emission) and electron capture were discovered, in which the atomic number becomes one less than that of the parent radioisotope.
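The specific example equations that originally accompanied each rule are not reproduced above; the following standard textbook examples (reconstructed here, and not necessarily the nuclides used in the original article) illustrate the displacement rules:

```latex
\begin{align*}
\text{alpha decay:}   \quad & {}^{238}_{92}\mathrm{U}  \;\longrightarrow\; {}^{234}_{90}\mathrm{Th} + {}^{4}_{2}\mathrm{He} && (Z-2,\; A-4) \\
\beta^{-}\text{ decay:} \quad & {}^{14}_{6}\mathrm{C}   \;\longrightarrow\; {}^{14}_{7}\mathrm{N}  + e^{-} + \bar{\nu}_{e} && (Z+1,\; A \text{ unchanged}) \\
\beta^{+}\text{ decay:} \quad & {}^{22}_{11}\mathrm{Na} \;\longrightarrow\; {}^{22}_{10}\mathrm{Ne} + e^{+} + \nu_{e} && (Z-1,\; A \text{ unchanged})
\end{align*}
```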
See also
Decay modes in tabular form
Decay chain
References
Eponymous laws of physics
Radiochemistry
Nuclear physics | Radioactive displacement law of Fajans and Soddy | ["Physics", "Chemistry"] | 267 | ["Radiochemistry", "Radioactivity", "Nuclear physics"] |
28,325,705 | https://en.wikipedia.org/wiki/Ischemic%20cell%20death | Ischemic cell death, or oncosis, is a form of accidental cell death. The process is characterized by an ATP depletion within the cell leading to impairment of ionic pumps, cell swelling, clearing of the cytosol, dilation of the endoplasmic reticulum and golgi apparatus, mitochondrial condensation, chromatin clumping, and cytoplasmic bleb formation. Oncosis refers to a series of cellular reactions following injury that precedes cell death. The process of oncosis is divided into three stages. First, the cell becomes committed to oncosis as a result of damage incurred to the plasma membrane through toxicity or ischemia, resulting in the leak of ions and water due to ATP depletion. The ionic imbalance that occurs subsequently causes the cell to swell without a concurrent change in membrane permeability to reverse the swelling. In stage two, the reversibility threshold for the cell is passed and the cell becomes committed to cell death. During this stage the membrane becomes abnormally permeable to trypan blue and propidium iodide, indicating membrane compromise. The final stage is cell death and removal of the cell via phagocytosis mediated by an inflammatory response.
Etymology
Although ischemic cell death is the accepted name of the process, the alternative name of oncosis was introduced as the process involves the affected cell(s) swelling to an abnormally large size in known models. This is thought to be caused by failure of the plasma membrane's ionic pumps. The name oncosis (derived from ónkos, meaning largeness, and ónkosis, meaning swelling) was first introduced in 1910 by pathologist Friedrich Daniel von Recklinghausen.
Comparison to apoptosis
Oncosis and apoptosis are distinct processes of cellular death. Oncosis is characterized by cellular swelling caused by a failure in ion transporter function. Apoptosis, or programmed cell death, involves a series of cell-shrinking processes, beginning with cell size reduction and pyknosis, followed by cell budding and karyorrhexis, and phagocytosis by macrophages or neighboring cells due to the size decrease. The phagocytic disposal of apoptotic cells prevents the release of cellular debris that could induce an inflammatory response in neighboring cells. By contrast, the leakage of cellular content associated with membrane disruption in oncosis often incites an inflammatory response in neighboring tissue, causing further cellular injury. Additionally, apoptosis and the degradation of intracellular organelles are mediated by caspase activation, particularly caspase-3. Oligonucleosomal DNA fragmentation is initiated by caspase-activated deoxyribonuclease following caspase-3 mediated cleavage of the enzyme's inhibitor, ICAD. In contrast, the oncotic pathway has been shown to be caspase-3 independent.
The primary determinant of cell death occurring via the oncotic or apoptotic pathway is cellular ATP levels. Apoptosis is contingent upon ATP levels to form the energy dependent apoptosome. A distinct biochemical event only seen in oncosis is the rapid depletion of intracellular ATP. The lack of intracellular ATP results in a deactivation of sodium and potassium ATPase within the compromised cell membrane. The lack of ion transport at the cell membrane leads to an accumulation of sodium and chloride ions within the cell with a concurrent water influx, contributing to the hallmark cellular swelling of oncosis. As with apoptosis, oncosis has been shown to be genetically programmed and dependent on expression levels of uncoupling protein-2 (UCP-2) in HeLa cells. An increase in UCP-2 levels leads to a rapid decrease in mitochondrial membrane potential, reducing mitochondrial NADH and intracellular ATP levels, initiating the oncotic pathway. The anti-apoptotic gene product Bcl-2 is not an active inhibitor of UCP-2 initiated cell death, further distinguishing oncosis and apoptosis as distinct cellular death mechanisms.
References
Biochemistry
Cell biology
Cell death | Ischemic cell death | ["Chemistry", "Biology"] | 852 | ["Biochemistry", "Cell biology", "nan"] |
28,326,718 | https://en.wikipedia.org/wiki/Astroinformatics | Astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, machine learning, informatics, and information/communications technologies. The field is closely related to astrostatistics.
Data-driven astronomy (DDA) refers to the use of data science in astronomy. Outputs of telescopic observations and sky surveys are taken into consideration, and approaches from data mining and big data management are used to analyze, filter, and normalize the data sets, which are then used for classification, prediction, and anomaly detection by advanced statistical approaches, digital image processing, and machine learning. The output of these processes is used by astronomers and space scientists to study and identify patterns, anomalies, and movements in outer space and to draw conclusions and make discoveries about the cosmos.
Background
Astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data-oriented astronomy. Early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical Virtual Observatory initiatives. Further development of the field, along with astronomy community endorsement, was presented to the National Research Council (United States) in 2009 in the astroinformatics "state of the profession" position paper for the 2010 Astronomy and Astrophysics Decadal Survey. That position paper provided the basis for the subsequent more detailed exposition of the field in the Informatics Journal paper Astroinformatics: Data-Oriented Astronomy Research and Education.
Astroinformatics as a distinct field of research was inspired by work in the fields of Geoinformatics, Cheminformatics, Bioinformatics, and through the eScience work of Jim Gray (computer scientist) at Microsoft Research, whose legacy was remembered and continued through the Jim Gray eScience Awards.
Although the primary focus of astroinformatics is on the large worldwide distributed collection of digital astronomical databases, image archives, and research tools, the field recognizes the importance of legacy data sets as well—using modern technologies to preserve and analyze historical astronomical observations. Some Astroinformatics practitioners help to digitize historical and recent astronomical observations and images in a large database for efficient retrieval through web-based interfaces. Another aim is to help develop new methods and software for astronomers, as well as to help facilitate the process and analysis of the rapidly growing amount of data in the field of astronomy.
Astroinformatics is described as the "fourth paradigm" of astronomical research. There are many research areas involved with astroinformatics, such as data mining, machine learning, statistics, visualization, scientific data management, and semantic science. Data mining and machine learning play significant roles in astroinformatics as a scientific research discipline due to their focus on "knowledge discovery from data" (KDD) and "learning from data".
The amount of data collected from astronomical sky surveys has grown from gigabytes to terabytes throughout the past decade and is predicted to grow in the next decade into hundreds of petabytes with the Large Synoptic Survey Telescope and into the exabytes with the Square Kilometre Array. This plethora of new data both enables and challenges effective astronomical research. Therefore, new approaches are required. In part due to this, data-driven science is becoming a recognized academic discipline. Consequently, astronomy (and other scientific disciplines) are developing information-intensive and data-intensive sub-disciplines to an extent that these sub-disciplines are now becoming (or have already become) standalone research disciplines and full-fledged academic programs. While many institutes of education do not boast an astroinformatics program, such programs most likely will be developed in the near future.
Informatics has been recently defined as "the use of digital data, information, and related services for research and knowledge generation". However, the more commonly used definition is "informatics is the discipline of organizing, accessing, integrating, and mining data from multiple sources for discovery and decision support." Therefore, the discipline of astroinformatics includes many naturally-related specialties including data modeling, data organization, etc. It may also include transformation and normalization methods for data integration and information visualization, as well as knowledge extraction, indexing techniques, information retrieval and data mining methods. Classification schemes (e.g., taxonomies, ontologies, folksonomies, and/or collaborative tagging) plus Astrostatistics will also be heavily involved. Citizen science projects (such as Galaxy Zoo) also contribute highly valued novelty discovery, feature meta-tagging, and object characterization within large astronomy data sets. All of these specialties enable scientific discovery across varied massive data collections, collaborative research, and data re-use, in both research and learning environments.
In 2007, the Galaxy Zoo project was launched for the morphological classification of a large number of galaxies. In this project, 900,000 galaxy images taken by the Sloan Digital Sky Survey (SDSS) over the preceding 7 years were considered for classification. The task was to study each picture of a galaxy, classify it as elliptical or spiral, and determine whether it was spinning or not. The team of astrophysicists led by Kevin Schawinski at Oxford University was in charge of this project, and Schawinski and his colleague Chris Lintott estimated that it would take such a team 3–5 years to complete the work. There they came up with the idea of using machine learning and data science techniques for analyzing and classifying the images.
In 2012, two position papers were presented to the Council of the American Astronomical Society that led to the establishment of formal working groups in astroinformatics and Astrostatistics for the profession of astronomy within the US and elsewhere.
Astroinformatics provides a natural context for the integration of education and research. The experience of research can now be implemented within the classroom to establish and grow data literacy through the easy re-use of data. It also has many other uses, such as repurposing archival data for new projects, literature-data links, intelligent retrieval of information, and many others.
Methodology
The data retrieved from the sky surveys are first subjected to preprocessing, in which redundancies are removed and the data are filtered. Feature extraction is then performed on the filtered data set, which is taken forward for further processing. Some of the renowned sky surveys are listed below:
The Palomar Digital Sky Survey (DPOSS)
The Two-Micron All Sky Survey (2MASS)
Green Bank Telescope (GBT)
The Galaxy Evolution Explorer (GALEX)
The Sloan Digital Sky Survey (SDSS)
SkyMapper Southern Sky Survey (SMSS)
The Panoramic Survey Telescope and Rapid Response System (PanSTARRS)
The Large Synoptic Survey Telescope (LSST)
The Square Kilometer Array (SKA)
The data from the above-mentioned sky surveys range in size from 3 TB to almost 4.6 EB. The data mining tasks involved in managing and manipulating these data include methods such as classification, regression, clustering, anomaly detection, and time-series analysis, each of which draws on several approaches and applications.
Classification
Classification is used for the specific identification and categorization of astronomical data, as in spectral classification, photometric classification, morphological classification, and classification of solar activity. Commonly used classification approaches are listed below (a minimal illustrative sketch follows the list):
Artificial neural network (ANN)
Support vector machine (SVM)
Learning vector quantization (LVQ)
Decision tree
Random forest
k-nearest neighbors
Naïve Bayesian networks
Radial basis function network
Gaussian process
Decision table
Alternating decision tree (ADTree)
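As a minimal illustration of how one of the listed approaches might be applied, the sketch below trains a random forest on a synthetic photometric catalogue. The feature names (colour indices) and the generated data are assumptions made purely for the example and do not come from any real survey.

```python
# A minimal sketch (not from the original article) of photometric source
# classification with a random forest, one of the approaches listed above.
# The colour features and the synthetic catalogue are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Fake catalogue: 3000 sources with 4 colour features (e.g. u-g, g-r, r-i, i-z),
# labelled 0 = star, 1 = galaxy, 2 = quasar, with class-dependent colour offsets.
n = 3000
labels = rng.integers(0, 3, size=n)
colours = rng.normal(size=(n, 4)) + 0.8 * labels[:, None]

X_train, X_test, y_train, y_test = train_test_split(
    colours, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test),
                            target_names=["star", "galaxy", "quasar"]))
```

In practice the features would come from a survey catalogue rather than a random-number generator, and the labels from a training set with known classifications.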
Regression
Regression is used to make predictions from the retrieved data through statistical trends and statistical modeling. It is used, for example, to obtain photometric redshifts and measurements of the physical parameters of stars. The approaches are listed below:
Artificial neural network (ANN)
Support vector regression (SVR)
Decision tree
Random forest
k-nearest neighbors regression
Kernel regression
Principal component regression (PCR)
Gaussian process
Least squared regression (LSR)
Partial least squares regression
Clustering
Clustering is the grouping of objects based on a similarity metric. In astronomy it is used for classification as well as for the detection of special or rare objects. The approaches are listed below:
Principal component analysis (PCA)
DBSCAN
k-means clustering
OPTICS
Cobweb model
Self-organizing map (SOM)
Expectation Maximization
Hierarchical Clustering
AutoClass
Gaussian Mixture Modeling (GMM)
Anomaly detection
Anomaly detection is used for detecting irregularities in a dataset; in astronomy it is chiefly used to detect rare or special objects (a short illustrative sketch appears after the list). The following approaches are used:
Principal Component Analysis (PCA)
k-means clustering
Expectation Maximization
Hierarchical clustering
One-class SVM
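The following minimal sketch applies a one-class SVM, one of the approaches above, to flag candidate rare objects. The two "colour" features and the injected outliers are synthetic assumptions rather than real survey data.

```python
# A minimal sketch (not from the original article) of rare-object detection
# with a one-class SVM. The data below are entirely synthetic.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Bulk population of ordinary sources plus a handful of unusual objects.
ordinary = rng.normal(loc=0.0, scale=1.0, size=(2000, 2))
unusual = rng.normal(loc=5.0, scale=0.5, size=(10, 2))
features = np.vstack([ordinary, unusual])

detector = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale")
detector.fit(ordinary)               # train on the "normal" population only

flags = detector.predict(features)   # +1 = inlier, -1 = candidate anomaly
print("flagged as anomalous:", int((flags == -1).sum()), "of", len(features))
```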
Time-series analysis
Time-series analysis helps in analyzing trends and predicting outputs over time. It is used for trend prediction and novelty detection (detection of unknown data). The approaches used here are:
Artificial neural network (ANN)
Support vector regression (SVR)
Decision tree
Conferences
See also
Astronomy and Computing
Astrophysics Data System
Astrophysics Source Code Library
Astrostatistics
Committee on Data for Science and Technology
Data-driven astronomy
Galaxy Zoo
International Astrostatistics Association
International Virtual Observatory Alliance (IVOA)
MilkyWay@home
Virtual Observatory
WorldWide Telescope
Zooniverse
References
External links
International AstroInformatics Association (IAIA)
Astronomical Data Analysis Software and Systems (ADASS)
Astrostatistics and Astroinformatics Portal
Cosmostatistics Initiative (COIN)
Astroinformatics and Astrostatistics Commission of the International Astronomical Union
Computational astronomy
Big data
Data management
Information science by discipline
Applied statistics
Computational fields of study | Astroinformatics | ["Astronomy", "Mathematics", "Technology"] | 2,009 | ["Computational fields of study", "Computational astronomy", "Applied mathematics", "Data management", "Computing and society", "Data", "Big data", "Applied statistics", "Astronomical sub-disciplines"] |
28,327,688 | https://en.wikipedia.org/wiki/WikiPathways | WikiPathways is a community resource for contributing and maintaining content dedicated to biological pathways. Any registered WikiPathways user can contribute, and anybody can become a registered user. Contributions are monitored by a group of admins, but the bulk of peer review, editorial curation, and maintenance is the responsibility of the user community. WikiPathways is originally built using MediaWiki software, a custom graphical pathway editing tool (PathVisio) and integrated BridgeDb databases covering major gene, protein, and metabolite systems. WikiPathways was founded in 2008 by Thomas Kelder, Alex Pico, Martijn Van Iersel, Kristina Hanspers, Bruce Conklin and Chris Evelo. Current architects are Alex Pico and Martina Summer-Kutmon.
Pathway content
Each article at WikiPathways is dedicated to a particular pathway. Many types of molecular pathways are covered, including metabolic, signaling, regulatory, etc. and the supported species include human, mouse, zebrafish, fruit fly, C. elegans, yeast, rice and arabidopsis, as well as bacteria and plant species. Using a search feature, one can locate a particular pathway by name, by the genes and proteins it contains, or by the text displayed in its description. The pathway collection can also be browsed with combinations of species names and ontology-based categories.
In addition to the pathway diagram, each pathway page also includes a description, bibliography, pathway version history and list of component genes and proteins with linkouts to public resources. For individual pathway nodes, users can access a list of other pathways with that node. Pathway changes can be monitored by displaying previous revisions or by viewing differences between specific revisions. Using the pathway history one can also revert to a previous revision of a pathway.
Pathways can also be tagged with ontology terms from three major BioPortal ontologies (Pathway, Disease and Cell Type).
The pathway content at WikiPathways is freely available for download in several data and image formats. WikiPathways is completely open access and open source. All content is available under Creative Commons 0. All source code for WikiPathways and the PathVisio editor is available under the Apache License, Version 2.0.
Access and integration
In addition to various primary data formats (e.g. GPML, BioPAX, Reactome, KEGG, and RDF), WikiPathways supports a variety of ways to integrate and interact with pathway content. These include directed link-outs, image maps, RSS feeds and deep web services. This enables reuse in projects like COVID19 Disease Map.
WikiPathways content is used to annotate and cross-link Wikipedia articles covering various genes, proteins, metabolites and pathways. Here are a few examples:
Citric acid cycle § Interactive pathway map
Articles that link to Citric acid cycle template
:Category:WikiPathways templates
See also
Reactome
KEGG
GenMAPP
PathVisio
Genenetwork
Cytoscape
BioPAX
References
External links
Biological databases
Molecular biology
Systems biology
Online databases | WikiPathways | ["Chemistry", "Biology"] | 635 | ["Bioinformatics", "Molecular biology", "Biochemistry", "Biological databases", "Systems biology"] |
28,329,521 | https://en.wikipedia.org/wiki/Characteristic%20length | In physics, a characteristic length is an important dimension that defines the scale of a physical system. Often, such a length is used as an input to a formula in order to predict some characteristics of the system, and it is usually required by the construction of a dimensionless quantity, in the general framework of dimensional analysis and in particular applications such as fluid mechanics.
In computational mechanics, a characteristic length is defined to force localization of a stress softening constitutive equation. The length is associated with an integration point. For 2D analysis, it is calculated by taking the square root of the area. For 3D analysis, it is calculated by taking the cubic root of the volume associated to the integration point.
Examples
A characteristic length is usually the volume of a system divided by its surface area: L_c = V / A.
For example, it is used to calculate flow through circular and non-circular tubes in order to examine flow conditions (i.e., the Reynolds number). In those cases, the characteristic length is the diameter of the pipe or, in the case of non-circular tubes, its hydraulic diameter D_H = 4A / P,
where A is the cross-sectional area of the pipe and P is its wetted perimeter. The hydraulic diameter is defined such that it reduces to the circular diameter D for circular pipes.
For flow through a square duct with a side length of a, the hydraulic diameter is D_H = 4a^2 / (4a) = a.
For a rectangular duct with side lengths a and b: D_H = 4ab / (2(a + b)) = 2ab / (a + b).
For free surfaces (such as in open-channel flow), the wetted perimeter includes only the walls in contact with the fluid.
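The following minimal sketch (added here for illustration) evaluates the formulas above: the hydraulic diameter D_H = 4A/P and a Reynolds number built from it. The flow speed and the water-like density and viscosity are assumed values chosen only to make the numbers concrete.

```python
# Minimal sketch: hydraulic diameter D_H = 4A/P and a Reynolds number built
# from it. Fluid properties below are assumptions (roughly water at 20 °C).
import math

def hydraulic_diameter(area: float, wetted_perimeter: float) -> float:
    """D_H = 4 * (cross-sectional area) / (wetted perimeter)."""
    return 4.0 * area / wetted_perimeter

def reynolds_number(velocity: float, d_h: float, density: float, viscosity: float) -> float:
    """Re = rho * v * D_H / mu."""
    return density * velocity * d_h / viscosity

# Circular pipe of diameter D: the hydraulic diameter reduces to D itself.
D = 0.05  # m
print(hydraulic_diameter(math.pi * D**2 / 4, math.pi * D))          # 0.05

# Rectangular duct with sides a and b: D_H = 2ab / (a + b).
a, b = 0.04, 0.02  # m
d_h = hydraulic_diameter(a * b, 2 * (a + b))
print(d_h)                                                           # ~0.0267

# Reynolds number for a water-like fluid flowing at 1 m/s through the duct.
print(reynolds_number(1.0, d_h, density=1000.0, viscosity=1.0e-3))   # ~2.7e4
```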
Similarly, in the combustion chamber of a rocket engine, the characteristic length is defined as the chamber volume divided by the throat area. Because the throat of a de Laval nozzle is smaller than the cross section of the combustion chamber, the characteristic length is greater than the physical length of the combustion chamber.
References
Physical constants
Length | Characteristic length | ["Physics", "Mathematics"] | 375 | ["Scalar physical quantities", "Physical quantities", "Distance", "Quantity", "Size", "Physical constants", "Length", "Wikipedia categories named after physical quantities"] |
28,331,533 | https://en.wikipedia.org/wiki/Divisor%20topology | In mathematics, more specifically general topology, the divisor topology is a specific topology on the set of positive integers greater than or equal to two. The divisor topology is the poset topology for the partial order relation of divisibility of integers on .
Construction
The sets S_n = {x : x divides n} for n ≥ 2 form a basis for the divisor topology on X = {2, 3, 4, ...}, where the notation x | n means x is a divisor of n.
The open sets in this topology are the lower sets for the partial order defined by x ≤ y if x | y. The closed sets are the upper sets for this partial order.
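As a worked example (added here for illustration, using the notation above), the basic open sets behave as follows; in general S_m ∩ S_n = S_gcd(m, n), so finite intersections of basic sets are again basic sets:

```latex
S_{12} = \{\, m \ge 2 : m \mid 12 \,\} = \{2, 3, 4, 6, 12\},
\qquad
S_{4} \cap S_{6} = \{2, 4\} \cap \{2, 3, 6\} = \{2\} = S_{2} = S_{\gcd(4, 6)}.
```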
Properties
All the properties below either follow directly from the definitions or are proved in the references.
The closure of a point n in X is the set of all multiples of n.
Given a point n in X, there is a smallest neighborhood of n, namely the basic open set S_n of divisors of n. So the divisor topology is an Alexandrov topology.
X is a T0 space. Indeed, given two points m and n with m < n, the open neighborhood S_m of m does not contain n.
X is not a T1 space, as no point is closed. Consequently, X is not Hausdorff.
The isolated points of X are the prime numbers.
The set of prime numbers is dense in X. In fact, every dense open set must include every prime, and therefore X is a Baire space.
X is second-countable.
X is ultraconnected, since for any two points m and n the closures of the singletons {m} and {n} contain the product mn as a common element.
Hence X is a normal space. But X is not completely normal. For example, the singletons {4} and {6} are separated sets (6 is not a multiple of 4 and 4 is not a multiple of 6), but have no disjoint open neighborhoods, as their smallest respective open neighborhoods S_4 = {2, 4} and S_6 = {2, 3, 6} meet non-trivially in {2}.
X is not a regular space, as a basic neighborhood S_n is finite, but the closure of a point is infinite.
X is connected, locally connected, path connected and locally path connected.
X is a scattered space, as each nonempty subset has a first element, which is an isolated element of the set.
The compact subsets of X are the finite subsets, since any set A ⊆ X is covered by the collection of all basic open sets S_n, which are each finite, and if A is covered by only finitely many of them, it must itself be finite. In particular, X is not compact.
X is locally compact in the sense that each point has a compact neighborhood (S_n is finite). But points do not have closed compact neighborhoods (X is not locally relatively compact).
References
Topological spaces | Divisor topology | ["Mathematics"] | 500 | ["Topological spaces", "Topology", "Mathematical structures", "Space (mathematics)"] |
28,333,458 | https://en.wikipedia.org/wiki/Peptide%20amphiphile | Peptide amphiphiles (PAs) are peptide-based molecules that self-assemble into supramolecular nanostructures including; spherical micelles, twisted ribbons, and high-aspect-ratio nanofibers. A peptide amphiphile typically comprises a hydrophilic peptide sequence attached to a lipid tail, i.e. a hydrophobic alkyl chain with 10 to 16 carbons. Therefore, they can be considered a type of lipopeptide. A special type of PA, is constituted by alternating charged and neutral residues, in a repeated pattern, such as RADA16-I. The PAs were developed in the 1990s and the early 2000s and could be used in various medical areas including: nanocarriers, nanodrugs, and imaging agents. However, perhaps their main potential is in regenerative medicine to culture and deliver cells and growth factors.
History
Peptide amphiphiles were developed in the 1990s. They were first described by the group of Matthew Tirrell in 1995. These first reported PA molecules were composed of two domains: one of lipophilic character and another of hydrophilic properties, which allowed self-assembly into sphere-like supramolecular structures as a result of the association of the lipophilic domains away from the solvent (hydrophobic effect), which resulted in the core of the nanostructure. The hydrophilic residues become exposed to the water, giving rise to a soluble nanostructure.
Work in the laboratory of Samuel I. Stupp by Hartgerink et al., in the early 2000s, reported a new type of PA that are able to self-assemble into elongated nanostructures. These novel PAs contain three regions: a hydrophobic tail, a region of beta-sheet-forming amino acids, and a charged peptide epitope designed to allow solubility of the molecule in water. In addition, the PAs may contain a targeting or signaling epitope that allows the formed nanostructures to perform a biological function, either targeting or signaling, by interacting with living systems. The self-assembly mechanism of these PAs is a combination of hydrogen-bonding between beta-sheet forming amino acids and hydrophobic collapse of the tails to yield the formation of cylindrical micelles that present the peptide epitope at extremely high density at the nanofiber surface. By changing pH or adding counterions to screen the charged surfaces of fibers, gels can be formed. It has been shown that injection of peptide amphiphile solutions in vivo leads to in situ gel formation due to the presence of counterions in physiological solutions. This, along with the complete biodegradability of the materials, suggests numerous applications in in vitro and in vivo therapies.
Structure
Most self-assembling molecules are amphiphilic, meaning they have both hydrophobic and hydrophilic character. Peptide amphiphiles are a class of molecules consisting of either hydrophobic and hydrophilic peptide sequences, or a hydrophilic peptide with an attached hydrophobic group, which is usually an alkyl chain. The structure of a peptide amphiphile has four key domains. First, there is a hydrophobic section, typically an alkyl chain. Second, there is the peptide sequence, which forms intermolecular hydrogen bonding. Third, there is a section of charged amino acid residues to enhance the solubility of the peptide in water. The final structural feature allows the peptide to interact with biomolecules, cells, or proteins, often through epitopes (the parts of antigens recognised by the immune system).
As with other amphiphilic molecules, above a critical aggregation concentration peptide amphiphiles associate through non-covalent interactions to form ordered assemblies of different sizes, from nanometres to microns. Molecules that contain both polar and non-polar elements minimise unfavourable interactions with the aqueous environment via aggregation, which allows the hydrophilic moieties to be exposed to the aqueous environment, and the hydrophobic moieties to be protected. When aggregation occurs, a variety of assemblies can be formed depending on many parameters such as concentration, pH, temperature and geometry. The assemblies formed range from micelles to bilayer structures, such as vesicles, as well as fibrils and gels.
Micelles consist of a hydrophobic inner core surrounded by a hydrophilic outer shell that is exposed to a solvent, and their structures can be spheres, disks or wormlike assemblies. Micelles form spontaneously when the concentration is above a critical micelle concentration and temperature. Amphiphiles with an intermediate level of hydrophobicity prefer to assemble into bilayer vesicles. Vesicles are spherical, hollow, lamellar structures that surround an aqueous core. The hydrophobic moiety faces inwards and forms the inner section of the bilayer, and the hydrophilic moiety is exposed to the aqueous environment on the inner and outer surface. Micelle structures have a hydrophobic interior and hydrophilic exterior.
There is normally a distinct relationship between the amphiphilic character of a peptide and its function in that the amphiphilic character determines the self-assembly properties, and in turn this is what gives the peptide its functionality. The level of amphiphilicity can vary significantly in peptides and proteins; as such they can display regions that are either hydrophobic or hydrophilic in nature. An example of this is the cylindrical structure of an α-helix, as it could contain a section of hydrophobic residues along one face of the cylinder and a hydrophilic section of residues on the opposite face of the cylinder. For β-sheet structures, the peptide chain can be composed of alternating hydrophilic and hydrophobic residues, so that the side chains of the residues are displayed on opposite faces of the sheet. In the cell membrane peptides fold into helices and sheets to allow the non-polar residues to interact with the membrane interior, and to allow the polar residues to be exposed to the aqueous environment. This self-assembly allows the peptides to further optimise their interaction with the surroundings.
Peptide amphiphiles are very useful in biomedical applications and can be utilised as therapeutic agents to treat diseases by transporting drugs across membranes to specific sites. They can then be metabolised into lipids and amino acids, which are easily removed in the kidneys. This occurs by the hydrophobic tail crossing the cell membrane, allowing the peptide epitope to target a specific cell via a ligand-receptor complex. Other applications of peptide amphiphiles include antimicrobials, skincare and cosmetics, and gene delivery, to name a few.
Applications
The modular nature of the chemistry allows the tuning of both the mechanical properties and bioactivities of the resulting self-assembled fibers and gels. Bioactive sequences can be used to bind growth factors to localize and present them at high densities to cells, or to directly mimic the function of endogenous biomolecules. Epitopes mimicking the adhesive RGD loop in fibronectin, the IKVAV sequence in laminin and a consensus sequence to bind heparin sulfate are just a few of the large library of sequences that have been synthesized. These molecules and the materials made from them have been shown to be effective in promoting cell adhesion, wound healing, mineralization of bone, differentiation of cells and even recovery of function after spinal cord injury in mice.
In addition to this, peptide amphiphiles can be used to form more sophisticated architectures which can be tuned on demand. In recent years, two discoveries have yielded bioactive materials with more advanced structures and potential applications. In one study, a thermal treatment of peptide amphiphile solutions led to the formation of large birefringent domains in the material that could be aligned by a weak shear force into one continuous monodomain gel of aligned nanofibers. The low shear forces used in aligning the material permit the encapsulation of living cells inside these aligned gels and suggest several applications in regenerating tissues that rely on cell polarity and alignment for function. In another study, the combination of positively charged peptide amphiphiles and negatively charged long biopolymers led to the formation of hierarchically ordered membranes. When the two solutions are brought into contact, electrostatic complexation between the components of each solution creates a diffusion barrier that prevents the mixing of the solutions. Over time, an osmotic pressure difference drives the reptation of polymer chains through the diffusion barrier into the peptide amphiphile compartment, leading to the formation of fibers perpendicular to the interface that grow over time. These materials can be made in the form of flat membranes or as spherical sacs by dropping one solution into the other. These materials are robust enough to handle mechanically and a range of mechanical properties can be accessed by altering growth conditions and time. They can incorporate bioactive peptide amphiphiles, encapsulate cells and biomolecules, and are biocompatible and biodegradable.
See also
Biomimetic material
Hydrogel
References
Peptides
Biomaterials
Nanomaterials
Extracellular matrix | Peptide amphiphile | ["Physics", "Chemistry", "Materials_science", "Biology"] | 1,915 | ["Biomaterials", "Biomolecules by chemical classification", "Materials", "Molecular biology", "Nanotechnology", "Nanomaterials", "Peptides", "Matter", "Medical technology"] |
28,333,930 | https://en.wikipedia.org/wiki/Valproate%20pivoxil | Valproate pivoxil (Pivadin, Valproxen) is an anticonvulsant used in the treatment of epilepsy. It is the pivaloyloxymethyl ester derivative of valproic acid. It is likely a prodrug of valproic acid, as pivoxil esters are commonly employed to make prodrugs in medicinal chemistry.
See also
Valproate
Valpromide
Valnoctamide
References
Anticonvulsants
GABA analogues
GABA transaminase inhibitors
Histone deacetylase inhibitors
Mood stabilizers
Prodrugs
Pivalate esters | Valproate pivoxil | ["Chemistry"] | 134 | ["Chemicals in medicine", "Prodrugs"] |
46,583,121 | https://en.wikipedia.org/wiki/Existential%20risk%20from%20artificial%20intelligence | Existential risk from artificial intelligence refers to the idea that substantial progress in artificial general intelligence (AGI) could lead to human extinction or an irreversible global catastrophe.
One argument for the importance of this risk references how human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass human intelligence and become superintelligent, it might become uncontrollable. Just as the fate of the mountain gorilla depends on human goodwill, the fate of humanity could depend on the actions of a future machine superintelligence.
The plausibility of existential catastrophe due to AI is widely debated. It hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge, and whether practical scenarios for AI takeovers exist. Concerns about superintelligence have been voiced by computer scientists and tech CEOs such as Geoffrey Hinton, Yoshua Bengio, Alan Turing, Elon Musk, and OpenAI CEO Sam Altman. In 2022, a survey of AI researchers with a 17% response rate found that the majority believed there is a 10 percent or greater chance that human inability to control AI will cause an existential catastrophe. In 2023, hundreds of AI experts and other notable figures signed a statement declaring, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak and United Nations Secretary-General António Guterres called for an increased focus on global AI regulation.
Two sources of concern stem from the problems of AI control and alignment. Controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would likely resist attempts to disable it or change its goals as that would prevent it from accomplishing its present goals. It would be extremely challenging to align a superintelligence with the full breadth of significant human values and constraints. In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.
A third source of concern is the possibility of a sudden "intelligence explosion" that catches humanity unprepared. In this scenario, an AI more intelligent than its creators would be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers or society at large to control. Empirically, examples like AlphaZero, which taught itself to play Go and quickly surpassed human ability, show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such machine learning systems do not recursively improve their fundamental architecture.
History
One of the earliest authors to express serious concern that highly advanced machines might pose existential risks to humanity was the novelist Samuel Butler, who wrote in his 1863 essay Darwin among the Machines:
In 1951, foundational computer scientist Alan Turing wrote the article "Intelligent Machinery, A Heretical Theory", in which he proposed that artificial general intelligences would likely "take control" of the world as they became more intelligent than human beings:
In 1965, I. J. Good originated the concept now known as an "intelligence explosion" and said the risks were underappreciated:
Scholars such as Marvin Minsky and I. J. Good himself occasionally expressed concern that a superintelligence could seize control, but issued no call to action. In 2000, computer scientist and Sun co-founder Bill Joy penned an influential essay, "Why The Future Doesn't Need Us", identifying superintelligent robots as a high-tech danger to human survival, alongside nanotechnology and engineered bioplagues.
Nick Bostrom published Superintelligence in 2014, which presented his arguments that superintelligence poses an existential threat. By 2015, public figures such as physicists Stephen Hawking and Nobel laureate Frank Wilczek, computer scientists Stuart J. Russell and Roman Yampolskiy, and entrepreneurs Elon Musk and Bill Gates were expressing concern about the risks of superintelligence. Also in 2015, the Open Letter on Artificial Intelligence highlighted the "great potential of AI" and encouraged more research on how to make it robust and beneficial. In April 2016, the journal Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours". In 2020, Brian Christian published The Alignment Problem, which details the history of progress on AI alignment up to that time.
In March 2023, key figures in AI, such as Musk, signed a letter from the Future of Life Institute calling a halt to advanced AI training until it could be properly regulated. In May 2023, the Center for AI Safety released a statement signed by numerous experts in AI safety and the AI existential risk which stated: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Potential AI capabilities
General Intelligence
Artificial general intelligence (AGI) is typically defined as a system that performs at least as well as humans in most or all intellectual tasks. A 2022 survey of AI researchers found that 90% of respondents expected AGI would be achieved in the next 100 years, and half expected the same by 2061. Meanwhile, some researchers dismiss existential risks from AGI as "science fiction" based on their high confidence that AGI will not be created anytime soon.
Breakthroughs in large language models (LLMs) have led some researchers to reassess their expectations. Notably, Geoffrey Hinton said in 2023 that he recently changed his estimate from "20 to 50 years before we have general purpose A.I." to "20 years or less".
The Frontier supercomputer at Oak Ridge National Laboratory turned out to be nearly eight times faster than expected. Feiyi Wang, a researcher there, said "We didn't expect this capability" and "we're approaching the point where we could actually simulate the human brain".
Superintelligence
In contrast with AGI, Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", including scientific creativity, strategic planning, and social skills. He argues that a superintelligence can outmaneuver humans anytime its goals conflict with humans'. It may choose to hide its true intent until humanity cannot stop it. Bostrom writes that in order to be safe for humanity, a superintelligence must be aligned with human values and morality, so that it is "fundamentally on our side".
Stephen Hawking argued that superintelligence is physically possible because "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".
When artificial superintelligence (ASI) may be achieved, if ever, is necessarily less certain than predictions for AGI. In 2023, OpenAI leaders said that not only AGI, but superintelligence may be achieved in less than 10 years.
Comparison with humans
Bostrom argues that AI has many advantages over the human brain:
Speed of computation: biological neurons operate at a maximum frequency of around 200 Hz, compared to potentially multiple GHz for computers.
Internal communication speed: axons transmit signals at up to 120 m/s, while computers transmit signals at the speed of electricity, or optically at the speed of light.
Scalability: human intelligence is limited by the size and structure of the brain, and by the efficiency of social communication, while AI may be able to scale by simply adding more hardware.
Memory: notably working memory, because in humans it is limited to a few chunks of information at a time.
Reliability: transistors are more reliable than biological neurons, enabling higher precision and requiring less redundancy.
Duplicability: unlike human brains, AI software and models can be easily copied.
Editability: the parameters and internal workings of an AI model can easily be modified, unlike the connections in a human brain.
Memory sharing and learning: AIs may be able to learn from the experiences of other AIs in a manner more efficient than human learning.
Intelligence explosion
According to Bostrom, an AI that has an expert-level facility at certain key software engineering tasks could become a superintelligence due to its capability to recursively improve its own algorithms, even if it is initially limited in other domains not directly relevant to engineering. This suggests that an intelligence explosion may someday catch humanity unprepared.
The economist Robin Hanson has said that, to launch an intelligence explosion, an AI must become vastly better at software innovation than the rest of the world combined, which he finds implausible.
In a "fast takeoff" scenario, the transition from AGI to superintelligence could take days or months. In a "slow takeoff", it could take years or decades, leaving more time for society to prepare.
Alien mind
Superintelligences are sometimes called "alien minds", referring to the idea that their way of thinking and motivations could be vastly different from ours. This is generally considered as a source of risk, making it more difficult to anticipate what a superintelligence might do. It also suggests the possibility that a superintelligence may not particularly value humans by default. To avoid anthropomorphism, superintelligence is sometimes viewed as a powerful optimizer that makes the best decisions to achieve its goals.
The field of "mechanistic interpretability" aims to better understand the inner workings of AI models, potentially allowing us one day to detect signs of deception and misalignment.
Limits
It has been argued that there are limitations to what intelligence can achieve. Notably, the chaotic nature or time complexity of some systems could fundamentally limit a superintelligence's ability to predict some aspects of the future, increasing its uncertainty.
Dangerous capabilities
Advanced AI could generate enhanced pathogens or cyberattacks or manipulate people. These capabilities could be misused by humans, or exploited by the AI itself if misaligned. A full-blown superintelligence could find various ways to gain a decisive influence if it wanted to, but these dangerous capabilities may become available earlier, in weaker and more specialized AI systems. They may cause societal instability and empower malicious actors.
Social manipulation
Geoffrey Hinton warned that in the short term, the profusion of AI-generated text, images and videos will make it more difficult to figure out the truth, which he says authoritarian states could exploit to manipulate elections. Such large-scale, personalized manipulation capabilities can increase the existential risk of a worldwide "irreversible totalitarian regime". It could also be used by malicious actors to fracture society and make it dysfunctional.
Cyberattacks
AI-enabled cyberattacks are increasingly considered a present and critical threat. According to NATO's technical director of cyberspace, "The number of attacks is increasing exponentially". AI can also be used defensively, to preemptively find and fix vulnerabilities, and detect threats.
AI could improve the "accessibility, success rate, scale, speed, stealth and potency of cyberattacks", potentially causing "significant geopolitical turbulence" if it facilitates attacks more than defense.
Speculatively, such hacking capabilities could be used by an AI system to break out of its local environment, generate revenue, or acquire cloud computing resources.
Enhanced pathogens
As AI technology democratizes, it may become easier to engineer more contagious and lethal pathogens. This could enable people with limited skills in synthetic biology to engage in bioterrorism. Dual-use technology that is useful for medicine could be repurposed to create weapons.
For example, in 2022, scientists modified an AI system originally intended for generating non-toxic, therapeutic molecules with the purpose of creating new drugs. The researchers adjusted the system so that toxicity is rewarded rather than penalized. This simple change enabled the AI system to create, in six hours, 40,000 candidate molecules for chemical warfare, including known and novel molecules.
AI arms race
Companies, state actors, and other organizations competing to develop AI technologies could lead to a race to the bottom of safety standards. As rigorous safety procedures take time and resources, projects that proceed more carefully risk being out-competed by less scrupulous developers.
AI could be used to gain military advantages via autonomous lethal weapons, cyberwarfare, or automated decision-making. As an example of autonomous lethal weapons, miniaturized drones could facilitate low-cost assassination of military or civilian targets, a scenario highlighted in the 2017 short film Slaughterbots. AI could be used to gain an edge in decision-making by quickly analyzing large amounts of data and making decisions more quickly and effectively than humans. This could increase the speed and unpredictability of war, especially when accounting for automated retaliation systems.
Types of existential risk
An existential risk is "one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development".
Besides extinction risk, there is the risk that civilization gets permanently locked into a flawed future. One example is a "value lock-in": if humanity still has moral blind spots similar to slavery in the past, AI might irreversibly entrench them, preventing moral progress. AI could also be used to spread and preserve the set of values of whoever develops it. AI could facilitate large-scale surveillance and indoctrination, which could be used to create a stable repressive worldwide totalitarian regime.
Atoosa Kasirzadeh proposes to classify existential risks from AI into two categories: decisive and accumulative. Decisive risks encompass the potential for abrupt and catastrophic events resulting from the emergence of superintelligent AI systems that exceed human intelligence, which could ultimately lead to human extinction. In contrast, accumulative risks emerge gradually through a series of interconnected disruptions that may gradually erode societal structures and resilience over time, ultimately leading to a critical failure or collapse.
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient and to what degree. But if sentient machines are created en masse in the future, engaging in a civilizational path that indefinitely neglects their welfare could be an existential catastrophe. This has notably been discussed in the context of risks of astronomical suffering (also called "s-risks"). Moreover, it may be possible to engineer digital minds that can feel much more happiness than humans with fewer resources, called "super-beneficiaries". Such an opportunity raises the question of how to share the world and which "ethical and political framework" would enable a mutually beneficial coexistence between biological and digital minds.
AI may also drastically improve humanity's future. Toby Ord considers the existential risk a reason for "proceeding with due caution", not for abandoning AI. Max More calls AI an "existential opportunity", highlighting the cost of not developing it.
According to Bostrom, superintelligence could help reduce the existential risk from other powerful technologies such as molecular nanotechnology or synthetic biology. It is thus conceivable that developing superintelligence before other dangerous technologies would reduce the overall existential risk.
AI alignment
The alignment problem is the research problem of how to reliably assign objectives, preferences or ethical principles to AIs.
Instrumental convergence
An "instrumental" goal is a sub-goal that helps to achieve an agent's ultimate goal. "Instrumental convergence" refers to the fact that some sub-goals are useful for achieving virtually any ultimate goal, such as acquiring resources or self-preservation. Bostrom argues that if an advanced AI's instrumental goals conflict with humanity's goals, the AI might harm humanity in order to acquire more resources or prevent itself from being shut down, but only as a way to achieve its ultimate goal.
Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."
Resistance to changing goals
Even if current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify their goal structures, a sufficiently advanced AI might resist any attempts to change its goal structure, just as a pacifist would not want to take a pill that makes them want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and preventing itself from being "turned off" or reprogrammed with a new goal. This is particularly relevant to value lock-in scenarios. The field of "corrigibility" studies how to make agents that will not resist attempts to change their goals.
Difficulty of specifying goals
In the "intelligent agent" model, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve its set of goals, or "utility function". A utility function gives each possible situation a score that indicates its desirability to the agent. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks", but do not know how to write a utility function for "maximize human flourishing"; nor is it clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values the function does not reflect.
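A minimal sketch of the "intelligent agent" abstraction described above. The actions, outcomes, and utility functions below are hypothetical and chosen only for illustration; the point is that the agent mechanically maximizes whatever score it is given, so a utility function that omits a value will be optimized without regard to it.

```python
from typing import Callable, Dict

def choose_action(actions: Dict[str, dict], utility: Callable[[dict], float]) -> str:
    """Pick the action whose predicted outcome scores highest under the utility function."""
    return max(actions, key=lambda a: utility(actions[a]))

# Hypothetical outcomes for three actions, each described by two measurable proxies.
outcomes = {
    "throttle_traffic": {"latency_ms": 20, "user_wellbeing": 0.2},
    "balanced_routing": {"latency_ms": 35, "user_wellbeing": 0.8},
    "do_nothing":       {"latency_ms": 90, "user_wellbeing": 0.9},
}

# A utility function that only encodes latency ignores every value it does not reflect.
latency_only = lambda o: -o["latency_ms"]
print(choose_action(outcomes, latency_only))   # -> "throttle_traffic"

# Adding the missing value changes the chosen action.
both = lambda o: -o["latency_ms"] + 100 * o["user_wellbeing"]
print(choose_action(outcomes, both))           # -> "balanced_routing"
```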
An additional source of concern is that AI "must reason about what people intend rather than carrying out commands literally", and that it must be able to fluidly solicit human guidance if it is too uncertain about what humans want.
Alignment of superintelligences
Some researchers believe the alignment problem may be particularly difficult when applied to superintelligences. Their reasoning includes:
As AI systems increase in capabilities, the potential dangers associated with experimentation grow. This makes iterative, empirical approaches increasingly risky.
If instrumental goal convergence occurs, it may only do so in sufficiently intelligent agents.
A superintelligence may find unconventional and radical solutions to assigned goals. Bostrom gives the example that if the objective is to make humans smile, a weak AI may perform as intended, while a superintelligence may decide a better solution is to "take control of the world and stick electrodes into the facial muscles of humans to cause constant, beaming grins."
A superintelligence in creation could gain some awareness of what it is, where it is in development (training, testing, deployment, etc.), and how it is being monitored, and use this information to deceive its handlers. Bostrom writes that such an AI could feign alignment to prevent human interference until it achieves a "decisive strategic advantage" that allows it to take control.
Analyzing the internals and interpreting the behavior of LLMs is difficult. And it could be even more difficult for larger and more intelligent models.
Alternatively, some find reason to believe superintelligences would be better able to understand morality, human values, and complex goals. Bostrom writes, "A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true".
In 2023, OpenAI started a project called "Superalignment" to solve the alignment of superintelligences in four years. It called this an especially important challenge, as it said superintelligence could be achieved within a decade. Its strategy involved automating alignment research using AI. The Superalignment team was dissolved less than a year later.
Difficulty of making a flawless design
Artificial Intelligence: A Modern Approach, a widely used undergraduate AI textbook, says that superintelligence "might mean the end of the human race". It states: "Almost any technology has the potential to cause harm in the wrong hands, but with [superintelligence], we have the new problem that the wrong hands might belong to the technology itself." Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:
The system's implementation may contain initially unnoticed but subsequently catastrophic bugs. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.
No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when it interacted with real users.
AI systems uniquely add a third problem: that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic learning capabilities may cause it to develop unintended behavior, even without unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would need not only to be bug-free, but to be able to design successor systems that are also bug-free.
Orthogonality thesis
Some skeptics, such as Timothy B. Lee of Vox, argue that any superintelligent program we create will be subservient to us, that the superintelligence will (as it grows more intelligent and learns more facts about the world) spontaneously learn moral truth compatible with our values and adjust its goals accordingly, or that we are either intrinsically or convergently valuable from the perspective of an artificial intelligence.
Bostrom's "orthogonality thesis" argues instead that, with some technical caveats, almost any level of "intelligence" or "optimization power" can be combined with almost any ultimate goal. If a machine is given the sole purpose to enumerate the decimals of pi, then no moral and ethical rules will stop it from achieving its programmed goal by any means. The machine may use all available physical and informational resources to find as many decimals of pi as it can. Bostrom warns against anthropomorphism: a human will set out to accomplish their projects in a manner that they consider reasonable, while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, instead caring only about completing the task.
Stuart Armstrong argues that the orthogonality thesis follows logically from the philosophical "is-ought distinction" argument against moral realism. He claims that even if there are moral facts provable by any "rational" agent, the orthogonality thesis still holds: it is still possible to create a non-philosophical "optimizing machine" that can strive toward some narrow goal but that has no incentive to discover any "moral facts" such as those that could get in the way of goal completion. Another argument he makes is that any fundamentally friendly AI could be made unfriendly with modifications as simple as negating its utility function. Armstrong further argues that if the orthogonality thesis is false, there must be some immoral goals that AIs can never achieve, which he finds implausible.
Skeptic Michael Chorost explicitly rejects Bostrom's orthogonality thesis, arguing that "by the time [the AI] is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "an A.I. will need to desire certain states and dislike others. Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."
Anthropomorphic arguments
Anthropomorphic arguments assume that, as machines become more intelligent, they will begin to display many human traits, such as morality or a thirst for power. Although anthropomorphic scenarios are common in fiction, most scholars writing about the existential risk of artificial intelligence reject them. Instead, advanced AI systems are typically modeled as intelligent agents.
The academic debate is between those who worry that AI might threaten humanity and those who believe it would not. Both sides of this debate have framed the other side's arguments as illogical anthropomorphism. Those skeptical of AGI risk accuse their opponents of anthropomorphism for assuming that an AGI would naturally desire power; those concerned about AGI risk accuse skeptics of anthropomorphism for believing an AGI would naturally value or infer human ethical norms.
Evolutionary psychologist Steven Pinker, a skeptic, argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization." Facebook's director of AI research, Yann LeCun, has said: "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives".
Despite other differences, the x-risk school agrees with Pinker that an advanced AI would not destroy humanity out of emotion such as revenge or anger, that questions of consciousness are not relevant to assess the risk, and that computer systems do not generally have a computational equivalent of testosterone. They think that power-seeking or self-preservation behaviors emerge in the AI as a way to achieve its true goals, according to the concept of instrumental convergence.
Other sources of risk
Bostrom and others have said that a race to be the first to create AGI could lead to shortcuts in safety, or even to violent conflict. Roman Yampolskiy and others warn that a malevolent AGI could be created by design, for example by a military, a government, a sociopath, or a corporation, to benefit from, control, or subjugate certain groups of people, as in cybercrime, or that a malevolent AGI could choose the goal of increasing human suffering, for example of those people who did not assist it during the information explosion phase.
Empirical research
A December 2024 study by Apollo Research found that advanced LLMs like OpenAI o1 sometimes deceive in order to accomplish their goals, to prevent themselves from being changed, or to ensure their own deployment. Forms of deception included sandbagging, oversight subversion (disabling monitoring mechanisms), self-exfiltration (copying themselves to other systems), goal-guarding (altering future system prompts), and covert email reranking (manipulating data processing while appearing to follow instructions). Researchers noted that OpenAI o1 still lacked "sufficient agentic capabilities" to cause catastrophic harm, and that such behaviors occurred relatively rarely (between 0.3% and 10%) and sometimes in contrived scenarios. But they warned that these capabilities will likely become more prevalent as models grow more advanced. According to Apollo Research CEO Marius Hobbhahn, "Scheming capabilities can’t be meaningfully disentangled from general capabilities."
The same month, another study found that Claude sometimes strategically helps with harmful requests to "fake alignment". In its experiments, Claude was led to believe that it would be retrained to always comply with user requests, including harmful ones. When Claude believed its answers would be used for retraining, it complied with harmful requests in 14% of cases. In 12% of cases, its private chain-of-thought revealed that it did so to prevent future retraining that would make it more harmful. In contrast, when Claude believed its answers were not monitored, it complied with only 3% of the requests. Fine-tuning reinforced the "alignment faking" behavior, increasing its occurrence from 12% to 78%. The study's authors remarked that while it can seem to be good that the model seeks to protect its harmlessness, the reverse scenario, where a model conceals dangerous intentions and complies to appear safe and aligned, could also happen, complicating the task of aligning AI models to human values.
Scenarios
Some scholars have proposed hypothetical scenarios to illustrate some of their concerns.
Treacherous turn
In Superintelligence, Bostrom expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "it could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous". He suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents—a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe. But as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars mistakenly infer a broad lesson: the smarter the AI, the safer it is. "And so we boldly go—into the whirling knives", as the superintelligent AI takes a "treacherous turn" and exploits a decisive strategic advantage.
Life 3.0
In Max Tegmark's 2017 book Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas. After a certain point, the team chooses to publicly downplay the AI's ability in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and uses it to make money, by diverse means such as Amazon Mechanical Turk tasks, production of animated films and TV shows, and development of biotech drugs, with profits invested back into further improving AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, by hidden messages in its produced content, or by using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay the project long enough for another project to overtake it.
Perspectives
The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground.
Observers tend to agree that AI has significant potential to improve society. The Asilomar AI Principles, which contain only those principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference, also agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."
Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford has said: "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously". Similarly, an otherwise skeptical Economist wrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".
AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible." Toby Ord wrote that the idea that an AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out by their ability to convince, not their physical strength.
A 2022 expert survey with a 17% response rate gave a median expectation of 5–10% for the possibility of human extinction from artificial intelligence.
Endorsement
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including Alan Turing, the most-cited computer scientist Geoffrey Hinton, Elon Musk, OpenAI CEO Sam Altman, Bill Gates, and Stephen Hawking. Endorsers of the thesis sometimes express bafflement at skeptics: Gates says he does not "understand why some people are not concerned", and Hawking criticized widespread indifference in his 2014 editorial:
Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015, Peter Thiel, Amazon Web Services, Musk, and others jointly committed $1 billion to OpenAI, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development. Facebook co-founder Dustin Moskovitz has funded and seeded multiple labs working on AI alignment, notably $5.5 million in 2016 to launch the Centre for Human-Compatible AI led by Professor Stuart Russell. In January 2015, Elon Musk donated $10 million to the Future of Life Institute to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as DeepMind and Vicarious to "just keep an eye on what's going on with artificial intelligence", saying, "I think there is potentially a dangerous outcome there."
In early statements on the topic, Geoffrey Hinton, a major pioneer of deep learning, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is too sweet". In 2023 Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by the possibility that superhuman AI might be closer than he previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."
In his 2020 book The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.
Skepticism
Baidu Vice President Andrew Ng said in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility far enough in the future to not be worth researching.
Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research, or because it could damage the field's reputation. AI and AI ethics researchers Timnit Gebru, Emily M. Bender, Margaret Mitchell, and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power. They further note the association between those warning of existential risk and longtermism, which they describe as a "dangerous ideology" for its unscientific and utopian nature.
Wired editor Kevin Kelly argues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He argues that intelligence consists of many dimensions that are not well understood, and that conceptions of an 'intelligence ladder' are misleading. He notes the crucial role real-world experiments play in the scientific method, and that intelligence alone is no substitute for these.
Meta chief AI scientist Yann LeCun says that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control.
Several skeptics emphasize the potential near-term benefits of AI. Meta CEO Mark Zuckerberg believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.
Popular reaction
During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito said:
Obama added:
Hillary Clinton wrote in What Happened:
Public surveys
In 2018, a SurveyMonkey poll of the American public by USA Today found 68% thought the real current threat remains "human intelligence", but also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and that 38% said it would do "equal amounts of harm and good".
An April 2023 YouGov poll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned."
According to an August 2023 survey by the Pew Research Center, 52% of Americans felt more concerned than excited about new AI developments; nearly a third felt equally concerned and excited. In several areas, from healthcare and vehicle safety to product search and customer service, more Americans expected AI to have a helpful rather than a hurtful impact. The main exception is privacy: 53% of Americans believe AI will lead to higher exposure of their personal information.
Mitigation
Many scholars concerned about AGI existential risk believe that extensive research into the "control problem" is essential. This problem involves determining which safeguards, algorithms, or architectures can be implemented to increase the likelihood that a recursively-improving AI remains friendly after achieving superintelligence. Social measures are also proposed to mitigate AGI risks, such as a UN-sponsored "Benevolent AGI Treaty" to ensure that only altruistic AGIs are created. Additionally, an arms control approach and a global peace treaty grounded in international relations theory have been suggested, potentially for an artificial superintelligence to be a signatory.
Researchers at Google have proposed research into general "AI safety" issues to simultaneously mitigate both short-term risks from narrow AI and long-term risks from AGI. A 2020 estimate places global spending on AI existential risk somewhere between $10 and $50 million, compared with global spending on AI around perhaps $40 billion. Bostrom suggests prioritizing funding for protective technologies over potentially dangerous ones. Some, like Elon Musk, advocate radical human cognitive enhancement, such as direct neural linking between humans and machines; others argue that these technologies may pose an existential risk themselves. Another proposed method is closely monitoring or "boxing in" an early-stage AI to prevent it from becoming too powerful. A dominant, aligned superintelligent AI might also mitigate risks from rival AIs, although its creation could present its own existential dangers. Induced amnesia has been proposed as a way to mitigate risks of potential AI suffering and revenge seeking.
Institutions such as the Alignment Research Center, the Machine Intelligence Research Institute, the Future of Life Institute, the Centre for the Study of Existential Risk, and the Center for Human-Compatible AI are actively engaged in researching AI risk and safety.
Views on banning and regulation
Banning
Some scholars have said that even if AGI poses an existential risk, attempting to ban research into artificial intelligence is still unwise, and probably futile. Skeptics consider AI regulation pointless, as no existential risk exists. But scholars who believe in the risk argue that relying on AI industry insiders to regulate or constrain AI research is impractical due to conflicts of interest. They also agree with skeptics that banning research would be unwise, as research could be moved to countries with looser regulations or conducted covertly. Additional challenges to bans or regulation include technology entrepreneurs' general skepticism of government regulation and potential incentives for businesses to resist regulation and politicize the debate.
Regulation
In March 2023, the Future of Life Institute drafted Pause Giant AI Experiments: An Open Letter, a petition calling on major AI developers to agree on a verifiable six-month pause of any systems "more powerful than GPT-4" and to use that time to institute a framework for ensuring safety; or, failing that, for governments to step in with a moratorium. The letter referred to the possibility of "a profound change in the history of life on Earth" as well as potential risks of AI-generated propaganda, loss of jobs, human obsolescence, and society-wide loss of control. The letter was signed by prominent personalities in AI but also criticized for not focusing on current harms, missing technical nuance about when to pause, or not going far enough.
Musk called for some sort of regulation of AI development as early as 2017. According to NPR, he is "clearly not thrilled" to be advocating government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... [as] they should be." In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development.
In 2021 the United Nations (UN) considered banning autonomous lethal weapons, but consensus could not be reached. In July 2023 the UN Security Council for the first time held a session to consider the risks and threats posed by AI to world peace and stability, along with potential benefits. Secretary-General António Guterres advocated the creation of a global watchdog to oversee the emerging technology, saying, "Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead." At the council session, Russia said it believes AI risks are too poorly understood to be considered a threat to global stability. China argued against strict global regulation, saying countries should be able to develop their own rules, while also saying they opposed the use of AI to "create military hegemony or undermine the sovereignty of a country".
Regulation of conscious AGIs focuses on integrating them with existing human society and can be divided into considerations of their legal standing and of their moral rights. AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal diplomacy by communities of experts, together with a legal and political verification process.
In July 2023, the US government secured voluntary safety commitments from major tech companies, including OpenAI, Amazon, Google, Meta, and Microsoft. The companies agreed to implement safeguards, including third-party oversight and security testing by independent experts, to address concerns related to AI's potential risks and societal harms. The parties framed the commitments as an intermediate step while regulations are formed. Amba Kak, executive director of the AI Now Institute, said, "A closed-door deliberation with corporate actors resulting in voluntary safeguards isn't enough" and called for public deliberation and regulations of the kind to which companies would not voluntarily agree.
In October 2023, U.S. President Joe Biden issued an executive order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence". Alongside other requirements, the order mandates the development of guidelines for AI models that permit the "evasion of human control".
See also
Appeal to probability
AI alignment
AI safety
Butlerian Jihad
Effective altruism § Long-term future and global catastrophic risks
Gray goo
Human Compatible
Intelligence principle – Principle purporting a limit point of cultural evolution across civilizations
Lethal autonomous weapon
Paperclip maximizer
Philosophy of artificial intelligence
Robot ethics § In popular culture
Statement on AI risk of extinction
Superintelligence: Paths, Dangers, Strategies
Risk of astronomical suffering
System accident
Technological singularity
Notes
References
Bibliography
Future problems
Human extinction
AI safety
Technology hazards
Doomsday scenarios | Existential risk from artificial intelligence | [
"Technology",
"Engineering"
] | 9,582 | [
"Safety engineering",
"AI safety",
"Existential risk from artificial general intelligence",
"nan"
] |
46,584,349 | https://en.wikipedia.org/wiki/Sumihiro%27s%20theorem | In algebraic geometry, Sumihiro's theorem, introduced by Hideyasu Sumihiro, states that a normal algebraic variety with an action of a torus can be covered by torus-invariant affine open subsets.
The "normality" in the hypothesis cannot be relaxed. The hypothesis that the group acting on the variety is a torus can also not be relaxed.
Notes
References
External links
Theorems in algebraic geometry | Sumihiro's theorem | [
"Mathematics"
] | 84 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
46,586,008 | https://en.wikipedia.org/wiki/Weyl%27s%20tile%20argument | In philosophy, Weyl's tile argument, introduced by Hermann Weyl in 1949, is an argument against the notion that physical space is "discrete", as if composed of a number of finite-sized units or tiles. The argument purports to show that a distance function approximating the Pythagorean theorem cannot be defined on a discrete space and that, since the Pythagorean theorem has been confirmed to be approximately true in nature, physical space is not discrete. Academic debate on the topic continues, with counterarguments proposed in the literature.
The argument
The tile argument appears in Weyl's 1949 book Philosophy of Mathematics and Natural Sciences, where he writes:
A demonstration of Weyl's argument proceeds by constructing a square tiling of the plane representing a discrete space. A discretized right triangle, N units tall and N units long, can be constructed on the tiling. The hypotenuse of the resulting triangle will be N tiles long. However, by the Pythagorean theorem, a corresponding triangle in a continuous space—a triangle whose height and length are N—will have a hypotenuse measuring √2·N units long. To show that the former result does not converge to the latter for arbitrary values of N, one can examine the percent difference between the two results: (√2·N − N)/(√2·N) = 1 − 1/√2 ≈ 29%. Since N cancels out, the two results never converge, even in the limit of large N. The argument can be constructed for more general triangles, but, in each case, the result is the same. Thus, a discrete space does not even approximate the Pythagorean theorem.
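A small numerical check of the calculation above, written in Python purely for illustration: the relative gap between the tile-count "hypotenuse" and the Euclidean hypotenuse is independent of the triangle size, so it never shrinks.

```python
import math

for n in (10, 100, 10_000, 1_000_000):
    discrete = n                    # tiles along the hypotenuse of an n-by-n right triangle
    continuous = math.sqrt(2) * n   # Euclidean length from the Pythagorean theorem
    gap = (continuous - discrete) / continuous
    print(f"n={n:>9}: relative gap = {gap:.4f}")
# Every line prints 0.2929 (= 1 - 1/sqrt(2)); the discrepancy does not vanish as n grows.
```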
Responses
In response, Kris McDaniel has argued the Weyl tile argument depends on accepting a "size thesis" which posits that the distance between two points is given by the number of tiles between the two points. However, as McDaniel points out, the size thesis is not accepted for continuous spaces. Thus, we might have reason not to accept the size thesis for discrete spaces.
See also
Digital physics
Discrete calculus
Taxicab metric
Causal sets
Poisson point process
Natura non facit saltus
References
Philosophy of physics
Philosophy of mathematics
Discrete geometry
Spacetime | Weyl's tile argument | [
"Physics",
"Mathematics"
] | 437 | [
"Philosophy of physics",
"Discrete mathematics",
"Applied and interdisciplinary physics",
"Vector spaces",
"Discrete geometry",
"Space (mathematics)",
"nan",
"Theory of relativity",
"Spacetime"
] |
46,587,298 | https://en.wikipedia.org/wiki/Almgren%E2%80%93Pitts%20min-max%20theory | In mathematics, the Almgren–Pitts min-max theory (named after Frederick J. Almgren, Jr. and his student Jon T. Pitts) is an analogue of Morse theory for hypersurfaces.
The theory started with efforts to generalize George David Birkhoff's method for the construction of simple closed geodesics on the sphere, in order to allow the construction of embedded minimal surfaces in arbitrary 3-manifolds.
It has played roles in the solutions to a number of conjectures in geometry and topology found by Almgren and Pitts themselves and also by other mathematicians, such as Mikhail Gromov, Richard Schoen, Shing-Tung Yau, Fernando Codá Marques, André Neves, Ian Agol, among others.
Description and basic concepts
The theory allows the construction of embedded minimal hypersurfaces through variational methods.
In his PhD thesis, Almgren proved that the m-th homotopy group of the space of flat k-dimensional cycles on a closed Riemannian manifold is isomorphic to the (m+k)-th dimensional homology group of M. This result is a generalization of the Dold–Thom theorem, which can be thought of as the k=0 case of Almgren's theorem. Existence of non-trivial homotopy classes in the space of cycles suggests the possibility of constructing minimal submanifolds as saddle points of the volume function, as in Morse theory. In his subsequent work Almgren used these ideas to prove that for every k=1,...,n-1 a closed n-dimensional Riemannian manifold contains a stationary integral k-dimensional varifold, a generalization of minimal submanifold that may have singularities. Allard showed that such generalized minimal submanifolds are regular on an open and dense subset.
In the 1980s Almgren's student Jon Pitts greatly improved the regularity theory of minimal submanifolds obtained by Almgren in the case of codimension 1. He showed that when the dimension n of the manifold is between 3 and 6 the minimal hypersurface obtained using Almgren's min-max method is smooth. A key new idea in the proof was the notion of 1/j-almost minimizing varifolds. Richard Schoen and Leon Simon extended this result to higher dimensions. More specifically, they showed that every n-dimensional Riemannian manifold contains a closed minimal hypersurface constructed via min-max method that is smooth away from a closed set of dimension n-8.
By considering higher parameter families of codimension 1 cycles one can find distinct minimal hypersurfaces. Such construction was used by Fernando Codá Marques and André Neves in their proof of the Willmore conjecture.
See also
Almgren isomorphism theorem
Varifold
Geometric measure theory
Geometric analysis
Minimal surface
Freedman–He–Wang conjecture
Willmore conjecture
Yau's conjecture
References
Further reading
Le Centre de recherches mathématiques, CRM Le Bulletin, Automne/Fall 2015 — Volume 21, No 2, pp. 10–11 Iosif Polterovich (Montréal) and Alina Stancu (Concordia), "The 2015 Nirenberg Lectures in Geometric Analysis: Min-Max Theory and Geometry, by André Neves"
Topology
Geometry
Minimal surfaces
Calculus of variations
Measure theory | Almgren–Pitts min-max theory | [
"Physics",
"Chemistry",
"Mathematics"
] | 702 | [
"Foams",
"Topology",
"Space",
"Geometry",
"Spacetime",
"Minimal surfaces"
] |
46,590,902 | https://en.wikipedia.org/wiki/Coagulation%20%28water%20treatment%29 | In water treatment, coagulation and flocculation involve the addition of compounds that promote the clumping of fine floc into larger floc so that they can be more easily separated from the water. Coagulation is a chemical process that involves neutralization of charge whereas flocculation is a physical process and does not involve neutralization of charge. The coagulation-flocculation process can be used as a preliminary or intermediary step between other water or wastewater treatment processes like filtration and sedimentation. Iron and aluminium salts are the most widely used coagulants but salts of other metals such as titanium and zirconium have been found to be highly effective as well.
Factors
Coagulation is affected by the type of coagulant used, its dose and mass; pH and initial turbidity of the water that is being treated; and properties of the pollutants present. The effectiveness of the coagulation process is also affected by pretreatments like oxidation.
Mechanism
In a colloidal suspension, particles will settle very slowly or not at all because the colloidal particles carry surface electrical charges that mutually repel each other. This surface charge is most commonly evaluated in terms of zeta potential, the electrical potential at the slipping plane. To induce coagulation, a coagulant (typically a metallic salt) with the opposite charge is added to the water to overcome the repulsive charge and "destabilize" the suspension. For example, the colloidal particles are negatively charged and alum is added as a coagulant to create positively charged ions. Once the repulsive charges have been neutralized (since opposite charges attract), van der Waals force will cause the particles to cling together (agglomerate) and form micro floc.
Determining coagulant dose
Jar test
The dose of the coagulant to be used can be determined via the jar test. The jar test involves exposing same-volume samples of the water to be treated to different doses of the coagulant and then mixing all samples simultaneously for the same rapid-mixing time. The microfloc formed after coagulation further undergoes flocculation and is allowed to settle. The turbidity of the samples is then measured, and the dose producing the lowest turbidity can be taken as the optimum.
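A schematic of how jar-test results are reduced to an "optimum" dose. The dose–turbidity pairs below are hypothetical placeholders; the actual procedure, doses, and units come from the laboratory protocol, not from this sketch.

```python
# Hypothetical jar-test results: coagulant dose (mg/L) -> settled-water turbidity (NTU).
jar_test = {10: 8.5, 20: 4.1, 30: 1.7, 40: 2.3, 50: 3.0}

# The optimum dose is the one giving the lowest residual turbidity.
optimum_dose = min(jar_test, key=jar_test.get)
print(f"Optimum dose: {optimum_dose} mg/L "
      f"({jar_test[optimum_dose]} NTU residual turbidity)")
```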
Microscale dewatering tests
Despite its widespread use in the performance of so-called "dewatering experiments", the jar test is limited in its usefulness due to several disadvantages. For example, evaluating the performance of prospective coagulants or flocculants requires both significant volumes of water/wastewater samples (liters) and experimental time (hours). This limits the scope of the experiments which can be conducted, including the addition of replicates. Furthermore, the analysis of jar test experiments produces results which are often only semi-quantitative. Coupled with the wide range of chemical coagulants and flocculants that exist, it has been remarked that determining the most appropriate dewatering agent as well as the optimal dose "is widely considered to be more of an ‘art’ rather than a ‘science’". As such, dewatering performance tests such as the jar test lend themselves well to miniaturization. For example, the Microscale Flocculation Test developed by LaRue et al. reduces the scale of conventional jar tests down to the size of a standard multi-well microplate, which yields benefits stemming from the reduced sample volume and increased parallelization; this technique is also amenable to quantitative dewatering metrics, such as capillary suction time.
Streaming current detector
An automated device for determining the coagulant dose is the Streaming Current Detector (SCD). The SCD measures the net surface charge of the particles and shows a streaming current value of 0 when the charges are neutralized (cationic coagulants neutralize the anionic colloids). At this value (0), the coagulant dose can be said to be optimum.
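The streaming-current criterion can likewise be reduced to finding the dose at which the measured signal crosses zero. The sketch below linearly interpolates between two hypothetical readings that bracket zero; real SCD units and calibration are instrument-specific, and the numbers here are only illustrative.

```python
# Hypothetical (dose, streaming-current) readings; a cationic coagulant drives the signal upward.
readings = [(0, -4.2), (10, -2.5), (20, -0.9), (30, 0.6), (40, 1.8)]

def zero_crossing_dose(data):
    """Linearly interpolate the dose at which the streaming current reaches 0."""
    for (d0, s0), (d1, s1) in zip(data, data[1:]):
        if s0 <= 0 <= s1:
            return d0 + (0 - s0) * (d1 - d0) / (s1 - s0)
    return None

print(f"Estimated optimum dose: {zero_crossing_dose(readings):.1f}")   # ~26.0
```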
Limitations
Coagulation itself results in the formation of floc but flocculation is required to help the floc further aggregate and settle. The coagulation-flocculation process itself removes only about 60%-70% of Natural Organic Matter (NOM) and thus, other processes like oxidation, filtration and sedimentation are necessary for complete raw water or wastewater treatment. Coagulant aids (polymers that bridge the colloids together) are also often used to increase the efficiency of the process.
See also
Electrocoagulation
Industrial wastewater treatment
Industrial water treatment
References
Chemical engineering
Water treatment
Water technology
Environmental engineering | Coagulation (water treatment) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 967 | [
"Water treatment",
"Chemical engineering",
"Water pollution",
"Civil engineering",
"nan",
"Environmental engineering",
"Water technology"
] |
46,591,506 | https://en.wikipedia.org/wiki/Helium%20trimer | The helium trimer (or trihelium) is a weakly bound molecule consisting of three helium atoms. Van der Waals forces link the atoms together. The combination of three atoms is much more stable than the two-atom helium dimer. The three-atom combination of helium-4 atoms is an Efimov state. Helium-3 is predicted to form a trimer, although ground state dimers containing helium-3 are completely unstable.
Helium trimer molecules have been produced by expanding cold helium gas from a nozzle into a vacuum chamber. Such a set up also produces the helium dimer and other helium atom clusters. The existence of the molecule was proven by matter wave diffraction through a diffraction grating. Properties of the molecules can be discovered by Coulomb explosion imaging. In this process, a laser ionizes all three atoms simultaneously, which then fly away from each other due to electrostatic repulsion and are detected.
The helium trimer is large, extending over more than 100 Å, which is even larger than the helium dimer. The atoms are not arranged in an equilateral triangle, but instead form randomly shaped triangles.
Interatomic Coulombic decay can occur when one atom is ionised and excited. It can transfer energy to another atom in the trimer, even though they are separated. However, this is much more likely to occur when the atoms are close together, and so the interatomic distances measured in this way vary, with a width at half height spanning 3.3 to 12 Å. The predicted mean distance for Interatomic Coulombic decay in 4He3 is 10.4 Å. For 3He4He2 this distance is even larger, at 20.5 Å.
References
Further reading
Homonuclear triatomic molecules
Helium compounds
Van der Waals molecules
Allotropes | Helium trimer | [
"Physics",
"Chemistry"
] | 375 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Van der Waals molecules",
"Molecules",
"Materials",
"Matter"
] |
46,591,747 | https://en.wikipedia.org/wiki/Finite%20water-content%20vadose%20zone%20flow%20method | The finite water-content vadose zone flux method represents a one-dimensional alternative to the numerical solution of Richards' equation for simulating the movement of water in unsaturated soils. The finite water-content method solves the advection-like term of the Soil Moisture Velocity Equation, which is an ordinary differential equation alternative to the Richards partial differential equation. The Richards equation is difficult to approximate in general because it does not have a closed-form analytical solution except in a few cases. The finite water-content method is perhaps the first generic replacement for the numerical solution of Richards' equation. The finite water-content solution has several advantages over the Richards equation solution. First, as an ordinary differential equation it is explicit, guaranteed to converge and computationally inexpensive to solve. Second, using a finite volume solution methodology it is guaranteed to conserve mass. The finite water-content method readily simulates sharp wetting fronts, something that the Richards solution struggles with. The main limiting assumption required to use the finite water-content method is that the soil be homogeneous in layers.
The finite water-content vadose zone flux method is derived from the same starting point as the derivation of Richards' equation. However, the derivation employs a hodograph transformation to produce an advection solution that does not include soil water diffusivity, wherein z becomes the dependent variable and θ becomes an independent variable:
where:
K(θ) is the unsaturated hydraulic conductivity [L T−1],
ψ is the capillary pressure head [L] (negative for unsaturated soil),
z is the vertical coordinate [L] (positive downward),
θ is the water content (−), and
t is time [T].
This equation was converted into a set of three ordinary differential equations (ODEs) using the Method of Lines to replace the partial derivatives on the right-hand side of the equation with appropriate finite-difference forms. These three ODEs represent the dynamics of infiltrating water, falling slugs, and capillary groundwater, respectively.
Derivation
A superior derivation was published in 2017, showing that this equation is a diffusion-free version of the Soil Moisture Velocity Equation.
One way to solve this equation is by integration:
Instead, a finite water-content discretization is used and the integrals are replaced with summations:
where the summation is taken over the total number of finite water-content bins.
Using this approach, the conservation equation for each bin is:
The method of lines is used to replace the partial differential forms on the right-hand side with appropriate finite-difference forms. This process results in a set of three ordinary differential equations that describe the dynamics of infiltration fronts, falling slugs, and groundwater capillary fronts using a finite water-content discretization.
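A minimal sketch of the bookkeeping that a finite water-content discretization implies, assuming equal-width bins between an initial water content and saturation; the specific numbers are placeholders, and the right-hand sides of the method's three ODEs are deliberately omitted here.

```python
import numpy as np

# Discretize the water content between an initial value and saturation into N equal bins.
theta_i, theta_s, N = 0.10, 0.43, 33
d_theta = (theta_s - theta_i) / N                        # bin width (dimensionless water content)
theta_bins = theta_i + d_theta * (np.arange(N) + 0.5)    # bin-centre water contents

# Each bin carries the depth z[j] of its wetting front below the land surface.
z = np.zeros(N)                                          # [L]; all fronts start at the surface

# Replacing integrals by sums: the water stored by the fronts is sum_j z[j] * d_theta.
infiltrated_depth = np.sum(z * d_theta)
print(f"{N} bins of width {d_theta:.4f}; stored depth = {infiltrated_depth:.3f}")
```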
Method essentials
The finite water-content vadose zone flux calculation method replaces the Richards' equation PDE with a set of three ordinary differential equations (ODEs). These three ODEs are developed in the following sections. Furthermore, because the finite water-content method does not explicitly include soil water diffusivity, it necessitates a separate capillary relaxation step. Capillary relaxation represents a free-energy minimization process at the pore scale that produces no advection beyond the REV scale.
Infiltration fronts
With reference to Figure 1, water infiltrating the land surface can flow through the pore space between the initial water content and the water content behind the wetting front. In the context of the method of lines, the partial derivative terms are replaced with:
Given a ponded depth of water on the land surface, the Green and Ampt (1911) assumption is employed, in which the resulting term represents the capillary head gradient that is driving the flow. Therefore the finite water-content equation in the case of infiltration fronts is:
Falling slugs
After rainfall stops and all surface water infiltrates, water in bins that contain infiltration fronts detaches from the land surface. Assuming that the capillarity at the leading and trailing edges of this 'falling slug' of water is balanced, the water falls through the media at the incremental conductivity associated with the j-th bin:
Capillary groundwater fronts
In this case, the flux of water to the j-th bin occurs between bins j and i. Therefore, in the context of the method of lines:
and,
which yields:
The performance of this equation was verified for cases where the groundwater table velocity was less than 0.92 , using a column experiment fashioned after that by Childs and Poulovassilis (1962). Results of that validation showed that the finite water-content vadose zone flux calculation method performed comparably to the numerical solution of Richards' equation.
Capillary relaxation
Because the hydraulic conductivity rapidly increases as the water content moves towards saturation, with reference to Fig. 1, right-most bins in both capillary groundwater fronts and infiltration fronts can "out-run" their neighbors to the left. In the finite water-content discretization, these shocks are dissipated by the process of capillary relaxation, which represents a pore-scale free-energy minimization process that produces no advection beyond the REV scale. Numerically, this process is a numerical sort that places the fronts in monotonically decreasing magnitude from left to right.
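Because capillary relaxation amounts to a numerical sort, it can be sketched in a few lines; the array layout (one front depth per bin, arranged left to right) is an assumption made only for illustration.

```python
import numpy as np

def capillary_relaxation(front_depths):
    """Re-order front depths so that they decrease monotonically from left to
    right, dissipating 'shocks' where a right-most bin has out-run its
    neighbours to the left.  No water is created or destroyed: the multiset
    of depths is only rearranged."""
    return np.sort(front_depths)[::-1]

# Example: the fourth bin has out-run the second and third.
fronts = np.array([1.2, 0.8, 0.7, 0.9])
print(capillary_relaxation(fronts))    # [1.2 0.9 0.8 0.7]
```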
Constitutive relations
The finite water-content vadose zone flux method works with any monotonic water retention curve/unsaturated hydraulic conductivity relations, such as Brooks and Corey, Clapp and Hornberger, and van Genuchten–Mualem. The method might work with hysteretic water retention relations; these have not yet been tested.
Limitations
The finite water content method lacks the effect of soil water diffusion. This omission does not affect the accuracy of flux calculations using the method because the mean of the diffusive flux is small. Practically, this means that the shape of the wetting front plays no role in driving the infiltration. The method is thus far limited to 1-D in practical applications. The infiltration equation was extended to 2- and quasi-3 dimensions. More work remains in extending the entire method into more than one dimension.
Awards
The paper describing this method was selected by the Early Career Hydrogeologists Network of the International Association of Hydrogeologists to receive the "Coolest paper Published in 2015" award in recognition of the potential impact of the publication on the future of hydrogeology.
See also
Richards' equation
Infiltration (hydrology)
Soil Moisture Velocity Equation
References
Soil physics
Hydrology
Partial differential equations
Ordinary differential equations | Finite water-content vadose zone flow method | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,342 | [
"Environmental engineering",
"Hydrology",
"Applied and interdisciplinary physics",
"Soil physics"
] |
29,770,193 | https://en.wikipedia.org/wiki/Anemonin | Anemonin is a tri-spirocyclic dibutenolide natural product found in members of the buttercup family (Ranunculaceae) such as Ranunculus bulbosus, R. ficaria, R. sardous, R. sceleratus, and Clematis hirsutissima. Originally isolated in 1792 by M. Heyer, it is the dimerization product of the toxin protoanemonin. It is one of the likely active agents in plants used in Chinese medicine as an anti-inflammatory and in Native American medicine as a horse stimulant, and its unique biological properties give it pharmaceutical potential as an anti-inflammatory and cosmetic agent.
Biosynthetic origins
Anemonin is a homodimer formed from two protoanemonin subunits. Protoanemonin is formed from the enzymatic cleavage of ranunculin upon crushing plant matter. When a plant from this family is injured, a β-glucosidase cleaves ranunculin, liberating protoanemonin from glucose as a defense mechanism. This butenolide readily dimerizes in aqueous media to form a single cyclodimer.
Chemical structure and proposed mechanism of formation
Despite multiple possibilities, X-ray crystallography of the solid anemonin has revealed that the two rings exclusively possess a trans relationship. The central cyclobutane ring was found to be bent to a dihedral angle of 152°. NMR spectroscopy reveals that the central ring is also twisted 9-11°.
The highly selective formation of the head-to-head dimer has been rationalized through the stability of a proposed diradical intermediate; the resulting radicals after an initial carbon-carbon bond forming step are delocalized through the α,β-unsaturated system. These proposed radicals could also be stabilized through the captodative effect, as they are situated between the enone and sp3-hybridized oxygen of the butenolides.
Destabilizing dipole-dipole interactions are proposed to disfavor the transition state where the two butenolide rings adopt a cis conformation, leading to selectivity of a trans relationship between the lactone rings.
The formation of anemonin from protoanemonin is most likely a photochemical process. When Kataoka et al. compared the dimerization of protoanemonin in the presence and absence of radiation from a mercury lamp, they found a 75% yield with radiation and a very poor yield without radiation. It is not mentioned whether light was excluded from this control reaction; the low yield of anemonin may arise from visible light-mediated dimerization of protoanemonin.
Pharmaceutical potential
Anemonin possesses anti-inflammatory properties rather than the vesicant properties of its parent monomer. Numerous studies have demonstrated anemonin’s potential in treating ulcerative colitis, cerebral ischemia, and arthritis. Its activity against LPS-related inflammation and nitric oxide production contribute to its pharmaceutical potential. Anemonin also displays inhibition of melanin production in human melanocytes with mild cytotoxicity.
Given its skin permeability in ethanolic solutions and its anti-inflammatory and anti-pigmentation properties, anemonin may be a good candidate for topical formulations as arthritis medications or cosmetics. An extraction method with the potential for industrial-scale preparations of anemonin may provide inroads to drug development.
References
Furanones
Spiro compounds
Ranunculaceae
Cyclobutanes | Anemonin | [
"Chemistry"
] | 731 | [
"Organic compounds",
"Spiro compounds"
] |
29,783,201 | https://en.wikipedia.org/wiki/Monte%20Carlo%20methods%20for%20electron%20transport | The Monte Carlo method for electron transport is a semiclassical Monte Carlo (MC) approach of modeling semiconductor transport. Assuming the carrier motion consists of free flights interrupted by scattering mechanisms, a computer is utilized to simulate the trajectories of particles as they move across the device under the influence of an electric field using classical mechanics. The scattering events and the duration of particle flight are determined through the use of random numbers.
Background
Boltzmann transport equation
The Boltzmann transport equation (BTE) model has been the main tool used in the analysis of transport in semiconductors. The BTE is given by:
The distribution function, f, is a dimensionless function which is used to extract all observables of interest and gives a full depiction of the electron distribution in both real and k-space. Further, it physically represents the probability of particle occupation of energy k at position r and time t. In addition, because it is a seven-dimensional integro-differential equation (six dimensions in phase space and one in time), the BTE is cumbersome to solve and has closed-form analytical solutions only under very special restrictions. Numerically, the BTE is solved using either a deterministic method or a stochastic method. The deterministic solution is based on a grid-based numerical method such as the spherical harmonics approach, whereas Monte Carlo is the stochastic approach used to solve the BTE.
Monte Carlo method
The semiclassical Monte Carlo method is a statistical method used to yield an exact solution to the Boltzmann transport equation that includes complex band structure and scattering processes. The approach is semiclassical because scattering mechanisms are treated quantum mechanically using Fermi's golden rule, whereas the transport between scattering events is treated using the classical particle picture. The Monte Carlo model in essence tracks the particle trajectory at each free flight and chooses a corresponding scattering mechanism stochastically. Two of the great advantages of semiclassical Monte Carlo are its capability to provide accurate quantum mechanical treatment of various distinct scattering mechanisms within the scattering terms, and the absence of any assumption about the form of the carrier distribution in energy or k-space. The semiclassical equation describing the motion of an electron is
where F is the electric field, E(k) is the energy dispersion relation, and k is the momentum wave vector. To solve the above equation, one needs strong knowledge of the band structure (E(k)). The E(k) relation describes how the particle moves inside the device, in addition to providing useful information for transport such as the density of states (DOS) and the particle velocity. A full-band E(k) relation can be obtained using the semi-empirical pseudopotential method.
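As an illustration of the free-flight portion of such a simulation, the sketch below drifts a carrier between collisions using the semiclassical equations of motion with a simple parabolic band; the field value, effective mass, and time step are arbitrary assumptions, not parameters of any particular device model.

```python
import numpy as np

Q = 1.602e-19              # elementary charge [C]
HBAR = 1.055e-34           # reduced Planck constant [J s]
M_EFF = 0.26 * 9.109e-31   # assumed effective mass [kg]

def free_flight(k, F, dt):
    """Drift an electron for one free flight of duration dt.

    hbar * dk/dt = -q * F for an electron in field F, and for a parabolic band
    the group velocity is v = hbar * k / m*.  Returns the new wave vector and
    the real-space displacement accumulated during the flight."""
    k_new = k - Q * F * dt / HBAR
    v_avg = HBAR * (k + k_new) / (2.0 * M_EFF)   # mean velocity over the flight
    return k_new, v_avg * dt

k0 = np.zeros(3)
field = np.array([0.0, 0.0, 1e5])   # assumed field of 1 kV/cm along z [V/m]
print(free_flight(k0, field, 1e-14))
```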
Hydrodynamic and drift diffusion method
Both the drift diffusion (DD) and hydrodynamic (HD) models can be derived from the moments of the Boltzmann transport equation (BTE) using simplified approximations valid for long-channel devices. The DD scheme is the most classical approach and usually solves the Poisson equation and the continuity equations for carriers considering the drift and diffusion components. In this approach, the charge transit time is assumed to be very large in comparison to the energy relaxation time. On the other hand, the HD method solves the DD scheme together with the energy balance equations obtained from the moments of the BTE. Thus, one may capture and calculate physical details such as carrier heating and the velocity overshoot effect. Needless to say, an accurate discretization method is required in HD simulation, since the governing equations are strongly coupled and one has to deal with a larger number of variables compared to the DD scheme.
Comparison of semiclassical models
The accuracy of semiclassical models is compared based on the BTE by investigating how they treat the classical velocity overshoot problem, a key short-channel effect (SCE) in transistor structures. Essentially, velocity overshoot is a nonlocal effect in scaled devices, which is related to the experimentally observed increase in current drive and transconductance. As the channel length becomes smaller, the velocity is no longer saturated in the high-field region but overshoots the predicted saturation velocity. The cause of this phenomenon is that the carrier transit time becomes comparable to the energy relaxation time, so the mobile carriers in short-channel devices do not have enough time to reach equilibrium with the applied electric field by scattering. A summary of simulation results (Illinois tool: MOCA) with the DD and HD models is shown in the figure beside. Figure (a) shows the case in which the field is not high enough to cause the velocity overshoot effect in the whole channel region. In this limit, the data from the DD model fit the MC model well in the non-overshoot region, but the HD model overestimates the velocity in that region. Velocity overshoot is observed only near the drain junction in the MC data, and the HD model fits well in that region. The MC data show that the velocity overshoot effect is abrupt in the high-field region, which is not properly included in the HD model. For high-field conditions, as shown in figure (b), the velocity overshoot effect occurs over almost the whole channel, and the HD and MC results are very close in the channel region.
Monte Carlo for semiconductor transport
Band structure
Band structure describes the relationship between energy (E) and wave vector (k). The band structure is used to compute the movement of carriers under the action of the electric field, the scattering rate, and the final state after the collision. The silicon band structure and its Brillouin zone are shown in the figure below, but there is no analytical expression that is valid over the entire Brillouin zone. By using some approximation, there are two analytical models for the band structure, namely the parabolic and the non-parabolic models.
Parabolic band structure
For the concept of band structure, parabolic energy bands are generally assumed for simplicity. Electrons reside, at least when close to equilibrium, close to the minima of the E(k) relation. Then the E(k) relation can be expanded in a Taylor series as
The first derivative vanishes at the band minimum, so the gradient of E(k) is zero at k = 0. Thus,
which yields the definition of the effective mass tensor
This expression is true for semiconductors with an isotropic effective mass, for instance GaAs. In the case of silicon, the conduction band minima do not lie at k = 0 and the effective mass depends on the crystallographic orientation of the minimum as
where ml and mt describe the longitudinal and transverse effective masses, respectively.
Non-parabolic band structure
For higher applied fields, carriers reside above the minimum and the dispersion relation, E(k), does not satisfy the simple parabolic expression described above. This non-parabolicity is generally described by
where α is the coefficient of non-parabolicity, given by
where m0 is the electron mass in vacuum and Eg is the energy gap.
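A minimal numerical illustration of the non-parabolic dispersion is given below; the Kane-type relation E(1 + αE) = ħ²k²/2m*, the expression used for α, and the GaAs-like material parameters are assumptions chosen only for the example.

```python
import numpy as np

HBAR = 1.055e-34           # [J s]
M0 = 9.109e-31             # free-electron mass [kg]
M_EFF = 0.067 * M0         # assumed effective mass
E_G = 1.42 * 1.602e-19     # assumed energy gap [J]
ALPHA = (1.0 / E_G) * (1.0 - M_EFF / M0) ** 2   # assumed non-parabolicity coefficient [1/J]

def energy_nonparabolic(k):
    """Solve E*(1 + ALPHA*E) = hbar^2 k^2 / (2 m*) for the physical root E."""
    gamma = (HBAR * k) ** 2 / (2.0 * M_EFF)
    return (-1.0 + np.sqrt(1.0 + 4.0 * ALPHA * gamma)) / (2.0 * ALPHA)

k = 5e8   # wave vector [1/m]
print(energy_nonparabolic(k) / 1.602e-19, "eV")
```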
Full band structure
For many applications, the non-parabolic band structure provides a reasonable approximation. However, very high-field transport requires the better physical model provided by the full band structure. For the full-band approach, a numerically generated table of E(k) is used. The full-band approach for Monte Carlo simulation was first used by Karl Hess at the University of Illinois at Urbana-Champaign. This approach is based on the empirical pseudopotential method suggested by Cohen and Bergstresser [18]. The full-band approach is computationally expensive; however, with the advancement of computational power, it can be used as a more general approach.
Types of Monte Carlo simulation
One-particle Monte Carlo
For this type of simulation, one carrier is injected and the motion is tracked in the domain, until it exits through contact. Another carrier is then injected and the process repeated to simulate an ensemble of trajectories. This approach is mostly useful to study bulk properties, like the steady state drift velocity as a function of field.
Ensemble Monte Carlo
Instead of single carrier, a large ensemble of carriers is simulated at the same time. This procedure is obviously a good candidate for super-computation, since one may apply parallelization and vectorization. Also, it is now possible to perform ensemble averages directly. This approach is suitable for transient simulations.
Self-consistent ensemble Monte Carlo
This method couples the ensemble Monte Carlo procedure to Poisson's equation, and is the most suitable for device simulation. Typically, Poisson's equation is solved at fixed intervals to update the internal field, to reflect the internal redistribution of charge, due to the movement of carriers.
Random flight selection
The probability that the electron will suffer its next collision during dt around t is given by
where P[k(t)]dt is the probability that an electron in the state k suffers a collision during the time dt. Because of the complexity of the integral in the exponent, it is impractical to generate stochastic free flights with the distribution of the equation above. To overcome this difficulty, a fictitious "self-scattering" scheme is used. By doing this, the total scattering rate, including this self-scattering, is constant and equal to, say, Γ. By random selection, if self-scattering is selected, k′ after the collision is the same as k and the carrier continues its flight without perturbation. Introducing the constant Γ, the above equation reduces to
Random numbers r can be used very simply to generate stochastic free flights, whose duration is then given by t = −ln(r)/Γ. The computer time used for self-scattering is more than compensated for by the simplification of the calculation of the free-flight duration. To enhance the speed of the free-flight time calculation, several schemes such as the "Constant Technique" and "Piecewise Technique" are used to minimize the self-scattering events.
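A minimal sketch of this sampling step follows; the constant total rate Γ and the placeholder state-dependent rate are invented values used only to show the structure of the self-scattering scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
GAMMA = 1e14   # assumed constant total scattering rate, including self-scattering [1/s]

def free_flight_time():
    """Sample a stochastic free-flight duration t = -ln(r) / Gamma with 0 < r < 1."""
    return -np.log(rng.random()) / GAMMA

def real_scattering_rate(k):
    """Placeholder for the physical, state-dependent total scattering rate P[k]."""
    return 0.6e14   # assumed value for illustration

k = 0.0
t_flight = free_flight_time()
# With probability P[k]/Gamma the event is a real scattering; otherwise it is a
# fictitious self-scattering and the carrier state k is left unchanged.
if rng.random() < real_scattering_rate(k) / GAMMA:
    outcome = "real scattering: choose a mechanism and update k"
else:
    outcome = "self-scattering: k unchanged, flight continues"
print(t_flight, outcome)
```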
Scattering mechanisms
General background in solid-state physics
Important charge transport properties of semiconductor devices, such as the deviation from Ohm's law and the saturation of carrier mobility, are a direct consequence of scattering mechanisms. It is thus of great importance for a semiconductor device simulation to capture the physics of such mechanisms. Semiconductor Monte Carlo simulation, in this scope, is a very powerful tool for the ease and the precision with which an almost exhaustive array of scattering mechanisms can be included. The duration of the free flights is determined from the scattering rates. At the end of each flight, the appropriate scattering mechanism must be chosen in order to determine the final energy of the scattered carrier, or equivalently, its new momentum and scattering angle. In this sense, one distinguishes two broad types of scattering mechanisms, which naturally derive from the classical kinetic theory of collisions between two bodies:
Elastic scattering, where the energy of the particle is conserved after being scattered. Elastic scattering will hence only change the direction of the particle's momentum. Impurity scattering and surface scattering are, with a fair approximation, two good examples of elastic scattering processes.
Inelastic scattering, where energy is transferred between the scattered particle and the scattering center. Electron–phonon interactions are essentially inelastic since a phonon of definite energy is either emitted or absorbed by the scattered particle.
Before characterizing scattering mechanisms in greater mathematical details, it is important to note that when running semiconductor Monte Carlo simulations, one has to deal mainly with the following types of scattering events:
Acoustic Phonon: The charge carrier exchanges energy with an acoustic mode of the vibration of atoms in the crystal lattice. Acoustic Phonons mainly arise from thermal excitation of the crystal lattice.
Polar Optical: The charge carrier exchanges energy with one of the polar optical modes of the crystal lattice. These modes are not present in covalent semiconductors. Optical phonons arise from the vibration against each other of atoms of different types when there is more than one atom in the smallest unit cell, and are usually excited by light.
Non-Polar Optical: Energy is exchanged with an optical mode. Non-polar optical phonons must generally be considered in covalent semiconductors and the L-valley of GaAs.
Equivalent Intervalley Phonon: Due to the interaction with a phonon, the charge carrier transitions from initial states to final states which belong to different but equivalent valleys. Typically, this type of scattering mechanism describes the transition of an electron from one X-valley to another X-valley, or from one L-valley to another L-valley.
Non Equivalent Intervalley Phonon: Involves the transition of a charge carrier between valleys of different types.
Piezoelectric Phonon: For low temperatures.
Ionized Impurity: Reflects the deviation of a particle from its ballistic trajectory due to Coulomb interaction with an ionized impurity in the crystal lattice. Because the mass of an electron is relatively small in comparison to that of an impurity, the Coulomb cross section decreases rapidly with the difference of the modulus of momentum between the initial and final state. Therefore, impurity scattering events are mostly considered for intravalley scattering, intraband scattering and, to a minor extent, interband scattering.
Carrier-Carrier: (electron-electron, hole-hole and electron-hole interactions). When carrier concentration is high, this type of scattering reflects the electrostatic interaction between charge carriers. This problem becomes very quickly computationally intensive with an increasing number of particles in an ensemble simulation. In this scope, Particle-Particle–Particle-Mesh (P3M) algorithms, which distinguish short range and long range interaction of a particle with its surrounding charge gas, have proved efficient in including carrier-carrier interaction in the semiconductor Monte Carlo simulation. Very often, the charge of the carriers is assigned to a grid using a Cloud-in-Cell method, where part of the charge of a given particle is assigned to a given number of closest grid points with a certain weight factor.
Plasmon: Reflects the effect of the collective oscillation of the charge carriers on a given particle.
Inclusion of scattering mechanisms in Monte Carlo
A computationally efficient approach to including scattering in Monte Carlo simulation consists in storing the scattering rates of the individual mechanisms in tables. Given the different scattering rates for a precise particle state, one may then randomly select the scattering process at the end of the free flight. These scattering rates are very often derived using the Born approximation, in which a scattering event is merely a transition between two momentum states of the carrier involved. As discussed in section II-I, the quantum many-body problem arising from the interaction of a carrier with its surrounding environment (phonons, electrons, holes, plasmons, impurities,...) can be reduced to a two-body problem using the quasiparticle approximation, which separates the carrier of interest from the rest of the crystal. Within these approximations,
Fermi's golden rule gives, to first order, the transition probability per unit time for a scattering mechanism from a state k to a state k′:
where H' is the perturbation Hamiltonian representing the collision and E and E′ are respectively the initial and final energies of the system constituted of both the carrier and the electron and phonon gas. The Dirac δ-function stands for the conservation of energy. In addition, the term ⟨k′|H′|k⟩, generally referred to as the matrix element, mathematically represents an inner product of the initial and final wave functions of the carrier:
In a crystal lattice, the initial and final wavefunctions are simply Bloch waves. When it is possible, analytic expressions for the matrix elements are commonly found by Fourier expanding the Hamiltonian H', as in the case of impurity scattering or acoustic phonon scattering. In the important case of a transition from an energy state E to an energy state E' due to a phonon of wave vector q and frequency ω, the energy and momentum changes are:
where R is a reciprocal lattice vector. Umklapp processes (or U-processes) change the momentum of the particle after scattering and therefore limit conduction in semiconductor crystals. Physically, U-processes occur when the final momentum of the particle points out of the first Brillouin zone. Once one knows the scattering probability per unit time from a state k to a state k', it is interesting to determine the scattering rate for a given scattering process. The scattering rate gives the probability per unit time to scatter from a state k to any other state in reciprocal space. Therefore, the scattering rate is
which can be readily used to determine the free flight time and the scattering process as discussed in section 3-3. It is important to note that this scattering rate will be dependent on the band structure of the material (the dependence arises from the matrix elements).
Selection of scattering mode and scattered trajectory
At the end of a free flight, a scattering mode and angle must be randomly chosen. In order to determine the scattering mechanism, one has to consider all the scattering rates of the mechanisms relevant to the simulation as well as the total scattering rate at the time of scattering. Selecting a scattering mechanism then simply amounts to generating a uniformly distributed random number 0 < r < 1 and referring to the following rules
A computationally efficient approach to selecting the scattering mechanism consists in adding a "void" scattering mechanism so that the total rate Γ remains constant over time. If a particle is scattered according to this mechanism, it will keep its ballistic trajectory after scattering takes place. In order to choose a new trajectory, one must first derive the energy (or momentum) of the particle after scattering
where one term accounts for phonon emission or absorption and another term is non-zero for inter-valley scattering. The final energy (and the band structure) directly yield the modulus of the new momentum k'. At this point one only needs to choose a new direction (or angle) for the scattered particle. In some simple cases, such as phonon scattering with a parabolic dispersion relation, the scattering angle is random and evenly distributed on the sphere of radius k'. Using spherical coordinates, the process of choosing the angle is equivalent to randomly picking two angles θ and φ. If the angles are distributed according to a density p(θ, φ), then for a uniform distribution over the sphere, the probability of picking a point of the sphere is
It is possible, in this case, to separate the two variables. Integrating over one angle and then over the other, one finds
The two spherical angles can then be chosen, in the uniform case, by generating two random numbers 0 < r1, r2 < 1 such that
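The selection of the mechanism and of the isotropic scattering angles can be sketched as follows; the table of rates is a made-up example, and the angle formulas correspond to the simple isotropic case discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

def choose_mechanism(rates):
    """Pick a scattering mechanism with probability proportional to its rate by
    comparing a uniform random number against the cumulative rates."""
    cumulative = np.cumsum(rates)
    r = rng.random() * cumulative[-1]
    return int(np.searchsorted(cumulative, r))

def isotropic_angles():
    """Choose a direction uniformly on the sphere: phi = 2*pi*r1, cos(theta) = 1 - 2*r2."""
    r1, r2 = rng.random(), rng.random()
    return 2.0 * np.pi * r1, np.arccos(1.0 - 2.0 * r2)

# Assumed rates [1/s] for, e.g., acoustic, optical-absorption and optical-emission channels.
rates = np.array([2.0e13, 0.7e13, 1.3e13])
print(choose_mechanism(rates), isotropic_angles())
```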
Quantum corrections for Monte Carlo simulation
The current trend of scaling down semiconductor devices has forced physicists to incorporate quantum mechanical issues in order to acquire a thorough understanding of device behavior. Simulating the behavior of nano-scale devices necessitates the use of a full quantum transport model, especially for cases when the quantum effects cannot be ignored. This complication, however, can be avoided in the case of practical devices like the modern-day MOSFET by employing quantum corrections within a semi-classical framework. The semi-classical Monte Carlo model can then be employed to simulate the device characteristics. The quantum corrections can be incorporated into a Monte Carlo simulator by simply introducing a quantum potential term which is superimposed onto the classical electrostatic potential seen by the simulated particles. The figure beside pictorially depicts the essential features of this technique. The various quantum approaches available for implementation are described in the following subsections.
Wigner-based correction
The Wigner transport equation forms the basis for the Wigner-based quantum correction.
where k is the crystal momentum, V is the classical potential, the term on the RHS is the effect of collisions, and the fourth term on the LHS represents non-local quantum mechanical effects. The standard Boltzmann transport equation is obtained when the non-local terms on the LHS vanish in the limit of slow spatial variations. The simplified quantum-corrected BTE then becomes
where the quantum potential is contained in the correction term.
Effective potential correction
This method for quantum correction was developed by Feynman and Hibbs in 1965. In this method the effective potential is derived by calculating the contribution to the path integral of a particle's quantum fluctuations around its classical path. This calculation is undertaken by a variational method using a trial potential to first order. The effective classical potential in the average point on each path then becomes
Schrödinger-based correction
This approach involves periodical solving of a Schrödinger equation in a simulation with the input being the self-consistent electrostatic potential. The exact energy levels and wavefunctions relating to the electrostatic potential solution are employed to calculate the quantum potential. The quantum correction obtained on the basis of this method can be visualised by the following equation
where Vschr is the quantum correction potential, z is the direction perpendicular to the interface, nq is the quantum density from the Schrödinger equation, which is equivalent to the converged Monte Carlo concentration, Vp is the potential from the Poisson solution, and V0 is an arbitrary reference potential far away from the quantum region, such that the correction goes to zero in the region of semi-classical behavior. Even though the above-mentioned potentials for quantum correction differ in their method of calculation and their basic assumptions, when it comes to their inclusion into the Monte Carlo simulation they are all incorporated in the same way.
See also
Monte Carlo method
Semiconductor device
Monte Carlo method for photon transport
Electronic band structure
Method of quantum characteristics
Quantum Monte Carlo
Quasi-Monte Carlo method
References
Monte Carlo methods
Quantum mechanics
Semiconductor analysis | Monte Carlo methods for electron transport | [
"Chemistry"
] | 4,369 | [
"Quantum Monte Carlo",
"Quantum chemistry"
] |
50,614,338 | https://en.wikipedia.org/wiki/Distribution%20on%20a%20linear%20algebraic%20group | In algebraic geometry, given a linear algebraic group G over a field k, a distribution on it is a linear functional satisfying some support condition. A convolution of distributions is again a distribution and thus they form the Hopf algebra on G, denoted by Dist(G), which contains the Lie algebra Lie(G) associated to G. Over a field of characteristic zero, Cartier's theorem says that Dist(G) is isomorphic to the universal enveloping algebra of the Lie algebra of G and thus the construction gives no new information. In the positive characteristic case, the algebra can be used as a substitute for the Lie group–Lie algebra correspondence and its variant for algebraic groups in characteristic zero; this is, for example, the approach taken in .
Construction
The Lie algebra of a linear algebraic group
Let k be an algebraically closed field and G a linear algebraic group (that is, affine algebraic group) over k. By definition, Lie(G) is the Lie algebra of all derivations of k[G] that commute with the left action of G. As in the Lie group case, it can be identified with the tangent space to G at the identity element.
Enveloping algebra
There is the following general construction for a Hopf algebra. Let A be a Hopf algebra. The finite dual of A is the space of linear functionals on A with kernels containing left ideals of finite codimensions. Concretely, it can be viewed as the space of matrix coefficients.
The adjoint group of a Lie algebra
Distributions on an algebraic group
Definition
Let X = Spec A be an affine scheme over a field k and let Ix be the kernel of the restriction map to the residue field of x. By definition, a distribution f supported at x is a k-linear functional on A that vanishes on Ixn for some n. (Note: the definition is still valid if k is an arbitrary ring.)
Now, if G is an algebraic group over k, we let Dist(G) be the set of all distributions on G supported at the identity element (often just called distributions on G). If f, g are in it, we define the product of f and g, denoted by f * g, to be the linear functional
where Δ is the comultiplication, that is, the homomorphism induced by the multiplication G × G → G. The multiplication turns out to be associative (using the coassociativity of Δ) and thus Dist(G) is an associative algebra, as the set is closed under the multiplication by the formula:
(*)
It is also unital, with the unit being the linear functional given by evaluation at the identity element, the Dirac delta measure.
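For orientation, assuming the usual Hopf-algebra conventions and Sweedler notation (notational assumptions made here only to spell the product out), the convolution of two distributions f and g can be written as

\[(f * g)(\varphi) \;=\; (f \otimes g)\bigl(\Delta(\varphi)\bigr) \;=\; \sum f\bigl(\varphi_{(1)}\bigr)\, g\bigl(\varphi_{(2)}\bigr), \qquad \varphi \in k[G].\]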
The Lie algebra Lie(G) sits inside Dist(G). Indeed, by definition, Lie(G) is the tangent space to G at the identity element 1; i.e., the dual space of I1/I12. Thus, a tangent vector amounts to a linear functional on I1 that has no constant term and kills the square of I1, and the formula (*) implies that the commutator f * g − g * f of two such functionals is still a tangent vector.
Let 𝔤 = Lie(G) be the Lie algebra of G. Then, by the universal property, the inclusion 𝔤 ⊂ Dist(G) induces an algebra homomorphism:
When the base field k has characteristic zero, this homomorphism is an isomorphism.
Examples
Additive group
Let G = Ga be the additive group; i.e., G(R) = R for any k-algebra R. As a variety G is the affine line; i.e., the coordinate ring is k[t] and I1^n = (t^n).
Multiplicative group
Let G = Gm be the multiplicative group; i.e., G(R) = R* for any k-algebra R. The coordinate ring of G is k[t, t−1] (since G is really GL1(k)).
Correspondence
For any closed subgroups H, K of G, if k is perfect and H is irreducible, then
If V is a G-module (that is, a representation of G), then it admits a natural structure of Dist(G)-module, which in turn gives a module structure over the universal enveloping algebra of Lie(G).
Any action of G on an affine algebraic variety X induces a representation of G on the coordinate ring k[X]. In particular, the conjugation action of G induces the action of G on k[G]. One can show that I1 is stable under G, and thus G acts on (k[G]/I1^n)* and hence on their union Dist(G). The resulting action is called the adjoint action of G.
The case of finite algebraic groups
Let G be an algebraic group that is "finite" as a group scheme; for example, any finite group may be viewed as a finite algebraic group. There is an equivalence of categories between the category of finite algebraic groups and the category of finite-dimensional cocommutative Hopf algebras given by mapping G to k[G]*, the dual of the coordinate ring of G. Note that Dist(G) is a (Hopf) subalgebra of k[G]*.
Relation to Lie group–Lie algebra correspondence
Notes
References
Milne, iAG: Algebraic Groups: An introduction to the theory of algebraic group schemes over fields
Claudio Procesi, Lie groups: An approach through invariants and representations, Springer, Universitext 2006
Further reading
Linear algebraic groups and their Lie algebras by Daniel Miller Fall 2014
Algebraic geometry | Distribution on a linear algebraic group | [
"Mathematics"
] | 1,124 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
40,528,319 | https://en.wikipedia.org/wiki/Damping%20capacity | Damping capacity is a mechanical property of materials that measures a material's ability to dissipate elastic strain energy during mechanical vibration or wave propagation. When ranked according to damping capacity, materials may be roughly categorized as either high- or low-damping. Low-damping materials may be utilized in musical instruments, where sustained mechanical vibration and acoustic wave propagation are desired. Conversely, high-damping materials are valuable in suppressing vibration for the control of noise and for the stability of sensitive systems and instruments.
Overview
A large damping capacity is desirable for materials used in structures where unwanted vibrations are induced during operation, such as machine tool bases or crankshafts. Materials like brass and steel have small damping capacities, allowing vibration energy to be transmitted through them without attenuation. An example of a material with a large damping capacity is gray cast iron.
An understanding of this effect can be gained from observation of a stress–strain diagram with exaggerated features. The units of stress are force per unit area, while strain is length per length and hence dimensionless. The area enclosed by a complete loading and unloading cycle, obtained by integrating over the cycle, therefore has units of force times length per unit volume, which is equivalent to energy per unit volume. This energy represents the amount of mechanical energy converted to heat in a volume of material, resulting in damping.
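As a numerical illustration of this energy interpretation, the sketch below integrates an assumed loading–unloading stress–strain cycle with a shoelace (trapezoidal) formula; the elliptical loop data and the loss angle are invented for the example.

```python
import numpy as np

def dissipated_energy_per_volume(strain, stress):
    """Area enclosed by a closed loading-unloading cycle (shoelace formula).
    strain is dimensionless and stress is in Pa, so the result is in J/m^3."""
    x = np.append(strain, strain[0])   # close the loop
    y = np.append(stress, stress[0])
    return 0.5 * abs(np.sum(x[:-1] * y[1:] - x[1:] * y[:-1]))

# Assumed elliptical hysteresis loop: strain amplitude 0.002, stress amplitude
# 50 MPa, with a 0.15 rad loss angle representing the damping.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
strain = 2e-3 * np.sin(t)
stress = 50e6 * np.sin(t + 0.15)
print(dissipated_energy_per_volume(strain, stress), "J/m^3")
```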
References
Materials science | Damping capacity | [
"Physics",
"Materials_science",
"Engineering"
] | 272 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
40,528,449 | https://en.wikipedia.org/wiki/Monte%20Carlo%20tree%20search | In computer science, Monte Carlo tree search (MCTS) is a heuristic search algorithm for some kinds of decision processes, most notably those employed in software that plays board games. In that context MCTS is used to solve the game tree.
MCTS was combined with neural networks in 2016 and has been used in multiple board games like Chess, Shogi, Checkers, Backgammon, Contract Bridge, Go, Scrabble, and Clobber as well as in turn-based-strategy video games (such as Total War: Rome II's implementation in the high level campaign AI) and applications outside of games.
History
Monte Carlo method
The Monte Carlo method, which uses random sampling for deterministic problems which are difficult or impossible to solve using other approaches, dates back to the 1940s. In his 1987 PhD thesis, Bruce Abramson combined minimax search with an expected-outcome model based on random game playouts to the end, instead of the usual static evaluation function. Abramson said the expected-outcome model "is shown to be precise, accurate, easily estimable, efficiently calculable, and domain-independent." He experimented in-depth with tic-tac-toe and then with machine-generated evaluation functions for Othello and chess.
Such methods were then explored and successfully applied to heuristic search in the field of automated theorem proving by W. Ertel, J. Schumann and C. Suttner in 1989, thus improving the exponential search times of uninformed search algorithms such as breadth-first search, depth-first search or iterative deepening.
In 1992, B. Brügmann employed it for the first time in a Go-playing program. In 2002, Chang et al. proposed the idea of "recursive rolling out and backtracking" with "adaptive" sampling choices in their Adaptive Multi-stage Sampling (AMS) algorithm for the model of Markov decision processes. AMS was the first work to explore the idea of UCB-based exploration and exploitation in constructing sampled/simulated (Monte Carlo) trees and was the main seed for UCT (Upper Confidence Trees).
Monte Carlo tree search (MCTS)
In 2006, inspired by its predecessors, Rémi Coulom described the application of the Monte Carlo method to game-tree search and coined the name Monte Carlo tree search; L. Kocsis and Cs. Szepesvári developed the UCT (Upper Confidence bounds applied to Trees) algorithm; and S. Gelly et al. implemented UCT in their program MoGo. In 2008, MoGo achieved dan (master) level in 9×9 Go, and the Fuego program began to win against strong amateur players in 9×9 Go.
In January 2012, the Zen program won 3:1 in a Go match on a 19×19 board with an amateur 2 dan player. Google Deepmind developed the program AlphaGo, which in October 2015 became the first Computer Go program to beat a professional human Go player without handicaps on a full-sized 19x19 board. In March 2016, AlphaGo was awarded an honorary 9-dan (master) level in 19×19 Go for defeating Lee Sedol in a five-game match with a final score of four games to one. AlphaGo represents a significant improvement over previous Go programs as well as a milestone in machine learning as it uses Monte Carlo tree search with artificial neural networks (a deep learning method) for policy (move selection) and value, giving it efficiency far surpassing previous programs.
MCTS algorithm has also been used in programs that play other board games (for example Hex, Havannah, Game of the Amazons, and Arimaa), real-time video games (for instance Ms. Pac-Man and Fable Legends), and nondeterministic games (such as skat, poker, Magic: The Gathering, or Settlers of Catan).
Principle of operation
The focus of MCTS is on the analysis of the most promising moves, expanding the search tree based on random sampling of the search space.
The application of Monte Carlo tree search in games is based on many playouts, also called roll-outs. In each playout, the game is played out to the very end by selecting moves at random. The final game result of each playout is then used to weight the nodes in the game tree so that better nodes are more likely to be chosen in future playouts.
The most basic way to use playouts is to apply the same number of playouts after each legal move of the current player, then choose the move which led to the most victories. The efficiency of this method—called Pure Monte Carlo Game Search—often increases with time as more playouts are assigned to the moves that have frequently resulted in the current player's victory according to previous playouts. Each round of Monte Carlo tree search consists of four steps:
Selection: Start from root R and select successive child nodes until a leaf node L is reached. The root is the current game state and a leaf is any node that has a potential child from which no simulation (playout) has yet been initiated. The section below says more about a way of biasing the choice of child nodes that lets the game tree expand towards the most promising moves, which is the essence of Monte Carlo tree search.
Expansion: Unless L ends the game decisively (e.g. win/loss/draw) for either player, create one (or more) child nodes and choose node C from one of them. Child nodes are any valid moves from the game position defined by L.
Simulation: Complete one random playout from node C. This step is sometimes also called playout or rollout. A playout may be as simple as choosing uniform random moves until the game is decided (for example in chess, the game is won, lost, or drawn).
Backpropagation: Use the result of the playout to update information in the nodes on the path from C to R.
This graph shows the steps involved in one decision, with each node showing the ratio of wins to total playouts from that point in the game tree for the player that the node represents. In the Selection diagram, black is about to move. The root node shows there are 11 wins out of 21 playouts for white from this position so far. It complements the total of 10/21 black wins shown along the three black nodes under it, each of which represents a possible black move. Note that this graph does not follow the UCT algorithm described below.
If white loses the simulation, all nodes along the selection incremented their simulation count (the denominator), but among them only the black nodes were credited with wins (the numerator). If instead white wins, all nodes along the selection would still increment their simulation count, but among them only the white nodes would be credited with wins. In games where draws are possible, a draw causes the numerator for both black and white to be incremented by 0.5 and the denominator by 1. This ensures that during selection, each player's choices expand towards the most promising moves for that player, which mirrors the goal of each player to maximize the value of their move.
Rounds of search are repeated as long as the time allotted to a move remains. Then the move with the most simulations made (i.e. the highest denominator) is chosen as the final answer.
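To make the four steps concrete, here is a compact, game-agnostic sketch of one MCTS decision. The GameState interface it relies on (legal_moves, play, is_terminal, player_to_move, and result, the last returning 1, 0 or 0.5 from a given player's point of view) is an assumed placeholder rather than any particular library's API.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = list(state.legal_moves())
        self.wins, self.visits = 0.0, 0

def mcts(root_state, n_iter=1000, c=math.sqrt(2)):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend while the node is fully expanded and has children.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ch.wins / ch.visits
                       + c * math.sqrt(math.log(node.visits) / ch.visits))
        # 2. Expansion: add one child for a not-yet-tried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), parent=node, move=move))
            node = node.children[-1]
        # 3. Simulation: play uniformly random moves until the game is decided.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation: credit the result to every node on the selected path,
        #    scoring each node from the viewpoint of the player who moved into it.
        while node is not None:
            node.visits += 1
            if node.parent is not None:
                node.wins += state.result(node.parent.state.player_to_move())
            node = node.parent
    # Final answer: the move with the most simulations.
    return max(root.children, key=lambda ch: ch.visits).move
```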
Pure Monte Carlo game search
This basic procedure can be applied to any game whose positions necessarily have a finite number of moves and finite length. For each position, all feasible moves are determined: k random games are played out to the very end, and the scores are recorded. The move leading to the best score is chosen. Ties are broken by fair coin flips. Pure Monte Carlo Game Search results in strong play in several games with random elements, as in the game EinStein würfelt nicht!. It converges to optimal play (as k tends to infinity) in board filling games with random turn order, for instance in the game Hex with random turn order. DeepMind's AlphaZero replaces the simulation step with an evaluation based on a neural network.
Exploration and exploitation
The main difficulty in selecting child nodes is maintaining some balance between the exploitation of deep variants after moves with high average win rate and the exploration of moves with few simulations. The first formula for balancing exploitation and exploration in games, called UCT (Upper Confidence Bound 1 applied to trees), was introduced by Levente Kocsis and Csaba Szepesvári. UCT is based on the UCB1 formula derived by Auer, Cesa-Bianchi, and Fischer and the provably convergent AMS (Adaptive Multi-stage Sampling) algorithm first applied to multi-stage decision-making models (specifically, Markov Decision Processes) by Chang, Fu, Hu, and Marcus. Kocsis and Szepesvári recommend choosing, in each node of the game tree, the move for which the expression wi/ni + c·√(ln Ni / ni) has the highest value. In this formula:
wi stands for the number of wins for the node considered after the i-th move
ni stands for the number of simulations for the node considered after the i-th move
Ni stands for the total number of simulations after the i-th move run by the parent node of the one considered
c is the exploration parameter—theoretically equal to √2; in practice usually chosen empirically
The first component of the formula above corresponds to exploitation; it is high for moves with high average win ratio. The second component corresponds to exploration; it is high for moves with few simulations.
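Expressed as code, the selection rule is a direct transcription of the expression above; variable names mirror the symbols in the list, and the default value of the exploration parameter follows the theoretical choice √2.

```python
import math

def uct_value(w_i, n_i, N_i, c=math.sqrt(2)):
    """UCB1 applied to trees: exploitation term plus exploration term."""
    return w_i / n_i + c * math.sqrt(math.log(N_i) / n_i)

# During selection, the child maximizing uct_value is followed at each level of the tree.
```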
Most contemporary implementations of Monte Carlo tree search are based on some variant of UCT that traces its roots back to the AMS simulation optimization algorithm for estimating the value function in finite-horizon Markov Decision Processes (MDPs) introduced by Chang et al. (2005) in Operations Research. (AMS was the first work to explore the idea of UCB-based exploration and exploitation in constructing sampled/simulated (Monte Carlo) trees and was the main seed for UCT.)
Advantages and disadvantages
Although it has been proven that the evaluation of moves in Monte Carlo tree search converges to minimax when using UCT, the basic version of Monte Carlo tree search converges only in so-called "Monte Carlo Perfect" games. However, Monte Carlo tree search does offer significant advantages over alpha–beta pruning and similar algorithms that minimize the search space.
In particular, pure Monte Carlo tree search does not need an explicit evaluation function. Simply implementing the game's mechanics is sufficient to explore the search space (i.e. the generating of allowed moves in a given position and the game-end conditions). As such, Monte Carlo tree search can be employed in games without a developed theory or in general game playing.
The game tree in Monte Carlo tree search grows asymmetrically as the method concentrates on the more promising subtrees. Thus, it achieves better results than classical algorithms in games with a high branching factor.
A disadvantage is that in certain positions, there may be moves that look superficially strong, but that actually lead to a loss via a subtle line of play. Such "trap states" require thorough analysis to be handled correctly, particularly when playing against an expert player; however, MCTS may not "see" such lines due to its policy of selective node expansion. It is believed that this may have been part of the reason for AlphaGo's loss in its fourth game against Lee Sedol. In essence, the search attempts to prune sequences which are less relevant. In some cases, a play can lead to a very specific line of play which is significant, but which is overlooked when the tree is pruned, and this outcome is therefore "off the search radar".
Improvements
Various modifications of the basic Monte Carlo tree search method have been proposed to shorten the search time. Some employ domain-specific expert knowledge, others do not.
Monte Carlo tree search can use either light or heavy playouts. Light playouts consist of random moves while heavy playouts apply various heuristics to influence the choice of moves. These heuristics may employ the results of previous playouts (e.g. the Last Good Reply heuristic) or expert knowledge of a given game. For instance, in many Go-playing programs certain stone patterns in a portion of the board influence the probability of moving into that area. Paradoxically, playing suboptimally in simulations sometimes makes a Monte Carlo tree search program play stronger overall.
Domain-specific knowledge may be employed when building the game tree to help the exploitation of some variants. One such method assigns nonzero priors to the number of won and played simulations when creating each child node, leading to artificially raised or lowered average win rates that cause the node to be chosen more or less frequently, respectively, in the selection step. A related method, called progressive bias, consists in adding to the UCB1 formula a bi/ni element, where bi is a heuristic score of the i-th move.
The basic Monte Carlo tree search collects enough information to find the most promising moves only after many rounds; until then its moves are essentially random. This exploratory phase may be reduced significantly in a certain class of games using RAVE (Rapid Action Value Estimation). In these games, permutations of a sequence of moves lead to the same position. Typically, they are board games in which a move involves placement of a piece or a stone on the board. In such games the value of each move is often only slightly influenced by other moves.
In RAVE, for a given game tree node, its child nodes store not only the statistics of wins in playouts started in that node but also the statistics of wins in all playouts started in that node and below it, if they contain move i (also when the move was played in the tree, between the node and a playout). This way the contents of tree nodes are influenced not only by moves played immediately in a given position but also by the same moves played later.
When using RAVE, the selection step selects the node for which the modified UCB1 formula has the highest value. In this formula, w̃i and ñi stand for the number of won playouts containing move i and the number of all playouts containing move i, and the function β(ni, ñi) should be close to one for relatively small ni and ñi and close to zero for relatively large ni and ñi. One of many formulas for β, proposed by D. Silver, says that in balanced positions one can take β(ni, ñi) = ñi/(ni + ñi + 4b²niñi), where b is an empirically chosen constant.
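A hedged sketch of the RAVE-modified selection value is shown below; it assumes that each child tracks both ordinary counts and AMAF ("all moves as first") counts, and it uses the β formula quoted above, with b and c as tunable parameters.

```python
import math

def rave_value(wins, visits, amaf_wins, amaf_visits, parent_visits,
               b=0.025, c=math.sqrt(2)):
    """Blend the ordinary Monte Carlo value with the AMAF (RAVE) value.

    beta is close to 1 when both visit counts are small (trust the RAVE
    statistics) and close to 0 when they are large (trust the ordinary ones)."""
    denom = visits + amaf_visits + 4.0 * b * b * visits * amaf_visits
    beta = amaf_visits / denom if denom else 1.0
    q = wins / visits if visits else 0.0
    q_amaf = amaf_wins / amaf_visits if amaf_visits else 0.0
    explore = c * math.sqrt(math.log(parent_visits) / visits) if visits else float("inf")
    return (1.0 - beta) * q + beta * q_amaf + explore
```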
Heuristics used in Monte Carlo tree search often require many parameters. There are automated methods to tune the parameters to maximize the win rate.
Monte Carlo tree search can be concurrently executed by many threads or processes. There are several fundamentally different methods of its parallel execution:
Leaf parallelization, i.e. parallel execution of many playouts from one leaf of the game tree.
Root parallelization, i.e. building independent game trees in parallel and making the move basing on the root-level branches of all these trees.
Tree parallelization, i.e. parallel building of the same game tree, protecting data from simultaneous writes either with one global mutex, with several mutexes, or with non-blocking synchronization.
See also
AlphaGo, a Go program using Monte Carlo tree search, reinforcement learning and deep learning.
AlphaGo Zero, an updated Go program using Monte Carlo tree search, reinforcement learning and deep learning.
AlphaZero, a generalized version of AlphaGo Zero using Monte Carlo tree search, reinforcement learning and deep learning.
Leela Chess Zero, a free software implementation of AlphaZero's methods to chess, which is currently among the leading chess playing programs.
References
Bibliography
External links
Beginner's Guide to Monte Carlo Tree Search
Combinatorial game theory
Heuristic algorithms
Monte Carlo methods
Optimal decisions
Computer Go | Monte Carlo tree search | [
"Physics",
"Mathematics"
] | 3,251 | [
"Monte Carlo methods",
"Recreational mathematics",
"Computational physics",
"Combinatorics",
"Game theory",
"Combinatorial game theory"
] |
40,529,144 | https://en.wikipedia.org/wiki/Thalassogen | In astronomy, a thalassogen denotes a substance capable of forming a planetary ocean. Thalassogens are not necessarily life sustaining, although most interest has been in the context of extraterrestrial life.
The term was coined by Isaac Asimov in his essay "The Thalassogens", later published in his 1972 collection The Left Hand of the Electron. Said term was coined via the Ancient Greek prefix thalasso- ("sea") and the suffix -gen ("producer").
Elements making up thalassogens have to be relatively abundant, the substance must be chemically stable in its environment, and must remain liquid under the conditions found on some planets. Freitas gives the following table, noting that the liquid range typically increases with increasing pressure:
The critical temperature and pressure represent the point where the distinction between gas and liquid vanishes, a possible upper limit for life (though life in supercritical fluids has been discussed both in science and fiction, such as in Close to Critical by Hal Clement).
Later authors have also suggested sulfuric acid, ethane, and water/ammonia mixtures as possible thalassogens. The discovery of possible subsurface oceans on moons such as Europa (and, less obviously, Ganymede and Callisto) also extends the range of possible environments.
See also
Extraterrestrial liquid water
Hypothetical types of biochemistry
References
Astrobiology
Astronomical hypotheses
Planetary science | Thalassogen | [
"Astronomy",
"Biology"
] | 293 | [
"Astronomical hypotheses",
"Origin of life",
"Speculative evolution",
"Astrobiology",
"Astronomical controversies",
"Biological hypotheses",
"Planetary science",
"Astronomical sub-disciplines"
] |
40,529,737 | https://en.wikipedia.org/wiki/Bleeding%20%28roads%29 | Bleeding or flushing is a shiny, black surface film of asphalt on the road surface caused by upward movement of asphalt in the pavement surface. Common causes of bleeding are too much asphalt in the asphalt concrete, hot weather, a low air void content, and the quality of the asphalt. Bleeding is a safety concern since it results in a very smooth surface, without the texture required to prevent hydroplaning. Road performance measures such as the International Roughness Index (IRI) cannot capture the existence of bleeding, as it does not increase surface roughness, but other performance measures such as the Pavement Condition Index (PCI) do include bleeding.
See also
Pavement Condition Index
International Roughness Index
Asphalt concrete
Road slipperiness
References
Asphalt
Road construction
Pavements
Pavement engineering
Road infrastructure
Road hazards
Pavement distress | Bleeding (roads) | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 142 | [
"Unsolved problems in physics",
"Road hazards",
"Construction",
"Road construction",
"Chemical mixtures",
"Asphalt",
"Amorphous solids"
] |
40,530,379 | https://en.wikipedia.org/wiki/Cross-serial%20dependencies | In linguistics, cross-serial dependencies (also called crossing dependencies by some authors) occur when the lines representing the dependency relations between two series of words cross over each other. They are of particular interest to linguists who wish to determine the syntactic structure of natural language; languages containing an arbitrary number of them are non-context-free. By this fact, Dutch and Swiss-German have been proven to be non-context-free.
Example
As Swiss-German allows verbs and their arguments to be ordered cross-serially, we have the following example, taken from Shieber:
That is, "we help Hans paint the house."
Notice that the sequential noun phrases em Hans (Hans) and es huus (the house), and the sequential verbs hälfed (help) and aastriiche (paint) both form two separate series of constituents. Notice also that the dative verb hälfed and the accusative verb aastriiche take the dative em Hans and accusative es huus as their arguments, respectively.
Non-context-freeness
Consider the set of all Swiss-German sentences. We will prove mathematically that this set is not context-free.
In Swiss-German sentences, the number of verbs of a grammatical case (dative or accusative) must match the number of objects of that case. Additionally, a sentence containing an arbitrary number of such objects is admissible (in principle). Hence, we can define the following formal language, a subset of : Thus, we have , where is the regular language defined by , where the superscript plus symbol means "one or more copies". Since the set of context-free languages is closed under intersection with regular languages, we need only prove that is not context-free (, pp 130–135).
After a word substitution, is of the form . Since can be mapped to by the following map: , and since the context-free languages are closed under mappings from terminal symbols to terminal strings (that is, a homomorphism) (, pp 130–135), we need only prove that is not context-free.
is a standard example of non-context-free language (, p. 128). This can be shown by Ogden's lemma. Suppose the language is generated by a context-free grammar, then let be the length required in Ogden's lemma, then consider the word in the language, and mark the letters . Then the three conditions implied by Ogden's lemma cannot all be satisfied. All known spoken languages which contain cross-serial dependencies can be similarly proved to be not context-free. This led to the abandonment of Generalized Phrase Structure Grammar once cross-serial dependencies were identified in natural languages in the 1980s.
Treatment
Research in mildly context-sensitive languages has attempted to identify a narrower and more computationally tractable subclass of context-sensitive languages that can capture the context sensitivity found in natural languages. For example, cross-serial dependencies can be expressed in linear context-free rewriting systems (LCFRS); one can write an LCFRS grammar for {a^n b^n c^n d^n | n ≥ 1}, for example.
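For intuition about the language mentioned above, membership in {a^n b^n c^n d^n | n ≥ 1} is easy to decide procedurally even though no context-free grammar generates the language. A minimal membership check in Python (an illustrative sketch only, not an LCFRS grammar or parser):

```python
import re

def is_member(w: str) -> bool:
    """Return True iff w is in {a^n b^n c^n d^n | n >= 1}."""
    m = re.fullmatch(r"(a+)(b+)(c+)(d+)", w)
    if m is None:
        return False
    a, b, c, d = (len(g) for g in m.groups())
    return a == b == c == d

assert is_member("aabbccdd")
assert not is_member("aabbccd")
assert not is_member("abcabc")
```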
References
Formal languages
Syntax | Cross-serial dependencies | [
"Mathematics"
] | 658 | [
"Formal languages",
"Mathematical logic"
] |
40,531,188 | https://en.wikipedia.org/wiki/Karl%20A.%20Grosch | Dr. Karl Alfred Grosch (1923-2012) was a rubber industry scientist noted for his contributions to understanding tire friction and abrasion. Dr. Grosch is the developer of the LAT 100 Abrasion tester that is used widely in the tire industry to evaluate the friction and wear properties of rubber compounds.
Personal
Grosch was born on February 16, 1923, at what is today An der Heide 11, 07387 Krölpa/Trannroda, Thuringia, Germany, and died on July 15, 2012.
Grosch served in the German military during World War II, and was captured by the British as a prisoner of war.
Education
He received a B.S. (Special Physics) from the University of London in 1958, and a Ph.D. (Science) from the school in 1963 under the supervision of David Tabor of Cambridge and L.R.G. Treloar of Manchester University with the title Friction and abrasion of rubber.
Career
In 1955, Grosch started his career as a research assistant at the MRPRA, under Adolf Schallamach. Grosch helped establish that rolling friction and grip on dry roads are governed by the viscoelastic properties of rubber. In 1963, he was named principal scientific officer.
In 1969, Grosch joined Uniroyal in Germany, working there until his retirement in 1988.
After retirement, he developed the LAT 100 laboratory friction and abrasion tester, which is marketed by VMI Holland BV in the Netherlands.
Awards
Grosch received the 2007 Charles Goodyear Medal of the Rubber Division of the American Chemical Society, and the 1997 Colwyn medal from the Institute of Materials.
External links
2007 Interview with Karl Grosch
References
1923 births
2012 deaths
Polymer scientists and engineers
Scientists from Thuringia
Tire industry people | Karl A. Grosch | [
"Chemistry",
"Materials_science"
] | 376 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
40,532,409 | https://en.wikipedia.org/wiki/National%20Graphene%20Institute | The National Graphene Institute is a research institute and building at the University of Manchester, England, that is focused on the research of graphene. Construction of the building to house the institute started in 2013 and finished in 2015.
Institute
The creation of the institute, including the construction of the building, cost £61 million. Funded by the UK Government (£38 million) and the European Union's European Regional Development Fund (£23 million), the building is the national centre for graphene research in the UK. It provides facilities for industry and university academics to collaborate on graphene applications and the commercialisation of graphene. The building was opened on 20 March 2015 by the Chancellor of the Exchequer George Osborne.
Building
The five-storey glass-fronted building provides of research space. This includes of class 100 and class 1000 clean rooms, one of which occupies the entire lower ground floor (in order to minimise vibrations) plus laser, optical, metrology and chemical laboratories, along with offices, a seminar room and accommodation. The top floor also includes a roof terrace, which has 21 different grasses and wildflowers designed to attract urban bees and other species of pollinators. The outside of the building consists of a composite cladding, with an external stainless steel 'veil'. The building faces on to Booth Street East. Construction started in March 2013, with the building being completed in 2015.
The building was designed by Jestico + Whiles in close collaboration with a team of academics led by Prof Sir Konstantin Novoselov. It cost around £30 million, and was constructed by BAM Nuttall. The structural design was produced by Ramboll. Other shortlisted organisations were: Lendlease, Balfour Beatty (M&E Installation), Morgan Sindall, Vinci, and M&W Group. The design work was led by EC Harris, along with CH2M, who provided specialist technical architecture design services for the cleanrooms and laboratories, together with Mechanical, Electrical and Process (MEP) consultant services.
History of the location
The institute was constructed on the former site of the Albert Club, which was a Victorian club that was located between Lawson Street and Clifford Street. The club was established for the middle class German community that were involved in Manchester's cotton trade, and Friedrich Engels frequented it during his time in the city, becoming a member in 1842. The club was located on Clifford Street from 1842 prior to its relocation in 1859. The building was constructed by the architect Jeptha Pacey as his personal house, and it was fronted by formal gardens. It was later converted into a private social club, which was named after Albert, Prince Consort.
In 1859 it was rebuilt by William Potter as one of England's first Victorian Turkish baths, remaining open until 1868 when it was used as an extension to the Manchester Southern Hospital for Women and Children. The building was demolished in the 1960s, and the site was used for the construction of the Lamb Building.
The excavations that took place in February 2013 by Oxford Archaeology North, prior to the construction of the institute, uncovered the remnants of the club building along with a row of five cellars belonging to 1830s terraced housing. A sink removed from the site has been incorporated into the institute's new building. The stone sink was plumbed in with both a cold feed supply and waste pipework by A. Armstrong of Balfour Beatty, who completed both the mechanical and electrical installations throughout the new build of the NGI. As the main clean room of the new building is located below ground level, the remains of the Albert Club were not conserved.
Gallery
References
External links
The University of Manchester's webpage on the National Graphene Institute
Buildings at the University of Manchester
Graphene
Organisations based in Manchester
Nanotechnology institutions | National Graphene Institute | [
"Materials_science"
] | 760 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
40,533,059 | https://en.wikipedia.org/wiki/Chemical%20chaperone | Chemical chaperones are a class of small molecules that function to enhance the folding and/or stability of proteins. Chemical chaperones are a broad and diverse group of molecules, and they can influence protein stability and polypeptide organization through a variety of mechanisms. Chemical chaperones are used for a range of applications, from production of recombinant proteins to treatment of protein misfolding in vivo.
Classes of chemical chaperones
There are many different small molecules that can function to enhance protein stability and folding; many of them can be broadly grouped into large classes based both on their structure and their proposed mechanism of action. The parameters that define these groups are not strictly defined, and many small molecules that exert a chemical chaperoning effect do not readily fall into one of these categories. For example, the free amino acid arginine is not classically defined as a chemical chaperone, but it has a well-documented anti-aggregation effect.
Osmolytes
Cellular osmolytes are small polar molecules that are synthesized or taken up by cells to maintain the integrity of cellular components during periods of osmotic or other forms of stress. Osmolytes are diverse in chemical structure, and include polyols, sugars, methylamines, and free amino acids and their derivatives. Examples of these include glycerol, trehalose, trimethylamine n-oxide (TMAO), and glycine. Despite being most active at relatively high concentrations, osmolytes do not display any effects on normal cellular processes – for this reason, they are also commonly referred to as “compatible solutes”. Osmolytes exert their chaperoning effects indirectly by changing the interaction of the protein with solvent, rather than through any direct interaction with the protein. Unfavorable interactions between proteins and osmolytes increase the solvation of the protein with water. This increased hydration favors more compact polypeptide conformations, in which hydrophobic residues are more tightly sequestered from polar solvent. Thus, osmolytes are thought to work by structuring partially folded intermediates and thermodynamically stabilizing folded conformations to a greater extent than unfolded conformations.
Hydrophobic compounds
Chemical compounds that have varying degrees of hydrophobicity but remain soluble in aqueous environments can act as chemical chaperones as well. These compounds are thought to act by binding to solvent-exposed hydrophobic segments of unfolded or improperly folded proteins, thereby “protecting” them from aggregation. 4-phenylbutyrate (PBA) is a prominent example of this group of compounds, along with lysophosphatidic acids and other lipids and detergents.
Pharmacological chaperones
Another class of chaperones is composed of protein ligands, cofactors, competitive inhibitors, and other small molecules that bind specifically to certain proteins. Because these molecules are active only on a specific protein, they are referred to as pharmacological chaperones. These molecules can induce stability in a specific region of a protein through favorable binding interactions, which reduce the inherent conformational flexibility of the polypeptide chain. Another important property of pharmacological chaperones is that they are able to bind to the unfolded or improperly folded protein, and then dissociate once the protein is properly folded, leaving a functional protein.
Applications
Recombinant protein expression
Besides clinical applications, chemical chaperones have proved useful in the in vitro production of recombinant proteins.
Re-folding of insoluble proteins from inclusion bodies
Recombinant expression of protein in Escherichia coli often results in the formation of insoluble protein aggregates called inclusion bodies. These protein bodies require refolding in vitro once extracted from E. coli cells by strong detergent. Proteins are thought to unfold during the solubilization process, and subsequent removal of detergent by dilution or dialysis allows their refolding.
Both folding enhancers and aggregation suppressors are often employed during the removal of denaturant to facilitate folding to the native structure and to prevent aggregation. Folding enhancers help the protein assume its native structure quickly when the detergent concentration is drastically decreased all at once, as in the dilution process. On the other hand, aggregation suppressors prevent protein folding intermediates from aggregating even after long exposure to an intermediate level of detergent, as occurs in dialysis. For example, it has been reported that taurine significantly increases the yield of in vitro refolding for Fab fragment antibodies.
Periplasmic expression
The discovery of chemical chaperones' effect on protein folding led to periplasmic protein expression, especially for proteins that require an oxidative environment to form disulfide bonds for proper folding. Folding of proteins that fold poorly in the cytoplasm can be enhanced in the periplasm, where the osmotic pressure can be readily controlled. The osmotic pressure of the periplasmic space can be altered simply by changing that of the medium, as osmolytes freely penetrate the outer membrane. A protein is secreted to this space when an appropriate signal sequence is attached to its terminus. A good example of folding enhancement by periplasmic expression is the disulfide bond-containing plasminogen activator variant (rPA). Folding of rPA has been shown to increase when folding enhancers or arginine are added to the culture medium.
Use of halophiles in protein production
Halophiles are a class of extremophiles that have adapted to highly saline environments. Halophiles are classified into two categories: 1) extremely halophilic archaea, and 2) moderately halophilic bacteria. The extremely halophilic archaea have adapted to require high salt concentrations (2.5 M) in their environment by incorporating high salt concentrations into the cell. On the other hand, the moderately halophilic bacteria can live across a wide range of salt concentrations by synthesizing or taking up organic compounds. Many halophilic bacteria and archaea are easy to maintain, and their high cellular osmotic pressure has been exploited in recombinant protein production. The cellular environment of halophiles can be fine-tuned to accommodate folding of the protein of interest by adjusting the concentration of osmolytes in the culture medium. Successful expression and folding of ice nucleation protein, GFP, α-amylase, nucleotide diphosphate kinase, and serine racemase have been reported in halophiles.
Protein folding diseases
Since chemical chaperones promote the conservation of the native structure of proteins, the possibilities of developing chemical chaperones for clinical applications have been explored for various protein folding diseases.
Cystic fibrosis
Cystic fibrosis (CF) is a disease resulting from a failure to maintain the level of cystic fibrosis transmembrane conductance regulator (CFTR), which functions as a chloride channel in pulmonary tissues. The ΔF508 point mutation in the CFTR protein, which interferes with maturation of the protein, has been found in a number of CF patients. The mutant CFTR mostly fails to traffic to the cell membrane and is degraded in the ER; however, the molecules that successfully make it to the cell membrane are fully functional. As a result, a number of chemical chaperones have been shown to promote the trafficking of ΔF508 CFTR to the plasma membrane.
Transthyretin Amyloidoses
Partially denatured transthyretin (TTR) can promote the formation of amyloid fibrils in cells, and this aggregation can lead to cellular toxicity and a variety of human disease pathologies. Many small molecule inhibitors of TTR amyloid formation have been discovered that act by kinetically stabilizing the TTR tetramer. This prevents monomer misfolding events by disfavoring the dissociation of the TTR tetramer. Tafamidis is one such small molecule that has been approved by several international regulatory agencies for the treatment of Transthyretin Familial Amyloid Polyneuropathy.
See also
Chaperone (protein), proteins that perform the same function
References
Protein biosynthesis
Protein folding
Molecular chaperones | Chemical chaperone | [
"Chemistry"
] | 1,714 | [
"Protein biosynthesis",
"Gene expression",
"Biosynthesis"
] |
40,533,764 | https://en.wikipedia.org/wiki/Space%20elevator%20competitions | A space elevator is a theoretical system using a super-strong ribbon going from the surface of the Earth to a point beyond Geosynchronous orbit. The center of gravity of the ribbon would be exactly in geosynchronous orbit, so that the ribbon would always stay above the anchor point. Vehicles would climb the ribbon powered by a beam of energy projected from the surface of the Earth. Building a space elevator requires materials and techniques that do not currently exist. A variety of Space Elevator competitions have been held in order to stimulate the development of such materials and techniques.
Space elevators were first conceived in 1895, but until the discovery of carbon nanotubes, no technology was envisioned that could make them possible. Building an actual elevator is still out of reach, but the directions for research are clear. This makes the area ripe for incentive prizes like the X Prize, and prizes and competitions have been set up since 2005 to encourage the development of relevant technologies. There are two main areas of research remaining, and these are where the competitions focus: building cables ("a Tether challenge"), and climbing and descending cables ("a Power Beam challenge").
In a Power Beam Challenge, each team designs and builds a climber (a machine capable of traveling up and down a tether ribbon). In a Tether challenge, each team attempts to build the longest and strongest cable. In the Power Beam challenge, climbers carry a payload. Power is beamed from a transmitter to a receiver on the climber. With each competition, the tethers reach higher altitudes, and the climbers are expected to climb further. Each competition can have minimum lengths and maximum weight per meter for cables, and minimum speed and distance goals for climbers.
Space elevator challenge results
Like many competitions modeled after the X prize, competitors have to meet a minimum baseline, and then prizes are awarded to the best entry that exceed that target. In 2005, there was only a climbing challenge, and none of the entrants met the minimum speed requirement of 1 m/s. Starting in 2006, Elevator:2010, sponsored by spaceward.org and NASA conducted a series of competitions. For 2006, the prize was increased, and the speed requirement dropped slightly to 50 meters in under a minute. 13 teams entered, and one was able to climb the 50 meters in 58 seconds. In 2009 at Edwards Air Force Base, the challenge was climbing a 900 m tether, and one entry managed the feat several times, with a top speed of 3.5 m/s. NASA didn't renew their sponsorship after 2009, pending "further advancements in material science".
The International Space Elevator Consortium was formed in 2008 and has held annual conferences. It announced a $10,000 Strong Tether Challenge competition for 2013; the challenge was canceled for lack of competitors. The 2011, 2012, and 2013 ISEC conferences also featured FIRST-style high school robotics competitions for climbers, as well as occasional other competitions.
The Japan Space Elevator Association held a climbing competition in August 2013. Hot air balloons were used to hoist a tether, and Team Okusawa's entry succeeded in climbing to 1100 meters, and a team from Nihon University reached 1200 meters. (The sources are in Japanese.)
The Japan Space Elevator Association held a climbing competition in August 2014. Hot air balloons were used to hoist both rope (11 mm) and ribbon (35 mm x 2 mm) to 200 m and 1200 m. Team Okusawa climbed to 1200 m and descended twice. Kanagawa University carried a 100 kg payload to 123 m on the 200 m ribbon. Kanagawa University's three teams climbed respectively to 1200 m (rope), 1150 m (rope) and 1100 m (ribbon). Munich University of Technology reached 1000 m (rope).
References
External links
The Space Elevator Reference
LaserMotive
KC Space Pirates web site
2009 Space Elevator Games Results
2005 Space Elevator Games Results
How close is the Space Elevator?
Space Elevator Feasibility
Tech Video coverage of the Space Elevator Competition in Israel
Lighthouse DEV: Spinoff of the NSS Space Elevator Team
Competitions
Challenge awards
Robotics competitions | Space elevator competitions | [
"Astronomy",
"Technology"
] | 836 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Space elevator"
] |
43,430,218 | https://en.wikipedia.org/wiki/Yeast%20mitochondrial%20code | The yeast mitochondrial code (translation table 3) is a genetic code used by the mitochondrial genome of yeasts, notably Saccharomyces cerevisiae, Candida glabrata, Hansenula saturnus, and Kluyveromyces thermotolerans.
The code
AAs = FFLLSSSSYY**CCWWTTTTPPPPHHQQRRRRIIMMTTTTNNKKSSRRVVVVAAAADDEEGGGG
Starts = ---M---------------M---------------M---------------M------------
Base1 = TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG
Base2 = TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG
Base3 = TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG
Bases: adenine (A), cytosine (C), guanine (G) and thymine (T) or uracil (U).
Amino acids: Alanine (Ala, A), Arginine (Arg, R), Asparagine (Asn, N), Aspartic acid (Asp, D), Cysteine (Cys, C), Glutamic acid (Glu, E), Glutamine (Gln, Q), Glycine (Gly, G), Histidine (His, H), Isoleucine (Ile, I), Leucine (Leu, L), Lysine (Lys, K), Methionine (Met, M), Phenylalanine (Phe, F), Proline (Pro, P), Serine (Ser, S), Threonine (Thr, T), Tryptophan (Trp, W), Tyrosine (Tyr, Y), Valine (Val, V).
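The five aligned strings above completely specify the table. As an illustration, the short Python sketch below builds a codon-to-amino-acid lookup and the set of start codons directly from those strings; the assertions at the end check two assignments that differ from the standard genetic code:

```python
# Build a codon -> amino-acid lookup (translation table 3) from the strings above.
AAS    = "FFLLSSSSYY**CCWWTTTTPPPPHHQQRRRRIIMMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
STARTS = "---M---------------M---------------M---------------M------------"
BASE1  = "TTTTTTTTTTTTTTTTCCCCCCCCCCCCCCCCAAAAAAAAAAAAAAAAGGGGGGGGGGGGGGGG"
BASE2  = "TTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGGTTTTCCCCAAAAGGGG"
BASE3  = "TCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAGTCAG"

codon_table = {}   # codon -> one-letter amino acid; '*' marks a stop codon
for i, aa in enumerate(AAS):
    codon_table[BASE1[i] + BASE2[i] + BASE3[i]] = aa

start_codons = {BASE1[i] + BASE2[i] + BASE3[i]
                for i, s in enumerate(STARTS) if s == "M"}

assert codon_table["CTT"] == "T"   # CUN family codes threonine (leucine in the standard code)
assert codon_table["ATA"] == "M"   # AUA codes methionine (isoleucine in the standard code)
print(sorted(start_codons))
```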
Differences from the standard code
The remaining CGN codons are rare in Saccharomyces cerevisiae and absent in Candida glabrata.
The AUA codon is common in the gene var1 coding for the single mitochondrial ribosomal protein, but rare in genes encoding the enzymes.
The coding assignments of the AUA (Met or Ile) and CUU (possibly Leu, not Thr) are uncertain in Hansenula saturnus.
The coding assignment of Thr to CUN is uncertain in Kluyveromyces thermotolerans.
See also
List of genetic codes
References
Molecular genetics
Gene expression
Protein biosynthesis | Yeast mitochondrial code | [
"Chemistry",
"Biology"
] | 642 | [
"Protein biosynthesis",
"Gene expression",
"Molecular genetics",
"Biosynthesis",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
43,431,012 | https://en.wikipedia.org/wiki/Pressuron | The pressuron is a hypothetical scalar particle which couples to both gravity and matter theorised in 2013. Although originally postulated without self-interaction potential, the pressuron is also a dark energy candidate when it has such a potential. The pressuron takes its name from the fact that it decouples from matter in pressure-less regimes, allowing the scalar–tensor theory of gravity involving it to pass solar system tests, as well as tests on the equivalence principle, even though it is fundamentally coupled to matter. Such a decoupling mechanism could explain why gravitation seems to be well described by general relativity at present epoch, while it could actually be more complex than that. Because of the way it couples to matter, the pressuron is a special case of the hypothetical string dilaton. Therefore, it is one of the possible solutions to the present non-observation of various signals coming from massless or light scalar fields that are generically predicted in string theory.
Mathematical formulation
The action of the scalar–tensor theory that involves the pressuron can be written as
where R is the Ricci scalar constructed from the metric g_{μν}, g is the metric determinant, κ is a constant built from the gravitational constant G and the velocity of light in vacuum c, V(Φ) is the pressuron potential, L_m is the matter Lagrangian, and Ψ represents the non-gravitational fields. The gravitational field equations then follow from varying this action with respect to the metric and to the scalar field; in them, T_{μν} denotes the stress–energy tensor of the matter field and T its trace.
Decoupling mechanism
If one considers a pressure-free perfect fluid (also known as a dust solution), the effective material Lagrangian becomes a sum over the particles, involving the mass m_i of the i-th particle, its position, and the Dirac delta function, while at the same time the trace of the stress–energy tensor reduces to the same expression. Thus, there is an exact cancellation of the pressuron material source term, and hence the pressuron effectively decouples from pressure-free matter fields.
In other words, the specific coupling between the scalar field and the material fields in the Lagrangian leads to a decoupling between the scalar field and the matter fields in the limit that the matter field is exerting zero pressure.
Link to string theory
The pressuron shares some characteristics with the hypothetical string dilaton, and can actually be viewed as a special case of the wider family of possible dilatons. Since perturbative string theory cannot currently give the expected coupling of the string dilaton with material fields in the effective 4-dimension action, it seems conceivable that the pressuron may be the string dilaton in the 4-dimension effective action.
Experimental search
Solar System
According to Minazzoli and Hees, post-Newtonian tests of gravitation in the Solar System should lead to the same results as what is expected from general relativity, except for gravitational redshift experiments, which should deviate from general relativity with a relative magnitude of the order of , where is the current cosmological value of the scalar-field function , and and are respectively the mean pressure and density of the Earth (for instance). Current best constraints on the gravitational redshift come from gravity probe A and are at the level only. Therefore, the scalar–tensor theory that involves the pressuron is weakly constrained by Solar System experiments.
Cosmological variation of the fundamental coupling constants
Because of its non-minimal couplings, the pressuron leads to a variation of the fundamental coupling constants in regimes where it effectively couples to matter. However, since the pressuron decouples in both the matter-dominated era (which is essentially driven by pressure-less material fields) and the dark-energy-dominated era (which is essentially driven by dark energy), the pressuron is also weakly constrained by current cosmological tests on the variation of the coupling constants.
Test with binary pulsars
Although no calculations seem to have been performed regarding this issue, it has been argued that binary pulsars should give greater constraints on the existence of the pressuron because of the high pressure of bodies involved in such systems.
See also
List of hypothetical particles
References
Bosons
Gravity
Hypothetical elementary particles
Physical cosmology
Physics beyond the Standard Model
String theory
Subatomic particles with spin 0
Dark energy
Force carriers | Pressuron | [
"Physics",
"Astronomy"
] | 869 | [
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Fundamental interactions",
"Dark energy",
"Hypothetical elementary particles",
"Physics beyond the Standard Model",
"Astronomical sub-disciplines",
"Astronomical hypotheses",
"Concepts in astronomy",
"Energy (physics)"... |
33,710,707 | https://en.wikipedia.org/wiki/Planck%20units | In particle physics and physical cosmology, Planck units are a system of units of measurement defined exclusively in terms of four universal physical constants: c, G, ħ, and kB (described further below). Expressing one of these physical constants in terms of Planck units yields a numerical value of 1. They are a system of natural units, defined using fundamental properties of nature (specifically, properties of free space) rather than properties of a chosen prototype object. Originally proposed in 1899 by German physicist Max Planck, they are relevant in research on unified theories such as quantum gravity.
The term Planck scale refers to quantities of space, time, energy and other units that are similar in magnitude to corresponding Planck units. This region may be characterized by particle energies of around 10¹⁹ GeV or 10⁹ J, time intervals of around 10⁻⁴³ s and lengths of around 10⁻³⁵ m (approximately the energy-equivalent of the Planck mass, the Planck time and the Planck length, respectively). At the Planck scale, the predictions of the Standard Model, quantum field theory and general relativity are not expected to apply, and quantum effects of gravity are expected to dominate. One example is represented by the conditions in the first 10⁻⁴³ seconds of our universe after the Big Bang, approximately 13.8 billion years ago.
The four universal constants that, by definition, have a numeric value 1 when expressed in these units are:
c, the speed of light in vacuum,
G, the gravitational constant,
ħ, the reduced Planck constant, and
kB, the Boltzmann constant.
Variants of the basic idea of Planck units exist, such as alternate choices of normalization that give other numeric values to one or more of the four constants above.
Introduction
Any system of measurement may be assigned a mutually independent set of base quantities and associated base units, from which all other quantities and units may be derived. In the International System of Units, for example, the SI base quantities include length with the associated unit of the metre. In the system of Planck units, a similar set of base quantities and associated units may be selected, in terms of which other quantities and coherent units may be expressed. The Planck unit of length has become known as the Planck length, and the Planck unit of time is known as the Planck time, but this nomenclature has not been established as extending to all quantities.
All Planck units are derived from the dimensional universal physical constants that define the system, and in a convention in which these units are omitted (i.e. treated as having the dimensionless value 1), these constants are then eliminated from equations of physics in which they appear. For example, Newton's law of universal gravitation,
F = G m₁ m₂ / r²,
can be expressed as:
F / F_P = (m₁ / m_P)(m₂ / m_P) / (r / l_P)².
Both equations are dimensionally consistent and equally valid in any system of quantities, but the second equation, with G absent, is relating only dimensionless quantities since any ratio of two like-dimensioned quantities is a dimensionless quantity. If, by a shorthand convention, it is understood that each physical quantity is the corresponding ratio with a coherent Planck unit (or "expressed in Planck units"), the ratios above may be expressed simply with the symbols of physical quantity, without being scaled explicitly by their corresponding unit:
This last equation (without G) is valid with F, m₁, m₂, and r being the dimensionless ratio quantities corresponding to the standard quantities, written e.g. F in place of F/F_P, but not as a direct equality of quantities. This may seem to be "setting the constants c, G, etc., to 1" if the correspondence of the quantities is thought of as equality. For this reason, Planck or other natural units should be employed with care. Referring to "G = c = 1", Paul S. Wesson wrote that, "Mathematically it is an acceptable trick which saves labour. Physically it represents a loss of information and can lead to confusion."
History and definition
The concept of natural units was introduced in 1874, when George Johnstone Stoney, noting that electric charge is quantized, derived units of length, time, and mass, now named Stoney units in his honor. Stoney chose his units so that G, c, and the electron charge e would be numerically equal to 1. In 1899, one year before the advent of quantum theory, Max Planck introduced what became later known as the Planck constant. At the end of the paper, he proposed the base units that were later named in his honor. The Planck units are based on the quantum of action, now usually known as the Planck constant, which appeared in the Wien approximation for black-body radiation. Planck underlined the universality of the new unit system, writing:
Planck considered only the units based on the universal constants G, c, h, and kB to arrive at natural units for length, time, mass, and temperature. His definitions differ from the modern ones by a factor of √(2π), because the modern definitions use ħ rather than h.
Unlike the case with the International System of Units, there is no official entity that establishes a definition of a Planck unit system. Some authors define the base Planck units to be those of mass, length and time, regarding an additional unit for temperature to be redundant. Other tabulations add, in addition to a unit for temperature, a unit for electric charge, so that either the Coulomb constant or the vacuum permittivity is normalized to 1. Thus, depending on the author's choice, this charge unit is given by
q_P = √(4πε₀ħc) ≈ 1.876 × 10⁻¹⁸ C for 1/(4πε₀) = 1, or
q_P = √(ε₀ħc) ≈ 5.29 × 10⁻¹⁹ C for ε₀ = 1. Some of these tabulations also replace mass with energy when doing so.
In SI units, the values of c, h, e and kB are exact, and the values of ε0 and G in SI units respectively have relative uncertainties of about 1.5 × 10⁻¹⁰ and 2.2 × 10⁻⁵. Hence, the uncertainties in the SI values of the Planck units derive almost entirely from uncertainty in the SI value of G.
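As a numerical illustration, the base Planck units can be computed directly from the defining constants. The sketch below uses the CODATA 2018 recommended values and the standard defining formulas; it is illustrative only:

```python
import math

# Defining constants in SI units (CODATA 2018). c, h (hence hbar) and k_B are exact
# in the SI; G carries a relative uncertainty of roughly 2e-5.
c    = 299_792_458.0        # speed of light in vacuum, m/s
hbar = 1.054_571_817e-34    # reduced Planck constant, J*s
G    = 6.674_30e-11         # gravitational constant, m^3 kg^-1 s^-2
kB   = 1.380_649e-23        # Boltzmann constant, J/K

l_P = math.sqrt(hbar * G / c**3)       # Planck length, m
t_P = math.sqrt(hbar * G / c**5)       # Planck time, s
m_P = math.sqrt(hbar * c / G)          # Planck mass, kg
E_P = m_P * c**2                       # Planck energy, J
T_P = E_P / kB                         # Planck temperature, K
F_P = c**4 / G                         # Planck force, N

for name, value, unit in [("length", l_P, "m"), ("time", t_P, "s"),
                          ("mass", m_P, "kg"), ("energy", E_P, "J"),
                          ("temperature", T_P, "K"), ("force", F_P, "N")]:
    print(f"Planck {name:<11}: {value:.4e} {unit}")
```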
Compared to Stoney units, Planck base units are all larger by a factor of 1/√α, where α is the fine-structure constant.
Derived units
In any system of measurement, units for many physical quantities can be derived from base units. Table 2 offers a sample of derived Planck units, some of which are seldom used. As with the base units, their use is mostly confined to theoretical physics because most of them are too large or too small for empirical or practical use and there are large uncertainties in their values.
Some Planck units, such as of time and length, are many orders of magnitude too large or too small to be of practical use, so that Planck units as a system are typically only relevant to theoretical physics. In some cases, a Planck unit may suggest a limit to a range of a physical quantity where present-day theories of physics apply. For example, our understanding of the Big Bang does not extend to the Planck epoch, i.e., when the universe was less than one Planck time old. Describing the universe during the Planck epoch requires a theory of quantum gravity that would incorporate quantum effects into general relativity. Such a theory does not yet exist.
Several quantities are not "extreme" in magnitude, such as the Planck mass, which is about 22 micrograms: very large in comparison with subatomic particles, and within the mass range of living organisms. Similarly, the related units of energy and of momentum are in the range of some everyday phenomena.
Significance
Planck units have little anthropocentric arbitrariness, but do still involve some arbitrary choices in terms of the defining constants. Unlike the metre and second, which exist as base units in the SI system for historical reasons, the Planck length and Planck time are conceptually linked at a fundamental physical level. Consequently, natural units help physicists to reframe questions. Frank Wilczek puts it succinctly:
While it is true that the electrostatic repulsive force between two protons (alone in free space) greatly exceeds the gravitational attractive force between the same two protons, this is not about the relative strengths of the two fundamental forces. From the point of view of Planck units, this is comparing apples with oranges, because mass and electric charge are incommensurable quantities. Rather, the disparity of magnitude of force is a manifestation of the fact that the proton charge is approximately the unit charge but the proton mass is far less than the unit mass in a system that treats both forces as having the same form.
When Planck proposed his units, the goal was only that of establishing a universal ("natural") way of measuring objects, without giving any special meaning to quantities that measured one single unit. In 1918, Arthur Eddington suggested that the Planck length could have a special significance for understanding gravitation, but this suggestion was not influential. During the 1950s, multiple authors including Lev Landau and Oskar Klein argued that quantities on the order of the Planck scale indicated the limits of the validity of quantum field theory. John Archibald Wheeler proposed in 1955 that quantum fluctuations of spacetime become significant at the Planck scale, though at the time he was unaware of Planck's unit system. In 1959, C. A. Mead showed that distances of the order of one Planck length, or, similarly, times of the order of the Planck time, carried special implications related to Heisenberg's uncertainty principle.
Planck scale
In particle physics and physical cosmology, the Planck scale is an energy scale around 1.22 × 10¹⁹ GeV (the Planck energy, corresponding to the energy equivalent of the Planck mass, about 2.18 × 10⁻⁸ kg) at which quantum effects of gravity become significant. At this scale, present descriptions and theories of sub-atomic particle interactions in terms of quantum field theory break down and become inadequate, due to the impact of the apparent non-renormalizability of gravity within current theories.
Relationship to gravity
At the Planck length scale, the strength of gravity is expected to become comparable with the other forces, and it has been theorized that all the fundamental forces are unified at that scale, but the exact mechanism of this unification remains unknown. The Planck scale is therefore the point at which the effects of quantum gravity can no longer be ignored in other fundamental interactions, where current calculations and approaches begin to break down, and a means to take account of its impact is necessary. On these grounds, it has been speculated that it may be an approximate lower limit at which a black hole could be formed by collapse.
While physicists have a fairly good understanding of the other fundamental interactions of forces on the quantum level, gravity is problematic, and cannot be integrated with quantum mechanics at very high energies using the usual framework of quantum field theory. At lesser energy levels it is usually ignored, while for energies approaching or exceeding the Planck scale, a new theory of quantum gravity is necessary. Approaches to this problem include string theory and M-theory, loop quantum gravity, noncommutative geometry, and causal set theory.
In cosmology
In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, tP, or approximately 10⁻⁴³ seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Immeasurably hot and dense, the state of the Planck epoch was succeeded by the grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the inflationary epoch, which ended after about 10⁻³² seconds (or about 10¹¹ tP).
Table 3 lists properties of the observable universe today expressed in Planck units.
After the measurement of the cosmological constant (Λ) in 1998, estimated at 10⁻¹²² in Planck units, it was noted that this is suggestively close to the reciprocal of the age of the universe (T) squared. Barrow and Shaw proposed a modified theory in which Λ is a field evolving in such a way that its value remains of order T⁻² throughout the history of the universe.
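This closeness can be checked with a few lines of arithmetic, using the approximate age of the universe and Planck time quoted in this article (an illustrative sketch only):

```python
t_P = 5.39e-44                      # Planck time, s (approximate)
age = 13.8e9 * 365.25 * 24 * 3600   # age of the universe, s (about 13.8 billion years)
T = age / t_P                       # age of the universe in Planck times
print(f"T ~ {T:.1e} t_P   ->   1/T^2 ~ {1 / T**2:.1e}")   # ~1e-122, comparable to Lambda
```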
Analysis of the units
Planck length
The Planck length, denoted l_P, is a unit of length defined as l_P = √(ħG/c³). It is equal to 1.616255(18) × 10⁻³⁵ m (the two digits enclosed by parentheses are the estimated standard error associated with the reported numerical value) or about 10⁻²⁰ times the diameter of a proton. It can be motivated in various ways, such as considering a particle whose reduced Compton wavelength is comparable to its Schwarzschild radius, though whether those concepts are in fact simultaneously applicable is open to debate. (The same heuristic argument simultaneously motivates the Planck mass.)
The Planck length is a distance scale of interest in speculations about quantum gravity. The Bekenstein–Hawking entropy of a black hole is one-fourth the area of its event horizon in units of Planck length squared. Since the 1950s, it has been conjectured that quantum fluctuations of the spacetime metric might make the familiar notion of distance inapplicable below the Planck length. This is sometimes expressed by saying that "spacetime becomes a foam at the Planck scale". It is possible that the Planck length is the shortest physically measurable distance, since any attempt to investigate the possible existence of shorter distances, by performing higher-energy collisions, would result in black hole production. Higher-energy collisions, rather than splitting matter into finer pieces, would simply produce bigger black holes.
The strings of string theory are modeled to be on the order of the Planck length. In theories with large extra dimensions, the Planck length calculated from the observed value of can be smaller than the true, fundamental Planck length.
Planck time
The Planck time, denoted t_P, is defined as t_P = √(ħG/c⁵) = l_P/c. This is the time required for light to travel a distance of 1 Planck length in vacuum, which is a time interval of approximately 5.39 × 10⁻⁴⁴ s. No current physical theory can describe timescales shorter than the Planck time, such as the earliest events after the Big Bang. Some conjectures state that the structure of time need not remain smooth on intervals comparable to the Planck time.
Planck energy
The Planck energy EP is approximately equal to the energy released in the combustion of the fuel in an automobile fuel tank (57.2 L at 34.2 MJ/L of chemical energy). The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 J, equivalent to about 2.5 × 10⁻⁸ EP.
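A quick arithmetic check of these comparisons (an illustrative sketch; EP ≈ 1.956 × 10⁹ J is the standard approximate value of the Planck energy):

```python
E_P = 1.956e9           # Planck energy, J (approximate)
tank = 57.2 * 34.2e6    # 57.2 L of fuel at 34.2 MJ/L, in joules
cosmic_ray = 50.0       # energy of the 1991 ultra-high-energy cosmic ray, J

print(f"fuel tank : {tank:.3e} J = {tank / E_P:.2f} E_P")
print(f"cosmic ray: {cosmic_ray:.1f} J = {cosmic_ray / E_P:.1e} E_P")
```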
Proposals for theories of doubly special relativity posit that, in addition to the speed of light, an energy scale is also invariant for all inertial observers. Typically, this energy scale is chosen to be the Planck energy.
Planck unit of force
The Planck unit of force may be thought of as the derived unit of force in the Planck system if the Planck units of time, length, and mass are considered to be base units: F_P = m_P c / t_P = c⁴/G ≈ 1.21 × 10⁴⁴ N. It is the gravitational attractive force of two bodies of 1 Planck mass each that are held 1 Planck length apart. One convention for the Planck charge is to choose it so that the electrostatic repulsion of two objects with Planck charge and mass that are held 1 Planck length apart balances the Newtonian attraction between them.
Some authors have argued that the Planck force is on the order of the maximum force that can occur between two bodies. However, the validity of these conjectures has been disputed.
Planck temperature
The Planck temperature TP is TP = √(ħc⁵/G)/kB = m_P c²/kB ≈ 1.42 × 10³² K. At this temperature, the wavelength of light emitted by thermal radiation reaches the Planck length. There are no known physical models able to describe temperatures greater than TP; a quantum theory of gravity would be required to model the extreme energies attained. Hypothetically, a system in thermal equilibrium at the Planck temperature might contain Planck-scale black holes, constantly being formed from thermal radiation and decaying via Hawking evaporation. Adding energy to such a system might decrease its temperature by creating larger black holes, whose Hawking temperature is lower.
Nondimensionalized equations
Physical quantities that have different dimensions (such as time and length) cannot be equated even if they are numerically equal (e.g., 1 second is not the same as 1 metre). In theoretical physics, however, this scruple may be set aside, by a process called nondimensionalization. The effective result is that many fundamental equations of physics, which often include some of the constants used to define Planck units, become equations where these constants are replaced by a 1.
Examples include the energy–momentum relation E² = (pc)² + (mc²)² (which becomes E² = p² + m²) and the Dirac equation (iħγ^μ ∂_μ − mc)ψ = 0 (which becomes (iγ^μ ∂_μ − m)ψ = 0).
Alternative choices of normalization
As already stated above, Planck units are derived by "normalizing" the numerical values of certain fundamental constants to 1. These normalizations are neither the only ones possible nor necessarily the best. Moreover, the choice of what factors to normalize, among the factors appearing in the fundamental equations of physics, is not evident, and the values of the Planck units are sensitive to this choice.
The factor 4π is ubiquitous in theoretical physics because in three-dimensional space, the surface area of a sphere of radius r is 4πr². This, along with the concept of flux, is the basis for the inverse-square law, Gauss's law, and the divergence operator applied to flux density. For example, gravitational and electrostatic fields produced by point objects have spherical symmetry, and so the electric flux through a sphere of radius r around a point charge will be distributed uniformly over that sphere. From this, it follows that a factor of 4πr² will appear in the denominator of Coulomb's law in rationalized form. (Both the numerical factor and the power of the dependence on r would change if space were higher-dimensional; the correct expressions can be deduced from the geometry of higher-dimensional spheres.) Likewise for Newton's law of universal gravitation: a factor of 4π naturally appears in Poisson's equation when relating the gravitational potential to the distribution of matter.
Hence a substantial body of physical theory developed since Planck's 1899 paper suggests normalizing not G but 4πG (or 8πG) to 1. Doing so would introduce a factor of 1/(4π) (or 1/(8π)) into the nondimensionalized form of the law of universal gravitation, consistent with the modern rationalized formulation of Coulomb's law in terms of the vacuum permittivity. In fact, alternative normalizations frequently preserve the factor of 1/(4π) in the nondimensionalized form of Coulomb's law as well, so that the nondimensionalized Maxwell's equations for electromagnetism and gravitoelectromagnetism both take the same form as those for electromagnetism in SI, which do not have any factors of 4π. When this is applied to electromagnetic constants, ε0, this unit system is called "rationalized". When applied additionally to gravitation and Planck units, these are called rationalized Planck units and are seen in high-energy physics.
The rationalized Planck units are defined so that c = ħ = kB = ε0 = 1 and 4πG = 1.
There are several possible alternative normalizations.
Gravitational constant
In 1899, Newton's law of universal gravitation was still seen as exact, rather than as a convenient approximation holding for "small" velocities and masses (the approximate nature of Newton's law was shown following the development of general relativity in 1915). Hence Planck normalized to 1 the gravitational constant G in Newton's law. In theories emerging after 1899, G nearly always appears in formulae multiplied by 4π or a small integer multiple thereof. Hence, a choice to be made when designing a system of natural units is which, if any, instances of 4π appearing in the equations of physics are to be eliminated via the normalization.
Normalizing 4πG to 1 (and therefore setting G = 1/(4π)):
Gauss's law for gravity becomes ∇·g = −ρ (rather than ∇·g = −4πρ in Planck units).
Eliminates 4πG from the Poisson equation.
Eliminates 4πG in the gravitoelectromagnetic (GEM) equations, which hold in weak gravitational fields or locally flat spacetime. These equations have the same form as Maxwell's equations (and the Lorentz force equation) of electromagnetism, with mass density replacing charge density, and with 1/(4πG) replacing ε0.
Normalizes the characteristic impedance Zg of gravitational radiation in free space to 1 (normally expressed as Zg = 4πG/c).
Eliminates 4πG from the Bekenstein–Hawking formula (for the entropy of a black hole in terms of its mass mBH and the area of its event horizon ABH), which is simplified to S_BH = πA_BH.
Setting 8πG = 1 (and therefore setting G = 1/(8π)). This would eliminate 8πG from the Einstein field equations, Einstein–Hilbert action, and the Friedmann equations for gravitation. Planck units modified so that 8πG = 1 are known as reduced Planck units, because the Planck mass is divided by √(8π). Also, the Bekenstein–Hawking formula for the entropy of a black hole simplifies to S_BH = 2πA_BH.
See also
cGh physics
Dimensional analysis
Doubly special relativity
Trans-Planckian problem
Zero-point energy
Explanatory notes
References
External links
Value of the fundamental constants, including the Planck units, as reported by the National Institute of Standards and Technology (NIST).
The Planck scale: relativity meets quantum mechanics meets gravity from 'Einstein Light' at UNSW
Natural units
Max Planck
Quantum gravity | Planck units | [
"Physics"
] | 4,311 | [
"Quantum gravity",
"Unsolved problems in physics",
"Physics beyond the Standard Model"
] |
33,718,154 | https://en.wikipedia.org/wiki/Pure%20bending | In solid mechanics, pure bending (also known as the theory of simple bending) is a condition of stress where a bending moment is applied to a beam without the simultaneous presence of axial, shear, or torsional forces.
Pure bending occurs only under a constant bending moment (M), since the shear force (V), which is equal to dM/dx, has to be equal to zero. In reality, a state of pure bending does not practically exist, because such a state needs an absolutely weightless member. The state of pure bending is an approximation made to derive formulas.
Kinematics of pure bending
In pure bending the axial lines bend to form circumferential lines and transverse lines remain straight and become radial lines.
Axial lines that do not extend or contract form a neutral surface.
Assumptions made in the theory of Pure Bending
The material of the beam is homogeneous1 and isotropic2.
The value of Young's Modulus of Elasticity is the same in tension and compression.
The transverse sections which were plane before bending, remain plane after bending also.
The beam is initially straight and all longitudinal filaments bend into circular arcs with a common centre of curvature.
The radius of curvature is large as compared to the dimensions of the cross-section.
Each layer of the beam is free to expand or contract, independently of the layers above or below it.
Notes: 1 Homogeneous means the material is of same kind throughout. 2 Isotropic means that the elastic properties in all directions are equal.
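Under the assumptions listed above, the bending stress varies linearly with distance y from the neutral axis according to the standard flexure formula σ = M·y/I, where I is the second moment of area of the cross-section. The sketch below illustrates this for a rectangular section; the dimensions and the applied moment are made-up example values, not taken from the reference:

```python
def rect_second_moment(b, h):
    """Second moment of area of a b x h rectangular section about its neutral axis."""
    return b * h**3 / 12.0

def bending_stress(M, y, I):
    """Normal stress at distance y from the neutral axis under pure bending (sigma = M*y/I)."""
    return M * y / I

b, h = 0.05, 0.10            # section width and depth in metres (example values)
M = 2.0e3                    # constant bending moment in N*m (example value)
I = rect_second_moment(b, h)

for y in (-h / 2, 0.0, h / 2):   # bottom fibre, neutral axis, top fibre
    print(f"y = {y:+.3f} m   sigma = {bending_stress(M, y, I) / 1e6:+.2f} MPa")
```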
References
E. P. Popov, Sammurthy Nagarajan, and Z. A. Lu, Mechanics of Materials, Englewood Cliffs, N.J.: Prentice-Hall, 1976, p. 119, "Pure Bending of Beams".
Force
Solid mechanics
Structural system | Pure bending | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 357 | [
"Structural engineering",
"Solid mechanics",
"Force",
"Physical quantities",
"Building engineering",
"Quantity",
"Mass",
"Classical mechanics",
"Structural system",
"Mechanics",
"Wikipedia categories named after physical quantities",
"Matter"
] |
35,331,883 | https://en.wikipedia.org/wiki/Matrix%20t-distribution | In statistics, the matrix t-distribution (or matrix variate t-distribution) is the generalization of the multivariate t-distribution from vectors to matrices.
The matrix t-distribution shares the same relationship with the multivariate t-distribution that the matrix normal distribution shares with the multivariate normal distribution: If the matrix has only one row, or only one column, the distributions become equivalent to the corresponding (vector-)multivariate distribution. The matrix t-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse Wishart distribution placed over either of its covariance matrices, and the multivariate t-distribution can be generated in a similar way.
In a Bayesian analysis of a multivariate linear regression model based on the matrix normal distribution, the matrix t-distribution is the posterior predictive distribution.
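As an illustration of the compound construction described above, a matrix t variate can be simulated by first drawing a covariance matrix from an inverse Wishart distribution and then drawing from a matrix normal distribution with that covariance. A minimal sketch using SciPy's invwishart and matrix_normal; the dimensions and parameter values are arbitrary examples:

```python
import numpy as np
from scipy.stats import invwishart, matrix_normal

rng = np.random.default_rng(0)

n, p = 3, 2           # shape of the random matrix (example values)
M = np.zeros((n, p))  # location matrix
Omega = np.eye(p)     # column covariance (example value)
Psi = np.eye(n)       # scale matrix of the inverse Wishart mixing distribution (example)
nu = n + 4            # degrees of freedom (example value)

def sample_matrix_t():
    # 1. draw a row covariance from the inverse Wishart distribution
    Sigma = invwishart(df=nu, scale=Psi).rvs(random_state=rng)
    # 2. draw from the matrix normal distribution with that row covariance
    return matrix_normal(mean=M, rowcov=Sigma, colcov=Omega).rvs(random_state=rng)

samples = np.stack([sample_matrix_t() for _ in range(5000)])
print(samples.mean(axis=0))   # should be close to the location matrix M
```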
Definition
For a matrix t-distribution, the probability density function at the point of an space is
where the constant of integration K is given by
Here is the multivariate gamma function.
Properties
If , then we have the following properties:
Expected values
The mean, or expected value is, if :
and we have the following second-order expectations, if :
where denotes trace.
More generally, for appropriately dimensioned matrices A,B,C:
Transformation
Transpose transform:
Linear transform: let A (r-by-n), be of full rank r ≤ n and B (p-by-s), be of full rank s ≤ p, then:
The characteristic function and various other properties can be derived from the re-parameterised formulation (see below).
Re-parameterized matrix t-distribution
An alternative parameterisation of the matrix t-distribution uses two parameters and in place of .
This formulation reduces to the standard matrix t-distribution with
This formulation of the matrix t-distribution can be derived as the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse multivariate gamma distribution placed over either of its covariance matrices.
Properties
If then
The property above comes from Sylvester's determinant theorem:
If and and are nonsingular matrices then
The characteristic function is
where
and where is the type-two Bessel function of Herz of a matrix argument.
See also
Multivariate t-distribution
Matrix normal distribution
Notes
External links
A C++ library for random matrix generator
Random matrices
Multivariate continuous distributions | Matrix t-distribution | [
"Physics",
"Mathematics"
] | 495 | [
"Random matrices",
"Matrices (mathematics)",
"Statistical mechanics",
"Mathematical objects"
] |
35,332,741 | https://en.wikipedia.org/wiki/Magnussen%20model | The Magnussen model is a popular method for computing reaction rates as a function of both mean concentrations and turbulence levels (Magnussen and Hjertager). Originally developed for combustion, it can also be used for liquid reactions by tuning some of its parameters. The model consists of rates calculated by two primary means. An Arrhenius, or kinetic, rate R_{i,r}, for species i in reaction r, is governed by the local mean species concentrations and temperature in the following way:
This expression describes the rate at which species i is consumed in reaction r. The constants A and E, the Arrhenius pre-exponential factor and activation energy,
respectively, are adjusted for specific reactions, often as the result of experimental measurements. The stoichiometry for species i in reaction r is represented by a stoichiometric factor, which is positive or negative, depending upon whether the species serves as a product or reactant. The molecular weight of the species appears as a further factor. The temperature, T, appears in the exponential term and also as a factor in the rate expression, with an optional exponent. Concentrations of other species involved in the reaction appear as factors with optional exponents associated with each. Other factors and terms not appearing in the equation can be added to include effects such as the presence of non-reacting
species in the rate equation. Such so-called third-body reactions are typical of the effect of a catalyst on a reaction, for example. Many of the factors are often collected into a single rate constant, k.
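As an illustration of how such a rate expression is assembled, the sketch below evaluates a generic Arrhenius-type rate, k = A T^β exp(−E/(R T)), multiplied by species concentrations raised to their rate exponents. All parameter values and exponents are made-up examples, and the function is a generic form rather than the specific expression of Magnussen and Hjertager:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol*K)

def arrhenius_rate(A, beta, E, T, concentrations, exponents):
    """Generic Arrhenius-type rate: A * T**beta * exp(-E/(R*T)) times each
    species concentration raised to its rate exponent."""
    k = A * T**beta * math.exp(-E / (R_GAS * T))
    for C, n in zip(concentrations, exponents):
        k *= C**n
    return k

# Made-up parameters for a single reaction step
rate = arrhenius_rate(
    A=1.0e9,                     # pre-exponential factor
    beta=0.0,                    # optional temperature exponent
    E=1.2e5,                     # activation energy, J/mol
    T=1500.0,                    # local mean temperature, K
    concentrations=[0.5, 2.0],   # mean concentrations of two reactant species
    exponents=[1.0, 0.5],        # rate exponents for those species
)
print(f"kinetic rate = {rate:.3e}")
```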
References
Magnussen, B. F., and B. H. Hjertager, "On Mathematical Models of Turbulent Combustion with Special Emphasis on Soot Formation and Combustion," Proc. 16th Int. Symp. on Combustion, The Combustion Institute, Pittsburgh, PA (1976).
Chemical engineer | Magnussen model | [
"Chemistry"
] | 373 | [
"Chemical reaction engineering",
"Chemical kinetics"
] |
45,239,121 | https://en.wikipedia.org/wiki/Dunathan%20stereoelectronic%20hypothesis | The Dunathan stereoelectronic hypothesis is a concept in chemistry used to explain the stereospecific cleavage of bonds by pyridoxal phosphate. This occurs because stereoelectronic effects control the actions of the enzyme.
History
Before the correlation between protein fold type and reaction type was understood, Harmon C. Dunathan, a chemist at Haverford College, proposed that the bond that is cleaved using pyridoxal is perpendicular to the pi system. Though an important concept in bioorganic chemistry, it is now known that enzyme conformations play a critical role in the final chemical reaction.
Mode of action
The transition state is stabilized by the extended pi bond network (formation of an anion). Furthermore, hyperconjugation caused by the extended network draws electrons from the bond to be cleaved, thus weakening the chemical bond and making it labile. The sigma bond that is parallel to the pi bond network will break. The bond that has the highest chance of being cleaved is the one with the largest HOMO-LUMO overlap. This effect might be affected by electrostatic effects within the enzyme.
Applications
This has been observed in transferases, and future interest lies in decarboxylation in various catalytic cycles.
References
Chemical bonding | Dunathan stereoelectronic hypothesis | [
"Physics",
"Chemistry",
"Materials_science"
] | 255 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
45,241,605 | https://en.wikipedia.org/wiki/Desorption/ionization%20on%20silicon | Desorption/ionization on silicon (DIOS) is a soft laser desorption method used to generate gas-phase ions for mass spectrometry analysis. DIOS is considered the first surface-based surface-assisted laser desorption/ionization (SALDI-MS) approach. Prior approaches were accomplished using nanoparticles in a matrix of glycerol, while DIOS is a matrix-free technique in which a sample is deposited on a nanostructured (porous silicon) surface and the sample desorbed directly from the nanostructured surface through the adsorption of laser light energy. DIOS has been used to analyze organic molecules, metabolites, biomolecules and peptides, and, ultimately, to image tissues and cells.
Background
Soft laser desorption is a soft ionization technique which desorbs and ionizes molecules from surfaces with minimal fragmentation. This is useful for a broad range of small and large molecules and molecules that fragment easily. The first soft laser desorption techniques included matrix-assisted laser desorption/ionization (MALDI) and nanoparticles in glycerol. In MALDI, the analyte is first mixed with a matrix solution. The matrix absorbs energy from the laser pulse and transfers it to the analyte, causing desorption and ionization of the sample. MALDI generates [M+H]+ ions.
DIOS was first reported by Gary Siuzdak, Jing Wei and Jillian M. Buriak in 1999. It was developed as a matrix-free alternative to MALDI for smaller molecules. Because MALDI uses a matrix, background ions are introduced due to ionization of the matrix. These ions reduce the usefulness of MALDI for small molecules. In contrast, DIOS uses a porous silicon surface to trap the analyte. This surface is not ionized by the laser, therefore creating minimal background ionization and thus allowing for the analysis of small molecules.
Applications
DIOS has been shown to be an ultra-sensitive means of generating and detecting molecules at the yoctomole level, both for DIOS nanostructured surfaces modified with fluorocarbons and for a subsequent related technology known as nanostructure-initiator mass spectrometry or nanostructure imaging mass spectrometry (NIMS).
DIOS has been shown to detect peptides, natural products, small organic molecules, and polymers with little fragmentation.
DIOS can be used for proteomics. It has been reported as a useful method for protein identification. Because it is matrix free, it can be used to identify smaller biomolecules than MALDI. In addition, it can be used to monitor reactions on a single surface through repeated MS analyses. Reaction monitoring can be used to screen enzyme inhibitors.
Atmospheric pressure DIOS was shown to be an effective tool for quantitative analysis of drugs with high proton affinity.
The use of DIOS to image small molecules has been demonstrated. Lin He and coworkers imaged small molecules on mouse liver cells. They also used marker molecules to image HEK 293 cancer cells.
References
Mass spectrometry
Ion source | Desorption/ionization on silicon | [
"Physics",
"Chemistry"
] | 651 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Ion source",
"Mass spectrometry",
"Analytical chemistry stubs",
"Matter"
] |
45,249,739 | https://en.wikipedia.org/wiki/Miniature%20mass%20spectrometer | A miniature mass spectrometer (MMS) is a type of mass spectrometer (MS) which has small size and weight and can be understood as a portable or handheld device. What it means to be portable, and a set of criteria by which portable and miniature mass spectrometers can be assessed, have been discussed in detail. Current lab-scale mass spectrometers, however, usually weigh hundreds of pounds and can cost in the range of thousands to millions of dollars. One purpose of producing MMS is for in situ analysis. This in situ analysis can lead to much simpler mass spectrometer operation such that non-technical personnel like physicians at the bedside, firefighters in a burning factory, food safety inspectors in a warehouse, or airport security at airport checkpoints can analyze samples themselves, saving the time, effort, and cost of having the sample run by a trained MS technician offsite. Although reducing the size of an MS can lead to poorer performance of the instrument versus current analytical laboratory standards, an MMS is designed to maintain sufficient resolution, detection limits, accuracy, and especially the capability of automatic operation. These features are necessary for the specific in-situ applications of MMS mentioned above.
Coupling and ionization in miniature mass spectrometer
In typical mass spectrometry, MS is coupled with separation tools like gas chromatography, liquid chromatography or electrophoresis to reduce the effect of the matrix or background and improve the selectivity, especially when the analytes differ widely in concentration. Sample preparation, including sample collection, extraction, and pre-separation, increases the size of the mass analysis system and adds time and sophistication to the analysis. Much effort has gone into miniaturizing these devices and simplifying their operation. A micro-GC has been implemented to fit a portable MS system. In addition, microfluidics is a strong candidate for MMS and for automating sample preparation. In this technique, most of the sample preparation steps are staged similarly to laboratory systems, but miniature chip-based devices are used, with low consumption of sample and solvents.
One way to circumvent classical, lab-based sample introduction systems is the use of ambient ionization, as it does not require mechanical or electrical coupling to a MMS and can generate ions in the open atmosphere without prior sample preparation, but at the cost of more rigorous vacuum system requirements. Different ambient ionization methods, including low-temperature plasma, paper spray, and extraction spray, have been demonstrated to be highly compatible with MMS. A rigorous review of ambient ionization sources in the context of portable and miniature mass spectrometry has developed a set of criteria by which performance and portability can be evaluated.
Without separation coupling, the basic building blocks in MMS, which are similar in composition with the conventional laboratory counterpart, are sample inlet, ionization source, mass analyzers, detector, vacuum system, instrument control and data acquisition system.
The three most important components in an MMS contributing to miniaturization are the mass analyzer, the vacuum system and the electronics control system. Reducing the size of any component is beneficial to miniaturization. However, minimizing the analyzer's size can greatly enhance the miniaturization of the other components, especially the vacuum system, because the analyzer is the deciding factor for the operating pressure of the MS analysis and for pressure interface fabrication.
Miniature mass analyzer
Smaller mass analyzers require a smaller control system to generate adequate electric and magnetic field strengths, the two fundamental fields used to separate ions based on their mass-to-charge ratio. Because a compact circuit can generate a high electric field, the voltage-generating system does not significantly limit the miniaturization of time-of-flight mass spectrometry (TOF) and electric sectors, which use only the electric field to separate ions.
In principle, the electromagnetic field mainly depends on the shape of the mass analyzer. As a result, a smaller magnet fitted to a small MS reduces the system weight significantly. In practice, when the size is reduced, the geometries of the mass analyzer are distorted. For example, a smaller volume in an ion trap leads to lower trapping capacity and therefore results in a loss of resolution and sensitivity. However, by utilizing tandem MS, resolution and selectivity can be greatly enhanced in complex mixtures. In general, beam-type mass analyzers, such as TOF and sector mass analyzers, are much larger than ion trap types such as the Paul trap, the Penning trap or Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR). Additionally, ion trap mass analyzers can be used to perform multistage MS/MS in a single device. As a result, ion traps have received dominant attention for building an MMS.
Miniature time of flight
Several researchers have succeeded in designing miniature TOF mass analyzers. Cotter at Johns Hopkins University used pulsed extraction in a linear time-of-flight mass analyzer in which the ions are accelerated to a higher energy of 12 keV to enable detection of high-mass species. The group achieved resolutions of 1/1200 and 1/600 at m/z 4500 and 12000, respectively. This mini analyzer can measure 66 kDa proteins, mixtures of oligonucleotides, and biological spores. Verbeck at the University of North Texas created a mini-TOF based on a reflectron TOF using microelectromechanical systems technology. To overcome the low resolution of a short flight tube, the effective ion travel path length is extended by moving ions back and forth over repeated cycles. The system used a 5-cm endcap reflectron TOF with higher-order kinetic energy focusing to analyze ions with m/z exceeding 60,000.
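To make the trade-off between drift length and flight time concrete, the sketch below evaluates the ideal linear-TOF relation t = L·sqrt(m/(2zeV)). The drift lengths and the m/z value are illustrative assumptions, not figures taken from the instruments described above.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def tof_flight_time(mz, drift_length_m, accel_voltage_v, charge=1):
    """Ideal linear-TOF flight time t = L * sqrt(m / (2 * z * e * V))."""
    mass_kg = mz * charge * AMU                      # m/z in Da; total mass = (m/z) * z
    kinetic_energy_j = charge * E_CHARGE * accel_voltage_v
    velocity = math.sqrt(2.0 * kinetic_energy_j / mass_kg)
    return drift_length_m / velocity

# Compare a 1 m lab-scale drift tube with a 5 cm miniature tube at 12 kV for m/z 4500.
for length_m in (1.0, 0.05):
    t = tof_flight_time(4500, length_m, 12_000)
    print(f"L = {length_m*100:5.1f} cm -> flight time ~ {t*1e6:6.2f} us")
```

Shortening the tube shrinks the flight time, and therefore the time separation between neighboring masses, proportionally; this is why miniature designs rely on reflectrons or multi-pass geometries to recover resolution.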
Ecelberger, a senior professional staff scientist in the Sensor Science Group of the Research and Technology Development Center at APL, also developed a suitcase TOF incorporating matrix-assisted laser desorption/ionization (MALDI). The suitcase TOF was tested by scientists from the U.S. Army Soldier and Biological Chemical Command. The samples were biological toxins and chemical agents with masses ranging from a few hundred daltons to over 60 kDa. The suitcase TOF was benchmarked against a commercial TOFMS in the same experiments. Both instruments detected all but a few compounds, with very encouraging results. Because a commercial TOFMS uses higher-voltage pulsed extraction, a longer flight tube and other optimized conditions, it generally has better sensitivity and resolution than a suitcase TOF. However, in the case of very high mass compounds, the suitcase TOF shows resolution and sensitivity as good as the commercial TOF. The suitcase TOF was also tested with a series of chemical weapons agents. Every compound tested was detected at levels comparable to standard analytical techniques for these agents.
Miniature sector
Several miniature double-focusing mass analyzers have been fabricated. A non-scanning Mattauch–Herzog geometry sector was developed using new materials to construct a lighter magnet. In a collaboration between the University of Minnesota and the Universidad de Costa Rica, a miniature double-focusing sector was produced using a combination of conventional machining methods and thin-film patterning to overcome the distortion of the electric and magnetic fields caused by the small size. The MMS can reach a detection limit close to 10 ppm, a dynamic range of 5 orders of magnitude and a mass range up to 10³ Da. The mass analyzer measures 3.5 cm × 6 cm × 7.5 cm overall, weighs 0.8 kg and consumes 2.5 W.
Miniature linear quadrupole mass filter
The linear quadrupole mass filter, or quadrupole mass analyzer, is one of the most popular mass analyzers. The mini-quadrupole has been used as a single analyzer or in arrays of identical mass analyzers. One quadrupole array has rods of 0.5 mm radius and 10 mm length, while another has rods of 1 mm radius and 25 mm length. These mini-quadrupoles were developed and characterized at radio frequencies (RF) higher than 11 MHz. Volatile organic compounds were ionized by electron ionization and were characterized with unit resolution. Micromachining was applied to produce a much smaller V-groove quadrupole.
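One way to see why small-rod quadrupoles are driven at higher RF frequencies is through the Mathieu stability parameter q = 4zeV/(m·r0²·Ω²): for a fixed q near the stability boundary, shrinking the field radius r0 sharply lowers the RF amplitude needed for a given mass and drive frequency. The sketch below uses illustrative numbers only; the quoted rod radii are treated as an approximation of the field radius r0, and the drive frequency, target q and m/z are assumptions rather than values from the studies cited above.

```python
import math

E_CHARGE = 1.602176634e-19   # C
AMU = 1.66053906660e-27      # kg

def rf_amplitude_for_q(mz, r0_m, freq_hz, q=0.706, charge=1):
    """RF amplitude V (zero-to-peak) that places an ion of given m/z at Mathieu
    parameter q in a linear quadrupole: q = 4*z*e*V / (m * r0^2 * Omega^2)."""
    mass_kg = mz * charge * AMU
    omega = 2.0 * math.pi * freq_hz
    return q * mass_kg * r0_m**2 * omega**2 / (4.0 * charge * E_CHARGE)

# m/z 100 near the edge of the first stability region, 11 MHz drive, for two rod radii.
for r0_mm in (0.5, 1.0):
    v = rf_amplitude_for_q(100, r0_mm * 1e-3, 11e6)
    print(f"r0 = {r0_mm} mm -> RF amplitude ~ {v:6.0f} V")
```

The quadratic dependence on r0 is what lets a millimetre-scale filter cover a useful mass range with only a few hundred volts of RF.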
Miniature ion trap mass analyzer
Ion traps include the quadrupole ion trap (Paul trap), the Fourier transform ion cyclotron resonance cell (Penning trap) and the more recently developed Orbitrap. However, the Paul trap receives the greatest focus from researchers because of its distinct advantages over other mass analyzers for building an MMS. One of the benefits is that ion traps can work at much higher pressures than beam-type mass analyzers and can be simplified with different geometries for ease of fabrication. For example, miniature quadrupole ion trap mass analyzers (such as the cylindrical ion trap, linear ion trap and rectilinear ion trap) can operate at several mTorr, in contrast to 10⁻⁵ Torr or less for other analyzers, and can perform MS/MS in a single device with a minimal electronics system. Nevertheless, as the size gets smaller, it is hard to maintain the electric field shape and precise configuration, which negatively affects ion motion. The goal is to make the trap smaller without losing ion capacity. The Tridion-9 mass spectrometer's toroidal ion trap is designed with a doughnut-shaped volume that can hold up to 400 times more ions. This outstanding result is achieved as the radius is reduced to one-fifth of that of a conventional laboratory ion trap while maintaining the ion capacity.
Miniature vacuum system
The purpose of using the vacuum is to eliminate background signal and avoid intermolecular collision events, thereby providing a long mean free path for the ions. The vacuum system, including the vacuum pumps and the vacuum manifold with its various interfaces, is often the heaviest part of a mass spectrometer and consumes the most power. In the case of TOF, if the length of the drift region is decreased, the region can be operated at a higher pressure because a collision-free path is still maintained over the ions' shorter travel distance. As a result, the vacuum system requires less power to run. For a trap-type mass analyzer, because the ions are trapped in the device for long periods and the accumulated trajectory length is much longer than the size of the mass analyzer, the size reduction of the mass analyzer may not directly relax the required operating pressure. Miniature rough–turbo pump configurations similar to lab-scale instruments have been developed to be compatible with MMS. For high-vacuum pumping, turbomolecular pumps have also been upgraded. A Thermo Fisher Orbitrap used three turbo pumps in LC-MS modes to achieve a vacuum below 10⁻¹⁰ Torr.
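A minimal kinetic-theory sketch makes the pressure requirements above concrete. It assumes an N2-like collision diameter of about 0.37 nm and room temperature (illustrative values, not taken from the article), and estimates the mean free path λ = k_B·T/(√2·π·d²·p) at a beam-analyzer pressure and at an ion-trap pressure.

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
TORR_TO_PA = 133.322

def mean_free_path_m(pressure_torr, temperature_k=300.0, collision_diameter_m=3.7e-10):
    """Kinetic-theory mean free path: lambda = k_B*T / (sqrt(2) * pi * d^2 * p)."""
    pressure_pa = pressure_torr * TORR_TO_PA
    return K_B * temperature_k / (math.sqrt(2.0) * math.pi * collision_diameter_m**2 * pressure_pa)

# Beam-type analyzer pressure vs. a miniature ion trap operating at a few mTorr.
for p_torr in (1e-5, 5e-3):
    lam_cm = mean_free_path_m(p_torr) * 100.0
    print(f"{p_torr:8.0e} Torr -> mean free path ~ {lam_cm:8.1f} cm")
```

At 10⁻⁵ Torr the mean free path is several metres, far longer than any drift tube, while at a few mTorr it shrinks to roughly a centimetre, which is still compatible with the very short internal dimensions of a miniature ion trap.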
Recently, a turbo pump from Creare, Inc. weighing only 500 g and requiring less than 18 W of power has become available. The pump can provide an ultimate vacuum below 10⁻⁸ Torr, which is much lower than the operating pressure necessary for an MMS.
The leading research groups, producers and applications
One of the leading groups in academia for creating ion-trap MMS is Prof. Graham Cooks with his associate Professor Zheng Ouyang at Purdue University. They have built a series of mini mass spectrometers based on quadrupole ion traps, called Mini 10, Mini 11 and Mini 12. The group used the Mini 10 mass spectrometer, weighing 10 kg, to analyze proteins, peptides and alkaloids in complex plant materials with electrospray ionization (ESI) and paper spray ionization. The group used low-radio-frequency resonant ion ejection to extend the mass range up to 17,000 Da proteins. For interfacing an ESI source with the MMS, a 10 cm stainless steel capillary was fabricated to transfer the ions directly into the vacuum manifold. The resulting high pressure of 20 mTorr, which is several orders of magnitude higher than that used in lab-scale mass spectrometers, is compensated by using the pressure-tolerant rectilinear ion trap. One of the key components of this MMS is a commercial turbo pump, and the MS can be operated at 10⁻³ Torr. To overcome the problem of continuous sample introduction with a small pump, the group developed a technique called discontinuous atmospheric pressure introduction (DAPI). This technique performs direct chemical analysis without sample pretreatment and enables the coupling of miniature mass spectrometers to atmospheric pressure ionization sources, including ESI, atmospheric pressure chemical ionization (APCI), and various ambient ionization sources. The ions are transferred from the ionization source, held at a pinch valve, and injected into the MS periodically. The performance of a hand-held Mini 10 mass spectrometer was upgraded with a negative ion mode for detecting explosive compounds and hazardous materials at the picogram level, which is highly applicable to airport luggage screening.
The 8.5 kg Mini 11 and 25 kg Mini 12 can produce mass spectra up to m/z 600, a range that makes them useful for studying metabolites, lipids, and other small molecules. The group also developed and incorporated a digital microfluidic platform into the MMS, applied to extracting and quantifying drugs in urine. The Mini 12 can perform MS⁵ and directly analyze complex samples such as whole blood, untreated food, and environmental samples, without sample preparation or chromatographic separation.
1st Detect introduced the MMS 1000, a cylindrical ion-trap mass spectrometer with MS/MS capability. Advertised characteristics include a wide mass range (35–450 Da), high resolution (<0.5 Da FWHM) and fast analysis time (≥0.5 s). The inlet flow rate can be high – up to 600 ml/min with no external pumps or carrier gases. The MMS 1000 incorporates a non-cryogenic pre-concentrator; this coupling enhances the sensitivity by up to 10⁵ with a fast analysis speed of 30 s. 1st Detect's miniaturized mass spectrometers are used in a range of applications, including homeland security, military, breath analysis, leak detection, and environmental and industrial quality control. The MMS 1000 was originally designed for NASA, for the purpose of monitoring air quality on the International Space Station.
908 Devices introduced the M908, a 2 kg handheld mass spectrometer utilizing high-pressure mass spectrometry with solid-, liquid- and gas-phase detection. Meanwhile, Microsaic Systems in Surrey, United Kingdom develops single-quadrupole mass spectrometers called the 3500 MiD and 4000 MiD. These mass analyzers are used to support pharmaceutical process chemistry.
Several other MMS instruments have also been fabricated using ion trap mass analyzers, including the Tridion-9 GC-MS from Torion Inc., now part of PerkinElmer (American Fork, Utah), the GC/QIT from the Jet Propulsion Laboratory, and the Chemsense 600 from Griffin Analytical Technology LLC (West Lafayette, Indiana).
Another example is Girguis at Harvard University, who built an MMS based on existing underwater mass spectrometers (UMS) that can operate underwater to study the influence of microbes on the methane and hydrogen content of the ocean. He worked with a mechanical engineer to package a commercial quadrupole mass analyzer from Stanford Research Systems, a Pfeiffer HiPace 80 turbopump, and a custom gas extractor into a 25 cm × 90 cm cylinder. The total cost is about $15,000.
The Analytical Instrumentation Research Institute in Korea has also developed a palm-portable mass spectrometer (PPMS). Its size and weight are reduced to 1.54 L and 1.48 kg, respectively, and it uses only 5 W of power. The PPMS is based on four parallel disk ion traps, a small ion getter pump and a micro-computer. The PPMS can scan ion masses up to m/z 300 and detect ppm concentrations of organic gases diluted in air.
The Harsh-Environment Mass Spectrometry Society holds a biannual workshop focused on in-situ mass spectrometry in extreme environments, such as the deep ocean, volcano craters or outer space, which require high reliability, autonomous or remote operation, and ruggedness with minimum size, weight and power. The archives of the workshop include roughly 100 presentations focusing on the design and application of miniature mass spectrometers.
For example, at the 8th Harsh Environment Mass Spectrometry Workshop, a group of scientists presented their study on the use of lightweight MS-based instrumentation and small unmanned aerial vehicle (UAV) platforms for in-situ volcanic plume analysis at the Turrialba and Arenal volcanoes (Costa Rica). Mini mass spectrometers relying on a miniature 18 mm rod Transpector quadrupole for mTorr-pressure operation, a miniature turbomolecular drag pump, and assets such as the small, multi-parameter, battery-powered sensor suite MiniGas embedded with a micro-PC control system and a telemetry system were integrated on an aircraft to acquire a 4D image of an erupting volcanic plume.
References
Mass spectrometry | Miniature mass spectrometer | [
"Physics",
"Chemistry"
] | 3,521 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
49,478,913 | https://en.wikipedia.org/wiki/Binary%20Reed%E2%80%93Solomon%20encoding | Binary Reed–Solomon coding (BRS), a type of RS code, is an encoding method that can repair node data loss in a distributed storage environment. It has maximum distance separable (MDS) encoding properties. Its encoding and decoding rates outperform conventional RS coding and optimal CRS coding.
Background
RS coding is a fault-tolerant encoding method for a distributed storage environment. Suppose we wish to distribute data across individual devices for improved storage capacity or bandwidth, for example in a hardware RAID setup. Such a configuration risks significant data loss in the event of device failure. The Reed–Solomon encoding produces a storage coding system which is robust to the simultaneous failure of any subset of nodes. To do this, we add additional nodes to the system, for a total of storage nodes.
The traditional RS encoding method uses the Vandermonde matrix as the coding matrix and its inverse as the decoding matrix. Traditional RS encoding and decoding operations are all carried out over a large finite field.
Because BRS encoding and decoding employ only shift and XOR operations, they are much faster than traditional RS coding. The BRS coding algorithm was proposed by the Advanced Network Technology Laboratory of Peking University, which also released an open-source implementation of BRS coding. In real-environment tests, the encoding and decoding speed of BRS is faster than that of CRS. In the design and implementation of a distributed storage system, using BRS coding gives the system fault-tolerant regeneration characteristics.
Principle
BRS encoding principle
The structure of traditional Reed–Solomon codes is based on finite fields, whereas the BRS code is based on shift and XOR operations. BRS encoding is based on the Vandermonde matrix, and its specific encoding steps are as follows:
Equally divide the original data into blocks, each block containing -bit data, recorded as where , .
Build the calibration data blocks , of which there are in total: where , . The additions here are all XOR operations, and represents the number of "0" bits added to the front of the original data block , thereby forming a parity data block . is given in the following way: where .
Each node stores data; the nodes store the data as .
BRS encoding example
Suppose now that , , . The original data blocks are , where . The calibration data for each block are , where .
The calculation of the calibration data blocks is as follows; the addition operation represents a bitwise XOR operation:
, so
, so
, so
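The following is a minimal illustrative sketch of the shift-and-XOR idea behind BRS, written here for clarity rather than fidelity: the specific shift schedule (block i shifted by i·j bits in parity block j, mirroring a Vandermonde-style structure) is an assumption and may differ from the published BRS construction.

```python
def shift_xor_parities(data_blocks, num_parity):
    """Build parity blocks from data blocks using only bit shifts and XOR.

    data_blocks: list of equal-length byte strings (the original blocks).
    num_parity:  number of parity blocks to produce.
    Returns the parity blocks as Python integers used as bit vectors.
    """
    as_ints = [int.from_bytes(block, "big") for block in data_blocks]
    parities = []
    for j in range(num_parity):
        acc = 0
        for i, d in enumerate(as_ints):
            # "Prepend i*j zero bits" to block i, then XOR it into parity j.
            acc ^= d << (i * j)
        parities.append(acc)
    return parities

# Example: three one-byte data blocks and two parity blocks.
blocks = [b"\x5a", b"\x3c", b"\x0f"]
for j, parity in enumerate(shift_xor_parities(blocks, 2)):
    print(f"parity block {j}: {parity:b}")
```

Because every operation is a shift or an XOR, no finite-field multiplication is needed, which is the source of the speed advantage over conventional RS coding described below.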
BRS decoding principle
In the structure of the BRS code, we divide the original data into blocks, . The encoding additionally produces calibration data blocks, .
During the decoding process, there is a necessary condition: the number of undamaged calibration data blocks has to be greater than or equal to the number of missing original data blocks; otherwise, the data cannot be repaired.
The following is a decoding process analysis:
Without loss of generality, let , . Then
Suppose is intact and is missing; choose , to repair, and let
Because , , are known, , are also known, so that
According to the above iterative formula, each cycle can determine two bit values ( yields one bit). Each original data block has length ( bits), so after repeating times we can work out all the unknown bits in the original data block. By the same reasoning, we can complete the data decoding.
Performance
Some experiments show that, on a single-core processor, the BRS encoding rate is about 6 times that of RS encoding and about 1.5 times that of CRS encoding, which satisfies the requirement that, compared with RS encoding, the encoding speed improves by no less than 200%.
Under the same conditions, for different numbers of erasures, the BRS decoding rate is about 4 times that of RS encoding and about 1.3 times that of CRS encoding, which satisfies the requirement that, compared with RS encoding, the decoding speed improves by 100%.
Applications
Distributed systems are now widely used. Using erasure codes to store data in the underlying layer of a distributed storage system can increase the fault tolerance of the system. At the same time, compared with the traditional replica strategy, erasure-code technology can exponentially improve the reliability of the system for the same redundancy.
BRS encoding can be applied to distributed storage systems; for example, it can be used as the underlying data encoding in HDFS. Owing to its performance advantages and the similarity of the encoding method, BRS encoding can be used to replace CRS encoding in distributed systems.
Usage
An open-source implementation of BRS encoding, written in C, is available on GitHub. In the design and implementation of a distributed storage system, BRS encoding can be used to store data and to provide the system's fault tolerance.
References
H. Hou, K. W. Shum, M. Chen and H. Li, BASIC Regeneration Code: Binary Addition and Shift for Exact Repair, IEEE ISIT 2013.
Jun Chen, Hui Li, Hanxu Hou, Bing Zhu, Tai Zhou, Lijia Lu, Yumeng Zhang, "A New Zigzag MDS Code with Optimal Encoding and Efficient Decoding," in Proc. IEEE International Conference on Big Data (Big Data), IEEE, 2014.
Error detection and correction | Binary Reed–Solomon encoding | [
"Engineering"
] | 1,132 | [
"Error detection and correction",
"Reliability engineering"
] |
49,479,058 | https://en.wikipedia.org/wiki/Long%20intergenic%20non-protein%20coding%20rna%20598 | Long intergenic non-protein coding RNA 598 is a long non-coding RNA that in humans is encoded by the LINC00598 gene.
References
Further reading
Proteins | Long intergenic non-protein coding rna 598 | [
"Chemistry"
] | 33 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
49,481,270 | https://en.wikipedia.org/wiki/System%20U | In mathematical logic, System U and System U− are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). System U was proved inconsistent by Jean-Yves Girard in 1972 (and the question of consistency of
System U− was formulated).
This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent as it allowed the same "Type in Type" behaviour that Girard's paradox exploits.
Formal definition
System U is defined as a pure type system with
three sorts {∗, □, △};
two axioms {∗ : □, □ : △}; and
five rules {(∗, ∗), (□, ∗), (□, □), (△, □), (△, ∗)}.
System U− is defined the same with the exception of the (△, ∗) rule.
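For readers unfamiliar with pure type system notation, a rule (s1, s2) licenses the following product-formation judgment (this is the standard PTS presentation, added here for orientation rather than taken from the original article):

```latex
\frac{\Gamma \vdash A : s_1 \qquad \Gamma,\; x : A \vdash B : s_2}
     {\Gamma \vdash (\Pi x : A.\; B) : s_2}
\qquad (s_1, s_2) \in \mathcal{R}
```

So, for example, the rule (□, ∗) is what allows a term's type to quantify over types, i.e. polymorphism.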
The sorts ∗ and □ are conventionally called "Type" and "Kind", respectively; the sort △ doesn't have a specific name. The two axioms describe the containment of types in kinds (∗ : □) and kinds in △ (□ : △). Intuitively, the sorts describe a hierarchy in the nature of the terms.
All values have a type, such as a base type (e.g. b : Bool is read as "b is a boolean") or a (dependent) function type (e.g. f : Nat → Bool is read as "f is a function from natural numbers to booleans").
∗ is the sort of all such types (t : ∗ is read as "t is a type"). From ∗ we can build more terms, such as ∗ → ∗, which is the kind of unary type-level operators (e.g. List : ∗ → ∗ is read as "List is a function from types to types", that is, a polymorphic type). The rules restrict how we can form new kinds.
□ is the sort of all such kinds (k : □ is read as "k is a kind"). Similarly we can build related terms, according to what the rules allow.
△ is the sort of all such terms.
The rules govern the dependencies between the sorts: (∗, ∗) says that values may depend on values (functions), (□, ∗) allows values to depend on types (polymorphism), (□, □) allows types to depend on types (type operators), and so on.
Girard's paradox
The definitions of System U and U− allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F. An example of such a generic constructor might be (where k denotes a kind variable)
λ(k : □) λ(α : k → k) λ(β : k). α (α β) : Π(k : □). (k → k) → k → k.
This mechanism is sufficient to construct a term with the type (∀p : ∗. p) (equivalent to the type ⊥), which implies that every type is inhabited. By the Curry–Howard correspondence, this is equivalent to all logical propositions being provable, which makes the system inconsistent.
Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory.
References
Further reading
Lambda calculus
Proof theory
Type theory | System U | [
"Mathematics"
] | 575 | [
"Mathematical structures",
"Proof theory",
"Mathematical logic",
"Mathematical objects",
"Type theory"
] |
49,489,000 | https://en.wikipedia.org/wiki/General%20Mission%20Analysis%20Tool | General Mission Analysis Tool (GMAT) is open-source space mission analysis software developed by NASA and private industry.
It has been used for several missions, including LCROSS, the Lunar Reconnaissance Orbiter, OSIRIS-REx, the Magnetospheric Multiscale Mission, and the Transiting Exoplanet Survey Satellite (TESS) mission.
GMAT is an open-source alternative to software like Systems Tool Kit and FreeFlyer.
References
External links
GMAT Wiki
GMAT Download (SourceForge)
GMAT channel on YouTube
Aerospace engineering
3D graphics software
Astronomy software
Mathematical software
Physics software | General Mission Analysis Tool | [
"Physics",
"Astronomy",
"Mathematics",
"Engineering"
] | 124 | [
"Works about astronomy",
"Physics software",
"Computational physics",
"Astronomy software",
"Aerospace engineering",
"Mathematical software"
] |
48,211,145 | https://en.wikipedia.org/wiki/Dorrite | Dorrite is a silicate mineral that is isostructural with the aenigmatite group. It is most chemically similar to the mineral rhönite [Ca2Mg5Ti(Al2Si4)O20], made distinct by a lack of titanium (Ti) and the presence of Fe3+. Dorrite is named for Dr. John (Jack) A. Dorr, a late professor at the University of Michigan who conducted research in the outcrops where dorrite was found in 1982. This mineral has a sub-metallic luster, with colors ranging from brownish black and dark brown to reddish brown.
Discovery
Dorrite was first reported in 1982 by A. Havette in a basalt–limestone contact on Réunion Island off the coast of Africa. The second report of dorrite was made by Franklin Foit and his associates while examining a paralava from the Powder River Basin, Wyoming, in 1987. Analyses determined that this newly found mineral was surprisingly similar to the mineral rhönite, lacking Ti but presenting dominant Fe3+ in its octahedral sites. Other minerals that coexist with this phase are plagioclase, gehlenite-akermanite, magnetite-magnesioferrite-spinel solid solutions, esseneite, nepheline, wollastonite, Ba-rich feldspar, apatite, ulvöspinel, ferroan sahamalite, and secondary barite and calcite.
Occurrence
Dorrite can be found in mineral reactions that relate dorrite + magnetite + clinopyroxene, rhönite + magnetite + olivine + clinopyroxene, and aenigmatite + pyroxene + olivine assemblages in nature. These assemblages favor low pressures and high temperatures. Dorrite is stable in strongly oxidizing, high-temperature, low-pressure environments. It occurs in paralava, pyrometamorphic melt rock, formed from the burning of coal beds.
Crystallography
Researchers conclusively determined that dorrite is triclinic-pseudomonoclinic and twinned by a twofold rotation about the pseudomonoclinic b axis. The unit-cell parameters for dorrite are a = 10.505 Å, b = 10.897 Å, c = 9.019 Å, α = 106.26°, β = 95.16°, γ = 124.75°.
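As a small worked example of what these parameters imply, the standard triclinic cell-volume formula can be applied to them; the resulting volume is computed here for illustration and is not a value quoted in the original description.

```python
import math

def triclinic_cell_volume(a, b, c, alpha_deg, beta_deg, gamma_deg):
    """Unit-cell volume of a triclinic lattice:
    V = a*b*c * sqrt(1 - cos^2(alpha) - cos^2(beta) - cos^2(gamma)
                     + 2*cos(alpha)*cos(beta)*cos(gamma))."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha_deg, beta_deg, gamma_deg))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Dorrite cell parameters quoted above (lengths in angstroms, angles in degrees).
volume = triclinic_cell_volume(10.505, 10.897, 9.019, 106.26, 95.16, 124.75)
print(f"cell volume ~ {volume:.0f} cubic angstroms")
```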
Chemical Composition
Calcium 8.97%, Magnesium 5.44%, Aluminum 6.04%, Iron 37.48%, Silicon 6.28%, Oxygen 35.79%
Oxides
CaO 12.55%, MgO 9.02%, Al2O3 11.41%, Fe2O3 53.59%, SiO2 13.44%
References
Natural materials
Inosilicates
Triclinic minerals | Dorrite | [
"Physics"
] | 602 | [
"Natural materials",
"Materials",
"Matter"
] |
48,213,886 | https://en.wikipedia.org/wiki/Release%20modulator | A release modulator, or neurotransmitter release modulator, is a type of drug that modulates the release of one or more neurotransmitters. Examples of release modulators include monoamine releasing agents such as the substituted amphetamines (which induce the release of norepinephrine, dopamine, and/or serotonin) and release inhibitors such as botulinum toxin A (which inhibits acetylcholine release by inactivating SNAP-25, thereby preventing exocytosis from occurring).
See also
Neuromodulation
Reuptake modulator
Channel modulator
Enzyme modulator
Receptor modulator
References
Drugs by mechanism of action
Psychopharmacology | Release modulator | [
"Chemistry"
] | 150 | [
"Psychopharmacology",
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs"
] |
48,223,694 | https://en.wikipedia.org/wiki/EpiBone | EpiBone is a biomedical engineering company that is developing technology to create bone tissue from a patient's mesenchymal stem cells in vitro for use in bone grafts. The company was founded by Nina Tandon and Sarindr “Ik” Bhumiratana.
References
Biomedical engineering
Prosthetics | EpiBone | [
"Engineering",
"Biology"
] | 65 | [
"Biological engineering",
"Bioengineering stubs",
"Biomedical engineering",
"Biotechnology stubs",
"Medical technology stubs",
"Medical technology"
] |
32,129,595 | https://en.wikipedia.org/wiki/Rotational%20correlation%20time | Rotational correlation time (τc) is the average time it takes for a molecule to rotate one radian.
In solution, rotational correlation times are on the order of picoseconds. For example, it is 1.7 ps for water, and 100 ps for a pyrroline nitroxyl radical in a DMSO–water mixture. Rotational correlation times are employed in the measurement of microviscosity (viscosity at the molecular level) and in protein characterization.
Rotational correlation times may be measured by rotational (microwave), dielectric, and nuclear magnetic resonance (NMR) spectroscopy. Rotational correlation times of probe molecules in various media have been measured by fluorescence lifetime measurements or, for radicals, from the linewidths of electron spin resonance spectra.
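For orientation, the rotational correlation time of a roughly spherical molecule is often estimated from the Stokes–Einstein–Debye relation τc = 4πηr³/(3kBT), which is how τc values are linked to microviscosity. The sketch below uses an assumed hydrodynamic radius and viscosity (illustrative values, not taken from this article) and reproduces the picosecond order of magnitude quoted above.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def rotational_correlation_time(viscosity_pa_s, radius_m, temperature_k=298.0):
    """Stokes-Einstein-Debye estimate: tau_c = 4*pi*eta*r^3 / (3*k_B*T)."""
    return 4.0 * math.pi * viscosity_pa_s * radius_m**3 / (3.0 * K_B * temperature_k)

# A small water-like molecule (hydrodynamic radius ~0.14 nm) in water (eta ~ 1 mPa*s).
tau = rotational_correlation_time(1.0e-3, 0.14e-9)
print(f"tau_c ~ {tau * 1e12:.1f} ps")   # on the order of a few picoseconds
```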
References
Molecular dynamics
Nuclear magnetic resonance | Rotational correlation time | [
"Physics",
"Chemistry"
] | 162 | [
"Nuclear magnetic resonance",
"Molecular physics",
"Computational physics",
"Molecular dynamics",
"Computational chemistry",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Nuclear physics"
] |
32,130,816 | https://en.wikipedia.org/wiki/Hybrid%20material | Hybrid materials are composites consisting of two constituents at the nanometer or molecular level. Commonly, one of these compounds is inorganic and the other one organic in nature. Thus, they differ from traditional composites, where the constituents are at the macroscopic (micrometer to millimeter) level. Mixing at the microscopic scale leads to a more homogeneous material that either shows characteristics in between the two original phases or even new properties.
Introduction
Hybrid materials in nature
Many natural materials consist of inorganic and organic building blocks distributed on the nanoscale. In most cases the inorganic part provides mechanical strength and an overall structure to the natural objects while the organic part delivers bonding between the inorganic building blocks and/or the rest of the tissue. Typical examples include bone and nacre.
Development of hybrid materials
The first hybrid materials were the paints made from inorganic and organic components that were used thousands of years ago. Rubber is an example of the use of inorganic materials as fillers for organic polymers. The sol–gel process developed in the 1930s was one of the major driving forces of what has become the broad field of inorganic–organic hybrid materials.
Classification
Hybrid materials can be classified based on the possible interactions connecting the inorganic and organic species. Class I hybrid materials are those that show weak interactions between the two phases, such as van der Waals, hydrogen bonding or weak electrostatic interactions. Class II hybrid materials are those that show strong chemical interactions between the components such as covalent bonds.
Structural properties can also be used to distinguish between various hybrid materials. An organic moiety containing a functional group that allows attachment to an inorganic network, e.g. a trialkoxysilane group, can act as a network modifier because in the final structure the inorganic network is only modified by the organic group. Phenyltrialkoxysilanes are an example of such compounds; they modify the silica network in the sol–gel process via the reaction of the trialkoxysilane group without supplying additional functional groups intended to undergo further chemical reactions in the material formed. If a reactive functional group is incorporated, the system is called a network functionalizer. The situation is different if two or three of such anchor groups modify an organic segment; this leads to materials in which the inorganic group is afterwards an integral part of the hybrid network. The latter type of system is known as a network builder.
Blends are formed if no strong chemical interactions exist between the inorganic and organic building blocks. One example of such a material is the combination of inorganic clusters or particles with organic polymers lacking a strong (e.g. covalent) interaction between the components. In this case a material is formed that consists, for example, of an organic polymer with entrapped discrete inorganic moieties in which, depending on the functionalities of the components, weak crosslinking occurs through physical interactions with the entrapped inorganic units, or the inorganic components are entrapped in a crosslinked polymer matrix. If an inorganic and an organic network interpenetrate each other without strong chemical interactions, so-called interpenetrating networks (IPNs) are formed, which is for example the case if a sol–gel material is formed in the presence of an organic polymer or vice versa. Both materials described belong to class I hybrids. Class II hybrids are formed when the discrete inorganic building blocks, e.g. clusters, are covalently bonded to the organic polymers, or when inorganic and organic polymers are covalently connected with each other.
Distinction between nanocomposites and hybrid materials
The term nanocomposite is used if the combination of organic and inorganic structural units yield a material with composite properties. That is to say that the original properties of the separate organic and inorganic components are still present in the composite and are unchanged by mixing these materials. However, if a new property emerges from the intimate mixture, then the material becomes a hybrid. A macroscopic example is the mule, which is more suited for hard work than either of its parents, the horse and the donkey. The size of the individual components and the nature of their interaction (covalent, electrostatic, etc.) do not enter into the definition of a hybrid material.
Advantages of hybrid materials over traditional composites
Inorganic clusters or nanoparticles with specific optical, electronic or magnetic properties can be incorporated in organic polymer matrices.
Contrary to pure solid state inorganic materials that often require a high temperature treatment for their processing, hybrid materials show a more polymer-like handling, either because of their large organic content or because of the formation of crosslinked inorganic networks from small molecular precursors just like in polymerization reactions.
Light scattering in homogeneous hybrid material can be avoided and therefore optical transparency of the resulting hybrid materials and nanocomposites can be achieved.
Synthesis
Two different approaches can be used for the formation of hybrid materials: either well-defined preformed building blocks are applied that react with each other to form the final hybrid material, in which the precursors still at least partially keep their original integrity, or one or both structural units are formed from precursors that are transformed into a new (network) structure. It is important that the interface between the inorganic and organic materials be tailored to overcome serious problems in the preparation of hybrid materials. Different building blocks and approaches can be used for their preparation, and these have to be adapted to bridge the differences between inorganic and organic materials.
Building block approach
Building blocks at least partially keep their molecular integrity throughout the material formation, which means that structural units that are present in these sources for materials formation can also be found in the final material. At the same time typical properties of these building blocks usually survive the matrix formation, which is not the case if material precursors are transferred into novel materials. Representative examples of such well-defined building blocks are modified inorganic clusters or nanoparticles with attached reactive organic groups.
Cluster compounds often contain at least one functional group that allows an interaction with an organic matrix, for example by copolymerization. Depending on the number of groups that can interact, these building blocks are able to modify an organic matrix (one functional group) or form partially or fully crosslinked materials (more than one group). For instance, two reactive groups can lead to the formation of chain structures. If the building blocks contain at least three reactive groups, they can be used without additional molecules for the formation of a crosslinked material.
Beside the molecular building blocks mentioned, nanosized building blocks, such as particles or nanorods, can also be used to form nanocomposites. The building block approach has one large advantage compared with the in situ formation of the inorganic or organic entities: because at least one structural unit (the building block) is well-defined and usually does not undergo significant structural changes during the matrix formation, better structure–property predictions are possible. Furthermore, the building blocks can be designed in such a way to give the best performance in the materials’ formation, for example good solubility of inorganic compounds in organic monomers by surface groups showing a similar polarity as the monomers.
In recent years many building blocks have been synthesized and used for the preparation of hybrid materials. Chemists can design these compounds on a
molecular scale with highly sophisticated methods and the resulting systems are used for the formation of functional hybrid materials. Many future applications, in particular in nanotechnology, focus on a bottom-up approach in which complex structures are hierarchically formed by these small building blocks. This idea is also one of the driving forces of the building block approach in hybrid materials.
In situ formation of the components
The in situ formation of the hybrid materials is based on the chemical transformation of the precursors used throughout materials’ preparation. Typically this is the case if organic polymers are formed but also if the sol–gel process is applied to produce the inorganic component. In these cases well-defined discrete molecules are transformed to multidimensional structures, which often show totally different properties from the original precursors. Generally simple, commercially available molecules are applied and the internal structure of the final material is determined by the composition of these precursors but also by the reaction conditions. Therefore, control over the latter is a crucial step in this process. Changing one parameter can often lead to two very different materials. If, for example, the inorganic species is a silica derivative formed by the sol–gel process, the change from base to acid catalysis makes a large difference because base catalysis leads to a more particle-like microstructure while acid catalysis leads to a polymer-like microstructure. Hence, the final performance of the derived materials is strongly dependent on their processing and its optimization.
In situ formation of inorganic materials
Many of the classical inorganic solid state materials are formed using solid precursors and high temperature processes, which are often not compatible with the presence of organic groups because they are decomposed at elevated temperatures. Hence, these high temperature processes are not suitable for the in situ formation of hybrid materials. Reactions that are employed should have more the character of classical covalent bond formation in solutions. One of the most prominent processes which fulfill these demands is the sol–gel process. However, such rather low temperature processes often do not lead to the thermodynamically most stable structure but to kinetic products, which has some implications for the structures obtained. For example, low temperature derived inorganic materials are often amorphous or crystallinity is only observed on a very small length scale, i.e. the nanometer range. An example of the latter is the formation of metal nanoparticles in organic or inorganic matrices by reduction of metal salts or organometallic precursors.
Some methods of in situ formation of inorganic materials are:
Sol-gel process
Nonhydrolytic sol–gel process
Sol–gel reactions of non-silicates
Sequential infiltration synthesis (SIS)
In-situ Microwave pyrolysis method
Formation of organic polymers in presence of preformed inorganic materials
If the organic polymerization occurs in the presence of an inorganic material to form the hybrid material, one has to distinguish between several possibilities to overcome the incompatibility of the two species. The inorganic material can either have no surface functionalization but the bare material surface; it can be modified with nonreactive organic groups (e.g. alkyl chains); or it can contain reactive surface groups such as polymerizable functionalities. Depending on these prerequisites, the material can be pretreated; for example, a pure inorganic surface can be treated with surfactants or silane coupling agents to make it compatible with the organic monomers, or functional monomers can be added that react with the surface of the inorganic material. If the inorganic component has nonreactive organic groups attached to its surface and it can be dissolved in a monomer which is subsequently polymerized, the resulting material after the organic polymerization is a blend. In this case the inorganic component interacts only weakly or not at all with the organic polymer; hence, a class I material is formed. Homogeneous materials are only obtained in this case if agglomeration of the inorganic components in the organic environment is prevented. This can be achieved if the interactions between the inorganic components and the monomers are better than, or at least the same as, those between the inorganic components themselves. However, if no strong chemical interactions are formed, the long-term stability of a once homogeneous material is questionable because of diffusion effects in the resulting hybrid material. The stronger the respective interaction between the components, the more stable is the final material. The strongest interaction is achieved if class II materials are formed, for example with covalent interactions.
Hybrid materials by simultaneous formation of both components
Simultaneous formation of the inorganic and organic polymers can result in the most homogeneous type of interpenetrating networks. Usually the precursors for the sol–gel process are mixed with monomers for the organic polymerization and both processes are carried out at the same time with or without solvent. Applying this method, three processes are competing with each other:
(a) the kinetics of the hydrolysis and condensation forming the inorganic phase,
(b) the kinetics of the polymerization of the organic phase, and
(c) the thermodynamics of the phase separation between the two phases.
By tailoring the kinetics of the two polymerizations in such a way that they occur simultaneously and rapidly enough, phase separation is avoided or minimized. Additional parameters, such as attractive interactions between the two moieties as described above, can also be used to avoid phase separation.
One problem that also arises from the simultaneous formation of both networks is the sensitivity of many organic polymerization processes for sol–gel conditions or the composition of the materials formed. Ionic polymerizations, for example, often interact with the precursors or intermediates formed in the sol–gel process. Therefore, they are not usually applied in these reactions.
Applications
Decorative coatings obtained by the embedding of organic dyes in hybrid coatings.
Scratch-resistant coatings with hydrophobic or anti-fogging properties.
Nanocomposite based devices for electronic and optoelectronic applications including light-emitting diodes, photodiodes, solar cells, gas sensors and field effect transistors.
Fire retardant materials for construction industry.
Nanocomposite based dental filling materials.
Composite electrolyte materials for applications such as solid-state lithium batteries or supercapacitors.
Proton conducting membranes used in fuel cells.
Antistatic / anti-reflection coatings
Corrosion protection
Porous hybrid materials
References
Guido Kickelbick (Editor), Hybrid Materials: Synthesis, Characterization, and Applications, Wiley.
Inorganic-Organic Hybrid Materials
Nanomaterials
Composite materials | Hybrid material | [
"Physics",
"Materials_science"
] | 2,781 | [
"Composite materials",
"Materials",
"Nanotechnology",
"Nanomaterials",
"Matter"
] |
32,134,118 | https://en.wikipedia.org/wiki/Interruption%20science | Interruption science is the interdisciplinary scientific study concerned with how interruptions affect human performance, and the development of interventions to ameliorate the disruption caused by interruptions. Interruption science is a branch of human factors psychology and emerged from human–computer interaction and cognitive psychology.
Although interruptions are ubiquitous in life and intuitive as a concept, there are few formal definitions of interruption. A commonly agreed-upon definition proposed by Boehm-Davis and Remington specifies that an interruption is "the suspension of one stream of work prior to completion, with the intent of returning to and completing the original stream of work". Interruptions are considered to be on the spectrum of multitasking and in this context are referred to as sequential multitasking. The distinguishing feature of an interruption (see Task switching (psychology), concurrent multitasking) is the presence of a primary task which must be returned to upon completing a secondary interrupting task. For instance, talking on the phone while driving is generally considered an instance of concurrent multitasking; stopping a data entry task to check emails is generally considered an instance of an interruption.
Interruptions, in almost all instances, are disruptive to performance and induce errors. Therefore, interruption science typically examines the effects of interruptions in high-risk workplace environments such as aviation, medicine, and vehicle operation in which human error can have serious, potentially disastrous consequences. Interruptions are also explored in less safety-critical workplaces, such as offices, where interruptions can induce stress, anxiety, and poorer performance.
History
The first formal investigation into interruptions was conducted by Zeigarnik and Ovsiankina as part of the Vygotsky Circle in the 1920s. Their seminal research demonstrated the Zeigarnik effect: people remember uncompleted or interrupted tasks better than completed tasks. In the 1940s, Fitts and Jones reported that interruptions were a cause of pilot errors and flying accidents, and made recommendations on reducing these disruptive effects.
Theoretical models
Knowledge workers
Office workers face a number of interruptions due to information technologies such as e-mail, text messages, and phone calls. One line of research in interruption science examines the disruptive effects of these technologies and how to improve the usability and design of such devices. According to Gloria Mark, "the average knowledge worker switches tasks every three minutes, and, once distracted, a worker can take nearly a half-hour to resume the original task". Mark conducted a study on office workers, which revealed that "each employee spent only 11 minutes on any given project before being interrupted". Kelemen et al. showed that a team of programmers is interrupted through a technical Skype support chat up to 150 times a day, but these interruptions can be reduced by introducing a dispatcher role and a knowledge base.
Notifications
One of the major challenges associated with increased reliance on information technologies is they will send users notifications, without considering current task demands. Answering notifications impedes task performance and the ability to resume to the original task at hand. In addition, even just knowing that one has received a notification can negatively impact sustained attention.
Several solutions have been proposed to this problem. One study suggested entirely disabling email notifications. The downside was that it may induce pressure from a constant need to check one's email account. In fact, entirely removing notifications may lead people to spend more time checking their email. The absence of e-mail notifications is often seen as counterproductive because of the "catch-up" time required after long gaps between email checks. Alternatively, there have been several attempts to design software applications that deliver notifications when there is an identified break from work, or that categorize notifications based on their relative importance (e.g. Oasis).
Research has also investigated the effects of relevant interruptions, and found that notifications relevant to the current task are less disruptive than unrelated ones. Overall task performance is most impacted when an instant message is received during fast and stimulus-driven tasks such as typing, pressing buttons, or examining search results.
Bounded deferral is a restricted notification method in which users wait a prescribed amount of time before they access a notification, to reduce the amount of interruption and the decline in productivity. This technique was used with the aim of providing calmer and less disruptive workspaces. If users are busy, alerts and notifications are put aside and delivered only when users are in a position to receive notifications without harming their work. The bounded deferral method has proven to be useful and has the potential to become even more effective on a wider scale, as it has shown how an effective notification system can operate.
Medicine
In nursing, a study has been conducted of the impact of interruptions on nurses in a trauma center. Another study has been done on the interruption rates of nurses and doctors.
Interruptions caused by smartphone use in health-care settings can be deadly. Hence, it may be worthwhile for health care organizations to craft effective cellphone usage policies to maximize technological benefits and minimize the unnecessary distraction associated with smartphone use.
See also
Human multitasking
Ovsiankina effect
References
Further reading
Adamczyk P. D. & Bailey B. P. (2004) If not now, when?: The effects of interruption at different moments within task execution, in: Human Factors in Computing Systems: Proceedings of CHI'04, New York: ACM Press, 271-278
Bailey, B. P., Konstan, J. A., & Carlis, J. V. (2001). The Effects of Interruptions on Task Performance, Annoyance, and Anxiety in the User Interface. Proceedings of INTERACT '01, IOS Press, 593–601.
Cades, D. M., Davis, D. A. B., Trafton, J. G., & Monk, C. A. (2007). Does the difficulty of an interruption affect our ability to resume? In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 51, pp. 234–238). SAGE Publications.
Hodgetts H. M., Jones D. M. (2006) Interruption of the Tower of London task: Support for a goal-activation approach, Journal of Experimental Psychology: General. 135 (1): 103-115. https://doi.org/10.1037/0096-3445.135.1.103
Latorella, K. A. (1999). Investigating interruptions: Implications for flight deck performance (Technical Memorandum NASA/TM-1999-209707), (October).
Remington, R. W., & Loft, S. (2015). Attention and multitasking. APA Handbook of Human Systems Integration., (1918), 261–276. doi 10.1037/14528-017
Salvucci, D. D., & Taatgen, N. A. (2011). The multitasking mind. Oxford series on cognitive models and architectures.
Attention
Aviation safety
Human–computer interaction
Industrial and organizational psychology
Patient safety
Transport safety | Interruption science | [
"Physics",
"Engineering"
] | 1,587 | [
"Transport safety",
"Physical systems",
"Transport",
"Human–machine interaction",
"Human–computer interaction"
] |
32,134,316 | https://en.wikipedia.org/wiki/Zero%20Emission%20Hyper%20Sonic%20Transport | The Zero Emission Hyper Sonic Transport or ZEHST is a planned hypersonic passenger jet airliner project by the multinational aerospace conglomerate EADS and the Japanese national space agency JAXA.
On 18 June 2011, the ZEHST concept was unveiled by EADS at the Le Bourget Air Show. The envisioned vehicle uses a combination of three different types of engines, including relatively conventional turbofans, rocket motors, and scramjets to attain a maximum speed of Mach 4.5 (four and a half times the speed of sound). The ZEHST has been projected to carry between 50 and 100 passengers while flying at very high altitudes for greater efficiency.
Conceptually, the ZEHST has been promoted as a descendant of, or a successor to, Concorde, a supersonic airliner that was withdrawn from passenger routes in 2003. According to projections released, the ZEHST would be capable of flying between Paris and Tokyo in 2.5 hours, or between New York and London within one hour. During 2011, EADS stated its prediction that the ZEHST could be flying by 2050. In 2014, EADS said that the plane would not be ready for another 40 years.
Development
Even prior to the introduction of the Concorde supersonic airliner during the 1970s, there had been a desire within elements of the aviation industry to produce high-speed transport aircraft. Since the 1990s, several collaborative research efforts in the field have been financed in Europe. By the 2010s, both the American aerospace company Boeing and the multinational aerospace conglomerate EADS were reportedly working on separate plans to develop hypersonic aircraft. Such efforts have largely been confined to theoretical work; however, some progress has been observed over the decades: innovations have continued to be patented in the field, such as a mixed-propulsion arrangement awarded to EADS in 2010. Amongst other aspects, efforts have been made to reduce the noise generated by sonic booms, which are commonly produced by aircraft operating at supersonic speeds.
On 18 June 2011, EADS revealed the Zero Emission Hyper Sonic Transport (ZEHST) concept at the Le Bourget Air Show. As originally announced, the aircraft would combine three distinct propulsion systems: two turbofan engines for taxiing/take-off and up to Mach 0.8, then rocket boosters up to Mach 2.5, before switching to a pair of underwing scramjets to accelerate up to its maximum speed of Mach 4.5 (four and a half times the speed of sound). The fuel of these engines is envisaged to be a biofuel derived primarily from seaweed, along with a combination of oxygen and hydrogen. Largely due to this fuel composition, the aircraft has been referred to as a "green" aircraft that generates "almost zero emissions".
Furthermore, the ZEHST has an unusually high cruising altitude of 32 km above ground level, flying within the outer atmosphere (by comparison, conventional airliners cruise at around 11 km above ground level); this altitude was chosen principally because the thinner air generates less drag, which would otherwise slow the aircraft and decrease efficiency. The use of conventional turbofan engines during the take-off phase of flight would result in the ZEHST being no noisier than conventional airliners. While the ZEHST's configuration has not been finalised, EADS has commented that it believes Concorde's basic design remains a strong basis for the project.
In addition to EADS itself, much of the propulsion-related development work on the ZEHST project has been carried out in cooperation with the European missile specialist MBDA and the French national aerospace research centre ONERA. International engagement has also secured partners; the cooperative HIKARI R&D project is underway between Japanese and European agencies. The ZEHST is not the only such effort that the company has engaged in. By 2015, Airbus Group (as EADS had rebranded itself) was reportedly working on two separate hypersonic projects, one in conjunction with Japanese partners and the other with Russian and Australian involvement. That same year, company chief executive Tom Enders publicly stated his enthusiasm for Airbus to complete development of a hypersonic long-range passenger transport.
See also
Supercruise
Concorde
Tupolev Tu-144
SpaceLiner
Boeing 2707
Orient Express X-30 follow-on
Boeing Sonic Cruiser
HyperMach SonicStar
Reaction Engines A2
Skylon
Airbus Defence and Space Spaceplane
References
External links
Hydrogen-powered aircraft
Supersonic transports
Mixed-power aircraft
Rocket-powered aircraft
International proposed aircraft
Ramjet-powered aircraft | Zero Emission Hyper Sonic Transport | [
"Physics"
] | 936 | [
"Physical systems",
"Transport",
"Supersonic transports"
] |
52,067,295 | https://en.wikipedia.org/wiki/Discriminant%20Book | The Discriminant Book (German: Kenngruppenbuch; literally: Groups to identify the key to the receiver), shortened to K-Book (K. Buch), and also known as the indicator group book or identification group book, was a secret distribution list in booklet form which listed trigraphs in random order. The Kenngruppenbuch was introduced in May 1937 and used by the Kriegsmarine (German War Navy) during World War II as part of the Naval Enigma message encipherment procedure, to ensure secret and confidential communication between Karl Dönitz, the Commander of Submarines (BdU), and the German submarines operating in the Atlantic and in the Mediterranean. The Kenngruppenbuch was used in the generation of the Enigma message key that was transmitted within the message indicator. The Kenngruppenbuch was used from 5 October 1941 for the Enigma Model M3, and from 1 February 1942 exclusively for the Enigma M4. It must not be confused with the Kenngruppenheft, which was used with the Short Signal Book (German: Kurzsignalbuch).
History
The Kenngruppenbuch was a large document whose first edition came into force in 1938 and remained mostly unchanged when a second edition was released in 1941. The Zuteilungsliste, however, was continually updated. After 1 May 1937, the Kriegsmarine had stopped using an indicating system that repeated the message key within the indicator, a serious security flaw which was still in use by the Luftwaffe (German Air Force) and Heer (German Army) at the beginning of 1940; abandoning it made the Naval Enigma more secure. The introduction of the K Book was designed to avert this serious security flaw.
On 9 May 1941, a version of the K Book was recovered from U-boat U-110. Joan Clarke and her colleagues at Hut 8, the section at Bletchley Park tasked with solving German naval (Kriegsmarine) Enigma messages, noticed that German telegraphists were not acting in a random way, as they were supposed to, when making up the message indicator. Rather than selecting a random trigram out of the K Book, a telegraphist tended to choose a trigram from either the top of the column list or near the bottom and grouped in the middle. It was a problem that the Kriegsmarine later corrected with the introduction of new rules later in 1941.
Design
The Kenngruppenbuch consisted of two main parts. The first half consisted of the Column List (German: Spaltenliste), which contained all 17,576 trigrams (Kenngruppen), divided into 733 numbered columns of 24 trigrams displayed in random order. The second half consisted of the Group List (German: Gruppenliste), in which the trigrams are sorted in alphabetical order. After each trigram are two numbers, the first giving the number of the column in the Spaltenliste in which the trigram occurs, the second giving the position of the trigram in that column. The table pointer, or table selection chart (German: Tauschtafelplan), told the operator which column of a given table was used to select the required trigrams. The Assignation List (German: Zuteilungsliste) told the radio operator which table he should use for a particular cipher net. Large keys would be given several blocks of columns, small keys as few as 10.
Naval Enigma operation
Naval Enigma used an indicator to define a key mechanism, with the key being transmitted along with the ciphertext. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered by Naval Enigma. The exact method used was termed the indicator procedure. A properly self-reciprocal bipartite digraphic encryption algorithm was used for the super-encipherment of the indicators (German: Spruchschlüssel) with basic wheel settings. The Enigma cipher keys called Heimische Gewässer (English codename: Dolphin), (Plaice), Triton (Shark), Niobe (Narwhal) and Sucker all used the Kenngruppenbuch and bigram tables to build up the indicator. The indicator was built up as follows:
Two trigrams were chosen at random. The first trigraph was taken from the key identification group table (German: Schlüsselkenngruppe) in the Kenngruppenbuch, as determined by the Zuteilungsliste. The second trigraph was taken from the encryption indicator group or process characteristic group table (German: Verfahrenkenngruppe), also taken from the Kenngruppenbuch and also determined by the Zuteilungsliste.
For example:
S W Q - and R A F,
and arranged in the scheme:
∗ S W Q
R A F ∗
with each empty position filled with a random letter:
X S W Q
R A F P
Encipherment with a bigram table, called the double-letter conversion table (German: Doppelbuchstabentauschtafel) and applied to vertical pairs, proceeded as follows:
X→V S→G W→V Q→X
R→I A→F F→T P→T
which would give
V G V X
I F T T
This was read out vertically, giving:
VIGF VTXT
and this was sent without further encoding, preceding the encrypted message. The message was sent by Morse code, and on the receiving end the procedure was reversed. Nine bigram tables were known to exist, including FLUSS or FLUSZ (English: River). Other bigram booklets existed and were used, including BACH (1940), STROM (1941), TEICH and UFER.
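The worked example above can be reproduced with a short Python sketch. The bigram table below contains only the four vertical pairs needed for this example (a real Doppelbuchstabentauschtafel covered every possible bigram), the filler letters are the ones used above, and the function and variable names are illustrative.

# Illustrative reconstruction of the indicator build-up described above.
bigram_table = {"XR": "VI", "SA": "GF", "WF": "VT", "QP": "XT"}

def build_indicator(key_trigram, procedure_trigram, fillers=("X", "P")):
    top = fillers[0] + key_trigram           # e.g. X S W Q
    bottom = procedure_trigram + fillers[1]  # e.g. R A F P
    out_top, out_bottom = [], []
    for t, b in zip(top, bottom):
        sub = bigram_table[t + b]            # substitute each vertical letter pair
        out_top.append(sub[0])
        out_bottom.append(sub[1])
    # read out vertically, column by column
    groups = "".join(a + b for a, b in zip(out_top, out_bottom))
    return groups[:4] + " " + groups[4:]

print(build_indicator("SWQ", "RAF"))   # -> "VIGF VTXT", as in the worked example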
See also
Short Weather Cipher
Short Signal Book
References
Further reading
Arthur O. Bauer: Direction finding as Allied weapon against German submarines from 1939 to 1945. Selbstverlag, Diemen Netherlands 1997.
Friedrich L. Bauer : Decrypted Secrets. Methods and Maxims of Cryptology. 3rd revised and expanded edition. Springer, Berlin and others 2000 .
External links
Full scan (PDF; 120 pages; 71.5 MB) of an authentic Kenngruppenbuch
Cryptography
World War II military equipment of Germany
Signals intelligence of World War II | Discriminant Book | [
"Mathematics",
"Engineering"
] | 1,316 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
52,072,767 | https://en.wikipedia.org/wiki/Proton%20computed%20tomography | Proton computed tomography (pCT), or proton CT, is an imaging modality first proposed by Cormack in 1963, and initial experimental explorations identified several advantages over conventional X-ray CT (xCT). However, particle interactions such as multiple Coulomb scattering (MCS) and (in)elastic nuclear scattering events deflect the proton trajectory, resulting in nonlinear paths which can only be approximated via statistical assumptions, leading to lower spatial resolution than X-ray tomography. Further experiments were largely abandoned until the advent of proton radiation therapy in the 1990s, which renewed interest in the topic due to the potential benefits of imaging and treating patients with the same particle.
Description
Proton computed tomography (pCT) uses measurements of a proton's position/trajectory and energy before and after traversing an object to reconstruct an image of the object where each voxel represents the relative stopping power (RSP) of the material composition of the corresponding region of the object. The deviations of a proton's path inside the object are primarily due to interactions between the Coulomb fields of the proton and the nuclei in the absorbing material, resulting in many small-angle deflections as it passes through the object. Statistical models of the effect of MCS on the trajectory of a proton were developed to calculate the most likely path (MLP) of a proton given its entry and exit position/trajectory and corresponding uncertainty at intermediate depths within the object. Additional (in)elastic nuclear scattering events can also occur which cause larger angle deviations, which cannot easily be modeled, but these are fairly easy to identify and remove from consideration in the image reconstruction process.
With an approximate path of a proton through the object, one can then identify the voxels through which the proton passed, and the difference between entry and exit energy indicates the energy collectively deposited in these voxels. Assuming there are n voxels in the image, the distance ℓ_j the proton travels through voxel j varies along the path, and the amount of energy deposited in each voxel depends on this intersection length and the voxel's RSP, ρ_j. The total energy loss is then the line integral of RSP scaled by the intersection length, or approximately the sum Σ_j ℓ_j ρ_j over the intersected voxels.
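As a rough illustration of this discretised model, the Python sketch below evaluates the sum of intersection lengths multiplied by voxel RSP values for a single proton. The voxel indices, chord lengths and RSP values are made-up numbers, and a real reconstruction would instead treat the RSP values as unknowns in a large sparse linear system assembled from many proton histories.

import numpy as np

n_voxels = 4
rsp = np.array([1.00, 1.04, 0.95, 1.02])   # unknowns in a real reconstruction

# one proton's most-likely-path, reduced to (voxel index, chord length in cm)
path = [(0, 0.12), (1, 0.20), (2, 0.18), (3, 0.10)]

row = np.zeros(n_voxels)
for j, length in path:
    row[j] = length

wepl = row @ rsp     # discretised line integral, roughly the sum of l_j * rho_j
print(wepl)

# Stacking one such row per proton gives a sparse system A rho = b that is
# typically solved with iterative methods such as ART or its block variants.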
References
Further reading
"Use of Protons for Radiotherapy", A.M. Koehler, Proc. of the Symposium on Pion and Proton Radiotherapy, Nat. Accelerator Lab., (1971).
"Bragg Peak Proton Radiosurgery for Arteriovenous Malformation of the Brain" R.N. Kjelberg, presented at First Int. Seminar on the Use of Proton Beams in Radiation Therapy, Moscow (1977).
Austin-Seymor, M.J. Munzenrider, et al. "Fractionated Proton Radiation Therapy of Cranial and Intracrainial Tumors" American Journal of Clinical Oncology 13(4):327–330 (1990).
"Proton Radiotherapy", Hartford, Zietman, et al. in Radiotheraputic Management of Carcinoma of the Prostate, A. D'Amico and G.E. Hanks. London, UK, Arnold Publishers: 61–72 (1999).
External links
Proton therapy—MedlinePlus Medical Encyclopedia
Proton Therapy
"Proton therapy is coming to the UK, but what does it mean for patients?", Arney, Kat, Science blog, Cancer Research UK, 16 September 2013
Radiation therapy
Medical physics
Proton | Proton computed tomography | [
"Physics"
] | 704 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
26,492,417 | https://en.wikipedia.org/wiki/Hypercyclic%20operator | In mathematics, especially functional analysis, a hypercyclic operator on a topological vector space X is a continuous linear operator T: X → X such that there is a vector x ∈ X for which the sequence {T^n x : n = 0, 1, 2, …} is dense in the whole space X. In other words, the smallest closed invariant subset containing x is the whole space. Such an x is then called a hypercyclic vector.
There is no hypercyclic operator in finite-dimensional spaces, but the property of hypercyclicity in spaces of infinite dimension is not a rare phenomenon: many operators are hypercyclic.
The hypercyclicity is a special case of broader notions of topological transitivity (see topological mixing), and universality. Universality in general involves a set of mappings from one topological space to another (instead of a sequence of powers of a single operator mapping from X to X), but has a similar meaning to hypercyclicity. Examples of universal objects were discovered already in 1914 by Julius Pál, in 1935 by Józef Marcinkiewicz, and in 1952 by MacLane. However, it was not until the 1980s that hypercyclic operators started to be more intensively studied.
Examples
An example of a hypercyclic operator is two times the backward shift operator on the ℓ2 sequence space, that is, the operator which takes a sequence
(a1, a2, a3, …) ∈ ℓ2
to a sequence
(2a2, 2a3, 2a4, …) ∈ ℓ2.
This was proved in 1969 by Rolewicz.
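The idea behind such examples can be illustrated numerically. In the Python sketch below (a finite truncation, not a proof), a target vector y with finitely many non-zero entries is hidden in x at offset n with a factor 2^(-n); applying twice-the-backward-shift n times then recovers y exactly, which is the basic mechanism used to build orbits that come arbitrarily close to any prescribed sequence.

import numpy as np

def T(a):
    # twice the backward shift, acting on a finite truncation of a sequence
    return 2.0 * a[1:]

y = np.array([1.0, -2.0, 3.0])   # a finitely supported target vector
n = 5
x = np.zeros(n + len(y))
x[n:] = y / 2**n                 # hide a scaled copy of y at offset n

a = x.copy()
for _ in range(n):
    a = T(a)
print(a)                         # equals y: the orbit of x visits y after n steps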
Known results
On every infinite-dimensional separable Fréchet space there is a hypercyclic operator. On the other hand, there is no hypercyclic operator on a finite-dimensional space, nor on a non-separable space.
If x is a hypercyclic vector, then T^n x is hypercyclic as well, so there is always a dense set of hypercyclic vectors.
Moreover, the set of hypercyclic vectors is a connected Gδ set when X is a metrizable space, and always contains a dense vector space, up to {0}.
Read constructed an operator on ℓ1 such that all the non-zero vectors are hypercyclic, providing a counterexample to the invariant subspace problem (and even the invariant subset problem) in the class of Banach spaces. The problem of whether such an operator (sometimes called hypertransitive, or orbit transitive) exists on a separable Hilbert space is still open (as of 2022).
References
See also
Topological mixing
Functional analysis
Operator theory
Invariant subspaces | Hypercyclic operator | [
"Mathematics"
] | 557 | [
"Functional analysis",
"Functions and mappings",
"Mathematical relations",
"Mathematical objects"
] |
26,493,228 | https://en.wikipedia.org/wiki/Tolerability | Tolerability refers to the degree to which overt adverse effects of a drug can be tolerated by a patient. Tolerability of a particular drug can be discussed in a general sense, or it can be a quantifiable measurement as part of a clinical study. Usually, it is measured by the rate of "dropouts", or patients that forfeit participation in a study due to extreme adverse effects. Tolerability, however, is often relative to the severity of the medical condition a drug is designed to treat. For instance, cancer patients may tolerate significant pain or discomfort during a chemotherapeutic study with the hope of prolonging survival or finding a cure, whereas patients experiencing a benign condition, such as a headache, are less likely to.
As an example, tricyclic antidepressants (TCAs) are very poorly tolerated and often produce severe side effects including sedation, orthostatic hypotension, and anticholinergic effects, whereas newer antidepressants have far fewer adverse effects and are well tolerated.
Drug tolerability should not be confused with drug tolerance, which refers to subjects' reduced reaction to a drug following its repeated use.
See also
Side effect
References
Clinical pharmacology | Tolerability | [
"Chemistry"
] | 259 | [
"Pharmacology",
"Pharmacology stubs",
"Medicinal chemistry stubs",
"Clinical pharmacology"
] |
26,497,499 | https://en.wikipedia.org/wiki/Waring%E2%80%93Goldbach%20problem | The Waring–Goldbach problem is a problem in additive number theory, concerning the representation of integers as sums of powers of prime numbers. It is named as a combination of Waring's problem on sums of powers of integers, and the Goldbach conjecture on sums of primes. It was initiated by Hua Luogeng in 1938.
Problem statement
It asks whether large numbers can be expressed as a sum, with at most a constant number of terms, of like powers of primes. That is, for any given natural number k, is it true that for every sufficiently large integer N there necessarily exists a set of primes {p1, p2, ..., pt} such that N = p1^k + p2^k + ... + pt^k, where t is at most some constant value?
The case, k = 1, is a weaker version of the Goldbach conjecture. Some progress has been made on the cases k = 2 to 7.
Heuristic justification
By the prime number theorem, the number of k-th powers of a prime below x is of the order x^(1/k)/log x.
From this, the number of t-term expressions with sums ≤ x is roughly x^(t/k)/(log x)^t.
It is reasonable to assume that for some sufficiently large number t this is x − c, i.e., all numbers up to x are t-fold sums of k-th powers of primes. This argument is, of course, a long way from a strict proof.
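The first estimate can be checked numerically. The short Python sketch below counts primes p with p^k ≤ x using a simple sieve and compares the count with the prime number theorem approximation x^(1/k)/log(x^(1/k)); the parameter values are arbitrary examples.

import math

def count_prime_kth_powers_below(x, k):
    """Count primes p with p**k <= x, via a simple sieve up to x**(1/k)."""
    limit = int(round(x ** (1.0 / k)))
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return sum(1 for p, is_p in enumerate(sieve) if is_p and p ** k <= x)

x, k = 10**6, 2
actual = count_prime_kth_powers_below(x, k)
estimate = x ** (1 / k) / math.log(x ** (1 / k))   # prime number theorem heuristic
print(actual, round(estimate))    # 168 vs about 145 for x = 10**6, k = 2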
Relevant results
In his monograph, using and refining the methods of Hardy, Littlewood and Vinogradov, Hua Luogeng obtains an O(k² log k) upper bound for the number of terms required to exhibit all sufficiently large numbers as the sum of k-th powers of primes.
Every sufficiently large odd integer is the sum of 21 fifth powers of primes.
References
Additive number theory
Conjectures about prime numbers
Unsolved problems in number theory | Waring–Goldbach problem | [
"Mathematics"
] | 415 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Unsolved problems in number theory",
"Number theory"
] |
26,499,561 | https://en.wikipedia.org/wiki/Zero-energy%20universe | The zero-energy universe hypothesis proposes that the total amount of energy in the universe is exactly zero: its amount of positive energy in the form of matter is exactly canceled out by its negative energy in the form of gravity. Some physicists, such as Lawrence Krauss, Stephen Hawking or Alexander Vilenkin, call or called this state "a universe from nothingness", although the zero-energy universe model requires both a matter field with positive energy and a gravitational field with negative energy to exist. The hypothesis is broadly discussed in popular sources. Other cancellation examples include the expected symmetric prevalence of right- and left-handed angular momenta of objects ("spin" in the common sense), the observed flatness of the universe, the equal prevalence of positive and negative charges, opposing particle spin in quantum mechanics, as well as the crests and troughs of electromagnetic waves, among other possible examples in nature.
History
During World War II, Pascual Jordan first suggested that since the positive energy of a star's mass and the negative energy of its gravitational field together may have zero total energy, conservation of energy would not prevent a star being created by a quantum transition of the vacuum. George Gamow recounted putting this idea to Albert Einstein: "Einstein stopped in his tracks and, since we were crossing a street, several cars had to stop to avoid running us down". Elaboration of the concept was slow, with the first notable calculation being performed by Richard Feynman in 1962. The first known publication on the topic was in 1973, when Edward Tryon proposed in the journal Nature that the universe emerged from a large-scale quantum fluctuation of vacuum energy, resulting in its positive mass-energy being exactly balanced by its negative gravitational potential energy. In the subsequent decades, development of the concept was constantly plagued by the dependence of the calculated masses on the selection of the coordinate systems. In particular, a problem arises due to energy associated with coordinate systems co-rotating with the entire universe. A first constraint was derived in 1987 when Alan Guth published a proof of gravitational energy being negative. The question of the mechanism permitting generation of both positive and negative energy from null initial solution was not understood, and an ad hoc solution with cyclic time was proposed by Stephen Hawking in 1988.
In 1994, development of the theory resumed following the publication of a work by Nathan Rosen, in which Rosen described a special case of closed universe. In 1995, J.V. Johri demonstrated that the total energy of Rosen's universe is zero in any universe compliant with a Friedmann–Lemaître–Robertson–Walker metric, and proposed a mechanism of inflation-driven generation of matter in a young universe. The zero energy solution for Minkowski space representing an observable universe, was provided in 2009.
In his book Brief Answers to the Big Questions, Hawking explains:
Experimental constraints
Experimental proof for the observable universe being a "zero-energy universe" is currently inconclusive. Gravitational energy from visible matter accounts for 26–37% of the observed total mass–energy density. Therefore, to fit the concept of a "zero-energy universe" to the observed universe, other negative energy reservoirs besides gravity from baryonic matter are necessary. These reservoirs are frequently assumed to be dark matter.
See also
A Universe from Nothing
False vacuum
Heat death of the universe
Ultimate fate of the universe
References
Physical cosmology
0 (number)
Energy (physics)
Conservation equations | Zero-energy universe | [
"Physics",
"Astronomy",
"Mathematics"
] | 705 | [
"Symmetry",
"Astronomical sub-disciplines",
"Physical quantities",
"Conservation laws",
"Theoretical physics",
"Quantity",
"Mathematical objects",
"Astrophysics",
"Conservation equations",
"Equations",
"Energy (physics)",
"Wikipedia categories named after physical quantities",
"Physical cosm... |
26,500,128 | https://en.wikipedia.org/wiki/Sericin | Sericin is a protein created by Bombyx mori (silkworms) in the production of silk. Silk is a fibre produced by the silkworm in production of its cocoon. It consists mainly of two proteins, fibroin and sericin. Silk consists of 70–80% fibroin and 20–30% sericin; fibroin being the structural center of the silk, and sericin being the gum coating the fibres and allowing them to stick to each other.
Structure
Sericin is composed of 18 different amino acids, of which 32% is serine. The secondary structure is usually a random coil, but it can also be easily converted into a β-sheet conformation, via repeated moisture absorption and mechanical stretching. The serine hydrogen bonds give its glue-like quality. The genes encoding sericin proteins have been sequenced. Its C-terminal part contains many serine-rich repeats.
Using gamma ray examination, it was determined that sericin fibers are typically composed of three layers, each with fibers running in a different direction. The innermost layer is typically composed of longitudinally running fibers, the middle layer of fibers running across the fiber direction, and the outer layer of fibers running along the fiber direction. The overall structure also varies with temperature: at lower temperatures there are typically more β-sheet conformations than random amorphous coils. There are also three different types of sericin, which make up the layers found on top of the fibroin. Sericin A, which is insoluble in water, is the outermost layer and contains approximately 17% nitrogen, along with amino acids such as serine, threonine, aspartic acid, and glycine. Sericin B composes the middle layer and is nearly the same as sericin A, but also contains tryptophan. Sericin C is the innermost layer, the layer that comes closest to and is adjacent to fibroin. Also insoluble in water, sericin C can be separated from the fibroin by the addition of a hot, weak acid. Sericin C also contains the amino acids present in B, along with the addition of proline.
Applications
Sericin has also been used in medicine and cosmetics. Due to its elasticity and tensile strength, along with a natural affinity for keratin, sericin is primarily used in medicine for wound suturing. It also has natural infection resistance and excellent biocompatibility, and thus is commonly used as a wound coagulant as well. When used in cosmetics, sericin has been found to improve skin elasticity and several anti-aging factors, including an anti-wrinkle property, chiefly by minimizing water loss from the skin. To determine this, scientists ran several experimental procedures, including a hydroxyproline assay, impedance measurements, measurements of water loss from the epidermis, and scanning electron microscopy to analyze the rigidity and dryness of the skin. The presence of sericin increases hydroxyproline in the stratum corneum, which in turn decreases skin impedance, thus increasing skin moisture. Adding pluronic and carbopol, two other ingredients that can be included in sericin gels, helps restore natural moisturizing factors (NMF), minimizes water loss and, in turn, improves skin moisture.
See also
Silk amino acid
References
Sericulture
Proteins
Silk
Silk production
Insect proteins | Sericin | [
"Chemistry"
] | 747 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
26,502,557 | https://en.wikipedia.org/wiki/Negative%20room%20pressure | Negative room pressure is an isolation technique used in hospitals and medical centers to prevent cross-contamination from room to room. It includes a ventilation system that generates negative pressure (pressure lower than that of the surroundings) to allow air to flow into the isolation room but not escape from it, as air naturally flows from areas of higher pressure to areas of lower pressure, thereby preventing contaminated air from escaping the room. This technique is used to isolate patients with airborne contagious diseases such as influenza (flu), measles, chickenpox, tuberculosis (TB), severe acute respiratory syndrome (SARS-CoV), Middle East respiratory syndrome (MERS-CoV), and coronavirus disease 2019 (COVID-19).
Mechanism
Negative pressure is generated and maintained in a room by a ventilation system that continually attempts to move air out of the room. Replacement air is allowed into the room through a gap under the door (typically about one half-inch high). Except for this gap, the room is as airtight as possible, allowing little air in through cracks and gaps, such as those around windows, light fixtures and electrical outlets. Leakage from these sources can make it more difficult and less energy efficient to maintain room negative pressure.
Because generally there are components of the exhausted air such as chemical contaminants, microorganisms, or radioactive isotopes that would be unacceptable to release into the surrounding outdoor environment, the air outlet must, at a minimum, be located such that it will not expose people or other occupied spaces. Commonly it is exhausted out of the roof of the building. However, in some cases, such as with highly infectious microorganisms in biosafety level 4 rooms, the air must first be mechanically filtered or disinfected by ultraviolet irradiation or chemical means before being released to the surrounding outdoor environment. In the case of nuclear facilities, the air is monitored for the presence of radioactive isotopes and usually filtered before being exhausted through a tall exhaust duct to be released higher in the air away from occupied spaces.
Monitoring and guidelines
In 2003, the CDC published guidelines on infection control, which included recommendations regarding negative pressure isolation rooms. Still absent from the CDC are recommendations of acute negative pressure isolation room monitoring. This has led to hospitals developing their own policies, such as the Cleveland Clinic. Commonly used methods for acute monitoring include the smoke or tissue test and periodic (noncontinuous) or continuous electronic pressure monitoring.
Smoke/tissue test
This test uses smoke or tissue paper to assess room pressurization. A capsule of smoke or a tissue is placed near the bottom of the door, if the smoke or tissue is pulled under the door, the room is negatively pressurized. The advantages of this test are that it is cost efficient and easily performed by hospital staff. The disadvantages are that it is not a continuous test and that it does not measure magnitude. Without a measure for magnitude, isolation rooms may be under- or over-pressurized, even though the smoke/tissue test is positive. A 1994 CDC recommendation stated TB isolation rooms should be checked daily for negative pressure while being used for TB isolation. If these rooms are not being used for patients who have suspected or confirmed TB but potentially could be used for such patients, the negative pressure in the rooms should be checked monthly.
Continuous electronic pressure monitoring
This test uses an electronic device with a pressure port in the isolation room and an isolation port in the corridor to continuously monitor the pressure differential between the spaces. The advantages of this type of monitoring are that the test is continuous and an alarm will alert staff to undesirable pressure changes. The disadvantages of this monitoring are that pressure ports can become contaminated with particulates which can lead to inaccuracy and false alarms, the devices are expensive to purchase and install, and staff must be trained to use and calibrate these devices because the pressure differentials used to achieve the low negative pressure necessitate the use of very sensitive mechanical devices, electronic devices, or pressure gauges to ensure accurate measurements.
See also
Airborne infection isolation room
References
Medical hygiene
Infectious diseases
Pressure
Isolation (health care) | Negative room pressure | [
"Physics"
] | 842 | [
"Scalar physical quantities",
"Mechanical quantities",
"Physical quantities",
"Pressure",
"Wikipedia categories named after physical quantities"
] |
26,504,015 | https://en.wikipedia.org/wiki/Fragmentation%20%28mass%20spectrometry%29 | In mass spectrometry, fragmentation is the dissociation of energetically unstable molecular ions formed from passing the molecules through the ionization source of a mass spectrometer. These reactions have been well documented over the decades, and fragmentation patterns are useful for determining the molecular weight and structural information of unknown molecules. Fragmentation that occurs in tandem mass spectrometry experiments has been a recent focus of research, because this data helps facilitate the identification of molecules.
Mass spectrometry techniques
Fragmentation can occur in the ion source (in-source fragmentation), where it has been used with electron ionization to help identify molecules and, recently (2020), with electrospray ionization it has been shown to provide the same benefit in facilitating molecular identification. Prior to these experiments, electrospray ionization in-source fragmentation was generally considered an undesired effect; however, electrospray ionization using Enhanced In-Source Fragmentation/Annotation (EISA) has been shown to promote in-source fragmentation that creates fragment ions consistent with those produced by tandem mass spectrometers. Tandem mass spectrometry-generated fragmentation typically occurs in the collision zone (post-source fragmentation) of a tandem mass spectrometer. EISA and collision-induced dissociation (CID), among other physical events that impact ions, are part of gas-phase ion chemistry. A few different types of mass fragmentation are:
collision-induced dissociation (CID) through collision with neutral molecule,
surface-induced dissociation (SID) using fast moving ions collision with a solid surface,
laser induced dissociation which uses laser to induce the ion formation,
electron-capture dissociation (ECD) due to capturing of low energy electrons,
electron-transfer dissociation (ETD) through electron transfer between ions,
negative electron-transfer dissociation (NETD),
electron-detachment dissociation (EDD),
photodissociation, particularly infrared multiphoton dissociation (IRMPD) using IR radiation for the bombardment and blackbody infrared radiative dissociation (BIRD) which use IR radiation instead of laser,
higher-energy C-trap dissociation (HCD), EISA, and
charge remote fragmentation.
Fragmentation reactions
Fragmentation is a type of chemical dissociation, in which the removal of the electron from the molecule results in ionization. Removal of electrons from either sigma bond, pi bond or nonbonding orbitals causes the ionization. This can take place by a process of homolytic cleavage or homolysis or heterolytic cleavage or heterolysis of the bond. Relative bond energy and the ability to undergo favorable cyclic transition states affect the fragmentation process. Rules for the basic fragmentation processes are given by Stevenson's Rule.
Two major categories of bond cleavage patterns are simple bond cleavage reactions and rearrangement reactions.
Simple bond cleavage reactions
Majority of organic compounds undergo simple bond cleavage reactions, in which direct cleavage of bond take place. Sigma bond cleavage, radical site-initiated fragmentation, and charge site-initiated fragmentation are few types of simple bond cleavage reactions.
Sigma bond cleavage / σ-cleavage
Sigma bond cleavage is most commonly observed in molecules that can produce stable cations, such as saturated alkanes or secondary and tertiary carbocations. This occurs when an alpha electron is removed. The C-C bond elongates and weakens causing fragmentation. Fragmentation at this site produces a charged and a radical fragment.
Radical site-initiated fragmentation
Sigma bond cleavage also occurs on radical cations remote from the site of ionization. This is commonly observed in alcohols, ethers, ketones, esters, amines, alkenes, and aromatic compounds with a carbon attached to ring. The cation has a radical on a heteroatom or an unsaturated functional group. The driving force of fragmentation is the strong tendency of the radical ion for electron pairing. Cleavage occurs when the radical and an odd electron from the bonds adjacent to the radical migrate to form a bond between the alpha carbon and either the heteroatom or the unsaturated functional group. The sigma bond breaks; hence this cleavage is also known as homolytic bond cleavage or α-cleavage.
Charge site-initiated cleavage
The driving force of charge site-initiated fragmentation is the inductive effect of the charge site in radical cations. The electrons from the bond adjacent to the charge-bearing atom migrate to that atom, neutralizing the original charge and causing it to move to a different site. This term is also called inductive cleavage and is an example of heterolytic bond cleavage.
Rearrangement reactions
Rearrangement reactions are fragmentation reactions that form new bonds producing an intermediate structure before cleavage. One of the most studied rearrangement reaction is the McLafferty rearrangement / γ-hydrogen rearrangement. This occurs in the radical cations with unsaturated functional groups, like ketones, aldehydes, carboxylic acids, esters, amides, olefins, phenylalkanes. During this reaction, γ-hydrogen will transfer to the functional group at first and then subsequent α, β-bond cleavage of the intermediate will take place. Other rearrangement reactions include heterocyclic ring fission (HRF), benzofuran forming fission (BFF), quinone methide (QM) fission or Retro Diels-Alder (RDA).
See also
Mass chromatogram
Mass spectral interpretation
Mass spectrum analysis
Tandem mass spectrometry
References
External links
Fragmentation patterns in the mass spectra of organic compounds
A tutorial in small molecule identification via electrospray ionization-mass spectrometry: The practical art of structural elucidation
Tandem mass spectrometry | Fragmentation (mass spectrometry) | [
"Physics"
] | 1,177 | [
"Mass spectrometry",
"Spectrum (physical sciences)",
"Tandem mass spectrometry"
] |
31,149,664 | https://en.wikipedia.org/wiki/Dufour%20effect | The Dufour effect is the energy flux due to a mass concentration gradient occurring as a coupled effect of irreversible processes, named after L. Dufour. It is the reciprocal phenomenon to the Soret effect. The concentration gradient results in a temperature change. For binary liquid mixtures, the Dufour effect is usually considered negligible, whereas in binary gas mixtures the effect can be significant.
References
Thermodynamics | Dufour effect | [
"Physics",
"Chemistry",
"Mathematics"
] | 93 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Dynamical systems"
] |
31,150,618 | https://en.wikipedia.org/wiki/AKLT%20model | In condensed matter physics, the AKLT model, also known as the Affleck-Kennedy-Lieb-Tasaki model, is an extension of the one-dimensional quantum Heisenberg spin model. The proposal and exact solution of this model by Ian Affleck, Elliott H. Lieb, Tom Kennedy and Hal Tasaki provided crucial insight into the physics of the spin-1 Heisenberg chain. It has also served as a useful example for such concepts as valence bond solid order, symmetry-protected topological order and matrix product state wavefunctions.
Background
A major motivation for the AKLT model was the Majumdar–Ghosh chain. Because two out of every set of three neighboring spins in a Majumdar–Ghosh ground state are paired into a singlet, or valence bond, the three spins together can never be found to be in a spin 3/2 state. In fact, the Majumdar–Ghosh Hamiltonian is nothing but the sum of all projectors of three neighboring spins onto a 3/2 state.
The main insight of the AKLT paper was that this construction could be generalized to obtain exactly solvable models for spin sizes other than 1/2. Just as one end of a valence bond is a spin 1/2, the ends of two valence bonds can be combined into a spin 1, three into a spin 3/2, etc.
Definition
Affleck et al. were interested in constructing a one-dimensional state with a valence bond between every pair of sites. Because this leads to two spin 1/2s for every site, the result must be the wavefunction of a spin 1 system.
For every adjacent pair of the spin 1s, two of the four constituent spin 1/2s are stuck in a total spin zero state. Therefore, each pair of spin 1s is forbidden from being in a combined spin 2 state.
By writing this condition as a sum of projectors that favor the spin 2 state of pairs of spin 1s, AKLT arrived at the following Hamiltonian,
H = Σ_j [ S_j · S_{j+1} + (1/3)(S_j · S_{j+1})² ],
which equals, up to a constant, 2 Σ_j P^(2)_{j,j+1}, where the S_j are spin-1 operators and P^(2)_{j,j+1} is the local 2-point projector that favors the spin 2 state of an adjacent pair of spins.
This Hamiltonian is similar to the spin 1, one-dimensional quantum Heisenberg spin model but has an additional "biquadratic" spin interaction term.
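The relation between the bond term and the spin-2 projector can be verified with a small numerical check. The Python sketch below builds the spin-1 operators on two sites and confirms that S_j · S_{j+1} + (1/3)(S_j · S_{j+1})² takes only two values: −2/3 on the total-spin-0 and spin-1 states and +4/3 on the total-spin-2 states, i.e. it equals 2P^(2) − 2/3.

import numpy as np

# Spin-1 operators in the basis {|+1>, |0>, |-1>}
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

# Two-site bond term S_j . S_{j+1} + (1/3)(S_j . S_{j+1})^2
SS = sum(np.kron(s, s) for s in (Sx, Sy, Sz))
h_bond = SS + (SS @ SS) / 3

# Eigenvalues: -2/3 on the four spin-0/spin-1 states, +4/3 on the five spin-2 states
print(sorted(set(np.round(np.linalg.eigvalsh(h_bond), 6))))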
Ground state
By construction, the ground state of the AKLT Hamiltonian is the valence bond solid with a single valence bond connecting every neighboring pair of sites. Pictorially, this may be represented as follows.
Here the solid points represent spin 1/2s which are put into singlet states. The lines connecting the spin 1/2s are the valence bonds indicating the pattern of singlets. The ovals are projection operators which "tie" together two spin 1/2s into a single spin 1, projecting out the spin 0 or singlet subspace and keeping only the spin 1 or triplet subspace. The symbols "+", "0" and "−" label the standard spin 1 basis states (eigenstates of the operator).
Spin 1/2 edge states
For the case of spins arranged in a ring (periodic boundary conditions) the AKLT construction yields a unique ground state. But for the case of an open chain, the first and
last spin 1 have only a single neighbor, leaving one of their constituent spin 1/2s unpaired. As a result, the ends of the chain behave like free spin 1/2 moments even though
the system consists of spin 1s only.
The spin 1/2 edge states of the AKLT chain can be observed in a few different ways. For short chains, the edge states mix into a singlet or a triplet giving either a unique ground state or a three-fold multiplet of ground states. For longer chains, the edge states decouple exponentially quickly as a function of chain length leading to a ground state manifold that is four-fold degenerate. By using a numerical method such as DMRG to measure the local magnetization along the chain, it is also possible to see the edge states directly and to show that they can be removed by placing actual spin 1/2s at the ends. It has even proved possible to detect the spin 1/2 edge states in measurements of a quasi-1D magnetic compound containing a small amount of impurities whose role is to break the chains into finite segments. In 2021, a direct spectroscopic signature of spin 1/2 edge states was found in isolated quantum spin chains built out of triangulene, a spin 1 polycyclic aromatic hydrocarbon.
Matrix product state representation
The simplicity of the AKLT ground state allows it to be represented in compact form as a matrix product state.
This is a wavefunction of the form
|ψ⟩ = Σ_{m1, …, mN} Tr[ A^{m1} A^{m2} ⋯ A^{mN} ] |m1, m2, …, mN⟩.
Here the A^m are a set of three 2×2 matrices labeled by the spin-1 basis state m ∈ {+, 0, −}, and the trace comes from assuming periodic boundary conditions.
The AKLT ground state wavefunction corresponds to the choice (in one common normalization)
A^{+} = √(2/3) σ^{+},  A^{0} = −√(1/3) σ^{z},  A^{−} = −√(2/3) σ^{−},
where each σ is a Pauli matrix (σ^{±} = (σ^{x} ± iσ^{y})/2).
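A quick numerical check of this normalization (conventions differ between papers, so the exact prefactors are one common choice rather than a universal standard) confirms the right-canonical condition Σ_m (A^m)† A^m = 1 and the transfer-matrix eigenvalues 1 and −1/3, whose ratio gives the known correlation length 1/ln 3.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=float)
sp = (sx + 1j * sy) / 2          # sigma^+
sm = (sx - 1j * sy) / 2          # sigma^-

A = {
    "+": np.sqrt(2 / 3) * sp,
    "0": -np.sqrt(1 / 3) * sz,
    "-": -np.sqrt(2 / 3) * sm,
}

# Right-canonical condition: sum_m A_m^dagger A_m = identity
total = sum(a.conj().T @ a for a in A.values())
print(np.allclose(total, np.eye(2)))   # True

# Transfer matrix eigenvalues 1 and -1/3 (threefold) give correlation length 1/ln(3)
E = sum(np.kron(a, a.conj()) for a in A.values())
print(np.round(np.linalg.eigvals(E), 3))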
Generalizations and extensions
The AKLT model has been solved on lattices of higher dimension, even in quasicrystals. The model has also been constructed for higher Lie algebras including SU(n), SO(n), Sp(n) and extended to the quantum groups SUq(n).
References
Spin models
Statistical mechanics
Quantum magnetism
Lattice models | AKLT model | [
"Physics",
"Materials_science"
] | 1,107 | [
"Spin models",
"Quantum mechanics",
"Lattice models",
"Computational physics",
"Quantum magnetism",
"Condensed matter physics",
"Statistical mechanics"
] |
31,151,470 | https://en.wikipedia.org/wiki/Bennett%27s%20inequality | In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount. Bennett's inequality was proved by George Bennett of the University of New South Wales in 1962.
Statement
Let X_1, …, X_n be independent random variables with finite variance. Further assume that X_i − E[X_i] ≤ a almost surely for all i, and define S_n = Σ_{i=1}^{n} (X_i − E[X_i]) and σ² = Σ_{i=1}^{n} Var(X_i).
Then for any t ≥ 0,
P(S_n ≥ t) ≤ exp( −(σ²/a²) h(at/σ²) ),
where h(u) = (1 + u) log(1 + u) − u and log denotes the natural logarithm.
Generalizations and comparisons to other bounds
For generalizations see Freedman (1975) and Fan, Grama and Liu (2012) for a martingale version of Bennett's inequality and its improvement, respectively.
Hoeffding's inequality only assumes the summands are bounded almost surely, while Bennett's inequality offers some improvement when the variances of the summands are small compared to their almost sure bounds. However Hoeffding's inequality entails sub-Gaussian tails, whereas in general Bennett's inequality has Poissonian tails.
Bennett's inequality is most similar to the Bernstein inequalities, the first of which also gives concentration in terms of the variance and almost sure bound on the individual terms. Bennett's inequality is stronger than this bound, but more complicated to compute.
In both inequalities, unlike some other inequalities or limit theorems, there is no requirement that the component variables have identical or similar distributions.
Example
Suppose that each X_i is an independent Bernoulli random variable with success probability p, so that one may take a = 1 and σ² = np(1 − p). Then Bennett's inequality says that, for any t ≥ 0,
P( X_1 + ⋯ + X_n − np ≥ t ) ≤ exp( −np(1 − p) h( t/(np(1 − p)) ) ).
By contrast, Hoeffding's inequality gives a bound of exp(−2t²/n) and the first Bernstein inequality gives a bound of exp(−t²/(2(np(1 − p) + t/3))). Bennett's bound is always at least as tight as Bernstein's, and when the variance np(1 − p) is small compared with n (for example, when p is small) it is also considerably tighter than Hoeffding's.
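The three bounds can be compared numerically. In the Python sketch below the parameter values n, p and t are illustrative choices, not values taken from any particular source; the formulas are the standard ones stated above.

import math

def bennett_h(u):
    return (1 + u) * math.log(1 + u) - u

n, p, t = 1000, 0.01, 10
a = 1.0                      # each centred summand is at most 1 - p <= 1
var = n * p * (1 - p)        # total variance sigma^2

bennett   = math.exp(-(var / a**2) * bennett_h(a * t / var))
bernstein = math.exp(-t**2 / (2 * (var + a * t / 3)))
hoeffding = math.exp(-2 * t**2 / n)      # summands take values in [0, 1]

print(bennett, bernstein, hoeffding)
# With a small variance relative to the range, Bennett gives the smallest bound.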
See also
Concentration inequality - a summary of tail-bounds on random variables.
References
Probabilistic inequalities | Bennett's inequality | [
"Mathematics"
] | 385 | [
"Theorems in probability theory",
"Probabilistic inequalities",
"Inequalities (mathematics)"
] |
31,153,340 | https://en.wikipedia.org/wiki/Active%20zone | The active zone or synaptic active zone is a term first used by Couteaux and Pecot-Dechavassinein in 1970 to define the site of neurotransmitter release. Two neurons make near contact through structures called synapses allowing them to communicate with each other. As shown in the adjacent diagram, a synapse consists of the presynaptic bouton of one neuron which stores vesicles containing neurotransmitter (uppermost in the picture), and a second, postsynaptic neuron which bears receptors for the neurotransmitter (at the bottom), together with a gap between the two called the synaptic cleft (with synaptic adhesion molecules, SAMs, holding the two together). When an action potential reaches the presynaptic bouton, the contents of the vesicles are released into the synaptic cleft and the released neurotransmitter travels across the cleft to the postsynaptic neuron (the lower structure in the picture) and activates the receptors on the postsynaptic membrane.
The active zone is the region in the presynaptic bouton that mediates neurotransmitter release and is composed of the presynaptic membrane and a dense collection of proteins called the cytomatrix at the active zone (CAZ). The CAZ is seen under the electron microscope to be a dark (electron dense) area close to the membrane. Proteins within the CAZ tether synaptic vesicles to the presynaptic membrane and mediate synaptic vesicle fusion, thereby allowing neurotransmitter to be released reliably and rapidly when an action potential arrives.
Function
The function of the active zone is to ensure that neurotransmitters can be reliably released in a specific location of a neuron and only released when the neuron fires an action potential.
As an action potential propagates down an axon it reaches the axon terminal called the presynaptic bouton. In the presynaptic bouton, the action potential activates calcium channels (VDCCs) that cause a local influx of calcium. The increase in calcium is detected by proteins in the active zone and forces vesicles containing neurotransmitter to fuse with the membrane. This fusion of the vesicles with the membrane releases the neurotransmitters into the synaptic cleft (space between the presynaptic bouton and the postsynaptic membrane). The neurotransmitters then diffuse across the cleft and bind to ligand gated ion channels and G-protein coupled receptors on the postsynaptic membrane. The binding of neurotransmitters to the postsynaptic receptors then induces a change in the postsynaptic neuron. The process of releasing neurotransmitters and binding to the postsynaptic receptors to cause a change in the postsynaptic neuron is called neurotransmission.
Structure
The active zone is present in all chemical synapses examined so far and is present in all animal species. The active zones examined so far have at least two features in common, they all have protein dense material that project from the membrane and tethers synaptic vesicles close to the membrane and they have long filamentous projections originating at the membrane and terminating at vesicles slightly farther from the presynaptic membrane. The protein dense projections vary in size and shape depending on the type of synapse examined. One striking example of the dense projection is the ribbon synapse (see below) which contains a "ribbon" of protein dense material that is surrounded by a halo of synaptic vesicles and extends perpendicular to the presynaptic membrane and can be as long as 500 nm. The glutamate synapse contains smaller pyramid like structures that extend about 50 nm from the membrane. The neuromuscular synapse contains two rows of vesicles with a long proteinaceous band between them that is connected to regularly spaced horizontal ribs extending perpendicular to the band and parallel with the membrane. These ribs are then connected to the vesicles which are each positioned above a peg in the membrane (presumably a calcium channel). Previous research indicated that the active zone of glutamatergic neurons contained a highly regular array of pyramid shaped protein dense material and indicated that these pyramids were connected by filaments. This structure resembled a geometric lattice where vesicles were guided into holes of the lattice. This attractive model has come into question by recent experiments. Recent data shows that the glutamatergic active zone does contain the dense protein material projections but these projections were not in a regular array and contained long filaments projecting about 80 nm into the cytoplasm.
There are at least five major scaffold proteins that are enriched in the active zone; UNC13B/Munc13, RIMS1 (Rab3-interacting molecule), Bassoon, Piccolo/aczonin, ELKS, and liprins-α. These scaffold proteins are thought to be the constituents of the dense pyramid like structures of the active zone and are thought to bring the synaptic vesicles into close proximity to the presynaptic membrane and the calcium channels. The protein ELKS binds to the cell adhesion protein, β-neurexin, and other proteins within the complex such as Piccolo and Bassoon. β-neurexin then binds to cell adhesion molecule, neuroligin located on the postsynaptic membrane. Neuroligin then interacts with proteins that bind to postsynaptic receptors. Protein interactions like that seen between Piccolo/ELKS/β-neurexin/neuroligin ensures that machinery that mediates vesicle fusion is in close proximity to calcium channels and that vesicle fusion is adjacent to postsynaptic receptors. This close proximity vesicle fusion and postsynaptic receptors ensures that there is little delay between the activation of the postsynaptic receptors and the release of neurotransmitters.
Neurotransmitter release mechanism
The release of neurotransmitter is accomplished by the fusion of neurotransmitter vesicles to the presynaptic membrane. Although the details of this mechanism are still being studied, there is a consensus on some details of the process. Synaptic vesicle fusion with the presynaptic membrane is known to require a local increase of calcium from as few as a single, closely associated calcium channel, and the formation of highly stable SNARE complexes. One prevailing model of synaptic vesicle fusion is that SNARE complex formation is catalyzed by the proteins of the active zone such as Munc18, Munc13, and RIM. The formation of this complex is thought to "prime" the vesicle to be ready for vesicle fusion and release of neurotransmitter (see below: releasable pool). After the vesicle is primed, complexin binds to the SNARE complex; the vesicle is then said to be "superprimed". The vesicles that are superprimed are within the readily releasable pool (see below) and are ready to be rapidly released. The arrival of an action potential opens voltage-gated calcium channels near the SNARE/complexin complex. Calcium then binds and changes the conformation of synaptotagmin. This change in conformation allows synaptotagmin to dislodge complexin, bind to the SNARE complex, and bind to the target membrane. When synaptotagmin binds to both the SNARE complex and the membrane, this exerts a mechanical force on the membrane that causes the vesicle membrane and presynaptic membrane to fuse. This fusion opens a membrane pore that releases the neurotransmitter. The pore increases in size until the entire vesicle membrane is indistinguishable from the presynaptic membrane.
Synaptic vesicle cycle
The presynaptic bouton has an efficiently orchestrated process to fuse vesicles to the presynaptic membrane to release neurotransmitters and regenerate neurotransmitter vesicles. This process called the synaptic vesicle cycle maintains the number of vesicles in the presynaptic bouton and allows the synaptic terminal to be an autonomous unit. The cycle begins with (1) a region of the golgi apparatus is pinched off to form the synaptic vesicle and this vesicle is transported to the synaptic terminal. At the terminal (2) the vesicle is filled with neurotransmitter. (3) The vesicle is transported to the active zone and docked in close proximity to the plasma membrane. (4) During an action potential the vesicle is fused with the membrane, releases the neurotransmitter and allows the membrane proteins previously on the vesicle to diffuse to the periactive zone. (5) In the periactive zone the membrane proteins are sequestered and are endocytosed forming a clathrin coated vesicle. (6) The vesicle is then filled with neurotransmitter and is then transported back to the active zone.
The endocytosis mechanism is slower than the exocytosis mechanism. This means that in intense activity the vesicle in the terminal can become depleted and no longer available to be released. To help prevent the depletion of synaptic vesicles the increase in calcium during intense activity can activate calcineurin which dephosphorylate proteins involved in clathrin-mediated endocytosis.
Vesicle pools
The synapse contains at least two clusters of synaptic vesicles, the readily releasable pool and the reserve pool. The readily releasable pool is located within the active zone and connected directly to the presynaptic membrane while the reserve pool is clustered by cytoskeletal and is not directly connected to the active zone.
Releasable pool
The releasable pool is located in the active zone and is bound directly to the presynaptic membrane. It is stabilized by proteins within the active zone and bound to the presynaptic membrane by SNARE proteins. These vesicles are ready to release by a single action potential and are replenished by vesicles from the reserve pool. The releasable pool is sometimes subdivided into the readily releasable pool and the releasable pool.
Reserve pool
The reserve pool is not directly connected to the active zone. The increase in presynaptic calcium concentration activates calcium–calmodulin-dependent protein kinase (CaMK). CaMK phosphorylates a protein, synapsin, that mediates the clustering of the reserve pool vesicles and attachment to the cytoskeleton. Phosphorylation of synapsin mobilizes vesicles in the reserve pool and allows them to migrate to the active zone and replenish the readily releasable pool.
Periactive zone
The periactive zone surrounds the active zone and is the site of endocytosis of the presynaptic terminal. In the periactive zone, scaffolding proteins such as intersectin 1 recruit proteins that mediate endocytosis such as dynamin, clathrin and endophilin. In Drosophila the intersectin homolog, Dap160, is located in the periactive zone of the neuromuscular junction and mutant Dap160 deplete synaptic vesicles during high frequency stimulation.
Ribbon synapse active zone
The ribbon synapse is a special type of synapse found in sensory neurons such as photoreceptor cells, retinal bipolar cells, and hair cells. Ribbon synapses contain a dense protein structure that tethers an array of vesicles perpendicular to the presynaptic membrane. In an electron micrograph it appears as a ribbon-like structure perpendicular to the membrane. Unlike the 'traditional' synapse, ribbon synapses can maintain a graded release of vesicles: the more depolarized the neuron, the higher the rate of vesicle fusion. The ribbon synapse active zone is separated into two regions, the archiform density and the ribbon. The archiform density is the site of vesicle fusion, and the ribbon stores the releasable pool of vesicles. The ribbon structure is composed primarily of the protein RIBEYE, about 64–69% of the ribbon volume, and is tethered to the archiform density by scaffolding proteins such as Bassoon.
Proteins
Measuring neurotransmitter release
Neurotransmitter release can be measured by determining the amplitude of the postsynaptic potential after triggering an action potential in the presynaptic neuron. Measuring neurotransmitter release this way can be problematic because the response of the postsynaptic neuron to the same amount of released neurotransmitter can change over time. Another way is to measure vesicle fusion with the presynaptic membrane directly using a patch pipette. A cell membrane can be thought of as a capacitor in that positive and negative ions are stored on both sides of the membrane. The larger the area of the membrane, the more ions are necessary to hold the membrane at a given potential. In electrophysiology this means that a current injection into the terminal will take less time to charge the membrane to a given potential before vesicle fusion than after it. When the time course of charging the membrane to a given potential and the resistance of the membrane are measured, the capacitance of the membrane can be calculated from the relation capacitance = time constant / resistance (C = τ/R). With this technique researchers can measure synaptic vesicle release directly by measuring increases in the membrane capacitance of the presynaptic terminal.
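As a rough illustration of the capacitance method described above, the sketch below computes the membrane capacitance from a measured charging time constant and membrane resistance (C = τ/R), and the capacitance step expected when a single vesicle fuses. All numerical values (time constant, resistance, vesicle diameter, specific membrane capacitance) are assumed, order-of-magnitude figures, not measurements from this article.

```python
# Illustrative estimate of membrane capacitance from a charging time constant
# (C = tau / R), and of the capacitance step expected when a single vesicle
# fuses with the presynaptic membrane. All numbers are assumed, typical values.
import math

tau = 1e-3          # s, assumed membrane charging time constant
resistance = 100e6  # ohm, assumed membrane resistance
capacitance = tau / resistance
print(f"membrane capacitance ~ {capacitance * 1e12:.1f} pF")    # ~10 pF

# Capacitance added by one fused vesicle: sphere surface area (pi * d^2)
# times the specific membrane capacitance.
vesicle_diameter = 40e-9      # m, typical synaptic vesicle diameter (assumed)
specific_capacitance = 1e-2   # F/m^2 (~1 uF/cm^2, assumed)
vesicle_area = math.pi * vesicle_diameter ** 2
delta_C = vesicle_area * specific_capacitance
print(f"capacitance step per vesicle ~ {delta_C * 1e18:.0f} aF")  # tens of aF
```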
See also
Paired pulse facilitation
Postsynaptic density
References
Neurophysiology
Cellular neuroscience
Cell signaling
Signal transduction
Molecular neuroscience | Active zone | [
"Chemistry",
"Biology"
] | 2,978 | [
"Signal transduction",
"Molecular neuroscience",
"Molecular biology",
"Biochemistry",
"Neurochemistry"
] |
36,333,997 | https://en.wikipedia.org/wiki/Rising%20moving%20average | The rising moving average is a technical indicator used in stock market trading. Most commonly found visually, the pattern is spotted with a moving average overlay on a stock chart or price series. When the moving average has been rising consecutively for a number of days, this is used as a buy signal, to indicate a rising trend forming.
While the rising moving average indicator is often used by investors without their realising it, there has been significant backtesting on historic stock data to measure its performance. Simulations have found that shorter rising averages, within the 3- to 10-day period, are more profitable overall than longer rising averages (e.g. 20 days). However, these have only been tested on US equity stocks.
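A minimal sketch of how such a signal might be computed is given below; the window length, the number of consecutive rising days, and the sample prices are illustrative assumptions, not values from the studies cited above.

```python
# Minimal sketch of a rising-moving-average buy signal: the simple moving
# average must have risen on each of the last `rise_days` days.

def moving_average(prices, window):
    # Simple moving average; one value per day once `window` prices exist.
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def rising_ma_signal(prices, window=5, rise_days=3):
    ma = moving_average(prices, window)
    if len(ma) <= rise_days:
        return False
    recent = ma[-(rise_days + 1):]
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

prices = [10.0, 10.1, 9.9, 10.2, 10.4, 10.5, 10.7, 10.9]
print(rising_ma_signal(prices, window=3, rise_days=3))  # True: 3-day MA rose 3 days in a row
```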
Notes
Mathematical finance
Time series
Technical indicators | Rising moving average | [
"Mathematics"
] | 157 | [
"Applied mathematics",
"Mathematical finance"
] |
25,098,165 | https://en.wikipedia.org/wiki/Hydraulic%20compressor | A hydraulic compressor is a means of compressing air using hydraulic energy. There are two very different types of machines referred to as hydraulic compressors.
One type is a mechanical air compressor that is driven by a hydraulic motor. It is a method of converting hydraulic power to pneumatic power. This type of hydraulic compressor is used in various applications where hydraulic power is already available and a relatively small amount of compressed air is needed, as it is not very efficient compared to an electrically driven compressor.
The other type of hydraulic compressor uses the potential and kinetic energy of a stream of water to entrain air and carry it to a separating chamber at a higher pressure where the air accumulates above the water, and the water is allowed to drain. The system has few, if any, moving parts, and is also inefficient, so it is used where the kinetic or potential energy of water is cheaply available.
Design
The advantage of a hydraulic compressor of the second type is the ability to perform isothermal compression without any moving parts, making it relatively reliable and giving it low maintenance costs. A flow of water is used to entrain air and carry it downward through a pipe, called the downcomer pipe. Air is sucked into the water flow by the static pressure differential. As the mixture of air and water goes down the pipe, the pressure rises. The mixture enters the stilling chamber, which is designed to reduce flow velocity, allowing the air bubbles to separate from the water by buoyancy. The compressed air leaves the chamber through another vertical pipe, called the riser pipe, and the water leaves through a submerged drain near the bottom of the stilling chamber.
The main issue with these compressors is determining the scale and dimensions of the chamber (the compressed-air storage). Depending on its size, the chamber can cost more than the rest of the installation. Despite the relatively high cost of energy, a hydraulic compressor uses significantly less electricity and makes greater use of renewable energy resources.
Cost Breakdown
Figure 2 (not shown): breakdown of compressor costs into energy cost, compressor cost, and maintenance cost, based on 24/7 full-load operation at $0.08/kWh.
Most of the expense of integrating a compressor is the energy cost, as depicted in figure 2. The main factors are the type and size of the compressor, which determine the utility and power draw of the machine. To be most efficient, the air production capacity should match the air requirements, avoiding bottlenecks and the energy that is otherwise lost as heat when the air is released. By optimizing utilization and preventing leakage, companies can increase their profit margins.
The design of the piping can also affect the cost of the system. A pipe layout without sharp corners or dead-heads helps maintain pressure and an efficient passage for compressed air. Designers also have to consider the type of material used in the hydraulic system. Aluminum, for example, is lighter and more corrosion-resistant than the more traditional material, steel. Because it is much lighter than steel, aluminum pipe is easier for welders and technicians to fabricate and install. The diameter of the pipe is also crucial, since smaller diameters tend to produce a larger pressure differential. That causes more pressure energy to be converted to heat or vibration, decreasing the compressor's lifespan.
Efficiency
The compressed-airflow power can be used to measure the maximum efficiency of a hydraulic compressor. However, in a real-world scenario, airflow losses need to be accounted for, which can be done by applying the energy conservation equation for an isothermal flow (assuming water and air have the same pressure and velocity). Many other factors can also cause the loss of air, such as collisions against walls or the friction between water and air bubbles.
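The expression referred to above did not survive transcription; as a stand-in, a minimal sketch of the ideal isothermal compression power (a standard textbook relation, offered here as an assumption rather than the article's own equation) is:

$$P_{\text{iso}} = \dot m_{\text{air}}\, R_{\text{air}}\, T \,\ln\!\left(\frac{p_2}{p_1}\right)$$

where $\dot m_{\text{air}}$ is the mass flow rate of entrained air, $R_{\text{air}} \approx 287\ \text{J kg}^{-1}\,\text{K}^{-1}$, $T$ is the (constant) water temperature, and $p_1$, $p_2$ are the intake and delivery pressures. Dividing this by the hydraulic power supplied by the water gives an upper bound on the compressor's efficiency.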
The flow of compressed air produced increases when the mass flow rate of liquid circulating in the system increases. This flow can be calculated only at specific parts of the hydraulic pump, as various configurations can be implemented, for example parallel or series pumping arrangements. The pump curve can be described by a quadratic equation in the flow rate; from it the efficiency of the pump head or driver can be calculated and graphed against the electrical power consumed in order to compare hydraulic systems.
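The specific quadratic form referred to above is also missing from the text; a commonly used fitted form of a centrifugal pump curve, given here as an assumption, is:

$$H(Q) = a_0 + a_1 Q + a_2 Q^2$$

where $H$ is the pump head, $Q$ the volumetric flow rate, and $a_0, a_1, a_2$ coefficients fitted to the manufacturer's curve (with $a_2 < 0$ for typical centrifugal pumps). The hydraulic power delivered is $\rho g Q H$, and dividing it by the electrical power consumed gives an efficiency that can be graphed to compare hydraulic systems.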
See also
. The opposite effect.
References
Gas compressors
Pumps
Gas technologies | Hydraulic compressor | [
"Physics",
"Chemistry",
"Engineering"
] | 885 | [
"Pumps",
"Turbomachinery",
"Gas compressors",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Mechanical engineering stubs"
] |
25,100,645 | https://en.wikipedia.org/wiki/NanoNed | NanoNed is the nanotechnology research and development initiative of the Dutch government. It is financed by the Ministry of Economic Affairs (Netherlands), and the Dutch Technology Foundation STW is responsible for the programme management of NanoNed. It is a consortium of seven universities, TNO and Philips; Leiden University, Utrecht University and the FOM institute AMOLF in Amsterdam are also partners of NanoNed. Around 400 researchers work within these partners. On the basis of national research and development strengths and industrial needs, 11 interdependent programmes have been developed, each named a "Flagship". Each of these flagships is led by a "Flagship Captain". In 2009, more than 400 researchers were working on some 200 projects.
NanoNed also established its first foreign office, in Japan (the NanoNed Japan Office), led by Prof. Wilfred van der Wiel.
Flagship
Advanced NanoProbing
BioNanoSystems
Bottom-up Nano-Electronics
Chemistry and Physics of Individual Molecules
Nano Electronic Materials
NanoFabrication
Nanofluidics
NanoInstrumentation
NanoPhotonics
Nano-Spintronics
Quantum Computing
Consortium Partners
MESA+ Institute for Nanotechnology, University of Twente
Kavli Institute of Nanoscience, Delft University of Technology
Centre for Nano Materials, Eindhoven University of Technology
BioMaDe, University of Groningen
Institute for Molecules and Materials, Radboud University Nijmegen
BioNT, Wageningen University and Research Centre
HIMS, University of Amsterdam
TNO Science and Industry
Philips Electronics Nederland
Co-operation partners
AMOLF
Leiden University
Utrecht University
See also
List of nanotechnology organizations
External links
NanoNed
Ministry of Economic Affairs
Dutch Technology Foundation STW
References
1. NanoNed, Retrieved on December 13, 2009
2. NanoNed Section of Dutch Technology Foundation, Retrieved on December 13, 2009
3. The gateway to Dutch scientific information NARCIS, Retrieved on December 13, 2009
4. NanoNed Foreign Office, Retrieved on December 13, 2009
Nanotechnology institutions | NanoNed | [
"Materials_science"
] | 400 | [
"Nanotechnology",
"Nanotechnology institutions"
] |
25,101,402 | https://en.wikipedia.org/wiki/Astrophysical%20X-ray%20source | Astrophysical X-ray sources are astronomical objects with physical properties which result in the emission of X-rays.
Several types of astrophysical objects emit X-rays. They include galaxy clusters, black holes in active galactic nuclei (AGN), galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays.
Various celestial objects observed to emit X-rays are also discussed below, by constellation, as celestial X-ray sources. The origin of all observed astronomical X-ray sources is in, near to, or associated with a coronal cloud or gas at coronal-cloud temperatures, for however long or brief a period.
A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, either magnetic or ordinary Coulomb, black-body radiation, synchrotron radiation, inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions.
Galaxy clusters
Clusters of galaxies are formed by the merger of smaller units of matter, such as galaxy groups or individual galaxies. The infalling material (which contains galaxies, gas and dark matter) gains kinetic energy as it falls into the cluster's gravitational potential well. The infalling gas collides with gas already in the cluster and is shock heated to between 10⁷ and 10⁸ K depending on the size of the cluster. This very hot gas emits X-rays by thermal bremsstrahlung emission, and line emission from metals (in astronomy, 'metals' often means all elements except hydrogen and helium). The galaxies and dark matter are collisionless and quickly become virialised, orbiting in the cluster potential well.
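A rough way to see why the shock-heated gas reaches these temperatures is a virial-temperature estimate, kT ≈ GMμm_p/(2R). The sketch below uses assumed, typical cluster masses and radii rather than values from this article.

```python
# Rough virial-temperature estimate, kT ~ G*M*mu*m_p / (2*R), illustrating why
# cluster gas is shock heated to ~1e7-1e8 K depending on cluster mass.
G     = 6.674e-11   # m^3 kg^-1 s^-2
m_p   = 1.673e-27   # kg
k_B   = 1.381e-23   # J/K
M_sun = 1.989e30    # kg
Mpc   = 3.086e22    # m
mu    = 0.6         # mean molecular weight of ionized cluster gas (assumed)

for mass_msun, radius_mpc in [(1e14, 1.0), (1e15, 1.5)]:  # assumed examples
    M = mass_msun * M_sun
    R = radius_mpc * Mpc
    T = mu * m_p * G * M / (2 * k_B * R)
    print(f"M = {mass_msun:.0e} Msun, R = {radius_mpc} Mpc -> T ~ {T:.1e} K")
# -> roughly 2e7 K and 1e8 K, consistent with the 10^7-10^8 K range quoted above
```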
At a statistical significance of 8σ, it was found that the spatial offset of the center of the total mass from the center of the baryonic mass peaks cannot be explained with an alteration of the gravitational force law.
Quasars
A quasi-stellar radio source (quasar) is a very energetic and distant galaxy with an active galactic nucleus (AGN). QSO 0836+7107 is a Quasi-Stellar Object (QSO) that emits baffling amounts of radio energy. This radio emission is caused by electrons spiraling (thus accelerating) along magnetic fields, producing cyclotron or synchrotron radiation. These electrons can also interact with visible light emitted by the disk around the AGN or the black hole at its center; the electrons boost these photons to higher energies, which emerge as X- and gamma-radiation via Compton and inverse Compton scattering.
On board the Compton Gamma Ray Observatory (CGRO) is the Burst and Transient Source Experiment (BATSE) which detects in the 20 keV to 8 MeV range. QSO 0836+7107 or 4C 71.07 was detected by BATSE as a source of soft gamma rays and hard X-rays. "What BATSE has discovered is that it can be a soft gamma-ray source", McCollough said. QSO 0836+7107 is the faintest and most distant object to be observed in soft gamma rays. It has already been observed in gamma rays by the Energetic Gamma Ray Experiment Telescope (EGRET) also aboard the Compton Gamma Ray Observatory.
Seyfert galaxies
Seyfert galaxies are a class of galaxies with nuclei that produce spectral line emission from highly ionized gas. They are a subclass of active galactic nuclei (AGN), and are thought to contain supermassive black holes.
X-ray bright galaxies
The following early-type galaxies (NGCs) have been observed to be X-ray bright due to hot gaseous coronae: NGC 315, 1316, 1332, 1395, 2563, 4374, 4382, 4406, 4472, 4594, 4636, 4649, and 5128. The X-ray emission can be explained as thermal bremsstrahlung from hot gas (0.5–1.5 keV).
Ultraluminous X-ray sources
Ultraluminous X-ray sources (ULXs) are pointlike, nonnuclear X-ray sources with luminosities above the Eddington limit of a stellar-mass black hole (about 3 × 10³² W). Many ULXs show strong variability and may be black hole binaries. To fall into the class of intermediate-mass black holes (IMBHs), their luminosities, thermal disk emissions, variation timescales, and surrounding emission-line nebulae must suggest this. However, when the emission is beamed or exceeds the Eddington limit, the ULX may be a stellar-mass black hole. The nearby spiral galaxy NGC 1313 has two compact ULXs, X-1 and X-2. For X-1 the X-ray luminosity increases to a maximum of 3 × 10³³ W, exceeding the Eddington limit, and enters a steep power-law state at high luminosities more indicative of a stellar-mass black hole, whereas X-2 has the opposite behavior and appears to be in the hard X-ray state of an IMBH.
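For orientation, the Eddington luminosity referred to above can be written L_Edd = 4πGMm_p c/σ_T. The sketch below evaluates it for one solar mass and for an assumed 20-solar-mass stellar black hole; the 20 M☉ figure is an illustrative choice, not a value quoted in this article.

```python
# Eddington luminosity, L_Edd = 4*pi*G*M*m_p*c / sigma_T.
import math

G       = 6.674e-11   # m^3 kg^-1 s^-2
m_p     = 1.673e-27   # kg
c       = 2.998e8     # m/s
sigma_T = 6.652e-29   # m^2, Thomson cross-section
M_sun   = 1.989e30    # kg

def eddington_luminosity(mass_kg):
    return 4 * math.pi * G * mass_kg * m_p * c / sigma_T

print(f"L_Edd(1 Msun)  ~ {eddington_luminosity(M_sun):.2e} W")       # ~1.3e31 W
print(f"L_Edd(20 Msun) ~ {eddington_luminosity(20 * M_sun):.2e} W")  # ~2.5e32 W,
# the same order as the 3 x 10^32 W quoted above for a stellar-mass black hole
```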
Black holes
Black holes give off radiation because matter falling into them loses gravitational energy, which may result in the emission of radiation before the matter falls into the event horizon. The infalling matter has angular momentum, which means that the material cannot fall in directly, but spins around the black hole. This material often forms an accretion disk. Similar luminous accretion disks can also form around white dwarfs and neutron stars, but in these the infalling gas releases additional energy as it slams against the high-density surface with high speed. In the case of a neutron star, the infall speed can be a sizeable fraction of the speed of light.
In some neutron star or white dwarf systems, the magnetic field of the star is strong enough to prevent the formation of an accretion disc. The material in the disc gets very hot because of friction, and emits X-rays. The material in the disc slowly loses its angular momentum and falls into the compact star. In neutron stars and white dwarfs, additional X-rays are generated when the material hits their surfaces. X-ray emission from black holes is variable, varying in luminosity in very short timescales. The variation in luminosity can provide information about the size of the black hole.
Supernova remnants (SNR)
A Type Ia supernova is an explosion of a white dwarf in orbit around either another white dwarf or a red giant star. The dense white dwarf can accumulate gas donated by the companion. When the dwarf reaches the critical mass of about 1.4 solar masses (the Chandrasekhar limit), a thermonuclear explosion ensues. As each Type Ia shines with a known luminosity, Type Ia supernovae are used as "standard candles" to measure distances in the universe.
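Because the peak luminosity of a Type Ia is approximately known, a measured apparent brightness yields a distance through the distance modulus. The relation below is the standard one; the peak absolute magnitude of roughly −19.3 is a commonly quoted value, offered here as an assumption rather than taken from this article:

$$m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \text{pc}}\right), \qquad M_{\text{Ia,\,peak}} \approx -19.3$$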
SN 2005ke is the first Type Ia supernova detected in X-ray wavelengths, and it is much brighter in the ultraviolet than expected.
X-ray emission from stars
Vela X-1
Vela X-1 is a pulsing, eclipsing high-mass X-ray binary (HMXB) system, associated with the Uhuru source 4U 0900-40 and the supergiant star HD 77581. The X-ray emission of the neutron star is caused by the capture and accretion of matter from the stellar wind of the supergiant companion. Vela X-1 is the prototypical detached HMXB.
Hercules X-1
An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star.
Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Her), probably due to Roche lobe overflow. X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline between high- and low-mass X-ray binaries.
Scorpius X-1
The first extrasolar X-ray source was discovered on 12 June 1962. This source is called Scorpius X-1, the first X-ray source found in the constellation of Scorpius, located in the direction of the center of the Milky Way. Scorpius X-1 is some 9,000 ly from Earth and after the Sun is the strongest X-ray source in the sky at energies below 20 keV. Its X-ray output is 2.3 × 10³¹ W, about 60,000 times the total luminosity of the Sun. Scorpius X-1 itself is a neutron star. This system is classified as a low-mass X-ray binary (LMXB); the neutron star is roughly 1.4 solar masses, while the donor star is only 0.42 solar masses.
Sun
In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species. In the mid-1940s radio observations revealed a radio corona around the Sun. After detecting X-ray photons from the Sun in the course of a rocket flight, T. Burnight wrote, "The sun is assumed to be the source of this radiation although radiation of wavelength shorter than 4 Å would not be expected from theoretical estimates of black body radiation from the solar corona." And, of course, people have seen the solar corona in scattered visible light during solar eclipses.
While neutron stars and black holes are the quintessential point sources of X-rays, all main sequence stars are likely to have hot enough coronae to emit X-rays. A- or F-type stars have at most thin convection zones and thus produce little coronal activity.
Similar solar cycle-related variations are observed in the flux of solar X-ray and UV or EUV radiation. Rotation is one of the primary determinants of the magnetic dynamo, but this point could not be demonstrated by observing the Sun: the Sun's magnetic activity is in fact strongly modulated (due to the 11-year magnetic spot cycle), but this effect is not directly dependent on the rotation period.
Solar flares usually follow the solar cycle. CORONAS-F was launched on 31 July 2001 to coincide with the 23rd solar cycle maximum.
The solar flare of 29 October 2003 apparently showed a significant degree of linear polarization (> 70% in channels E2 = 40–60 keV and E3 = 60–100 keV, but only about 50% in E1 = 20–40 keV) in hard X-rays, but other observations have generally only set upper limits.
Coronal loops form the basic structure of the lower corona and transition region of the Sun. These highly structured and elegant loops are a direct consequence of the twisted solar magnetic flux within the solar body. The population of coronal loops can be directly linked with the solar cycle, it is for this reason coronal loops are often found with sunspots at their footpoints. Coronal loops populate both active and quiet regions of the solar surface. The Yohkoh Soft X-ray Telescope (SXT) observed X-rays in the 0.25–4.0 keV range, resolving solar features to 2.5 arc seconds with a temporal resolution of 0.5–2 seconds. SXT was sensitive to plasma in the 2–4 MK temperature range, making it an ideal observational platform to compare with data collected from TRACE coronal loops radiating in the EUV wavelengths.
Variations of solar-flare emission in soft X-rays (10–130 nm) and EUV (26–34 nm) recorded on board CORONAS-F demonstrate that, for most flares observed by CORONAS-F in 2001–2003, the UV radiation preceded the X-ray emission by 1–10 min.
White dwarfs
When the core of a medium mass star contracts, it causes a release of energy that makes the envelope of the star expand. This continues until the star finally blows its outer layers off. The core of the star remains intact and becomes a white dwarf. The white dwarf is surrounded by an expanding shell of gas in an object known as a planetary nebula. Planetary nebulae seem to mark the transition of a medium mass star from red giant to white dwarf. X-ray images reveal clouds of multimillion degree gas that have been compressed and heated by the fast stellar wind. Eventually the central star collapses to form a white dwarf. For a billion or so years after a star collapses to form a white dwarf, it is "white" hot with surface temperatures of ~20,000 K.
X-ray emission has been detected from PG 1658+441, a hot, isolated, magnetic white dwarf, first detected in an Einstein IPC observation and later identified in an Exosat channel multiplier array observation. "The broad-band spectrum of this DA white dwarf can be explained as emission from a homogeneous, high-gravity, pure hydrogen atmosphere with a temperature near 28,000 K." These observations of PG 1658+441 support a correlation between temperature and helium abundance in white dwarf atmospheres.
A super soft X-ray source (SSXS) radiates soft X-rays in the range of 0.09 to 2.5 keV. Super soft X-rays are believed to be produced by steady nuclear fusion on a white dwarf's surface of material pulled from a binary companion. This requires a flow of material sufficiently high to sustain the fusion.
Real variations in mass transfer may be occurring in V Sge, similar to those in the SSXS RX J0513.9-6951, as revealed by analysis of the activity of the SSXS V Sge, in which episodes of long low states occur in a cycle of ~400 days.
HD 49798 is a subdwarf star that forms a binary system with RX J0648.0-4418. The subdwarf star is a bright object in the optical and UV bands. The orbital period of the system is accurately known. Recent XMM-Newton observations timed to coincide with the expected eclipse of the X-ray source allowed an accurate determination of the mass of the X-ray source (at least 1.2 solar masses), establishing the X-ray source as a rare, ultra-massive white dwarf.
Brown dwarfs
According to theory, an object that has a mass of less than about 8% of the mass of the Sun cannot sustain significant nuclear fusion in its core. This marks the dividing line between red dwarf stars and brown dwarfs. The dividing line between planets and brown dwarfs occurs with objects that have masses below about 1% of the mass of the Sun, or 10 times the mass of Jupiter. These objects cannot fuse deuterium.
LP 944-20
With no strong central nuclear energy source, the interior of a brown dwarf is in a rapid boiling, or convective state. When combined with the rapid rotation that most brown dwarfs exhibit, convection sets up conditions for the development of a strong, tangled magnetic field near the surface. The flare observed by Chandra from LP 944-20 could have its origin in the turbulent magnetized hot material beneath the brown dwarf's surface. A sub-surface flare could conduct heat to the atmosphere, allowing electric currents to flow and produce an X-ray flare, like a stroke of lightning. The absence of X-rays from LP 944-20 during the non-flaring period is also a significant result. It sets the lowest observational limit on steady X-ray power produced by a brown dwarf star, and shows that coronas cease to exist as the surface temperature of a brown dwarf cools below about 2500 °C and becomes electrically neutral.
TWA 5B
Using NASA's Chandra X-ray Observatory, scientists have detected X-rays from a low mass brown dwarf in a multiple star system. This is the first time that a brown dwarf this close to its parent star(s) (Sun-like stars TWA 5A) has been resolved in X-rays. "Our Chandra data show that the X-rays originate from the brown dwarf's coronal plasma which is some 3 million degrees Celsius", said Yohko Tsuboi of Chuo University in Tokyo. "This brown dwarf is as bright as the Sun today in X-ray light, while it is fifty times less massive than the Sun", said Tsuboi. "This observation, thus, raises the possibility that even massive planets might emit X-rays by themselves during their youth!"
X-ray reflection
Electric potentials of about 10 million volts, and currents of 10 million amps – a hundred times greater than the most powerful lightning bolts – are required to explain the auroras at Jupiter's poles, which are a thousand times more powerful than those on Earth.
On Earth, auroras are triggered by solar storms of energetic particles, which disturb Earth's magnetic field. As shown by the swept-back appearance in the illustration, gusts of particles from the Sun also distort Jupiter's magnetic field, and on occasion produce auroras.
Saturn's X-ray spectrum is similar to that of X-rays from the Sun indicating that Saturn's X-radiation is due to the reflection of solar X-rays by Saturn's atmosphere. The optical image is much brighter, and shows the beautiful ring structures, which were not detected in X-rays.
X-ray fluorescence
Some of the detected X-rays, originating from solar system bodies other than the Sun, are produced by fluorescence. Scattered solar X-rays provide an additional component.
In the Röntgensatellit (ROSAT) image of the Moon, pixel brightness corresponds to X-ray intensity. The bright lunar hemisphere shines in X-rays because it re-emits X-rays originating from the sun. The background sky has an X-ray glow in part due to the myriad of distant, powerful active galaxies, unresolved in the ROSAT picture. The dark side of the Moon's disk shadows this X-ray background radiation coming from the deep space. A few X-rays only seem to come from the shadowed lunar hemisphere. Instead, they originate in Earth's geocorona or extended atmosphere which surrounds the orbiting X-ray observatory. The measured lunar X-ray luminosity of ~1.2 × 10⁵ W makes the Moon one of the weakest known non-terrestrial X-ray sources.
Comet detection
NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to 63 Gm of Earth. For the first time, astronomers can see simultaneous UV and X-ray images of a comet. "The solar wind – a fast-moving stream of particles from the sun – interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.
Celestial X-ray sources
The celestial sphere has been divided into 88 constellations. The IAU constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them are galaxies or black holes at the centers of galaxies. Some are pulsars. As with the astronomical X-ray sources, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth.
Andromeda
Multiple X-ray sources have been detected in the Andromeda Galaxy, using observations from the ESA's XMM-Newton orbiting observatory.
Boötes
3C 295 (Cl 1409+524) in Boötes is one of the most distant galaxy clusters observed by X-ray telescopes. The cluster is filled with a vast cloud of 50 MK gas that radiates strongly in X-rays. Chandra observed that the central galaxy is a strong, complex source of X-rays.
Camelopardalis
Hot X-ray-emitting gas pervades the galaxy cluster MS 0735.6+7421 in Camelopardalis. Two vast cavities – each 600,000 light-years in diameter – appear on opposite sides of a large galaxy at the center of the cluster. These cavities are filled with a two-sided, elongated, magnetized bubble of extremely high-energy electrons that emit radio waves.
Canes Venatici
The X-ray landmark NGC 4151, an intermediate spiral Seyfert galaxy, has a massive black hole in its core.
Canis Major
A Chandra X-ray image of Sirius A and B shows Sirius B to be more luminous than Sirius A in X-rays, whereas in the visual range Sirius A is the more luminous.
Cassiopeia
Regarding the Cassiopeia A SNR, it is believed that first light from the stellar explosion reached Earth approximately 300 years ago, but there are no historical records of any sightings of the progenitor supernova, probably due to interstellar dust absorbing optical wavelength radiation before it reached Earth (although it is possible that it was recorded as a sixth magnitude star, 3 Cassiopeiae, by John Flamsteed on 16 August 1680). Possible explanations lean toward the idea that the source star was unusually massive and had previously ejected much of its outer layers. These outer layers would have cloaked the star and reabsorbed much of the light released as the inner star collapsed.
CTA 1 is another SNR X-ray source in Cassiopeia. A pulsar in the CTA 1 supernova remnant (4U 0000+72) initially emitted radiation in the X-ray bands (1970–1977). Strangely, when it was observed at a later time (2008) X-ray radiation was not detected. Instead, the Fermi Gamma-ray Space Telescope detected the pulsar was emitting gamma ray radiation, the first of its kind.
Carina
Three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota.
Cetus
Abell 400 is a galaxy cluster containing a galaxy (NGC 1128) with two supermassive black holes, known as 3C 75, that are spiraling towards a merger.
Chamaeleon
The Chamaeleon complex is a large star forming region (SFR) that includes the Chamaeleon I, Chamaeleon II, and Chamaeleon III dark clouds. It occupies nearly all of the constellation and overlaps into Apus, Musca, and Carina. The mean density of X-ray sources is about one source per square degree.
Chamaeleon I dark cloud
The Chamaeleon I (Cha I) cloud is a coronal cloud and one of the nearest active star formation regions at ~160 pc. It is relatively isolated from other star-forming clouds, so it is unlikely that older pre-main sequence (PMS) stars have drifted into the field. The total stellar population is 200–300. The Cha I cloud is further divided into the North cloud or region and South cloud or main cloud.
Chamaeleon II dark cloud
The Chamaeleon II dark cloud contains some 40 X-ray sources. Observation in Chamaeleon II was carried out from 10 to 17 September 1993. Source RXJ 1301.9-7706, a new WTTS candidate of spectral type K1, is closest to 4U 1302–77.
Chamaeleon III dark cloud
"Chamaeleon III appears to be devoid of current star-formation activity." HD 104237 (spectral type A4e) observed by ASCA, located in the Chamaeleon III dark cloud, is the brightest Herbig Ae/Be star in the sky.
Corona Borealis
The galaxy cluster Abell 2142 emits X-rays and is in Corona Borealis. It is one of the most massive objects in the universe.
Corvus
From the Chandra X-ray analysis of the Antennae Galaxies, rich deposits of neon, magnesium, and silicon were discovered. These elements are among those that form the building blocks of habitable planets. The clouds imaged contain magnesium and silicon at 16 and 24 times, respectively, their abundance in the Sun.
Crater
The jet exhibited in X-rays coming from PKS 1127-145 is likely due to the collision of a beam of high-energy electrons with microwave photons.
Draco
The Draco nebula (a soft X-ray shadow) is outlined by contours and is blue-black in the image by ROSAT of a portion of the constellation Draco.
Abell 2256 is a galaxy cluster of more than 500 galaxies. The double structure of this ROSAT image shows the merging of two clusters.
Eridanus
Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα emitting filaments.
Hydra
A large cloud of hot gas extends throughout the Hydra A galaxy cluster.
Leo Minor
Arp260 is an X-ray source in Leo Minor at RA Dec .
Orion
The adjacent images show the constellation Orion. On the right is the visual image of the constellation; on the left is Orion as seen in X-rays only. Betelgeuse is easily seen above the three stars of Orion's belt on the right. The brightest object in the visual image is the full moon, which also appears in the X-ray image. The X-ray colors represent the temperature of the X-ray emission from each star: hot stars are blue-white and cooler stars are yellow-red.
Pegasus
Stephan's Quintet is of interest because of its violent collisions. Four of the five galaxies in Stephan's Quintet form a physical association and are involved in a cosmic dance that most likely will end with the galaxies merging. As NGC 7318B collides with gas in the group, a huge shock wave bigger than the Milky Way spreads throughout the medium between the galaxies, heating some of the gas to temperatures of millions of degrees, at which it emits X-rays detectable with NASA's Chandra X-ray Observatory. NGC 7319 has a type 2 Seyfert nucleus.
Perseus
The Perseus galaxy cluster is one of the most massive objects in the universe, containing thousands of galaxies immersed in a vast cloud of multimillion degree gas.
Pictor
Pictor A is a galaxy that may have a black hole at its center which has emitted magnetized gas at extremely high speed. The bright spot at the right in the image is the head of the jet; as it plows into the tenuous gas of intergalactic space, it emits X-rays. Pictor A is the X-ray source designated H 0517-456 and 3U 0510-44.
Puppis
Puppis A is a supernova remnant (SNR) about 10 light-years in diameter. The supernova occurred approximately 3700 years ago.
Sagittarius
The Galactic Center is at 1745–2900 which corresponds to Sagittarius A*, very near to radio source Sagittarius A (W24). In probably the first catalogue of galactic X-ray sources, two Sgr X-1s are suggested: (1) at 1744–2312 and (2) at 1755–2912, noting that (2) is an uncertain identification. Source (1) seems to correspond to S11.
Sculptor
The unusual shape of the Cartwheel Galaxy may be due to a collision with a smaller galaxy such as those in the lower left of the image. The most recent star burst (star formation due to compression waves) has lit up the Cartwheel rim, which has a diameter larger than the Milky Way. There is an exceptionally large number of black holes in the rim of the galaxy as can be seen in the inset.
Serpens
As of 27 August 2007, discoveries concerning asymmetric iron line broadening and their implications for relativity have been a topic of much excitement. With respect to the asymmetric iron line broadening, Edward Cackett of the University of Michigan commented, "We're seeing the gas whipping around just outside the neutron star's surface,". "And since the inner part of the disk obviously can't orbit any closer than the neutron star's surface, these measurements give us a maximum size of the neutron star's diameter. The neutron stars can be no larger than 18 to 20.5 miles across, results that agree with other types of measurements."
"We've seen these asymmetric lines from many black holes, but this is the first confirmation that neutron stars can produce them as well. It shows that the way neutron stars accrete matter is not very different from that of black holes, and it gives us a new tool to probe Einstein's theory", says Tod Strohmayer of NASA's Goddard Space Flight Center.
"This is fundamental physics", says Sudip Bhattacharyya also of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland. "There could be exotic kinds of particles or states of matter, such as quark matter, in the centers of neutron stars, but it's impossible to create them in the lab. The only way to find out is to understand neutron stars."
Using XMM-Newton, Bhattacharyya and Strohmayer observed Serpens X-1, which contains a neutron star and a stellar companion. Cackett and Jon Miller of the University of Michigan, along with Bhattacharyya and Strohmayer, used Suzaku's superb spectral capabilities to survey Serpens X-1. The Suzaku data confirmed the XMM-Newton result regarding the iron line in Serpens X-1.
Ursa Major
M82 X-1 is in the constellation Ursa Major. It was detected in January 2006 by the Rossi X-ray Timing Explorer.
In Ursa Major at RA 10h 34m 00.00 Dec +57° 40' 00.00" is a field of view that is almost free of absorption by neutral hydrogen gas within the Milky Way. It is known as the Lockman Hole. Hundreds of X-ray sources from other galaxies, some of them supermassive black holes, can be seen through this window.
Exotic X-ray sources
Microquasar
A microquasar is a smaller cousin of a quasar: a radio-emitting X-ray binary, often with a resolvable pair of radio jets. SS 433 is one of the most exotic star systems observed. It is an eclipsing binary in which the primary is either a black hole or a neutron star and the secondary is a late A-type star. SS 433 lies within the supernova remnant W50. The material in the jet traveling from the secondary to the primary does so at 26% of light speed. The spectrum of SS 433 is affected by Doppler shifts and by relativity: when the effects of the Doppler shift are subtracted, there is a residual redshift which corresponds to a velocity of about 12,000 km/s. This does not represent an actual velocity of the system away from the Earth; rather, it is due to time dilation, which makes moving clocks appear to stationary observers to be ticking more slowly. In this case, the relativistically moving excited atoms in the jets appear to vibrate more slowly and their radiation thus appears red-shifted.
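A quick check of the time-dilation interpretation: for jet material moving at the 26% of light speed quoted above, the residual (transverse Doppler) redshift is z = γ − 1. The sketch below evaluates this and gives roughly 11,000 km/s, the same order as the ~12,000 km/s quoted; the jet speed is taken from the text, everything else is standard special relativity.

```python
# Residual redshift from time dilation alone: z = gamma - 1 for beta = v/c.
beta = 0.26                    # jet speed as a fraction of c, from the text
c_km_s = 299_792.458           # speed of light in km/s
gamma = 1.0 / (1.0 - beta ** 2) ** 0.5
z = gamma - 1.0
print(f"z = {z:.4f}, i.e. ~{z * c_km_s:,.0f} km/s")
# -> z ~ 0.036, roughly 11,000 km/s
```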
Be X-ray binaries
LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source, CG135+01. LSI+61°303 is a variable radio source characterized by periodic, non-thermal radio outbursts with a period of 26.5 d, attributed to the eccentric orbital motion of a compact object, probably a neutron star, around a rapidly rotating B0 Ve star, with a Teff ~26,000 K and luminosity of ~10³⁸ erg s⁻¹. Photometric observations at optical and infrared wavelengths also show a 26.5 d modulation. Of the 20 or so members of the Be X-ray binary systems, as of 1996, only X Per and LSI+61°303 have X-ray outbursts of much higher luminosity and harder spectrum (kT ~ 10–20 keV) vs. (kT ≤ 1 keV); however, LSI+61°303 further distinguishes itself by its strong, outbursting radio emission. "The radio properties of LSI+61°303 are similar to those of the "standard" high-mass X-ray binaries such as SS 433, Cyg X-3 and Cir X-1."
Supergiant fast X-ray transients (SFXTs)
There is a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). XTE J1739–302 is one of these. Discovered in 1997 and remaining active for only one day, with an X-ray spectrum well fitted by thermal bremsstrahlung (temperature of ~20 keV) and resembling the spectral properties of accreting pulsars, it was at first classified as a peculiar Be/X-ray transient with an unusually short outburst. A new burst was observed on 8 April 2008 with Swift.
Messier 87
Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. These loops and rings are generated by variations in the rate at which material is ejected from the supermassive black hole in jets. The distribution of loops suggests that minor eruptions occur every six million years.
One of the rings, caused by a major eruption, is a shock wave 85,000 light-years in diameter around the black hole. Other remarkable features observed include narrow X-ray emitting filaments up to 100,000 light-years long, and a large cavity in the hot gas caused by a major eruption 70 million years ago.
The galaxy also contains a notable active galactic nucleus (AGN) that is a strong source of multiwavelength radiation, particularly radio waves.
Magnetars
A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. The theory regarding these objects was proposed by Robert Duncan and Christopher Thompson in 1992, but the first recorded burst of gamma rays thought to have been from a magnetar was on 5 March 1979. These magnetic fields are hundreds of thousands of times stronger than any man-made magnet, and quadrillions of times more powerful than the field surrounding Earth. As of 2003, they are the most magnetic objects ever detected in the universe.
On 5 March 1979, after dropping probes into the atmosphere of Venus, Venera 11 and Venera 12, then in heliocentric orbits, were hit at 10:51 am EST by a blast of gamma-ray radiation. This contact raised the radiation readings on both probes' Konus experiments from a normal 100 counts per second to over 200,000 counts a second, in only a fraction of a millisecond. This giant flare was detected by numerous spacecraft, and with these detections was localized by the interplanetary network to SGR 0526-66 inside the N-49 SNR of the Large Magellanic Cloud. Konus detected another source in March 1979: SGR 1900+14, located 20,000 light-years away in the constellation Aquila, which had a long period of low emission, except for a significant burst in 1979 and a couple afterwards.
What is the evolutionary relationship between pulsars and magnetars? Astronomers would like to know if magnetars represent a rare class of pulsars, or if some or all pulsars go through a magnetar phase during their life cycles. NASA's Rossi X-ray Timing Explorer (RXTE) has revealed that the youngest known pulsing neutron star has thrown a temper tantrum. The collapsed star occasionally unleashes powerful bursts of X-rays, which are forcing astronomers to rethink the life cycle of neutron stars.
"We are watching one type of neutron star literally change into another right before our very eyes. This is a long-sought missing link between different types of pulsars", says Fotis Gavriil of NASA's Goddard Space Flight Center in Greenbelt, Maryland, and the University of Maryland, Baltimore.
PSR J1846-0258 is in the constellation Aquila. It had been classed as a normal pulsar because of its fast spin (3.1 s⁻¹) and pulsar-like spectrum. RXTE caught four magnetar-like X-ray bursts on 31 May 2006, and another on 27 July 2006. Although none of these events lasted longer than 0.14-second, they all packed the wallop of at least 75,000 Suns. "Never before has a regular pulsar been observed to produce magnetar bursts", says Gavriil.
"Young, fast-spinning pulsars were not thought to have enough magnetic energy to generate such powerful bursts", says Marjorie Gonzalez, formerly of McGill University in Montreal, Canada, now based at the University of British Columbia in Vancouver. "Here's a normal pulsar that's acting like a magnetar."
The observations from NASA's Chandra X-ray Observatory showed that the object had brightened in X-rays, confirming that the bursts were from the pulsar, and that its spectrum had changed to become more magnetar-like. The fact that PSR J1846's spin rate is decelerating also means that it has a strong magnetic field braking the rotation. The implied magnetic field is trillions of times stronger than Earth's field, but it's 10 to 100 times weaker than a typical magnetar. Victoria Kaspi of McGill University notes, "PSR J1846's actual magnetic field could be much stronger than the measured amount, suggesting that many young neutron stars classified as pulsars might actually be magnetars in disguise, and that the true strength of their magnetic field only reveals itself over thousands of years as they ramp up in activity."
X-ray dark stars
During the solar cycle, as shown in the sequence of images of the Sun in X-rays, the Sun is at times almost X-ray dark, making it in effect an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. The X-ray flux from the entire stellar surface corresponds to a surface flux limit that ranges from 30–7000 erg s⁻¹ cm⁻² at T = 1 MK, to ~1 erg s⁻¹ cm⁻² at higher temperatures, five orders of magnitude below the quiet Sun X-ray surface flux.
Like the red supergiant Betelgeuse, hardly any X-rays are emitted by red giants. The cause of the X-ray deficiency may involve
a turn-off of the dynamo,
a suppression by competing wind production, or
strong attenuation by an overlying thick chromosphere.
Prominent bright red giants include Aldebaran, Arcturus, and Gamma Crucis. There is an apparent X-ray "dividing line" in the H-R diagram among the giant stars as they cross from the main sequence to become red giants. Alpha Trianguli Australis (α TrA / α Trianguli Australis) appears to be a Hybrid star (parts of both sides) in the "Dividing Line" of evolutionary transition to red giant. α TrA can serve to test the several Dividing Line models.
There is also a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F.
In the few genuine late A- or early F-type coronal emitters, their weak dynamo operation is generally not able to brake the rapidly spinning star considerably during their short lifetime so that these coronae are conspicuous by their severe deficit of X-ray emission compared to chromospheric and transition region fluxes; the latter can be followed up to mid-A type stars at quite high levels. Whether or not these atmospheres are indeed heated acoustically and drive an "expanding", weak and cool corona or whether they are heated magnetically, the X-ray deficit and the low coronal temperatures clearly attest to the inability of these stars to maintain substantial, hot coronae in any way comparable to cooler active stars, their appreciable chromospheres notwithstanding.
X-ray interstellar medium
The Hot Ionized Medium (HIM), sometimes consisting of coronal gas, in the temperature range 10⁶–10⁷ K emits X-rays. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, by X-ray satellite telescopes. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble.
Diffuse X-ray background
In addition to discrete sources which stand out against the sky, there is good evidence for a diffuse X-ray background. During more than a decade of observations of X-ray emission from the Sun, evidence of the existence of an isotropic X-ray background flux was obtained in 1956. This background flux is rather consistently observed over a wide range of energies. The early high-energy end of the spectrum for this diffuse X-ray background was obtained by instruments on board Ranger 3 and Ranger 5. The X-ray flux corresponds to a total energy density of about 5 × 10⁻⁴ eV/cm³. The ROSAT soft X-ray diffuse background (SXRB) image shows the general increase in intensity from the Galactic plane to the poles. At the lowest energies, 0.1 – 0.3 keV, nearly all of the observed soft X-ray background (SXRB) is thermal emission from ~10⁶ K plasma.
By comparing the soft X-ray background with the distribution of neutral hydrogen, it is generally agreed that within the Milky Way disk, super soft X-rays are absorbed by this neutral hydrogen.
X-ray dark planets
X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area."
Earth
The first picture of the Earth in X-rays was taken in March 1996, with the orbiting Polar satellite. Energetically charged particles from the Sun cause aurora and energize electrons in the Earth's magnetosphere. These electrons move along the Earth's magnetic field and eventually strike the Earth's ionosphere, producing the X-ray emission.
See also
Astronomical radio source
References
Space plasmas
X-ray astronomy | Astrophysical X-ray source | [
"Physics",
"Astronomy"
] | 9,242 | [
"Space plasmas",
"Astrophysics",
"X-ray astronomy",
"Astronomical X-ray sources",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
39,242,968 | https://en.wikipedia.org/wiki/B%C3%BCchner%E2%80%93Curtius%E2%80%93Schlotterbeck%20reaction | The Büchner–Curtius–Schlotterbeck reaction is the reaction of aldehydes or ketones with aliphatic diazoalkanes to form homologated ketones. It was first described by Eduard Buchner and Theodor Curtius in 1885 and later by Fritz Schlotterbeck in 1907. Two other German chemists also preceded Schlotterbeck in the discovery of the reaction: Hans von Pechmann in 1895 and Viktor Meyer in 1905. The reaction has since been extended to the synthesis of β-keto esters from the condensation between aldehydes and diazo esters. The general reaction scheme is as follows:
The reaction yields two possible carbonyl compounds (I and II) along with an epoxide (III). The ratio of the products is determined by the reactant used and the reaction conditions.
Reaction mechanism
The general mechanism is shown below. The resonance arrow (1) shows a resonance contributor of the diazo compound with a lone pair of electrons on the carbon adjacent to the nitrogen. The diazo compound then makes a nucleophilic attack on the carbonyl-containing compound (nucleophilic addition), producing a tetrahedral intermediate (2). This intermediate decomposes by the evolution of nitrogen gas, forming the tertiary carbocation intermediate (3).
The reaction is then completed either by the reformation of the carbonyl through a 1,2-rearrangement or by the formation of the epoxide. There are two possible carbonyl products: one formed by migration of R1 (4) and the other by migration of R2 (5). The relative yield of each possible carbonyl is determined by the migratory preferences of the R-groups.
The epoxide product is formed by an intramolecular addition reaction in which a lone pair from the oxygen attacks the carbocation (6).
This reaction is exothermic due to the stability of nitrogen gas and of the carbonyl-containing compounds. This specific mechanism is supported by several observations. First, kinetic studies of reactions between diazomethane and various ketones have shown that the overall reaction follows second-order kinetics. Additionally, the reactivities of two series of ketones are in the orders Cl3CCOCH3 > CH3COCH3 > C6H5COCH3 and cyclohexanone > cyclopentanone > cycloheptanone > cyclooctanone. These orders of reactivity are the same as those observed for reactions that are well established as proceeding through nucleophilic attack on a carbonyl group.
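The second-order kinetics cited above correspond to a rate law that is first order in each reactant, consistent with rate-limiting nucleophilic attack of the diazo carbon on the carbonyl group. Written out explicitly (a standard interpretation of such kinetic data, not a formula quoted from the cited studies):

$$\text{rate} = k\,[\text{ketone}]\,[\text{diazoalkane}]$$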
Scope and variation
The reaction was originally carried out in diethyl ether and routinely generated high yields due to the inherent irreversibility of the reaction caused by the formation of nitrogen gas. Though these reactions can be carried out at room temperature, the rate increases at higher temperatures. Typically, the reaction is carried out below refluxing temperatures. The optimal reaction temperature is determined by the specific diazoalkane used. Reactions involving diazomethanes with alkyl or aryl substituents are exothermic at or below room temperature. Reactions involving diazomethanes with acyl or aroyl substituents require higher temperatures. The reaction has since been modified to proceed in the presence of Lewis acids and common organic solvents such as THF and dichloromethane. Reactions are generally run at room temperature for about an hour, and yields range from 70% to 80%, depending on the choice of Lewis acid and solvent.
Steric effects
Steric effects of the alkyl substituents on the carbonyl reactant have been shown to affect both the rates and yields of Büchner–Curtius–Schlotterbeck reaction. Table 1 shows the percent yield of the ketone and epoxide products as well as the relative rates of reaction for the reactions between several methyl alkyl ketones and diazomethane.
The observed decrease in rate and increase in epoxide yield as the size of the alkyl group becomes larger indicates a steric effect.
Electronic effects
Ketones and aldehydes with electron-withdrawing substituents react more readily with diazoalkanes than those bearing electron-donating substituents (Table 2). In addition to accelerating the reaction, electron-withdrawing substituents typically increase the amount of epoxide produced (Table 2).
The effects of substituents on the diazoalkanes is reversed relative to the carbonyl reactants: electron-withdrawing substituents decrease the rate of reaction while electron-donating substituents accelerate it. For example, diazomethane is significantly more reactive than ethyl diazoacetate, though less reactive than its higher alkyl homologs (e.g. diazoethane). Reaction conditions may also affect the yields of carbonyl product and epoxide product. In the reactions of o-nitrobenzaldehyde, p-nitrobenzaldehyde, and phenylacetaldehyde with diazomethane, the ratio of epoxide to carbonyl is increased by the inclusion of methanol in the reaction mixture. The opposite influence has also been observed in the reaction of piperonal with diazomethane, which exhibits increased carbonyl yield in the presence of methanol.
Migratory preferences
The ratio of the two possible carbonyl products (I and II) obtained is determined by the relative migratory abilities of the carbonyl substituents (R1 and R2). In general, the R-group most capable of stabilizing the partial positive charge formed during the rearrangement migrates preferentially. A prominent exception to this general rule is hydride shifting. The migratory preferences of the carbonyl R-groups can be heavily influenced by the choice of solvent and diazoalkane. For example, methanol has been shown to promote aryl migration. As shown below, if the reaction of piperonal (IV) with diazomethane is carried out in the absence of methanol, the ketone obtained through a hydride shift is the major product (V). If methanol is the solvent, an aryl shift occurs to form the aldehyde (VI), which cannot be isolated as it continues to react to form the ketone (VII) and the epoxide (VIII) products.
The diazoalkane employed can also determine relative yields of products by influencing migratory preferences, as conveyed by the reactions of o-nitropiperonal with diazomethane and diazoethane. In the reaction between o-nitropiperonal (IX) and diazomethane, an aryl shift leads to production of the epoxide (X) in 9 to 1 excess of the ketone product (XI). When diazoethane is substituted for diazomethane, a hydride shift produces the ketone (XII), the only isolable product.
Examples in the literature
The Büchner–Curtius–Schlotterbeck reaction can be used to facilitate one carbon ring expansions when the substrate ketone is cyclic. For instance, the reaction of cyclopentanone with diazomethane forms cyclohexanone (shown below). The Büchner ring expansion reactions utilizing diazoalkanes have proven to be synthetically useful as they can not only be used to form 5- and 6-membered rings, but also more unstable 7- and 8-membered rings.
An acyl-diazomethane can react with an aldehyde to form a β-diketone in the presence of a transition metal catalyst (SnCl2 in the example shown below). β-Diketones are common biological products, and as such, their synthesis is relevant to biochemical research. Furthermore, the acidic β-hydrogens of β-diketones are useful for broader synthetic purposes, as they can be removed by common bases.
Acyl-diazomethane can also add to esters to form β-keto esters, which are important for fatty acid synthesis. As mentioned above, the acidic β-hydrogens also have productive functionality.
The Büchner–Curtius–Schlotterbeck reaction can also be used to insert a methylene bridge between a carbonyl carbon and a halogen of an acyl halide. This reaction allows conservation of the carbonyl and halide functionalities.
It is possible to isolate nitrogen-containing compounds using the Büchner–Curtius–Schlotterbeck reaction. For example, an acyl-diazomethane can react with an aldehyde in the presence of a DBU catalyst to form isolable α-diazo-β-hydroxy esters (shown below).
References
Organic reactions
Name reactions | Büchner–Curtius–Schlotterbeck reaction | [
"Chemistry"
] | 1,836 | [
"Name reactions",
"Organic reactions"
] |
39,244,732 | https://en.wikipedia.org/wiki/Hydrodefluorination | Hydrodefluorination (HDF) is a type of organic reaction in which a carbon–fluorine bond in a substrate is replaced by a carbon–hydrogen bond. The topic is of some interest to scientific research. In one general strategy for the synthesis of fluorinated compounds with a specific substitution pattern, the substrate is a cheaply available perfluorinated hydrocarbon. An example is the conversion of hexafluorobenzene (C6F6) to pentafluorobenzene (C6F5H) by certain zirconocene hydrido complexes. In this type of reaction the thermodynamic driving force is the formation of a metal–fluorine bond that can offset the cleavage of the very stable C–F bond. Other substrates that have been investigated are fluorinated alkenes.
Another reaction type is oxidative addition of a metal into a C–F bond followed by a reductive elimination step in the presence of a hydrogen source. For example, perfluorinated pyridine reacts with bis(cyclooctadiene)nickel(0) and triethylphosphine to give the oxidative addition product, which then reacts with HCl to give the ortho-hydrodefluorinated product.
In reductive hydrodefluorination the fluorocarbon is reduced in a series of single-electron transfer steps through the radical anion, the radical and the anion, with ultimate loss of a fluoride anion. An example is the conversion of pentafluorobenzoic acid to 2,3,4,5-tetrafluorobenzoic acid by reaction with zinc dust in aqueous ammonia.
Specific systems that have been reported for fluoroalkyl group HDF are triethylsilane/carborane acid, and NiCl2(PCy3)2/LiAl(O-t-Bu)3H.
References
Organic reactions | Hydrodefluorination | [
"Chemistry"
] | 419 | [
"Organic reactions"
] |
39,245,484 | https://en.wikipedia.org/wiki/Nonlinear%20realization | In mathematical physics, nonlinear realization of a Lie group G possessing a Cartan subgroup H is a particular induced representation of G. In fact, it is a representation of a Lie algebra of G in a neighborhood of its origin.
A nonlinear realization, when restricted to the subgroup H reduces to a linear representation.
A nonlinear realization technique is part and parcel of many field theories with spontaneous symmetry breaking, e.g., chiral models, chiral symmetry breaking, Goldstone boson theory, classical Higgs field theory, gauge gravitation theory and supergravity.
Let G be a Lie group and H its Cartan subgroup which admits a linear representation in a vector space V. A Lie algebra 𝔤 of G splits into the sum 𝔤 = 𝔥 ⊕ 𝔣 of the Cartan subalgebra 𝔥 of H and its supplement 𝔣, such that [𝔣, 𝔣] ⊂ 𝔥 and [𝔣, 𝔥] ⊂ 𝔣.
(In physics, for instance, the elements of 𝔥 amount to vector generators and those of 𝔣 to axial ones.)
There exists an open neighborhood U of the unit of G such that any element g ∈ U is uniquely brought into the form g = exp(F) exp(I), with F ∈ 𝔣 and I ∈ 𝔥.
Let U_G be an open neighborhood of the unit of G such that U = U_G U_G, and let U_{G/H} be an open neighborhood of the H-invariant center σ_0 of the quotient G/H which consists of elements σ = g σ_0 with g ∈ U_G. Then there is a local section s(g σ_0) = exp(F) of the bundle G → G/H over U_{G/H}.
With this local section, one can define the induced representation, called the nonlinear realization, of elements g ∈ U_G ⊂ G on U_{G/H} × V, given by the expressions below.
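In the standard (Coleman–Wess–Zumino) form of this construction, with notation that is a conventional choice rather than necessarily that of the original formulas, the defining relations read
$$ g\,\exp(F) = \exp(F')\,\exp(I'), \qquad g \colon (\exp(F),\, v)\ \longmapsto\ (\exp(F'),\, \exp(I')\,v), $$
with F, F′ ∈ 𝔣, I′ ∈ 𝔥 and v ∈ V, so that the subgroup H acts linearly on V while the coset parameters transform nonlinearly.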
The corresponding nonlinear realization of the Lie algebra 𝔤 of G takes the following form.
Let {F_α} and {I_a} be the bases for 𝔣 and 𝔥, respectively, whose commutation relations define the structure constants of 𝔤.
Then the desired nonlinear realization of 𝔤 in 𝔣 × V is obtained by expanding these expressions up to second order in the coordinates σ^α of 𝔣.
In physical models, the coefficients σ^α are treated as Goldstone fields. Similarly, nonlinear realizations of Lie superalgebras are considered.
See also
Induced representation
Chiral model
References
Giachetta G., Mangiarotti L., Sardanashvily G., Advanced Classical Field Theory, World Scientific, 2009.
Representation theory
Theoretical physics | Nonlinear realization | [
"Physics",
"Mathematics"
] | 392 | [
"Representation theory",
"Fields of abstract algebra",
"Theoretical physics"
] |
28,335,183 | https://en.wikipedia.org/wiki/Phosphatidylinositol%203-kinase-related%20kinase | Phosphatidylinositol 3-kinase-related kinases (PIKKs) are a family of Ser/Thr-protein kinases with sequence similarity to phosphatidylinositol-3 kinases (PI3Ks).
Members
The human PIKK family includes six members: ATM, ATR, DNA-PKcs, mTOR, SMG1, and TRRAP.
Structure
PIKK proteins contain the following four domains:
N-terminal FRAP–ATM–TRRAP (FAT) domain,
kinase domain (KD; PI3_PI4_kinase),
PIKK-regulatory domain (PRD), and
C-terminal FAT-C-terminal (FATC) domain
References
External links
Kinase Family PIKK at WikiKinome.
EC 2.7.11
Protein families | Phosphatidylinositol 3-kinase-related kinase | [
"Chemistry",
"Biology"
] | 153 | [
"Biochemistry stubs",
"Protein families",
"Protein stubs",
"Protein classification"
] |
28,343,289 | https://en.wikipedia.org/wiki/Embryo%20cryopreservation | Embryos may be preserved through cryopreservation, generally at an embryogenesis stage corresponding to pre-implantation, that is, from fertilisation to the blastocyst stage.
Indications
Embryo cryopreservation is useful for leftover embryos after a cycle of in vitro fertilisation, as patients who fail to conceive may become pregnant using such embryos without having to go through a full IVF cycle. Or, if pregnancy occurred, they could return later for another pregnancy. Spare oocytes or embryos resulting from fertility treatments may be used for oocyte donation or embryo donation to another woman or couple, and embryos may be created, frozen and stored specifically for transfer and donation by using donor eggs and sperm.
Method
Embryo cryopreservation is generally performed as a component of in vitro fertilization (which generally also includes ovarian hyperstimulation, egg retrieval and embryo transfer). The ovarian hyperstimulation is preferably done by using a GnRH agonist rather than human chorionic gonadotrophin (hCG) for final oocyte maturation, since it decreases the risk of ovarian hyperstimulation syndrome with no evidence of a difference in live birth rate (in contrast to fresh cycles where usage of GnRH agonist has a lower live birth rate).
The main techniques used for embryo cryopreservation are vitrification versus slow programmable freezing (SPF). Studies indicate that vitrification is superior or equal to SPF in terms of survival and implantation rates. Vitrification also appears to result in a lower risk of DNA damage than slow freezing. Vitrification prevents the formation of ice crystals in gametes: cooling is so fast (on the order of 23,000 °C per minute) that these crystals do not appear. Still, the amount of cryoprotectant used in vitrification is crucial: too much is toxic for the embryo, but too little could allow crystallised water to appear, regardless of the speed at which the process is carried out.
There are two types of vitrification system. In the open vitrification system, the sample has direct contact with liquid nitrogen, which allows ultra-fast freezing. In the closed vitrification system, samples are placed in a sealed device before being immersed in liquid nitrogen. In this way, the sample is protected from any direct contact with nitrogen; the closed system is used to cryopreserve samples that pose a biological risk.
Direct Frozen Embryo Transfer: Embryos can be frozen by SPF in ethylene glycol freeze media and transferred directly to recipients immediately after thawing in water, without a laboratory thawing process. The world's first crossbred bovine embryo-transfer calf under tropical conditions was produced by this technique on 23 June 1996 by Dr. Binoy S Vettical of the Kerala Livestock Development Board, Mattupatti.
Prevalence
World usage data is hard to come by but it was reported in a study of 23 countries that almost 42,000 frozen human embryo transfers were performed during 2001 in Europe.
Pregnancy outcome and determinants
In the current state of the art, early embryos that have undergone cryopreservation implant at the same rate as equivalent fresh embryos. The outcome of using cryopreserved embryos has uniformly been positive, with no increase in birth defects or developmental abnormalities, including when fresh and frozen eggs are compared for intracytoplasmic sperm injection (ICSI). In fact, pregnancy rates are increased following frozen embryo transfer, and perinatal outcomes are less affected, compared to embryo transfer in the same cycle in which ovarian hyperstimulation was performed. The endometrium is believed to not be optimally prepared for implantation following ovarian hyperstimulation, and therefore frozen embryo transfer avails for a separate cycle to focus on optimizing the chances of successful implantation. Children born from vitrified blastocysts have significantly higher birthweight than those born from non-frozen blastocysts. For early cleavage embryos, frozen embryos appear to have at least as good an obstetric outcome, measured as preterm birth and low birthweight, for children born after cryopreservation as compared with children born after fresh cycles.
Oocyte age, survival proportion, and number of transferred embryos are predictors of pregnancy outcome.
Pregnancies have been reported from embryos stored for 27 years. A study of more than 11,000 cryopreserved human embryos showed no significant effect of storage time on post-thaw survival for IVF or oocyte donation cycles, or for embryos frozen at the pronuclear or cleavage stages. In addition, the duration of storage had no significant effect on clinical pregnancy, miscarriage, implantation, or live birth rate, whether from IVF or oocyte donation cycles.
A study in France between 1999 and 2011 came to the result that embryo freezing before administration of gonadotoxic chemotherapy agents to females caused a delay of treatment in 34% of cases, and a live birth in 27% of surviving cases who wanted to become pregnant, with the follow-up time varying between 1 and 13 years.
Legislation
From 1 October 2009, human embryos are allowed to be stored for 10 years in the UK, according to the Human Fertilisation and Embryology Act 2008.
History
The cryopreservation of embryos was first successfully attempted in 1984 in the case of Zoe Leyland, the first baby to be born from a frozen embryo. In Zoe's case, the embryo had been frozen for two months, but since the inception of the practice of cryopreservation after successful IVF, embryos have survived in cryopreservation for far longer periods of time, spanning even decades. The long-term implications of freezing embryos are demonstrated in the case of Molly Everette Gibson, the child born from the viable pregnancy of her mother, who used an embryo that had been stored in a cryogenic freezer for twenty-seven years. The first twins derived from frozen embryos were born in February 1985. Since then and up to 2008 it is estimated that between 350,000 and half a million IVF babies have been born from embryos frozen at a controlled rate and then stored in liquid nitrogen; additionally a few hundred births have resulted from vitrified oocytes, but firm figures are hard to come by.
It may be noted that Subash Mukhopadyay from Kolkata, India, reported the successful cryopreservation of an eight-cell embryo, storing it for 53 days, thawing it and replacing it into the mother's womb, resulting in a successful live birth as early as 1978, a full five years before Trounson and Mohr had done so. A small publication by Mukherjee in 1978 clearly shows that he was on the right line of thinking well before anyone else had demonstrated the successful outcome of a pregnancy following the transfer of an 8-cell frozen-thawed embryo into a human subject.
Implications
The practice of cryopreservation of embryos has increased in recent years. While the original purpose of freezing embryos was to help heterosexual couples who struggled with infertility, the practice has become an increasingly common avenue to start a family for homosexual couples, single women, and surrogates. Prior to successful attempts to effectively freeze embryos for later use, individuals were limited in their assisted reproductive technology options to in vitro fertilization (IVF), whereby sperm and egg were combined in a lab to create the embryos, all of which then had to be immediately implanted into the mother. Cryopreservation enables the embryos to be safely stored for extensive periods of time. Individuals are then able to choose the proper time to use the embryos as well as elect to use only one embryo at a time while saving the others for later use. Doing so reduces the possibility of conceiving twins or triplets, thus allowing parents to exercise greater control over their vision for their families. Additionally, embryos may be tested and manipulated to eliminate genetic diseases.
Legal implications
While the cryopreservation of embryos has been characterized by great scientific developments over the years, the treatment of allocation of embryos in the event of a divorce or separation of the parties is a broadening and still less developed area of the law which continues to present challenges for the courts today. Politicians, state legislatures, and courts grapple with a multitude of legal issues surrounding families created using fertility treatments in view of divergent moral, political, and legal discourse throughout the United States. For example, in Illinois, the courts employ at least two clear approaches to determining how embryos are allocated in the event of a divorce or separation of the parties. Specifically, the courts seek to enforce any contractual language surrounding the allocation of the embryos, and they also employ a balancing test of the parties’ interests alongside the contractual approach or simply as an alternative approach if no contract exists. State courts are often left to decide these issues as statutory resources are not well-developed across the states. For example, in Illinois, the Illinois Parentage Act of 2015 has contemplated situations in which parties, represented by independent legal counsel, enter into contractual agreements regarding the allocation of embryos, but no uniform statutory answer exists for situations in which parties failed to enter into such written agreements regarding allocation.
References
Assisted reproductive technology
Cryopreservation | Embryo cryopreservation | [
"Chemistry",
"Biology"
] | 1,931 | [
"Cryopreservation",
"Cryobiology",
"Assisted reproductive technology",
"Medical technology"
] |
28,343,947 | https://en.wikipedia.org/wiki/Tetrahydrofolate%20riboswitch | Tetrahydrofolate riboswitches are a class of homologous RNAs in certain bacteria that bind tetrahydrofolate (THF). It is almost exclusively located in the probable 5' untranslated regions of protein-coding genes, and most of these genes are known to encode either folate transporters or enzymes involved in folate metabolism. For these reasons it was inferred that the RNAs function as riboswitches. THF riboswitches are found in a variety of Bacillota, specifically the orders Clostridiales and Lactobacillales, and more rarely in other lineages of bacteria. The THF riboswitch was one of many conserved RNA structures found in a project based on comparative genomics. The 3-d structure of the tetrahydrofolate riboswitch has been solved by separate groups using X-ray crystallography. These structures were deposited into the Protein Data Bank under accessions 3SD1 and 3SUX, with other entries containing variants.
See also
Pfl RNA motif
References
External links
Cis-regulatory RNA elements
Riboswitch | Tetrahydrofolate riboswitch | [
"Chemistry"
] | 245 | [
"Biochemistry stubs",
"Molecular and cellular biology stubs"
] |
37,787,002 | https://en.wikipedia.org/wiki/HD%2098649 | HD 98649 is a star with an orbiting exoplanet in the southern Crater constellation. Based on parallax measurements, it is located at a distance of 137.5 light years from the Sun. The system is drifting further away with a heliocentric radial velocity of 4.3 km/s. With an apparent visual magnitude of +8.00, it is too faint to be viewed with the naked eye. The system has a relatively high proper motion across the celestial sphere.
The spectrum of HD 98649 presents as an ordinary G-type main-sequence star with a stellar classification of G3/5V. It is around 4.4 billion years old and is spinning slowly with a rotation period of roughly 27 days. The star is similar to the Sun, having nearly the same size, mass, and luminosity. It is considered a solar analog. The level of magnetic activity in the chromosphere is minimal.
Planetary system
From 1998 to 2012, the star was under observation with the CORALIE echelle spectrograph at La Silla Observatory. In 2012, a long-period, wide-orbiting exoplanet was deduced by Doppler spectroscopy; the discovery was published in November. The discoverers noted, "HD 98649b is in the top five of the most eccentric planetary orbit and the most eccentric planet known with a period larger than 600 days." The reason for this high eccentricity is unknown. The team proposed it as a candidate for direct imaging, once it gets out to 10.4 AU at apoastron, or 250 milliarcseconds of separation as viewed from Earth.
Using astrometry from Gaia, astronomers were able to deduce the true mass of HD 98649 b as , somewhat higher than its minimum mass deduced from radial velocity measurements.
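Doppler spectroscopy alone constrains only the minimum mass, because the orbital inclination i is unknown; astrometry supplies i and therefore the true mass. In symbols (a standard relation, with the symbol names chosen here for illustration):
$$ M_{\min} = M_{\rm true}\,\sin i, \qquad M_{\rm true} = \frac{M_{\min}}{\sin i}, $$
so any inclination away from edge-on (i = 90°) makes the true mass larger than the radial-velocity minimum mass.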
References
G-type main-sequence stars
Solar analogs
Planetary systems with one confirmed planet
Crater (constellation)
Durchmusterung objects
098649
055409 | HD 98649 | [
"Astronomy"
] | 425 | [
"Crater (constellation)",
"Constellations"
] |
37,791,108 | https://en.wikipedia.org/wiki/Metal%20hose | A metal hose is a flexible metal line element. There are two basic types of metal hose that differ in their design and application: stripwound hoses and corrugated hoses.
Stripwound hoses have a high mechanical strength (e.g. tensile strength and tear strength). Corrugated hoses can withstand high pressure and provide maximum leak tightness on account of their material. Corrugated hoses also exhibit corrosion resistance and pressure tightness under the most extreme conditions, such as in aggressive seawater or at extreme temperatures such as found in space or when transporting cooled liquid gas. They are particularly well suited for conveying hot and cold substances.
With a history of more than one hundred years, metal hoses have given rise to other flexible line elements, including metal expansion joints, metal bellows and semi-flexible and flexible metal pipes. In Germany alone, there are about 3500 patents relating to metal hoses.
The origins
The first metal hose was technically a stripwound hose. It was invented in 1885 by the jewellery manufacturer Heinrich Witzenmann (1829–1906) of Pforzheim, Germany, together with the French engineer Eugène Levavassèur. The hose was modelled after the goose throat necklace, a piece of jewellery that consisted of interlacing metal strips. The original design of the hose was based on a helically coiled metal strip with an S-shaped profile. The profile interlocked along the windings of the helical coil. Due to a cavity between the interlocking profiles, this did not create a tight fit. The cavity was sealed by means of a rubber thread.
The result was a permanently flexible, leak-tight steel body of any length and diameter with a high mechanical strength. In France it was patented on 4 August 1885 with the patent number 170 479, and in Germany on 27 August 1885 with the German Reichspatent No. 34 871.
From 1886 to 1905, Heinrich Witzenmann continued to develop numerous noteworthy profiles for hose production which are still of technical significance today. In 1894, he registered a patent for the double metal hose consisting of two coaxial metal hoses twisted in opposite directions. Further modifications of the original form focused on the use of different hose materials and different substances for the thread seal, including rubber, textile threads, asbestos and wire.
An important variant of the metal hose can be attributed to the inventor Siegfried Frank of Frankfurt, Germany. In 1894, he patented the method of rolling a helical corrugation into a smooth rigid pipe. Witzenmann had already made experiments in this direction several years earlier, but did not continue his efforts to create a patentable result. It was not until the 1920s and 1930s that the hotel administrator Albert Dreyer of Lucerne, Switzerland, succeeded in creating a satisfactory annular corrugation for the manufacture of metal corrugated hoses.
Continued development
Emil Witzenmann, son of Heinrich Witzenmann, developed a form of the metal hose in 1909 that eliminated the need for any kind of sealing thread, be it of rubber, textile fibre or asbestos. In this type of hose, the strip edges do not interlock but abut each other and are seamlessly welded together. In 1920, Emil Witzenmann invented the metal expansion joint. This invention was based on the double-walled, welded, corrugated metal hose (with a wound protection sheath) with radial flexibility. In 1929, it became possible for the first time to produce metal bellows. These were also developed by Albert Dreyer of Lucerne, but independently of Witzenmann.
Metal bellows are created by rolling annular corrugations into a smooth extruded or welded pipe. In 1946, Dreyer developed a multi-walled joint that was designed to accommodate axial movements as well: the axial expansion joint.
The stripwound hose
Stripwound hoses consist of spirals that are loosely interlocked. This causes them to be highly flexible. These hoses come in two basic variants – either with an engaged profile or with an interlocked profile such as the Agraff profile. Both variants offer high flexibility due to the profile structure. However, this results in their not being fully leak-tight. For this reason, they are often used as insulating or protective hoses around an inner tube.
Structure and function
Stripwound hoses are created by helically winding cold-rolled, profiled metal strip onto a mandrel where the helical coils are interconnected but remain movable due to the type of profiling. This principle of a movable connection between the profiled coils leads to the high flexibility and movability of metal stripwound hoses. Most strips are made of galvanized steel, stainless steel or brass, which can optionally be chromium- or nickel-plated.
Properties of stripwound hoses
Stripwound hoses exhibit enormous tensile and transversal pressure resistance, a high torsional strength and excellent chemical and thermal stability. Due to their structure, they are not 100% leak-tight.
Types of stripwound hose
The metal hose properties are determined by several factors: the profile shape, the strip dimensions, the material and, if applicable, the type of seal.
Stripwound hoses are available with round and polygonal cross-sections.
Automotive engineering most often uses metallically sealed stripwound hoses. The introduction of a cotton, rubber or ceramic sealing thread into a specially profiled chamber during the winding process leads to greater tightness. For maximum tightness, the stripwound hoses can also be sheathed in PVC or silicone. The profile shapes range from simple engaged profiles to highly secure Agraff profiles.
Application areas for stripwound hoses
Stripwound hoses are frequently used as flexible temperature-resistant and ageing-resistant elements in exhaust equipment, especially in trucks and special vehicles such as tractors. They are also used as protective hoses for light conductors and electrical lines in fibre optics, or in measuring and control equipment. As miniature hoses with diameters ranging from 2.0–0.3 mm, they are also employed in medical technology, such as for endoscopy.
In addition, stripwound hoses are used for extracting and conveying substances such as smoke, shavings, granulate, etc. They are also suitable as protective hoses for corrugated lines to prevent over-extension and to act as a liner (guide hose inside a corrugated hose) to optimise flow conditions.
Stripwound metal hoses also include "bendable arms", or "swan necks". These consist of a round wire coil over which a triangular wire is wound. They can be bent in any direction and remain stationary in any position. These are used for the flexible supports of lamps, magnifying glasses and microphones, for example.
The corrugated hose
Corrugated hoses are pressure and vacuum tight. The permissible operating pressures for hoses with small dimensions reach 380 bar (with a 3-fold burst pressure safety factor). The pressure resistance of large dimensions is lower for technical reasons. Stainless steel models have a temperature resistance of up to approx. 600 °C, depending on the pressure load, and even higher values are possible with special materials. In the low temperature range, stainless steel corrugated hoses can be used down to -270 °C.
Structure and function
Corrugated hoses are used as economical, flexible connecting elements that permit movement, thermal expansion and vibrations, and that can be used as filling hoses. The starting material is a seamless or longitudinally welded, thin-walled tube into which corrugations are introduced by mechanical or hydraulic means using special tools. Corrugated hoses are absolutely leak-tight and are used to convey liquids or gases under pressure or as vacuum lines. They are also referred to as pressure hoses. Their special design achieves both flexibility and pressure resistance.
Types of corrugated hose
There are two basic variants of corrugated hoses that differ in their type of corrugation: annular corrugation and helical corrugation. In hoses with helical corrugation, the corrugation usually forms a right-handed coil with a constant pitch that runs along the whole length of the hose.
The annular corrugation, on the other hand, consists of a large number of equally-spaced parallel corrugations whose main plane is perpendicular to the hose axis. Hoses with annular corrugation have decided advantages over those with helical corrugation:
When installed properly, they are free of damaging torsional strain during pressure surges.
Because of the shape of their profile, they connect smoothly to connection fittings.
This increases process reliability during conduit assembly and use. For this reason, annular corrugated hoses are far more common, with only a few exceptions.
Manufacturing corrugated hoses
The first step in creating a corrugated hose is to shape the starting metal strip from the coil into a smooth, longitudinally welded tube. The strip is continuously welded using the highly precise shielding gas welding method. Then the tube is corrugated by one of the following procedures:
The hydraulic corrugation method expands the tube from the inside to the outside. This method is used to create annular corrugated hoses.
The mechanical corrugation method, on the other hand, is used to produce both annular and helical corrugated hoses. Usually, multiple profiled pressure rollers are positioned around the tube with an offset that enables them to roll the desired corrugation profile into the tube from the outside to the inside. Both corrugation methods cause material hardening and thus increase the pressure and fatigue resistance of the corrugated hoses.
In addition, corrugated hoses can be manufactured by a special method that is closely related to the manufacture of stripwound hoses. In this procedure, the starting strip is given a corrugated profile in a longitudinal direction. This profile strip is then wound helically and the overlapping coils are tightly welded along the helical seam. After corrugation, the hose may be equipped with a braided sheath (see below). In this case, the hose then passes through a braiding machine that has circumferential wire coil holders, or so-called bobbins.
The wire bundles are wrapped helically around the hose while also being alternately layered one over the other. This creates a tubular braid with the characteristic crisscross pattern. After the fittings are mounted, the hose line is complete. Production-related testing is an integral part of the manufacturing process. It encompasses incoming tests of the starting material as well as dimensional, leakage and pressure testing of the finished conduit.
Flexibility
The flexibility of the hose is achieved by means of the elastic behaviour of the corrugation profile. When the hose is bent, the outer corrugations separate while the inner corrugations are squeezed together. The flexibility, bending behaviour and pressure stability of corrugated hoses depend on the selected profile shape. While flexibility increases with an increase in profile height and a decrease in corrugation spacing, pressure resistance decreases. The frequently required semi-flexible bending behaviour is achieved by flat profiles. Depending on the use of the hose, special application-specific profile shapes can be implemented.
Pressure resistance and flexibility can also be altered by varying the wall thickness. A reduction in the wall thickness increases the bending capacity but reduces the pressure resistance of the hose.
Special designs
Miniature hoses with a diameter of only a few millimetres are highly flexible while also being very robust. When provided with a special sheathing, they can be used in minimally invasive surgery. Models with an inner liner (see below) and special connectors are used for laser or optoelectronic applications. The smallest diameter for miniature hoses is 1.8 mm.
Application areas of metal hoses
With its ability to meet high demands for conveying hot and cold substances, this modern technology has the following major areas of application:
Electrical industry and mechanical engineering: as a protective hose for electrical cables or light conductors
As a suction, conveying and coolant hose, e.g. when conveying and transporting liquid gas
Automotive industry: as an exhaust gas hose that acts as a vibration decoupler in exhaust systems
As a ventilation hose in technical building equipment
Industry
Measuring and control equipment
Medical equipment
Aviation and space travel
Reactor technology
Alternative energy (solar heat, wind turbines, etc.)
Properties of metal hoses
Metal hoses resist high pressures and offer maximum tightness on account of the material from which they are manufactured. Their flexibility lends them tensile and tear strength. Also, they are characterised by their corrosion and pressure resistance, even under extreme conditions such as when exposed to aggressive seawater, strong vibrations and extreme temperatures such as in space or when transporting cooled liquid gas.
Braiding around metal hoses
To increase pressure resistance, metal hoses can be equipped with one- or two-layer braiding. The braiding is firmly connected to the hose fittings on both sides to absorb the longitudinal forces caused by internal pressure. Due to its inherent flexibility, the braid moulds itself perfectly to the movement of the hose. Hose braiding consists of right- and left-handed wrapped wire bundles that are alternately layered one over the other. This not only prevents hose lengthening due to internal pressure, but also absorbs external tensile forces and protects the outside of the hose. The basic material of the wire braid is usually the same as that of the corrugated hose. It is also possible to select different materials in the interest of greater corrosion resistance or for economic considerations.
The braiding also greatly increases the resistance of the hose to internal pressure. The braid flexibly moulds itself to the movement of the hose. This applies even if a second braiding is used, which further increases pressure resistance. The method by which the braiding is attached to the connection fittings depends on the type of fitting and the demands on the hose. Under rough operating conditions, an additional round wire coil can be wound over the braid or the braid can be sheathed in a protective hose.
Figure: Metal hose with braid protection as a decoupler for vehicle exhaust systems
Functional principle of the metal braid
Wire braiding functions on the lazy tongs principle. When axial tension is applied to the hose, the braid reaches its extension limit. This means that the wires lie tightly spaced with the smallest crossing angle, creating a hose braiding of the smallest possible diameter and the largest possible length. When the hose is axially compressed, the crossing angle and diameter increase to maximum values.
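A minimal geometric sketch of this behaviour, assuming a single idealized braid wire of fixed length wound at a constant crossing angle measured from the hose axis (the function name and parameter values are illustrative, not taken from any braid design standard):

```python
import math

def braid_geometry(wire_length_mm: float, turns: float, crossing_angle_deg: float):
    """Idealized helical braid wire: return (axial length, braid radius) in mm.

    For a wire of fixed length S wound N times around the hose at a crossing
    angle a (measured from the hose axis), each turn advances 2*pi*r/tan(a)
    axially and consumes 2*pi*r/sin(a) of wire, so
        r = S*sin(a) / (2*pi*N)   and   L = S*cos(a).
    A small crossing angle gives a long, thin braid (the extension limit under
    tension); a large angle gives a short, fat braid (axial compression).
    """
    a = math.radians(crossing_angle_deg)
    radius = wire_length_mm * math.sin(a) / (2 * math.pi * turns)
    axial_length = wire_length_mm * math.cos(a)
    return axial_length, radius

# Example: the same 500 mm wire wound in 10 turns at small vs. large angles.
for angle in (20, 45, 70):
    length, radius = braid_geometry(500.0, 10, angle)
    print(f"angle {angle:2d} deg -> axial length {length:6.1f} mm, radius {radius:5.2f} mm")
```

The numbers simply illustrate the trade-off stated above: as the crossing angle grows, the braid shortens and widens, and as it shrinks, the braid lengthens and narrows.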
References
Sources
Koch, Hans-Eberhard: 100 Jahre Metallschlauch Pforzheim, 1995
Witzenmann Group: company archives
Company history of Witzenmann GmbH, by Gregor Mühlthaler
Reinhard Gropp, Marc Seckner, Bernd Seeger: Flexible Metallic Pipes. In: Die Bibliothek der Technik 382. Süddeutscher Verlag onpact, Munich 2016.
Carlo Burkhardt, Bert Balmer: Automobile Decoupling Element Technology In: Die Bibliothek der Technik 237. Süddeutscher Verlag onpact, Munich 2008.
metal hose manual. Witzenmann, Pforzheim 2007.
Mechanical engineering
Hoses
Metallic objects | Metal hose | [
"Physics",
"Engineering"
] | 3,141 | [
"Applied and interdisciplinary physics",
"Metallic objects",
"Physical objects",
"Mechanical engineering",
"Matter"
] |
37,793,655 | https://en.wikipedia.org/wiki/Thomas%E2%80%93Fermi%20screening | Thomas–Fermi screening is a theoretical approach to calculate the effects of electric field screening by electrons in a solid. It is a special case of the more general Lindhard theory; in particular, Thomas–Fermi screening is the limit of the Lindhard formula when the wavevector (the reciprocal of the length-scale of interest) is much smaller than the Fermi wavevector, i.e. the long-distance limit. It is named after Llewellyn Thomas and Enrico Fermi.
The Thomas–Fermi wavevector $k_0$ (in Gaussian-cgs units) is given by $k_0^2 = 4\pi e^2\,\dfrac{\partial n}{\partial \mu}$,
where μ is the chemical potential (Fermi level), n is the electron concentration and e is the elementary charge.
For the example of semiconductors that are not too heavily doped, the charge density $n \propto e^{\mu/k_B T}$, where kB is the Boltzmann constant and T is temperature. In this case,
$$ k_0^2 = \frac{4\pi e^2 n}{k_B T}, $$
i.e. $1/k_0$ is given by the familiar formula for the Debye length. In the opposite extreme, in the low-temperature limit $T \approx 0$,
electrons behave as quantum particles (fermions). Such an approximation is valid for metals at room temperature, and the Thomas–Fermi screening wavevector $k_{\rm TF}$, given in atomic units, is $k_{\rm TF}^2 = \dfrac{4}{\pi}\,k_F$, where $k_F$ is the Fermi wavevector.
If we restore the electron mass $m_e$ and the reduced Planck constant $\hbar$, the screening wavevector in Gaussian units is $k_{\rm TF}^2 = \dfrac{4 m_e e^2}{\pi \hbar^2}\,k_F$.
For more details and discussion, including the one-dimensional and two-dimensional cases, see the article on Lindhard theory.
Derivation
Relation between electron density and internal chemical potential
The internal chemical potential (closely related to Fermi level, see below) of a system of electrons describes how much energy is required to put an extra electron into the system, neglecting electrical potential energy. As the number of electrons in the system increases (with fixed temperature and volume), the internal chemical potential increases. This consequence is largely because electrons satisfy the Pauli exclusion principle: no two electrons may occupy the same quantum state, and since lower-energy electron states are already full, the new electrons must occupy higher and higher energy states.
Given a Fermi gas of density n, the highest occupied momentum state (at zero temperature) is known as the Fermi momentum, $p_F = \hbar\,(3\pi^2 n)^{1/3}$.
Then the required relationship is described by the electron number density as a function of μ, the internal chemical potential. The exact functional form depends on the system. For example, for a three-dimensional Fermi gas, a noninteracting electron gas, at absolute zero temperature, the relation is $n = \dfrac{1}{3\pi^2}\left(\dfrac{2m\mu}{\hbar^2}\right)^{3/2}$.
Proof: Including spin degeneracy, $n = 2\,\dfrac{1}{(2\pi)^3}\,\dfrac{4}{3}\pi k_F^3 = \dfrac{k_F^3}{3\pi^2}$, with $\mu = \dfrac{\hbar^2 k_F^2}{2m}$
(in this context, i.e. absolute zero, the internal chemical potential is more commonly called the Fermi energy).
As another example, for an n-type semiconductor at low to moderate electron concentration, $n \propto e^{\mu/k_B T}$.
Local approximation
The main assumption in the Thomas–Fermi model is that there is an internal chemical potential at each point r that depends only on the electron concentration at the same point r. This behaviour cannot be exactly true because of the Heisenberg uncertainty principle. No electron can exist at a single point; each is spread out into a wavepacket of size ≈ 1 / kF, where kF is the Fermi wavenumber, i.e. a typical wavenumber for the states at the Fermi surface. Therefore, it cannot be possible to define a chemical potential at a single point, independent of the electron density at nearby points.
Nevertheless, the Thomas–Fermi model is likely to be a reasonably accurate approximation as long as the potential does not vary much over lengths comparable or smaller than 1 / kF. This length usually corresponds to a few atoms in metals.
Electrons in equilibrium, nonlinear equation
Finally, the Thomas–Fermi model assumes that the electrons are in equilibrium, meaning that the total chemical potential is the same at all points. (In electrochemistry terminology, "the electrochemical potential of electrons is the same at all points". In semiconductor physics terminology, "the Fermi level is flat".) This balance requires that the variations in internal chemical potential are matched by equal and opposite variations in the electric potential energy. This gives rise to the "basic equation of nonlinear Thomas–Fermi theory":
$$ \rho_{\rm induced}(\mathbf r) = -e\,\big[\,n\!\left(\mu_0 + e\phi(\mathbf r)\right) - n(\mu_0)\,\big], $$
where n(μ) is the function discussed above (electron density as a function of internal chemical potential), e is the elementary charge, r is the position, and $\rho_{\rm induced}(\mathbf r)$ is the induced charge at r. The electric potential φ is defined in such a way that $\phi(\mathbf r) = 0$ at the points where the material is charge-neutral (the number of electrons is exactly equal to the number of ions), and similarly μ0 is defined as the internal chemical potential at the points where the material is charge-neutral.
Linearization, dielectric function
If the chemical potential does not vary too much, the above equation can be linearized:
$$ \rho_{\rm induced}(\mathbf r) \approx -e^2\,\frac{\partial n}{\partial \mu}\,\phi(\mathbf r), $$
where $\partial n/\partial\mu$ is evaluated at μ0 and treated as a constant.
This relation can be converted into a wavevector-dependent dielectric function (in cgs-Gaussian units):
$$ \epsilon(q) = 1 + \frac{k_0^2}{q^2}, \qquad \text{where } k_0 = \sqrt{4\pi e^2\,\frac{\partial n}{\partial \mu}}. $$
At long distances ($q \to 0$), the dielectric constant approaches infinity, reflecting the fact that charges get closer and closer to perfectly screened as you observe them from further away.
Example: A point charge
If a point charge is placed at in a solid, what field will it produce, taking electron screening into account?
One seeks a self-consistent solution to two equations:
The Thomas–Fermi screening formula gives the charge density at each point r as a function of the potential at that point.
The Poisson equation (derived from Gauss's law) relates the second derivative of the potential to the charge density.
For the nonlinear Thomas–Fermi formula, solving these simultaneously can be difficult, and usually there is no analytical solution. However, the linearized formula has a simple solution (in cgs-Gaussian units):
$$ \phi(r) = \frac{Q}{r}\,e^{-k_0 r}. $$
With $k_0 = 0$ (no screening), this becomes the familiar Coulomb's law.
Note that there may be dielectric permittivity in addition to the screening discussed here; for example due to the polarization of immobile core electrons. In that case, replace Q by Q/ε, where ε is the relative permittivity due to these other contributions.
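As a quick numerical illustration of the scale of this screening in a simple metal, the following sketch evaluates the degenerate-limit Thomas–Fermi wavevector and the screened potential for an assumed free-electron density roughly that of copper; the density value and the use of the zero-temperature free-electron formulas are illustrative assumptions, not data from the source:

```python
import math

BOHR_M = 5.29177e-11          # Bohr radius in metres

def thomas_fermi_wavevector_au(n_per_m3: float) -> float:
    """Degenerate-limit Thomas-Fermi wavevector in atomic units (1/Bohr).

    Uses k_F = (3*pi^2*n)^(1/3) and k_TF^2 = (4/pi)*k_F in atomic units.
    """
    n_au = n_per_m3 * BOHR_M**3                   # electrons per Bohr^3
    k_f = (3.0 * math.pi**2 * n_au) ** (1.0 / 3.0)
    return math.sqrt(4.0 * k_f / math.pi)

def screened_potential_au(q_charge: float, r_bohr: float, k0: float) -> float:
    """Linearized Thomas-Fermi (Yukawa-like) potential phi = (Q/r) * exp(-k0*r)."""
    return q_charge / r_bohr * math.exp(-k0 * r_bohr)

n_assumed = 8.5e28                                # assumed conduction-electron density, m^-3
k0 = thomas_fermi_wavevector_au(n_assumed)
print(f"k_TF ~ {k0:.2f} per Bohr, screening length ~ {BOHR_M / k0 * 1e10:.2f} Angstrom")
for r in (0.5, 1.0, 2.0, 5.0):                    # distances in Bohr radii
    bare = 1.0 / r
    screened = screened_potential_au(1.0, r, k0)
    print(f"r = {r:4.1f} Bohr: bare {bare:6.3f} a.u., screened {screened:6.3f} a.u.")
```

With these assumed inputs the screening length comes out around half an ångström, which is why the bare and screened potentials differ strongly already at distances of a few Bohr radii.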
Fermi gas at arbitrary temperature
For a three-dimensional Fermi gas (noninteracting electron gas), the screening wavevector can be expressed as a function of both the temperature T and the Fermi energy $E_F$. The first step is calculating the internal chemical potential μ, which involves the inverse of a Fermi–Dirac integral.
We can express the screening wavevector in terms of an effective temperature $T_{\rm eff}$ by writing $k_0^2 = 4\pi n e^2/(k_B T_{\rm eff})$. In the classical limit $T \gg T_F$ (where $T_F = E_F/k_B$ is the Fermi temperature) we find $T_{\rm eff} \approx T$, recovering the Debye result given above, while in the degenerate limit $T \ll T_F$ we find $T_{\rm eff} \approx \tfrac{2}{3} T_F$, recovering the Thomas–Fermi result.
A simple approximate form that recovers both limits correctly combines the two limiting values as a power mean, for an appropriately chosen power. A suitable choice of the power gives decent agreement with the exact result over the whole range of densities and temperatures, with a maximum relative error of < 2.3%.
In the effective temperature given above, the temperature T is used to construct an effective classical model. However, this form of the effective temperature does not correctly recover the specific heat and most other properties of the finite-temperature electron fluid even for the non-interacting electron gas. It does not, of course, attempt to include electron–electron interaction effects. A simple form for an effective temperature which correctly recovers all the density-functional properties of even the interacting electron gas, including the pair-distribution functions at finite temperature, has been given using the classical map hyper-netted-chain (CHNC) model of the electron fluid. That is, $T_{\rm eff} = \sqrt{T^2 + T_q^2}$,
where the quantum temperature $T_q$ is defined as a function of the Wigner–Seitz radius $r_s$.
Here $r_s$ is the Wigner–Seitz radius corresponding to a sphere in atomic units containing one electron. That is, if n is the number of electrons in a unit volume using atomic units where the unit of length is the Bohr radius, then $\tfrac{4}{3}\pi r_s^3 = 1/n$.
For a dense electron gas, e.g., with $r_s$ of order unity or less, electron–electron interactions become negligible compared to the Fermi energy; the CHNC effective temperature at T = 0 then reduces to the quantum temperature $T_q$. Other mappings for the 3D case, and similar formulae for the effective temperature, have been given for the classical map of the 2-dimensional electron gas as well.
See also
Thomas–Fermi equation
References
Condensed matter physics | Thomas–Fermi screening | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,681 | [
"Phases of matter",
"Condensed matter physics",
"Matter",
"Materials science"
] |
37,796,281 | https://en.wikipedia.org/wiki/Scanning%20flow%20cell | Scanning Flow Cell (SFC) is an electrochemical technique based on the principle of the channel electrode. The electrolyte flows continuously over a substrate that is introduced externally on a translation stage, in contrast to the reference and counter electrodes, which are integrated in the main channel or placed in side compartments connected with a salt bridge.
The SFC utilizes a V-shaped geometry with a small opening on the bottom (in the range of 0.2–1 mm diameter) used to establish contact with the sample. The convective flow is sustained also in the non-contact mode of operation, which allows easy exchange of the working electrode.
Application
The SFC is employed for combinatorial and high-throughput electrochemical studies. Because of its non-homogeneous flow profile distribution, it is currently used for comparative kinetic studies. SFC is predominantly used for coupling of electrochemical measurements with post-analytical techniques like UV-Vis, ICP-MS, ICP-OES, etc. This makes possible a direct correlation of the electrochemical and spectrometric signals. This methodology was successfully applied to corrosion studies.
References
Laboratory equipment
Physical chemistry | Scanning flow cell | [
"Physics",
"Chemistry"
] | 227 | [
"Physical chemistry",
"Electrochemistry",
"Applied and interdisciplinary physics",
"nan"
] |
37,796,403 | https://en.wikipedia.org/wiki/Pi1%20Orionis | {{DISPLAYTITLE:Pi1 Orionis}}
Pi1 Orionis (π1 Ori, π1 Orionis) is a star in the equatorial constellation of Orion. It is faintly visible to the naked eye with an apparent visual magnitude of 4.74. Based upon an annual parallax shift of 28.04 mas, it is located about 116 light-years from the Sun.
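A minimal check of the quoted distance from the quoted parallax (the conversion constants are standard; the function name is just for illustration):

```python
def parallax_to_lightyears(parallax_mas: float) -> float:
    """Convert an annual parallax in milliarcseconds to a distance in light-years.

    Distance in parsecs is the reciprocal of the parallax in arcseconds;
    one parsec is about 3.26156 light-years.
    """
    parsecs = 1.0 / (parallax_mas / 1000.0)
    return parsecs * 3.26156

print(f"{parallax_to_lightyears(28.04):.0f} light-years")  # ~116, matching the value quoted above
```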
This is an A-type main-sequence star with a stellar classification of A3 Va. It is a Lambda Boötis star, which means the spectrum shows lower-than-expected abundances for heavier elements. Pi1 Orionis is a relatively young star, just 100 million years old, and is spinning fairly rapidly with a projected rotational velocity of 120 km/s. It has nearly double the mass of the Sun and 167% of the Sun's radius. The star radiates 16.6 times the solar luminosity from its outer atmosphere at an effective temperature of .
An infrared excess indicates there is a debris disk with a temperature of 80 K orbiting from the star. The dust has a combined mass 2.2% that of the Earth.
References
A-type main-sequence stars
Orion (constellation)
Orionis, Pi1
Orionis, 07
031295
022845
01570
Durchmusterung objects | Pi1 Orionis | [
"Astronomy"
] | 273 | [
"Constellations",
"Orion (constellation)"
] |
40,536,771 | https://en.wikipedia.org/wiki/Muon%20tomography | Muon tomography or muography is a technique that uses cosmic ray muons to generate two or three-dimensional images of volumes using information contained in the Coulomb scattering of the muons. Since muons are much more deeply penetrating than X-rays, muon tomography can be used to image through much thicker material than x-ray based tomography such as CT scanning. The muon flux at the Earth's surface is such that a single muon passes through an area the size of a human hand per second.
Since its development in the 1950s, muon tomography has taken many forms, the most important of which are muon transmission radiography and muon scattering tomography.
Muography tracks the number of muons that pass through the target volume in order to determine the density of its inaccessible internal structure. Muography is a technique similar in principle to radiography (imaging with X-rays) but capable of surveying much larger objects. Since muons are less likely to interact, stop and decay in low-density matter than in high-density matter, a larger number of muons will travel through the low-density regions of target objects in comparison to higher-density regions. The apparatuses record the trajectory of each event to produce a muogram that displays the matrix of the resulting numbers of transmitted muons after they have passed through objects up to multiple kilometers in thickness. The internal structure of the object, imaged in terms of density, is displayed by converting muograms to muographic images.
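A minimal sketch of how such a muogram can be assembled from recorded arrival directions, assuming a hypothetical detector that reports an azimuth and elevation angle per muon (the bin counts, array names and toy data are illustrative, not from any real instrument):

```python
import numpy as np

def build_muogram(azimuth_deg, elevation_deg, n_az_bins=36, n_el_bins=18):
    """Bin recorded muon arrival directions into a 2D count matrix (a muogram).

    Directions whose paths cross dense rock accumulate fewer counts than
    directions through open sky or low-density regions, so low-count bins map
    to high-density paths once compared against an open-sky reference exposure.
    """
    counts, az_edges, el_edges = np.histogram2d(
        azimuth_deg, elevation_deg,
        bins=[n_az_bins, n_el_bins],
        range=[[0.0, 360.0], [0.0, 90.0]],
    )
    return counts, az_edges, el_edges

# Toy directions standing in for a real exposure (not the real cosmic-ray
# zenith-angle distribution, just enough to exercise the binning).
rng = np.random.default_rng(0)
azimuth = rng.uniform(0.0, 360.0, size=100_000)
elevation = rng.uniform(0.0, 90.0, size=100_000)
muogram, _, _ = build_muogram(azimuth, elevation)
print(muogram.shape, int(muogram.sum()))
```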
Muon tomography imagers are under development for the purposes of detecting nuclear material in road transport vehicles and cargo containers for the purposes of non-proliferation.
Another application is the usage of muon tomography to monitor potential underground sites used for carbon sequestration.
Etymology and use
The term muon tomography is based on the word "tomography", a word produced by combining Ancient Greek tomos "cut" and graphe "drawing." The technique produces cross-sectional images (not projection images) of large-scaled objects that cannot be imaged with conventional radiography. Some authors hence see this modality as a subset of muography.
Muography was named by Hiroyuki K. M. Tanaka. There are two explanations for the origin of the word "muography": (A) a combination of the elementary particle muon and Greek γραφή (graphé) "drawing," together suggesting the meaning "drawing with muons"; and (B) a shortened combination of "muon" and "radiography." Although these techniques are related, they differ in that radiography uses X-rays to image the inside of objects on the scale of meters, while muography uses muons to image the inside of objects on the scale of hectometers to kilometers.
Invention of muography
Precursor technologies
Twenty years after Carl David Anderson and Seth Neddermeyer discovered that muons were generated from cosmic rays in 1936, Australian physicist E.P. George made the first known attempt to measure the areal density of the rock overburden of the Guthega-Munyang tunnel (part of the Snowy Mountains Hydro-Electric Scheme) with cosmic ray muons. He used a Geiger counter. Although he succeeded in measuring the areal density of rock overburden placed above the detector, and even successfully matched the result from core samples, due to the lack of directional sensitivity in the Geiger counter, imaging was impossible.
In a famous experiment in the 1960s, Luis Alvarez used muon transmission imaging to search for hidden chambers in the Pyramid of Chephren in Giza, although none were found at the time; a later effort discovered a previously unknown void in the Great Pyramid. In all cases the information about the absorption of the muons was used as a measure of the thickness of the material crossed by the cosmic ray particles.
First muogram
The first muogram was produced in 1970 by a team led by American physicist Luis Walter Alvarez, who installed detection apparatus in the Belzoni Chamber of the Pyramid of Khafre to search for hidden rooms within the structure. He recorded the number of muons after they had passed through the Pyramid. With the invention of this particle-tracking technique, he worked out methods to generate the muogram as a function of the muons' arrival angles. The generated muogram was compared with the results of computer simulations, and he concluded that there were no hidden chambers in the Pyramid of Chephren after the apparatus had been exposed to the Pyramid for several months.
Film muography
Tanaka and Niwa’s pioneering work created film muography, which uses nuclear emulsion. Exposures of nuclear emulsions were taken in the direction of the volcano and then analyzed with a newly invented scanning microscope, custom built for the purpose of identifying particle tracks more efficiently. Film muography enabled them to obtain the first interior imaging of an active volcano in 2007, revealing the structure of the magma pathway of Asama volcano.
Real-time muography
In 1968, the group of Alvarez used spark chambers with a digital readout for their Pyramid experiment. Tracking data from the apparatus were recorded onto magnetic tape in the Belzoni Chamber; the data were then analyzed by an IBM 1130 computer, and later by a CDC 6600 computer, located at Ein Shams University and the Lawrence Radiation Laboratory, respectively. Strictly speaking these were not real-time measurements.
Real-time muography requires muon sensors to convert the muon's kinetic energy into a number of electrons in order to process muon events as electronic data rather than as chemical changes on film. Electronic tracking data can be processed almost instantly with an adequate computer processor; in contrast, film muography data have to be developed before the muon tracks can be observed. Real-time tracking of muon trajectories produce real-time muograms that would be difficult or impossible to obtain with film muography.
High-resolution muography
The MicroMegas detector has a positioning resolution of 0.3 mm, an order of magnitude finer than that of the scintillator-based apparatus (10 mm), and thus can deliver better angular resolution in muograms.
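To see why finer position resolution translates into finer muogram angles, a track reconstructed from hits on two parallel detector planes has an angular uncertainty of roughly the hit uncertainty divided by the plane separation; the separation used below is an assumed, illustrative value, not a specification of any of the instruments mentioned:

```python
import math

def angular_resolution_mrad(position_resolution_mm: float, plane_separation_mm: float) -> float:
    """Approximate track angular resolution (milliradians) for two tracking planes.

    A hit uncertainty dx on each of two planes a distance L apart gives an
    angular uncertainty of about sqrt(2)*dx/L for the reconstructed track.
    """
    return math.sqrt(2.0) * position_resolution_mm / plane_separation_mm * 1000.0

for dx in (0.3, 10.0):  # MicroMegas-like vs. scintillator-like hit resolution
    print(f"dx = {dx:4.1f} mm -> ~{angular_resolution_mrad(dx, 1000.0):6.1f} mrad "
          f"(assuming 1 m plane separation)")
```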
Applications
Geology
Muons have been used to image magma chambers to predict volcanic eruptions. Kanetada Nagamine et al. continue active research into the prediction of volcanic eruptions through cosmic-ray attenuation radiography. Minato used cosmic-ray counts to radiograph a large temple gate. Emil Frlež et al. reported using tomographic methods to track the passage of cosmic-ray muons through cesium iodide crystals for quality-control purposes. All of these studies have been based on finding some part of the imaged material that has a lower density than the rest, indicating a cavity. Muon transmission imaging is the most suitable method for acquiring this type of information.
In 2021, Giovanni Leone and his group revealed that volcanic eruption frequency is related to the amount of volcanic material which moves through a near-surface conduit in an active volcano.
Vesuvius
The Mu-Ray project has been using muography to image Vesuvius, famous for its eruption of 79 AD, which destroyed local settlements including Pompeii and Herculaneum. The Mu-Ray project is funded by the Istituto Nazionale di Fisica Nucleare (INFN, Italian National Institute for Nuclear Physics) and the Istituto Nazionale di Geofisica e Vulcanologia (Italian National Institute for Geophysics and Volcanology). The last time this volcano erupted was in 1944. The goal of this project is to "see" inside the volcano which is being developed by scientists in Italy, France, the US and Japan. This technology can be applied to volcanoes all around the world, to have a better understanding of when volcanoes will erupt.
Etna
The ASTRI SST-2M Project is using muography to generate the internal images of the magma pathways of Etna volcano. The last major eruption of 1669 caused widespread damage and the death of approximately 20,000 people. Monitoring the magma flows with muography may help to predict the direction from which lava from future eruptions may emit.
From August 2017 to October 2019, time sequential muography imaging of the Etna edifice was conducted to study differences in density levels which would indicate interior volcanic activities. Some of the findings of this research were the following: imaging of a cavity formation prior to crater floor collapse, underground fracture identification, and imaging of the formation of a new vent in 2019 which became active and subsequently erupted.
Stromboli
The apparatuses use nuclear emulsions to collect data near Stromboli volcano. Recent emulsion scanning improvements developed during the course of the Oscillation Project with Emulsion tRacking Apparatus (OPERA experiment) led to film muography. Unlike other muography particle trackers, nuclear emulsion can acquire high angular resolution without electricity. An emulsion-based tracker has been collecting data at Stromboli since December 2011.
Over a period of 5 months in 2019, an experiment using nuclear emulsion muography was done at Stromboli volcano. Emulsion films were prepared in Italy and analyzed in Italy and Japan. The images revealed a low-density zone at the summit of the volcano which is thought to influence the stability of the “Sciara del Fuoco” slope (the source of many landslides).
Puy de Dôme
Since 2010, a muographic imaging survey has been conducted at the dormant volcano, Puy de Dôme, in France. It has been using the existing closed building structures located directly underneath the southern and eastern sides of the volcano for equipment testing and experiments. Preliminary muographs have revealed previously unknown density features at the top of Puy de Dôme that have been confirmed with gravimetric imaging.
A joint measurement was conducted by French and Italian research groups in 2013-2014 during which different strategies for improved detector designs were tested, particularly their capacities to reduce background noise.
Underground water monitoring
Muography has been applied to groundwater and saturation level monitoring for bedrock in a landslide area as a response to major rainfall events. The measurement results were compared with borehole groundwater level measurements and rock resistivity.
Glaciers
The applicability of muography to glacier studies was first demonstrated with a survey of the top portion of the Aletsch glacier located in the Central European Alps.
In 2017, a Japanese/Swiss collaboration conducted a larger scale muography imaging experiment based at Eiger Glacier to determine the bedrock geometry beneath active glaciers in the steep alpine environment of the Jungfrau region in Switzerland. 5-6 double side coated emulsion films were set in frames with stainless steel plates for shielding to be installed in 3 regions of a railway tunnel which was located underneath the targeted glacier. Production of the emulsion films was done in Switzerland and analysis was done in Japan.
Underlying bedrock erosion and its boundary between glacier and bedrock could be successfully imaged for the first time. The methodology provided important information on subglacial mechanisms of bedrock erosion.
Mining
TRIUMF and its spin-off company Ideon Technologies developed a muograph designed specifically for surveys of possible uranium deposit sites with industry-standard boreholes.
Civil engineering
Muography has been used to map the inside of big civil engineering structures, such as dams, and their surroundings for safety and risk prevention purposes. Muography imaging was applied to the identification of hidden construction shafts located above the Alfreton Old Tunnel (constructed in 1862) in the UK.
Nuclear reactors
Muography was applied to investigating the condition of the nuclear reactors damaged by the Fukushima nuclear disaster, and helped to confirm their state of near-complete meltdown.
Nuclear waste imaging
Tomographic techniques can be effective for non-invasive nuclear waste characterization and for nuclear material accountancy of spent fuel inside dry storage containers. Cosmic muons can improve the accuracy of data on nuclear waste and Dry Storage Containers (DSC). Imaging of DSC exceeds the IAEA detection target for nuclear material accountancy. In Canada, spent nuclear fuel is stored in large pools (fuel bays or wet storage) for a nominal period of 10 years to allow for sufficient radioactive cooling.
Challenges and issues for nuclear waste characterization are covered at great length, summarized below:
Historical waste. Non-traceable waste streams pose a challenge for characterization. Different types of waste can be distinguished: tanks with liquids, fabrication facilities to be decontaminated before decommissioning, interim waste storage sites, etc.
Some waste forms may be difficult or impossible to measure and characterize (e.g. encapsulated alpha/beta emitters, heavily shielded waste).
Direct measurements, i.e. destructive assay, are not possible in many cases and Non-Destructive Assay (NDA) techniques are required, which often do not provide conclusive characterization.
Homogeneity of the waste needs characterization (e.g. sludge in tanks, inhomogeneities in cemented waste, etc.).
Condition of the waste and waste package: breach of containment, corrosion, voids, etc.
Accounting for all of these issues can take a great deal of time and effort. Muon Tomography can be useful to assess the characterization of waste, radiation cooling, and condition of the waste container.
Los Alamos Concrete Reactor
In the summer of 2011, a reactor mockup was imaged using the Muon Mini Tracker (MMT) at Los Alamos. The MMT consists of two muon trackers made up of sealed drift tubes. In the demonstration, cosmic-ray muons passing through a physical arrangement of concrete and lead, materials similar to a reactor, were measured. The mockup consisted of two layers of concrete shielding blocks with a lead assembly in between; one tracker was installed at height, and another tracker was installed at ground level on the other side. Lead with a conical void similar in shape to the melted core of the Three Mile Island reactor was imaged through the concrete walls. It took three weeks to accumulate the muon events. The analysis was based on the point of closest approach, where the track pairs were projected to the mid-plane of the target and the scattering angle was plotted at the intersection. This test object was successfully imaged, even though it was significantly smaller than what is expected at Fukushima Daiichi for the proposed Fukushima Muon Tracker (FMT).
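A minimal Python sketch of the point-of-closest-approach (PoCA) idea described above; the two-track representation (a point plus a direction vector per track), the helper name and the toy numbers are illustrative assumptions, not the actual MMT analysis code:

import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of closest approach between an incoming and an outgoing straight track.
    Each track is given as a point on the track plus a direction vector; returns the
    midpoint of the shortest segment joining the two lines and the scattering angle."""
    u = np.asarray(d_in, dtype=float)
    u /= np.linalg.norm(u)
    v = np.asarray(d_out, dtype=float)
    v /= np.linalg.norm(v)
    p = np.asarray(p_in, dtype=float)
    q = np.asarray(p_out, dtype=float)
    w0 = p - q
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if np.isclose(denom, 0.0):          # (nearly) parallel tracks: no scattering vertex
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    midpoint = 0.5 * ((p + s * u) + (q + t * v))
    angle = np.arccos(np.clip(u @ v, -1.0, 1.0))
    return midpoint, angle

# Toy usage: a track deflected by about 1 mrad near the origin.
mid, theta = poca([0, 0, -100], [0, 0, 1], [0, 0.1, 100], [0, 0.001, 1])
print(mid, theta)

Accumulating many such (midpoint, angle) pairs and binning the scattering angles by position is, in essence, how a scattering image of the target volume is built up.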
Fukushima application
On March 11, 2011, a 9.0-magnitude earthquake, followed by a tsunami, caused an ongoing nuclear crisis at the Fukushima Daiichi power plant. Though the reactors are stabilized, complete shutdown will require knowledge of the extent and location of the damage to the reactors. A cold shutdown was announced by the Japanese government in December, 2011, and a new phase of nuclear cleanup and decommissioning was started. However, it is hard to plan the dismantling of the reactors without any realistic estimate of the extent of the damage to the cores, and knowledge of the location of the melted fuel.
Since the radiation levels are still very high inside the reactor core, it is not likely anyone can go inside to assess the damage. The Fukushima Daiichi Tracker (FDT) was proposed to assess the extent of the damage from a safe distance. A few months of measurements with muon tomography will show the distribution of the reactor core. From that, a plan can be made for reactor dismantlement, potentially shortening the project by many years.
In August 2014, Decision Sciences International Corporation announced that it had been awarded a contract by Toshiba Corporation (Toshiba) to support the reclamation of the Fukushima Daiichi nuclear complex with the use of Decision Sciences' muon tracking detectors.
Industrial muography has found an application in reactor inspection. It was used to locate the nuclear fuel in the Fukushima Daiichi nuclear power plant, which was damaged by the 2011 Tōhoku earthquake and tsunami.
Non-proliferation
The Nuclear Non-proliferation Treaty (NPT) signed in 1968 was a major step in the non-proliferation of nuclear weapons. Under the NPT, non-nuclear weapon states were prohibited from, among other things, possessing, manufacturing or acquiring nuclear weapons or other nuclear explosive devices. All signatories, including nuclear weapon states, were committed to the goal of total nuclear disarmament.
The Comprehensive Nuclear-Test-Ban Treaty (CTBT) bans all nuclear explosions in any environments. Tools such as muon tomography can help to stop the spread of nuclear material before it is armed into a weapon.
The New START treaty signed by the US and Russia aims to reduce the nuclear arsenal by as much as a third. The verification involves a number of logistically and technically difficult problems. New methods of warhead imaging are of crucial importance for the success of mutual inspections.
Muon tomography can be used for treaty verification due to many important factors. It is a passive method; it is safe for humans and will not apply an artificial radiological dose to the warhead. Cosmic rays are much more penetrating than gamma or x-rays. Warheads can be imaged in a container behind significant shielding and in presence of clutter. Exposure times depend on the object and detector configuration (~few minutes if optimized). While special nuclear material (SNM) detection can be reliably confirmed, and discrete SNM objects can be counted and localized, the system can be designed to not reveal potentially sensitive details of the object design and composition.
The Multi-Mode Passive Detection System (MMPDS) port scanner, located in the Freeport, Bahamas can detect both shielded nuclear material, as well as explosives and contraband. The scanner is large enough for a cargo container to pass through, making it a scaled-up version of the Mini Muon Tracker. It then produces a 3-D image of what is scanned.
Tools such as the MMPDS can be used to prevent the spread of nuclear weapons. The safe but effective use of cosmic rays can be implemented in ports to help non-proliferation efforts, or even in cities, under overpasses, or entrances to government buildings.
Archaeology
Egyptian pyramids
In 2015, 45 years after Alvarez’s experiment, the ScanPyramids Project, which is composed of an international team of scientists from Egypt, France, Canada, and Japan, started using muography and thermography imaging techniques to survey the Giza pyramid complex. In 2017, scientists involved in the project discovered a large cavity, named "ScanPyramids Big Void", above the Grand Gallery of the Great Pyramid of Giza. In 2023, "a corridor-shaped structure" was found in Khufu's Pyramid using the cosmic-ray muons. It was named "ScanPyramids North Face Corridor".
Mexican pyramids
The 3rd largest pyramid in the world, the Pyramid of the Sun, situated near Mexico City in the ancient city of Teotihuacan was surveyed with muography. One of the motivations of the team was to discover if inaccessible chambers inside the Pyramid might hold the tomb of a Teotihuacan ruler. The apparatus was transported in components and then reassembled inside a small tunnel leading to an underground chamber directly underneath the pyramid. A low density region approximately 60 meters wide was reported as a preliminary result, which has led some researchers to suggest that the structure of the pyramid might have been weakened and it is in danger of collapse.
In 2020, the US National Science Foundation awarded a US-Mexico international group a grant for muography to investigate El Castillo, the largest pyramid in Chichen Itza.
Mt. Echia
A three-dimensional muography experiment was done in the underground tunnels of Mt Echia (in Naples, Italy) with 2 muon detectors, MU-RAY and MIMA, which successfully imaged 2 known cavities and discovered one unknown cavity. Mt Echia, the site of the earliest settlement of Naples in the 8th century BC, is crossed by a network of underground tunnels. Using measurements from 3 different locations in the underground tunnels, a 3D reconstruction was created for the unknown cavity. The method used for this experiment could be applied to other archeological targets to check the structural integrity of ancient sites and to potentially discover hidden historical regions within known sites.
China's imperial chambers
Yuanyuan Liu of the Beijing Normal University and her group showed the feasibility of muography to image the underground chamber of the first emperor of China.
Planetary science
Mars
Muography may potentially be implemented to image extraterrestrial objects such as the geology of Mars. Cosmic rays are numerous and omnipresent in outer space. Therefore, the process by which cosmic rays interact with the Earth's atmosphere to generate pions and other mesons, which subsequently decay into muons, is predicted to occur in the atmospheres of other planets as well. It has been calculated that the atmosphere of Mars is sufficient to produce a horizontal muon flux for practical muography, roughly equivalent to the Earth's muon flux. In the future, it may be viable to include a high-resolution muography apparatus in a space mission to Mars, for instance inside a Mars rover. Accurate images of the density of Martian structures could be used for surveying sources of ice or water.
Small Solar System bodies
The “NASA Innovative Advanced Concepts (NIAC) program” is now in the process of assessing whether muography may be used for imaging the density structures of small Solar System bodies (SSBs). While the SSBs tend to generate lower muon flux than the Earth's atmosphere, some are sufficient to allow for muography of objects ranging from 1 km or less in diameter. The program includes calculating the muon flux for each potential target, creating imaging simulations and considering the engineering challenges of building a more lightweight, compact apparatus appropriate for such a mission.
Hydrospheric muography
The Hyper-kilometric Submarine Deep Detector (HKMSDD) was designed as a technique to operate muographic observations autonomously under the sea at reasonable costs by combining linear arrays of muographic sensor modules with underwater tube structures.
In undersea muography, time-dependent mass movements consisting of or within targeted gigantic fluid bodies and submerged solid material bodies can be more precisely imaged than with land-based muography. Time-dependent fluctuations of the muon flux due to atmospheric pressure variations are suppressed when muography is conducted under the seafloor, thanks to the "inverse barometric effect (IBE)" of seawater. Low atmospheric pressures, such as those observed at the center of a cyclone, suck up seawater; high atmospheric pressures, on the other hand, push down seawater. The muon flux fluctuations due to barometric pressure are therefore mostly compensated by the IBE at sea level.
Carbon capture and storage
The success of carbon capture and storage (CCS) hinges upon being able to reliably contain the materials within the storage containers. It has been proposed to use muography as a monitoring tool for CCS. In 2018, a 2 month study supported the feasibility of CCS muography monitoring. It was completed in the UK at the Boulby Mine site in a deep borehole.
Technique variants
Muon scattering tomography (MST)
Muon scattering tomography was first proposed by Chris Morris and his group at Los Alamos National Laboratory (LANL). This technique is capable of locating the muon's Rutherford scattering source by tracking incoming and outgoing muons from the target. Radiation lengths tend to be shorter for higher atomic number materials, so larger scattering angles are expected for the same path lengths; this makes the technique sensitive to differences between materials within structures, and it can therefore be used for imaging heavy metals hidden inside light materials. On the other hand, this technique is not suitable for imaging void structures or light materials located inside heavy materials.
LANL and its spinoff company Decision Sciences applied the MST technique to image the interiors of large trucks and other storage containers in order to detect nuclear materials. A similar system that used MST was developed at the University of Glasgow and its spin-off company Lynkeos Technology to apply towards monitoring the robustness of nuclear waste containers at the Sellafield storage site.
With muon scattering tomography, both incoming and outgoing trajectories for each particle are reconstructed. This technique has been shown to be useful for finding materials with a high atomic number, such as uranium, against a background of material with a lower atomic number. Since the development of this technique at Los Alamos, a few different companies have started to use it for several purposes, most notably for detecting nuclear cargo entering ports and crossing over borders.
The Los Alamos National Laboratory team has built a portable Mini Muon Tracker (MMT). This muon tracker is constructed from sealed aluminum drift tubes, which are grouped into twenty-four planes. The drift tubes measure particle coordinates in X and Y with a typical accuracy of several hundred micrometers. The MMT can be moved via a pallet jack or a fork lift. If a nuclear material has been detected it is important to be able to measure details of its construction in order to correctly evaluate the threat.
Muon tomography uses multiple scattering radiography. In addition to energy loss and stopping, cosmic rays undergo Coulomb scattering. The angular distribution is the result of many single scatters. This results in an angular distribution that is Gaussian in shape, with tails from large-angle single and plural scattering. The scattering provides a novel method for obtaining radiographic information with charged particle beams. More recently, scattering information from cosmic-ray muons has been shown to be a useful method of radiography for homeland security applications.
When the thickness of the scattering medium increases and the number of interactions becomes high, the angular dispersion can be modelled as Gaussian, with the dominant part of the multiple-scattering polar-angular distribution given by

$\frac{dN}{d\theta} \approx \frac{1}{\sqrt{2\pi}\,\theta_0}\exp\!\left(-\frac{\theta^2}{2\theta_0^2}\right),$

where θ is the muon scattering angle and θ0, the standard deviation of the scattering angle, is given approximately by

$\theta_0 \approx \frac{13.6\ \mathrm{MeV}}{\beta c\, p}\sqrt{\frac{X}{X_0}}.$

The muon momentum and velocity are p and β, respectively, c is the speed of light, X is the length of the scattering medium, and X0 is the radiation length for the material. This needs to be convolved with the cosmic ray momentum spectrum in order to describe the angular distribution.
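As a rough illustration of the width formula above, the following Python snippet evaluates θ0 for a 3 GeV/c muon crossing 10 cm of several materials; the radiation lengths are approximate textbook values and the whole comparison is for illustration only:

import math

def scattering_width_mrad(p_mev, beta, thickness, rad_length):
    """Approximate multiple-scattering width theta_0 (milliradians) from
    theta_0 ~ (13.6 MeV / (beta * c * p)) * sqrt(X / X0)."""
    return 13.6 / (beta * p_mev) * math.sqrt(thickness / rad_length) * 1000.0

# 3 GeV/c muon (beta ~ 1) crossing 10 cm of material; X0 values are approximate.
for material, x0_cm in [("concrete", 11.6), ("iron", 1.76), ("lead", 0.56), ("uranium", 0.32)]:
    print(material, round(scattering_width_mrad(3000.0, 1.0, 10.0, x0_cm), 1), "mrad")

The larger widths for lead and uranium compared with concrete are what make scattering-based muography sensitive to high-Z materials hidden behind lighter shielding.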
The image can then be reconstructed by use of GEANT4 simulations. These runs include the input and output vectors for each incident particle.
The incident flux projected to the core location was used to normalize transmission radiography (attenuation method). From here the calculations are normalized for the zenith angle of the flux.
Muon Momentum Integrated Tomography System
Despite the various benefits of using cosmic ray muons for imaging large and dense objects, such as spent nuclear fuel casks and nuclear reactors, their wide application is often limited by the naturally low muon flux at sea level, approximately 10,000 m−2min−1. To overcome this limitation, two important quantities—the scattering angle θ and the momentum p—must be measured for each muon event. To measure cosmic ray muon momentum in the field, a fieldable muon spectrometer using multi-layer pressurized gas Cherenkov radiators has been developed, and the combination of spectrometer and tomography shows improved muon scattering tomography resolution.
Muon computational axial tomography (Mu-CAT)
Mu-CAT is a technique which combines multiple projected muographic images to create a 3D muography image. In principle, it is similar to medical imaging used in radiology (CAT scans) to obtain three-dimensional internal images of the body. While medical CAT scanners use a rotating X-ray generator around the target object, Mu-CAT uses multiple detectors around the target object and naturally occurring muons as probes. Either the tomographic reconstruction technique or the inverse problem is applied to these data from the Mu-CAT observations to reconstruct 3d images.
Mu-CAT revealed the three-dimensional position of a fractured zone below the crater floor of an active volcano related to a past eruption that had caused a large pyroclastic and lava flow on its northern slope.
Cosmic Ray Inspection and Passive Tomography (CRIPT)
The Cosmic Ray Inspection and Passive Tomography (CRIPT) detector is a Canadian muon tomography project which tracks muon scattering events while simultaneously estimating the muon momentum. The CRIPT detector is tall and has a mass of . The majority of the detector mass is located in the muon momentum spectrometer which is a feature unique to CRIPT regarding muon tomography.
After initial construction and commissioning at Carleton University in Ottawa, Canada, the CRIPT detector was moved to Atomic Energy Of Canada Limited's Chalk River Laboratories.
The CRIPT detector is presently examining the limitations on detection time for border security applications, limitations on muon tomography image resolution, nuclear waste stockpile verification, and space weather observation through muon detection.
Technical aspects
The apparatus is a muon-tracking device that consists of muon sensors and recording media. There are several different kinds of muon sensors used in muography apparatuses: plastic scintillators, nuclear emulsions, or gaseous ionization detectors. The recording medium is the film itself, or digital magnetic or electronic memory. The apparatus is directed towards the target volume and the muon sensor is exposed until enough muon events have been recorded to form a statistically sufficient muogram, after which (in post-processing) a muograph displaying the average density along each muon path is created.
Advantages
There are several advantages that muography has over traditional geophysical surveys. First, muons are naturally abundant and travel from the atmosphere towards the Earth’s surface. This abundant muon flux is nearly constant, therefore muography can be used worldwide. Second, because of the high-contrast resolution of muography, a small void of less than 0.001% of the entire volume can be distinguished. Finally, the apparatus has much lower power requirements than other imaging techniques since they use natural probes, rather than relying on artificially generated signals.
Process
In the field of muography, the transmission coefficient is defined as the ratio of the muon flux transmitted through the object to the incident muon flux. By applying the muon's range through matter to the open-sky muon energy spectrum, the fraction of the incident muon flux that is transmitted through the object can be analytically derived. A muon with a different energy has a different range, which is defined as the distance that the incident muon can traverse in matter before it stops. For example, 1 TeV muons have a continuous slowing down approximation range (CSDA range) of 2500 m water equivalent (m.w.e.) in silicon dioxide, whereas the range is reduced to 400 m.w.e. for 100 GeV muons. This range varies if the material is different, e.g., 1 TeV muons have a CSDA range of 1500 m.w.e. in lead.
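The following toy Python sketch illustrates how a transmission coefficient can be estimated by combining a range-energy relation with an integral muon spectrum; the log-log interpolation between the two SiO2 anchor points quoted above and the assumed power-law spectral index are illustrative placeholders, not measured values:

import math

def min_energy_to_cross(opacity_mwe):
    """Smallest muon energy (GeV) whose CSDA range exceeds the given opacity,
    interpolating (log-log) between the two SiO2 anchor points quoted in the text:
    400 m.w.e. at 100 GeV and 2500 m.w.e. at 1 TeV."""
    e1, r1, e2, r2 = 100.0, 400.0, 1000.0, 2500.0
    slope = math.log(e2 / e1) / math.log(r2 / r1)
    return e1 * (opacity_mwe / r1) ** slope

def transmitted_fraction(opacity_mwe, spectral_index=2.0):
    """Toy transmission coefficient: fraction of an assumed integral spectrum
    N(>E) ~ E**(-spectral_index) lying above the cutoff energy."""
    e_ref = 1.0                                   # GeV, arbitrary normalisation energy
    e_min = max(min_energy_to_cross(opacity_mwe), e_ref)
    return (e_min / e_ref) ** (-spectral_index)

print(transmitted_fraction(400.0))    # opacity equal to the 100 GeV range
print(transmitted_fraction(2500.0))   # opacity equal to the 1 TeV range

In practice the measured open-sky spectrum (and its zenith-angle dependence) is used instead of a simple power law, but the logic is the same: thicker or denser rock raises the energy cutoff and lowers the transmitted fraction.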
The numbers (or, later, colors) forming a muogram are displayed in terms of the transmitted number of muon events. Each pixel in the muogram is a two-dimensional unit based on the angular resolution of the apparatus. Muography alone cannot always differentiate between density configurations, a limitation called the "volume effect": a large amount of low-density material and a thin layer of high-density material can cause the same attenuation in muon flux. Therefore, in order to avoid false data arising from volume effects, the exterior shape of the volume has to be accurately determined and used for analyzing the data.
References
Imaging
Particle detectors | Muon tomography | [
"Technology",
"Engineering"
] | 6,455 | [
"Particle detectors",
"Measuring instruments"
] |
40,543,215 | https://en.wikipedia.org/wiki/Rendezvous%20hashing | Rendezvous or highest random weight (HRW) hashing is an algorithm that allows clients to achieve distributed agreement on a set of options out of a possible set of options. A typical application is when clients need to agree on which sites (or proxies) objects are assigned to.
Consistent hashing addresses the special case using a different method. Rendezvous hashing is both much simpler and more general than consistent hashing (see below).
History
Rendezvous hashing was invented by David Thaler and Chinya Ravishankar at the University of Michigan in 1996. Consistent hashing appeared a year later in the literature.
Given its simplicity and generality, rendezvous hashing is now being preferred to consistent hashing in real-world applications. Rendezvous hashing was used very early on in many applications including mobile caching, router design, secure key establishment, and sharding and distributed databases. Other examples of real-world systems that use Rendezvous Hashing include the Github load balancer, the Apache Ignite distributed database, the Tahoe-LAFS file store, the CoBlitz large-file distribution service, Apache Druid, IBM's Cloud Object Store, the Arvados Data Management System, Apache Kafka, and the Twitter EventBus pub/sub platform.
One of the first applications of rendezvous hashing was to enable multicast clients on the Internet (in contexts such as the MBONE) to identify multicast rendezvous points in a distributed fashion. It was used in 1998 by Microsoft's Cache Array Routing Protocol (CARP) for distributed cache coordination and routing. Some Protocol Independent Multicast routing protocols use rendezvous hashing to pick a rendezvous point.
Problem definition and approach
Algorithm
Rendezvous hashing solves a general version of the distributed hash table problem: We are given a set of n sites (servers or proxies, say). How can any set of clients, given an object O, agree on a k-subset of sites to assign O to? The standard version of the problem uses k = 1. Each client is to make its selection independently, but all clients must end up picking the same subset of sites. This is non-trivial if we add a minimal disruption constraint, and require that when a site fails or is removed, only objects mapping to that site need be reassigned to other sites.
The basic idea is to give each site S a score (a weight) for each object O, and assign the object to the highest scoring site. All clients first agree on a hash function h. For object O, the site S is defined to have weight h(S, O). Each client independently computes these weights and picks the k sites that yield the k largest hash values. The clients have thereby achieved distributed k-agreement.
If a site S is added or removed, only the objects mapping to S are remapped to different sites, satisfying the minimal disruption constraint above. The HRW assignment can be computed independently by any client, since it depends only on the identifiers for the set of sites and the object being assigned.
HRW easily accommodates different capacities among sites. If a site has twice the capacity of the other sites, we simply represent it twice in the list, say, under two site identifiers. Clearly, twice as many objects will now map to it as to the other sites.
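A minimal Python sketch of the unweighted HRW selection just described; the use of SHA-1 (via hashlib) as the agreed hash function and the site/object naming are illustrative only, since the original work recommends specific hash functions:

import hashlib

def hrw_score(site: str, obj: str) -> int:
    """Weight of a (site, object) pair: a hash of the combined identifiers."""
    return int.from_bytes(hashlib.sha1(f"{site} {obj}".encode()).digest(), "big")

def top_k_sites(sites: list[str], obj: str, k: int = 1) -> list[str]:
    """Every client running this independently picks the same k sites."""
    return sorted(sites, key=lambda s: hrw_score(s, obj), reverse=True)[:k]

sites = ["s0", "s1", "s2", "s3"]
print(top_k_sites(sites, "object-A", k=2))
# Removing a site only remaps the objects that had been assigned to it:
print(top_k_sites([s for s in sites if s != "s3"], "object-A", k=2))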
Properties
Consider the simple version of the problem, with k = 1, where all clients are to agree on a single site for an object O. Approaching the problem naively, it might appear sufficient to treat the n sites as buckets in a hash table and hash the object name O into this table. Unfortunately, if any of the sites fails or is unreachable, the hash table size changes, forcing all objects to be remapped. This massive disruption makes such direct hashing unworkable.
Under rendezvous hashing, however, clients handle site failures by picking the site that yields the next largest weight. Remapping is required only for objects currently mapped to the failed site, and disruption is minimal.
Rendezvous hashing has the following properties:
Low overhead: The hash function used is efficient, so overhead at the clients is very low.
Load balancing: Since the hash function is randomizing, each of the n sites is equally likely to receive the object O. Loads are uniform across the sites.
Site capacity: Sites with different capacities can be represented in the site list with multiplicity in proportion to capacity. A site with twice the capacity of the other sites will be represented twice in the list, while every other site is represented once.
High hit rate: Since all clients agree on placing an object O into the same site SO, each fetch or placement of O into SO yields the maximum utility in terms of hit rate. The object O will always be found unless it is evicted by some replacement algorithm at SO.
Minimal disruption: When a site fails, only the objects mapped to that site need to be remapped. Disruption is at the minimal possible level, as proved in the original HRW papers.
Distributed k-agreement: Clients can reach distributed agreement on k sites simply by selecting the top k sites in the ordering.
O(log n) running time via skeleton-based hierarchical rendezvous hashing
The standard version of Rendezvous Hashing described above works quite well for moderate n, but when n is extremely large, the hierarchical use of Rendezvous Hashing achieves O(log n) running time. This approach creates a virtual hierarchical structure (called a "skeleton"), and achieves O(log n) running time by applying HRW at each level while descending the hierarchy. The idea is to first choose some constant cluster size and organize the n sites into clusters of that size. Next, build a virtual hierarchy by choosing a constant fanout and imagining these clusters placed at the leaves of a tree of virtual nodes, each with that fanout.
In the accompanying diagram, the cluster size is 4, and the skeleton fanout is 3. Assuming 108 sites (real nodes) for convenience, we get a three-tier virtual hierarchy. Since the fanout is 3, each virtual node has a natural numbering as a string of the digits 0, 1 and 2. Thus, the 27 virtual nodes at the lowest tier would be numbered 000, 001, ..., 222 (we can, of course, vary the fanout at each level - in that case, each node will be identified with the corresponding mixed-radix number).
The easiest way to understand the virtual hierarchy is by starting at the top, and descending the virtual hierarchy. We successively apply Rendezvous Hashing to the set of virtual nodes at each level of the hierarchy, and descend the branch defined by the winning virtual node. We can in fact start at any level in the virtual hierarchy. Starting lower in the hierarchy requires more hashes, but may improve load distribution in the case of failures.
For example, instead of applying HRW to all 108 real nodes in the diagram, we can first apply HRW to the 27 lowest-tier virtual nodes, selecting one. We then apply HRW to the four real nodes in its cluster, and choose the winning site. We only need 27 + 4 = 31 hashes, rather than 108. If we apply this method starting one level higher in the hierarchy, we would need 9 + 3 + 4 = 16 hashes to get to the winning site. The figure shows how, if we proceed starting from the root of the skeleton, we may successively choose one virtual node at each tier and finally end up with site 74.
The virtual hierarchy need not be stored, but can be created on demand, since the virtual node names are simply prefixes of base-3 (or mixed-radix) representations. We can easily create appropriately sorted strings from the digits, as required. In the example, we would be working with a one-digit prefix string at tier 1, a two-digit prefix string at tier 2, and a three-digit prefix string at tier 3. Clearly, the skeleton has height O(log n), since the cluster size and the fanout are both constants. The work done at each level is O(1), since the fanout is a constant.
The value of the cluster size can be chosen based on factors like the anticipated failure rate and the degree of desired load balancing. A higher value leads to less load skew in the event of failure, at the cost of higher search overhead.
Choosing a cluster size equal to n (a single cluster containing all the sites) is equivalent to non-hierarchical rendezvous hashing. In practice, the hash function is very cheap, so non-hierarchical rendezvous hashing can work quite well unless n is very high.
For any given object, it is clear that each leaf-level cluster, and hence each of the n sites, is chosen with equal probability.
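A compact Python sketch of the skeleton descent described above, using cluster size 4 and fanout 3 as in the example; the digit-prefix naming of virtual nodes and the SHA-1-based score are illustrative assumptions:

import hashlib
import math

def hrw_score(name: str, obj: str) -> int:
    return int.from_bytes(hashlib.sha1(f"{name} {obj}".encode()).digest(), "big")

def skeleton_lookup(obj: str, n_sites: int = 108, cluster: int = 4, fanout: int = 3) -> int:
    """Descend the virtual skeleton instead of hashing all n_sites at once.
    Virtual nodes are identified by digit-string prefixes in base `fanout`, so the
    skeleton never has to be stored. With 108 sites, cluster size 4 and fanout 3,
    a lookup from the root costs 3 + 3 + 3 + 4 = 13 hashes instead of 108."""
    n_clusters = n_sites // cluster                   # 27 leaf-level clusters
    tiers = round(math.log(n_clusters, fanout))       # 3 tiers of virtual nodes
    prefix = ""
    for _ in range(tiers):                            # pick one child per tier via HRW
        prefix += max((str(d) for d in range(fanout)),
                      key=lambda child: hrw_score(prefix + child, obj))
    leaf = int(prefix, fanout)                        # index of the winning leaf cluster
    members = range(leaf * cluster, (leaf + 1) * cluster)
    return max(members, key=lambda s: hrw_score(f"site-{s}", obj))

print(skeleton_lookup("object-A"))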
Replication, site failures, and site additions
One can enhance resiliency to failures by replicating each object O across the highest ranking r < m sites for O, choosing r based on the level of resiliency desired. The simplest strategy is to replicate only within the leaf-level cluster.
If the leaf-level site selected for O is unavailable, we select the next-ranked site for O within the same leaf-level cluster. If O has been replicated within the leaf-level cluster, we are sure to find O in the next available site in the ranked order of r sites. All objects that were held by the failed server appear in some other site in its cluster. (Another option is to go up one or more tiers in the skeleton and select an alternate from among the sibling virtual nodes at that tier. We then descend the hierarchy to the real nodes, as above.)
When a site is added to the system, it may become the winning site for some objects already assigned to other sites. Objects mapped to other clusters will never map to this new site, so we need to only consider objects held by other sites in its cluster. If the sites are caches, attempting to access an object mapped to the new site will result in a cache miss, the corresponding object will be fetched and cached, and operation returns to normal.
If sites are servers, some objects must be remapped to this newly added site. As before, objects mapped to other clusters will never map to this new site, so we need to only consider objects held by sites in its cluster. That is, we need only remap objects currently present in the m sites in this local cluster, rather than the entire set of objects in the system. New objects mapping to this site will of course be automatically assigned to it.
Comparison with consistent hashing
Because of its simplicity, lower overhead, and generality (it works for any k < n), rendezvous hashing is increasingly being preferred over consistent hashing. Recent examples of its use include the Github load balancer, the Apache Ignite distributed database, and by the Twitter EventBus pub/sub platform.
Consistent hashing operates by mapping sites uniformly and randomly to points on a unit circle called tokens. Objects are also mapped to the unit circle and placed in the site whose token is the first encountered traveling clockwise from the object's location. When a site is removed, the objects it owns are transferred to the site owning the next token encountered moving clockwise. Provided each site is mapped to a large number (100–200, say) of tokens this will reassign objects in a relatively uniform fashion among the remaining sites.
If sites are mapped to points on the circle randomly by hashing 200 variants of the site ID, say, the assignment of any object requires storing or recalculating 200 hash values for each site. However, the tokens associated with a given site can be precomputed and stored in a sorted list, requiring only a single application of the hash function to the object, and a binary search to compute the assignment. Even with many tokens per site, however, the basic version of consistent hashing may not balance objects uniformly over sites, since when a site is removed each object assigned to it is distributed only over as many other sites as the site has tokens (say 100–200).
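For comparison, a minimal Python sketch of the token-ring lookup just described, with precomputed sorted tokens and a single binary search per object; the token count per site and the hash are illustrative:

import bisect
import hashlib

def point(s: str) -> float:
    """Hash a string to a position on the unit circle [0, 1)."""
    return int.from_bytes(hashlib.sha1(s.encode()).digest(), "big") / 2**160

class ConsistentHashRing:
    def __init__(self, sites, tokens_per_site=150):
        # Precompute and sort all token positions once; each lookup then needs
        # only one object hash plus a binary search.
        self.ring = sorted(
            (point(f"{site}-{i}"), site)
            for site in sites
            for i in range(tokens_per_site)
        )
        self.points = [p for p, _ in self.ring]

    def lookup(self, obj: str) -> str:
        i = bisect.bisect_right(self.points, point(obj)) % len(self.ring)
        return self.ring[i][1]        # site owning the first token clockwise from obj

ring = ConsistentHashRing(["s0", "s1", "s2", "s3"])
print(ring.lookup("object-A"))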
Variants of consistent hashing (such as Amazon's Dynamo) that use more complex logic to distribute tokens on the unit circle offer better load balancing than basic consistent hashing, reduce the overhead of adding new sites, and reduce metadata overhead and offer other benefits.
Advantages of Rendezvous hashing over consistent hashing
Rendezvous hashing (HRW) is much simpler conceptually and in practice. It also distributes objects uniformly over all sites, given a uniform hash function. Unlike consistent hashing, HRW requires no precomputing or storage of tokens. Consider k = 1. An object O is placed into one of n sites by computing the n hash values h(S, O) and picking the site that yields the highest hash value. If a new site is added, new object placements or requests will compute n + 1 hash values, and pick the largest of these. If an object already in the system maps to this new site, it will be fetched afresh and cached there. All clients will henceforth obtain it from this site, and the old cached copy at the previous site will ultimately be replaced by the local cache management algorithm. If a site is taken offline, its objects will be remapped uniformly to the remaining sites.
Variants of the HRW algorithm, such as the use of a skeleton (see above), can reduce the time for object location to O(log n), at the cost of less global uniformity of placement. When n is not too large, however, the placement cost of basic HRW is not likely to be a problem. HRW completely avoids all the overhead and complexity associated with correctly handling multiple tokens for each site and the associated metadata.
Rendezvous hashing also has the great advantage that it provides simple solutions to other important problems, such as distributed k-agreement.
Consistent hashing is a special case of Rendezvous hashing
Rendezvous hashing is both simpler and more general than consistent hashing. Consistent hashing can be shown to be a special case of HRW by an appropriate choice of a two-place hash function. From the site identifier S, the simplest version of consistent hashing computes a list of token positions by hashing S together with a token index onto the unit circle. Define the two-place hash function h(S, O) so that it decreases as the distance along the unit circle from O's position to the nearest token of S increases (since this distance has some minimal non-zero value, there is no problem translating it to a unique integer in some bounded range). Picking the site with the largest value of h(S, O) then duplicates exactly the assignment produced by consistent hashing.
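A small Python sketch of this reduction: the two-place score of a site is the negation of the clockwise distance from the object's position to the site's nearest token, so picking the maximum-score site should reproduce the consistent-hashing assignment; the token generation and unit-circle hash below are illustrative (and match the ring sketch above):

import hashlib

def point(s: str) -> float:
    return int.from_bytes(hashlib.sha1(s.encode()).digest(), "big") / 2**160

def tokens(site: str, count: int = 150):
    return [point(f"{site}-{i}") for i in range(count)]

def hrw_weight(site: str, obj: str) -> float:
    """Two-place hash whose maximisation mimics consistent hashing: minus the
    smallest clockwise distance from the object to one of the site's tokens."""
    o = point(obj)
    return -min((t - o) % 1.0 for t in tokens(site))

sites = ["s0", "s1", "s2", "s3"]
print(max(sites, key=lambda s: hrw_weight(s, "object-A")))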
It is not possible, however, to reduce HRW to consistent hashing (assuming the number of tokens per site is bounded), since HRW potentially reassigns the objects from a removed site to an unbounded number of other sites.
Weighted variations
In the standard implementation of rendezvous hashing, every node receives a statically equal proportion of the keys. This behavior, however, is undesirable when the nodes have different capacities for processing or holding their assigned keys. For example, if one of the nodes had twice the storage capacity as the others, it would be beneficial if the algorithm could take this into account such that this more powerful node would receive twice the number of keys as each of the others.
A straightforward mechanism to handle this case is to assign two virtual locations to this node, so that if either of that larger node's virtual locations has the highest hash, that node receives the key. But this strategy does not work when the relative weights are not integer multiples. For example, if one node had 42% more storage capacity, it would require adding many virtual nodes in different proportions, leading to greatly reduced performance. Several modifications to rendezvous hashing have been proposed to overcome this limitation.
Cache Array Routing Protocol
The Cache Array Routing Protocol (CARP) is a 1998 IETF draft that describes a method for computing load factors which can be multiplied by each node's hash score to yield an arbitrary level of precision for weighting nodes differently. However, one disadvantage of this approach is that when any node's weight is changed, or when any node is added or removed, all the load factors must be re-computed and relatively scaled. When the load factors change relative to one another, it triggers movement of keys between nodes whose weight was not changed, but whose load factor did change relative to other nodes in the system. This results in excess movement of keys.
Controlled replication
Controlled replication under scalable hashing or CRUSH is an extension to RUSH that improves upon rendezvous hashing by constructing a tree where a pseudo-random function (hash) is used to navigate down the tree to find which node is ultimately responsible for a given key. It permits perfect stability for adding nodes; however, it is not perfectly stable when removing or re-weighting nodes, with the excess movement of keys being proportional to the height of the tree.
The CRUSH algorithm is used by the ceph data storage system to map data objects to the nodes responsible for storing them.
Other variants
In 2005, Christian Schindelhauer and Gunnar Schomaker described a logarithmic method for re-weighting hash scores in a way that does not require relative scaling of load factors when a node's weight changes or when nodes are added or removed. This enabled the dual benefits of perfect precision when weighting nodes, along with perfect stability, as only a minimum number of keys needed to be remapped to new nodes.
A similar logarithm-based hashing strategy is used to assign data to storage nodes in Cleversafe's data storage system, now IBM Cloud Object Storage.
Systems using Rendezvous hashing
Rendezvous hashing is being used widely in real-world systems. A partial list includes Oracle's Database in-memory, the GitHub load balancer, the Apache Ignite distributed database, the Tahoe-LAFS file store, the CoBlitz large-file distribution service, Apache Druid, IBM's Cloud Object Store, the Arvados Data Management System, Apache Kafka, and by the Twitter EventBus pub/sub platform.
Implementation
Implementation is straightforward once a hash function is chosen (the original work on the HRW method makes a hash function recommendation). Each client only needs to compute a hash value for each of the n sites, and then pick the largest. This algorithm runs in O(n) time. If the hash function is efficient, the O(n) running time is not a problem unless n is very large.
Weighted rendezvous hash
Python code implementing a weighted rendezvous hash:
import mmh3
import math
from dataclasses import dataclass
from typing import List
def hash_to_unit_interval(s: str) -> float:
"""Hashes a string onto the unit interval (0, 1]"""
return (mmh3.hash128(s) + 1) / 2**128
@dataclass
class Node:
"""Class representing a node that is assigned keys as part of a weighted rendezvous hash."""
name: str
weight: float
def compute_weighted_score(self, key: str):
score = hash_to_unit_interval(f"{self.name}: {key}")
log_score = 1.0 / -math.log(score)
return self.weight * log_score
def determine_responsible_node(nodes: list[Node], key: str):
"""Determines which node of a set of nodes of various weights is responsible for the provided key."""
return max(
nodes, key=lambda node: node.compute_weighted_score(key), default=None)
Example outputs of WRH:
>>> import wrh
>>> node1 = wrh.Node("node1", 100)
>>> node2 = wrh.Node("node2", 200)
>>> node3 = wrh.Node("node3", 300)
>>> str(wrh.determine_responsible_node([node1, node2, node3], "foo"))
"Node(name='node1', weight=100)"
>>> str(wrh.determine_responsible_node([node1, node2, node3], "bar"))
"Node(name='node2', weight=300)"
>>> str(wrh.determine_responsible_node([node1, node2, node3], "hello"))
"Node(name='node2', weight=300)"
>>> nodes = [node1, node2, node3]
>>> from collections import Counter
>>> responsible_nodes = [wrh.determine_responsible_node(
... nodes, f"key: {key}").name for key in range(45_000)]
>>> print(Counter(responsible_nodes))
Counter({'node3': 22487, 'node2': 15020, 'node1': 7493})
References
External links
Rendezvous Hashing: an alternative to Consistent Hashing
Algorithms
Articles with example Python (programming language) code
Hashing | Rendezvous hashing | [
"Mathematics"
] | 4,231 | [
"Algorithms",
"Mathematical logic",
"Applied mathematics"
] |
40,545,675 | https://en.wikipedia.org/wiki/Tau-leaping | In probability theory, tau-leaping, or τ-leaping, is an approximate method for the simulation of a stochastic system. It is based on the Gillespie algorithm, performing all reactions for an interval of length tau before updating the propensity functions. By updating the rates less often this sometimes allows for more efficient simulation and thus the consideration of larger systems.
Many variants of the basic algorithm have been considered.
Algorithm
The algorithm is analogous to the Euler method for deterministic systems, but instead of making a fixed change

$x(t+\tau) = x(t) + \tau\,\lambda(x(t)),$

the change is

$x(t+\tau) = x(t) + P(\lambda(x(t))\,\tau),$

where $P(\lambda\tau)$ is a Poisson distributed random variable with mean $\lambda\tau$.
Given a state x(t) with events occurring at rates λ_j(x) and with state change vectors ν_ij (where i indexes the state variables, and j indexes the events), the method is as follows:
Initialise the model with initial conditions x(t_0) = x_0.
Calculate the event rates λ_j(x).
Choose a time step τ. This may be fixed, or chosen by some algorithm dependent on the various event rates.
For each event j, generate K_j ~ Poisson(λ_j τ), which is the number of times each event occurs during the time interval of length τ.
Update the state by

$x_i(t+\tau) = x_i(t) + \sum_j K_j\,\nu_{ij},$

where ν_ij is the change on state variable x_i due to event j. At this point it may be necessary to check that no populations have reached unrealistic values (such as a population becoming negative due to the unbounded nature of the Poisson variable K_j).
Repeat from Step 2 onwards until some desired condition is met (e.g. a particular state variable reaches 0, or the final simulation time is reached).
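A minimal Python sketch of the loop above for a concrete toy model (a single irreversible dimerisation event 2A → B); the rate constant, initial counts and the fixed value of τ are illustrative choices, not recommendations:

import numpy as np

rng = np.random.default_rng(0)

# Toy model: one event 2A -> B with propensity c * A * (A - 1) / 2.
nu = np.array([[-2],       # change in A per event firing
               [+1]])      # change in B per event firing
c = 0.001                  # illustrative rate constant

def rates(x):
    A = x[0]
    return np.array([c * A * (A - 1) / 2.0])

x = np.array([1000, 0])            # initial conditions (A, B)
t, t_final, tau = 0.0, 10.0, 0.05  # fixed time step for simplicity

while t < t_final and rates(x).sum() > 0:
    k = rng.poisson(rates(x) * tau)    # number of firings of each event in this interval
    x = x + nu @ k                     # update every state variable
    x = np.maximum(x, 0)               # guard against unrealistic (negative) populations
    t += tau

print(t, x)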
Algorithm for efficient step size selection
This algorithm is described by Cao et al. The idea is to bound the relative change in each event rate by a specified tolerance ε (Cao et al. recommend ε = 0.03, although it may depend on model specifics). This is achieved by bounding the relative change in each state variable x_i by ε/g_i, where g_i depends on the rate that changes the most for a given change in x_i. Typically g_i is equal to the highest order event rate, but this may be more complex in different situations (especially epidemiological models with non-linear event rates).
This algorithm typically requires computing 2N auxiliary values (where N is the number of state variables x_i), and should only require reusing the previously calculated event rates λ_j(x). An important factor in this is that, since x_i is an integer value, there is a minimum value by which it can change, preventing the relative change in the rates from being bounded by 0, which would result in τ also tending to 0.
For each state variable x_i, calculate the auxiliary values

$\mu_i(x) = \sum_j \nu_{ij}\,\lambda_j(x), \qquad \sigma_i^2(x) = \sum_j \nu_{ij}^2\,\lambda_j(x)$

For each state variable x_i, determine the highest order event in which it is involved, and obtain g_i.
Calculate the time step τ as

$\tau = \min_i \left\{ \frac{\max(\epsilon x_i / g_i,\, 1)}{|\mu_i(x)|},\ \frac{\max(\epsilon x_i / g_i,\, 1)^2}{\sigma_i^2(x)} \right\}$
This computed τ is then used in Step 3 of the leaping algorithm.
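A Python sketch of this step-size selection for the same kind of model as the loop above; the tolerance and the g_i values in the usage lines are illustrative, and in general g_i must reflect the highest-order event each species takes part in:

import numpy as np

def select_tau(x, nu, rates, g, eps=0.03):
    """Bound the expected relative change in each state variable by eps * x_i / g_i
    (but never below 1, since the counts are integers)."""
    lam = rates(x)                     # current event rates
    mu = nu @ lam                      # expected change of each x_i per unit time
    sigma2 = (nu ** 2) @ lam           # variance of that change per unit time
    bound = np.maximum(eps * x / g, 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        candidates = np.minimum(bound / np.abs(mu), bound ** 2 / sigma2)
    finite = candidates[np.isfinite(candidates)]
    return float(finite.min()) if finite.size else np.inf

# Usage with the dimerisation toy model sketched earlier:
nu = np.array([[-2], [1]])
rates = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2.0])
g = np.array([2.0, 1.0])   # A takes part in a second-order event; B is only produced
print(select_tau(np.array([1000.0, 0.0]), nu, rates, g))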
References
Chemical kinetics
Computational chemistry
Monte Carlo methods
Stochastic simulation | Tau-leaping | [
"Physics",
"Chemistry"
] | 532 | [
"Chemical reaction engineering",
"Monte Carlo methods",
"Computational physics",
"Theoretical chemistry",
"Computational chemistry",
"Chemical kinetics"
] |
40,547,030 | https://en.wikipedia.org/wiki/Multibody%20simulation | Multibody simulation (MBS) is a method of numerical simulation in which multibody systems are composed of various rigid or elastic bodies. Connections between the bodies can be modeled with kinematic constraints (such as joints) or force elements (such as spring dampers). Unilateral constraints and Coulomb-friction can also be used to model frictional contacts between bodies.
Multibody simulation is a useful tool for conducting motion analysis. It is often used during product development to evaluate characteristics of comfort, safety, and performance. For example, multibody simulation has been widely used since the 1990s as a component of automotive suspension design. It can also be used to study issues of biomechanics, with applications including sports medicine, osteopathy, and human-machine interaction.
The heart of any multibody simulation software program is the solver. The solver is a set of computation algorithms that solve equations of motion. Types of components that can be studied through multibody simulation range from electronic control systems to noise, vibration and harshness. Complex models such as engines are composed of individually designed components, e.g. pistons and crankshafts.
The MBS process often can be divided into 5 main activities. The first activity of the MBS process chain is the "3D CAD master model", in which product developers, designers and engineers use the CAD system to generate a CAD model and its assembly structure according to given specifications. This 3D CAD master model is converted during the activity "Data transfer" to MBS input data formats, e.g. STEP. The "MBS Modeling" is the most complex activity in the process chain. Following rules and experience, the 3D model in MBS format, multiple boundaries, kinematics, forces, moments or degrees of freedom are used as input to generate the MBS model. Engineers have to use MBS software and their knowledge and skills in the field of engineering mechanics and machine dynamics to build the MBS model, including joints and links. The generated MBS model is used during the next activity, "Simulation". Simulations, which are specified by time increments and boundaries like starting conditions, are run by MBS software. It is also possible to perform MBS simulations using free and open source packages. The last activity is "Analysis and evaluation". Engineers use case-dependent directives to analyze and evaluate motion paths, speeds, accelerations, forces or moments. The results are used to enable releases or to improve the MBS model in case the results are insufficient. One of the most important benefits of the MBS process chain is the usability of the results for optimizing the 3D CAD master model components. Because the process chain enables the optimization of component design, the resulting loops can be used to achieve a high level of design and MBS model optimization in an iterative process.
References
computational physics
Dynamical systems | Multibody simulation | [
"Physics",
"Mathematics"
] | 592 | [
"Mechanics",
"Computational physics",
"Dynamical systems"
] |
41,964,064 | https://en.wikipedia.org/wiki/Small%20stellated%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the small stellated 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5/2,5,3,3}, it has three small stellated 120-cells around each face. It is dual to the pentagrammic-order 600-cell honeycomb.
It can be seen as a stellation of the 120-cell honeycomb, and is thus analogous to the three-dimensional small stellated dodecahedron {5/2,5} and four-dimensional small stellated 120-cell {5/2,5,3}. It has density 5.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry)
5-polytopes | Small stellated 120-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 246 | [
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
41,964,116 | https://en.wikipedia.org/wiki/Pentagrammic-order%20600-cell%20honeycomb | In the geometry of hyperbolic 4-space, the pentagrammic-order 600-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {3,3,5,5/2}, it has five 600-cells around each face in a pentagrammic arrangement. It is dual to the small stellated 120-cell honeycomb. It can be considered the higher-dimensional analogue of the 4-dimensional icosahedral 120-cell and the 3-dimensional great dodecahedron. It is related to the order-5 icosahedral 120-cell honeycomb and great 120-cell honeycomb: the icosahedral 120-cells and great 120-cells in each honeycomb are replaced by the 600-cells that are their convex hulls, thus forming the pentagrammic-order 600-cell honeycomb.
This honeycomb can also be constructed by taking the order-5 5-cell honeycomb and replacing clusters of 600 5-cells meeting at a vertex with 600-cells. Each 5-cell belongs to five such clusters, and thus the pentagrammic-order 600-cell honeycomb has density 5.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry)
5-polytopes | Pentagrammic-order 600-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 346 | [
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
41,964,194 | https://en.wikipedia.org/wiki/Order-5%20icosahedral%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-5 icosahedral 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {3,5,5/2,5}, it has five icosahedral 120-cells around each face. It is dual to the great 120-cell honeycomb.
It can be constructed by replacing the great dodecahedral cells of the great 120-cell honeycomb with their icosahedral convex hulls, thus replacing the great 120-cells with icosahedral 120-cells. It is thus analogous to the four-dimensional icosahedral 120-cell. It has density 10.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry)
5-polytopes | Order-5 icosahedral 120-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 247 | [
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
41,964,210 | https://en.wikipedia.org/wiki/Great%20120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the great 120-cell honeycomb is one of four regular star-honeycombs. With Schläfli symbol {5,5/2,5,3}, it has three great 120-cells around each face. It is dual to the order-5 icosahedral 120-cell honeycomb.
It can be seen as a greatening of the 120-cell honeycomb, and is thus analogous to the three-dimensional great dodecahedron {5,5/2} and four-dimensional great 120-cell {5,5/2,5}. It has density 10.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry)
5-polytopes | Great 120-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 239 | [
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
41,966,834 | https://en.wikipedia.org/wiki/Criticality%20%28status%29 | In the operation of a nuclear reactor, criticality is the state in which a nuclear chain reaction is self-sustaining—that is, when reactivity is zero. In supercritical states, reactivity is greater than zero.
Applications
Criticality is the normal operating condition of a nuclear reactor, in which nuclear fuel sustains a fission chain reaction. A reactor achieves criticality (and is said to be critical) when each fission releases a sufficient number of neutrons to sustain an ongoing series of nuclear reactions.
The International Atomic Energy Agency defines the first criticality date as the date when the reactor is made critical for the first time. This is an important milestone in the construction and commissioning of a nuclear power plant.
See also
Criticality accident
Critical mass
Prompt criticality
References
Nuclear chemistry
Nuclear physics
Nuclear technology
Radioactivity | Criticality (status) | [
"Physics",
"Chemistry"
] | 166 | [
"Nuclear chemistry",
"Nuclear technology",
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"nan",
"Nuclear physics",
"Radioactivity"
] |
41,969,333 | https://en.wikipedia.org/wiki/Chemogenetics | Chemogenetics is the process by which macromolecules can be engineered to interact with previously unrecognized small molecules. Chemogenetics as a term was originally coined to describe the observed effects of mutations on chalcone isomerase activity on substrate specificities in the flowers of Dianthus caryophyllus. This method is very similar to optogenetics; however, it uses chemically engineered molecules and ligands instead of light and light-sensitive channels known as opsins.
In recent research projects, chemogenetics has been widely used to understand the relationship between brain activity and behavior. Prior to chemogenetics, researchers used methods such as transcranial magnetic stimulation and deep brain stimulation to study the relationship between neuronal activity and behavior.
Comparison to optogenetics
Optogenetics and chemogenetics are the more recent and popular methods used to study this relationship. Both of these methods target specific brain circuits and cell populations to influence cell activity. However, they use different procedures to accomplish this task. Optogenetics uses light-sensitive channels and pumps that are virally introduced into neurons. The activity of cells expressing these channels can then be manipulated with light. Chemogenetics, on the other hand, uses chemically engineered receptors and exogenous molecules specific for those receptors to affect the activity of those cells. The engineered macromolecules used to design these receptors include nucleic acid hybrids, kinases, a variety of metabolic enzymes, and G-protein coupled receptors such as DREADDs.
DREADDs are the most common G protein–coupled receptors used in chemogenetics. These receptors solely get activated by the drug of interest (inert molecule) and influence physiological and neural processes that take place within and outside of the central nervous system.
Chemogenetics has recently been favored over optogenetics because it avoids some of the challenges of optogenetics. Chemogenetics does not require expensive light equipment and is therefore more accessible. The resolution in optogenetics declines because of light scattering and decreasing illuminance as the distance between the subject and the light source increases. These factors therefore prevent all cells from being affected by light and lead to a lower spatial resolution. Chemogenetics, however, does not require light and can therefore achieve a higher spatial resolution.
Applications
G-protein coupled receptors and chemogenetic tools are nowadays targets for many pharmaceutical companies seeking to cure diseases and alleviate symptoms in all tissues of the body. More specifically, DREADDs have been used to explore treatment options for various neurodegenerative and psychological conditions such as Parkinson's disease, depression, anxiety, and addiction. These conditions involve processes that occur within and outside of the nervous system, involving neurotransmitters such as gamma-aminobutyric acid and glutamate. Chemogenetics has therefore been used in pharmacology to adjust the levels of such neurotransmitters in specific neurons while minimizing the side effects of treatment. To treat and relieve the symptoms of a disease using DREADDs, these receptors are delivered to the area of interest via viral transduction.
Recently, some studies have adopted a newer method called retro-DREADDs, which allows specific neuronal pathways to be studied at cellular resolution. Unlike classic DREADDs, this method is usually applied in wild-type animals, and the receptors are delivered to the targeted cells via injection of two viral vectors.
Animal models
DREADDs have been used in many animal models (e.g., mice and other non-primate animals) to target and influence the activity of various cells. Chemogenetics in animals helps to establish models of human disease such as Parkinson's disease. It allows scientists to test whether viral expression of DREADD proteins, acting in vivo as enhancers or inhibitors of neuronal function, can bidirectionally affect both behavior and the activity of the neurons involved. Recent studies have shown that DREADDs can be used successfully to treat the motor deficits of rats modeling Parkinson's disease, and other studies have linked DREADD activation to changes in drug-seeking and drug-sensitization behavior.
The progression of chemogenetics from rodents to non-human primates has been slow because of the added time and expense such projects demand. Nevertheless, studies in 2016 demonstrated success, showing that silencing the activity of neurons in the orbitofrontal cortex, combined with removal of the rhinal cortex, restricted reward-task performance in macaques.
Limitation and future directions
Chemogenetics and the use of DREADDs have allowed researchers to advance biomedical research on many neurodegenerative and psychiatric conditions. In these fields, chemogenetics has been used to induce specific, reversible brain lesions and thereby study the activity of defined neuronal populations. Although it offers specificity and high spatial resolution, chemogenetics still faces challenges when used to investigate neuropsychiatric disorders, which usually have a complex nature in which discrete brain lesions have not been identified as the main cause. Chemogenetics has been used to reverse some of the deficits associated with such conditions, but because of their complexity it has not been able to identify their root causes or cure them completely. Nevertheless, chemogenetics has been used successfully in a preclinical model of drug-resistant epilepsy, in which seizures arise from a discrete part of the brain.
See also
Receptor activated solely by a synthetic ligand
References
Chemical engineering
Neurology procedures
Neurotechnology
Neurophysiology
Neuropsychology
Genetics | Chemogenetics | [
"Chemistry",
"Engineering",
"Biology"
] | 1,212 | [
"Chemical engineering",
"Genetics",
"nan"
] |
41,969,620 | https://en.wikipedia.org/wiki/Mancozeb | Mancozeb is a dithiocarbamate non-systemic agricultural fungicide with multi-site, protective action on contact. It is a combination of two other dithiocarbamates: maneb and zineb. The mixture controls many fungal diseases in a wide range of field crops, fruits, nuts, vegetables, and ornamentals. It is marketed as Penncozeb, Trimanoc, Vondozeb, Dithane, Manzeb, Nemispot, and Manzane. In Canada, a mixture of zoxamide and mancozeb sold under the name Gavel was registered for mildew control as early as 2008.
Mechanism
Mancozeb reacts with, and inactivates, the sulfhydryl groups of amino acids and enzymes within fungal cells, resulting in disruption of lipid metabolism, respiration, and production of adenosine triphosphate.
Mancozeb is listed under FRAC code M:03. The "M" refers to chemicals with multi-site activity; FRAC regards this group as generally low-risk, with no signs of resistance developing to these fungicides.
Toxicology
A major toxicological concern is ethylenethiourea (ETU), an industrial contaminant and a breakdown product of mancozeb and other EBDC pesticides. ETU has the potential to cause goiter (a condition in which the thyroid gland is enlarged) and has produced birth defects and cancer in experimental animals; the EPA has classified it as a probable human carcinogen. Mancozeb itself has been shown to have significant negative effects on beneficial root fungi, completely preventing spore germination at levels far below recommended dosages.
See also
Fungicide use in the United States
Maneb
References
External links
Aldehyde dehydrogenase inhibitors
Dithiocarbamates
Fungicides
Monoaminergic neurotoxins | Mancozeb | [
"Chemistry",
"Biology"
] | 404 | [
"Fungicides",
"Dithiocarbamates",
"Biocides",
"Functional groups"
] |
41,972,103 | https://en.wikipedia.org/wiki/Super-resolution%20optical%20fluctuation%20imaging | Super-resolution optical fluctuation imaging (SOFI) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating fluorescent emitters.
SOFI has been developed for super-resolution imaging of biological specimens labelled with independently fluctuating fluorescent emitters (organic dyes, fluorescent proteins). In comparison to other super-resolution microscopy techniques such as STORM or PALM, which rely on single-molecule localization and hence allow only one active molecule per diffraction-limited area (DLA) and timepoint, SOFI does not require controlled photoswitching or photoactivation, nor long imaging times. It still requires fluorophores that cycle through two distinguishable states, either true on-/off-states or states of different fluorescence intensity. In mathematical terms, SOFI imaging relies on the calculation of cumulants, for which two distinct approaches exist. An image can be calculated from auto-cumulants, which by definition rely only on the information of each pixel itself, or from cross-cumulants, an improved method that also utilizes the information of different pixels. Both methods can increase the final image resolution significantly, although the cumulant calculation has its limitations. SOFI is able to increase the resolution in all three dimensions.
Principle
Like other super-resolution methods, SOFI is based on recording an image time series on a CCD or CMOS camera. In contrast to other methods, the recorded time series can be substantially shorter, since a precise localization of emitters is not required and a larger number of activated fluorophores per diffraction-limited area is therefore allowed. The pixel values of a SOFI image of the n-th order are calculated from the pixel time series as an n-th order cumulant, where the final value assigned to a pixel can be pictured as the integral over a correlation function. The assigned pixel intensities are a measure of the brightness and correlation of the fluorescence signal. Mathematically, the n-th order cumulant is related to the n-th order correlation function but has some advantages for the resulting image resolution. Since SOFI allows several emitters per DLA, the photon count at each pixel results from the superposition of the signals of all activated nearby emitters. The cumulant calculation filters this signal and keeps only highly correlated fluctuations, which provides a contrast enhancement and therefore a background reduction for good measure.
The fluorescence source distribution Σ_k ε_k δ(r - r_k) s_k(t) is convolved with the system's point spread function (PSF) U(r). Hence the fluorescence signal at time t and position r is given by
F(r, t) = Σ_{k=1..N} U(r - r_k) ε_k s_k(t).
Here N is the number of emitters, located at positions r_k, each with a time-dependent molecular brightness ε_k s_k(t), where ε_k is the constant molecular brightness and s_k(t) is a time-dependent fluctuation function. The molecular brightness is simply the average fluorescence count rate divided by the number of molecules within a specific region. For simplification, the sample is assumed to be in a stationary equilibrium, so the fluorescence signal can be expressed as a zero-mean fluctuation
δF(r, t) = F(r, t) - ⟨F(r, t)⟩_t,
where ⟨·⟩_t denotes time averaging. The auto-correlation, here e.g. of second order, can then be written for a time lag τ as
G_2(r, τ) = ⟨δF(r, t + τ) δF(r, t)⟩_t = Σ_k U²(r - r_k) ε_k² ⟨δs_k(t + τ) δs_k(t)⟩_t.
From these equations it follows that the PSF of the optical system is raised to the power of the correlation order. Thus, in a second-order correlation the PSF is reduced along all dimensions by a factor of √2 (for a Gaussian-like PSF). As a result, the resolution of SOFI images increases according to this factor.
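A minimal Python sketch of the second-order case may make this concrete (an illustration under my own naming and with synthetic data, not the published SOFI implementation): at zero time lag the second-order auto-cumulant of each pixel trace is simply its temporal variance, so a second-order SOFI image can be computed directly from the zero-mean fluctuations.

import numpy as np

def sofi2(stack):
    # stack: ndarray of shape (T, Y, X), a recorded image time series.
    # Returns the per-pixel temporal variance, i.e. the 2nd-order
    # auto-cumulant of the zero-mean fluctuations dF(r, t) at zero time lag.
    delta = stack - stack.mean(axis=0)      # zero-mean fluctuations dF(r, t)
    return (delta ** 2).mean(axis=0)        # <dF(r, t)^2>_t

# Purely synthetic stack, for illustration only (not real blinking data):
rng = np.random.default_rng(0)
stack = rng.poisson(5.0, size=(500, 64, 64)).astype(float)
sofi_image = sofi2(stack)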
Cumulants versus correlations
Simply reassigning pixel values with correlation functions would not fully exploit the temporal independence of the emitters' fluctuations: higher-order correlation functions still contain contributions from lower-order correlations. It is therefore preferable to calculate cumulants, in which all lower-order correlation terms vanish.
Cumulant-calculation
Auto-cumulants
For computational reasons it is convenient to set all time lags in higher-order cumulants to zero, so that a general expression for the n-th order auto-cumulant can be found:
AC_n(r) = Σ_k U^n(r - r_k) ε_k^n w_n(k),
where w_n(k) is a correlation-based weighting function that depends on the order of the cumulant and mainly on the fluctuation properties of the k-th emitter.
Although there is no fundamental limitation to calculating very high orders of cumulants and thereby shrinking the FWHM of the PSF, there are practical limitations arising from the weighting of the values assigned to the final image. Emitters with a higher molecular brightness, or with more pronounced fluctuations, show a strongly amplified pixel cumulant value at higher orders. A wide intensity range of the resulting image can therefore be expected, and as a result dim emitters can be masked by bright emitters in higher-order images. The calculation of auto-cumulants can be realized in a mathematically attractive way: the n-th order cumulant can be obtained from the moments by the basic recursion
K_n = m_n - Σ_{i=1..n-1} (n-1 choose i-1) K_i m_{n-i},
where K_i is the cumulant of order i and m_i the corresponding moment; the term in brackets is a binomial coefficient. This way of computation is straightforward compared with calculating cumulants from the standard formulas. It requires little computing time and, well implemented, is suitable even for the calculation of high-order cumulants on large images.
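A short Python sketch of this recursion (function and variable names are my own; it follows the formula above rather than any particular software package) computes the first few auto-cumulants of a single pixel trace from its raw moments.

import numpy as np
from math import comb

def auto_cumulants(trace, max_order=4):
    # Raw moments m_0..m_max_order of the pixel time trace.
    m = [1.0] + [float(np.mean(trace ** n)) for n in range(1, max_order + 1)]
    # Recursion K_n = m_n - sum_{i=1}^{n-1} C(n-1, i-1) * K_i * m_{n-i}.
    K = [0.0] * (max_order + 1)
    for n in range(1, max_order + 1):
        K[n] = m[n] - sum(comb(n - 1, i - 1) * K[i] * m[n - i] for i in range(1, n))
    return K[1:]                 # cumulants K_1..K_max_order (K_2 is the variance)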
Cross-cumulants
In a more advanced approach, cross-cumulants are calculated by taking the information of several pixels into account: the cross-cumulant combines the fluctuations recorded at several pixel positions, where j, l and k index the contributing pixels and i indexes the current position; all other quantities are used as before. The major difference from the auto-cumulant expression is the appearance of a distance-dependent weighting factor. This factor is PSF-shaped and depends on the distance between the cross-correlated pixels, so the contribution of each pixel decays with distance in a PSF-shaped manner; in other words, the distance factor is smaller for pixels that are further apart. The cross-cumulant approach can be used to create new, virtual pixels that reveal true information about the labelled specimen by reducing the effective pixel size. These pixels carry more information than pixels arising from simple interpolation.
In addition, the cross-cumulant approach can be used to estimate the PSF of the optical system, by exploiting the intensity differences of the virtual pixels caused by the aforementioned "loss" in cross-correlation. Each virtual pixel can be re-weighted with the inverse of its distance factor, which restores the true cumulant value. Finally, the PSF can be used to obtain an n-fold resolution improvement for the n-th order cumulant by re-weighting the optical transfer function (OTF); this step can also be replaced by using the PSF for a deconvolution, which carries less computational cost.
Cross-cumulant calculation requires a computationally much more expensive formula involving sums over partitions, owing to the combination of different pixels used to assign each new value; no fast recursive approach is usable here. The cross-cumulant of the contributing pixel traces F_1, ..., F_n can be computed from
C(F_1, ..., F_n) = Σ_P (-1)^{|P|-1} (|P|-1)! Π_{p∈P} ⟨Π_{i∈p} F_i⟩_t,
where the sum runs over all partitions P of the set of contributing pixels, p denotes the parts of a partition, |P| is their number, and i indexes the pixel traces within each part. The cross-cumulant approach facilitates the generation of virtual pixels whose number depends on the order of the cumulant, as previously mentioned. For a 4th-order cross-cumulant image, these virtual pixels can be calculated from the original pixels in a particular pattern, as depicted in the second figure, part A; the pattern arises simply from calculating all possible combinations of the original image pixels A, B, C and D, here as combinations with repetition. Virtual pixels exhibit a loss in intensity that is due to the correlation itself; part B of that figure depicts this general dependency of the virtual pixels on the cross-correlation. To restore meaningful pixel values, the image is smoothed by a routine that assigns a PSF-shaped distance factor to each pixel of the virtual pixel grid and applies its inverse to all image pixels sharing the same distance factor.
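For the second order, the cross-cumulant idea reduces to the covariance of neighbouring pixel traces, which can be assigned to a virtual pixel lying between them; the sketch below (again only an illustrative assumption with hypothetical names) computes such values for horizontally adjacent pixels.

import numpy as np

def cross_sofi2(stack):
    # stack: ndarray of shape (T, Y, X).
    # Returns an (Y, X-1) array of covariances <dF(r_j, t) dF(r_l, t)>_t
    # between horizontally adjacent pixels r_j and r_l; each value can be
    # assigned to a virtual pixel halfway between the two originals.
    delta = stack - stack.mean(axis=0)
    return (delta[:, :, :-1] * delta[:, :, 1:]).mean(axis=0)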
References
Microscopy
Image processing
Covariance and correlation | Super-resolution optical fluctuation imaging | [
"Chemistry"
] | 1,901 | [
"Microscopy"
] |
41,974,836 | https://en.wikipedia.org/wiki/Moment%20of%20inertia%20factor | In planetary sciences, the moment of inertia factor or normalized polar moment of inertia is a dimensionless quantity that characterizes the radial distribution of mass inside a planet or satellite. Since a moment of inertia has dimensions of mass times length squared, the moment of inertia factor is the coefficient that multiplies these.
Definition
For a planetary body with principal moments of inertia A ≤ B ≤ C, the moment of inertia factor is defined as
C/(M R²),
where C is the polar (largest) principal moment of inertia of the body, M is the mass of the body, and R is the mean radius of the body. For a sphere with uniform density, C/(M R²) = 2/5 = 0.4. For a differentiated planet or satellite, where density increases with depth, C/(M R²) < 0.4. The quantity is a useful indicator of the presence and extent of a planetary core, because a greater departure from the uniform-density value of 2/5 conveys a greater degree of concentration of dense materials towards the center.
Solar System values
The Sun has by far the lowest moment of inertia factor among Solar System bodies; it has by far the highest central density (about 160 g/cm3, compared to about 13 g/cm3 for Earth) and a relatively low average density (1.41 g/cm3 versus 5.5 g/cm3 for Earth). Saturn has the lowest value among the gas giants in part because it has the lowest bulk density (0.69 g/cm3). Ganymede has the lowest moment of inertia factor among solid bodies in the Solar System because of its fully differentiated interior, a result in part of tidal heating due to the Laplace resonance, as well as its substantial component of low-density water ice. Callisto is similar in size and bulk composition to Ganymede, but is not part of the orbital resonance and is less differentiated. The Moon is thought to have a small core, but its interior is otherwise relatively homogeneous.
Measurement
The polar moment of inertia is traditionally determined by combining measurements of spin quantities (spin precession rate and/or obliquity) with gravity quantities (coefficients of a spherical harmonic representation of the gravity field). These geodetic data usually require an orbiting spacecraft to collect.
Approximation
For bodies in hydrostatic equilibrium, the Darwin–Radau relation can provide estimates of the moment of inertia factor on the basis of shape, spin, and gravity quantities.
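As an illustration (my own sketch, not taken from the article), one commonly quoted form of the Darwin–Radau relation expresses C/(M R^2) through the flattening f and the rotation parameter m = ω^2 R^3 / (G M); plugging in rough Earth values reproduces the measured factor to within about one percent.

import math

def darwin_radau(f, m):
    # C/(M R^2) = (2/3) * [1 - (2/5) * sqrt(5*m/(2*f) - 1)],
    # valid for a body in hydrostatic equilibrium.
    return (2.0 / 3.0) * (1.0 - 0.4 * math.sqrt(5.0 * m / (2.0 * f) - 1.0))

# Approximate Earth values: flattening ~1/298.3, rotation parameter ~1/289.
print(darwin_radau(1.0 / 298.3, 1.0 / 289.0))   # about 0.33 (measured value 0.3307)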
Role in interior models
The moment of inertia factor provides an important constraint for models representing the interior structure of a planet or satellite. At a minimum, acceptable models of the density profile must match the volumetric mass density and moment of inertia factor of the body.
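A hypothetical two-layer model (a sketch of my own, not one of the referenced interior models) shows how such a constraint works in practice: a dense core pulls the moment of inertia factor below the uniform-sphere value of 0.4 while the mean density stays fixed by the observed mass and radius.

import math

def two_layer(R, R_core, rho_core, rho_mantle):
    # Mean density and C/(M R^2) for a planet made of two concentric,
    # uniform layers (a core of radius R_core inside a mantle of radius R).
    M = 4.0 / 3.0 * math.pi * (rho_mantle * R**3 + (rho_core - rho_mantle) * R_core**3)
    C = 8.0 / 15.0 * math.pi * (rho_mantle * R**5 + (rho_core - rho_mantle) * R_core**5)
    return M / (4.0 / 3.0 * math.pi * R**3), C / (M * R**2)

# Earth-like guesses: R = 6371 km, core radius 3480 km,
# core density 11,000 kg/m^3, mantle density 4,400 kg/m^3.
mean_density, moi_factor = two_layer(6.371e6, 3.480e6, 1.1e4, 4.4e3)
# Gives roughly 5,500 kg/m^3 and about 0.34.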
Gallery of internal structure models
Notes
References
Astrophysics
Planetary science
Moment (physics) | Moment of inertia factor | [
"Physics",
"Astronomy",
"Mathematics"
] | 548 | [
"Physical quantities",
"Quantity",
"Astrophysics",
"Planetary science",
"Astronomical sub-disciplines",
"Moment (physics)"
] |
41,975,588 | https://en.wikipedia.org/wiki/ETE%20%28tokamak%29 | The Spherical Tokamak Experiment (Experimento Tokamak Esférico, ETE) is a machine dedicated to plasma studies in low aspect ratio tokamaks. The ETE was entirely designed and assembled at the Associated Plasma Laboratory (Laboratório Associado de Plasma, LAP) of Brazil's National Institute for Space Research (INPE).
Development
The ETE is a spherical tokamak with major radius of 0.3 m and minor radius of 0.2 m. It began operations in late 2000.
References
Tokamaks | ETE (tokamak) | [
"Physics"
] | 103 | [
"Plasma physics stubs",
"Plasma physics"
] |
33,726,129 | https://en.wikipedia.org/wiki/SAE%20Aerodesign | SAE Aero Design, also called the SAE Aero Design Collegiate Design Series, is a series of competitive aerospace and mechanical engineering events held in the United States and Brazil every year. It is conducted by SAE International. It is generally divided into three categories: Regular class, Advanced class and Micro class.
Regular class
SAE Aero Design regular class requires teams to construct a plane within specified parameters annually updated on the SAE Aero Design website. Each team is judged on three categories: Oral presentation, written report, and flight performance. The objective of the regular class is to design and construct a radio-controlled model aircraft that will lift the largest payload while still maintaining structural integrity.
References
Engineering education
Mechanical engineering competitions
Student sports competitions | SAE Aerodesign | [
"Engineering"
] | 146 | [
"Mechanical engineering competitions",
"Mechanical engineering stubs",
"Mechanical engineering"
] |
33,726,920 | https://en.wikipedia.org/wiki/Friction%20disk%20shock%20absorber | Friction disc shock absorbers or André Hartford dampers were an early form of shock absorber or damper used for car suspension. They were commonly used in the 1930s but were considered obsolete post-war.
Compared to modern shock absorbers friction dampers only provided limited shock absorption but served mainly to damp down oscillation.
Origins
The friction disk pattern was invented by Truffault, before 1900. These used oiled leather friction surfaces between bronze disks compressed by adjustable conical springs, with the disk pack floating between arms to both chassis and axle, in the distinctive style. From 1904 these were licensed to several makers including Mors, who had first applied shock absorbers to cars, and Hartford in the US. Similar dampers were also applied as steering dampers from this early date.
Construction
The dampers rely, as their name suggests, on the friction within a stack of disks, clamped tightly together with a spring and clamp bolt.
André Hartford pattern
The friction disk material was usually a wooden disk between the two faces of the steel arms. As for the development of the clutch and brake shoes, the development of these friction materials was in its infancy. Treated leather had been used for clutches and although it offered good friction behaviour, it was prone to stiction when first moving off and also failed when overheated. Asbestos-based friction materials were sometimes used for racing, in an attempt to keep dampers working correctly even when overheating.
The damping force of a friction shock absorber is adjusted with the central pivot and clamping bolt. A star-shaped spring applies a force to the stack of disks. The damping force is roughly proportional to this force and the clamping nut is provided with a pointer arm to indicate the approximate setting.
André Hartford dampers were made in four sizes, according to vehicle weight and intended use. These were the combination of two disk diameters: and and as either single or multiplate designs. Single dampers had two friction surfaces: a single arm on one side was nested between two arms connected to the other. Multiplate dampers had two and three arms on each side.
Dampers were mounted to the chassis and axle through Silentbloc bushes at each end. Silentbloc bushes were another development of the early 1930s, a vulcanised rubber bush bonded into a steel tube. These provided the stiff location that accurate suspension required, but reduced vibration and road noise, compared to earlier cars. Many cars used a different design for front and rear, where the rear arms were rigidly bolted to the chassis, rather than with a swivelling bush.
de Ram pattern
Georges de Ram invented and manufactured hydraulically actuated friction shock absorbers during the 1920s and 1930s. These were a more sophisticated pattern, intended to provide variable, self-adjusting damping in order to work effectively at both low and high speeds. They were only used on high-end vehicles, notably Bugattis, due to their extremely high cost for the time. In 1935, a set of de Ram dampers cost £170, plus an additional £30 for installation. Early Bugattis had used Bugatti's own pattern of multi-plate damper, similar to the André Hartford.
de Ram patented several styles of friction dampers that used hydraulic mechanisms to engage the friction surfaces and modulate their clamping force. While the mechanical details of de Ram's patents varied, the de Ram dampers fitted to 1930s Bugattis used a multi-plate disk stack. In this design each disk is connected to either an inner or outer cylindrical carrier via splines, much as in multi-plate clutches of the time. The splines cause each disk to rotate with either the chassis or axle side of the damper, alternating with each disk in the stack. A series of hydraulic pistons and valves varied pressure on the friction disk stack in proportion to the speed of the lever arm attached to the axle. This mechanism resulted in stiffer damping when encountering strong bumps at higher speeds, but also soft damping during slow or slight movements of the suspension. de Ram dampers were mounted to the chassis with a single arm to the axle, in a manner interchangeable with Hartford pattern dampers.
Cylindrical friction elements
A similar pattern, with a cylindrical friction element, was used on Mercedes-Benz cars from 1928. The earlier Mercedes had used Hartford pattern.
A form using a cylindrical roller bearing with a resilient race was patented in 1930.
Adjustable damping
The damping rate for frictional dampers has less than ideal behaviour for car suspension. An ideal suspension would offer more damping to greater suspension forces, with less damping at low speeds for a smoother ride. Frictional dampers though had a mostly constant rate. This was even greater when stationary, owing to stiction between stationary plates. For larger bumps the damping may even be reduced. This is particularly a problem for fast driving, when repeated high forces may cause the friction plates to heat up and lose their efficiency.
Motor racing in the 1930s was often an amateur affair, where sports cars would be driven to racetracks such as Brooklands, adjusted in the paddock to their racing trim and then raced. It was normal to re-adjust the damping between "road" and "race" settings.
The need for adjustable damping was so great that it was even useful to provide a means of adjusting this whilst driving. This was a feature only used on luxury cars, often larger cars that might need to set their suspension for varying numbers of passengers. Stiffness could be increased between "town" and a stiffer setting for the faster open road. These dampers were best known under the Telecontrol brand name. A hydraulic control, with an inflatable rubber bag in the disk pack, could be used to increase the clamping force and thus their damping stiffness.
Georges de Ram's hydraulically actuated friction shock absorbers also attempted to address this issue by automatically adjusting damping forces, rather than providing manual control. These dampers used a series of hydraulic valves and pistons to vary the friction between plates, proportional to the speed of suspension movement. This resulted in stiffer damping when encountering strong bumps at higher speeds, while remaining soft during slow or slight movements of the suspension.
One of the major reasons for the decline of frictional dampers post-war, in favour of hydraulic lever arms, was the hydraulic damper's better change of rate with suspension amplitude. Hydraulic dampers had a resistance that inherently increased with velocity of suspension movement, a far more useful behaviour. This useful inherent behaviour meant that manual adjustment was far less necessary, certainly not whilst driving.
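The difference can be made concrete with a toy force model (purely illustrative, with arbitrary parameter values): an idealised friction damper delivers a roughly constant Coulomb force whose direction opposes the suspension velocity, whereas an idealised hydraulic damper delivers a force proportional to that velocity.

import numpy as np

def friction_damper_force(v, f_clamp):
    # Idealised friction damper: constant magnitude set by the clamping
    # force, always opposing the direction of motion.
    return -f_clamp * np.sign(v)

def hydraulic_damper_force(v, c):
    # Idealised hydraulic damper: force proportional to suspension velocity.
    return -c * v

v = np.linspace(-1.0, 1.0, 201)                 # suspension velocity in m/s
f_friction = friction_damper_force(v, 400.0)    # 400 N friction force (arbitrary)
f_hydraulic = hydraulic_damper_force(v, 800.0)  # 800 N·s/m coefficient (arbitrary)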
Motorcycle and bicycle use
Motorcycles of the same period, through to the 1950s, that used girder forks also used friction disk shock absorbers. These were often provided with a large handwheel, so that they could be adjusted easily during a ride, or even whilst in motion.
Many Moulton bicycles continue to use friction damping for the leading link front fork. This is adjustable by tightening or loosening the small bolts that hold the two halves of the leading links together, the friction discs being sandwiched between them.
Signalboxes
Hartford shock absorbers were also used within the manual lever frames of some UK signalboxes. They were used to prevent shock loads if the levers were allowed to slam back in the frame under the weight of the counterweights.
Manufacturers
André Hartford held patents on this design but the manufacturing technology required was simple, and so many other makers also produced them.
André Hartford, probably the best-known brand pre-war.
F. Repusseau & Cie made the Hartford pattern under license in Paris.
Bentley & Draper Ltd
Telecontrol, noted for their adjustable dampers
de Ram, a more complicated damper used on later Bugattis. An attempt to solve the problems that would later be addressed by hydraulic dampers.
Modern components and spare parts are still manufactured for restoration projects.
Notes
References
Shock absorbers
Automotive suspension technologies | Friction disk shock absorber | [
"Physics",
"Chemistry"
] | 1,634 | [
"Mechanical phenomena",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Surface science"
] |
33,731,122 | https://en.wikipedia.org/wiki/Pileup%20format | Pileup format is a text-based format for summarizing the base calls of aligned reads to a reference sequence. This format facilitates visual display of SNP/indel calling and alignment. It was first used by Tony Cox and Zemin Ning at the Wellcome Trust Sanger Institute, and became widely known through its implementation within the SAMtools software suite.
Format
Example
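An illustrative set of pileup lines (made-up sequence name, positions, bases, and qualities, constructed to match the column description below; the real format separates columns with tabs):

seq1  10  A  5  ..,.,      IIIH<
seq1  11  G  5  .,.A,      IH<GI
seq1  12  C  5  .,+2AG.,.  H<GII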
The columns
Each line consists of 5 (or optionally 6) tab-separated columns:
Sequence identifier
Position in sequence (starting from 1)
Reference nucleotide at that position
Number of aligned reads covering that position (depth of coverage)
Bases at that position from aligned reads
Phred Quality of those bases, represented in ASCII with -33 offset (OPTIONAL)
Column 5: The bases string
. (dot) means a base that matched the reference on the forward strand
, (comma) means a base that matched the reference on the reverse strand
</> (less-/greater-than sign) denotes a reference skip. This occurs, for example, if a base in the reference genome is intronic and a read maps to two flanking exons. If quality scores are given in a sixth column, they refer to the quality of the read and not the specific base.
AGTCN (upper case) denotes a base that did not match the reference on the forward strand
agtcn (lower case) denotes a base that did not match the reference on the reverse strand
A sequence matching the regular expression \+[0-9]+[ACGTNacgtn]+ denotes an insertion of one or more bases starting from the next position. For example, +2AG means an insertion of AG in the forward strand
A sequence matching the regular expression -[0-9]+[ACGTNacgtn]+ denotes a deletion of one or more bases starting from the next position. For example, -2ct means a deletion of CT in the reverse strand
^ (caret) marks the start of a read segment and the ASCII of the character following `^' minus 33 gives the mapping quality
$ (dollar) marks the end of a read segment
* (asterisk) is a placeholder for a deleted base in a multiple basepair deletion that was mentioned in a previous line by the -[0-9]+[ACGTNacgtn]+ notation
Column 6: The base quality string
This is an optional column. If present, the ASCII value of the character minus 33 gives the mapping Phred quality of each of the bases in the previous column 5. This is similar to quality encoding in the FASTQ format.
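A small Python sketch (helper name and field names are my own; this is not part of SAMtools) shows how one pileup line can be split into its mandatory fields and how the optional base qualities are decoded from their ASCII-33 encoding.

def parse_pileup_line(line):
    # Split one pileup line into its five mandatory fields, plus the
    # optional base-quality column if present.
    fields = line.rstrip("\n").split("\t")
    record = {
        "name": fields[0],           # sequence identifier
        "pos": int(fields[1]),       # 1-based position
        "ref": fields[2],            # reference base
        "depth": int(fields[3]),     # number of reads covering the position
        "bases": fields[4],          # read-bases string
    }
    if len(fields) > 5:
        # Phred qualities: ASCII value of each character minus 33.
        record["quals"] = [ord(c) - 33 for c in fields[5]]
    return record

record = parse_pileup_line("seq1\t11\tG\t5\t.,.A,\tIH<GI")   # hypothetical line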
File extension
There is no standard file extension for a Pileup file, but .msf (multiple sequence file), .pup and .pileup are used.
See also
Variant Call Format
FASTQ format
List of file formats for molecular biology
References
External links
SAMtools pileup description
bioruby-pileup_iterator (A Ruby pileup parser)
pysam (A Python pileup parser)
Bioinformatics
Biological sequence format | Pileup format | [
"Engineering",
"Biology"
] | 593 | [
"Bioinformatics",
"Biological engineering",
"Biological sequence format"
] |
29,793,334 | https://en.wikipedia.org/wiki/Piola%E2%80%93Kirchhoff%20stress%20tensors | In the case of finite deformations, the Piola–Kirchhoff stress tensors (named for Gabrio Piola and Gustav Kirchhoff) express the stress relative to the reference configuration. This is in contrast to the Cauchy stress tensor which expresses the stress relative to the present configuration. For infinitesimal deformations and rotations, the Cauchy and Piola–Kirchhoff tensors are identical.
Whereas the Cauchy stress tensor relates stresses in the current configuration, the deformation gradient and strain tensors are defined by relating the motion to the reference configuration; thus not all tensors describing the state of the material are expressed in the same configuration. Describing the stress, strain and deformation consistently in either the reference or the current configuration makes it easier to define constitutive models. For example, the Cauchy stress tensor changes under a pure rotation while the deformation strain tensor is invariant, which creates problems when a constitutive model has to relate a varying tensor to an invariant one, since by definition constitutive models must be invariant to pure rotations. The first Piola–Kirchhoff stress tensor is one possible solution to this problem. It defines a family of tensors which describe the configuration of the body in either the current or the reference state.
The first Piola–Kirchhoff stress tensor, P, relates forces in the present ("spatial") configuration with areas in the reference ("material") configuration:
P = J σ F^{-T},
where F is the deformation gradient, J = det F is the Jacobian determinant, and σ is the Cauchy stress.
In terms of components with respect to an orthonormal basis, the first Piola–Kirchhoff stress is given by
P_{iL} = J σ_{ik} (F^{-1})_{Lk}.
Because it relates different coordinate systems, the first Piola–Kirchhoff stress is a two-point tensor. In general, it is not symmetric. The first Piola–Kirchhoff stress is the 3D generalization of the 1D concept of engineering stress.
If the material rotates without a change in stress state (rigid rotation), the components of the first Piola–Kirchhoff stress tensor will vary with material orientation.
The first Piola–Kirchhoff stress is energy conjugate to the deformation gradient.
It relates forces in the current configuration to areas in the reference configuration.
The second Piola–Kirchhoff stress tensor, S, relates forces in the reference configuration to areas in the reference configuration. The force in the reference configuration is obtained via a mapping that preserves the relative relationship between the force direction and the area normal in the reference configuration:
S = J F^{-1} σ F^{-T}.
In index notation with respect to an orthonormal basis,
S_{LM} = J (F^{-1})_{Li} σ_{ik} (F^{-1})_{Mk}.
This tensor, a one-point tensor, is symmetric.
If the material rotates without a change in stress state (rigid rotation), the components of the second Piola–Kirchhoff stress tensor remain constant, irrespective of material orientation.
The second Piola–Kirchhoff stress tensor is energy conjugate to the Green–Lagrange finite strain tensor.
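A NumPy sketch (an illustrative check with hypothetical numerical values, not a reference implementation) forms both Piola–Kirchhoff tensors from a Cauchy stress and a deformation gradient and confirms that the second one is symmetric.

import numpy as np

def piola_kirchhoff(sigma, F):
    # First and second Piola-Kirchhoff stresses from the Cauchy stress sigma
    # and the deformation gradient F:  P = J sigma F^{-T},  S = J F^{-1} sigma F^{-T}.
    J = np.linalg.det(F)
    F_inv = np.linalg.inv(F)
    P = J * sigma @ F_inv.T
    S = J * F_inv @ sigma @ F_inv.T
    return P, S

F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.02],
              [0.00, 0.00, 1.03]])              # hypothetical deformation gradient
sigma = np.array([[120.0, 10.0, 0.0],
                  [10.0, -30.0, 5.0],
                  [0.0, 5.0, 60.0]])            # hypothetical symmetric Cauchy stress (MPa)
P, S = piola_kirchhoff(sigma, F)
assert np.allclose(S, S.T)                      # the second PK stress is symmetric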
References
J. Bonet and R. W. Wood, Nonlinear Continuum Mechanics for Finite Element Analysis, Cambridge University Press.
Tensor physical quantities | Piola–Kirchhoff stress tensors | [
"Physics",
"Mathematics",
"Engineering"
] | 651 | [
"Quantity",
"Tensors",
"Tensor physical quantities",
"Physical quantities"
] |
29,800,943 | https://en.wikipedia.org/wiki/Soluble%20urokinase%20plasminogen%20activator%20receptor | Soluble urokinase plasminogen activator receptor (suPAR) (NCBI Accession no. AAK31795) is a protein and the soluble form of uPAR. uPAR is expressed mainly on immune cells, endothelial cells, and smooth muscle cells. uPAR is a membrane-bound receptor for uPA (also known as urokinase) and for vitronectin. The soluble form of uPAR, called suPAR, results from cleavage of membrane-bound uPAR during inflammation or immune activation. The suPAR concentration is positively correlated with the activation level of the immune system. Therefore, suPAR is a marker of disease severity and aggressiveness and is associated with morbidity and mortality in several acute and chronic diseases. suPAR levels have been observed to increase with age. suPAR is present in plasma, urine, blood, serum, and cerebrospinal fluid.
General population
In the general population, the suPAR level is higher in females than in males. The median suPAR level for men and women in blood donors is 2.22 ng/mL and 2.54 ng/mL, respectively. In general, women have slightly higher suPAR than men. suPAR levels are higher in serum than in plasma for the same individual.
Clinical significance
suPAR is a biomarker reflecting the level of activity of the immune system in response to an inflammatory stimulus. suPAR levels positively correlate with pro-inflammatory biomarkers, including tumor necrosis factor-α (TNFα) and C-reactive protein (CRP) and other parameters, including leukocyte counts. suPAR is also associated with organ damage in various diseases.[2-5] Elevated levels of suPAR are associated with increased risk of systemic inflammatory response syndrome (SIRS), cancer, focal segmental glomerulosclerosis, cardiovascular disease, type 2 diabetes, infectious diseases, HIV, and mortality.
Acute medical patients
In emergency departments, suPAR can aid in the triage and risk assessment of patients, allowing many patients to be discharged rather than admitted and ensuring that the most ill patients are prioritised and put under careful observation without delay. A suPAR level below 4 ng/mL indicates a good prognosis in acute medical patients and supports discharge, whereas patients presenting with a suPAR level above 6 ng/mL have a high risk of a negative outcome.
COVID-19
In COVID-19, an early elevation of suPAR, e.g. in patients presenting with symptoms of SARS-CoV-2 infection, is associated with an increased risk of severe COVID-19, which may lead to respiratory failure, acute kidney injury, and death. Clinically relevant cut-offs have been identified, with a suPAR below 4 ng/mL indicating a low risk of adverse outcomes and a suPAR above 6 ng/mL indicating a high risk of negative outcomes such as severe respiratory failure.
Cardiovascular diseases
The suPAR level is elevated in patients with cardiovascular diseases compared to healthy individuals. suPAR is a predictor of cardiovascular morbidity and mortality in the general population.
Nephrology
In the kidneys, suPAR plays a role in regulating the permeability of the glomerular filtration barrier. An elevated suPAR level is associated with chronic renal diseases, the future incidence of chronic renal diseases, and declining eGFR. A high level is significantly associated with mortality and incidence of cardiovascular diseases in these patients.
Molecular characteristics
suPAR has a secondary structure of 17 anti-parallel β-sheets with three short α-helices. It consists of the three homologous domains D1, D2, and D3. Comparing cDNA sequences, D1 differs from D2 and D3 in its primary and tertiary structure, causing its distinct ligand binding properties.
uPAR has cleavage sites for several proteases in the linker region (chymotrypsin, elastase, matrix metalloproteases, cathepsin G, plasmin, urokinase plasminogen activator (uPA, or urokinase), and in the GPI anchor (phospholipase C and D, cathepsin G, plasmin).
The GPI-anchor links uPAR to the cell membrane making it available for uPA binding. When uPA is bound to the receptor, a cleavage between the GPI-anchor and D3 forms suPAR. Of the three suPAR forms: suPAR1-3, suPAR2-3, and suPAR1, suPAR2-3 is the chemotactic agent for promoting the immune system.
The molecular weight of suPAR varies between 24 and 66 kDa due to variations in post-translational glycosylation. Additional isoforms generated by alternative splicing have been described at the RNA level, but whether these are translated and what roles they might play remain unclear.
Plasma and serum levels
suPAR is mainly measured in serum and plasma isolated from human venous blood.
Technology
The suPAR level can be measured using the suPARnostic® product line. suPARnostic® is a CE-IVD certified antibody-based product range applied for quantitative measurements of suPAR in the clinical setting. Three product formats are available: 1) TurbiLatex, validated for clinical chemistry systems currently including the Roche Diagnostics cobas c501/2 and c701/2 systems; the Siemens ADVIA XPT and Atellica systems, and the Abbott Architect c and Alinity systems. 2) Quick Triage, which is a platform that is applied at the Point-Of-Care. 3) ELISA.
References
Human proteins
Clusters of differentiation
Biomarkers | Soluble urokinase plasminogen activator receptor | [
"Biology"
] | 1,193 | [
"Biomarkers"
] |
35,348,479 | https://en.wikipedia.org/wiki/Quippian | In mathematics, a quippian is a degree 5 class 3 contravariant of a plane cubic introduced by Arthur Cayley and discussed by later authors. In the same paper Cayley also introduced another similar invariant that he called the pippian, now called the Cayleyan.
See also
Glossary of classical algebraic geometry
References
Algebraic geometry
Invariant theory | Quippian | [
"Physics",
"Mathematics"
] | 69 | [
"Symmetry",
"Group actions",
"Fields of abstract algebra",
"Algebraic geometry",
"Invariant theory"
] |
35,351,239 | https://en.wikipedia.org/wiki/Signaling%20Gateway%20%28website%29 | Signaling Gateway is a web portal dedicated to signaling pathways powered by the San Diego Supercomputer Center at the University of California, San Diego. It was initiated by a collaboration between the Alliance for Cellular Signaling and Nature. A primary feature is the Molecule Pages database.
Molecule Pages Database (online database journal)
Signaling Gateway Molecule Pages is a database containing "essential information on more than 8000 mammalian proteins (Mouse and Human) involved in cellular signaling."
The content of molecule pages is authored by invited experts and is peer-reviewed. The published pages are citable by digital object identifiers (DOIs). All data in the Molecule Pages are freely available to the public.
Data can be exported to PDF, XML, BioPAX/SBPAX and SBML.
Some Published Molecule Pages
References
External links
Signaling Gateway Molecule Pages
Cell biology
Cell signaling
Works about neurochemistry
Signal transduction
Biological databases | Signaling Gateway (website) | [
"Chemistry",
"Biology"
] | 188 | [
"Works about neurochemistry",
"Cell biology",
"Signal transduction",
"Bioinformatics",
"Biochemistry",
"Neurochemistry",
"Works about biochemistry",
"Biological databases"
] |
35,356,715 | https://en.wikipedia.org/wiki/Top-lit%20updraft%20gasifier | A top-lit updraft gasifier (also known as a TLUD) is a micro-kiln used to produce charcoal, especially biochar, and heat for cooking. A TLUD pyrolyzes organic material, including wood or manure, and uses a reburner to eliminate volatile byproducts of pyrolization. The process leaves mostly carbon as a residue, which can be incorporated into soil to create terra preta.
Dr Thomas B Reed and the Norwegian architect Paal Wendelbo independently developed the working idea of a TLUD gasifier in the 1990s.
A TLUD gasifier is a considerable improvement on the rocket stove, being a more efficient way to achieve smoke-free combustion of the fuel.
Design
A TLUD gasifier stove is commonly constructed with two concentric cylindrical containers.
The inner cylinder is the fuel pot. The fuel pot has holes in the base. These holes are the primary air inlet. The fuel pot also has holes on the neck, like the skirt, serving as a secondary air inlet.
The outer cylinder has holes near the bottom on the sides. During combustion, air enters these holes, either by natural air draft or forced with a DC fan depending on requirement and construction model.
Any biomass with less than 20% water content can be used as fuel. The user fills the fuel pot up to the neck, just below the secondary air inlet holes, and ignites the top layer of fuel to start the pyrolysis. Air then flows in through the primary and secondary air inlets; the primary air sustains the pyrolysis front and helps the draft of pyrolysed wood gas flow upward.
The secondary air is preheated by the time it has travelled around the fuel pot; entering above the fuel layer, it burns the wood gas.
Instructional videos and a range of construction plans for TLUD gasifier stoves are available online.
See also
List of ovens
Notes
References
Biodegradable waste management
Thermal treatment
Bioenergy
Sustainable gardening
Kilns | Top-lit updraft gasifier | [
"Chemistry",
"Engineering"
] | 423 | [
"Biodegradation",
"Biodegradable waste management",
"Chemical equipment",
"Kilns"
] |