https://en.wikipedia.org/wiki/Spectrochimica%20Acta%20Part%20B

Spectrochimica Acta Part B: Atomic Spectroscopy is a monthly peer-reviewed scientific journal covering spectroscopy.
The journal was established in 1939 as Spectrochimica Acta. In 1967, Spectrochimica Acta was split into two journals, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy and Spectrochimica Acta Part B: Atomic Spectroscopy. Part B obtained its current title around the time of the split.
According to the Journal Citation Reports, the journal has a 2019 impact factor of 3.086.
The editor-in-chief is Alessandro De Giacomo of the University of Bari, Italy.
See also
Elsevier / Spectrochimica Acta Atomic Spectroscopy Award
References
External links
Spectroch. Acta B at CAS Source Index
Elsevier academic journals
Academic journals established in 1939
Spectroscopy journals
English-language journals
Journals published between 13 and 25 times per year
https://en.wikipedia.org/wiki/Finite%20volume%20method%20for%20one-dimensional%20steady%20state%20diffusion

The Finite volume method in computational fluid dynamics is a discretization technique for partial differential equations that arise from physical conservation laws. These equations can be different in nature, e.g. elliptic, parabolic, or hyperbolic. The first well-documented
use of this method was by Evans and Harlow (1957) at Los Alamos. The general equation for steady diffusion can easily be derived from the general transport equation for property Φ by deleting transient and convective terms.
The general transport equation can be defined as

∂(ρφ)/∂t + ∇·(ρφu) = ∇·(Γ∇φ) + S_φ,

where
ρ is the density and φ is the conserved quantity,
Γ is the diffusion coefficient and S_φ is the source term,
∇·(ρφu) is the net rate of flow of φ out of the fluid element (convection),
∇·(Γ∇φ) is the rate of increase of φ due to diffusion,
S_φ is the rate of increase of φ due to sources,
∂(ρφ)/∂t is the rate of increase of φ of the fluid element (transient).
Conditions under which the transient and convective terms go to zero:
Steady State
Low Reynolds Number
For one-dimensional, steady-state diffusion, the general transport equation reduces to

∇·(Γ∇φ) + S_φ = 0,

or, in one dimension,

d/dx(Γ dφ/dx) + S = 0.
The following steps comprise the finite volume method for one-dimensional steady-state diffusion:
STEP 1
Grid Generation
Divide the domain into a number of small, equally sized control volumes.
Place nodal points at the center of each small domain.
Create control volumes using these nodal points.
Create control volumes near the edges in such a way that the physical boundaries coincide with control volume boundaries (Figure 1).
Assume a general nodal point 'P' for a general control volume. Adjacent nodal points to the East and West are identified by E and W respectively. The West-side face of the control volume is referred to by 'w' and the East-side control volume face by 'e' (Figure 2).
The distances W–P, w–P, P–e and P–E are denoted by δx_WP, δx_wP, δx_Pe and δx_PE respectively (Figure 4).
STEP 2
Discretization
The crux of Finite volume method is to integrate the governing equation over each control volume.
Nodal points are used to discretize equations.
At nodal point P, the control volume integral is given by (Figure 3)

∫_ΔV d/dx(Γ dφ/dx) dV + ∫_ΔV S dV = (ΓA dφ/dx)_e − (ΓA dφ/dx)_w + S̄ΔV = 0,

where
A is the cross-sectional area of the control volume face, ΔV is the volume, and S̄ is the average value of the source S over the control volume.
It states that the difference between the diffusive flux of φ through the east and west faces of some volume corresponds to the change in the quantity in that volume.
The diffusion coefficient Γ and the gradient dφ/dx at the faces e and w are required in order to reach a useful conclusion.
The central differencing technique is used to derive the face values of the diffusion coefficient Γ:

Γ_w = (Γ_W + Γ_P)/2,
Γ_e = (Γ_P + Γ_E)/2.
The diffusive flux term (ΓA dφ/dx) is calculated using the nodal point values (Figure 4):

(ΓA dφ/dx)_e = Γ_e A_e (φ_E − φ_P)/δx_PE,
(ΓA dφ/dx)_w = Γ_w A_w (φ_P − φ_W)/δx_WP.
In some practical situations, the source term can be linearized:

S̄ΔV = S_u + S_p φ_P.
Merging the above equations leads to

Γ_e A_e (φ_E − φ_P)/δx_PE − Γ_w A_w (φ_P − φ_W)/δx_WP + (S_u + S_p φ_P) = 0.
Re-arranging gives

(Γ_e A_e/δx_PE + Γ_w A_w/δx_WP − S_p) φ_P = (Γ_w A_w/δx_WP) φ_W + (Γ_e A_e/δx_PE) φ_E + S_u.
Compare and identify the above equation with

a_P φ_P = a_W φ_W + a_E φ_E + S_u,

where

a_W = Γ_w A_w/δx_WP,  a_E = Γ_e A_e/δx_PE,  a_P = a_W + a_E − S_p.
STEP 3
Solution of equations
A discretized equation must be set up at each of the nodal points in order to solve the problem.
The resulting system of linear algebraic equations can then be solved to obtain φ at the nodal points.
Larger systems of this kind can be solved numerically, for example in MATLAB.
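The three steps above can be sketched in a short script. The example below is an illustrative sketch (function names and all numerical values, such as the rod length, Γ, A and the boundary values, are arbitrary choices, not taken from any particular source): it solves steady one-dimensional diffusion in a rod with fixed end values, uniform Γ and A and no interior source. The boundary conditions enter through the linearized source terms S_u and S_p, and the resulting tridiagonal system a_P φ_P = a_W φ_W + a_E φ_E + S_u is solved with the Thomas algorithm (TDMA).

```python
def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_rod(n=5, length=0.5, gamma=1000.0, area=10e-3,
              phi_a=100.0, phi_b=500.0):
    """FVM for d/dx(Gamma dphi/dx) = 0 on a rod with fixed end values."""
    dx = length / n
    D = gamma * area / dx                      # diffusive conductance per face
    a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        aW = D if i > 0 else 0.0               # no west neighbour at node 0
        aE = D if i < n - 1 else 0.0           # no east neighbour at node n-1
        Su, Sp = 0.0, 0.0
        if i == 0:                             # boundary enters via S_u, S_p
            Su, Sp = 2 * D * phi_a, -2 * D
        if i == n - 1:
            Su, Sp = 2 * D * phi_b, -2 * D
        a[i], c[i] = -aW, -aE
        b[i] = aW + aE - Sp                    # a_P = a_W + a_E - S_p
        d[i] = Su
    return tdma(a, b, c, d)

# For these data the exact solution is linear: [140, 220, 300, 380, 460]
phi = solve_rod()
```

Because the underlying solution is linear, the central-differencing discretization reproduces it exactly at the nodal points.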
This method can also be applied to a 2D situation. See Finite volume method for two dimensional diffusion problem.
References
Patankar, Suhas V. (1980). Numerical Heat Transfer and Fluid Flow. Hemisphere.
Hirsch, C. (1990). Numerical Computation of Internal and External Flows, Volume 2: Computational Methods for Inviscid and Viscous Flows. Wiley.
Laney, Culbert B. (1998). Computational Gas Dynamics. Cambridge University Press.
LeVeque, Randall (1990). Numerical Methods for Conservation Laws. ETH Lectures in Mathematics Series. Birkhäuser-Verlag.
Tannehill, John C., et al. (1997). Computational Fluid Mechanics and Heat Transfer, 2nd ed. Taylor and Francis.
Wesseling, Pieter (2001). Principles of Computational Fluid Dynamics. Springer-Verlag.
Carslaw, H. S. and Jaeger, J. C. (1959). Conduction of Heat in Solids. Oxford: Clarendon Press.
Crank, J. (1956). The Mathematics of Diffusion. Oxford: Clarendon Press.
Thambynayagam, R. K. M. (2011). The Diffusion Handbook: Applied Solutions for Engineers. McGraw-Hill.
External links
Finite difference
http://opencourses.emu.edu.tr/course/view.php?id=27&lang=en
https://web.archive.org/web/20120303230200/http://nptel.iitm.ac.in/courses/112105045/
http://ingforum.haninge.kth.se/armin/CFD/dirCFD.htm
Diffusion equation
Computational fluid dynamics
Convection–diffusion equation
Finite volume method, Cheng Long
Finite volume method, Robert Eymard et al. (2010), Scholarpedia,5(6):9835
See also
Heat equation
Fokker–Planck equation
Fick's laws of diffusion
Maxwell–Stefan equation
Computational fluid dynamics
https://en.wikipedia.org/wiki/Campus%20Biotech

The Campus Biotech is a Swiss institution hosting research institutes and biotechnology companies. It is located in the former Merck Serono building in Geneva, Switzerland.
The Campus Biotech is a part of the Swiss Innovation Park.
History
At the end of June 2013, Merck Serono left its headquarters in Geneva, and the building was bought by Ernesto Bertarelli and Hansjörg Wyss (for more than 300 million Swiss francs) to create the Campus Biotech.
Structure
EPFL-UNIGE Biomedical Center (14,000 m²)
Center for Neuroprosthetics (EPFL)
Human Brain Project and Blue Brain Project (EPFL) (5,000 m²)
Wyss Center for Bio and Neuroengineering (8,000 m²)
Foundation for Innovative New Diagnostics (FIND)
Biotech Innovation Square (12,000 m²)
Health 2030 Genome Center
See also
Lausanne campus
Notes and references
External links
Buildings and structures in Geneva
Engineering research institutes
Biotechnology in Switzerland
Laboratories in Switzerland
Research institutes in Switzerland
Multidisciplinary research institutes
https://en.wikipedia.org/wiki/Circuit%20Value%20Problem

The Circuit Value Problem (or Circuit Evaluation Problem) is the computational problem of computing the output of a given Boolean circuit on a given input.
The problem is complete for P under uniform AC⁰ reductions. Note that, in terms of time complexity, it can be solved in linear time simply by evaluating the gates in topological order.
The Boolean Formula Value Problem (or Boolean Formula Evaluation Problem) is the special case of the problem in which the circuit is a tree. The Boolean Formula Value Problem is complete for NC¹.
The problem is closely related to the Boolean Satisfiability Problem which is complete for NP and its complement, the Propositional Tautology Problem, which is complete for co-NP.
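The linear-time observation can be made concrete: once the gates are listed in topological order, a single pass over them evaluates the circuit. The sketch below is illustrative only; the gate encoding (tuples of an operator name and operand indices, with `'IN'` gates consuming inputs in order) is an assumption of this example, not a standard representation.

```python
def eval_circuit(gates, inputs):
    """Evaluate a Boolean circuit given in topological order.

    gates: list of (op, operand_indices), op in {'IN', 'NOT', 'AND', 'OR'};
           operand indices refer to earlier gates in the list.
    inputs: booleans consumed by the 'IN' gates in order.
    Returns the value of the last gate (the circuit output)."""
    vals = []                      # vals[i] = output of gate i
    it = iter(inputs)
    for op, args in gates:
        if op == 'IN':
            vals.append(next(it))
        elif op == 'NOT':
            vals.append(not vals[args[0]])
        elif op == 'AND':
            vals.append(all(vals[i] for i in args))
        elif op == 'OR':
            vals.append(any(vals[i] for i in args))
        else:
            raise ValueError(f"unknown gate: {op}")
    return vals[-1]

# Example circuit computing (x AND y) OR (NOT x):
gates = [('IN', ()), ('IN', ()), ('AND', (0, 1)), ('NOT', (0,)), ('OR', (2, 3))]
result = eval_circuit(gates, [False, True])   # NOT x is True, so output True
```

Each gate is visited exactly once and each wire contributes to exactly one operand list, which is the linear-time bound mentioned above.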
See also
Circuit satisfiability
Switching lemma
References
Polynomial-time problems
Computational problems
Theoretical computer science
https://en.wikipedia.org/wiki/Amorphous%20silicon

Amorphous silicon (a-Si) is the non-crystalline form of silicon used for solar cells and thin-film transistors in LCDs.
Used as semiconductor material for a-Si solar cells, or thin-film silicon solar cells, it is deposited in thin films onto a variety of flexible substrates, such as glass, metal and plastic. Amorphous silicon cells generally feature low efficiency.
As a second-generation thin-film solar cell technology, amorphous silicon was once expected to become a major contributor in the fast-growing worldwide photovoltaic market, but has since lost its significance due to strong competition from conventional crystalline silicon cells and other thin-film technologies such as CdTe and CIGS. Amorphous silicon is a preferred material for the thin film transistor (TFT) elements of liquid crystal displays (LCDs) and for x-ray imagers.
Amorphous silicon differs from other allotropic variations, such as monocrystalline silicon—a single crystal, and polycrystalline silicon, that consists of small grains, also known as crystallites.
Description
Silicon is a fourfold coordinated atom that is normally tetrahedrally bonded to four neighboring silicon atoms. In crystalline silicon (c-Si) this tetrahedral structure continues over a large range, thus forming a well-ordered crystal lattice.
In amorphous silicon this long range order is not present. Rather, the atoms form a continuous random network. Moreover, not all the atoms within amorphous silicon are fourfold coordinated. Due to the disordered nature of the material some atoms have a dangling bond. Physically, these dangling bonds represent defects in the continuous random network and may cause anomalous electrical behavior.
The material can be passivated by hydrogen, which bonds to the dangling bonds and can reduce the dangling bond density by several orders of magnitude. Hydrogenated amorphous silicon (a-Si:H) has a sufficiently low amount of defects to be used within devices such as solar photovoltaic cells, particularly in the protocrystalline growth regime. However, hydrogenation is associated with light-induced degradation of the material, termed the Staebler–Wronski effect.
Amorphous silicon and carbon
Amorphous alloys of silicon and carbon (amorphous silicon carbide, also hydrogenated, a-Si1−xCx:H) are an interesting variant. Introduction of carbon atoms adds extra degrees of freedom for control of the properties of the material. The film could also be made transparent to visible light.
Increasing the concentration of carbon in the alloy widens the electronic gap between conduction and valence bands (also called "optical gap" and bandgap). This increases the light efficiency of solar cells made with amorphous silicon carbide layers. On the other hand, the electronic properties as a semiconductor (mainly electron mobility), are adversely affected by the increasing content of carbon in the alloy, presumably due to the increased disorder in the atomic network.
Several studies are found in the scientific literature, mainly investigating the effects of deposition parameters on electronic quality, but practical applications of amorphous silicon carbide in commercial devices are still lacking.
Properties
The density of ion-implanted amorphous Si has been calculated as 4.90×10²² atoms/cm³ (2.285 g/cm³) at 300 K. This was done using thin (5 micron) strips of amorphous silicon. This is 1.8±0.1% less dense than crystalline Si at 300 K. Silicon is one of the few elements that expands upon cooling and has a lower density as a solid than as a liquid.
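The quoted mass density follows directly from the atomic density via the molar mass of silicon and Avogadro's number. The short check below reproduces the 2.285 g/cm³ figure; the crystalline reference density of about 2.329 g/cm³ is a standard literature value used here for comparison, not a number from this article.

```python
N_A = 6.02214e23   # Avogadro's number, 1/mol
M_SI = 28.0855     # molar mass of silicon, g/mol

n_aSi = 4.90e22                    # atomic density of a-Si, atoms/cm^3
rho_aSi = n_aSi * M_SI / N_A       # mass density, g/cm^3  (~2.285)

rho_cSi = 2.329                    # crystalline Si at 300 K (literature value)
deficit = (1 - rho_aSi / rho_cSi) * 100   # percent less dense (~1.9)
```

The computed deficit of about 1.9% is consistent with the quoted 1.8±0.1% to within rounding and the choice of reference density for crystalline silicon.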
Hydrogenated amorphous silicon
Unhydrogenated a-Si has a very high defect density, which leads to undesirable semiconductor properties such as poor photoconductivity, and prevents the doping that is critical to engineering semiconductor properties. By introducing hydrogen during the fabrication of amorphous silicon, photoconductivity is significantly improved and doping is made possible. Hydrogenated amorphous silicon, a-Si:H, was first fabricated in 1969 by Chittick, Alexander and Sterling by deposition using a silane gas (SiH4) precursor. The resulting material showed a lower defect density and increased conductivity due to impurities. Interest in a-Si:H grew when, in 1975, LeComber and Spear discovered the ability for substitutional doping of a-Si:H using phosphine (n-type) or diborane (p-type). The role of hydrogen in reducing defects was verified by Paul's group at Harvard, who found a hydrogen concentration of about 10 atomic % through IR vibration spectroscopy, which for Si–H bonds has a frequency of about 2000 cm⁻¹. Starting in the 1970s, a-Si:H solar cells were developed by David E. Carlson and C. R. Wronski at RCA Laboratories. Conversion efficiency steadily climbed to about 13.6% in 2015.
Deposition processes
Applications
While a-Si suffers from lower electronic performance compared to c-Si, it is much more flexible in its applications. For example, a-Si layers can be made thinner than c-Si, which may produce savings on silicon material cost.
One further advantage is that a-Si can be deposited at very low temperatures, e.g., as low as 75 degrees Celsius. This allows deposition on not only glass, but on plastic or even on paper substrates as well, making it a candidate for a roll-to-roll processing technique. Once deposited, a-Si can be doped in a fashion similar to c-Si, to form p-type or n-type layers and ultimately to form electronic devices.
Another advantage is that a-Si can be deposited over large areas by PECVD. The design of the PECVD system has a great impact on the production cost of such panels, so most equipment suppliers focus on PECVD designs with higher throughput, which leads to lower manufacturing cost, particularly when the silane is recycled.
Arrays of small (under 1 mm by 1 mm) a-Si photodiodes on glass are used as visible-light image sensors in some flat panel detectors for fluoroscopy and radiography.
Photovoltaics
Hydrogenated amorphous silicon (a-Si:H) has been used as a photovoltaic solar cell material for devices which require very little power, such as pocket calculators, because their lower performance compared to conventional crystalline silicon (c-Si) solar cells is more than offset by their simplified and lower cost of deposition onto a substrate. Moreover, the vastly higher shunt resistance of the p-i-n device means that acceptable performance is achieved even at very low light levels. The first solar-powered calculators were already available in the late 1970s, such as the Royal Solar 1, Sharp EL-8026, and Teal Photon.
More recently, improvements in a-Si:H construction techniques have made them more attractive for large-area solar cell use as well. Here their lower inherent efficiency is offset, at least partially, by their thinness: higher efficiencies can be reached by stacking several thin-film cells on top of each other, each one tuned to work well at a specific frequency of light. This approach is not applicable to c-Si cells, which are thick as a result of their indirect band-gap and are therefore largely opaque, blocking light from reaching other layers in a stack.
The low efficiency of amorphous silicon photovoltaics is due largely to the low hole mobility of the material. This low hole mobility has been attributed to many physical aspects of the material, including the presence of dangling bonds (silicon with three bonds), floating bonds (silicon with five bonds), and bond reconfigurations. While much work has been done to control these sources of low mobility, evidence suggests that the multitude of interacting defects may lead to the mobility being inherently limited, as reducing one type of defect leads to the formation of others.
The main advantage of a-Si:H in large scale production is not efficiency, but cost. a-Si:H cells use only a fraction of the silicon needed for typical c-Si cells, and the cost of the silicon has historically been a significant contributor to cell cost. However, the higher costs of manufacture due to the multi-layer construction have, to date, made a-Si:H unattractive except in roles where their thinness or flexibility are an advantage.
Typically, amorphous silicon thin-film cells use a p-i-n structure. The placement of the p-type layer on top is also due to the lower hole mobility, allowing the holes to traverse a shorter average distance for collection at the top contact. A typical panel structure includes front-side glass, TCO, thin-film silicon, back contact, polyvinyl butyral (PVB) and back-side glass. Uni-Solar, a division of Energy Conversion Devices, produced a version with flexible backing, used in roll-on roofing products. However, the world's largest manufacturer of amorphous silicon photovoltaics had to file for bankruptcy in 2012, as it could not compete with the rapidly declining prices of conventional solar panels.
Microcrystalline and micromorphous silicon
Microcrystalline silicon (also called nanocrystalline silicon) is amorphous silicon that also contains small crystals. It absorbs a broader spectrum of light and is flexible. Micromorphous silicon module technology combines two different types of silicon, amorphous and microcrystalline silicon, in a top and a bottom photovoltaic cell. Sharp produces cells using this system in order to more efficiently capture blue light, increasing the efficiency of the cells when there is no direct sunlight falling on them. Protocrystalline silicon is often used to optimize the open circuit voltage of a-Si photovoltaics.
Large-scale production
Xunlight Corporation, which has received over $40 million of institutional investments, has completed the installation of its first 25 MW wide-web, roll-to-roll photovoltaic manufacturing equipment for the production of thin-film silicon PV modules. Anwell Technologies has also completed the installation of its first 40 MW a-Si thin film solar panel manufacturing facility in Henan with its in-house designed multi-substrate-multi-chamber PECVD equipment.
Photovoltaic thermal hybrid solar collectors
Photovoltaic thermal hybrid solar collectors (PVT), are systems that convert solar radiation into electrical energy and thermal energy. These systems combine a solar cell, which converts electromagnetic radiation (photons) into electricity, with a solar thermal collector, which captures the remaining energy and removes waste heat from the solar PV module. Solar cells suffer from a drop in efficiency with the rise in temperature due to increased resistance. Most such systems can be engineered to carry heat away from the solar cells thereby cooling the cells and thus improving their efficiency by lowering resistance. Although this is an effective method, it causes the thermal component to under-perform compared to a solar thermal collector. Recent research showed that a-Si:H PV with low temperature coefficients allow the PVT to be operated at high temperatures, creating a more symbiotic PVT system and improving performance of the a-Si:H PV by about 10%.
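The temperature behaviour described above can be illustrated with the usual linear power-temperature model, P(T) = P_STC · (1 + γ(T − 25 °C)). The coefficients below (about −0.45 %/K for crystalline silicon and about −0.2 %/K for a-Si:H) are typical literature figures used here as assumptions, not values taken from this article.

```python
def relative_power(temp_c, gamma):
    """Relative PV power output versus standard test conditions (25 degC),
    using a linear temperature-coefficient model."""
    return 1.0 + gamma * (temp_c - 25.0)

# Assumed (typical) power temperature coefficients, per kelvin:
GAMMA_CSI = -0.0045   # crystalline silicon
GAMMA_ASI = -0.0020   # hydrogenated amorphous silicon

# At an elevated PVT operating temperature of, say, 75 degC:
p_csi = relative_power(75, GAMMA_CSI)   # c-Si loses ~22.5% of its output
p_asi = relative_power(75, GAMMA_ASI)   # a-Si:H loses only ~10%
```

The smaller coefficient is why a-Si:H tolerates the higher operating temperatures of a PVT collector better than c-Si, as the paragraph above describes.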
Thin-film-transistor liquid-crystal display
Amorphous silicon has become the material of choice for the active layer in thin-film transistors (TFTs), which are most widely used in large-area electronics applications, mainly for liquid-crystal displays (LCDs).
Thin-film-transistor liquid-crystal displays (TFT-LCDs) use a circuit layout process similar to that of semiconductor products. However, rather than fabricating the transistors from silicon formed into a crystalline silicon wafer, they are made from a thin film of amorphous silicon deposited on a glass panel. The silicon layer for TFT-LCDs is typically deposited using the PECVD process. Transistors take up only a small fraction of the area of each pixel and the rest of the silicon film is etched away to allow light to easily pass through it.
Polycrystalline silicon is sometimes used in displays requiring higher TFT performance. Examples include small high-resolution displays such as those found in projectors or viewfinders. Amorphous silicon-based TFTs are by far the most common, due to their lower production cost, whereas polycrystalline silicon TFTs are more costly and much more difficult to produce.
See also
Atomic layer deposition (ALD)
Chemical-mechanical planarization (CMP)
Chemical vapor deposition (CVD)
Crystalline silicon
Ion implantation
Nanoparticle
Physical vapor deposition (PVD)
Protocrystalline
Rapid thermal processing (RTP)
References
External links
Amorphous Silicon Devices group at the University of Waterloo, Ontario, Canada
Theory and Simulation at Ohio University, Athens Ohio
Allotropes of silicon
Silicon, Amorphous
Amorphous solids
Thin-film cells
https://en.wikipedia.org/wiki/Dynamic%20substructuring

Dynamic Substructuring (DS) is an engineering tool used to model and analyse the dynamics of mechanical systems by means of their components or substructures. Using the dynamic substructuring approach, one is able to analyse the dynamic behaviour of substructures separately and to later calculate the assembled dynamics using coupling procedures. Dynamic substructuring has several advantages over the analysis of the fully assembled system:
Substructures can be modelled in the domain that is most appropriate, e.g. experimentally obtained substructures can be combined with numerical models.
Large and/or complex systems can be optimized on substructure level.
Numerical computation load can be reduced as solving several substructures is computationally less demanding than solving one large system.
Substructure models of different development groups can be shared and combined without exposing the modelling details.
Dynamic substructuring is particularly tailored to simulation of mechanical vibrations, which has implications for many product aspects such as sound / acoustics, fatigue / durability, comfort and safety. Also, dynamic substructuring is applicable to any scale of size and frequency. It is therefore a widely used paradigm in industrial applications ranging from automotive and aerospace engineering to design of wind turbines and high-tech precision machinery.
History
The roots of dynamic substructuring can be found in the field of domain decomposition. In 1890 the mathematician Hermann Schwarz came up with an iterative procedure for domain decomposition which makes it possible to solve for continuous coupled subdomains. However, many of the analytical models of coupled continuous subdomains do not have closed-form solutions, which led to discretization and approximation techniques such as the Ritz method (sometimes called the Rayleigh-Ritz method due to the similarity between Ritz's formulation and the Rayleigh ratio), the boundary element method (BEM) and the finite element method (FEM). These methods can be considered as "first level" domain decomposition techniques.
The finite element method proved to be the most efficient method and the invention of the microprocessor made it possible to easily solve a large variety of physical problems. In order to analyse even larger and more complex problems, methods were invented to optimize the efficiency of the discretized calculations. The first step was replacing the direct solvers by iterative solvers such as the conjugate gradient method. The lack of robustness and slow convergence of these solvers did not make them an interesting alternative in the beginning. The rise of parallel computing in the 1980s however sparked their popularity. Complex problems could now be solved by dividing the problem into subdomains, each processed by a separate processor, and solving for the interface coupling iteratively. This can be seen as a second level domain decomposition as is visualized in the figure.
The efficiency of dynamic modelling could be increased even further by reducing the complexity of the individual subdomains. This reduction of the subdomains (or substructures in the context of structural dynamics) is realized by representing substructures by means of their general responses. Expressing the separate substructures by means of their general response instead of their detailed discretization led to the so-called dynamic substructuring method. This reduction step also allowed for replacing the mathematical description of the domains by experimentally obtained information. This reduction step is also visualized by the reduction arrow in the figure.
The first dynamic substructuring methods were developed in the 1960s and were more commonly known under the name component mode synthesis (CMS). The benefits of dynamic substructuring were quickly discovered by the scientific and engineering communities and it became an important research topic in the field of structural dynamics and vibrations. Major developments followed, resulting in e.g. the classic Craig-Bampton method. The Craig-Bampton method employs static condensation (Guyan reduction) and modal truncation techniques to effectively reduce the degrees of freedom in a system.
Due to improvements in sensor and signal processing technology in the 1980s, substructuring techniques also became attractive for the experimental community. Methods dealing with structural dynamic modification were created in which coupling techniques were directly applied to measured frequency response functions (FRFs). Broad popularity of the method was gained when Jetmundsen et al. formulated the classical frequency-based substructuring (FBS) method, which laid the groundwork for frequency-based dynamic substructuring. In 2006 a systematic notation was introduced by De Klerk et al. in order to simplify the difficult and elaborate notation that had been used prior. The simplification was done by means of two Boolean matrices that handle all the "bookkeeping" involved in the assembly of substructures.
Domains
Dynamic substructuring can best be seen as a domain-independent toolset for assembly of component models, rather than a modelling method of its own. Generally, dynamic substructuring can be used for all domains that are well suited to simulate multiple input/multiple output behaviour. Five domains that are well suited for substructuring are summarized in the below table.
The physical domain concerns methods that are based on (linearised) mass, damping and stiffness matrices, typically obtained from numerical FEM modelling. Popular solutions for the associated system of second-order differential equations are the time integration schemes of Newmark and the Hilber-Hughes-Taylor scheme. The modal domain concerns component mode synthesis (CMS) techniques such as the Craig-Bampton, Rubin and MacNeal methods. These methods provide efficient modal reduction bases and assembly techniques for numerical models in the physical domain. The frequency domain is more popularly known as frequency-based substructuring (FBS). Based on the classic formulation of Jetmundsen et al. and the reformulation of De Klerk et al., it has become the most commonly used domain for substructuring, because of the ease of expressing the differential equations of a dynamical system (by means of frequency response functions, FRFs) and the convenience of implementing experimentally obtained models. The time domain refers to the recently proposed concept of impulse-based substructuring (IBS), which expresses the behaviour of a dynamic system using a set of impulse response functions (IRFs). The state-space domain, finally, refers to methods proposed by Sjövall et al. that employ system identification techniques common to control theory.
As dynamic substructuring is a domain-independent toolset, it is applicable to the dynamic equations of all domains. In order to establish substructure assembly in a particular domain, two interface conditions need to be implemented. This is explained next, followed by a few common substructuring techniques.
Interface Conditions
To establish substructuring coupling / decoupling in each of the above-mentioned domains, two conditions should be met:
Coordinate compatibility, i.e. the connecting nodes of two substructures should have equal interface displacement.
Force equilibrium, i.e. the interface forces between connecting nodes have equal magnitude and opposing sign.
These are the two essential conditions that keep substructures together, hence allow to construct an assembly of multiple components. Note that the conditions are comparable with Kirchhoff's laws for electric circuits, in which case similar conditions apply to currents and voltages though/over electric components in a network; see also Mechanical–electrical analogies.
Substructure connectivity
Consider two substructures A and B as depicted in the figure. The two substructures comprise a total of six nodes; the displacements of the nodes are described by a set of Degrees of Freedom (DoFs). The DoFs of the six nodes are partitioned as follows:
u_1: the DoFs of the internal nodes of substructure A;
u_2: the DoFs of the coupling nodes of substructures A and B, i.e. interface DoFs;
u_3: the DoFs of the internal nodes of substructure B.
Note that the denotation 1, 2 and 3 indicates the function of the nodes/DoFs rather than the total amount. Let us define the sets of DoFs for the two substructures A and B in concatenated form. The displacements and applied forces are represented by the sets u and f. For the purpose of substructuring, a set of interface forces g is introduced which only contains non-zero entries on the interface DoFs:

u = [u_1^A; u_2^A; u_2^B; u_3^B],  f = [f_1^A; f_2^A; f_2^B; f_3^B],  g = [0; g_2^A; g_2^B; 0].
The relation between dynamic displacements and applied forces of the uncoupled problem is governed by a particular dynamic equation, such as presented in the table above. The uncoupled equations of motion are augmented by extra terms/equations for compatibility and equilibrium, as discussed next.
Compatibility
The compatibility condition requires that the interface DoFs have the same sign and value at both sides of the interface: u_2^A = u_2^B. This condition can be expressed using a so-called signed Boolean matrix, denoted by B. For the given example this can be expressed as:

B u = 0, with B = [0  −I  I  0], so that B u = −u_2^A + u_2^B = 0.
In some cases the interface nodes of the substructures are non-conforming, e.g. when two substructures are meshed separately. In such cases a non-Boolean matrix has to be used in order to enforce a weak interface compatibility.
A second form in which the compatibility condition can be expressed is by means of coordinate substitution by a set of generalised coordinates . The set contains the unique coordinates that remain after assembly of the substructures. Every matching pair of interface DoFs is described by a single generalised coordinate, which means that the compatibility condition is automatically enforced. Expressing using gives:
Matrix L is referred to as the Boolean localisation matrix. A useful relation between the matrices L and B can be exposed by noting that compatibility should hold for any set of physical coordinates expressed by u = L q. Indeed, substituting u = L q in the equation B u = 0:
Hence L represents the nullspace of B:
This means in practice that one only needs to define B or L; the other Boolean matrix is calculated using the nullspace property.
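The nullspace relation between the two Boolean matrices can be checked numerically. The following numpy sketch uses the four-DoF example above (one internal DoF per substructure plus one matching interface pair); the matrix entries are the obvious ones for that example, not taken from a particular reference:

```python
import numpy as np

# Global DoF vector u = [u1, u2A, u2B, u3]: u1 is internal to A, u2A/u2B are
# the matching interface pair, u3 is internal to B (a minimal 4-DoF example).
B = np.array([[0.0, 1.0, -1.0, 0.0]])    # signed Boolean matrix: B u = u2A - u2B

# Boolean localisation matrix: u = L q with generalised coordinates
# q = [q1, q2, q3]; the single interface coordinate q2 drives both u2A and
# u2B, so compatibility B u = 0 is enforced automatically.
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

assert np.allclose(B @ L, 0.0)           # L spans the nullspace of B

# Conversely, a localisation matrix can be computed as the nullspace of B:
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T                          # orthonormal nullspace basis, 4 x 3
assert np.allclose(B @ N, 0.0)
```

Any basis of the nullspace of B is a valid localisation matrix; the SVD-based basis simply corresponds to a different choice of generalised coordinates.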
Equilibrium
The second condition that has to be satisfied for substructure assembly is the force equilibrium for the matching interface forces g. For the current example, this condition requires that the matching interface forces are equal in magnitude and opposite in sign. Similar to the compatibility equation, the force equilibrium condition can be expressed using a Boolean matrix: use is made of the transpose of the Boolean localisation matrix that was introduced to write compatibility, i.e. L^T g = 0:
The equations for the internal DoFs state that the interface forces on internal nodes are zero, hence not present. The equation for the interface DoFs correctly establishes the force equilibrium between a matching pair of interface DoFs according to Newton's third law.
A second notation in which the equilibrium condition can be expressed is by introducing a set of Lagrange multipliers λ. The substitution of these Lagrange multipliers is possible as the two matching interface forces differ only in sign, not in value. Using again the signed Boolean matrix B, one writes g = −B^T λ:
The set λ defines the intensity of the interface forces g. Each Lagrange multiplier represents the magnitude of two matching interface forces in the assembly. By defining the interface forces in terms of the Lagrange multipliers λ, force equilibrium is automatically satisfied. This can be seen by substituting g = −B^T λ into the first equilibrium equation:
Again, the nullspace property of the Boolean matrices is used here, namely L^T B^T = 0.
The two conditions as presented above can be applied to establish coupling / decoupling in a myriad of domains and are thus independent of variables such as time, frequency, mode, etc. Some implementations of the interface conditions for the most common domains of substructuring are presented below.
Substructuring in the physical domain
The physical domain is the domain that has the most straightforward physical interpretation. For each discrete linearised dynamic system one is able to write an equilibrium between the externally applied forces and the internal forces originating from intrinsic inertia, viscous damping and elasticity. This relation is governed by one of the most elementary formulas in structural vibrations:
Here M, C and K represent the mass, damping and stiffness matrices of the system. These matrices are often obtained from finite element modelling (FEM), and are referred to as the numerical model of the structure. Furthermore, u represents the vector of DoFs and f the applied force vector, both of which depend on time t. This dependency is omitted in the following equations in order to improve readability.
Coupling in the physical domain
Coupling of substructures in the physical domain first requires writing the uncoupled equations of motion of the substructures in block diagonal form:
Next, two assembly approaches can be distinguished: primal and dual assembly.
Primal assembly
For primal assembly, a unique set of degrees of freedom q is defined in order to satisfy compatibility, u = L q. Furthermore, a second equation is added to enforce interface force equilibrium. This results in the following coupled dynamic equilibrium equations:
Pre-multiplying the first equation by L^T and noting that the equilibrium condition gives L^T g = 0, the primal assembly reduces to:
The primally assembled system matrices can be used for a transient simulation by any standard time stepping algorithm. Note that the primal assembly technique is analogous to the assembly of super-elements in finite element methods.
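As a minimal illustration of primal assembly, consider two identical mass–spring substructures joined at a single interface DoF. The values below are illustrative, not taken from the text:

```python
import numpy as np

# Substructure A with DoFs [u1, u2A], substructure B with DoFs [u2B, u3],
# each a mass-spring-mass chain (consistent units assumed).
m, k = 1.0, 1000.0
M_s = m * np.eye(2)
K_s = k * np.array([[1.0, -1.0], [-1.0, 1.0]])

# Block-diagonal (uncoupled) matrices on the global DoF set [u1, u2A, u2B, u3]:
Z2 = np.zeros((2, 2))
M = np.block([[M_s, Z2], [Z2, M_s]])
K = np.block([[K_s, Z2], [Z2, K_s]])

# Boolean localisation matrix: u = L q with q = [q1, q2, q3].
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Primal assembly: pre- and post-multiplication by L condenses the interface.
M_p = L.T @ M @ L
K_p = L.T @ K @ L

# The interface DoF now carries both interface masses, and the assembled
# stiffness is that of a three-mass chain:
assert np.allclose(M_p, m * np.diag([1.0, 2.0, 1.0]))
assert np.allclose(K_p, k * np.array([[1.0, -1.0, 0.0],
                                      [-1.0, 2.0, -1.0],
                                      [0.0, -1.0, 1.0]]))
```

The assembled matrices are exactly those obtained by physically joining the two substructures at the interface, which is the super-element analogy mentioned above.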
Dual assembly
In the dual assembly formulation the global set of DoFs u is retained and an assembly is made by a priori satisfying the equilibrium condition through the substitution g = −B^T λ. Again, the Lagrange multipliers λ represent the interface forces connecting the DoFs at the interface. As these are unknowns, they are moved to the left-hand side of the equation. In order to satisfy compatibility, a second equation B u = 0 is added to the system, now operating on the displacements:
The dually assembled system can be written in matrix form as:
This dually assembled system can also be used in a transient simulation by means of a standard time stepping algorithm.
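The dually assembled system can be checked against the primal result at a single frequency. The sketch below solves the saddle-point system for harmonic excitation, with dynamic stiffness Z(ω) = K − ω²M and illustrative parameter values; both assemblies must return the same physical response:

```python
import numpy as np

m, k, w = 1.0, 1000.0, 5.0
K_s = k * np.array([[1.0, -1.0], [-1.0, 1.0]])
Z_s = K_s - w**2 * m * np.eye(2)          # dynamic stiffness of one substructure
Z2 = np.zeros((2, 2))
Z = np.block([[Z_s, Z2], [Z2, Z_s]])      # uncoupled, DoFs [u1, u2A, u2B, u3]

B = np.array([[0.0, 1.0, -1.0, 0.0]])     # compatibility: u2A - u2B = 0
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
f = np.array([1.0, 0.0, 0.0, 0.0])        # unit harmonic force on u1

# Dual assembly: saddle-point system in (u, lambda); B^T lambda carries the
# interface forces, and the last row enforces B u = 0.
S = np.block([[Z, B.T], [B, np.zeros((1, 1))]])
sol = np.linalg.solve(S, np.concatenate([f, [0.0]]))
u_dual, lam = sol[:4], sol[4]

# Primal assembly of the same problem for comparison:
q = np.linalg.solve(L.T @ Z @ L, L.T @ f)
u_primal = L @ q

assert np.allclose(u_dual, u_primal)
assert abs(u_dual[1] - u_dual[2]) < 1e-9  # interface compatibility holds
```

In a transient setting the same saddle-point structure appears at every time step of the integration scheme.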
Substructuring in the frequency domain
In order to write out the equations for frequency based substructuring (FBS), the dynamic equilibrium first has to be put in the frequency domain. Starting with the dynamic equilibrium in the physical domain:
Taking the Fourier transform of this equation gives the dynamic equilibrium in the frequency domain:
Matrix Z is referred to as the dynamic stiffness matrix. This matrix consists of the complex-valued frequency-dependent functions that describe the force required to generate a unit harmonic displacement at a certain DoF. The inverse of the dynamic stiffness matrix, Y = Z^−1, is the receptance matrix and yields the more intuitive admittance notation:
The receptance matrix contains the frequency response functions (FRFs) of the structure which describe the displacement response to a unit input force. Other variants of the receptance matrix are the mobility and accelerance matrix, which respectively describe the velocity and acceleration response. The elements of the dynamic stiffness (or impedance in general) and receptance (or admittance in general) matrix are defined as follows:
Coupling in the frequency domain
In order to couple two substructures in the frequency domain, use is made of the admittance and impedance matrices of both substructures. Using the definition of substructures A and B as introduced previously, the following impedance and admittance matrices are defined (note that the frequency dependency is omitted from the terms to improve readability):
The two admittance and impedance matrices can be put in block diagonal form in order to align with the global set of DoFs u:
The off-diagonal zero terms show that at this point no coupling is present between the two substructures. In order to create this coupling, use can be made of the primal or dual assembly method. Both assembly methods make use of the dynamic equations as defined before:
In these equations, g is again used to denote the set of interface forces, which are as yet unknown.
Primal assembly
In order to obtain the primal system of equations, a unique set of coordinates q is defined such that u = L q. By definition of an appropriate Boolean localisation matrix L, a unique set of DoFs q remains for which the compatibility condition is satisfied a priori. In order to satisfy the equilibrium condition, a second equation is added to the equations of motion:
Pre-multiplying the first equation with L^T yields the notation of the assembled equations of motion for the generalised coordinates q:
This result can be rewritten in admittance form as:
This last result gives access to the generalised responses q as a result of the generalised applied forces, namely by inverting the primally assembled impedance matrix.
The primal assembly procedure is mainly of interest when one has access to the dynamics in impedance form, e.g. from finite element modelling. When one only has access to the dynamics in admittance notation, the dual formulation is a more suitable approach.
Dual assembly
A dually assembled system starts with the system written in the admittance notation. For a dually assembled system the force equilibrium condition is satisfied a priori by substituting Lagrange multipliers for the interface forces: g = −B^T λ. The compatibility condition is enforced by adding the additional equation B u = 0:
Substituting the first line into the second and solving for λ gives:
The term B Y f represents the incompatibility caused by the uncoupled responses of the substructures to the applied forces f. Multiplying the incompatibility by the combined interface stiffness, i.e. the inverse of B Y B^T, yields the interface forces that keep the substructures together. The coupled response is obtained by substituting the calculated λ back into the original equation:
This coupling method is referred to as the Lagrange-multiplier frequency-based substructuring (LM-FBS) method. The LM-FBS method allows for quick and easy assembling of an arbitrary number of substructures in a systematic fashion. Note that the result is theoretically the same as was obtained above by application of primal assembly.
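At one frequency, the LM-FBS coupling step is a single matrix expression, Y_coupled = Y − Y Bᵀ(B Y Bᵀ)⁻¹ B Y. The following sketch uses illustrative admittances (obtained here by inverting simple dynamic stiffness matrices, whereas in practice they would come from measured FRFs) and cross-checks the result against primal assembly of the impedances:

```python
import numpy as np

m, k, w = 1.0, 1000.0, 5.0
Z_s = k * np.array([[1.0, -1.0], [-1.0, 1.0]]) - w**2 * m * np.eye(2)
Y_A = np.linalg.inv(Z_s)                  # admittance of A, DoFs [u1, u2A]
Y_B = np.linalg.inv(Z_s)                  # admittance of B, DoFs [u2B, u3]

Z2 = np.zeros((2, 2))
Y = np.block([[Y_A, Z2], [Z2, Y_B]])      # uncoupled block admittance
B = np.array([[0.0, 1.0, -1.0, 0.0]])     # compatibility: u2A - u2B = 0

# LM-FBS coupling: the solved term applies the interface-force correction
# lambda = (B Y B^T)^{-1} B Y f for every possible force input at once.
Y_c = Y - Y @ B.T @ np.linalg.solve(B @ Y @ B.T, B @ Y)

# Cross-check: primal assembly of the impedances gives the same answer.
L = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
Z = np.block([[Z_s, Z2], [Z2, Z_s]])
Y_ref = L @ np.linalg.inv(L.T @ Z @ L) @ L.T
assert np.allclose(Y_c, Y_ref)
assert np.allclose(Y_c[1], Y_c[2])        # coupled interface rows coincide
```

Repeating this one-line coupling step per frequency line is what makes LM-FBS convenient for measured admittance data.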
Decoupling in the frequency domain
In addition to coupling of substructures, one is also able to decouple substructures from assemblies. Using the plus sign as a substructure coupling operator, the coupling procedure could simply be described as AB = A + B. Using a similar notation, decoupling could be formulated as AB - B = A. Decoupling procedures are often required to remove substructures that were added for measurement purposes, e.g. to fix the structure. Similar to coupling, a primal and dual formulation exists for decoupling procedures.
Primal disassembly
As a result of the primal coupling, the impedance matrix of the assembled system can be written as follows:
Using this relation, the following trivial subtraction operation would suffice for the decoupling of the substructure B from assembly AB:
By placing the impedance of AB and B in block-diagonal form, with a minus sign for the impedance of B to account for the subtraction operation, the same equation that was used for primal coupling can now be used to perform the primal decoupling procedures.
with:
The primal disassembly can thus be understood as the assembly of structure AB with the negative impedance of substructure B. A limitation of the primal disassembly is that all DoFs of the substructure that is to be decoupled have to be exactly represented in the assembled situation. For numerical decoupling situations this should not pose any problems; for experimental cases, however, this can be troublesome. A solution to this problem can be found in the dual disassembly.
Dual disassembly
Similar to the dual assembly, the dual disassembly approaches the decoupling problem using the admittance matrices. Decoupling in the dual domain means finding a force that ensures compatibility, yet acts in the opposite direction. This newly found force would then counteract the force that is applied to the assembly due to the dynamics of substructure B. Writing this out in equations of motion:
In order to write the dynamics of both systems in one equation, using the LM-FBS assembly notation, the following matrices are defined:
In order to enforce compatibility, a similar approach is used as for the assembly task. Defining a signed Boolean matrix B to enforce compatibility:
Using this notation, the disassembly procedure can be performed using exactly the same equation as was used for the dual assembly:
This means that coupling and decoupling procedures using LM-FBS require identical steps, the only difference being the manner in which the global admittance matrix is defined. Indeed, the substructures to couple appear with a plus sign, whereas decoupled structures carry a minus sign:
More advanced decoupling techniques use the fact that internal points of substructure B appear in both the admittances of AB and B, and hence can be used to enhance the decoupling process. Such techniques are described in the literature.
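Because coupling and decoupling share the same LM-FBS step, a small numerical round-trip is easy to set up: couple A and B, then decouple B again using −Y_B and recover A's admittance. The sketch below uses illustrative matrices and interface-only compatibility:

```python
import numpy as np

def lmfbs(Y, B):
    """LM-FBS step: (de)coupled admittance from a block admittance Y and a
    signed Boolean compatibility matrix B."""
    return Y - Y @ B.T @ np.linalg.solve(B @ Y @ B.T, B @ Y)

m, k, w = 1.0, 1000.0, 5.0
Z_s = k * np.array([[1.0, -1.0], [-1.0, 1.0]]) - w**2 * m * np.eye(2)
Y_A = np.linalg.inv(Z_s)                 # substructure A, DoFs [u1, u2]
Y_B = np.linalg.inv(Z_s)                 # substructure B, DoFs [u2, u3]

# Couple A + B (interface u2), then keep the unique 3 DoFs [u1, u2, u3]:
Z2 = np.zeros((2, 2))
Yg = np.block([[Y_A, Z2], [Z2, Y_B]])
Bc = np.array([[0.0, 1.0, -1.0, 0.0]])
Y_AB4 = lmfbs(Yg, Bc)
idx = [0, 1, 3]                          # drop the duplicate interface DoF
Y_AB = Y_AB4[np.ix_(idx, idx)]

# Decouple B again: the same LM-FBS step, but with -Y_B in the global
# admittance (the minus sign noted above).
Yd = np.block([[Y_AB, np.zeros((3, 2))], [np.zeros((2, 3)), -Y_B]])
Bd = np.array([[0.0, 1.0, 0.0, -1.0, 0.0]])   # interface: u2 = u2 of B
Y_dec = lmfbs(Yd, Bd)
assert np.allclose(Y_dec[:2, :2], Y_A)   # A's admittance is recovered
```

With exact (noise-free) admittances the round-trip is exact; with measured FRFs, the extra internal DoFs of B mentioned above become valuable for conditioning.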
See also
Vibration
Finite element method
Finite element tearing and interconnect (FETI)
Mechanical engineering
Acoustic engineering
Mechanical resonance
Mode shape
Modal analysis
Modal analysis using FEM
Shaker (testing device)
SEM International Modal Analysis Conference (IMAC)
SEM/IMAC Dynamic Substructuring Wiki
Structural dynamics
Structural acoustics
Noise, vibration, and harshness
Transfer path analysis
Vibration control
Vibration isolation
References
Mechanical vibrations
Dynamics (mechanics)
Continuum mechanics
Structural analysis | Dynamic substructuring | [
"Physics",
"Engineering"
] | 4,330 | [
"Structural engineering",
"Physical phenomena",
"Continuum mechanics",
"Structural analysis",
"Classical mechanics",
"Motion (physics)",
"Dynamics (mechanics)",
"Mechanics",
"Mechanical vibrations",
"Aerospace engineering",
"Mechanical engineering"
] |
50,044,342 | https://en.wikipedia.org/wiki/DNA%20transposon | DNA transposons are DNA sequences, sometimes referred to "jumping genes", that can move and integrate to different locations within the genome. They are class II transposable elements (TEs) that move through a DNA intermediate, as opposed to class I TEs, retrotransposons, that move through an RNA intermediate. DNA transposons can move in the DNA of an organism via a single-or double-stranded DNA intermediate. DNA transposons have been found in both prokaryotic and eukaryotic organisms. They can make up a significant portion of an organism's genome, particularly in eukaryotes. In prokaryotes, TE's can facilitate the horizontal transfer of antibiotic resistance or other genes associated with virulence. After replicating and propagating in a host, all transposon copies become inactivated and are lost unless the transposon passes to a genome by starting a new life cycle with horizontal transfer. DNA transposons do not randomly insert themselves into the genome, but rather show preference for specific sites.
With regard to movement, DNA transposons can be categorized as autonomous and nonautonomous. Autonomous ones can move on their own, while nonautonomous ones require the presence of another transposable element's gene, transposase, to move. There are three main classifications for movement for DNA transposons: "cut and paste," "rolling circle" (Helitrons), and "self-synthesizing" (Polintons). These distinct mechanisms of movement allow them to move around the genome of an organism. Since DNA transposons cannot synthesize DNA, they replicate using the host replication machinery. These three main classes are then further broken down into 23 different superfamilies characterized by their structure, sequence, and mechanism of action.
DNA transposons are a cause of gene expression alterations. As newly inserted DNA into active coding sequences, they can disrupt normal protein functions and cause mutations. Class II TEs make up about 3% of the human genome. Today, there are no active DNA transposons in the human genome. Therefore, the elements found in the human genome are called fossils.
Mechanisms of action
Cut and paste
Traditionally, DNA transposons move around in the genome by a cut and paste method. The system requires a transposase enzyme that catalyzes the movement of the DNA from its current location in the genome and inserts it in a new location. Transposition requires three DNA sites on the transposon: two at each end of the transposon, called terminal inverted repeats, and one at the target site. The transposase binds to the terminal inverted repeats of the transposon and mediates synapsis of the transposon ends. The transposase enzyme then disconnects the element from the flanking DNA of the original donor site and mediates the joining reaction that links the transposon to the new insertion site. The addition of the new DNA into the target site causes short gaps on either side of the inserted segment. Host systems repair these gaps, resulting in the target site duplications (TSDs) that are characteristic of transposition. In many reactions, the transposon is completely excised from the donor site in what is called a "cut and paste" transposition and inserted into the target DNA to form a simple insertion. Occasionally, genetic material not originally in the transposable element gets copied and moved as well.
Helitrons
Helitrons are also a group of eukaryotic class II TEs. Helitrons do not follow the classical "cut and paste" mechanism. Instead, they are hypothesized to move around the genome via a rolling-circle-like mechanism. In this process, an enzyme nicks one strand of the DNA, separating it into two single strands. The initiation protein then remains attached to the 5' phosphate on the nicked strand, exposing the 3' hydroxyl of the complementary strand, which allows a polymerase to begin replication on the un-nicked strand. Eventually the entire strand is replicated, at which point the newly synthesized DNA dissociates and is replicated in parallel with the original template strand. Helitrons encode a protein of incompletely characterized function which is thought to have HUH endonuclease activity as well as 5' to 3' helicase activity. This enzyme would make a single-stranded cut in the DNA, which explains the lack of target site duplications found in Helitrons. Helitrons were also the first class of transposable elements to be discovered computationally, marking a paradigm shift in the way whole genomes were studied.
Polintons
Polintons are also a group of eukaryotic class II TEs. Among the most complex known DNA transposons in eukaryotes, they are found in the genomes of protists, fungi, and animals, such as Entamoeba, soybean rust, and chicken, respectively. They contain genes homologous to viral proteins that are often found in eukaryotic genomes, such as a DNA polymerase and a retroviral integrase; however, no protein functionally similar to viral capsid or envelope proteins is known. They share many structural characteristics with linear plasmids, bacteriophages and adenoviruses, which replicate using protein-primed DNA polymerases, and Polintons have been proposed to undergo a similar self-synthesis by their own polymerase. Polintons, 15–20 kb long, encode up to 10 individual proteins. For replication, they utilize a protein-primed DNA polymerase B, a retroviral integrase, a cysteine protease, and an ATPase. First, during host genome replication, a single-stranded extra-chromosomal Polinton element is excised from the host DNA by the integrase, forming a racket-like structure. Second, the Polinton undergoes replication by the DNA polymerase B, with initiation started by a terminal protein, which may be encoded in some linear plasmids. Once the double-stranded Polinton is generated, the integrase inserts it into the host genome. Polintons exhibit high variability between different species and may be tightly regulated, resulting in a low frequency in many genomes.
Classification
As of the most recent update in 2023, 31 superfamilies of DNA transposons were recognized and annotated in Repbase, a database of repetitive DNA elements maintained by the Genetic Information Research Institute:
Effects of transposons
DNA transposons, like all transposons, are quite impactful with respect to gene expression. A sequence of DNA may insert itself into a previously functional gene and create a mutation. This can happen in three distinct ways: 1. alteration of function, 2. chromosomal rearrangement, and 3. a source of novel genetic material. Since DNA transposons may happen to take parts of genomic sequences with them, exon shuffling may occur. Exon shuffling is the creation of novel gene products due to the new placement of two previously unrelated exons through transposition. Because of their ability to alter DNA expression, transposons have become an important target of research in genetic engineering.
Examples
Maize
Barbara McClintock first discovered and described DNA transposons in Zea mays, during the 1940s; this is an achievement that would earn her the Nobel Prize in 1983. She described the Ac/Ds system where the Ac unit (activator) was autonomous but the Ds genomic unit required the presence of the activator in order to move. This TE is one of the most visually obvious as it was able to cause the maize to change color from yellow to brown/spotted on individual kernels.
Fruit flies
The Mariner/Tc1 transposon, found in many animals but studied mainly in Drosophila, was first described by Jacobson and Hartl. Mariner is well known for being able to excise and insert horizontally into a new organism. Thousands of copies of the TE have been found interspersed in the human genome as well as in other animals.
The Hobo transposons in Drosophila have been extensively studied due to their ability to cause gonadal dysgenesis. The insertion and subsequent expression of hobo-like sequences results in the loss of germ cells in the gonads of developing flies.
Bacteria
Bacterial transposons are especially good at facilitating horizontal gene transfer between microbes. Transposition facilitates the transfer and accumulation of antibiotic resistance genes. In bacteria, transposable elements can easily jump between the chromosomal genome and plasmids. In a 1982 study by Devaud et al., a multi-drug resistant strain of Acinetobacter was isolated and examined. Evidence pointed to the transfer of a plasmid into the bacterium, where the resistance genes were transposed into the chromosomal genome.
Genetic diversity
Transposons may promote the genetic diversity of many organisms. DNA transposons can drive the evolution of genomes by promoting the relocation of sections of DNA sequences. As a result, this can alter gene regulatory regions and phenotypes. Transposons were discovered by Barbara McClintock, who noticed that these elements could change the color of the maize plants she was studying, providing quick evidence of one outcome of transposon movement. Another example is the Tol2 DNA transposon in medaka fish, which is thought to be responsible for the variety in their pigmentation patterns. These examples show that transposons can greatly influence the process of evolution by rapidly inducing changes in the genome.
Inactivation
All DNA transposons are inactive in the human genome. Inactivated, or silenced, transposons do not result in a phenotypic outcome and do not move around in the genome. Some are inactive because they have mutations that affect their ability to move between chromosomes, while others are capable of moving but remain inactive due to epigenetic defenses such as DNA methylation and chromatin remodeling. For example, chemical modifications of DNA can constrict certain areas of the genome such that transcription enzymes are unable to reach them. RNAi, specifically siRNA and miRNA silencing, is a naturally occurring mechanism that, in addition to regulating eukaryotic gene expression, prevents transcription of DNA transposons. Another mode of inactivation is overproduction inhibition: when transposase exceeds a threshold concentration, transposon activity is decreased. Since transposase can form inactive or less active monomers that decrease transposition activity overall, transposition also declines as copies of such less active elements accumulate in the host genome.
Horizontal transfer
Horizontal transfer refers to the movement of DNA information between cells of different organisms. Horizontal transfer can involve the movement of TEs from one organism into the genome of another. The insertion itself allows the TE to become an activated gene in the new host. Horizontal transfer is used by DNA transposons to prevent inactivation and complete loss of the transposon. This inactivation is termed vertical inactivation, meaning that the DNA transposon is inactive and remains as a fossil. This type of transfer is not the most common, but has been seen in the case of the wheat virulence protein ToxA, which was transferred between the different fungal pathogens Parastagonospora nodorum, Pyrenophora tritici-repentis, and Bipolaris sorokiniana. Other examples include transfer between marine crustaceans, insects of different orders, and organisms of different phyla, such as humans and nematodes.
Evolution
Eukaryotic genomes differ in TE content. Recently, a study of the different superfamilies of TEs reveals that there are striking similarities between the groups. It has been hypothesized that many of them are represented in two or more Eukaryotic supergroups. This means that divergence of the transposon superfamilies could even predate the divergence of Eukaryotic supergroups.
V(D)J recombination
V(D)J recombination, although not carried out by a DNA TE, is remarkably similar to transposition. V(D)J recombination is the process by which the large variation in antibody binding sites is created. In this mechanism, DNA is recombined in order to create genetic diversity. Because of this, it has been hypothesized that the proteins involved, particularly Rag1 and Rag2, are derived from transposable elements.
Extinction in the human genome
There is evidence suggesting that at least 40 human DNA transposon families were active during mammalian radiation and early primate lineage. Then, there was a pause in transpositional activity during the later portion of primate radiation, with a complete halt in transposon movement in an anthropoid primate ancestor. There is no evidence of any transposable element younger than about 37 million years.
References
External links
Dfam, a database of repeating DNA sequences
Repbase, a database and classification system for repeating DNA sequences
DNA transposon derived genes, in HGNC database
DNA
Mobile genetic elements | DNA transposon | [
"Biology"
] | 2,752 | [
"Molecular genetics",
"Mobile genetic elements"
] |
50,045,141 | https://en.wikipedia.org/wiki/Atmospheric%20distillation%20of%20crude%20oil | Refining of crude oils essentially consists of primary separation processes and secondary conversion processes. The petroleum refining process is the separation of the different hydrocarbons present in crude oil into useful fractions and the conversion of some of the
hydrocarbons into products having higher quality performance.
Atmospheric and vacuum distillation of crude oils are the main primary separation processes, producing various straight-run products, e.g., gasoline to lube oils/vacuum gas oils. Distillation of crude oil is typically performed first under atmospheric pressure and then under a vacuum. Low-boiling fractions usually vaporize below 400 °C at atmospheric pressure without cracking the hydrocarbon compounds; therefore, all the low-boiling fractions of crude oil are separated by atmospheric distillation. A crude distillation unit (CDU) includes a pre-flash column ahead of the main atmospheric distillation column. The petroleum products obtained from the distillation process are light, medium, and heavy naphtha, kerosene, diesel, and oil residue.
Atmospheric crude distillation unit
Crude oil must first be desalted, by heating to a temperature of 100–150 °C and mixing with 4–10% fresh water to dilute the salt. Crude oil exits from the desalter at a temperature of 250 °C–260 °C and is further heated by a tube-still heater to a temperature of 350 °C–360 °C. The hot crude oil is then passed into a distillation column that allows the separation of the crude oil into different fractions depending on the difference in volatility. The pressure at the top is maintained at 1.2–1.5 atm so that the distillation can be carried out at close to atmospheric pressure, and therefore it is known as the atmospheric distillation column.
The vapors from the top of the column are a mixture of hydrocarbon gases and naphtha, at a temperature of 120 °C–130 °C. This vapor stream, together with the stripping steam introduced at the bottom of the column, is condensed by a water cooler, and the liquid is collected in a vessel known as the reflux drum at the top of the column. Part of the liquid is returned to the top plate of the column as overhead reflux, and the remainder is sent to a stabilizer column which separates the gases from the liquid naphtha.
A few plates below the top plate, the kerosene is obtained as a product at a temperature of 190 °C–200 °C. Part of this fraction is returned to the column after it is cooled by a heat exchanger. This cooled liquid is known as circulating reflux, and it is important to control the heat load in the column. The remaining crude oil is passed through a side stripper which uses steam to separate kerosene. The kerosene obtained is cooled and collected in a storage tank as raw kerosene, known as straight run kerosene that boils at a range of 140 °C–270 °C.
A few plates below the kerosene draw plate, the diesel fraction is obtained at a temperature of 280 °C–300 °C. The diesel fraction is then cooled and stored. The top product from the atmospheric distillation column is a mixture of hydrocarbon gases, e.g., methane, ethane, propane, butane, and naphtha vapors. Residual oil present at the bottom of the column is known as reduced crude oil (RCO). The temperature of the stream at the bottom is 340 °C–350 °C, which is below the cracking temperature of the oil.
Simulation helps in crude oil characterization so that thermodynamic and transport properties can be predicted. Dynamic models help in examining the relationships that could not be found by experimental methods (Ellner & Guckenheimer, 2006). By using modeling and simulation software, 80% of the time can be saved rather than constructing an actual working model. This also saves cost, and models provide more accurate studies of real systems.
See also
Distillation
Continuous distillation
References
Distillation | Atmospheric distillation of crude oil | [
"Chemistry"
] | 819 | [
"Distillation",
"Separation processes"
] |
50,045,170 | https://en.wikipedia.org/wiki/G%C3%B6bel%27s%20sequence | In mathematics, a Göbel sequence is a sequence of rational numbers defined by the recurrence relation
with starting value
Göbel's sequence starts with
1, 1, 2, 3, 5, 10, 28, 154, 3520, 1551880, ...
The first non-integral value is x43.
History
This sequence was developed by the German mathematician Fritz Göbel in the 1970s. In 1975, the Dutch mathematician Hendrik Lenstra showed that the 43rd term is not an integer.
Generalization
Göbel's sequence can be generalized to kth powers by
x_n = (x_0^k + x_1^k + ⋯ + x_{n−1}^k) / (n − 1).
The least indices at which the k-Göbel sequences assume a non-integral value, for k = 2, 3, 4, ..., are
43, 89, 97, 214, 19, 239, 37, 79, 83, 239, ...
Regardless of the value chosen for k, the initial 19 terms are always integers.
See also
Somos sequence
References
External links
Göbel's Sequence
Integer sequences
Recurrence relations | Göbel's sequence | [
"Mathematics"
] | 195 | [
"Sequences and series",
"Integer sequences",
"Mathematical structures",
"Recurrence relations",
"Recreational mathematics",
"Mathematical objects",
"Combinatorics",
"Mathematical relations",
"Numbers",
"Number theory"
] |
36,736,693 | https://en.wikipedia.org/wiki/Tinetti%20test | The Tinetti Test (TT), or Performance Oriented Mobility Assessment (POMA), is a common clinical test for assessing a person's static and dynamic balance abilities. It is named after one of the inventors, Mary Tinetti.
The test has two short sections: one examining static balance abilities, first seated in a chair and then standing, and the other examining gait. The two sections are sometimes used as separate tests.
It has numerous other names, including Tinetti Gait and Balance Examination, Tinetti's Mobility Test, and Tinetti Balance Test; the wide variation in naming, test sections and cut off values sometimes cause confusion.
See also
Romberg's test
Sitting-rising test
Timed Up and Go test
References
External links
Free online Tinetti Test calculator
Biomechanics
Medical scales
Geriatrics | Tinetti test | [
"Physics"
] | 168 | [
"Biomechanics",
"Mechanics"
] |
36,737,310 | https://en.wikipedia.org/wiki/C7H13NO2 | {{DISPLAYTITLE:C7H13NO2}}
The molecular formula C7H13NO2 (molar mass: 143.19 g/mol) may refer to:
Dimethylaminoethyl acrylate
N-(2-Hydroxypropyl) methacrylamide
Stachydrine
Molecular formulas | C7H13NO2 | [
"Physics",
"Chemistry"
] | 75 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,740,699 | https://en.wikipedia.org/wiki/Position%20and%20momentum%20spaces | In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite dimension.
Position space (also real space or coordinate space) is the set of all position vectors r in Euclidean space, and has dimensions of length; a position vector defines a point in space. (If the position vector of a point particle varies with time, it will trace out a path, the trajectory of a particle.) Momentum space is the set of all momentum vectors p a physical system can have; the momentum vector of a particle corresponds to its motion, with units of [mass][length][time]−1.
Mathematically, the duality between position and momentum is an example of Pontryagin duality. In particular, if a function is given in position space, f(r), then its Fourier transform obtains the function in momentum space, φ(p). Conversely, the inverse Fourier transform of a momentum space function is a position space function.
These quantities and ideas transcend all of classical and quantum physics, and a physical system can be described using either the positions of the constituent particles or their momenta; both formulations equivalently provide the same information about the system in consideration. Another quantity that is useful to define in the context of waves is the wave vector k (or simply "k-vector"), which has dimensions of reciprocal length, making it an analogue of angular frequency ω, which has dimensions of reciprocal time. The set of all wave vectors is k-space. Usually r is more intuitive and simpler than k, though the converse can also be true, such as in solid-state physics.
Quantum mechanics provides two fundamental examples of the duality between position and momentum, the Heisenberg uncertainty principle ΔxΔp ≥ ħ/2 stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħk which states the momentum and wavevector of a free particle are proportional to each other. In this context, when it is unambiguous, the terms "momentum" and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal.
Classical mechanics
Lagrangian mechanics
Most often in Lagrangian mechanics, the Lagrangian L(q, dq/dt, t) is in configuration space, where q = (q1, q2,..., qn) is an n-tuple of the generalized coordinates. The Euler–Lagrange equations of motion are
d/dt(∂L/∂q̇i) − ∂L/∂qi = 0,   i = 1, …, n
(one overdot indicates one time derivative). Introducing the definition of canonical momentum for each generalized coordinate,
pi = ∂L/∂q̇i,
the Euler–Lagrange equations take the form
ṗi = ∂L/∂qi.
The Lagrangian can be expressed in momentum space also, L′(p, dp/dt, t), where p = (p1, p2, ..., pn) is an n-tuple of the generalized momenta. A Legendre transformation is performed to change the variables in the total differential of the generalized coordinate space Lagrangian;
where the definition of generalized momentum and Euler–Lagrange equations have replaced the partial derivatives of L. The product rule for differentials allows the exchange of differentials in the generalized coordinates and velocities for the differentials in generalized momenta and their time derivatives,
which after substitution simplifies and rearranges to
Now, the total differential of the momentum space Lagrangian L′ is
so by comparison of differentials of the Lagrangians, the momenta, and their time derivatives, the momentum space Lagrangian L′ and the generalized coordinates derived from L′ are respectively
Combining the last two equations gives the momentum space Euler–Lagrange equations
The advantage of the Legendre transformation is that the relation between the new and old functions and their variables are obtained in the process. Both the coordinate and momentum forms of the equation are equivalent and contain the same information about the dynamics of the system. This form may be more useful when momentum or angular momentum enters the Lagrangian.
Hamiltonian mechanics
In Hamiltonian mechanics, unlike Lagrangian mechanics which uses either all the coordinates or all the momenta, the Hamiltonian equations of motion place coordinates and momenta on equal footing. For a system with Hamiltonian H(q, p, t), the equations are
dqi/dt = ∂H/∂pi,   dpi/dt = −∂H/∂qi.
Quantum mechanics
In quantum mechanics, a particle is described by a quantum state. This quantum state can be represented as a superposition of basis states. In principle one is free to choose the set of basis states, as long as they span the state space. If one chooses the (generalized) eigenfunctions of the position operator as a set of basis functions, one speaks of a state as a wave function in position space. The familiar Schrödinger equation in terms of the position r is an example of quantum mechanics in the position representation.
By choosing the eigenfunctions of a different operator as a set of basis functions, one can arrive at a number of different representations of the same state. If one picks the eigenfunctions of the momentum operator as a set of basis functions, the resulting wave function is said to be the wave function in momentum space.
A feature of quantum mechanics is that phase spaces can come in different types: discrete-variable, rotor, and continuous-variable. The table below summarizes some relations involved in the three types of phase spaces.
Reciprocal relation
The momentum representation of a wave function and the de Broglie relation are closely related to the Fourier inversion theorem and the concept of frequency domain. Since a free particle has a spatial frequency proportional to the momentum, describing the particle as a sum of frequency components is equivalent to describing it as the Fourier transform of a "sufficiently nice" wave function in momentum space.
Position space
Suppose we have a three-dimensional wave function in position space; then we can write this function as a weighted sum of orthogonal basis functions:
or, in the continuous case, as an integral
It is clear that if we specify the set of functions , say as the set of eigenfunctions of the momentum operator, the function holds all the information necessary to reconstruct and is therefore an alternative description for the state .
In the coordinate representation the momentum operator is given by
p̂ = −iħ ∂/∂r
(see matrix calculus for the denominator notation) with appropriate domain. The eigenfunctions are the plane waves
(2π)^(−3/2) e^(ik·r)
with eigenvalues ħk. So
φ(k) = (2π)^(−3/2) ∫ ψ(r) e^(−ik·r) d³r,
and we see that the momentum representation is related to the position representation by a Fourier transform.
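This Fourier relationship can be demonstrated numerically. The sketch below is an illustration, not from the article: it builds a Gaussian wave packet with carrier wavenumber k0 on a position grid, computes its momentum-space amplitude by a direct discrete Fourier transform with the unitary (2π)^(−1/2) convention, and the distribution peaks at k = k0, consistent with the de Broglie relation p = ħk. The grid size, box length and k0 = 3 are arbitrary choices:

```python
import cmath
import math

def momentum_space(psi, xs, dx):
    """Direct DFT approximation to phi(k) = (2*pi)^(-1/2) * integral psi(x) e^(-ikx) dx."""
    n = len(xs)
    length = n * dx
    ks = [2.0 * math.pi * m / length for m in range(-n // 2, n // 2)]
    phi = [dx / math.sqrt(2.0 * math.pi)
           * sum(p * cmath.exp(-1j * k * x) for p, x in zip(psi, xs))
           for k in ks]
    return ks, phi

N, box, k0 = 256, 32.0, 3.0          # illustrative grid and carrier wavenumber
dx = box / N
dk = 2.0 * math.pi / box             # momentum-grid spacing
xs = [-box / 2 + i * dx for i in range(N)]
# Gaussian wave packet exp(-x^2/2) * exp(i*k0*x) in position space.
psi = [cmath.exp(-0.5 * x * x + 1j * k0 * x) for x in xs]
ks, phi = momentum_space(psi, xs, dx)
k_peak = max(zip(ks, phi), key=lambda kp: abs(kp[1]))[0]
```

The momentum-space amplitude is again a Gaussian, centred at the carrier wavenumber, and Parseval's theorem relates the two norms on the grid (Σ|ψ|² dx = Σ|φ|² dk).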
Momentum space
Conversely, a three-dimensional wave function in momentum space can be expressed as a weighted sum of orthogonal basis functions ,
or as an integral,
In the momentum representation the position operator is given by
r̂ = iħ ∂/∂p
with eigenfunctions
and eigenvalues r. So a similar decomposition of can be made in terms of the eigenfunctions of this operator, which turns out to be the inverse Fourier transform,
Unitary equivalence
The position and momentum operators are unitarily equivalent, with the unitary operator being given explicitly by the Fourier transform, namely a quarter-cycle rotation in phase space, generated by the oscillator Hamiltonian. Thus, they have the same spectrum. In physical language, p acting on momentum space wave functions is the same as r acting on position space wave functions (under the image of the Fourier transform).
Reciprocal space and crystals
For an electron (or other particle) in a crystal, its value of k almost always relates to its crystal momentum, not its normal momentum. Therefore, k and p are not simply proportional but play different roles. See k·p perturbation theory for an example. Crystal momentum is like a wave envelope that describes how the wave varies from one unit cell to the next, but does not give any information about how the wave varies within each unit cell.
When k relates to crystal momentum instead of true momentum, the concept of k-space is still meaningful and extremely useful, but it differs in several ways from the non-crystal k-space discussed above. For example, in a crystal's k-space, there is an infinite set of points called the reciprocal lattice which are "equivalent" to k = 0 (this is analogous to aliasing). Likewise, the "first Brillouin zone" is a finite volume of k-space, such that every possible k is "equivalent" to exactly one point in this region.
See also
Phase space
Reciprocal space
Configuration space
Fractional Fourier transform
Notes
References
Momentum
Quantum mechanics
de:Impulsraum | Position and momentum spaces | [
"Physics",
"Mathematics"
] | 1,747 | [
"Physical quantities",
"Quantity",
"Theoretical physics",
"Quantum mechanics",
"Momentum",
"Moment (physics)"
] |
36,740,877 | https://en.wikipedia.org/wiki/Mislow%E2%80%93Evans%20rearrangement | The Mislow–Evans rearrangement is a name reaction in organic chemistry. It is named after Kurt Mislow who reported the prototypical reaction in 1966, and David A. Evans who published further developments. The reaction allows the formation of allylic alcohols from allylic sulfoxides in a 2,3-sigmatropic rearrangement.
General reaction scheme
The reaction is a powerful way to create particular stereoisomers of the alcohol since it is highly diastereoselective and the chirality at the sulphur atom can be transmitted to the carbon next to the oxygen in the product.
The sulfoxide 1 reagent can be generated easily and enantioselectively from the corresponding sulfide by an oxidation reaction. Various organic groups can be used in this reaction: R1 = alkyl or allyl, and R2 = alkyl, aryl or benzyl.
Mechanism
A proposed mechanism is shown below:
The mechanism starts with an allylic sulfoxide 1 which undergoes a thermal 2,3-sigmatropic rearrangement to give a sulfenate ester 2. This can be cleaved using a thiophile, such as phosphite ester, which leaves the allylic alcohol 3 as the product.
Scope
The reaction has general application in the preparation of trans-allylic alcohols. Douglass Taber used the Mislow–Evans rearrangement in the synthesis of the hormone prostaglandin E2.
References
Rearrangement reactions
Name reactions | Mislow–Evans rearrangement | [
"Chemistry"
] | 313 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
39,605,149 | https://en.wikipedia.org/wiki/Fractional-order%20system | In the fields of dynamical systems and control theory, a fractional-order system is a dynamical system that can be modeled by a fractional differential equation containing derivatives of non-integer order. Such systems are said to have fractional dynamics. Derivatives and integrals of fractional orders are used to describe objects that can be characterized by power-law nonlocality, power-law long-range dependence or fractal properties. Fractional-order systems are useful in studying the anomalous behavior of dynamical systems in physics, electrochemistry, biology, viscoelasticity and chaotic systems.
Definition
A general dynamical system of fractional order can be written in the form
where and are functions of the fractional derivative operator of orders and and and are functions of time. A common special case of this is the linear time-invariant (LTI) system in one variable:
The orders and are in general complex quantities, but two interesting cases are when the orders are commensurate
and when they are also rational:
When , the derivatives are of integer order and the system becomes an ordinary differential equation. Thus by increasing specialization, LTI systems can be of general order, commensurate order, rational order or integer order.
Transfer function
By applying a Laplace transform to the LTI system above, the transfer function becomes
For general orders and this is a non-rational transfer function. Non-rational transfer functions cannot be written as an expansion in a finite number of terms (e.g., a binomial expansion of a non-integer power would have an infinite number of terms), and in this sense fractional-order systems can be said to have the potential for unlimited memory.
Motivation to study fractional-order systems
Exponential laws are a classical approach to studying the dynamics of population densities, but there are many systems whose dynamics obey faster or slower-than-exponential laws. In such cases the anomalous changes in dynamics may be best described by Mittag-Leffler functions.
Anomalous diffusion is another dynamical process in which fractional-order systems play a significant role, describing the anomalous flow in the diffusion process.
Viscoelasticity is the property of a material that exhibits behaviour between that of a purely elastic solid and a pure fluid. For real materials, the stress–strain relationships given by Hooke's law and Newton's law both have obvious disadvantages, so G. W. Scott Blair introduced a new relationship between stress and strain given by
In chaos theory, it has been observed that chaos occurs in dynamical systems of order 3 or more. With the introduction of fractional-order systems, some researchers study chaos in systems of total order less than 3.
In neuroscience, it has been found that single rat neocortical pyramidal neurons adapt with a time scale that depends on the time scale of changes in stimulus statistics. This multiple time scale adaptation is consistent with fractional order differentiation, such that the neuron's firing rate is a fractional derivative of slowly varying stimulus parameters.
Analysis of fractional differential equations
Consider a fractional-order initial value problem:
Existence and uniqueness
Here, under a continuity condition on the function f, one can convert the above equation into a corresponding integral equation.
One can construct a solution space and, via that integral equation, define a continuous self-map on the solution space, then apply a fixed-point theorem to obtain a fixed point, which is the solution of the above equation.
Numerical simulation
For the numerical simulation of solutions of the above equations, Kai Diethelm has suggested a fractional linear multistep Adams–Bashforth method or quadrature methods.
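As an illustration of such schemes, the sketch below is an assumption-laden toy, not Diethelm's method itself: it uses the Grünwald–Letnikov product formula to integrate the relaxation problem D^α y = −y, y(0) = 1, substituting z = y − y(0) so that the Caputo derivative becomes the Riemann–Liouville/GL form; for α = 1 it reduces to implicit Euler for y' = −y:

```python
def caputo_relax(alpha, t_end=1.0, n=2000):
    """Grunwald-Letnikov scheme for the Caputo problem D^alpha y = -y, y(0) = 1.
    The substitution z = y - y(0) turns the Caputo derivative into the
    Riemann-Liouville/GL form that the product formula below discretises."""
    h = t_end / n
    ha = h ** alpha
    # c[k] = (-1)^k * binomial(alpha, k), via the standard recurrence.
    c = [1.0]
    for k in range(1, n + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    z = [0.0]  # z(0) = 0
    for m in range(1, n + 1):
        hist = sum(c[k] * z[m - k] for k in range(1, m + 1))
        z.append((-ha - hist) / (1.0 + ha))
    return z[-1] + 1.0  # y(t_end)
```

For α = 1 the recursion collapses to y_m = y_{m−1}/(1 + h), so y(1) ≈ e^(−1) ≈ 0.368; for 0 < α < 1 the decay follows the Mittag-Leffler function E_α(−t^α) (E_{1/2}(−1) ≈ 0.43).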
See also
Acoustic attenuation
Differintegral
Fractional calculus
Fractional order control
Fractional order integrator
Fractional Schrödinger equation
Fractional quantum mechanics
References
Further reading
External links
Fractional Calculus Applications in Automatic Control and Robotics A tutorial on fractional calculus, fractional order systems and fractional order control theory.
Fractional calculus
Dynamical systems
Mathematical modeling | Fractional-order system | [
"Physics",
"Mathematics"
] | 812 | [
"Mathematical modeling",
"Calculus",
"Applied mathematics",
"Mechanics",
"Fractional calculus",
"Dynamical systems"
] |
39,606,496 | https://en.wikipedia.org/wiki/Stimulated%20Raman%20adiabatic%20passage | In quantum optics, stimulated Raman adiabatic passage (STIRAP) is a process that permits the transfer of population between two quantum states via at least two coherent electromagnetic (light) pulses. These light pulses drive the transitions of a three-level Λ atom or multilevel system. The process is a form of state-to-state coherent control.
Population transfer in the three-level Λ atom
Consider the description of a three-level Λ atom having ground states and (for simplicity, suppose that the energies of the two ground states are the same) and excited state . Suppose that in the beginning the total population is in the ground state . Here the logic for transferring the population from the ground state to is that initially the unpopulated states and couple; afterward a superposition of states and couples to the state . Thereby a state is formed that permits the transfer of the population into state without populating the excited state . This process of transferring the population without populating the excited state is called stimulated Raman adiabatic passage.
Three-level theory
Consider states , and with the goal of transferring population initially in state to state without populating state . Allow the system to interact with two coherent radiation fields, the pump and Stokes fields. Let the pump field couple only states and and the Stokes field couple only states and , for instance due to far-detuning or selection rules. Denote the Rabi frequencies and detunings of the pump and Stokes couplings by and . Setting the energy of state to zero, the rotating wave Hamiltonian is given by
The energy ordering of the states is not critical, and here it is taken so that only for concreteness. Ʌ and V configurations can be realized by changing the signs of the detunings. Shifting the energy zero by allows the Hamiltonian to be written in the more configuration independent form
Here and denote the single and two-photon detunings respectively. STIRAP is achieved on two-photon resonance . Focusing to this case, the energies upon diagonalization of are given by
where . Solving for the eigenstate , it is seen to obey the condition
The first condition reveals that the critical two-photon resonance condition yields a dark state which is a superposition of only the initial and target state. By defining the mixing angle and utilizing the normalization condition , the second condition can be used to express this dark state as
From this, the STIRAP counter-intuitive pulse sequence can be deduced. At which corresponds the presence of only the Stokes field (), the dark state exactly corresponds to the initial state . As the mixing angle is rotated from to , the dark state smoothly interpolates from purely state to purely state . The latter case corresponds to the opposing limit of a strong pump field (). Practically, this corresponds to applying Stokes and pump field pulses to the system with a slight delay between while still maintaining significant temporal overlap between pulses; the delay provides the correct limiting behavior and the overlap ensures adiabatic evolution. A population initially prepared in state will adiabatically follow the dark state and end up in state without populating state as desired. The pulse envelopes can take on fairly arbitrary shape so long as the time rate of change of the mixing angle is slow compared to the energy splitting with respect to the non-dark states. This adiabatic condition takes its simplest form at the single-photon resonance condition where it can be expressed as
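The counter-intuitive sequence can be checked numerically. The sketch below is an illustration with assumed parameters, not from the article: it integrates the three-level rotating-wave Schrödinger equation at single- and two-photon resonance (ħ = 1) with Gaussian Stokes and pump pulses, the Stokes pulse peaking first. The amplitude Ω0 = 30, width σ = 1 and half-delay 0.6 are arbitrary choices satisfying the adiabaticity condition:

```python
import math

def pulses(t, omega0=30.0, sigma=1.0, delay=0.6):
    """Gaussian pump and Stokes Rabi frequencies; the Stokes pulse peaks first
    (counter-intuitive ordering). All parameters are illustrative choices."""
    omega_p = omega0 * math.exp(-((t - delay) ** 2) / (2.0 * sigma ** 2))
    omega_s = omega0 * math.exp(-((t + delay) ** 2) / (2.0 * sigma ** 2))
    return omega_p, omega_s

def deriv(t, c):
    # i dc/dt = H c with H = 0.5 * [[0, Wp, 0], [Wp, 0, Ws], [0, Ws, 0]]
    # (hbar = 1, single- and two-photon resonance).
    wp, ws = pulses(t)
    return (-0.5j * wp * c[1],
            -0.5j * (wp * c[0] + ws * c[2]),
            -0.5j * ws * c[1])

def simulate(t0=-6.0, t1=6.0, dt=0.001):
    """Integrate the amplitudes with classical RK4, starting in state |1>."""
    c = [1.0 + 0.0j, 0.0j, 0.0j]
    t = t0
    for _ in range(int(round((t1 - t0) / dt))):
        k1 = deriv(t, c)
        k2 = deriv(t + dt / 2, [c[i] + dt / 2 * k1[i] for i in range(3)])
        k3 = deriv(t + dt / 2, [c[i] + dt / 2 * k2[i] for i in range(3)])
        k4 = deriv(t + dt, [c[i] + dt * k3[i] for i in range(3)])
        c = [c[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(3)]
        t += dt
    return [abs(a) ** 2 for a in c]

populations = simulate()
```

With these parameters essentially all of the population ends in the target state while the intermediate state stays nearly empty throughout, as the dark-state picture predicts.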
References
Quantum mechanics
Raman scattering
Raman spectroscopy | Stimulated Raman adiabatic passage | [
"Physics",
"Chemistry"
] | 693 | [
"Scattering stubs",
"Theoretical physics",
"Quantum mechanics",
"Scattering",
"Quantum physics stubs"
] |
39,608,703 | https://en.wikipedia.org/wiki/C9H9NO4 | {{DISPLAYTITLE:C9H9NO4}}
The molecular formula C9H9NO4 (molar mass: 195.17 g/mol) may refer to:
L-Dopaquinone, also known as o-dopaquinone
Pencolide
Salicyluric acid
Molecular formulas | C9H9NO4 | [
"Physics",
"Chemistry"
] | 69 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
48,657,998 | https://en.wikipedia.org/wiki/Littlewood%27s%204/3%20inequality | In mathematical analysis, Littlewood's 4/3 inequality, named after John Edensor Littlewood, is an inequality that holds for every complex-valued bilinear form defined on c0, the Banach space of scalar sequences that converge to zero.
Precisely, let B : c0 × c0 → ℂ (or ℝ) be a bounded bilinear form. Then the following holds:
(Σi,j |B(ei, ej)|^(4/3))^(3/4) ≤ √2 ‖B‖,
where
‖B‖ = sup{|B(x, y)| : ‖x‖∞ ≤ 1, ‖y‖∞ ≤ 1}, with (ei) the canonical unit vectors.
The exponent 4/3 is optimal, i.e., it cannot be improved to a smaller exponent. It is also known that for real scalars the aforementioned constant √2 is sharp.
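A brute-force numerical check (an illustration, not a proof) for a finitely supported real form: the supremum over the unit ball of ℓ∞ may be taken over sign vectors, since the form is linear in each argument, and the 2×2 matrix [[1, 1], [1, −1]] is a standard example attaining the constant √2:

```python
import itertools
import math

def littlewood_sides(a):
    """Return (lhs, rhs) of Littlewood's 4/3 inequality for the bilinear
    form B(x, y) = sum_ij a[i][j] x_i y_j on real sequences."""
    n = len(a)
    lhs = sum(abs(a[i][j]) ** (4.0 / 3.0)
              for i in range(n) for j in range(n)) ** 0.75
    # ||B||: maximise |B(x, y)| over the extreme points x, y in {-1, +1}^n.
    norm = max(abs(sum(x[i] * a[i][j] * y[j]
                       for i in range(n) for j in range(n)))
               for x in itertools.product((-1.0, 1.0), repeat=n)
               for y in itertools.product((-1.0, 1.0), repeat=n))
    return lhs, math.sqrt(2.0) * norm

lhs_eq, rhs_eq = littlewood_sides([[1.0, 1.0], [1.0, -1.0]])   # equality case
lhs_gen, rhs_gen = littlewood_sides([[1.0, 2.0], [3.0, 4.0]])  # generic case
```

For the first matrix both sides equal 2√2, illustrating that √2 cannot be lowered; for the generic matrix the inequality holds with slack.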
Generalizations
Bohnenblust–Hille inequality
The Bohnenblust–Hille inequality is a multilinear extension of Littlewood's inequality that states that for every m-linear mapping U : c0 × ⋯ × c0 → ℂ
the following holds:
(Σi1,…,im |U(ei1, …, eim)|^(2m/(m+1)))^((m+1)/(2m)) ≤ Cm ‖U‖,
for some constant Cm ≥ 1 depending only on m.
See also
Grothendieck inequality
References
Theorems in analysis
Inequalities | Littlewood's 4/3 inequality | [
"Mathematics"
] | 169 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Mathematical analysis stubs",
"Binary relations",
"Mathematical relations",
"Inequalities (mathematics)",
"Mathematical problems"
] |
48,658,919 | https://en.wikipedia.org/wiki/Hydrazinium | Hydrazinium is the cation with the formula . This cation has a methylamine-like structure (). It can be derived from hydrazine by protonation (treatment with a strong acid). Hydrazinium is a weak acid with pKa = 8.1.
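The quoted pKa fixes the speciation of hydrazine/hydrazinium at any pH through the Henderson–Hasselbalch relation. The short sketch below is an illustration, not from the article; the function name is hypothetical:

```python
PKA_N2H5 = 8.1  # pKa of hydrazinium quoted above

def protonated_fraction(ph, pka=PKA_N2H5):
    """Fraction of total hydrazine present as the N2H5+ cation at a given pH,
    from the Henderson-Hasselbalch relation."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))
```

At pH 7 (roughly neutral water) about 93% of the hydrazine is protonated, and at pH = pKa the two species are equimolar.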
Salts of hydrazinium are common reagents in chemistry and are often used in certain industrial processes. Notable examples are hydrazinium hydrogensulfate, or , and hydrazinium azide, or . In the common names of such salts, the cation is often called "hydrazine", as in "hydrazine sulfate" for hydrazinium hydrogensulfate.
The terms "hydrazinium" and "hydrazine" may also be used for the doubly protonated cation , more properly called hydrazinediium or hydrazinium(2+). This cation has an ethane-like structure (). Salts of this cation include hydrazinediium sulfate and hydrazinediium bis(6-carboxypyridazine-3-carboxylate), .
See also
Ammonium,
References
Hydrazinium compounds
Cations | Hydrazinium | [
"Physics",
"Chemistry"
] | 240 | [
"Matter",
"Hydrazinium compounds",
"Salts",
"Cations",
"Ions"
] |
48,659,244 | https://en.wikipedia.org/wiki/Red%20mud | Red mud, now more frequently termed bauxite residue, is an industrial waste generated during the processing of bauxite into alumina using the Bayer process. It is composed of various oxide compounds, including the iron oxides which give its red colour. Over 97% of the alumina produced globally is through the Bayer process; for every tonne () of alumina produced, approximately of red mud are also produced; the global average is 1.23. Annual production of alumina in 2023 was over resulting in the generation of approximately of red mud.
Due to this high level of production and the material's high alkalinity, red mud can pose a significant environmental hazard if not stored properly. As a result, significant effort is being invested in finding better methods for its safe storage and treatment, such as waste valorization to create useful materials for cement and concrete.
Less commonly, this material is also known as bauxite tailings, red sludge, or alumina refinery residues. Increasingly, the name processed bauxite is being adopted, especially when used in cement applications.
Production
Red mud is a side-product of the Bayer process, the principal means of refining bauxite en route to alumina. The resulting alumina is the raw material for producing aluminium by the Hall–Héroult process. A typical bauxite plant produces one to two times as much red mud as alumina. This ratio is dependent on the type of bauxite used in the refining process and the extraction conditions.
More than 60 manufacturing operations across the world use the Bayer process to make alumina from bauxite ore. Bauxite ore is mined, normally in open cast mines, and transferred to an alumina refinery for processing. The alumina is extracted using sodium hydroxide under conditions of high temperature and pressure. The insoluble part of the bauxite (the residue) is removed, giving rise to a solution of sodium aluminate, which is then seeded with an aluminium hydroxide crystal and allowed to cool which causes the remaining aluminium hydroxide to precipitate from the solution. Some of the aluminium hydroxide is used to seed the next batch, while the remainder is calcined (heated) at over in rotary kilns or fluid flash calciners to produce aluminium oxide (alumina).
The alumina content of the bauxite used is normally between 42 and 50%, but ores with a wide range of alumina contents can be used. The aluminium compound may be present as gibbsite (Al(OH)3), boehmite (γ-AlO(OH)) or diaspore (α-AlO(OH)). The residue invariably has a high concentration of iron oxide which gives the product a characteristic red colour. A small residual amount of the sodium hydroxide used in the process remains with the residue, causing the material to have a high pH/alkalinity, normally above 12. Various stages of solid/liquid separation processes recycle as much sodium hydroxide as possible from the residue back into the Bayer Process, in order to reduce production costs and make the process as efficient as possible. This also lowers the final alkalinity of the residue, making it easier and safer to handle and store.
Composition
Red mud is composed of a mixture of solid and metallic oxides. The red colour arises from iron oxides, which can comprise up to 60% of the mass. The mud is highly basic with a pH ranging from 10 to 13. In addition to iron, the other dominant components include silica, unleached residual aluminium compounds, and titanium oxide.
The main constituents of the residue after the extraction of the aluminium component are insoluble metallic oxides. The percentage of these oxides produced by a particular alumina refinery will depend on the quality and nature of the bauxite ore and the extraction conditions. The table below shows the composition ranges for common chemical constituents, but the values vary widely:
Mineralogically expressed the components present are:
In general, the composition of the residue reflects that of the non-aluminium components, with the exception of part of the silicon component: crystalline silica (quartz) will not react but some of the silica present, often termed, reactive silica, will react under the extraction conditions and form sodium aluminium silicate as well as other related compounds.
Environmental hazards
Discharge of red mud can be hazardous environmentally because of its alkalinity and species components.
Until 1972, Italian company Montedison was discharging red mud off the coast of Corsica. The case is important in international law governing the Mediterranean sea.
In October 2010, approximately of red mud slurry from an alumina plant near Kolontár in Hungary was accidentally released into the surrounding countryside in the Ajka alumina plant accident, killing ten people and contaminating a large area. All life in the Marcal river was said to have been "extinguished" by the red mud, and within days the mud had reached the Danube. The long-term environmental effects of the spill have been minor after a remediation effort by the Hungarian government.
Residue storage areas
Residue storage methods have changed substantially since the original plants were built. The practice in early years was to pump the slurry, at a concentration of about 20% solids, into lagoons or ponds sometimes created in former bauxite mines or depleted quarries. In other cases, impoundments were constructed with dams or levees, while for some operations valleys were dammed and the residue deposited in these holding areas.
It was once common practice for the red mud to be discharged into rivers, estuaries, or the sea via pipelines or barges; in other instances the residue was shipped out to sea and disposed of in deep ocean trenches many kilometres offshore. From 2016, all disposal into the sea, estuaries and rivers was stopped.
As residue storage space ran out and concern increased over wet storage, since the mid-1980s dry stacking has been increasingly adopted. In this method, residues are thickened to a high density slurry (48–55% solids or higher), and then deposited in a way that it consolidates and dries.
An increasingly popular treatment process is filtration whereby a filter cake (typically resulting in 23–27% moisture) is produced. This cake can be washed with either water or steam to reduce alkalinity before being transported and stored as a semi-dried material. Residue produced in this form is ideal for reuse as it has lower alkalinity, is cheaper to transport, and is easier to handle and process. Another option for ensuring safe storage is to use amphirols to dewater the material once deposited and then 'condition' it using farming equipment such as harrows to accelerate carbonation and thereby reduce the alkalinity. Bauxite residue produced after press filtration and 'conditioning' as described above is classified as non-hazardous under the EU Waste Framework Directive.
In 2013 Vedanta Aluminium, Ltd. commissioned a red mud powder-producing unit at its Lanjigarh refinery in Odisha, India, describing it as the first of its kind in the alumina industry, tackling major environmental hazards.
Use
Since the Bayer process was first adopted industrially in 1894, the value of the remaining oxides has been recognized. Attempts have been made to recover the principal components, especially the iron oxides. Since bauxite mining began, a large amount of research effort has been devoted to seeking uses for the residue. Many studies are now being financed by the European Union under the Horizon Europe programme. Several studies have been conducted to develop uses of red mud. An estimated are used annually in the production of cement, road construction and as a source for iron. Potential applications include the production of low-cost concrete, application to sandy soils to improve phosphorus cycling, amelioration of soil acidity, landfill capping and carbon sequestration.
Reviews describing the current use of bauxite residue in Portland cement clinker, supplementary cementious materials/blended cements and special calcium aluminate cements (CAC) and calcium sulfo-aluminate (CSA) cements have been extensively researched and documented.
Cement manufacture, use in concrete as a supplementary cementitious material. From .
Raw material recovery of specific components present in the residue: iron, titanium, steel and REE (rare-earth elements) production. From 400,000 to 1,500,000 tonnes;
Landfill capping/roads/soil amelioration – 200,000 to 500,000 tonnes;
Use as a component in building or construction materials (bricks, tiles, ceramics etc.) – 100,000 to 300,000 tonnes;
Other (refractory, adsorbent, acid mine drainage (Virotec), catalyst etc.) – 100,000 tonnes.
Other reported or potential uses include building panels, bricks, foamed insulating bricks, tiles, gravel/railway ballast, calcium and silicon fertilizer, refuse tip capping/site restoration, lanthanide (rare earth) recovery, scandium recovery, gallium recovery, yttrium recovery, treatment of acid mine drainage, adsorbents for heavy metals, dyes, phosphates and fluoride, water treatment chemicals, glass ceramics, ceramics, foamed glass, pigments, oil drilling or gas extraction, fillers for PVC, wood substitutes, geopolymers, catalysts, plasma spray coating of aluminium and copper, manufacture of aluminium titanate–mullite composites for high-temperature-resistant coatings, desulfurisation of flue gas, arsenic removal, and chromium removal.
In 2015, a major initiative was launched in Europe with funds from the European Union to address the valorization of red mud. Some 15 PhD students were recruited as part the European Training Network (ETN) for Zero-Waste Valorisation of Bauxite Residue. The key focus will be the recovery of iron, aluminium, titanium and rare-earth elements (including scandium) while valorising the residue into building materials.
A European Innovation Partnership has been formed to explore options for using by-products from the aluminium industry, BRAVO (Bauxite Residue and Aluminium Valorisation Operations). This sought to bring together industry with researchers and stakeholders to explore the best available technologies to recover critical raw materials but has not proceeded. Additionally, EU funding of approximately has been allocated to a four-year programme starting in May 2018 looking at uses of bauxite residue with other wastes, RemovAL. A particular focus of this project is the installation of pilot plants to evaluate some of the interesting technologies from previous laboratory studies. As part of the H2020 project RemovAl, it is planned to erect a house in the Aspra Spitia area of Greece that will be made entirely out of materials from bauxite residue.
Other EU funded projects that have involved bauxite residue and waste recovery have been ENEXAL (ENergy-EXergy of ALuminium industry) [2010–2014], EURARE (European Rare earth resources) [2013–2017] and three more recent projects are ENSUREAL (ENsuring SUstainable ALumina production) [2017–2021], SIDEREWIN (Sustainable Electro-winning of Iron) [2017–2022] and SCALE (SCandium – ALuminium in Europe) [2016–2020] a project to look at the recovery of scandium from bauxite residue.
In 2020, the International Aluminium Institute, launched a roadmap for maximising the use of bauxite residue in cement and concrete.
In November 2020, The ReActiv: Industrial Residue Activation for Sustainable Cement Production research project was launched, this is being funded by the EU. One of the world's largest cement companies, Holcim, in cooperation with 20 partners across 12 European countries, launched the ambitious 4-year ReActiv project (reactivproject.eu). The ReActiv project will create a novel sustainable symbiotic value chain, linking the by-product of the alumina production industry and the cement production industry. In ReActiv modification will be made to both the alumina production and the cement production side of the chain, in order to link them through the new ReActiv technologies. The latter will modify the properties of the industrial residue, transforming it into a reactive material (with pozzolanic or hydraulic activity) suitable for new, low footprint, cement products. In this manner ReActiv proposes a win-win scenario for both industrial sectors (reducing wastes and emissions respectively).
Fluorchemie GmbH have developed a new flame-retardant additive from bauxite residue; the product is termed MKRS (modified re-carbonised red mud) with the trademark ALFERROCK(R) and has potential applicability in a wide range of polymers (PCT WO2014/000014). One of its particular benefits is the ability to operate over a much broader temperature range, , than alternative zero-halogen inorganic flame retardants such as aluminium hydroxide, boehmite or magnesium hydroxide. In addition to polymer systems where aluminium hydroxide or magnesium hydroxide can be used, it has also been found to be effective in foamed polymers such as EPS and PUR foams at loadings up to 60%.
In a suitable compact solid form, with a density of approximately , ALFERROCK, produced by the calcination of bauxite residues, has been found to be very effective as a thermal energy storage medium (WO2017/157664). The material can repeatedly be heated and cooled without deterioration and has a specific thermal capacity in the range of at and at ; this enables the material to work effectively in energy storage devices to maximise the benefits of solar power, wind turbines and hydro-electric systems. High-strength geopolymers have also been developed from red mud.
Sustainable Approach to Low-Grade Bauxite Processing
The IB2 process is a French technology developed to enhance the extraction of alumina from bauxite, especially low-grade bauxite. This method aims to boost alumina production efficiency while decreasing the environmental impacts typically linked with this process, notably the generation of red mud and carbon dioxide emissions.
The IB2 technology, patented in 2019, is the outcome of a decade of research and development efforts by Yves Occello, a former Pechiney chemist. This process improves the traditional Bayer process, which has been utilized for more than a century to extract alumina from bauxite. It presents a significant decrease in caustic soda consumption and a notable reduction in red mud output, thereby minimizing hazardous waste and environmental risks.
In addition to reducing red mud production, the IB2 process aids in lowering emissions, primarily through the optimized treatment of low-grade bauxite. By limiting the necessity to import high-grade bauxite, this process reduces the carbon footprint associated with ore transportation. Furthermore, the process yields a byproduct that can be utilized in the production of eco-friendly cements, promoting the concept of a circular economy.
The inventor of the technology is chemist Yves Occello, who founded the company IB2 with Romain Girbal in 2017.
See also
Chemical waste
Olivier Dubuquoy
References
Sources
Cooper M. B., “Naturally Occurring Radioactive Material (NORM) in Australian Industries”, EnviroRad report ERS-006 prepared for the Australian Radiation Health and Safety Advisory Council (2005).
Agrawal, K. K. Sahu, B. D. Pandey, "Solid waste management in non-ferrous industries in India", Resources, Conservation and Recycling 42 (2004), 99–120.
Jongyeong Hyun, Shigehisa Endoh, Kaoru Masuda, Heeyoung Shin, Hitoshi Ohya, "Reduction of chlorine in bauxite residue by fine particle separation", Int. J. Miner. Process., 76, 1–2, (2005), 13–20.
Claudia Brunori, Carlo Cremisini, Paolo Massanisso, Valentina Pinto, Leonardo Torricelli, "Reuse of a treated red mud bauxite waste: studies on environmental compatibility", Journal of Hazardous Materials, 117(1), (2005), 55–63.
Genç-Fuhrman H., Tjell J. C., McConchie D., "Increasing the arsenate adsorption capacity of neutralized red mud (Bauxsol™)", J. Colloid Interface Sci. 271 (2004) 313–320.
Genç-Fuhrman H., Tjell J. C., McConchie D., Schuiling O., "Adsorption of arsenate from water using neutralized red mud", J. Colloid Interface Sci. 264 (2003) 327–334.
External links
From The Periodic Table of Videos (University of Nottingham)
Metallurgical Materials Science and Alloy Design - What is red mud and why is it dangerous?
Waste
Water pollution
Soil contamination
Minerals | Red mud | [
"Physics",
"Chemistry",
"Materials_science",
"Environmental_science"
] | 3,526 | [
"Metallurgy",
"Environmental chemistry",
"Water pollution",
"Materials",
"Soil contamination",
"Metallurgical by-products",
"Waste",
"Matter"
] |
48,661,943 | https://en.wikipedia.org/wiki/Bismuth%20titanate | Bismuth titanate or bismuth titanium oxide is a solid inorganic compound of bismuth, titanium and oxygen with the chemical formula of Bi12TiO20,
Bi4Ti3O12 or Bi2Ti2O7.
Synthesis
Bismuth titanate ceramics can be produced by heating a mixture of bismuth and titanium oxides. Bi12TiO20 forms at 730–850 °C, and melts when the temperature is raised above 875 °C, decomposing in the melt to Bi4Ti3O12 and Bi2O3. Millimeter-sized single crystals of Bi12TiO20 can be grown by the Czochralski process, from the molten phase at 880–900 °C.
Properties and applications
Bismuth titanates exhibit electrooptical effect and photorefractive effect, that is, a reversible change in the refractive index under applied electric field or illumination, respectively. Consequently, they have potential applications in reversible recording media for real-time holography or image processing applications.
See also
Bismuth germanate
Sillénite
References
Titanates
Bismuth compounds
Ceramic materials
Piezoelectric materials
Ferroelectric materials
B | Bismuth titanate | [
"Physics",
"Materials_science",
"Engineering"
] | 247 | [
"Physical phenomena",
"Ferroelectric materials",
"Materials",
"Electrical phenomena",
"Ceramic materials",
"Ceramic engineering",
"Piezoelectric materials",
"Hysteresis",
"Matter"
] |
48,665,701 | https://en.wikipedia.org/wiki/Karen%20McNally | Karen Cook McNally (1940 – December 20, 2014) was an American seismologist and earthquake risk expert.
Personal life
McNally was born in Clovis, California on January 26, 1940. She married at a young age and had two daughters, Kim Cook and Meredith Hurley; the couple divorced in 1966. She also had two siblings, a brother, Jerry Einar Cook, and a sister, Jean Howard Brown.
Professional life
In 1971 she earned her bachelor's degree and in 1973 she received her master's degree; just three years later she obtained her PhD (1976) in geophysics from the University of California, Berkeley. McNally worked at the California Institute of Technology with Charles Francis Richter, creator of the Richter scale, and became part of the faculty at the University of California, Santa Cruz in 1981, as an Earth and planetary sciences professor. She was director of the Richter Seismological Laboratory there and their instruments were able to capture high-quality recordings of the 1989 Loma Prieta earthquake. She founded the Institute of Tectonics and helped establish a seismology research program at the university.
In 1984, McNally established a modern geophysical observatory (the Observatorio Vulcanológico y Sismológico de Costa Rica, Universidad Nacional (OVSICORI-UNA)) and a national seismographic network in Costa Rica, and with this she was able to improve the country's program for reducing earthquake hazards. With funding from the Office of Foreign Disaster Assistance of the U.S. Agency for International Development, McNally was able to lead a team of UCSC and Costa Rican scientists to set up the seismographic network. She was awarded the University Medal, more specifically named the Medalla Universidad Nacional, by the National University of Costa Rica for her contributions on July 2, 2004. Her work in Costa Rica also encouraged ongoing collaborations between the UCSC faculty and researchers in Costa Rica. Her work in predicting and helping prepare Costa Rica for the Loma Prieta earthquake also earned her a spotlight in Time Magazine.
She was a member of the board of directors for the Seismological Society of America and the Incorporated Research Institutions for Seismology and sat on the California Earthquake Prediction Evaluation Council. In 1982, she received the Richtmyer Memorial Award from the American Association of Physics Teachers.
Death
She died at home in Davenport at the age of 74.
References
American seismologists
1940 births
2014 deaths
American women geologists
20th-century American geologists
University of California, Santa Cruz faculty
People from Clovis, California
20th-century American women scientists
Women geophysicists
American geophysicists
Fellows of the Seismological Society of America
21st-century American women
University of California, Berkeley alumni
Earthquake and seismic risk mitigation | Karen McNally | [
"Engineering"
] | 562 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
48,665,888 | https://en.wikipedia.org/wiki/Shaft%20%28mechanical%20engineering%29 | In mechanical engineering, a shaft is a rotating machine element, usually circular in cross section, which is used to transmit power from one part to another, or from a machine which produces power to a machine which absorbs power.
Types
They are mainly classified into two types.
Transmission shafts are used to transmit power between the source and the machine absorbing power; e.g. counter shafts and line shafts.
Machine shafts are the integral part of the machine itself; e.g. crankshaft.
Axle shaft.
Spindle shaft.
Materials
The material used for ordinary shafts is mild steel. When high strength is required, an alloy steel such as nickel, nickel-chromium or chromium-vanadium steel is used. Shafts are generally formed by hot rolling and finished to size by cold drawing or turning and grinding.
Standard sizes
Source:
Machine shafts
Up to 25 mm steps of 0.5 mm
Transmission shafts
25 mm to 60 mm with 5 mm steps
60 mm to 110 mm with 10 mm steps
110 mm to 140 mm with 15 mm steps
140 mm to 500 mm with 20 mm steps
The standard lengths of the shafts are 5 m, 6 m and 7 m.
Usually 1 m to 5 m is used.
Stresses
The following stresses are induced in the shafts.
Shear stresses due to the transmission of torque (due to torsional load).
Bending stresses (tensile or compressive) due to the forces acting upon the machine elements like gears and pulleys as well as the self weight of the shaft.
Stresses due to combined torsional and bending loads.
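For a solid circular shaft, these stresses follow from standard textbook formulas: the torsional shear stress is τ = 16T/(πd³), the bending stress is σ = 32M/(πd³), and combined loading is often handled through an equivalent torque Tₑ = √(M² + T²) under the maximum-shear-stress theory. The sketch below is an illustration of these standard relations; the numerical inputs are made up for the example, not taken from any shaft standard.

```python
import math

def torsional_shear(T, d):
    """Maximum shear stress (Pa) in a solid circular shaft: tau = 16*T / (pi*d^3)."""
    return 16.0 * T / (math.pi * d**3)

def bending_stress(M, d):
    """Maximum bending stress (Pa) in a solid circular shaft: sigma = 32*M / (pi*d^3)."""
    return 32.0 * M / (math.pi * d**3)

def equivalent_torque(M, T):
    """Equivalent torque for combined bending and torsion (maximum shear stress theory)."""
    return math.sqrt(M**2 + T**2)

# Example: 100 N*m of torque on a 50 mm diameter shaft
tau = torsional_shear(100.0, 0.050)   # roughly 4.07 MPa
```

Note that for the same moment magnitude, the bending stress is exactly twice the torsional shear stress, since 32/(πd³) = 2 · 16/(πd³).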
References
External links
Online verification of shafts according standard
Machines
Mechanical engineering
Kinematics
Articles containing video clips
Shaft drives | Shaft (mechanical engineering) | [
"Physics",
"Technology",
"Engineering"
] | 339 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Applied and interdisciplinary physics",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Mechanical engineering"
] |
42,370,172 | https://en.wikipedia.org/wiki/Protein%20chemical%20shift%20re-referencing | Protein chemical shift re-referencing is a post-assignment process of adjusting the assigned NMR chemical shifts to match IUPAC and BMRB recommended standards in protein chemical shift referencing. In NMR chemical shifts are normally referenced to an internal standard that is dissolved in the NMR sample. These internal standards include tetramethylsilane (TMS), 4,4-dimethyl-4-silapentane-1-sulfonic acid (DSS) and trimethylsilyl propionate (TSP). For protein NMR spectroscopy the recommended standard is DSS, which is insensitive to pH variations (unlike TSP). Furthermore, the DSS 1H signal may be used to indirectly reference 13C and 15N shifts using a simple ratio calculation [1]. Unfortunately, many biomolecular NMR spectroscopy labs use non-standard methods for determining the 1H, 13C or 15N “zero-point” chemical shift position. This lack of standardization makes it difficult to compare chemical shifts for the same protein between different laboratories. It also makes it difficult to use chemical shifts to properly identify or assign secondary structures or to improve their 3D structures via chemical shift refinement. Chemical shift re-referencing offers a means to correct these referencing errors and to standardize the reporting of protein chemical shifts across laboratories.
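The indirect referencing calculation mentioned above is a simple multiplication: the 0 ppm frequency of a heteronucleus is the measured 1H frequency of DSS times a fixed frequency ratio (Ξ value). The sketch below uses the IUPAC-recommended ratios Ξ(13C) = 0.251449530 and Ξ(15N) = 0.101329118; the 500 MHz spectrometer frequency is only an illustrative input.

```python
# IUPAC-recommended frequency ratios (Xi) relative to the 1H signal of DSS
XI = {"13C": 0.251449530, "15N": 0.101329118}

def zero_frequency(dss_1h_hz, nucleus):
    """Indirectly referenced 0 ppm frequency (Hz) for a heteronucleus,
    given the measured 1H DSS frequency in Hz."""
    return dss_1h_hz * XI[nucleus]

# Example: a nominal 500 MHz spectrometer
f_dss = 500.0e6                        # observed 1H DSS frequency, Hz
f0_c13 = zero_frequency(f_dss, "13C")  # about 125.72 MHz
f0_n15 = zero_frequency(f_dss, "15N")  # about 50.66 MHz
```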
Importance of NMR chemical shift re-referencing in biomolecular NMR
Incorrect chemical shift referencing is a particularly acute problem in biomolecular NMR. It has been estimated that up to 20% of 13C and up to 35% of 15N shift assignments are improperly referenced.
Given that the structural and dynamic information contained within chemical shifts is often quite subtle, it is critical that protein chemical shifts be properly referenced so that these subtle differences can be detected. Fundamentally, the problem with chemical shift referencing comes from the fact that chemical shifts are relative frequency measurements rather than absolute frequency measurements. Because of the historic problems with chemical shift referencing, chemical shifts are perhaps the most precisely measurable but the least accurately measured parameters in all of NMR spectroscopy.
Programs for protein chemical shift re-referencing
Because of the magnitude and severity of the problems with chemical shift referencing in biomolecular NMR, a number of computer programs have been developed to help mitigate the problem (see Table 1 for a summary). The first program to comprehensively tackle chemical shift mis-referencing in biomolecular NMR was SHIFTCOR.
Table 1. Summary and comparison of different chemical shift re-referencing and mis-assignment detection programs.
SHIFTCOR: A structure-based chemical shift correction program
SHIFTCOR is an automated protein chemical shift correction program that uses statistical methods to compare and correct predicted NMR chemical shifts (derived from the 3D structure of the protein) relative to an input set of experimentally measured chemical shifts. SHIFTCOR uses several simple statistical approaches and pre-determined cut-off values to identify and correct potential referencing, assignment and typographical errors. SHIFTCOR identifies potential chemical shift referencing problems by comparing the difference between the average value of each set of observed backbone (1Hα, 13Cα, 13Cβ, 13CO, 15N and 1HN) shifts and their corresponding predicted chemical shifts. The difference between these two averages results in a nucleus-specific chemical shift offset or reference correction (i.e. one for 1H, one for 13C and one for 15N). In order to ensure that certain extreme outliers do not unduly bias these average offset values, the average of the observed shifts is only calculated after excluding potential mis-assignments or typographical errors.
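The offset calculation just described is easy to sketch. The following is an illustrative reimplementation, not SHIFTCOR's actual code: residuals far from the median residual are excluded as suspected mis-assignments, and the reference offset is the mean of the remaining observed-minus-predicted differences. The 1.0 ppm outlier cutoff is an assumed value for the example.

```python
import statistics

def reference_offset(observed, predicted, cutoff=1.0):
    """Estimate a chemical shift reference offset (ppm) for one nucleus type.

    observed, predicted: parallel lists of shifts for the same atoms.
    Residuals more than `cutoff` ppm from the median residual are treated
    as potential mis-assignments and excluded from the average.
    """
    residuals = [o - p for o, p in zip(observed, predicted)]
    med = statistics.median(residuals)
    kept = [r for r in residuals if abs(r - med) <= cutoff]
    return statistics.mean(kept)

# Synthetic example: observed CA shifts are the predicted shifts plus a
# 2.5 ppm reference offset, with one gross mis-assignment to be ignored.
pred = [56.1, 58.3, 62.0, 54.7, 60.2]
obs = [p + 2.5 for p in pred]
obs[2] += 10.0  # simulated mis-assignment
offset = reference_offset(obs, pred)
```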
SHIFTCOR output
SHIFTCOR generates and reports chemical shift offsets or differences for each nucleus. The results contain the chemical shift analyses (including lists of potential mis-assignments, the estimated referencing errors, the estimated error in the calculated reference offset (95% confidence interval), the applied or suggested reference offset, correlation coefficients, RMSD values) and the corrected BMRB formatted chemical shift file (see Figure 1 for details).
SHIFTCOR uses the chemical shift calculation program SHIFTX to predict 1Hα, 13Cα and 15N shifts based on the 3D structure coordinates of the protein being analyzed. By comparing the predicted shifts to the observed shifts, SHIFTCOR is able to accurately identify chemical shift reference offsets as well as potential mis-assignments. A key limitation of the SHIFTCOR approach is that it requires the 3D structure of the target protein to be available to assess the chemical shift reference offsets. Given that chemical shift assignments are typically made before the structure is determined, it was soon realized that structure-independent approaches needed to be developed.
Structure-independent chemical shift correction programs
Several methods have been developed that make use of the estimated (via 1H or 13C shifts) or predicted (via sequence) secondary structure content of the protein being analyzed. These programs include PSSI, CheckShift, LACS, and PANAV. Both PANAV and CheckShift are also available as web servers.
The PSSI and PANAV programs use the secondary structure determined by 1H shifts (which are almost never mis-referenced) to adjust the target protein’s 13C and 15N shifts to match the 1H-derived secondary structure. LACS uses the difference between secondary 13Cα and 13Cβ shifts plotted against secondary 13Cα shifts or secondary 13Cβ shifts to determine reference offsets. A more recent version of LACS has been adapted to identify 15N chemical shift mis-referencing. This new version of LACS exploits the well-known relationship between secondary 15N shifts and the secondary 13Cα and 13Cβ shifts of the preceding residue. In contrast to LACS and PANAV/PSSI, CheckShift uses secondary structure predicted from high-performance secondary structure prediction programs such as PSIPRED to iteratively adjust 13C and 15N chemical shifts so that their secondary shifts match the predicted secondary structure. These programs have all been shown to accurately identify mis-referenced and properly re-reference protein chemical shifts deposited in the BMRB. Note that both LACS and CheckShift are programmed to always predict the same offset for 13Cα and 13Cβ shifts, whereas PSSI and PANAV do not make this assumption. As a general rule, PANAV and PSSI typically exhibit a smaller spread (or standard deviation) in calculated reference offsets, indicating that these programs are slightly more precise than either LACS or CheckShift. Neither LACS nor CheckShift is able to handle proteins that have extremely large (above 40 ppm) reference offsets, whereas PANAV and PSSI seem to be able to deal with these kinds of anomalous proteins.
In a recent study, a chemical shift re-referencing program (PANAV) was run on a total of 2421 BMRB entries that had a sufficient proportion of (>80%) of assigned chemical shifts to perform a robust chemical shift reference correction. A total of 243 entries were found with 13Cα shifts offset by more than 1.0 ppm, 238 entries with 13Cβ shifts offset of more than 1.0 ppm, 200 entries with 13C’ shifts offset of more than 1.0 ppm and 137 entries with 15N shifts offset by more than 1.5 ppm. From this study, 19.7% of the entries in the BMRB appear to be mis-referenced. Evidently, chemical shift referencing continues to be a significant, and as yet unresolved problem for the biomolecular NMR community.
See also
Chemical Shift
Random Coil Index
Chemical shift index
Protein NMR
RefDB (chemistry)
SHIFTCOR
Protein structure database
NMR
Nuclear magnetic resonance spectroscopy
Protein nuclear magnetic resonance spectroscopy
Protein
References
General References
Nuclear magnetic resonance
Nuclear magnetic resonance software
Chemistry software | Protein chemical shift re-referencing | [
"Physics",
"Chemistry"
] | 1,602 | [
"Nuclear magnetic resonance",
"Chemistry software",
"Nuclear magnetic resonance software",
"nan",
"Nuclear physics"
] |
42,375,327 | https://en.wikipedia.org/wiki/Born%E2%80%93Mayer%20equation | The Born–Mayer equation is an equation that is used to calculate the lattice energy of a crystalline ionic compound. It is a refinement of the Born–Landé equation by using an improved repulsion term.

E = -\frac{N_A M z^+ z^- e^2}{4\pi\varepsilon_0 r_0}\left(1 - \frac{\rho}{r_0}\right)

where:
NA = Avogadro constant;
M = Madelung constant, relating to the geometry of the crystal;
z+ = charge number of cation
z− = charge number of anion
e = elementary charge, 1.6022 × 10⁻¹⁹ C
ε0 = permittivity of free space
4πε0 = 1.112 × 10⁻¹⁰ C²/(J·m)
r0 = distance to closest ion
ρ = a constant dependent on the compressibility of the crystal; 30 pm works well for all alkali metal halides
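As a numerical illustration, the sketch below evaluates the Born–Mayer equation for rock salt (NaCl), using the Madelung constant M ≈ 1.7476, nearest-neighbour distance r₀ ≈ 282 pm and the recommended ρ = 30 pm. These inputs are illustrative values for NaCl, not part of the equation itself.

```python
NA = 6.02214076e23            # Avogadro constant, 1/mol
E_CHARGE = 1.602176634e-19    # elementary charge, C
FOUR_PI_EPS0 = 1.11265006e-10  # 4*pi*eps0, C^2/(J*m)

def born_mayer(M, z_plus, z_minus, r0, rho=30e-12):
    """Lattice energy (J/mol) from the Born-Mayer equation."""
    coulomb = NA * M * z_plus * z_minus * E_CHARGE**2 / (FOUR_PI_EPS0 * r0)
    return coulomb * (1.0 - rho / r0)

# NaCl: Madelung constant 1.7476, r0 = 282 pm, z+ = +1, z- = -1
E = born_mayer(1.7476, +1, -1, 282e-12)  # about -7.7e5 J/mol (-770 kJ/mol)
```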
See also
Born–Landé equation
Kapustinskii equation
References
Eponymous equations of physics
Solid-state chemistry
Ions | Born–Mayer equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 173 | [
"Matter",
"Equations of physics",
"Eponymous equations of physics",
"Condensed matter physics",
"nan",
"Ions",
"Solid-state chemistry"
] |
42,381,107 | https://en.wikipedia.org/wiki/SensoMotoric%20Instruments | SensoMotoric Instruments (SMI) was a German provider of dedicated computer vision applications with a major focus on eye-tracking technology. SMI was founded in 1991 as a spin-off from academic and medical research at the Free University of Berlin. The company has its headquarters in Teltow near Berlin, Germany, offices in Boston, Massachusetts and San Francisco, California, in the United States, and a worldwide distributor and partner network.
SMI provided eye tracking systems for scientific research, professional solutions and OEM applications. The eye trackers can be combined with motion tracking systems, EEG, and other biometric data. They can be integrated into virtual reality CAVEs, head-mounted displays such as Google Glass or Oculus Rift, simulators, cars, or computers as a measurement or interaction modality.
History
The company was founded by Dr. Winfried Teiwes in 1991. SMI's first system 3D VOG was employed by ESA and NASA, and on board the Russian space station Mir, to analyze the effect of space missions on gravity-responsive torsional eye movements of astronauts. Gradually, the company shifted its focus from astronautics towards ophthalmology and scientific research. Dr. Teiwes remained the company's Managing Director until 2008, when Eberhard Schmidt took over this role. After the sale of the ENT product line to Interacoustics, the diagnostics arm of the William Demant Group, in 2001, the spin-out of the retinal treatment activities into OD-OS in 2008, and the sale of the Ophthalmic division to Alcon in 2012, the company focused on scientific and professional eye tracking research solutions, virtual reality applications, and OEM integrations.
Technology and Products
The technology is based on the dark pupil and corneal reflection tracking: The cameras in the SMI eye trackers detect face, eyes, and pupils, as well as the corneal reflections from the infrared light sources, and calculate eye movements, gaze direction and points of regard. The sampling frequency of the eye trackers ranges from 30 Hz up to the kHz range.
On the hardware side, the company has three main product lines: mobile Eye Tracking Glasses (ETG), remote eye tracking systems (RED), and tower-mounted systems (Hi-Speed).
The software for experimental design and data analysis is called Experiment Suite and comes in different packages depending on the user's research interests.
Partnerships
At the 2014 Game Developers Conference, Sony unveiled the prototype InFamous: Second Son game for PlayStation 4, using SMI's RED-oem eye tracking system.
At the CES 2016, SMI demoed a new 250 Hz eye tracking system and a working foveated rendering solution. It resulted from a partnership with camera sensor manufacturer Omnivision who provided the camera hardware for the new system.
In 2015 DEWESoft together with SMI integrated the Eye Tracking Glasses into a driver machine monitoring and analysis platform for advanced driver-assistance systems (ADAS).
In 2014 Red Bull started using the Eye Tracking Glasses as part of their Red Bull Surf Science project.
In 2013 TechViz integrated SMI's 3D Eye Tracking Glasses with TechViz 3D visualization software to enable eye tracking in a virtual reality CAVE. The 3D Eye Tracking Glasses were developed in partnership with Volfoni. In the same year, WorldViz started cooperating with SMI to enable calculation of intersects of gaze vectors with 3D objects and saving the data in one common database for deeper analysis. The German Research Center for Artificial Intelligence (DFKI) used the Eye Tracking Glasses to create Talking Places, the prototype of an interactive city guide.
In 2012, in partnership with Emotiv, SMI developed a software package that combined the EEG data from the Emotiv EEG Neuroheadset with the eye tracking data. Neuromarketers can use this software to analyze consumer reactions to brands according to visual and emotional cues. Prentke Romich Company integrated SMI's NuEye eye-gaze accessory into its speech-generating platform for people with disabilities. The system allows users to control a communication device using only their eyes. Visual Interaction offers the myGaze eye tracking accessory, based on SMI technology, with selected software packages for assistive applications.
Acquisition
It was reported that Apple acquired SMI in June 2017.
Awards
In 1992, SMI won the Berlin and Brandenburg Innovation Prize.
In 2009, SMI's iView X RED system received the iF Product Design Award.
See also
Biopac
Emotiv
Eye tracking
Video-oculography
Visual perception
References
Information technology companies of Germany
Data analysis software
Data collection in research
Companies based in Brandenburg
Physiological instruments
Vision | SensoMotoric Instruments | [
"Technology",
"Engineering"
] | 984 | [
"Physiological instruments",
"Measuring instruments"
] |
42,381,647 | https://en.wikipedia.org/wiki/Serial%20concatenated%20convolutional%20codes | Serial concatenated convolutional codes (SCCC) are a class of forward error correction (FEC) codes highly suitable for turbo (iterative) decoding. Data to be transmitted over a noisy channel may first be encoded using an SCCC. Upon reception, the coding may be used to remove any errors introduced during transmission. The decoding is performed by repeated decoding and [de]interleaving of the received symbols.
SCCCs typically include an inner code, an outer code, and a linking interleaver. A distinguishing feature of SCCCs is the use of a recursive convolutional code as the inner code. The recursive inner code provides the 'interleaver gain' for the SCCC, which is the source of the excellent performance of these codes.
The analysis of SCCCs was spawned in part by the earlier discovery of turbo codes in 1993. This analysis of SCCCs took place in the 1990s in a series of publications from NASA's Jet Propulsion Laboratory (JPL). The research offered SCCCs as a form of turbo-like serial concatenated codes that 1) were iteratively ('turbo') decodable with reasonable complexity, and 2) gave error correction performance comparable with the turbo codes.
Prior forms of serial concatenated codes typically did not use recursive inner codes. Additionally, the constituent codes used in prior forms of serial concatenated codes were generally too complex for reasonable soft-in-soft-out (SISO) decoding. SISO decoding is considered essential for turbo decoding.
Serial concatenated convolutional codes have not found widespread commercial use, although they were proposed for communications standards such as DVB-S2. Nonetheless, the analysis of SCCCs has provided insight into the performance and bounds of all types of iterative decodable codes including turbo codes and LDPC codes.
US patent 6,023,783 covers some forms of SCCCs. The patent expired on May 15, 2016.
History
Serial concatenated convolutional codes were first analyzed with a view toward turbo decoding in "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding" by S. Benedetto, D. Divsalar, G. Montorsi and F. Pollara. This analysis yielded a set of observations for designing high performance, turbo decodable serial concatenated codes that resembled turbo codes. One of these observations was that "the use of a recursive convolutional inner encoder always yields an interleaver gain." This is in contrast to the use of block codes or non-recursive convolutional codes, which do not provide comparable interleaver gain.
Additional analysis of SCCCs was done in "Coding Theorems for 'Turbo-Like' Codes" by D. Divsalar, Hui Jin, and Robert J. McEliece. This paper analyzed repeat-accumulate (RA) codes which are the serial concatenation of an inner two-state recursive convolutional code (also called an 'accumulator' or parity-check code) with a simple repeat code as the outer code, with both codes linked by an interleaver. The performance of the RA codes is quite good considering the simplicity of the constituent codes themselves.
SCCC codes were further analyzed in "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code". In this paper SCCCs were designed for use with higher order modulation schemes. Excellent performing codes with inner and outer constituent convolutional codes of only two or four states were presented.
Example Encoder
Fig. 1 is an example of an SCCC.
The example encoder is composed of a 16-state outer convolutional code and a 2-state inner convolutional code linked by an interleaver. The natural code rate of the configuration shown is 1/4, however, the inner and/or outer codes may be punctured to achieve higher code rates as needed. For example, an overall code rate of 1/2 may be achieved by puncturing the outer convolutional code to rate 3/4 and the inner convolutional code to rate 2/3.
A recursive inner convolutional code is preferable for turbo decoding of the SCCC. The inner code may be punctured to a rate as high as 1/1 with reasonable performance.
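A minimal encoder of this shape can be sketched as follows. This is an illustrative toy, not the 16-state encoder of Fig. 1: the outer code here is the common 4-state rate-1/2 convolutional code with generators (7, 5) in octal, the interleaver is an arbitrary fixed permutation, and the inner code is the rate-1 two-state recursive accumulator used in repeat-accumulate codes.

```python
def conv_encode(bits, gens=(0b111, 0b101)):
    """Outer code: non-recursive rate-1/2 convolutional encoder,
    4 states, generators (7, 5) octal, zero initial state."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                # [newest bit, two memory bits]
        for g in gens:
            out.append(bin(reg & g).count("1") % 2)  # parity of tapped bits
        state = reg >> 1                      # shift the register
    return out

def accumulate(bits):
    """Inner code: rate-1 two-state recursive encoder, y[i] = y[i-1] XOR x[i]."""
    y, out = 0, []
    for b in bits:
        y ^= b
        out.append(y)
    return out

def sccc_encode(bits, perm):
    """Outer code -> interleaver -> recursive rate-1 inner code."""
    coded = conv_encode(bits)
    assert sorted(perm) == list(range(len(coded)))
    return accumulate([coded[i] for i in perm])

info = [1, 0, 1, 1]
perm = [5, 2, 7, 0, 3, 6, 1, 4]               # arbitrary fixed interleaver
codeword = sccc_encode(info, perm)            # overall rate 1/2 here
```

Because the inner code is rate 1, the overall rate is set entirely by the (possibly punctured) outer code, mirroring the text's remark that the inner code may be punctured to rate 1/1.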
Example Decoder
An example of an iterative SCCC decoder.
The SCCC decoder includes two soft-in-soft-out (SISO) decoders and an interleaver. While shown as separate units, the two SISO decoders may share all or part of their circuitry. The SISO decoding may be done in serial or parallel fashion, or some combination thereof. The SISO decoding is typically done using maximum a posteriori (MAP) decoders based on the BCJR algorithm.
Performance
SCCCs provide performance comparable to other iteratively decodable codes including turbo codes and LDPC codes. They are noted for having slightly worse performance at lower SNR environments (i.e. worse waterfall region), but slightly better performance at higher SNR environments (i.e. lower error floor).
See also
Convolutional code
Viterbi algorithm
Soft-decision decoding
Interleaver
BCJR algorithm
Low-density parity-check code
Repeat-accumulate code
Turbo equalizer
References
External links
Data
Error detection and correction
Encodings | Serial concatenated convolutional codes | [
"Technology",
"Engineering"
] | 1,167 | [
"Information technology",
"Error detection and correction",
"Data",
"Reliability engineering"
] |
42,382,175 | https://en.wikipedia.org/wiki/Hydroxycarteolol | Hydroxycarteolol is a beta blocker and metabolite of carteolol.
References
Beta blockers
4-Quinolones
Human drug metabolites
Quinolinols
Secondary alcohols
Tert-butyl compounds | Hydroxycarteolol | [
"Chemistry"
] | 51 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
42,382,635 | https://en.wikipedia.org/wiki/%28523671%29%202013%20FZ27 | (provisional designation 2013 FZ27) is a trans-Neptunian object located in the Kuiper belt in the outermost region of the Solar System, approximately in diameter. It was discovered on 16 March 2013, by American astronomers Scott Sheppard and Chad Trujillo at the CTIO in Chile. Numbered in 2018, this minor planet has not been named.
Orbit and classification
is a trans-Neptunian object (TNO), located beyond the orbit of Neptune (30.1 AU). The Johnston's archive classifies it as an unspecific "other TNO", meaning that the minor planet is neither a resonant nor a classical TNO. Taking the mean of the two magnitudes, and using the standard 0.25–0.05 albedo range for minor planets of unknown albedo, a wide 335 to 748 km spread can be estimated for the diameter.
orbits the Sun at a distance of 37.6–58.7 AU once every 334 years and 1 month (122,013 days; semi-major axis of 48.14 AU). Its orbit has an eccentricity of 0.22 and an inclination of 14° with respect to the ecliptic.
The body's observation arc begins with a precovery taken by the Sloan Digital Sky Survey on 20 February 2001, over 10 years prior to its official discovery observation at Cerro Tololo. The object was first announced on 2 April 2014, when American astronomers Scott Sheppard and Chad Trujillo at the CTIO in Chile published their observations in a Minor Planet Electronic Circular. At the time the object was at 49 AU from the Sun and had an apparent magnitude of 21.1. The Pan-STARRS-1 survey at the Haleakala Observatory, Hawaii, in the United States also found precovery observations of 2013 FZ27 after it was announced, and reported them to the Minor Planet Center at a later date.
Numbering and naming
This minor planet was numbered by the Minor Planet Center on 25 September 2018 (). The initial MPC circular gave the discovery credit incorrectly; the Minor Planet Center issued an erratum on 6 April 2019 on MPC 112429 correcting the mistake and crediting Scott S. Sheppard and Chad Trujillo with the discovery. As of August 2019, it has not been named.
Physical characteristics
Diameter and albedo
According to Michael Brown and Johnston's archive, measures 561 and 584 kilometers in diameter, based on an absolute magnitude of 4.6 and 4.4, respectively. Both sources assume a standard albedo of 0.09 for the body's surface. As of 2018, no physical characteristics have been determined from photometric observations. The body's rotation period, pole and shape remain unknown.
See also
List of Solar System objects most distant from the Sun
Notes
References
External links
MPEC 2014-G07 : 2013 FZ27, Minor Planet Electronic Circular
Discovery Circumstances: Numbered Minor Planets (520001)-(525000) – Minor Planet Center
523671
523671
523671
20101215 | (523671) 2013 FZ27 | [
"Physics",
"Astronomy"
] | 640 | [
"Concepts in astronomy",
"Unsolved problems in astronomy",
"Possible dwarf planets"
] |
34,177,870 | https://en.wikipedia.org/wiki/Pure%20shear | In mechanics and geology, pure shear is a three-dimensional homogeneous flattening of a body. It is an example of irrotational strain in which body is elongated in one direction while being shortened perpendicularly. For soft materials, such as rubber, a strain state of pure shear is often used for characterizing hyperelastic and fracture mechanical behaviour. Pure shear is differentiated from simple shear in that pure shear involves no rigid body rotation.
The deformation gradient for pure shear is given by:

$$\mathbf{F} = \begin{bmatrix} 1 & \gamma & 0 \\ \gamma & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Note that this gives a Green-Lagrange strain of:

$$\mathbf{E} = \tfrac{1}{2}\left(\mathbf{F}^\mathsf{T}\mathbf{F} - \mathbf{I}\right) = \begin{bmatrix} \gamma^2/2 & \gamma & 0 \\ \gamma & \gamma^2/2 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Here there is no rotation occurring, which can be seen from the equal off-diagonal components of the strain tensor. The linear approximation to the Green-Lagrange strain shows that the small strain tensor is:

$$\boldsymbol{\varepsilon} = \begin{bmatrix} 0 & \gamma & 0 \\ \gamma & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

which has only shearing components.
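These relations can be checked numerically. The sketch below assumes the standard pure-shear deformation gradient with equal off-diagonal shear components γ; the value of γ is illustrative:

```python
import numpy as np

gamma = 0.1  # illustrative shear parameter

# Pure-shear deformation gradient: symmetric, so no rigid body rotation
F = np.array([[1.0, gamma, 0.0],
              [gamma, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Green-Lagrange strain: E = (F^T F - I) / 2
E = (F.T @ F - np.eye(3)) / 2.0

# Linear (small strain) approximation: eps = (F + F^T)/2 - I
eps = (F + F.T) / 2.0 - np.eye(3)

# Equal off-diagonal components of the strain tensor
assert np.isclose(E[0, 1], E[1, 0])
# The small strain tensor has only shearing components
assert np.allclose(np.diag(eps), 0.0)
print(E)
print(eps)
```

The Green-Lagrange strain picks up a second-order γ²/2 term on its diagonal, while the linearization keeps only the off-diagonal shears, matching the statement above.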
See also
Simple shear
Squeeze mapping
References
Fluid mechanics
Continuum mechanics | Pure shear | [
"Physics",
"Engineering"
] | 174 | [
"Continuum mechanics",
"Classical mechanics stubs",
"Classical mechanics",
"Civil engineering",
"Fluid mechanics"
] |
34,180,476 | https://en.wikipedia.org/wiki/Navy%20Operational%20Global%20Atmospheric%20Prediction%20System%20Model | Navy Operational Global Atmospheric Prediction System Model (NOGAPS) is a global numerical weather prediction computer model run by Fleet Numerical. This mathematical model is run four times a day and produces weather forecasts. Along with the ECMWF's Integrated Forecast System (IFS), the Canadian Global Environmental Multiscale Model (GEM) it is one of several synoptic scale medium-range models in general use.
References
External links
NOGAPS Portal
Weather prediction
Numerical climate and weather models | Navy Operational Global Atmospheric Prediction System Model | [
"Physics"
] | 101 | [
"Weather",
"Weather prediction",
"Physical phenomena"
] |
34,180,753 | https://en.wikipedia.org/wiki/Variable-mass%20system | In mechanics, a variable-mass system is a collection of matter whose mass varies with time. It can be confusing to try to apply Newton's second law of motion directly to such a system. Instead, the time dependence of the mass m can be calculated by rearranging Newton's second law and adding a term to account for the momentum carried by mass entering or leaving the system. The general equation of variable-mass motion is written as
where Fext is the net external force on the body, vrel is the relative velocity of the escaping or incoming mass with respect to the center of mass of the body, and v is the velocity of the body. In astrodynamics, which deals with the mechanics of rockets, the term vrel is often called the effective exhaust velocity and denoted ve.
Derivation
There are different derivations for the variable-mass system motion equation, depending on whether the mass is entering or leaving a body (in other words, whether the moving body's mass is increasing or decreasing, respectively). To simplify calculations, all bodies are considered as particles. It is also assumed that the mass is unable to apply external forces on the body outside of accretion/ablation events.
Mass accretion
The following derivation is for a body that is gaining mass (accretion). A body of time-varying mass m moves at a velocity v at an initial time t. In the same instant, a particle of mass dm moves with velocity u with respect to ground. The initial momentum can be written as

$$\mathbf{p}_1 = m\mathbf{v} + \mathbf{u}\,dm$$

Now at a time t + dt, let both the main body and the particle accrete into a body of velocity v + dv. Thus the new momentum of the system can be written as

$$\mathbf{p}_2 = (m + dm)(\mathbf{v} + d\mathbf{v})$$

Since dm dv is the product of two small values, it can be ignored, meaning that during dt the momentum of the system varies by

$$d\mathbf{p} = \mathbf{p}_2 - \mathbf{p}_1 = m\,d\mathbf{v} - (\mathbf{u} - \mathbf{v})\,dm$$

Therefore, by Newton's second law

$$\mathbf{F}_\text{ext} = \frac{d\mathbf{p}}{dt} = m\frac{d\mathbf{v}}{dt} - (\mathbf{u} - \mathbf{v})\frac{dm}{dt}$$

Noting that u − v is the velocity of dm relative to m, symbolized as vrel, this final equation can be arranged as

$$\mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\frac{dm}{dt} = m\frac{d\mathbf{v}}{dt}$$
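The bookkeeping in this derivation, in which the second-order term dm dv is dropped, can be checked with plain numbers. A sketch with illustrative values, assuming the momenta p1 = mv + u·dm before accretion and p2 = (m + dm)(v + dv) after, as above:

```python
m, v = 5.0, 2.0      # body mass (kg) and velocity (m/s), illustrative
u = 3.0              # velocity of the incoming particle dm (m/s)
dm, dv = 1e-3, 4e-4  # small mass and velocity increments

p1 = m * v + u * dm          # momentum of body plus incoming particle
p2 = (m + dm) * (v + dv)     # momentum of the merged body

dp_exact = p2 - p1
dp_first_order = m * dv - (u - v) * dm   # m dv - (u - v) dm

# The discrepancy is exactly the ignored second-order term dm * dv
assert abs(dp_exact - dp_first_order - dm * dv) < 1e-12
print(dp_exact, dp_first_order)
```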
Mass ablation/ejection
In a system where mass is being ejected or ablated from a main body, the derivation is slightly different. At time t, let a mass m travel at a velocity v, meaning the initial momentum of the system is

$$\mathbf{p}_1 = m\mathbf{v}$$

Let dm be the change in the body's mass, so that the ejected mass is −dm, and let u be the velocity of the ablated mass with respect to the ground. At a time t + dt the momentum of the system becomes

$$\mathbf{p}_2 = (m + dm)(\mathbf{v} + d\mathbf{v}) - \mathbf{u}\,dm$$

where the ejected mass −dm carries momentum −u dm; here dm is negative because the ablated mass leaves the body. Ignoring the second-order term dm dv, during dt the momentum of the system varies by

$$d\mathbf{p} = \mathbf{p}_2 - \mathbf{p}_1 = m\,d\mathbf{v} - (\mathbf{u} - \mathbf{v})\,dm$$

The relative velocity vrel of the ablated mass with respect to the mass m is written as

$$\mathbf{v}_\text{rel} = \mathbf{u} - \mathbf{v}$$

Therefore, the change in momentum can be written as

$$d\mathbf{p} = m\,d\mathbf{v} - \mathbf{v}_\text{rel}\,dm$$

Thus, by Newton's second law

$$\mathbf{F}_\text{ext} = \frac{d\mathbf{p}}{dt} = m\frac{d\mathbf{v}}{dt} - \mathbf{v}_\text{rel}\frac{dm}{dt}$$

and the final equation can be arranged as

$$\mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\frac{dm}{dt} = m\frac{d\mathbf{v}}{dt}$$
Forms
By the definition of acceleration, a = dv/dt, so the variable-mass system motion equation can be written as

$$\mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\frac{dm}{dt} = m\mathbf{a}$$

In bodies that are not treated as particles, a must be replaced by acm, the acceleration of the center of mass of the system, meaning

$$\mathbf{F}_\text{ext} + \mathbf{v}_\text{rel}\frac{dm}{dt} = m\mathbf{a}_\text{cm}$$

Often the force due to thrust is defined as

$$\mathbf{F}_\text{thrust} = \mathbf{v}_\text{rel}\frac{dm}{dt}$$

so that

$$\mathbf{F}_\text{ext} + \mathbf{F}_\text{thrust} = m\mathbf{a}$$

This form shows that a body can have acceleration due to thrust even if no external forces act on it (Fext = 0). Note finally that if one lets Fnet be the sum of Fext and Fthrust then the equation regains the usual form of Newton's second law:

$$\mathbf{F}_\text{net} = m\mathbf{a}$$
Ideal rocket equation
The ideal rocket equation, or the Tsiolkovsky rocket equation, can be used to study the motion of vehicles that behave like a rocket (where a body accelerates itself by ejecting part of its mass, a propellant, with high speed). It can be derived from the general equation of motion for variable-mass systems as follows: when no external forces act on a body (Fext = 0) the variable-mass system motion equation reduces to

$$\mathbf{v}_\text{rel}\frac{dm}{dt} = m\frac{d\mathbf{v}}{dt}$$

If the velocity of the ejected propellant, vrel, is assumed to have the direction opposite to the rocket's acceleration, dv/dt, the scalar equivalent of this equation can be written as

$$-v_\text{rel}\frac{dm}{dt} = m\frac{dv}{dt}$$

from which dt can be cancelled out to give

$$-v_\text{rel}\,dm = m\,dv$$

Integration by separation of variables gives

$$-v_\text{rel}\int_{m_0}^{m_1}\frac{dm}{m} = \int_{v_0}^{v_1} dv \quad\Rightarrow\quad v_\text{rel}\ln\frac{m_0}{m_1} = v_1 - v_0$$

By rearranging and letting Δv = v1 − v0, one arrives at the standard form of the ideal rocket equation:

$$\Delta v = v_\text{rel}\ln\frac{m_0}{m_1}$$
where m0 is the initial total mass, including propellant, m1 is the final total mass, vrel is the effective exhaust velocity (often denoted as ve), and Δv is the maximum change of speed of the vehicle (when no external forces are acting).
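The closed-form result can be cross-checked by numerically integrating the mass-ejection equation of motion. A sketch; the exhaust velocity, masses, and step count are illustrative values, not data from the article:

```python
import math

v_rel = 3000.0          # effective exhaust velocity, m/s (illustrative)
m0, m1 = 1000.0, 400.0  # initial and final total mass, kg (illustrative)

# Euler integration of m dv = -v_rel dm as mass drops from m0 to m1
n = 100_000
dm = (m1 - m0) / n      # negative: the rocket loses mass each step
v, m = 0.0, m0
for _ in range(n):
    v += -v_rel * dm / m
    m += dm

# Tsiolkovsky rocket equation: delta_v = v_rel * ln(m0 / m1)
delta_v = v_rel * math.log(m0 / m1)

assert abs(v - delta_v) < 0.1  # the two agree to within 0.1 m/s
print(v, delta_v)
```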
References
Classical mechanics
Mechanics | Variable-mass system | [
"Physics",
"Engineering"
] | 948 | [
"Mechanics",
"Classical mechanics",
"Mechanical engineering"
] |
34,183,088 | https://en.wikipedia.org/wiki/SEA%20Native%20Peptide%20Ligation | Protein chemical synthesis by native peptide ligation of unprotected peptide segments is an interesting complement and potential alternative to the use of living systems for producing proteins.
The synthesis of proteins requires efficient native peptide ligation methods, which enable the chemoselective formation of a native peptide bond in aqueous solution between unprotected peptide segments.
The most frequently used technique for synthesizing proteins is native chemical ligation (NCL). However, alternatives are emerging, one of which is SEA Native Peptide Ligation.
Overview
The SEA group belongs to the N,S-acyl shift systems because its reactivity is dictated by the intramolecular nucleophilic addition of one SEA thiol group to the C-terminal carbonyl group of the peptide segment. This results in the migration of the peptide chain from the nitrogen to the sulfur. The overall process of SEA native peptide ligation thus involves first an N,S-acyl shift for the in situ formation of a peptide thioester and then, after thiol-thioester exchange, an S,N-acyl shift for the formation of the peptide bond.
Description of the reaction
SEA is an abbreviation of bis(2-sulfanylethyl)amido (Scheme 1). SEA ligation involves the reaction of a peptide featuring a C-terminal bis(2-sulfanylethyl)amido group with a Cys peptide. This reaction probably proceeds through the formation of a transient thioester intermediate, obtained by intramolecular attack of one SEA thiol on the peptide C-terminal carbonyl group, as shown in Scheme 1. The thioester then undergoes a series of thiol-thioester exchanges, including with exogenous thiols present in the ligation mixture such as mercaptophenylacetic acid (MPAA). Exchange with the cysteine thiol group of the second peptide segment results in a transient thioester intermediate, which, as in native chemical ligation, rearranges by an intramolecular S,N-acyl shift into a native peptide bond.
Publication
The first peer-reviewed publication describing SEA native peptide ligation was published in Organic Letters by Melnyk, O. et al. (Ollivier, N.; Dheur, J.; Mhidia, R.; Blanpain, A.; Melnyk, O., Bis(2-sulfanylethyl)amino native peptide ligation. Org. Lett. 2010, 12, (22), 5238–41; Publication Date (Web): October 21, 2010).
A few weeks later, the same reaction was published in the same journal by Liu, C. F (Hou, W.; Zhang, X.; Li, F.; Liu, C. F., Peptidyl N,N-Bis(2-mercaptoethyl)-amides as Thioester Precursors for Native Chemical Ligation. Org. Lett. 2011, 13, 386–389; Publication Date (Web): December 22, 2010).
SEA on/off concept
SEA on/off concept exploits the redox properties of SEA group.
Oxidation of SEA on results in a cyclic disulfide called SEA off, which is a self-protected form of SEA on. SEA off and SEA on can be easily interconverted by reduction/oxidation as shown in Scheme 2.
References
Peptides
Chemical reactions | SEA Native Peptide Ligation | [
"Chemistry"
] | 719 | [
"Biomolecules by chemical classification",
"Peptides",
"nan",
"Molecular biology"
] |
32,561,097 | https://en.wikipedia.org/wiki/Paul%20H.%20Brunner | Paul H. Brunner (* 1946) is a material flow analysis methodology and urban metabolism specialist. He is a professor emeritus of the Institute for Water Quality, Resource and Waste Management, Vienna University of Technology in Austria.
Biography
Dr. Brunner held the Chair for Waste Management at the Vienna University of Technology from 1991 to 2015, and specialized in waste and resource management. His research focused on the "metabolism of the anthroposphere", in particular on methods to analyze, evaluate and control material flows through urban regions.
On April 21, 2016 Prof. Paul H. Brunner was awarded the "Grand Decoration of Honour in Silver for Services to the Republic of Austria" by Minister Dr. Reinhold Mitterlehner.
Literature
Books
2016. Handbook of Material Flow Analysis: For Environmental, Resource and Waste Engineers, 2nd ed., CRC Press, with Helmut Rechberger
2012. Metabolism of the Anthroposphere: Analysis, Evaluation, Design, with Peter Baccini, 2nd ed., MIT Press, Cambridge, Mass.
2005. Integrated Resource and Waste Management (Advanced Methods in Resource & Waste Management), CRC Press, with Helmut Rechberger.
2004. Practical Handbook of Material Flow Analysis, with Helmut Rechberger, CRC Press LLC, Boca Raton, Florida.
1991. Metabolism of the Anthroposphere, with Peter Baccini, Springer Verlag, Berlin, Heidelberg, New York, London.
References
Academic staff of TU Wien
Living people
Industrial ecology
Year of birth missing (living people) | Paul H. Brunner | [
"Chemistry",
"Engineering"
] | 314 | [
"Industrial ecology",
"Industrial engineering",
"Environmental engineering"
] |
32,561,211 | https://en.wikipedia.org/wiki/Sirohaem%20synthase | In molecular biology, sirohaem synthase (or siroheme synthase) (CysG) is a multi-functional enzyme with S-adenosyl-L-methionine (SAM)-dependent bismethyltransferase, dehydrogenase and ferrochelatase activities. Bacterial sulphur metabolism depends on the iron-containing porphinoid sirohaem. CysG synthesizes sirohaem from uroporphyrinogen III via reactions which encompass two branchpoint intermediates in tetrapyrrole biosynthesis, diverting flux first from protoporphyrin IX biosynthesis and then from cobalamin (vitamin B12) biosynthesis. CysG is a dimer. Its dimerisation region is 74 amino acids long, and acts to hold the two structurally similar protomers held together asymmetrically through a number of salt-bridges across complementary residues within the dimerisation region. CysG dimerisation produces a series of active sites, accounting for CysG's multi-functionality, catalysing four diverse reactions:
Two SAM-dependent methylations
NAD+-dependent tetrapyrrole dehydrogenation
Metal chelation
References
Protein domains | Sirohaem synthase | [
"Biology"
] | 261 | [
"Protein domains",
"Protein classification"
] |
56,803,181 | https://en.wikipedia.org/wiki/John%20Cullen%20%28chemical%20engineer%29 | Sir Edward John Cullen FEng PhD DSc (29 October 1926 – 14 January 2018) was a British chemical engineer who was head of the UK Health and Safety Commission and received a knighthood for services to health and safety.
Life and education
Cullen was born in October 1926 in Bury St Edmunds and attended Culford School. After service in the RAF he went to Emmanuel College, Cambridge in 1948, graduating in chemical engineering in 1952, followed by a master's degree in the same subject at the University of Texas as a Fulbright Scholar, returning to Cambridge to do a PhD in gas absorption.
He married Betty Hopkins in 1954, and they had four children. He died 14 January 2018.
Career
Cullen joined the research department of the United Kingdom Atomic Energy Authority in 1956, moving to ICI, Billingham as a chemical plant manager, where he remained for three years before going to New York as technical liaison for ICI. He returned to the UK in 1963 to oversee the building of the refinery on Teesside as a joint venture between ICI and Phillips Petroleum. From 1967 to 1983 he joined the US company Rohm & Haas finishing as managing director of Rohm & Haas UK.
He had been particularly involved in safety at ICI and from 1979 was European Director of Rohm & Haas, responsible for engineering, regulatory affairs, health, safety and the environment. This led to him becoming Deputy Chairman of the Chemical Industry Safety, Health & Environment Committee of the Chemical Industries Association and to being Chairman of the Health & Safety Commission from 1983 until his retirement in 1993. During this office he was responsible for overseeing the major hazards legislation COMAH and the occupational health legislation COSHH, as well as dealing with major incidents such as the Piper Alpha disaster, the King's Cross fire and the Clapham Junction rail crash.
Honours
Cullen was a Fellow of the Institution of Chemical Engineers, and its President 1988–9. He was also a Fellow of the Royal Academy of Engineering. He was president of the Pipeline Industries Guild from 1996 to 1998. In 1993 he was awarded an honorary doctorate (DSc) by the University of Exeter. He received a knighthood (Knight Bachelor) in the 1991 Birthday Honours.
References
1926 births
2018 deaths
Knights Bachelor
Fellows of the Royal Academy of Engineering
British chemical engineers
Alumni of Emmanuel College, Cambridge
Cockrell School of Engineering alumni
People educated at Culford School
People from Bury St Edmunds
Health and safety in the United Kingdom
Process safety
Royal Air Force airmen
20th-century Royal Air Force personnel
Military personnel from Bury St Edmunds | John Cullen (chemical engineer) | [
"Chemistry",
"Engineering"
] | 506 | [
"Chemical process engineering",
"Safety engineering",
"Process safety"
] |
56,803,430 | https://en.wikipedia.org/wiki/Vapour-phase-mediated%20antimicrobial%20activity | The vapour-phase-mediated antimicrobial activity (VMAA) is the inhibitory or cidal antimicrobial activity of a molecule in a liquid culture, following its initial evaporation and migration via the vapour-phase Two new in vitro assays i.e. the vapour-phase-mediated patch assay and the vapour-phase-mediated susceptibility assay were developed to detect and quantify the VMAA. Both assays belong to the newest class of vaporisation assays i.e. the broth microdilution derived vaporisation assays. In contrast, most other vaporisation assays belong to the class of agar disk diffusion derived vaporisation assays and quantify the antimicrobial activity of the vapour-phase itself. Both classes of vaporisation assays are useful and measure different aspects of the antimicrobial capacity of molecules.
Applications
Possible applications for volatiles like volatile organic compounds with VMAA are: maintaining hygiene in hospitals, treating post-harvest contamination, protecting crops against pathogens and pests, and treating infections of the digestive, vaginal or respiratory tract.
References
Biochemistry | Vapour-phase-mediated antimicrobial activity | [
"Chemistry",
"Biology"
] | 244 | [
"Biochemistry",
"Biocides",
"Antimicrobials",
"nan"
] |
56,807,980 | https://en.wikipedia.org/wiki/Harrisburg%20incinerator | The Harrisburg Incinerator, now under private operation as Susquehanna Resource Management Complex (SRMC), is a waste-to-energy incinerator in South Harrisburg, Pennsylvania built and operated by the city from 1972 to 2003, which was an ongoing source of contention due to toxic air emissions and unforeseen costs which greatly contributed to the bankruptcy of the city. Since December 23, 2013, it is now owned by Lancaster County Solid Waste Management Authority (LCSWMA) and operated by Reworld.
History
Harrisburg City Council approved the $4.9 million project in September 1966, but when construction began on December 22, 1969, the cost had grown to $12.5 million. Mayor Al Straub was quoted as calling it "the Rolls-Royce of incinerators." The trash-to-steam incinerator was completed in 1972, but after repeated breakdowns the cost rose to $30 million, and in 1983 a separate $3 million repair was required, plus a projected $1.7 million deficit. Though it was built to handle 720 tons daily, it consistently operated below the capacity needed to be profitable, as neighboring municipalities declined to participate, some before construction began. Mayor Harold A. Swenson described it as "a facility that far exceeds our needs and our ability to pay." The US Environmental Protection Agency shut down the incinerator for pollution on December 18, 2000, but it was reopened through a loophole less than a month later, with the condition that it close within 2.5 years. On June 18, 2003, the incinerator was closed, though Mayor Stephen R. Reed planned to rebuild a new one. Over the course of the incinerator's operation, its problems left the city with debts of up to $400 million, mostly attributable to the incinerator, and led Harrisburg to file for bankruptcy in 2011.
Harrisburg's incinerator dioxin
For the three decades it was running, the incinerator was the "highest emitter of dioxin in the country" according to Jim Topsale, a municipal waste combustion expert for the EPA. Dioxins are very toxic and according to the World Health Organization, they can cause "reproductive and developmental problems, damage the immune system, interfere with hormones and also cause cancer." Eric Epstein, an environmental activist, accused the Pennsylvania Department of Environmental Protection of environmental racism because the incinerator was located near two low-income housing projects which had a high minority population.
2000 Dioxin Arctic study
In September 2000, a study published by the North American Commission on Environmental Cooperation (NACEC), led by Barry Commoner, found that Inuit women in the Arctic in Nunavut, Canada were found to have high levels of dioxin in their breast milk. The study tracked the origin of the dioxins using computer models from the sources that produced it and found that the dioxin pollution in the Arctic originated from the United States. Out of 44,000 sources of dioxin polluters in the United States, they found that only 19 were contributing to greater than a third of the dioxin pollution in Nunavut. Out of these 19, Harrisburg's incinerator was the #1 source of dioxin pollution in the Arctic.
External links
Stop the Burn, an archived site for the former Coalition Against the Incinerator (CAI)
References
Incinerators
Buildings and structures in Harrisburg, Pennsylvania
Harrisburg, Pennsylvania | Harrisburg incinerator | [
"Chemistry"
] | 717 | [
"Incinerators",
"Incineration"
] |
53,877,635 | https://en.wikipedia.org/wiki/Action%20potential%20pulse | An action potential pulse is a mathematically and experimentally correct Synchronized Oscillating Lipid Pulse coupled with an Action Potential. This is a continuation of Hodgkin Huxley's work in 1952 with the inclusion of accurately modelling ion channel proteins, including their dynamics and speed of activation.
The action potential pulse is a model of the speed an action potential that is dynamically dependent upon the position and number of ion channels, and the shape and make up of the axon. The action potential pulse model takes into account entropy and the conduction speed of the action potential along an axon. It is an addition to the Hodgkin Huxley model.
Investigations into the membranes of axons have shown that the spaces between the channels are sufficiently large that cable theory cannot apply to them, because cable theory depends upon the capacitive potential of a membrane being transferred almost instantly to other areas of the membrane surface. In electrical circuits this can happen because of the special properties of electrons, which are negatively charged, whereas in membrane biophysics potential is defined by positively charged ions instead. These ions are usually Na+ or Ca2+, which move slowly by diffusion and have limited ionic radii within which they can affect adjacent ion channels. It is mathematically impossible for these positive ions to move from one channel to the next, in the time required by the action potential flow model, due to instigated depolarization. Furthermore, entropy measurements have long demonstrated that an action potential's flow starts with a large increase in entropy followed by a steadily decreasing state, which does not match the Hodgkin-Huxley theory. In addition, a soliton pulse is known to flow at the same rate and follow the action potential. From measurements of the speed of an action potential, hyperpolarization must have a further component, of which the 'soliton' mechanical pulse is the only candidate.
The resulting action potential pulse therefore is a synchronized, coupled pulse with the entropy from depolarization at one channel providing sufficient entropy for a pulse to travel to sequential channels and mechanically open them.
This mechanism explains the speed of transmission through both myelinated and unmyelinated axons.
This is a timed pulse, that combines the entropy from ion transport with the efficiency of a flowing pulse.
The action potential pulse model has many advantages over the simpler Hodgkin Huxley version including evidence, efficiency, timing entropy measurements, and the explanation of nerve impulse flow through myelinated axons.
Myelinated axons
This model replaces saltatory conduction, a historical theory that relied upon cable theory to explain conduction and that has no basis in either physiology or membrane biophysics.
In myelinated axons the myelin acts as a mechanical transducer preserving the entropy of the pulse and insulating against mechanical loss. In this model the nodes of Ranvier (where ion channels are highly concentrated) concentrate the ion channels providing maximum entropy to instigate a pulse that travels from node to node along the axon with the entropy being preserved by the shape and dynamics of the myelin sheath.
References
Capacitors
Neural coding
Electrophysiology
Electrochemistry
Computational neuroscience
Cellular neuroscience
Cellular processes
Membrane biology
Plant intelligence
Physiology
Neurons
Action potentials | Action potential pulse | [
"Physics",
"Chemistry",
"Biology"
] | 672 | [
"Physical quantities",
"Plants",
"Physiology",
"Membrane biology",
"Plant intelligence",
"Capacitors",
"Electrochemistry",
"Cellular processes",
"Molecular biology",
"Capacitance"
] |
53,881,364 | https://en.wikipedia.org/wiki/Total%20operating%20characteristic | The total operating characteristic (TOC) is a statistical method to compare a Boolean variable versus a rank variable. TOC can measure the ability of an index variable to diagnose either presence or absence of a characteristic. The diagnosis of presence or absence depends on whether the value of the index is above a threshold. TOC considers multiple possible thresholds. Each threshold generates a two-by-two contingency table, which contains four entries: hits, misses, false alarms, and correct rejections.
The receiver operating characteristic (ROC) also characterizes diagnostic ability, although ROC reveals less information than the TOC. For each threshold, ROC reveals two ratios, hits/(hits + misses) and false alarms/(false alarms + correct rejections), while TOC shows the total information in the contingency table for each threshold. The TOC method reveals all of the information that the ROC method provides, plus additional important information that ROC does not reveal, i.e. the size of every entry in the contingency table for each threshold. TOC also provides the popular area under the curve (AUC) of the ROC.
TOC is applicable to measure diagnostic ability in many fields including but not limited to: land change science, medical imaging, weather forecasting, remote sensing, and materials testing.
Basic concept
The procedure to construct the TOC curve compares the Boolean variable to the index variable by diagnosing each observation as either presence or absence, depending on how the index relates to various thresholds. If an observation's index is greater than or equal to a threshold, then the observation is diagnosed as presence, otherwise the observation is diagnosed as absence. The contingency table that results from the comparison between the Boolean variable and the diagnosis for a single threshold has four central entries. The four central entries are hits (H), misses (M), false alarms (F), and correct rejections (C). The total number of observations is P + Q, where P is the number of Boolean presence observations (hits + misses) and Q is the number of Boolean absence observations (false alarms + correct rejections). The terms "true positives", "false negatives", "false positives" and "true negatives" are equivalent to hits, misses, false alarms and correct rejections, respectively. The entries can be formulated in a two-by-two contingency table or confusion matrix, as follows:
Four bits of information determine all the entries in the contingency table, including its marginal totals. For example, if we know H, M, F, and C, then we can compute all the marginal totals for any threshold. Alternatively, if we know H/P, F/Q, P, and Q, then we can compute all the entries in the table. Two bits of information are not sufficient to complete the contingency table. For example, if we know only H/P and F/Q, which is what ROC shows, then it is impossible to know all the entries in the table.
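The threshold sweep described above can be sketched in a few lines. The data and variable names here are made up for illustration; each threshold yields the full contingency table, which is what the TOC retains for every threshold:

```python
# (index value, Boolean presence) pairs, ranked by index
observations = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.4, False), (0.3, False), (0.2, True), (0.1, False),
]

P = sum(1 for _, present in observations if present)  # presences (H + M)
Q = len(observations) - P                             # absences  (F + C)

# Diagnose "presence" wherever index >= threshold, then tally the table
for t in sorted({idx for idx, _ in observations}, reverse=True):
    H = sum(1 for idx, present in observations if idx >= t and present)
    F = sum(1 for idx, present in observations if idx >= t and not present)
    M = P - H   # misses: presences diagnosed as absence
    C = Q - F   # correct rejections: absences diagnosed as absence
    # A TOC curve plots the point (H + F, H) for this threshold
    print(f"threshold={t}: H={H} M={M} F={F} C={C}")
```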
TOC space
The TOC curve with four boxes indicates how a point on the TOC curve reveals the hits, misses, false alarms, and correct rejections. The TOC curve is an effective way to show the total information in the contingency table for all thresholds. The dataset used to create this TOC curve has 30 observations, each of which consists of values for a Boolean variable and an index variable. The observations are ranked from the greatest to the least value of the index. There are 31 thresholds, consisting of the 30 values of the index and one additional threshold that is greater than all the index values, which creates the point at the origin (0,0). Each point is labeled to indicate the value of its threshold. The horizontal axis ranges from 0 to 30, which is the number of observations in the dataset (P + Q). The vertical axis ranges from 0 to 10, which is the Boolean variable's number of presence observations P (i.e. hits + misses). TOC curves also show the threshold at which the diagnosed amount of presence matches the Boolean amount of presence, which is the threshold point that lies directly under the point where the maximum line meets the hits + misses line, as the TOC curve on the left illustrates.
The following four pieces of information are the central entries in the contingency table for each threshold:
The number of hits at each threshold is the distance between the threshold's point and the horizontal axis.
The number of misses at each threshold is the distance between the threshold's point and the hits + misses horizontal line across the top of the graph.
The number of false alarms at each threshold is the distance between threshold's point and the blue dashed maximum line that bounds the left side of the TOC space.
The number of correct rejections at each threshold is the distance between the threshold's point and the purple dashed minimum line that bounds the right side of the TOC space.
TOC vs. ROC curves
These figures are the TOC and ROC curves using the same data and thresholds. Consider the point that corresponds to a threshold of 74. The TOC curve shows the number of hits, which is 3, and hence the number of misses, which is 7. Additionally, the TOC curve shows that the number of false alarms is 4 and the number of correct rejections is 16. At any given point in the ROC curve, it is possible to glean values for the ratios of false alarms/(false alarms+correct rejections) and hits/(hits+misses). For example, at threshold 74, it is evident that the x coordinate is 0.2 and the y coordinate is 0.3. However, these two values are insufficient to construct all entries of the underlying two-by-two contingency table.
Interpreting TOC curves
It is common to report the area under the curve (AUC) to summarize a TOC or ROC curve. However, condensing diagnostic ability into a single number fails to capture the shape of the curve. The following three TOC curves all have an AUC of 0.75 but have different shapes.
This TOC curve on the left exemplifies an instance in which the index variable has a high diagnostic ability at high thresholds near the origin, but random diagnostic ability at low thresholds near the upper right of the curve. The curve shows accurate diagnosis of presence until the curve reaches a threshold of 86. The curve then levels off and predicts around the random line.
This TOC curve exemplifies an instance in which the index variable has a medium diagnostic ability at all thresholds. The curve is consistently above the random line.
This TOC curve exemplifies an instance in which the index variable has random diagnostic ability at high thresholds and high diagnostic ability at low thresholds. The curve follows the random line at the highest thresholds near the origin, then the index variable diagnoses absence correctly as thresholds decrease near the upper right corner.
Area under the curve
When measuring diagnostic ability, a commonly reported measure is the area under the curve (AUC). The AUC is calculable from both the TOC and the ROC; its value is identical for the same data whether it is computed from a TOC curve or a ROC curve. The AUC indicates the probability that the diagnosis ranks a randomly chosen observation of Boolean presence higher than a randomly chosen observation of Boolean absence.
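This probabilistic interpretation can be computed directly. A minimal sketch (illustrative scores, not data from the article): the AUC is the fraction of positive–negative score pairs in which the positive observation receives the higher score, with ties counted as half — the Mann–Whitney U convention.

```python
def auc_probability(pos_scores, neg_scores):
    """AUC as P(random positive outranks random negative), ties counted half."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Hypothetical index scores: one negative (0.7) outranks one positive (0.4).
print(auc_probability([0.9, 0.4], [0.1, 0.7]))  # -> 0.75
```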
The AUC is appealing to many researchers because it summarizes diagnostic ability in a single number; however, it has come under critique as a potentially misleading measure, especially for spatially explicit analyses.
Some features of the AUC that draw criticism include the fact that 1) AUC ignores the thresholds; 2) AUC summarizes the test performance over regions of the TOC or ROC space in which one would rarely operate; 3) AUC weighs omission and commission errors equally; 4) AUC does not give information about the spatial distribution of model errors; and, 5) the selection of spatial extent highly influences the rate of accurately diagnosed absences and the AUC scores.
However, most of those criticisms apply to many other metrics.
When using normalized units, the area under the curve (often referred to as simply the AUC) is equal to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one (assuming 'positive' ranks higher than 'negative'). This can be seen as follows: the area under the curve is given by (the integral boundaries are reversed as large T has a lower value on the x-axis)

$$A = \int_{\infty}^{-\infty} \mathrm{TPR}(T)\,\mathrm{FPR}'(T)\,dT = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} I(T' > T)\,f_1(T')\,f_0(T)\,dT'\,dT = P(X_1 > X_0),$$

where $X_1$ is the score for a positive instance and $X_0$ is the score for a negative instance, and $f_1$ and $f_0$ are the probability densities as defined in the previous section.
It can further be shown that the AUC is closely related to the Mann–Whitney U, which tests whether positives are ranked higher than negatives. It is also equivalent to the Wilcoxon test of ranks. The AUC is related to the Gini coefficient ($G_1$) by the formula $G_1 = 2\,\mathrm{AUC} - 1$, where:

$$G_1 = 1 - \sum_{k=1}^{n} (X_k - X_{k-1})(Y_k + Y_{k-1}).$$
In this way, it is possible to calculate the AUC by using an average of a number of trapezoidal approximations.
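The trapezoidal calculation and its link to the Gini coefficient can be sketched as follows (a minimal illustration with made-up curve points, not data from the article):

```python
def auc_trapezoidal(points):
    """Area under a curve given (x, y) points sorted by x, via the trapezoid rule."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0  # one trapezoid per adjacent pair
    return area

curve = [(0.0, 0.0), (0.5, 1.0), (1.0, 1.0)]  # hypothetical normalized curve
auc = auc_trapezoidal(curve)
gini = 2 * auc - 1  # G1 = 2*AUC - 1, as in the formula above
print(auc, gini)    # -> 0.75 0.5
```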
It is also common to calculate the area under the TOC convex hull (ROC AUCH = ROCH AUC) as any point on the line segment between two prediction results can be achieved by randomly using one or the other system with probabilities proportional to the relative length of the opposite component of the segment. It is also possible to invert concavities – just as in the figure the worse solution can be reflected to become a better solution; concavities can be reflected in any line segment, but this more extreme form of fusion is much more likely to overfit the data.
Another problem with TOC AUC is that reducing the TOC curve to a single number ignores the fact that it is about the tradeoffs between the different systems or performance points plotted and not the performance of an individual system, as well as ignoring the possibility of concavity repair, so that related alternative measures such as informedness or DeltaP are recommended. These measures are essentially equivalent to the Gini for a single prediction point with DeltaP' = informedness = 2AUC-1, whilst DeltaP = markedness represents the dual (viz. predicting the prediction from the real class) and their geometric mean is the Matthews correlation coefficient.
Whereas TOC AUC varies between 0 and 1 — with an uninformative classifier yielding 0.5 — the alternative measures known as informedness, Certainty and Gini coefficient (in the single parameterization or single system case) all have the advantage that 0 represents chance performance whilst 1 represents perfect performance, and −1 represents the "perverse" case of full informedness always giving the wrong response. Bringing chance performance to 0 allows these alternative scales to be interpreted as Kappa statistics. Informedness has been shown to have desirable characteristics for machine learning versus other common definitions of Kappa such as Cohen kappa and Fleiss kappa.
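For a single prediction point, these measures are simple functions of the two-by-two table. A sketch using the threshold-74 counts from the example earlier in the article (hits 3, misses 7, false alarms 4, correct rejections 16):

```python
import math

def point_measures(hits, misses, false_alarms, correct_rejections):
    """Informedness, markedness, and their geometric mean (the Matthews
    correlation coefficient) from one two-by-two contingency table."""
    tpr = hits / (hits + misses)                                    # hit rate
    tnr = correct_rejections / (correct_rejections + false_alarms)  # specificity
    ppv = hits / (hits + false_alarms)                              # precision
    npv = correct_rejections / (correct_rejections + misses)
    informedness = tpr + tnr - 1
    markedness = ppv + npv - 1
    # MCC shares its sign with informedness and markedness for a 2x2 table.
    mcc = math.copysign(math.sqrt(abs(informedness * markedness)), informedness)
    return informedness, markedness, mcc

print(point_measures(3, 7, 4, 16))
```

With these counts the sketch gives informedness ≈ 0.10, markedness ≈ 0.12 and MCC ≈ 0.11 — weak but better-than-chance diagnosis, consistent with the 0-for-chance scaling described above.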
Sometimes it can be more useful to look at a specific region of the TOC curve rather than at the whole curve. It is possible to compute partial AUC. For example, one could focus on the region of the curve with low false positive rate, which is often of prime interest for population screening tests. Another common approach for classification problems in which P ≪ N (common in bioinformatics applications) is to use a logarithmic scale for the x-axis.
See also
Brier score
Coefficient of determination
Constant false alarm rate
Detection error tradeoff
Detection theory
F1 score
False alarm
Precision and recall
ROCCET
Receiver operating characteristic
References
Further reading
External links
An Introduction to the Total Operating Characteristic: Utility in Land Change Model Evaluation
TOC utilization in Wildfire Risk
How to run the TOC Package in R
TOC R package on Github
Excel Workbook for generating TOC curves
Google Earth Engine TOC Curve Tutorial
Google Earth Engine TOC Curve Source Code
Boolean algebra
Detection theory
Data mining
Biostatistics
Statistical classification
Summary statistics for contingency tables | Total operating characteristic | [
"Mathematics"
] | 2,491 | [
"Boolean algebra",
"Fields of abstract algebra",
"Mathematical logic"
] |
53,882,691 | https://en.wikipedia.org/wiki/Three-finger%20protein | Three-finger proteins or three-finger protein domains (3FP or TFPD) are a protein superfamily consisting of small, roughly 60-80 amino acid residue protein domains with a common tertiary structure: three beta strand loops extended from a hydrophobic core stabilized by disulfide bonds. The family is named for the outstretched "fingers" of the three loops. Members of the family have no enzymatic activity, but are capable of forming protein-protein interactions with high specificity and affinity. The founding members of the family, also the best characterized by structure, are the three-finger toxins found in snake venom, which have a variety of pharmacological effects, most typically by disruption of cholinergic signaling. The family is also represented in non-toxic proteins, which have a wide taxonomic distribution; 3FP domains occur in the extracellular domains of some cell-surface receptors as well as in GPI-anchored and secreted globular proteins, usually involved in signaling.
Three-finger toxins
The founding members of the 3FP family are the three-finger toxins (3FTx) often found in snake venom. 3FTx proteins are widely distributed in venomous snake families, but are particularly enriched in the family Elapidae, in which the relative proportion of 3FTx to other venom toxins can reach 95%. Many 3FTx proteins are neurotoxins, though the mechanism of toxicity varies significantly even among proteins of relatively high sequence identity; common protein targets include those involved in cholinergic signaling, such as the nicotinic acetylcholine receptors, muscarinic acetylcholine receptors, and acetylcholinesterase. Another large subfamily of 3FTx proteins is the cardiotoxins (also known as cytotoxins or cytolysins); this group is directly cytotoxic most likely due to interactions with phospholipids and possibly other components of the cell membrane.
Ly6/uPAR family
The Ly6/uPAR family broadly describes a gene family containing three-finger protein domains that are not toxic and not venom components; these are often known as LU domains and can be found in the extracellular domains of cell-surface receptors and in either GPI-anchored or secreted globular proteins. The family is named for two representative groups of members, the small globular protein lymphocyte antigen 6 (LY6) family and the urokinase plasminogen activator receptor (uPAR). Other receptors with LU domains include members of the transforming growth factor beta receptor (TGF-beta) superfamily, such as the activin type 2 receptor; and bone morphogenetic protein receptor, type IA. Other LU domain proteins are small globular proteins such as CD59 antigen, LYNX1, SLURP1, and SLURP2.
Many LU domain containing proteins are involved in cholinergic signaling and bind acetylcholine receptors, notably linking their function to a common mechanism of 3FTx toxicity. Members of the Ly6/uPAR family are believed to be the evolutionary ancestors of 3FTx toxins. Other LU proteins, such as the CD59 antigen, have well-studied functions in regulation of the immune system.
Gene structure
Snake three-finger toxins and the Ly6/uPAR family members share a common gene structure, typically consisting of two introns and three exons. The sequence of the first exon is generally well conserved compared to the other two. The third exon contains the major differentiating features between the two groups, as this is where the C-terminal GPI-anchor peptide common among the Ly6/uPAR globular proteins is encoded.
Evolution and taxonomic distribution
Proteins of the general three-finger fold are widely distributed among metazoans. A 2008 bioinformatics study identified about 45 examples of such proteins, containing up to three three-finger domains, represented in the human genome. A more recent profile of the Ly6/uPAR gene family identified 35 human and at least 61 mouse family members in the organisms' respective genomes.
The three-finger protein family is thought to have expanded through gene duplication in the snake lineage. 3FTx toxins are considered restricted to the Caenophidia, the taxon containing all venomous snakes; however at least one homolog has been identified in the Burmese python, a closely related subgroup. Traditionally, 3FTx genes have been thought to have evolved by repeated events of duplication followed by neofunctionalization and recruitment to gene expression patterns restricted to venom glands. However, it has been argued that this process should be extremely rare and that subfunctionalization better explains the observed distribution. More recently, non-toxic 3FP proteins have been found to be widely expressed in many different tissues in snakes, prompting the alternative hypothesis that proteins of restricted expression in saliva were selectively recruited for toxic functionality.
References
External links
SCOP: SSF57302
CATH: 2.10.60.10
Protein folds
Protein families | Three-finger protein | [
"Biology"
] | 1,055 | [
"Protein families",
"Protein classification"
] |
53,886,302 | https://en.wikipedia.org/wiki/Optomechatronics | In engineering, optomechatronics is a field that investigates the integration of optical components and technology into mechatronic systems. The optical components in these systems are used as sensors to measure mechanical quantities such as surface structure and orientation. Optical sensors are used in a feedback loop as part of control systems for mechatronic devices. Optomechatronics has applications in areas such as adaptive optics, vehicular automation, optofluidics, optical tweezers and thin-film technology.
References
External links
International Society for Optomechatronics
International Journal of Optomechatronics
Electromechanical engineering
Optical metrology | Optomechatronics | [
"Engineering"
] | 132 | [
"Electrical engineering",
"Electromechanical engineering",
"Mechanical engineering by discipline"
] |
60,227,914 | https://en.wikipedia.org/wiki/Atmosfair | Atmosfair is an independent German non-profit organization which offers offsets for greenhouse gases emitted by aircraft, cruise ships, long-distance coaches, and events. The organization, founded in 2005, develops and finances small-scale energy efficiency and renewable energy projects in developing countries, which lead to reduced carbon emissions. Atmosfair has repeatedly won acclaim for operating with a high degree of transparency and accountability, as well as efficient use of funds.
Its sole shareholder, the Foundation for Sustainability, grew out of a joint research project by the Federal Ministry of the Environment and the organization Germanwatch. Atmosfair's acting patrons are Klaus Töpfer, Mojib Latif and Hartmut Graßl. Furthermore, the organization is a signatory of the Initiative for a transparent civil society.
Atmosfair is registered in Bonn and is run from its office in Berlin.
Method
Atmosfair has developed an emission calculator that computes the different greenhouse gases emitted when travelling and translates them into a corresponding amount of carbon dioxide based on their climate impact. For flights, the calculations are based on the departure and arrival airports as well as the flight class and plane model. For cruises, the defining factors are the type of ship, the type of cabin, and the number of days spent at sea. The calculations also include climate-relevant emissions other than carbon dioxide, such as nitrogen oxides and soot particles, which contribute to the greenhouse effect especially at high altitudes (e.g. through ozone buildup or condensation trails). According to a study led by the German Federal Ministry of the Environment, this corresponds to a factor of 3–5, meaning that a liter of aviation fuel has a warming effect 3–5 times stronger than that of its carbon-dioxide emissions alone.
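As an arithmetic illustration only — the fuel figures below are generic rough values, not Atmosfair's actual calculator parameters — the factor-of-3–5 weighting could be applied like this:

```python
# Rough, generic values for illustration; not Atmosfair's calculator inputs.
CO2_PER_LITER_KEROSENE_KG = 2.5  # assumed direct CO2 per liter of jet fuel
RFI_FACTOR = 3                   # lower end of the 3-5 range cited in the text

def warming_as_co2_equivalent_kg(liters_burned, factor=RFI_FACTOR):
    """Scale direct CO2 emissions by the high-altitude factor to express
    the total warming effect as a CO2-equivalent mass."""
    return liters_burned * CO2_PER_LITER_KEROSENE_KG * factor

print(warming_as_co2_equivalent_kg(1000))  # -> 7500.0
```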
The calculator is available on the Atmosfair website and free to use. After calculating their emissions, the customer can make a donation corresponding to the amount of emissions they want to offset. The corresponding amount of emissions will then be cut elsewhere through climate change mitigation projects.
Atmosfair exclusively supports climate change mitigation projects that fall within the Clean Development Mechanism (CDM, Kyoto Protocol) and comply with the Gold Standard. Projects supported by Atmosfair therefore generate Gold Standard CERs (Certified Emission Reductions), which are then retired accordingly. Atmosfair's policy does not provide for the inclusion of Verified Emission Reductions (VERs), as they do not require the liability of an external auditor.
Beyond flights and cruise ships, Atmosfair also offers carbon offsets for events such as conferences or conventions (MICE).
Funding
The organization is predominantly financed by donations, supplemented by interest income from reserves and income generated by the sale of carbon emission calculation software. Climate change mitigation projects and technology purchases conducted on behalf of customers are an additional source of revenue.
According to the annual reports of 2009 to 2017, at least 90% of donations to Atmosfair were spent directly on projects in developing countries. Since projects are planned to run for many years, payments are made according to need. It can take up to two years for a donation to reach its intended project.
Atmosfair has thus stayed well within its goal of limiting expenditures for project staff, customer service staff and administration (rent, IT etc.) to 20%.
Projects
In 2017, Atmosfair financed projects in the following four categories:
Efficient cookstoves: distribution of efficient cookstoves in cooperation with the Global Alliance for Clean Cookstoves. From 2009 to 2017, 763,780 tons of carbon emissions were saved. On average, a stove saves up to 3 tons of carbon per year. The project areas are Nigeria, Rwanda, Cameroon, Lesotho and India.
Biogas and biomass: construction of small-scale biogas plants, electricity production from harvest residues, composting of organic waste. Between 2007 and 2017, savings amounted to 1,459,800 tons of carbon emissions. On average, a biogas/biomass plant saves around four tons of carbon a year. Project areas are India, Kenya, Thailand and Nepal.
Wind, water and sun: renewable energy projects saved 549,900 tons of carbon emissions between 2007 and 2017. The hydroelectric power station in Honduras saves around 73 tons of CO2 per day, and the solar and wind energy stations save around 1.1 tons of CO2 per year.
Environmental education: school projects at German schools.
Awards
In 2010, a study conducted by the Eberswalde University for Sustainable Development reviewing carbon offset providers in Germany found Atmosfair to be the only provider to achieve the overall rating "very good". In the categories "realistic calculation", "offsetting quality" and "consumer communication", Atmosfair scored the rating "very good".
In 2006 the climate department of the American Tufts University reviewed 13 organizations offering carbon offsetting. Evaluation criteria were transparency, precision of the calculations, offset prices and administration costs. Atmosfair was awarded the rating "very good" along with three other providers.
Furthermore, Atmosfair was awarded first place in following rankings:
2018: the German foundation Stiftung Warentest – "Finance test – carbon offsetting: these providers do most for climate change mitigation".
2010: Atmosfair was awarded the distinction "unconditionally recommendable" ("uneingeschränkt empfehlenswert") along with two other providers by the Verbraucherzentrale Bundesverband.
2009: A comparative study conducted by the University of Graz in which Atmosfair was the only subject that scored "very recommendable". Further, it reached the highest score in the main categories "offsetting quality" and "transparency".
2008: A study conducted by the Free University of Brussels rated Atmosfair as the most recommendable carbon-offset provider.
2007: Atmosfair received the highest ranking (8/10 points) in a study by BBC Wildlife magazine.
2007: the Swedish daily newspaper Aftonbladet also ranked Atmosfair as the top provider of carbon offsets.
Atmosfair is consistently listed among front runners in other rankings and comparative studies.
Environmental integrity
The advisory board, made up of representatives of the Federal Ministry of the Environment, Nature Conservation and Nuclear Safety (BMU), acts to ensure the organization's compliance with the standards stated in the annual reports. These include refusing donations from donors whose carbon calculations do not comply with Atmosfair standards. All projects must comply with the CDM and the Gold Standard; the climate effect of flights must be calculated according to the latest scientific findings, and trivializing terms such as "climate neutral" must be avoided.
In 2008, the introduction of additional pollutants into the emissions calculations rendered a cooperation between Atmosfair and Lufthansa impossible. The stance adopted by Atmosfair was well received by the scientific and environmental protection communities as well as by the media.
References
External links
www.atmosfair.de/en/air_travel_and_climate/atmosfair_airline_index/
Environmental organisations based in Germany
Transport and the environment
Aviation and the environment
Greenhouse gas emissions
Travel | Atmosfair | [
"Physics",
"Chemistry"
] | 1,497 | [
"Greenhouse gas emissions",
"Travel",
"Transport and the environment",
"Physical systems",
"Transport",
"Greenhouse gases"
] |
60,228,117 | https://en.wikipedia.org/wiki/GPS%20week%20number%20rollover | The GPS week number rollover is a phenomenon that happens every 1,024 weeks, which is about 19.6 years. The Global Positioning System (GPS) broadcasts a date, including a week number counter that is stored in only ten binary digits, whose range is therefore 0–1,023. After 1,023, an integer overflow causes the internal value to roll over, changing to zero again. Software that is not coded to anticipate the rollover to zero may stop working or could be moved back in time by a multiple of approximately 20 years. GPS is not only used for positioning, but also for accurate time. Time is used to accurately synchronize payment operations, broadcasters, and mobile operators.
1999 occurrence
The first rollover took place midnight (UTC) August 21 to 22, 1999.
NavCen issued an advisory prior to the rollover stating that some devices would not tolerate the rollover. Because of the relatively limited use of GPS during the 1999 rollover, disruption was minor.
2019 occurrence
The second rollover occurred on the night of April 6 to 7, 2019, when GPS Week 2,047, represented as 1,023 in the counter, advanced and rolled over to 0 within the counter. The United States Department of Homeland Security, the International Civil Aviation Organization, and others issued a warning about this event.
Effects
Products known to have been affected by the 2019 rollover include Honeywell's flight management and navigation software, which caused delays for a KLM flight and cancellations of numerous flights in China after technicians failed to patch the software.
Furthermore, the New York City Wireless Network (NYCWiN), a private network for New York City's municipal services, crashed.
Other products affected by the rollover include cellphones sold in 2013 or earlier, certain types of older Vaisala radiosonde ground stations (which suspended launches at some stations for up to two weeks), NOAA's weather buoys,
many scientific instruments,
and consumer GPS navigation devices.
Prior to the return from daylight saving time to standard time on the morning of November 3, 2019, Apple issued a warning to owners of iPhone and iPad devices sold before 2012 to update or risk losing Internet connectivity.
Some Furuno GPS models had an internal rollover on January 2, 2022. If the equipment was not updated with the latest software version, the equipment's date would no longer be displayed correctly.
Honda and Acura cars manufactured between 2004 and 2012 containing GPS navigation systems incorrectly displayed the year 2022 as 2002, with the time offset by several minutes. The problem was due to an overflow relating to the GPS epoch.
All Porsche models with PCM2.1 are also affected according to bulletin #1904 released by Porsche on December 20, 2019.
2038 occurrence
The third rollover will occur between November 20 and 21, 2038. This is unrelated to the Year 2038 problem, which will occur in January of that year.
2137 occurrence
The above rollovers are due to a ten-bit week number; the more recent CNAV protocol, successor to the original NAV protocol, uses thirteen-bit week numbers, which amounts to a 157-year cycle; therefore, using the same epoch of 1980, the first rollover will not be until 2137.
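The arithmetic of both counters can be sketched as follows. This is an illustrative disambiguation strategy (pick the latest epoch whose week does not start after a reference date), not any particular receiver's firmware:

```python
from datetime import datetime

GPS_EPOCH = datetime(1980, 1, 6)  # start of GPS week 0

def resolve_week(broadcast_week, reference, bits=10):
    """Disambiguate a truncated GPS week number.

    The counter holds only `bits` bits, so the true week number is
    broadcast_week + k * 2**bits; choose the largest such value whose
    week does not start after the reference date.
    """
    period = 2 ** bits  # 1024 weeks for NAV; 8192 weeks for CNAV (bits=13)
    ref_week = (reference - GPS_EPOCH).days // 7
    k = (ref_week - broadcast_week) // period
    return broadcast_week + k * period

# A counter value of 0 seen in April 2019 is really week 2048 (second rollover).
print(resolve_week(0, datetime(2019, 4, 10)))  # -> 2048
```

With `bits=13` the period becomes 8,192 weeks, so a thirteen-bit CNAV week number observed today resolves without any rollover correction — consistent with the first CNAV rollover not occurring until 2137.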
See also
Time formatting and storage bugs
Structure of the time-encoding components of GPS signals, NAV and CNAV versions
References
August 1999
April 2019
2039 in science
2100s
Global Positioning System
Software bugs
Timekeeping
Time formatting and storage bugs | GPS week number rollover | [
"Physics",
"Technology",
"Engineering"
] | 724 | [
"Wireless locating",
"Physical quantities",
"Time",
"Timekeeping",
"Aircraft instruments",
"Aerospace engineering",
"Global Positioning System",
"Spacetime"
] |
60,232,351 | https://en.wikipedia.org/wiki/Realm%20%28virology%29 | In virology, realm is the highest taxonomic rank established for viruses by the International Committee on Taxonomy of Viruses (ICTV), which oversees virus taxonomy. Six virus realms are recognized and united by specific highly conserved traits:
Adnaviria, which contains archaeal filamentous viruses with A-form double-stranded (ds) DNA genomes encoding a unique alpha-helical major capsid protein;
Duplodnaviria, which contains all dsDNA viruses that encode the HK97-fold major capsid protein;
Monodnaviria, which contains all single-stranded DNA (ssDNA) viruses that encode a HUH superfamily endonuclease and their descendants;
Riboviria, which contains all RNA viruses that encode RNA-dependent RNA polymerase and all viruses that encode reverse transcriptase;
Ribozyviria, which contains hepatitis delta-like viruses with circular, negative-sense ssRNA genomes;
and Varidnaviria, which contains all dsDNA viruses that encode a vertical jelly roll major capsid protein.
The rank of realm corresponds to the rank of domain used for cellular life, but differs in that viruses in a realm do not necessarily share a common ancestor based on common descent nor do the realms share a common ancestor. Instead, realms group viruses together based on specific traits that are highly conserved over time, which may have been obtained on a single occasion or multiple occasions. As such, each realm represents at least one instance of viruses coming into existence. While historically it was difficult to determine deep evolutionary relations between viruses, in the 21st century methods such as metagenomics and cryogenic electron microscopy have enabled such research to occur, which led to the establishment of Riboviria in 2018, three realms in 2019, and two in 2020.
Naming
The names of realms consist of a descriptive first part and the suffix -viria, which is the suffix used for virus realms. The first part of Duplodnaviria means "double DNA", referring to dsDNA viruses, the first part of Monodnaviria means "single DNA", referring to ssDNA viruses, the first part of Riboviria is taken from ribonucleic acid (RNA), and the first part of Varidnaviria means "various DNA". For viroids, the suffix is designated as -viroidia, and for satellites, the suffix is -satellitia, but as of 2019 neither viroid nor satellite realms have been designated.
Realms
Duplodnaviria
Duplodnaviria contains double-stranded DNA (dsDNA) viruses that encode a major capsid protein (MCP) that has the HK97 fold. Viruses in the realm also share a number of other characteristics involving the capsid and capsid assembly, including an icosahedral capsid shape and a terminase enzyme that packages viral DNA into the capsid during assembly. Two groups of viruses are included in the realm: tailed bacteriophages, which infect prokaryotes and are assigned to the order Caudovirales, and herpesviruses, which infect animals and are assigned to the order Herpesvirales.
The relation between caudoviruses and herpesviruses is not certain, as they may either share a common ancestor or herpesviruses may be a divergent clade from within Caudovirales. A common trait among duplodnaviruses is that they cause latent infections without replication while still being able to replicate in the future. Tailed bacteriophages are ubiquitous worldwide, important in marine ecology, and the subject of much research. Herpesviruses are known to cause a variety of epithelial diseases, including herpes simplex, chickenpox and shingles, and Kaposi's sarcoma.
Monodnaviria
Monodnaviria contains single-stranded DNA (ssDNA) viruses that encode an endonuclease of the HUH superfamily that initiates rolling circle replication and all other viruses descended from such viruses. The prototypical members of the realm are called CRESS-DNA viruses and have circular ssDNA genomes. ssDNA viruses with linear genomes are descended from them, and in turn some dsDNA viruses with circular genomes are descended from linear ssDNA viruses.
CRESS-DNA viruses include three kingdoms that infect prokaryotes: Loebvirae, Sangervirae, and Trapavirae. The kingdom Shotokuvirae contains eukaryotic CRESS-DNA viruses and the atypical members of Monodnaviria. Eukaryotic monodnaviruses are associated with many diseases, and they include papillomaviruses and polyomaviruses, which cause many cancers, and geminiviruses, which infect many economically important crops.
Riboviria
Riboviria contains all RNA viruses that encode an RNA-dependent RNA polymerase (RdRp), assigned to the kingdom Orthornavirae, and all reverse transcribing viruses, i.e. all viruses that encode a reverse transcriptase (RT), assigned to the kingdom Pararnavirae. These enzymes are vital in the viral life cycle, as RdRp transcribes viral mRNA and replicates the genome, and RT likewise replicates the genome. Riboviria mostly contains eukaryotic viruses, and most eukaryotic viruses, including most human, animal, and plant viruses, belong to the realm.
Most widely known viral diseases are caused by viruses in Riboviria, which includes influenza viruses, HIV, coronaviruses, ebolaviruses, and the rabies virus, as well as the first virus to be discovered, Tobacco mosaic virus. Reverse transcribing viruses are a major source of horizontal gene transfer by means of becoming endogenized in their host's genome, and a significant portion of the human genome consists of this viral DNA.
Varidnaviria
Varidnaviria contains DNA viruses that encode MCPs with a jelly roll (JR) fold in which the fold is perpendicular to the surface of the viral capsid. Many members also share a variety of other characteristics, including a minor capsid protein with a single JR fold, an ATPase that packages the genome during capsid assembly, and a common DNA polymerase. Two kingdoms are recognized: Helvetiavirae, whose members have MCPs with a single vertical JR fold, and Bamfordvirae, whose members have MCPs with two vertical JR folds.
Marine viruses in Varidnaviria are ubiquitous worldwide and, like tailed bacteriophages, play an important role in marine ecology. Most identified eukaryotic DNA viruses belong to the realm. Notable disease-causing viruses in Varidnaviria include adenoviruses, poxviruses, and the African swine fever virus. Poxviruses have been highly prominent in the history of modern medicine, especially Variola virus, which caused smallpox. Many varidnaviruses are able to become endogenized, and a peculiar example of this are virophages, which confer protection for their hosts against giant viruses during infection.
Adnaviria
Realm Adnaviria unifies archaeal filamentous viruses with linear A-form double-stranded DNA genomes and characteristic major capsid proteins unrelated to those encoded by other known viruses. The realm currently includes viruses from three families, Lipothrixviridae, Rudiviridae, and Tristromaviridae, all infecting hyperthermophilic archaea. The nucleoprotein helix of adnaviruses is composed of asymmetric units containing two MCP molecules, a homodimer in the case of rudivirids and a heterodimer of paralogous MCPs in the case of lipothrixvirids and tristromavirids. The MCPs of ligamenviral particles have a unique α-helical fold first found in the MCP of rudivirid Sulfolobus islandicus rod-shaped virus 2 (SIRV2). All members of the Adnaviria share a characteristic feature in that the interaction between the MCP dimer and the linear dsDNA genome maintains the DNA in the A form. Consequently, the entire genome adopts the A form in virions. Like many structurally related viruses in the two other realms of dsDNA viruses (Duplodnaviria and Varidnaviria), there is no detectable sequence similarity among the capsid proteins of viruses from different tokiviricete families, suggesting a vast undescribed diversity of viruses in this part of the virosphere.
Ribozyviria
Ribozyviria is characterised by the presence of genomic and antigenomic ribozymes of the Deltavirus type. Additional common features include a rod-like structure and a RNA-binding "delta antigen" encoded in the genome.
Origins
In general, virus realms have no genetic relation to each other based on common descent, in contrast to the three domains of cellular life—Archaea, Bacteria, and Eukarya—which share a common ancestor. Likewise, viruses within each realm are not necessarily descended from a common ancestor since realms group viruses together based on highly conserved traits, not common ancestry, which is used as the basis for the taxonomy of cellular life. As such, each virus realm is considered to represent at least one instance of viruses coming into existence. By realm:
Adnaviria is of unknown origin, but it has been suggested that viruses of Adnaviria have potentially existed for a long time, as it is thought that they may have infected the last archaeal common ancestor.
Duplodnaviria is either monophyletic or polyphyletic and may predate the last universal common ancestor (LUCA) of cellular life. The exact origin of the realm is not known, but the HK97-fold MCP encoded by all members is, outside the realm, found only in encapsulins, a type of nanocompartment found in bacteria, although the relation between Duplodnaviria and encapsulins is not fully understood.
Monodnaviria is polyphyletic and appears to have emerged multiple times from bacterial and archaeal circular plasmids, which are extra-chromosomal DNA molecules that live inside of bacteria and archaea and which self-replicate.
Riboviria is monophyletic or polyphyletic. The reverse transcriptase of the kingdom Pararnavirae likely evolved on a single occasion from a retrotransposon, a type of self-replicating DNA molecule that replicates via reverse transcription. The origin of the RdRp of Orthornavirae is less certain: it is believed either to originate from a bacterial group II intron that encodes reverse transcriptase, or to predate the LUCA, descending from the ancient RNA world and preceding the reverse transcriptases of cellular life. A larger study (2022), in which new lineages (phyla) were described, favored the hypothesis that RNA viruses descend from the RNA world, suggesting that retroelements of cellular life originated from an ancestor related to the phylum Lenarviricota and that members of the newly discovered phylum Taraviricota would be the ancestors of all RNA viruses.
Ribozyviria is of unknown origin. It has been proposed that they may have derived from retrozymes (a family of retrotransposons) or a viroid-like element (i.e. viroids and satellites) with capsid protein capture.
Varidnaviria is either monophyletic or polyphyletic and may predate the LUCA. The kingdom Bamfordvirae is likely derived from the other kingdom, Helvetiavirae, via fusion of two MCPs to yield an MCP with two jelly roll folds instead of one. The single jelly roll (SJR) fold MCPs of Helvetiavirae show a relation to a group of proteins that contain SJR folds, including the Cupin superfamily and nucleoplasmins. Archaeal dsDNA viruses in Portogloboviridae contain just one vertical SJR-MCP, which appears to have been duplicated to two in Halopanivirales, so the MCP of Portogloboviridae likely represents an earlier stage in the evolutionary history of Varidnaviria MCPs. However, another scenario was later proposed in which the kingdoms Bamfordvirae and Helvetiavirae originated independently, suggesting that the Bamfordvirae DJR-MCP shows a relation to the bacterial DUF 2961 protein and leading to a revision of the realm Varidnaviria. It is possible that the Bamfordvirae DJR-MCP evolved from this protein independently; however, the origin of the DJR-MCP by duplication of the Helvetiavirae SJR-MCP cannot yet be ruled out. A molecular phylogenetic analysis suggests that Helvetiavirae had no involvement in the origin of the Bamfordvirae DJR-MCP and that it probably derives from the class Tectiliviricetes.
While the realms generally have no genetic relation to each other, there are some exceptions:
Viruses in the family Podoviridae in Duplodnaviria encode a DNA polymerase that is related to the DNA polymerases encoded by many members of Varidnaviria.
Eukaryotic viruses in the kingdom Shotokuvirae in Monodnaviria were created on multiple occasions by recombination events that combined the DNA of ancestral plasmids with complementary DNA (cDNA) of positive sense RNA viruses in Riboviria, by which ssDNA viruses in Shotokuvirae obtained capsid proteins from RNA viruses.
The family Bidnaviridae in Monodnaviria was created via integration of a parvovirus (of Monodnaviria) genome into a polinton, a virus-like self-replicating DNA molecule related to viruses in Varidnaviria. Furthermore, bidnaviruses encode a receptor-binding protein inherited from reoviruses in the realm Riboviria.
Subrealm
In virology, the second highest taxonomy rank established by the ICTV is subrealm, which is the rank below realm. Subrealms of viruses use the suffix -vira, viroid subrealms use the suffix -viroida, and satellites use the suffix -satellitida. The rank below subrealm is kingdom. As of 2019, no taxa are described at the rank of subrealm.
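The naming suffixes above are a fixed convention, which can be illustrated with a small sketch (Python is used here only for illustration; the dictionary and function names are this sketch's own, not ICTV software):

```python
# Subrealm-rank suffixes as established by the ICTV:
# viruses -> -vira, viroids -> -viroida, satellites -> -satellitida.
SUBREALM_SUFFIX = {
    "virus": "vira",
    "viroid": "viroida",
    "satellite": "satellitida",
}

def subrealm_name(stem, entity_type):
    """Form a subrealm name by appending the rank suffix to a name stem."""
    return stem + SUBREALM_SUFFIX[entity_type]
```

For example, a hypothetical virus subrealm built on the stem "Ribo" would be named "Ribovira".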
History
Prior to the 21st century, it was believed that deep evolutionary relations between viruses could not be discovered, owing to their high mutation rates and small numbers of genes. Because of this, the highest taxonomic rank for viruses from 1991 to 2017 was order. In the 21st century, however, various methods have been developed that have enabled these deeper evolutionary relationships to be studied, including metagenomics, which has identified many previously unidentified viruses, and comparison of highly conserved traits, leading to the desire to establish a higher-level taxonomy for viruses.
In two votes in 2018 and 2019, the ICTV agreed to adopt a 15-rank classification system for viruses, ranging from realm to species. Riboviria was established in 2018 based on phylogenetic analysis showing the RNA-dependent polymerases to be monophyletic; Duplodnaviria was established in 2019 based on increasing evidence that tailed bacteriophages and herpesviruses share many traits; Monodnaviria was established in 2019 after the relation and origin of CRESS-DNA viruses were resolved; and Varidnaviria was established in 2019 based on the shared characteristics of member viruses.
See also
Virus classification
References
Further reading
Virus taxonomy | Realm (virology) | [
"Biology"
] | 3,323 | [
"Virus taxonomy",
"Viruses",
"Taxonomy (biology)"
] |
60,233,631 | https://en.wikipedia.org/wiki/Neodymium%20nitrate | Neodymium nitrate is an inorganic compound with the formula Nd(NO3)3. It is typically encountered as the hexahydrate, Nd(NO3)3·6H2O, which is more accurately formulated as [Nd(NO3)3(H2O)4]·2H2O to reflect the crystal structure. It decomposes to NdONO3 at elevated temperature.
It is used in the extraction and purification of neodymium from its ores.
References
Neodymium(III) compounds
Nitrates | Neodymium nitrate | [
"Chemistry"
] | 111 | [
"Inorganic compounds",
"Oxidizing agents",
"Inorganic compound stubs",
"Nitrates",
"Salts"
] |
60,234,530 | https://en.wikipedia.org/wiki/Anfinsen%20cage | In molecular biology, an Anfinsen cage is a model for protein folding used by some cells to improve the production speed and yield of accurate products. Space within a cell is generally limited, and a protein's folding process can be interrupted or modified if it wanders too close to outside forces while it is still in the process of forming. Even worse, the unformed molecules may begin to aggregate uncontrollably, potentially resulting in a disease such as Alzheimer's.
To prevent this, some cells will enclose actively folding proteins within one or more chaperones, forming a "cage" around them to protect them during their transformation. These cages can also serve to isolate incorrectly formed proteins that may otherwise affect other processes if it were allowed to float freely. The model is named after Christian B. Anfinsen who first showed in vitro that pure denatured proteins will sometimes refold spontaneously without an energy source.
References
Protein folding | Anfinsen cage | [
"Chemistry"
] | 193 | [
"Molecular biology stubs",
"Molecular biology"
] |
60,235,013 | https://en.wikipedia.org/wiki/Yttrium%28III%29%20nitrate | Yttrium(III) nitrate is an inorganic compound, a salt with the formula Y(NO3)3. The hexahydrate is the most common form commercially available.
Preparation
Yttrium(III) nitrate can be prepared by dissolving the corresponding metal oxide in 6 mol/L nitric acid:

Y2O3 + 6 HNO3 → 2 Y(NO3)3 + 3 H2O
Properties
Yttrium(III) nitrate hexahydrate loses its water of crystallization at relatively low temperature. Upon further heating, the basic salt YONO3 is formed. At 600 °C, the thermal decomposition is complete, with Y2O3 as the final product.
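The overall decomposition stoichiometry (hexahydrate to Y2O3) fixes the residual mass fraction of the solid, which can be checked with a short sketch (molar masses are standard approximate values; the helper function is illustrative only):

```python
# Approximate molar masses in g/mol.
M = {"Y": 88.906, "N": 14.007, "O": 15.999, "H": 1.008}

def molar_mass(formula):
    """formula: dict mapping element symbol -> atom count."""
    return sum(M[el] * n for el, n in formula.items())

# Y(NO3)3·6H2O contains Y1 N3 O9 plus 6 H2O, i.e. Y1 N3 O15 H12.
hexahydrate = molar_mass({"Y": 1, "N": 3, "O": 15, "H": 12})  # ~383 g/mol
# Each mole of hydrate yields half a mole of the final product Y2O3.
oxide = molar_mass({"Y": 2, "O": 3})
residual_fraction = (oxide / 2) / hexahydrate  # ~0.295
```

So complete decomposition leaves roughly 29.5% of the starting mass as Y2O3, the kind of plateau observed in thermogravimetric analysis.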
Y(NO3)3·3TBP is formed when tributyl phosphate is used as the extracting solvent.
Uses
Yttrium(III) nitrate is mainly used as a source of Y3+ cations. It is a precursor of some yttrium-containing materials, such as Y4Al2O9, YBa2Cu3O6.5+x and yttrium-based metal-organic frameworks.
It can also be used as a catalyst in organic synthesis.
References
Yttrium compounds
Nitrates | Yttrium(III) nitrate | [
"Chemistry"
] | 233 | [
"Oxidizing agents",
"Nitrates",
"Salts"
] |
60,239,905 | https://en.wikipedia.org/wiki/Niravoline | Niravoline is a chemical compound with diuretic and aquaretic effects; it has been studied for its potential use in cerebral edema and cirrhosis.
It exerts its pharmacological effect as a kappa opioid receptor agonist.
References
Abandoned drugs
Diuretics
Kappa-opioid receptor agonists
3-Nitrophenyl compounds
1-Pyrrolidinyl compounds | Niravoline | [
"Chemistry"
] | 93 | [
"Drug safety",
"Organic compounds",
"Organic compound stubs",
"Abandoned drugs",
"Organic chemistry stubs"
] |
40,928,997 | https://en.wikipedia.org/wiki/Bevis%20Bulmer | Sir Bevis Bulmer (1536–1615) was an English mining engineer during the reigns of Elizabeth I and James I. He has been called "one of the great speculators of that era". Many of the events in his career were recorded by Stephen Atkinson in The Discoveries and Historie of the Gold Mynes in Scotland, compiled in part from a lost manuscript by Bulmer entitled Bulmer's Skill.
Family
According to Tyson, Bevis Bulmer's "origins are shrouded in mystery". However, according to other sources, Bevis Bulmer, born in 1536, was the son of Sir John Bulmer, eldest son and heir of Sir William Bulmer (d. 1531). His mother was Margaret Stafford, said to have been an illegitimate daughter of Edward Stafford, 3rd Duke of Buckingham.
His parents were said to have been drawn into the rebellion of Robert Aske, known as the Pilgrimage of Grace, through the influence of their nephew, Sir Francis Bigod. They were executed with others in early 1537 for their involvement, in consequence of which their lands escheated to the Crown, although some were granted at a later date to Sir George Bowes (1527–1580). The circumstances of their trial and execution were recorded by the author of Wriothesley's Chronicle:
Also the 16 day of May [1537] there were arraigned at Westminster afore the King’s Commissioners, the Lord Chancellor that day being the chief, these persons following: Sir Robert Constable, knight; Sir Thomas Percy, knight, and brother to the Earl of Northumberland; Sir John Bulmer, knight, and Ralph Bulmer, his son and heir; Sir Francis Bigod, knight; Margaret Cheney, after Lady Bulmer by untrue matrimony; George Lumley, esquire; Robert Aske, gentleman, that was captain in the insurrection of the Northern men; and one Hamerton, esquire, all which persons were indicted of high treason against the King, and that day condemned by a jury of knights and esquires for the same, whereupon they had sentence to be drawn, hanged and quartered, but Ralph Bulmer, the son of John Bulmer, was reprieved and had no sentence.
And on the 25 day of May, being the Friday in Whitsun week, Sir John Bulmer, Sir Stephen Hamerton, knights, were hanged and headed; Nicholas Tempest, esquire; Doctor Cockerell, priest; Abbot quondam of Fountains; and Doctor Pickering, friar, were drawn from the Tower of London to Tyburn, and there hanged, bowelled and quartered, and their heads set on London Bridge and divers gates in London.
And the same day Margaret Cheney, "other wife to Bulmer called", was drawn after them from the Tower of London into Smithfield, and there burned according to her judgment, God pardon her soul, being the Friday in Whitsun week; she was a very fair creature, and a beautiful.
Early years
Bulmer began his mining career at some of the former Bulmer properties at Wilton, North Yorkshire, and is said to have been interested in his youth in the iron smelter set up by Sir John Manners at Rievaulx Abbey, a project to which he returned in 1577 when a new smelter was being set up. According to Baldwin, Bulmer's "later surviving water and drainage works betray experience of this old monastic site’s water supply, and ideas illustrated in Georg Agricola’s De Re Metallica".
About 1562 Bulmer founded the lead and calamine mines in the Mendip Hills near Chewton, Somerset. The Mendip ores (calamine and galena) were used by Christopher Schutz from 1565 to 1586 at the smelter newly constructed by the Company of Mineral and Battery Works at Tintern. According to Baldwin, Bulmer was also "on the fringes" of the smelting operations at Dartford in which Schutz refined tons of worthless ore brought from Baffin Island in 1576–78 by Martin Frobisher.
1580s
About 1581 Bulmer visited the silver mines and smelters at Bannow Bay and Clonmines in Wexford.
In 1584 Bulmer and Sir Julius Caesar petitioned the Privy Council for a patent to build lighthouses, which Bulmer was granted. In February 1585 the Admiralty Court commissioned Bulmer and two others to assay the gold bullion on the captured Spanish ship Volante at Bristol.
On 13 March 1583 Doctor John Dee had entered into a lease at the London home of Lionel Duckett to work silver and lead mines at Combe Martin and Knap Down in Devon; however Dee left England in September 1584 in the wake of debts incurred as a result of the Frobisher expeditions in 1576–8. In 1587 Dee's lease was in some way taken over by his former pupil, Adrian Gilbert, brother of Sir Humphrey Gilbert, and John Poppler, a London lapidary. Gilbert and Bulmer then entered into a bargain whereby Bulmer would work the mine and bear the costs, and he and Gilbert would have an equal share of the profits. The mine developed by Bulmer, Fayes Mine, is said by Atkinson to have been 32 fathoms deep and 32 fathoms wide, and to have yielded Bulmer and Gilbert £10,000 apiece for the first two years of operation, although the output dropped to £1000 during the mine's final year. Dee returned to England in 1589, and on 19 December was generously compensated by Gilbert. Two "famous bowls" were later made of silver taken from Fayes Mine.
In 1586, backed by financial support from Elizabeth I and others, Bulmer mined silver and lead at the mines at Chewton in the Mendip Hills; the Queen is said to have lost £10,000 in the venture.
In 1588 he was granted a patent for a water-powered nail-making machine, and on 4 December 1588 was given licence for twelve years "to make and cut iron into small pieces to work out nails".
1590s
In 1593 Bulmer undertook the construction of a pump to bring potable water from the Thames to Cheapside in London, a project which was completed in 1595.
In 1593 as well the Queen provided him with letters of recommendation to the Scottish government. Christopher Schutz had died in 1592, and by Act of the Scottish Parliament Bulmer replaced him in 1593 as Master of the Works for Ores from Cathay and the North West Parts. The Scots granted Bulmer a patent to explore for gold and silver at Leadhills in Lanarkshire, and from 1594 he is said to have had as a partner an Edinburgh goldsmith named Thomas Foulis who was jeweller to King James' wife, Anne. Atkinson describes in vivid prose how Bulmer made a stamping mill at Long Clough Head in the Crawford Moor area, where he got a great deal of "small mealy gold", much of which he gave away to "unthankful persons", and how at Glengaber Burn in Ettrick Forest he got the "greatest gold", sometimes like "Indian wheat, or pearl, and black-eyed like to beans", but because he "wasted much himself" and "gave liberally to many" in order to be "praised and magnified", and had always "too many irons in the fire", he impoverished himself when he could have become a rich subject.
On his return from Scotland he presented the Queen with a porringer of pure gold engraved with these verses:
I dare not give, nor yet present,
But render part of that’s thy own;
My mind and heart shall still invent
To seek out treasure yet unknown.
The Queen is said to have liked the gift so well that Bulmer was made "one of her sworn servants", and "learned to beg, as other courtiers do". As a reward, the Queen granted him in 1599 the impost on coal brought by sea, which according to Atkinson he initially farmed for £6200 per year, but later lost the grant. He was also granted the duty on imported wines. In 1599 he offered £10,000 to gain the pre-emption for the sale of all the tin produced in Cornwall.
According to Atkinson, who based much of his own The Discoveries and Historie of the Gold Mynes in Scotland on it, after his return from Scotland Bulmer compiled a manuscript account of his career which he entitled Bulmer’s Skill. It was never printed, and is now lost.
1600s
In 1603 James I and Bulmer devised a plan by which the search for gold in Scotland could be financed by making investors "Knights of the Golden Mines". Objections by Robert Cecil, 1st Earl of Salisbury to the bestowing of further knighthoods put an end to the scheme. Bulmer himself was knighted in 1604. With a free gift from the King of £100, together with a further royal grant of £200, Bulmer returned to Scotland to search for gold in the Lowther Hills in March 1605. He had 102 workmen at Bailliegill, Langcleuch (to the east and west of Bulmer Moss), Alway, and Glenlaugh. Lord Balmerino inspected the works he was running at Crawford Mure and those of George Bowes in June 1605. In 1606 the King granted him a lease of all gold and silver mines in Scotland, and he was later given further free gifts from the King of £100 in 1607 and £500 in 1608.
In February 1607 a rich silver deposit was discovered at Hilderston near Bathgate. Bulmer and Thomas Foulis opened a mine called "God's Blessing" on the lands of Sir Thomas Hamilton. King James purchased the property from Hamilton and appointed Bulmer master and surveyor, with a grant of £2419 16s 10d to finance the project, but within two years the venture had proved a financial disaster.
In 1611–12 Bulmer was engaged in mining at Kilmore in Tipperary. In his The Discoveries and Historie of the Gold Mynes in Scotland, Stephen Atkinson said he spent two years in Ireland with Bulmer.
Bulmer returned to England, and died "penniless", according to Tyson, in 1615 at Alston, Cumbria. Atkinson says that at his death at "Awstinmoore", Bulmer owed him £340, as well as unsatisfied debts in Ireland.
Bulmer was alluded to in Ben Jonson's play, The Staple of News (1625):
Did I not tell you I was bred in the mines
Under Sir Bevis Bullion?
Marriage and issue
Nothing is known of Bulmer's marriage. However he had a son, John Bulmer, and three daughters, Elizabeth, Prudence and Elizabeth (again).
Prudence Bulmer married John Beeston, a nephew of Hugh Beeston, in 1596, and after his death married Patrick Murray, a son of Sir John Murray of Tullibardine, in 1603.
Notes
References
External links
Wilton: Geographical and Historical Information from the Year 1890 Retrieved 29 October 2013
The History of Leadhills and Wanlockhead Lead Mines Retrieved 30 October 2013
Knap Down Mine, Combe Martin, Devon Retrieved 2 November 2013
Glengaber Burn Retrieved 3 November 2013
Crawford Moor Retrieved 3 November 2013
The Staple of News Retrieved 4 November 2013
1536 births
1615 deaths
16th-century English engineers
17th-century English scientists
17th-century English engineers
Mining engineers
Gold mines in Scotland | Bevis Bulmer | [
"Engineering"
] | 2,410 | [
"Mining engineering",
"Mining engineers"
] |
52,510,741 | https://en.wikipedia.org/wiki/Progesterone%20receptor%20C | The progesterone receptor C (PR-C) is one of three known isoforms of the progesterone receptor (PR), the main biological target of the endogenous progestogen sex hormone progesterone. The other isoforms of the PR include the PR-A and PR-B.
See also
Membrane progesterone receptor
References
Intracellular receptors
Progestogens
Transcription factors | Progesterone receptor C | [
"Chemistry",
"Biology"
] | 87 | [
"Induced stem cells",
"Gene expression",
"Transcription factors",
"Signal transduction"
] |
43,794,088 | https://en.wikipedia.org/wiki/Germanene | Germanene is a material made up of a single layer of germanium atoms. The material is created in a process similar to that of silicene and graphene, in which high vacuum and high temperature are used to deposit a layer of germanium atoms on a substrate. High-quality thin films of germanene have revealed unusual two-dimensional structures with novel electronic properties suitable for semiconductor device applications and materials science research.
Preparation and structure
In September 2014, G. Le Lay and others reported the deposition of an ordered, two-dimensional, multi-phase film of single-atom thickness by molecular beam epitaxy upon a gold surface in the crystal orientation with Miller indices (111). The structure was confirmed with scanning tunneling microscopy (STM), revealing a nearly flat honeycomb structure.
Additional confirmation was obtained by spectroscopic measurement and density functional theory calculations. The development of high quality and nearly flat single atom films created speculation that germanene may replace graphene if not merely add an alternative to the novel properties of related nanomaterials.
Bampoulis and others have reported the formation of germanene on the outermost layer of Ge2Pt nanocrystals. Atomically resolved STM images of germanene on Ge2Pt nanocrystals reveal a buckled honeycomb structure. This honeycomb lattice is composed of two hexagonal sublattices displaced by 0.2 Å in the vertical direction with respect to each other. The nearest-neighbor distance was found to be 2.5±0.1 Å, in close agreement with the Ge-Ge distance in germanene.
Based on STM observations and density functional theory calculations, formation of an apparently more distorted form of germanene has been reported on platinum. Epitaxial growth of germanene crystals on GaAs(0001) has also been demonstrated, and calculations suggest that the minimal interactions should allow germanene to be readily removed from this substrate.
Germanene's structure is described as "a group-IV graphene-like two-dimensional buckled nanosheet". Adsorption of additional germanium onto the graphene-like sheet leads to formation of "dumbbell" units, each with two out-of-plane atoms of germanium, one on either side of the plane. Dumbbells attract each other. Periodically repeating arrangements of dumbbell structures may lead to additional stable phases of germanene, with altered electronic and magnetic properties.
In October 2018, Junji Yuhara and others reported that germanene is easily prepared by a segregation method, using a bare Ag thin film on a Ge substrate, and achieved its epitaxial growth in situ. The growth of germanene by a segregation method, akin to graphene and silicene, is considered to be technically very important for the easy synthesis and transfer of this highly promising 2D electronic material.
Properties
Germanene's electronic and optical properties have been determined from ab initio calculations, and structural and electronic properties from first principles. These properties make the material suitable for use in the channel of a high-performance field-effect transistor and have generated discussion regarding the use of elemental monolayers in other electronic devices. The electronic properties of germanene are unusual, and provide a rare opportunity to test the properties of Dirac fermions. Germanene has no band gap, but attaching a hydrogen atom to each germanium atom creates one. These unusual properties are generally shared by graphene, silicene, germanene, stanene, and plumbene.
References
External links
Meet Graphene's Sexy New Cousin Germanene
Scientists Use Gold Substrate to Grow Graphene's Cousin, Germanene
Graphene Family Tree? Germanene Makes Its Appearance
CNRS Website (2015)
CNRS Website (2017)
Germanium
Allotropes
Group IV semiconductors
Two-dimensional nanomaterials
2014 in science
Substances discovered in the 2010s | Germanene | [
"Physics",
"Chemistry"
] | 789 | [
"Periodic table",
"Properties of chemical elements",
"Allotropes",
"Semiconductor materials",
"Group IV semiconductors",
"Materials",
"Matter"
] |
43,799,748 | https://en.wikipedia.org/wiki/Mosaicity | In crystallography, mosaicity is a measure of the spread of crystal plane orientations. A mosaic crystal is an idealized model of an imperfect crystal, imagined to consist of numerous small perfect crystals (crystallites) that are to some extent randomly misoriented. Empirically, mosaicities can be determined by measuring rocking curves. Diffraction by mosaics is described by the Darwin–Hamilton equations.
The mosaic crystal model goes back to a theoretical analysis of X-ray diffraction by C. G. Darwin (1922). Currently, most studies follow Darwin in assuming a Gaussian distribution of crystallite orientations centered on some reference orientation. The mosaicity is commonly equated with the standard deviation of this distribution.
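Under this Gaussian model, the mosaicity is simply the standard deviation of the crystallite orientation distribution about the reference orientation, so it can be estimated from sampled misorientation angles. A minimal sketch (the simulated tilt data and function names are this sketch's assumptions, not from Darwin's analysis):

```python
import math
import random

def estimate_mosaicity(tilts):
    """Sample standard deviation of crystallite tilt angles (degrees),
    taken about the mean (reference) orientation."""
    n = len(tilts)
    mean = sum(tilts) / n
    var = sum((t - mean) ** 2 for t in tilts) / (n - 1)
    return math.sqrt(var)

# Simulate a mosaic crystal: crystallite orientations normally
# distributed about the reference orientation with sigma = 0.3 degrees.
random.seed(0)
tilts = [random.gauss(0.0, 0.3) for _ in range(100_000)]
mosaicity = estimate_mosaicity(tilts)  # recovers ~0.3 degrees
```

Experimentally the same quantity is extracted from the width of a measured rocking curve rather than from individual crystallite orientations.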
Applications and notable materials
An important application of mosaic crystals is in monochromators for x-ray and neutron radiation. The mosaicity enhances the reflected flux, and allows for some phase-space transformation.
Pyrolytic graphite (PG) can be produced in the form of mosaic crystals (HOPG: highly ordered PG) with controlled mosaicity of up to a few degrees.
Diffraction by mosaic crystals: the Darwin–Hamilton equations
To describe diffraction by a thick mosaic crystal, it is usually assumed that the constituent crystallites are so thin that each of them reflects at most a small fraction of the incident beam. Primary extinction and other dynamical diffraction effects can then be neglected. Reflections by different crystallites add incoherently, and can therefore be treated by classical transport theory. When only beams within the scattering plane are considered, then they obey the Darwin–Hamilton equations (Darwin 1922, Hamilton 1957),

k₁·∇I₁ = −(μ + σ) I₁ + μ I₂,
k₂·∇I₂ = −(μ + σ) I₂ + μ I₁,

where k₁ and k₂ are the directions of the incident and diffracted beam, I₁ and I₂ are the corresponding currents, μ is the Bragg reflectivity, and σ accounts for losses by absorption and by thermal and elastic diffuse scattering. A generic analytical solution has been obtained remarkably late (Sears 1997; for the case σ = 0, Bacon/Lowde 1948). An exact treatment must allow for three-dimensional trajectories of multiply reflected radiation. The Darwin–Hamilton equations are then replaced by a Boltzmann equation with a very special transport kernel. In most cases, resulting corrections to the Darwin–Hamilton–Sears solutions are rather small (Wuttke 2014).
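In a transmission (Laue) geometry, where both beams advance with depth t, the coupled Darwin–Hamilton equations reduce to ordinary differential equations dI₁/dt = −(μ+σ)I₁ + μI₂ and dI₂/dt = −(μ+σ)I₂ + μI₁, which can be integrated numerically. A sketch (the geometry, parameter values, and function names are this sketch's assumptions, not taken from the cited works):

```python
import math

def darwin_hamilton_laue(mu, sigma, depth, steps=20_000):
    """Forward-Euler integration of
    dI1/dt = -(mu+sigma)*I1 + mu*I2,
    dI2/dt = -(mu+sigma)*I2 + mu*I1,
    with I1(0) = 1 (incident beam) and I2(0) = 0 (diffracted beam)."""
    dt = depth / steps
    i1, i2 = 1.0, 0.0
    for _ in range(steps):
        # Evaluate both derivatives at the old point before updating.
        d1 = (-(mu + sigma) * i1 + mu * i2) * dt
        d2 = (-(mu + sigma) * i2 + mu * i1) * dt
        i1 += d1
        i2 += d2
    return i1, i2

# Closed-form check: I1,2(t) = (exp(-sigma*t) ± exp(-(2*mu+sigma)*t)) / 2,
# obtained from the sum and difference of the two currents.
mu, sigma, t = 1.0, 0.2, 1.0
i1, i2 = darwin_hamilton_laue(mu, sigma, t)
i1_exact = 0.5 * (math.exp(-sigma * t) + math.exp(-(2 * mu + sigma) * t))
i2_exact = 0.5 * (math.exp(-sigma * t) - math.exp(-(2 * mu + sigma) * t))
```

The sum I₁ + I₂ decays only through σ, while the difference decays through 2μ + σ, which is what the closed-form comparison exploits.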
References
Crystallography | Mosaicity | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 479 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
46,978,792 | https://en.wikipedia.org/wiki/Nucleosome%20repeat%20length | The nucleosome repeat length, (NRL) is the average distance between the centers of neighboring nucleosomes. NRL is an important physical chromatin property that determines its biological function. NRL can be determined genome-wide for the chromatin in a given cell type and state, or locally for a large enough genomic region containing several nucleosomes.
In chromatin, neighbouring nucleosomes are separated by the linker DNA and in many cases also by the linker histone H1 as well as non-histone proteins. Since the size of the nucleosome is typically fixed (146-147 base pairs), NRL is mostly determined by the size of the linker region between nucleosomes. Alternatively, partial DNA unwrapping from the histone octamer or partial disassembly of the histone octamer can decrease the effective nucleosome size and thus affect NRL.
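From the definition above, the NRL and the implied average linker length follow directly from mapped nucleosome center (dyad) positions. A minimal sketch (the 147 bp core size is taken from the text; the function names and example coordinates are illustrative):

```python
def nucleosome_repeat_length(centers):
    """Average distance (bp) between centers of neighbouring nucleosomes."""
    centers = sorted(centers)
    spacings = [b - a for a, b in zip(centers, centers[1:])]
    return sum(spacings) / len(spacings)

def average_linker_length(centers, core_bp=147):
    """Linker DNA length implied by the NRL and a fixed nucleosome core."""
    return nucleosome_repeat_length(centers) - core_bp

# Example: dyads at 0, 197, 395 and 590 bp give an NRL of ~196.7 bp,
# i.e. an average linker of ~49.7 bp for a 147 bp core.
nrl = nucleosome_repeat_length([0, 197, 395, 590])
```

Genome-wide estimates average such spacings over many nucleosome pairs rather than a single array.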
Past studies going back to 1970s showed that, in general, NRL is different for different species and even for different cell types of the same organism. In addition, recent publications reported NRL variations for different genomic regions of the same cell type.
Recent works have compared the NRL around yeast transcription start sites (TSSs) in vivo and that for the reconstituted chromatin on the same DNA sequences in vitro. It was shown that ordered nucleosome positioning arises only in the presence of ATP-dependent chromatin remodeling. Furthermore, it was reported that the NRL determined around yeast TSSs is an invariant value universal for a given wild type yeast strain, although it can change when one of chromatin remodelers is missing. In general, NRL depends on the DNA sequence, concentrations of histones and non-histone proteins, as well as long-range interactions between nucleosomes. NRL determines geometric properties of the nucleosome array, and therefore the higher-order packing of the DNA into the chromatin fiber, which might affect gene expression.
References
Molecular biology
Molecular genetics
DNA
Epigenetics
Nuclear substructures | Nucleosome repeat length | [
"Chemistry",
"Biology"
] | 436 | [
"Biochemistry",
"Molecular genetics",
"Molecular biology"
] |
46,984,504 | https://en.wikipedia.org/wiki/Penicillium%20ornatum | Penicillium ornatum is an anamorph species of the genus Penicillium.
References
Further reading
ornatum
Fungi described in 1968
Fungus species | Penicillium ornatum | [
"Biology"
] | 36 | [
"Fungi",
"Fungus species"
] |
46,986,874 | https://en.wikipedia.org/wiki/Nest%20Thermostat | The Nest Thermostat is a smart thermostat developed by Google Nest and designed by Tony Fadell, Ben Filson, and Fred Bould. It is an electronic, programmable, and self-learning Wi-Fi-enabled thermostat that optimizes heating and cooling of homes and businesses to conserve energy.
The Google Nest Learning Thermostat is based on a machine learning algorithm: for the first weeks, users have to regulate the thermostat manually in order to provide the reference data set. The thermostat can then learn people's schedule: what temperatures they are used to and when. Using built-in sensors and phones' locations, it can shift into energy-saving mode when it determines nobody is at home.
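The training phase described above can be caricatured as schedule aggregation: collect the user's manual setpoints during the initial weeks, then average them per time slot. This is a hypothetical toy sketch, not Nest's actual algorithm:

```python
from collections import defaultdict

def learn_schedule(adjustments):
    """adjustments: (hour_of_day, setpoint_celsius) pairs recorded while
    the user still regulates the thermostat manually. Returns a simple
    hour -> average-setpoint schedule."""
    buckets = defaultdict(list)
    for hour, setpoint in adjustments:
        buckets[hour].append(setpoint)
    return {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}

# Two mornings at 20 and 22 degrees average to a 21-degree setpoint at 07:00.
schedule = learn_schedule([(7, 20.0), (7, 22.0), (22, 17.0)])
```

A real implementation would also weight recent adjustments more heavily and fold in occupancy data, as the surrounding text describes.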
Specifications, North American versions
Note: the Thermostat E and the newer thermostat support two-stage cooling OR two-stage heating OR a heat pump, due to a shared multi-use terminal.
Hardware
Nest is compatible with most standard HVAC systems that use central heating and cooling and uses industry standard connections to facilitate the control of these appliances.
Nest is not compatible with communicating HVAC systems. Communicating systems are used with some two-stage and all variable-capacity HVAC systems. These systems require just four wires – two power wires for heating and cooling and two for communication between components.
Nest consists of two primary pieces of hardware. The display contains the main printed circuit board (PCB) and rotating ring (except for the 2020 Nest Thermostat, which has a touch-sensitive strip on the right side of the thermostat body). The base houses the connection terminals, bubble level, and holes for wall anchors. Neither can function independently; if separated, the display becomes inactive until reconnected to the base.
A special version of Nest is available in Europe, which is capable of controlling 230-volt heating systems. The Nest is paired with a "Heat Link" device, which contains the circuitry required for controlling the mains-voltage heating system. The first release was the 2nd-generation Nest Thermostat, in which the Heat Link controlled the central heating boiler. The 3rd generation added support for OpenTherm and for controlling domestic hot water. The Nest E was made available in the UK in October 2018. It has several major changes: the thermostat is stand-mounted only, the Heat Link is grey and battery powered, the Heat Link loses the domestic hot water support, and it is designed to be installed on the wall where the old thermostat was located.
As the Nest Thermostat cannot be battery operated, it must either be installed with a wire connecting directly to the Heat Link, which supplies 12 V DC, or mounted on a stand and powered via a USB cable.
The Nest Temperature Sensor was added in March 2018 and is available in the Google Store only for the United States and Canada. Up to six of these battery-operated devices can be added to a single thermostat to provide remote temperature monitoring; Nest then uses the appropriate sensor based on a schedule. Since they use Bluetooth Low Energy, they are only compatible with the E and 3rd-generation thermostats.
With the introduction of the more accessible Google Nest Thermostat on October 12, 2020, the rotating ring present on other Nest models was dropped. It instead uses a touch-sensitive strip on the right side of the thermostat body to adjust temperatures and navigate the thermostat's operating system, with tapping on the strip replacing physical clicking. It also features presence detection using Google ATAP's 60 GHz Project Soli radar, which allows the mirror-like face of the thermostat to have no visible cutouts for the radar sensor. This enables the thermostat to display the current HVAC status when human presence is detected by the Soli radar sensor. Nest Farsight is not supported on this model, though the presence detection provides a similar function at closer distances.
Software
The Nest Thermostat is built around an operating system that allows interaction via spinning and clicking of its control wheel (sliding and tapping on the 2020 Nest Thermostat), which brings up option menus for switching between heating and cooling, access to device settings, energy history, and scheduling. Scheduling cannot be modified on the 2020 Nest Thermostat itself and must be done in the Google Home app. Users can control Nest without a touch screen or other input device. As the thermostat is connected to the Internet, the company can push updates to fix bugs, improve performance, and add features. For updates to occur automatically, the thermostat must be connected to Wi‑Fi and the battery must have at least a 3.7 V charge, enough power to complete the download and installation of the update.
The Nest Thermostat has received a number of software updates; a 2017 security update enabled two-factor authentication.
The operating system itself is based on Linux 2.6.37 and many other free software components.
To comply with the terms of the GPLv3 license under which some components are available, Nest Labs also provides a special firmware image which will unlock the system so that it will accept arbitrary code sent to it.
Nest devices interconnect with each other using a protocol called Weave, which is based on IEEE 802.15.4 and Wi-Fi 802.11 b/g/n.
Starting April 18, 2023, Google Nest G4CVZ thermostats began receiving an update to enable Matter connectivity. As of January 2024, only the latest Generation 4 thermostat has this capability.
Availability
Nest is available for sale in the United States, Canada, Mexico, the United Kingdom, Belgium, France, Ireland, the Netherlands, Germany, Austria, Italy, and Spain. It is, however, compatible with many heating and cooling automation systems in other countries. Nest Labs has surveyed existing users known to be outside the areas where it is officially available. Use of the thermostat outside the United States and Canada is complicated by the software setting the time and other functions based on the ZIP code. International users must either disable Wi‑Fi to set the time correctly or use the nearest U.S. ZIP code, which may result in erratic behavior as the thermostat makes faulty assumptions about inactivity corresponding to sleep or to the home's occupants being away.
In 2013, a man-in-the-middle hack potentially allowed users worldwide to set up their time zone and local weather.
Marketing
In an effort to increase the number of homes using its learning thermostats, Nest began to partner with energy companies. In February 2014, Direct Energy and Nest Labs launched their Comfort and Control plan, which allowed Canadian customers in Alberta to receive a learning thermostat when they signed up for a five-year electricity contract. In April 2014, Nest announced a partnership with the United Kingdom energy supplier nPower. The partnership offers customers a discount on the Nest installation price and energy prices locked for five years when they receive both gas and electricity from nPower and pay by direct debit.
In June 2014, Direct Energy and Nest Laboratories expanded the package to Direct Energy's United States market.
SKUs / model numbers
T100577 is 1st generation, released only in the US
T200377 is 2nd generation, UK release
T200477 is 2nd generation, Canada release
T200577 is 2nd generation, US release
T200677 is 2nd generation, France, Netherlands, and Belgium release
T3007EF is 3rd generation, Canada release
T3007ES is 3rd generation, US release
T3008US is 3rd generation, US release, pro packaged
T3010FD is 3rd generation, France release
T3010GB is 3rd generation, UK release
T3016US is 3rd generation - black ring, US release
T3017US is 3rd generation - white ring, US release
T3018US is 3rd generation - mirror black ring, US release
T3019US is 3rd generation - polished steel ring, US release
T3021US is 3rd generation - copper ring, US release
T3032US is 3rd generation - brass ring, US release
T3029EX is 3rd generation - black ring, EU release
T3030EX is 3rd generation - white ring, EU release
T3031EX is 3rd generation - copper ring, EU release
T4000ES is Thermostat E, US release
T4000EF is Thermostat E, Canada release
HF001235-GB is Thermostat E, UK release
T5000SF is Temperature Sensor - white, US and Canada release
GA01334-US is Nest Thermostat G4CVZ - snow, US
GA02082-US is Nest Thermostat G4CVZ - sand
GA02081-US is Nest Thermostat G4CVZ - charcoal, US
GA02083-US is Nest Thermostat G4CVZ - fog, US release
GA05169-US is 4th generation - polished obsidian with temperature sensor, US release
GA05171-US is 4th generation - polished gold with temperature sensor, US release
GA05551-US is 4th generation - polished silver with temperature sensor, US release
GA05557-US is 4th generation - polished obsidian with two temperature sensors, US release
T200477 and T200577 are technically the same.
T200377 and T200677 are technically the same, except for the power plug used for the USB charger.
References
2011 introductions
American inventions
Temperature control
Google hardware | Nest Thermostat | [
"Technology"
] | 2,004 | [
"Home automation",
"Temperature control"
] |
38,163,966 | https://en.wikipedia.org/wiki/DSP%20coupling | A DSP coupling is a self-sealing symmetrical coupling which is secured by inter-connecting two couplings together.
It is closed by turning the locking ring on the triangular part of the opposed DSP coupling.
Extra closure can be applied by locking the connection with a coupling wrench.
The DSP coupling locking principle is similar to that of the guillemin coupling.
However, there are differences to the preformed serration on the locking ring and the design of the lugs.
The locking ring of DSP couplings can be turned up to 45°.
DSP couplings are used as firefighting couplings. They are typical in, for example, France and Belgium.
DSP couplings are symmetrical.
DSP couplings come in different sizes (e.g. DN40, DN65, DN100).
Typical materials for the coupling are aluminum, brass and bronze.
The origin and meaning of "DSP" are a matter of speculation.
See also
Hose coupling#DSP
References
Mechanical fasteners
Seals (mechanical)
Hoses
Firefighting equipment | DSP coupling | [
"Physics",
"Engineering"
] | 218 | [
"Seals (mechanical)",
"Mechanical fasteners",
"Materials",
"Mechanical engineering",
"Matter"
] |
38,166,590 | https://en.wikipedia.org/wiki/Israel%20Atomic%20Energy%20Commission | The Israel Atomic Energy Commission (IAEC; ) is the governmental authority responsible for the State of Israel's activities in the nuclear field.
History
The establishment of the Israel Atomic Energy Commission was announced on 13 June 1952 by Prime Minister David Ben-Gurion. The prime minister appointed Professor Ernst David Bergmann to be its first director-general. Initially the commission was housed in temporary structures near Rehovot; it is now located in Ramat Aviv. It oversaw the establishment of the Soreq Nuclear Research Center, whose construction started in 1958, and the Negev Nuclear Research Center, which began construction in late 1959.
Functions
The IAEC advises the government of Israel in areas of nuclear policy and in setting priorities in nuclear research and development. The commission implements governmental policies and represents Israel in international organizations in the nuclear field, such as the International Atomic Energy Agency. The IAEC maintains relationships with relevant national authorities of other countries.
See also
Nuclear weapons and Israel
Nuclear energy in Israel
References
External links
Official site
Nuclear technology in Israel
Governmental nuclear organizations
Government agencies established in 1952
Nuclear regulatory organizations
Israeli nuclear development | Israel Atomic Energy Commission | [
"Engineering"
] | 220 | [
"Governmental nuclear organizations",
"Nuclear regulatory organizations",
"Nuclear organizations"
] |
38,169,758 | https://en.wikipedia.org/wiki/Neptunium%28VI%29%20fluoride | Neptunium(VI) fluoride (NpF6) is the highest fluoride of neptunium, it is also one of seventeen known binary hexafluorides. It is a volatile orange crystalline solid. It is relatively hard to handle, being very corrosive, volatile and radioactive. Neptunium hexafluoride is stable in dry air but reacts vigorously with water.
At normal pressure, it melts at 54.4 °C and boils at 55.18 °C. It is the only neptunium compound that boils at a low temperature. Due to these properties, it is possible to easily separate neptunium from spent fuel.
Preparation
Neptunium hexafluoride was first prepared in 1943 by American chemist Alan E. Florin, who heated a sample of neptunium(III) fluoride on a nickel filament in a stream of fluorine and condensed the product in a glass capillary tube. Methods of preparation from both neptunium(III) fluoride and neptunium(IV) fluoride were later patented by Glenn T. Seaborg and Harrison S. Brown.
Standard method
The usual method of preparation is by fluorination of neptunium(IV) fluoride (NpF4) by elemental fluorine (F2) at 500 °C.
NpF4 + F2 → NpF6
In comparison, uranium hexafluoride (UF6) is formed relatively rapidly from uranium tetrafluoride (UF4) and F2 at 300 °C, while plutonium hexafluoride (PuF6) only begins forming from plutonium tetrafluoride (PuF4) and F2 at 750 °C. This difference allows uranium, neptunium and plutonium to be effectively separated.
Other methods
Using a different starting material
Neptunium hexafluoride can also be obtained by fluorination of neptunium(III) fluoride or neptunium(IV) oxide.
2 NpF3 + 3 F2 → 2 NpF6
NpO2 + 3 F2 → NpF6 + O2
Using a different fluorine source
The preparation can also be done with the help of stronger fluorinating reagents like bromine trifluoride (BrF3) or bromine pentafluoride (BrF5). These reactions can be used to separate plutonium, since PuF4 does not undergo a similar reaction.
Neptunium dioxide and neptunium tetrafluoride are practically completely converted to volatile neptunium hexafluoride by dioxygen difluoride (O2F2). This works as a gas-solid reaction at moderate temperatures, as well as in anhydrous liquid hydrogen fluoride at −78 °C.
NpO2 + 3 O2F2 → NpF6 + 4 O2
NpF4 + O2F2 → NpF6 + O2
These reaction temperatures are markedly different from the high temperatures of over 200 °C previously required to synthesize neptunium hexafluoride with elemental fluorine or halogen fluorides. Neptunyl fluoride (NpO2F2) has been detected by Raman spectroscopy as a dominant intermediate in the reaction with NpO2. Direct reaction of NpF4 with liquid O2F2 led instead to vigorous decomposition of the O2F2 with no NpF6 generation.
Properties
Physical properties
Neptunium hexafluoride forms orange orthorhombic crystals that melt at 54.4 °C and boil at 55.18 °C under standard pressure. The triple point is 55.10 °C and 1010 hPa (758 Torr).
The volatility of NpF6 is similar to those of UF6 and PuF6, all three being actinide hexafluorides. The standard molar entropy is 229.1 ± 0.5 J·K−1·mol−1. Solid NpF6 is paramagnetic, with a magnetic susceptibility of 165·10−6 cm3·mol−1.
Chemical properties
Neptunium hexafluoride is stable in dry air. However, it reacts vigorously with water, including atmospheric moisture, to form the water-soluble neptunyl fluoride (NpO2F2) and hydrofluoric acid (HF).
NpF6 + 2 H2O → NpO2F2 + 4 HF
It can be stored at room temperature in a quartz or pyrex glass ampoule, provided that there are no traces of moisture or gas inclusions in the glass and any remaining HF has been removed. NpF6 is light-sensitive, decomposing to NpF4 and fluorine.
NpF6 forms complexes with alkali metal fluorides: with caesium fluoride (CsF) it forms CsNpF6 at 25 °C, and with sodium fluoride it reacts reversibly to form Na3NpF8. In either case, the neptunium is reduced to Np(V).
NpF6 + CsF → CsNpF6 + 1/2 F2
NpF6 + 3 NaF → Na3NpF8 + 1/2 F2
In the presence of chlorine trifluoride (ClF3) as solvent and at low temperatures, there is some evidence of the formation of an unstable Np(IV) complex.
Neptunium hexafluoride reacts with carbon monoxide (CO) and light to form a white powder, presumably containing neptunium pentafluoride (NpF5) and an unidentified substance.
Uses
The irradiation of nuclear fuel inside nuclear reactors generates both fission products and transuranic elements, including neptunium and plutonium. The separation of these three elements is an essential component of nuclear reprocessing. Neptunium hexafluoride plays a role in the separation of neptunium from both uranium and plutonium.
In order to separate the uranium (95% of the mass) from spent nuclear fuel, it is first powdered and reacted with elemental fluorine ("direct fluorination"). The resulting volatile fluorides (mainly UF6, small amounts of NpF6) are easily extracted from the non-volatile fluorides of other actinides, like plutonium(IV) fluoride (PuF4), americium(III) fluoride (AmF3), and curium(III) fluoride (CmF3).
The mixture of UF6 and NpF6 is then selectively reduced by pelleted cobalt(II) fluoride, which converts the neptunium hexafluoride to the tetrafluoride but does not react with the uranium hexafluoride, using temperatures in the range of 93 to 204 °C. Another method is using magnesium fluoride, on which the neptunium fluoride is sorbed at 60-70% but not the uranium fluoride.
References
Neptunium compounds
Hexafluorides
Octahedral compounds
Nuclear materials
Actinide halides | Neptunium(VI) fluoride | [
"Physics"
] | 1,461 | [
"Materials",
"Nuclear materials",
"Matter"
] |
38,171,120 | https://en.wikipedia.org/wiki/Thorium%20Energy%20Alliance | Thorium Energy Alliance (TEA) is a non-governmental, non-profit 501(c)3, educational organization based in the United States, which seeks to promote energy security of the world through the use of thorium as a fuel source. The potential for the use of thorium was studied extensively during the 1950s and 60s, and now worldwide interest is being revived due to limitations and issues concerning safety, economics, use and issues in the availability of other energy sources. TEA advocates thorium based nuclear power in existing reactors and primarily in next generation reactors. TEA promotes many initiatives to educate scientists, engineers, government officials, policymakers and the general public.
Energy crisis and the role of thorium
TEA promotes the use of thorium using a different rationale. Increasing world population, depleting resources, and global warming have put severe constraints on the choices of power generation available today. Traditional fossil-fuel-based energy generation faces twofold challenges: depleting resources and the need to keep greenhouse gas emissions in check. While interim measures like natural gas and unconventional oil have been proposed, these still have a carbon footprint and are not universally available. Hydropower use has reached a natural limit in many parts of the world, and existing capacity is under stress due to climate change. Renewable energy is seen as an important component of future energy generation but, being essentially intermittent, cannot be effectively managed by current power distribution technologies. Hence, nuclear energy is seen as an important option for power generation in many countries.
Present generation nuclear reactors are all uranium based, fueled with either freshly mined uranium or recycled plutonium and uranium as the fissile material. There are concerns about a continued supply of uranium, due to resource depletion, as well as various obstacles to mining uranium deposits. Moreover, the currently widely deployed nuclear reactors harness less than 3% of the energy content of uranium fuel. This technology, in turn, leaves large quantities of radioactive wastes to be disposed of safely. The issue of disposal of these wastes has not been addressed convincingly anywhere in the world. Moreover, a vast majority of the present generation reactors are based on the original design of reactors meant to power submarines, and whose safety is ensured by several active features and standard operating practices. Under various circumstances, these features and procedures were seen to fail, bringing about catastrophic consequences. Highly enriched uranium and separated plutonium are also the feedstock for nuclear weapons.
Thorium has been proposed as a clean, safe, proliferation-resistant, and sustainable source of energy which additionally is free from most of the issues associated with uranium. The average crustal abundance of thorium is four times that of uranium. Thorium is invariably associated with rare-earth elements or rare metals like niobium, tantalum, and zirconium; hence, it can be recovered as a by-product of other mining activities. Already, large quantities of thorium recovered from rare-earth element operations have been stockpiled in many countries. Thorium is a fertile material, and essentially all of it can be used in a nuclear reactor: thorium is not fissile itself, but absorbs a neutron to transmute into uranium-233, which can fission to produce energy. Therefore, a thorium-based fuel cycle produces very little, easily manageable waste compared to uranium. Thorium-based fuel cycle options can be used to 'burn' the presently accumulated nuclear waste. Various thorium-based reactor designs are inherently safer than uranium-based reactors.
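The neutron-capture conversion of thorium into uranium-233 described above can be written as a standard nuclear-reaction sketch (textbook physics, not quoted from the source; the half-lives shown are approximate):

```latex
{}^{232}_{\;90}\mathrm{Th} \;+\; n \;\longrightarrow\; {}^{233}_{\;90}\mathrm{Th}
\;\xrightarrow[\;\approx 22\ \mathrm{min}\;]{\beta^-}\; {}^{233}_{\;91}\mathrm{Pa}
\;\xrightarrow[\;\approx 27\ \mathrm{d}\;]{\beta^-}\; {}^{233}_{\;92}\mathrm{U}
```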
However, nuclear proliferation using thorium has proven to be extremely difficult and impractical, although proofs of concept to the contrary have been proposed.
Despite all the favorable factors, and use in commercial reactors in the past, interest in thorium diminished in the late 1980s for various reasons. Critics of thorium claim that its advantages are overstated and that it is unlikely to be a useful source of energy. Experts point to the adverse economics and the availability of plentiful alternative sources of energy that will deter full commercialization of thorium-based energy. These and other issues regarding the use of thorium have been debated.
Advocacy for thorium
One of the stated objectives of TEA is vigorous advocacy for the use of thorium as a nuclear fuel. Through its activities, TEA reaches out to scientists, engineers, government officials, policymakers, and lawmakers to raise awareness of the advantages of using thorium as a fuel. TEA has conducted a number of publicity campaigns and social-media-based outreach activities. TEA has emphasized the research and development done in the USA from the 1950s to the 1970s on thorium-based reactor designs and fuel cycle options. Of particular interest was the Molten-Salt Reactor Experiment (MSRE) carried out at Oak Ridge National Laboratory in the United States during 1964–1969.
TEA argues the importance of enabling thorium energy, especially in liquid fluoride thorium reactor (LFTR pronounced lifter), in public hearings, such as the Blue Ribbon Commission on America's Nuclear Future. TEA promotes the establishment of a working thorium powered reactor. TEA is particularly interested in restarting the homogeneous fuels research program and the commercialization of molten salt reactor and the supply chain infrastructure to support it.
Another aim of TEA is supporting the reemergence of a Western Rare Earths Infrastructure by bringing together rare-earth producers leading to the establishment of a consortium for refining rare earths and sequestering thorium for future use. TEA supports changes in existing thorium regulation in the US to promote safe production and stockpiling of thorium as a by-product of associated mineral industries activity.
Activities
TEA proposes to leverage education and training activities by:
creating educational resources and textbooks
providing scholarships
facilitation of expert speakers
producing museum exhibits presenting thorium based energy
TEA plans to engage politicians through round-table discussions and provide them with expert opinion, white papers, executive summaries and talking points to demonstrate thorium technology.
There is a major initiative to engage the public through regular and social media channels. TEA facilitates experts to appear on radio and television and participate in group discussions and provide interviews. In this direction TEA generates a large quantity of its own media including, webcasts, podcasts, videos, pamphlets, books and articles. TEA sponsors advertising campaigns in print, television and targeted mail.
Thorium Energy Alliance has supported a dozen research projects at the Nanotechnology Lab at the University of Missouri–St. Louis (UMSL), which is located in an Economic Opportunity Zone.
Thorium Energy Alliance has supported outreach to youth through STEM-based organizations such as Generation Atomic, North American Young Generation in Nuclear, and Mothers for Nuclear, encouraging young people to get involved in the industry.
The Thorium Energy Alliance website has added resources for international organizations, U.S. national laboratories, industry, and the military. The website acts as a resource and an encyclopedia for the history and applications of thorium, as well as a repository of all conference information, related papers, and topical documents.
Thorium Energy Alliance has offered techno-economic support for the development of nuclear medicines, such as bismuth and actinium isotopes derived from thorium extraction processes.
Thorium Energy Alliance has worked with rare-earth organizations and the Critical Minerals Institute (CMI) to solve the critical-materials issues in the United States and the Western world by providing thorium policy guidance, with the goal of allowing a new domestic rare-earth metals industry to start.
The Government of El Salvador and Thorium Energy Alliance have signed a Memorandum of Understanding to promote the "El Salvador Energy Bridge" plan for clean energy through thorium. The document was signed by Daniel Alvarez, Director General of Energy, Hydrocarbons and Mines (DGEHM), and John Kutsch, Executive Director of Thorium Energy Alliance, at the Embassy of El Salvador in Washington D.C., with Ambassador Milena Mayorga as a witness of honor.
In the future, TEA plans to track milestones in the creation of a thorium economy. One proposed method is to create a thorium and related-technology stock portfolio and a thorium ETF, which will allow the public to track and participate in the growing value of the thorium economy.
Annual conferences
TEA has organized annual conferences since 2009, where scientific sessions and cross-cutting energy and fuel-management discussions bring together a cross-section of interested domain experts. The inaugural conference in 2009 took place in Washington, D.C., followed by California (2010), Washington, D.C. (2011), and Chicago (2012). The 2013 annual conference was held in Chicago, May 30–31.
The tenth conference, TEAC10, was held at the Pollard Technology Conference Center in Oak Ridge, Tennessee, on October 1, 2019.
The eleventh conference, TEAC11, will be held on October 13–15, 2022, at the national nuclear energy museum in Albuquerque, New Mexico. TEA has sponsored the production of a new exhibit there on thorium energy and advanced reactors. The conference is being put on with the participation of the University of New Mexico, the Abilene Christian University nuclear department, and the museum, and with the support of several of the startups that TEA has assisted with technological support and policy information.
See also
Alvin M. Weinberg
The Alvin Weinberg Foundation
Nuclear power debate
References
Further reading
External links
Thorium Energy Alliance Website
Thorium fuel cycle – Potential benefits and challenges, International Atomic Energy Agency
Nuclear energy
Thorium
Nuclear fuels
Energy security
Oak Ridge National Laboratory
501(c)(3) organizations
Non-profit organizations based in Illinois | Thorium Energy Alliance | [
"Physics",
"Chemistry"
] | 1,929 | [
"Nuclear energy",
"Radioactivity",
"Nuclear physics"
] |
38,173,962 | https://en.wikipedia.org/wiki/HZE%20ion | HZE ions are the high-energy nuclei component of galactic cosmic rays (GCRs) which have an electric charge of +3 or greater – that is, they must be the nuclei of elements heavier than hydrogen or helium.
The abbreviation "HZE" comes from high (H), atomic number (Z), and energy (E). HZE ions include the nuclei of all elements heavier than hydrogen (which has a +1 charge) and helium (which has a +2 charge). Each HZE ion consists of a nucleus with no orbiting electrons, meaning that the charge on the ion is the same as the atomic number of the nucleus. Their source is not certain, but is thought likely to be supernova explosions.
Composition and abundance
HZE ions are rare compared to protons, for example, composing only 1% of GCRs versus 85% for protons. HZE ions, like other GCRs, travel near the speed of light.
In addition to the HZE ions from cosmic sources, HZE ions are produced by the Sun. During solar flares and other solar storms, HZE ions are sometimes produced in small amounts, along with the more typical protons, but their energy level is substantially smaller than HZE ions from cosmic rays.
Space radiation is composed mostly of high-energy protons, helium nuclei, and high-Z high-energy ions (HZE ions). The ionization patterns in molecules, cells, tissues, and the resulting biological harm are distinct from high-energy photon radiation: X-rays and gamma rays, which produce low-linear energy transfer (low-LET) radiation from secondary electrons.
While in space, astronauts are exposed to protons, helium nuclei, and HZE ions, as well as secondary radiation from nuclear reactions from spacecraft parts or tissue.
Prominent HZE ions include carbon (C), oxygen (O), magnesium (Mg), silicon (Si), and iron (Fe).
GCRs typically originate from outside the Solar System but within the Milky Way galaxy; those from outside the Milky Way consist mostly of highly energetic protons with a small component of HZE ions. GCR energy spectra peak at median energies up to 1,000 MeV/amu, and nuclei with energies up to 10,000 MeV/amu are important contributors to the dose equivalent.
Health concerns of HZE ions
Although HZE ions make up a small proportion of cosmic rays, their high charge and high energies cause them to contribute significantly to the overall biological impact of cosmic rays, making them as significant as protons in this regard. The most dangerous GCRs are heavy ionized nuclei such as Fe, an iron nucleus with a charge of +26. Such heavy particles are "much more energetic (millions of MeV) than typical protons accelerated by solar flares (tens to hundreds of MeV)". HZE ions can therefore penetrate thick layers of shielding and body tissue, "breaking the strands of DNA molecules, damaging genes and killing cells".
For HZE ions that originate from solar particle events (SPEs), there is only a small contribution toward a person's absorbed dose of radiation. During a SPE, there is such a small amount of heavy ions generated that their effects are limited. Their energies per atomic mass unit are all significantly less than protons found in the same SPE, meaning that protons are by far the largest contribution to astronaut body exposure during SPEs.
See also
High-energy nuclear physics
Cosmic radiation
Solar energetic particles
Spaceflight radiation carcinogenesis
Central nervous system effects from radiation exposure during spaceflight - HZE CNS health effects
Swift heavy ion
References
External links
Subatomic particles
Cosmic rays | HZE ion | [
"Physics"
] | 780 | [
"Matter",
"Physical phenomena",
"Cosmic rays",
"Astrophysics",
"Radiation",
"Particle physics",
"Nuclear physics",
"Atoms",
"Subatomic particles"
] |
38,174,746 | https://en.wikipedia.org/wiki/Cam%20and%20groove | A cam and groove coupling, also called a camlock fitting, is a form of hose coupling. This kind of coupling is popular because it is a simple and reliable means of connecting and disconnecting hoses quickly and without tools.
Standards
Cam and groove couplings were traditionally manufactured to US Military Specification MIL-C-27487, which covered dimensions and machining tolerances, materials, closing torque, part numbers, pressure ratings, finish, inspection procedures, and packing requirements. Compliance with this specification ensured interchangeability of parts from different manufacturers. In 1998, specification A-A-59326 replaced MIL-C-27487. In Europe, the standard BS EN 14420-7 applies, as well as the German DIN 2828 standard. Products produced to DIN 2828 are interchangeable with those made to the original MIL-C-27487 but differ in hose-tail design, thread, part numbering, and other details.
Function
The cams at the end of each lever on the female end align with a circumferential groove on the male end. When the levers are rotated to the locked position, they pull the male end into the female socket, creating a tight seal against a gasket within the female socket. The arms lock into position using over-center geometry, preventing accidental decoupling. Further, lever safety pins are common features for additional security, and female-end "self-locking" levers are also available. Because the groove is cut all the way around the male end, there is no specific rotational alignment necessary to couple, as there would be with threaded connectors, and there is no opportunity for cross-threading. This results in a fast, error-resistant coupling operation. Because the compression between the two fittings is limited by the size of the cams on the end of the levers and the rotation of the levers themselves, there is also no possibility of over- or under-tightening the fitting; the pressure against the sealing gasket is effectively constant from one coupling operation to the next, reducing possibility of leaks.
Materials and uses
Cam and groove fittings are commonly available in several materials, including stainless steel, aluminum, brass, and polypropylene. Because there are no threads to become fouled, cam and groove couplings are popular in moderately dirty environments, such as septic tank pump trucks and chemical or fuel tanker trucks. The system is especially well suited to a situation where frequent changes of hoses are required, such as for petroleum trucks, etc. As examples of industrial application, cam and groove fittings can be used in a system where rapid filling of chemical drums takes place, or by factories that have needs of dye, paint, and ink medium transfers.
Note: Cam and Groove couplings are not recommended for any type of compressed gas service, including steam or air.
Types and sizes
Generally speaking, the most common types of cam and groove coupling are the following. The letter codes are the common designations, while the Roman-numeral codes come from the GSA CID A-A-59326 standard:
Type A or Type I: adapter (male end) with female thread, e.g. BSP or NPT
Type B or Type VII: coupler (female end) with male thread, e.g. BSP or NPT
Type C or Type VI: coupler with shank (hose barb)
Type D or Type V: coupler with female thread
Type E or Type II: adapter with shank
Type F or Type III: adapter with male thread
Type IV: adapter with flange, TTMA (Truck Trailer Manufacturer's Association)
Type VIII: coupler with flange, TTMA
Type DC or Type IX: dust caps (female)
Type DP or Type X: dust plugs (male)
Apart from these basic types, the hose/pipe connection side of a cam and groove coupler can be of various other types such as with a flange, for butt welding to a container, for truck use with a sight glass, etc.
These couplings are available in the following diameters:
Gallery
See also
References
https://www.proflow-dynamics.com/media/wysiwyg/Catalog/Cam_and_Groove_Dimensions.pdf
Mechanics
Plumbing valves | Cam and groove | [
"Physics",
"Engineering"
] | 881 | [
"Mechanics",
"Mechanical engineering"
] |
38,176,657 | https://en.wikipedia.org/wiki/Periodic%20travelling%20wave | In mathematics, a periodic travelling wave (or wavetrain) is a periodic function of one-dimensional space that moves with constant speed. Consequently, it is a special type of spatiotemporal oscillation that is a periodic function of both space and time.
Periodic travelling waves play a fundamental role in many mathematical equations, including self-oscillatory systems, excitable systems and reaction–diffusion–advection systems.
Equations of these types are widely used as mathematical models of biology, chemistry and physics, and many empirical examples of phenomena resembling periodic travelling waves have been found.
The mathematical theory of periodic travelling waves is most fully developed for partial differential equations, but these solutions also occur in a number of other types of mathematical system, including integrodifferential equations, integrodifference equations, coupled map lattices and cellular automata.
As well as being important in their own right, periodic travelling waves are significant as the one-dimensional equivalent of spiral waves and target patterns in two-dimensional space, and of scroll waves in three-dimensional space.
History of research
While periodic travelling waves have been known as solutions of the wave equation since the 18th century, their study in nonlinear systems began in the 1970s. A key early research paper was that of Nancy Kopell and Lou Howard which proved several fundamental results on periodic travelling waves in reaction–diffusion equations. This was followed by significant research activity during the 1970s and early 1980s. There was then a period of inactivity, before interest in periodic travelling waves was renewed by mathematical work on their generation, and by their detection in ecology, in spatiotemporal data sets on cyclic populations. Since the mid-2000s, research on periodic travelling waves has benefitted from new computational methods for studying their stability and absolute stability.
Families
The existence of periodic travelling waves usually depends on the parameter values in a mathematical equation. If there is a periodic travelling wave solution, then there is typically a family of such solutions, with different wave speeds. For partial differential equations, periodic travelling waves typically occur for a continuous range of wave speeds.
Stability
An important question is whether a periodic travelling wave is stable or unstable as a solution of the original mathematical system. For partial differential equations, it is typical that the wave family subdivides into stable and unstable parts. For unstable periodic travelling waves, an important subsidiary question is whether they are absolutely or convectively unstable, meaning that there are or are not stationary growing linear modes. This issue has only been resolved for a few partial differential equations.
Generation
A number of mechanisms of periodic travelling wave generation are now well established. These include:
Heterogeneity: spatial noise in parameter values can generate a series of bands of periodic travelling waves. This is important in applications to oscillatory chemical reactions, where impurities can cause target patterns or spiral waves, which are two-dimensional generalisations of periodic travelling waves. This process provided the motivation for much of the work on periodic travelling waves in the 1970s and early 1980s. Landscape heterogeneity has also been proposed as a cause of the periodic travelling waves seen in ecology.
Invasions, which can leave a periodic travelling wave in their wake. This is important in the Taylor–Couette system in the presence of through flow, in chemical systems such as the Belousov–Zhabotinsky reaction and in predator-prey systems in ecology.
Domain boundaries with Dirichlet or Robin boundary conditions. This is potentially important in ecology, where Robin or Dirichlet conditions correspond to a boundary between habitat and a surrounding hostile environment. However definitive empirical evidence on the cause of waves is hard to obtain for ecological systems.
Migration driven by pursuit and evasion. This may be significant in ecology.
Migration between sub-populations, which again has potential ecological significance.
In all of these cases, a key question is which member of the periodic travelling wave family is selected. For most mathematical systems this remains an open problem.
Spatiotemporal chaos
It is common that for some parameter values, the periodic travelling waves arising from a wave generation mechanism are unstable. In such cases the solution usually evolves to spatiotemporal chaos. Thus the solution involves a spatiotemporal transition to chaos via the periodic travelling wave.
Lambda–omega systems and the complex Ginzburg–Landau equation
There are two particular mathematical systems that serve as prototypes for periodic travelling waves, and which have been fundamental to the development of mathematical understanding and theory. These are the "lambda–omega" class of reaction–diffusion equations

∂u/∂t = ∂²u/∂x² + λ(r)u − ω(r)v
∂v/∂t = ∂²v/∂x² + ω(r)u + λ(r)v

(where r² = u² + v²) and the complex Ginzburg–Landau equation

∂A/∂t = (1 + ib) ∂²A/∂x² + A − (1 + ic) |A|²A

(A is complex-valued). Note that these systems are the same if b = 0, λ(r) = 1 − r² and ω(r) = −c r². Both systems can be simplified by rewriting the equations in terms of the amplitude (r or |A|) and the phase (arctan(v/u) or arg A). Once the equations have been rewritten in this way, it is easy to see that solutions with constant amplitude are periodic travelling waves, with the phase being a linear function of space and time. Therefore, u and v, or Re(A) and Im(A), are sinusoidal functions of space and time.
These exact solutions for the periodic travelling wave families enable a great deal of further analytical study. Exact conditions for the stability of the periodic travelling waves can be found, and the condition for absolute stability can be reduced to the solution of a simple polynomial. Also exact solutions have been obtained for the selection problem for waves generated by invasions
and by zero Dirichlet boundary conditions.
In the latter case, for the complex Ginzburg–Landau equation, the overall solution is a stationary Nozaki-Bekki hole.
Much of the work on periodic travelling waves in the complex Ginzburg–Landau equation is in the physics literature, where they are usually known as plane waves.
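These constant-amplitude solutions can be checked directly. The sketch below is illustrative only: it assumes the common choices λ(r) = 1 − r² and ω(r) = −c r² (an assumption, one particular member of the lambda-omega class) and verifies numerically that u = R cos(kx − Ωt), v = R sin(kx − Ωt) with λ(R) = k² and Ω = −ω(R) satisfies the equations up to finite-difference error:

```python
import numpy as np

# Check of an exact periodic travelling wave of a lambda-omega system,
# assuming lambda(r) = 1 - r**2 and omega(r) = -c*r**2 (illustrative choice).
c = 0.5                      # parameter in omega(r); NOT the wave speed
k = 0.4                      # wavenumber of the wave
R = np.sqrt(1.0 - k**2)      # amplitude, from lambda(R) = k**2
Omega = c * R**2             # temporal frequency, from Omega = -omega(R)

x = np.linspace(0.0, 2.0*np.pi/k, 400, endpoint=False)  # one spatial period
dx = x[1] - x[0]
t = 0.7                      # arbitrary time at which to test the solution

theta = k*x - Omega*t
u = R*np.cos(theta)
v = R*np.sin(theta)

def lap(f):
    # periodic second derivative by central differences
    return (np.roll(f, -1) - 2.0*f + np.roll(f, 1)) / dx**2

r2 = u**2 + v**2
lam, om = 1.0 - r2, -c*r2

u_t = R*Omega*np.sin(theta)      # exact time derivatives of the wave
v_t = -R*Omega*np.cos(theta)

res_u = u_t - (lap(u) + lam*u - om*v)   # residual of the u equation
res_v = v_t - (lap(v) + om*u + lam*v)   # residual of the v equation
print(float(np.max(np.abs(res_u))), float(np.max(np.abs(res_v))))
```

The residuals are at the level of the finite-difference truncation error, confirming the exact solution; the wave speed is Ω/k.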
Numerical computation of periodic travelling waves and their stability
For most mathematical equations, analytical calculation of periodic travelling wave solutions is not possible, and therefore it is necessary to perform numerical computations. For partial differential equations, denote by x and t the (one-dimensional) space and time variables, respectively. Then periodic travelling waves are functions of the travelling wave variable z=x-c t. Substituting this solution form into the partial differential equations gives a system of ordinary differential equations known as the travelling wave equations. Periodic travelling waves correspond to limit cycles of these equations, and this provides the basis for numerical computations. The standard computational approach is numerical continuation of the travelling wave equations. One first performs a continuation of a steady state to locate a Hopf bifurcation point. This is the starting point for a branch (family) of periodic travelling wave solutions, which one can follow by numerical continuation. In some (unusual) cases both end points of a branch (family) of periodic travelling wave solutions are homoclinic solutions, in which case one must use an external starting point, such as a numerical solution of the partial differential equations.
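As a minimal illustration of the travelling-wave substitution, consider a generic scalar reaction–diffusion equation (a simplified stand-in, not one of the specific models above):

```latex
% Travelling-wave substitution for a scalar reaction--diffusion equation:
% u(x,t) = U(z) with z = x - ct gives u_t = -c U'(z), u_{xx} = U''(z).
\begin{aligned}
u_t &= u_{xx} + f(u), \\
-c\,U'(z) &= U''(z) + f\bigl(U(z)\bigr), \\
0 &= U'' + c\,U' + f(U).
\end{aligned}
```

Written as a first-order system, periodic travelling waves correspond to limit cycles of this ordinary differential equation.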
Periodic travelling wave stability can also be calculated numerically, by computing the spectrum. This is made easier by the fact that the spectrum of periodic travelling wave solutions of partial differential equations consists entirely of essential spectrum.
Possible numerical approaches include Hill's method and numerical continuation of the spectrum. One advantage of the latter approach is that it can be extended to calculate boundaries in parameter space between stable and unstable waves.
Software: The free, open-source software package Wavetrain http://www.ma.hw.ac.uk/wavetrain is designed for the numerical study of periodic travelling waves.
Using numerical continuation, Wavetrain is able to calculate the form and stability of periodic travelling wave solutions of partial differential equations, and the regions of parameter space in which waves exist and in which they are stable.
Applications
Examples of phenomena resembling periodic travelling waves that have been found empirically include the following.
Many natural populations undergo multi-year cycles of abundance. In some cases these population cycles are spatially organised into a periodic travelling wave. This behaviour has been found in voles in Fennoscandia and Northern UK, geometrid moths in Northern Fennoscandia, larch budmoths in the European Alps and red grouse in Scotland.
In semi-deserts, vegetation often self-organises into spatial patterns. On slopes, this typically consists of stripes of vegetation running parallel to the contours, separated by stripes of bare ground; this type of banded vegetation is sometimes known as Tiger bush. Many observational studies have reported slow movement of the stripes in the uphill direction. However, in a number of other cases the data points clearly to stationary patterns, and the question of movement remains controversial. The conclusion that is most consistent with available data is that some banded vegetation patterns move while others do not. Patterns in the former category have the form of periodic travelling waves.
Travelling bands occur in oscillatory and excitable chemical reactions. They were observed in the 1970s in the Belousov–Zhabotinsky reaction and they formed an important motivation for the mathematical work done on periodic travelling waves at that time. More recent research has also exploited the capacity to link the experimentally observed bands with mathematical theory of periodic travelling waves via detailed modelling.
Periodic travelling waves occur in the Sun, as part of the solar cycle. They are a consequence of the generation of the Sun's magnetic field by the solar dynamo. As such, they are related to sunspots.
In hydrodynamics, convection patterns often involve periodic travelling waves. Specific instances include binary fluid convection and heated wire convection.
Patterns of periodic travelling wave form occur in the "printer's instability", in which the thin gap between two rotating acentric cylinders is filled with oil.
See also
Plane wave
Reaction–diffusion system
Wave
References
Wave mechanics | Periodic travelling wave | [
"Physics"
] | 1,990 | [
"Wave mechanics",
"Waves",
"Physical phenomena",
"Classical mechanics"
] |
50,048,068 | https://en.wikipedia.org/wiki/Simulation%20in%20manufacturing%20systems | Simulation in manufacturing systems is the use of software to make computer models of manufacturing systems, so to analyze them and thereby obtain important information. It has been syndicated as the second most popular management science among manufacturing managers. However, its use has been limited due to the complexity of some software packages, and to the lack of preparation some users have in the fields of probability and statistics.
This technique represents a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem.
Objectives
The most important objective of simulation in manufacturing is understanding how a change in a local part of the system affects the system as a whole. It is easy to see the difference a change makes locally, but it is very difficult or impossible to assess its impact on the overall system without simulation, which gives us some measure of this impact. Measures which can be obtained by a simulation analysis are:
Parts produced per unit time
Time spent in system by parts
Time spent by parts in queue
Time spent during transportation from one place to another
On-time deliveries made
Build up of the inventory
Inventory in process
Percent utilization of machines and workers.
Some other benefits include Just-in-time manufacturing, calculation of optimal resources required, validation of the proposed operation logic for controlling the system, and data collected during modelling that may be used elsewhere.
The following is an example: in a manufacturing plant one machine processes 100 parts in 10 hours, but the parts arriving at the machine in 10 hours number 150, so there is a build-up of inventory. This inventory can be reduced by employing another machine occasionally, which removes the local inventory build-up. But now 150 parts are passed on in 10 hours, which might not be processed by the next machine, and thus we have just shifted the in-process inventory from one machine to another without having any impact on overall production.
Simulation is used to address several issues in manufacturing, such as assessing a workshop's ability to meet requirements, or determining the optimal inventory needed to cover for machine failures.
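The inventory build-up in the example above can be reproduced with a minimal discrete-event sketch (illustrative only; the rates, exponential timing and random seed are assumptions, not data from the text):

```python
import random

# One machine: parts arrive at ~15 per hour, but it processes only ~10 per
# hour, so in-process inventory (the queue) builds up over the shift.
random.seed(42)
SIM_TIME = 10.0        # hours simulated
ARRIVAL_RATE = 15.0    # parts per hour arriving at the machine
SERVICE_RATE = 10.0    # parts per hour the machine can process

t = 0.0
next_arrival = random.expovariate(ARRIVAL_RATE)
next_departure = float("inf")   # no part in service yet
queue = 0                       # parts in the system (waiting + in service)
arrivals = 0
produced = 0

while True:
    t = min(next_arrival, next_departure)
    if t >= SIM_TIME:
        break
    if next_arrival <= next_departure:      # an arrival happens first
        arrivals += 1
        queue += 1
        next_arrival = t + random.expovariate(ARRIVAL_RATE)
    else:                                   # the machine finishes a part
        queue -= 1
        produced += 1
        next_departure = float("inf")
    if queue > 0 and next_departure == float("inf"):
        next_departure = t + random.expovariate(SERVICE_RATE)   # start next part

print("produced:", produced, "inventory left:", queue)
```

Because the arrival rate exceeds the service rate, the leftover inventory grows with the simulated time, which is exactly the kind of system-level measure listed above.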
Methods
In the past, manufacturing simulation tools were classified as languages or simulators. Languages were very flexible tools, but rather complicated for managers to use and too time-consuming. Simulators were more user-friendly, but they came with rather rigid templates that didn't adapt well enough to rapidly changing manufacturing techniques. Nowadays, software is available that combines the flexibility and user-friendliness of both, but still some authors have reported that the use of simulation to design and optimize manufacturing processes is relatively low.
One of the techniques most used by manufacturing system designers is discrete event simulation. This type of simulation allows the system's performance to be assessed by statistically and probabilistically reproducing the interactions of all its components over a determined period of time. In some cases, manufacturing systems modelling needs a continuous simulation approach. These are cases where the states of the system change continuously, as, for example, in the movement of liquids in oil refineries or chemical plants. Since continuous simulation cannot be performed exactly on digital computers, it is approximated by taking small discrete time steps. This is a useful feature, since there are many cases where both continuous and discrete simulation have to be combined. This is called hybrid simulation, which is needed in many industries, for example the food industry.
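The idea of approximating continuous dynamics by small discrete steps can be sketched with a simple Euler integration of a tank level (all parameter values are illustrative assumptions, not from a real plant):

```python
import math

# Continuous simulation approximated by small discrete (Euler) steps:
# liquid level h in a tank with constant inflow and gravity-driven outflow.
q_in = 0.02     # inflow, m^3/s
A = 1.0         # tank cross-sectional area, m^2
k = 0.01        # outflow coefficient: q_out = k * sqrt(h)

h = 0.5         # initial level, m
dt = 0.1        # time step, s
t, t_end = 0.0, 3600.0
while t < t_end:
    q_out = k * math.sqrt(h)
    h += dt * (q_in - q_out) / A    # dh/dt = (q_in - q_out) / A
    t += dt
print(round(h, 3))  # settles near the steady state h = (q_in/k)**2 = 4.0
```

In a hybrid simulation, discrete events (a valve opening, a batch starting) would be interleaved with continuous updates of this kind.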
A framework to evaluate different manufacturing simulation tools was developed by Benedettini & Tjahjono (2009) using the ISO 9241 definition of usability: “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.” This framework considered effectiveness, efficiency and user satisfaction as the three main performance criteria as follows:
The following is a list of popular simulation techniques:
Discrete event simulation (DES)
System dynamics (SD)
Agent-based modelling (ABM)
Intelligent simulation: based on an integration of simulation and artificial intelligence (AI) techniques
Petri net
Monte Carlo simulation (MCS)
Virtual simulation: allows the user to model the system in a 3D immersive environment
Hybrid techniques: combination of different simulation techniques.
Applications
The following is a list of common applications of simulation in manufacturing:
References
Manufacturing
Simulation | Simulation in manufacturing systems | [
"Engineering"
] | 879 | [
"Manufacturing",
"Mechanical engineering"
] |
50,049,408 | https://en.wikipedia.org/wiki/Interspecies%20hydrogen%20transfer | Interspecies hydrogen transfer (IHT) is a form of interspecies electron transfer. It is a syntrophic process by which H2 is transferred from one organism to another, particularly in the rumen and other anaerobic environments.
IHT was discovered between Methanobacterium bryantii strain M.o.H and an "S" organism in 1967 by Marvin Bryant, Eileen Wolin, Meyer Wolin, and Ralph Wolfe at the University of Illinois. The two form a co-culture that was mistaken for a single species, Methanobacillus omelianskii. It was shown in 1973 that this process occurs between Ruminococcus albus and Wolinella succinogenes. A more recent publication describes how the gene expression profiles of these organisms change when they undergo interspecies hydrogen transfer; of note, a switch to an electron-confurcating hydrogenase occurs in R. albus 7.
This process affects the carbon cycle: methanogens can participate in interspecies hydrogen transfer combining H2 and CO2 to produce CH4. Besides methanogens, acetogens, and sulfate-reducing bacteria can participate in IHT.
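The methanogenic step referred to here, combining H2 and CO2, follows the standard overall stoichiometry of hydrogenotrophic methanogenesis:

```latex
\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
```

By consuming H2, this reaction keeps the hydrogen partial pressure low enough for the partner organism's fermentation to remain thermodynamically favourable.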
References
Biochemistry | Interspecies hydrogen transfer | [
"Chemistry",
"Biology"
] | 250 | [
"Biochemistry",
"nan"
] |
50,056,542 | https://en.wikipedia.org/wiki/Bingham-Papanastasiou%20model | An important class of non-Newtonian fluids presents a yield stress limit which must be exceeded before significant deformation can occur – the so-called viscoplastic fluids or Bingham plastics. In order to model the stress-strain relation in these fluids, some fitting have been proposed such as the linear Bingham equation and the non-linear Herschel-Bulkley and Casson models.
Analytical solutions exist for such models in simple flows. For general flow fields, it is necessary to develop numerical techniques to track down yielded/unyielded regions. This can be avoided by introducing into the models a continuation parameter, which facilitates the solution process and produces virtually the same results as the ideal models by the right choice of its value.
Viscoplastic materials like slurries, pastes, and suspensions have a yield stress, i.e. a critical value of stress below which they do not flow; such materials are also called Bingham plastics, after Bingham.
Viscoplastic materials can be well approximated uniformly at all levels of stress as liquids that exhibit infinitely high viscosity in the limit of low shear rates, followed by a continuous transition to a viscous liquid. This approximation can be made more and more accurate, even at vanishingly small shear rates, by means of a material parameter that controls the exponential growth of stress. Thus, a new impetus was given in 1987 with the publication by Papanastasiou of such a modification of the Bingham model with an exponential stress-growth term. The new model basically rendered the original discontinuous Bingham viscoplastic model as a purely viscous one, which was easy to implement and solve and was valid for all rates of deformation. The early efforts by Papanastasiou and his co-workers were taken up by later researchers, who in a series of papers solved many benchmark problems and presented useful solutions, always providing the yielded/unyielded regions in flow fields of interest. Since the early 1990s, other workers in the field have also used the Papanastasiou model for many different problems.
Papanastasiou
Papanastasiou, in 1987, took into account earlier works from the early 1960s, as well as a well-accepted practice in the modelling of soft solids and the sigmoidal modelling of density changes across interfaces. He introduced a continuous regularization for the viscosity function, which has been largely used in numerical simulations of viscoplastic fluid flows thanks to its easy computational implementation. As a weakness, its dependence on a non-rheological (numerical) parameter, which controls the exponential growth of the yield-stress term of the classical Bingham model in regions subjected to very small strain rates, may be pointed out. Thus, he proposed an exponential regularization of the Bingham equation by introducing a parameter m, which controls the exponential growth of stress, and which has dimensions of time. The proposed model (usually called the Bingham–Papanastasiou model) has the form:

τ = [μ + (τ_y/γ̇)(1 − exp(−mγ̇))] γ̇

and is valid for all regions, both yielded and unyielded. It thus avoids solving explicitly for the location of the yield surface, as was done by Beris et al.
Papanastasiou's modification, when applied to the Bingham model, becomes in simple shear flow (1-D flow):

Bingham–Papanastasiou model: τ = η γ̇, with η = μ + (τ_y/γ̇)(1 − exp(−mγ̇))

where η is the apparent viscosity.
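The behaviour of the regularized model in simple shear can be illustrated with a short sketch (the plastic viscosity μ, yield stress τ_y and regularization parameter m below are purely illustrative assumptions, not material data):

```python
import numpy as np

# Papanastasiou-regularized Bingham model in simple shear.
mu = 1.0       # plastic viscosity, Pa.s
tau_y = 5.0    # yield stress, Pa
m = 1000.0     # stress-growth (regularization) parameter, s

def apparent_viscosity(gamma_dot):
    # eta = mu + tau_y * (1 - exp(-m*gamma_dot)) / gamma_dot   (gamma_dot > 0)
    g = np.asarray(gamma_dot, dtype=float)
    return mu + tau_y * (1.0 - np.exp(-m * g)) / g

def shear_stress(gamma_dot):
    return apparent_viscosity(gamma_dot) * np.asarray(gamma_dot, dtype=float)

# As gamma_dot -> 0 the material behaves like a very viscous liquid,
# eta -> mu + tau_y*m, instead of the unbounded viscosity of the ideal
# Bingham model; at high shear rates the stress approaches the Bingham
# line tau = tau_y + mu*gamma_dot.
print(float(apparent_viscosity(1e-6)))   # close to mu + tau_y*m = 5001
print(float(shear_stress(10.0)))         # close to tau_y + mu*10 = 15
```

Larger values of m bring the regularized model closer to the ideal discontinuous Bingham model, at the cost of a stiffer viscosity function near zero shear rate.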
References
Plasticity (physics) | Bingham-Papanastasiou model | [
"Materials_science"
] | 696 | [
"Deformation (mechanics)",
"Plasticity (physics)"
] |
50,058,607 | https://en.wikipedia.org/wiki/Modeling%20and%20simulation%20of%20batch%20distillation%20unit | Aspen Plus, Aspen HYSYS, ChemCad and MATLAB, PRO are the commonly used process simulators for modeling, simulation and optimization of a distillation process in the chemical industries. Distillation is the technique of preferential separation of the more volatile components from the less volatile ones in a feed followed by condensation. The vapor produced is richer in the more volatile components. The distribution of the component in the two phase is governed by the vapour-liquid equilibrium relationship. In practice, distillation may be carried out by either two principal methods. The first method is based on the production of vapor boiling the liquid mixture to be separated and condensing the vapors without allowing any liquid to return to the still. There is no reflux. The second method is based on the return of part of the condensate to still under such conditions that this returning liquid is brought into intimate contact with the vapors on their way to condenser.
Chemical process modeling
Chemical process modeling is a technique used in chemical engineering process design. Process modeling is defined as the physical, mathematical or logical representation of a real process, system or phenomenon using the model library present in the process simulator software. In this technique, using process simulator software, we define a system of interconnected components. A system is defined as a group of objects joined together in some regular order or interdependence toward the accomplishment of some purpose. The system is then solved so that its steady-state or dynamic behavior can be predicted. Components of the system and connections are represented as a process flow diagram. A flow diagram for the ammonia process (Finlayson, 2006) is shown in figure 1 below using Aspen Plus software.
The most important result of developing a mathematical model of a chemical engineering system is the understanding that is gained of what really makes the process tick. Mathematical models can be useful in all phases of chemical engineering, from research and development to plant operations, and even in business and economics studies. The bases for mathematical models are the fundamental physical and chemical laws, such as the laws of conservation of mass, energy and momentum, together with degrees-of-freedom analysis. Mathematical modeling is very much an art; it takes experience, practice and brain power to be a good mathematical modeler.
Process simulation
A simulation is the representation of a real-world process or system over a period of time. Simulation can be done by hand or on a computer; it involves the generation of an artificial history of the system and the observation of that artificial history to draw inferences concerning the operating characteristics of the real system. Thus, simulation modelling can be used both as an analysis tool for predicting the effect of changes to an existing system and as a design tool to predict the performance of a new system under varying sets of circumstances. Process simulation describes process flow diagrams where various unit operations are present and connected by product streams.
It is extensively used both in the educational arena and in industry to predict the behavior of a process using material balance equations, equilibrium relationships, reaction kinetics, etc.
Batch distillation
In batch distillation, the feed is charged to the still pot, to which heat is supplied continuously through a steam jacket or a steam coil. As the mixture boils, it generates a vapour richer in the more volatile component. But as boiling continues, the concentration of the more volatile component in the liquid decreases. It is generally assumed that equilibrium vaporization occurs in the still. The vapour is led to a condenser and the condensate, or top product, is collected in the receiver. At the beginning, the condensate will be pretty rich in the more volatile component, but its concentration decreases as the condensate keeps accumulating in the receiver. The condensate is usually withdrawn intermittently, giving products or cuts of different concentrations. Batch distillation is used when the feed rate is not large enough to justify installation of a continuous distillation unit. It may also be used when the constituents differ greatly in volatility. Figure 1 shows the batch distillation setup.
Batch distillation of binary mixture
Let L be the moles of material in the still and x be the concentration of the volatile component (i.e. A), and let the moles of accumulated condensate be D. The concentration of the equilibrium vapour is y*. Over a small time interval, the change in the amount of liquid in the still is dL and the amount of vapour withdrawn is dD. The following differential mass balance equations may be written:
Total material balance: dD = −dL ----- (i)

Component A balance: y* dD = −d(Lx) = −(L dx + x dL) ----- (ii); putting dD = −dL, y* dL = L dx + x dL ----- (iii)

Equation (i) means that the total amount of vapour generated must be equal to the decrease in the total amount of liquid. Similarly, equation (ii) means that the loss in the number of moles of A from the still because of vaporization is the same as the amount of A in the small amount of vapour generated.

Rearranging equation (iii),

dL/L = dx/(y* − x) ----- (iv)

If distillation starts with F moles of feed of concentration x_F and continues till the amount of liquid reduces to W moles (composition x_W), the above equation can be integrated to give

ln(F/W) = ∫ from x_W to x_F of dx/(y* − x) ----- (v)

Equation (v) is the basic equation of batch distillation and is called the Rayleigh equation. The Rayleigh equation is used for the calculation of data in a batch distillation column.
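The Rayleigh integral can be evaluated numerically once an equilibrium relationship y*(x) is chosen. The sketch below assumes a binary mixture with constant relative volatility α; the numbers (α, feed charge and compositions) are illustrative assumptions, not data from the text:

```python
import numpy as np

# Numerical evaluation of the Rayleigh equation, ln(F/W) = int dx/(y*-x),
# for an assumed constant-relative-volatility binary mixture.
alpha = 2.5            # relative volatility of A with respect to B (assumed)
F, xF = 100.0, 0.60    # feed charge, mol, and its mole fraction of A
xW = 0.30              # mole fraction of A left in the still at the end

def y_star(x):
    # equilibrium vapour composition for constant relative volatility
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x = np.linspace(xW, xF, 2001)
f = 1.0 / (y_star(x) - x)
I = float(np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2.0)  # trapezoidal rule

W = F * np.exp(-I)            # moles remaining in the still
D = F - W                     # moles of accumulated distillate
xD = (F * xF - W * xW) / D    # average distillate composition (A balance)
print(round(float(W), 1), round(float(D), 1), round(float(xD), 3))
```

The average distillate composition xD comes from an overall component-A balance and is richer in A than the feed, as expected.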
Aspen Plus software
History
During the 1970s, researchers at the Massachusetts Institute of Technology (MIT) developed a novel technology with United States Department of Energy funding. The undertaking, known as the Advanced System for Process Engineering (ASPEN) Project, was originally intended to design nonlinear simulation software that could aid in the development of synthetic fuels. In 1981, AspenTech, a publicly traded company, was founded to commercialize the simulation software package. AspenTech went public in October 1994 and has acquired 19 industry-leading companies as part of its mission to offer complete, integrated solutions to the process industries.
As the complexity of a plant integrated with several process units increases, solving the resulting large equation set becomes a challenge. In this situation, we usually use a process flowsheet simulator.
Type of Aspen simulator package
The sophisticated Aspen software tool can simulate large processes with a high degree of accuracy. It has a model library that includes mixers, splitters, phase separators, heat exchangers, distillation columns, reactors, pressure changers, manipulators, etc. By interconnecting several unit operations, we are able to develop a process flow diagram (PFD) for a complete plant. The Fortran code required to solve the model structure of either a single unit or a complete chemical plant is built into the Aspen simulator.
The Aspen simulator has been developed for the simulation of a wide variety of processes, such as chemical and petrochemical, petroleum refining, polymer, and coal-based processes.
Nowadays, different Aspen packages are available for simulations with promising performance. Briefly, some of them are presented below.
Aspen Plus – This type of process simulator is used for steady state simulation of chemicals, petrochemicals and petroleum industries. It is also used for performance monitoring, design, optimization and business planning.
Aspen Dynamics – This type of process simulator is used for dynamic studies and closed-loop control of several process industries. Aspen Dynamics is integrated with Aspen Plus.
Aspen Batch CAD – This simulator is typically used for batch processing, reaction and distillation. It allows us to derive reaction and kinetic information from experimental data to create a process simulation.
Aspen Chromatography – This is a dynamic simulation software package used for both batch chromatography and simulated moving bed chromatography processes.
Aspen Properties – It is useful for thermophysical property calculations.
Aspen Polymer Plus – It is a modeling tool for steady-state and dynamic simulation and optimization of polymer processes. It is available within Aspen Plus or Aspen Properties rather than via an external menu.
Aspen HYSYS – This process modeling package is typically used for steady-state simulation, performance monitoring, design, optimization and business planning for petroleum refining and the oil and gas industries.
Aspen simulates the performance of the designed process. A solid understanding of the underlying chemical engineering principles is needed to supply reasonable values of input parameters and to analyse the results obtained. In addition to the process flow diagram, the input information required to simulate a process comprises: setup, component properties, streams and blocks.
Simulation result of batch distillation unit
BatchFrac is a rigorous model used for simulation of batch distillation columns, present in the model library of the software. It also includes reactions occurring in any stage of the separator. The BatchFrac model does not consider column hydraulics, and assumes negligible vapour holdup and constant liquid holdup. Modeling and simulation of a batch distillation unit is done with the help of one of the most important process simulators used in the chemical industry (Aspen Plus), with the data given in the table, after which the simulation result can be checked.
The various steps involved in the simulation of a batch distillation column using Aspen Plus software are:
Understand the problem statement and input stream data
Specifying the components – add ethanol and water from the components list
Specifying the property method – UNIFAC
Going to the simulation library
Creating the flowsheet with the help of the model library present in the software; for batch distillation, choose the BatchFrac model from the library (figure 2)
Specifying the input stream information, i.e. temperature, pressure, composition type and flow rate of the components
Temperature = 373 K, Pressure = 1 bar
Flow basis = Volume (50 L/hr)
Composition type = mole fraction
Ethanol – 0.5, Water – 0.5
Specifying the block information, i.e. what type of distillation column is present (pot + overhead condenser type)
Configuring settings – type of global units used – METCBR
Running the simulation
Viewing the results – check the stream results or stream table obtained from running the simulation (figure 3). Similarly, we can simulate other distillation columns, such as a fractional distillation column, by using the RadFrac model from the model library present in the software.
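The staged BatchFrac calculation itself is internal to Aspen Plus, but the material balance that any batch still must satisfy can be illustrated with the much simpler Rayleigh equation for a single-stage pot. The sketch below (Python; the constant relative volatility is an assumption standing in for the UNIFAC model used above, and the function name is illustrative) integrates d(ln W) = dx/(y − x); it is a back-of-envelope illustration, not the BatchFrac algorithm.

```python
import math

def rayleigh_still(x0, xf, alpha, W0, steps=10000):
    """Moles left in the pot when its light-component mole fraction
    falls from x0 to xf, for constant relative volatility alpha.
    Integrates Rayleigh's equation d(ln W) = dx / (y - x) with the
    midpoint rule, where y is the equilibrium vapour composition."""
    lnW = math.log(W0)
    dx = (xf - x0) / steps
    x = x0
    for _ in range(steps):
        xm = x + dx / 2.0
        y = alpha * xm / (1.0 + (alpha - 1.0) * xm)  # y > x when alpha > 1
        lnW += dx / (y - xm)
        x += dx
    return math.exp(lnW)

# 100 mol charge at 50 mol% light component boiled down to 20 mol%:
W_left = rayleigh_still(0.5, 0.2, alpha=2.3, W0=100.0)
```

Boiling the pot further down (smaller final composition) must leave less liquid behind, which gives a quick sanity check on any such integration.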
See also
Flash distillation
Fractional distillation
Steam distillation
Process optimization
Process design
Process simulation
Modeling and simulation
Computer simulation
Advanced Simulation Library
List of chemical process simulators
References
Distillation
Simulation | Modeling and simulation of batch distillation unit | [
"Chemistry"
] | 2,211 | [
"Distillation",
"Separation processes"
] |
57,254,124 | https://en.wikipedia.org/wiki/ADCIRC | The ADCIRC model is a high-performance, cross-platform numerical ocean circulation model popular in simulating storm surge, tides, and coastal circulation problems.
Originally developed by Drs. Rick Luettich and Joannes Westerink,
the model is developed and maintained by a combination of academic, governmental, and corporate partners, including the University of North Carolina at Chapel Hill, the University of Notre Dame, and the US Army Corps of Engineers.
The ADCIRC system includes an independent multi-algorithmic wind forecast model and also has advanced coupling capabilities, allowing it to integrate effects from sediment transport, ice, waves, surface runoff, and baroclinicity.
Access
The model is free, with source code made available by request via the website, allowing users to run the model on any system with a Fortran compiler. A pre-compiled Windows version of the model can also be purchased alongside the SMS software. ADCIRC is coded in Fortran, and can be used with native binary, text, or netCDF file formats.
Capabilities
The model formulation
is based on the shallow water equations, solving the continuity equation (represented in the form of the Generalized Wave Continuity Equation)
and the momentum equations (with advective, Coriolis, eddy viscosity, and surface stress terms included). ADCIRC utilizes the finite element method in either three-dimensional or two-dimensional depth-integrated form on a triangular unstructured grid with Cartesian or spherical coordinates. It can run in either barotropic or baroclinic modes, allowing inclusion of changes in water density and properties such as salinity and temperature. ADCIRC can be run either in serial mode (e.g. on a personal computer) or in parallel on supercomputers via MPI. The model has been optimized to be highly parallelized, in order to facilitate rapid computation of large, complex problems.
ADCIRC is able to apply several different bottom friction formulations including Manning's n-based bottom drag due to changes in land coverage (such as forests, cities, and seafloor composition), as well as utilize atmospheric forcing data (wind stress and atmospheric pressure) from several sources, and further reduce the strength of the wind forcing due to surface roughness effects.
The model is also able to incorporate effects such as time-varying topography and bathymetry, boundary fluxes from rivers or other sources, tidal potential, and sub-grid scale features like levees.
ADCIRC is frequently coupled to a wind wave model such as STWAVE, SWAN, or WAVEWATCH III, especially in storm surge applications where wave radiation stress can have important effects on ocean circulation and vice versa. In these applications, the model is able to take advantage of tight coupling with wave models to increase calculation accuracy.
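ADCIRC's actual discretization (the GWCE solved with finite elements on unstructured triangular meshes) is well beyond a short example, but the coupled continuity/momentum structure described above can be shown with a toy model: the 1-D linearized shallow water equations on a periodic, staggered finite-difference grid. Everything here (grid size, depth, the forward-backward time stepping) is an illustrative assumption, not ADCIRC's method.

```python
import math

def shallow_water_1d(nx=200, nt=400, L=1.0e5, H=10.0, g=9.81):
    """Integrate d(eta)/dt = -H du/dx and du/dt = -g d(eta)/dx on a
    periodic staggered grid (u[i] lives between eta[i] and eta[i+1])."""
    dx = L / nx
    dt = 0.5 * dx / math.sqrt(g * H)          # CFL-limited time step
    # Gaussian free-surface bump, fluid initially at rest
    eta = [math.exp(-(((i - nx / 2) * dx) / (L / 20)) ** 2) for i in range(nx)]
    u = [0.0] * nx
    for _ in range(nt):
        for i in range(nx):                   # momentum equation
            u[i] -= dt * g * (eta[(i + 1) % nx] - eta[i]) / dx
        for i in range(nx):                   # continuity equation
            eta[i] -= dt * H * (u[i] - u[i - 1]) / dx
    return eta, u
```

Because the divergence terms telescope on a periodic grid, the scheme conserves the total surface elevation, a discrete analogue of the mass conservation any circulation model must respect.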
References
External links
ADCIRC official website
Physical oceanography
Water waves
Numerical climate and weather models
Computational science
Computational fluid dynamics
science software
Scientific simulation software | ADCIRC | [
"Physics",
"Chemistry",
"Mathematics"
] | 602 | [
"Physical phenomena",
"Applied and interdisciplinary physics",
"Water waves",
"Computational fluid dynamics",
"Applied mathematics",
"Computational physics",
"Computational science",
"Waves",
"Physical oceanography",
"Fluid dynamics"
] |
57,254,307 | https://en.wikipedia.org/wiki/Prestressed%20concrete%20cylinder%20pipe | Prestressed concrete cylinder pipe (PCCP) is a common variety of large-diameter concrete pressure pipe used for transporting water and wastewater. PCCP is typically manufactured according to the American Water Works Association (AWWA) standard C304. PCCP is a composite structure composed of a concrete core, high-tensile prestressed steel wires, a thin steel cylinder, and a mortar coating. It is widely used globally in water systems infrastructure.
References
Structural engineering standards
Plumbing
Drainage | Prestressed concrete cylinder pipe | [
"Physics",
"Engineering"
] | 100 | [
"Structural engineering",
"Materials stubs",
"Plumbing",
"Construction",
"Materials",
"Structural engineering standards",
"Matter"
] |
57,255,362 | https://en.wikipedia.org/wiki/Thermal%20boundary%20layer%20thickness%20and%20shape | This page describes some parameters used to characterize the properties of the thermal boundary layer formed by a heated (or cooled) fluid moving along a heated (or cooled) wall. In many ways, the thermal boundary layer description parallels the velocity (momentum) boundary layer description first conceptualized by Ludwig Prandtl. Consider a fluid of uniform temperature T0 and velocity u0 impinging onto a stationary plate uniformly heated to a temperature Tw. Assume the flow and the plate are semi-infinite in the positive/negative direction perpendicular to the x–y plane. As the fluid flows along the wall, the fluid at the wall surface satisfies a no-slip boundary condition and has zero velocity, but as you move away from the wall, the velocity of the flow asymptotically approaches the free stream velocity u0. The temperature at the solid wall is Tw and gradually changes to T0 as one moves toward the free stream of the fluid. It is impossible to define a sharp point at which the thermal boundary layer fluid or the velocity boundary layer fluid becomes the free stream, yet these layers have well-defined characteristic thicknesses δT and δv. The parameters below provide a useful definition of this characteristic, measurable thickness for the thermal boundary layer. Also included in this boundary layer description are some parameters useful in describing the shape of the thermal boundary layer.
99% thermal boundary layer thickness
The thermal boundary layer thickness, δT, is the distance across a boundary layer from the wall to a point where the flow temperature has essentially reached the 'free stream' temperature, T0. This distance is defined normal to the wall in the y-direction. The thermal boundary layer thickness is customarily defined as the point in the boundary layer, y = δT, where the temperature T(x, y) reaches 99% of the free stream value T0:
δT such that (T(x, δT) − Tw)/(T0 − Tw) = 0.99
at a position x along the wall. In a real fluid, this quantity can be estimated by measuring the temperature profile at a position x along the wall. The temperature profile is the temperature as a function of y at a fixed x position.
For laminar flow over a flat plate at zero incidence, the thermal boundary layer thickness is given by:
δT = δv Pr^(−1/3)
where
Pr is the Prandtl number
δv ≈ 5.0 (ν x / u0)^(1/2) is the thickness of the velocity boundary layer
u0 is the freestream velocity
x is the distance downstream from the start of the boundary layer
ν is the kinematic viscosity
For turbulent flow over a flat plate, the thickness of the thermal boundary layer that is formed is not determined by thermal diffusion; instead, it is random fluctuations in the outer region of the boundary layer of the fluid that are the driving force determining thermal boundary layer thickness. Thus the thermal boundary layer thickness for turbulent flow does not depend on the Prandtl number but instead on the Reynolds number. Hence, the turbulent thermal boundary layer thickness is given approximately by the turbulent velocity boundary layer thickness expression:
δT ≈ δ ≈ 0.37 x / Re_x^(1/5)
where
Re_x = u0 x / ν is the Reynolds number
This turbulent boundary layer thickness formula assumes 1) the flow is turbulent right from the start of the boundary layer and 2) the turbulent boundary layer behaves in a geometrically similar manner (i.e. the velocity profiles are geometrically similar along the flow in the x-direction, differing only by stretching factors in and ). Neither one of these assumptions is true for the general turbulent boundary layer case so care must be exercised in applying this formula.
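The two flat-plate estimates are easy to evaluate numerically. The sketch below codes δv ≈ 5.0(νx/u0)^(1/2), δT = δv·Pr^(−1/3) for the laminar case and δ ≈ 0.37x/Re_x^(1/5) for the turbulent case; the function names and the property values used later are illustrative assumptions, not data from the text.

```python
def laminar_delta(x, u0, nu):
    """99% velocity boundary layer thickness, delta_v ~ 5.0*sqrt(nu*x/u0)."""
    return 5.0 * (nu * x / u0) ** 0.5

def laminar_delta_T(x, u0, nu, Pr):
    """Laminar flat-plate thermal thickness, delta_T = delta_v * Pr**(-1/3)."""
    return laminar_delta(x, u0, nu) * Pr ** (-1.0 / 3.0)

def turbulent_delta(x, u0, nu):
    """Turbulent estimate delta ~ 0.37*x / Re_x**(1/5); no Prandtl dependence."""
    Re_x = u0 * x / nu
    return 0.37 * x / Re_x ** 0.2
```

For a gas with Pr ≈ 0.7 the thermal layer comes out slightly thicker than the velocity layer, while for water (Pr ≈ 7) it is thinner, exactly as the Pr^(−1/3) factor implies.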
Thermal displacement thickness
The thermal displacement thickness, δT*, may be thought of in terms of the difference between a real fluid and a hypothetical fluid with thermal diffusion turned off but with velocity u0 and temperature T0. With no thermal diffusion, the temperature drop is abrupt. The thermal displacement thickness is the distance by which the hypothetical fluid surface would have to be moved in the y-direction to give the same integrated temperature as occurs between the wall and the reference plane in the real fluid. It is a direct analog to the velocity displacement thickness, which is often described in terms of an equivalent shift of a hypothetical inviscid fluid (see Schlichting for velocity displacement thickness).
The definition of the thermal displacement thickness for incompressible flow is based on the integral of the reduced temperature:
δT* = ∫ (1 − θ(x, y)) dy, integrated from y = 0 to ∞,
where the dimensionless temperature is θ(x, y) = (T(x, y) − Tw)/(T0 − Tw). In a wind tunnel, the velocity and temperature profiles are obtained by measuring the velocity and temperature at many discrete y-values at a fixed x-position. The thermal displacement thickness can then be estimated by numerically integrating the scaled temperature profile.
Moment method
A relatively new method for describing the thickness and shape of the thermal boundary layer utilizes the moment method commonly used to describe a random variable's probability distribution. The moment method was developed from the observation that the plot of the second derivative of the thermal profile for laminar flow over a plate looks very much like a Gaussian distribution curve. It is straightforward to cast the properly scaled thermal profile into a suitable integral kernel.
The thermal profile central moments are defined as:
where the mean location, , is given by:
There are some advantages to also include descriptions of moments of the boundary layer profile derivatives with respect to the height above the wall. Consider the first derivative temperature profile central moments given by:
where the mean location is the thermal displacement thickness .
Finally the second derivative temperature profile central moments are given by:
where the mean location, , is given by:
With the moments and the thermal mean location defined, the boundary layer thickness and shape can be described in terms of the thermal boundary layer width (variance), thermal skewnesses, and thermal excess (excess kurtosis). For the Pohlhausen solution for laminar flow on a heated flat plate, it is found that a thermal boundary layer thickness defined as the mean location plus a multiple of the thermal boundary layer width tracks the 99% thickness very well.
For laminar flow, the three different moment cases all give similar values for the thermal boundary layer thickness. For turbulent flow, the thermal boundary layer can be divided into a region near the wall where thermal diffusion is important and an outer region where thermal diffusion effects are mostly absent. Taking a cue from the boundary layer energy balance equation, the second derivative boundary layer moments, track the thickness and shape of that portion of the thermal boundary layer where thermal diffusivity is significant. Hence the moment method makes it possible to track and quantify the region where thermal diffusivity is important using moments whereas the overall thermal boundary layer is tracked using and moments.
Calculation of the derivative moments without the need to take derivatives is simplified by using integration by parts to reduce the moments to simply integrals based on the thermal displacement thickness kernel:
This means that the second derivative skewness, for example, can be calculated as:
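The integration-by-parts identity stated earlier — that the mean location of the first-derivative temperature profile equals the thermal displacement thickness — can be checked numerically. The sketch below uses the convention θ = (T − Tw)/(T0 − Tw); the piecewise-linear test profile is an assumption chosen so the exact answer is known in closed form, and the function names are illustrative.

```python
def thermal_displacement(y, theta):
    """Trapezoidal estimate of delta_T* = integral of (1 - theta) dy."""
    s = 0.0
    for i in range(len(y) - 1):
        s += 0.5 * ((1.0 - theta[i]) + (1.0 - theta[i + 1])) * (y[i + 1] - y[i])
    return s

def derivative_mean(y, theta):
    """Mean location: integral of y*(d theta/dy) dy, evaluated with
    midpoint y-values and finite differences of theta."""
    s = 0.0
    for i in range(len(y) - 1):
        s += 0.5 * (y[i] + y[i + 1]) * (theta[i + 1] - theta[i])
    return s

# Linear ramp reaching the free stream at y = 1, then constant out to y = 2:
y = [i * 0.001 for i in range(2001)]
theta = [min(v, 1.0) for v in y]
```

Both estimates return 1/2 for this profile, confirming that the two routes to the displacement thickness agree.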
Further reading
Hermann Schlichting, Boundary-Layer Theory, 7th ed., McGraw Hill, 1979.
Frank M. White, Fluid Mechanics, McGraw-Hill, 5th Edition, 2003.
Amir Faghri, Yuwen Zhang, and John Howell, Advanced Heat and Mass Transfer, Global Digital Press, 2010.
Notes
References
Schlichting, Hermann (1979). Boundary-Layer Theory, 7th ed., McGraw Hill, New York, U.S.A.
Weyburne, David (2006). "A mathematical description of the fluid boundary layer," Applied Mathematics and Computation, vol. 175, pp. 1675–1684
Weyburne, David (2018). "New thickness and shape parameters for describing the thermal boundary layer," arXiv:1704.01120[physics.flu-dyn]
Boundary layers
Aerodynamics | Thermal boundary layer thickness and shape | [
"Chemistry",
"Engineering"
] | 1,480 | [
"Boundary layers",
"Aerospace engineering",
"Aerodynamics",
"Fluid dynamics"
] |
48,667,933 | https://en.wikipedia.org/wiki/Environmental%20impact%20of%20Mardi%20Gras%20beads | When the parade season ended in 2014, the New Orleans city government spent $1.5 million to pick up about 1,500 tons of Mardi Gras-induced waste, consisting mostly of beads. This is a recurring problem every year for the city. In addition, the city must also deal with the environmental repercussions endured after Mardi Gras. Because they are not biodegradable and contain high amounts of heavy metals, Mardi Gras beads put the local environment and health of southern Louisianians at risk.
Bead composition
Polyethylene and polystyrene are popular plastics used in beads. Polystyrene is very stable and can last for many decades as the beads lay in landfills. Eventually, it will begin to slowly oxidize via UV light from the sun. In contrast, polyethylene cannot decompose with UV radiation and biodegrades extremely slowly.
Lead, cadmium, and other elements have been detected in beads in extremely high amounts through various analytical techniques. Many of these elements exceed the suggested safety limits set by the Consumer Product Safety Commission. For example, the safe amount of lead in a product is 100 ppm; however, there have been findings where the amount of lead in a bead surpassed the limit 300 times over. This threatens parade-goers with exposure to high amounts of lead, especially younger children that could potentially put the beads in their mouths.
History
Plastic beads were not always a part of Mardi Gras; the ritual of throwing beads in New Orleans dates back to the nineteenth century. Beads were originally made of glass, and many of them were imported from Czechoslovakia. The delicate glass beads were later replaced with brightly colored, inexpensive plastic beads, which became popular in the second half of the twentieth century.
Entry into the environment
Beads can accidentally enter storm drains, which empty into Lake Pontchartrain and the Mississippi River, which drains into the Gulf of Mexico. The metals in the beads put fish and other marine lifeforms at risk for lead and cadmium poisoning. Exposure to these metals in water causes high mortality rates and increased biomass of these metals among fish species within a month of exposure. Seafood is prevalent in the south Louisiana diet, most of which is harvested from the Gulf. Eating seafood contaminated with lead and cadmium puts people at risk for poisoning.
Beads also can get tangled in trees during parades. Here, the lead in the beads can get washed off via rain water and find its way into leaves and soil. Lead has been shown to be an inhibitor of cell division, water uptake, and photosynthesis, eventually causing death to the plant.
Impact on humans
Lead exposure has been evidenced to significantly inhibit neurological function. One study examined identical twins who worked together as painters using lead-based paint. Using magnetic resonance spectroscopy, it was discovered that they both had lead levels in their bones about 5-10 times more than the average adult. One twin put himself at a higher risk of lead exposure because he was the only one that removed paint on the job. His lead concentration was 2.5 times higher than his twin’s; and after further testing, his memory was shown to be much worse than his twin’s.
Cadmium has been shown to be carcinogenic due to interactions with DNA topoisomerase IIα. This enzyme helps facilitate cell division and DNA repair, specifically with double strand breaks. Cadmium cations react with the topoisomerase in the following manner:
Here, the cadmium ions react with sulfur-containing thiol groups in cysteine residues, effectively ruining the structure and function of the topoisomerase.
Solutions
Mardi Gras is unlikely to be cancelled due to its popularity, cultural significance, and economic importance, but a concerted effort can still be made to curb the negative environmental effects of the beads. One suggested avenue is to replace currently used plastics with polylactic acid (PLA), a much more environmentally friendly material. This polymer can be degraded naturally into lactic acid via hydrolysis or self-hydrolysis, which can decompose whole PLA products in as little as a month. A second way is to "recycle" by purchasing used beads rather than buying new ones, which can also translate into cost savings for individual purchasers or re-sellers who buy the beads in large quantities; recycling also provides an environmentally friendly method of "disposal" for those who initially purchased Mardi Gras beads.
Another alternative with a greatly reduced environmental impact is to impose restrictions on the presence of the current Mardi Gras beads, such as banning them altogether but permitting non-toxic, eco-friendly alternatives such as beads made from paste, paper, clay, wood, or even vegetables (peas painted with a water-based, non-toxic paint, for example). Some cities and communities in the United States have successfully banned plastic bags, so this would not be an impossible goal. To support and enforce the restriction on toxic beads and ensure implementation of the non-toxic alternatives, the City of New Orleans could also begin imposing a substantial tax or fee on vendors, entertainers, attendees, and other individuals and businesses associated with Mardi Gras to alleviate the hefty financial cost of clean-up that the city itself must bear every year.
References
Mardi Gras in New Orleans
Environment of Louisiana
Environmental issues in the United States
Lead poisoning
Litter
Carcinogens | Environmental impact of Mardi Gras beads | [
"Chemistry",
"Environmental_science"
] | 1,115 | [
"Carcinogens",
"Toxicology"
] |
48,671,216 | https://en.wikipedia.org/wiki/List%20of%20isomers%20of%20tridecane | This is the list of the 802 isomers of tridecane, with their IUPAC names.
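The total of 802 can be cross-checked with the classic centroid-based tree count: a CnH2n+2 skeleton is a free tree on n carbons with maximum degree 4, assembled from rooted subtrees in which each atom has at most three children (the fourth valence points to the parent). A minimal Python sketch of this standard counting method (the function name is illustrative) is:

```python
def count_alkanes(N):
    """counts[n] = number of constitutional isomers of C_n H_{2n+2}."""
    # b[k]: rooted carbon trees on k atoms, each atom with at most 3 children.
    b = [0] * (N + 1)
    b[0] = 1                                  # empty subtree
    for n in range(1, N + 1):
        m = n - 1
        # multisets of 3 subtrees totalling m atoms: cycle index of S3
        t1 = sum(b[i] * b[j] * b[m - i - j]
                 for i in range(m + 1) for j in range(m - i + 1))
        t2 = sum(b[i] * b[(m - i) // 2]
                 for i in range(m + 1) if (m - i) % 2 == 0)
        t3 = b[m // 3] if m % 3 == 0 else 0
        b[n] = (t1 + 3 * t2 + 2 * t3) // 6

    def mul(p, q, deg):                       # truncated polynomial product
        r = [0] * (deg + 1)
        for i, pi in enumerate(p):
            if pi:
                for j, qj in enumerate(q):
                    if i + j <= deg:
                        r[i + j] += pi * qj
        return r

    def stretch(p, k, deg):                   # p(x) -> p(x**k)
        r = [0] * (deg + 1)
        for i, pi in enumerate(p):
            if i * k <= deg:
                r[i * k] = pi
        return r

    counts = [0] * (N + 1)
    for n in range(1, N + 1):
        m = n - 1
        limit = m // 2                        # each centroid branch has < n/2 atoms
        p1 = [b[k] if k <= limit else 0 for k in range(m + 1)]
        p2, p3, p4 = (stretch(p1, k, m) for k in (2, 3, 4))
        sq = mul(p1, p1, m)
        # up to 4 branches at the centroid vertex: cycle index of S4
        total = (mul(sq, sq, m)[m] + 6 * mul(sq, p2, m)[m]
                 + 3 * mul(p2, p2, m)[m] + 8 * mul(p1, p3, m)[m]
                 + 6 * p4[m]) // 24
        if n % 2 == 0:                        # centroid edge: two halves of n/2 atoms
            h = b[n // 2]
            total += h * (h + 1) // 2
        counts[n] = total
    return counts
```

Splitting every tree at its centroid vertex (or, for even n, possibly its centroid edge) guarantees each isomer is counted exactly once; the small cases 2, 3, 5, 9, 18, ... match the familiar butane-through-octane isomer counts.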
Straight-chain
Tridecane
With dodecane backbone
2-Methyldodecane
3-Methyldodecane
4-Methyldodecane
5-Methyldodecane
6-Methyldodecane
With undecane backbone
Dimethyl
2,2-Dimethylundecane
2,3-Dimethylundecane
2,4-Dimethylundecane
2,5-Dimethylundecane
2,6-Dimethylundecane
2,7-Dimethylundecane
2,8-Dimethylundecane
2,9-Dimethylundecane
2,10-Dimethylundecane
3,3-Dimethylundecane
3,4-Dimethylundecane
3,5-Dimethylundecane
3,6-Dimethylundecane
3,7-Dimethylundecane
3,8-Dimethylundecane
3,9-Dimethylundecane
4,4-Dimethylundecane
4,5-Dimethylundecane
4,6-Dimethylundecane
4,7-Dimethylundecane
4,8-Dimethylundecane
5,5-Dimethylundecane
5,6-Dimethylundecane
5,7-Dimethylundecane
6,6-Dimethylundecane
Ethyl
3-Ethylundecane
4-Ethylundecane
5-Ethylundecane
6-Ethylundecane
With decane backbone
Trimethyl
2,2,3-Trimethyldecane
2,2,4-Trimethyldecane
2,2,5-Trimethyldecane
2,2,6-Trimethyldecane
2,2,7-Trimethyldecane
2,2,8-Trimethyldecane
2,2,9-Trimethyldecane
2,3,3-Trimethyldecane
2,3,4-Trimethyldecane
2,3,5-Trimethyldecane
2,3,6-Trimethyldecane
2,3,7-Trimethyldecane
2,3,8-Trimethyldecane
2,3,9-Trimethyldecane
2,4,4-Trimethyldecane
2,4,5-Trimethyldecane
2,4,6-Trimethyldecane
2,4,7-Trimethyldecane
2,4,8-Trimethyldecane
2,4,9-Trimethyldecane
2,5,5-Trimethyldecane
2,5,6-Trimethyldecane
2,5,7-Trimethyldecane
2,5,8-Trimethyldecane
2,5,9-Trimethyldecane
2,6,6-Trimethyldecane
2,6,7-Trimethyldecane
2,6,8-Trimethyldecane
2,7,7-Trimethyldecane
2,7,8-Trimethyldecane
2,8,8-Trimethyldecane
3,3,4-Trimethyldecane
3,3,5-Trimethyldecane
3,3,6-Trimethyldecane
3,3,7-Trimethyldecane
3,3,8-Trimethyldecane
3,4,4-Trimethyldecane
3,4,5-Trimethyldecane
3,4,6-Trimethyldecane
3,4,7-Trimethyldecane
3,4,8-Trimethyldecane
3,5,5-Trimethyldecane
3,5,6-Trimethyldecane
3,5,7-Trimethyldecane
3,5,8-Trimethyldecane
3,6,6-Trimethyldecane
3,6,7-Trimethyldecane
3,7,7-Trimethyldecane
4,4,5-Trimethyldecane
4,4,6-Trimethyldecane
4,4,7-Trimethyldecane
4,5,5-Trimethyldecane
4,5,6-Trimethyldecane
4,5,7-Trimethyldecane
4,6,6-Trimethyldecane
5,5,6-Trimethyldecane
Ethyl+Methyl
3-Ethyl-2-methyldecane
3-Ethyl-3-methyldecane
3-Ethyl-4-methyldecane
3-Ethyl-5-methyldecane
3-Ethyl-6-methyldecane
3-Ethyl-7-methyldecane
3-Ethyl-8-methyldecane
4-Ethyl-2-methyldecane
4-Ethyl-3-methyldecane
4-Ethyl-4-methyldecane
4-Ethyl-5-methyldecane
4-Ethyl-6-methyldecane
4-Ethyl-7-methyldecane
5-Ethyl-2-methyldecane
5-Ethyl-3-methyldecane
5-Ethyl-4-methyldecane
5-Ethyl-5-methyldecane
5-Ethyl-6-methyldecane
6-Ethyl-2-methyldecane
6-Ethyl-3-methyldecane
6-Ethyl-4-methyldecane
7-Ethyl-2-methyldecane
7-Ethyl-3-methyldecane
8-Ethyl-2-methyldecane
Propyl
4-Propyldecane
5-Propyldecane
4-(1-Methylethyl)decane
5-(1-Methylethyl)decane
With nonane backbone
Tetramethyl
2,2,3,3-Tetramethylnonane
2,2,3,4-Tetramethylnonane
2,2,3,5-Tetramethylnonane
2,2,3,6-Tetramethylnonane
2,2,3,7-Tetramethylnonane
2,2,3,8-Tetramethylnonane
2,2,4,4-Tetramethylnonane
2,2,4,5-Tetramethylnonane
2,2,4,6-Tetramethylnonane
2,2,4,7-Tetramethylnonane
2,2,4,8-Tetramethylnonane
2,2,5,5-Tetramethylnonane
2,2,5,6-Tetramethylnonane
2,2,5,7-Tetramethylnonane
2,2,5,8-Tetramethylnonane
2,2,6,6-Tetramethylnonane
2,2,6,7-Tetramethylnonane
2,2,6,8-Tetramethylnonane
2,2,7,7-Tetramethylnonane
2,2,7,8-Tetramethylnonane
2,2,8,8-Tetramethylnonane
2,3,3,4-Tetramethylnonane
2,3,3,5-Tetramethylnonane
2,3,3,6-Tetramethylnonane
2,3,3,7-Tetramethylnonane
2,3,3,8-Tetramethylnonane
2,3,4,4-Tetramethylnonane
2,3,4,5-Tetramethylnonane
2,3,4,6-Tetramethylnonane
2,3,4,7-Tetramethylnonane
2,3,4,8-Tetramethylnonane
2,3,5,5-Tetramethylnonane
2,3,5,6-Tetramethylnonane
2,3,5,7-Tetramethylnonane
2,3,5,8-Tetramethylnonane
2,3,6,6-Tetramethylnonane
2,3,6,7-Tetramethylnonane
2,3,6,8-Tetramethylnonane
2,3,7,7-Tetramethylnonane
2,3,7,8-Tetramethylnonane
2,4,4,5-Tetramethylnonane
2,4,4,6-Tetramethylnonane
2,4,4,7-Tetramethylnonane
2,4,4,8-Tetramethylnonane
2,4,5,5-Tetramethylnonane
2,4,5,6-Tetramethylnonane
2,4,5,7-Tetramethylnonane
2,4,5,8-Tetramethylnonane
2,4,6,6-Tetramethylnonane
2,4,6,7-Tetramethylnonane
2,4,6,8-Tetramethylnonane
2,4,7,7-Tetramethylnonane
2,5,5,6-Tetramethylnonane
2,5,5,7-Tetramethylnonane
2,5,5,8-Tetramethylnonane
2,5,6,6-Tetramethylnonane
2,5,6,7-Tetramethylnonane
2,5,7,7-Tetramethylnonane
2,6,6,7-Tetramethylnonane
2,6,7,7-Tetramethylnonane
3,3,4,4-Tetramethylnonane
3,3,4,5-Tetramethylnonane
3,3,4,6-Tetramethylnonane
3,3,4,7-Tetramethylnonane
3,3,5,5-Tetramethylnonane
3,3,5,6-Tetramethylnonane
3,3,5,7-Tetramethylnonane
3,3,6,6-Tetramethylnonane
3,3,6,7-Tetramethylnonane
3,3,7,7-Tetramethylnonane
3,4,4,5-Tetramethylnonane
3,4,4,6-Tetramethylnonane
3,4,4,7-Tetramethylnonane
3,4,5,5-Tetramethylnonane
3,4,5,6-Tetramethylnonane
3,4,5,7-Tetramethylnonane
3,4,6,6-Tetramethylnonane
3,4,6,7-Tetramethylnonane
3,5,5,6-Tetramethylnonane
3,5,5,7-Tetramethylnonane
3,5,6,6-Tetramethylnonane
4,4,5,5-Tetramethylnonane
4,4,5,6-Tetramethylnonane
4,4,6,6-Tetramethylnonane
4,5,5,6-Tetramethylnonane
Ethyl+Dimethyl
3-Ethyl-2,2-dimethylnonane
3-Ethyl-2,3-dimethylnonane
3-Ethyl-2,4-dimethylnonane
3-Ethyl-2,5-dimethylnonane
3-Ethyl-2,6-dimethylnonane
3-Ethyl-2,7-dimethylnonane
3-Ethyl-2,8-dimethylnonane
3-Ethyl-3,4-dimethylnonane
3-Ethyl-3,5-dimethylnonane
3-Ethyl-3,6-dimethylnonane
3-Ethyl-3,7-dimethylnonane
3-Ethyl-4,4-dimethylnonane
3-Ethyl-4,5-dimethylnonane
3-Ethyl-4,6-dimethylnonane
3-Ethyl-4,7-dimethylnonane
3-Ethyl-5,5-dimethylnonane
3-Ethyl-5,6-dimethylnonane
3-Ethyl-5,7-dimethylnonane
3-Ethyl-6,6-dimethylnonane
4-Ethyl-2,2-dimethylnonane
4-Ethyl-2,3-dimethylnonane
4-Ethyl-2,4-dimethylnonane
4-Ethyl-2,5-dimethylnonane
4-Ethyl-2,6-dimethylnonane
4-Ethyl-2,7-dimethylnonane
4-Ethyl-2,8-dimethylnonane
4-Ethyl-3,3-dimethylnonane
4-Ethyl-3,4-dimethylnonane
4-Ethyl-3,5-dimethylnonane
4-Ethyl-3,6-dimethylnonane
4-Ethyl-3,7-dimethylnonane
4-Ethyl-4,5-dimethylnonane
4-Ethyl-4,6-dimethylnonane
4-Ethyl-5,5-dimethylnonane
4-Ethyl-5,6-dimethylnonane
5-Ethyl-2,2-dimethylnonane
5-Ethyl-2,3-dimethylnonane
5-Ethyl-2,4-dimethylnonane
5-Ethyl-2,5-dimethylnonane
5-Ethyl-2,6-dimethylnonane
5-Ethyl-2,7-dimethylnonane
5-Ethyl-2,8-dimethylnonane
5-Ethyl-3,3-dimethylnonane
5-Ethyl-3,4-dimethylnonane
5-Ethyl-3,5-dimethylnonane
5-Ethyl-3,6-dimethylnonane
5-Ethyl-3,7-dimethylnonane
5-Ethyl-4,4-dimethylnonane
5-Ethyl-4,5-dimethylnonane
5-Ethyl-4,6-dimethylnonane
6-Ethyl-2,2-dimethylnonane
6-Ethyl-2,3-dimethylnonane
6-Ethyl-2,4-dimethylnonane
6-Ethyl-2,5-dimethylnonane
6-Ethyl-2,6-dimethylnonane
6-Ethyl-2,7-dimethylnonane
6-Ethyl-3,3-dimethylnonane
6-Ethyl-3,4-dimethylnonane
6-Ethyl-3,5-dimethylnonane
6-Ethyl-3,6-dimethylnonane
6-Ethyl-4,4-dimethylnonane
7-Ethyl-2,2-dimethylnonane
7-Ethyl-2,3-dimethylnonane
7-Ethyl-2,4-dimethylnonane
7-Ethyl-2,5-dimethylnonane
7-Ethyl-2,6-dimethylnonane
7-Ethyl-2,7-dimethylnonane
7-Ethyl-3,3-dimethylnonane
7-Ethyl-3,4-dimethylnonane
Diethyl
3,3-Diethylnonane
3,4-Diethylnonane
3,5-Diethylnonane
3,6-Diethylnonane
3,7-Diethylnonane
4,4-Diethylnonane
4,5-Diethylnonane
4,6-Diethylnonane
5,5-Diethylnonane
Methyl+Propyl
2-Methyl-4-propylnonane
3-Methyl-4-propylnonane
4-Methyl-4-propylnonane
5-Methyl-4-propylnonane
6-Methyl-4-propylnonane
2-Methyl-5-propylnonane
3-Methyl-5-propylnonane
4-Methyl-5-propylnonane
5-Methyl-5-propylnonane
2-Methyl-6-propylnonane
3-Methyl-6-propylnonane
2-Methyl-3-(1-methylethyl)nonane
2-Methyl-4-(1-methylethyl)nonane
3-Methyl-4-(1-methylethyl)nonane
4-Methyl-4-(1-methylethyl)nonane
5-Methyl-4-(1-methylethyl)nonane
6-Methyl-4-(1-methylethyl)nonane
2-Methyl-5-(1-methylethyl)nonane
3-Methyl-5-(1-methylethyl)nonane
4-Methyl-5-(1-methylethyl)nonane
5-Methyl-5-(1-methylethyl)nonane
2-Methyl-6-(1-methylethyl)nonane
3-Methyl-6-(1-methylethyl)nonane
Butyl
5-Butylnonane
5-(1-Methylpropyl)nonane (or 5-sec-Butylnonane)
5-(2-Methylpropyl)nonane (or 5-Isobutylnonane)
4-(1,1-Dimethylethyl)nonane (or 4-tert-Butylnonane)
5-(1,1-Dimethylethyl)nonane (or 5-tert-Butylnonane)
With octane backbone
Pentamethyl
2,2,3,3,4-Pentamethyloctane
2,2,3,3,5-Pentamethyloctane
2,2,3,3,6-Pentamethyloctane
2,2,3,3,7-Pentamethyloctane
2,2,3,4,4-Pentamethyloctane
2,2,3,4,5-Pentamethyloctane
2,2,3,4,6-Pentamethyloctane
2,2,3,4,7-Pentamethyloctane
2,2,3,5,5-Pentamethyloctane
2,2,3,5,6-Pentamethyloctane
2,2,3,5,7-Pentamethyloctane
2,2,3,6,6-Pentamethyloctane
2,2,3,6,7-Pentamethyloctane
2,2,3,7,7-Pentamethyloctane
2,2,4,4,5-Pentamethyloctane
2,2,4,4,6-Pentamethyloctane
2,2,4,4,7-Pentamethyloctane
2,2,4,5,5-Pentamethyloctane
2,2,4,5,6-Pentamethyloctane
2,2,4,5,7-Pentamethyloctane
2,2,4,6,6-Pentamethyloctane
2,2,4,6,7-Pentamethyloctane
2,2,4,7,7-Pentamethyloctane
2,2,5,5,6-Pentamethyloctane
2,2,5,5,7-Pentamethyloctane
2,2,5,6,6-Pentamethyloctane
2,2,5,6,7-Pentamethyloctane
2,2,6,6,7-Pentamethyloctane
2,3,3,4,4-Pentamethyloctane
2,3,3,4,5-Pentamethyloctane
2,3,3,4,6-Pentamethyloctane
2,3,3,4,7-Pentamethyloctane
2,3,3,5,5-Pentamethyloctane
2,3,3,5,6-Pentamethyloctane
2,3,3,5,7-Pentamethyloctane
2,3,3,6,6-Pentamethyloctane
2,3,3,6,7-Pentamethyloctane
2,3,4,4,5-Pentamethyloctane
2,3,4,4,6-Pentamethyloctane
2,3,4,4,7-Pentamethyloctane
2,3,4,5,5-Pentamethyloctane
2,3,4,5,6-Pentamethyloctane
2,3,4,5,7-Pentamethyloctane
2,3,4,6,6-Pentamethyloctane
2,3,4,6,7-Pentamethyloctane
2,3,5,5,6-Pentamethyloctane
2,3,5,5,7-Pentamethyloctane
2,3,5,6,6-Pentamethyloctane
2,4,4,5,5-Pentamethyloctane
2,4,4,5,6-Pentamethyloctane
2,4,4,5,7-Pentamethyloctane
2,4,4,6,6-Pentamethyloctane
2,4,5,5,6-Pentamethyloctane
2,4,5,6,6-Pentamethyloctane
2,5,5,6,6-Pentamethyloctane
3,3,4,4,5-Pentamethyloctane
3,3,4,4,6-Pentamethyloctane
3,3,4,5,5-Pentamethyloctane
3,3,4,5,6-Pentamethyloctane
3,3,4,6,6-Pentamethyloctane
3,3,5,5,6-Pentamethyloctane
3,4,4,5,5-Pentamethyloctane
3,4,4,5,6-Pentamethyloctane
Ethyl+Trimethyl
3-Ethyl-2,2,3-trimethyloctane
3-Ethyl-2,2,4-trimethyloctane
3-Ethyl-2,2,5-trimethyloctane
3-Ethyl-2,2,6-trimethyloctane
3-Ethyl-2,2,7-trimethyloctane
3-Ethyl-2,3,4-trimethyloctane
3-Ethyl-2,3,5-trimethyloctane
3-Ethyl-2,3,6-trimethyloctane
3-Ethyl-2,3,7-trimethyloctane
3-Ethyl-2,4,4-trimethyloctane
3-Ethyl-2,4,5-trimethyloctane
3-Ethyl-2,4,6-trimethyloctane
3-Ethyl-2,4,7-trimethyloctane
3-Ethyl-2,5,5-trimethyloctane
3-Ethyl-2,5,6-trimethyloctane
3-Ethyl-2,5,7-trimethyloctane
3-Ethyl-2,6,6-trimethyloctane
3-Ethyl-2,6,7-trimethyloctane
3-Ethyl-3,4,4-trimethyloctane
3-Ethyl-3,4,5-trimethyloctane
3-Ethyl-3,4,6-trimethyloctane
3-Ethyl-3,5,5-trimethyloctane
3-Ethyl-3,5,6-trimethyloctane
3-Ethyl-3,6,6-trimethyloctane
3-Ethyl-4,4,5-trimethyloctane
3-Ethyl-4,4,6-trimethyloctane
3-Ethyl-4,5,5-trimethyloctane
3-Ethyl-4,5,6-trimethyloctane
4-Ethyl-2,2,3-trimethyloctane
4-Ethyl-2,2,4-trimethyloctane
4-Ethyl-2,2,5-trimethyloctane
4-Ethyl-2,2,6-trimethyloctane
4-Ethyl-2,2,7-trimethyloctane
4-Ethyl-2,3,3-trimethyloctane
4-Ethyl-2,3,4-trimethyloctane
4-Ethyl-2,3,5-trimethyloctane
4-Ethyl-2,3,6-trimethyloctane
4-Ethyl-2,3,7-trimethyloctane
4-Ethyl-2,4,5-trimethyloctane
4-Ethyl-2,4,6-trimethyloctane
4-Ethyl-2,4,7-trimethyloctane
4-Ethyl-2,5,5-trimethyloctane
4-Ethyl-2,5,6-trimethyloctane
4-Ethyl-2,5,7-trimethyloctane
4-Ethyl-2,6,6-trimethyloctane
4-Ethyl-3,3,4-trimethyloctane
4-Ethyl-3,3,5-trimethyloctane
4-Ethyl-3,3,6-trimethyloctane
4-Ethyl-3,4,5-trimethyloctane
4-Ethyl-3,4,6-trimethyloctane
4-Ethyl-3,5,5-trimethyloctane
4-Ethyl-3,5,6-trimethyloctane
4-Ethyl-4,5,5-trimethyloctane
5-Ethyl-2,2,3-trimethyloctane
5-Ethyl-2,2,4-trimethyloctane
5-Ethyl-2,2,5-trimethyloctane
5-Ethyl-2,2,6-trimethyloctane
5-Ethyl-2,2,7-trimethyloctane
5-Ethyl-2,3,3-trimethyloctane
5-Ethyl-2,3,4-trimethyloctane
5-Ethyl-2,3,5-trimethyloctane
5-Ethyl-2,3,6-trimethyloctane
5-Ethyl-2,3,7-trimethyloctane
5-Ethyl-2,4,4-trimethyloctane
5-Ethyl-2,4,5-trimethyloctane
5-Ethyl-2,4,6-trimethyloctane
5-Ethyl-2,5,6-trimethyloctane
5-Ethyl-2,6,6-trimethyloctane
5-Ethyl-3,3,4-trimethyloctane
5-Ethyl-3,3,5-trimethyloctane
5-Ethyl-3,3,6-trimethyloctane
5-Ethyl-3,4,4-trimethyloctane
5-Ethyl-3,4,5-trimethyloctane
6-Ethyl-2,2,3-trimethyloctane
6-Ethyl-2,2,4-trimethyloctane
6-Ethyl-2,2,5-trimethyloctane
6-Ethyl-2,2,6-trimethyloctane
6-Ethyl-2,2,7-trimethyloctane
6-Ethyl-2,3,3-trimethyloctane
6-Ethyl-2,3,4-trimethyloctane
6-Ethyl-2,3,5-trimethyloctane
6-Ethyl-2,3,6-trimethyloctane
6-Ethyl-2,4,4-trimethyloctane
6-Ethyl-2,4,5-trimethyloctane
6-Ethyl-2,4,6-trimethyloctane
6-Ethyl-2,5,5-trimethyloctane
6-Ethyl-2,5,6-trimethyloctane
6-Ethyl-3,3,4-trimethyloctane
6-Ethyl-3,3,5-trimethyloctane
6-Ethyl-3,4,4-trimethyloctane
Diethyl+Methyl
3,3-Diethyl-2-methyloctane
3,3-Diethyl-4-methyloctane
3,3-Diethyl-5-methyloctane
3,3-Diethyl-6-methyloctane
3,4-Diethyl-2-methyloctane
3,4-Diethyl-3-methyloctane
3,4-Diethyl-4-methyloctane
3,4-Diethyl-5-methyloctane
3,4-Diethyl-6-methyloctane
3,5-Diethyl-2-methyloctane
3,5-Diethyl-3-methyloctane
3,5-Diethyl-4-methyloctane
3,5-Diethyl-5-methyloctane
3,6-Diethyl-2-methyloctane
3,6-Diethyl-3-methyloctane
3,6-Diethyl-4-methyloctane
4,4-Diethyl-2-methyloctane
4,4-Diethyl-3-methyloctane
4,4-Diethyl-5-methyloctane
4,5-Diethyl-2-methyloctane
4,5-Diethyl-3-methyloctane
4,5-Diethyl-4-methyloctane
4,6-Diethyl-2-methyloctane
4,6-Diethyl-3-methyloctane
5,5-Diethyl-2-methyloctane
5,5-Diethyl-3-methyloctane
5,6-Diethyl-2-methyloctane
6,6-Diethyl-2-methyloctane
Dimethyl+Propyl
2,2-Dimethyl-4-propyloctane
2,3-Dimethyl-4-propyloctane
2,4-Dimethyl-4-propyloctane
2,5-Dimethyl-4-propyloctane
2,6-Dimethyl-4-propyloctane
2,7-Dimethyl-4-propyloctane
3,3-Dimethyl-4-propyloctane
3,4-Dimethyl-4-propyloctane
3,5-Dimethyl-4-propyloctane
3,6-Dimethyl-4-propyloctane
4,5-Dimethyl-4-propyloctane
2,2-Dimethyl-5-propyloctane
2,3-Dimethyl-5-propyloctane
2,4-Dimethyl-5-propyloctane
2,5-Dimethyl-5-propyloctane
2,6-Dimethyl-5-propyloctane
3,3-Dimethyl-5-propyloctane
3,4-Dimethyl-5-propyloctane
3,5-Dimethyl-5-propyloctane
4,4-Dimethyl-5-propyloctane
2,2-Dimethyl-3-(1-methylethyl)octane
2,3-Dimethyl-3-(1-methylethyl)octane
2,4-Dimethyl-3-(1-methylethyl)octane
2,5-Dimethyl-3-(1-methylethyl)octane
2,6-Dimethyl-3-(1-methylethyl)octane
2,7-Dimethyl-3-(1-methylethyl)octane
2,2-Dimethyl-4-(1-methylethyl)octane
2,3-Dimethyl-4-(1-methylethyl)octane
2,4-Dimethyl-4-(1-methylethyl)octane
2,5-Dimethyl-4-(1-methylethyl)octane
2,6-Dimethyl-4-(1-methylethyl)octane
2,7-Dimethyl-4-(1-methylethyl)octane
3,3-Dimethyl-4-(1-methylethyl)octane
3,4-Dimethyl-4-(1-methylethyl)octane
3,5-Dimethyl-4-(1-methylethyl)octane
3,6-Dimethyl-4-(1-methylethyl)octane
4,5-Dimethyl-4-(1-methylethyl)octane
2,2-Dimethyl-5-(1-methylethyl)octane
2,3-Dimethyl-5-(1-methylethyl)octane
2,4-Dimethyl-5-(1-methylethyl)octane
2,5-Dimethyl-5-(1-methylethyl)octane
2,6-Dimethyl-5-(1-methylethyl)octane
3,3-Dimethyl-5-(1-methylethyl)octane
3,4-Dimethyl-5-(1-methylethyl)octane
3,5-Dimethyl-5-(1-methylethyl)octane
4,4-Dimethyl-5-(1-methylethyl)octane
Ethyl+Propyl
3-Ethyl-4-propyloctane
4-Ethyl-4-propyloctane
4-Ethyl-5-propyloctane
3-Ethyl-5-propyloctane
3-Ethyl-4-(1-methylethyl)octane
4-Ethyl-4-(1-methylethyl)octane
4-Ethyl-5-(1-methylethyl)octane
3-Ethyl-5-(1-methylethyl)octane
Butyl+Methyl
2-Methyl-4-(1-methylpropyl)octane
3-Methyl-4-(1-methylpropyl)octane
2-Methyl-4-(2-methylpropyl)octane
4-(1,1-Dimethylethyl)-2-methyloctane
4-(1,1-Dimethylethyl)-3-methyloctane
4-(1,1-Dimethylethyl)-4-methyloctane
4-(1,1-Dimethylethyl)-5-methyloctane
5-(1,1-Dimethylethyl)-2-methyloctane
5-(1,1-Dimethylethyl)-3-methyloctane
With heptane backbone
Hexamethyl
2,2,3,3,4,4-Hexamethylheptane
2,2,3,3,4,5-Hexamethylheptane
2,2,3,3,4,6-Hexamethylheptane
2,2,3,3,5,5-Hexamethylheptane
2,2,3,3,5,6-Hexamethylheptane
2,2,3,3,6,6-Hexamethylheptane
2,2,3,4,4,5-Hexamethylheptane
2,2,3,4,4,6-Hexamethylheptane
2,2,3,4,5,5-Hexamethylheptane
2,2,3,4,5,6-Hexamethylheptane
2,2,3,4,6,6-Hexamethylheptane
2,2,3,5,5,6-Hexamethylheptane
2,2,3,5,6,6-Hexamethylheptane
2,2,4,4,5,5-Hexamethylheptane
2,2,4,4,5,6-Hexamethylheptane
2,2,4,4,6,6-Hexamethylheptane
2,2,4,5,5,6-Hexamethylheptane
2,3,3,4,4,5-Hexamethylheptane
2,3,3,4,4,6-Hexamethylheptane
2,3,3,4,5,5-Hexamethylheptane
2,3,3,4,5,6-Hexamethylheptane
2,3,3,5,5,6-Hexamethylheptane
2,3,4,4,5,5-Hexamethylheptane
2,3,4,4,5,6-Hexamethylheptane
3,3,4,4,5,5-Hexamethylheptane
Ethyl+Tetramethyl
3-Ethyl-2,2,3,4-tetramethylheptane
3-Ethyl-2,2,3,5-tetramethylheptane
3-Ethyl-2,2,3,6-tetramethylheptane
3-Ethyl-2,2,4,4-tetramethylheptane
3-Ethyl-2,2,4,5-tetramethylheptane
3-Ethyl-2,2,4,6-tetramethylheptane
3-Ethyl-2,2,5,5-tetramethylheptane
3-Ethyl-2,2,5,6-tetramethylheptane
3-Ethyl-2,2,6,6-tetramethylheptane
3-Ethyl-2,3,4,4-tetramethylheptane
3-Ethyl-2,3,4,5-tetramethylheptane
3-Ethyl-2,3,4,6-tetramethylheptane
3-Ethyl-2,3,5,5-tetramethylheptane
3-Ethyl-2,3,5,6-tetramethylheptane
3-Ethyl-2,4,4,5-tetramethylheptane
3-Ethyl-2,4,4,6-tetramethylheptane
3-Ethyl-2,4,5,5-tetramethylheptane
3-Ethyl-2,4,5,6-tetramethylheptane
3-Ethyl-3,4,4,5-tetramethylheptane
3-Ethyl-3,4,5,5-tetramethylheptane
4-Ethyl-2,2,3,3-tetramethylheptane
4-Ethyl-2,2,3,4-tetramethylheptane
4-Ethyl-2,2,3,5-tetramethylheptane
4-Ethyl-2,2,3,6-tetramethylheptane
4-Ethyl-2,2,4,5-tetramethylheptane
4-Ethyl-2,2,4,6-tetramethylheptane
4-Ethyl-2,2,5,5-tetramethylheptane
4-Ethyl-2,2,5,6-tetramethylheptane
4-Ethyl-2,2,6,6-tetramethylheptane
4-Ethyl-2,3,3,4-tetramethylheptane
4-Ethyl-2,3,3,5-tetramethylheptane
4-Ethyl-2,3,3,6-tetramethylheptane
4-Ethyl-2,3,4,5-tetramethylheptane
4-Ethyl-2,3,4,6-tetramethylheptane
4-Ethyl-2,3,5,5-tetramethylheptane
4-Ethyl-2,3,5,6-tetramethylheptane
4-Ethyl-2,4,5,5-tetramethylheptane
4-Ethyl-3,3,4,5-tetramethylheptane
4-Ethyl-3,3,5,5-tetramethylheptane
5-Ethyl-2,2,3,3-tetramethylheptane
5-Ethyl-2,2,3,4-tetramethylheptane
5-Ethyl-2,2,3,5-tetramethylheptane
5-Ethyl-2,2,3,6-tetramethylheptane
5-Ethyl-2,2,4,4-tetramethylheptane
5-Ethyl-2,2,4,5-tetramethylheptane
5-Ethyl-2,2,4,6-tetramethylheptane
5-Ethyl-2,2,5,6-tetramethylheptane
5-Ethyl-2,3,3,4-tetramethylheptane
5-Ethyl-2,3,3,5-tetramethylheptane
5-Ethyl-2,3,3,6-tetramethylheptane
5-Ethyl-2,3,4,4-tetramethylheptane
5-Ethyl-2,3,4,5-tetramethylheptane
5-Ethyl-2,4,4,5-tetramethylheptane
5-Ethyl-3,3,4,4-tetramethylheptane
Diethyl+Dimethyl
3,3-Diethyl-2,2-dimethylheptane
3,3-Diethyl-2,4-dimethylheptane
3,3-Diethyl-2,5-dimethylheptane
3,3-Diethyl-2,6-dimethylheptane
3,3-Diethyl-4,4-dimethylheptane
3,3-Diethyl-4,5-dimethylheptane
3,3-Diethyl-5,5-dimethylheptane
3,4-Diethyl-2,2-dimethylheptane
3,4-Diethyl-2,3-dimethylheptane
3,4-Diethyl-2,4-dimethylheptane
3,4-Diethyl-2,5-dimethylheptane
3,4-Diethyl-2,6-dimethylheptane
3,4-Diethyl-3,4-dimethylheptane
3,4-Diethyl-3,5-dimethylheptane
3,4-Diethyl-4,5-dimethylheptane
3,5-Diethyl-2,2-dimethylheptane
3,5-Diethyl-2,3-dimethylheptane
3,5-Diethyl-2,4-dimethylheptane
3,5-Diethyl-2,5-dimethylheptane
3,5-Diethyl-2,6-dimethylheptane
3,5-Diethyl-3,4-dimethylheptane
3,5-Diethyl-3,5-dimethylheptane
3,5-Diethyl-4,4-dimethylheptane
4,4-Diethyl-2,2-dimethylheptane
4,4-Diethyl-2,3-dimethylheptane
4,4-Diethyl-2,5-dimethylheptane
4,4-Diethyl-2,6-dimethylheptane
4,4-Diethyl-3,3-dimethylheptane
4,4-Diethyl-3,5-dimethylheptane
4,5-Diethyl-2,2-dimethylheptane
4,5-Diethyl-2,3-dimethylheptane
4,5-Diethyl-2,4-dimethylheptane
4,5-Diethyl-2,5-dimethylheptane
4,5-Diethyl-3,3-dimethylheptane
5,5-Diethyl-2,2-dimethylheptane
5,5-Diethyl-2,3-dimethylheptane
5,5-Diethyl-2,4-dimethylheptane
Triethyl
3,3,4-Triethylheptane
3,3,5-Triethylheptane
3,4,4-Triethylheptane
3,4,5-Triethylheptane
Trimethyl+Propyl
2,2,3-Trimethyl-4-propylheptane
2,2,4-Trimethyl-4-propylheptane
2,2,5-Trimethyl-4-propylheptane
2,2,6-Trimethyl-4-propylheptane
2,3,3-Trimethyl-4-propylheptane
2,3,4-Trimethyl-4-propylheptane
2,3,5-Trimethyl-4-propylheptane
2,3,6-Trimethyl-4-propylheptane
2,4,5-Trimethyl-4-propylheptane
2,4,6-Trimethyl-4-propylheptane
2,5,5-Trimethyl-4-propylheptane
3,3,4-Trimethyl-4-propylheptane
3,3,5-Trimethyl-4-propylheptane
3,4,5-Trimethyl-4-propylheptane
2,2,3-Trimethyl-3-(1-methylethyl)heptane
2,2,4-Trimethyl-3-(1-methylethyl)heptane
2,2,5-Trimethyl-3-(1-methylethyl)heptane
2,2,6-Trimethyl-3-(1-methylethyl)heptane
2,3,4-Trimethyl-3-(1-methylethyl)heptane
2,3,5-Trimethyl-3-(1-methylethyl)heptane
2,3,6-Trimethyl-3-(1-methylethyl)heptane
2,4,4-Trimethyl-3-(1-methylethyl)heptane
2,4,5-Trimethyl-3-(1-methylethyl)heptane
2,4,6-Trimethyl-3-(1-methylethyl)heptane
2,5,5-Trimethyl-3-(1-methylethyl)heptane
2,5,6-Trimethyl-3-(1-methylethyl)heptane
2,2,3-Trimethyl-4-(1-methylethyl)heptane
2,2,4-Trimethyl-4-(1-methylethyl)heptane
2,2,5-Trimethyl-4-(1-methylethyl)heptane
2,2,6-Trimethyl-4-(1-methylethyl)heptane
2,3,3-Trimethyl-4-(1-methylethyl)heptane
2,3,4-Trimethyl-4-(1-methylethyl)heptane
2,3,5-Trimethyl-4-(1-methylethyl)heptane
2,3,6-Trimethyl-4-(1-methylethyl)heptane
2,4,5-Trimethyl-4-(1-methylethyl)heptane
2,4,6-Trimethyl-4-(1-methylethyl)heptane
2,5,5-Trimethyl-4-(1-methylethyl)heptane
3,3,4-Trimethyl-4-(1-methylethyl)heptane
3,3,5-Trimethyl-4-(1-methylethyl)heptane
3,4,5-Trimethyl-4-(1-methylethyl)heptane
2,2,6-Trimethyl-5-(1-methylethyl)heptane
Ethyl+Methyl+Propyl
3-Ethyl-2-methyl-4-propylheptane
3-Ethyl-3-methyl-4-propylheptane
3-Ethyl-4-methyl-4-propylheptane
3-Ethyl-5-methyl-4-propylheptane
4-Ethyl-2-methyl-4-propylheptane
4-Ethyl-3-methyl-4-propylheptane
5-Ethyl-2-methyl-4-propylheptane
3-Ethyl-2-methyl-3-(1-methylethyl)heptane
4-Ethyl-2-methyl-3-(1-methylethyl)heptane
5-Ethyl-2-methyl-3-(1-methylethyl)heptane
3-Ethyl-2-methyl-4-(1-methylethyl)heptane
3-Ethyl-3-methyl-4-(1-methylethyl)heptane
3-Ethyl-4-methyl-4-(1-methylethyl)heptane
3-Ethyl-5-methyl-4-(1-methylethyl)heptane
4-Ethyl-2-methyl-4-(1-methylethyl)heptane
4-Ethyl-3-methyl-4-(1-methylethyl)heptane
5-Ethyl-2-methyl-4-(1-methylethyl)heptane
Dipropyl
4,4-Dipropylheptane
4-(1-Methylethyl)-4-propylheptane
4,4-Bis(1-methylethyl)heptane
Dimethyl+Butyl
2,5-Dimethyl-4-(1-methylpropyl)heptane
2,6-Dimethyl-4-(1-methylpropyl)heptane
3,5-Dimethyl-4-(1-methylpropyl)heptane
2,6-Dimethyl-4-(2-methylpropyl)heptane
3-(1,1-Dimethylethyl)-2,2-dimethylheptane
4-(1,1-Dimethylethyl)-2,2-dimethylheptane
4-(1,1-Dimethylethyl)-2,3-dimethylheptane
4-(1,1-Dimethylethyl)-2,4-dimethylheptane
4-(1,1-Dimethylethyl)-2,5-dimethylheptane
4-(1,1-Dimethylethyl)-2,6-dimethylheptane
4-(1,1-Dimethylethyl)-3,3-dimethylheptane
4-(1,1-Dimethylethyl)-3,4-dimethylheptane
4-(1,1-Dimethylethyl)-3,5-dimethylheptane
Ethyl+Butyl
4-(1,1-Dimethylethyl)-3-ethylheptane
4-(1,1-Dimethylethyl)-4-ethylheptane
With hexane backbone
Heptamethyl
2,2,3,3,4,4,5-Heptamethylhexane
2,2,3,3,4,5,5-Heptamethylhexane
Ethyl+Pentamethyl
3-Ethyl-2,2,3,4,4-pentamethylhexane
3-Ethyl-2,2,3,4,5-pentamethylhexane
3-Ethyl-2,2,3,5,5-pentamethylhexane
3-Ethyl-2,2,4,4,5-pentamethylhexane
3-Ethyl-2,2,4,5,5-pentamethylhexane
3-Ethyl-2,3,4,4,5-pentamethylhexane
4-Ethyl-2,2,3,3,4-pentamethylhexane
4-Ethyl-2,2,3,3,5-pentamethylhexane
4-Ethyl-2,2,3,4,5-pentamethylhexane
Diethyl+Trimethyl
3,3-Diethyl-2,2,4-trimethylhexane
3,3-Diethyl-2,2,5-trimethylhexane
3,3-Diethyl-2,4,4-trimethylhexane
3,3-Diethyl-2,4,5-trimethylhexane
3,4-Diethyl-2,2,3-trimethylhexane
3,4-Diethyl-2,2,4-trimethylhexane
3,4-Diethyl-2,2,5-trimethylhexane
3,4-Diethyl-2,3,4-trimethylhexane
3,4-Diethyl-2,3,5-trimethylhexane
4,4-Diethyl-2,2,3-trimethylhexane
4,4-Diethyl-2,2,5-trimethylhexane
4,4-Diethyl-2,3,3-trimethylhexane
Triethyl+Methyl
3,3,4-Triethyl-2-methylhexane
3,3,4-Triethyl-4-methylhexane
3,4,4-Triethyl-2-methylhexane
Tetramethyl+Propyl
2,2,3,4-Tetramethyl-3-(1-methylethyl)hexane
2,2,3,5-Tetramethyl-3-(1-methylethyl)hexane
2,2,4,4-Tetramethyl-3-(1-methylethyl)hexane
2,2,4,5-Tetramethyl-3-(1-methylethyl)hexane
2,2,5,5-Tetramethyl-3-(1-methylethyl)hexane
2,3,4,4-Tetramethyl-3-(1-methylethyl)hexane
2,3,4,5-Tetramethyl-3-(1-methylethyl)hexane
2,2,3,5-Tetramethyl-4-(1-methylethyl)hexane
2,2,4,5-Tetramethyl-4-(1-methylethyl)hexane
2,3,3,5-Tetramethyl-4-(1-methylethyl)hexane
Ethyl+Dimethyl+Propyl
3-Ethyl-2,2-dimethyl-3-(1-methylethyl)hexane
3-Ethyl-2,4-dimethyl-3-(1-methylethyl)hexane
3-Ethyl-2,5-dimethyl-3-(1-methylethyl)hexane
4-Ethyl-2,2-dimethyl-3-(1-methylethyl)hexane
4-Ethyl-2,3-dimethyl-3-(1-methylethyl)hexane
4-Ethyl-2,4-dimethyl-3-(1-methylethyl)hexane
4-Ethyl-2,5-dimethyl-3-(1-methylethyl)hexane
Methyl+bis(Propyl)
2-Methyl-3,3-bis(1-methylethyl)hexane
Butyl+Trimethyl
3-(1,1-Dimethylethyl)-2,2,3-trimethylhexane
3-(1,1-Dimethylethyl)-2,2,4-trimethylhexane
3-(1,1-Dimethylethyl)-2,2,5-trimethylhexane
With pentane backbone
Diethyl+Tetramethyl
3,3-Diethyl-2,2,4,4-tetramethylpentane
Pentamethyl+Propyl
2,2,3,4,4-Pentamethyl-3-(1-methylethyl)pentane
Ethyl+Trimethyl+Propyl
3-Ethyl-2,2,4-trimethyl-3-(1-methylethyl)pentane
Dimethyl+bis(Propyl)
2,4-Dimethyl-3,3-bis(1-methylethyl)pentane
Butyl+Tetramethyl
3-(1,1-Dimethylethyl)-2,2,4,4-tetramethylpentane
References
Lists of isomers of alkanes
Isomerism
Hydrocarbons | List of isomers of tridecane | [
"Chemistry"
] | 13,117 | [
"Hydrocarbons",
"Stereochemistry",
"Organic compounds",
"Lists of isomers of alkanes",
"Isomerism"
] |
48,673,404 | https://en.wikipedia.org/wiki/Fort%20Lauvallieres | The , renamed Fort Lauvallière after 1919, is a military installation near Metz. It is part of the second fortified belt of forts of Metz.
Historical context
While it was German territory, Metz's garrison grew from 15,000–20,000 men after the Franco-Prussian War to more than 25,000 at the start of World War I, gradually becoming the premier stronghold of the German Reich. Built in the early 20th century, the infantry works and barracks of Lauvallière completed the second fortified belt of Metz, composed of Festen Wagner (1904–1912), Crown Prince (1899–1905), Leipzig (1907–1912), Empress (1899–1905), Lorraine (1899–1905), Freiherr von der Goltz (1907–1916), Haeseler (1899–1905), Prince Regent Luitpold (1907–1914) and I-Werke Belle-Croix (1908–1914). This fort was part of a wider program of fortifications called the "Moselstellung", encompassing fortresses scattered between Thionville and Metz in the Moselle valley. The aim was to protect Germany against a French campaign to retake Alsace-Lorraine from the German Empire.
Overall design
The fortification system was designed to accommodate the advances in artillery made since the end of the 19th century. Based on new defensive concepts such as dispersal and concealment, the fortified group was intended, in case of attack, to be an impassable barrier for French forces. From 1899, the Schlieffen plan of the German General Staff conceived the fortifications of the Moselstellung, between Metz and Thionville, as a lock blocking any advance of French troops in case of conflict. This concept of a fortified line on the Moselle was a significant innovation compared to the Séré de Rivières system developed by the French, and it later inspired the engineers of the Maginot Line.
Construction and facilities
Built between 1908 and 1914 in the northeast of Metz in Moselle, the infantry works occupy a plot of 47 ha. It is located in the communes of Coincy, Nouilly and Vantoux near the intersection of the Saarbrücken and Saarlouis roads. The fort is named after the hamlet of Lauvallières, located two kilometers to the east in Nouilly. A cross carved on the building reads "Belle-Croix 1908-1912".
The fort could hold two hundred men and had:
three infantry observatories with armored fixed turrets and thirteen gatehouse observatories;
a telephone to central command;
central heating;
two hundred meters of underground galleries;
four 22-horsepower diesel engines driving four dynamos (14.5 kW).
Successive assignments
From 1890, garrison relief was provided by fortress troops of the XVI Corps stationed at Metz and Thionville. In November 1918, the fort was occupied by the French army. In early September 1944, at the beginning of the Battle of Metz, the German command integrated the fort into the defensive system set up around Metz.
Notes and references
See also
Forts of Metz
Fortifications of Metz
Battle of Metz
Fortifications of Metz
World War II defensive lines | Fort Lauvallieres | [
"Engineering"
] | 651 | [
"World War II defensive lines",
"Fortification lines"
] |
34,187,558 | https://en.wikipedia.org/wiki/Rotating%20unbalance | Rotating unbalance is the uneven distribution of mass around an axis of rotation. A rotating mass, or rotor, is said to be out of balance when its center of mass (inertia axis) is out of alignment with the center of rotation (geometric axis). Unbalance causes a moment which gives the rotor a wobbling movement characteristic of vibration of rotating structures.
Causes of imbalance
Routine manufacturing processes can cause stress on metal components. Without stress relief, the rotor will distort itself to adjust.
Thermal distortion often occurs with parts exposed to increased temperatures. Metals expand when heated, so exposure to warmer temperatures can cause either the entire piece of machinery or just certain parts to expand, causing distortion.
Rotating parts involved in material handling almost always accumulate buildup. Moreover, when exposed to oil, these parts can be easily distorted. Without adhering to a maintenance routine or implementing an inspection process, oil can seep into the parts, causing unbalance.
In some cases, vibration is desired, and a rotor is deliberately unbalanced to serve as a vibrator. An example of this is an aircraft's stick shaker.
Effects of unbalance
Vibration
Noise
Decreased life of bearings
Unsafe work conditions
Reduced machine life
Increased maintenance
Units used to express unbalance
In terms of the mass eccentricity : μm, mm, cm, ...; μin, mil, in, ...
In terms of mass at a given radius: μg, mg, g, kg, ...; moz, oz, ...
In terms of mass × radius moment (mR): mg-mm, g-mm, mg-cm, g-cm, kg-mm, ...; oz-in, g-in, ...
Types of balance
Static balance
A static balance (sometimes called a force balance) occurs when the inertial axis of a rotating mass is displaced from and parallel to the axis of rotation. Static unbalances can occur more frequently in disk-shaped rotors because the thin geometric profile of the disk allows for an uneven distribution of mass with an inertial axis that is nearly parallel to the axis of rotation. Only one plane receives balance correction.
U = m × r, where U = unbalance, m = mass, r = distance between the unbalance and the centre of the object
Couple balance
A couple balance occurs when a rotating mass has two equal unbalance forces that are situated 180° opposite each other. A system that is statically balanced may still have a couple unbalance.
Couple unbalance occurs frequently in elongated cylindrical rotors.
U = m × r × d, where d = distance between the two unbalance forces along the rotation axis.
Dynamic balance
In rotation, an unbalance in which the mass/inertia axis does not intersect the shaft axis is called dynamic unbalance. Dynamic unbalance is a combination of static and couple unbalance. It occurs in virtually all rotors and is the most common kind of unbalance. It can be corrected by adjusting the weight on at least two planes.
How to correct or compensate balance
Mass addition.
Mass removal.
Mass shifting.
Mass centering.
The measurement of existing vibration and calculation of the change of mass required is typically carried out using some form of balancing machine.
Grades
ISO 21940 classifies vibration in terms of G codes. Unfortunately, these are theoretical values assuming the rotor is spinning in free space, so they do not relate directly to actual operating conditions. For rotors of the same type, the permissible residual specific unbalance value eper varies inversely with the speed of the rotor.
eper × ω = Constant,
where ω = angular velocity (radians per second) eper = permissible residual specific unbalance
This constant is the quality grade G. Balance grades are used to specify the allowable residual unbalance for rotating machinery. The ISO 1940 standard defines balance grades for different classes of machinery. A rotor balanced to G2.5 will vibrate at 2.5 mm/s at operating speed if rotating in a suspended state with no external influences.
Uper = (9.54 × G number × mass)/rpm
where Uper = balance tolerance (or residual unbalance)
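The tolerance formula can be applied directly. The sketch below implements Uper = (9.54 × G × mass)/rpm as written (function name and example values are illustrative; with G in mm/s and mass in kg, Uper comes out in kg·mm):

```python
def balance_tolerance(g_grade, mass, rpm):
    """Permissible residual unbalance, Uper = (9.54 * G * mass) / rpm.

    g_grade -- ISO balance quality grade G in mm/s (e.g. 2.5 for G2.5)
    mass    -- rotor mass (kg)
    rpm     -- service speed (revolutions per minute)
    """
    return 9.54 * g_grade * mass / rpm

# e.g. a 100 kg rotor balanced to G6.3 running at 3000 rpm
u_per = balance_tolerance(6.3, 100, 3000)
```

Note how doubling the service speed halves the allowable residual unbalance, consistent with eper × ω = constant.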
Important formulas
F = U × ω² = m × r × ω², where F = force due to unbalance, U = unbalance, ω = angular frequency, e = specific unbalance, m = mass, and r = distance between the unbalance and the axis of rotation of the object.
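The centrifugal force F = m × r × ω² grows with the square of the rotational speed; a small sketch makes this concrete (function name and example values are illustrative):

```python
import math

def unbalance_force(mass, radius, rpm):
    """Centrifugal force F = m * r * omega**2 from a point unbalance.

    mass   -- unbalance mass (kg)
    radius -- distance of the unbalance from the rotation axis (m)
    rpm    -- rotational speed (revolutions per minute)
    """
    omega = 2.0 * math.pi * rpm / 60.0  # angular frequency, rad/s
    return mass * radius * omega ** 2

# a 5 g unbalance at a 10 cm radius, spinning at 3000 rpm
force = unbalance_force(0.005, 0.10, 3000)
```

At 3000 rpm this 5 g unbalance already pulls with roughly 49 N, and the force quadruples with every doubling of speed, which is why high-speed rotors demand tight balance grades.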
References
Rotation
Mechanical vibrations | Rotating unbalance | [
"Physics",
"Engineering"
] | 899 | [
"Structural engineering",
"Physical phenomena",
"Classical mechanics",
"Rotation",
"Motion (physics)",
"Mechanics",
"Mechanical vibrations"
] |
55,410,471 | https://en.wikipedia.org/wiki/CRISPR%20activation | CRISPR activation (CRISPRa) is a gene regulation technique that utilizes an engineered form of the CRISPR-Cas9 system to enhance the expression of specific genes without altering the underlying DNA sequence. Unlike traditional CRISPR-Cas9, which introduces double-strand breaks to edit genes, CRISPRa employs a modified, catalytically inactive Cas9 (dCas9) fused with transcriptional activators to target promoter or enhancer regions, thereby boosting gene transcription. This method allows for precise control of gene expression, making it a valuable tool for studying gene function, creating gene regulatory networks, and developing potential therapeutic interventions for a variety of diseases.
Like for CRISPR interference, the CRISPR effector is guided to the target by a complementary guide RNA. However, CRISPR activation systems are fused to transcriptional activators to increase expression of genes of interest. Such systems are usable for many purposes including but not limited to, genetic screens and overexpression of proteins of interest.
The most commonly-used effector is based on Cas9 (from Type II systems), but other effectors like Cas12a (Type V) have been used as well.
Components
dCas9
Cas9 Endonuclease Dead, also known as dead Cas9 or dCas9, is a mutant form of Cas9 whose endonuclease activity is removed through point mutations in its endonuclease domains. Similar to its unmutated form, dCas9 is used in CRISPR systems along with gRNAs to target specific genes or nucleotides complementary to the gRNA with PAM sequences that allow Cas9 to bind. Cas9 ordinarily has 2 endonuclease domains called the RuvC and HNH domains. The point mutations D10A and H840A change 2 important residues for endonuclease activity that ultimately results in its deactivation. Although dCas9 lacks endonuclease activity, it is still capable of binding to its guide RNA and the DNA strand that is being targeted because such binding is managed by other domains. This alone is often enough to attenuate if not outright block transcription of the targeted gene if the gRNA positions dCas9 in a way that prevents transcriptional factors and RNA polymerase from accessing the DNA. However, this ability to bind DNA can also be exploited for activation since dCas9 has modifiable regions, typically the N and C terminus of the protein, that can be used to attach transcriptional activators.
Guide RNA
See: Guide RNA, CRISPR
A small guide RNA (sgRNA), or gRNA is an RNA with around 20 nucleotides used to direct Cas9 or dCas9 to their targets. gRNAs contain two major regions of importance for CRISPR systems: the scaffold and spacer regions. The spacer region has nucleotides that are complementary to those found on the target genes, often in the promoter region. The scaffold region is responsible for formation of a complex with (d)Cas9. Together, they bind (d)Cas9 and direct it to the gene(s) of interest. Since the spacer region of a gRNA can be modified for any potential sequence, they give CRISPR systems much more flexibility as any genes and nucleotides with a sequence complementary to the spacer region can become possible targets.
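The spacer-matching behaviour described above can be illustrated with a toy target-site search. This is a deliberately simplified sketch (hypothetical function, forward strand only, exact base matching), assuming the common SpCas9 NGG PAM immediately 3′ of the protospacer:

```python
def find_target_sites(sequence, spacer):
    """Return 0-based indices where the spacer matches a protospacer
    immediately followed by an NGG PAM (SpCas9-style).

    Simplifications: forward strand only, exact base matching.
    """
    hits = []
    k = len(spacer)
    # leave room for the 3-nt PAM after the protospacer
    for i in range(len(sequence) - k - 2):
        pam = sequence[i + k:i + k + 3]
        if sequence[i:i + k] == spacer and pam[1:] == "GG":
            hits.append(i)
    return hits

spacer = "GACGTTACGGATCCATAGCA"            # made-up 20-nt spacer
target = "TTTT" + spacer + "TGG" + "AAAA"  # protospacer + TGG PAM
sites = find_target_sites(target, spacer)
```

A real guide-design tool would also scan the reverse complement and tolerate mismatches; the point here is only the spacer → protospacer + PAM targeting rule.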
Transcriptional activators
See: Transcriptional Activator, Transcription Factor
Transcriptional Activators are protein domains or whole proteins linked to dCas9 or sgRNAs that assist in the recruitment of important co-factors as well as RNA Polymerase for transcription of the gene(s) targeted by the system. In order for a protein to be made from the gene that encodes it, RNA polymerase must make RNA from the DNA template of the gene during a process called transcription. Transcriptional activators have a DNA binding domain and a domain for activation of transcription. The activation domain can recruit general transcription factors or RNA polymerase to the gene sequence. Activation domains can also function by facilitating transcription by stalled RNA polymerases, and in eukaryotes can act to move nucleosomes on the DNA or modify histones to increase gene expression. These activators can be introduced into the system through attachment to dCas9 or to the sgRNA. Some researchers have noted that the extent of transcriptional upregulation can be modulated by using multiple sites for activator attachment in one experiment and by using different variations and combinations of activators at once in a given experiment or sample.
Expression system
An expression system is required for the introduction of the gRNAs and (d)Cas9 proteins into the cells of interest. Typically employed options include but are not limited to plasmids and viral vectors such as adeno-associated virus (AAV) vector or lentivirus vector.
Specific activation systems
VP64-p65-Rta
The VP64-p65-Rta, or VPR, dCas9 activator was created by modifying an existing dCas9 activator in which a VP64 transcriptional activator is joined to the C terminus of dCas9. In the dCas9-VPR protein, the transcription factors p65 and Rta are added to the C terminus of dCas9-VP64. Therefore, all three transcription factors are targeted to the same gene. The use of three transcription factors, as opposed to VP64 alone, results in increased expression of targeted genes. When different genes were targeted by dCas9, they all showed significantly greater expression with dCas9-VPR than with dCas9-VP64. It has also been demonstrated that dCas9-VPR can be used to increase expression of multiple genes within the same cell by putting multiple sgRNAs into the same cell.
dCas9-VPR has been used to activate the neurogenin 2 and neurogenic differentiation 1 genes, resulting in differentiation of induced pluripotent stem cells into induced neurons. A study comparing dCas9 activators found that the VPR, SAM, and Suntag activators worked best with dCas9 to increase gene expression in a variety of fruit fly, mouse, and human cell types.
Synergistic activation mediator
To overcome the limitations of the dCas9-VP64 gene activation system, the dCas9-SAM system was developed to incorporate multiple transcriptional factors. Utilizing MS2, p65, and HSF1 proteins, the dCas9-SAM system recruits various transcriptional factors that work synergistically to activate the gene of interest.
In order to assemble different transcriptional activators, the dCas9-SAM system uses a modified single guide RNA (sgRNA) that has binding sites for the MS2 protein. Hairpin aptamers are attached to the tetra loop and the stem loop 2 of the sgRNA to become binding sites for dimerized MS2 bacteriophage coat proteins. As the hairpins are exposed outside of the dCas9-sgRNA complex, other transcriptional factors can bind to the MS2 protein without disrupting the dCas9-sgRNA complex. Thus, the MS2 protein is engineered to include p65 and HSF1 proteins. The MS2-p65-HSF1 fusion protein interacts with the dCas9-VP64 to recruit more transcriptional factors onto the promoter of the target genes.
Employing the dCas9-SAM system, Zhang et al. (2015) successfully reactivated the latent HIV gene, overexpressing viral proteins in the HIV host cells. The viral proteins were overexpressed substantially enough to trigger apoptosis of HIV-1 latent cells due to the toxicity of the viral proteins. In another dCas9-SAM system experiment, Konermann et al. (2015) found genes in melanoma cells that confer resistance to a BRAF inhibitor by activating candidate genes via the dCas9 system. Thus, the dCas9-SAM system can further be employed to activate latent genes, develop gene therapies, and discover new genes.
SunTag
The SunTag activator system uses the dCas9 protein, which is modified to be linked with the SunTag. The SunTag is a repeating polypeptide array that can recruit multiple copies of antibodies. Through attaching transcriptional factors on the antibodies, the SunTag dCas9 activating complex amplifies its recruitment of transcriptional factors. In order to guide the dCas9 protein to its target gene, the dCas9 SunTag system uses sgRNA.
Tanenbaum et al. (2014) are credited with creating the dCas9-SunTag system. For the antibodies, they employed GCN4 antibodies bound to the transcriptional factor VP64. In order to transport the antibodies to the nuclei of the cells, they attached an NLS tag, and sfGFP was used to confirm nuclear localization by visualization. The resulting GCN4-sfGFP-NLS-VP64 protein was thus developed to interact with the dCas9-SunTag system. The antibodies successfully bound to the SunTag polypeptides and activated the target CXCR4 gene in K562 cell lines. Compared with the dCas9-VP64 activation complex, they were able to increase CXCR4 gene expression 5–25-fold in K562 cell lines. Not only was CXCR4 overexpressed to a greater degree, but the CXCR4 proteins were also functional, as shown in a transwell migration assay. Thus, the dCas9-SunTag system can be used to activate genes that are latently present, such as viral genes.
Applications
The dCas9 activation system allows a desired gene or multiple genes in the same cell to be expressed. It is possible to study genes involved in a certain process using a genome wide screen that involves activating expression of genes. Examining which sgRNAs yield a phenotype suggests which genes are involved in a specific pathway. The dCas9 activation system can be used to control exactly which cells are activated and at what time activation occurs. dCas9 constructs have been made that turn on a dCas9-activator fusion protein in the presence of light or chemicals. Cells can also be reprogrammed or differentiated from one cell type into another by increasing the expression of certain genes important for the formation or maintenance of a cell type.
Greater control over gene expression
One research group used a system in which dCas9 was fused to a particular domain, CIB1. When blue light is shined on the cell, the cryptochrome 2 (Cry2) domain binds to CIB1. The Cry2 domain is fused to a transcriptional activator, so blue light targets the activator to the spot where dCas9 is bound. The use of light allows a great deal of control over when the targeted gene is activated. Removing the light from the cell results in only dCas9 remaining at the target gene, so expression is not increased. In this way, the system is reversible. A similar system was developed using chemical control. In this system, dCas9 recruits an MS2 fusion protein that contains the domain FKBP. In the presence of the chemical RAP, an FRB domain fused to a chromatin-modifying complex binds to FKBP. Whenever RAP is added to the cells, a specific chromatin modifier complex can be targeted to the gene. That allows scientists to examine how specific chromatin modifications affect the expression of a gene. The dCas9-VPR system is used as an activator by targeting it to the promoter of a gene upstream of the coding region. A study used various sgRNAs to target different portions of the gene, finding that the dCas9-VPR activator can act as an activator or a repressor, depending on where it binds. In a cell, sgRNAs targeting the promoter could allow dCas9-VPR to increase expression, while sgRNAs targeting the coding region of the gene result in dCas9-VPR decreasing expression.
Genome wide activation
The versatility of sgRNAs allows dCas9 activators to increase the expression of any gene within an organism's genome. That could be used to increase expression of a protein coding gene or a transcribed RNA. A paper demonstrated that genome wide activation could be used to determine which proteins are involved in mediated resistance to a specific drug. Another paper used genome wide activation of long, noncoding RNAs and observed that increasing the expression of certain long noncoding RNAs conferred resistance to the drug vemurafenib. In both cases, the cells that survive the drug could be studied to determine which sgRNAs they contain. That allows researchers to determine which gene was activated in each surviving cell, which suggests which genes are important for resistance to that drug.
Use in organisms
A dCas9 fusion with VP64, p65, and HSF1 (heat shock factor 1) allowed researchers to target genes in Arabidopsis thaliana and increase transcription to a similar level as when the gene itself is inserted into the plant's genome. For one of the two genes tested, the dCas9 activator changes the number and size of leaves and made the plants better able to handle drought. The authors conclude that the dCas9 activator can create phenotypes in plants that are similar to those observed when a transgene is inserted for overexpression. Researchers have used multiple guide RNAs to target dCas9 activation system to multiple genes in a specific mouse strain in which dCas9 can be turned on in specific cell lines using the Cre recombinase system. Scientists used the targeting and increased expression of several genes to examine the processes involved in regeneration and carcinomas of the liver.
References
Genetic engineering
Genome editing | CRISPR activation | [
"Chemistry",
"Engineering",
"Biology"
] | 2,882 | [
"Genetics techniques",
"Biological engineering",
"Genome editing",
"Genetic engineering",
"Molecular biology"
] |
55,414,784 | https://en.wikipedia.org/wiki/4U%201543-475 | 4U 1543-475 is a recurrent X-ray transient located in the southern constellation Lupus, the wolf. IL Lupi is its variable star designation. It has an apparent magnitude that fluctuates between 14.6 and 16.7, making it readily visible in large telescopes but not to the naked eye. The object is located relatively far away, at a distance of approximately 17,000 light-years, based on Gaia DR3 parallax measurements.
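A distance quoted from parallax follows from inverting the angle: the distance in parsecs is the reciprocal of the parallax in arcseconds. A minimal sketch (the ~0.19 mas parallax used below is an illustrative value consistent with ~17,000 light-years, not the published Gaia measurement):

```python
def parallax_to_light_years(parallax_mas):
    """Convert a stellar parallax in milliarcseconds to light-years.

    Distance in parsecs is the reciprocal of the parallax in arcseconds;
    one parsec is about 3.2616 light-years.
    """
    parallax_arcsec = parallax_mas / 1000.0
    distance_pc = 1.0 / parallax_arcsec
    return distance_pc * 3.2616

# An illustrative parallax of ~0.19 mas gives roughly 17,000 light-years.
distance_ly = parallax_to_light_years(0.19)
```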
4U 1543-475 was first observed by Uhuru in 1971. In 1976, its spectrum was observed. However, its status as a black hole binary was not confirmed until 1984, by astronomer S. Kiamoto and colleagues. After subsequent observations, it was given the variable star designation IL Lupi in 1995. 4U 1543-475 has erupted three times.
See also
List of black holes
List of nearest black holes
References
Lupus (constellation)
Stellar black holes
X-ray binaries
Lupi, IL
A-type main-sequence stars | 4U 1543-475 | [
"Physics",
"Astronomy"
] | 213 | [
"Black holes",
"Stellar black holes",
"Unsolved problems in physics",
"Constellations",
"Lupus (constellation)"
] |
55,418,052 | https://en.wikipedia.org/wiki/Comparison%20of%20API%20simulation%20tools | The tools listed here support emulating or simulating APIs and software systems. They are also called API mocking tools, service virtualization tools, over the wire test doubles and tools for stubbing and mocking HTTP(S) and other protocols. They enable component testing in isolation.
In alphabetical order by name (click on a column heading to sort by that column):
See also
Test double
Service virtualization
References
API simulation tools | Comparison of API simulation tools | [
"Technology"
] | 88 | [
"Software comparisons",
"Computing comparisons"
] |
39,619,984 | https://en.wikipedia.org/wiki/Hydrogel%20encapsulation%20of%20quantum%20dots | The behavior of quantum dots (QDs) in solution and their interaction with other surfaces is of great importance to biological and industrial applications, such as optical displays, animal tagging, anti-counterfeiting dyes and paints, chemical sensing, and fluorescent tagging. However, unmodified quantum dots tend to be hydrophobic, which precludes their use in stable, water-based colloids. Furthermore, because the ratio of surface area to volume in a quantum dot is much higher than for larger particles, the thermodynamic free energy associated with dangling bonds on the surface is sufficient to impede the quantum confinement of excitons. Once solubilized by encapsulation in either a hydrophobic interior micelle or a hydrophilic exterior micelle, the QDs can be successfully introduced into an aqueous medium, in which they form an extended hydrogel network. In this form, quantum dots can be utilized in several applications that benefit from their unique properties, such as medical imaging and thermal destruction of malignant cancers.
Quantum dots
Quantum dots (QDs) are nano-scale semiconductor particles on the order of 2–10 nm in diameter. They possess electrical properties between those of bulk semi-conductors and individual molecules, as well as optical characteristics that make them suitable for applications where fluorescence is desirable, such as medical imaging. Most QDs synthesized for medical imaging are in the form of CdSe(ZnS) core(shell) particles. CdSe QDs have been shown to possess optical properties superior to organic dyes. The ZnS shell has a two-fold effect:
to interact with dangling bonds that would otherwise result in particle aggregation, loss of visual resolution, and impedance of quantum confinement effects
to further increase the fluorescence of the particles themselves.
Problems with CdSe(ZnS) quantum dots
Despite their potential for use as contrast agents for medical imaging techniques, their use in vivo is hindered by the cytotoxicity of cadmium. To address this issue, methods have been developed to "wrap" or "encapsulate" potentially toxic QDs in bio-inert polymers to facilitate use in living tissue. While Cd-free QDs are commercially available, they are unsuitable as a substitute for organic contrast agents. Another issue with CdSe(ZnS) nanoparticles is significant hydrophobicity, which hinders their ability to enter solution with aqueous media, such as blood or spinal fluid. Certain hydrophilic polymers can be used to render the dots water-soluble.
Synthesizing the encapsulant polymer
Rf-PEG synthesis
One notable quantum dot encapsulation technique involves utilizing a double fluoroalkyl-ended polyethylene glycol molecule (Rf-PEG) as a surfactant, which spontaneously forms micellar structures at its critical micelle concentration (CMC). The critical micelle concentration of the Rf-PEG depends on the length of the PEG portion of the polymer. This molecule consists of a hydrophilic PEG backbone with two hydrophobic terminal groups (CnF2n+1-CH2CH2O) attached via isophorone diurethane. It is synthesized by dehydrating a solution of 1,3-dimethyl-5-fluorouracil and PEG, mixing them in the presence of heavy water (D2O) via a sonicator to combine them.
Micellization
At the appropriate Krafft temperature and critical micelle concentration these molecules will form individual tear-drop loops, where the hydrophobic ends are attracted to one another, to other molecules, and also to the similarly hydrophobic QDs. This forms a loaded micelle with a hydrophilic outer shell and a hydrophobic core.
When encapsulating hydrophobes in this way it is important to ensure the particle size is appropriate for the PEG backbone being utilized, as the number of PEG mer units (generally with a molecular weight of 6 kDa or 10 kDa) determines the maximum particle size that can be successfully contained at the core of the micelle.
To determine the average diameter, D, of the QDs, the following empirical equation is used:
where
D is the diameter of the CdSe QD in nm
λ is the wavelength of the first absorption peak in nm
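The empirical sizing curve itself is not reproduced above. A common choice is the fourth-order polynomial fit of Yu et al. (2003) for CdSe; the sketch below assumes that fit, and the coefficients should be checked against the original paper:

```python
def cdse_diameter_nm(first_abs_peak_nm):
    """Estimate the CdSe quantum-dot diameter D (nm) from the wavelength
    of the first excitonic absorption peak (nm), using the empirical
    fourth-order polynomial fit of Yu et al. (2003). The coefficients are
    quoted from that fit (an assumption; check against the original) and
    are valid roughly for peaks between ~450 and ~650 nm.
    """
    L = first_abs_peak_nm
    return (1.6122e-9 * L**4
            - 2.6575e-6 * L**3
            + 1.6242e-3 * L**2
            - 0.4277 * L
            + 41.57)

# A first absorption peak near 550 nm corresponds to a dot roughly 3 nm across.
d = cdse_diameter_nm(550.0)
```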
Role of ZnS shell
It is during encapsulation that the ZnS shell plays an especially important role: it helps prevent the agglomeration that bare CdSe particles would otherwise undergo by occupying the previously mentioned dangling bonds on the dot's surface. However, clumping can still occur through secondary forces that arise from their common hydrophobicity. This can result in multiple particles within each micelle, which may negatively impact overall resolution. For this reason, multiple combinations of PEG chain length and particle diameter are necessary to achieve optimal imaging properties.
Hydrogel network
After initial encapsulation the remaining molecules form connections between the individual micelles to form a network within the aqueous media called a hydrogel, creating a diffuse and relatively constant concentration of the encapsulated particle within the gel. The formation of hydrogels is a phenomenon observed in superabsorbent polymers, or "slush powders," in which the polymer, often in the form of a powder, absorbs water, becoming up to 99% liquid and 30-60 times larger in size.
Stokes-Einstein equation
The diffusivity of spherical particles in a suspension is approximated by the Stokes–Einstein equation:

D = kBT / (6πηr)

where
T is the temperature
r is the particle radius
kB is the Boltzmann constant
η is the hydrogel viscosity
Typical Rf-PEG hydrogel diffusivities for 2 nm quantum dots are on the order of 10−16 m2/s, so suspensions of quantum dots tend to be very stable. Hydrogel viscosity can be determined by using rheological techniques.
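The Stokes–Einstein relation can be sketched directly; the temperature and viscosity below are illustrative assumptions chosen to land in the 10⁻¹⁶ m²/s range quoted above:

```python
import math

def stokes_einstein_diffusivity(T_kelvin, radius_m, viscosity_pa_s):
    """Diffusivity (m^2/s) of a spherical particle in a fluid from the
    Stokes-Einstein relation D = kB * T / (6 * pi * eta * r)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T_kelvin / (6.0 * math.pi * viscosity_pa_s * radius_m)

# Illustrative numbers: a 1 nm particle radius at body temperature in a very
# viscous hydrogel (~2000 Pa*s, an assumed value) gives D on the order of
# 1e-16 m^2/s, matching the magnitude quoted in the text.
D = stokes_einstein_diffusivity(310.0, 1e-9, 2000.0)
```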
Micelle rheology
When encapsulating hydrophobic or potentially toxic materials it is important that the encapsulant remain intact while inside the body. Studying the rheological properties of the micelles permits identification and selection of the polymer that is most appropriate for use in long-term biological applications. Rf-PEG exhibits superior rheological properties when used in vivo.
Importance of polymer length
The properties of the polymer are influenced by the chain length. The correct chain length ensures that the encapsulant is not released over time. Avoiding the release of QDs and other toxic particles is critical to prevent unintentional cell necrosis in patients.
The length of the polymer is controlled by two factors:
Weight of the PEG backbone measured in daltons or kilodaltons (Da or kDa),
Length of the hydrophobic ends, denoted by the number of carbon atoms in the terminal group (C#).
Increasing the PEG length increases the solubility of the polymer. However, if the PEG chain is too long the micelle will become unstable. It has been observed that a stable hydrogel can only be formed with PEG backbones weighing between six and ten kilodaltons.
On the other hand, increasing the length of the hydrophobic terminal groups decreases aqueous solubility. For a given PEG weight, if the hydrophobe is too short the polymer will simply dissolve into the solution, and if it is too long the polymer won't dissolve at all. Generally, two end groups result in the highest conversion into micelles (91%).
Maxwell fluid
At molecular weights between 6 and 10 kilodaltons the Rf-PEG hydrogel acts as a Maxwell material, which means the fluid has both viscosity and elasticity. This is determined by measuring the plateau modulus (the value at which the elastic modulus of a viscoelastic polymer is constant, or "relaxed", under deformation) at a range of frequencies via oscillatory rheology. Plotting the first- versus second-order integrals of the modulus values yields a Cole-Cole plot, which, when fitted to a Maxwell model, provides the following relationship:
where
Gp is the plateau modulus
ω is the oscillation frequency in radians per second
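The Maxwell-model relationship is not reproduced above; the sketch below assumes the textbook single-relaxation-time Maxwell fluid, whose storage and loss moduli trace out a semicircle of radius Gp/2 on a Cole-Cole plot:

```python
import math

def maxwell_moduli(omega, G_plateau, tau):
    """Storage and loss moduli of a single-mode Maxwell fluid:

    G'(w)  = Gp * (w*tau)**2 / (1 + (w*tau)**2)
    G''(w) = Gp * (w*tau)    / (1 + (w*tau)**2)
    """
    wt = omega * tau
    denom = 1.0 + wt * wt
    return G_plateau * wt * wt / denom, G_plateau * wt / denom

# On a Cole-Cole plot (loss modulus G'' against storage modulus G'), every
# frequency falls on a semicircle of radius Gp/2 centred at (Gp/2, 0).
Gp, tau = 1000.0, 0.5  # illustrative plateau modulus (Pa) and relaxation time (s)
points = [maxwell_moduli(w, Gp, tau) for w in (0.1, 1.0, 10.0, 100.0)]
radii = [math.hypot(G_storage - Gp / 2.0, G_loss) for G_storage, G_loss in points]
```

The semicircle follows algebraically: (G' − Gp/2)² + G''² works out to (Gp/2)² for every frequency.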
Mechanical properties of common Rf-PEG molecules
Based on the Maxwellian behavior of the hydrogel and observations of erosion via surface plasmon resonance (SPR), the following data results for 3 common Rf-PEG types at their specified concentrations:
XKCY denotes X thousand daltons of molecular mass and Y carbon atoms.
These values give information on the degree of entanglement (or the degree of cross-linking, depending on which polymer is being considered). In general, a higher degree of entanglement leads to a longer time required for the polymer to return to the undeformed state, i.e. a longer relaxation time.
Applications
Hydrogel encapsulation of the QDs opens up a new range of applications, such as:
Biosensors
Enzymes and other bio-active molecules serve as biorecognition units, while QDs serve as signalling units. By adding enzymes to the QD hydrogel network, both units can be combined to form a biosensor. The enzymatic reaction that detects a particular molecule quenches the fluorescence of the QDs. In this way, the location of molecules of interest can be observed.
Cell Influence and Imaging
Adding iron oxide nanoparticles to the QD micelles allows them to be fluorescent and magnetic. These micelles can be moved in a magnetic field to create concentration gradients that will influence a cell's processes.
Gold Hyperthermia
When excited by high-energy radiation, such as a laser, gold nanoparticles generate a localized thermal field. This phenomenon can be used as a form of hyperthermia therapy to destroy malignant cancers without damaging surrounding tissues. When combined with QDs in a hydrogel, this could facilitate real-time monitoring of the tumor treatment.
See also
Hydrophobe
Thermodynamics of micellization
Krafft temperature
Surfactants
Detergent
Entropic force
Cole–Cole equation
References
Surface science
Quantum dots | Hydrogel encapsulation of quantum dots | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,066 | [
"Condensed matter physics",
"Surface science"
] |
39,620,647 | https://en.wikipedia.org/wiki/Surface%20chemistry%20of%20paper | The surface chemistry of paper is responsible for many important paper properties, such as gloss, waterproofing, and printability. Many components are used in the paper-making process that affect the surface.
Pigment and dispersion medium
Coating components are subject to particle-particle, particle-solvent, and particle-polymer interactions. Van der Waals forces, electrostatic repulsion, and steric stabilization are the reasons for these interactions. Importantly, the characteristics of adhesion and cohesion between the components form the base coating structure. Calcium carbonate and kaolin are commonly used pigments. Pigments support a structure of fine porosity and form a light-scattering surface. The surface charge of the pigment plays an important role in dispersion consistency. The surface charge of calcium carbonate is negative and not dependent on pH; however, it can decompose under acidic conditions. Kaolin has negatively charged faces, while the charge of its lateral edges depends on pH, being positive in acidic conditions and negative in basic conditions, with an isoelectric point at pH 7.5. The equation for determining the isoelectric point is as follows:
In the papermaking process, the pigment dispersions are generally kept at a pH above 8.0.
Pigments, binders, and co-binders
Binders promote the binding of pigment particles between themselves and the coating layer of the paper. Binders are spherical particles less than 1 µm in diameter. Common binders are styrene maleic anhydride copolymer or styrene-acrylate copolymer. The surface chemical composition is differentiated by the adsorption of acrylic acid or an anionic surfactant, both of which are used for stabilization of the dispersion in water. Co-binders, or thickeners, are generally water-soluble polymers that influence the coating color's viscosity, water retention, sizing, and gloss. Some common examples are carboxymethyl cellulose (CMC), cationic and anionic hydroxyethyl cellulose (EHEC), modified starch, and dextrin.
Sizing
In sizing, the strength and printability of paper are increased. Sizing also improves the hydrophilic character, liquid spreading, and affinity for ink. Starch is the most common sizing agent. Cationic starch and hydrophobic sizing agents are also applied, including alkenyl succinic anhydride (ASA) and alkyl ketene dimers (AKD).
Cationic starch increases strength because it binds to the anionic paper fibers. The amount added is usually between ten and thirty pounds per ton. When starch exceeds the amount the fibers can bind to, it causes foaming in the production process as well as decreased retention and drainage.
Surface modification
Plasma surface modification
Surface modification makes paper hydrophobic and oleophilic. This combination allows ink oil to penetrate the paper but prevents dampening water absorption, which increases the paper's printability.
Three different plasma-solid interactions are used: etching/ablation, plasma activation, and plasma coating. Etching or ablation is when material is removed from the surface of the solid. Plasma activation is where species in the plasma, such as ions, electrons, or radicals, are used to chemically or physically modify the surface. Lastly, plasma coating is where material is deposited onto the surface in the form of a thin film. Plasma coating can be used to add hydrocarbons to surfaces, which can make a surface non-polar or hydrophobic. The specific type of plasma coating used to add hydrocarbons is the plasma-enhanced chemical vapor deposition (PECVD) process.
Contact angle
An ideal hydrophobic surface would have a contact angle of 180 degrees with water. This means that the hydrocarbons lie flat against the surface, creating a thin layer and preventing dampening water absorption. However, in practice a low level of dampening water absorption is acceptable or even preferred, because if a water layer settles at the surface of the paper, ink is unable to transfer to it. The contact angle for hydrocarbons on a rough, pigment-coated paper can be measured as approximately 110° with a contact angle meter.
The Young equation can be used to calculate the surface energy of a liquid on paper:

γSV = γSL + γLV cos θ

where γSL is the interfacial tension between the solid and the liquid, γLV is the interfacial tension between the liquid and the vapor, γSV is the interfacial tension between the solid and the vapor, and θ is the equilibrium contact angle.
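Rearranged for the contact angle, Young's equation gives cos θ = (γSV − γSL)/γLV. A minimal sketch with illustrative (assumed) tension values:

```python
import math

def young_contact_angle_deg(gamma_sv, gamma_sl, gamma_lv):
    """Equilibrium contact angle (degrees) from Young's equation,
    gamma_sv = gamma_sl + gamma_lv * cos(theta)."""
    cos_theta = (gamma_sv - gamma_sl) / gamma_lv
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("no equilibrium contact angle for these tensions")
    return math.degrees(math.acos(cos_theta))

# Illustrative (assumed) tensions in mN/m: a coating whose solid-liquid
# tension exceeds its solid-vapor tension is hydrophobic (theta > 90 deg),
# landing near the ~110 degrees quoted for hydrocarbon-treated paper.
theta = young_contact_angle_deg(gamma_sv=25.0, gamma_sl=50.0, gamma_lv=72.8)
```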
An ideal oleophilic surface would have a contact angle of 0° with oil, therefore allowing the ink to transfer to the paper and be absorbed. The hydrocarbon plasma coating provides an oleophilic surface to the paper by lowering the contact angle of the paper with the oil in the ink.
The hydrocarbon plasma coating increases the non-polar interactions while decreasing polar interactions which allow paper to absorb ink while preventing dampening water absorption.
Applications
Printing quality is highly influenced by the various treatments and methods used in creating paper and enhancing the paper surface. Consumers are most concerned with the paper-ink interactions which vary for certain types of paper due to different chemical properties of the surface. Inkjet paper is the most commercially used type of paper. Filter paper is another key type of paper whose surface chemistry affects its various forms and uses. The ability of adhesives to bond to a paper surface is also affected by the surface chemistry.
Inkjet printing paper
Co-styrene-maleic anhydride and co-styrene acrylate are common binders associated with a cationic starch pigment in Inkjet printing paper. Table 1 shows their surface tension under given conditions.
There have been several studies that have focused on how the paper printing quality is dependent on the concentration of these binders and ink pigment. Data from the experiments are congruent and stated in Table 2 as the corrected contact angle of water, the corrected contact angle of black ink, and the total surface energy.
The contact angle measurement has proven to be a very useful tool to evaluate the influence of the sizing formulation on the printing properties. Surface free energy has also shown to be very valuable in explaining the differences in sample behavior.
Filter paper
Various composite coatings were analyzed on filter paper in an experiment by Wang et al. The ability to separate homogeneous liquid solutions based on varying surface tensions has great practical use. Superhydrophobic and superoleophilic filter paper was created by treating the surface of commercially available filter paper with hydrophobic silica nanoparticles and a polystyrene solution in toluene. Oil and water were successfully separated with the treated filter paper at an efficiency greater than 96%. The filter paper was also successful in separating the liquids of a homogeneous solution by exploiting their differing surface tensions. Although with a lower efficiency, aqueous ethanol was also extracted from the solution when tested on the filter paper.
See also
Paper chemicals
Papermaking
Sizing
References
Paper | Surface chemistry of paper | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,463 | [
"Condensed matter physics",
"Surface science"
] |
42,392,462 | https://en.wikipedia.org/wiki/Chasles%27%20theorem%20%28kinematics%29 | In kinematics, Chasles' theorem, or Mozzi–Chasles' theorem, says that the most general rigid body displacement can be produced by a screw displacement. A direct Euclidean isometry in three dimensions involves a translation and a rotation. The screw displacement representation of the isometry decomposes the translation into two components, one parallel to the axis of the rotation associated with the isometry and the other component perpendicular to that axis. The Chasles theorem states that the axis of rotation can be selected to provide the second component of the original translation as a result of the rotation. This theorem in three dimensions extends a similar representation of planar isometries as rotations. Once the screw axis is selected, the screw displacement consists of a rotation about that axis together with a translation parallel to it.
Planar isometries with complex numbers
Euclidean geometry is expressed in the complex plane by points z = x + iy, where i squared is −1. Rotations about the origin result from multiplications by ω = exp(iθ).
Note that a rotation about a complex point p is obtained by the complex arithmetic

q ↦ p + ω(q − p) = ωq + p(1 − ω),

where the last expression shows the mapping equivalent to a rotation at 0 followed by a translation.
Therefore, given a direct isometry q ↦ ωq + a, one can solve p(1 − ω) = a to obtain p = a/(1 − ω) as the center for an equivalent rotation, provided ω ≠ 1, that is, provided the direct isometry is not a pure translation. As stated by Cederberg, "A direct isometry is either a rotation or a translation."
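The center of the equivalent rotation for a direct isometry q ↦ ωq + a is p = a/(1 − ω). A minimal sketch verifying that p is a fixed point:

```python
def rotation_center(omega, a):
    """Fixed point (rotation center) of the direct planar isometry
    q -> omega*q + a, valid when omega != 1 (not a pure translation)."""
    if abs(omega - 1.0) < 1e-12:
        raise ValueError("pure translation: no fixed point")
    return a / (1.0 - omega)

# A 90-degree rotation (omega = i) combined with a translation by 2:
omega = 1j
a = 2.0 + 0.0j
p = rotation_center(omega, a)

# p is genuinely fixed, so the isometry is a pure rotation about p.
image_of_p = omega * p + a
```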
History
The proof that a spatial displacement can be decomposed into a rotation and slide around and along a line is attributed to the astronomer and mathematician Giulio Mozzi (1763); in fact, the screw axis is traditionally called asse di Mozzi in Italy. However, most textbooks refer to a subsequent similar work by Michel Chasles dating from 1830. Several other contemporaries of Chasles obtained the same or similar results around that time, including G. Giorgini, Cauchy, Poinsot, Poisson and Rodrigues. An account of the 1763 proof by Giulio Mozzi and some of its history is available in the literature.
Proof
Mozzi considers a rigid body undergoing first a rotation about an axis passing through the center of mass and then a translation of displacement D in an arbitrary direction. Any rigid motion can be accomplished in this way due to a theorem by Euler on the existence of an axis of rotation.
The displacement D of the center of mass can be decomposed into components parallel and perpendicular to the axis. The perpendicular (and parallel) component acts on all points of the rigid body, but Mozzi shows that for some points the previous rotation produced exactly the opposite of the perpendicular displacement, so those points are translated only parallel to the axis of rotation. These points lie on the Mozzi axis, through which the rigid motion can be accomplished as a screw motion.
Another elementary proof of Mozzi–Chasles' theorem was given by E. T. Whittaker in 1904. Suppose A is to be transformed into B. Whittaker suggests that line AK be selected parallel to the axis of the given rotation, with K the foot of a perpendicular from B. The appropriate screw displacement is about an axis parallel to AK such that K is moved to B. In Whittaker's terms, "A rotation about any axis is equivalent to a rotation through the same angle about any axis parallel to it, together with a simple translation in a direction perpendicular to the axis."
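The argument can be made concrete with elementary linear algebra: split the translation d into components parallel and perpendicular to the rotation axis, then solve (I − R)c = d⊥ for a point c on the screw axis. The sketch below assumes, for simplicity, a rotation about the z-axis, so the solve reduces to a 2×2 system (and requires a nonzero rotation angle):

```python
import math

def screw_decompose_z(theta, d):
    """Decompose the rigid motion x -> R_z(theta) x + d (rotation about the
    z-axis by a nonzero angle theta, followed by translation d) into a screw
    motion: a point c on the screw axis and the translation along the axis.

    c solves (I - R) c = d_perp in the xy-plane; the translation along the
    axis is simply the z-component of d.
    """
    dx, dy, dz = d
    c, s = math.cos(theta), math.sin(theta)
    # (I - R) restricted to the xy-plane:
    #   [1-c   s ] [cx]   [dx]
    #   [-s   1-c] [cy] = [dy]
    det = (1.0 - c) ** 2 + s * s
    cx = ((1.0 - c) * dx - s * dy) / det
    cy = (s * dx + (1.0 - c) * dy) / det
    return (cx, cy, 0.0), (0.0, 0.0, dz)

theta = math.pi / 2
d = (1.0, 0.0, 3.0)
axis_point, axis_translation = screw_decompose_z(theta, d)

# Check: the axis point is displaced only along the rotation axis.
ax, ay, az = axis_point
c, s = math.cos(theta), math.sin(theta)
moved = (c * ax - s * ay + d[0], s * ax + c * ay + d[1], az + d[2])
```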
Calculation
The calculation of the commuting translation and rotation from a screw motion can be performed using 3DPGA (ℝ3,0,1), the geometric algebra of 3D Euclidean space. It has three Euclidean basis vectors e1, e2, e3 satisfying ei² = 1, representing orthogonal planes through the origin, and one Grassmannian basis vector e0 satisfying e0² = 0, representing the plane at infinity. Any plane at a distance δ from the origin can then be formed as a linear combination a = n1e1 + n2e2 + n3e3 + δe0, which is normalized such that a² = 1. Because a reflection can be represented by the plane in which it occurs, the product of two planes a and b is the bireflection ab. The result is a rotation around their intersection line, which could also lie on the plane at infinity when the two reflections are parallel, in which case the bireflection is a translation.
A screw motion is the product of four non-collinear reflections, and thus M = abcd. But according to the Mozzi–Chasles theorem, a screw motion can be decomposed into a commuting translation T = e^(t/2), where t is the axis of translation satisfying t² = 0, and a rotation R = e^(r/2), where r is the axis of rotation satisfying r² = −1. The two bivector lines t and r are orthogonal and commuting. To find T and R from M, one simply writes out M = TR and considers the result grade by grade. Because the quadrivector part of M arises only from the product of the bivector parts of T and R, the translation part can be read off directly, and the rotation then follows as R = T⁻¹M. Thus, for a given screw motion the commuting translation and rotation can be found, after which the lines t and r are proportional to the bivector parts of T and R respectively.
Other dimensions and fields
The Chasles' theorem is a special case of the Invariant decomposition.
References
Further reading
Benjamin Peirce (1872) A System of Analytical Mechanics, III. Combined Motions of Rotation and Translation, especially § 32 and § 39, David van Nostrand & Company, link from Internet Archive
Richard M. Friedberg (2022) "Rodrigues, Olinde: "Des lois géométriques qui régissent les déplacements d'un systéme solide...", translation and commentary", explication of 1840 article by Rodrigues, see §4 on Chasles theorem
Mathematical theorems
Kinematics
Euclidean solid geometry
Rotation in three dimensions | Chasles' theorem (kinematics) | [
"Physics",
"Mathematics",
"Technology"
] | 1,136 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Mathematical theorems",
"Euclidean solid geometry",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics",
"Space",
"nan",
"Spacetime",
"Mathematical problems"
] |
42,392,488 | https://en.wikipedia.org/wiki/Chasles%27%20theorem%20%28gravity%29 | In gravitation, Chasles' theorem says that the Newtonian gravitational attraction of a spherical shell, outside of that shell, is equivalent mathematically to the attraction of a point mass. The theorem is conventionally known as Newton's shell theorem, but is attributed to Michel Chasles (1793–1880) by Benjamin Peirce.
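The theorem admits a direct numerical check: sum the axial pull of the thin rings making up the shell and compare with a point mass at the centre. A sketch in units where G, the shell mass, and the test mass are all 1 (the ring count is an arbitrary discretization choice):

```python
import math

def shell_axial_attraction(R, dist, n=20000):
    """Gravitational pull (per unit G, shell mass, and test mass) of a thin
    uniform spherical shell of radius R on an external point at distance
    dist from its centre, integrated numerically over rings at polar angle
    phi (midpoint rule)."""
    total = 0.0
    dphi = math.pi / n
    for i in range(n):
        phi = (i + 0.5) * dphi
        dm = 0.5 * math.sin(phi) * dphi  # this ring's fraction of the shell mass
        r2 = R * R + dist * dist - 2.0 * R * dist * math.cos(phi)
        total += dm * (dist - R * math.cos(phi)) / r2 ** 1.5
    return total

R, dist = 1.0, 3.0
shell_pull = shell_axial_attraction(R, dist)
point_pull = 1.0 / dist ** 2  # the same mass concentrated at the centre
```

The two values agree to the accuracy of the discretization, as the theorem asserts for any external point.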
Benjamin Peirce followed Chasles' work, which developed an analogy between the conduction of heat and gravitational attraction:
A single current in a direction perpendicular to the level surfaces, and having a velocity proportionate to the decrease in density … is the law of propagation of heat, when there is no radiation, and hence arise the analogies between the levels and isothermal surfaces, and the identity of the mathematical investigations of the attraction of bodies and the propagation of heat which have been developed by Chasles.
The Chaslesian shell is the figure that Peirce exploits:
If an infinitely thin homogeneous shell is formed upon each level surface, of a system of bodies, having at each point a thickness proportional to the attraction at that point, the portion of either of these shells, which is included in a canal formed by trajectories, bears the same ratio to the whole shell, which the portion of another shell included in the same canal bears to that shell, provided there is no mass included between the shells.
The conception of these shells, and the investigation of their acting and reacting properties was original with Chasles, and it will be convenient, as it is appropriate, to designated them as Chaslesian shells.
Chasles' theorem as expressed by Peirce:
The external level surfaces of a shell are the same with those of the original masses, and the attraction of the shell upon an external point has the same direction with the attraction of the original masses, and is normal to the level surface passing through the point. This theorem is due to Chasles.
The ellipsoid is recruited to bound the Chaslesian shells:
An infinitely thin homogeneous shell, of which the inner and outer surfaces are those of similar, and similarly placed, concentric ellipsoids, is a Chaslesian shell.
See also
Newton's shell theorem
References
Theorems in mathematical physics
Theories of gravity
Newtonian gravity | Chasles' theorem (gravity) | [
"Physics",
"Mathematics"
] | 457 | [
"Mathematical theorems",
"Equations of physics",
"Theoretical physics",
"Theorems in mathematical physics",
"Theories of gravity",
"Mathematical problems",
"Physics theorems"
] |
42,394,237 | https://en.wikipedia.org/wiki/Integrated%20manure%20utilization%20system | IMUS (also known as integrated manure utilization system) is an anaerobic digestion technology that converts organic material into biogas, which is used to produce electricity, heat, and nutrients. The technology uses wastes such as municipal waste, cow manure, sand-laden feedlot waste, and food processing waste. It can be integrated with other industrial processes, such as municipal facilities, open-pen feedlots, food processing, and ethanol plants. The technology was developed in 1999 by Himark BioGas.
References
Anaerobic digestion | Integrated manure utilization system | [
"Chemistry",
"Engineering"
] | 110 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
42,396,644 | https://en.wikipedia.org/wiki/Membrane%20vesicle%20trafficking | Membrane vesicle trafficking in eukaryotic animal cells involves movement of biochemical signal molecules from synthesis-and-packaging locations in the Golgi body to specific release locations on the inside of the plasma membrane of the secretory cell. It takes place in the form of Golgi membrane-bound micro-sized vesicles, termed membrane vesicles (MVs).
In this process, the packed cellular products are released or secreted outside the cell, across its membrane. On the other hand, the vesicular membrane is retained and recycled by the secretory cells. This phenomenon has a major role in synaptic neurotransmission, endocrine secretion, mucous secretion, granular-product secretion by neutrophils, and other phenomena. The scientists behind this discovery were awarded the 2013 Nobel Prize in Physiology or Medicine.
In prokaryotic, gram-negative bacterial cells, membrane vesicle trafficking is mediated through bacterial outer membrane bounded nano-sized vesicles, called outer membrane vesicles (OMVs). In this case, however, the OMV membrane is secreted as well, along with OMV-contents to outside the secretion-active bacterium. This different phenomenon has a major role in host–pathogen interactions, endotoxic shock in patients, invasion and infection of animals or plants, inter-species bacterial competition, quorum sensing, exocytosis, and other areas.
Movement within eukaryotic cells
Once vesicles are produced in the endoplasmic reticulum and modified in the Golgi body, they make their way to a variety of destinations within the cell. Vesicles first leave the Golgi body and are released into the cytoplasm in a process called budding. Vesicles are then moved towards their destination by motor proteins. Once the vesicle arrives at its destination, it joins with the lipid bilayer in a process called fusion, and then releases its contents.
Budding
Receptors embedded in the membrane of the Golgi body bind specific cargo (such as dopamine) on the lumenal side of the vesicle. These cargo receptors then recruit a variety of proteins including other cargo receptors and coat proteins such as clathrin, COPI and COPII. As more and more of these coating proteins come together, they cause the vesicle to bud outward and eventually break free into the cytoplasm. The coating proteins are then shed into the cytoplasm to be recycled and reused.
Motility between cell compartments
For movement between different compartments within the cell, vesicles rely on the motor proteins myosin, kinesin (primarily anterograde transport) and dynein (primarily retrograde transport). One end of the motor protein attaches to the vesicle while the other end attaches to either microtubules or microfilaments. The motor proteins then move by hydrolyzing ATP, which propels the vesicle towards its destination.
Docking and Fusion
As a vesicle nears its intended location, Rab proteins in the vesicle membrane interact with docking proteins at the destination site. These docking proteins bring the vesicle closer, so that it can interact with the SNARE complex found in the target membrane. The SNARE complex reacts with synaptobrevin found on the vesicle membrane. This forces the vesicle membrane against the membrane of the target complex (or the outer membrane of the cell) and causes the two membranes to fuse. Depending on whether the vesicle fuses with a target complex or the outer membrane, the contents of the vesicle are then released either into the target complex or outside the cell.
Examples in eukaryotes
Intracellular trafficking occurs between subcellular compartments like Golgi cisternae and multivesicular endosomes for transport of soluble proteins as MVs.
Budding of MVs directly from plasma membrane as microvesicles released outside the secretory cells.
Exosomes are MVs that can form inside an internal compartment like multivesicular endosome. Exosomes are released eventually due to fusion of this endosome with plasma membrane of cell.
Hijacking of exosomal machinery by some viruses like retroviruses, wherein viruses bud inside multivesicular endosomes and get secreted subsequently as exosomes.
All of these modes (1–4) of membrane vesicle trafficking in eukaryotic cells have been explained diagrammatically.
In prokaryotes
Unlike in eukaryotes, membrane vesicular trafficking in prokaryotes is an emerging area in interactive biology for intra-species (quorum sensing) and inter-species signaling at the host–pathogen interface, as prokaryotes lack internal membrane-compartmentalization of their cytoplasm. Bacterial outer membrane vesicle dispersion along the cell surface was measured in live Escherichia coli, a commensal bacterium common in the human gut. Antibiotic treatment altered vesicle dynamics, vesicle-to-membrane affinity, and surface properties of the cell membranes, generally enhancing vesicle transport along the surfaces of bacterial membranes and suggesting that their motion properties could be a signature of antibiotic stress.
For more than four decades, cultures of gram-negative bacteria have revealed the presence of nanoscale membrane vesicles. A role for membrane vesicles in pathogenic processes has been suspected since the 1970s, when they were observed in gingival plaque by electron microscopy. These vesicles were suspected to promote bacterial adhesion to the host epithelial cell surface. Their role in invasion of animal host cells in vivo was then demonstrated. In inter-bacterial interactions, OMVs released by Pseudomonas aeruginosa were shown to fuse with the outer membrane of other gram-negative bacteria, causing their bacteriolysis; these OMVs could lyse gram-positive bacteria as well. The role of OMVs in Helicobacter pylori infection of human primary antral epithelial cells, a model that closely resembles the human stomach, has also been confirmed, and VacA-containing OMVs could be detected in human gastric mucosa infected with H. pylori. Salmonella OMVs were also shown to have a direct role in the invasion of chicken ileal epithelial cells in vivo in 1993, and later in hijacking defense macrophages into sub-service for pathogen replication and the consequent apoptosis of infected macrophages in typhoid-like animal infection. These studies brought OMVs into the focus of membrane vesicle trafficking and showed this phenomenon to be involved in multifarious processes including genetic transformation, quorum sensing, the competition arsenal among microbes, and the invasion, infection, and immuno-modulation of animal hosts. A mechanism has been proposed for the generation of OMVs by gram-negative bacteria: expansion of pockets of periplasm (named periplasmic organelles) due to accumulation of bacterial cell secretions, and their pinching off as outer-membrane-bounded vesicles (OMVs) along the lines of a 'soap bubble' forming at a bubble tube, followed by fusion or uptake of the diffusing OMVs by host/target cells (Fig. 2).
In conclusion, membrane vesicle trafficking via OMVs of Gram-negative organisms, cuts across species and kingdoms – including plant kingdom – in the realm of cell-to-cell signaling.
See also
Bacterial outer membrane vesicles
Endocytosis
Exocytosis
Host–pathogen interaction
Secretory pathway
Vesicle (Biology and Chemistry)
Virulence
References
External links
Press release for the 2013 Nobel Prize in Physiology or Medicine: http://www.nobelprize.org/nobel_prizes/medicine/laureates/2013/press.html
Discovery of vesicular exocytosis in prokaryotes https://www.researchgate.net/publication/230793568_Discovery_of_vesicular_exocytosis_in_prokaryotes_and_its_role_in_Salmonella_invasion?ev=prf_pub
Membrane biology
Cell communication | Membrane vesicle trafficking | [
"Chemistry",
"Biology"
] | 1,701 | [
"Cell communication",
"Membrane biology",
"Cellular processes",
"Molecular biology"
] |
42,397,902 | https://en.wikipedia.org/wiki/Microscale%20and%20macroscale%20models | Microscale models form a broad class of computational models that simulate fine-scale details, in contrast with macroscale models, which amalgamate details into select categories. Microscale and macroscale models can be used together to understand different aspects of the same problem.
Applications
Macroscale models can include ordinary, partial, and integro-differential equations, where categories and flows between the categories determine the dynamics, or may involve only algebraic equations. An abstract macroscale model may be combined with more detailed microscale models. Connections between the two scales are related to multiscale modeling. One mathematical technique for multiscale modeling of nanomaterials is based upon the use of multiscale Green's function.
In contrast, microscale models can simulate a variety of details, such as individual bacteria in biofilms, individual pedestrians in simulated neighborhoods, individual light beams in ray-tracing imagery, individual houses in cities, fine-scale pores and fluid flow in batteries, fine-scale compartments in meteorology, fine-scale structures in particulate systems, and other models where interactions among individuals and background conditions determine the dynamics.
Discrete-event models, individual-based models, and agent-based models are special cases of microscale models. However, microscale models do not require discrete individuals or discrete events. Fine details on topography, buildings, and trees can add microscale detail to meteorological simulations and can connect to what is called mesoscale models in that discipline. Square-meter-sized landscape resolution available from images allows water flow across land surfaces to be modeled, for example, rivulets and water pockets, using gigabyte-sized arrays of detail. Models of neural networks may include individual neurons but may run in continuous time and thereby lack precise discrete events.
History
Ideas for computational microscale models arose in the earliest days of computing and were applied to complex systems that could not accurately be described by standard mathematical forms.
Two themes emerged in the work of two founders of modern computation around the middle of the 20th century. First, pioneer Alan Turing used simplified macroscale models to understand the chemical basis of morphogenesis, but then proposed and used computational microscale models to understand the nonlinearities and other conditions that would arise in actual biological systems. Second, pioneer John von Neumann created a cellular automaton to understand the possibilities for self-replication of arbitrarily complex entities, which had a microscale representation in the cellular automaton but no simplified macroscale form. This second theme is taken to be part of agent-based models, where the entities ultimately can be artificially intelligent agents operating autonomously.
By the last quarter of the 20th century, computational capacity had grown so far that up to tens of thousands of individuals or more could be included in microscale models, and that sparse arrays could be applied to also achieve high performance. Continued increases in computing capacity allowed hundreds of millions of individuals to be simulated on ordinary computers with microscale models by the early 21st century.
The term "microscale model" arose later in the 20th century and now appears in the literature of many branches of physical and biological science.
Example
Figure 1 represents a fundamental macroscale model: population growth in an unlimited environment. Its equation is relevant elsewhere, such as compounding growth of capital in economics or exponential decay in physics. It has one amalgamated variable, N(t), the number of individuals in the population at time t. It has an amalgamated parameter r, the annual growth rate of the population, calculated as the difference between the annual birth rate b and the annual death rate d. Time t can be measured in years, as shown here for illustration, or in any other suitable unit.
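This macroscale model can be written directly in code. A minimal sketch in Python (the function name is illustrative; the symbols N0, b, d follow the exponential-growth notation assumed above):

```python
import math

def population(n0: float, b: float, d: float, t: float) -> float:
    """Macroscale model of Figure 1: dN/dt = (b - d) * N.

    Closed-form solution N(t) = N0 * exp((b - d) * t), with annual
    birth rate b, annual death rate d, and initial population N0.
    """
    return n0 * math.exp((b - d) * t)

# Example: 100 individuals, b = 0.3/yr, d = 0.1/yr, after 10 years
n10 = population(100, 0.3, 0.1, 10)   # ≈ 100 * e^2 ≈ 738.9
```

The single amalgamated parameter r = b − d is all that the macroscale dynamics depend on; the individual rates b and d only matter separately at the microscale.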
The macroscale model of Figure 1 amalgamates parameters and incorporates a number of simplifying approximations:
the birth and death rates are constant;
all individuals are identical, with no genetics or age structure;
fractions of individuals are meaningful;
parameters are constant and do not evolve;
habitat is perfectly uniform;
no immigration or emigration occurs; and
randomness does not enter.
These approximations of the macroscale model can all be refined in analogous microscale models. On the first approximation listed above—that birth and death rates are constant—the macroscale model of Figure 1 is exactly the mean of a large number of stochastic trials with the growth rate fluctuating randomly in each instance of time. Microscale stochastic details are subsumed into a partial differential diffusion equation and that equation is used to establish the equivalence.
To relax other assumptions, researchers have applied computational methods. Figure 2 is a sample computational microscale algorithm that corresponds to the macroscale model of Figure 1. When all individuals are identical and mutations in birth and death rates are disabled, the microscale dynamics closely parallel the macroscale dynamics (Figures 3A and 3B). The slight differences between the two models arise from stochastic variations in the microscale version not present in the deterministic macroscale model. These variations will be different each time the algorithm is carried out, arising from intentional variations in random number sequences.
When not all individuals are identical, the microscale dynamics can differ significantly from the macroscale dynamics, simulating more realistic situations than can be modeled at the macroscale (Figures 3C and 3D). The microscale model does not explicitly incorporate the differential equation, though for large populations it simulates it closely. When individuals differ from one another, the system has a well-defined behavior but the differential equations governing that behavior are difficult to codify. The algorithm of Figure 2 is a basic example of what is called an equation-free model.
When mutations are enabled in the microscale model, the population grows more rapidly than in the macroscale model (Figures 3C and 3D). Mutations in parameters allow some individuals to have higher birth rates and others to have lower death rates, and those individuals contribute proportionally more to the population. All else being equal, the average birth rate drifts to higher values and the average death rate drifts to lower values as the simulation progresses. This drift is tracked in the data structures named beta and delta of the microscale algorithm of Figure 2.
The algorithm of Figure 2 is a simplified microscale model using the Euler method. Other algorithms such as the Gillespie method and the discrete event method are also used in practice. Versions of the algorithm in practical use include efficiencies such as removing individuals from consideration once they die (to reduce memory requirements and increase speed) and scheduling stochastic events into the future (to provide a continuous time scale and to further improve speed). Such approaches can be orders of magnitude faster.
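To make the comparison concrete, here is a minimal Euler-style individual-based sketch in Python (not the pseudocode of Figure 2 itself; the function and variable names and the uniform-mutation scheme are illustrative assumptions), in which each individual carries its own birth and death rates:

```python
import random

def simulate(n0=200, b=0.3, d=0.1, years=10, dt=1.0, sigma=0.0, seed=1):
    """Euler-style individual-based analogue of dN/dt = (b - d) * N.

    beta[i] and delta[i] are the birth and death rates of individual i;
    offspring inherit their parent's rates, perturbed by a uniform
    mutation of half-width sigma (sigma = 0 disables mutation).
    Returns the final population size.
    """
    rng = random.Random(seed)
    beta, delta = [b] * n0, [d] * n0
    t = 0.0
    while t < years:
        # Iterate downward so that pops and appends do not disturb
        # the indices still to be visited within this time step.
        for i in range(len(beta) - 1, -1, -1):
            u = rng.random()
            if u < beta[i] * dt:                     # birth event
                beta.append(max(0.0, beta[i] + rng.uniform(-sigma, sigma)))
                delta.append(max(0.0, delta[i] + rng.uniform(-sigma, sigma)))
            elif u < (beta[i] + delta[i]) * dt:      # death event
                beta.pop(i)
                delta.pop(i)
        t += dt
    return len(beta)
```

With sigma = 0 the expected per-step multiplier per individual is 1 + (b − d)·dt, so a run with n0 = 200, b = 0.3, d = 0.1 over 10 one-year steps fluctuates around 200 · 1.2^10 ≈ 1238; with sigma > 0 the average rates drift as described above and growth outpaces the macroscale prediction.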
Complexity
The complexity of systems addressed by microscale models leads to complexity in the models themselves, and the specification of a microscale model can be tens or hundreds of times larger than its corresponding macroscale model. (The simplified example of Figure 2 has 25 times as many lines in its specification as does Figure 1.) Since bugs occur in computer software and cannot completely be removed by standard methods such as testing, and since complex models often are neither published in detail nor peer-reviewed, their validity has been called into question. Guidelines on best practices for microscale models exist but no papers on the topic claim a full resolution of the problem of validating complex models.
Future
Computing capacity is reaching levels where populations of entire countries or even the entire world are within the reach of microscale models, and improvements in the census and travel data allow further improvements in parameterizing such models. Remote sensors from Earth-observing satellites and ground-based observatories such as the National Ecological Observatory Network (NEON) provide large amounts of data for calibration. Potential applications range from predicting and reducing the spread of disease to helping understand the dynamics of the earth.
Figures
Figure 1. One of the simplest of macroscale models: an ordinary differential equation describing continuous exponential growth, dN/dt = (b − d) N. Here N(t) is the size of the population at time t and dN/dt is its rate of change through time. N_0 is the initial population, b is the birth rate per time unit, and d is the death rate per time unit. At the left is the differential form; at the right is the explicit solution N(t) = N_0 e^{(b − d) t} in terms of standard mathematical functions, which follows in this case from the differential form. Almost all macroscale models are more complex than this example, in that they have multiple dimensions, lack explicit solutions in terms of standard mathematical functions, and must be understood from their differential forms.
Figure 2. A basic algorithm applying the Euler method to an individual-based model. See text for discussion. The algorithm, represented in pseudocode, begins with invocation of procedure , which uses the data structures to carry out the simulation according to the numbered steps described at the right. It repeatedly invokes function , which returns its parameter perturbed by a random number drawn from a uniform distribution with standard deviation defined by the variable . (The square root of 12 appears because the standard deviation of a uniform distribution includes that factor.) Function in the algorithm is assumed to return a uniformly distributed random number . The data are assumed to be reset to their initial values on each invocation of .
Figure 3. Graphical comparison of the dynamics of macroscale and microscale simulations of Figures 1 and 2, respectively.
(A) The black curve plots the exact solution to the macroscale model of Figure 1 with per year, per year, and individuals.
(B) Red dots show the dynamics of the microscale model of Figure 2, shown at intervals of one year, using the same parameter values, and with no mutations.
(C) Blue dots show the dynamics of the microscale model with mutations having a standard deviation of .
(D) Green dots show results with larger mutations, .
References
Dynamical systems
Mathematical and theoretical biology
Mathematical modeling
Numerical differential equations
Population models
Scientific models
Simulation
Crowds | Microscale and macroscale models | [
"Physics",
"Mathematics"
] | 2,036 | [
"Mathematical modeling",
"Mathematical and theoretical biology",
"Applied mathematics",
"Mechanics",
"Dynamical systems"
] |
40,935,941 | https://en.wikipedia.org/wiki/9-Carboxymethoxymethylguanine | 9-Carboxymethoxymethylguanine (CMMG) is a compound which is known as the principal metabolite of the antiviral medication aciclovir (and its prodrug valaciclovir), and has been suggested as the causative agent in the neuropsychiatric side effects sometimes associated with these medications. These are mainly suffered by patients with kidney failure or otherwise decreased kidney function, and can include psychotic reactions, hallucinations, and rarely more complex disorders such as Cotard delusion. Patients suffering these symptoms following aciclovir treatment were found to have much higher levels of CMMG than normal, and since this is the first time Cotard delusion has been linked to a drug as a side effect, this discovery may be useful in the study of Cotard delusion and its treatment.
References
Purines
Carboxylic acids | 9-Carboxymethoxymethylguanine | [
"Chemistry"
] | 186 | [
"Pharmacology",
"Carboxylic acids",
"Functional groups",
"Medicinal chemistry stubs",
"Pharmacology stubs"
] |
40,939,889 | https://en.wikipedia.org/wiki/Nucleoplasmin | Nucleoplasmin, the first identified molecular chaperone is a thermostable acidic protein with a pentameric structure. The protein was first isolated from Xenopus species
Functions
The pentameric protein participates in various significant cellular activities like sperm chromatin remodeling, nucleosome assembly, genome stability, ribosome biogenesis, DNA duplication and transcriptional regulation. During the assembly of regular nucleosomal arrays, nucleoplasmin binds histones and transfers them to the DNA. This reaction requires ATP.
Human proteins
Humans express three members of the nucleoplasmin family:
Nucleophosmin (NPM1)
Nucleoplasmin 2 (NPM2)
Nucleoplasmin 3 (NPM3)
References
Further reading
Molecular chaperones | Nucleoplasmin | [
"Chemistry"
] | 172 | [
"Biochemistry stubs",
"Protein stubs"
] |
40,941,004 | https://en.wikipedia.org/wiki/C5H6S | {{DISPLAYTITLE:C5H6S}}
The molecular formula C5H6S may refer to:
Methylthiophenes
2-Methylthiophene, an organosulfur compound that can be produced by Wolff-Kishner reduction of thiophene-2-carboxaldehyde
3-Methylthiophene, an organosulfur that can be produced by sulfidation of 2-methylsuccinate
Thiopyran, a heterocyclic compound | C5H6S | [
"Chemistry"
] | 110 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
40,941,916 | https://en.wikipedia.org/wiki/Palatini%20identity | In general relativity and tensor calculus, the Palatini identity is
where denotes the variation of Christoffel symbols and indicates covariant differentiation.
The "same" identity holds for the Lie derivative . In fact, one has
where denotes any vector field on the spacetime manifold .
Proof
The Riemann curvature tensor is defined in terms of the Levi-Civita connection \Gamma^\lambda_{\mu\nu} as

R^\rho_{\sigma\mu\nu} = \partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda} \Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda} \Gamma^\lambda_{\mu\sigma}.

Its variation is

\delta R^\rho_{\sigma\mu\nu} = \partial_\mu \delta\Gamma^\rho_{\nu\sigma} - \partial_\nu \delta\Gamma^\rho_{\mu\sigma} + \Gamma^\rho_{\mu\lambda} \delta\Gamma^\lambda_{\nu\sigma} + \delta\Gamma^\rho_{\mu\lambda} \Gamma^\lambda_{\nu\sigma} - \Gamma^\rho_{\nu\lambda} \delta\Gamma^\lambda_{\mu\sigma} - \delta\Gamma^\rho_{\nu\lambda} \Gamma^\lambda_{\mu\sigma}.

While the connection is not a tensor, the difference between two connections is, so we can take its covariant derivative

\nabla_\mu (\delta\Gamma^\rho_{\nu\sigma}) = \partial_\mu \delta\Gamma^\rho_{\nu\sigma} + \Gamma^\rho_{\mu\lambda} \delta\Gamma^\lambda_{\nu\sigma} - \Gamma^\lambda_{\mu\nu} \delta\Gamma^\rho_{\lambda\sigma} - \Gamma^\lambda_{\mu\sigma} \delta\Gamma^\rho_{\nu\lambda}.

Solving this equation for \partial_\mu \delta\Gamma^\rho_{\nu\sigma} and substituting the result in \delta R^\rho_{\sigma\mu\nu}, all the \Gamma \delta\Gamma-like terms cancel, leaving only

\delta R^\rho_{\sigma\mu\nu} = \nabla_\mu (\delta\Gamma^\rho_{\nu\sigma}) - \nabla_\nu (\delta\Gamma^\rho_{\mu\sigma}).

Finally, the variation of the Ricci curvature tensor follows by contracting two indices, proving the identity

\delta R_{\sigma\nu} = \delta R^\rho_{\sigma\rho\nu} = \nabla_\rho (\delta\Gamma^\rho_{\nu\sigma}) - \nabla_\nu (\delta\Gamma^\rho_{\rho\sigma}).
See also
Einstein–Hilbert action
Palatini variation
Ricci calculus
Tensor calculus
Christoffel symbols
Riemann curvature tensor
Notes
References
Equations of physics
Tensors
General relativity | Palatini identity | [
"Physics",
"Mathematics",
"Engineering"
] | 233 | [
"Tensors",
"Equations of physics",
"Mathematical objects",
"Equations",
"General relativity",
"Theory of relativity"
] |
40,942,048 | https://en.wikipedia.org/wiki/Air%20sparging | Air sparging, also known as in situ air stripping and in situ volatilization is an in situ remediation technique, used for the treatment of saturated soils and groundwater contaminated by volatile organic compounds (VOCs) like petroleum hydrocarbons, a widespread problem for the ground water and soil health. Vapor extraction has become a very successful and practical method of VOC remediation. In saturated zone remediation, air sparging refers to the injection of a hydrocarbon-free gaseous medium into the ground where contamination has been found. When it comes to situ air sparging it became an intricate phase process that was proven to be successful in Europe since the 1980s. Currently, there have been further developments into bettering the engineering design and process of air sparging.
Mechanism
Air sparging is a subsurface contaminant remediation technique that involves the injection of pressurized air into contaminated ground water causing hydrocarbons to change state from dissolved to vapor state. The air is then sent to the vacuum extraction systems to remove the contaminants. The extracted air or "off vapors" are treated to remove any toxic contaminants.
Methods and treatment
Soil vapor extraction (SVE) involves the use of multiple air injection points and multiple soil vapor extraction points that can be installed in contaminated soils to extract vapor phase contaminants above the water table. Contamination must be at least deep beneath the ground surface in order for the system to be effective. A blower is attached, usually through a manifold, to wells below the water table, creating pressure. The pressurized air forms small bubbles that travel through the contamination in and above the water column. The bubbles of air volatilize contaminants and carry them to the unsaturated soils above. Vacuum points are installed in the unsaturated soils above the saturated zone. The vacuum points extract the vapors through to a soil vapor extraction system. In order for the vacuum to avoid pulling air from the surface, the ground has to be covered with a tarp or other method of sealing out surface air. Surface air intrusion into the system reduces efficiency and can reduce the accuracy of system metrics. The tarp also stops vapors from breaking through to the surface above.
The air sparging system treats the off-gases (referred to as contaminated vapors and extracted air). The vapor is treated with granulated activated carbon prior to release to the atmosphere.
Applicability
Air sparging is generally applied for commercial usage. The contaminant groups addressed by air sparging are VOCs and fuels found in groundwater. Air sparging is usually applied to the lighter gasoline constituents such as benzene, ethylbenzene, toluene, and xylene. This method is typically not applied to the heavier petroleum products such as kerosene and diesel fuels. Air sparging is commonly applied when cleaning up contaminated water under buildings and obstacles, to prevent the further contamination of that water source. The use of air sparging and SVE is safe when properly conducted: only clean air that meets a certain quality standard is released, so the method does not pose a threat when proper sampling is performed to ensure that hazardous gases do not escape into the atmosphere.
Arsenic-contaminated groundwater can be treated by air sparging to remove a certain percentage of arsenic in a solution of iron and arsenic at a molar ratio of 2. Treatment using air sparging is beneficial because groundwater contains high amounts of dissolved iron, which provides the theoretical capacity for the treatment.
References
Further reading
Hinchee, Robert E., ed. Air sparging for site remediation. Vol. 2. CRC Press, 1994.
Water treatment | Air sparging | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 770 | [
"Water treatment",
"Water pollution",
"Water technology",
"Environmental engineering"
] |
40,943,457 | https://en.wikipedia.org/wiki/Small%20molecule%20drug%20conjugate | Small molecule drug conjugates or SMDCs are built with three modules: a targeting ligand, a linker and a drug payload. The targeting ligands consist of low molecular weight, high-affinity ligands that are precisely linked to potent drugs. The linkers are designed to be stable in the bloodstream and then release the active drug from the targeting ligand when the SMDC is taken up by the diseased cell. The drug payloads are highly active molecules that are too toxic to be administered in their untargeted forms at therapeutic dose levels. This modular approach allows varying targeting ligands, linker systems and drug payloads and generate SMDCs for different diseases.
The most advanced SMDC is vintafolide, a derivative of the anti-mitotic chemotherapy drug vinblastine which is chemically linked to folic acid. Potent, bioactive natural products like triptolide, which inhibits mammalian transcription, have recently been reported as glucose conjugates for targeting hypoxic cancer cells with increased glucose transporter expression.
SMDCs are currently being developed by Endocyte for treating cancer, inflammatory diseases and kidney disease, as well as a companion imaging agent that is created by replacing the potent drug with an imaging agent.
References
Drug delivery devices | Small molecule drug conjugate | [
"Chemistry"
] | 260 | [
"Pharmacology",
"Drug delivery devices"
] |
40,943,939 | https://en.wikipedia.org/wiki/Bat%20SARS-like%20coronavirus%20WIV1 | Bat SARS-like coronavirus WIV1 (Bat SL-CoV-WIV1), also sometimes called SARS-like coronavirus WIV1, is a strain of severe acute respiratory syndrome–related coronavirus (SARSr-CoV) isolated from Chinese rufous horseshoe bats in 2013 (Rhinolophus sinicus). Like all coronaviruses, virions consist of single-stranded positive-sense RNA enclosed within an envelope.
WIV1 was named for the Wuhan Institute of Virology, where it was discovered by a researcher on Shi Zhengli's team.
Zoonosis
The discovery confirms that bats are the natural reservoir of SARS-CoV. Phylogenetic analysis shows the possibility of direct transmission of SARS from bats to humans without the intermediary Chinese civets, as previously believed.
Phylogenetic
See also
Bat as food
Bat coronavirus RaTG13
Bat virome
SARS-CoV-2
Wuhan Institute of Virology (WIV)
References
Animal virology
SARS-related coronavirus
Zoonoses
Bat virome
Infraspecific virus taxa | Bat SARS-like coronavirus WIV1 | [
"Biology"
] | 233 | [
"Virus stubs",
"Viruses"
] |
40,945,512 | https://en.wikipedia.org/wiki/Myriagram | The myriagram () is a former French and metric unit of mass equal to 10,000 grams (myriad being the Greek word for ten thousand). Although never as widely used as the kilogram, the myriagram was employed during the 19th century as a replacement for the earlier American customary system quarter, which was equal to .
In 1975, the United States, having previously authorized use of the myriagram in 1866, declared the term no longer acceptable.
See also
myria-
History of the International System of Units
References
Units of mass | Myriagram | [
"Physics",
"Mathematics"
] | 112 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
40,946,774 | https://en.wikipedia.org/wiki/Pattern%20language%20%28formal%20languages%29 | In theoretical computer science, a pattern language is a formal language that can be defined as the set of all particular instances of a string of constants and variables. Pattern Languages were introduced by Dana Angluin in the context of machine learning.
Definition
Given a finite set Σ of constant symbols and a countable set X of variable symbols disjoint from Σ, a pattern is a finite non-empty string of symbols from Σ∪X.
The length of a pattern p, denoted by |p|, is just the number of its symbols.
The set of all patterns containing exactly n distinct variables (each of which may occur several times) is denoted by Pn; the set of all patterns is denoted by P*.
A substitution is a mapping f: P* → P* such that
f is a homomorphism with respect to string concatenation (⋅), formally: ∀p,q∈P*. f(p⋅q) = f(p)⋅f(q);
f is non-erasing, formally: ∀p∈P*. f(p) ≠ ε, where ε denotes the empty string; and
f respects constants, formally: ∀s∈Σ. f(s) = s.
If p = f(q) for some patterns p, q ∈ P* and some substitution f, then p is said to be less general than q, written p≤q;
in that case, necessarily |p| ≥ |q| holds.
For a pattern p, its language is defined as the set of all less general patterns that are built from constants only, formally: L(p) = { s ∈ Σ+ : s ≤ p }, where Σ+ denotes the set of all finite non-empty strings of symbols from Σ.
For example, using the constants Σ = { 0, 1 } and the variables X = { x, y, z, ... }, the patterns 0x10xx1 ∈ P1 and xxy ∈ P2 have lengths 7 and 3, respectively.
Instances of the former pattern are 00z100z0z1 and 01z101z1z1, obtained by the substitutions that map x to 0z and to 1z, respectively, and each other symbol to itself. Both 00z100z0z1 and 01z101z1z1 are also instances of xxy. In fact, L(0x10xx1) is a subset of L(xxy). The languages of the patterns x0 and x1 are the sets of all bit strings which denote an even and an odd binary number, respectively. The language of xx is the set of all strings obtainable by concatenating a bit string with itself, e.g. 00, 11, 0101, 1010, 11101110 ∈ L(xx).
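The membership relation s ∈ L(p) can be decided by brute force, trying every non-empty binding for each variable. A minimal sketch in Python (the single-character symbol encoding and the function name are illustrative assumptions; since the general problem is NP-complete, the search may take exponential time):

```python
def matches(s, pattern, constants):
    """Decide s ∈ L(pattern): is there a non-erasing substitution sending
    each variable in `pattern` to a non-empty string yielding exactly s?
    Symbols are single characters; any character not in `constants`
    is treated as a variable.
    """
    def solve(si, pi, env):
        if pi == len(pattern):
            return si == len(s)                  # pattern used up: s must be too
        sym = pattern[pi]
        if sym in constants:                     # constants match themselves
            return si < len(s) and s[si] == sym and solve(si + 1, pi + 1, env)
        if sym in env:                           # bound variable must repeat
            w = env[sym]
            return s.startswith(w, si) and solve(si + len(w), pi + 1, env)
        for end in range(si + 1, len(s) + 1):    # try each non-empty binding
            env[sym] = s[si:end]
            if solve(end, pi + 1, env):
                return True
            del env[sym]
        return False

    return solve(0, 0, {})
```

For example, matches("0010001", "0x10xx1", {"0", "1"}) holds via x → 0, and matches("1010", "xx", {"0", "1"}) holds via x → 10, while no odd-length string belongs to L(xx).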
Properties
The problem of deciding whether s ∈ L(p) for an arbitrary string s ∈ Σ+ and pattern p is NP-complete (see picture), and hence so is the problem of deciding p ≤ q for arbitrary patterns p, q.
The class of pattern languages is not closed under ...
union: e.g. for Σ = {0,1} as above, L(01)∪L(10) is not a pattern language;
complement: Σ+ \ L(0) is not a pattern language;
intersection: L(x0y)∩L(x1y) is not a pattern language;
Kleene plus: L(0)+ is not a pattern language;
homomorphism: f(L(x)) = L(0)+ is not a pattern language, assuming f(0) = 0 = f(1);
inverse homomorphism: f−1(111) = { 01, 10, 000 } is not a pattern language, assuming f(0) = 1 and f(1) = 11.
The class of pattern languages is closed under ...
concatenation: L(p)⋅L(q) = L(p⋅q);
reversal: L(p)rev = L(prev).
If p, q ∈ P1 are patterns containing exactly one variable, then p ≤ q if and only if L(p) ⊆ L(q);
the same equivalence holds for patterns of equal length.
For patterns of different length, the above example p = 0x10xx1 and q = xxy shows that L(p) ⊆ L(q) may hold without implying p ≤ q.
However, any two patterns p and q, of arbitrary lengths, generate the same language if and only if they are equal up to consistent variable renaming.
Each pattern p is a common generalization of all strings in its generated language L(p), modulo associativity of (⋅).
Location in the Chomsky hierarchy
In a refined Chomsky hierarchy, the class of pattern languages is a proper superclass and subclass of the singleton and the indexed languages, respectively, but incomparable to the language classes in between; due to the latter, the pattern language class is not explicitly shown in the table below.
The class of pattern languages is incomparable with the class of finite languages, with the class of regular languages, and with the class of context-free languages:
the pattern language L(xx) is not context-free (hence neither regular nor finite) due to the pumping lemma;
the finite (hence also regular and context-free) language { 01, 10 } is not a pattern language.
Each singleton language is trivially a pattern language, generated by a pattern without variables.
Each pattern language can be produced by an indexed grammar:
For example, using Σ = { a, b, c } and X = { x, y },
the pattern a x b y c x a y b is generated by a grammar with nonterminal symbols N = { Sx, Sy, S } ∪ X, terminal symbols T = Σ, index symbols F = { ax, bx, cx, ay, by, cy }, start symbol Sx, and production rules that push index symbols while guessing the constant string substituted for a variable and later pop them to replay the same string at each occurrence of that variable. An example derivation expands Sx step by step so that both occurrences of x, and both occurrences of y, are rewritten to identical constant strings.
In a similar way, an indexed grammar can be constructed from any pattern.
Learning patterns
Given a sample set S of strings, a pattern p is called descriptive of S if S ⊆ L(p), but not S ⊆ L(q) ⊂ L(p) for any other pattern q.
Given any sample set S, a descriptive pattern for S can be computed by
enumerating all patterns (up to variable renaming) not longer than the shortest string in S,
selecting from them the patterns that generate a superset of S,
selecting from them the patterns of maximal length, and
selecting from them a pattern that is minimal with respect to ≤.
Based on this algorithm, the class of pattern languages can be identified in the limit from positive examples.
Notes
References
Formal languages
Theoretical computer science
Machine learning | Pattern language (formal languages) | [
"Mathematics",
"Engineering"
] | 1,458 | [
"Machine learning",
"Theoretical computer science",
"Applied mathematics",
"Formal languages",
"Mathematical logic",
"Artificial intelligence engineering"
] |
40,946,785 | https://en.wikipedia.org/wiki/Shkarofsky%20function | The Shkarofsky function is a physics formula which describes the behavior of microwaves. It is named after Canadian physicist Issie Shkarofsky (1931-2018), who first identified the function in 1966.
N.M. Temme and S.S. Sazhin later developed this idea further to give what they called the generalized Shkarofsky function.
References
Waves | Shkarofsky function | [
"Physics"
] | 80 | [
"Waves",
"Physical phenomena",
"Motion (physics)"
] |
53,891,403 | https://en.wikipedia.org/wiki/Amino%20acid%20replacement | Amino acid replacement is a change from one amino acid to a different amino acid in a protein due to point mutation in the corresponding DNA sequence. It is caused by nonsynonymous missense mutation which changes the codon sequence to code other amino acid instead of the original.
Conservative and radical replacements
Not all amino acid replacements have the same effect on function or structure of protein. The magnitude of this process may vary depending on how similar or dissimilar the replaced amino acids are, as well as on their position in the sequence or the structure. Similarity between amino acids can be calculated based on substitution matrices, physico-chemical distance, or simple properties such as amino acid size or charge (see also amino acid chemical properties). Usually amino acids are thus classified into two types:
Conservative replacement - an amino acid is exchanged for another that has similar properties. This type of replacement is expected to rarely result in dysfunction of the corresponding protein.
Radical replacement - an amino acid is exchanged for another with different properties. This can lead to changes in protein structure or function, which can potentially lead to changes in phenotype, sometimes pathogenic. A well-known example in humans is sickle cell anemia, due to a mutation in beta globin where at position 6 glutamic acid (negatively charged) is exchanged for valine (not charged).
Physicochemical distances
Physicochemical distance is a measure that assesses the difference between replaced amino acids. The value of distance is based on properties of amino acids. There are 134 physicochemical properties that can be used to estimate similarity between amino acids. Each physicochemical distance is based on different composition of properties.
Grantham's distance
Grantham's distance depends on three properties: composition, polarity and molecular volume.
Distance difference D for each pair of amino acids i and j is calculated as

D_ij = [α (c_i − c_j)² + β (p_i − p_j)² + γ (v_i − v_j)²]^(1/2)

where c = composition, p = polarity, and v = molecular volume; the weights α, β and γ are constants of squares of the inverses of the mean distance for each property, respectively equal to 1.833, 0.1018 and 0.000399. According to Grantham's distance, the most similar amino acids are leucine and isoleucine and the most distant are cysteine and tryptophan.
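The formula can be sketched in a few lines of Python; note that the property tuples in the example are made-up placeholders, not the published composition, polarity and volume values of any real amino acids.

```python
import math

# Grantham's constants (squared inverses of the mean property distances)
ALPHA, BETA, GAMMA = 1.833, 0.1018, 0.000399

def grantham_distance(aa_i, aa_j):
    """aa_i, aa_j: (composition, polarity, molecular volume) tuples.
    Returns the (unscaled) Grantham distance between the two residues."""
    (ci, pi, vi), (cj, pj, vj) = aa_i, aa_j
    return math.sqrt(ALPHA * (ci - cj) ** 2
                     + BETA * (pi - pj) ** 2
                     + GAMMA * (vi - vj) ** 2)

# Illustrative only: placeholder property tuples, not real amino acid data.
a = (1.0, 8.0, 100.0)
b = (1.4, 5.0, 140.0)
print(grantham_distance(a, a))  # identical residues have distance 0
print(grantham_distance(a, b))
```

Published Grantham distances additionally include a scaling factor chosen so that the mean distance over all amino acid pairs is 100.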
Sneath's index
Sneath's index takes into account 134 categories of activity and structure. The dissimilarity index D is a percentage value of the sum of all properties not shared between two replaced amino acids. It is expressed as D = 100(1 − S), where S is similarity.
Epstein's coefficient of difference
Epstein's coefficient of difference is based on the differences in polarity and size between replaced pairs of amino acids. This index distinguishes the direction of exchange between amino acids and is described by two equations:
when a smaller hydrophobic residue is replaced by a larger hydrophobic or polar residue
when a polar residue is exchanged, or a larger residue is replaced by a smaller one
Miyata's distance
Miyata's distance is based on two physicochemical properties: volume and polarity.
The distance between amino acids a_i and a_j is calculated as

d_ij = √( (Δp_ij / σ_p)² + (Δv_ij / σ_v)² )

where Δp_ij is the value of the polarity difference between the replaced amino acids and Δv_ij is the difference for volume; σ_p and σ_v are the standard deviations of Δp and Δv, respectively.
Experimental Exchangeability
Experimental Exchangeability was devised by Yampolsky and Stoltzfus. It is the measure of the mean effect of exchanging one amino acid into a different amino acid.
It is based on an analysis of experimental studies in which 9,671 amino acid replacements from different proteins were compared for their effect on protein activity.
Typical and idiosyncratic amino acids
Amino acids can also be classified according to how many different amino acids they can be exchanged by through single nucleotide substitution.
Typical amino acids - there are several other amino acids which they can change into through single nucleotide substitution. Typical amino acids and their alternatives usually have similar physicochemical properties. Leucine is an example of a typical amino acid.
Idiosyncratic amino acids - there are few similar amino acids that they can mutate to through single nucleotide substitution. In this case most amino acid replacements will be disruptive for protein function. Tryptophan is an example of an idiosyncratic amino acid.
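The distinction between the two classes above can be illustrated by counting, for each amino acid, the distinct amino acids its codons can reach by a single nucleotide substitution under the standard genetic code; the sketch below does this for leucine and tryptophan.

```python
from itertools import product

BASES = "TCAG"
# Standard genetic code, one letter per codon in TCAG x TCAG x TCAG order.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): AMINO[i] for i, c in enumerate(product(BASES, repeat=3))}

def reachable(aa):
    """Distinct amino acids (stops and `aa` itself excluded) reachable
    from any codon of `aa` by a single nucleotide substitution."""
    out = set()
    for codon, a in CODE.items():
        if a != aa:
            continue
        for pos in range(3):
            for b in BASES:
                mutant = CODE[codon[:pos] + b + codon[pos + 1:]]
                if mutant not in ("*", aa):
                    out.add(mutant)
    return out

print(len(reachable("L")), len(reachable("W")))
```

Leucine's six codons reach ten other amino acids, while tryptophan's single codon TGG reaches only five, matching its characterization as idiosyncratic.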
Tendency to undergo amino acid replacement
Some amino acids are more likely to be replaced than others. One of the factors that influences this tendency is physicochemical distance. An example of such a measure is Graur's stability index. Its assumption is that the amino acid replacement rate and the protein's evolution depend on the amino acid composition of the protein. The stability index S of an amino acid is calculated from the physicochemical distances between that amino acid and the alternatives it can mutate into through a single nucleotide substitution, together with the probabilities of replacement by these amino acids. Based on Grantham's distance, the most immutable amino acid is cysteine, and the most prone to undergo exchange is methionine.
Patterns of amino acid replacement
Evolution of proteins is slower than that of DNA, since only nonsynonymous mutations in DNA can result in amino acid replacements. Most mutations are neutral, maintaining protein function and structure. Therefore, the more similar two amino acids are, the more likely they are to be replaced by one another. Conservative replacements are more common than radical replacements, since they result in less important phenotypic changes. On the other hand, beneficial mutations that enhance protein function are most likely to be radical replacements. Also, physicochemical distances, which are based on amino acid properties, are negatively correlated with the probability of amino acid substitution: a smaller distance between amino acids indicates that they are more likely to undergo replacement.
References
Amino acids
Biochemistry | Amino acid replacement | [
"Chemistry",
"Biology"
] | 1,166 | [
"Amino acids",
"Biomolecules by chemical classification",
"Biochemistry",
"nan"
] |
53,894,060 | https://en.wikipedia.org/wiki/Discrete%20ordinates%20method | In the theory of radiative transfer, of either thermal or neutron radiation, a position and direction-dependent intensity function is usually sought for the description of the radiation field. The intensity field can in principle be solved from the integrodifferential radiative transfer equation (RTE), but an exact solution is usually impossible and even in the case of geometrically simple systems can contain unusual special functions such as the Chandrasekhar's H-function and Chandrasekhar's X- and Y-functions. The method of discrete ordinates, or the Sn method, is one way to approximately solve the RTE by discretizing both the xyz-domain and the angular variables that specify the direction of radiation. The methods were developed by Subrahmanyan Chandrasekhar when he was working on radiative transfer.
Radiative Transfer Equation
In the case of time-independent monochromatic radiation in an elastically scattering medium, the RTE takes the form

Ω̂ · ∇I(r, Ω̂) = j(r) − κ I(r, Ω̂) + (σ_s / 4π) ∫_{4π} Φ(Ω̂′, Ω̂) I(r, Ω̂′) dΩ̂′

where the first term on the RHS is the contribution of emission, the second term the contribution of absorption and the last term the contribution from scattering in the medium. The variable Ω̂ is a unit vector that specifies the direction of radiation and Ω̂′ is a dummy integration variable for the calculation of scattering from direction Ω̂′ to direction Ω̂.
Angular Discretization
In the discrete ordinates method, the full solid angle of 4π is divided into some number of discrete angular intervals, and the continuous direction variable Ω̂ is replaced by a discrete set of direction vectors Ω̂_i. The scattering integral in the RTE, which makes the solution problematic, then becomes a sum

∫_{4π} Φ(Ω̂′, Ω̂_i) I(r, Ω̂′) dΩ̂′ ≈ Σ_j w_j Φ(Ω̂_j, Ω̂_i) I(r, Ω̂_j)

where the numbers w_j are weighting coefficients for the different direction vectors. With this the RTE becomes a linear system of equations for a multi-index object, the number of indices depending on the dimensionality and symmetry properties of the problem.
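In one-dimensional slab geometry the direction variable reduces to the cosine mu in [−1, 1], and a common choice of ordinates and weights is Gauss–Legendre quadrature. The sketch below (an illustration, not tied to any particular solver) shows such a weighted sum reproducing an integral over direction space.

```python
import numpy as np

# 1-D slab-geometry sketch: replace the integral over direction cosines
# mu in [-1, 1] by a Gauss-Legendre quadrature sum (the S_N ordinates).
N = 8
mu, w = np.polynomial.legendre.leggauss(N)  # ordinates and weights

# The weights reproduce integrals of smooth direction-dependent
# intensities, e.g. the integral of (1 + mu^2) over [-1, 1] equals 8/3.
approx = np.sum(w * (1.0 + mu**2))
print(approx)
```

An N-point Gauss–Legendre rule integrates polynomials up to degree 2N − 1 exactly, so low-order angular moments of the intensity are captured with few ordinates.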
Solution
It is possible to solve the resulting linear system directly with Gauss–Jordan elimination, but this is problematic due to the large memory requirement for storing the matrix of the linear system. Another way is to use iterative methods, where the required number of iterations for a given degree of accuracy depends on the strength of scattering.
Applications
The discrete ordinates method, or some variation of it, is applied for solving radiation intensities in several physics and engineering simulation programs, such as COMSOL Multiphysics or the Fire Dynamics Simulator.
See also
Radiative transfer
Thermal radiation
Neutron radiation
Bickley-Naylor functions
References
Radiometry
Electromagnetic radiation | Discrete ordinates method | [
"Physics",
"Engineering"
] | 499 | [
"Physical phenomena",
"Telecommunications engineering",
"Electromagnetic radiation",
"Radiation",
"Radiometry"
] |
53,895,456 | https://en.wikipedia.org/wiki/Cell-based%20models | Cell-based models are mathematical models that represent biological cells as discrete entities. Within the field of computational biology they are often simply called agent-based models of which they are a specific application and they are used for simulating the biomechanics of multicellular structures such as tissues. to study the influence of these behaviors on how tissues are organised in time and space. Their main advantage is the easy integration of cell level processes such as cell division, intracellular processes and single-cell variability within a cell population.
Continuum-based (PDE-based) models have also been developed, in particular for cardiomyocytes and neurons. These represent the cells through explicit geometries and take into account spatial distributions of both intracellular and extracellular processes. Depending on the research question and area, they capture from a few to many thousands of cells. In particular, the framework for electrophysiological models of cardiac cells is well developed and has been made highly efficient using high-performance computing.
Model types
Cell-based models can be divided into on- and off-lattice models.
On-lattice
On-lattice models such as cellular automata or the cellular Potts model restrict the spatial arrangement of the cells to a fixed grid. The mechanical interactions are then carried out according to literature-based rules (cellular automata) or by minimizing the total energy of the system (cellular Potts), resulting in cells being displaced from one grid point to another.
Off-lattice
Off-lattice models allow for continuous movement of cells in space and evolve the system in time according to force laws governing the mechanical interactions between the individual cells. Examples of off-lattice models are center-based models, vertex-based models, models
based on the immersed boundary method and the subcellular element
method. They differ mainly in the level of detail with which they represent the
cell shape. As a consequence they vary in their ability to capture different biological mechanisms, the effort needed to extend them from two- to three-dimensional models and also in their computational cost.
The simplest off-lattice model, the center-based model, depicts cells as spheres and models their mechanical interactions using pairwise potentials. It is easily extended to a large number of cells in both 2D and 3D.
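A minimal illustration of a center-based model, with assumed parameter values, overdamped dynamics and a linear spring standing in for a more realistic pairwise potential, could look like this:

```python
import numpy as np

# Center-based sketch: two cells on a line, overdamped dynamics,
# linear spring force with rest length s (the cell diameter).
s, k, dt = 1.0, 5.0, 0.01           # rest length, stiffness, time step
x = np.array([0.0, 0.4])            # initial centre positions (overlapping)

for _ in range(2000):
    d = x[1] - x[0]
    f = k * (abs(d) - s) * np.sign(d)  # negative while cells overlap
    x[0] += dt * f                     # overdamped: velocity proportional to force
    x[1] -= dt * f

print(round(x[1] - x[0], 3))  # cells relax to the rest separation of ~1.0
```

Overlapping cells push each other apart until the spring reaches its rest length; real center-based codes use the same update with many cells, 2D/3D positions and more elaborate potentials.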
Vertex
Vertex-based models are a subset of off-lattice models. They track the cell membrane as a set of polygonal points and update the position of each vertex according to tensions in the cell membrane resulting from cell-cell adhesion forces and cell elasticity. They are more difficult to implement and also more costly to run.
As cells move past one another during a simulation, regular updates of the polygonal edge connections are necessary.
Applications
Since they account for individual behavior at the cell level such as cell proliferation, cell migration or apoptosis, cell-based models are a useful tool to study the influence of these behaviors on how tissues are organised in time and space.
Due in part to the increase in computational power, they have arisen as an alternative to continuum mechanics models which treat tissues as viscoelastic materials by averaging over single cells.
Cell-based mechanics models are often coupled to models describing intracellular dynamics, such as an ODE representation of a relevant gene regulatory network. It is also common to connect them to a PDE describing the diffusion of a chemical signaling molecule through the extracellular matrix, in order to account for cell-cell communication. As such, cell-based models have been used to study processes ranging from embryogenesis over epithelial morphogenesis to tumour growth and intestinal crypt dynamics
Simulation frameworks
There exist several software packages implementing cell-based models.
References
Cells
Simulation software
Numerical analysis
Biophysics
Computational biology
Tissues (biology) | Cell-based models | [
"Physics",
"Mathematics",
"Biology"
] | 755 | [
"Applied and interdisciplinary physics",
"Computational biology",
"Computational mathematics",
"Mathematical relations",
"Biophysics",
"Numerical analysis",
"Approximations"
] |
53,902,845 | https://en.wikipedia.org/wiki/Graphene%20plasmonics | Graphene is a 2D nanosheet with atomic thin thickness in terms of 0.34 nm. Due to the ultrathin thickness, graphene showed many properties that are quite different from their bulk graphite counterparts. The most prominent advantages are known to be their high electron mobility and high mechanical strengths.
Thus, it exhibits potential for applications in optics and electronics especially for the development of wearable devices as flexible substrates. More importantly, the optical absorption rate of graphene is 2.3% in the visible and near-infrared region. This broadband absorption characteristic also attracted great attention of the research community to exploit the graphene-based photodetectors/modulators.
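The 2.3% figure is the well-known pi-alpha absorption of an ideal graphene monolayer, where alpha is the fine-structure constant; a quick numerical check:

```python
import math

# Single-layer graphene absorbs a fraction pi * alpha of incident light,
# with alpha the fine-structure constant (~1/137).
alpha = 7.2973525693e-3
absorption = math.pi * alpha
print(round(100 * absorption, 1))  # percent absorbed, ~2.3
```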
Plasmons are collective electron oscillations, usually excited at metal surfaces by a light source. Doped graphene layers have shown surface plasmon effects similar to those of metallic thin films. Through the engineering of metallic substrates or nanoparticles (e.g., gold, silver and copper) with graphene, the plasmonic properties of the hybrid structures can be tuned to improve optoelectronic device performance. Electrons at the metallic structure can transfer to the graphene conduction band, which is attributed to the zero-bandgap property of the graphene nanosheet.
Graphene plasmons can also be decoupled from their environment and give rise to genuine Dirac plasmons in the low-energy range, where the wavelengths exceed the damping length. These graphene plasma resonances have been observed in the GHz–THz electronic domain.
Graphene plasmonics is an emerging research field that is attracting considerable interest and has already resulted in a textbook.
Application
When the plasmons are resonant at the graphene/metal surface, a strong electric field is induced which enhances the generation of electron–hole pairs in the graphene layer. The number of excited charge carriers increases linearly with the field intensity, in accordance with Fermi's golden rule. The induced charge carriers of a metal/graphene hybrid nanostructure can be up to 7 times more numerous than those of pristine graphene, due to the plasmonic enhancement.
So far, graphene plasmonic effects have been demonstrated for applications ranging from light modulation to biological and chemical sensing. High-speed photodetection at 10 Gbit/s based on graphene, and a 20-fold improvement in detection efficiency through a graphene/gold nanostructure, have also been reported. Graphene plasmons are considered good alternatives to noble-metal plasmons, not only because of their cost-effectiveness for large-scale production but also because of the stronger confinement of the plasmons at the graphene surface. The enhanced light–matter interactions can further be optimized and tuned through electrostatic gating. These advantages of graphene plasmonics pave the way toward single-molecule detection and single-plasmon excitation.
See also
Surface plasmon polariton
Nanomaterial
References
Graphene
Plasmonics | Graphene plasmonics | [
"Physics",
"Chemistry",
"Materials_science"
] | 629 | [
"Plasmonics",
"Surface science",
"Condensed matter physics",
"Nanotechnology",
"Solid state engineering"
] |
53,903,005 | https://en.wikipedia.org/wiki/Metallurgical%20and%20Materials%20Engineering | Metallurgical and Materials Engineering is a peer-reviewed Open Access scientific journal, published by the Association of Metallurgical Engineers of Serbia. The first name of the journal was Metalurgija, published in 1995. The new name was adopted in 2012. The journal publishes contributions on fundamental and engineering aspects in the area of metallurgy and materials.
The journal publishes full length research papers, preliminary communications, reviews, and technical papers.
References
External links
English-language journals
Open access journals
Materials science journals
Academic journals established in 1995
Academic journals of Serbia | Metallurgical and Materials Engineering | [
"Materials_science",
"Engineering"
] | 113 | [
"Materials science stubs",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
52,518,686 | https://en.wikipedia.org/wiki/Beth%20Orcutt | Beth N. Orcutt is an American oceanographer whose research focuses on the microbial life of the ocean floor. As of 2012, she is a senior research scientist at the Bigelow Laboratory for Ocean Sciences. She is also a senior scientist of the Center for Dark Energy Biosphere Investigations, a Science and Technology Center funded by the National Science Foundation and headquartered at the University of Southern California and part of the Deep Carbon Observatory Deep Life Community. Orcutt has made fundamental contributions to the study of life below the seafloor, particularly in oceanic crust and has worked with the International Scientific Ocean Drilling Program.
Academic background and career
Orcutt attended the University of Georgia, obtaining a BS degree in 2002 and a PhD in marine sciences in 2007, supervised by Samantha Joye. During her graduate studies she collaborated extensively with Antje Boetius at the Max Planck Institute for Marine Microbiology and Kai-Uwe Hinrichs at the University of Bremen, both in Bremen, Germany. She held postdoctoral positions at the University of Southern California (2007–2009) under Katrina Edwards and at the Aarhus University in Denmark (2009–2012) under Bo Barker Jørgensen. She joined the Bigelow Laboratory for Ocean Sciences in 2012. She has also been an adjunct assistant professor at the University of Southern California since 2009.
Research activities
Orcutt’s research involves deep-sea exploration. Orcutt has traveled to the ocean’s seafloor several times aboard the submersibles Alvin and Johnson Sea Link. In 2015, she co-led an IODP scientific drilling Expedition 357 called “Atlantis Massif Serpentinization and Life” to explore life below the seafloor at the Atlantis Massif which hosts the Lost City hydrothermal field. This expedition was coordinated by ECORD and co-led with Gretchen Früh-Green of ETH Zurich. This expedition successfully used deep-sea drilling to collect rock samples from the mantle of the Atlantis Massif of the Mid-Atlantic Ridge, and showed that they contain hydrogen and methane. Orcutt’s research was featured in the documentary “North Pond: The Search for Intraterrestrials” which won “Best Documentary Feature Film” at the 2014 Yosemite International Film Festival and “Honorable Mention” at the 2014 Blue Ocean Film Festival.
References
External links
Center for Dark Energy Biosphere Investigations
Living people
University of Georgia alumni
Biogeochemists
American women marine biologists
American marine biologists
Year of birth missing (living people) | Beth Orcutt | [
"Chemistry"
] | 505 | [
"Geochemists",
"Biogeochemistry",
"Biogeochemists"
] |
57,257,634 | https://en.wikipedia.org/wiki/WireGuard | WireGuard is a communication protocol and free and open-source software that implements encrypted virtual private networks (VPNs). It aims to be lighter and better performing than IPsec and OpenVPN, two common tunneling protocols. The WireGuard protocol passes traffic over UDP.
In March 2020, the Linux version of the software reached a stable production release and was incorporated into the Linux 5.6 kernel, and backported to earlier Linux kernels in some Linux distributions. The Linux kernel components are licensed under the GNU General Public License (GPL) version 2; other implementations are under GPLv2 or other free/open-source licenses.
Protocol
The WireGuard protocol is a variant of the Noise Protocol Framework's IK handshake pattern, as illustrated by the choice of Noise_IKpsk2_25519_ChaChaPoly_BLAKE2s as the value of the Construction string, listed on page 10 of the whitepaper.
WireGuard uses the following:
Curve25519 for key exchange
ChaCha20 for symmetric encryption
Poly1305 for message authentication codes
SipHash24 for hashtable keys
BLAKE2s as the cryptographic hash function
HKDF as the key derivation function
UDP-based only
Base64-encoded private keys, public keys and preshared keys
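The key format in the last item can be sketched with the Python standard library alone; the clamping step shown is the standard Curve25519 convention for private scalars, which WireGuard follows.

```python
import base64
import os

# Sketch: a WireGuard private key is 32 random bytes, clamped per the
# Curve25519 convention, and exchanged base64-encoded (44 characters).
key = bytearray(os.urandom(32))
key[0] &= 248                     # clear the low 3 bits of the scalar
key[31] = (key[31] & 127) | 64    # clear the top bit, set the next bit

encoded = base64.standard_b64encode(bytes(key)).decode()
print(len(encoded))  # 44
```

The 44-character length follows directly from base64: 32 bytes encode to ceil(32/3) * 4 = 44 characters, including one padding "=".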
In May 2019, researchers from INRIA published a machine-checked proof of the WireGuard protocol, produced using the CryptoVerif proof assistant.
Optional pre-shared symmetric key mode
WireGuard supports pre-shared symmetric key mode, which provides an additional layer of symmetric encryption to mitigate future advances in quantum computing. This addresses the risk that traffic may be stored until quantum computers are capable of breaking Curve25519, at which point traffic could be decrypted. Pre-shared keys are "usually troublesome from a key management perspective and might be more likely stolen", but in the shorter term, if the symmetric key is compromised, the Curve25519 keys still provide more than sufficient protection.
Networking
WireGuard uses only UDP, due to the potential disadvantages of TCP-over-TCP. Tunneling TCP over a TCP-based connection is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance due to the TCP meltdown problem.
Its default server port is UDP 51820.
WireGuard fully supports IPv6, both inside and outside of the tunnel. It supports only layer 3, for both IPv4 and IPv6, and can encapsulate v4-in-v6 and vice versa.
MTU overhead
The overhead of WireGuard breaks down as follows:
20-byte IPv4 header or 40 bytes IPv6 header
8-byte UDP header
4-byte type
4-byte key index
8-byte nonce
N-byte encrypted data
16-byte authentication tag
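Summing the fields above gives the fixed per-packet cost; a quick sketch confirming the 1440- and 1420-byte tunnel MTUs discussed in the next subsection:

```python
# WireGuard per-packet overhead from the fields listed above,
# and the resulting tunnel MTU over a 1500-byte underlay.
UDP_HEADER = 8
WG_FIELDS = 4 + 4 + 8 + 16  # type + key index + nonce + auth tag

for name, ip_header in (("IPv4", 20), ("IPv6", 40)):
    overhead = ip_header + UDP_HEADER + WG_FIELDS
    print(name, overhead, 1500 - overhead)
# IPv4: 60 bytes of overhead -> 1440-byte tunnel MTU
# IPv6: 80 bytes of overhead -> 1420-byte tunnel MTU
```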
MTU operational considerations
Assuming the underlay network transporting the WireGuard packets maintains a 1500 bytes MTU, configuring the WireGuard interface to 1420 bytes MTU for all involved peers is ideal for transporting IPv6 + IPv4 traffic. However, when exclusively carrying legacy IPv4 traffic, a higher MTU of 1440 bytes for the WireGuard interface suffices.
From an operational perspective and for network configuration uniformity, choosing to configure a 1420 MTU network-wide for the WireGuard interfaces would be advantageous. This approach ensures consistency and facilitates a smoother transition to enabling IPv6 for the WireGuard peers and interfaces in the future.
Caveat
There may be situations where, for instance, one peer is behind a network with a 1500-byte MTU and a second peer is behind a wireless network such as LTE, where the carrier has often opted for an MTU far lower than 1420 bytes. In such cases the underlying IP networking stack of the host will fragment the UDP-encapsulated packets and send them through; the packets inside the tunnel, however, remain consistent and are not required to fragment, as PMTUD will detect the MTU between the peers (in this example, 1420 bytes) and a fixed packet size is sent between the peers.
Extensibility
WireGuard is designed to be extended by third-party programs and scripts. This has been used to augment WireGuard with various features including more user-friendly management interfaces (including easier setting up of keys), logging, dynamic firewall updates, dynamic IP assignment, and LDAP integration.
Excluding such complex features from the minimal core codebase improves its stability and security. For ensuring security, WireGuard restricts the options for implementing cryptographic controls, limits the choices for key exchange processes, and maps algorithms to a small subset of modern cryptographic primitives. If a flaw is found in any of the primitives, a new version can be released that resolves the issue.
Reception
A review by Ars Technica found that WireGuard was easy to set up and use, used strong ciphers, and had a minimal codebase that provided for a small attack surface.
WireGuard has received funding from the Open Technology Fund and donations from Jump Trading, Mullvad, Tailscale, Fly.io, and the NLnet Foundation.
Oregon senator Ron Wyden has recommended to the National Institute of Standards and Technology (NIST) that they evaluate WireGuard as a replacement for existing technologies.
Availability
Implementations
Implementations of the WireGuard protocol include:
Donenfeld's initial implementation, written in C and Go.
Cloudflare's BoringTun, a user space implementation written in Rust.
Matt Dunwoodie's implementation for OpenBSD, written in C.
Ryota Ozaki's wg(4) implementation for NetBSD, written in C.
The FreeBSD implementation is written in C and shares most of the data path with the OpenBSD implementation.
Native Windows kernel implementation named "wireguard-nt", since August 2021.
AVM Fritz!Box modem-routers that support Fritz!OS version 7.39 and later. Permits site-to-site WireGuard connections from version 7.50 onwards.
Vector Packet Processing user space implementation written in C.
History
Early snapshots of the code base exist from 30 June 2016. The logo is inspired by a stone engraving of the mythological Python that Jason Donenfeld saw while visiting a museum in Delphi.
On 9 December 2019, David Miller – primary maintainer of the Linux networking stack – accepted the WireGuard patches into the "net-next" maintainer tree, for inclusion in an upcoming kernel.
On 28 January 2020, Linus Torvalds merged David Miller's net-next tree, and WireGuard entered the mainline Linux kernel tree.
On 20 March 2020, Debian developers enabled the module build options for WireGuard in their kernel config for the Debian 11 version (testing).
On 29 March 2020 WireGuard was incorporated into the Linux 5.6 release tree. The Windows version of the software remains at beta.
On 30 March 2020, Android developers added native kernel support for WireGuard in their Generic Kernel Image.
On 22 April 2020, NetworkManager developer Beniamino Galvani merged GUI support for WireGuard in GNOME.
On 12 May 2020, Matt Dunwoodie proposed patches for native kernel support of WireGuard in OpenBSD.
On 22 June 2020, after the work of Matt Dunwoodie and Jason A. Donenfeld, WireGuard support was imported into OpenBSD.
On 23 November 2020, Jason A. Donenfeld released an update of the Windows package improving installation, stability, ARM support, and enterprise features.
On 29 November 2020, WireGuard support was imported into the FreeBSD 13 kernel.
On 19 January 2021, WireGuard support was added for preview in pfSense Community Edition (CE) 2.5.0 development snapshots.
In March 2021, kernel-mode WireGuard support was removed from FreeBSD 13.0, still in testing, after an urgent code cleanup in FreeBSD WireGuard could not be completed quickly. FreeBSD-based pfSense Community Edition (CE) 2.5.0 and pfSense Plus 21.02 removed kernel-based WireGuard as well.
In May 2021, WireGuard support was re-introduced back into pfSense CE and pfSense Plus development snapshots as an experimental package written by a member of the pfSense community, Christian McDonald. The WireGuard package for pfSense incorporates the ongoing kernel-mode WireGuard development work by Jason A. Donenfeld that was originally sponsored by Netgate.
In June 2021, the official package repositories for both pfSense CE 2.5.2 and pfSense Plus 21.05 included the WireGuard package.
In 2023, WireGuard got over 200,000 Euros support from Germany's Sovereign Tech Fund.
See also
Comparison of virtual private network services
Secure Shell (SSH), a cryptographic network protocol used to secure services over an unsecured network.
Notes
References
Free security software
Linux network-related software
Tunneling protocols
Virtual private networks | WireGuard | [
"Engineering"
] | 1,884 | [
"Computer networks engineering",
"Tunneling protocols"
] |