id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
40,776,948 | https://en.wikipedia.org/wiki/Derived%20scheme | In algebraic geometry, a derived scheme is a homotopy-theoretic generalization of a scheme in which classical commutative rings are replaced with derived versions such as differential graded algebras, commutative simplicial rings, or commutative ring spectra.
From the functor of points point-of-view, a derived scheme is a sheaf X on the category of simplicial commutative rings which admits an open affine covering.
From the locally ringed space point-of-view, a derived scheme is a pair consisting of a topological space X and a sheaf O either of simplicial commutative rings or of commutative ring spectra on X such that (1) the pair (X, π₀O) is a scheme and (2) πₖO is a quasi-coherent π₀O-module.
A derived stack is a stacky generalization of a derived scheme.
Differential graded scheme
Over a field of characteristic zero, the theory is closely related to that of a differential graded scheme. By definition, a differential graded scheme is obtained by gluing affine differential graded schemes with respect to the étale topology. It was introduced by Maxim Kontsevich "as the first approach to derived algebraic geometry" and was developed further by Mikhail Kapranov and Ionut Ciocan-Fontanine.
Connection with differential graded rings and examples
Just as affine algebraic geometry is equivalent (in the categorical sense) to the theory of commutative rings (commonly called commutative algebra), affine derived algebraic geometry over characteristic zero is equivalent to the theory of commutative differential graded rings. One of the main examples of derived schemes comes from the derived intersection of subschemes of a scheme, giving the Koszul complex. For example, let , then we can get a derived scheme
where
is the étale spectrum. Since we can construct a resolution
the derived ring , a derived tensor product, is the Koszul complex . The truncation of this derived scheme to amplitude provides a classical model motivating derived algebraic geometry. Notice that if we have a projective scheme
where we can construct the derived scheme where
with amplitude
Cotangent complex
Construction
Let be a fixed differential graded algebra defined over a field of characteristic 0. Then a -differential graded algebra is called semi-free if the following conditions hold:
The underlying graded algebra is a polynomial algebra over , meaning it is isomorphic to
There exists a filtration on the indexing set where and for any .
It turns out that every differential graded algebra admits a surjective quasi-isomorphism from a semi-free differential graded algebra, called a semi-free resolution. These are unique up to homotopy equivalence in a suitable model category. The (relative) cotangent complex of an -differential graded algebra can be constructed using a semi-free resolution : it is defined as
Many examples can be constructed by taking the algebra representing a variety over a field of characteristic 0, finding a presentation of as a quotient of a polynomial algebra and taking the Koszul complex associated to this presentation. The Koszul complex acts as a semi-free resolution of the differential graded algebra where is the graded algebra with the non-trivial graded piece in degree 0.
Examples
The cotangent complex of a hypersurface can easily be computed: since we have the dga representing the derived enhancement of , we can compute the cotangent complex as
where and is the usual universal derivation. If we take a complete intersection, then the Koszul complex
is quasi-isomorphic to the complex
This implies we can construct the cotangent complex of the derived ring as the tensor product of the cotangent complex above for each .
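To make the shape of this computation concrete, here is a standard presentation of the hypersurface case, written in notation introduced only for this sketch (the symbols R, k, and f are assumptions, since the original formulas did not survive extraction): for R = k[x₁,…,xₙ]/(f), the cotangent complex is the two-term complex

```latex
% Cotangent complex of the hypersurface R = k[x_1,...,x_n]/(f),
% concentrated in degrees [-1, 0]; the generator recording the
% relation f maps to its universal derivative df:
\mathbb{L}_{R/k} \simeq \left[ R\,\langle f\rangle
  \xrightarrow{\;1 \mapsto df\;}
  \Omega^1_{k[x_1,\dots,x_n]/k} \otimes_{k[x_1,\dots,x_n]} R \right]
```

For a complete intersection (f₁,…,f_c), one obtains a direct sum of such degree -1 generators mapping to df₁,…,df_c, which matches the tensor-product description above.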
Remarks
Note that the cotangent complex in the context of derived geometry differs from the cotangent complex of classical schemes. Namely, if there were a singularity in the hypersurface defined by , then the classical cotangent complex would have infinite amplitude. These observations provide motivation for the hidden smoothness philosophy of derived geometry, since we are now working with a complex of finite length.
Tangent complexes
Polynomial functions
Given a polynomial function then consider the (homotopy) pullback diagram
where the bottom arrow is the inclusion of a point at the origin. Then, the tangent complex of the derived scheme at a point is given by the morphism
where the complex is of amplitude . Notice that the tangent space can be recovered using , and measures how far away is from being a smooth point.
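A hedged reconstruction of this tangent complex, in notation chosen for this sketch (f, n, and p are assumptions): for a polynomial map f: 𝔸ⁿ → 𝔸¹ and its derived zero locus Z, the tangent complex at a point p of Z is

```latex
% Tangent complex of the derived fiber Z = f^{-1}(0) at p,
% in amplitude [0,1]:
\mathbb{T}_{Z,p} \simeq \left[ T_p\mathbb{A}^n
  \xrightarrow{\;df_p\;} T_{f(p)}\mathbb{A}^1 \right]
% H^0 recovers the Zariski tangent space of Z at p, while H^1
% measures the failure of smoothness at p.
```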
Stack quotients
Given a stack there is a nice description for the tangent complex:
If the morphism is not injective, again measures how singular the space is. In addition, the Euler characteristic of this complex yields the correct (virtual) dimension of the quotient stack.
In particular, if we look at the moduli stack of principal -bundles, then the tangent complex is just .
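A commonly quoted presentation, under the assumption that the stack is a quotient [X/G] of a smooth scheme X by a smooth group G with Lie algebra 𝔤 (notation introduced here for illustration):

```latex
% Tangent complex of [X/G] at a point x, in amplitude [-1,0]:
\mathbb{T}_{[X/G],x} \simeq \left[ \mathfrak{g} \otimes \mathcal{O}_X
  \xrightarrow{\;a\;} T_X \right]
% a is the infinitesimal action map; its kernel in degree -1
% records stabilizers. For X = \mathrm{pt} this reduces to
% \mathbb{T}_{BG} \simeq \mathfrak{g}[1].
```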
Derived schemes in complex Morse theory
Derived schemes can be used for analyzing topological properties of affine varieties. For example, consider a smooth affine variety . If we take a regular function and consider the section of
then we can take the derived pullback diagram
where is the zero section, constructing the derived critical locus of the regular function .
Example
Consider the affine variety
and the regular function given by . Then,
where we treat the last two coordinates as . The derived critical locus is then the derived scheme
Note that since the left term in the derived intersection is a complete intersection, we can compute a complex representing the derived ring as
where is the Koszul complex.
Derived critical locus
Consider a smooth function where is smooth. The derived enhancement of , the derived critical locus, is given by the differential graded scheme where the underlying graded ring is given by the polyvector fields
and the differential is defined by contraction by .
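Written out under the stated assumptions (X smooth with tangent sheaf T_X; the notation is chosen for this sketch, since the displayed formulas did not survive extraction), the underlying complex is

```latex
% Polyvector fields on X with differential given by contraction
% with df, placed in non-positive degrees:
\cdots \longrightarrow \wedge^2 T_X
  \xrightarrow{\;\iota_{df}\;} T_X
  \xrightarrow{\;\iota_{df}\;} \mathcal{O}_X
% H^0 of this complex is O_X modulo the partial derivatives of f,
% i.e. the ring of functions on the classical critical locus.
```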
Example
For example, if
we have the complex
representing the derived enhancement of .
Notes
References
Reaching Derived Algebraic Geometry - Mathoverflow
M. Anel, The Geometry of Ambiguity
K. Behrend, On the Virtual Fundamental Class
P. Goerss, Topological Modular Forms [after Hopkins, Miller, and Lurie]
B. Toën, Introduction to derived algebraic geometry
M. Manetti, The cotangent complex in characteristic 0
G. Vezzosi, The derived critical locus I - basics
Algebraic geometry
Topology | Derived scheme | [
"Physics",
"Mathematics"
] | 1,274 | [
"Fields of abstract algebra",
"Topology",
"Space",
"Geometry",
"Algebraic geometry",
"Spacetime"
] |
40,778,281 | https://en.wikipedia.org/wiki/Derived%20algebraic%20geometry | Derived algebraic geometry is a branch of mathematics that generalizes algebraic geometry to a situation where commutative rings, which provide local charts, are replaced by either differential graded algebras (over ℚ), simplicial commutative rings or E∞-ring spectra from algebraic topology, whose higher homotopy groups account for the non-discreteness (e.g., Tor) of the structure sheaf. Grothendieck's scheme theory allows the structure sheaf to carry nilpotent elements. Derived algebraic geometry can be thought of as an extension of this idea, and provides natural settings for intersection theory (or motivic homotopy theory) of singular algebraic varieties and cotangent complexes in deformation theory (cf. J. Francis), among other applications.
Introduction
Basic objects of study in the field are derived schemes and derived stacks. The oft-cited motivation is Serre's intersection formula. In the usual formulation, the formula involves the Tor functor and thus, unless the higher Tor vanish, the scheme-theoretic intersection (i.e., fiber product of immersions) does not yield the correct intersection number. In the derived context, one takes the derived tensor product , whose higher homotopy is higher Tor, and whose Spec is not a scheme but a derived scheme. Hence, the "derived" fiber product yields the correct intersection number. (Currently this is hypothetical; the derived intersection theory has yet to be developed.)
The term "derived" is used in the same way as derived functor or derived category, in the sense that the category of commutative rings is being replaced with a ∞-category of "derived rings." In classical algebraic geometry, the derived category of quasi-coherent sheaves is viewed as a triangulated category, but it has natural enhancement to a stable ∞-category, which can be thought of as the ∞-categorical analogue of an abelian category.
Definitions
Derived algebraic geometry is fundamentally the study of geometric objects using homological algebra and homotopy. Since objects in this field should encode the homological and homotopy information, there are various notions of what derived spaces encapsulate. The basic objects of study in derived algebraic geometry are derived schemes, and more generally, derived stacks. Heuristically, derived schemes should be functors from some category of derived rings to the category of sets
which can be generalized further to have targets of higher groupoids (which are expected to be modelled by homotopy types). These derived stacks are suitable functors of the form
Many authors model such functors as functors with values in simplicial sets, since they model homotopy types and are well-studied. Differing definitions on these derived spaces depend on a choice of what the derived rings are, and what the homotopy types should look like. Some examples of derived rings include commutative differential graded algebras, simplicial rings, and E∞-rings.
Derived geometry over characteristic 0
Over characteristic 0 many of the derived geometries agree since the derived rings are the same: E∞-algebras are just commutative differential graded algebras over characteristic zero. We can then define derived schemes similarly to schemes in algebraic geometry. Similar to algebraic geometry, we could also view these objects as a pair which is a topological space with a sheaf of commutative differential graded algebras. Sometimes authors take the convention that these are negatively graded, so for . The sheaf condition could also be weakened so that for a cover of , the sheaves would glue on overlaps only by quasi-isomorphism.
Unfortunately, over characteristic p, differential graded algebras work poorly for homotopy theory, due to the fact . This can be overcome by using simplicial algebras.
Derived geometry over arbitrary characteristic
Derived rings over arbitrary characteristic are taken as simplicial commutative rings because of the nice categorical properties these have. In particular, the category of simplicial rings is simplicially enriched, meaning the hom-sets are themselves simplicial sets. Also, there is a canonical model structure on simplicial commutative rings coming from simplicial sets. In fact, it is a theorem of Quillen's that the model structure on simplicial sets can be transferred over to simplicial commutative rings.
Higher stacks
It is conjectured there is a final theory of higher stacks which model homotopy types. Grothendieck conjectured these would be modelled by globular groupoids, or a weak form of their definition. Simpson gives a useful definition in the spirit of Grothendieck's ideas. Recall that an algebraic stack (here a 1-stack) is called representable if the fiber product of any two schemes is isomorphic to a scheme. If we take the ansatz that a 0-stack is just an algebraic space and a 1-stack is just a stack, we can recursively define an n-stack as an object such that the fiber product along any two schemes is an (n-1)-stack. If we go back to the definition of an algebraic stack, this new definition agrees with it.
Spectral schemes
Another theory of derived algebraic geometry is encapsulated by the theory of spectral schemes. Their definition requires a fair amount of technology in order to state precisely. But, in short, spectral schemes are given by a spectrally ringed ∞-topos together with a sheaf of E∞-rings on it subject to some locality conditions similar to the definition of affine schemes. In particular
must be equivalent to the ∞-topos of some topological space
There must exist a cover of such that the induced topos is equivalent to a spectrally ringed topos for some E∞-ring
Moreover, the spectral scheme is called connective if πᵢ = 0 for i < 0.
Examples
Recall that the topos of a point is equivalent to the category of sets. Then, in the ∞-topos setting, we instead consider ∞-sheaves of ∞-groupoids (which are ∞-categories with all morphisms invertible), denoted , giving an analogue of the point topos in the ∞-topos setting. Then, the structure of a spectrally ringed space can be given by attaching an E∞-ring . Notice this implies that spectrally ringed spaces generalize E∞-rings since every E∞-ring can be associated with a spectrally ringed site.
This spectrally ringed topos can be a spectral scheme if the spectrum of this ring gives an equivalent ∞-topos, so its underlying space is a point. For example, this can be given by the ring spectrum , called the Eilenberg–MacLane spectrum, constructed from the Eilenberg–MacLane spaces .
Applications
Derived algebraic geometry was used by Kerz, Strunk and Tamme (2018) to prove Weibel's conjecture on the vanishing of negative K-theory.
The formulation of the Geometric Langlands conjecture by Arinkin and Gaitsgory uses derived algebraic geometry.
See also
Derived scheme
Pursuing Stacks
Noncommutative algebraic geometry
Simplicial commutative ring
Derivator
Algebra over an operad
En-ring
Higher Topos Theory
∞-topos
étale spectrum
Notes
References
Simplicial DAG
Differential graded DAG
En and E∞-rings
Spectral algebraic geometry - Rezk
Operads and Sheaf Cohomology - JP May - E∞-rings over characteristic 0 and E∞-structure for sheaf cohomology
Tangent complex and Hochschild cohomology of En-rings https://arxiv.org/abs/1104.0181
Francis, John; Derived Algebraic Geometry Over En-Rings
Applications
Lowrey, Parker; Schürg, Timo. (2018). Grothendieck-Riemann-Roch for Derived Schemes
Ciocan-Fontanine, I., Kapranov, M. (2007). Virtual fundamental classes via dg-manifolds
Mann, E., Robalo M. (2018). Gromov-Witten theory with derived algebraic geometry
Ben-Zvi, D., Francis, J., and D. Nadler. Integral Transforms and Drinfeld Centers in Derived Algebraic Geometry.
Quantum Field Theories
Notes on supersymmetric and holomorphic field theories in dimensions 2 and 4
External links
Jacob Lurie's Home Page
Overview of Spectral Algebraic Geometry
DAG reading group (Fall 2011) at Harvard
http://ncatlab.org/nlab/show/derived+algebraic+geometry
Michigan Derived Algebraic Geometry RTG Learning Workshop, 2012
Derived algebraic geometry: how to reach research level math?
Derived Algebraic Geometry and Chow Rings/Chow Motives
Gabriele Vezzosi, An overview of derived algebraic geometry, October 2013
Algebraic geometry
Homotopical algebra
Algebraic topology
Ring theory
Scheme theory | Derived algebraic geometry | [
"Mathematics"
] | 1,796 | [
"Ring theory",
"Algebraic topology",
"Fields of abstract algebra",
"Topology",
"Algebraic geometry"
] |
40,780,016 | https://en.wikipedia.org/wiki/PaloDEx | PaloDEx Group is a Finnish company making equipment for dental radiography. Since 2009, the company has been owned by the American Envista Holdings Corporation, which was spun off from Danaher Corporation in December 2019.
The company was started in 1964 as Palomex Oy to manufacture the Orthopantomograph®, a device for making panoramic radiographs invented by Finnish professor Yrjö Paatero. The device made it possible to take a panoramic X-ray dental image in a single exposure.
Palomex Oy was acquired by the Finnish optical and medical equipment company Instrumentarium in 1977. Some of the company's operations formed a new company, Soredex Oy, which was merged with the Orion Group in 1981. Palomex was renamed Instrumentarium Imaging in 1988. In 2001, Instrumentarium acquired Soredex, thereby possessing two strong brands in dental imaging.
GE Healthcare acquired Instrumentarium in 2003. Two years later, GE sold its dental imaging operations which were formed into a separate company, PaloDEx Group Oy. In 2009, the company was acquired by Danaher Corporation. The company’s product families now include KaVo ORTHOPANTOMOGRAPH™, Instrumentarium Dental and Soredex.
Sources
Danaher Corporation
Instrument-making corporations
Health care companies established in 1964
Medical technology companies of Finland
Finnish companies established in 1964 | PaloDEx | [
"Biology"
] | 284 | [
"Danaher Corporation",
"Life sciences industry"
] |
40,781,220 | https://en.wikipedia.org/wiki/J%C3%B3zef%20Luba%C5%84ski | Józef Kazimierz Lubański (1914 – 8 December 1946) was a Polish theoretical physicist. He developed the Pauli–Lubanski pseudovector in relativistic quantum mechanics.
Life and works
Lubanski obtained the degree of magister philosophiae at Wilna in 1937. He then worked for two years as an assistant in theoretical physics at Polish universities, and obtained a grant to travel to the Netherlands and work under Hans Kramers at Leiden University. His original intention was to go on to Copenhagen in the following year, although the Second World War prevented this.
Lubanski worked with Léon Rosenfeld at Utrecht, and from this period he wrote a number of papers on the properties of mesons, mainly in the journal Physica and one in the Arkiv för matematik, astronomi och fysik.
Around 1937 in Kraków, he collaborated with Myron Mathisson and colleagues on the motion of spinning particles in linearized gravitational fields according to general relativity, and under Mathisson's lead, published a paper on the derivation of the Mathisson–Papapetrou–Dixon equations.
He also worked at the laboratory of Delft University of Technology in aerodynamics and hydrodynamics.
See also
Relativistic wave equations
References
Quantum physicists
20th-century Polish physicists
1914 births
1947 deaths | Józef Lubański | [
"Physics"
] | 279 | [
"Quantum physicists",
"Quantum mechanics"
] |
40,782,000 | https://en.wikipedia.org/wiki/Simplicial%20commutative%20ring | In algebra, a simplicial commutative ring is a commutative monoid in the category of simplicial abelian groups, or, equivalently, a simplicial object in the category of commutative rings. If A is a simplicial commutative ring, then it can be shown that π₀A is a ring and πᵢA are modules over that ring (in fact, π∗A is a graded ring over π₀A).
A topology-counterpart of this notion is a commutative ring spectrum.
Examples
The ring of polynomial differential forms on simplexes.
Graded ring structure
Let A be a simplicial commutative ring. Then the ring structure of A gives π∗A the structure of a graded-commutative graded ring as follows.
By the Dold–Kan correspondence, π∗A is the homology of the chain complex corresponding to A; in particular, it is a graded abelian group. Next, to multiply two elements, writing S¹ for the simplicial circle, let be two maps. Then the composition, the second map being the multiplication of A, induces . This in turn gives an element in . We have thus defined the graded multiplication . It is associative because the smash product is. It is graded-commutative (i.e., ) since the involution introduces a minus sign.
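A hedged sketch of the composition described above, with the missing formulas reconstructed in standard notation (an assumption, since the originals did not survive extraction): classes x in πᵢA and y in πⱼA are represented by maps from smash powers of the simplicial circle, and their product is the class of

```latex
(S^1)^{\wedge i} \wedge (S^1)^{\wedge j} \simeq (S^1)^{\wedge (i+j)}
  \xrightarrow{\;x \wedge y\;} A \wedge A \xrightarrow{\;\mu\;} A
% mu is the multiplication of A; the homotopy class of the composite
% is the product x.y in pi_{i+j}(A). Swapping the two smash factors
% is the involution that introduces the sign (-1)^{ij}.
```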
If M is a simplicial module over A (that is, M is a simplicial abelian group with an action of A), then a similar argument shows that π∗M has the structure of a graded module over π∗A (cf. Module spectrum).
Spec
By definition, the category of affine derived schemes is the opposite category of the category of simplicial commutative rings; an object corresponding to A will be denoted by .
See also
E_n-ring
References
What is a simplicial commutative ring from the point of view of homotopy theory?
What facts in commutative algebra fail miserably for simplicial commutative rings, even up to homotopy?
Reference request - CDGA vs. sAlg in char. 0
A. Mathew, Simplicial commutative rings, I.
B. Toën, Simplicial presheaves and derived algebraic geometry
P. Goerss and K. Schemmerhorn, Model categories and simplicial methods
Commutative algebra
Ring theory
Algebraic structures | Simplicial commutative ring | [
"Mathematics"
] | 499 | [
"Mathematical structures",
"Mathematical objects",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures",
"Commutative algebra"
] |
40,782,363 | https://en.wikipedia.org/wiki/Semicircle%20law%20%28quantum%20Hall%20effect%29 | The semicircle law, in condensed matter physics, is a mathematical relationship that occurs between quantities measured in the quantum Hall effect. It describes a relationship between the anisotropic and isotropic components of the macroscopic conductivity tensor , and, when plotted, appears as a semicircle.
The semicircle law was first described theoretically in Dykhne and Ruzin's analysis of the quantum Hall effect as a mixture of two phases: a free electron gas and a free hole gas. Mathematically, it states that where is the mean-field Hall conductivity, and is a parameter that encodes the classical conductivity of each phase. A similar law also holds for the resistivity.
A convenient reformulation of the law mixes conductivity and resistivity: where is an integer, the Hall divisor.
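For orientation, one commonly quoted form of the law, reconstructed here as an assumption since the displayed formulas did not survive extraction, describes the transition between Hall plateaus at filling factors ν₁ and ν₂:

```latex
% Semicircle relation between the dissipative and Hall conductivities;
% in the (sigma_xy, sigma_xx) plane this traces a semicircle of radius
% (nu_2 - nu_1) e^2 / 2h centered between the two plateau values:
\sigma_{xx}^{2}
  + \left(\sigma_{xy} - \nu_1 \tfrac{e^2}{h}\right)
    \left(\sigma_{xy} - \nu_2 \tfrac{e^2}{h}\right) = 0
```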
Although Dykhne and Ruzin's original analysis assumed little scattering, an assumption that proved empirically unsound, the law holds in the coherent-transport limits commonly observed in experiment.
Theoretically, the semicircle law originates from a representation of the modular group , which describes a symmetry between different Hall phases. (Note that this is not a symmetry in the conventional sense; there is no conserved current.) That group's strong connections to number theory also appear: Hall phase transitions (in a single layer) exhibit a selection rule that also governs the Farey sequence. Indeed, plots of the semicircle law are also Farey diagrams.
In striped quantum Hall phases, the relationship is slightly more complex because of the broken symmetry: here and describe the macroscopic conductivity in directions aligned with and perpendicular to the stripes.
References
Condensed matter physics
Hall effect | Semicircle law (quantum Hall effect) | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 350 | [
"Physical phenomena",
"Hall effect",
"Phases of matter",
"Electric and magnetic fields in matter",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
50,871,925 | https://en.wikipedia.org/wiki/R%20bodies | R bodies (from refractile bodies, also R-bodies) are polymeric protein inclusions formed inside the cytoplasm of bacteria. Initially discovered in kappa particles, bacterial endosymbionts of the ciliate Paramecium, R bodies (and genes encoding them) have since been discovered in a variety of taxa.
Morphology, assembly, and extension
At neutral pH, type 51 R bodies resemble a coil of ribbon approximately 500 nm in diameter and approximately 400 nm deep. Encoded by a single operon containing four open reading frames, R bodies are formed from two small structural proteins, RebA and RebB. A third protein, RebC, is required for the covalent assembly of these two structural proteins into higher-molecular weight products, visualized as a ladder on an SDS-PAGE gel.
At low pH, Type 51 R bodies undergo a dramatic structural rearrangement. Much like a paper yo-yo, the ribbon extends (from the center) to form a hollow tube with pointed ends that can reach up to 20 μm in length.
Other types of R bodies from different bacterial species vary in their size, ribbon morphology, and triggers for extension.
Function
When kappa particles shed from a killer paramecium are ingested, R bodies extend within the acidic food vacuole of the predatory paramecium, distending and rupturing the membrane. This liberates the contents of the food vacuole into the cytoplasm of the paramecium. While feeding kappa particles to sensitive paramecia results in their death, feeding purified R bodies, or R bodies recombinantly expressed in E. coli, is not toxic. Thus, R bodies are thought to function as a toxin delivery system.
R bodies are also capable of rupturing E. coli spheroplasts, demonstrating that they can rupture membranes in a foreign context, and they can be engineered to extend at a variety of different pH levels.
References
Cell biology
Cell anatomy
Protein complexes
Bacteriology
Biotechnology | R bodies | [
"Biology"
] | 422 | [
"Biotechnology",
"Cell biology",
"nan"
] |
50,873,599 | https://en.wikipedia.org/wiki/Staff%20gauge | A staff gauge or head gauge is a calibrated scale used to provide a visual indication of liquid level. When installed perpendicular to an inclined or sloped surface, a staff gauge is usually calibrated so that the indicated level is the true vertical level.
Staff gauges are commonly installed at stream gauging stations to indicate the water stage or water level. They are also used to indicate the level (and hence flow rate) in open channel primary devices (flumes or weirs); see discharge (hydrology).
See also
Head (hydrology)
Level staff
Water supply
Hydrology instrumentation | Staff gauge | [
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 124 | [
"Hydrology",
"Hydrology instrumentation",
"Measuring instruments",
"Environmental engineering",
"Water supply"
] |
50,876,594 | https://en.wikipedia.org/wiki/Surface%20Science%20%28journal%29 | Surface Science is a monthly peer-reviewed scientific journal published by Elsevier that covers the physics and chemistry of surfaces and interfaces. It was established in 1964. The journal encompasses Surface Science Letters, which was published separately until 1993.
The scope of the journal includes nanotechnology, catalysis, and soft matter and features both experimental and computational studies. Extended reviews are published in its companion journal, Surface Science Reports.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.942.
References
External links
Physics journals
Materials science journals
Academic journals established in 1964
Elsevier academic journals
Monthly journals
English-language journals | Surface Science (journal) | [
"Materials_science",
"Engineering"
] | 128 | [
"Materials science stubs",
"Nanotechnology journals",
"Materials science journals",
"Materials science journal stubs",
"Materials science"
] |
50,880,861 | https://en.wikipedia.org/wiki/Roofline%20model | The roofline model is an intuitive visual performance model used to provide performance estimates of a given compute kernel or application running on multi-core, many-core, or accelerator processor architectures, by showing inherent hardware limitations, and potential benefit and priority of optimizations. By combining locality, bandwidth, and different parallelization paradigms into a single performance figure, the model can be an effective alternative to assess the quality of attained performance instead of using simple percent-of-peak estimates, as it provides insights on both the implementation and inherent performance limitations.
The most basic roofline model can be visualized by plotting floating-point performance as a function of machine peak performance, machine peak bandwidth, and arithmetic intensity. The resultant curve is effectively a performance bound under which kernel or application performance exists, and includes two platform-specific performance ceilings: a ceiling derived from the memory bandwidth and one derived from the processor's peak performance.
Related terms and performance metrics
Work
The work W denotes the number of operations performed by a given kernel or application. This metric may refer to any type of operation, from number of array points updated, to number of integer operations, to number of floating point operations (FLOPs), and the choice of one or another is driven by convenience. In the majority of cases, however, W is expressed as FLOPs.
Note that the work is a property of the given kernel or application and thus depends only partially on the platform characteristics.
Memory traffic
The memory traffic Q denotes the number of bytes of memory transfers incurred during the execution of the kernel or application. In contrast to W, Q is heavily dependent on the properties of the chosen platform, such as the structure of the cache hierarchy.
Arithmetic intensity
The arithmetic intensity I, also referred to as operational intensity, is the ratio of the work W to the memory traffic Q, I = W/Q, and denotes the number of operations per byte of memory traffic. When the work is expressed as FLOPs, the resulting arithmetic intensity will be the ratio of floating point operations to total data movement (FLOPs/byte).
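As a worked example (an illustration, not taken from the original text): a STREAM-triad-like loop a[i] = b[i] + s*c[i] on 8-byte doubles performs W = 2 FLOPs per element while moving Q = 24 bytes (two reads and one write, ignoring write-allocate traffic), giving I = 2/24 ≈ 0.083 FLOPs/byte, well below the ridge point of most modern processors.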
Naive Roofline
The naive roofline is obtained by applying simple bound and bottleneck analysis. In this formulation of the roofline model, there are only two parameters, the peak performance and the peak bandwidth of the specific architecture, and one variable, the arithmetic intensity. The peak performance, in general expressed as GFLOPS, can usually be derived from benchmarking, while the peak bandwidth (specifically, peak DRAM bandwidth) is instead obtained from architectural manuals. The resulting plot, in general with both axes in logarithmic scale, is then derived by the following formula: P = min(π, β × I), where P is the attainable performance, π is the peak performance, β is the peak bandwidth and I is the arithmetic intensity. The point at which the performance saturates at the peak performance level π, that is, where the diagonal and horizontal roof meet, is defined as the ridge point. The ridge point offers insight on the machine's overall performance, by providing the minimum arithmetic intensity required to achieve peak performance, and by suggesting at a glance the amount of effort required by the programmer to achieve peak performance.
A given kernel or application is then characterized by a point given by its arithmetic intensity (on the x-axis). The attainable performance is then computed by drawing a vertical line that hits the roofline curve. Hence, the kernel or application is said to be memory-bound if β × I < π. Conversely, if β × I ≥ π, the computation is said to be compute-bound.
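The bound is simple enough to state in a few lines of code. The following minimal sketch uses made-up peak numbers (3000 GFLOP/s, 200 GB/s) purely for illustration:

```python
# Naive roofline bound: P = min(pi, beta * I).
def attainable_performance(intensity, peak_perf=3000.0, peak_bw=200.0):
    """Roofline bound in GFLOP/s.

    intensity -- arithmetic intensity I in FLOPs/byte
    peak_perf -- peak performance pi in GFLOP/s (illustrative value)
    peak_bw   -- peak DRAM bandwidth beta in GB/s (illustrative value)
    """
    return min(peak_perf, peak_bw * intensity)

ridge_point = 3000.0 / 200.0  # intensity where the two ceilings meet

for I in (0.083, 1.0, 15.0, 50.0):
    regime = "memory-bound" if I < ridge_point else "compute-bound"
    print(f"I = {I:6.3f} FLOPs/byte -> "
          f"{attainable_performance(I):7.1f} GFLOP/s ({regime})")
```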
Adding ceilings to the model
The naive roofline provides just an upper bound (the theoretical maximum) to performance. Although it can still give useful insights on the attainable performance, it does not provide a complete picture of what is actually limiting it. If, for instance, the considered kernel or application performs far below the roofline, it might be useful to capture other performance ceilings, other than simple peak bandwidth and performance, to better guide the programmer on which optimization to implement, or even to assess the suitability of the architecture used with respect to the analyzed kernel or application. The added ceilings impose then a limit on the attainable performance that is below the actual roofline, and indicate that the kernel or application cannot break through anyone of these ceilings without first performing the associated optimization.
The roofline plot can be expanded upon three different aspects: communication, adding the bandwidth ceilings; computation, adding the so-called in-core ceilings; and locality, adding the locality walls.
Bandwidth ceilings
The bandwidth ceilings are bandwidth diagonals placed below the idealized peak bandwidth diagonal. Their existence is due to the lack of some kind of memory related architectural optimization, such as cache coherence, or software optimization, such as poor exposure of concurrency (that in turn limit bandwidth usage).
In-core ceilings
The in-core ceilings are roofline-like curve beneath the actual roofline that may be present due to the lack of some form of parallelism. These ceilings effectively limit how high performance can reach. Performance cannot exceed an in-core ceiling until the underlying lack of parallelism is expressed and exploited. The ceilings can be also derived from architectural optimization manuals other than benchmarks.
Locality walls
If the ideal assumption that arithmetic intensity is solely a function of the kernel is removed, and the cache topology - and therefore cache misses - is taken into account, the arithmetic intensity clearly becomes dependent on a combination of kernel and architecture. This may result in a degradation in performance depending on the balance between the resultant arithmetic intensity and the ridge point. Unlike "proper" ceilings, the resulting lines on the roofline plot are vertical barriers through which arithmetic intensity cannot pass without optimization. For this reason, they are referenced to as locality walls or arithmetic intensity walls.
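Continuing the sketch above, ceilings simply lower the min: each missing optimization selects a lower compute cap or a shallower bandwidth diagonal. The ceiling values below are again illustrative placeholders, not measurements:

```python
# Roofline with ceilings: performance cannot exceed the lowest ceiling
# whose underlying optimization has not yet been implemented.
def roofline_with_ceilings(intensity, compute_ceilings, bandwidth_ceilings):
    """compute_ceilings:   caps in GFLOP/s (e.g. scalar, SIMD, SIMD+FMA)
    bandwidth_ceilings: bandwidths in GB/s (e.g. one NUMA node, all nodes)
    Only the ceilings still limiting the code should be passed in."""
    return min(min(compute_ceilings), min(bandwidth_ceilings) * intensity)

# A kernel at I = 4 FLOPs/byte, before SIMD and NUMA-aware placement,
# is capped by the scalar compute ceiling:
print(roofline_with_ceilings(4.0, [375.0], [120.0]))   # 375.0 GFLOP/s
# After vectorization, the same kernel becomes bandwidth-limited:
print(roofline_with_ceilings(4.0, [3000.0], [120.0]))  # 480.0 GFLOP/s
```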
Extension of the model
Since its introduction, the model has been further extended to account for a broader set of metrics and hardware-related bottlenecks. Extensions already available in the literature take into account the impact of NUMA organization of memory, of out-of-order execution, and of memory latencies, and model the cache hierarchy at a finer grain in order to better understand what is actually limiting performance and drive the optimization process.
Also, the model has been extended to better suit specific architectures and the related characteristics, such as FPGAs.
See also
Software performance testing
Benchmark (computing)
References
External links
The Roofline Model: A Pedagogical Tool for Auto-tuning Kernels on Multicore Architectures
Applying the Roofline model
Extending the Roofline Model: Bottleneck Analysis with Microarchitectural Constraints
Roofline Model Toolkit
Roofline Model Toolkit: A Practical Tool for Architectural and Program Analysis - publication related to the tool.
Perfplot
Extended Roofline Model
Intel Advisor - Roofline model automation
Youtube Video on how to use Intel Advisor Roofline
Software testing
Software optimization | Roofline model | [
"Engineering"
] | 1,382 | [
"Software engineering",
"Software testing"
] |
50,880,883 | https://en.wikipedia.org/wiki/International%20Center%20for%20Relativistic%20Astrophysics | ICRA, the International Center for Relativistic Astrophysics is an international research institute for relativistic astrophysics and related areas. Its members are seven Universities and four organizations. The center is located in Rome, Italy.
The International Center for Relativistic Astrophysics (ICRA) was founded in 1985 by Remo Ruffini (University of Rome "La Sapienza") together with Riccardo Giacconi (Nobel Prize for Physics 2002), Abdus Salam (Nobel Prize for Physics 1979), Paul Boynton (University of Washington), George Coyne (former director of the Vatican observatory), Francis Everitt (Stanford University) and Fang Li-Zhi (University of Science and Technology of China). It became a legal entity in 1991 with the Ministerial Decree 22/11/1991 from the Ministry of Education, Universities and Research.
"In 1978 Fang was assigned to host Ruffini, a guest of the Chinese Academy of Sciences. They gave joint university lectures and developed a profound friendship. In 1981 in China they published a small book introducing relativistic astrophysics that became revered among astrophysics students. In 1982 Fang and Ruffini organized the first international conference on astrophysics in China—the third Marcel Grossmann Meeting—and thereafter remained organizers of the Grossmann meetings. Together with Abdus Salam, Riccardo Giacconi, George Coyne, and Francis Everitt, they founded the International Center for Relativistic Astrophysics (ICRA) in 1985." (Physics Today)
The International Center of Relativistic Astrophysics is located in the Department of Physics building at the main Campus of the University of Rome "Sapienza".
In 2005 ICRA has been among the founders of ICRANet, the International Center for Relativistic Astrophysics Network. The national activities of research and teaching in Italy remained operative at ICRA in Rome, while international activities and coordination are now based in ICRANet in Pescara.
Structure
President: Yu Wang
Former President: Remo Ruffini
ICRA Council:
Stefano Ansoldi, University of Udine
Francesco Haardt, University of Insubria
Paul Boynton, University of Washington in Seattle
Remo Ruffini, former President and Director of Research
Member institutions
International Centre For Theoretical Physics (ICTP) - Trieste (Italy)
Space Telescope Institute - Baltimore - Maryland - (USA)
Specola Vaticana - Castelgandolfo (Vatican City)
Stanford University - Stanford, California (USA)
University Campus Bio-Medico of Rome (Italy)
University of Science and Technology of China (China)
University of Insubria (Italy)
University of Rome "Sapienza" (Italy)
University of Udine (Italy)
University of Washington at Seattle (USA)
World Academy of Sciences (TWAS) - Trieste (Italy)
International collaboration
Collaboration agreements have been signed between ICRA and scientific institutions worldwide, in particular:
AIGRC (The Australian International Gravitational Research Centre), Australia
ARSEC (Astrophysical Research center for the Structure and Evolution of the Cosmos), South Korea
BAO (Beijing Astronomical Observatory), China
CECS (Centro de Estudios Cientificos de Santiago), Chile
Universidad Nacional de Colombia, Colombia
IHES (Institut Hautes Etudes Scientifiques), France
KSNU (Kyrgiz State National University), Kyrgyzstan
IPM (Keldysh Institute for Applied Mathematics), Russia
MEPhI (Moscow State Engineering Physics Institute), Russia
NCST (National Centre for Science and Technology), Vietnam
OCA (Côte d’Azur Observatory), France
PAO (Pyongyang Astronomical Observatory), North Korea
University of Tirana, Albania
YITP (Yukawa Institute for Theoretical Physics), Japan
UADP (Physics Department, University of Arizona), USA
IRAP PhD
Since 2002, ICRA co-organizes an International Ph.D. program in Relativistic Astrophysics - International Relativistic Astrophysics Ph.D. Program, IRAP-PhD, the first joint PhD astrophysics program.
Other activities
Meetings initiated by ICRA
Marcel Grossmann meetings
In honor of Marcel Grossmann's work and collaboration with Einstein, Remo Ruffini and Abdus Salam established in 1975 the Marcel Grossmann Meetings (MG) on Recent Developments in Theoretical and Experimental General Relativity, Gravitation, and Relativistic Field Theories, which take place every three years in different countries. MG1 and MG2 were held in 1975 and 1979 in Trieste; MG3 in 1982 in Shanghai; MG4 in 1985 in Rome; MG5 in 1988 in Perth; MG6 in 1991 in Kyoto; MG7 in 1994 at Stanford; MG8 in 1997 in Jerusalem; MG9 in 2000 in Rome; MG10 in 2003 in Rio de Janeiro; MG11 in 2006 in Berlin; MG12 in 2009 in Paris; MG13 in 2012 in Stockholm; MG14 in 2015 in Rome.
Italian-Korean Meetings on Relativistic Astrophysics
The Italian-Korean Symposia on Relativistic Astrophysics is a series of biannual meetings organized alternatively in Italy and in Korea since 1987. It has been focused on exchange of information and collaborations between Italian and Korean astrophysicists on new issues in the field of Relativistic Astrophysics. The symposia cover topics in astrophysics and cosmology, such as gamma ray bursts and compact stars, high energy cosmic rays, dark energy and dark matter, general relativity, black holes, and new physics related to cosmology.
William Fairbank Meetings on Relativistic Gravitational Experiments in Space
The First William Fairbank Meeting was held at the University of Rome "La Sapienza" in 1990, under the auspices of ICRA with support from ASI (Italian Space Agency), ESA (European Space Agency), the Vatican Observatory, Stanford University and the University of Rome. Almost 80 physicists and engineers in widely diversified fields - relativistic gravitation, space research, SQUID technology, large scale cryogenics, clock technology, laser and radar science and other fields - came together in the kind of free technical exchange so characteristic of William Fairbank, in whose honor the meeting was held. The second meeting was held in Hong Kong and was devoted to relativistic gravitational experiments in space. The third meeting, held in Rome and Pescara in 1998, was focused on the Lense–Thirring effect.
First William Fairbank Meeting, Rome, 10–14 September 1990, ICRA, University of Rome "La Sapienza" - ICRA Network, Pescara.
Second William Fairbank Meeting, December 13–16, 1993, Hong Kong
Third William Fairbank Meeting. The Lense-Thirring Effect, June 29 - July 4, 1998, ICRA, University of Rome "La Sapienza" - ICRA Network, Pescara.
The Galileo-Xu Guangqi meetings
The Galileo-Xu Guangqi meetings have been created in the name of Galileo and Xu Guangqi, the collaborator of Matteo Ricci (Ri Ma Dou), generally recognized for bringing to China the works of Euclid and Galileo and for his strong commitment to the process of modernization and scientific development of China. The 1st Galileo - Xu Guangqi Meeting was held in Shanghai, China in 2009. The 2nd Galileo - Xu Guangqi meeting took place in Hanbury Botanic Gardens (Ventimiglia, Italy) and Villa Ratti (Nice, France) in 2010. The 3rd and 4th Galileo - Xu Guangqi meetings were held in Beijing, China in 2011 and 2015, respectively.
ICRA Network Workshops
INW I: LXV of R. Giacconi, Rome and Castelgandolfo, October 24–26, 1997
INW II: The Chaotic Universe, Rome and Pescara, February 1–5, 1999
INW III: Electrodynamics and Magnetohydrodynamics around Black Holes, Rome and Pescara, July 12–24, 1999
INW IV: Science at new Millennium, UWA, March 10–14, 2000
INW VI: Time structures in Relativistic Astrophysics, Pescara, July 2–14, 2001
INW VIII: Step and General Relativity, Pescara, September 16–21, 2002
INW IX: Fermi and Astrophysics, Rome and Pescara, October 3–7, 2001
INW X: Black Holes, Gravitational Waves and Cosmology, Rome and Pescara, July 15–20, 2002
INW XV: Testing the Equivalence Principle on Ground and in Space, Rome and Pescara, Italy September 20–23, 2004
Publications
In addition to the proceedings of conferences, several books have been published.
ICRA and G9 history
The history of the relativistic astrophysics group in the Department of Physics (Fisica) of the University of Rome "La Sapienza", led by Remo Ruffini, began with his appointment to a chair of theoretical physics there in 1978.
Marcel Grossmann Awards
Each meeting, one or two institutions and between two and six individual scientists are selected to receive the Marcel Grossmann Award. Each recipient is presented with a silver T. E. S. T. sculpture designed by artist A. Pierelli.
References
Astrophysics
International research institutes
International scientific organizations | International Center for Relativistic Astrophysics | [
"Physics",
"Astronomy"
] | 1,927 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
60,027,450 | https://en.wikipedia.org/wiki/Target%20Malaria | Target Malaria is a not-for-profit international research consortium that aims to co-develop and share novel genetic technologies to help control malaria in Africa. The consortium brings together research institutes and universities from Africa, Europe and North America.
The project is working to develop genetically modified mosquitoes that carry a trait that would result in the reduction of malaria mosquito populations. Reducing the number of mosquitoes that can transmit the malaria parasite would lead to fewer malaria infections. The project’s novel genetic approach aims to be complementary to existing malaria control interventions. The project’s research is still at an early stage, and even though results so far have been promising, there is a long way to go.
The malaria burden in Africa
Every year, malaria kills half a million people and infects over 200 million; a third of the world's population is at risk of contracting this mosquito-borne disease. The majority of the victims are children under the age of five living in Africa. While all regions of the world have made tremendous progress towards the control and elimination of malaria, Africa accounts for 94% of malaria deaths worldwide.
New vector control tools
According to the World Malaria Report 2020 published by the World Health Organization, despite tremendous progress in reducing malaria around the world, since 2015 this progress has slowed, stalling in the last three years. Current interventions, such as drug treatments, bed nets and insecticide spraying, have helped to lower the burden of malaria but have not been able to eradicate the disease in many countries. WHO warns that the global response to malaria has reached a “crossroads”: if new tools are not found, key targets of WHO’s global malaria strategy will likely be missed.
Gene drive for malaria control
Target Malaria is adapting a natural mechanism called a gene drive. The genetically modified mosquitoes carry a trait that targets their ability to reproduce. Gene drive ensures this modification is inherited at a higher rate than it normally would, thus reducing the fertility of the mosquito populations over time and ultimately their numbers. Gene drive technologies hold the promise of being a self-sustaining and cost-effective method to help in the fight against malaria by reducing the population of malaria mosquitoes. The WHO stated in its Position Statement on the evaluation and use of GMMs for the control of vector-borne diseases published on October 14, 2020: "In the spirit of fostering innovation, WHO takes the position that all potentially beneficial new technologies, including GMMs, should be investigated to determine whether they could be useful in the continued fight against diseases of public health concern. Such research should be conducted in steps and be supported by clear governance mechanisms to evaluate the health, environmental and ecological implications."
History and funding
Target Malaria started as a university-based research programme in 2005. Since 2012, the project has expanded to include scientists, social scientists, stakeholder engagement experts, regulatory affairs experts, project management teams, risk assessment specialists and communications professionals from Africa, Europe, and North America. The project receives core funding from the Bill and Melinda Gates Foundation and from the Open Philanthropy Project Fund, an advised fund of Silicon Valley Community Foundation. Individual labs also received additional funding from a variety of sources to support their work, including but not limited to: DEFRA, The European Commission, MRC, NIH, Uganda Ministry of Health, Uganda National Council for Science & Technology, Wellcome Trust and the World Bank.
List of partner institutions
CDC Foundation, USA
Imperial College London, UK
Institut de Recherche en Sciences de la Santé – IRSS (Research Institute for Health Sciences, Burkina Faso)
Polo d’Innovazione di Genomica, Genetica e Biologia – PoloGGB, Italy
Uganda Virus Research Institute, Uganda
University of Ghana, Ghana
University of Oxford, UK
See also
Gene drive
Oxitec
Rat Guard
References
Genome editing
Genomics organizations
Insect-borne diseases
Malaria
Pest control | Target Malaria | [
"Engineering",
"Biology"
] | 779 | [
"Genetics techniques",
"Genome editing",
"Pest control",
"Genetic engineering",
"Pests (organism)"
] |
60,034,038 | https://en.wikipedia.org/wiki/Anti-Cancer%20Agents%20in%20Medicinal%20Chemistry | Anti-Cancer Agents in Medicinal Chemistry is a peer-reviewed academic journal covering the disciplines of medicinal chemistry and drug design relating to chemotherapeutic agents in cancer. It is published by Bentham Science Publishers and the editor-in-chief is Simone Carradori ("G. d'Annunzio" University of Chieti-Pescara). The journal covers developments in "medicinal chemistry and rational drug design for the discovery of anti-cancer agents" and publishes original research reports and review papers.
It is related to the journal Current Medicinal Chemistry and was established in 2001 as Current Medicinal Chemistry – Anti-Cancer Agents. The journal obtained its present title in 2006.
Abstracting and indexing
The journal is abstracted and indexed in:
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.8.
References
External links
Medicinal chemistry journals
Academic journals established in 2001
Bentham Science Publishers academic journals
English-language journals | Anti-Cancer Agents in Medicinal Chemistry | [
"Chemistry"
] | 195 | [
"Biochemistry stubs",
"Medicinal chemistry journals",
"Medicinal chemistry",
"Medicinal chemistry stubs"
] |
60,035,395 | https://en.wikipedia.org/wiki/Serial%20femtosecond%20crystallography | Serial femtosecond crystallography (SFX) is a form of X-ray crystallography developed for use at X-ray free-electron lasers (XFELs). Single pulses at free-electron lasers are bright enough to generate resolvable Bragg diffraction from sub-micron crystals. However, these pulses also destroy the crystals, meaning that a full data set involves collecting diffraction from many crystals. This method of data collection is referred to as serial, referencing a row of crystals streaming across the X-ray beam, one at a time.
History
While the idea of serial crystallography had been proposed earlier, it was first demonstrated with XFELs by Chapman et al. at the Linac Coherent Light Source (LCLS) in 2011. The method has since been extended to solve unknown structures, perform time-resolved experiments, and has even been brought back to synchrotron X-ray sources.
Methods
In comparison to conventional crystallography, where a single (relatively large) crystal is rotated in order to collect a 3D data set, some additional methods have to be developed to measure in the serial mode. First, a method is required to efficiently stream crystals across the beam focus. The other major difference is in the data analysis pipeline. Here, each crystal is in a random, unknown orientation which must be computationally determined before the diffraction patterns from all the crystals can be merged into a set of 3D hkℓ intensities.
Sample Delivery
The first sample delivery system used for this technique was the Gas Dynamic Virtual Nozzle (GDVN) which generates a liquid jet in vacuum (accelerated by a concentric helium gas stream) containing crystals. Since then, many other methods have been successfully demonstrated at both XFELs and synchrotron sources. A summary of these methods along with their key relative features is given below:
Gas Dynamic Virtual Nozzle (GDVN) - low background scattering, but high sample consumption. Only method available for high repetition rate sources.
Lipidic Cubic Phase (LCP) injector - Low sample consumption, with relatively high background. Specially suited for membrane proteins
Other viscous delivery media - Similar to LCP, low sample consumption with high background
Fixed target scanning systems (wide variety of systems have been used with different features, with standard crystal loops, or silicon chips) - Low sample consumption, background depends on system, mechanically complex
Tape drive (crystals auto-pipetted onto a Kapton tape and brought to X-ray focus) - Similar to fixed target systems, except with fewer moving parts
Data Analysis
In order to recover a 3D structure from the individual diffraction patterns, they must be oriented, scaled and merged to generate a list of hkℓ intensities. These intensities can then be passed to standard crystallographic phasing and refinement programs. The first experiments only oriented the patterns and obtained accurate intensity values by averaging over a large number of crystals (> 100,000). Later versions correct for variations in individual pattern properties such as overall intensity variations and B-factor variations as well as refining the orientations to fix the "partialities" of the individual Bragg reflections.
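To illustrate the averaging step described above, here is a minimal sketch of "Monte Carlo" merging: partial intensities of the same Miller index hkl, measured on many randomly oriented crystals, are simply averaged. The scaling and partiality corrections applied by real packages (e.g. CrystFEL's tools) are omitted, and the data layout is an assumption made for illustration:

```python
# Average intensities of each Miller index over all indexed patterns.
from collections import defaultdict

def merge_intensities(observations):
    """observations: iterable of ((h, k, l), intensity) pairs drawn
    from many indexed diffraction patterns.
    Returns {hkl: mean intensity} over all crystals."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hkl, intensity in observations:
        sums[hkl] += intensity
        counts[hkl] += 1
    return {hkl: sums[hkl] / counts[hkl] for hkl in sums}

obs = [((1, 0, 0), 120.0), ((1, 0, 0), 95.0), ((2, 1, 3), 40.0)]
print(merge_intensities(obs))  # {(1, 0, 0): 107.5, (2, 1, 3): 40.0}
```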
References
External links
CrystFEL
cctbx.xfel
NXDS
The revolution of XFEL
X-ray crystallography | Serial femtosecond crystallography | [
"Chemistry",
"Materials_science"
] | 680 | [
"X-ray crystallography",
"Crystallography"
] |
57,084,837 | https://en.wikipedia.org/wiki/Heterogeneous%20gold%20catalysis | Heterogeneous gold catalysis refers to the use of elemental gold as a heterogeneous catalyst. As in most heterogeneous catalysis, the metal is typically supported on a metal oxide. Furthermore, as seen in other heterogeneous catalysts, activity increases with a decreasing diameter of supported gold clusters. Several industrially relevant processes are also observed, such as H2 activation, the water-gas shift reaction, and hydrogenation. One or two gold-catalyzed reactions may have been commercialized.
The high activity of supported gold clusters has been proposed to arise from a combination of structural changes, quantum-size effects and support effects that preferentially tune the electronic structure of gold such that optimal binding of adsorbates during the catalytic cycle is enabled. The selectivity and activity of gold nanoparticles can be finely tuned by varying the choice of support material, with e.g. titania (TiO2), hematite (α-Fe2O3), cobalt(II/III) oxide (Co3O4) and nickel(II) oxide (NiO) serving as the most effective support materials for facilitating the catalysis of CO combustion. Besides enabling an optimal dispersion of the nanoclusters, the support materials have been suggested to promote catalysis by altering the size, shape, strain and charge state of the cluster. A precise shape control of the deposited gold clusters has been shown to be important for optimizing the catalytic activity, with hemispherical, few atomic layers thick nanoparticles generally exhibiting the most desirable catalytic properties due to maximized number of high-energy edge and corner sites.
Proposed applications
In the past, heterogeneous gold catalysts have found preliminary commercial applications for the industrial production of vinyl chloride (the precursor to polyvinyl chloride, or PVC) and methyl methacrylate. Traditionally, PVC production uses mercury catalysts and leads to serious environmental concerns. China accounts for 50% of the world's mercury emissions, and 60% of China's mercury emissions are caused by PVC production. Although gold catalysts are slightly expensive, the overall production cost is affected by only ~1%. Therefore, green gold catalysis is considered valuable. Fluctuations in the price of gold later led to the cessation of operations based on their use in catalytic converters. Very recently, there have been many developments in gold catalysis for the synthesis of organic molecules, including C-C bond forming homocoupling or cross-coupling reactions, and it has been speculated that some of these catalysts could find applications in various fields.
CO oxidation
Gold can be a very active catalyst in oxidation of carbon monoxide (CO), i.e. the reaction of CO with molecular oxygen to produce carbon dioxide (CO2). Particles of 2 to 5 nm exhibit high catalytic activities. Supported gold clusters, thin films and nanoparticles are one to two orders of magnitude more active than atomically dispersed gold cations or unsupported metallic gold.
Gold cations can be dispersed atomically on basic metal oxide supports such as MgO and La2O3. Monovalent and trivalent gold cations have been identified, the latter being more active but less stable than the former. The turnover frequency (TOF) of CO oxidation on these cationic gold catalysts is in the order of magnitude of 0.01 s−1, exhibiting the very high activation energy of 138 kJ/mol.
Supported gold nanoclusters with a diameter < 2 nm are active in CO oxidation with a turnover frequency (TOF) in the order of magnitude of 0.1 s−1. It has been observed that clusters with 8 to 100 atoms are catalytically active. The reason is that, on one hand, eight atoms are the minimum necessary to form a stable, discrete energy band structure, and on the other hand, d-band splitting decreases in clusters with more than 100 atoms, resembling the bulk electronic structure. The support has a substantial effect on the electronic structure of gold clusters. Metal hydroxide supports such as Be(OH)2, Mg(OH)2, and La(OH)3, with gold clusters of < 1.5 nm in diameter, constitute highly active catalysts for CO oxidation at 200 K (-73 °C). By means of techniques such as HR-TEM and EXAFS, it has been proven that the activity of these catalysts is due exclusively to clusters with 13 atoms arranged in an icosahedral structure. Furthermore, the metal loading should exceed 10 wt% for the catalysts to be active.
Gold nanoparticles in the size range of 2 to 5 nm catalyze CO oxidation with a TOF of about 1 s−1 at temperatures below 273 K (0 °C). The catalytic activity of the nanoparticles is achieved even in the absence of moisture when the support is semiconductive or reducible, e.g. TiO2, MnO2, Fe2O3, ZnO, ZrO2, or CeO2. However, when the support is insulating or non-reducible, e.g. Al2O3 and SiO2, a moisture level > 5000 ppm is required for activity at room temperature. In the case of powder catalysts prepared by wet methods, the surface OH− groups on the support act as sufficient co-catalysts, so that no additional moisture is necessary. At temperatures above 333 K (60 °C), no water is needed at all.
The apparent activation energy of CO oxidation on supported gold powder catalysts prepared by wet methods is 2-3 kJ/mol above 333 K (60 °C) and 26-34 kJ/mol below 333 K. These energies are low compared to the values displayed by other noble metal catalysts (80-120 kJ/mol). The change in activation energy at 333 K can be ascribed to a change in reaction mechanism, an explanation that has been supported experimentally. At 400 K (127 °C), the reaction rate per surface Au atom does not depend on particle diameter, but the reaction rate per perimeter Au atom is directly proportional to particle diameter. This suggests that above 333 K the mechanism takes place on the gold surfaces. By contrast, at 300 K (27 °C), the reaction rate per surface Au atom is inversely proportional to particle diameter, while the rate per perimeter site does not depend on particle size. Hence, CO oxidation occurs on the perimeter sites at room temperature. Further information on the reaction mechanism has been revealed by studying the dependence of the reaction rate on the partial pressures of the reactive species. At both 300 K and 400 K, there is a first-order rate dependence on CO partial pressure up to 4 Torr (533 Pa), above which the reaction is zero order. With respect to O2, the reaction is zero order above 410 Torr (54.7 kPa) at both 300 and 400 K. The order with respect to O2 at lower partial pressures is 1 at 300 K and 0.5 at 400 K. The shift towards zero order indicates that the catalyst's active sites are saturated with the species in question. Hence, a Langmuir-Hinshelwood mechanism has been proposed, in which CO adsorbed on gold surfaces reacts with O adsorbed at the edge sites of the gold nanoparticles.
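The practical consequence of these low apparent activation energies can be illustrated with a short Arrhenius estimate; the sketch below uses activation energies from the ranges quoted above, while the absolute rate constants are otherwise arbitrary:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(Ea, T1, T2):
    """Arrhenius rate ratio k(T2)/k(T1) for an apparent activation energy Ea (J/mol)."""
    return np.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))

# Supported gold below 333 K (Ea ~ 26-34 kJ/mol) vs. a typical noble metal (~100 kJ/mol):
print(rate_ratio(30e3, 273, 333))   # ~11x faster at 333 K than at 273 K
print(rate_ratio(100e3, 273, 333))  # ~2800x faster over the same interval
```

The much weaker temperature dependence on gold simply reflects its low apparent activation energy.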
The need to use oxide supports, and more specifically reducible supports, is due to their ability to activate dioxygen. Gold nanoparticles supported on inert materials such as carbon or polymers have been proven inactive in CO oxidation. The aforementioned dependency of some catalysts on water or moisture also relates to oxygen activation. The ability of certain reducible oxides, such as MnO2, Co3O4, and NiO to activate oxygen in dry conditions (< 0.1 ppm H2O) can be ascribed to the formation of oxygen defects during pretreatment.
Water gas shift
Water gas shift is the most widespread industrial process for the production of dihydrogen, H2. It involves the reaction of carbon monoxide (e.g. from syngas) with water to form hydrogen, with carbon dioxide as a byproduct. In many catalytic reaction schemes, one of the elementary reactions is the oxidation of CO with an adsorbed oxygen species. Gold catalysts have been proposed as an alternative for water gas shift at low temperatures, viz. < 523 K (250 °C). This technology is essential to the development of solid oxide fuel cells. Hematite has been found to be an appropriate catalyst support for this purpose. Furthermore, a bimetallic Au-Ru/Fe2O3 catalyst has proven highly active and stable for low-temperature water gas shift. Titania and ceria have also been used as supports for effective catalysts. Unfortunately, Au/CeO2 is prone to deactivation caused by surface-bound carbonate or formate species.
Although gold catalysts are active for CO oxidation at room temperature, the high amounts of water involved in water gas shift require higher temperatures. At such temperatures, gold is fully reduced to its metallic form. However, the activity of e.g. Au/CeO2 has been enhanced by CN− treatment, whereby metallic gold is leached away, leaving behind highly active cations. According to DFT calculations, the presence of such Au cations on the catalyst is enabled by empty, localized nonbonding f states in CeO2. On the other hand, STEM studies of Au/CeO2 have revealed nanoparticles of 3 nm in diameter. Water gas shift has been proposed to occur at the interface between the Au nanoparticles and the reduced CeO2 support.
Epoxidations
Although the epoxidation of ethylene is routinely achieved in industry with selectivities as high as 90% on Ag catalysts, most catalysts provided < 10% selectivity for propylene epoxidation. Using a gold catalyst supported on the titanium silicalite-1 (TS-1) molecular sieve, yields of 350 g/h per gram of gold were obtained at 473 K (200 °C). The reaction took place in the gas phase. Furthermore, using mesoporous titanosilicate supports (Ti-MCM-41 and Ti-MCM-48), gold catalysts provided > 90% selectivity at ~7% propylene conversion, 40% H2 efficiency, and 433 K (160 °C). The active species in these catalysts were identified as hemispherical gold nanocrystals less than 2 nm in diameter in intimate contact with the support.
Alkene epoxidation has also been demonstrated in the absence of an H2 reductant in the liquid phase. For example, using 1% Au/graphite, ~80% selectivity of cis-cyclooctene to cyclooctene oxide (analogous to cyclohexene oxide) was obtained at 7-8% conversion, 353 K (80 °C), and 3 MPa O2 in the absence of hydrogen or solvent. Other liquid-phase selective oxidations have been achieved with saturated hydrocarbons. For instance, cyclohexane has been converted to cyclohexanone and cyclohexanol with a combined selectivity of ~100% on gold catalysts. Product selectivities can be tuned in liquid-phase reactions by the presence or absence of a solvent and by the nature of the latter, viz. water, polar, or nonpolar. With gold catalysts, the catalyst's support has less influence on reactions in the liquid phase than on reactions in the gas phase.
Selective hydrogenations
Typical hydrogenation catalysts are based on metals from groups 8, 9, and 10, such as Ni, Ru, Pd, and Pt. By comparison, gold has a poor catalytic activity for hydrogenation. This low activity is caused by the difficulty of dihydrogen activation on gold. While hydrogen dissociates on Pd and Pt without an energy barrier, dissociation on Au(111) has an energy barrier of ~1.3 eV, according to DFT calculations. These calculations agree with experimental studies, in which hydrogen dissociation was observed neither on gold (111) or (110) terraces nor on (331) steps, either at room temperature or at 473 K (200 °C). However, the rate of hydrogen activation increases for Au nanoparticles. Notwithstanding its poor activity, nano-sized gold immobilized on various supports has been found to provide good selectivity in hydrogenation reactions.
One of the early studies (1966) of hydrogenation on supported, highly dispersed gold was performed with 1-butene and cyclohexene in the gas phase at 383 K (110 °C). The reaction rate was found to be first order with respect to alkene pressure and second order with respect to chemisorbed hydrogen. In later work, it was shown that gold-catalyzed hydrogenation can be highly sensitive to Au loading (and hence to particle size) and to the nature of the support. For example, 1-pentene hydrogenation occurred optimally on 0.04 wt% Au/SiO2, but not at all on Au/γ-Al2O3. By contrast, the hydrogenation of 1,3-butadiene to 1-butene was shown to be relatively insensitive to Au particle size in a study with a series of Au/Al2O3 catalysts prepared by different methods. With all the tested catalysts, conversion was ~100% and selectivity < 60%. Concerning reaction mechanisms, in a study of propylene hydrogenation on Au/SiO2, reaction rates were determined using D2 and H2. Because the reaction with deuterium was substantially slower, it was suggested that the rate-determining step in alkene hydrogenation is the cleavage of the H-H bond. Lastly, ethylene hydrogenation was studied on Au/MgO at atmospheric pressure and 353 K (80 °C) with EXAFS, XANES and IR spectroscopy, suggesting that the active species might be Au3+ and the reaction intermediate an ethylgold species.
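The empirical rate law from the 1966 study can be transcribed directly; a minimal sketch, in which the rate constant k is an arbitrary placeholder and theta_H denotes the coverage of chemisorbed hydrogen:

```python
def hydrogenation_rate(k, p_alkene, theta_H):
    """Empirical rate law r = k * p_alkene * theta_H**2 (k is an assumed lumped constant)."""
    return k * p_alkene * theta_H ** 2

# The second-order hydrogen dependence means halving the coverage quarters the rate:
print(hydrogenation_rate(1.0, 10.0, 0.5) / hydrogenation_rate(1.0, 10.0, 1.0))  # 0.25
```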
Gold catalysts are especially selective in the hydrogenation of α,β-unsaturated aldehydes, i.e. aldehydes containing a C=C double bond on the carbon adjacent to the carbonyl. Gold catalysts are able to hydrogenate only the carbonyl group, so that the aldehyde is transformed into the corresponding alcohol while the C=C double bond is left untouched. In the hydrogenation of crotonaldehyde to crotyl alcohol, 80% selectivity was attained at 5-10% conversion and 523 K (250 °C) on Au/ZrO2 and Au/ZnO. The selectivity increased along with Au particle size in the range of ~2 to ~5 nm. Other substrates for this reaction include acrolein, citral, benzalacetone, and pent-3-en-2-one. The activity and selectivity of gold catalysts for this reaction have been linked to the morphology of the nanoparticles, which in turn is influenced by the support. For example, round particles tend to form on TiO2, while ZnO promotes particles with clear facets, as observed by TEM. Because the round morphology provides a higher relative amount of low-coordinated metal surface sites, this explains the higher activity observed with Au/TiO2 compared to Au/ZnO. Finally, a bimetallic Au-In/ZnO catalyst has been observed to improve the selectivity towards hydrogenation of the carbonyl in acrolein. HRTEM images showed that thin indium films decorate some of the facets of the gold nanoparticles. The promoting effect on selectivity might result from the fact that only the Au sites that promote side reactions are decorated by In.
A strategy that has succeeded in improving gold's catalytic activity in many reactions without impairing its selectivity is the synthesis of bimetallic Pd-Au or Pt-Au catalysts. For the hydrogenation of 1,3-butadiene to butenes, model surfaces of Au(111), Pd-Au(111), Pd-Au(110), and Pd(111) were studied with LEED, AES, and LEIS. A selectivity of ~100% was achieved on Pd70Au30(111), and it was suggested that Au might promote the desorption of the product during the reaction. A second instance is the hydrogenation of p-chloronitrobenzene to p-chloroaniline, in which selectivity suffers with typical hydrogenation catalysts due to the parallel hydrodechlorination to aniline. However, Pd-Au/Al2O3 (Au/Pd ≥ 20) has proven three times as active as the pure Au catalyst, while being ~100% selective to p-chloroaniline. In a mechanistic study of the hydrogenation of nitrobenzenes with Pt-Au/TiO2, the dissociation of H2 was identified as rate-controlling; hence the incorporation of Pt, an efficient hydrogenation metal, greatly improved catalytic activity. Dihydrogen dissociated on Pt and the nitroaromatic compound was activated at the Au-TiO2 interface. Finally, hydrogenation was enabled by the spillover of activated H surface species from Pt to the Au surface.
Theoretical background
Bulk metallic gold is known to be inert, exhibiting surface reactivity at room temperature only towards a few substances such as formic acid and sulphur-containing compounds, e.g. H2S and thiols. In heterogeneous catalysis, reactants adsorb onto the surface of the catalyst, forming activated intermediates. However, if the adsorption is weak, as in the case of bulk gold, a sufficient perturbation of the reactant's electronic structure does not occur and catalysis is hindered (Sabatier's principle). When gold is deposited as nanosized clusters of less than 5 nm onto metal oxide supports, a markedly increased interaction with adsorbates is observed, resulting in surprising catalytic activities. Evidently, nano-scaling and dispersing gold on metal oxide substrates makes gold less noble by tuning its electronic structure, but the precise mechanisms underlying this phenomenon are as yet uncertain and hence widely studied.
It is generally known that decreasing the size of metallic particles to the nanometer scale in at least one dimension yields clusters with a significantly more discrete electronic band structure in comparison with the bulk material. This is an example of a quantum-size effect and has previously been correlated with an increased reactivity enabling nanoparticles to bind gas-phase molecules more strongly. In the case of TiO2-supported gold nanoparticles, Valden et al. observed the opening of a band gap of approximately 0.2-0.6 eV in the gold electronic structure as the thickness of the deposited particles was decreased below three atomic layers. The two-layer-thick supported gold clusters were also shown to be exceptionally active for CO combustion, based on which it was concluded that quantum-size effects inducing a metal-insulator transition play a key role in enhancing the catalytic properties of gold. However, decreasing the size further, to a single atomic layer and a diameter of less than 3 nm, was reported to decrease the activity again. This has since been explained by a destabilization of clusters composed of very few atoms, resulting in too strong bonding of adsorbates and thus poisoning of the catalyst.
The properties of the metal d-band are central for describing the origin of catalytic activity based on electronic effects. According to the d-band model of heterogeneous catalysis, substrate-adsorbate bonds are formed as the discrete energy levels of the adsorbate molecule interact with the metal d-band, forming bonding and antibonding orbitals. The strength of the formed bond depends on the position of the d-band center, such that a d-band closer to the Fermi level (E_F) results in stronger interaction. The d-band center of bulk gold is located far below E_F, which qualitatively explains the observed weak binding of adsorbates, as both the bonding and antibonding orbitals formed upon adsorption are occupied, resulting in no net bonding. However, as the size of gold clusters is decreased below 5 nm, it has been shown that the d-band center of gold shifts to energies closer to the Fermi level, such that the antibonding orbital thus formed is pushed to an energy above E_F, reducing its filling. In addition to the shift in the d-band center of gold clusters, the size dependence of the d-band width as well as of the spin-orbit splitting has been studied from the viewpoint of catalytic activity. As the size of the gold clusters is decreased below 150 atoms (diameter ca. 2.5 nm), both values drop rapidly. This can be attributed to d-band narrowing due to the decreased number of hybridizing valence states of small clusters, as well as to the increased ratio of high-energy, low-coordinated edge atoms to the total number of Au atoms. The effect of the decreased spin-orbit splitting and of the narrower distribution of d-band states on the catalytic properties of gold clusters cannot be understood via simple qualitative arguments as in the case of the d-band center model. Nevertheless, the observed trends provide further evidence that a significant perturbation of the Au electronic structure occurs upon nanoscaling, which is likely to play a key role in the enhancement of the catalytic properties of gold nanoparticles.
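The d-band center invoked in this model is simply the first moment of the d-projected density of states. A minimal sketch of the computation follows; the Gaussian model bands are illustrative assumptions, not calculated Au data:

```python
import numpy as np

def d_band_center(energies, dos):
    """First moment of the d-projected density of states (energies in eV vs. E_F)."""
    return np.trapz(energies * dos, energies) / np.trapz(dos, energies)

e = np.linspace(-10.0, 4.0, 2801)
bulk_band = np.exp(-0.5 * ((e + 3.5) / 1.5) ** 2)     # broad band far below E_F
cluster_band = np.exp(-0.5 * ((e + 2.5) / 1.0) ** 2)  # narrower band shifted upward

print(d_band_center(e, bulk_band))     # ~ -3.5 eV
print(d_band_center(e, cluster_band))  # ~ -2.5 eV, i.e. closer to the Fermi level
```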
A central structural argument explaining the high activity of metal oxide supported gold clusters is based on the concept of periphery sites formed at the junction between the gold cluster and the substrate. In the case of CO oxidation, it has been hypothesized that CO adsorbs onto the edges and corners of the gold clusters, while the activation of oxygen occurs at the peripheral sites. The high activity of edge and corner sites towards adsorption can be understood by considering the high coordinative unsaturation of these atoms in comparison with terrace atoms. The low degree of coordination increases the surface energy of corner and edge sites, hence making them more active towards binding adsorbates. This is further coupled with the local shift of the d-band center of the unsaturated Au atoms towards energies closer to the Fermi level, which in accordance with the d-band model results in increased substrate-adsorbate interaction and lowering of the adsorption-dissociation energy barriers. Lopez et al. calculated the adsorption energy of CO and O2 on the Au(111) terrace on which the Au-atoms have a coordination number of 9 as well as on an Au10 cluster where the most reactive sites have a coordination of 4. They observed that the bond strengths are in general increased by as much as 1 eV, indicating a significant activation towards CO oxidation if one assumes that the activation barriers of surface reactions scale linearly with the adsorption energies (Brønsted-Evans-Polanyi principle). The observation that hemispherical two-layer gold clusters with a diameter of a few nanometers are most active for CO oxidation is well in line with the assumption that edge and corner atoms serve as the active sites, since for clusters of this shape and size the ratio of edge atoms to the total number of atoms is indeed maximized.
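The Brønsted-Evans-Polanyi argument above can be made explicit with a schematic linear relation between activation barrier and adsorption energy; the slope and reference barrier below are assumed illustrative values, not fitted parameters for gold:

```python
alpha = 0.9       # assumed BEP slope (dimensionless)
Ea_terrace = 1.5  # eV, hypothetical dissociation barrier on a 9-coordinated terrace site
dE_ads = -1.0     # eV, stronger adsorption on 4-coordinated Au10 sites (Lopez et al.)

# BEP linear scaling: the barrier drops roughly in proportion to the binding gain.
Ea_edge = Ea_terrace + alpha * dE_ads
print(Ea_edge)  # ~0.6 eV in this illustration
```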
The preferential activation of O2 at the perimeter sites is an example of a support effect that promotes the catalytic activity of gold nanoparticles. Besides enabling a proper dispersion of the deposited particles and hence a high surface-to-volume ratio, the metal oxide support also directly perturbs the electronic structure of the deposited gold clusters via various mechanisms, including strain induction and charge transfer. For gold deposited on magnesia (MgO), a charge transfer from singly charged oxygen vacancies (F-centers) at the MgO surface to the Au cluster has been observed. This charge transfer induces a local perturbation in the electronic structure of the gold clusters at the perimeter sites, enabling the formation of resonance states as the antibonding orbital of oxygen interacts with the metal d-band. As the antibonding orbital is occupied, the O-O bond is significantly weakened and stretched, i.e. activated. In gas-phase model studies, the formation of the activated superoxo species O2− is found to correlate with the size-dependent electronic properties of the clusters. The activation of O2 at the perimeter sites is also observed for defect-free surfaces and neutral gold clusters, but to a significantly smaller extent. The activity-enhancing effect of charge transfer from the substrate to gold has also been reported by Chen and Goodman in the case of a gold bilayer supported on ultrathin TiO2 on Mo(112). In addition to charge transfer between the substrate and the gold nanoparticles, the support material has been observed to increase the catalytic activity of gold by inducing strain as a consequence of lattice mismatch. The induced strain especially affects the Au atoms close to the substrate-cluster interface, resulting in a shift of the local d-band center towards energies closer to the Fermi level. This corroborates the periphery hypothesis and the creation of catalytically active bifunctional sites at the cluster-support interface. Furthermore, the support-cluster interaction directly influences the size and shape of the deposited gold nanoparticles. In the case of weak interaction, less active 3D clusters are formed, whereas stronger interaction yields more active 2D few-layer structures. This illustrates the ability to fine-tune the catalytic activity of gold clusters by varying the support material as well as the underlying metal upon which the substrate has been grown.
Finally, it has been observed that the catalytic activity of supported gold clusters towards CO oxidation is further enhanced by the presence of water. Invoking the periphery hypothesis, water promotes the activation of O2 by co-adsorption onto the perimeter sites where it reacts with O2 to form adsorbed hydroxyl (OH*) and hydroperoxo (OOH*) species. The reaction of these intermediates with adsorbed CO is very rapid, and results in the efficient formation of CO2 with concomitant recovery of the water molecule.
See also
Gold
Gold cluster
Organogold chemistry
Colloidal gold
Heterogeneous catalysis
Cluster chemistry
Hydrogen spillover
References
Gold
Chemical reactions
Chemical kinetics
Catalysis
Surface science | Heterogeneous gold catalysis | [
"Physics",
"Chemistry",
"Materials_science"
] | 5,557 | [
"Catalysis",
"Chemical reaction engineering",
"Surface science",
"Condensed matter physics",
"nan",
"Chemical kinetics"
] |
57,088,643 | https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Institution%20of%20Electrical%20Engineers | Proceedings of the Institution of Electrical Engineers was a series of journals that published the proceedings of the Institution of Electrical Engineers. It was originally established as the Journal of the Society of Telegraph Engineers in 1872, and was known under several titles over the years, such as Journal of the Institution of Electrical Engineers, Proceedings of the IEE and IEE Proceedings.
History
The journal was originally established in 1872, as
Journal of the Society of Telegraph Engineers (1872–1880)
Then underwent a series of name changes
Journal of the Society of Telegraph Engineers and of Electricians (1881–1882)
Journal of the Society of Telegraph-Engineers and Electricians (1883–1888)
Until in 1889 it settled into
Journal of the Institution of Electrical Engineers (1889–1940)
The journal remained under that name for over 50 years.
From 1926 to 1940, a new journal was started
Institution of Electrical Engineers - Proceedings of the Wireless Section of the Institution (1926–1940)
In 1941, the journals were reorganized into distinct parts. From 1941 to 1948 these were
Journal of the Institution of Electrical Engineers - Part I: General
Journal of the Institution of Electrical Engineers - Part II: Power Engineering
Journal of the Institution of Electrical Engineers - Part IIA: Automatic Regulators and Servo Mechanisms
Journal of the Institution of Electrical Engineers - Part III: Communication Engineering
Journal of the Institution of Electrical Engineers - Part III: Radio and Communication Engineering
Journal of the Institution of Electrical Engineers - Part IIIA: Radiocommunication
Journal of the Institution of Electrical Engineers - Part IIIA: Radiolocation
From 1949 until 1954, the publications were reorganized into
Journal of the Institution of Electrical Engineers
and
Proceedings of the IEE - Part I: General
Proceedings of the IEE - Part IA: Electric Railway Traction
Proceedings of the IEE - Part II: Power Engineering
Proceedings of the IEE - Part IIA: Insulating Materials
Proceedings of the IEE - Part III: Radio and Communication Engineering
Proceedings of the IEE - Part IIIA: Television
Proceedings of the IEE - Part IV: Institution Monographs
Which in 1955 were renamed
Journal of the IEE (1955–1963)
and
Proceedings of the IEE - Part A: Power Engineering
Proceedings of the IEE - Part B: Electronic and Communication Engineering
Proceedings of the IEE - Part B: Radio and Electronic Engineering
Proceedings of the IEE - Part C: Monographs
These merged into a single journal in 1963, which remained until 1979.
Proceedings of the Institution of Electrical Engineers (1963–1979)
In 1964, Journal of the IEE became
Electronics & Power (1964–1987)
which in 1988 became
IEE Review (1988–2006)
The proceedings were renamed in 1980 as IEE Proceedings. From 1980 until 1993, the IEE Proceedings had lettered parts
IEE Proceedings A (Physical Science, Measurement and Instrumentation, Management and Education)
IEE Proceedings A (Physical Science, Measurement and Instrumentation, Management and Education, Reviews)
IEE Proceedings A (Science, Measurement and Technology)
IEE Proceedings B (Electric Power Applications)
IEE Proceedings C (Generation, Transmission and Distribution)
IEE Proceedings D (Control Theory and Applications)
IEE Proceedings E (Computers and Digital Techniques)
IEE Proceedings F (Communications, Radar and Signal Processing)
IEE Proceedings F (Radar and Signal Processing)
IEE Proceedings G (Circuits, Devices and Systems)
IEE Proceedings G (Electronic Circuits and Systems)
IEE Proceedings H (Microwaves, Antennas and Propagation)
IEE Proceedings H (Microwaves, Optics and Antennas)
IEE Proceedings I (Communications, Speech and Vision)
IEE Proceedings I (Solid-State and Electron Devices)
IEE Proceedings J (Optoelectronics)
and were reorganized in 1994 until 2006
IEE Proceedings - Circuits, Devices and Systems
IEE Proceedings - Communications
IEE Proceedings - Computers and Digital Techniques
IEE Proceedings - Control Theory and Applications
IEE Proceedings - Electric Power Applications
IEE Proceedings - Generation, Transmission and Distribution
IEE Proceedings - Information Security
IEE Proceedings - Intelligent Transport Systems
IEE Proceedings - Microwaves, Antennas and Propagation
IEE Proceedings - Nanobiotechnology
IEE Proceedings - Optoelectronics
IEE Proceedings - Radar, Sonar and Navigation
IEE Proceedings - Science, Measurement and Technology
IEE Proceedings - Software
IEE Proceedings - Systems Biology
IEE Proceedings - Vision, Image and Signal Processing
After 2006, the IEE merged with the Institution of Incorporated Engineers (IIE) to form the Institution of Engineering and Technology (IET), and its journals were reorganized into various IET publications.
External links
Institution of Engineering and Technology academic journals
Electrical and electronic engineering journals | Proceedings of the Institution of Electrical Engineers | [
"Engineering"
] | 926 | [
"Institution of Engineering and Technology",
"Institution of Engineering and Technology academic journals",
"Electronic engineering",
"Electrical engineering",
"Electrical and electronic engineering journals"
] |
57,091,934 | https://en.wikipedia.org/wiki/Ion%20implantation-induced%20nanoparticle%20formation | Ion implantation-induced nanoparticle formation is a technique for creating nanometer-sized particles for use in electronics.
Ion implantation
Ion implantation is a technique extensively used in the field of materials science for material modification. Its effect on nanomaterials allows the manipulation of mechanical, electronic, morphological, and optical properties.
One-dimensional nanomaterials are an important contributor to the creation of nanodevices such as field-effect transistors, nanogenerators and solar cells. They offer the potential of high integration density, lower power consumption, higher speed and super-high operating frequency.
The effects of ion implantation vary according to multiple variables. Collision cascades may occur during implantation, causing interstitials and vacancies in target materials (although these defects may be mitigated through dynamic annealing). Collision modes are nuclear collision, electron collision and charge exchange. Another process is the sputtering effect, which significantly affects the morphology and shape of nanomaterials.
References
Materials science
Nanotechnology
Semiconductor fabrication materials | Ion implantation-induced nanoparticle formation | [
"Physics",
"Materials_science",
"Engineering"
] | 214 | [
"Materials science stubs",
"Applied and interdisciplinary physics",
"Materials science",
"Nanotechnology stubs",
"nan",
"Nanotechnology"
] |
57,093,897 | https://en.wikipedia.org/wiki/Design%20Commons | Design Commons is a deconstructed conference founded by the CEO of Interactive Africa, Ravi Naidoo in collaboration with World Design Weeks founder Kari Korkman. The event was established in 2017 and acts as a travelling discussion platform on the future of cities.
Unlike the conventional conference format, Design Commons places stakeholders in urban planning and public space at the same table as notable designers to foster a sense of common ground and find solutions to some of the world's pressing issues.
Background
The first Design Commons event took place in Clarion Congress Centre, Helsinki as part of the World Design Weeks summit from 14 to 15 September 2017.
Speakers included architect David Adjaye, Dutch architect Winy Maas, Finnish technology entrepreneur Marko Ahtisaari, Dutch landscape architect Cees van der Veeken and Studio Swine, a collaboration between Japanese architect Azusa Murakami and British artist Alexander Groves.
The event is open to the public and is partially released as a video series on the Design Indaba website.
References
Conferences in Finland
Urban planning | Design Commons | [
"Engineering"
] | 210 | [
"Urban planning",
"Architecture"
] |
57,094,192 | https://en.wikipedia.org/wiki/Kastus | Kastus Technologies is an Irish multinational nanotechnology company that specialises in patented, visible-light-activated, photocatalytic, antimicrobial coatings. These coatings prevent the growth of bacteria on surfaces such as ceramics, glass, and touchscreens, with no negative side effects for the end user. Founded in Dublin in 2014, Kastus’ antimicrobial coatings were in development for over 10 years as part of a collaboration with Dublin Institute of Technology and the Advanced Materials and Bio Engineering Research (AMBER) Centres.
History
John Browne, Kastus CEO, founded the company in 2014 in Dublin following 10 years of collaborative research with Dublin Institute of Technology. It was developed out of an increasing demand for a reduction in the spread of antibiotic-resistant infections commonly found on indoor surfaces. In October 2017, the Department of Health published “Ireland’s National Action Plan on Antimicrobial Resistance 2017-2020”, which highlighted the threat antimicrobial resistance poses and the urgent need for new technology to combat this.
In April 2016, the Sligo Institute of Technology, which is funded by Kastus, announced the creation of a non-toxic antimicrobial nanotechnology, which Kastus plans to market globally. This research is supported by a €1.5 million funding investment from Atlantic Bridge.
In 2018, Kastus partnered with Oman-based ceramic tile producer Al Maha Ceramics, which exports to 15 countries in Asia and Africa. The deal saw Kastus use its antimicrobial technology to produce a range of new tiles called iProtect.
In 2019, Kastus partnered with Faytech to enhance their development of touch display manufacturing capabilities.
In 2020, Kastus developed antimicrobial and antiviral technology used on touch screens to prevent the spread of diseases such as COVID-19. The screen technology has been shown to kill up to 99% of harmful bacteria, fungi, and antibiotic-resistant superbugs, including human coronavirus. Kastus was awarded EU funding to further develop and expand these technologies and their applications, and has partnered with companies such as Lenovo, Zagg, and Lavazza for a range of commercial applications for their products.
In 2021, Kastus raised €5.65 million in a Series A round to build out its global commercial team to meet growing demand for its antiviral surface protection technology.
Awards
Spin-out Company Impact Award (2017)
Irish Times Innovation of the Year award (2017)
Irish Times Life Sciences and Healthcare award (2017)
KTI Impact Award Winners 2017
Med Tech Award Finalists 2020
EY Entrepreneur of The Year Finalists
References
Nanotechnology companies
Companies based in Dublin (city) | Kastus | [
"Materials_science"
] | 557 | [
"Nanotechnology",
"Nanotechnology companies"
] |
57,094,769 | https://en.wikipedia.org/wiki/Knudsen%20absolute%20manometer | A Knudsen absolute manometer is an instrument for measuring absolute pressures, named after Martin Knudsen.
Working principle
Pressure is determined by the interaction of gas particles with a surface; the momentum a particle transfers depends on its kinetic energy, which is temperature dependent. When a particle hits a hotter surface, heat transfer takes place and the particle gains energy; when a particle hits a colder surface, the opposite occurs. Particles that interact with a hotter surface therefore exert a larger force on it than particles that interact with a colder one. A Knudsen manometer uses this temperature effect to make a plate with dual temperatures rotate. It consists of a plate rotating about its own centre, with some parts of the plate at ambient temperature and the other sides heated. At the heated sides, interacting particles gain kinetic energy and push the plate into rotation. By reading the speed of this rotation, the pressure can be determined. (This description is an interpretation of the account of the Knudsen manometer in the Dutch textbook Vacuum Technologie.)
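For the classic parallel-surface form of the gauge, a commonly quoted free-molecular-regime relation gives the net radiometric force per unit area as F/A = (p/2)(sqrt(T_hot/T_gas) - 1), valid only when the mean free path is large compared with the gap between the surfaces. A minimal sketch, treating this textbook relation as given:

```python
import math

def knudsen_gauge_pressure(force_per_area, T_hot, T_gas):
    """Absolute pressure (Pa) from the measured net radiometric force per unit area (Pa)."""
    return 2.0 * force_per_area / (math.sqrt(T_hot / T_gas) - 1.0)

# Example: a net force per area of 0.01 Pa with surfaces at 400 K in gas at 300 K
print(knudsen_gauge_pressure(0.01, 400.0, 300.0))  # ~0.13 Pa
```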
References
Pressure gauges
Laboratory equipment | Knudsen absolute manometer | [
"Technology",
"Engineering"
] | 228 | [
"Pressure gauges",
"Measuring instruments"
] |
46,809,749 | https://en.wikipedia.org/wiki/Penicillium%20molle | Penicillium molle is an anamorph species of the genus Penicillium.
References
molle
Fungi described in 1980
Fungus species | Penicillium molle | [
"Biology"
] | 32 | [
"Fungi",
"Fungus species"
] |
46,814,568 | https://en.wikipedia.org/wiki/Photosynthetic%20state%20transition | In photosynthesis, state transitions are rearrangements of the photosynthetic apparatus which occur on short time-scales (seconds to minutes). The effect is prominent in cyanobacteria, whereby the phycobilisome light-harvesting antenna complexes alter their preference for transfer of excitation energy between the two reaction centers, PS I and PS II. This shift helps to minimize photodamage caused by reactive oxygen species (ROS) under stressful conditions such as high light, but may also be used to offset imbalances between the rates of generating reductant and ATP.
The phenomenon was first discovered in unicellular green algae, and may also occur in plants. However, in these organisms it occurs by a different mechanism, which is not as well understood. The plant/algal mechanism is considered functionally analogous to the cyanobacterial mechanism but involves completely different components. The foremost difference is the presence of fundamentally different types of light-harvesting antenna complexes: plants and green algae use an intrinsically-bound membrane complex of chlorophyll a/b binding proteins for their antenna, instead of the soluble phycobilisome complexes used by cyanobacteria (and certain algae).
References
Biochemistry | Photosynthetic state transition | [
"Chemistry",
"Biology"
] | 260 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry",
"nan"
] |
46,815,495 | https://en.wikipedia.org/wiki/Metanephrines | The metanephrines are a group of molecules consisting of metanephrine and normetanephrine.
An article in the Journal of the American Medical Association, 2002, indicated that the measurement of plasma free levels of metanephrines is the best tool in the diagnosis of pheochromocytoma, an adrenal medullary neoplasm.
References
Phenol ethers
Phenylethanolamines
Tumor markers | Metanephrines | [
"Chemistry",
"Biology"
] | 96 | [
"Chemical pathology",
"Tumor markers",
"Biomarkers"
] |
46,818,848 | https://en.wikipedia.org/wiki/EPNdB | Effective perceived noise in decibels (EPNdB) or Effective Perceived Noise Level (EPNL) is a measure of the relative noisiness of an individual aircraft pass-by event. It is used for aircraft noise certification and applies to an individual aircraft, not the noise exposure from an airport. Separate ratings are stated for takeoff, overflight and landing events, and represent the integrated power sum of noisiness during the event. The instantaneous value of noisiness is computed with the PNL or PNdB metric over the period within which the noise from the aircraft is within 10 dB of the maximum noise (usually at the point of closest approach). It is defined, with computational instructions, in Annex 16 of the Convention on International Civil Aviation and in Part 36 of the US Federal Aviation Regulations. The scaling is such that the EPNdB rating represents the integrated noisiness over a ten-second period; an EPNdB rating of 100 means that the event has the same integrated noisiness as a 100 PNdB sound lasting ten seconds. Direct comparison with A-weighted sound pressure level, which is used for many other environmental sound measurements, is not possible because PNdB is a noisiness metric rather than a sound pressure metric.
The term "cumulative" EPNdB is the combination of the noise margins from the three ratings. It is defined as the sum of the individual margins (difference between certified noise level and noise limit) at takeoff lateral, takeoff flyover and approach.
It is important to make the distinction between loudness and noisiness. The same kinds of analytical methods are used but instead of using equal-loudness contours, equal-noisiness contours are derived and used instead.
The EPNdB metric is only used in the US for aircraft certification purposes. In Australia and Canada, it is the basis for the ANEF and NEF noise exposure forecasts used in place of the DNL and day-evening-night metrics used in the US and Europe, respectively.
Computation of EPNdB
Detailed information on measurement of aircraft acoustic signature to meet the requirements of Annex 16 is found in ICAO Document 9501 and IEC 61265. Data acquisition in one-third-octave bands is required, followed by processing to yield a logarithmically-scaled value in decibels relative to a sound pressure of 20 micropascals for each one-third-octave band. The individual band sound pressure levels are converted to "noy" values which are then summed in the manner of Stevens' MKVI loudness to yield a total noy value. Noy is a linear unit of noisiness like sone is for loudness, and is then converted into PNL or PNdB (the terms are interchangeable) which is a logarithmic unit like phon which is the logarithmic unit for loudness. EPNdB is the integrated PNdB value over the duration of the pass-by event, normalized to a 10-second event duration using Stevens's power law. The frequency weighting function in the "noy" curves is very close to the old D-weighting curve.
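A minimal sketch of the final integration step described above, assuming the perceived noise levels have already been computed for each 0.5-second interval from the one-third-octave data (tone corrections, which the certification procedure also requires, are assumed to have been applied):

```python
import numpy as np

def epnl(pnl, dt=0.5, t_ref=10.0):
    """EPNdB from a time history of perceived noise levels (PNdB).

    The sum runs over the interval where PNL is within 10 dB of its maximum,
    is normalized to the 10-second reference duration, and the resulting
    duration correction is added to the maximum level.
    """
    pnl = np.asarray(pnl, dtype=float)
    pnl_max = pnl.max()
    window = pnl >= pnl_max - 10.0
    duration_corr = 10.0 * np.log10(np.sum(10.0 ** (pnl[window] / 10.0)) * dt / t_ref) - pnl_max
    return pnl_max + duration_corr

# A steady 100-PNdB event lasting exactly 10 s rates 100 EPNdB, as stated above:
print(epnl([100.0] * 20))  # 20 samples x 0.5 s = 10 s -> 100.0
```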
See also
Aircraft noise
Noise pollution
Noise measurement
References
Sound measurements
Noise pollution
Aviation law | EPNdB | [
"Physics",
"Mathematics"
] | 668 | [
"Quantity",
"Sound measurements",
"Physical quantities"
] |
53,705,242 | https://en.wikipedia.org/wiki/Peter%20Trefonas | Peter Trefonas (born 1958) is a retired DuPont Fellow (a senior scientist) at DuPont, where he worked on the development of electronic materials. He is known for innovations in the chemistry of photolithography, particularly the development of anti-reflective coatings and polymer photoresists that are used to create circuitry for computer chips. This work has supported the patterning of smaller features during the lithographic process, increasing miniaturization and microprocessor speed.
Education
Peter Trefonas is a son of Louis Marco Trefonas, also a chemist, and Gail Thames. He was inspired by Star Trek and the writings of Isaac Asimov, and created his own chemistry lab at home.
Trefonas attended the University of New Orleans, receiving his Bachelor of Science in chemistry in 1980.
While an undergraduate, Trefonas earned money by writing video games for early personal computers. These included Worm, a clone of the 1976 arcade video game Blockade, and a clone of the arcade game Hustle (1977), which itself was based on Blockade. Worm was the first of what would become many games in the snake video game genre for home computers. Trefonas also wrote a game based on Dungeons & Dragons.
Trefonas studied at the University of Wisconsin-Madison with Robert West, completing a Ph.D. in inorganic chemistry in late 1984. Trefonas became interested in electronic materials after working with West and chip makers from IBM to create organosilicon bilayer photoresists. His thesis topic was "Synthesis, properties and chemistry of organosilane and organogermane high polymers" (1985).
Career
Trefonas joined MEMC Electronic Materials in late 1984. In 1986, he and others co-founded Aspect Systems Inc., utilizing photolithography technology acquired from MEMC. Trefonas worked at Aspect from 1986-1989. Then, through a succession of company acquisitions, he moved to Shipley Company (1990-2000), Rohm and Haas (1997-2008), to The Dow Chemical Company (2008-2019), and finally to DuPont (2019-current).
Trefonas has published at least 137 journal articles and technical publications. He has received 132 US patents.
Research
Throughout his career, Trefonas has focused on materials science and the chemistry of photolithography. By understanding the chemistry of photoresists used in lithography, he has been able to develop anti-reflective coatings and polymer photoresists that support finely-tuned etching used in the production of integrated circuits.
These materials and techniques make it possible to fit more circuits into a given area. Over time, lithographic technologies have developed to allow lithography to use smaller wavelengths of light. Trefonas has helped to overcome a number of apparent limits to the sizes that are achievable, developing photoresists that are responsive to 436-nm and 365-nm ultraviolet light, and as small as 193 nm deep.
In 1989, Trefonas and others at Aspect Systems Inc. reported extensive studies of polyfunctional photosensitive groups in positive photoresists. They studied diazonaphthoquinone (DNQ), a chemical compound used for dissolution inhibition of novolak resin in photomask creation. They mathematically modeled effects, predicted possible optimizations, and experimentally verified their predictions. They found that chemically bonding three DNQ molecules together, to create a new molecule containing three dissolution inhibitors, led to better feature contrast, with better resolution and miniaturization. These modified DNQs became known as "polyfunctional photoactive components" (PACs). This approach, which they termed polyphotolysis,
has also been referred to as the "Trefonas Effect."
The technology of trifunctional diazonaphthoquinone PACs has become the industry standard in positive photoresists. Their mechanism has been elucidated and relates to a cooperative behavior of each of the three DNQ units in the new trifunctional dissolution inhibitor molecule. Phenolic strings from the acceptor groups of PACs that are severed from their anchors may reconnect to living strings, replacing two shorter polarized strings with one longer polarized string.
Trefonas has also been a leader in the development of fast-etch organic Bottom Antireflective Coating (BARC) technology.
BARC technology minimizes the reflection of light from the substrate when imaging the photoresist. Light that is used to form the latent image in the photoresist film can reflect back from the substrate and compromise feature contrast and profile shape. Controlling interference from reflected light results in the formation of a sharper pattern with less variability and a larger process window.
In 2014, Trefonas and others at Dow were named Heroes of Chemistry by the American Chemical Society, for the development of Fast Etch Organic Bottom Antireflective Coatings (BARCs). In 2016, Trefonas was recognized with The SCI Perkin Medal for outstanding contributions to industrial chemistry. In 2018, Trefonas was named as a Fellow of the SPIE for "achievements in design for manufacturing & compact modeling." Peter Trefonas was elected to the National Academy of Engineering in 2018 for the "invention of photoresist materials and microlithography methods underpinning multiple generations of microelectronics". DuPont Company in 2019 recognized Trefonas with its top recognition, the Lavoisier Medal, for "commercialized electronic chemicals which enabled customers to manufacture integrated circuits with higher density and faster speeds".
Awards and honors
2022, “Industry Project Award” of the Institution of Chemical Engineers (IChemE)
2021, ACS Carothers Award
2019, DuPont Company Lavoisier Medal
2018, elected to the National Academy of Engineering
2018, named Fellow of the SPIE
2016, Perkin Medal, from the Society of Chemical Industry
2014, ACS Heroes of Chemistry, from the American Chemical Society (one of thirteen scientists from the Dow Chemical Company credited with the development of Dow AR Fast Etch Organic Bottom Antireflectant Coatings)
2013, SPIE C. Grant Willson Best Paper Award in Patterning Materials and Processes jointly with researchers at Dow and Texas A&M University, from SPIE for the paper "Bottom-up / top-down high-resolution, high-throughput lithography using vertically assembled block brush polymers".
References
1958 births
Living people
American physical chemists
Inorganic chemists
Photochemists
Polymer scientists and engineers
Chemists from Louisiana
People from New Orleans
20th-century American chemists
21st-century American chemists
University of New Orleans alumni
University of Wisconsin–Madison College of Letters and Science alumni
Chemists from Minnesota | Peter Trefonas | [
"Chemistry",
"Materials_science"
] | 1,389 | [
"Physical chemists",
"Inorganic chemists",
"Photochemists",
"Polymer chemistry",
"Polymer scientists and engineers"
] |
53,706,711 | https://en.wikipedia.org/wiki/Nested%20wells | Nested wells, also referred to as nested monitoring wells, are composed of multiple tubes or pipes, typically terminating with short screened intervals (2–3 ft), installed in single boreholes. Sand packs must be installed at the screen depths and seals in the borehole are constructed between the sand packs. Nested wells are different from well clusters in that the latter consists of a cluster of wells where tubes or pipes are constructed in separate, individual boreholes that are drilled and completed at different depths.
When constructing nested wells, attention must be paid to ensure the proper placement of sand in the screened intervals and of bentonite between monitored intervals, by measuring the depth of the sand or bentonite frequently as the materials are being placed. However, even if the seals are placed at the exact depths specified in the well design, riser casings that touch one another due to de-centralization in the borehole can inhibit seal placement between the risers and allow vertical movement of groundwater within the borehole between different monitoring zones. The likelihood of vertical leakage through the bentonite seals of a nested well increases with the number of separate casings within the borehole, or when only a small thickness of annular seal exists between the various monitored zones.
It is for these reasons that the installation of nested wells is discouraged or prohibited by some governmental or regulatory agencies.
Successful installation of nested wells has been reported by the U.S. Geological Survey in deep (several hundreds to over one thousand feet), large diameter boreholes (≥12 in), with multiple casings (monitoring zones), resulting in seals that are several tens to hundreds of feet thick. This work illustrates that nested wells can be useful to monitor discrete zones separated by thick borehole seals.
However, nested wells are not recommended for contaminant hydrogeology monitoring mainly because it is generally not possible to obtain enough monitoring intervals in a nested well without compromising the seals between intervals. Further, it is often difficult to conclusively determine if a seal has failed in a nested well. Alternatives to nested wells are engineered nested wells or multilevel monitoring systems that are designed to provide robust annular seals along with the ability to collect groundwater samples and measure hydraulic heads at many discrete depths in a single well.
References
Further reading
Johnson, T.L. 1983. "A comparison of well nests vs. single-well completions." Ground Water Monitoring Review 3 (1):76-78. doi=10.1111/j.1745-6592.1983.tb00864.x
Einarson, Murray D. 2006. "Multi-Level Ground Water Monitoring." In Practical Handbook of Ground Water Monitoring, edited by D.M. Nielsen, 807–848. CRC Press.
External links
Recommendations on Model Criteria for Groundwater Sampling, Testing, and Monitoring of Oil and Gas Development in California
Water wells | Nested wells | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 598 | [
"Hydrology",
"Water wells",
"Environmental engineering"
] |
53,706,715 | https://en.wikipedia.org/wiki/Event%20storming | Event storming is a workshop-based method to quickly find out what is happening in the domain of a software program.
Compared to other methods it is extremely lightweight and intentionally requires no computer support.
The result is expressed in sticky notes on a wide wall.
The business process is "stormed out" as a series of domain events which are denoted as orange stickies.
It was invented by Alberto Brandolini in the context of domain-driven design (DDD).
Event storming can be used as a means for business process modeling and requirements engineering.
The idea is to bring together software developers and domain experts so that they can learn from each other.
The name was chosen to show that the focus should be on the domain events and that the method works similarly to brainstorming or agile modeling's model storming.
Requirements
It is important for an event storming workshop to have the right people present.
This includes people who know the questions to ask (typically developers) and those who know the answers (domain experts, product owners).
The modeling will be placed on a wide wall with a roll of paper rolled out on it.
The sticky notes will be placed on this paper.
You will require at least 5 distinct colors for the sticky notes.
Steps
The first step is to find the domain events and write them on orange sticky notes.
When all domain events are found the second step is to find the command that caused each of the domain events. Commands are written on blue notes and placed directly before the corresponding domain event.
In the third step the aggregates within which commands are executed and where events happen are identified.
The aggregates are written in yellow stickies.
The concepts gathered during an event storming session fall into several categories, each with its own colour of sticky note:
Domain event: An event that occurs in the business process. Written in past tense.
Actor: A person who executes a command through a view.
Business process: Processes a command according to business rules and logic. Creates one or more domain events.
Command: A command executed by a user through a view on an aggregate that results in the creation of a domain event.
Aggregate: Cluster of domain objects that can be treated as a single unit.
External system: A third-party service provider such as a payment gateway or shipping company.
View: A view that users interact with to carry out a task in the system.
Example notes
These are examples; actual notes would differ between organizations. Example notes are typically gathered for each of the following categories: domain events, actors, commands, aggregates, external systems, views, and errors.
Example
Actor: Users
Command: CreateAccount
Domain event: AccountCreated
View: Signup
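A minimal sketch of how this example might map onto code in a DDD-style, event-sourced system; the class and field names are illustrative assumptions, not part of the method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CreateAccount:   # command (blue note), dispatched from the Signup view
    username: str

@dataclass(frozen=True)
class AccountCreated:  # domain event (orange note), named in past tense
    username: str

class Account:         # aggregate (yellow note)
    def handle(self, command: CreateAccount) -> AccountCreated:
        # Business rules and invariants would be checked here.
        return AccountCreated(command.username)

# The actor "Users" triggers the command through the Signup view:
event = Account().handle(CreateAccount("alice"))
print(event)  # AccountCreated(username='alice')
```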
Result
As a result, the business process can be seen on the modeling space.
But more important is the knowledge that was built in the minds of the participants.
References
External links
https://miro.com/miroverse/event-storming/
Collaboration
Group problem solving methods
Software architecture
Software design | Event storming | [
"Engineering"
] | 562 | [
"Design",
"Software design"
] |
53,707,822 | https://en.wikipedia.org/wiki/Radioactivity%20Fixatives | Radioactivity or radionuclide fixatives are specialized polymer coatings used to “fix” radioactive isotopes or radioactive material to surfaces. These fixatives, also known as permanent coatings in the radioactive contamination control field, have been used for many decades in facilities processing radioactive material to control radioactive contamination. There has been increased interest in these fixatives or coatings recently due to the growing concern about contamination from a radiological dispersal device (RDD, also known as a dirty bomb) and because radioactivity fixatives in use today lose the ability to contain radioactivity to the surface during a fire.
Radioactivity fixatives reduce or eliminate the movement of radionuclides from surfaces thereby lowering the health risk of inhalation or other exposure to radioactive isotopes. There are many articles on the use of radioactive fixatives with a review article from 1983 often used as a reference. A more recent review article looks at the use of these radioactive fixatives for use after the detonation of a RDD. Current research is investigating new coatings that are effective at containing radioactive material to the surface during and after fires.
References
Radioactivity | Radioactivity Fixatives | [
"Physics",
"Chemistry"
] | 243 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Radioactivity",
"Nuclear physics"
] |
53,713,224 | https://en.wikipedia.org/wiki/Computational%20thermodynamics | Computational thermodynamics is the use of computers to simulate thermodynamic problems specific to materials science, particularly used in the construction of phase diagrams.
Several open and commercial programs exist to perform these operations. The concept of the technique is the minimization of the Gibbs free energy of the system; the success of this method is due not only to properly measured thermodynamic properties, such as those in the list of thermodynamic properties, but also to the extrapolation of the properties of metastable allotropes of the chemical elements.
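A minimal sketch of the core idea, Gibbs energy minimization, for the simplest non-trivial case: a symmetric binary regular solution of the kind studied by van Laar. The interaction parameter omega is an assumed value, not data for any real system; because the model is symmetric, the common tangent is horizontal and the miscibility-gap (binodal) compositions satisfy dG/dx = 0:

```python
import numpy as np
from scipy.optimize import brentq

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, T, omega):
    """Molar Gibbs energy of mixing for a symmetric regular solution (J/mol)."""
    return R * T * (x * np.log(x) + (1 - x) * np.log(1 - x)) + omega * x * (1 - x)

def binodal(T, omega):
    """Left-hand binodal composition at temperature T (requires T < omega / (2R))."""
    dg = lambda x: R * T * np.log(x / (1 - x)) + omega * (1 - 2 * x)
    return brentq(dg, 1e-9, 0.5 - 1e-9)

omega = 15_000.0  # J/mol, assumed; critical temperature Tc = omega/(2R) ~ 900 K
print(binodal(600.0, omega))  # ~0.07: composition of one phase of the miscibility gap
```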
History
The computational modeling of metal-based phase diagrams, which dates back to the beginning of the 20th century, mainly to Johannes van Laar and the modeling of regular solutions, has evolved in more recent years into CALPHAD (CALculation of PHAse Diagrams). This approach has been pioneered by American metallurgist Larry Kaufman since the 1970s.
Current state
Computational thermodynamics may be considered a part of materials informatics and is a cornerstone of the concepts behind the materials genome project. While crystallographic databases are used mainly as a reference source, thermodynamic databases represent one of the earliest examples of informatics, as these databases were integrated into thermochemical computations to map phase stability in binary and ternary alloys. Many concepts and software tools used in computational thermodynamics are credited to the SGTE Group, a consortium devoted to the development of thermodynamic databases; an open database for the elements is freely available, based on the paper by Dinsdale. This so-called "unary" database proves to be a common basis for the development of binary and multicomponent systems and is used by both commercial and open software in this field.
However, as stated in recent CALPHAD papers and meetings, such a Dinsdale/SGTE database will likely need to be corrected over time, despite the utility of keeping a common basis. In that case, most published assessments would likely have to be revised, similarly to rebuilding a house on a severely broken foundation; this situation has also been depicted as an "inverted pyramid." Merely extending the current approach (limited to temperatures above room temperature) is a complex task. PyCalphad, a Python library, was designed to facilitate simple computational thermodynamics calculations using open-source code. In complex systems, computational methods such as CALPHAD are employed to model thermodynamic properties for each phase and simulate multicomponent phase behavior. CALPHAD has also been applied at high pressures: computational thermodynamic calculations of phase relations in the Fe–C system at high pressures, for example, confirm experimental results. Other researchers have even considered viscosity and other physical parameters that lie beyond the domain of thermodynamics.
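As a concrete example of such software, the sketch below follows the general documented workflow of the open-source pycalphad library mentioned above; the TDB file name is a placeholder for a published Al-Zn assessment, and minor details of the API may differ between versions:

```python
from pycalphad import Database, equilibrium
import pycalphad.variables as v

# Load a CALPHAD assessment (the TDB file name here is a placeholder/assumption).
dbf = Database('alzn_mey.tdb')
comps = ['AL', 'ZN', 'VA']                # VA = vacancies, per CALPHAD convention
phases = ['LIQUID', 'FCC_A1', 'HCP_A3']

# Minimize the total Gibbs energy at fixed T, P, and overall composition.
eq = equilibrium(dbf, comps, phases,
                 {v.X('ZN'): 0.3, v.T: 600, v.P: 101325, v.N: 1})
print(eq.Phase.values.squeeze())  # stable phases at this state point
print(eq.NP.values.squeeze())     # their phase fractions
```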
Future developments
There is still a gap between ab initio methods and operative computational thermodynamics databases. In the past, a simplified approach introduced by the early works of Larry Kaufman, based on Miedema's model, was employed to check the correctness of even the simplest binary systems. However, bridging the two communities of solid-state physics and materials science remains a challenge, as it has been for many years. Promising results from ab initio quantum mechanics molecular simulation packages like VASP are readily integrated into thermodynamic databases with approaches like Zentool.
A relatively easy way to collect data for intermetallic compounds is now available through the Open Quantum Materials Database. A series of papers focused on the concept of zentropy has recently been proposed by Prof. Z.K. Liu and his research group.
See also
Phase diagram
Gibbs energy
Enthalpy of mixing
Miedema's Model
Materials Genome
UNIQUAC
UNIFAC
References
External links
Official CALPHAD website
Python-based libraries for the calculation of phase diagrams and thermodynamic properties
Computational Phase Diagram Database (CPDDB), binary databases, free access with a registration
Open Calphad
Thermocalc for Students
Pandat (free up to three components)
Matcalc (free up to three components, open databases available)
FactSage Education 7.2
Thermodynamic Modeling of Multicomponent Phase Equilibria
NIST
Thermodynamic Modeling using the Calphad Method at ETH Zurich
MELTS Software for thermodynamic modeling of phase equilibria in magmatic systems
SGTE Scientific Group Thermodata Europe
Larry Kaufman at Hmolpedia
Open Quantum Materials Database (OQMD)
University Courses on Computational Thermodynamics
Computational Thermodynamics for Materials Design KTH, Sweden
MatSE580: Computational Thermodynamics of Materials, Pennsylvania State University, USA
Computational Thermodynamics University of Brno, Czech Republic
Computational physics
Materials science | Computational thermodynamics | [
"Physics",
"Materials_science",
"Engineering"
] | 1,009 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan",
"Computational physics"
] |
58,527,422 | https://en.wikipedia.org/wiki/Computational%20Spectroscopy%20In%20Natural%20Sciences%20and%20Engineering | COmputational Spectroscopy In Natural Sciences and Engineering (COSINE) is a Marie Skłodowska-Curie Innovative Training Network in the field of theoretical and computational chemistry, focused on computational spectroscopy. The main goal of the project is to develop theoretical tools: computational codes based on electronic-structure theory for the investigation of organic photochemistry and for the simulation of spectroscopic experiments. It is part of the European Union's Horizon 2020 research funding framework.
Objective
The main purpose of COSINE is the development of ab initio research tools to study optical properties and excited electronic states, which are dominated by electron correlation. These tools are developed for the investigation of organic photochemistry, with the aim of accurately simulating spectroscopic experiments on the computer. To this end, a complementary set of tools rooted in coupled cluster theory, algebraic diagrammatic construction, density functional theory, and selected multi-reference methods is being developed.
Nodes
The project is divided into 8 different nodes:
Node 1, Heidelberg University is the coordinating node, led by Andreas Dreuw
Node 2, KTH Royal Institute of Technology in Stockholm, led by Patrick Norman
Node 3, Ludwig Maximilian University of Munich, led by Christian Ochsenfeld
Node 4, Scuola Normale Superiore in Pisa, led by Chiara Cappelli
Node 5, University of Southern Denmark in Odense, led by Jacob Kongsted
Node 6, L'École Nationale Supérieure de Chimie de Paris, led by Ilaria Ciofini
Node 7, Norwegian University of Science and Technology in Trondheim, led by Henrik Koch
Node 8, Technical University of Denmark in Lyngby, led by Sonia Coriani
Partner organisations
ELETTRA, Sincrotrone Trieste, Italy;
Electromagnetic Geoservices ASA, Norway;
EXACT LAB SRL, Italy;
Nvidia GmbH, Germany;
DELL S.P.A., Italy;
Inc., United States;
PDC Center for High-Performance Computing, KTH, Sweden;
Dipartimento di Scienze Chimiche e Farmaceutiche, Università degli Studi di Trieste, Italy.
References
External links
COSINE homepage
ITN Marie Skłodowska-Curie actions
CORDIS Community REsearch and Development Information Service
College and university associations and consortia in Europe
Computational chemistry
Engineering university associations and consortia | Computational Spectroscopy In Natural Sciences and Engineering | [
"Chemistry"
] | 482 | [
"Theoretical chemistry",
"Computational chemistry"
] |
58,535,019 | https://en.wikipedia.org/wiki/Imputation%20and%20Variance%20Estimation%20Software | Imputation and Variance Estimation Software (IVEware) is a collection of routines, written for various platforms and packaged together, that perform multiple imputation and variance (or standard error) estimation and, in general, draw inferences from incomplete data. It can also be used to perform analyses without any missing data. IVEware defaults to assuming a simple random sample, but uses the Jackknife Repeated Replication or Taylor Series Linearization techniques for analyzing data from complex surveys.
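As an illustration of the replication idea behind such variance estimators, the sketch below implements a delete-one jackknife under a simple-random-sample assumption; IVEware's actual Jackknife Repeated Replication handles strata and replicate weights for complex surveys, so this is only the simplest analogue, with made-up data.

```python
# Delete-one jackknife variance estimate (simple-random-sample analogue of JRR).
import numpy as np

def jackknife_variance(data, stat=np.mean):
    """Estimate the sampling variance of stat(data) by deleting one
    observation at a time and rescaling the spread of the replicates."""
    n = len(data)
    replicates = np.array([stat(np.delete(data, i)) for i in range(n)])
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=50)   # made-up sample
print(jackknife_variance(x))                   # close to var(x)/n for the mean
```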
Overview
Version 0.1 of IVEware was developed in the late 1990s by Trivellore Raghunathan, Peter W. Solenberger, and John Van Hoewyk and released in 1997 as beta software with an official release in 2002 from the Survey Research Center, University of Michigan Institute for Social Research. Version 0.2 was released in 2011 and the newest version, V 0.3, was released in 2017.
The software includes seven modules: IMPUTE, BBDESIGN, DESCRIBE, REGRESS, SASMOD, SYNTHESIZE, and COMBINE.
IVEware can be run with SAS, Stata, R, SPSS, or as a stand-alone tool under Windows or Linux. The R, Stata, SPSS, and stand-alone versions can also be used on macOS. The stand-alone version has limited capabilities for analyzing multiply imputed data, though the routines for creating imputations are the same across all packages, and the command structure is the same across all platforms. IVEware can be executed using the built-in XML editor, or it can be run from the built-in editor within the four software packages mentioned above. If the provided XML editor is used to execute IVEware commands, the user can also mix and match code from these software packages through a standard XML toggle-parser; for example, <SAS name="myfile">SAS commands</SAS> will execute the SAS commands and store them in the file "myfile.sas".
References
Further reading
Raghunathan, T.E., Lepkowski, J., Van Hoewyk, J. and Solenberger, P. (2001). A multivariate technique for multiply imputing missing values using a sequence of regression models. Survey Methodology, 27(1): 85-95.
Raghunathan, T. E, Berglund, P., and Solenberger, P. W. (2018). Multiple Imputation in Practice: With Examples Using IVEware. Boca Raton, FL: CRC Press
Bondarenko, I. & Raghunathan, T. E. (2016). Graphical and numerical diagnostic tools to assess suitability of multiple imputations and imputation models. Statistics in Medicine, 35, 3007-3020.
Bondarenko, I. & Raghunathan, T. E. (2010). Multiple imputation for causal inference. Section on Survey Research Methods-JSM.
Raghunathan, T. E., Solenberger, P., Berglund, P., van Hoewyk, J. (2017). IVEware: Imputation and Variance Estimation Software (Version 0.3): Complete User Guide. Ann Arbor: Survey Research Center, University of Michigan.
External links
IVEware Version 0.3
IVEware Versions 0.1 and 0.2
Survey Research Center
Statistical software | Imputation and Variance Estimation Software | [
"Mathematics"
] | 702 | [
"Statistical software",
"Mathematical software"
] |
58,537,932 | https://en.wikipedia.org/wiki/C25H32O3 | {{DISPLAYTITLE:C25H32O3}}
The molecular formula C25H32O3 (molar mass: 380.520 g/mol, exact mass: 380.2351 u) may refer to:
Levonorgestrel cyclopropylcarboxylate
Nilestriol
Molecular formulas | C25H32O3 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
58,537,950 | https://en.wikipedia.org/wiki/C26H34O3 | {{DISPLAYTITLE:C26H34O3}}
The molecular formula C26H34O3 (molar mass: 394.546 g/mol) may refer to:
Androstanolone benzoate
Levonorgestrel cyclobutylcarboxylate
Molecular formulas | C26H34O3 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
55,215,371 | https://en.wikipedia.org/wiki/Ced-3 | Ced-3 is one of the major protein components of the programmed cell death (PCD) pathway in Caenorhabditis elegans. In total, 14 genes are involved in programmed cell death, other important ones being the ced-4 and ced-9 genes. A healthy nematode worm requires 131 somatic cell deaths among its 1090 cells during the developmental stages. The gene initially encodes a prototypical procaspase, which is cleaved at aspartate residues by its active cysteine residue and thus becomes a functional caspase. Ced-3 is an executioner caspase (cysteine-dependent aspartate-directed protease) that must dimerize with itself and be initiated by ced-4 in order to become active. Once active, it triggers a series of reactions that ultimately lead to the apoptosis of targeted cells.
Programmed cell death in C. elegans occurs in the embryonic and post-embryonic stages, in both somatic and germline cells. The ced-3 transcript peaks during embryogenesis, when numerous cells must undergo cell suicide. Most programmed cell deaths occur in the brain tissue of C. elegans, where the majority of cells targeted for death derive from neuronal and glial lineages. There, ced-3 is localized to perinuclear regions of the cells.
In order for ced-3 to become functional, it requires autocatalytic cleavage, which is initiated by ced-4 acting as an initiator caspase. The ced-3 gene acts downstream of ced-4, which positively regulates ced-3. Ced-3 can also be indirectly inhibited by ced-9, which prevents apoptosis by inhibiting the function of ced-4 and thus the activation of ced-3.
The ced-3 ortholog in humans is caspase 9, an interleukin-1β converting enzyme (ICE); the ortholog in mice was found to be the Nedd-2 gene.
History
In 1986, researchers Hilary Ellis and H. Robert Horvitz discovered that the ced-3 and ced-4 genes were somehow involved in apoptosis.
Later, in 2002, Sydney Brenner, H. Robert Horvitz, and John E. Sulston were awarded the Nobel Prize in Physiology or Medicine for their research on programmed cell death. They were able to visualize the process of PCD using differential interference contrast (DIC) microscopy.
During their research, Ellis performed various experiments mutating the ced-3 gene and found that cells carrying the mutated ced-3 gene all survived, even though they were originally targeted for cell death. This led to the discovery of the ced-3 protein and its role in PCD; prior to the experiment, ced-3 had been thought to act as a repressor of the ced-1 gene. Ced-1 and ced-2 were the first ced genes to be discovered, in 1983.
In order for biologists to learn about PCD, they needed a model organism and this was first introduced by Sydney Brenner in 1974 with the nematode, C. elegans. This organism would serve as the subject of research for many years, leading to other biological discoveries, not only for C. elegans but for mammals as well.
Function
One of the main roles of the ced-3 protein in C. elegans is to support the development and growth of the organism. Without apoptosis, cells that are damaged or aged cannot be cleared and replaced with newer, healthier cells to sustain growth. Targeted cells are fated to die at certain times and places during development, which shows that the deaths are part of a developmental plan. These cells once had a function necessary to the growth of the organism but later become useless and are targeted for elimination. Other roles of programmed cell death include tissue homeostasis and disease prevention. If a cell is transformed or its DNA has been damaged, the cell must be degraded before further damage can be done.
In a recent study, it was found that in C. elegans, programmed cell death is also related to an immune response to pathogenic infection. By eliminating the infected cells, the nematode can ensure its survival against the attack. C. elegans also undergoes major anatomical changes that must be mediated by programmed cell death, and PCD was found to be regulated by environmental conditions, since cell deaths were more commonly found in old, starving worms than in young, healthy worms.
Ced-3 during apoptosis
During the process of apoptosis, the cell undergoes:
DNA fragmentation
Nucleus fragmentation
Disruption of cytoskeletal proteins
Golgi matrix protein fragmentation
Phagocytosis of neighbouring cells
Cytoplasm shrinkage
As a wild-type protein, ced-3 cleaves other protein substrates within the cell and triggers apoptosis. In the nucleus, ced-3 cleaves DCR-1 so that RNA can no longer be processed, converting the RNase into a DNase and thus promoting DNA degradation in the nucleus and mitochondrial elimination in the cytoplasm. Afterwards, ced-3 indirectly releases another protein, WAH-1, which causes signals on the cell surface to be displayed so that the cell can be phagocytosed by a neighbouring cell.
Structure
In C. elegans, the ced-3 gene is found on chromosome 4, has an exon count of 8, and is a protein-coding gene. The gene encodes a caspase, more specifically a cysteine-aspartate protease. The gene is described as a "Cell death protein 3", and it is an ortholog of the mammalian caspase 9 gene. Its name is derived from the term "cell death".
Structurally, ced-3 has two protein domains:
CARD domain (Caspase recruitment domain)
Caspase domain
CARD domains mediate protein-protein interactions; the CARD domains of ced-3 and ced-4 are able to engage in homophilic interactions with each other. The caspase domain is the main domain of the protein, where the cleavage activity of the protease takes place. The active protease contains a large and a small subunit, the large subunit weighing 17 kDa and the small subunit 15 kDa.
Ced-3 has two isoforms, isoform a and isoform b. Isoform a has a transcript length of 2437 nucleotides (nt), a 1512 nt coding sequence, and a protein length of 503 amino acids (aa). Isoform b has an 864 nt transcript, an 864 nt coding sequence, and a protein length of 287 aa. The middle regions of the amino acid sequence are rich in serine residues, but these regions are not conserved in the human ICE proteins. Instead, the carboxy-terminal regions of the proteins are the most well conserved in both humans and mice.
Mechanism
The ced-3 gene is highly expressed in the mothers of daughter cells that are targeted to die. The procaspase ced-3 gene product made in mother cells is inherited by the daughter cells, where it is translated and activated.
When the ced-3 gene is translated, the protein is first made as a precursor that must undergo modifications to become an active caspase. First, the active cysteine recognizes specific aspartate-containing sequences and cleaves at the aspartate, which causes the C-terminal domain and the central polypeptides to heterodimerize and form the protease. This is an autocatalytic process, meaning that the ced-3 protein cleaves itself in order to become functional. The remaining N-terminal domain, now called the prodomain, is part of the CARD domain but is not part of the cleaved protease. The prodomain is recognized by ced-4, which consequently initiates ced-3 processing. Prior to this, apoptosis must be triggered by increased expression of another protein, the "death receptor" EGL-1. EGL-1 binds to and inhibits ced-9, an inhibitory protein that otherwise recognizes and binds ced-4 so that ced-4 can no longer activate ced-3; while ced-9 remains active, apoptosis fails and the cell continues to live. These four proteins, including ced-3, are considered to make up the core apoptotic machinery, which is also found in mammalian orthologs.
Once the ced-3 caspase is activated, the same cysteine residue of the protease recognizes aspartate residues in other proteins and cleaves them. These proteins are found in the nucleus, nuclear lamina, cytoskeleton, endoplasmic reticulum, and cytosol. Cleaving these proteins sets off a series of pathways leading to the degradation of the cell.
Significance
Ced-3 is a critical part of the programmed cell death pathway, which is well known for its association with cancer, autoimmune diseases, and neurodegenerative diseases in mammals. The discovery of ced-3 function and mutations in C. elegans led to an understanding of how programmed cell death works in mammals. C. elegans served as a model organism that allowed researchers to compare ortholog genes in the programmed cell death pathway. The ortholog of the ced-3 gene is caspase 9, and its mutated form is involved in the origin of certain cancers and tumourous tissues. A mutation in the caspase gene can either render the protein non-functional, allowing cells to live and accumulate in the tissue, or allow a cell with damaged DNA to survive and disrupt the body further. This occurs commonly in the brain, leading to neurodevelopmental or neurodegenerative diseases.
Mutations
Various experiments were performed on C. elegans to determine the function of ced-3. Most of these experiments involved mutating the ced-3 gene and observing how that affected the worm's development overall. With loss-of-function mutations in the ced-3 gene, somatic cells that were programmed to die were instead found alive. With missense mutations in the ced-3 gene, there was a decrease in ced-3 activation by ced-4, indicating that the prodomain was affected. A deletion mutation in the protease region of ced-3 also decreased the effectiveness of cell-death activity. Finally, with gain-of-function mutations, worms were found with more than the normal 131 dead cells.
Interactions
Ced-3 has been shown to interact with:
ced-4
ced-9
EGL-1 (BH3)
ced-1
References
Programmed cell death
Caenorhabditis elegans genes | Ced-3 | [
"Chemistry",
"Biology"
] | 2,381 | [
"Senescence",
"Programmed cell death",
"Signal transduction"
] |
55,215,749 | https://en.wikipedia.org/wiki/Catalan%20time%20system | The Catalan time system is the traditional manner of telling time in Catalan, and it is exclusive to this language. Telling the time in this system works by dividing the hour into fractions of a quarter and half a quarter. The fractions refer to the hour in progress, taking into account that when a clock reaches a whole hour (e.g. three o'clock) it actually marks that hour's end.
The order is quarts-minuts-hora posterior (quarters-minutes-next hour). Hence, for example, 10:15 h would be un quart d'onze ("a quarter of eleven"); 12:30 h, dos quarts d'una ("two quarters of one"); and 19:52 h would be tres quarts i set minuts de vuit ("three quarters and seven minutes of eight"). Additionally, there are slight variations: the expression dos quarts ("two quarters") can be shortened to just quarts ("quarters"), and mig quart ("half a quarter") is used as an approximation in place of the overly specific set minuts i mig ("seven minutes and a half").
Notation examples
"Quarters" (quarts) examples:
01:15 un quart de dues ("a quarter of two")
01:30 dos quarts de dues ("two quarters of two")
01:45 tres quarts de dues ("three quarters of two")
Whenever it is clear that you are referring specifically to the time (for example, when someone asks for it), you can give the time while leaving out the word quarts ("quarters"), thus shortening the notation as follows:
07:45 tres de vuit ("three [quarters] of eight")
16:30 dos de cinc ("two [quarters] of five")
"Half a quarter" (mig quart) examples:
02:07:30 mig quart de tres (half a quarter of three)
02:22:30 un quart i mig de tres (a quarter and a half of three)
02:37:30 dos quarts i mig de tres (two quarters and a half of three)
02:52:30 tres quarts i mig de tres (three quarters and a half of three)
The Catalan time system is restricted to 12-hour clocks; if necessary, one can specify whether it is an hour of the morning (matinada or matí), noon (migdia), afternoon (tarda or vesprada), evening (vespre) or night (nit).
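The quarts-minuts-hora posterior rule is mechanical enough to code. The following is a simplified sketch: it handles quarter boundaries and trailing minutes as in the examples above, but omits the mig quart approximation, the quarts shortening, article agreement (e.g. la una), and regional variants; minute counts are left as digits rather than Catalan number words.

```python
def catalan_time(hour, minute):
    """Render a 24-hour time in the Catalan quarter-based system (simplified)."""
    hores = ["dotze", "una", "dues", "tres", "quatre", "cinc",
             "sis", "set", "vuit", "nou", "deu", "onze"]

    def de(h):  # Catalan elision: "d'onze" but "de vuit"
        return ("d'" + h) if h[0] in "aeiou" else ("de " + h)

    quarts, extra = divmod(minute, 15)
    next_hour = hores[(hour + 1) % 12]
    if quarts == 0 and extra == 0:
        return "les " + hores[hour % 12] + " en punt"   # on the hour
    if quarts == 0:
        return "les " + hores[hour % 12] + " i " + str(extra) + " minuts"
    noms = {1: "un quart", 2: "dos quarts", 3: "tres quarts"}
    base = noms[quarts] + ("" if extra == 0 else " i " + str(extra) + " minuts")
    return base + " " + de(next_hour)

print(catalan_time(10, 15))   # un quart d'onze
print(catalan_time(12, 30))   # dos quarts d'una
print(catalan_time(19, 52))   # tres quarts i 7 minuts de vuit (digits, not words)
```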
References
External links
Rellotge Català, dissenyat per a facilitar la lectura segons el sistema basat en quarts (in Catalan)
Marca de rellotges amb l'hora catalana (in Catalan)
Revisió històrica de la notació de l'hora en els territoris de parla catalana (in Catalan)
L'enunciat de les hores en català (in Catalan)
Quina hora és?, article by Núria Puyuelo (in Catalan)
Time measurement systems
Catalan traditions | Catalan time system | [
"Physics"
] | 683 | [
"Spacetime",
"Time measurement systems",
"Physical quantities",
"Time"
] |
52,320,486 | https://en.wikipedia.org/wiki/Clostebol%20acetate | Clostebol acetate (BAN; brand names Macrobin, Steranabol, Alfa-Trofodermin, and Megagrisevit; also known as 4-chlorotestosterone 17β-acetate (4-CLTA) or 4-chloroandrost-4-en-17β-ol-3-one 17β-acetate) is a synthetic, injected anabolic-androgenic steroid (AAS) and a derivative of testosterone that is marketed in Germany and Italy. It is an androgen ester – specifically, the C17β acetate ester of clostebol (4-chlorotestosterone) – and acts as a prodrug of clostebol in the body. Clostebol acetate is administered via intramuscular injection.
See also
Clostebol caproate
Clostebol propionate
Norclostebol
Norclostebol acetate
Oxabolone
Oxabolone cipionate
References
Acetate esters
Androgen esters
Anabolic–androgenic steroids
Androstanes
Organochlorides
Prodrugs | Clostebol acetate | [
"Chemistry"
] | 246 | [
"Chemicals in medicine",
"Prodrugs"
] |
52,322,492 | https://en.wikipedia.org/wiki/Remdesivir | Remdesivir, sold under the brand name Veklury, is a broad-spectrum antiviral medication developed by the biopharmaceutical company Gilead Sciences. It is administered via injection into a vein. During the COVID19 pandemic, remdesivir was approved or authorized for emergency use to treat COVID19 in numerous countries.
Remdesivir was originally developed to treat hepatitis C, and was subsequently investigated for Ebola virus disease and Marburg virus infections before being studied as a post-infection treatment for COVID19.
Remdesivir is a prodrug that is intended to allow intracellular delivery of GS-441524 monophosphate and subsequent biotransformation into GS-441524 triphosphate, a ribonucleotide analogue inhibitor of viral RNA polymerase.
The most common side effect in healthy volunteers is raised blood levels of liver enzymes. The most common side effect in people with COVID19 is nausea. Side effects may include liver inflammation and an infusion-related reaction with nausea, low blood pressure, and sweating.
The US Food and Drug Administration (FDA) considers it to be a first-in-class medication.
Medical uses
In the European Union, remdesivir is indicated for the treatment of COVID19 in adults and adolescents (aged twelve years and older with body weight at least ) with pneumonia requiring supplemental oxygen and for adults who do not require supplemental oxygen and who are at increased risk of progressing to severe COVID19.
In the United States, remdesivir is indicated for the treatment of COVID19 in people 28 days of age and older and weighing at least who are hospitalized; or not hospitalized and have mild-to-moderate COVID19, and are at high risk for progression to severe COVID19, including hospitalization or death.
In November 2020, the FDA issued an emergency use authorization (EUA) for the combination of baricitinib with remdesivir, for the treatment of suspected or laboratory confirmed COVID19 in hospitalized people two years of age or older requiring supplemental oxygen, invasive mechanical ventilation, or extracorporeal membrane oxygenation (ECMO).
In Australia, it is approved for those aged four weeks of age and older with a body weight at least with pneumonia requiring supplemental oxygen or those aged four weeks of age and older with body weight at least who do not require supplemental oxygen and who are at high risk of progressing to severe COVID19.
In 2024, a retrospective study found treatment with the antiviral remdesivir plus dexamethasone was associated with fewer deaths in hospitalized COVID-19 patients compared to dexamethasone alone. The combination led to a 26% reduction in mortality at 14 days and a 24% reduction at 28 days.
Side effects
The most common adverse effects in people treated with remdesivir were respiratory failure and blood biomarkers of organ impairment, including low albumin, low potassium, low count of red blood cells, low count of thrombocytes, and elevated bilirubin (jaundice). Other reported adverse effects include gastrointestinal distress, elevated transaminase levels in the blood (liver enzymes), infusion site reactions, and electrocardiogram abnormalities. Remdesivir may cause infusion-related reactions, including low blood pressure, nausea, vomiting, sweating or shivering.
Other possible side effects of remdesivir include:
Infusion-related reactions. Infusion-related reactions have been seen during a remdesivir infusion or around the time remdesivir was given. Signs and symptoms of infusion-related reactions may include: low blood pressure, nausea, vomiting, sweating, and shivering.
Increases in levels of liver enzymes, seen in abnormal liver blood tests. Increases in levels of liver enzymes have been seen in people who have received remdesivir, which may be a sign of inflammation or damage to cells in the liver.
Pharmacology
Activation
Remdesivir is a protide (prodrug of nucleotide) able to diffuse into cells, where it is converted to GS-441524 monophosphate via the actions of esterases (CES1 and CTSA) and a phosphoamidase (HINT1); this in turn is further phosphorylated to its active metabolite triphosphate by nucleoside-phosphate kinases. This pathway of bioactivation is meant to occur intracellularly, but a substantial amount of remdesivir is prematurely hydrolyzed in plasma, with GS-441524 being the major metabolite in plasma, and the only metabolite remaining two hours after dosing.
Mechanism of action
As an adenosine nucleoside triphosphate analog (GS-443902), the active metabolite of remdesivir interferes with the action of viral RNA-dependent RNA polymerase and evades proofreading by viral exoribonuclease (ExoN), causing a decrease in viral RNA production. In some viruses, such as the respiratory syncytial virus, it causes the RNA-dependent RNA polymerases to pause, but its predominant effect (as in Ebola) is to induce an irreversible chain termination. Unlike with many other chain terminators, this is not mediated by preventing addition of the immediately subsequent nucleotide, but is instead delayed, occurring after five additional bases have been added to the growing RNA chain. For the RNA-dependent RNA polymerases of MERS-CoV, SARS-CoV-1, and SARS-CoV-2, arrest of RNA synthesis occurs after incorporation of three additional nucleotides. Hence, remdesivir is classified as a direct-acting antiviral agent that works as a delayed chain terminator.
Pharmacokinetics
In non-human primates, the plasma half-life of the prodrug is 20 minutes, with the main metabolite being the nucleoside GS-441524. Two hours post injection, the main metabolite GS-441524 is present at micromolar concentrations, whilst intact remdesivir is no longer detectable. Because of this rapid extracellular conversion to the nucleoside GS-441524, some researchers have questioned whether the active nucleotide triphosphate is truly derived from remdesivir prodrug processing or whether it arises by GS-441524 phosphorylation, and whether direct administration of GS-441524 would constitute a cheaper and easier-to-administer COVID19 drug compared to remdesivir. The activated nucleotide triphosphate form has sustained intracellular levels in PBMCs and presumably in other cells as well.
Resistance
Mutations in the mouse hepatitis virus RNA replicase that cause partial resistance to remdesivir were identified in 2018. These mutations make the viruses less effective in nature, and the researchers believe they will likely not persist where the drug is not being used.
Interactions
Remdesivir is at least partially metabolized by the cytochrome P450 enzymes CYP2C8, CYP2D6, and CYP3A4. Blood plasma concentrations of remdesivir are expected to decrease if it is administered together with cytochrome P450 inducers such as rifampicin, carbamazepine, phenobarbital, phenytoin, primidone, and St John's wort.
Using chloroquine or hydroxychloroquine with remdesivir may reduce the antiviral activity of remdesivir. Coadministration of remdesivir and chloroquine phosphate or hydroxychloroquine sulfate is not recommended based on in vitro data demonstrating an antagonistic effect of chloroquine on the intracellular metabolic activation and antiviral activity of remdesivir.
Synthesis
Remdesivir can be synthesized in multiple steps from ribose derivatives. One such synthesis route was invented by Chun and coauthors from Gilead Sciences.
In vitro experiments
An in vitro study of remdesivir assessing antiviral activity against SARS-CoV-2 was performed. Cells were pre-treated with the different doses of remdesivir for 1 hour, and the virus (MOI of 0.05) was subsequently added to allow infection for 2 hours. The results found that remdesivir functioned well as an inhibitor of the infection. The study was published as a letter to the editor, and as such did not undergo peer review.
Manufacturing
Remdesivir requires "70 raw materials, reagents, and catalysts" to make, and approximately "25 chemical steps." Some of the ingredients are extremely dangerous to humans, especially trimethylsilyl cyanide. The original end-to-end manufacturing process required 9 to 12 months to go from raw materials at contract manufacturers to finished product, but after restarting production in January 2020, Gilead Sciences was able to find ways to reduce the production time to six months.
In January 2020, Gilead began working on restarting remdesivir production in glass-lined steel chemical reactors at its manufacturing plant in Edmonton, Alberta. On 2 February 2020, the company flew its entire stock of remdesivir, 100 kilograms in powder form (left over from Ebola research), to its filling plant in La Verne, California to start filling vials. The Edmonton plant finished its first new batch of remdesivir in April 2020. Around the same time, fresh raw materials began to arrive from contract manufacturers reactivated by Gilead in January.
Another challenge is getting remdesivir into patients despite the drug's "poor predicted solubility and poor stability." In June 2020, Ligand Pharmaceuticals revealed that Gilead has been managing those issues by mixing Ligand's proprietary excipient Captisol (based on University of Kansas research into cyclodextrin) with remdesivir at a 30:1 ratio. Since that implies an enormous amount of Captisol is needed to stabilize and deliver remdesivir (on top of amounts needed for several other drugs for which the excipient is already in regular use), Ligand announced that it is trying to boost Captisol annual manufacturing capacity to as much as 500 metric tons.
On 12 May 2020, Gilead announced that it had granted non-exclusive voluntary licenses to five generic drug companies in India and Pakistan to manufacture remdesivir for distribution to 127 countries. The agreements were structured so that the licensees can set their own prices and will not have to pay royalties to Gilead until the WHO declares an end to the COVID19 emergency or another medicine or vaccine is approved for COVID19, whichever comes first. On 23 June 2020, India granted emergency marketing approval of generic remdesivir manufactured by two Gilead licensees, Cipla and Hetero Drugs.
Society and culture
Legal status
Remdesivir is approved, or authorized for emergency use, to treat COVID19 in many countries. Remdesivir has been authorized for emergency use in India, Singapore, and approved for use in Japan, the European Union, the United States, and Australia for people with severe symptoms.
Remdesivir is the first treatment for COVID19 to be approved by the US Food and Drug Administration (FDA). The approval by the FDA does not include the entire population that had been authorized to use remdesivir under an Emergency Use Authorization (EUA) originally issued in May 2020. In order to ensure continued access to the pediatric population previously covered under the EUA, the FDA revised the EUA for remdesivir to authorize the drug's use for treatment of suspected or laboratory-confirmed COVID19 in hospitalized pediatric patients weighing to less than or hospitalized pediatric patients less than twelve years of age weighing at least .
Australia
In July 2020, remdesivir was provisionally approved for use in Australia for use in adults and adolescents with severe COVID19 symptoms who have been hospitalized. Australia claims to have a sufficient supply of remdesivir in its national stockpile.
Canada
As of 11 April 2020, access in Canada was available only through clinical trials. Health Canada approved requests to treat twelve people with remdesivir under the department's special-access program (SAP). Additional doses of remdesivir are not available through the SAP except for pregnant women or children with confirmed COVID19 and severe illness.
In June 2020, Health Canada received an application from Gilead for the use of remdesivir for treating COVID19. On 27 July 2020, Health Canada conditionally approved the application.
In September 2020, Minister of Public Services and Procurement Anita Anand announced that Canada had entered into a deal to obtain up to 150,000 vials of remdesivir from Gilead starting in October. As of 8 October, remdesivir was still not widely available in Alberta, because Alberta Health Services was undertaking a "formulary review" to be completed by mid-November.
Czech Republic
In March 2020, the drug was provisionally approved for use for COVID19 patients in a serious condition as a result of the outbreak in the Czech Republic.
European Union
In February 2016, orphan designation (EU/3/16/1615) was granted by the European Commission to Gilead Sciences International Ltd, United Kingdom, for remdesivir for the treatment of Ebola virus disease.
In April 2020, the European Medicines Agency (EMA) provided recommendations on compassionate use of remdesivir for COVID19 in the EU.
In May 2020, the Committee for Medicinal Products for Human Use (CHMP) of the EMA recommended expanding the compassionate use of remdesivir to those not on mechanical ventilation. In addition to those undergoing invasive mechanical ventilation, the compassionate use recommendations cover the treatment of hospitalized individuals requiring supplemental oxygen, non-invasive ventilation, high-flow oxygen devices or ECMO (extracorporeal membrane oxygenation). The updated recommendations were based on preliminary results from the NIAID-ACTT study, which suggested a beneficial effect of remdesivir in the treatment of hospitalized individuals with severe COVID19. In addition, a treatment duration of five days was introduced alongside the longer ten-day course, based on preliminary results from another study (GS-US-540-5773) suggesting that for those not requiring mechanical ventilation or ECMO, the treatment course may be shortened from ten to five days without any loss of efficacy. Individuals who receive a five-day treatment course but do not show clinical improvement will be eligible to continue receiving remdesivir for an additional five days.
In July 2020, the European Union granted a conditional marketing authorization for remdesivir with an indication for the treatment of COVID19 in adults and adolescents (aged twelve years and older with body weight at least ) with pneumonia requiring supplemental oxygen.
In August 2022, the European Union granted a full marketing authorization for remdesivir.
Iran
Remdesivir has also been produced in Iran by Barakat; Iran is planning to increase the production of remdesivir ampoules from 20,000 to 150,000 per month. It also has the permission of the Food and Drug Administration of MOHME.
Japan
In May 2020, Japan's Ministry of Health, Labour and Welfare approved the drug for use in Japan, in a fast-tracked process, based on the US emergency authorization.
Mexico
In October 2020, Deputy Secretary of Prevention and Health Promotion Hugo López-Gatell Ramírez stated at a news conference that Mexico would not necessarily follow the United States in approving the drug for use in Mexico. López-Gatell explained that the Federal Commission for the Protection against Sanitary Risk (Cofepris) had already twice denied the approval of remdesivir because, in that agency's view, the evidence does not suggest "sufficient efficacy". In March 2020, Cofepris authorized the drug for emergency cases, advising to give continuous surveillance of the integral health of the patient.
Singapore
In June 2020, Singapore's Health Sciences Authority conditionally approved the usage of remdesivir in Singapore.
United States
In March 2020, United States President Donald Trump announced that remdesivir was available for "compassionate use" for people with COVID19; FDA Commissioner Stephen Hahn confirmed the statement at the same press conference. It was later revealed that Gilead had been providing remdesivir in response to compassionate use requests since 25 January. On 23 March 2020, Gilead voluntarily suspended access for compassionate use (excepting cases of critically ill children and pregnant women), for reasons related to supply, citing the need to continue to provide the agent for testing in clinical trials.
In May 2020, the US Food and Drug Administration granted Gilead emergency use authorization (EUA) for remdesivir to be distributed and used by licensed healthcare providers to treat adults and children hospitalized with severe COVID19. Severe COVID19 is defined as patients with an oxygen saturation (SpO2) ≤ 94% on room air, or requiring supplemental oxygen, mechanical ventilation, or extracorporeal membrane oxygenation (ECMO), a heart–lung bypass machine. Distribution of remdesivir under the EUA was controlled by the US government for use consistent with the terms and conditions of the EUA. Gilead supplied remdesivir to authorized distributors, or directly to a US government agency, who distributed it to hospitals and other healthcare facilities as directed by the US government, in collaboration with state and local government authorities, as needed. Gilead stated they were donating 1.5 million vials for emergency use and estimated, as of April 2020, that they had enough remdesivir for 140,000 treatment courses and expected to have 500,000 courses by October 2020, and one million courses by the end of 2020.
The initial distribution of the drug in the US was tripped up by seemingly capricious decision-making and finger-pointing, resulting in over a week of confusion and frustration among healthcare providers and patients alike. On 9 May 2020, the United States Department of Health and Human Services (HHS) explained in a statement that it would be distributing remdesivir vials to state health departments, then would allow each department to redistribute vials to hospitals in their respective states based upon each department's insight into "community-level needs." HHS also clarified that only 607,000 vials of Gilead's promised donation of 1.5 million vials would be going to American patients. However, HHS did not explain why several states with some of the highest caseloads had been omitted from the first two distribution rounds, including California, Florida, and Pennsylvania. In May 2020, Gilead indicated they would increase the number of doses donated to the US from 607,000 to around 940,000. Some of the initial distribution was sent to the wrong hospitals, to hospitals with no intensive care units, and to facilities without the needed refrigeration to store it.
In June 2020, HHS announced an unusual agreement with Gilead in which HHS agreed to Gilead's wholesale acquisition price, HHS would continue to work together with state governments and drug wholesaler AmerisourceBergen to allocate shipments of remdesivir vials to American hospitals through the end of September 2020, and in exchange, during that three-month timeframe (July, August, and September), American patients would be allocated over 90% of Gilead's projected remdesivir output of more than 500,000 treatment courses. Absent from these announcements was any discussion of allocation of remdesivir production to the approximately 70 countries omitted from Gilead's generic drug licensing agreements—including much of Europe and countries as populous as Brazil, China, and Mexico—or the 127 countries listed on those agreements (during the time it will take for Gilead's generic licensees to ramp up their own production). As the implications of this began to sink in, several countries publicly confirmed the next day that they already had adequate supplies of remdesivir to cover current needs, including Australia, Germany, and the United Kingdom.
In August 2020, the FDA broadened the Emergency Use Authorization (EUA) for remdesivir to include all hospitalized patients with suspected or laboratory-confirmed COVID19, irrespective of the severity of their disease. The Fact Sheet was updated to reflect the new guidance.
In October 2020, Gilead and HHS announced that HHS was relinquishing control over remdesivir allocation because production of the drug had finally caught up with US domestic demand. AmerisourceBergen will remain the sole distributor of Veklury in the US through the end of 2020.
On 22 October 2020, the FDA approved remdesivir and also revised the EUA to permit the use of remdesivir for treatment of suspected or laboratory confirmed COVID19 in hospitalized children weighing to less than or hospitalized children less than twelve years of age weighing at least . This decision was criticized for an alleged lack of previous consultation on part of the FDA given the complications of antiviral drug issues.
In November 2020, the FDA issued an EUA for the combination of baricitinib with remdesivir, for the treatment of suspected or laboratory-confirmed COVID19 in hospitalized people two years of age or older requiring supplemental oxygen, invasive mechanical ventilation, or extracorporeal membrane oxygenation (ECMO). The data supporting the EUA for baricitinib combined with remdesivir are based on a randomized, double-blind, placebo-controlled clinical trial (ACTT-2), which was conducted by the National Institute of Allergy and Infectious Diseases (NIAID). The EUA was issued to Eli Lilly and Company.
Remdesivir received approval from the US Food and Drug Administration (FDA) in October 2020, for use in adults and children twelve years and older requiring hospitalization for treatment of severe COVID19 infections. In January 2022, the FDA gave regulatory approval to remdesivir for use in adults and children twelve years of age and older who weigh at least , are positive for COVID19, are not hospitalized, and are ill with COVID19 with a high risk of developing severe COVID19, including hospitalization or death. In April 2022, the FDA expanded the approval of remdesivir to include people 28 days of age and older weighing at least .
The FDA also provided emergency use authorization in 2022, for remdesivir treatment of children under age twelve who are COVIDpositive and not hospitalized, but have mild-to-moderate COVID19 with high risk of developing severe infection, including hospitalization or death.
Economics
In June 2020, Gilead announced that it had set the price of remdesivir at per vial for the governments of developed countries, including the United States, and for US private health insurance companies. The expected course of treatment is six vials over five days for a total cost of . As remdesivir is a repurposed drug, its minimum production cost is estimated at per day of treatment.
In July 2020, the European Union secured a () contract with Gilead, to make the drug available there in early August 2020. In October 2020, Gilead Sciences and the European Commission announced they had signed a joint procurement framework contract in which Gilead agreed to provide up to 500,000 remdesivir treatment courses over the next six months to 37 European countries. Among the contracting countries were all 27 EU member states plus the United Kingdom, "Albania, Bosnia & Herzegovina, Iceland, Kosovo, Montenegro, North Macedonia, Norway, and Serbia". At the time, the price per treatment course was not disclosed; Reuters reported the price was 2,070 euros, thereby implying the total value of the contract (if all 500,000 courses are ordered) is approximately €1.035 billion. Under the contract, each participating country will directly place orders with Gilead and pay Gilead directly for its own orders.
Names
Remdesivir is the international nonproprietary name (INN) while the development code name was GS-5734.
Research
Remdesivir was originally created and developed by Gilead Sciences in 2009, to treat hepatitis C and respiratory syncytial virus (RSV). It did not work against hepatitis C or RSV, but was then repurposed and studied as a potential treatment for Ebola virus disease and Marburg virus infections. According to the Czech News Agency, this new line of research was carried out under the direction of scientist Tomáš Cihlář. A collaboration of researchers from the Centers for Disease Control and Prevention (CDC) and Gilead Sciences subsequently discovered that remdesivir had antiviral activity in vitro against multiple filoviruses, pneumoviruses, paramyxoviruses, and coronaviruses.
Preclinical and clinical research and development was done in collaboration between Gilead Sciences and various US government agencies and academic institutions.
During the mid-2010s, the Mintz Levin law firm prosecuted various patent applications for remdesivir on behalf of Gilead Sciences before the United States Patent and Trademark Office (USPTO). The USPTO granted two patents on remdesivir to Gilead Sciences on 9 April 2019: one for filoviruses, and one which covered both arenaviruses and coronaviruses.
Ebola
In October 2015, the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) announced preclinical results that remdesivir had blocked the Ebola virus in Rhesus monkeys. Travis Warren, who has been a USAMRIID principal investigator since 2007, said that the "work is a result of the continuing collaboration between USAMRIID and Gilead Sciences". The "initial screening" of the "Gilead Sciences compound library to find molecules with promising antiviral activity" was performed by scientists at the Centers for Disease Control and Prevention (CDC). As a result of this work, it was recommended that remdesivir "should be further developed as a potential treatment."
Remdesivir was rapidly pushed through clinical trials due to the West African Ebola virus epidemic of 2013–2016, eventually being used in people with the disease. Preliminary results were promising; it was used in the emergency setting during the Kivu Ebola epidemic that started in 2018, along with further clinical trials, until August 2019, when Congolese health officials announced that it was significantly less effective than monoclonal antibody treatments such as ansuvimab and atoltivimab/maftivimab/odesivimab. The trials, however, established its safety profile.
COVID-19
Hospitalized patients
Remdesivir was approved for medical use in the United States in October 2020. The US Food and Drug Administration (FDA) approved remdesivir based on the agency's analysis of data from three randomized, controlled clinical trials that included participants hospitalized with mild-to-severe COVID19. The FDA granted approval and reissued the revised EUA to Gilead Sciences Inc. The FDA approved remdesivir based primarily on evidence from three clinical trials (NCT04280705, NCT04292899, and NCT04292730) of 2043 hospitalized participants with COVID19. The trials were conducted at 226 sites in 17 countries including the United States.
In November 2020, the World Health Organization (WHO) updated its guideline on therapeutics for COVID19 to include a conditional recommendation against the use of remdesivir, triggered by results from the WHO Solidarity trial. Meanwhile, the Public Health Agency of Canada's COVID19 Clinical Pharmacology Task Group recommended that remdesivir only be administered to hospitalized patients as part of a randomized controlled trial due to limited information on risks and benefits.
In January 2022, the Canadian component of the WHO Solidarity Trial reported that in-hospital people with COVID19 treated with remdesivir had 17% lower relative risk of death (18.7% versus 22.6% death rates) and 47% reduced relative risk for needing oxygen and mechanical ventilation (8.0% versus 15.0%) compared to people receiving standard-of-care treatments.
In September 2022, the WHO updated their guidelines to recommend use of remdesivir for both non-hospitalized and hospitalized patients. This was based on final results from the SOLIDARITY trial that showed a reduction in mortality or progression to mechanical ventilation for non-ventilated patients.
Nonhospitalized outpatients
In January 2022, a study indicated that nonhospitalized people who were at high risk for COVID19 progression had an 87% lower risk of hospitalization or death after a 3-day course of intravenous remdesivir.
Remdesivir/baricitinib
In May 2020, the National Institute of Allergy and Infectious Diseases (NIAID) started the Adaptive COVID19 Treatment Trial 2 (ACTT-2) to evaluate the safety and efficacy of a treatment regimen consisting of remdesivir plus baricitinib for treating hospitalized adults who have a laboratory-confirmed SARS-CoV-2 infection with evidence of lung involvement, including a need for supplemental oxygen, abnormal chest X-rays, or illness requiring mechanical ventilation.
In November 2020, the US Food and Drug Administration (FDA) issued an emergency use authorization (EUA) for the drug baricitinib, in combination with remdesivir, for the treatment of suspected or laboratory-confirmed COVID19 in hospitalized people two years of age or older requiring supplemental oxygen, invasive mechanical ventilation, or extracorporeal membrane oxygenation (ECMO). The data supporting the EUA for baricitinib combined with remdesivir are based on a randomized, double-blind, placebo-controlled clinical trial (ACTT-2), which was conducted by the National Institute of Allergy and Infectious Diseases (NIAID). The EUA was issued to Eli Lilly and Company.
Remdesivir/interferon beta-1a
In August 2020, the NIAID started the Adaptive COVID19 Treatment Trial 3 (ACTT 3) to evaluate the safety and efficacy of a treatment regimen consisting of remdesivir plus interferon beta-1a for hospitalized adults who have a laboratory-confirmed SARS-CoV-2 infection with evidence of lung involvement, including a need for supplemental oxygen, abnormal chest X-rays, or illness requiring mechanical ventilation.
Veterinary uses
In 2019, GS-441524 was shown to have promise for treating feline infectious peritonitis caused by a coronavirus. It has not been evaluated or approved by the US Food and Drug Administration (FDA) for the treatment of feline coronavirus or feline infectious peritonitis but has been available since 2019, through websites and social media as an unregulated black market substance. Because GS-441524 is the main circulating metabolite of remdesivir and because GS-441524 has similar potency against SARS-CoV-2 in vitro, some researchers have argued for the direct administration of GS-441524 as a COVID19 treatment.
References
External links
Anti–RNA virus drugs
Antiviral drugs
COVID-19 drug development
Experimental antiviral drugs
Gilead Sciences
Heterocyclic compounds with 2 rings
Nitriles
Nitrogen heterocycles
Nucleotides
Orphan drugs
Phenol esters
Phosphoramidates | Remdesivir | [
"Chemistry",
"Biology"
] | 6,689 | [
"Antiviral drugs",
"Biocides",
"Drug discovery",
"Functional groups",
"COVID-19 drug development",
"Nitriles"
] |
52,325,106 | https://en.wikipedia.org/wiki/Johnson%27s%20parabolic%20formula | In structural engineering, Johnson's parabolic formula is an empirically based equation for calculating the critical buckling stress of a column. The formula is based on experimental results by J. B. Johnson from around 1900, as an alternative to Euler's critical load formula under low slenderness ratio (the ratio of effective length to radius of gyration) conditions.
The equation interpolates between the yield stress of the material and the critical buckling stress given by Euler's formula, relating the slenderness ratio to the stress required to buckle a column.
Buckling refers to a mode of failure in which the structure loses stability. It is caused by a lack of structural stiffness. Placing a load on a long slender bar may cause a buckling failure before the specimen can fail by compression.
Johnson Parabola
Euler's formula for buckling of a slender column gives the critical stress level that causes buckling, but it does not consider material failure modes such as yielding, which has been shown to lower the critical buckling stress. Johnson's formula interpolates between the yield stress of the column material and the critical stress given by Euler's formula. It creates a new failure border by fitting a parabola to the graph of failure for Euler buckling using

$\sigma_{cr} = \sigma_y - \dfrac{1}{E}\left(\dfrac{\sigma_y}{2\pi}\right)^2\left(\dfrac{L}{r}\right)^2$
There is a transition point on the graph of the Euler curve, located at the critical slenderness ratio. At slenderness values lower than this point (occurring in specimens with a relatively short length compared to their cross section), the graph will follow the Johnson parabola; in contrast, larger slenderness values will align more closely with the Euler equation.
Euler's formula is

$\sigma_{cr} = \dfrac{F_{cr}}{A} = \dfrac{\pi^2 E}{(L/r)^2}$
where
$\sigma_{cr}$ = critical stress,
$F_{cr}$ = critical force,
$A$ = area of the cross section,
$L$ = effective length of the rod,
$E$ = modulus of elasticity,
$I$ = area moment of inertia of the cross section of the rod,
$L/r$ = slenderness ratio.
Euler's equation is useful in situations such as an ideal pinned-pinned column, or in cases in which the effective length can be used to adjust the existing formula (i.e., fixed-free).
(L is the original length of the specimen before the force was applied.)
However, certain geometries are not accurately represented by the Euler formula. One of the variables in the above equation that reflects the geometry of the specimen is the slenderness ratio, which is the column's length divided by the radius of gyration.
The slenderness ratio is an indicator of the specimen's resistance to bending and buckling, due to its length and cross section. If the slenderness ratio is less than the critical slenderness ratio, the column is considered to be a short column. In these cases, the Johnson parabola is more applicable than the Euler formula.
The slenderness ratio of the member can be found with

$\dfrac{L}{r}$, where $r = \sqrt{I/A}$ is the radius of gyration of the cross section.
The critical slenderness ratio is

$\left(\dfrac{L}{r}\right)_{cr} = \sqrt{\dfrac{2\pi^2 E}{\sigma_y}}$
Example
One common material in aerospace applications is aluminum 2024. Certain material properties of aluminum 2024 have been determined experimentally, such as the tensile yield strength (324 MPa) and the modulus of elasticity (73.1 GPa). The Euler formula could be used to plot a failure curve, but it would not be accurate below a certain value, the critical slenderness ratio.
Therefore, the Euler equation is applicable for values of $L/r$ greater than 66.7.
Euler: $\sigma_{cr} = \dfrac{\pi^2 E}{(L/r)^2} = \dfrac{\pi^2 \, (73.1\times 10^{9})}{(L/r)^2}$ for $L/r > 66.7$
(units in Pascals)
Johnson's parabola takes care of the smaller values.
Johnson: $\sigma_{cr} = \sigma_y - \dfrac{1}{E}\left(\dfrac{\sigma_y}{2\pi}\right)^2\left(\dfrac{L}{r}\right)^2 = 324\times 10^{6} - \dfrac{1}{73.1\times 10^{9}}\left(\dfrac{324\times 10^{6}}{2\pi}\right)^2\left(\dfrac{L}{r}\right)^2$ for $L/r < 66.7$
(units in Pascals)
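The two branches above are easy to combine in code. A short sketch for the aluminum 2024 numbers, reproducing the critical slenderness ratio of about 66.7 and the transition stress of one half of the yield strength:

```python
import math

E = 73.1e9         # modulus of elasticity of aluminum 2024 [Pa]
SIGMA_Y = 324e6    # tensile yield strength of aluminum 2024 [Pa]

SLENDERNESS_CRIT = math.sqrt(2 * math.pi**2 * E / SIGMA_Y)   # ~66.7

def critical_stress(slenderness):
    """Critical buckling stress [Pa]: Johnson's parabola below the
    critical slenderness ratio, Euler's formula above it."""
    if slenderness < SLENDERNESS_CRIT:
        return SIGMA_Y - (SIGMA_Y / (2 * math.pi))**2 * slenderness**2 / E
    return math.pi**2 * E / slenderness**2

print(round(SLENDERNESS_CRIT, 1))               # 66.7
print(critical_stress(SLENDERNESS_CRIT) / 1e6)  # ~162 MPa, i.e. sigma_y / 2
```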
References
Elasticity (physics)
Materials science
Mechanical failure modes
Structural analysis
Mechanics | Johnson's parabolic formula | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 718 | [
"Structural engineering",
"Physical phenomena",
"Mechanical failure modes",
"Applied and interdisciplinary physics",
"Elasticity (physics)",
"Deformation (mechanics)",
"Aerospace engineering",
"Structural analysis",
"Technological failures",
"Materials science",
"Mechanics",
"nan",
"Mechanic... |
36,580,989 | https://en.wikipedia.org/wiki/Omnitruncated%206-simplex%20honeycomb | In six-dimensional Euclidean geometry, the omnitruncated 6-simplex honeycomb is a space-filling tessellation (or honeycomb). It is composed entirely of omnitruncated 6-simplex facets.
The facets of all omnitruncated simplectic honeycombs are called permutahedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n).
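This coordinate description is directly computable. A minimal sketch enumerating the facet vertices as permutations of (0, 1, ..., n), here for n = 6, giving the 7! = 5040 vertices of the omnitruncated 6-simplex (the order-7 permutohedron):

```python
from itertools import permutations

def permutohedron_vertices(n):
    """All permutations of (0, 1, ..., n): integer coordinates of the
    order-(n+1) permutohedron, lying in the hyperplane sum(x) = n(n+1)/2."""
    return list(permutations(range(n + 1)))

verts = permutohedron_vertices(6)   # facet of the omnitruncated 6-simplex honeycomb
print(len(verts))                            # 5040 == 7!
print(all(sum(v) == 21 for v in verts))      # True: all vertices share one hyperplane
```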
A6* lattice
The A6* lattice (also called A6^7) is the union of seven A6 lattices, and has the vertex arrangement of the dual of the omnitruncated 6-simplex honeycomb; the Voronoi cell of this lattice is therefore the omnitruncated 6-simplex.
Related polytopes and honeycombs
See also
Regular and uniform honeycombs in 6-space:
6-cubic honeycomb
6-demicubic honeycomb
6-simplex honeycomb
Truncated 6-simplex honeycomb
222 honeycomb
Notes
References
Norman Johnson, Uniform Polytopes, Manuscript (1991)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
7-polytopes | Omnitruncated 6-simplex honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 389 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
36,580,992 | https://en.wikipedia.org/wiki/Omnitruncated%208-simplex%20honeycomb | In eight-dimensional Euclidean geometry, the omnitruncated 8-simplex honeycomb is a space-filling tessellation (or honeycomb). It is composed entirely of omnitruncated 8-simplex facets.
The facets of all omnitruncated simplectic honeycombs are called permutahedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n).
A8* lattice
The A8* lattice (also called A8^9) is the union of nine A8 lattices, and has the vertex arrangement of the dual honeycomb to the omnitruncated 8-simplex honeycomb, and therefore the Voronoi cell of this lattice is an omnitruncated 8-simplex.
Related polytopes and honeycombs
See also
Regular and uniform honeycombs in 8-space:
8-cubic honeycomb
8-demicubic honeycomb
8-simplex honeycomb
Truncated 8-simplex honeycomb
521 honeycomb
251 honeycomb
152 honeycomb
Notes
References
Norman Johnson, Uniform Polytopes, Manuscript (1991)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
9-polytopes | Omnitruncated 8-simplex honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 399 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
36,580,994 | https://en.wikipedia.org/wiki/Omnitruncated%207-simplex%20honeycomb | In seven-dimensional Euclidean geometry, the omnitruncated 7-simplex honeycomb is a space-filling tessellation (or honeycomb). It is composed entirely of omnitruncated 7-simplex facets.
The facets of all omnitruncated simplectic honeycombs are called permutahedra and can be positioned in n+1 space with integral coordinates, permutations of the whole numbers (0,1,..,n).
A7* lattice
The A7* lattice (also called A7^8) is the union of eight A7 lattices, and has the vertex arrangement of the dual honeycomb to the omnitruncated 7-simplex honeycomb, and therefore the Voronoi cell of this lattice is an omnitruncated 7-simplex.
Related polytopes and honeycombs
See also
Regular and uniform honeycombs in 7-space:
7-cubic honeycomb
7-demicubic honeycomb
7-simplex honeycomb
Truncated 7-simplex honeycomb
331 honeycomb
Notes
References
Norman Johnson, Uniform Polytopes, Manuscript (1991)
Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi-Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45]
Honeycombs (geometry)
8-polytopes | Omnitruncated 7-simplex honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 395 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
36,582,500 | https://en.wikipedia.org/wiki/Abraham%20Nitzan | Abraham Nitzan (, born 1944) is a professor of chemistry at the Tel Aviv University department of chemical physics and the University of Pennsylvania department of chemistry.
Education
Abraham Nitzan was born in Tel Aviv. He received his bachelor's degree in chemistry from the Hebrew University in Jerusalem in 1964, and his master's degree in physical chemistry from the same institute in 1966. His research towards the master's degree, on the radiation chemistry of aqueous solutions, was supervised by Gideon Czapski. During the period of 1966 to 1969, he served in the Israel Defense Forces and in 1972 completed his Ph.D. studies at Tel Aviv University under the supervision of Joshua Jortner. His thesis focused on the theory of non-radiative transitions in large molecules.
Professional career
From 1972 to 1974, Nitzan worked as a research fellow at the Massachusetts Institute of Technology. The following year he stayed at Northwestern University, after which he returned to Israel and joined the faculty of the chemistry department of Tel Aviv University. In 1981 he was promoted to the rank of professor. Between 1983 and 1986 he served as head of the School of Chemistry, developing study programs integrating the study of Chemistry with other subjects, and in the period 1995-1998 he served as dean of the Faculty of Exact Sciences. Among his academic duties, he served as head of the faculty teaching committee, member of the university promotions committee, and Senate representative in the central committee and board of trustees. He was also a member of the Ministry of Education topical committee on Chemistry. As of 2012, Nitzan directs the Advanced Studies Institute at Tel Aviv University. Apart from being a professor and visiting researcher at universities around the world, Nitzan was a member of the editorial boards of the scientific journals "Physical Review Letters", "Journal of Chemical Physics" and "Journal of Physical Chemistry".
Research
Nitzan's field of research is chemical dynamics, which studies the dynamics of chemical processes on the microscopic level. In particular, his studies deal with processes involving interactions between light and matter, chemical reactions in condensed phases and chemical processes at interfaces. His early research (1970-1980) focused on energy transfer processes in molecular systems; during his stay at MIT, he conducted research with John Ross which predicted the existence of periodic chemical reactions in photochemical systems away from equilibrium. This was followed (1980-1990) by a series of works (with Joel Gersten) on surface-enhanced optical processes that later led to the development of the field of molecular plasmonics, as well as works on activated rate processes and charge transfer in complex molecular environments. A series of studies conducted between 1975 and 1995 brought about the development of theories of chemical processes in complex molecules, and of models for the description of electron transfer in such systems. Later work focused on electron solvation, transfer and transport in molecular environments and at interfaces, culminating in a series of studies (with Mark Ratner, Michael Galperin, Dvira Segal and others) on molecular electronics.
By 2012, he had published 350 scientific articles, and his work has been presented in dozens of international conferences. He was ranked by Science Watch as one of the hundred top chemists in the world between 2000 and 2010, as measured by the impact of their scientific research.
Honors and awards
Nitzan is a member of the Israel Academy of Sciences and Humanities, a fellow of American Association for the Advancement of Science and of the American Physical Society, a foreign honorary member of the American Academy of Arts and Sciences and a foreign associate of the National Academy of Sciences. He received an Honorary Doctorate (Doctor Honoris Causa) from the University of Konstanz in 2010.
Apart from these, his work has earned him many awards, some of which are:
Fulbright scholarship (1972)
I. M. Kolthoff award (1995)
Humboldt Prize (1995)
Israel Chemical Society Prize (2003)
Israel prize in chemistry (2010)
The EMET Prize in chemistry (2012)
Israel Chemical Society Medal (2015)
References
Abraham Nitzan, "Chemical Dynamics in Condensed Phases" (Oxford University Press, 2006)
External links
Abraham Nitzan's website: The University of Tel-Aviv
Abraham Nitzan's website: The University of Pennsylvania
1944 births
Living people
Israeli chemists
Theoretical chemists
Hebrew University of Jerusalem alumni
Tel Aviv University alumni
Academic staff of Tel Aviv University
Members of the Israel Academy of Sciences and Humanities
Israel Prize in exact science recipients who were chemists
Israel Prize in chemistry recipients
Israeli Jews
Computational chemists
Foreign associates of the National Academy of Sciences
Fellows of the American Association for the Advancement of Science
Fellows of the American Physical Society | Abraham Nitzan | [
"Chemistry"
] | 1,079 | [
"Quantum chemistry",
"Physical chemists",
"Computational chemists",
"Theoretical chemistry",
"Computational chemistry",
"Theoretical chemists"
] |
40,785,234 | https://en.wikipedia.org/wiki/Jean-Baptiste%20Donnet | Jean-Baptiste Donnet (28 September 1923 in Pontgibaud (Puy-de-Dôme) – 30 November 2014 in Sentheim (Haut-Rhin)) was a French chemist who is noted as a pioneer in the surface chemistry of carbon black and as a founder of the Upper Alsace University.
He was the father of French journalist Pierre-Antoine Donnet.
Biography
Jean-Baptiste Donnet, from a modest background, received his secondary education by correspondence while working as an apprentice craftsman. After World War II, he earned his Bachelor of Science degree in chemical engineering.
His scientific career began at the CNRS in Strasbourg, then from 1953 in Mulhouse. In 1970 he was one of the founders of the academic center of Mulhouse, which in 1975 became the University of Haute-Alsace.
Career
Professor (emeritus) of the Upper Alsace University
former Research director at CNRS
former President of the Société chimique de France
former University president of the Upper Alsace University
Recognitions
Carl-Dietrich-Harries-Medal for commendable scientific achievements (1985)
Colwyn medal in 1988
Honorary Doctorate from the université de Neuchâtel (1993)
Honorary Doctorate from the Lodz University of Technology (1989)
Honorary President of the Upper Alsace University
Honorary President of the Société chimique de France
Commander of the Legion of Honor (decreed 19 April 2000)
Charles Goodyear Medal (1998)
References
http://www.memoiresdeguerre.com/article-donnet-jean-baptiste-116075295.html
Polymer scientists and engineers
1923 births
2014 deaths
Research directors of the French National Centre for Scientific Research | Jean-Baptiste Donnet | [
"Chemistry",
"Materials_science"
] | 333 | [
"Polymer scientists and engineers",
"Physical chemists",
"Polymer chemistry"
] |
40,788,000 | https://en.wikipedia.org/wiki/The%20Sea%20That%20Thinks | The Sea That Thinks (Dutch: De zee die denkt) is a 2000 Dutch experimental film directed by Gert de Graaff. The film makes heavily use of optical illusions to tell a "story within a story" revolving around a screenwriter writing a script called The Sea That Thinks. The script details what is happening around him and eventually begins to affect what happens around him.
External links
Official site
2000 films
2000s Dutch-language films
Dutch drama films
2000 drama films
2000 fantasy films
Dutch fantasy films
Dutch avant-garde and experimental films
Existentialist films
Films about philosophy
Metafictional works
Optical illusions
Self-reflexive films
Nonlinear narrative films | The Sea That Thinks | [
"Physics"
] | 130 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
40,788,829 | https://en.wikipedia.org/wiki/Annular%20fluidized%20bed | Fluidisation is a phenomenon whereby solid particulate is placed under certain conditions to cause it to behave like a fluid. A fluidized bed is a system conceived to facilitate the fluidisation. Fluidized beds have a wide range of applications including but not limited to: assisting with chemical reactions, heat transfer, mixing and drying. According to Collin et al. (2009), an annular fluidized bed consists of "a large central nozzle surrounded by a stationary fluidized bed".
History
Fritz Winkler created the first fluidised bed in 1922 for coal gasification.
The next advancement in fluidised beds was the circulating fluidised bed, produced in 1942 for catalytic cracking of organic oils.
Finally, in the early 1990s, annular fluidised beds were conceptualised; their current uses are:
Waste heat boiler pilot plant (1992)
Circored direct reduction plant (1996)
Ore preheater, Australia (2002)
Reducing ilmenite roaster, Mozambique (2005)
Process characteristics
A general annular fluidized bed (AFB) introduces gas at high speed through the bottom of the large central nozzle, while additional fluidizing gas is introduced through an annular nozzle ring. As a result, gas and solids are extensively mixed in the dense bottom part of the mixing chamber and flow upward in the riser. The gas and solids both leave the riser and are separated in a cyclone, depending on the set velocities. The separated gas flows through a bag filter, and the solids move downwards in the downer, which feeds them back into the bottom of the plant, where the process repeats.
Main components
The bottom section of the riser is narrowed to prevent solids from accumulating there. Instead of the riser walls being smooth, they are generally composed of membrane waterwall surfaces; this feature influences the solids flow patterns in the vicinity and hence the gas-solid mixing.
The riser exits are divided into two types. “Once-through exits” are smoothly curved or tapered; this type allows a large net circulation and is optimal for short, uniform residence times as well as quickly decaying catalysts. The other type is the “internal reflux exit”, an abrupt exit that causes a substantial amount of entrained solids to be internally separated from the gas reaching the top of the reactor.
The cyclone is an integral part of an annular fluidized bed; particles of a particular size are separated by varying the velocity of the feed gas. At high velocity the gas provides enough kinetic energy to carry particles out of the fluidized bed; the feed gas and small particles fly into a cyclone separator, where the gas and particles are separated. In turn, particles can be returned to the bed or removed, depending on their size. The entrained solids are captured and sent back to the base of the riser through a vertical standpipe.
The large central nozzle is the main component of the annular fluidized bed and differentiates it from other fluidized beds. The central nozzle is surrounded by a stationary fluidized bed and, “due to moderate primary gas fluidisation of the annulus, the solids overflow at the upper edge of the central nozzle”; they are then transported and mixed in the mixing chamber by a high-velocity upward central secondary gas stream.
Flow regime
The annular fluidized bed is a new type of fluidized bed with a characteristic radial motion of solids and relatively little axial mixing of gases.
The axial flow profile of the annular fluidized bed can be determined from pressure drops along the plant height, which can be divided into three major parts: the annulus, and the bottom and top parts of the mixing chamber. While the annulus has a porosity close to the solids' minimum fluidization porosity, each region of the bed is characterized by a different pressure gradient. The closer to the central nozzle, the lower the pressure gradient and the higher the pressure drop in the mixing chamber. With a known pressure gradient (ΔP/ΔH), the solids concentration can be calculated using the Wirth equation shown below:
(1 − ε)_ΔP = (ΔP/ΔH) / ((ρ_s − ρ_f) g)
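As an illustration of how the Wirth relation is used, the sketch below backs the solids volume fraction out of a measured pressure gradient. The helper name and all numerical values are ours and purely illustrative, not taken from the cited experiment.

def solids_concentration(dp_dh, rho_s, rho_f, g=9.81):
    """(1 - epsilon): solids volume fraction from the pressure gradient.

    dp_dh : pressure gradient along the bed height, Pa/m
    rho_s : solid density, kg/m^3
    rho_f : fluid (gas) density, kg/m^3
    """
    return dp_dh / ((rho_s - rho_f) * g)

# example: sand-like solids fluidized in air, 10 kPa/m pressure gradient
print(solids_concentration(dp_dh=10_000, rho_s=2_600, rho_f=1.2))  # ~0.39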
According to an experimental characterization of the flow pattern in an annular fluidized bed carried out by Anne Collin, Karl-Ernst Wirth and Michael Stroder, at a height of 150 mm above the central nozzle the pressure gradient is approximately zero for small velocities and increases with increasing velocity.
Two distinct types of flow are shown in two different regions: “the flow pattern directly above the central nozzle shows a typical jet profile characterized by low solids concentrations around 8% and high upwards solids velocities (3 m/s) thus resulting in high local solids mass fluxes.” The surrounding of annular region at the bottom of the mixing chamber is on the other hand, the flow pattern is characterized by high solids concentration “with increasing values towards the wall e.g. 46% for the 100 mm probe height above the central nozzle”
The solids velocities and mass fluxes are positive around the wall region, where a descending flow is expected. However, the measured velocities may not be an accurate portrayal of the actual solids velocities in regions where high transversal and radial mixing are present, because only vertical velocities are recorded by the capacitance probes. Hence, the calculated solids mass fluxes always have to be considered in the same direction.
To summarize, the fully developed flow pattern in the annular fluidized bed shows a core-annulus structure, which is “characterized by the typical formation of a central jet surrounded by a region of high solids concentration at the bottom of the mixing chamber.” Varying the fluidization velocity in the annulus promotes the removal of more solids from bubbles and increases the convective mass flux that penetrates into the jet. The amount of solids that can be integrated into the jet at the end is determined by the gas velocity. Moreover, the ratio of the internal to the external solids circulation in the plant can be regulated through the interaction of both mechanisms.
Height of 25 mm over the central nozzle
Because the velocity of the solids ejected from bubbles is set by the gas velocity in the annulus, it becomes more difficult, at a constant fluidization velocity, for solids coming from the annulus to penetrate into the central gas jet as the nozzle velocity increases. Increasing the central velocity decreases the time-averaged solids concentration at a height of 25 mm above the nozzle. However, an increase in this velocity has no effect on the solids concentration above the annulus. On the other hand, for a low central gas velocity, the solids velocities over the annulus and above the nozzle show nearly the same value, with a sharp velocity gradient.
Height of 200mm over the central nozzle
The flow pattern of a circulating fluidized bed is fully developed at the probe height of 200 mm above the central nozzle. At this height, the concentration shows the typical increase towards the wall, which, combined with the falling solids velocity, results in a negative solids mass flux.
The shape of the solids concentration profile is independent of the gas velocity; however, the absolute concentration, integrated over the cross-section, is lower.
As a result, with values integrated over the plant cross-section, the solids mass flux decreases slightly with increasing gas velocity in the central nozzle.
Influence of gas velocity in the annular fluidized bed
Bubbling occurs in the annular fluidized bed because the gas introduced by the central nozzle at a certain velocity moves in a generally upward direction. The sudden eruption of gas at the central nozzle causes particles to be transported in the bubbles' wake. Increasing the velocity of the annulus increases the bubble size and the bubbling velocity. The enhanced bubble dynamics enable “the ejected solids to penetrate deeper into the central gas jet”. As a result, the concentration and velocity of solids increase and consequently the optimum solids mass flux increases.
Design heuristics
Cohesive particles and large particles greater than 1 mm do not fluidize well and usually are separated in other ways.
Rough correlations have been made of minimum fluidization velocity, bed expansion, minimum bubbling velocity, bed level fluctuation and disengaging height. It is recommended by experts that any real design be based on pilot plant work.
“Practical operations are conducted at two or more multiples of the minimum fluidizing velocity”.
Products can be maximised by varying the fluidization velocity in the annulus, more solids can be ejected from the bubbles and the convective mass flux able to penetrate into the jet increases.
Advantages and disadvantages
Due to the particular characteristics of the AFB, whereby gas is introduced through the central nozzle at a high velocity, an intense mixing zone is achieved in the bed, comparable to the conditions in the external loop of a circulating fluidized bed. The AFB combines the advantages of long solids residence time and good heat and mass transfer, making it ideal for use in heat-exchange processes such as cooling, heating or heat recovery, and for facilitating reactions. The AFB can be combined with other fluidized bed types to assist with the process and further enhance its existing properties, increasing the productivity of a process.
The AFB characteristics are highly desirable in some applications however it can have an undesirable effect on other applications, which would require shorter residence times and a less intense mixing such as an ore roasters where particles would not be required to leave the fluidized bed. The cost of an AFB would also be higher compared to that of other fluidized beds as the introduction of the central nozzle complicates production of the components and introduces extra cost. An AFB would require more frequent maintenance and higher maintenance costs due to the extra and more complicated components. The central nozzle may easily clog due to unwanted particles entering the nozzle.
Though the AFB has potential to improve the efficiency of current processes, it is not without limitations. As the AFB is a recent advancement in fluidization technology, little systematic study has been done on it, and characterising global and local flow patterns may prove difficult for chemical engineers, since the “bed hydrodynamics are not the same in small and large scale fluidized beds”. The implementation of this new technology into existing plants may prove difficult and costly; therefore there have been only a few advancements of the AFB since its conception. Few plants exist where AFB technology has been implemented, and it may still be a few years before its full industrial application is realized and widely used.
Applications
An annular fluidized bed (AFB) can have a wide range of applications due to its ability to be used in conjunction with other fluidized bed type. The AFB is ideal for applications that require a fast and efficient heat and mass transfer with intense mixing. These applications can range from dryers, heat exchangers, heaters, coolers and reactors.
Designs available and new developments
Though a relatively new technology, the use of AFB in the industry has slowly increased over the years. One such example is the company Outotec, who specialises in the field of fluidization technology. Outotec has integrated the use of AFB in its recent plants designs to further improve the process. Current existing plants by Outotec utilising AFB include:
Waste heat boiler pilot plant, 1 tpd
Circored direct reduction plant, CAL, Trinidad, 1,500 tpd
Ore preheater, HIsmelt Corporation, Australia, 4,000 tpd
Reducing ilmenite roaster, Kenmare Resources plc, Mozambique, 1,200 tpd
Note: Facts and figures obtained from Outotec
The Circored, Circoheat and Circotherm processes devised by the company are some examples of applications for this fluidized bed technology.
Circored: a process developed in the 1990s for direct reduction of iron. “The Circored process uses hydrogen as the only reductant to apply a two stage circulating fluidized bed/bubbling fluidized bed reactor configuration for reduction. An AFB based flash heater is used to achieve the direct reduced iron briquetting temperature.”
Circoheat: this process preheats iron ore fines to a temperature of 850 °C. The iron ore is introduced to a circulating fluidized bed where offgas from a HIsmelt smelt reduction vessel is introduced to the reactor via an AFB. The offgas is then combusted with air to heat the ores.
Circotherm: one of the latest developments of Outotec, in which the core AFB system is utilised for heat recovery and solids recovery via cyclone.
As seen from the Outotec examples, an annular fluidized bed can have as wide a range of applications as any other fluidization technology. However, as it is a recent development in this field, its full potential has yet to be realized and implemented in industrial applications.
Safety and environmental issues
Air purification
One application of an AFB is the purification of air. It begins by focusing the sun's ultraviolet light on particles of silica gel, which are coated with a fine layer of titanium dioxide catalyst. The UV light is then able to charge these particles. These positively and negatively charged particles are then available to initiate various chemical reactions.
When polluted air is passed through the central nozzle and into the fluidised bed, contaminants that contact the photo-catalytic particles are adsorbed onto the particle surface. The contaminants react with the positive and negative charges and are chemically broken down. The result is purified air.
Off-gas
Off-gas is the gaseous product exiting a cyclone separator that is connected to a fluidized bed. If the gas is clean and contaminant-free, it can be cooled via a condenser and then filtered to remove fine particles. Once filtered, it may be directed back into the system or tapped off.
In various cases volatile and/or poisonous gases may be used as feed gas for fluidised beds. The off-gas produced from the operation may contain a considerable amount of such gases and therefore needs to be neutralised. Allowing the gases to escape into the environment may contribute greenhouse gases and harm local flora and fauna. Cleaning off-gas increases sustainability and negates adverse effects on the environment.
Fine particulates
During the operation of a fluidised bed, particles are transported by the kinetic energy provided by the feed gas. At certain velocities fine particles may fly into a cyclone and be separated from the flue gas. These fine particles can either be returned to the system or removed. Once removed, these particles, depending on their nature, may have adverse effects on the environment and must be treated carefully.
For example, in a mining process currently operating in Mozambique, annular fluidised beds are used to preheat and reduce ilmenite ore. Ilmenite is a hazardous compound, as crystalline silica is known to cause lung fibrosis and is a known carcinogen. Companies operating such equipment and handling detrimental substances must dispose of their waste properly.
See also
Cyclonic separation
Fluidization
Fluidized bed combustion
Fluidized bed reactor
Outotec
References
Chemical equipment
Fluidization | Annular fluidized bed | [
"Chemistry",
"Engineering"
] | 3,112 | [
"Chemical equipment",
"Fluidization",
"nan"
] |
40,788,910 | https://en.wikipedia.org/wiki/Climbing%20and%20falling%20film%20plate%20evaporator | A climbing/falling film plate evaporator is a specialized type of evaporator in which a thin film of liquid is passed over a rising and falling plate to allow the evaporation process to occur. It is an extension of the falling film evaporator, and has application in any field where the liquid to be evaporated cannot withstand extended exposure to high temperatures, such as the concentration of fruit juices.
Design
The basic design of the climbing/falling film plate evaporator consists of two phases. In the climbing phase, the liquid feed is heated by a flow of steam as it rises through a corrugated plate. In the subsequent falling phase, the liquid flows downward at high velocity under gravitational force. Evaporation and cooling occurs rapidly in the falling phase.
There are several design variations that are commonly used in industry. These include single-effect and multiple-effect evaporators. The choice of evaporator design is dictated by the constraints of the process. Fundamentally, four factors are involved in designing this evaporator:
prevention of nucleate boiling;
short residence time;
waste stream production; and
post treatment.
The main advantage of the climbing/falling film plate evaporator is its short residence time. Since the liquid feed does not remain in the evaporator for long, this evaporator is suitable for heat/temperature-sensitive material. Thus, this evaporator is used widely in the food, beverage and pharmaceutical industries. Besides that, the colour, texture, nutritional content and taste of the liquid feed can be preserved too. Despite its functionality, this evaporator has a few drawbacks, such as large energy consumption. Future development comprises installing a larger number of steam effects and recycling the steam where possible for better energy efficiency.
Designs available
Climbing/falling film plate evaporator designs can be grouped into single-effect and multiple-effect film plate designs.
Single-effect
The operations for a single-effect evaporator can be carried out batchwise, semi-batchwise or continuously. Single-effect evaporators are indicated in any of the following conditions:
the vapor cannot be recycled as it is contaminated;
the feed is highly corrosive, requiring expensive construction materials;
the energy cost to produce the steam heating is low;
the required capacity is small.
Thermocompression
Thermocompression is useful whenever evaporator energy requirements need to be reduced. This can be achieved by compressing and recycling the vapor from a single-effect evaporator into the same evaporator as the heating medium. Thermocompression of vapor can be achieved by applying a steam jet or by using mechanical means such as compressors.
Steam Jet Thermocompression
A steam jet is required in order to compress the vapor back into the evaporator.
Mechanical Thermocompression
Mechanical thermocompression relies on the same principle as steam-jet thermocompression; the only difference is that the compression is done by reciprocating, rotary positive-displacement, centrifugal or axial-flow compressors.
Multiple-effect evaporators
The best way to reduce energy consumption is by using multiple-effect evaporators. In a multiple-effect evaporator, steam from outside is condensed in the heating element of the first effect, and the vapors produced in the first effect are then passed on to the second effect, whose feed is the partially concentrated product of the first effect. The process continues until the last effect, where the final desired concentration is achieved.
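A rough way to see the steam saving is the rule-of-thumb Python sketch below. The per-effect factor of 0.85 is our assumption, standing in for boiling-point elevation and heat losses; a real design needs full enthalpy balances around every effect.

def steam_economy(n_effects, per_effect_factor=0.85):
    """kg of water evaporated per kg of live steam (illustrative)."""
    return n_effects * per_effect_factor

def live_steam_demand(evaporation_duty, n_effects):
    """Live steam in kg/h needed for a given evaporation duty in kg/h."""
    return evaporation_duty / steam_economy(n_effects)

for n in (1, 2, 3, 4):
    print(f"{n} effect(s): {live_steam_demand(10_000, n):,.0f} kg/h of live steam")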
Process characteristics
There are several process characteristics that should be taken into account in order for the evaporator to operate at its best performance.
Evaporation of thin liquid film
Evaporation of the liquid film in film evaporators is very important in order to cool the flowing liquid and the surface on which the liquid flows. It can also increase the concentration of the components in the liquid. The climbing/falling film plate evaporator is specifically designed to produce a thin film during both the climbing and falling phases. In the climbing film section, the feed is introduced at the bottom of the tubes. Evaporation causes the vapor to expand, creating a thin film of liquid that rises along the tubes; the vapor shear pushes the thin film up the tube walls. The feed for the falling film section, on the other hand, is introduced at the top of the tubes. The liquid flows down the tubes and evaporates as it descends. The flow of the liquid down the tubes is driven by the vapor shear stress and by gravity, which together lead to higher flow rates and a shorter residence time. The flow of the thin liquid film in the falling film evaporator is possible in two ways: cocurrent, if the vapor is drawn from the top to the bottom of the tubes, and countercurrent in the opposite case.
Heat transfer characteristic
The heat transfer performance of the climbing and falling film plate evaporator is affected by several factors, including the height of the feed inside the tube and the temperature difference. The feed-water height is inversely related to the climbing-film height: a low feed-water height leads to a high climbing film. A higher climbing film increases the percentage of the saturated flow boiling region and therefore increases the local heat transfer coefficient. The optimum feed-water height ratio has been found to be approximately 0.3; any height ratio less than 0.3 will cause the local heat transfer coefficient to decrease. Besides that, a small liquid content in the tube can minimize foaming problems.
The combination of climbing and falling film sections allows the evaporator to operate over a wide temperature range. The evaporator can operate with a small temperature difference between the heating medium and the liquid, due to the lack of a hydrostatic pressure drop in the evaporator. The absence of a hydrostatic pressure drop eliminates the corresponding temperature drop, keeping the temperature relatively uniform. Besides that, the local heat transfer coefficient inside the tube depends on the temperature difference: a minimum threshold in the temperature difference (ΔT) of 5 °C was found by Luopeng Yang in one of his experiments. If the temperature difference is less than 5 °C, the liquid film is not able to travel up the tubes, resulting in a drop in the local heat transfer coefficient in the tube.
Residence time
Since the evaporator is mainly used in processes dealing with heat-sensitive materials, the residence time should be kept as low as possible. Residence time is the time for which the product is in contact with heat. To improve product quality, the short heat-contact period of single-pass operation can minimize product deterioration. The climbing and falling film plate evaporator is capable of satisfying this requirement: a short residence time is achieved by the high liquid flow rates down the tubes in the falling film section, where gravity increases the flow rate of the liquid.
Application of design guidelines
Prevention of nucleate boiling
In designing a film plate evaporator, the use of superheated liquid needs to be controlled in order to prevent nucleate boiling. Nucleate boiling causes product deterioration resulting from increases in chemical reaction rates that come from the rise in temperature. Nucleate boiling also causes fouling, thus affecting the rate of heat transfer of the process. In order to avoid nucleate boiling, the liquid superheat should be in the range of 3 to 40 K, depending on the liquid used.
Short residence time
Minimizing residence time is important in order to minimize the occurrence of chemical reactions between the feed and the evaporator materials, thus reducing fouling within the evaporator. This guideline is especially important in the food processing industry, where purity of output product is paramount. In this application, residence time bears directly on product quality, thus it is important for the climbing and falling film plate evaporator to have low residence time.
Waste stream production
Condensate is the waste discharged through the waste stream in a climbing and falling film evaporator. This evaporator discharges vapor as the condensate, as vapor passes through more rapidly than the liquid flows in the tube.
In each evaporation unit, the feed enters at the bottom tubesheet and passes through the climbing and falling film sections. As the liquid rises through the tubes, boiling and vaporization occur as it contacts the steam-heated plates. The resulting liquid-vapour mixture is discharged and redistributed at the top of the falling film pass tubes. The vapour produced by the climbing film is used to increase the liquid velocity in the distribution tubes in order to raise the heat transfer. An external separator is used to separate the liquid-vapour mixture produced in the down-flow.
Post-treatment
In a multi-effect evaporator, the vapor output of one phase of the evaporator is recycled as the heating medium for the next phase, reducing the overall consumption of steam for the evaporator.
A surface condenser is used to condense the vapor produced in the second effect. In order to recover the heat used in this evaporator, the vapor condensate from both effects is pumped to the feed pre-heater, where it gives up heat to the process.
Range of applications
Climbing/falling film plate evaporators are used in a range of applications:
Fruit juice concentration
Fruit juices are concentrated by evaporation for storage, transport and commercial use. If fruit juices are exposed to heat, nutrients such as vitamin C may be affected. Furthermore, these nutrients are easily oxidized at high temperature. The evaporator can overcome this constraint, as it operates at a high feed flow rate and a small temperature difference. In addition, changes in the colour and texture of the juices can be prevented by the operation of this evaporator type.
Dairy industry
Other protein-rich products such as whey powder in dietary supplement and milk (including both skim and whole milk) are concentrated to remove most liquid components for further processes. Protein is easily denatured at high temperature because its tertiary structure is degraded upon exposure to heat. Evaporation via climbing and falling film plate design can minimize the effect of protein denaturation and thus, optimizing the product quality.
Other food industry applications
Instant and concentrated cooking ingredients such as pasta sauce, chicken broth and vegetable purees undergo evaporation in the same evaporating equipment. Although they are relatively less sensitive to heat, evaporating them at low temperature and short residence time is crucial to maintain the quality of taste, texture, appearance and nutritional value.
Pharmaceuticals
Antibiotics, supplementary pills and drugs containing organic and inorganic compounds are evaporated to remove as much moisture as possible for crystallization, because in crystallized form antibiotics and enzyme compounds are well preserved and more stable. Moreover, exposure to high temperature leads to decomposition of inorganic compounds. Although most pharmaceutical products are extremely temperature-sensitive, this type of evaporator is still practical, since several designs are able to operate at low pressure, where the boiling point of water is correspondingly low.
Limitations
There are a few limitations of this evaporator that make it unsuitable for some industrial processes. The evaporator needs to be operated within the range of 26–100 °C and is able to remove water at a rate of 450–16,000 kg/h. In order to provide the proper rising/falling characteristics, most units are quite tall and can only be installed where sufficient headroom is available. The suspended solids content of the liquid feed needs to be low, and the solids must be able to pass through a 50-mesh screen.
Development
There are several problems related to climbing and falling film plate evaporators. One of them is the energy-intensive nature of the system. In order to improve the productivity of the plant, energy consumption needs to be reduced, with the intention of reducing the use of steam. New strategies have been proposed by investigators to reduce steam utilization and improve steam economy. Examples of operating strategies are flashing of feed, product and condensate; splitting of feed and steam; and use of an optimum feed flow series.
Several techniques have been suggested to minimize energy consumption:
Installation of a large number of steam effects to an evaporator.
Recycling the steam via thermocompression or mechanical compression when possible.
Assuring that the feed to an evaporator is preheated to the boiling point.
Minimizing the heat gradient in the evaporator.
Insulating the equipment to minimize the losses of the heat.
References
Evaporators | Climbing and falling film plate evaporator | [
"Chemistry",
"Engineering"
] | 2,663 | [
"Chemical equipment",
"Distillation",
"Evaporators"
] |
40,789,637 | https://en.wikipedia.org/wiki/Pusher%20centrifuge | A pusher centrifuge is a type of filtration device that offers continuous operation to de-water and wash materials such as relatively incompressible feed solids, free-draining crystalline materials, polymers and fibrous substances. It consists of a constant speed rotor and is fixed to one of several baskets. This assembly is applied with centrifugal force that is generated mechanically for smaller units and hydraulically for larger units to enable separation.
Pusher centrifuges can be used for a variety of applications. They were typically used in inorganic industries and later, extensively in chemical industries such as organic intermediates, plastics, food processing and rocket fuels.
A suspension feed enters the process to undergo pre-acceleration and distribution. The subsequent processes involve main filtration and intermediate de-watering, after which the main filtrate is collected. Wash liquid enters the washing step and final de-watering follows. Wash filtrate is extracted from these two stages. The final step involves discharge of solids which are then collected as the finished product. These process steps take place simultaneously in different parts of the centrifuge.
It is widely accepted due to its ease of modification, for example into gas-tight and explosion-protected configurations.
Applications
Pusher centrifuges are mainly used in chemical, pharmaceutical, food (mainly to produce sodium chloride as common salt) and mineral industries. During the twentieth century, the pusher centrifuge was used for desiccation of comparatively large crystals and solids.
Although pushers are typically used for inorganic products, they appear in chemical industries such as organic intermediates, plastics, food processing and rocket fuels. Organic intermediates include paraxylene, adipic acid, oxalic acid, caprolactam, nitrocellulose, carboxymethylcellulose, etc.
In food processing, pusher centrifugation is used to produce monosodium glutamate, salt, lysine and saccharin.
Pusher centrifugation is also used in the plastic industry, contributing to products such as PVC, polyethylene and polypropylene, and a number of other resins.
Individual products
Soda Ash—Particle size is commonly beyond 150 μm. Feed slurry usually has 50% solids by weight, and discharged cake has about 4% moisture.
Sodium bicarbonate—Feeds usually contain more than 40% solids by weight, with crystals generally beyond a particle size of 45 μm. The produced cake usually contains only 5% water; achieving such a high efficiency of desiccation requires device modifications.
Paraxylene—Fed as frozen slurry with a particle size ranging from 100 to 400 μm. Purity of 99.9% is achievable using a single-stage long basket design. Precautions and measurements have to be taken to avoid contamination of paraxylene and oil. Lip seals and rod scrapers are used on the shaft seal to eliminate cross-contamination. The feed is purified using a funnel. Vents integrated into the process housing ensure that gases move uninhibited, preventing contamination.
Adipic acid—Undergoes a repeated process of crystallisation, centrifugation and remelting to achieve the required purity. Adipic acid crystals are generally larger than 150 μm. Nitric acid is reduced from 30% in the feed to 15 ppm in the produced cake. Separation of nitric acid from adipic acid is essential for further treatment.
Cotton seed delinting—Cotton seeds contain fibres that grow and form a ball of lint. This is separated using sulphuric acid, where the lint may be used to produce cotton fibre. Adding sulphuric acid causes the lint to become brittle, hence ensuring that in the subsequent tumbling process de-linting occurs effectively.
Advantages and limitations
Advantages
Pushers offer higher processing capacities than batch filtering centrifuges such as vertical basket and inverting filter.
Provides the best washing characteristics of any continuous centrifuge due to control of retention time and uniform cake bed.
Gentle handling makes pushers better suited for fragile crystals.
Limitations
Pushers require a constant feed flow due to their continuous nature.
Although high capacities may be preferred, this may result in longer residence time.
Typical particle sizes must be at least 150 μm and average 200 μm.
A high viscosity feed lowers throughput.
Pushers have a limited liquid filtration capacity and requires fast-draining materials, since it must form a cake within the period of one stroke.
Designs
The designs for pusher centrifuge are as follows:
Pushers come with either mechanical or hydraulic drive units, or both. Speed can vary.
Single-stage
Single-stage units can be cylindrical or cylindrical/conical with a single long basket and screen
Can maximize solids volumetric capacity
Resulting cake can shear or buckle due to unstable operation of the longer screen length
Capacity may be slightly less than with multistage units
Lower fines losses due to reduced contact of particles with the slotted screen and no reorientation of crystals between stages
Used to achieve stability for low-speed operation
Multi-stage
Multistage (two-, three-, or four- stage designs): cylindrical and cylindrical/conical
Most common
Greater flexibility due to higher filtration capacity
Reorientation can enhance wash effect on the latter portion of the first stage and through transition onto second stage
Three/four stage
Used for largest sizes with long baskets
Recommended for materials with high friction coefficients, low internal cake shear strength, or high compressibility, e.g., processing high rubber ABS
Lower capacity affects performance due to correspondingly thin cakes and short retention time
Cylindrical/conical
Feed distributor design: conical/cylindrical or plate
Optionally applied for single- and two-stage designs.
Cylindrical feed section combined with a sloping design towards the discharge end
Axial component of force in the conical end aids solids transport
Lower production costs compared to that of baskets
Process characteristics
The important parameters are screen area, acceleration level in the final drainage zone and cake thickness. Cake filtration affects residence time and volumetric throughput. Residence on the screen is controlled by the screen's length and diameter, the cake thickness, and the frequency and stroke length of the pusher.
Feed
Pushers utilise the cake layer to act as a filter, hence the feed normally contains a high solids concentration of fast-draining, crystalline, granular or fibrous solids. The solids concentration ranges from 25-65 wt%. The mean particle size suitable for pushers must be at least 150 μm. The capacity depends on the basket diameter and ranges from 1 ton/h to 120 tons/h.
Operations
The cake is under centrifugal force. It becomes drier as it progresses in the basket and is discharged from the pusher basket into the solids discharge housing (pusher centrifuge operation). The stroke length ranges from 30 to 80 mm and the stroke frequency is between 45 and 90 strokes/min.
The push efficiency is defined as the distance of the forward movement of the cake ring divided by the stroke length. The push efficiency is a function of the solid volumetric loading, which results in self-compensating control of varying rates. Up to 90% push efficiency is achievable depending on the cake properties.
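The push-efficiency definition above fixes the solids throughput for a given basket geometry, as the Python sketch below works through. The stroke length and frequency come from the ranges quoted in the text; the basket diameter, cake thickness and efficiency value are illustrative assumptions.

import math

def solids_throughput(basket_d, cake_t, stroke, freq, push_eff):
    """Cake volume discharged in m^3/h.

    basket_d : basket inner diameter, m
    cake_t   : cake ring thickness, m
    stroke   : stroke length, m
    freq     : stroke frequency, strokes/min
    push_eff : push efficiency as defined in the text
    """
    ring_area = math.pi * (basket_d - cake_t) * cake_t  # cake ring cross-section
    advance = push_eff * stroke                         # net cake advance per stroke
    return ring_area * advance * freq * 60

# 630 mm basket, 50 mm cake, 60 mm stroke at 70 strokes/min, 85% efficiency
print(f"{solids_throughput(0.63, 0.05, 0.06, 70, 0.85):.1f} m^3/h")  # ~19.5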
Filtration rate
The equation for the filtration rate, Q, is

Q = π L ρ ω² (r_m² − r_ℓ²) / ( μ [ ln(r_m/r_c)/k + R_m/r_m ] )   (1)

with the average cake permeability given by

k = 1 / (α ρ_s (1 − ε))   (2)

where μ and ρ are the viscosity and density of the liquid, respectively, ω is the angular speed, k is the average cake permeability, which is related through equation (2), L is the basket length, r_ℓ, r_c and r_m are the radius of the liquid surface, cake surface and filter medium adjacent to the perforated bowl, respectively, R_m is the combined resistance, α is the specific resistance, ε is the cake porosity and ρ_s is the solid density.
The numerator describes the pusher's driving force, which is due to the hydrostatic pressure difference across the wall and the liquid surface. The denominator describes the resistance due to the cake layer and the filter medium.
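A minimal Python sketch of equations (1) and (2) follows, assuming the standard Darcy treatment of radial flow through a cylindrical cake of basket length L; the exact constants in the source's own form of equation (1) may differ, and every numerical input below is illustrative.

import math

def cake_permeability(alpha, rho_s, eps):
    """Equation (2): average permeability from the specific resistance alpha."""
    return 1.0 / (alpha * rho_s * (1.0 - eps))

def filtration_rate(mu, rho, omega, k, r_l, r_c, r_m, R_m, L):
    """Equation (1): filtrate flow Q in m^3/s.

    Numerator: centrifugal hydrostatic pressure between the liquid
    surface (r_l) and the bowl wall (r_m). Denominator: flow resistance
    of the cake (r_c to r_m) plus that of the filter medium (R_m).
    """
    driving = math.pi * L * rho * omega ** 2 * (r_m ** 2 - r_l ** 2)
    resistance = mu * (math.log(r_m / r_c) / k + R_m / r_m)
    return driving / resistance

k = cake_permeability(alpha=1e11, rho_s=2000, eps=0.4)
print(filtration_rate(mu=1e-3, rho=1000, omega=150, k=k,
                      r_l=0.20, r_c=0.25, r_m=0.30, R_m=1e10, L=0.5))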
Process variables
Performance is a function of many parameters, including particle size, viscosity, solid concentration and cake quality.
Particle size/porosity
To create the cake layer, the particle size has to be as large as practically possible. A larger particle size increases the porosity of the cake layer and allows the feed liquid to pass through. Particle shape is equally important, because it determines the surface area per unit mass. As the surface area per unit mass decreases, less surface area is available to bind moisture, providing a drier cake.
Viscosity
Filtration rate is a function of the viscosity of the feed fluid. From equation (1), the filtration rate is inversely proportional to the viscosity. Increasing viscosity adds resistance to the fluid flow, which complicates separation of the fluids from the slurry. Consequently, the throughput of the pusher is de-rated.
Solid concentration
In most cases the solids discharge capacity/hydraulic capacity is not the limiting factor. The usual limitation is the filtration rate. Therefore, more solids can be processed by increasing the feed slurry concentration.
Cake quality
The cake quality is determined by the purity and the amount of volatile matter.
Purity
Wash liquid is introduced on the cake in order to displace the mother liquor along with the impurities. The cake wash ratio is normally between 0.1 and 0.3 kg wash/kg solids, which displace at least 95% of the feed fluid and impurities within the wash zone's normal residence time.
Volatile matter
The amount of volatile matter present in the discharge is a function of the centrifugal force (G) and the residence time at that force. Separation increases with G and hence favours the filtration rate, as illustrated in equation (3):

G = ω² r_b / g   (3)

where G is the centrifugal force relative to gravity, ω is the angular speed, r_b is the radius of the basket, and g is the gravitational acceleration.
By relating equation (3) to equation (1), the centrifugal force is shown to be proportional to the filtration rate. As pushers often deal with fragile crystals, the movement of the pusher plate and the acceleration in the feed funnel matter, because they can break some of the particles. In addition to the movement of the plate, G can cause breakage and compaction, and volatile matter in the cake increases. The gentle movement of cake in low-G, single-stage, long basket designs results in low particle attrition. As more solids pass through, residence time decreases, which increases volatile matter in the discharge cake.
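Equation (3) is a one-line computation; the rotation speed and basket radius in the Python sketch below are illustrative values, not from the text.

import math

def g_force(rpm, radius, g=9.81):
    """Relative centrifugal force, equation (3)."""
    omega = 2 * math.pi * rpm / 60  # angular speed, rad/s
    return omega ** 2 * radius / g

print(round(g_force(rpm=1500, radius=0.315)))  # about 790 g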
Process design heuristics
The heuristics of pusher centrifuge design consider equipment size, operation sequence and recycle structure.
Design process
Overall approach:
Define the problem
Outline process conditions
Make preliminary selections
Develop a test program
Test sample batches
Adjust process conditions as required
Consult equipment manufacturers
Make final selection and obtain quotes
Equipment sizing
Variables considered in sizing equipment:
Feed rate
Feed concentration
Cake thickness
Bulk density
Long and short baskets
Single and two-stage baskets
Individual drive for rotor and hydraulic system
Easy accessibility for maintenance
Energy consumption
Previous applications
Equipment selection
Equipment selection is based upon test results, references from similar processes and experience and considered in terms of:
Cost, quality and productivity
Financial modeling
Optimising performance
For conical and cylindrical designs and assembly, the cone slant angle should not exceed the sliding-friction angle of the cake; otherwise the result is high vibration and poor performance.
In order to optimise capacity and performance, it is desirable to pre-concentrate the feed slurry as much as possible. Some designs have a short conical section at the feed end for thickening within the unit, but generally it is preferable to thicken before entering the centrifuge with gravity settlers, hydrocyclones or inclined screens, producing a higher concentration of solids.
The volumetric throughput for multistage designs can be increased by increasing the forced cake height while still retaining acceptable push efficiency.
Design selection
Selection of designs is usually done by scale-up from lab tests. Test data analysis should be rationalised in preparation for equipment scale-up. Computer-aided design software can assist in design and scale-up. Pilot-testing and rollout then follows.
Waste
Production
The majority of the liquid contained in the mixture is drawn out at an early stage, in the feed zone of the slot screen, and is discharged into the filtrate housing. After formation of the solid cake, the main by-product produced is water, which may be used for many kinds of industrial purposes. Filtration cakes are washed using nozzles or waste baskets.
Post-treatment
Post-treatment processes are a function of the specifics of the waste stream and are diverse.
Later designs
Design advances have enhanced performance and broadened the application range. These include additional stages, push hesitation, horizontal split process housing, integrated hydraulics, seals, pre-drained funnels and an integrated thickening function.
Stages
B&P Process Equipment and Systems (B&P) makes the largest single-stage pusher centrifuge, which it claims is superior to multistage designs because additional impurities enter the liquid housing as particles tumble at each stage. The problem can be overcome by using a shorter inner basket with a smaller diameter between the pusher plates and the basket, enabling pusher movement to take place between the pusher plate and the basket as well as between the inner basket and the outer basket. Compared to single-stage pushers, which have pusher movement only between the pusher plate and the basket, multistage centrifuges have the advantages that the cake height is reduced, filtration resistance is lower and less force is required.
Push hesitation
Push hesitation holds the pusher plate in the back stroke, allowing the cake to build on itself. The cake acts as the filtering medium and can even capture finer solids. This reduces the loss of solids passing through the wedge slots. Although this modification reduces capacity, it has helped improve the solids capture efficiency and makes pusher centrifuges applicable to smaller particles.
Horizontal split process housing
This allows the removal of the rotating assembly without disassembling the basket and pusher centrifuge from the shafting assembly.
Integral hydraulics
An automated mechanism allows the system to operate independently.
Seals
Shaft seals eliminate the possibility of cross-contamination between the hydraulic and process ends. Options include a centrifugal liquid ring seal and a non-contacting inert gas purged labyrinth seal that eliminates leakage.
Pre-drained funnel
The pre-drained funnel removes a portion of the feed fluid through a perforated surface. This feature helps to concentrate the feed, which is especially important for drainage-limited applications. However, the funnel cannot be back-washed, so this feature is only available for crystals that tend not to back-crystallise.
Integrated thickening
Integrating the thickening function enables the pusher to be loaded with mixture with as little as 30-35% wt of solid. It also reduces process costs of solid-liquid separation by as much as 20%.
References
Centrifuges | Pusher centrifuge | [
"Chemistry",
"Engineering"
] | 3,084 | [
"Chemical equipment",
"Centrifugation",
"Centrifuges"
] |
40,789,885 | https://en.wikipedia.org/wiki/Peeler%20centrifuge | The peeler centrifuge is a device that performs separation by rotating a filtration basket about an axis. A centrifuge operates on the principle of centrifugal force to separate solids from liquids by density difference. High rotation speed provides a high centrifugal force that allows the suspended solids in the feed to settle on the inner surface of the basket. There are three kinds of peeler centrifuge: the horizontal, the vertical and the siphon peeler centrifuge. These classes of instrument apply to various areas such as fertilisers, pharmaceuticals, plastics and food, including artificial sweetener and modified starch.
General principles
Operation
The peeler centrifuge operates on the principle of centrifugal force to separate solids from liquids by density difference. High rotation speed provides a high centrifugal force that allows the suspended solids in the feed to settle on the inner surface of the drum; filtration and washing are carried out at the same rotational speed and in the same centrifuge vessel.
Peeler centrifuges can run as batch or continuous processes, and may be used to achieve maximum removal of solids from liquids that must be as pure as possible and cannot easily be separated by density difference alone.
Types
Horizontal peeler
The horizontal peeler centrifuge is one of the oldest peeler centrifuge designs. The first horizontal peeler centrifuge was manufactured by Buffaud et Robatel (Lyon, France) in 1905 for synthetic ammonia production at plants in Europe, Russia, Japan, and the USA.
The horizontal peeler centrifuge has a general structure (refer to section 1.1) with a horizontal rotating basket that sits inside an external casing. The drum door can be opened fully and carries the feed, wash, feed-control and solids-discharge components. Modern machines are cantilever-supported for ease of access to the inside of the drum and its components for maintenance. The unit requires a rugged structure, as it must handle high speeds of rotation and feed, and high discharge capability.
The horizontal arrangement of rotating drum provides several advantages (refer to section 6.1) over other centrifuge systems, such as vertical basket centrifuge.
Vertical peeler
The vertical peeler (also known as the vertical basket peeler) is a centrifuge system that has the same basic operating principles as the horizontal peeler centrifuge. The only difference, other than the arrangement, is that the scraped-off solids layers are not removed via a peeler chute, but are discharged through the discharge chute at the bottom of the centrifuge vessel.
Because it does not utilise the peeler action to remove the solid layers, the centrifuge must be decelerated so that the solid product can be discharged by gravity, without the centrifugal force that would prevent the solids from dropping into the discharge chute.
Siphon peeler
The siphon peeler centrifuge is another peeler centrifuge design, developed by Krauss-Maffei in the 1970s. Rather than relying only on centrifugal-pressure filtration, the siphon peeler centrifuge contains perforated units both horizontally and vertically.
The siphon peeler centrifuge has a similar structure to the Krauss-Maffei horizontal peeler centrifuge, except that instead of an inner drum wall with pores it has a solid inner drum wall: the liquid filtered through the solid cake and filter medium flows along the wall axially and through a siphon pipe into a separate chamber, unlike in the horizontal peeler centrifuge.
The solid wall and the immediate removal of the filtered liquid give this design a large pressure difference across the solid cake and filter medium, and hence a strong driving force, which increases filtration efficiency.
Range of Applications
Horizontal peeler
Generally, peeler centrifuges are used to separate solids, usually fine particles, from suspension liquid feed mixtures. Horizontal peeler centrifuges are a widely used design in separation processes in:
Bulk chemicals such as petrochemical intermediates, fertilisers, chlorides, sulfates and calcium.
Fine chemicals: aluminium fluoride, amino acids, bleaching agents, surfactants, pesticides, catalyst and dyestuffs.
Pharmaceuticals
Plastics
Foods including artificial sweeteners, caffeine and modified starches.
Vertical peeler
Vertical peeler centrifuge applications are similar to those of the horizontal peeler centrifuge; the vertical centrifuge is mainly used in the same process areas (refer to section 3.1).
Siphon peeler
Siphon peeler centrifuge has similar applications to the horizontal peeler centrifuge (refer to section 3.1) and is very flexible in use. It is used for starch, herbicides or fine chemicals.
Main process characteristics
Main process of centrifuge
There are several steps in the peeler centrifuge process:
Feeding
The suspension is introduced to the rotating centrifuge basket via the feed distributor, to prevent it from spilling over the basket rim and to ensure an even cake level. The level of the feed is monitored and regulated by a feed controller, and normally the basket is filled up to 75-80% of the basket rim height. The feed step is complete when the filter cake has reached the desired level.
Filtration
Primary filtration of the liquid phase of the feed through the filter medium attached in the basket proceeds until the liquid has passed through the filter cake and drained outside the rotating basket. The solid phase is held on the filter medium and becomes sediment, which forms a secondary filter giving extra efficiency.
Washing
After the filtration step, washing liquid is introduced through the feed distributor and a separate spray bar. Under the effect of the centrifugal force, the washing liquid permeates the cake and filter. Washing can be carried out at the same or a higher speed than the feeding step.
Dry spinning
The fluid in the filter cake is drained by accelerating the basket to the maximum allowable speed, which is then kept constant. The residual cake moisture decreases over time under the constant centrifugal force, and the step ends when the desired residual humidity is reached.
Discharging the solids/filter cake
The filter cake is removed from the basket by a pivoting peeling device equipped with a broad peeling knife, scraping until a thin layer of filter cake is retained to protect the filter medium. The scraped layers of product are discharged through an inclined chute or screw conveyor.
Cleaning centrifuge
Cleaning of the interior of the centrifuge is required after every cycle, or after several cycles, to clear the solids out of the holes in the basket and preserve the efficiency of filtration. The inside of the centrifuge with all built-in devices can be cleaned automatically with an integrated CIP (cleaning-in-place) system, without the need to open the centrifuge.
Operational parameters
The manufacturer can vary the operational parameters of the peeler centrifuge for specific applications. The parameters in this section are from several different peeler manufacturers.
Centrifugal force
GMP-Compatible HZ peeler Centrifuge (HZ-Phll): 1060-2030G
Mitsubishi/KM Siphon peeler Centrifuge (HZ-Si): 200-1895G
Rotational speed
Horizontal peeler Centrifuges (H630P-H1250P): 1180-2400rpm
Vertical peeler Centrifuges (V 800 – 1600 BG): 575-1000rpm
Mitsubishi/KM Siphon peeler Centrifuge (HZ-Si): 950-3000rpm
Particle size
Horizontal Peeler Centrifuge: 2-500 μm
Horizontal discontinuous separation Peeler Centrifuge: 2 μm-10mm
Capacity
Horizontal peeler centrifuges (H630P-H1250P): 42-303L
GMP-Compatible HZ peeler Centrifuge(HZ-Phll): 11-333L
Vertical peeler centrifuges (V 800 – 1600 BG): 160-1250L
Mitsubishi/KM Siphon peeler Centrifuge(HZ-Si): 11-875L
Processes and Equipment Design
Design Heuristics
Horizontal discontinuous separation Peeler Centrifuge
First, the suspension is fed into the basket at fixed or variable speed by gravity or pump. The feed is controlled by a filling valve and the basket is monitored by a filling-level controller. The following step is washing the solids: the cake is washed intensively using a variety of fluids. After the washing step, a hydraulically powered scraper is used to remove the cake.
Vertical peeler centrifuge
At first, the suspension is fed into the vertical basket at an adjusted speed. The effect of the centrifugal force makes the solid particles settle on the filter fabric on the basket shell. Depending on the product, intermediate centrifuging may be required, which interrupts the filling process; it may be repeated several times depending on the filtration behaviour of the product. The solids in the basket are washed at the same or a higher speed than the filling process after the suspension has been centrifuged, and a hydro-extraction stage follows; its duration depends on the solids and fluid parameters, the height of the filter cake and, naturally, the centrifugal force acting upon the fluid to be separated. After the extraction, the filter cake is discharged from the basket by means of a short peeling knife.
Design Equation
Centrifugal Acceleration
(Equation 1)
$a_c = \frac{v_t^2}{r}$
where
$v_t$ is the tangential velocity at the given point on the curved trajectory
$r$ is the radius of curvature at the point.
Equation 1 depicts the kinematic relationship for the acceleration required to sustain the movement of a mass along a curved trajectory. The corresponding (centripetal) force acts perpendicularly to the direction of motion and is directed radially inward.
Solid body rotation
When a body of fluid rotates in solid-body mode, the tangential or circumferential velocity is linearly proportional to the radius.
(Equation 2)
$v_t = \Omega r$
where
$\Omega$ is the angular speed of the rotating frame
$r$ is the radius from the axis of rotation.
Equation 2 shows the linear relationship between the radius and the circumferential velocity $v_t$.
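Combining Equations 1 and 2 gives the centripetal acceleration of solid-body rotation, $a = \Omega^2 r$, usually quoted as a multiple of standard gravity (the G-values listed under Operational parameters). A minimal sketch; the basket radius and speed below are illustrative, not taken from a specific machine:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def g_level(rpm: float, radius_m: float) -> float:
    """Centrifugal acceleration a = omega^2 * r (Equations 1 and 2),
    expressed as a multiple of standard gravity."""
    omega = rpm * 2.0 * math.pi / 60.0  # convert rev/min to rad/s
    return omega ** 2 * radius_m / G0

# Illustrative: a 0.5 m diameter basket spinning at 2400 rpm
print(f"{g_level(2400.0, 0.25):.0f} G")  # about 1610 G
```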
Fluid Viscosity and Inertia
(Equation 3)
$\delta = \sqrt{\nu / \Omega}$
where
$\nu$ is the kinematic viscosity of the liquid and $\Omega$ is the angular speed of the rotating frame
$\delta$ is the thickness of the layers called 'Ekman layers'. This layer is responsible for the transfer of angular momentum between the rotating surfaces and the fluid during acceleration and deceleration.
Equation 3 shows that the dynamic effect of the viscosity of the liquid slurry in a sedimenting centrifuge is confined to very thin fluid layers.
Cake dryness
Dewatering is an important step in centrifuge filtration to ensure the quality of the solid output. The dryness can be measured by the cake porosity,
(Equation 4)
$\varepsilon = 1 - \frac{m_s}{\rho_s V_c}$
where
$m_s$ is the mass of solid
$V_c$ is the cake volume
$\rho_s$ is the cake solid density.
From the porosity, the moisture content of the wet cake is measured by looking at the saturation, S.
(Equation 5)
$S = \frac{1 - w}{w} \cdot \frac{\rho_s}{\rho_L} \cdot \frac{1 - \varepsilon}{\varepsilon}$
where
$w$ is the weight fraction of solids in the wet cake
$\rho_L$ is the liquid density.
Equation 5 describes the case when the saturation is less than 1, i.e. an unsaturated solid cake.
When the solid cake is saturated, S = 1, the cake porosity can be determined by Equation 6 below
(Equation 6)
$\varepsilon = \frac{(1 - w)\,\rho_s}{(1 - w)\,\rho_s + w\,\rho_L}$
Total solid recovery
In clarification, the total solids recovered in the solid cake indirectly measure the clarity of the effluent. This indirect relationship is shown in Equation 7,
(Equation 7)
$R_s = \frac{\dot{m}_c\, w_c}{\dot{m}_f\, w_f}$
where
the subscripts c and f denote the cake and feed respectively
$\dot{m}$ denotes the bulk mass flow rate and $w$ the solids weight fraction defined above.
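The cake-dryness and recovery relations (Equations 4-7) translate directly into code. A minimal sketch; all numerical values are illustrative, not measured data:

```python
def cake_porosity(m_solid, v_cake, rho_solid):
    """Equation 4: porosity = 1 - (solid volume) / (cake volume)."""
    return 1.0 - m_solid / (rho_solid * v_cake)

def saturation(porosity, w_solid, rho_solid, rho_liquid):
    """Equation 5: fraction of the void volume occupied by liquid,
    from the solids weight fraction w of the wet cake."""
    liquid_per_solid = (1.0 - w_solid) / w_solid  # kg liquid per kg solid
    return liquid_per_solid * (rho_solid / rho_liquid) * (1.0 - porosity) / porosity

def solids_recovery(mdot_cake, w_cake, mdot_feed, w_feed):
    """Equation 7: solids leaving in the cake over solids in the feed."""
    return (mdot_cake * w_cake) / (mdot_feed * w_feed)

# Illustrative: 12 kg of crystals (density 2160 kg/m^3) in a 0.01 m^3 cake
eps = cake_porosity(12.0, 0.01, 2160.0)
print(f"porosity {eps:.2f}, saturation {saturation(eps, 0.85, 2160.0, 1000.0):.2f}")
print(f"recovery {solids_recovery(0.9, 0.95, 1.0, 0.90):.2f}")
```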
Critical Speed
The critical speed is an important factor to consider in design. The critical speed is the speed of rotation at which the frequency of rotation matches the natural frequency of the rotating assembly. At this speed, any vibration caused by a slight unbalance in the rotor is strongly reinforced, which may result in high stresses or even failure of the equipment.
Advantages and limitations
Advantages
Horizontal Peeler Centrifuge
The horizontal peeler centrifuge is known for the many advantages deriving from the horizontal rotation arrangement of its main drum. Arranging the axis horizontally gives advantages in washing capability and a uniform solids size distribution, for better solids output quality. The feeding action within the centrifuge is effective over the large inner surface of the drum, as the feed is introduced perpendicular to gravity and the centrifugal force spreads the solids out evenly.
Also, the door of the centrifuge can be fully opened, which allows easy access to the inside of the rotating drum. This also means the operator can access internal components, including the filter cloth, which requires periodic replacement.
Its high discharge speed reduces the time taken to accelerate and brake the rotation for high capacity, and with it power consumption, wear and tear. This short cycle time is particularly beneficial for short-cycle, fast-filtering processes. The horizontal peeler centrifuge thus provides higher centrifugal forces than the vertical peeler, increasing performance and flexibility.
Due to the continuous discharge of filtered liquid through the perforated inner surface of the rotating basket, the pressure drop, the main driving force of filtration, is increased across the solid cake and filter medium; as a result, the filtration rate can be boosted.
Moreover, the high rotational speed results in a high centrifugal force that lowers the residual cake moisture effectively and improves the use of washing liquid and the washing results. Because of the effective washing and drying processes, which yield a high-purity output, it is widely used in ultra-clean material processes.
Mitsubishi Peeler Centrifuge
The Mitsubishi peeler centrifuge has a liquid-tight construction, which is suitable for treating solvents and dangerous liquids. Over 1000 G of centrifugal force gives the cake a low moisture content. The cake-raking knife located above the feed liquid piping prevents liquid from dripping directly onto the cake outlet, thereby recovering contamination-free solids. The fully opening front door makes filter cloth replacement and internal flushing easy.
Siphonage enlarges the effect of centrifugal filtration, and the siphon chamber is used for backwashing to avoid loading of the filter cloth. To ensure vibration-free filtration, the filtering speed can be controlled. When high G and the siphon effect are combined, powerful dewatering and great energy savings are possible at the drying stage. Fast cake raking minimises loss of time and driving power as well, and the siphon mechanism allows varied cleaning operations while saving cleaning time and liquid.
Horizontal discontinuous separation Peeler Centrifuge
The horizontal peeler centrifuge can be used for a discontinuous separation process. This discontinuous process can be automated to operate with a constant basket speed.
Vertical Peeler Centrifuge
The vertical peeler centrifuge is more cost efficient than the horizontal Krauss-Maffei peeler centrifuge due to its compact design. The continuous operation of the centrifuge allows all process steps to be carried out simultaneously, increasing the overall throughput.
Siphon peeler centrifuge
The siphon peeler has a similar configuration to Krauss-Maffei's simple horizontal centrifuge. However, the use of a solid inner drum wall instead of a perforated basket wall increases the pressure drop available for filtration. The instantly removed liquid, which flows into the rear chamber through the siphon pipe, creates a vacuum underneath the filter medium. This increased pressure gradient means a higher driving force for filtration, and hence much more efficient filtration.
Moreover, as the filtrate flows along the solid wall, the siphon basket provides a skimming stage that enhances the purity of the filtrate. The siphon basket, which has a larger radius than the filter cloth, reduces the pressure behind the cloth towards the vapour pressure of the filtrate liquid, overcoming the wet layer of liquid held over the surface of the solid cake by capillary action, which would otherwise reduce the filtration rate. This rotational siphon makes many advantages possible, such as accurate control of the filtration rate and batch washing in each cycle, which renews the heel, maintains permeability and extends heel life.
Limitations
Horizontal peeler centrifuge
Despite the many advantages of the horizontal peeler centrifuge, there are several operational limitations associated with its characteristics, which may need to be addressed, or competitive processes may replace the peeler centrifuge where these limitations matter.
In operation, the peeler is kept away from the innermost surface of the separation drum, as a sharp peeler may damage the surface of the filter medium or be damaged by abrasion, which would require replacement. This means the solid cake on the wall cannot be completely removed by the peeler, as it is recommended to leave a thin layer of solid cake. Although this layer of solids acts as a sub-filtration layer providing an extra separation step, it may also lengthen the filtration time, as it is another layer the liquid has to pass through.
Cost-wise, the peeler centrifuge is not cost efficient compared to a vertical centrifuge of comparable size, due to its high capital cost; for comparison with other competitive separation processes, refer to section 7.
Vertical peeler centrifuge
The output solids scraped off by the peeler during the process can only be retrieved manually by slowing down the process. The solids are collected at the bottom of the basket, and the rotation has to be stopped or slowed down to gain control over the discharge. This extends the batch processing time and consequently reduces the filtration rate.
Also, despite the vertical peeler's low cost due to its compact design, the throughput of the vertical centrifuge is restricted compared to other peeler or centrifuge systems, limiting its overall performance in comparison to other processes.
Siphon peeler centrifuge
The siphon peeler has similar limitations to the Krauss-Maffei horizontal peeler centrifuge, as it is based on that design except for the siphon basket. The limitations are the careful peeler-action control needed to prevent possible damage to the filtration media from abrasion, high capital costs, and a large space requirement due to the horizontal arrangement.
Moreover, as the siphon peeler centrifuge utilises not only centrifugal force but also a pressure difference as the filtration driving force, overpressure can sometimes occur in the process housing. Because of the high pressure difference, with high pressure above the solid cake inside the basket and a near-vacuum condition across the solid cake and filter medium, the installation procedure becomes more complex, and the siphon peeler system has not been widely adopted.
Competitive processes
Sedimentation Centrifuge
Tubular bowl
The tubular bowl centrifuge is widely used for nano-scale particle separation and is one of the oldest centrifuge designs. Nanoparticles are separated from a suspension using this process because its very high G value, i.e. high rotation speed, ensures reasonable throughputs.
Because of its ability to separate nano-scale particles with a mean size below 1 μm, this process is widely used in pharmaceutical and biotechnology applications, where the demand for fine particles exhibiting a defined particle size distribution increases steadily.
Against its many advantages, the manual scraping and dismantling of the deposited solids bed is a limitation of this process.
Chamber bowl
The chamber bowl is another competitive process due to its ability to handle feeds with low solids content and its effective classification of solids. Chamber bowl separators are solid bowl centrifuges with cylindrical inserts of increasing diameter that form multiple chambers. They are used to separate solids present in low concentration from liquid, and the long retention time in the centrifuge leaves the separated solids in a very compact state.
The chamber bowl centrifuge has a maximum solids holding capacity of 0.064 m³ and is commonly used to clarify fruit juice and beer. These separation processes usually incorporate a centripetal pump at the effluent discharge point to minimise foaming and contact with air.
Due to its ability to operate in discontinuous mode, the chamber bowl is a suitable option for batch processing. The classification of the solids separated from the feed is also effective, as the centrifugal force exerted on the slurry increases as it passes from the middle chamber outwards.
In terms of maintenance, unlike peeler centrifuge designs, the machine has to be shut down completely to remove the chambers from the casing. This also requires manual scraping to remove the solids deposited on the chamber wall.
Decanter centrifuge
The decanter centrifuge is usually available in two arrangements, vertical and horizontal. It is suitable for treating suspensions with a very high solids content (40-60%), which makes it applicable in many more applications than the peeler centrifuge. The decanter centrifuge is also widely used because of its ability to separate solids from liquid mixtures; its decanting action can also separate two immiscible liquids in the feed, with the necessary baffles and outlets, while continuously discharging bulky sludge.
However, its limited dewatering makes the peeler centrifuge, with its effective dewatering and high-quality solids output, competitive. A recent challenge for this design is to get the throughput as dry as possible.
Moving bed filters
Pusher centrifuge
The pusher centrifuge consists of a cylindrical basket fixed to a hollow shaft and a plate at the basket bottom mounted on a rod. This pusher rod moves in the axial direction so that a pushing action is created at the bottom of the basket. The feed enters the system and the solid cake forms along the wall. The cake ring formed is then pushed towards discharge by the pushing motion of the pusher plate.
Unlike peeler centrifuges, the pusher centrifuge is influenced by the feed condition, which needs to be kept as constant as possible due to the continuous product transport by the pushing motion. However, the pusher centrifuge is one of the most competitive processes relative to peeler centrifuges, offering the most effective washing and dewatering of any continuous centrifuge process. Pusher centrifuge performance is usually a function of crystal size as well as shape, meaning the pusher is much more effective for larger-particle separation (generally around 150 μm).
Increased crystal size decreases the surface area per unit mass, so less moisture binds to the surface, providing a drier cake. In addition, this centrifuge has a very effective washing process.
Hydrocyclones
The hydrocyclone is used for separating solids from a liquid, or two immiscible liquids, in the feed. Hydrocyclones are centrifugal separators that consist of a vertical cylinder with a conical bottom. The feed enters through a tangential entry nozzle, and a rotational movement is imparted to the feed, giving rise to centrifugal force.
Hydrocyclones are relatively inexpensive and simple in operation. However, there are capacity limitations, as a hydrocyclone with a large diameter will not generate sufficient centrifugal force for separation. To overcome this limit, unlike the peeler centrifuge, hydrocyclones are sometimes used in a series arrangement to achieve multiple separation stages.
The Hydrocyclone is one of the well-studied designs due to its low cost, simplicity and other advantages.
Horizontal Peeler and Inverting Filtering Centrifuge
A complete discharge of product can be guaranteed by pneumatic residual heel removal. A fully automated system ensures that all surfaces in the separation process area are thoroughly cleaned.
Throughput solids leave through a chute suited to the product, so products can be discharged easily from the centrifuge and slide without sticking to the chute. Alternatively, solids leave the centrifuge via a screw conveyor, which facilitates the removal of solids that tend not to discharge via a chute because of their sticky nature.
Discharge of the throughput solids is conducted via a hydraulic peeler knife, which is designed according to the size of the centrifuge, such as a full-size peeler or a pivot-and-dip peeler.
References
19. Tarleton, E. S. Wakeman, R. J. (2007). Solid/Liquid Separation - Equipment Selection and Process Design. Elsevier. (Online version available at: http://app.knovel.com/hotlink/toc/id:kpSLSESPD6/solid-liquid-separation)
20. Harald Anlauf (2007). Recent developments in centrifuge technology (Online version available at: digbib.ubka.uni-karlsruhe.de/volltexte/documents/874457)
21. [Peer-reviewed] (2005). "Peeler centrifuge provides solution for Rhodia." Filtration & Separation 42(3): 12-13.
22. News, T. (2002). "Centrifuge facilitates small scale testing." from http://news.thomasnet.com/fullstory/Centrifuge-facilitates-small-scale-testing-10126.
23. [Peer-Reviewed] (1995). "Tubular bowl centrifuges for biotech applications." Filtration & Separation 32(10): 924.
24. Sutherland, K. (2009). "Filtration and separation technology: What's new in sedimentation?" Filtration & Separation 46(1): 34-36.
25. Sutherland, K. (2007). "Back to basic: Bulk chemical industry separations." Filtration & Separation 44(8): 32-35.
26. Presentation prepared by Catherine Chao, Danielle Turney and Dana Zoratto. ‘Basket Centrifuge: Applications in Insulin Production.
27. Comicondor 2012, Pharma Peeler Centrifuge HX/GMP model, brochure. (Retrieved from http://www.comicondor.com/wp-content/uploads/2012/06/GMP_aprile.pdf)
28. Heinkel Filtering Systems Inc. and HEINKEL Process Technology GmbH (2010). “Selection of Filtering Centrifuges”
29. Mitsubishi Kakoki Kaisha, LTD, 2012, GMP-Compatible Horizontal Peeler Centrifuge (HZ-PHII)
Centrifuges | Peeler centrifuge | [
"Chemistry",
"Engineering"
] | 5,512 | [
"Chemical equipment",
"Centrifugation",
"Centrifuges"
] |
49,803,641 | https://en.wikipedia.org/wiki/Excess%20noise%20ratio | In electronics, excess noise ratio is a characteristic of a noise generator such as a "noise diode" that is used to measure the noise performance of amplifiers. The Y-factor method is a common measurement technique for this purpose.
By using a noise diode, the output noise of an amplifier is measured using two input noise levels, and by measuring the output noise factor (referred to as Y) the noise figure of the amplifier can be determined without having to measure the amplifier gain.
Background
Any amplifier generates noise. In a radio receiver the first stage dominates the overall noise of the receiver and in most cases thermal, or Johnson noise, determines the overall noise performance of a receiver. As radio signals decrease in size, the noise at the input of the receiver will determine a lower threshold of what can be received. The level of noise is determined by calculating the noise in a 50 ohm resistor at the input of the receiver as follows:
$P = k T B$
where:
$k$ = Boltzmann constant = $1.38 \times 10^{-23}$ J/K
$T$ = temperature
$B$ = bandwidth
Thus, receivers with a narrow bandwidth have a higher sensitivity than receivers with a large bandwidth and input noise can be decreased by cooling the receiver input stage.
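For example, the formula above reproduces the familiar room-temperature noise floor of about −174 dBm per hertz of bandwidth. A minimal sketch; the bandwidth values are illustrative:

```python
import math

K_B = 1.38e-23  # Boltzmann constant, J/K

def noise_floor_dbm(temperature_k: float, bandwidth_hz: float) -> float:
    """Thermal (Johnson) noise power P = k*T*B, expressed in dBm."""
    p_watts = K_B * temperature_k * bandwidth_hz
    return 10.0 * math.log10(p_watts / 1e-3)

print(f"{noise_floor_dbm(290.0, 1.0):.1f} dBm in 1 Hz")       # about -174 dBm
print(f"{noise_floor_dbm(290.0, 2.7e3):.1f} dBm in 2.7 kHz")  # about -140 dBm
```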
A noise diode is a device which has a defined excess noise ratio (ENR).
When the diode is off (unpowered) the noise from it will be thermal noise defined by the above formula. The bandwidth to be used is the bandwidth of the receiver.
When the diode is on (powered) the noise from it will be increased from the thermal noise by the diode's excess noise ratio. This figure could be 6 dB for testing an amplifier with 40 dB gain and could be 16 dB for an amplifier with less gain or higher noise.
To determine the noise figure of an amplifier one uses a noise diode at the input to the amplifier and determines the output noise Y with the diode switched on and off.
Knowing both Y and the ENR, one can then determine the amount of noise contributed by the amplifier and hence can calculate the noise figure of the amplifier.
Other techniques exist for making this measurement but either require accurate measurements of impedance or are inaccurate.
The following formula relates the Y-factor to the ENR:
$F = \frac{\mathrm{ENR}}{Y - 1}$
where $F$ is the noise factor expressed as a linear ratio; equivalently, in decibels, $NF = \mathrm{ENR}_{\mathrm{dB}} - 10 \log_{10}(Y - 1)$.
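A minimal sketch of the Y-factor arithmetic; the ENR and measured power ratio below are illustrative:

```python
import math

def noise_figure_db(enr_db: float, y_factor: float) -> float:
    """Y-factor method: NF = ENR(dB) - 10*log10(Y - 1), where Y is the
    linear ratio of output noise power with the diode on versus off."""
    return enr_db - 10.0 * math.log10(y_factor - 1.0)

# Illustrative: a 6 dB ENR diode and a measured on/off power ratio of 3.5
print(f"NF = {noise_figure_db(6.0, 3.5):.2f} dB")  # about 2.0 dB
```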
Measurements
Noise figure measurements can be made with a noise diode, a power supply for the noise diode, and a spectrum analyser. They can also be made with a specialist noise figure meter. The advantage of the noise figure meter is that it will automatically switch the noise diode on and off, giving a continuous reading of Y; it will also have the correct bandwidths in its receiver to average the received noise in an optimum fashion. However, accurate noise figure measurements are possible with both the noise figure meter and a spectrum analyser.
References
Noise (electronics)
Engineering ratios | Excess noise ratio | [
"Mathematics",
"Engineering"
] | 571 | [
"Quantity",
"Metrics",
"Engineering ratios"
] |
49,808,784 | https://en.wikipedia.org/wiki/Multi-parametric%20surface%20plasmon%20resonance | Multi-parametric surface plasmon resonance (MP-SPR) is based on surface plasmon resonance (SPR), an established real-time label-free method for biomolecular interaction analysis, but it uses a different optical setup, a goniometric SPR configuration. While MP-SPR provides the same kinetic information as SPR (equilibrium constant, dissociation constant, association constant), it also provides structural information (refractive index, layer thickness). Hence, MP-SPR measures both surface interactions and nanolayer properties.
History
The goniometric SPR method was researched alongside focused-beam SPR and Otto configurations at the VTT Technical Research Centre of Finland from the 1980s by Dr. Janusz Sadowski. The goniometric SPR optics was commercialized by Biofons Oy for use in point-of-care applications. The introduction of additional measurement laser wavelengths and the first thin-film analyses followed in 2011, giving rise to the MP-SPR method.
Principle
The MP-SPR optical setup measures at multiple wavelengths simultaneously (similarly to spectroscopic SPR), but instead of measuring at a fixed angle, it scans across a wide range of angles θ (for instance 40 degrees). This results in the measurement of full SPR curves at multiple wavelengths, providing additional information about the structure and dynamic conformation of the film.
Measured values
The measured full SPR curves (x-axis: angle, y-axis: reflected light intensity) can be transcribed into sensograms (x-axis: time, y-axis: selected parameter such as peak minimum, light intensity, peak width). The sensograms can be fitted using binding models to obtain kinetic parameters including on- and off-rates and affinity. The full SPR curves are used to fit the Fresnel equations to obtain the thickness and refractive index of the layers. Also, because the whole SPR curve is scanned, MP-SPR is able to separate the bulk effect and analyte binding from each other using parameters of the curve.
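The Fresnel-fitting step can be illustrated with a standard three-layer (prism/metal/medium) reflectance calculation for p-polarized light, which produces the characteristic SPR dip as the angle is scanned. The layer parameters below are generic textbook-style assumptions, not values for any particular instrument; a minimal sketch:

```python
import numpy as np

def spr_reflectance(theta_deg, wavelength_nm=670.0, n_prism=1.52,
                    eps_metal=-13.0 + 1.0j, d_metal_nm=50.0, n_medium=1.33):
    """Reflectance |r|^2 of p-polarized light for a Kretschmann
    prism/metal/medium stack (two-interface Fresnel formula)."""
    k0 = 2.0 * np.pi / (wavelength_nm * 1e-9)
    kx = k0 * n_prism * np.sin(np.radians(theta_deg))
    eps = [n_prism ** 2, eps_metal, n_medium ** 2]
    kz = [np.sqrt(e * k0 ** 2 - kx ** 2 + 0j) for e in eps]

    def r_p(i, j):  # p-polarized Fresnel coefficient between layers i and j
        return (eps[j] * kz[i] - eps[i] * kz[j]) / (eps[j] * kz[i] + eps[i] * kz[j])

    phase = np.exp(2j * kz[1] * d_metal_nm * 1e-9)
    r = (r_p(0, 1) + r_p(1, 2) * phase) / (1.0 + r_p(0, 1) * r_p(1, 2) * phase)
    return np.abs(r) ** 2

for angle in np.linspace(60.0, 78.0, 7):
    print(f"{angle:5.1f} deg  R = {spr_reflectance(angle):.3f}")  # dip near resonance
```

Fitting routines vary the layer thickness and refractive index until curves like this match the measured ones at each wavelength.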
While QCM-D measures wet mass, MP-SPR and other optical methods measure dry mass, which enables analysis of water content of nanocellulose films.
Applications
The method has been used in life sciences, material sciences and biosensor development.
In life sciences, the main applications focus on pharmaceutical development, including small-molecule, antibody or nanoparticle interactions with a target, with a biomembrane or with a living cell monolayer. MP-SPR was the first method able to separate transcellular and paracellular drug uptake in real time and label-free for targeted drug delivery.
In biosensor development, MP-SPR is used for assay development for point-of-care applications. Typical developed biosensors include electrochemical printed biosensors, ELISA and SERS.
In material sciences, MP-SPR is used for optimization of thin solid films from Ångströms to 100 nanometers (graphene, metals, oxides) and soft materials up to microns (nanocellulose, polyelectrolytes), including nanoparticles. Applications include thin film solar cells, barrier coatings including anti-reflective coatings, antimicrobial surfaces, self-cleaning glass, plasmonic metamaterials, electro-switching surfaces, layer-by-layer assembly, and graphene.
References
Protein methods
Scientific instruments
Scientific equipment
Molecular biology techniques
Nanotechnology
Spectroscopy
Biochemistry methods
Biophysics
Forensic techniques
Protein–protein interaction assays
Optical metrology
Plasmonics | Multi-parametric surface plasmon resonance | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering",
"Biology"
] | 736 | [
"Protein–protein interaction assays",
"Surface science",
"Measuring instruments",
"Nanotechnology",
"Spectroscopy",
"Plasmonics",
"Instrumental analysis",
"Protein methods",
"Protein biochemistry",
"Materials science",
"Scientific instruments",
"Molecular biology techniques",
"Biophysics",
... |
43,647,930 | https://en.wikipedia.org/wiki/Molecular%20drag%20pump | A molecular drag pump is a type of vacuum pump that utilizes the drag of air molecules against a rotating surface. The most common sub-type is the Holweck pump, which contains a rotating cylinder with spiral grooves that direct the gas from the high vacuum side of the pump to the low vacuum side. The older Gaede pump design is similar, but is much less common due to disadvantages in pumping speed. In general, molecular drag pumps are more efficient for heavy gases, so the lighter gases (hydrogen, deuterium, helium) will make up the majority of the residual gases left after running a molecular drag pump.
The turbomolecular pump, invented in the 1950s, is a more advanced version based on similar operation, and a Holweck pump is often used as the backing pump for it. The Holweck pump can produce a vacuum as low as .
History
Gaede
The earliest molecular drag pump was created by Wolfgang Gaede, who had the idea of the pump in 1905, and spent several years corresponding with Leybold trying to build a practical device. The first prototype device to meet expectations was completed in 1910, achieving a pressure of less than mbar. By 1912, twelve pumps had been created, and the concept was presented to the meeting of the Physical Society in Münster on 16 September of that year, and was generally well received.
Gaede published several papers on the principles of this molecular pump, and patented the design. The working principle is that the gas in the chamber is exposed to one side of a rapidly spinning cylinder. Collisions between the gas and the spinning cylinder give the gas molecules momentum in the same direction as the surface of the cylinder, which is designed to move away from the vacuum chamber and toward a fore-line. A separate backing pump is used to lower the pressure at the fore-line (the output of the molecular pump), since in order to function, the molecular pump needs to operate at pressures low enough that the gas inside is in free molecular flow. One important measure of the pump is the compression ratio: the ratio of the outlet pressure to the pressure on the vacuum side, which is roughly constant across different pressures but depends on the individual gas.
The compression ratio can be estimated using the kinetic theory of gases by calculating the flow due to collisions with the rotating surfaces and the rate of diffusion in the reverse direction. The compression ratio tends to be better for heavy molecules, since the thermal velocity of lighter gases is higher and the speed of the rotating cylinder has less effect on these faster-moving, lighter gases.
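This kinetic-theory argument can be made concrete: the attainable compression ratio grows roughly exponentially with the ratio of rotor surface speed to the mean thermal speed of the gas, which scales as 1/√M. A minimal sketch; the exponential scaling law is a simplification and the geometry factor is an illustrative assumption:

```python
import math

R_GAS = 8.314  # molar gas constant, J/(mol*K)

def mean_thermal_speed(molar_mass_kg: float, temp_k: float = 293.0) -> float:
    """Mean molecular speed from kinetic theory: sqrt(8*R*T / (pi*M))."""
    return math.sqrt(8.0 * R_GAS * temp_k / (math.pi * molar_mass_kg))

def compression_ratio(rotor_speed: float, molar_mass_kg: float,
                      geometry_factor: float = 20.0) -> float:
    """Rough drag-pump scaling K ~ exp(g * u / v_mean): slower (heavier)
    molecules are compressed far more strongly than light, fast ones.
    geometry_factor stands in for the channel geometry (illustrative)."""
    return math.exp(geometry_factor * rotor_speed /
                    mean_thermal_speed(molar_mass_kg))

u = 300.0  # rotor surface speed, m/s
for name, molar_mass in [("H2", 2e-3), ("He", 4e-3), ("N2", 28e-3)]:
    print(f"{name}: v_mean = {mean_thermal_speed(molar_mass):6.0f} m/s, "
          f"K ~ {compression_ratio(u, molar_mass):.1e}")
```

With these illustrative numbers the nitrogen compression ratio comes out orders of magnitude higher than that for hydrogen or helium, which is why the light gases dominate the residual gas.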
This "Gaede molecular pump" was used in an early experiment testing vacuum gauges.
Holweck
The improved Holweck design was invented in the early 1920s by Fernand Holweck as part of the apparatus for his work on soft X-rays. It was manufactured by the French scientific instrument maker Charles Beaudouin. Holweck applied for a patent on the device in 1925. The main difference from the Gaede pump was the addition of a spiral, cut into either the spinning cylinder or the static housing. Holweck pumps have frequently been modeled theoretically. Holweck's classmate and collaborator, H. Gondet, would later suggest other improvements to the design.
Siegbahn
Another design was given by Manne Siegbahn, who had produced a pump that was in use by 1926. About 50 of Siegbahn's pumps were made from 1926 to 1940. These pumps were generally slower than comparable diffusion pumps, so were rare outside of Uppsala University. Larger, faster pumps of the Siegbahn type began to be made around 1940 for use in a cyclotron. In 1943, Siegbahn published a paper regarding these pumps, which were based on a rotating disk.
Use in turbomolecular pumps
While the molecular drag pumps of Gaede, Holweck, and Siegbahn are functional designs, they have remained relatively uncommon as stand-alone pumps. One issue was pumping speed: alternatives such as the diffusion pump are much faster. Secondly, a major issue with these pumps is reliability: with a gap between moving parts in the tens of micrometers, any dust or temperature change threatens to bring the parts into contact and cause the pump to fail.
The turbomolecular pump overcame many of these disadvantages. Many modern turbomolecular pumps contain built-in molecular drag stages, which allows them to operate at higher foreline pressures.
As a stage in turbomolecular pumps, the most widely used design is the Holweck type, due to its significantly higher pumping speed than the Gaede design. While slower, the Gaede design has the advantages of tolerating a higher inlet pressure for the same compression ratio and of being more compact than the Holweck type. While the Gaede and Holweck designs are much more widely used, Siegbahn-type designs continue to be investigated, due to their significantly more compact design compared with Holweck stages.
See also
Sprengel pump
References
Further reading
Pompe à vide modèle Holweck N°2 de mai 1922
Vacuum pumps | Molecular drag pump | [
"Physics",
"Engineering"
] | 1,063 | [
"Vacuum pumps",
"Vacuum systems",
"Vacuum",
"Matter"
] |
32,391,990 | https://en.wikipedia.org/wiki/Dissipation%20model%20for%20extended%20environment | A unified model for Diffusion Localization and Dissipation (DLD), optionally termed Diffusion with Local Dissipation, has been introduced for the study of Quantal Brownian Motion (QBM) in dynamical disorder. It can be regarded as a generalization of the familiar Caldeira-Leggett model.
$\mathcal{H} = \frac{p^2}{2m} + \sum_\alpha \left( \frac{P_\alpha^2}{2m_\alpha} + \frac{1}{2} m_\alpha \omega_\alpha^2 Q_\alpha^2 \right) + \sum_\alpha c_\alpha Q_\alpha\, u(x - x_\alpha)$
where $Q_\alpha$ denotes the dynamical coordinate of the scatterer or bath mode, $u(x - x_\alpha)$ is the interaction potential, and $c_\alpha$ are coupling constants. The spectral characterization of the bath is analogous to that of the Caldeira-Leggett model:
$J(\omega) = \frac{\pi}{2} \sum_\alpha \frac{c_\alpha^2}{m_\alpha \omega_\alpha}\, \delta(\omega - \omega_\alpha)$
i.e. the oscillators that appear in the Hamiltonian are distributed uniformly over space, and in each location have the same spectral distribution $J(\omega)$. Optionally the environment is characterized by the power spectrum of its fluctuations, which is determined by $J(\omega)$ and by the assumed interaction $u$. See examples.
The model can be used to describe the dynamics of a Brownian particle in an Ohmic environment whose fluctuations are uncorrelated in space. This should be contrasted with the Zwanzig-Caldeira-Leggett model, where the induced fluctuating force is assumed to be uniform in space (see figure).
At high temperatures the propagator possesses a Markovian property and one can write down an equivalent Master equation. Unlike the case of the Zwanzig-Caldeira-Leggett model, genuine quantum mechanical effects manifest themselves due to the disordered nature of the environment.
Using the Wigner picture of the dynamics one can distinguish between two different mechanisms for destruction of coherence: scattering and smearing. The analysis of dephasing can be extended to the low temperature regime by using a semiclassical strategy. In this context the dephasing rate SP formula can be derived. Various results can be derived for ballistic, chaotic, diffusive, and both ergodic and non-ergodic motion.
See also
Quantum dissipation
dephasing
The dephasing rate SP formula
References
Quantum mechanics | Dissipation model for extended environment | [
"Physics"
] | 412 | [
"Theoretical physics",
"Quantum mechanics"
] |
32,393,699 | https://en.wikipedia.org/wiki/Mechanically%20stimulated%20gas%20emission | Mechanically stimulated gas emission (MSGE) is a complex phenomenon embracing various physical and chemical processes occurring on the surface and in the bulk of a solid under applied mechanical stress and resulting in emission of gases. MSGE is a part of a more general phenomenon of mechanically stimulated neutral emission. MSGE experiments are often performed in ultra-high vacuum.
Phenomenology
The specific characteristic of MSGE, as compared with MSNE, is that the emitted neutral particles are limited to gas molecules. MSGE is the opposite of mechanically stimulated gas absorption, which usually occurs under fretting corrosion of metals, exposure to gases at high pressures, etc.
There are three main sources of MSGE:
I. Gas molecules adsorbed on the surface of a solid
IIa. Gases dissolved in the material bulk
IIb. Gases occluded or trapped in micro- and nanovoids, discontinuities and on defects in the material bulk
III. Gases generated as a result of mechanical activation of chemical reactions.
Generally, for producing MSGE, the mechanical action on the solid can be of any type, including tension, compression, torsion, shearing, rubbing, fretting, rolling, indentation, etc. In previous studies carried out by various groups, it was found that MSGE is associated mainly with plastic deformation, fracture, wear and other irreversible modifications of a solid. Under elastic deformation MSGE is almost negligible and was observed only just below the elastic limit, due to possible microplastic deformation.
In accordance with the main sources, the emitted gases usually contain hydrogen (source type IIa), argon (for coatings obtained using PVD in Ar plasma; source type IIb), methane (source type III), water (source type I and/or III), and carbon mono- and dioxide (source type I/III).
Knowledge of the mechanisms of MSGE is still limited. On the basis of the experimental findings, it has been speculated that the following processes can be related to MSGE:
Transport of gas atoms by moving dislocations
Gas diffusion in the bulk driven by gradient of mechanical stress
Phase transformation induced by deformation
Removal of oxide and other surface layers, which prevent exit of dissolved atoms on the surface
Extension of free surface
Thermal effects seem to be irrelevant to the gas emission under light load conditions.
Terminology
The emerging character of this interdisciplinary branch of science is reflected in a lack of established terminology. Different authors use different terms and definitions, depending on the main approach (chemical, physical, mechanical, vacuum science, etc.), the specific gas emission mechanism (desorption, emanation, emission, etc.) and the type of mechanical activation (friction, traction, etc.):
Mechanically stimulated outgassing (MSO)
Tribodesorption
Triboemission,
Fractoemission
Atomic and Molecular emission
Outgassing stimulated by friction
Outgassing stimulated by deformation
Desorption (tribodesorption, fractodesorption, etc.) refers to release of gases dissolved in the bulk and adsorbed on the surface. Therefore, desorption is only one of the contributing processes to MSGE. Outgassing is a technical term usually utilized in vacuum science. Thus, the term "gas emission" embraces various processes, reflects the physical nature of this complex phenomenon and is preferable for use in scientific publications.
Experimental observations
Due to the low emission rate, experiments should be performed in ultrahigh vacuum (UHV). In some studies the materials were previously doped with tritium; the MSGE rate was then measured from the radioactivity released from the material under applied mechanical stress.
See also
Mechanochemistry
References
Materials science
Physical chemistry
Gases
Hydrogen | Mechanically stimulated gas emission | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 751 | [
"Matter",
"Applied and interdisciplinary physics",
"Phases of matter",
"Materials science",
"nan",
"Statistical mechanics",
"Physical chemistry",
"Gases"
] |
32,402,084 | https://en.wikipedia.org/wiki/Polysilicon%20depletion%20effect | Polysilicon depletion effect is the phenomenon in which unwanted variation of threshold voltage of the MOSFET devices using polysilicon as gate material is observed, leading to unpredicted behavior of the electronic circuit. Because of this variation High-k Dielectric Metal Gates (HKMG) were introduced to solve the issue.
Polycrystalline silicon, also called polysilicon, is a material consisting of small silicon crystals. It differs from monocrystalline silicon, used for semiconductor electronics and solar cells, and from amorphous silicon, used for thin-film devices and solar cells.
Gate material choice
The gate contact may be of polysilicon or metal. Polysilicon was previously chosen over metal because the interface between polysilicon and the gate oxide (SiO2) is favorable. However, the conductivity of the polysilicon layer is very low, and because of this low conductivity the charge accumulation is slow, leading to a delay in channel formation and thus unwanted delays in circuits. The poly layer is doped with N-type or P-type impurity to make it behave like a good conductor and reduce the delay.
Doped polysilicon gate disadvantages
Vgs = Gate Voltage
Vth = Threshold Voltage
n+ = Highly doped N region
In figure 1(a), for an nMOS transistor, it is observed that the free majority carriers are scattered throughout the structure because of the absence of an external electric field. When a positive field is applied to the gate, the scattered carriers arrange themselves as in figure 1(b): the electrons move closer to the gate terminal, but due to the open-circuit configuration they do not flow. As a result of this separation of charges, a depletion region is formed at the polysilicon-oxide interface, which has a direct effect on channel formation in MOSFETs.
In an NMOS device with an n+ polysilicon gate, the poly depletion effect aids channel formation through the combined effect of the positive field of the donor ions (ND) and the externally applied positive field at the gate terminal. Basically, the accumulation of positively charged donor ions (ND) in the polysilicon enhances the formation of the inversion channel, as seen in figure 1(b), where the inversion channel is formed of acceptor ions (NA) and minority carriers. Polysilicon depletion can vary laterally across a transistor depending on the fabrication process, which can lead to significant transistor variability at certain transistor dimensions.
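The electrical cost of the depletion layer can be estimated with textbook device physics: the depletion width in the poly for a given potential drop adds, in series with the gate oxide, an equivalent oxide thickness of roughly W·(ε_ox/ε_Si) ≈ W/3. A minimal sketch; the doping level and potential drop are illustrative assumptions:

```python
import math

Q = 1.602e-19      # elementary charge, C
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_SI = 11.7 * EPS0
EPS_OX = 3.9 * EPS0

def poly_depletion_width(n_dop_cm3: float, phi_poly_v: float) -> float:
    """Depletion width in the poly gate, W = sqrt(2*eps_Si*phi / (q*N))."""
    n_per_m3 = n_dop_cm3 * 1e6  # convert cm^-3 to m^-3
    return math.sqrt(2.0 * EPS_SI * phi_poly_v / (Q * n_per_m3))

# Illustrative: 1e20 cm^-3 poly doping with 0.2 V dropped in the poly
w = poly_depletion_width(1e20, 0.2)
print(f"depletion width {w * 1e9:.2f} nm, "
      f"EOT penalty {w * EPS_OX / EPS_SI * 1e9:.2f} nm")  # ~1.6 nm, ~0.5 nm
```

A few tenths of a nanometre of extra equivalent oxide is significant once the physical gate oxide itself is only 1-2 nm thick.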
Metal gates re-introduced
For the above reason, as devices scale down (32-28 nm nodes), poly gates are being replaced by metal gates. This technology is known as high-k dielectric metal gate (HKMG) integration. In 2011, Intel released a press kit regarding its fabrication procedures at different nodes, which showed the use of metal gate technology.
Doped polysilicon was preferred earlier as the gate material in MOS devices. Polysilicon was used because its work function matched that of the Si substrate (which results in a low threshold voltage for the MOSFET). Metal gates were re-introduced at the time when SiO2 dielectrics began to be replaced by high-k dielectrics, such as hafnium oxide, as the gate oxide in mainstream CMOS technology. Also, at the interface with the gate dielectric, polysilicon forms an SiOx layer, and there remains a high probability of Fermi level pinning. The effect with doped poly is thus an undesired reduction of threshold voltage that is not taken into account during circuit simulation. In order to avoid this kind of variation in the Vth of the MOSFET, metal gates are at present preferred over polysilicon.
See also
Reduction of Polysilicon Gate Depletion Effect in NMOS
Drain-induced barrier lowering
Gate material
Fabrication of microprocessor by intel
References
Transistors
Semiconductor devices
Semiconductor technology
MOSFETs | Polysilicon depletion effect | [
"Materials_science"
] | 839 | [
"Semiconductor technology",
"Microtechnology"
] |
32,402,755 | https://en.wikipedia.org/wiki/Hyperparameter%20%28machine%20learning%29 | In machine learning, a hyperparameter is a parameter that can be set in order to define any configurable part of a model's learning process. Hyperparameters can be classified as either model hyperparameters (such as the topology and size of a neural network) or algorithm hyperparameters (such as the learning rate and the batch size of an optimizer). These are named hyperparameters in contrast to parameters, which are characteristics that the model learns from the data.
Hyperparameters are not required by every model or algorithm. Some simple algorithms such as ordinary least squares regression require none. However, the LASSO algorithm, for example, adds a regularization hyperparameter to ordinary least squares which must be set before training. Even models and algorithms without a strict requirement to define hyperparameters may not produce meaningful results if these are not carefully chosen. However, optimal values for hyperparameters are not always easy to predict. Some hyperparameters may have no meaningful effect, or one important variable may be conditional upon the value of another. Often a separate process of hyperparameter tuning is needed to find a suitable combination for the data and task.
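For example, the LASSO regularization strength mentioned above is typically chosen by searching candidate values with cross-validation. A minimal sketch using scikit-learn; the dataset and the candidate grid are placeholders:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Placeholder data: 100 samples, 10 features, noisy linear target
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=100)

# alpha is the regularization hyperparameter: it is set before training,
# unlike the model coefficients, which are learned from the data.
search = GridSearchCV(Lasso(max_iter=10_000),
                      param_grid={"alpha": [0.001, 0.01, 0.1, 1.0]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```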
As well as improving model performance, hyperparameters can be used by researchers to introduce robustness and reproducibility into their work, especially when it uses models that incorporate random number generation.
Considerations
The time required to train and test a model can depend upon the choice of its hyperparameters. A hyperparameter is usually of continuous or integer type, leading to mixed-type optimization problems. The existence of some hyperparameters is conditional upon the value of others, e.g. the size of each hidden layer in a neural network can be conditional upon the number of layers.
Difficult-to-learn parameters
The objective function is typically non-differentiable with respect to hyperparameters. As a result, in most instances, hyperparameters cannot be learned using gradient-based optimization methods (such as gradient descent), which are commonly employed to learn model parameters. These hyperparameters are those parameters describing a model representation that cannot be learned by common optimization methods, but nonetheless affect the loss function. An example would be the tolerance hyperparameter for errors in support vector machines.
Untrainable parameters
Sometimes, hyperparameters cannot be learned from the training data because they aggressively increase the capacity of a model and can push the loss function to an undesired minimum (overfitting to the data), as opposed to correctly mapping the richness of the structure in the data. For example, if we treat the degree of a polynomial equation fitting a regression model as a trainable parameter, the degree would increase until the model perfectly fit the data, yielding low training error, but poor generalization performance.
Tunability
Most performance variation can be attributed to just a few hyperparameters. The tunability of an algorithm, hyperparameter, or interacting hyperparameters is a measure of how much performance can be gained by tuning it. For an LSTM, while the learning rate followed by the network size are its most crucial hyperparameters, batching and momentum have no significant effect on its performance.
Although some research has advocated the use of mini-batch sizes in the thousands, other work has found the best performance with mini-batch sizes between 2 and 32.
Robustness
An inherent stochasticity in learning directly implies that the empirical hyperparameter performance is not necessarily its true performance. Methods that are not robust to simple changes in hyperparameters, random seeds, or even different implementations of the same algorithm cannot be integrated into mission critical control systems without significant simplification and robustification.
Reinforcement learning algorithms, in particular, require measuring their performance over a large number of random seeds, and also measuring their sensitivity to choices of hyperparameters. Their evaluation with a small number of random seeds does not capture performance adequately due to high variance. Some reinforcement learning methods, e.g. DDPG (Deep Deterministic Policy Gradient), are more sensitive to hyperparameter choices than others.
Optimization
Hyperparameter optimization finds a tuple of hyperparameters that yields an optimal model which minimizes a predefined loss function on given test data. The objective function takes a tuple of hyperparameters and returns the associated loss. Typically these methods are not gradient based, and instead apply concepts from derivative-free optimization or black box optimization.
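Because the objective is a black box, a simple derivative-free strategy such as random search is often used. A minimal, library-free sketch; `train_and_evaluate` is a hypothetical stand-in for an actual training run returning a validation loss:

```python
import math
import random

def train_and_evaluate(learning_rate: float, num_layers: int) -> float:
    """Hypothetical placeholder for training a model and returning its
    validation loss; replace with a real training routine."""
    return (math.log10(learning_rate) + 2.5) ** 2 + 0.1 * num_layers

random.seed(0)
best_loss, best_params = float("inf"), None
for _ in range(50):
    # Sample the learning rate on a log scale and the layer count uniformly;
    # sampling a tuple like this handles mixed continuous/integer types.
    params = {"learning_rate": 10 ** random.uniform(-5, -1),
              "num_layers": random.randint(1, 8)}
    loss = train_and_evaluate(**params)
    if loss < best_loss:
        best_loss, best_params = loss, params

print(best_loss, best_params)
```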
Reproducibility
Apart from tuning hyperparameters, machine learning involves storing and organizing the parameters and results, and making sure they are reproducible. In the absence of a robust infrastructure for this purpose, research code often evolves quickly and compromises essential aspects like bookkeeping and reproducibility. Online collaboration platforms for machine learning go further by allowing scientists to automatically share, organize and discuss experiments, data, and algorithms. Reproducibility can be particularly difficult for deep learning models. For example, research has shown that deep learning models depend very heavily even on the random seed selection of the random number generator.
See also
Hyper-heuristic
Replication crisis
References
Machine learning
Model selection | Hyperparameter (machine learning) | [
"Engineering"
] | 1,060 | [
"Artificial intelligence engineering",
"Machine learning"
] |
56,646,524 | https://en.wikipedia.org/wiki/Lead%20magnesium%20niobate | Lead magnesium niobate is a relaxor ferroelectric. It has been used to make piezoelectric microcantilever sensors.
References
Niobates
Magnesium compounds
Lead compounds
Piezoelectric materials | Lead magnesium niobate | [
"Physics",
"Materials_science"
] | 45 | [
"Physical phenomena",
"Materials science stubs",
"Condensed matter stubs",
"Materials",
"Electrical phenomena",
"Condensed matter physics",
"Piezoelectric materials",
"Electromagnetism stubs",
"Matter"
] |
56,654,711 | https://en.wikipedia.org/wiki/Entropy%20network | Entropy networks have been investigated in many research areas, on the assumption that entropy can be measured in a network. The embodiment of the network is often physical or informational. An entropy network is composed of entropy containers which are often called nodes, elements, features, or regions and entropy transfer occurs between containers. The transfer of entropy in networks was characterized by Schreiber in his transfer entropy.
Physical basis
A discrete physical basis for entropy networks can be found in the act of observation; discussions of discrete observations appear briefly in the work of Prokopenko, Lizier & Price. More complete discussions of observations were offered by Leo Szilárd and Léon Brillouin.
Structures and motifs
Network motifs have been proposed to be scale independent. Networks have been classified by total entropy. The entropy content of graphs has been considered throughout fields of math and computer science.
The design of entropy networks and their in-depth investigation have been publicized by Wissner-Gross and Freer, who proposed a time-entropy relation (where entropy is maximized over a lifespan) through which predictions of the emergence of complexity can be made.
Domains of study
The role of entropy networks in the formation of structures is critical in engineering, and their physical implications determine chirality, organize biological molecules, and quantify the topologies of condensed matter (mass) networks.
References
Entropy | Entropy network | [
"Physics",
"Chemistry",
"Mathematics"
] | 269 | [
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Entropy",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Dynamical systems"
] |
56,656,634 | https://en.wikipedia.org/wiki/IPIECA | Ipieca is a global not-for-profit oil and gas industry association for environmental and social issues, headquartered in London. The association was established in 1974 as the International Petroleum Industry Environmental Conservation Association and changed its name in 2002.
Company members contribute to Ipieca's budget according to an individually agreed percentage based on the volume of crude oil produced and petroleum products sold by each company and the number of geographical areas where the company has interests.
Ipieca is the industry channel into the UN's Intergovernmental Panel on Climate Change (IPCC) and the UNFCCC, both concerned with climate change.
Its geographical coverage encompasses North America, Asia and the Pacific, Latin America and the Caribbean, Africa, Western Europe, Eastern Europe and the global energy market.
Beginning in the 1980s, the organization was involved in efforts to dispute climate science and weaken international climate policy.
See also
International Association of Oil & Gas Producers, set up at the same time and in the same place
References
External links
https://www.ipieca.org
Ipieca reporting table BP Sustainability Report 2016
Petroleum industry | IPIECA | [
"Chemistry"
] | 225 | [
"Petroleum industry",
"Petroleum",
"Chemical process engineering"
] |
56,657,109 | https://en.wikipedia.org/wiki/Seismic%20risk%20in%20Malta | Seismic risk in Malta is considered to be low, with little historic damage noted and no known victims. The archipelago is, however, in a potentially significant seismic zone, and the risk to the population is probably underestimated.
Tectonics
The Maltese Archipelago rests on an underwater plateau, a relatively stable part of the African Plate. The islands are situated around 200 km to the south of the subduction fault between the African Plate and the Eurasian Plate. The pelagic plate forms a shallow platform separating the Ionian Basin from the Western Mediterranean Basin, situated roughly under the Strait of Sicily. The plate is crossed by a rift zone formed of three grabens: the Pantelleria graben, that of Malta, and that of Linosa. These grabens are linked by a system of north–south oriented faults (sometimes west–east) with dextral strike-slip movement that is responsible for most of the earthquakes that can affect the archipelago.
The islands themselves are made up of limestone rocks from the Oligocene and Miocene geological epochs, belonging to the Cenozoic era.
List of major earthquakes
Prior to the 20th century and the first seismic recordings in the region, information on Maltese earthquakes was researched in archives. These range mostly from the arrival of the knights of the Order of St John of Jerusalem in 1530 to the British colonisation of Malta. After this period, the localisation of the epicentres of earthquakes in the Sicilian Channel has been relatively limited, mostly due to an inadequate network of seismic stations, particularly before 1980.
Summary table of earthquakes since 1500
Details of major earthquakes
Earthquake of 10 December 1542
The Sicilian Chronicle of the 16th century reports that the earthquake of 10 December 1542 was strongly felt in Malta where some houses were knocked down.
Earthquake of 11 January 1693
The earthquake of 11 January 1693 in Val di Noto is the most significant earthquake felt in Malta since the 16th century. In Sicily, it caused the death of around 60,000 individuals. With a magnitude of 7.4, it is considered to be the most powerful earthquake in Italian history. It was preceded on 9 January by a precursory earthquake of a magnitude of around 5.9, which was strongly felt but did not cause damage.
In Malta, the earthquake provoked panic among the population, with many Maltese refusing to go back to their homes in the nights that followed, seeking refuge in tents or underground shelters. No injuries or fatalities were reported. The Order delegated its head engineer, Mederico Blondel, to assess the damage. At Valletta, no building escaped the earthquake unharmed, with damage ranging from simple cracks to complete demolition. The other towns of the Grand Harbour were considerably less affected. However, the old city of Mdina suffered more greatly, as many of its buildings were older and poorly maintained. Notably, St. Paul's Cathedral was partly destroyed, but the cathedral was already badly damaged before the earthquake, and so a reconstruction had already been planned. The Banca Giuratale in Mdina was equally damaged, and would be rebuilt in 1726 by Charles François de Mondion. At Rabat, the bell tower and the apse of the church of St. Paul came down. The Tal-Virtù Church suffered considerable damage; situated at a high elevation, it was particularly exposed to earthquakes.
At Gozo, the walls of the Cittadella were damaged, but Blondel noted that the damage was more likely caused by years of neglect. The Cathedral of the Assumption, Gozo lost its bell tower.
The considerable material damage in Malta has been attributed to a maximum earthquake intensity of 7–8.
Earthquake of 20 February 1743
Local historian Gian Pietro Francesco Agius de Soldanis recounts 20 February 1743 earthquake in his magnum opus Il Gozo Antico-Moderno e Sacro-Profano, a two-volume manuscript dealing with the history of Gozo completed in 1746:
A document in the archives of the cathedral of Mdina described how the coppolino (the little dome) of the cathedral fell into the church, the back end of the choir was destroyed and the bell tower heavily damaged. The cathedral was so cracked in all areas that:
An account by six architects described three large cracks of around 3 cm in width on each side of the cupola, revealing most of the stones in the cupola and the terrible damage to the walls of the choir.
Earthquake of 12 October 1856
The particularly violent earthquake of 12 October 1856 had its epicentre near Crete, and some seismic catalogues attribute to it a magnitude of 8.2. It claimed numerous victims in Crete. Despite being at a distance of more than 1000 km from the epicentre, the earthquake was violently felt in Malta, as the newspapers of the era testify. People were woken in the middle of the night by a deafening roar and a movement of the earth that lasted between 22 and 60 seconds. Nearly all the houses in Valletta were damaged, as were houses in Gozo, notably on the upper floors. Numerous churches were affected, in particular the cathedral of Mdina. The bell tower of the church of the Carmelites at Mdina was so cracked that it needed to be rebuilt. The chapel on the islet of Filfla was destroyed. The 17th-century Għajn Ħadid Tower collapsed in this earthquake and has remained in ruins ever since.
Earthquake of 27 August 1886
This was probably the same earthquake that struck the south-west Peloponnese on that day. Once again, the local papers reported a general panic as the population, awoken by the earthquake of 27 August 1886, rushed out of their dwellings. Some buildings were affected, including the ceiling of the Palace of Justice in Valletta. For once, the cathedral in Mdina was not greatly affected.
Earthquake of 30 September 1911
The 30 September 1911 earthquake was felt more distinctly in Gozo than in Malta. Newspapers reported the appearance of deep cracks in the domes and bell towers of many churches, in particular at Nadur, Għarb, and Ir-Rabat, Gozo, where many public buildings were affected. Numerous rural buildings were completely destroyed. Fort Chambray was badly hit. Many landslides were reported on the isle of Gozo. In Malta, damage was limited to a few cracks.
Earthquake of 18 September 1923
The 18 September 1923 earthquake was the first to follow the installation of a Milne seismograph at Valletta. The instrument seems not to have worked and gave no useful information; the seismological record is therefore not clear. The shock seems to have been felt most strongly around the Grand Harbour. Some damage was reported, such as the fall of stone crosses from churches and cracks in domes. The greatest damage seems to have been suffered by the church of St Paul in Rabat, Malta. The Tal-Virtù Church was badly damaged and remained unused for more than 70 years.
Details of major tsunamis
Tsunami of 16 January 1693
The tsunami of 16 January 1693 occurred contemporaneously with the strong earthquake. Agius de Soldanis recounts how the sea at Xlendi turned a thousand times before returning with force.
Tsunami of 28 December 1908
The tsunami of 28 December 1908 corresponds to the earthquake in the Strait of Messina. At least three large waves caused significant damage and took a number of victims in the east of Sicily. The waves of this tsunami hit the shores of Malta an hour later, causing flooding at Msida, where part of the old town was damaged. At Marsaxlokk a foaming wave crossed the main road hitting the church of St. Peter. At Sliema, the sea came and went with force. The sea level was registered as abnormally high in the Grand Harbour. Many fishing boats were damaged or destroyed, but no deaths were reported.
Evaluation of seismic risk
Seismic risk has been evaluated as an event of intensity 8 happening every 1000 years and an event of intensity 6 happening every 92 years. Historically, since 1530, one earthquake of intensity 7–8 has been reported, as well as at least four events of intensity 7.
The risk of a tsunami wave between 4 m and 7 m high is estimated as a possibility every 600 to 1500 years. The occurrence of an event comparable to that of 1693 could have grave social and economic consequences, as areas near the sea are largely built up, in particular for tourist activity in the region of Sliema.
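Return periods such as these are commonly converted into exceedance probabilities by assuming that events arrive as a Poisson process, so that the probability of at least one event during an exposure of t years is 1 − exp(−t/T). A minimal sketch under that assumption, using the return periods quoted above:

```python
from math import exp

def exceedance_probability(return_period_years, exposure_years):
    """P(at least one event within the exposure window), assuming events
    follow a Poisson process with mean rate 1/return_period."""
    return 1.0 - exp(-exposure_years / return_period_years)

# Return periods quoted in the text above, over a 50-year exposure.
for label, T in [("intensity 8 earthquake", 1000),
                 ("intensity 6 earthquake", 92),
                 ("4-7 m tsunami (lower bound)", 600),
                 ("4-7 m tsunami (upper bound)", 1500)]:
    print(f"{label}: {exceedance_probability(T, 50):.1%} chance in 50 years")
```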
Integration of risk in building standards
Despite a proposed bill, no seismic building standards have been imposed, even as the rate of construction accelerates and numerous buildings are completed. The risk is increased further by building in unreinforced masonry incorporating heavy floors and concrete roofs, often above large cellars used as garages. This type of construction is particularly sensitive to earthquakes.
References
Geology of Malta
Earthquake and seismic risk mitigation
Malta | Seismic risk in Malta | [
"Engineering"
] | 1,789 | [
"Structural engineering",
"Earthquake and seismic risk mitigation"
] |
38,008,960 | https://en.wikipedia.org/wiki/Compliant%20bonding | Compliant bonding is used to connect gold wires to electrical components such as integrated circuit "chips". It was invented by Alexander Coucoulas in the 1960s. The bond is formed well below the melting point of the mating gold surfaces and is therefore referred to as a solid-state type bond. The compliant bond is formed by transmitting heat and pressure to the bond region through a relatively thick indentable or compliant medium, generally an aluminum tape (Figure 1).
Comparison with other solid state bond methods
Solid-state or pressure bonds form permanent bonds between a gold wire and a gold metal surface by bringing their mating surfaces in intimate contact at about 300 °C which is well below their respective melting points of 1064 °C, hence the term solid-state bonds.
Two commonly used methods of forming this type of bond are thermocompression bonding and thermosonic bonding. Both of these processes form the bonds with a hard faced bonding tool that makes direct contact to deform the gold wires against the gold mating surfaces (Figure 2).
Because gold is the only metal that does not form an oxide coating which can interfere with making a reliable metal-to-metal contact, gold wires are widely used to make these important wire connections in the field of microelectronic packaging. During the compliant bonding cycle, the bond pressure is uniquely controlled by the inherent flow properties of the aluminum compliant tape (Figure 3). Therefore, if higher bond pressures are needed to increase the final deformation (flatness) of a compliant bonded gold wire, a higher-yielding alloy of aluminum can be employed. The use of a compliant medium also overcomes thickness variations when bonding multiple conductor wires simultaneously to a gold-metallized substrate (Figure 4). It also prevents the leads from being excessively deformed, since the compliant member deforms around the leads during the bonding cycle, eliminating mechanical failure of a bonded wire due to excessive deformation from the hard-faced tool (Figure 3) employed in thermocompression and thermosonic bonding.
History
An important application for compliant bonding arose in the early 1960s, when techniques were developed for fabricating a beam-leaded silicon integrated circuit “chip” consisting of pre-attached electroformed 0.005-inch thick gold leads or “beams” extending from the silicon chip (Figure 5). The beam-leaded “chip” thus eliminated the need to thermosonically bond wires directly onto metallized pads of the fragile silicon chip (as shown in Figure 6). The extended ends of the electroformed beams could then be permanently solid-state bonded to a matching metallized sunburst circuit pre-deposited on a ceramic substrate, to be appropriately packaged in a computer in the making. Figure 7 shows a preshaped hard-faced tool thermocompression bonding all of the beam leads of a chip in one bonding cycle. In order to avoid excessively deforming the fine beam leads with the hard bonding tool and putting them at risk of mechanical failure, the applied bonding forces have to be carefully monitored.
Compliant bonding eliminates the problems associated with a hard-faced bonding tool, and therefore was ideally suited to simultaneously bond all of the extended electroplated gold beam leads to a matching gold-metallized sunburst-patterned ceramic substrate packaged in a computer (Figure 8). For example, compliant bonding eliminated the problems of using a hard-faced bonding tool, such as attempting to uniformly deform the nominally 0.005-inch thick beam leads having slight variations in their thickness, and excessive lead deformation that could cause mechanical damage and the ultimate, costly failure of these fine beam-leaded silicon chips, which are the "brains" of our computers. The compliant bonding tape medium offered the additional advantage of carrying the beam-leaded silicon chip to the bonding site, thus facilitating production.
Figures 9 and 10 show that the compliant tape offers the advantage of carrying the beam-leaded chip to the bonding site, as discussed above. Figure 11 shows a beam-leaded silicon integrated circuit compliantly bonded to a gold-metallized sunburst pattern deposited on an alumina ceramic substrate, which will be encapsulated and packaged in a computer-type device. Figure 12 shows the spent compliant member used to bond the chip in Figure 11, which clearly shows a mirror image of the uniformly bonded beam leads.
Silicon integrated circuit
The two forms of integrated circuits discussed above were the beam-leaded integrated circuit composed of attached electroformed gold leads or beams (Figure 5) and the silicon integrated circuit chip (Figure 6). With respect to the beam-leaded silicon chip, both compliant and thermocompression bonding can be employed, since each has its advantages. At this time, the most widely used form is the silicon integrated circuit chip without the beam leads, which therefore requires electrical connections directly to the metallized silicon chip (Figure 6). If wire connection is the method of choice to form these connections, thermosonic bonding of gold wires directly to the silicon chip has been the process most widely used, because of its proven reliability as a result of the low bonding parameters of force, temperature, and time needed to form the bond.
References
Packaging (microfabrication)
Integrated circuits
Semiconductor technology
American inventions | Compliant bonding | [
"Materials_science",
"Technology",
"Engineering"
] | 1,064 | [
"Computer engineering",
"Packaging (microfabrication)",
"Microtechnology",
"Semiconductor technology",
"Integrated circuits"
] |
38,013,668 | https://en.wikipedia.org/wiki/Gelsemine | Gelsemine (C20H22N2O2) is an indole alkaloid isolated from flowering plants of the genus Gelsemium, a plant native to the subtropical and tropical Americas, and southeast Asia, and is a highly toxic compound that acts as a paralytic, exposure to which can result in death. It has generally potent activity as an agonist of the mammalian glycine receptor, the activation of which leads to an inhibitory postsynaptic potential in neurons following chloride ion influx, and systemically, to muscle relaxation of varying intensity and deleterious effect. Despite its danger and toxicity, recent pharmacological research has suggested that the biological activities of this compound may offer opportunities for developing treatments related to xenobiotic or diet-induced oxidative stress, and of anxiety and other conditions, with ongoing research including attempts to identify safer derivatives and analogs to make use of gelsemine's beneficial effects.
Natural sources
Gelsemine is found in, and can be isolated from, the subtropical to tropical flowering plant genus Gelsemium, family Loganiaceae, which as of 2014 included five species, where G. sempervirens Ait., the type species, is prevalent in the Americas and G. elegans Benth. in China and East Asia. The species in the Americas, G. sempervirens, has a number of common names that include yellow or Carolina jasmine (or jessamine), gelsemium, evening trumpetflower, and woodbine. The plant genus is native to the subtropical and tropical Americas, e.g., in Mexico, Honduras, Guatemala, and Belize, as well as to China and southeast Asia. The species is prized for its "heavily fragrant yellow flowers," and has been cultivated since mid-seventeenth century (in Europe). It is found in southeastern and south-central states of the U.S., and as a garden plant in warmer areas where it can be trained to grow over arbors or to cover walls (see image).
All plant parts of the herbage and exudates of this genus, including its sap and nectar, appear to contain gelsemine and related compounds, as well as a wide variety of further alkaloids and other natural products. The plant's herbage, in particular, is known to contain several toxic alkaloids, and is generally known to be poisonous to livestock and humans.
Chemistry
Gelsemine was isolated from G. sempervirens Ait., in 1870. Its chemical formula was determined to be C20H22N2O2, thus with a molecular weight of 322.44 g/mol. Its structure was finally determined, by X-ray crystallographic analysis and by nuclear magnetic resonance (NMR) spectroscopy, in 1959 by Conroy and Chakrabarti.
It is a monoterpenoid type of indole alkaloid, and a close relative of the natural product gelseminine, which is also present from the same natural sources. The gelsemine class of alkaloids are some of a wide variety of the alkaloid and other natural products that have been isolated from this genus of plants.
Gelsemine's biosynthesis, as of 1998, is thought to proceed from 3α(S)-strictosidine (isovincoside), the common precursor for essentially all monoterpenoid indole alkaloids—itself deriving directly from mevalonic acid-derived secologanin and tryptamine. From strictosidine, the biosynthesis proceeds through five intermediates—including koumicine (akkuammidine), koumidine, vobasindiol, anhydrovobasindiol, and gelsenidine (humantienine-type). The related alkaloids koumine and gelsemicine also derive from this pathway (koumine from anhydrovobasindiol via oxidation and rearrangement, and gelsemicine from gelsemine itself, via aromatic oxidation and O-methylation).
For the chemical synthesis (natural product synthesis, studies and total synthesis), see the separate section below.
Summary of biological activities
Full sections in following are devoted to specific activities of gelsemine. Noted are the facts that it is a highly toxic compound, where exposure can result in paralysis and death. It is reported to be a glycine receptor agonist with significantly higher binding affinity for some of these receptors than its native agonist, glycine. In addition, it has been shown to have effects on pathways/systems in model animals (rat, rabbit), related to xenobiotic- or diet-induced oxidative stress, and in the treatment of anxiety and other conditions.
History
Gelsemium extracts, and so gelsemine, indirectly, have been the subject of serious scientific study for over a hundred years. On the medical side, gelsemium tinctures were used in the treatment of neuralgia by physicians in England in the late 19th century; Arthur Conan Doyle, the noted author who first trained as a physician, after observing the success of such treatments, ingested increasing doses of a tincture daily, to “ascertain how far one might go in taking the drug, and what the primary symptoms of an overdose might be,” submitting his first career publication on this to the British Medical Journal.[primary source] On the chemistry side, the December 1910 meeting of the Division of Pharmaceutical Chemistry of the American Chemical Society reports, among the papers read, the "Assay of Gelsemium" by L.E. Sayre.[primary source]
Mechanisms of action
Gelsemine is an agonist for the glycine receptor (GlyR) with a much greater affinity for studied examples of this receptor than glycine. These receptors are ligand-gated ion channels which affect a variety of physiological processes. When glycine receptors are activated by agonist binding to at least two of the five agonist binding sites, chloride ions enter the neuron. This causes an inhibitory postsynaptic potential, which, systemically, leads to muscle relaxation.
Toxicity and toxicology
In mice, it has been shown to have an LD50 of 56 mg/kg (intraperitoneal), and a lowest lethal dose (LDLo) of 0.1-0.12 mg/kg (intravenous). In rabbits, the LDLo was 0.05-0.06 mg/kg (intravenous). In frogs, the LDLo was 20–30 mg/kg (subcutaneous). In dogs, the LDLo was 0.5-1.0 mg/kg (intravenous).
The sap of the plant may cause skin irritation in sensitive individuals, and there are reports that inhalation from the flowers alone may, in some cases, lead to human poisoning (see below, where insect death at such flowers is likewise reported).
The plant's herbage is known to contain several toxic alkaloids, and while there is report of its feeding to pigs, it is generally considered to be an abortifacient and lethal poison when livestock or other animals feed on its leaves. It has been reportedly used as a fish poison as well, e.g., on the island of Borneo.
Human poisonings are known, including pediatric and adult cases, and in the case of adults, both accidental and intentional poisonings. At lower doses in humans, the inhibitory postsynaptic potential induced by gelsemine action at the glycine receptor can result in nausea, diarrhea and muscle spasms caused by loss of involuntary muscle control; at higher doses, vision impairment or blindness, paralysis, and death can occur. Children, mistaking the flower of G. sempervirens for honeysuckle, have been poisoned by sucking the nectar from the flower; its ingestion has been associated with honey bee (but not bumble bee) fatalities as well (e.g., in the southeast U.S.). It has been reportedly used, via ingestion or smoking, as a poison in cases of suicide, in China, Vietnam, and Borneo.
Treatment
Gelsemine is a highly toxic and therefore possibly fatal substance for which there is no antidote, but the symptoms can be managed in low-dose intoxications. In the case of an oral exposure, a gastric lavage is performed, which must be done within approximately one hour of ingestion. Activated charcoal is then administered to bind the free toxin in the gastrointestinal tract and prevent absorption. Benzodiazepine or phenobarbital is also generally administered to help control seizures, and atropine can be used to treat bradycardia. Electrolyte and nutrient levels are monitored and controlled.
In the case of a skin exposure, the area is washed with soap and water for 15 minutes to avoid skin damage.
While there is no current treatment to reverse the effects of gelsemine poisoning, preliminary research done in rats has suggested that strychnine has potential therapeutic applications due to its antagonistic effects at the glycine receptor, resulting in a counteraction of some downstream effects such as the increase in allopregnanolone production associated with gelsemine poisoning.
Chemical synthesis
The chemical synthesis of gelsemine has been an active target of interest since the early 1990s, given its place among the alkaloids, and its complex structure (seven contiguous stereocenters and six rings). Although the full mechanism of its biosynthesis is still being investigated, many research groups have successfully synthesized it using chemical means. The first racemic total synthesis of gelsemine was in 1994, by W.N. Speckamp's group, with a remarkable first yield of 0.83% (given the subsequent range, prior to 2014, of 0.02-1.2%).
Eight further total syntheses have been reported in the literature, including from the groups of A.P. Johnson in 1994, T. Fukuyama in 1996 and again in 2000, D.J. Hart in 1997, L.E. Overman in 1999, S.J. Danishefsky in 2002, and Y. Qin in 2012, with the latter Fukuyama group synthesis (31 steps, 0.86%) and the Qin group synthesis (25 steps, 1%) being asymmetric. A further asymmetric synthesis using an organocatalytic Diels–Alder approach from the F.G. Qiu and H. Zhai groups in China, reporting a remarkable 12 steps and a 5% yield, was reported in 2015. Additional synthetic approaches were discussed by notable scientists such as Fleming, Stork, Penkett, Pearson, Aubé, Vanderwal, and Simpkins.
Potential medical applications
Modern medical utility
Pharmacological research has suggested gelsemine activities to have potential related to the treatment of anxiety, and in treatments of conditions involving oxidative stress. In addition, gelsemine has been noted to have anti-inflammatory and anti-cancer activities. Recent research on gelsemine has included investigations aimed at developing safer gelsemine analogs and derivatives that might allow safe application of the compounds beneficial effects.
The identified anxiolytic effects of preparations derived from Gelsemium sempervirens are believed to be due in largest part to the presence of gelsemine in such preparations. Based on a rat study, use of gelsemine has been reported as being potentially effective, where the comparison was to treatment with diazepam.[primary source]
Gelsemine has been suggested to have potential in offering protective effects against oxidative stress. In a small rat study, the off-target effects of cisplatin—nephrotoxicity arising from its induction of pathways that generate reactive oxygen species, a factor impacting its use in cancer treatment—were examined, and gelsemine was found to significantly attenuate cisplatin-induced damage to DNA, and further general damage due to oxidative mechanisms. Inhibition of xanthine oxidase and lipid peroxidation activities were noted, along with "increased production and/or activity of anti-oxidants, both enzymatic... and non-enzymatic...".[primary source]
In a small rabbit study, the impact of gelsemine administration on parameters relating to diet-induced hyperlipidemia was examined, where gelsemine was observed to improve lipid profile parameters associated with hyperlipidemia to a significant extent, as well as to "decreas[e] hyperlipidemia-induced oxidative stress in a dose-dependent manner," as determined by altered activities of a number of relevant metabolite and enzyme activity levels. The results, taken together, led the study authors to conclude that supplements of gelsemine to animals exposed to high fat diets may be of use in reversing the effects and in protecting tissues from oxidative stress resulting from such diets.[primary source]
Gelsemine has been observed to have anti-inflammatory activity.[primary source]
Gelsemine has been observed to have anti-cancer activity.[primary source]
Traditional medical uses
Preparations from the plant from which gelsemine derives, Gelsemium sempervirens, have been used as treatment for a variety of ailments, for instance, through use of Gelsemium tinctures. Applications have included treatment of acne, anxiety, ear pain, migraine, and more generally with diseases associated with an inflammatory response, and in cases of abnormal nervous function (paralysis, “pins and needles” feeling, neuralgia, etc.).[primary source]
Popular culture
Gelsemine is used indirectly, via the use of "yellow jasmine", in the 1927 Agatha Christie novel, The Big Four, where an injection of this natural preparation is used to kill the character Mr. Paynter. It is then used directly, in 2013, as gelsemine, in series 13 of the ITV series, Agatha Christie's Poirot, as the agent to immobilize the character Stephen Paynter (played by Steven Pacey) before his being burnt to death, thus implicating the character Madame Olivier, a research neuroscientist (played by Patricia Hodge); and also, directly, to paralyze and immobilize Olivier and another character after their kidnappings.
In House of Cards season 5 episode 12, Jane Davis offers Claire Underwood gelsemine as a headache reliever, noting that she should only use two drops. Later on, Claire uses the gelsemine to murder Tom Yates, her lover, by putting it into his drink without his knowledge.
In episode 9 of season 3 of iZombie, the victim is poisoned by gelsemine.
See also
Gelsemium sempervirens
Strychnine
Alkaloids
References
Further reading
Indole alkaloids
Glycine receptor antagonists
Oxindoles
Plant toxins | Gelsemine | [
"Chemistry"
] | 3,115 | [
"Indole alkaloids",
"Alkaloids by chemical classification",
"Chemical ecology",
"Plant toxins"
] |
38,014,597 | https://en.wikipedia.org/wiki/Grain%20boundary%20sliding | Grain boundary sliding (GBS) is a material deformation mechanism where grains slide against each other. This occurs in polycrystalline material under external stress at high homologous temperature (above ~0.4) and low strain rate and is intertwined with creep. Homologous temperature describes the operating temperature relative to the melting temperature of the material. There are mainly two types of grain boundary sliding: Rachinger sliding, and Lifshitz sliding. Grain boundary sliding usually occurs as a combination of both types of sliding. Boundary shape often determines the rate and extent of grain boundary sliding.
Grain boundary sliding is a motion that helps prevent intergranular cracks from forming. At high temperatures many deformation processes operate simultaneously, and grain boundary sliding is only one of them; it is therefore not surprising that Nabarro–Herring and Coble creep depend on grain boundary sliding. During high temperature creep, wavy grain boundaries are often observed. Such a boundary can be modelled by a sinusoidal curve with amplitude h and wavelength λ. The steady-state creep rate increases with rising λ/h ratio. At high λ and high homologous temperatures, grain boundary sliding is controlled by lattice diffusion (the Nabarro–Herring mechanism); at lower temperatures, it is controlled by grain boundary diffusion (Coble creep). Additionally, when the λ/h ratio is high, diffusional flow may be impeded, so diffusional voids may form, which leads to fracture in creep.
Many researchers have developed estimates of the contribution of grain boundary sliding to the total strain experienced by various groups of materials, such as metals, ceramics, and geological materials. Grain boundary sliding contributes a significant amount of strain, especially for fine-grained materials and high temperatures. It has been shown that Lifshitz grain boundary sliding contributes about 50-60% of strain in Nabarro–Herring diffusion creep. This mechanism is the primary cause of ceramic failure at high temperatures due to the formation of glassy phases at grain boundaries.
Rachinger sliding
Rachinger sliding is purely elastic; the grains retain most of their original shape. The internal stress will build up as grains slide until the stress balances out with the external applied stress. For example, when a uniaxial tensile stress is applied on a sample, grains move to accommodate the elongation and the number of grains along the direction of applied stress increases.
Lifshitz sliding
Lifshitz sliding only occurs with Nabarro–Herring and Coble creep. The sliding motion is accommodated by the diffusion of vacancies from induced stresses and the grain shape changes during the process. For example, when a uniaxial tensile stress is applied, diffusion will occur within grains and the grain will elongate in the same direction as the applied stress. There will not be an increase in number of grains along the direction of applied stress.
Accommodation mechanisms
When polycrystalline grains slide relative to each other, there must be simultaneous mechanisms that allow for this sliding to occur without the overlapping of grains (which would be physically impossible). Various accommodation mechanisms have been proposed to account for this issue.
Dislocation movement: Dislocations can move through the material by processes such as climb and glide to allow for compatibility
Elastic distortion: When the sliding distance is small, the grains can deform elastically (and sometimes recoverably) to allow for compatibility
Diffusional accommodation: Using diffusional creep mechanisms, the material can diffuse along grain boundaries or through grains to allow for compatibility
Grain boundary sliding accommodated by diffusional flow:
Grain boundary sliding accommodated by diffusional flow takes place by grain-switching while preserving grain shape. This type of mechanism is analogous to Nabarro–Herring and Coble creep, but describes grains under superplastic conditions. The concept was originally proposed by Ashby and Verrall. During grain switching, the process can be described through three steps: a) initial state, b) intermediate stage, and c) final state. During the intermediate stage, there must first be an applied stress exceeding the “threshold” stress, so that the increase in grain boundary area can be provided by the diffusional flow that occurs once the threshold stress is reached. Under the assumption that the applied stress is much greater than the threshold stress, the strain rate is greater than that of conventional diffusional creep. The reason is that for grain-switching diffusion the diffusion distance is about 1/7 of that for diffusional creep, and there are two more diffusion paths for grain switching than for diffusional creep. Thus, grain switching leads to a strain rate roughly an order of magnitude higher than diffusional creep.
Grain boundary sliding accommodated by dislocation flow:
At superplastic temperature, strain rate and stress conditions, dislocations are rarely observed directly, because they are quickly emitted and absorbed at grain boundaries. However, careful studies have been conducted to verify that dislocations are indeed emitted during superplastic deformation. During dislocation flow, the grain shape must remain unchanged. Based on models of superplasticity, the transition from dislocation creep to superplasticity occurs when the subgrain size is less than the grain size. The subgrain size, often denoted d′, is given by the equation below:

d′/b = 10G/τ

where b is the Burgers vector, G is the shear modulus and τ is the shear stress; the subgrain size is thus inversely proportional to the shear stress.
Deformation rate from grain boundary sliding
Generally speaking, the minimum creep rate for diffusion-accommodated flow can be expressed in the Mukherjee–Bird–Dorn form:

$$\dot{\varepsilon}_{\min} = \frac{A D G b}{kT}\left(\frac{b}{d}\right)^{p}\left(\frac{\sigma}{G}\right)^{n}$$

where the terms are defined as follows:
$\dot{\varepsilon}_{\min}$ = minimum creep rate
$A$ = dimensionless constant
$D$ = diffusion coefficient
$b$ = Burgers vector
$k$ = Boltzmann constant
$T$ = absolute temperature
$d$ = mean grain size
$\sigma$ = applied stress
$G$ = shear modulus
$n$, $p$ = exponents that depend on the creep mechanism

In the case where this minimum creep rate is controlled by grain boundary sliding, the exponents become $n = 2$ and $p = 2$, and the diffusion coefficient becomes the lattice diffusion coefficient $D_L$. Thus, the minimum creep rate becomes:

$$\dot{\varepsilon}_{\min} = \frac{A D_L G b}{kT}\left(\frac{b}{d}\right)^{2}\left(\frac{\sigma}{G}\right)^{2}$$
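A small sketch evaluating this grain-boundary-sliding creep rate follows; the material constants used are illustrative placeholders, not data for any specific alloy.

```python
def gbs_creep_rate(stress, grain_size, temperature,
                   A=100.0, D_L=1e-14, G=4e10, b=3e-10):
    """Minimum creep rate in the Mukherjee-Bird-Dorn form with the
    grain-boundary-sliding exponents n = 2 and p = 2 (lattice-diffusion
    controlled). SI units; A, D_L, G and b are placeholder values."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    return (A * D_L * G * b / (k * temperature)
            * (b / grain_size) ** 2
            * (stress / G) ** 2)

# With p = 2, halving the grain size quadruples the predicted rate:
print(gbs_creep_rate(50e6, 10e-6, 900) / gbs_creep_rate(50e6, 20e-6, 900))
```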
Estimating the contribution of GBS to the overall Strain
The total strain under creep conditions can be denoted as εt , where:
εt = εg + εgbs + εdc
εg = Strain associated with intragranular dislocation processes
εgbs = Strain due to Rachinger GBS associated with intragranular sliding
εdc = Strain due to Lifshitz GBS associated with diffusion creep
In practice, experiments are normally performed under conditions where diffusion creep is negligible, so equation 1 reduces to:
εt = εg + εgbs
Therefore the contribution of GBS to the total strain can be denoted as:
ξ = εgbs / εt
First, we need to illustrate the three perpendicular displacement vectors u, v, and w, together with a grain boundary sliding vector s. The w displacement vector can be imagined as coming out of the plane, while the v and u vectors lie in the plane; the displacement vector u also lies along the tensile stress direction. The sliding contribution may be estimated by individual measurements of εgbs through these displacement vectors. We can further define the angle of displacements in the u-v plane as Ψ, and the angle between the u and w directions as Θ. u can then be related to the tangents of these angles through the equation:
u = v tan Ψ + w tan Θ
A common and easier way in practice is to use interferometry to measure fringes along the v displacement axis. The sliding strain is then given by:
εgbs = k″ nr v̄r
where k″ is a constant, nr is the number of measurements, and v̄r is the average of the nr measurements.
Thus we can calculate the percentage of GBS strain.
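A minimal sketch of this bookkeeping follows; the fringe offsets and the calibration constant k″ (written k2 below) are hypothetical, and the code simply applies the two relations above.

```python
from math import tan, radians
from statistics import mean

def u_displacement(v, w, psi_deg, theta_deg):
    """Relate the axial offset u to the in-plane and out-of-plane
    offsets via u = v*tan(Psi) + w*tan(Theta)."""
    return v * tan(radians(psi_deg)) + w * tan(radians(theta_deg))

def sliding_strain(v_offsets, k2):
    """Rachinger GBS strain from offsets measured along the v axis:
    eps_gbs = k'' * n_r * mean(v_r)."""
    return k2 * len(v_offsets) * mean(v_offsets)

# Hypothetical interferometric fringe offsets (metres) and calibration:
v = [0.8e-6, 1.1e-6, 0.9e-6, 1.3e-6]
eps_gbs = sliding_strain(v, k2=1.5e4)
eps_total = 0.30  # total strain measured in the same test
print(f"GBS contribution: {eps_gbs / eps_total:.0%}")
print(u_displacement(1.0e-6, 0.5e-6, psi_deg=30, theta_deg=45))
```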
Experimental evidence
Grain boundary sliding has been observed experimentally using various microscopy techniques. It was first observed in NaCl and MgO bicrystals in 1962 by Adams and Murray. By scratching the surface of their samples with a marker line, they were able to observe an offset of that line at the grain boundary as a result of adjacent grains sliding with respect to each other. Subsequently, this was observed in other systems as well, including Zn-Al alloys using electron microscopy, and octachloropropane using in situ techniques.
Nanomaterials
Nano-crystalline materials, or nanomaterials, have fine grains, which helps suppress lattice creep. This is beneficial for relatively low temperature operation, as the high volume fraction of grain boundaries impedes dislocation motion and diffusion. However, fine grains are undesirable at high temperature, due to the increased probability of grain boundary sliding.
Prevention
Grain shape plays a large role in determining the sliding rate and extent. Thus, by controlling the grain size and shape, the amount of grain boundary sliding can be limited. Generally, materials with coarser grains are preferred, as the material will have less grain boundaries. Ideally, single crystals will completely suppress this mechanism as the sample will not have any grain boundaries.
Another method is to reinforce grain boundaries by adding precipitates. Small precipitates located at grain boundaries can pin the boundaries and prevent grains from sliding against each other. However, not all precipitates are desirable at boundaries. Large precipitates may have the opposite effect on grain boundary pinning, as they allow more gaps or vacancies between grains to accommodate the precipitates, which reduces the pinning effect.
Modeling effects of GBS in high strength steel
The application of high-strength steel is ubiquitous in the engineering world today. To provide a substantial engineering basis for real-world construction, the modeling of high-strength steel is very important.
By inputting parameters such as the elastic modulus, yield strength, Poisson's ratio, and specific heat of high-strength steel at two reference temperatures, the related GBS energy can be derived as a function of temperature, and thus the yield strength as a function of temperature.
Experimental Study: Superplastic Forming Technique via GBS
The superplastic forming technique is one in which materials are deformed far beyond the yield stress to form complex-shaped, lightweight constructions. This phenomenon is made possible by grain boundary sliding, which is enabled by dislocation slip/creep and diffusional creep.
For example, in commercial fine-grained Al-Mg alloys, unusually weak grain boundary sliding is observed during the initial stage of superplastic deformation. In tensile tests, grains were elongated along the tensile direction by 50-70%. The deformation was accompanied by increased precipitation-depletion-zone fractions, particle segregation on the longitudinal grain boundaries, dislocation activity, and subgrain formation. Increased Mg content leads to increased GBS: increasing the Mg content from 4.8 to 6.5-7.6% aided grain size stability at elevated temperature, facilitated GBS, decreased the diffusion creep contribution, and increased the failure strain from 300% to 430%.
Application to tungsten filaments
The operation temperature for tungsten filaments used in incandescent lightbulbs is around 2000K to 3200K which is near the melting point of tungsten (Tm = 3695 K). As lightbulbs are expected to operate for long periods of time at a homologous temperature up to 0.8, understanding and preventing creep mechanism is crucial to extending their life expectancy.
Researchers found that the predominant mechanism for failure in these tungsten filaments was grain boundary sliding accommodated by diffusional creep. This is because tungsten filaments, being as thin as they are, typically consist of only a handful of elongated grains. In fact there is usually less than one grain boundary per turn in a tungsten coil. This elongated grain structure is generally called a bamboo structure, as the grains look similar to the internodes of bamboo stalks. During operation, the tungsten wire is stressed under the load of its own weight and because of the diffusion that can occur at high temperatures, grains begin to rotate and slide. This stress, because of variations in the filament, causes the filament to sag nonuniformly, which ultimately introduces further torque on the filament. It is this sagging that inevitably results in a rupture of the filament, rendering the incandescent lightbulb useless. The typical lifetime for these single coil filaments is approximately 440 hours.
To combat this grain boundary sliding, researchers began to dope the tungsten filament with aluminum, silicon and, most importantly, potassium. This composite material (AKS tungsten) is unique, as it is composed of potassium and tungsten, which are non-alloying. This feature of potassium results in nanosized bubbles of either liquid or gaseous potassium being distributed throughout the filament after proper manufacturing. These bubbles interact with all defects in the filament, pinning dislocations and, most importantly, grain boundaries. Pinning these grain boundaries, even at high temperatures, drastically reduces grain boundary sliding. This reduction in grain boundary sliding earned these filaments the title of "non-sag filaments", as they no longer bow under their own weight. Thus, this initially counter-intuitive approach to strengthening tungsten filaments began to be widely used in almost every incandescent lightbulb, greatly increasing their lifetime.
References
Deformation (mechanics) | Grain boundary sliding | [
"Materials_science",
"Engineering"
] | 2,701 | [
"Deformation (mechanics)",
"Materials science"
] |
33,976,507 | https://en.wikipedia.org/wiki/120-cell%20honeycomb | In the geometry of hyperbolic 4-space, the 120-cell honeycomb is one of five compact regular space-filling tessellations (or honeycombs). With Schläfli symbol {5,3,3,3}, it has three 120-cells around each face. Its dual is the order-5 5-cell honeycomb, {3,3,3,5}.
Related honeycombs
It is related to the order-4 120-cell honeycomb, {5,3,3,4}, and order-5 120-cell honeycomb, {5,3,3,5}.
It is topologically similar to the finite 5-cube, {4,3,3,3}, and 5-simplex, {3,3,3,3}.
It is analogous to the 120-cell, {5,3,3}, and dodecahedron, {5,3}.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry) | 120-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics"
] | 299 | [
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Geometry",
"Geometry stubs",
"Symmetry"
] |
33,976,509 | https://en.wikipedia.org/wiki/Order-5%205-cell%20honeycomb | In the geometry of hyperbolic 4-space, the order-5 5-cell honeycomb is one of five compact regular space-filling tessellations (or honeycombs). With Schläfli symbol {3,3,3,5}, it has five 5-cells around each face. Its dual is the 120-cell honeycomb, {5,3,3,3}.
Related honeycombs
It is related to the order-5 tesseractic honeycomb, {4,3,3,5}, and order-5 120-cell honeycomb, {5,3,3,5}.
It is topologically similar to the finite 5-orthoplex, {3,3,3,4}, and 5-simplex, {3,3,3,3}.
It is analogous to the 600-cell, {3,3,5}, and icosahedron, {3,5}.
See also
List of regular polytopes
References
Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973. . (Tables I and II: Regular polytopes and honeycombs, pp. 294–296)
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, p212-213)
Honeycombs (geometry) | Order-5 5-cell honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 302 | [
"Tessellation",
"Crystallography",
"Honeycombs (geometry)",
"Symmetry"
] |
33,976,769 | https://en.wikipedia.org/wiki/List%20of%20mineral%20tests | Mineral tests are simple physical and chemical methods of testing samples, which can help to identify the mineral type. This approach is used widely in mineralogy, ore geology and general geological mapping.
The following tests are some examples of those that are used on hand specimens, or on field samples, or on thin sections with the aid of a polarizing microscope.
Color
Color of the mineral. Color alone is not diagnostic. For example, quartz can be almost any color, depending on minor impurities and microstructure.
Streak
Color of the mineral's powder. This can be found by rubbing the mineral across a rough, unglazed surface such as a porcelain streak plate. This is more reliable than body color, but not always mineral-specific.
Lustre
This is the way light reflects from the mineral's surface. A mineral can be metallic (shiny) or non-metallic (not shiny).
Transparency
The way light travels through minerals. The mineral can be transparent (clear), translucent (cloudy) or opaque (none).
Specific gravity
Ratio of the weight of the mineral to the weight of an equal volume of water.
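In practice, specific gravity is often obtained by Archimedes' method: weigh the specimen in air and suspended in water, then divide the weight in air by the loss of weight on immersion. A minimal sketch with made-up balance readings:

```python
def specific_gravity(weight_in_air, weight_in_water):
    """Archimedes' method: the loss of weight on immersion equals the
    weight of the displaced water, so SG = W_air / (W_air - W_water)."""
    return weight_in_air / (weight_in_air - weight_in_water)

# Hypothetical spring-balance readings for a hand specimen (grams-force):
print(round(specific_gravity(50.0, 40.5), 2))  # ~5.26, near hematite's value
```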
Mineral habit
The shape of a single crystal and/or aggregate of multiple crystals of the same mineral.
Magnetism
Magnetic or nonmagnetic. Can be tested by using a magnet or compass. While the most common magnetic minerals contain iron, many iron minerals are nonmagnetic (for example, pyrite).
Cleavage
The way a mineral splits (or “cleaves”), particularly along planes in the crystal structure. Cleavage is generally described by
how well a mineral can be split to produce a flat plane, a process controlled by planes of weakness in the crystal structure.
the number of distinct directions of these cleavage planes
the angles between those directions.
UV fluorescence
Many minerals glow when put under a UV light.
Radioactivity
Is the mineral radioactive or non-radioactive? This is measured by a Geiger counter or scintillation counter.
Taste
This is not recommended. Is the mineral salty, bitter or does it have no taste? Taste is sometimes used by professionals to distinguish between specific, non-toxic minerals known to occur in a well-studied area without possible contaminants.
Bite Test
This is not recommended. It involves biting a mineral to see if it is generally soft or hard. This was used in early gold exploration to tell the difference between pyrite (fool's gold, hard) and gold (soft). Several of the minerals for which a bite test could be diagnostic contain heavy metals, and even gold can be toxic with repeated ingestion or in impure form.
Hardness
The Mohs hardness scale is the main scale used to measure mineral hardness. On this scale a fingernail is about 2.5, a copper coin 3.5, glass 5.5 and steel 6.5. The reference minerals of the scale are: talc 1, gypsum 2, calcite 3, fluorite 4, apatite 5, orthoclase feldspar 6, quartz 7, topaz 8, corundum 9 and diamond 10.
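Scratch tests against these common reference objects bracket an unknown mineral's hardness between the hardest object that fails to scratch it and the softest object that does. A small sketch of that bookkeeping, with a hypothetical specimen:

```python
# Common reference objects and their approximate Mohs hardness,
# as listed in the text above.
REFERENCES = [("fingernail", 2.5), ("copper coin", 3.5),
              ("glass", 5.5), ("steel", 6.5)]

def bracket_hardness(scratched_by):
    """Bracket a specimen's Mohs hardness from scratch tests.
    `scratched_by` maps each reference object to True if that object
    scratches the specimen. Returns (lower, upper) hardness bounds."""
    lower, upper = 1.0, 10.0
    for name, hardness in REFERENCES:
        if scratched_by[name]:
            upper = min(upper, hardness)  # object is harder than specimen
        else:
            lower = max(lower, hardness)  # specimen resists: it is harder
    return lower, upper

# A specimen scratched by steel but not by glass lies between 5.5 and 6.5:
print(bracket_hardness({"fingernail": False, "copper coin": False,
                        "glass": False, "steel": True}))
```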
Odor
Not always recommended. Does the mineral have an odor of oil, sulfur or something else, or does it have no odor?
Electric resistance
Every mineral has a different electrical resistance which can be observed by passing an electric current through the mineral and measuring the resistance.
Relief
The apparent roughness or prominence of a mineral's surface as seen in thin section in optical mineralogy. Relief is caused by variations in the refractive index of minerals.
Fracture
Type of fracture and fracture pattern.
Shape
Mineral shape or crystal system (cubic, tetragonal, hexagonal, trigonal, orthorhombic, monoclinic or triclinic)
Birefringence
Color of minerals in cross-polarized light (XPL), particularly notable in thin section. See also optical mineralogy.
Twinning
Crystal twinning present and type.
Extinction angle
The angle, in degrees, at which a mineral goes extinct (turns black) in XPL under the microscope.
Zoning
Mineral zoning present.
Mineral texture
Porphyritic (large xenocrysts surrounded by fine crystals), melange (mix of minerals), poikilitic (one mineral grown around another), polymorph (same composition but different shape), heterogeneous (many types of minerals), homogeneous (one mineral type).
Reactivity
Is the mineral reactive or nonreactive when exposed to other compounds? For example, minerals with calcium carbonate composition typically fizz when exposed to a weak acid.
Associated rock type
With what rock type and/or other minerals is this mineral found?
Degree of metamorphism and alteration
Whether the mineral's shape, properties or form have been altered.
Lattice structure and geochemistry
Signature chemical elements and bonds of the mineral. For example, is the mineral hydrous like mica, or anhydrous like jadeite?
See also
List of minerals
References
Further reading
Economic Geology principles and practice, Walter L Pohl
Crystallography
Mineralogy | List of mineral tests | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 987 | [
"Crystallography",
"Condensed matter physics",
"Materials science"
] |
33,984,856 | https://en.wikipedia.org/wiki/Size%20effect%20on%20structural%20strength | According to the classical theories of elastic or plastic structures made from a material with non-random strength (ft), the nominal strength (σN) of a structure is independent of the structure size (D) when geometrically similar structures are considered. Any deviation from this property is called the size effect. For example, conventional strength of materials predicts that a large beam and a tiny beam will fail at the same stress if they are made of the same material. In the real world, because of size effects, a larger beam will fail at a lower stress than a smaller beam.
The structural size effect concerns structures made of the same material, with the same microstructure. It must be distinguished from the size effect of material inhomogeneities, particularly the Hall-Petch effect, which describes how the material strength increases with decreasing grain size in polycrystalline metals.
The size effect can have two causes:
statistical, due to material strength randomness, likelihood of a critical flaw occurring in a high-stress location, and increasing volume increasing the probability of a serious flaw.
energetic (and non-statistical), due to energy release when a large crack or a large fracture process zone (FPZ) containing damaged material develops before the maximum load is reached.
Statistical Theory of Size Effect in Brittle Structures
The statistical size effect occurs for a broad class of brittle structures that follow the weakest-link model. This model means that macro-fracture initiation from one material element, or more precisely one representative volume element (RVE), causes the whole structure to fail, like the failure of one link in a chain (Fig. 1a). Since the material strength is random, the strength of the weakest material element in the structure (Fig. 1a) is likely to decrease with increasing structure size (as noted already by Mariotte in 1684).
Denoting the failure probability of the structure as $P_f(\sigma_N)$ and that of one RVE subjected to stress $\sigma$ as $P_1(\sigma)$, and noting that the survival probability of a chain is the joint probability of survival of all its links, one readily concludes that

$$1 - P_f(\sigma_N) = \prod_{i=1}^{N}\left[1 - P_1(\sigma(x_i))\right] \qquad (1)$$

where $\sigma(x_i)$ is the stress at the centre of the $i$-th RVE and $N$ is the number of RVEs in the structure.
The key is the left tail of the distribution of $P_1$. The tail was not successfully identified until Weibull in 1939 recognized it to be a power law. Denoting the tail exponent as $m$, one can then show that, if the structure is sufficiently larger than one RVE (i.e., if $N_{eq} \gg 1$), the failure probability of a structure as a function of $\sigma_N$ is

$$P_f(\sigma_N) = 1 - \exp\left[-\Psi\,\frac{V}{l_0^{n_d}}\left(\frac{\sigma_N}{s_0}\right)^{m}\right], \qquad \Psi = \frac{1}{V}\int_V \bar\sigma^{m}(\xi)\,\mathrm{d}V \qquad (2)$$

Eq. 2 is the cumulative Weibull distribution with scale parameter $s_0$ and shape parameter $m$; $\Psi$ = constant factor depending on the structure geometry; $V$ = structure volume; $\xi$ = relative (size-independent) coordinate vectors; $\bar\sigma(\xi)$ = dimensionless stress field (dependent on geometry), scaled so that the maximum stress is 1; $n_d$ = number of spatial dimensions ($n_d$ = 1, 2 or 3); $l_0$ = material characteristic length representing the effective size of the RVE (typically about 3 inhomogeneity sizes).
The RVE is here defined as the smallest material volume whose failure suffices to make the whole structure fail. From experience, the structure is sufficiently larger than one RVE if the equivalent number $N_{eq}$ of RVEs in the structure is larger than about $10^4$; $N_{eq}$ = number of RVEs giving the same $P_f$ if the stress field were homogeneous (always $N_{eq} \le N$, and usually $N_{eq} \ll N$). For most normal-scale applications to metals and fine-grained ceramics, except for micrometer scale devices, the size is large enough for the Weibull theory to apply (but not for coarse-grained materials such as concrete).
From Eq. 2 one can show that the mean strength and the coefficient of variation of strength are obtained as follows:

$$\bar\sigma_N = s_0\,\Gamma\!\left(1 + \frac{1}{m}\right)\left(\frac{l_0^{n_d}}{\Psi V}\right)^{1/m} \qquad (3)$$

$$\omega = \sqrt{\frac{\Gamma(1 + 2/m)}{\Gamma^2(1 + 1/m)} - 1} \qquad (4)$$

(where $\Gamma$ is the gamma function). The first equation shows that the size effect on the mean nominal strength is a power function of size $D$, namely $\bar\sigma_N \propto D^{-n_d/m}$, regardless of structure geometry.
The Weibull parameter $m$ can be experimentally identified by two methods: 1) the values of $\sigma_N$ measured on many identical specimens are used to calculate the coefficient of variation of strength, and the value of $m$ then follows by solving Eq. (4); or 2) the values of $\bar\sigma_N$ are measured on geometrically similar specimens of several different sizes, and the slope of their linear regression in the plot of $\log \bar\sigma_N$ versus $\log D$ gives $-n_d/m$. Method 1 must give the same result for different sizes, and method 2 the same result as method 1. If not, the size effect is partly or totally non-Weibullian. Omission of testing for different sizes has often led to incorrect conclusions. Another check is that the histogram of the strengths of many identical specimens must be a straight line when plotted in the Weibull scale. A deviation to the right at the high-strength range means that $N_{eq}$ is too small and the material is quasibrittle.
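A minimal sketch of method 2 follows, assuming, as stated above, that the slope of the log-log regression of mean strength against size equals $-n_d/m$; the strength data are synthetic, generated from a known modulus so the recovery can be checked.

```python
from math import log

def weibull_modulus_from_sizes(sizes, mean_strengths, n_d=3):
    """Method 2: ordinary least-squares slope of log(mean strength)
    versus log(size); for the Weibull size effect the slope is -n_d/m,
    so m = -n_d / slope."""
    xs = [log(d) for d in sizes]
    ys = [log(s) for s in mean_strengths]
    x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    return -n_d / slope

# Synthetic data generated with m = 24, i.e. strength ~ D**(-3/24):
sizes = [10.0, 20.0, 40.0, 80.0]
strengths = [5.0 * d ** (-1.0 / 8.0) for d in sizes]
print(weibull_modulus_from_sizes(sizes, strengths))  # recovers ~24
```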
Energetic Size Effect
The fact that the Weibull size effect is a power law means that it is self-similar, i.e., no characteristic structure size exists and material inhomogeneities are negligible compared to $D$. This is the case for fatigue-embrittled metals or fine-grained ceramics, except on the micrometer scale. The existence of a finite characteristic (transitional) structure size $D_0$ is the salient feature of the energetic size effect, discovered in 1984. This kind of size effect represents a transition between two power laws and is observed in brittle heterogeneous materials, termed quasibrittle. These materials include concrete, fiber composites, rocks, coarse-grained and toughened ceramics, rigid foams, sea ice, dental ceramics, dentine, bone, biological shells, many bio- and bio-inspired materials, masonry, mortar, stiff cohesive soils, grouted soils, consolidated snow, wood, paper, carton, coal, cemented sands, etc. On the micro- or nano-scale, all brittle materials become quasibrittle, and thus must exhibit the energetic size effect.
A pronounced energetic size effect occurs in shear, torsional and punching failures of reinforced concrete, in pullout of anchors from concrete, in compression failure of slender reinforced concrete columns and prestressed concrete beams, in compression and tensile failures of fiber-polymer composites and sandwich structures, and in the failures of all the aforementioned quasibrittle materials. One may distinguish two basic types of this size effect.
Type 1: Structures that fail at crack initiation
When the macro-crack initiates from one RVE whose size is not negligible compared to the structure size, the deterministic size effect dominates over the statistical size effect. What causes the size effect is a stress redistribution in the structure (Fig. 2c) due to damage in the initiating RVE, which is typically located at the fracture surface.
A simple intuitive justification of this size effect may be given by considering the flexural failure of an unnotched simply supported beam under a concentrated load at midspan (Fig. 2d). Due to material heterogeneity, what decides the maximum load is not the elastically calculated stress at the tensile face, where = bending moment, = beam depth, and = beam width. Rather, what decides is the stress value roughly at distance from the tensile face, which is at the middle of FPZ (2c). Noting that = , where = stress gradient = and = intrinsic tensile strength of the material, and considering the failure condition = , one gets = where , which is a constant because for geometrically similar beams = constant. This expression is valid only for small enough , and so (according to the first two terms of the binomial expansion) one may approximate it as
which is the law of Type 1 deterministic size effect (Fig. 2a). The purpose of the approximation made is: (a) to prevent from becoming negative for very small , for which the foregoing argument does not apply; and (b) to satisfy the asymptotic condition that the deterministic size effect must vanish for . Here = positive empirical constant; the values = or 2 have been used for concrete, while is optimum according to the existing test data from the literature (Fig. 2d).
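In the notation commonly used for this law (assumed here, since the symbols were not reproduced above), Eq. 5 is often written as

$$\sigma_N = f_{r\infty}\left(1 + \frac{r\,D_b}{l_p + D}\right)^{1/r}$$

where $f_{r\infty}$ is the large-size asymptotic strength, $D_b$ and $l_p$ are material lengths, and $r$ is the positive empirical constant mentioned above (for which the values 1 or 2 have been used for concrete). For $D \to \infty$ the deterministic size effect indeed vanishes, $\sigma_N \to f_{r\infty}$, and $l_p$ keeps $\sigma_N$ finite for very small $D$.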
A fundamental derivation of Eq. 5 for a general structural geometry has been given by
applying dimensional analysis and asymptotic matching to the limit case of energy release when the initial macro-crack length tends to zero. For general structures, the following effective size may be substituted in Eq. (5):
where = strain gradient at the maximum strain point located at the surface, in the direction
normal to the surface.
Eq. 5 cannot apply for large sizes because it approaches a horizontal asymptote.
For large sizes, must approach the Weibull statistical size effect, Eq. 3. This condition is satisfied by the generalized energetic-statistical size effect law:
where are empirical constants (). The deterministic formula (5) is recovered as the limit case for . Fig. 2d shows a comparison of the last formula with the test results for many different concretes, plotted as dimensionless strength versus dimensionless structure size.
The probabilistic theory of Type 1 size effect can be derived from fracture nano-mechanics. Kramers' transition rate theory shows that, on the nano-scale, the far-left tail of the probability distribution of nano-scale strength is a power law of the type . Analysis of the multiscale transition to the material macro-scale then shows that the RVE strength distribution is Gaussian but with a Weibull (or power-law) left tail whose exponent is much larger than 2 and is grafted at a probability of about 0.001.
For structures with , which are common for quasibrittle materials, the Weibull theory does not apply. But the underlying weakest-link model, expressed by Eq. (1) for , does, albeit with a finite , which is a crucial point. The finiteness of the weakest-link chain model causes major deviations from the Weibull distribution. As the structure size, measured by , increases, the grafting point of the Weibullian left part moves to the right until, at about , the entire distribution becomes Weibullian. The mean strength can be computed from this distribution and, as it turns out, its plot is identical with the plot of Eq. 5 seen in Fig. 2g. The point of deviation from the Weibull asymptote is determined by the location of the grafting point on the strength distribution of one RVE (Fig. 2g). Note that the finiteness of the chain in the weakest-link model captures the deterministic part of size effect.
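The finite weakest-link picture can be illustrated numerically. The sketch below (illustrative, uncalibrated parameters) grafts a power-law left tail onto a Gaussian RVE strength distribution at a probability of 0.001 and shows how the mean strength of a chain of $N_{eq}$ RVEs decreases with size:

```python
import numpy as np
from scipy.stats import norm

def rve_cdf(sigma, mu=1.0, cv=0.08, m_tail=24.0, p_graft=1e-3):
    """Gaussian core with a power-law (Weibull-type) left tail grafted at p_graft."""
    s = cv * mu
    sigma_g = norm.ppf(p_graft, loc=mu, scale=s)          # grafting stress
    tail = p_graft * np.clip(sigma / sigma_g, 0.0, None) ** m_tail
    return np.where(sigma < sigma_g, tail, norm.cdf(sigma, loc=mu, scale=s))

def structure_cdf(sigma, n_eq):
    """Strength distribution of a weakest-link chain of n_eq RVEs."""
    return 1.0 - (1.0 - rve_cdf(sigma)) ** n_eq

sig = np.linspace(0.0, 1.5, 4001)
dsig = sig[1] - sig[0]
for n_eq in (1, 10, 1e3, 1e5):
    mean = np.sum(1.0 - structure_cdf(sig, n_eq)) * dsig  # E[X] = integral of (1 - F)
    print(f"N_eq = {n_eq:>8g}: mean strength ~ {mean:.3f}")
```

As $N_{eq}$ grows, the mean migrates onto the Weibull power-law asymptote set by the grafted tail, reproducing the qualitative behavior described above.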
This theory has also been extended to the size effect on the Evans and Paris laws of crack growth in quasibrittle materials, and to the size effect on the static and fatigue lifetimes. It turns out that the size effect on the lifetime is much stronger than that on the short-time strength (the tail exponent is an order of magnitude smaller).
Type 2: Structures in which a large crack or notch exists
The strongest possible size effect occurs for specimens with similar deep notches (Fig. 4b), or for structures in which a large crack, similar for different sizes, forms stably before the maximum load is reached. Because the location of fracture initiation is predetermined to occur at the crack tip and thus cannot sample the random strengths of different RVEs, the statistical contribution to the mean size effect is negligible. Such behavior is typical of reinforced concrete, damaged fiber-reinforced polymers and some compressed unreinforced structures.
The energetic size effect may be intuitively explained by considering the panel in Fig. 1c,d,
initially under a uniform stress equal to . Introduction of a crack of length , with a damage zone
of width at the tip, relieves the stress, and thus also the strain energy, from the shaded undamaged triangles of slope on the flanks of the crack. Then, if and are approximately the same for different sizes, the energy released from the shaded triangles is proportional to , while the energy dissipated by the fracture process is proportional to ; here = fracture energy of the material, = energy density before fracture, and = Young's elastic modulus. The discrepancy between and shows that a balance of energy release and dissipation rate can exist for every size only if decreases with increasing . If the energy dissipated within the damage zone of width is added, one obtains the Bažant (1984) size effect law (Type 2):
(Fig. 4c,d) where = constants, = tensile strength of the material, and accounts for the structure geometry.
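Written out in the commonly used notation (assumed here), the Bažant (1984) size effect law reads

$$\sigma_N = \frac{B\,f_t'}{\sqrt{1 + D/D_0}}$$

so that for $D \ll D_0$ the nominal strength approaches the size-independent plastic limit $B f_t'$, while for $D \gg D_0$ it follows the LEFM scaling $\sigma_N \propto D^{-1/2}$.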
For more complex geometries such an intuitive derivation is not possible. However, dimensional
analysis coupled with asymptotic matching showed that Eq. 8 is applicable in general, and that the dependence of its parameters on the structure geometry has approximately the following form:
where = half of the FPZ length, = relative initial crack length (which is constant for geometrically similar scaling); = dimensionless energy release function of linear elastic fracture mechanics (LEFM), which brings about the effect of structure geometry; , and = stress intensity factor. Fitting Eq. 8 to data from tests of geometrically similar notched specimens of very different sizes is a good way to identify the and of the material.
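Since the Type 2 law linearizes as $1/\sigma_N^2 = a + b\,D$, with $B f_t' = 1/\sqrt{a}$ and $D_0 = a/b$, the identification from size-effect tests reduces to linear regression. A minimal sketch with mock data (all values illustrative):

```python
import numpy as np

D = np.array([0.05, 0.10, 0.20, 0.40])    # specimen sizes [m] (mock data)
sigma_N = np.array([3.2, 2.9, 2.5, 2.0])  # measured nominal strengths [MPa] (mock)

b, a = np.polyfit(D, 1.0 / sigma_N**2, 1)  # slope b and intercept a of 1/sigma^2 vs D
B_ft = 1.0 / np.sqrt(a)                    # plastic-limit strength B*ft' [MPa]
D0 = a / b                                 # transitional size D0 [m]
print(f"B*ft' ~ {B_ft:.2f} MPa, D0 ~ {D0:.3f} m")
```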
Size Effect in Cohesive Crack, Crack Band and Nonlocal Models
Numerical simulations of failure by finite element codes can capture the energetic (or deterministic) size effect only if the material law relating the stress to deformation possesses a characteristic length. This was not the case for the classical finite element codes with a material characterized solely by stress-strain relations.
One simple enough computational method is the cohesive (or fictitious) crack model, in which it is assumed that the stress transmitted across a partially opened crack is a decreasing function of the crack opening , i.e., . The area under this function is , and
is the material characteristic length giving rise to the deterministic size effect. An even simpler method is the crack-band model, in which the cohesive crack is replaced in simulations by a crack band of width equal to one finite element size and a stress-strain relation that is softening in the cross-band direction as where = average strain in that direction.
When needs to be adjusted, the softening stress-strain relation is adjusted so as to maintain the correct energy dissipation . A more versatile method is the nonlocal damage model, in which the stress at a continuum point is a function not of the strain at that point but of the average of the strain field within a certain neighborhood of size centered at that point. Still another method is the gradient damage model, in which the stress depends not only on the strain at that point but also on the gradient of strain. All these computational methods can ensure objectivity and proper convergence with respect to the refinement of the finite element mesh.
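The crack-band energy adjustment can be sketched as follows, assuming linear softening and a band width equal to the element size $h$; the numbers are concrete-like mock values:

```python
def softening_strain_limit(G_f, f_t, E, h):
    """Terminal strain of the softening branch for a crack band of width h,
    chosen so that h * (area under the stress-strain curve) equals G_f."""
    w_f = 2.0 * G_f / f_t        # crack opening at zero stress, linear softening [m]
    eps_peak = f_t / E           # elastic strain at peak stress
    return eps_peak + w_f / h    # total strain when stress reaches zero

# Mock values: G_f = 100 N/m, f_t = 3 MPa, E = 30 GPa
for h in (0.01, 0.05, 0.10):     # candidate element sizes [m]
    print(f"h = {h:.2f} m -> terminal strain = {softening_strain_limit(100.0, 3e6, 30e9, h):.2e}")
```

Coarser elements thus get a steeper softening branch, so that the dissipated energy per unit crack area stays equal to $G_f$ regardless of mesh size.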
Fractal Aspects of Size Effect
The fractal properties of material, including the fractal aspect of crack surface roughness and the lacunar fractal aspect of pore structure, may have a role in the size effect in concrete, and may affect the fracture energy of the material. However, the fractal properties have not yet been experimentally documented over a broad enough range of scales, and the problem has not yet been studied in a depth comparable to the statistical and energetic size effects. The main obstacle to the practical consideration of a fractal influence on the size effect is that, if calibrated for one structure geometry, it is not clear how to infer the size effect for another geometry. The pros and cons were discussed, e.g., by Carpinteri et al. (1994, 2001) and Bažant and Yavari (2005).
Practical Importance
Taking the size effect into account is essential for safe prediction of strength of large concrete bridges, nuclear containments, roof shells, tall buildings, tunnel linings, large load-bearing parts of aircraft, spacecraft and ships made of fiber-polymer composites, wind turbines, large geotechnical excavations, earth and rock slopes, floating sea ice carrying loads, oil platforms under ice forces, etc. Their design depends on the material properties measured on much smaller laboratory specimens. These properties must be extrapolated to sizes greater by one or two orders of magnitude. Even if an expensive full-scale failure test, for example a failure test of the rudder of a very large aircraft, can be carried out, it is financially prohibitive to repeat it a thousand times to obtain the statistical distribution of load capacity. Such statistical information, underlying the safety factors, is obtainable only by proper extrapolation of laboratory tests.
The size effect is gaining in importance as larger and larger structures, of more and more slender forms, are being built. The safety factors, of course, give large safety margins—so large that even for the largest civil engineering structures the classical deterministic analysis based on the mean material properties normally yields failure loads smaller than the maximum design loads. For this reason, the size effect on the strength in brittle failures of concrete structures and structural laminates has long been ignored. Then, however, the failure probability, which is required to be , and actually does have such values for normal-size structures, may become for very large structures as high as per lifetime. Such a high failure probability is intolerable, as it adds significantly to the risks to which people are inevitably exposed. In fact, historical experience shows that very large structures have failed at a frequency several orders of magnitude higher than smaller ones. The reason this has not led to public outcry is that large structures are few. But for the locals, who must use the structures daily, the risk is not acceptable.
Another application is the testing of the fracture energy and characteristic material length. For quasibrittle materials, measuring the size effect on the peak loads (and on the specimen softening after the peak load) is the simplest approach.
Knowing the size effect is also important in the reverse sense—for micrometer scale devices if they are designed partly or fully on the basis of material properties measured more conveniently on the scale of 0.01m to 0.1m.
See also
Material failure theory
Structural failure
Fracture mechanics
Concrete fracture analysis
Fatigue (material)
Concrete cone failure
Notes
References and bibliography
Barenblatt, G.I. (1959). "The formation of equilibrium cracks during brittle fracture. General ideas and hypothesis, axially symmetric cracks." Prikl. Mat. Mekh. 23 (3), 434—444.
Barenblatt, G.I. (1996). Scaling, Selfsimilarity and Intermediate Asymptotics. Cambridge University Press.
Barenblatt, G.I. (1978). Similarity, Self-Similarity and Intermediate Asymptotics (in Russian) Girometeoizdat, Moscow; and English translation, Consultants Bureau, New York 1979.
Barenblatt, G. I. (2003) Scaling, Cambridge University Press.
Bažant, Z.P. (1976). "Instability, ductility, and size effect in strain-softening concrete." J. Engng. Mech. Div., Am. Soc. Civil Engrs., 102, EM2, 331—344; disc. 103, 357—358, 775—777, 104, 501—502.
Bažant, Z.P. (1984). "Size effect in blunt fracture: Concrete, rock, metal." J. of Engng. Mechanics, ASCE, 110, 518—535.
Bažant, Z.P. (1997a). "Scaling of quasibrittle fracture: Asymptotic analysis." Int. J. of Fracture 83 (1), 19—40.
Bažant, Z.P. (2002). "Scaling of structural strength." 2nd ed., Elsevier, London 2005.
Bažant, Z.P., and Chen, E.-P. (1997). "Scaling of structural failure." Applied Mechanics Reviews ASME 50 (10), 593—627.
Bažant, Z.P., and Kazemi, M.T. (1990). "Determination of fracture energy, process zone length and brittleness number from size effect, with application to rock and concrete." Int. J. of Fracture, 44, 111—131.
Bažant, Z.P., and Novák, D. (2000). "Energetic-statistical size effect in quasibrittle failure at crack initiation." ACI Materials Journal 97 (3), 381—392.
Bažant, Z.P., and Planas, J. (1998). Fracture and Size Effect in Concrete and Other Quasibrittle Materials. CRC Press, Boca Raton, Florida.
Bažant, Z.P., and Yavari, A. (2005). "Is the cause of size effect on structural strength fractal or energetic-statistical?" Engrg. Fracture Mechanics 72, 1–31; with discussion and reply in vol. 74 (2007), p. 2897.
Bažant, Z. P. (2004) "Scaling theory of quasibrittle structural failure." Proc. Nat'l. Acad. Sci., USA 101 (37), 13397-13399.
Bažant, Z. P., Daniel, I. M., and Li, Z. (1996). "Size effect and fracture characteristics of composite laminates." J. of Engrg. Materials and Technology ASME 118 (3), 317—324.
Bažant, Z. P. and Jirásek, M. (2002). "Nonlocal integral formulations of plasticity and damage: Survey of progress." J. Engrg Mech., ASCE, 128(11), 1119-1149.
Bažant, Z. P. and Le, J.-L. (2009). "Nano-mechanics based modeling of lifetime distribution of quasibrittle structures", J. Engrg. Failure Ana., 16, pp 2521–2529
Bažant, Z. P., Le, J.-L., and Bazant, M. Z. (2009). "Scaling of strength and lifetime distributions of quasibrittle structures based on atomistic fracture mechanics." Proc. National Acad. of Sciences USA 11484-11489
Bažant, Z. P., and Pang, S.-D. (2006) "Mechanics based statistics of failure risk of quasibrittle structures and size effect on safety factors." Proc. Nat'l Acad. Sci., USA 103 (25), pp. 9434–9439.
Bažant, Z. P., and Pang, S.-D. (2007) "Activation energy based extreme value statistics and size effect in brittle and quasibrittle fracture." J. Mech. Phys. Solids 55, pp. 91–134.
Bažant, Z. P., Vořechovský, M., and Novak, D. (2007) "Asymptotic prediction of energetic-statistical size effect from deterministic finite element solutions." J. Engrg. Mech, ASCE, 128, 153-162.
Bažant, Z. P. and Xi, Y. (1991) "Statistical size effect in quasi-brittle structures: II. Nonlocal theory." J. Engrg. Mech., ASCE 117(7), 2623-2640.
Bažant, Z. P., Zhou, Y., Daniel, I. M., Caner, F. C., and Yu, Q. (2006). "Size effect on strength of laminate-foam sandwich plates", J. of Engrg. Materials and Technology ASME 128 (3), 366—374.
Beremin, F.M. (1983). "A local criterion for cleavage fracture of a nuclear pressure vessel steel." Metallurgy Transactions A, 14, 2277—2287.
Bouchaud, E. (1997). "Scaling properties of cracks." J. Phys.: Condens. Matter 9, 4319—4344.
Carpinteri, A. (1994). "Scaling laws and renormalization groups for strength and toughness of disordered materials." Int. J. of Solids and Structures 31 (3), 291—302.
Carpinteri, A., Chiaia, B., and Cornetti, P. (2001). "Static-kinematic duality and the principle of virtual work in the mechanics of fractal media." Comp. Meth. in Appl. Mech. and Engrg. 19, 3–19.
Coleman, B. D. (1958) "Statistics and time dependence of mechanical breakdown in fibers." J. Appl. Phys. 29 (6), pp. 968–983.
da Vinci, L. (1500s); see The Notebooks of Leonardo da Vinci (1945), Edward McCurdy, London (p. 546); and Les Manuscrits de Léonard de Vinci, transl. into French by C. Ravaisson-Mollien, Institut de France (1881–91), Vol. 3.
Fisher, R.A. and Tippett, L.H.C. (1928). "Limiting forms of the frequency distribution of the largest and smallest member of a sample." Proc., Cambridge Philosophical Society 24, 180—190.
Fréchet, M. (1927). "Sur la loi de probabilité de l' écart maximum." Ann. Soc. Polon. Math. 6, p. 93.
Freudenthal, A.M., and Gumbel, E.J. (1956). "Physical and statistical aspects of fatigue." in Advances in Applied Mechanics, Vol. 4, Academic Press, 117—157.
Grassl, P., and Bažant, Z. P. (2009). "Random lattice-particle simulation of statistical size effect in quasi-brittle structures failing at crack initiation." J. of Engrg. Mech. ASCE 135 (2), Feb., 85—92.
Gumbel, E.J. (1958). Statistics of Extremes. Columbia University Press, New York.
Harlow, D. G. and Phoenix, S. L. (1978) "The Chain-of-Bundles Probability Model For the Strength of Fibrous Materials I: Analysis and Conjectures." J. Comp. Mater. 12: 195-214
Harlow, D. G. and Phoenix, S. L. (1979) "Bounds on the probability of failure of composite materials." Int. J. Frac. 15(4), 312-336
Hillerborg A. (1985). "The theoretical basis of a method to determine the fracture energy of concrete." Materials and Structures 18 (106), 291—296.
Hillerborg, A., Modéer, M. and Petersson, P.E. (1976). "Analysis of crack formation and crack growth in concrete by means of fracture mechanics and finite elements." Cement and Concrete Research 6 773—782.
Le, J.-L., and Bažant, Z. P. (2009) "Finite weakest link model with zero threshold for strength distribution of dental restorative ceramics", Dent. Mater., 25, No. 5, 2009, pp 641–648
Le, J.-L., and Bažant, Z. P. (2011). "Unified Nano-Mechanics Based Probabilistic Theory of Quasibrittle and Brittle Structures". J. of the Mech. and Phys. of Solids, in press.
Mahesh, S. and Phoenix, S. L. (2004) "Lifetime distributions for unidirectional fibrous composites under creep-rupture loading." Int. J. Fract. 127, pp. 303–360.
Mariotte, E. (1686). Traité du mouvement des eaux, posthumously edited by M. de la Hire; Engl. transl. by J.T. Desvaguliers, London (1718), p. 249; also Mariotte's collected works, 2nd ed., The Hague (1740).
Mihashi, H., Okamura, H., and Bažant, Z.P., Editors (1994). Size effect in concrete structures (Proc., Japan Concrete Institute Intern. Workshop held in Sendai, Japan, Oct.31—Nov.2, 1993). E & FN Spon, London-New York, 556 + xiv pages).
Phoenix, S. L. (1978a) "Stochastic strength and fatigue of fiber bundles." Int. J. Frac. Vol. 14, No. 3, 327-344.
Phoenix, S. L. (1978b) "The asymptotic time to failure of a mechanical system of parallel members." SIAM J. Appl. Maths. Vol. 34, No. 2, 227-246.
Phoenix, S. L., and Tierney, L.-J. (1983) "A statistical model for the time dependent failure of unidirectional composite materials under local elastic load-sharing among fibers." Engrg. Fract. Mech. 18 (1), pp. 193–215.
Phoenix, S. L., Ibnabdeljalil, M., Hui, C.-Y. (1997). "Size effects in the distribution for strength of brittle matrix fibrous composites." Int. J. Solids Struct. 34(5), 545-568.
Pijaudier-Cabot, G., and Bažant, Z.P. (1987). "Nonlocal damage theory." J. of Engrg. Mechanics, ASCE 113 (10), 1512—1533.
RILEM Committee TC-QFS (2004). "Quasibrittle fracture scaling and size effect – Final report." Materials and Structures (Paris) 37 (No. 272), 547—586.
Selected Papers by Alfred M. Freudenthal (1981). Am. Soc. of Civil Engrs., New York.
Smith, R. L. (1982) "The asymptotic distribution of the strength of a series-parallel system with equal load sharing." Ann Probab. 10(1), pp. 137 – 171.
Tierney, L.-J. (1983). "Asymptotic bounds on the time to fatigue failure of bundles of fibers under local load sharing." Adv. Appl. Prob. Vol 14, No.1, pp 95–121.
Weibull, W. (1939). "The phenomenon of rupture in solids." Proc., Royal Swedish Institute of Engineering Research (Ingenioersvetenskaps Akad. Handl.) 153, Stockholm, 1–55.
Weibull, W. (1949). "A statistical representation of fatigue failures in solids." Proc., Roy. Inst. of Techn. No. 27.
Weibull, W. (1951). "A statistical distribution function of wide applicability." J. of Applied Mechanics ASME, Vol. 18.
Weibull, W. (1956). "Basic aspects of fatigue." Proc., Colloquium on Fatigue, Stockholm, Springer—Verlag.
Xu, X. F. (2007) "A multiscale stochastic finite element method on elliptic problems involving uncertainties." Comput. Meth. Appl. Mech. Engrg. 196, pp. 2723–2736.
Zhurkov, S. N. (1965). "Kinetic concept of the strength of solids." Int. J. Fract. Mech. 1 (4), pp 311–323.
Stepanov, I. A. (1995). "The scale effect is a consequence of the cellular structure of solid bodies. Thermofluctuation nature of spread in the values of strength." Materials Science 31 (4), pp 441–447.
External links
Physical models
Continuum mechanics
Fracture mechanics | Size effect on structural strength | [
"Physics",
"Materials_science",
"Engineering"
] | 6,773 | [
"Structural engineering",
"Fracture mechanics",
"Continuum mechanics",
"Classical mechanics",
"Materials science",
"Physical objects",
"Physical models",
"Materials degradation",
"Matter"
] |
45,486,119 | https://en.wikipedia.org/wiki/Ground%20segment | A ground segment consists of all the ground-based elements of a space system used by operators and support personnel, as opposed to the space segment and user segment. The ground segment enables management of a spacecraft, and distribution of payload data and telemetry among interested parties on the ground. The primary elements of a ground segment are:
Ground (or Earth) stations, which provide radio interfaces with spacecraft
Mission control (or operations) centers, from which spacecraft are managed
Remote terminals, used by support personnel
Spacecraft integration and test facilities
Launch facilities
Ground networks, which allow for communication between the other ground elements
These elements are present in nearly all space missions, whether commercial, military, or scientific. They may be located together or separated geographically, and they may be operated by different parties. Some elements may support multiple spacecraft simultaneously.
Elements
Ground stations
Ground stations provide radio interfaces between the space and ground segments for telemetry, tracking, and command (TT&C), as well as payload data transmission and reception. Tracking networks, such as NASA's Near Earth Network and Space Network, handle communications with multiple spacecraft through time-sharing.
Ground station equipment may be monitored and controlled remotely. There are often backup stations from which radio contact can be maintained if there is a problem at the primary ground station which renders it unable to operate, such as a natural disaster. Such contingencies are considered in a Continuity of Operations plan.
Transmission and reception
Signals to be uplinked to a spacecraft must first be extracted from ground network packets, encoded to baseband, and modulated, typically onto an intermediate frequency (IF) carrier, before being up-converted to the assigned radio frequency (RF) band. The RF signal is then amplified to high power and carried via waveguide to an antenna for transmission. In colder climates, electric heaters or hot air blowers may be necessary to prevent ice or snow buildup on the parabolic dish.
Received ("downlinked") signals are passed through a low-noise amplifier (often located in the antenna hub to minimize the distance the signal must travel) before being down-converted to IF; these two functions may be combined in a low-noise block downconverter. The IF signal is then demodulated, and the data stream extracted via bit and frame synchronization and decoding. Data errors, such as those caused by signal degradation, are identified and corrected where possible. The extracted data stream is then packetized or saved to files for transmission on ground networks. Ground stations may temporarily store received telemetry for later playback to control centers, often when ground network bandwidth is not sufficient to allow real-time transmission of all received telemetry. They may support delay-tolerant networking.
A single spacecraft may make use of multiple RF bands for different telemetry, command, and payload data streams, depending on bandwidth and other requirements.
Passes
The timing of passes, when a line of sight exists to the spacecraft, is determined by the location of ground stations, and by the characteristics of the spacecraft orbit or trajectory. The Space Network uses geostationary relay satellites to extend pass opportunities over the horizon.
Tracking and ranging
Ground stations must track spacecraft in order to point their antennas properly, and must account for Doppler shifting of RF frequencies due to the motion of the spacecraft. Ground stations may also perform automated ranging; ranging tones may be multiplexed with command and telemetry signals. Ground station tracking and ranging data are passed to the control center along with spacecraft telemetry, where they are often used in orbit determination.
Mission control centers
Mission control centers process, analyze, and distribute spacecraft telemetry, and issue commands, data uploads, and software updates to spacecraft. For crewed spacecraft, mission control manages voice and video communications with the crew. Control centers may also be responsible for configuration management and data archival. As with ground stations, there are often backup control facilities available to support continuity of operations.
Telemetry processing
Control centers use telemetry to determine the status of a spacecraft and its systems. Housekeeping, diagnostic, science, and other types of telemetry may be carried on separate virtual channels. Flight control software performs the initial processing of received telemetry, including:
Separation and distribution of virtual channels
Time-ordering and gap-checking of received frames (gaps may be filled by commanding a retransmission)
Decommutation of parameter values, and association of these values with parameter names called mnemonics
Conversion of raw data to calibrated (engineering) values, and calculation of derived parameters
Limit and constraint checking (which may generate alert notifications)
Generation of telemetry displays, which may take the form of tables, plots of parameters against each other or over time, or synoptic displays (sometimes called mimics) – essentially flow diagrams that present component or subsystem interfaces and their state
A spacecraft database provided by the spacecraft manufacturer is called on to provide information on telemetry frame formatting, the positions and frequencies of parameters within frames, and their associated mnemonics, calibrations, and soft and hard limits. The contents of this database—especially calibrations and limits—may be updated periodically to maintain consistency with onboard software and operating procedures; these can change during the life of a mission in response to upgrades, hardware degradation in the space environment, and changes to mission parameters.
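A minimal sketch of decommutation, calibration, and limit checking against a spacecraft database might look like the following; all mnemonics, offsets, calibrations, and limits are mock values for illustration:

```python
import struct

# Mock spacecraft database: byte offset, raw format, linear calibration, soft limits
DATABASE = {
    "BATT_V": {"offset": 0, "fmt": ">H", "cal": (0.001, 0.0), "limits": (22.0, 34.0)},
    "TEMP_1": {"offset": 2, "fmt": ">h", "cal": (0.01, -40.0), "limits": (-10.0, 50.0)},
}

def decommutate(frame: bytes):
    """Extract raw parameters, convert to engineering values, and check limits."""
    values, alerts = {}, []
    for mnemonic, entry in DATABASE.items():
        (raw,) = struct.unpack_from(entry["fmt"], frame, entry["offset"])
        slope, bias = entry["cal"]
        eng = slope * raw + bias              # calibrated (engineering) value
        values[mnemonic] = eng
        lo, hi = entry["limits"]
        if not lo <= eng <= hi:
            alerts.append(f"{mnemonic} = {eng:.2f} outside [{lo}, {hi}]")
    return values, alerts

frame = struct.pack(">Hh", 28000, 6500)       # mock frame: 28.0 V, 25.0 degC
print(decommutate(frame))
```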
Commanding
Commands sent to spacecraft are formatted according to the spacecraft database, and are validated against the database before being transmitted via a ground station. Commands may be issued manually in real time, or they may be part of automated or semi-automated procedures uploaded in their entirety. Typically, commands successfully received by the spacecraft are acknowledged in telemetry, and a command counter is maintained on the spacecraft and at the ground to ensure synchronization. In certain cases, closed-loop control may be performed. Commanded activities may pertain directly to mission objectives, or they may be part of housekeeping. Commands (and telemetry) may be encrypted to prevent unauthorized access to the spacecraft or its data.
Spacecraft procedures are generally developed and tested against a spacecraft simulator prior to use with the actual spacecraft.
Analysis and support
Mission control centers may rely on "offline" (i.e., non-real-time) data processing subsystems to handle analytical tasks such as:
Orbit determination and maneuver planning
Conjunction assessment and collision avoidance planning
Mission planning and scheduling
On-board memory management
Short- and long-term trend analysis
Path planning, in the case of planetary rovers
Dedicated physical spaces may be provided in the control center for certain mission support roles, such as flight dynamics and network control, or these roles may be handled via remote terminals outside the control center. As on-board computing power and flight software complexity have increased, there is a trend toward performing more automated data processing on board the spacecraft.
Staffing
Control centers may be continuously or regularly staffed by flight controllers. Staffing is typically greatest during the early phases of a mission, and during critical procedures and periods, such as when a spacecraft is in eclipse and unable to generate power. Increasingly commonly, control centers for uncrewed spacecraft may be set up for "lights-out" (or automated) operation, as a means of controlling costs. Flight control software will typically generate notifications of significant events – both planned and unplanned – in the ground or space segment that may require operator intervention.
Remote terminals
Remote terminals are interfaces on ground networks, separate from the mission control center, which may be accessed by payload controllers, telemetry analysts, instrument and science teams, and support personnel, such as system administrators and software development teams. They may be receive-only, or they may transmit data to the ground network.
Terminals used by service customers, including ISPs and end users, are collectively called the "user segment", and are typically distinguished from the ground segment. User terminals including satellite television systems and satellite phones communicate directly with spacecraft, while other types of user terminals rely on the ground segment for data receipt, transmission, and processing.
Integration and test facilities
Space vehicles and their interfaces are assembled and tested at integration and test (I&T) facilities. Mission-specific I&T provides an opportunity to fully test communications between, and behavior of, both the spacecraft and the ground segment prior to launch.
Launch facilities
Vehicles are delivered to space via launch facilities, which handle the logistics of rocket launches. Launch facilities are typically connected to the ground network to relay telemetry prior to and during launch. The launch vehicle itself is sometimes said to constitute a "transfer segment", which may be considered distinct from both the ground and space segments.
Ground networks
Ground networks handle data transfer and voice communication between different elements of the ground segment. These networks often combine LAN and WAN elements, for which different parties may be responsible. Geographically separated elements may be connected via leased lines or virtual private networks. The design of ground networks is driven by requirements on reliability, bandwidth, and security. Delay-tolerant networking protocols may be used.
Reliability is a particularly important consideration for critical systems, with uptime and mean time to recovery being of paramount concern. As with other aspects of the spacecraft system, redundancy of network components is the primary means of achieving the required system reliability.
Security considerations are vital to protect space resources and sensitive data. WAN links often incorporate encryption protocols and firewalls to provide information and network security. Antivirus software and intrusion detection systems provide additional security at network endpoints.
Costs
Costs associated with the establishment and operation of a ground segment are highly variable, and depend on accounting methods. According to a study by Delft University of Technology, the ground segment contributes approximately 5% to the total cost of a space system. According to a report by the RAND Corporation on NASA small spacecraft missions, operation costs alone contribute 8% to the lifetime cost of a typical mission, with integration and testing making up a further 3.2%, ground facilities 2.6%, and ground systems engineering 1.1%.
Ground segment cost drivers include requirements placed on facilities, hardware, software, network connectivity, security, and staffing. Ground station costs in particular depend largely on the required transmission power, RF band(s), and the suitability of preexisting facilities. Control centers may be highly automated as a means of controlling staffing costs.
Images
See also
Consultative Committee for Space Data Systems (CCSDS), which maintains standards for telemetry and command formatting
Radiocommunication service, as defined by ITU Radio Regulations
On-board data handling subsystem
References
Telecommunications infrastructure
Spaceflight ground equipment
Spaceflight technology
Spacecraft communication
Spaceflight concepts | Ground segment | [
"Engineering"
] | 2,165 | [
"Spacecraft communication",
"Aerospace engineering"
] |
45,487,366 | https://en.wikipedia.org/wiki/Construction%20of%20the%20Cheyenne%20Mountain%20Complex | Construction of the Cheyenne Mountain Complex began with the excavation of Cheyenne Mountain in Colorado Springs, Colorado on May 18, 1961. It was made fully operational on February 6, 1967. The complex is a military installation and hardened nuclear bunker that served as the headquarters of the North American Aerospace Defense Command (NORAD). The United States Air Force has had a presence at the complex since the beginning; the facility is now the Cheyenne Mountain Space Force Station, which hosts other military units, including NORAD.
Initial planning
From the beginning of the Cold War, American defense experts and political leaders began planning and implementing a defensive air shield, which they believed was necessary to defend against a possible attack by long-range, manned Soviet bombers. The Air Defense Command was transferred to Colorado Springs' Ent Air Force Base on January 8, 1951. Starting September 1953, the base was the headquarters for the U.S. Army Anti-Aircraft Command.
The North American Air Defense Command (NORAD) was established and activated at the Ent Air Force Base on September 12, 1957. In the late 1950s, a plan was developed to construct a command and control center in a hardened facility as a Cold War defensive strategy against long-range Soviet bombers, ballistic missiles, and a nuclear attack.
The Operational Research Society published scientific articles at that time relating to the planning of such a complex, such as:
Hankin, B. D. "Communication and Control of Military Forces." Journal of the Operational Research Society 4.4 (1953): 65-68.
Rivett, Berwyn Hugh Patrick. "Underground communications." Journal of the Operational Research Society 4.4 (1953): 61-65.
Eddison, R. T., and D. G. Owen. "Discharging iron ore." Journal of the Operational Research Society 4.3 (1953): 39-50.
Some of the commissions charged with investigating these concerns were based in the Colorado Springs area, near the Broadmoor hotel. Members of the Rockefeller family, who led these inquiries, were also present at the complex's inauguration.
Psychological planning (known as Aviation Medicine) went into the selection of candidates, which was also related to continuity-of-government defense programs such as Operation Looking Glass. The MKULTRA program was authorized in the same year.
This planning occurred simultaneously with the rollout of Civil Defense programs in 1951, which resulted in the passage of the National Defense Education Act in 1958.
Hardened bunkers were part of a national plan to ensure the continuation of the United States government in the event of nuclear attack. In the Washington, D.C. area alone, there are said to have been 96 hardened bunkers. Other command bunkers built in the 1950s and early 1960s, include Raven Rock Mountain Complex (1953), Mount Weather Emergency Operations Center (1959) in Virginia, and Project Greek Island (Greenbrier). The closest Russian counterpart to the facility is regarded to be Kosvinsky Mountain, finished in early 1996.
Excavation
The operations center was moved from an above-ground facility, vulnerable to attack, to the "granite shielded security" within Cheyenne Mountain during the Cold War. In terms of telecommunications capabilities, American Telephone and Telegraph (AT&T) had begun placing its switching stations in hardened underground bunkers during the 1950s.
The mountain was excavated under the supervision of the Army Corps of Engineers for the construction of the NORAD Combat Operations Center. Excavation began for NORAD Command Operations Center (COC) in Cheyenne Mountain on May 18, 1961, by Utah Construction & Mining Company. Clifton W. Livingston of the Colorado School of Mines was hired by the Army Corps of Engineers to consult upon use of controlled blasting for smooth-wall blasting techniques.
The official ground breaking ceremony was held June 16, 1961 at the construction site of the new NORAD Combat Operations Center. Generals Lee (ADC) and Laurence S. Kuter (NORAD) simultaneously set off symbolic dynamite charges. On December 20, 1961, with excavation 53% complete, 200 workers walked off the job in what Cecil Welton, Utah Construction Company project manager, called a wildcat strike, after a worker was fired for disobeying safety rules. Workers returned three days later and the fired worker was reinstated.
Excavation was nearly complete in August 1962, but a geological fault in the ceiling of one of the intersections needed to be reinforced with a $2.7 million massive concrete dome. President John F. Kennedy visited NORAD at the Chidlaw Building on June 5, 1963, to obtain a briefing on the status of the Cheyenne Mountain Complex. Excavation was complete on May 1, 1964.
On September 24, 1964, the Secretary of Defense approved the proposal for the underground Combat Operations Center construction and the Space Defense Center. The targeted date for turnover of the military-staffed facility to the Commander of NORAD was January 1, 1966.
Construction
The architectural design was primarily created by Parsons Brinckerhoff Company. Estimated cost of the combat operations center construction and equipment was $66 million. The complex was built in the mid-1960s.
Continental Consolidated Construction was awarded a $6,969,000 contract on February 27, 1963, to build 11 buildings on giant springs, with a total of . Eight three-story buildings were built in the main chambers and three two-story buildings were constructed in the support area. Grafe-Wallace, Inc. and J. M. Foster Co. received a joint contract in April 1964 for $7,212,033 contract for blast-control equipment and utilities installation, including the original six 956-kilowatt diesel powered generators. Continental Consolidated also excavated water and fuel oil reservoirs within the interior of the Cheyenne Mountain facility. Continental Consolidated was paid an additional $106,000 for work on the reservoirs.
Beginning in 1965, the NORAD Combat Operations Center was connected through several remote locations to the national telecommunications systems via Bell Laboratories' Close-in Automatic Route Restoral System (CARRS), a "Blast-resistant" communication system constructed hundreds of feet underneath solid granite. Having several remote locations, from 30 to 120 miles from the Cheyenne Mountain Complex, allowed for several different, automatically rerouted pathways to relay data, teletype, and voice communications. The Ballistic Missile Early Warning System (BMEWS) and Distant Early Warning Line (DEW) sites in North America, United Kingdom, and Greenland sent incoming information through the system to the Combat Operations Center.
Systems installations
Burroughs Corporation developed a command and control system for NORAD's Combat Operations Center for the underground facility and the Federal Building in downtown Colorado Springs. The electronics and communications system centralized and automated the instantaneous (one-millionth of a second) evaluation of aerospace surveillance data. The Air Defense Command's SPACETRACK Center and NORAD's Space Detection and Tracking System (SPADATS) Center merged to form the Space Defense Center. It was moved from Ent AFB to the newly completed Cheyenne Mountain Combat Operations Center and was activated on September 3, 1965. The Electronic Systems Division (ESD) turned the facility's Combat Operations Center over to NORAD on January 1, 1966. The Commander of NORAD transferred Combat Operations Center operations from Ent Air Force Base to Cheyenne Mountain and declared the 425L command and control system fully operational April 20, 1966. The Space Defense Command's 1st Aerospace Control Squadron moved from Ent AFB to Cheyenne Mountain in April 1966.
On May 20, 1966, the NORAD Attack Warning System became operational. The Combat Operations Command was fully operational on July 1, 1966. The $5 million Delta I computer system, one of the largest computer program systems of the Electronic Systems Division, became operational on October 28, 1966. With 53 different programs, it was a defense against space systems by detecting and warning of space threats, which involved recording and monitoring every detected space system. By January 4, 1967, the National Civil Defense Warning Center was in the bunker. The Space Defense Center and the Combat Operations Center achieved Full Operational Capability on February 6, 1967. The total cost was $142.4 million or $1,075,017,676.65 in 2018 value.
Notes
See also
Fortification
Underground construction
References
External links
United States Army Corps of Engineers
Government buildings completed in 1965
Cheyenne Mountain Complex
Military history of Colorado
North American Aerospace Defense Command
Nuclear bunkers in the United States
Continuity of government in the United States
Underground construction | Construction of the Cheyenne Mountain Complex | [
"Engineering"
] | 1,720 | [
"Underground construction",
"United States Army Corps of Engineers",
"Engineering units and formations",
"Construction",
"Civil engineering"
] |
45,498,354 | https://en.wikipedia.org/wiki/Fluid%20kinematics | Fluid kinematics is a term from fluid mechanics, usually referring to a mere mathematical description or specification of a flow field, divorced from any account of the forces and conditions that might actually create such a flow. The term fluids includes liquids or gases, but also may refer to materials that behave with fluid-like properties, including crowds of people or large numbers of grains if those are describable approximately under the continuum assumption as used in continuum mechanics.
Unsteady and convective effects
The material derivative contains two types of terms: those involving the time derivative and those involving spatial derivatives. The time derivative portion is denoted as the local derivative, and represents the effects of unsteady flow. The local derivative occurs during unsteady flow, and becomes zero for steady flow.
The portion of the material derivative represented by the spatial derivatives is called the convective derivative. It accounts for the variation in fluid property, be it velocity or temperature for example, due to the motion of a fluid particle in space where its values are different.
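In standard notation, for a field quantity $F(x, y, z, t)$ carried by a flow with velocity field $\mathbf{V}$, the material derivative splits into these two parts:

$$\frac{DF}{Dt} = \underbrace{\frac{\partial F}{\partial t}}_{\text{local (unsteady)}} + \underbrace{(\mathbf{V}\cdot\nabla)F}_{\text{convective}}$$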
Acceleration field
The acceleration of a particle is the time rate of change of its velocity. Using an Eulerian description for velocity, the velocity field V = V(x,y,z,t) and employing the material derivative, we obtain the acceleration field.
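In the same notation, applying the material derivative to the velocity field gives

$$\mathbf{a} = \frac{D\mathbf{V}}{Dt} = \frac{\partial \mathbf{V}}{\partial t} + (\mathbf{V}\cdot\nabla)\mathbf{V}$$

where the first term is the local acceleration and the second the convective acceleration.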
References
Kinematics
Fluid mechanics | Fluid kinematics | [
"Physics",
"Technology",
"Engineering"
] | 272 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Civil engineering",
"Mechanics",
"Fluid mechanics"
] |
39,452,826 | https://en.wikipedia.org/wiki/Vanishing%20dimensions%20theory | Vanishing-dimensions theory is a particle physics theory suggesting that systems with higher energy have a smaller number of dimensions.
For example, the theory implies that the Universe had fewer dimensions after the Big Bang when its energy was high. Then the number of dimensions may have increased as the system cooled and the Universe may gain more dimensions with time. There could have originally been only one spatial dimension, with two dimensions total — one time dimension and one space dimension. When there were only two dimensions, the Universe lacked gravitational degrees of freedom.
The theory is also tied to smaller amount of dimensions in smaller systems with the universe expansion being a suggested motivating phenomenon for growth of the number of dimensions with time, suggesting a larger number of dimensions in systems on larger scale.
In 2011, Dejan Stojkovic from the University at Buffalo and Jonas Mureika from the Loyola Marymount University described use of a Laser Interferometer Space Antenna system, intended to detect gravitational waves, to test the vanishing-dimension theory by detecting a maximum frequency after which gravitational waves can't be observed.
The vanishing-dimensions theory is seen as an explanation to cosmological constant problem: a fifth dimension would answer the question of energy density required to maintain the constant.
References
Particle physics
Astrophysics theories | Vanishing dimensions theory | [
"Physics"
] | 257 | [
"Astrophysics theories",
"Particle physics",
"Astrophysics"
] |
39,454,179 | https://en.wikipedia.org/wiki/Quantum-optical%20spectroscopy | Quantum-optical spectroscopy is a quantum-optical generalization of laser spectroscopy where matter is excited and probed with a sequence of laser pulses.
Classically, such pulses are defined by their spectral and temporal shape as well as phase and amplitude of the electromagnetic field. Besides these properties of light, the phase-amplitude aspects have intrinsic quantum fluctuations that are of central interest in quantum optics. In ordinary laser spectroscopy, one utilizes only the classical aspects of laser pulses propagating through matter such as atoms or semiconductors. In quantum-optical spectroscopy, one additionally utilizes the quantum-optical fluctuations of light to enhance the spectroscopic capabilities by directly shaping and/or detecting the quantum fluctuations of light. Quantum-optical spectroscopy has applications in controlling and characterizing quantum dynamics of many-body states because one can directly access a large set of many-body states, which is not possible in classical spectroscopy.
Quantum-optical state injection
A generic electromagnetic field can always be expressed in terms of a mode expansion where individual components form a complete set of modes. Such modes can be constructed with different methods and they can, e.g., be energy eigenstates, generic spatial modes, or temporal modes. Once these light modes are chosen, their effect on the quantized electromagnetic field can be described by Boson creation and annihilation operators $\hat{B}^{\dagger}$ and $\hat{B}$ for photons, respectively. The quantum fluctuations of the light field can be uniquely defined by the photon correlations $\Delta\langle [\hat{B}^{\dagger}]^{J} \hat{B}^{K} \rangle$ that contain the pure $(J+K)$-particle correlations as defined with the cluster-expansion approach. Using the same second-quantization formalism for the matter being studied, typical electronic excitations in matter can be described by Fermion operators for electronic excitations and holes, i.e., electronic vacancies left behind in the many-body ground state. The corresponding electron–hole excitations can be described by operators $\hat{P}^{\dagger}$ and $\hat{P}$ that create and annihilate an electron–hole pair, respectively.
In several relevant cases, the light–matter interaction can be described using the dipole interaction

$\hat{H} = -\sum \hbar g\, \hat{P}^{\dagger} \hat{B} + \text{h.c.}$

where the summation is implicitly taken over all possibilities to create an electron–hole pair (the $\hat{P}^{\dagger}$ part) via a photon absorption (the $\hat{B}$ part); the Hamiltonian also contains the Hermitian conjugate (abbreviated as h.c.) of the terms that are explicitly written. The coupling strength between light and matter is defined by $g$.
When the electron–hole pairs are excited resonantly with a single-mode light $\hat{B}$, the photon correlations are directly injected into the many-body correlations. More specifically, the fundamental form of the light–matter interaction inevitably leads to a correlation-transfer relation

$\Delta\langle [\hat{P}^{\dagger}]^{J} \hat{P}^{K} \rangle \propto \Delta\langle [\hat{B}^{\dagger}]^{J} \hat{B}^{K} \rangle$

between photons and electron–hole excitations. Strictly speaking, this relation is valid before the onset of scattering induced by the Coulomb and phonon interactions in the solid. Therefore, it is desirable to use laser pulses that are faster than the dominant scattering processes. This regime is relatively easy to realize in present-day laser spectroscopy because lasers can already output femtosecond, or even attosecond, pulses with a high precision in controllability.
Realization
Physically, the correlation-transfer relation means that one can directly inject desired
many-body states simply by adjusting the quantum fluctuations of the light pulse, as long as the light pulse is short enough. This opens a new possibility for studying properties of distinct many-body states, once quantum-optical spectroscopy is realized through controlling the quantum fluctuations of light sources. For example, a coherent-state laser is described entirely by its single-particle expectation value $\langle \hat{B} \rangle$. Therefore, such an excitation directly injects the $\langle \hat{P} \rangle$ property, that is, the polarization related to electron–hole transitions. To directly excite bound electron–hole pairs, i.e., excitons, described by a two-particle correlation $\Delta\langle \hat{P}^{\dagger} \hat{P} \rangle$, or a biexciton transition $\Delta\langle [\hat{P}^{\dagger}]^{2} [\hat{P}]^{2} \rangle$, one needs to have a source with $\Delta\langle \hat{B}^{\dagger} \hat{B} \rangle$ or $\Delta\langle [\hat{B}^{\dagger}]^{2} [\hat{B}]^{2} \rangle$ photon correlations, respectively.
To realize quantum-optical spectroscopy, high-intensity light sources with freely adjustable quantum statistics are needed, which are currently not available. However, one can apply projective methods to access the quantum-optical response of matter from a set of classical measurements. In particular, the method presented by Kira et al. is robust in projecting quantum-optical responses of genuine many-body systems. This work has shown that one can indeed reveal and access many-body properties that remain hidden in classical spectroscopy. Therefore, quantum-optical spectroscopy is ideally suited for characterizing and controlling complicated many-body states in several different systems, ranging from molecules to semiconductors.
Relation to semiconductor quantum optics
Quantum-optical spectroscopy is an important approach in general semiconductor quantum optics. The capability to discriminate and control many-body states is certainly interesting in extended semiconductors such as quantum wells because a typical classical excitation indiscriminately detects contributions from multiple many-body configurations; With quantum-optical spectroscopy one can access and control a desired many-body state within an extended semiconductor. At the same time, the ideas of quantum-optical spectroscopy can also be useful when studying simpler systems such as quantum dots.
Quantum dots are a semiconductor equivalent to simple atomic systems where most of the first quantum-optical demonstrations have been measured. Since quantum dots are man-made, one can possibly customize them to produce new quantum-optical components for information technology. For example, in quantum-information science, one is often interested to have light sources that can output photons on demand or entangled photon pairs at specific frequencies. Such sources have already been demonstrated with quantum dots by controlling their photon emission with various schemes. In the same way, quantum-dot lasers may exhibit unusual changes in the conditional probability to emit a photon when already one photon is emitted; this effect can be measured in the so-called g2 correlation. One interesting possibility for quantum-optical spectroscopy is to pump quantum dots with quantum light to control their light emission more precisely.
Quantum-dot microcavity investigations have progressed rapidly ever since the experimental demonstration of vacuum Rabi splitting between a single dot and a cavity resonance. This regime can be understood on the basis of the Jaynes–Cummings model while the semiconductor aspects provide many new physical effects due to the electronic coupling with the lattice vibrations.
Nevertheless, the quantum Rabi splitting—stemming directly from the quantized light levels—remained elusive because many experiments were monitoring only the intensity of photoluminescence. Following the ideology of quantum-optical spectroscopy, Ref. predicted that quantum-Rabi splitting could be resolved in photon-correlation measurement even when it becomes smeared out in photoluminescence spectrum. This was experimentally demonstrated by measuring the so-called g2 correlations that quantify how regularly the photons are emitted by the quantum dot inside a microcavity.
See also
Photon antibunching
Resonance fluorescence
Semiconductor Bloch equations
Semiconductor luminescence equations
Ultrafast laser spectroscopy
References
Time-resolved spectroscopy | Quantum-optical spectroscopy | [
"Physics",
"Chemistry"
] | 1,409 | [
"Time-resolved spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
39,454,695 | https://en.wikipedia.org/wiki/Wannier%20equation | The Wannier equation describes a quantum mechanical eigenvalue problem in solids where an electron in a conduction band and an electronic vacancy (i.e. hole) within a valence band attract each other via the Coulomb interaction. For one electron and one hole, this problem is analogous to the Schrödinger equation of the hydrogen atom; and the bound-state solutions are called excitons. When an exciton's radius extends over several unit cells, it is referred to as a Wannier exciton in contrast to Frenkel excitons whose size is comparable with the unit cell. An excited solid typically contains many electrons and holes; this modifies the Wannier equation considerably. The resulting generalized Wannier equation can be determined from the homogeneous part of the semiconductor Bloch equations or the semiconductor luminescence equations.
The equation is named after Gregory Wannier.
Background
Since an electron and a hole have opposite charges, their mutual Coulomb interaction is attractive. The corresponding Schrödinger equation, in the relative coordinate $\mathbf{r}$, has the same form as the hydrogen atom:

$-\frac{\hbar^2 \nabla^2}{2\mu}\,\phi_\lambda(\mathbf{r}) + V(\mathbf{r})\,\phi_\lambda(\mathbf{r}) = E_\lambda\,\phi_\lambda(\mathbf{r})$

with the potential given by

$V(\mathbf{r}) = -\frac{e^2}{4\pi\,\varepsilon\,\varepsilon_0\,|\mathbf{r}|}.$

Here, $\hbar$ is the reduced Planck constant, $\nabla$ is the nabla operator, $\mu$ is the reduced mass, $-e$ ($+e$) is the elementary charge related to an electron (hole), $\varepsilon$ is the relative permittivity, and $\varepsilon_0$ is the vacuum permittivity. The solutions of the hydrogen atom are described by the eigenfunctions $\phi_\lambda(\mathbf{r})$ and eigenenergies $E_\lambda$, where $\lambda$ is a quantum number labeling the different states.
In a solid, the scaling of $E_\lambda$ and the wavefunction size are orders of magnitude different from the hydrogen problem because the relative permittivity is roughly ten and the reduced mass in a solid is much smaller than the electron rest mass $m_e$, i.e., $\mu \ll m_e$. As a result, the exciton radius can be large while the exciton binding energy is small, typically a few to hundreds of meV, depending on material, compared to eV for the hydrogen problem.
The Fourier transformed version of the presented Hamiltonian can be written as

$E_\lambda\,\phi_\lambda(\mathbf{k}) = E_{\mathbf{k}}\,\phi_\lambda(\mathbf{k}) - \sum_{\mathbf{k}'} V_{\mathbf{k}-\mathbf{k}'}\,\phi_\lambda(\mathbf{k}')$

where $\mathbf{k}$ is the electronic wave vector, $E_{\mathbf{k}} = \frac{\hbar^2 \mathbf{k}^2}{2\mu}$ is the kinetic energy, and $\phi_\lambda(\mathbf{k})$, $V_{\mathbf{k}}$ are the Fourier transforms of $\phi_\lambda(\mathbf{r})$, $V(\mathbf{r})$, respectively. The Coulomb sums follow from the convolution theorem and the $\mathbf{k}$-representation is useful when introducing the generalized Wannier equation.
Generalized Wannier equation
The Wannier equation can be generalized by including the presence of many electrons and holes in the excited system. One can start from the general theory of either optical excitations or light emission in semiconductors that can be systematically described using the semiconductor Bloch equations (SBE) or the semiconductor luminescence equations (SLE), respectively. The homogeneous parts of these equations produce the Wannier equation at the low-density limit. Therefore, the homogeneous parts of the SBE and SLE provide a physically meaningful way to identify excitons at arbitrary excitation levels. The resulting generalized Wannier equation is

$E_\lambda\,\phi_\lambda^{\mathrm{R}}(\mathbf{k}) = \tilde{E}_{\mathbf{k}}\,\phi_\lambda^{\mathrm{R}}(\mathbf{k}) - \left(1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}}\right)\sum_{\mathbf{k}'} V_{\mathbf{k}-\mathbf{k}'}\,\phi_\lambda^{\mathrm{R}}(\mathbf{k}')$

where the kinetic energy becomes renormalized,

$\tilde{E}_{\mathbf{k}} = E_{\mathbf{k}} - \sum_{\mathbf{k}'} V_{\mathbf{k}-\mathbf{k}'}\left(f^{e}_{\mathbf{k}'} + f^{h}_{\mathbf{k}'}\right),$

by the electron and hole occupations $f^{e}_{\mathbf{k}}$ and $f^{h}_{\mathbf{k}}$, respectively. These also modify the Coulomb interaction into

$\tilde{V}_{\mathbf{k},\mathbf{k}'} = \left(1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}}\right) V_{\mathbf{k}-\mathbf{k}'}$

where the factor $\left(1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}}\right)$ weakens the Coulomb interaction via the so-called phase-space filling factor that stems from the Pauli exclusion principle preventing multiple excitations of fermions. Due to the phase-space filling factor, the Coulomb attraction becomes repulsive for excitation levels $f^{e}_{\mathbf{k}} + f^{h}_{\mathbf{k}} > 1$. In this regime, the generalized Wannier equation produces only unbound solutions, which follow from the excitonic Mott transition from bound to ionized electron–hole pairs.
Once electron–hole densities exist, the generalized Wannier equation is not Hermitian anymore. As a result, the eigenvalue problem has both left- and right-handed eigenstates $\phi^{L}_\nu(\mathbf{k})$ and $\phi^{R}_\nu(\mathbf{k})$, respectively. They are connected via the phase-space filling factor, i.e., $\phi^{L}_\nu(\mathbf{k}) = \frac{\phi^{R}_\nu(\mathbf{k})}{1 - f^{e}_{\mathbf{k}} - f^{h}_{\mathbf{k}}}$. The left- and right-handed eigenstates have the same eigenvalue $E_\nu$ (which is real valued for the form shown) and they form a complete set of orthogonal solutions since

$$\sum_{\mathbf{k}} \left[\phi^{L}_\nu(\mathbf{k})\right]^{\star} \phi^{R}_{\nu'}(\mathbf{k}) = \delta_{\nu,\nu'}\,.$$
The Wannier equations can also be generalized to include scattering and screening effects that appear due to two-particle correlations within the SBE. This extension also produces left- and right-handed eigenstates, but their connection is more complicated than presented above. Additionally, the eigenvalue $E_\nu$ becomes complex valued, and its imaginary part defines the lifetime of the resonance $\nu$.
Physically, the generalized Wannier equation describes how the presence of other electron–hole pairs modifies the binding of one effective pair. In the simplest form, an excitation weakens the Coulomb interaction and renormalizes the single-particle energies. Once correlation effects are also included, one additionally observes the screening of the Coulomb interaction, excitation-induced dephasing, and excitation-induced energy shifts. All these aspects are important when semiconductor experiments are explained in detail.
Applications
Due to the analogy with the hydrogen problem, the zero-density eigenstates are known analytically for any bulk semiconductor when excitations close to the bottom of the electronic bands are studied. In nanostructured materials, such as quantum wells, quantum wires, and quantum dots, the Coulomb-matrix element strongly deviates from that of the ideal two- and three-dimensional systems due to the finite quantum confinement of electronic states. Hence, one cannot solve the zero-density Wannier equation analytically for those situations, but needs to resort to numerical eigenvalue solvers. In general, only numerical solutions are possible when exciton states are solved within excited matter. Further examples are shown in the context of the Elliott formula.
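To illustrate the structure of such a numerical solution, the following minimal sketch discretizes a one-dimensional model version of the generalized Wannier equation on a k-grid and diagonalizes the resulting matrix. The regularized kernel, grid parameters, and Gaussian occupations are illustrative assumptions, not material-specific values or the true 1D Coulomb transform; the point is the non-Hermitian structure introduced by the phase-space filling factor.

```python
import numpy as np
from scipy.linalg import eig

# --- illustrative model parameters (not material-specific) ---
hbar2_2mu = 1.0            # hbar^2 / (2 mu) in model units
V0, kappa = 0.5, 0.1       # strength / cutoff of a toy regularized kernel
N, kmax = 300, 8.0
k = np.linspace(-kmax, kmax, N)
dk = k[1] - k[0]
V = V0 / (np.abs(k[:, None] - k[None, :]) + kappa)  # model V_{k-k'}

def wannier_matrix(fe, fh):
    """Generalized Wannier equation as a matrix eigenvalue problem:
    E~_k phi(k) - (1 - fe_k - fh_k) sum_k' V_{k-k'} phi(k') dk = E phi(k)."""
    Ek = hbar2_2mu * k**2 - (V * dk) @ (fe + fh)   # renormalized kinetic energy
    P = 1.0 - fe - fh                              # phase-space filling factor
    return np.diag(Ek) - P[:, None] * V * dk

# zero density: bound excitons show up as the lowest, negative eigenvalues
zero = np.zeros(N)
E0 = np.sort(np.linalg.eigvals(wannier_matrix(zero, zero)).real)
print("lowest levels, unexcited:", E0[:3])

# finite density: occupations weaken the attraction (Mott-transition trend),
# and the matrix becomes non-Hermitian with left/right eigenstates
fe = fh = 0.3 * np.exp(-(k / 2.0) ** 2)
w, vl, vr = eig(wannier_matrix(fe, fh), left=True, right=True)
print("lowest levels, excited: ", np.sort(w.real)[:3])
```

Raising the peak occupation toward and beyond $f^{e}_{\mathbf{k}} + f^{h}_{\mathbf{k}} = 1$ in this toy model pushes the lowest eigenvalue up and eventually removes the bound solution, mimicking the Mott transition described above.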
See also
Excitons
Semiconductor Bloch equations
Semiconductor luminescence equations
Elliott formula
Eigenvalues and eigenvectors
Quantum well
Quantum wire
Quantum dot
References
Quantum mechanics | Wannier equation | [
"Physics"
] | 1,177 | [
"Theoretical physics",
"Quantum mechanics"
] |
39,455,714 | https://en.wikipedia.org/wiki/Auger%20architectomics | Auger architectomics is a scientific imaging technique that allows biologists working in the field of nanotechnology to slice open the cells of living organisms to view and assess their internal workings. Using argon gas etching to open the cells and a scanning electron microscope to create a three-dimensional view, researchers can harness this technique to track how cells function. This is most importantly used to assess how cells react to medication, for instance in the field of cancer research.
It was first discovered in 2010 by Professor Lodewyk Kock and his team working in the biotechnology department at the University of the Free State in South Africa. The technique was adapted from Nano Scanning Auger Microscopy (NanoSAM), a technique used by physical scientists to study the surface structures of metal and inanimate materials such as semiconductors. Originally designed to observe yeast cells to find out more about how they manufactured the gas that causes bread to rise, the scientists discovered that the process could also be used in observing other living cells. In 2012 the technique was successfully applied to human cell tissue.
History
The project was initiated at the University of the Free State by the Kock group in 1982, with the major inputs and breakthroughs occurring between 2007 and 2012. The initial aim was to explore lipid biochemical routes, which would uncover unique lipids in yeasts, and to develop new taxonomies on the structures of these lipids. This unfolded into the development of the anti-mitochondrial antifungal assay (3A system), where yeast sensors are used to indicate anti-mitochondrial activity in compounds. These compounds, aimed at selectively switching off the mitochondria, might therefore find application in combating various diseases such as fungal infections and cancer. Auger architectomics, which opens up individual cells to scan them, can be used to assess the effectiveness of such drugs by determining if a single cell can be "powered down" with targeted treatment.
Based on the development of the anti-mitochondrial antifungal assay system, the University of the Free State scientists felt there was a need to analyse the system in more detail. As a result, they adapted Nano Scanning Auger Microscopy, a technique used to scan the properties of metals in physics, to apply it to cells. The result was a combination of Auger electron physics, electron microscopy, and argon etching.
The main challenge in applying the technology to biological material was to invent a sample preparation procedure that would ensure that the atomic and 3D structure remained stable while argon nano-etching occurred. During the NanoSAM scanning electron microscope visualisation, an electron beam at 25 kV is used instead of the normal 5 kV beam. Sample fixation and dehydration methods had to be developed and optimised to fit NanoSAM without creating sample distortions. Dehydration regimes based on alcohol extraction procedures were implemented and optimised, while fixation using various fixatives was included. Electron conductivity of samples throughout argon etching was assured by optimised gold sputtering.
Procedure
Firstly, the biological sample is plated with gold to stabilise the outer structure and make it electron conductive. It is then scanned in SEM mode and the surface visually enlarged. Auger electron physics is applied and selected areas on the sample surface are beamed with electrons. The incident beam ejects an electron from an inner orbital of the atom, leaving an open space. This is filled by an electron from an outer orbital by relaxation. Energy is released, causing the ejection of an electron from the outer orbital. This electron is called the Auger electron. The amount of energy that is released is measured by Auger electron spectroscopy (AES) and used to identify the atom and its signal intensity. Similarly, the surface area can be screened by an electron beam, eventually yielding Auger electrons that are mapped, showing the distribution of atoms in different colours covering a surface area of predetermined size. The previously screened surface of the sample is then etched with argon, exposing a new surface of the sample that is then analysed again. In this way, a 3-dimensional image and element-composition architecture of the whole cell is visualised.
Discoveries
This process in nanotechnology led to the discovery of gas bubbles inside yeasts. This is considered a paradigm shift, since naked gas bubbles are not expected inside any type of cell due to structured water in the cytoplasm. This was observed in a fluconazole-treated, bubble-forming sensor of the yeast Nadsonia. This is the only technology known at present that can accomplish this type of nano-analysis on biological material.
Use in medicine
Nanotechnology developments in medicine allow microdoses of drugs and therapies to be delivered directly to infected cells, instead of killing large groups of cells, often at the expense of healthy cells. Gold at the nano-level has the ability to bind to certain types of biological material, which means that certain types of cells can be targeted. Auger architectomics may be used to map the success or otherwise of targeted drug delivery by analysing cells. The team at the University of the Free State is working with the Mayo Clinic to use the technology as part of their cancer research.
References
Nanotechnology
Electron microscopy
Cell biology
University of the Free State | Auger architectomics | [
"Chemistry",
"Materials_science",
"Engineering",
"Biology"
] | 1,089 | [
"Electron",
"Electron microscopy",
"Cell biology",
"Materials science",
"Microscopy",
"Nanotechnology"
] |
39,456,566 | https://en.wikipedia.org/wiki/Terahertz%20spectroscopy%20and%20technology | Terahertz spectroscopy detects and controls properties of matter with electromagnetic fields that are in the frequency range between a few hundred gigahertz and several terahertz (abbreviated as THz). In many-body systems, several of the relevant states have an energy difference that matches with the energy of a THz photon. Therefore, THz spectroscopy provides a particularly powerful method in resolving and controlling individual transitions between different many-body states. By doing this, one gains new insights about many-body quantum kinetics and how that can be utilized in developing new technologies that are optimized up to the elementary quantum level.
Different electronic excitations within semiconductors are already widely used in lasers, electronic components and computers. At the same time, they constitute an interesting many-body system whose quantum properties can be modified, e.g., via a nanostructure design. Consequently, THz spectroscopy on semiconductors is relevant in revealing both new technological potentials of nanostructures as well as in exploring the fundamental properties of many-body systems in a controlled fashion.
Background
There are a great variety of techniques to generate THz radiation and to detect THz fields. One can, e.g., use an antenna, a quantum-cascade laser, a free-electron laser, or optical rectification to produce well-defined THz sources. The resulting THz field can be characterized via its electric field ETHz(t). Present-day experiments can already output ETHz(t) with a peak value in the range of MV/cm (megavolts per centimeter). To estimate how strong such fields are, one can compute the energy change they induce on an electron over a microscopic distance of one nanometer (nm), i.e., L = 1 nm. One simply multiplies the peak ETHz(t) by the elementary charge e and L to obtain e ETHz(t) L = 100 meV. In other words, such fields have a major effect on electronic systems because the mere field strength of ETHz(t) can induce electronic transitions over microscopic scales. One possibility is to use such THz fields to study Bloch oscillations, in which semiconductor electrons move through the Brillouin zone only to return to where they started.
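The back-of-the-envelope estimate above can be checked directly; this small sketch only reproduces the stated arithmetic.

```python
e = 1.602176634e-19          # elementary charge (C)
E_peak = 1e6 / 1e-2          # 1 MV/cm expressed in V/m
L = 1e-9                     # 1 nm in m

dU_joule = e * E_peak * L    # energy change e * E_THz * L
print(dU_joule / e, "eV")    # -> 0.1 eV, i.e., 100 meV
```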
The THz sources can be also extremely short, down to single cycle of THz field's oscillation. For one THz, that means duration in the range of one picosecond (ps). Consequently, one can use THz fields to monitor and control ultrafast processes in semiconductors or to produce ultrafast switching in semiconductor components. Obviously, the combination of ultrafast duration and strong peak ETHz(t) provides vast new possibilities to systematic studies in semiconductors.
Besides the strength and duration of ETHz(t), the THz field's photon energy plays a vital role in semiconductor investigations because it can be made resonant with several intriguing many-body transitions. For example, electrons in the conduction band and holes, i.e., electronic vacancies, in the valence band attract each other via the Coulomb interaction. Under suitable conditions, electrons and holes can be bound into excitons, which are hydrogen-like states of matter. At the same time, the exciton binding energy is a few to hundreds of meV, which can be matched energetically with a THz photon. Therefore, the presence of excitons can be uniquely detected based on the absorption spectrum of a weak THz field. Simpler states, such as plasma and correlated electron–hole plasma, can also be monitored or modified by THz fields.
Terahertz time-domain spectroscopy
In optical spectroscopy, the detectors typically measure the intensity of the light field rather than the electric field because there are no detectors that can directly measure electromagnetic fields in the optical range. However, there are multiple techniques, such as antennas and electro-optical sampling, that can be applied to measure the time evolution of ETHz(t) directly. For example, one can propagate a THz pulse through a semiconductor sample and measure the transmitted and reflected fields as functions of time. Therefore, one collects information about semiconductor excitation dynamics completely in the time domain, which is the general principle of terahertz time-domain spectroscopy.
By using short THz pulses, a great variety of physical phenomena have already been studied. For unexcited, intrinsic semiconductors one can determine the complex permittivity or, equivalently, the THz absorption coefficient and refractive index. The frequency of transversal-optical phonons, to which THz photons can couple, lies for most semiconductors at several THz. Free carriers in doped semiconductors or optically excited semiconductors lead to a considerable absorption of THz photons. Since THz pulses pass through non-metallic materials, they can be used for inspection and transmission of packaged items.
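As an illustration of how time-domain traces are reduced to optical constants, the sketch below implements the standard thick-slab, single-pass extraction that neglects Fabry–Perot echoes; the function and variable names are illustrative, and the phase-sign convention follows numpy's forward FFT.

```python
import numpy as np

c = 2.99792458e8  # speed of light (m/s)

def extract_optical_constants(t, E_ref, E_sam, d):
    """Single-pass thick-slab extraction from THz time-domain traces.
    t: common time axis (s); E_ref, E_sam: reference/sample fields;
    d: sample thickness (m). Fabry-Perot echoes are ignored."""
    f = np.fft.rfftfreq(len(t), t[1] - t[0])       # frequency axis (Hz)
    T = np.fft.rfft(E_sam) / np.fft.rfft(E_ref)    # complex transmission
    w = 2 * np.pi * f[1:]                          # drop the DC bin
    T = T[1:]
    phase = np.unwrap(np.angle(T))
    n = 1.0 - c * phase / (w * d)                  # refractive index
    # power absorption coefficient, corrected for the Fresnel losses
    alpha = -(2.0 / d) * np.log(np.abs(T) * (n + 1.0) ** 2 / (4.0 * n))
    return f[1:], n, alpha
```

In practice the reference trace is measured without the sample (or with an empty aperture), and etalon echoes are either windowed out in time or modeled explicitly.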
Terahertz-induced plasma and exciton transitions
The THz fields can be applied to accelerate electrons out of their equilibrium. If this is done fast enough, one can measure the elementary processes, such as how fast the screening of the Coulomb interaction is built up. This was experimentally explored in Ref. where it was shown that screening is complete within tens of femtoseconds in semiconductors. These insights are very important to understand how electronic plasma behaves in solids.
The Coulomb interaction can also pair electrons and holes into excitons, as discussed above. Due to their analog to the hydrogen atom, excitons have bound states that can be uniquely identified by the usual quantum numbers 1s, 2s, 2p, and so on. In particular, 1s-to-2p transition is dipole allowed and can be directly generated by ETHz(t) if the photon energy matches the transition energy. In gallium arsenide-type systems, this transition energy is roughly 4 meV that corresponds to 1 THz photons. At resonance, the dipole d1s,2p defines the Rabi energy ΩRabi = d1s,2p ETHz(t) that determines the time scale at which the 1s-to-2p transition proceeds.
For example, one can excite the excitonic transition with an additional optical pulse which is synchronized with the THz pulse. This technique is called transient THz spectroscopy. Using this technique one can follow the formation dynamics of excitons or observe THz gain arising from intraexcitonic transitions.
Since a THz pulse can be intense and short, e.g., single-cycle, it is experimentally possible to realize situations where the duration of the pulse, the time scale related to the Rabi energy, and the THz photon energy ħω are degenerate. In this situation, one enters the realm of extreme nonlinear optics where the usual approximations, such as the rotating-wave approximation (abbreviated as RWA) or the conditions for complete state transfer, break down. As a result, the Rabi oscillations become strongly distorted by the non-RWA contributions, the multiphoton absorption or emission processes, and the dynamic Franz–Keldysh effect, as measured in Refs.
By using a free-electron laser, one can generate longer THz pulses that are more suitable for detecting the Rabi oscillations directly. This technique could indeed demonstrate the Rabi oscillations, or actually the related Autler–Townes splitting, in experiments. The Rabi splitting has also been measured with a short THz pulse and also the onset to multi-THz-photon ionization has been detected, as the THz fields are made stronger. Recently, it has also been shown that the Coulomb interaction causes nominally dipole-forbidden intra-excitonic transitions to become partially allowed.
Theory of terahertz transitions
Terahertz transitions in solids can be systematically approached by generalizing the semiconductor Bloch equations and the related many-body correlation dynamics. At this level, one finds that the THz field is directly absorbed by two-particle correlations that modify the quantum kinetics of the electron and hole distributions. Therefore, a systematic THz analysis must include the quantum kinetics of many-body correlations, which can be treated systematically, e.g., with the cluster-expansion approach. At this level, one can explain and predict a wide range of effects with the same theory, ranging from the Drude-like response of plasma to extreme nonlinear effects of excitons.
See also
References
Spectroscopy
Terahertz technology | Terahertz spectroscopy and technology | [
"Physics",
"Chemistry"
] | 1,791 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Electromagnetic spectrum",
"Spectroscopy",
"Terahertz technology"
] |
39,456,961 | https://en.wikipedia.org/wiki/Dunham%20expansion | In quantum chemistry, the Dunham expansion is an expression for the rotational-vibrational energy levels of a diatomic molecule:

$$E(v, J) = \sum_{k,l} Y_{k,l} \left(v + \tfrac{1}{2}\right)^{k} \left[J(J+1) - \Lambda^2\right]^{l},$$

where $v$ and $J$ are the vibrational and rotational quantum numbers, and $\Lambda$ is the projection of the total electronic angular momentum along the internuclear axis in the body-fixed frame.

The constant coefficients $Y_{k,l}$ are called Dunham parameters, with $Y_{0,0}$ representing the electronic energy. The expression derives from a semiclassical treatment of a perturbational approach to deriving the energy levels. The Dunham parameters are typically calculated by a least-squares fitting procedure of energy levels to the quantum numbers.
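As a worked illustration of the expansion, the short sketch below evaluates E(v, J) from a dictionary of Dunham parameters; the listed values are merely of the right order of magnitude for the ground state of CO and are not fitted constants.

```python
def dunham_energy(Y, v, J, Lambda=0):
    """Evaluate E(v, J) = sum_{k,l} Y[(k,l)] (v + 1/2)^k [J(J+1) - Lambda^2]^l.
    Y maps (k, l) index pairs to Dunham parameters (in cm^-1)."""
    return sum(c * (v + 0.5) ** k * (J * (J + 1) - Lambda ** 2) ** l
               for (k, l), c in Y.items())

# illustrative parameters of the right magnitude for CO's ground state
Y = {(1, 0): 2169.8, (2, 0): -13.3, (0, 1): 1.93,
     (1, 1): -0.0175, (0, 2): -6.1e-6}
print(dunham_energy(Y, v=0, J=0))   # zero-point term value in cm^-1
```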
Relation to conventional band spectrum constants
This table adapts the sign conventions from the book of Huber and Herzberg:

| Dunham parameter | Band spectrum constant |
|---|---|
| $Y_{1,0}$ | $\omega_e$ |
| $Y_{2,0}$ | $-\omega_e x_e$ |
| $Y_{3,0}$ | $\omega_e y_e$ |
| $Y_{0,1}$ | $B_e$ |
| $Y_{1,1}$ | $-\alpha_e$ |
| $Y_{0,2}$ | $-D_e$ |
See also
Rotational-vibrational spectroscopy
References
Spectroscopy
Molecular vibration | Dunham expansion | [
"Physics",
"Chemistry",
"Mathematics"
] | 153 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Molecular vibration",
"Applied mathematics",
"Applied mathematics stubs",
"Spectroscopy"
] |
48,512,364 | https://en.wikipedia.org/wiki/Pople%20diagram | A Pople diagram or Pople's Diagram is a diagram which describes the relationship between various calculation methods in computational chemistry. It was initially introduced in January 1965 by Sir John Pople during the Symposium of Atomic and Molecular Quantum Theory in Florida. The Pople Diagram can be either 2-dimensional or 3-dimensional, with the axes representing ab initio methods, basis sets and treatment of relativity. The diagram attempts to balance calculations by giving all aspects of a computation equal weight.
History
John Pople first introduced the Pople Diagram during the Symposium on Atomic and Molecular Quantum Theory held on Sanibel Island, Florida, in January 1965. He called it a "hyperbola of quantum chemistry", which illustrates the inverse relationship between the sophistication of a calculational method and the number of electrons in a molecule that can be studied by that method. Alternative (reversed) arrangements of the vertical axis, or an interchange of the two axes, are also possible.
Three-Dimensional Pople Diagrams
The 2-dimensional Pople diagram describes the convergence of the quantum-mechanical nonrelativistic electronic energy with the size of the basis set and the level of electron correlation included in the wavefunction. In order to reproduce accurate experimental thermochemical properties, secondary energetic contributions have to be considered. The third dimension of the Pople diagram consists of such energetic contributions. These contributions may include: spin–orbit interaction, scalar relativistic effects, zero-point vibrational energy, and deviations from the Born–Oppenheimer approximation. The three-dimensional Pople diagram (also known as the Csaszar cube) describes the energy contributions involved in quantum chemistry composite methods.
See also
Electronic correlation
References
External links
Introduction to Computational Chemistry
Introduction to Quantum and Computational Chemistry
Quantum chemistry
Computational chemistry
Theoretical chemistry
Molecular modelling
Electronic structure methods | Pople diagram | [
"Physics",
"Chemistry"
] | 372 | [
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry",
"Molecular modelling",
"Atomic, molecular, and optical physics"
] |
48,517,532 | https://en.wikipedia.org/wiki/Calorimetric%20Electron%20Telescope | The CALorimetric Electron Telescope (CALET) is a space telescope being mainly used to perform high precision observations of electrons and gamma rays. It tracks the trajectory of electrons, protons, nuclei, and gamma rays and measures their direction, charge and energy, which may help understand the nature of dark matter or nearby sources of high-energy particle acceleration.
The mission was developed and sponsored by the Japan Aerospace Exploration Agency (JAXA), involving teams from Japan, Italy, and the United States. CALET was launched aboard JAXA's H-II Transfer Vehicle Kounotori 5 (HTV-5) on 19 August 2015, and was placed on the International Space Station's Japanese Kibo module.
Overview
CALET is an astrophysics mission that searches for signatures of dark matter and provides the highest-energy direct measurements of the cosmic ray electron spectrum in order to observe discrete sources of high-energy particle acceleration in our local region of the galaxy. The mission was developed and sponsored by the Japan Aerospace Exploration Agency (JAXA), involving teams from Japan, Italy, and the United States. It seeks to understand the mechanisms of particle acceleration and propagation of cosmic rays in our galaxy, to identify their sources of acceleration and their elemental composition as a function of energy, and possibly to unveil the nature of dark matter. Such sources seem to be able to accelerate particles to energies far higher than scientists can achieve on Earth using the largest accelerators. Understanding how nature does this is important to space travel and has possible applications here on Earth. The CALET Principal Investigator is Shoji Torii from Waseda University, Japan; John Wefel is the co-principal investigator for the US team; Pier S. Marrocchesi is the co-investigator from the Italian team.
Unlike optical telescopes, CALET operates in a scanning mode. It records each cosmic ray event that enters its field of view and triggers its detectors, taking measurements of the cosmic ray in the extremely high energy region of teraelectronvolts (TeV, one trillion electronvolts). These measurements are recorded on the space station and sent to a ground station at Waseda University for analysis. CALET may also yield evidence of rare interactions between matter and dark matter by working in synergy with the Alpha Magnetic Spectrometer (AMS), also aboard the ISS, which is looking at positrons and antiprotons to identify dark matter. Observations will be carried out for more than 5 years.
CALET contains a sub-payload CIRC (Compact Infrared Camera) to observe the Earth's surface in order to detect forest fires.
Objectives
The objectives are to understand the following:
origin and mechanisms of acceleration of high-energy cosmic rays and gamma rays
propagation mechanism of cosmic rays throughout the Galaxy
identity of dark matter
As a cosmic ray observatory, CALET aims to clarify high energy space phenomena and dark matter from two perspectives; one is particle creation and annihilation in the field of particle physics (or nuclear physics) and the other is particle acceleration and propagation in the field of space physics.
Results
CALET first published data on half a million electron and positron cosmic ray events in 2017, finding a spectral index of −3.152 ± 0.016 above 30 GeV.
See also
References
External links
"Status and performance of the Calorimetric Electron Telescope (CALET) on the International Space Station". (PDF) By Roberta Sparvoli.
CALET brochure in English (PDF) at JAXA.
Astrophysics
Dark matter
Space telescopes
Gamma-ray telescopes | Calorimetric Electron Telescope | [
"Physics",
"Astronomy"
] | 727 | [
"Dark matter",
"Unsolved problems in astronomy",
"Astronomical sub-disciplines",
"Concepts in astronomy",
"Unsolved problems in physics",
"Astrophysics",
"Space telescopes",
"Exotic matter",
"Physics beyond the Standard Model",
"Matter"
] |
48,520,204 | https://en.wikipedia.org/wiki/Computational%20anatomy | Computational anatomy is an interdisciplinary field of biology focused on quantitative investigation and modelling of the variability of anatomical shapes. It involves the development and application of mathematical, statistical and data-analytical methods for modelling and simulation of biological structures.
The field is broadly defined and includes foundations in anatomy, applied mathematics and pure mathematics, machine learning, computational mechanics, computational science, biological imaging, neuroscience, physics, probability, and statistics; it also has strong connections with fluid mechanics and geometric mechanics. Additionally, it complements newer, interdisciplinary fields like bioinformatics and neuroinformatics in the sense that its interpretation uses metadata derived from the original sensor imaging modalities (of which magnetic resonance imaging is one example). It focuses on the anatomical structures being imaged, rather than the medical imaging devices. It is similar in spirit to the history of computational linguistics, a discipline that focuses on the linguistic structures rather than the sensor acting as the transmission and communication media.
In computational anatomy, the diffeomorphism group is used to study different coordinate systems via coordinate transformations as generated via the Lagrangian and Eulerian velocities of flow in $\mathbb{R}^3$. The flows between coordinates in computational anatomy are constrained to be geodesic flows satisfying the principle of least action for the kinetic energy of the flow. The kinetic energy is defined through a Sobolev smoothness norm with strictly more than two generalized, square-integrable derivatives for each component of the flow velocity, which guarantees that the flows in $\mathbb{R}^3$ are diffeomorphisms.
It also implies that the diffeomorphic shape momentum taken pointwise satisfying the Euler–Lagrange equation for geodesics is determined by its neighbors through spatial derivatives on the velocity field. This separates the discipline from the case of incompressible fluids for which momentum is a pointwise function of velocity. Computational anatomy intersects the study of Riemannian manifolds and nonlinear global analysis, where groups of diffeomorphisms are the central focus. Emerging high-dimensional theories of shape are central to many studies in computational anatomy, as are questions emerging from the fledgling field of shape statistics.
The metric structures in computational anatomy are related in spirit to morphometrics, with the distinction that Computational anatomy focuses on an infinite-dimensional space of coordinate systems transformed by a diffeomorphism, hence the central use of the terminology diffeomorphometry, the metric space study of coordinate systems via diffeomorphisms.
Genesis
At computational anatomy's heart is the comparison of shape by recognizing in one shape the other. This connects it to D'Arcy Wentworth Thompson's developments in On Growth and Form, which have led to scientific explanations of morphogenesis, the process by which patterns are formed in biology. Albrecht Dürer's Four Books on Human Proportion were arguably the earliest works on computational anatomy. The efforts of Noam Chomsky in his pioneering of computational linguistics inspired the original formulation of computational anatomy as a generative model of shape and form from exemplars acted upon via transformations.
Due to the availability of dense 3D measurements via technologies such as magnetic resonance imaging (MRI), computational anatomy has emerged as a subfield of medical imaging and bioengineering for extracting anatomical coordinate systems at the morphome scale in 3D. The spirit of this discipline shares strong overlap with areas such as computer vision and kinematics of rigid bodies, where objects are studied by analysing the groups responsible for the movement in question. Computational anatomy departs from computer vision with its focus on rigid motions, as the infinite-dimensional diffeomorphism group is central to the analysis of Biological shapes. It is a branch of the image analysis and pattern theory school at Brown University pioneered by Ulf Grenander. In Grenander's general metric pattern theory, making spaces of patterns into a metric space is one of the fundamental operations since being able to cluster and recognize anatomical configurations often requires a metric of close and far between shapes. The diffeomorphometry metric of computational anatomy measures how far two diffeomorphic changes of coordinates are from each other, which in turn induces a metric on the shapes and images indexed to them. The models of metric pattern theory, in particular group action on the orbit of shapes and forms is a central tool to the formal definitions in computational anatomy.
History
Computational anatomy is the study of shape and form at the morphome or gross anatomy millimeter, or morphology, scale, focusing on the study of sub-manifolds of points, curves, surfaces and subvolumes of human anatomy.
An early modern computational neuro-anatomist was David Van Essen, performing some of the early physical unfoldings of the human brain based on printing of a human cortex and cutting. Jean Talairach's publication of Talairach coordinates is an important milestone at the morphome scale, demonstrating the fundamental basis of local coordinate systems in studying neuroanatomy and therefore the clear link to charts of differential geometry. Concurrently, virtual mapping in computational anatomy across high-resolution dense image coordinates was already happening in Ruzena Bajcsy's and Fred Bookstein's earliest developments based on computed axial tomography and magnetic resonance imagery.
The earliest introduction of the use of flows of diffeomorphisms for transformation of coordinate systems in image analysis and medical imaging was by Christensen, Joshi, Miller, and Rabbitt.
The first formalization of computational anatomy as an orbit of exemplar templates under diffeomorphism group action was in the original lecture given by Grenander and Miller with that title in May 1997 at the 50th Anniversary of the Division of Applied Mathematics at Brown University, and subsequent publication. This was the basis for the strong departure from much of the previous work on advanced methods for spatial normalization and image registration, which were historically built on notions of addition and basis expansion. The structure-preserving transformations central to the modern field of computational anatomy, homeomorphisms and diffeomorphisms, carry smooth submanifolds smoothly. They are generated via Lagrangian and Eulerian flows which satisfy a law of composition of functions forming the group property, but are not additive.
The original model of computational anatomy was the triple $(\mathcal{G}, \mathcal{M}, \mathcal{P})$: the group $g \in \mathcal{G}$, the orbit of shapes and forms $m \in \mathcal{M}$, and the probability laws $P \in \mathcal{P}$ which encode the variations of the objects in the orbit. The template or collection of templates are elements in the orbit $m_{\mathrm{temp}} \in \mathcal{M}$ of shapes.
The Lagrangian and Hamiltonian formulations of the equations of motion of computational anatomy took off post 1997 with several pivotal meetings including the 1997 Luminy meeting organized by the Azencott school at Ecole-Normale Cachan on the "Mathematics of Shape Recognition" and the 1998 Trimestre at Institute Henri Poincaré organized by David Mumford "Questions Mathématiques en Traitement du Signal et de l'Image" which catalyzed the Hopkins-Brown-ENS Cachan groups and subsequent developments and connections of computational anatomy to developments in global analysis.
The developments in computational anatomy included the establishment of the Sobolev smoothness conditions on the diffeomorphometry metric to insure existence of solutions of variational problems in the space of diffeomorphisms, the derivation of the Euler–Lagrange equations characterizing geodesics through the group and associated conservation laws, the demonstration of the metric properties of the right invariant metric, the demonstration that the Euler–Lagrange equations have a well-posed initial value problem with unique solutions for all time, and with the first results on sectional curvatures for the diffeomorphometry metric in landmarked spaces. Following the Los Alamos meeting in 2002, Joshi's original large deformation singular Landmark solutions in computational anatomy were connected to peaked solitons or peakons as solutions for the Camassa–Holm equation. Subsequently, connections were made between computational anatomy's Euler–Lagrange equations for momentum densities for the right-invariant metric satisfying Sobolev smoothness to Vladimir Arnold's characterization of the Euler equation for incompressible flows as describing geodesics in the group of volume preserving diffeomorphisms. The first algorithms, generally termed LDDMM for large deformation diffeomorphic mapping for computing connections between landmarks in volumes and spherical manifolds, curves, currents and surfaces, volumes, tensors, varifolds, and time-series have followed.
These contributions of computational anatomy to the global analysis associated to the infinite-dimensional manifolds of subgroups of the diffeomorphism group are far from trivial. The original idea of doing differential geometry, curvature and geodesics on infinite-dimensional manifolds goes back to Bernhard Riemann's Habilitation (Ueber die Hypothesen, welche der Geometrie zu Grunde liegen); the key modern book laying the foundations of such ideas in global analysis is from Michor.
The applications within medical imaging of computational anatomy continued to flourish after two organized meetings at the Institute for Pure and Applied Mathematics conferences at University of California, Los Angeles. Computational anatomy has been useful in creating accurate models of the atrophy of the human brain at the morphome scale, as well as cardiac templates, as well as in modeling biological systems. Since the late 1990s, computational anatomy has become an important part of developing emerging technologies for the field of medical imaging. Digital atlases are a fundamental part of modern medical-school education and of neuroimaging research at the morphome scale. Atlas-based methods and virtual textbooks which accommodate variations as in deformable templates are at the center of many neuro-image analysis platforms, including Freesurfer, FSL, MRIStudio, and SPM. Diffeomorphic registration, introduced in the 1990s, is now an important player, with existing code bases organized around ANTS, DARTEL, DEMONS, LDDMM, StationaryLDDMM, and FastLDDMM being examples of actively used computational codes for constructing correspondences between coordinate systems based on sparse features and dense images. Voxel-based morphometry is an important technology built on many of these principles.
The deformable template orbit model of computational anatomy
The model of human anatomy is a deformable template, an orbit of exemplars under group action. Deformable template models have been central to Grenander's metric pattern theory, accounting for typicality via templates, and accounting for variability via transformation of the template. An orbit under group action as the representation of the deformable template is a classic formulation from differential geometry. The space of shapes is denoted $m \in \mathcal{M}$, with the group $g \in \mathcal{G}$ with law of composition $\circ$; the action of the group on shapes is denoted $g \cdot m$, where the action of the group is defined to satisfy

$$(g \circ g') \cdot m = g \cdot (g' \cdot m) \in \mathcal{M}.$$

The orbit of the template becomes the space of all shapes,

$$\mathcal{M} \doteq \{ m = g \cdot m_{\mathrm{temp}},\ g \in \mathcal{G} \},$$

being homogeneous under the action of the elements of $\mathcal{G}$.
The orbit model of computational anatomy is an abstract algebra – to be compared to linear algebra – since the groups act nonlinearly on the shapes. This is a generalization of the classical models of linear algebra, in which the set of finite dimensional vectors are replaced by the finite-dimensional anatomical submanifolds (points, curves, surfaces and volumes) and images of them, and the matrices of linear algebra are replaced by coordinate transformations based on linear and affine groups and the more general high-dimensional diffeomorphism groups.
Shapes and forms
The central objects are shapes or forms in computational anatomy, one set of examples being the 0, 1, 2, 3-dimensional submanifolds of $\mathbb{R}^3$; a second set of examples being images generated via medical imaging such as magnetic resonance imaging (MRI) and functional magnetic resonance imaging. The 0-dimensional manifolds are landmarks or fiducial points; 1-dimensional manifolds are curves such as sulcal and gyral curves in the brain; 2-dimensional manifolds correspond to boundaries of substructures in anatomy such as the subcortical structures of the midbrain or the gyral surface of the neocortex; subvolumes correspond to subregions of the human body, the heart, the thalamus, the kidney.
The landmarks are a collections of points with no other structure, delineating important fiducials within human shape and form (see associated landmarked image).
The sub-manifold shapes such as surfaces $X \subset \mathbb{R}^3$ are collections of points modeled as parametrized by a local chart or immersion $m(u), u \in U$ (see Figure showing shapes as mesh surfaces). The images such as MR images or DTI images $I(x), x \in \mathbb{R}^3$, are dense functions whose values are scalars, vectors, and matrices (see Figure showing scalar image).
Groups and group actions
Groups and group actions are familiar to the engineering community with the universal popularization and standardization of linear algebra as a basic model for analyzing signals and systems in mechanical engineering, electrical engineering and applied mathematics. In linear algebra the matrix groups (matrices with inverses) are the central structure, with group action defined by the usual definition of $A$ as an $n \times n$ matrix, acting on $x \in \mathbb{R}^n$ as vectors; the orbit in linear algebra is the set of $n$-vectors given by $y = A x \in \mathbb{R}^n$, which is a group action of the matrices through the orbit of $\mathbb{R}^n$.
The central group in computational anatomy defined on volumes in $\mathbb{R}^3$ are the diffeomorphisms, which are mappings with three components $\varphi(\cdot) = (\varphi_1(\cdot), \varphi_2(\cdot), \varphi_3(\cdot))$, law of composition of functions $\varphi \circ \varphi'(\cdot) \doteq \varphi(\varphi'(\cdot))$, with inverse $\varphi \circ \varphi^{-1}(\cdot) = \mathrm{id}$.

Most popular are scalar images, $I(x), x \in \mathbb{R}^3$, with action on the right via the inverse:

$$\varphi \cdot I(x) = I \circ \varphi^{-1}(x),\quad x \in \mathbb{R}^3.$$

For sub-manifolds $X \subset \mathbb{R}^3$, parametrized by a chart or immersion $m(u), u \in U$, the diffeomorphic action is the flow of the position:

$$\varphi \cdot m(u) \doteq \varphi \circ m(u),\quad u \in U.$$
Several group actions in computational anatomy have been defined.
Lagrangian and Eulerian flows for generating diffeomorphisms
For the study of rigid body kinematics, the low-dimensional matrix Lie groups have been the central focus. The matrix groups are low-dimensional mappings, which are diffeomorphisms that provide one-to-one correspondences between coordinate systems, with a smooth inverse. The matrix groups of rotations and scales can be generated via closed-form finite-dimensional matrices which are solutions of simple ordinary differential equations, with solutions given by the matrix exponential.
For the study of deformable shape in computational anatomy, a more general diffeomorphism group has been the group of choice, which is the infinite-dimensional analogue. The high-dimensional diffeomorphism groups used in computational anatomy are generated via smooth flows $\varphi_t, t \in [0,1]$, which satisfy the Lagrangian and Eulerian specification of the flow fields as first introduced in, satisfying the ordinary differential equation

$$\frac{d}{dt} \varphi_t = v_t \circ \varphi_t, \quad \varphi_0 = \mathrm{id},$$

with $v \doteq (v_1, v_2, v_3)$ the vector fields on $\mathbb{R}^3$ termed the Eulerian velocity of the particles at position $\varphi_t(x)$ of the flow. The vector fields are functions in a function space, modelled as a smooth Hilbert space of high dimension, with the Jacobian of the flow a high-dimensional field in a function space as well, rather than a low-dimensional matrix as in the matrix groups. Flows were first introduced for large deformations in image matching; $\dot{\varphi}_t(x)$ is the instantaneous velocity of particle $x$ at time $t$.

The inverse $\varphi_t^{-1}, t \in [0,1]$, required for the group is defined on the Eulerian vector field with advective inverse flow

$$\frac{d}{dt} \varphi_t^{-1} = -(D\varphi_t^{-1})\, v_t, \quad \varphi_0^{-1} = \mathrm{id},$$

where $D$ denotes the Jacobian matrix.
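The Lagrangian flow and its advective inverse can be illustrated numerically. The sketch below integrates a smooth toy velocity field with forward Euler steps and checks that running the reversed, time-inverted field approximately recovers the starting particles; the field, step count, and dimensionality (2D for brevity) are illustrative assumptions.

```python
import numpy as np

def integrate_flow(v, x0, T=1.0, nsteps=200):
    """Forward Euler integration of the Lagrangian flow
    d/dt phi_t(x) = v(phi_t(x), t), phi_0 = id, for particles x0 of shape (N, 2)."""
    dt = T / nsteps
    x = x0.copy()
    for i in range(nsteps):
        x = x + dt * v(x, i * dt)
    return x

def v(x, t):
    """A smooth, rapidly decaying toy velocity field (rotation-like)."""
    r2 = (x ** 2).sum(axis=1, keepdims=True)
    return np.exp(-r2) * np.stack([-x[:, 1], x[:, 0]], axis=1)

x0 = np.random.default_rng(0).normal(size=(5, 2))
x1 = integrate_flow(v, x0)                                  # phi_1(x0)
x0_back = integrate_flow(lambda x, t: -v(x, 1.0 - t), x1)   # advective inverse
print(np.abs(x0_back - x0).max())   # small residual confirms invertibility
```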
The diffeomorphism group of computational anatomy
The group of diffeomorphisms is very big. To ensure smooth flows of diffeomorphisms avoiding shock-like solutions for the inverse, the vector fields must be at least 1-time continuously differentiable in space. For diffeomorphisms on $\mathbb{R}^3$, vector fields are modelled as elements of the Hilbert space $(V, \|\cdot\|_V)$ using the Sobolev embedding theorems, so that each element has strictly greater than 2 generalized, square-integrable spatial derivatives (thus $v_i \in H_0^3, i = 1, 2, 3,$ is sufficient), yielding 1-time continuously differentiable functions.

The diffeomorphism group are flows with vector fields absolutely integrable in Sobolev norm:

$$\mathrm{Diff}_V \doteq \left\{ \varphi = \varphi_1 : \dot{\varphi}_t = v_t \circ \varphi_t,\ \varphi_0 = \mathrm{id},\ \int_0^1 \|v_t\|_V \, dt < \infty \right\},$$

where

$$\|v\|_V^2 \doteq \int_{\mathbb{R}^3} A v \cdot v \, dx, \quad v \in V,$$

with the linear operator $A$ mapping $V$ to the dual space $V^*$, with the integral calculated by integration by parts when $Av$ is a generalized function in the dual space.
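On a periodic grid this norm can be evaluated spectrally, since a differential operator such as $A = (\mathrm{id} - a\Delta)^p$ (a common choice, assumed here purely for illustration) acts diagonally in Fourier space. The sketch below computes $\|v\|_V^2$ for a discrete vector field via Parseval's identity.

```python
import numpy as np

def sobolev_norm_sq(v, a=1.0, p=2, dx=1.0):
    """||v||_V^2 = int (A v) . v dx with A = (id - a*Laplacian)^p,
    evaluated spectrally on a periodic N^3 grid; v has shape (3, N, N, N)."""
    N = v.shape[1]
    xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # angular wavenumbers
    k2 = xi[:, None, None] ** 2 + xi[None, :, None] ** 2 + xi[None, None, :] ** 2
    Ahat = (1.0 + a * k2) ** p                        # Fourier symbol of A
    total = 0.0
    for comp in v:                                    # the 3 vector components
        chat = np.fft.fftn(comp)
        total += float(np.sum(Ahat * np.abs(chat) ** 2))
    return total * dx ** 3 / N ** 3                   # Parseval normalization

rng = np.random.default_rng(1)
v = rng.normal(size=(3, 16, 16, 16))
print(sobolev_norm_sq(v))
```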
Diffeomorphometry: The metric space of shapes and forms
The study of metrics on groups of diffeomorphisms and the study of metrics between manifolds and surfaces have been areas of significant investigation. The diffeomorphometry metric measures how close and far two shapes or images are from each other; the metric length is the shortest length of the flow which carries one coordinate system into the other.

Oftentimes, the familiar Euclidean metric is not directly applicable because the patterns of shapes and images do not form a vector space. In the Riemannian orbit model of computational anatomy, diffeomorphisms acting on the forms do not act linearly. There are many ways to define metrics, and for the sets associated to shapes the Hausdorff metric is another. The method used here to induce the Riemannian metric on the orbit of shapes defines it in terms of the metric length between diffeomorphic coordinate-system transformations of the flows. Measuring the lengths of the geodesic flow between coordinate systems in the orbit of shapes is called diffeomorphometry.
The right-invariant metric on diffeomorphisms
Define the distance on the group of diffeomorphisms

$$d_{\mathrm{Diff}_V}(\psi, \varphi) \doteq \inf_{v} \left\{ \int_0^1 \|v_t\|_V \, dt : \dot{\varphi}_t = v_t \circ \varphi_t,\ \varphi_0 = \psi,\ \varphi_1 = \varphi \right\};$$

this is the right-invariant metric of diffeomorphometry, invariant to reparameterization of space, since for all $\tilde{\varphi} \in \mathrm{Diff}_V$,

$$d_{\mathrm{Diff}_V}(\psi, \varphi) = d_{\mathrm{Diff}_V}(\psi \circ \tilde{\varphi}, \varphi \circ \tilde{\varphi})\,.$$
The metric on shapes and forms
The distance on shapes and forms is

$$d_{\mathcal{M}}(m, n) \doteq \inf \left\{ d_{\mathrm{Diff}_V}(\mathrm{id}, \varphi) : \varphi \cdot m = n,\ \varphi \in \mathrm{Diff}_V \right\};$$

the images are denoted with the orbit as $I \in \mathcal{I}$ and metric $d_{\mathcal{I}}$ defined analogously.
The action integral for Hamilton's principle on diffeomorphic flows
In classical mechanics the evolution of physical systems is described by solutions to the Euler–Lagrange equations associated to the least-action principle of Hamilton. This is a standard way, for example, of obtaining Newton's laws of motion of free particles. More generally, the Euler–Lagrange equations can be derived for systems of generalized coordinates. The Euler–Lagrange equation in computational anatomy describes the geodesic shortest-path flows between coordinate systems of the diffeomorphism metric. In computational anatomy the generalized coordinates are the flow of the diffeomorphism $\varphi_t$ and its Lagrangian velocity $\dot{\varphi}_t$, the two related via the Eulerian velocity $v_t = \dot{\varphi}_t \circ \varphi_t^{-1}$.

Hamilton's principle for generating the Euler–Lagrange equation requires the action integral on the Lagrangian given by

$$J(\varphi) \doteq \int_0^1 L(\varphi_t, \dot{\varphi}_t) \, dt;$$

the Lagrangian is given by the kinetic energy:

$$L(\varphi_t, \dot{\varphi}_t) \doteq \frac{1}{2} \int_{\mathbb{R}^3} A\big(\dot{\varphi}_t \circ \varphi_t^{-1}\big) \cdot \big(\dot{\varphi}_t \circ \varphi_t^{-1}\big) \, dx = \frac{1}{2} \|v_t\|_V^2 \,.$$
Diffeomorphic or Eulerian shape momentum
In computational anatomy, $Av$ was first called the Eulerian or diffeomorphic shape momentum, since when integrated against the Eulerian velocity it gives energy density, and since a conservation law for diffeomorphic shape momentum holds. The operator $A$ is the generalized moment of inertia or inertial operator.
The Euler–Lagrange equation on shape momentum for geodesics on the group of diffeomorphisms
Classical calculation of the Euler–Lagrange equation from Hamilton's principle requires the perturbation of the Lagrangian on the vector field in the kinetic energy with respect to first-order perturbation of the flow. This requires adjustment by the Lie bracket of vector fields, given by the operator $\mathrm{ad}_v : w \in V \mapsto V$ which involves the Jacobian, given by

$$\mathrm{ad}_v(w) \doteq [v, w] \doteq (Dv)\,w - (Dw)\,v \in V.$$

Defining the adjoint $\mathrm{ad}_v^* : V^* \to V^*$, the first-order variation gives the Eulerian shape momentum $Av_t \in V^*$ satisfying the generalized equation

$$\frac{d}{dt} A v_t + \mathrm{ad}_{v_t}^*\big(A v_t\big) = 0, \quad t \in [0,1],$$

meaning for all smooth $w \in V$,

$$\int_{\mathbb{R}^3} \left( \frac{d}{dt} A v_t + \mathrm{ad}_{v_t}^*\big(A v_t\big) \right) \cdot w \, dx = 0.$$
Computational anatomy is the study of the motions of submanifolds, points, curves, surfaces and volumes.
Momenta associated to points, curves and surfaces are all singular, implying the momentum is concentrated on subsets of $\mathbb{R}^3$ which have zero Lebesgue measure. In such cases, the energy is still well defined since although $Av$ is a generalized function, the vector fields are smooth and the Eulerian momentum is understood via its action on smooth functions. The perfect illustration of this is that even when the momentum is a superposition of delta-Diracs, the velocity of the coordinates in the entire volume moves smoothly. The Euler–Lagrange equation on diffeomorphisms for generalized functions was derived in. In Riemannian Metric and Lie-Bracket Interpretation of the Euler–Lagrange Equation on Geodesics, derivations are provided in terms of the adjoint operator and the Lie bracket for the group of diffeomorphisms. It has come to be called the EPDiff equation for diffeomorphisms, connecting to the Euler–Poincaré method, having been studied in the context of the inertial operator for incompressible, divergence-free fluids.
Diffeomorphic shape momentum: a classical vector function
For the momentum density case $A v_t = \mu_t \, dx$, the Euler–Lagrange equation has a classical solution:

$$\frac{d}{dt}\mu_t + (Dv_t)^{T}\mu_t + (D\mu_t)\, v_t + (\nabla \cdot v_t)\, \mu_t = 0.$$

The Euler–Lagrange equation on diffeomorphisms, classically defined for momentum densities, first appeared in medical image analysis.
Riemannian exponential (geodesic positioning) and Riemannian logarithm (geodesic coordinates)
In medical imaging and computational anatomy, positioning and coordinatizing shapes are fundamental operations; the system for positioning anatomical coordinates and shapes built on the metric and the Euler–Lagrange equation is a geodesic positioning system, as first explicated in Miller, Trouvé and Younes.

Solving the geodesic from the initial condition $v_0$ is termed the Riemannian exponential, a mapping $\mathrm{Exp}_{\mathrm{id}}(\cdot) : V \to \mathrm{Diff}_V$ at identity to the group. The Riemannian exponential satisfies $\mathrm{Exp}_{\mathrm{id}}(v_0) = \varphi_1$ for the initial condition $\dot{\varphi}_0 = v_0$ and vector-field dynamics $\dot{\varphi}_t = v_t \circ \varphi_t$, with the velocity evolving along the geodesic according to the Euler–Lagrange equation: for the classical equation, via the diffeomorphic shape momentum density above; for the generalized equation, via $Av \in V^*$ acting on smooth $w \in V$.

Computing the flow onto the coordinates is the Riemannian logarithm, a mapping $\mathrm{Log}_{\mathrm{id}}(\cdot) : \mathrm{Diff}_V \to V$ at identity from $\varphi$ to the vector field $v_0 \in V$. Extended to the entire group they become

$$\mathrm{Exp}_\psi(v_0 \circ \psi) \doteq \mathrm{Exp}_{\mathrm{id}}(v_0) \circ \psi\,; \qquad \mathrm{Log}_\psi(\varphi \circ \psi) \doteq \mathrm{Log}_{\mathrm{id}}(\varphi) \circ \psi\,.$$

These are inverses of each other for unique solutions of the logarithm; the first is called geodesic positioning, the latter geodesic coordinates (see exponential map, Riemannian geometry for the finite-dimensional version). The geodesic metric is a local flattening of the Riemannian coordinate system (see figure).
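For landmarks the Riemannian exponential can be integrated explicitly from the Hamiltonian equations with a Green's kernel. The sketch below assumes a Gaussian kernel and forward Euler integration; kernel width, step count, and the example configuration are illustrative assumptions.

```python
import numpy as np

def kernel(q, sigma):
    """Gaussian Green's kernel matrix K_ij = exp(-|q_i - q_j|^2 / (2 sigma^2))
    together with the pairwise differences d_ij = q_i - q_j."""
    d = q[:, None, :] - q[None, :, :]
    return np.exp(-(d ** 2).sum(-1) / (2 * sigma ** 2)), d

def hamiltonian_rhs(q, p, sigma):
    """Landmark geodesic (Hamiltonian) equations:
       dq_i/dt =  sum_j K(q_i, q_j) p_j
       dp_i/dt = -sum_j grad_{q_i} K(q_i, q_j) (p_i . p_j)"""
    K, d = kernel(q, sigma)
    dq = K @ p
    pp = p @ p.T                                   # inner products p_i . p_j
    dp = ((K * pp / sigma ** 2)[:, :, None] * d).sum(axis=1)
    return dq, dp

def exp_map(q0, p0, sigma=1.0, nsteps=100):
    """Riemannian exponential for landmarks: integrate the geodesic from (q0, p0)."""
    q, p, dt = q0.copy(), p0.copy(), 1.0 / nsteps
    for _ in range(nsteps):
        dq, dp = hamiltonian_rhs(q, p, sigma)
        q, p = q + dt * dq, p + dt * dp            # forward Euler step
    return q

q0 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p0 = np.array([[0.5, 0.0], [0.0, 0.3], [-0.2, 0.1]])
print(exp_map(q0, p0))   # landmark positions after geodesic flow to t = 1
```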
Hamiltonian formulation of computational anatomy
In computational anatomy the diffeomorphisms are used to push the coordinate systems, and the vector fields are used as the control within the anatomical orbit or morphological space. The model is that of a dynamical system, the flow of coordinates $t \mapsto \varphi_t$ and the control the vector field $t \mapsto v_t$, related via $\dot{\varphi}_t = v_t \circ \varphi_t$. The Hamiltonian view reparameterizes the momentum distribution $Av$ in terms of the conjugate momentum or canonical momentum, introduced as a Lagrange multiplier $p_t$ constraining the Lagrangian velocity $\dot{\varphi}_t = v_t \circ \varphi_t$, accordingly:

$$H(\varphi_t, p_t, v_t) = \int_{\mathbb{R}^3} p_t \cdot \left( v_t \circ \varphi_t \right) dx - \frac{1}{2} \int_{\mathbb{R}^3} A v_t \cdot v_t \, dx\,.$$

This function is the extended Hamiltonian. The Pontryagin maximum principle gives the optimizing vector field which determines the geodesic flow satisfying $\dot{\varphi}_t = v_t \circ \varphi_t,\ \varphi_0 = \mathrm{id}$, as well as the reduced Hamiltonian

$$H(\varphi_t, p_t) \doteq \max_{v} H(\varphi_t, p_t, v)\,.$$

The Lagrange multiplier, in its action as a linear form, has its own inner product of the canonical momentum acting on the velocity of the flow which is dependent on the shape: e.g., for landmarks a sum, for surfaces a surface integral, and for volumes a volume integral with respect to $dx$ on $\mathbb{R}^3$. In all cases the Green's kernels carry weights which are the canonical momentum, evolving according to an ordinary differential equation which corresponds to the Euler–Lagrange equation but is the geodesic reparameterization in canonical momentum. The optimizing vector field is given by

$$v_t \doteq \operatorname*{argmax}_{v} H(\varphi_t, p_t, v)\,,$$

with the dynamics of the canonical momentum reparameterizing the vector field along the geodesic:

$$\dot{\varphi}_t = \frac{\partial H(\varphi_t, p_t)}{\partial p}, \qquad \dot{p}_t = -\frac{\partial H(\varphi_t, p_t)}{\partial \varphi}\,.$$
Stationarity of the Hamiltonian and kinetic energy along Euler–Lagrange
Whereas the vector fields are extended across the entire background space of $\mathbb{R}^3$, the geodesic flows associated to the submanifolds have Eulerian shape momentum which evolves as a generalized function concentrated on the submanifolds. For landmarks the geodesics have Eulerian shape momentum which is a superposition of delta distributions travelling with the finite number of particles; the diffeomorphic flow of coordinates has velocities in the range of the weighted Green's kernels. For surfaces, the momentum is a surface integral of delta distributions travelling with the surface.
The geodesics connecting coordinate systems satisfying the Euler–Lagrange equation have stationarity of the Lagrangian. The Hamiltonian is given by the extremum along the path $t \in [0,1]$, $H(\varphi_t, p_t) = \max_v H(\varphi_t, p_t, v)$, equalling the kinetic energy and is stationary along the geodesic. Defining the geodesic velocity at the identity $v_0 = \operatorname*{argmax}_v H(\varphi_0, p_0, v)$, then along the geodesic

$$H(\varphi_t, p_t) = H(\mathrm{id}, p_0) = \frac{1}{2}\|v_0\|_V^2\,.$$

The stationarity of the Hamiltonian demonstrates the interpretation of the Lagrange multiplier as momentum; integrated against velocity it gives energy density. The canonical momentum has many names. In optimal control, the flow $\varphi_t$ is interpreted as the state, and $p_t$ is interpreted as the conjugate state, or conjugate momentum. The geodesic of the Euler–Lagrange equation implies specification of the vector field $v_0$ or Eulerian momentum $Av_0$ at $t = 0$; alternatively, specification of the canonical momentum determines the flow.
The metric on geodesic flows of landmarks, surfaces, and volumes within the orbit
In computational anatomy the submanifolds are pointsets, curves, surfaces and subvolumes which are the basic primitives. The geodesic flows between the submanifolds determine the distance, and form the basic measuring and transporting tools of diffeomorphometry. At $t = 0$ the geodesic has vector field $v_0 = K p_0$ determined by the conjugate momentum $p_0$ and the Green's kernel $K$ of the inertial operator defining the Eulerian momentum $A v_0$. The metric distance between coordinate systems connected via the geodesic is determined by the induced distance between identity and group element:

$$d_{\mathrm{Diff}_V}(\mathrm{id}, \varphi_1) = \|v_0\|_V\,,$$

since geodesics have constant speed.
Conservation laws on diffeomorphic shape momentum for computational anatomy
Given the least-action principle there is a natural definition of momentum associated to generalized coordinates; the quantity acting against velocity gives energy. The field has studied two forms, the momentum associated to the Eulerian vector field termed Eulerian diffeomorphic shape momentum, and the momentum associated to the initial coordinates or canonical coordinates termed canonical diffeomorphic shape momentum. Each has a conservation law. The conservation of momentum goes hand in hand with the Euler–Lagrange equation. In computational anatomy, $Av$ is the Eulerian momentum since when integrated against the Eulerian velocity it gives energy density; the operator $A$ is the generalized moment of inertia or inertial operator which, acting on the Eulerian velocity, gives momentum which is conserved along the geodesic. Conservation of Eulerian shape momentum follows from the Euler–Lagrange equation; a corresponding conservation law holds for the canonical momentum.
Geodesic interpolation of information between coordinate systems via variational problems
Construction of diffeomorphic correspondences between shapes calculates the initial vector-field coordinates $v_0 = K p_0$ and the associated weights $p_0$ on the Green's kernels. These initial coordinates are determined by matching of shapes, called large deformation diffeomorphic metric mapping (LDDMM). LDDMM has been solved for landmarks with and without correspondence and for dense image matchings, curves, surfaces, dense vector and tensor imagery, and varifolds removing orientation. LDDMM calculates geodesic flows of the template onto target coordinates, adding to the action integral an endpoint matching condition measuring the correspondence of elements in the orbit under coordinate-system transformation. Existence of solutions was examined for image matching. The solution of the variational problem satisfies the Euler–Lagrange equation for $t \in [0,1)$ with a boundary condition at the endpoint.
Matching based on minimizing kinetic energy action with endpoint condition
Conservation from the Euler–Lagrange equation extends the boundary condition at $t = 1$ to the rest of the path $t \in [0,1)$. The inexact matching problem with the endpoint matching term $E(\varphi_1 \cdot m_0, m_1)$ has several alternative forms, e.g.

$$\min_{v : \dot{\varphi}_t = v_t \circ \varphi_t} \; \frac{1}{2}\int_0^1 \|v_t\|_V^2 \, dt + E(\varphi_1 \cdot m_0, m_1)\,.$$

One of the key ideas of the stationarity of the Hamiltonian along the geodesic solution is that the integrated running cost reduces to the initial cost at $t = 0$; geodesics of the Euler–Lagrange equation are determined by their initial condition $v_0$. The running cost is thus reduced to the initial cost $\frac{1}{2}\|v_0\|_V^2$ determined by $v_0 = K p_0$.
Matching based on geodesic shooting
The matching problem explicitly indexed to the initial condition $v_0$ is called shooting, which can also be reparameterized via the conjugate momentum $p_0$:

$$\min_{v_0} \; \frac{1}{2}\|v_0\|_V^2 + E\big(\mathrm{Exp}_{\mathrm{id}}(v_0) \cdot m_0,\, m_1\big)\,.$$
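A minimal shooting-based matcher can be built on top of the landmark exponential sketched earlier: optimize the initial momentum so that the shot endpoint lands near the target, penalized by the kinetic energy. The sketch below reuses `kernel` and `exp_map` from the previous snippet and relies on finite-difference gradients, so it only shows the structure of the objective, not an efficient implementation (adjoint or automatic differentiation would normally be used).

```python
import numpy as np
from scipy.optimize import minimize

def match_landmarks(q0, q_target, sigma=1.0, lam=10.0):
    """Inexact landmark matching by geodesic shooting: minimize over p0
       0.5 <p0, K(q0) p0>  +  lam * |exp_map(q0, p0) - q_target|^2 ."""
    K0, _ = kernel(q0, sigma)                        # from the previous sketch
    def energy(p_flat):
        p0 = p_flat.reshape(q0.shape)
        kinetic = 0.5 * np.sum(p0 * (K0 @ p0))       # 0.5 ||v_0||_V^2
        endpoint = np.sum((exp_map(q0, p0, sigma) - q_target) ** 2)
        return kinetic + lam * endpoint
    res = minimize(energy, np.zeros(q0.size), method="L-BFGS-B")
    return res.x.reshape(q0.shape)

q_target = q0 + np.array([[0.2, 0.1], [0.1, 0.2], [-0.1, 0.2]])
p0_opt = match_landmarks(q0, q_target)
print(exp_map(q0, p0_opt))   # close to q_target for moderate lam
```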
Dense image matching in computational anatomy
Dense image matching has a long history, with the earliest efforts exploiting a small deformation framework. Large deformations began in the early 1990s, with the first existence of solutions to the variational problem for flows of diffeomorphisms for dense image matching established in. Beg solved it via one of the earliest LDDMM algorithms, based on solving the variational matching with endpoint defined by the dense imagery, taking variations with respect to the vector fields. Another solution for dense image matching reparameterizes the optimization problem in terms of the state, giving the solution in terms of the infinitesimal action defined by the advection equation.
LDDMM dense image matching
For Beg's LDDMM, denote the image $I(x), x \in \mathbb{R}^3$, with group action $\varphi \cdot I \doteq I \circ \varphi^{-1}$.

Viewing this as an optimal control problem, the state of the system is the diffeomorphic flow of coordinates $\varphi_t$, with the dynamics relating the control $v_t$ to the state given by $\dot{\varphi}_t = v_t \circ \varphi_t$. The endpoint matching condition gives the variational problem

$$\min_{v} \; \frac{1}{2}\int_0^1 \|v_t\|_V^2 \, dt + \frac{1}{2} \left\| I_0 \circ \varphi_1^{-1} - I_1 \right\|_{L^2}^2\,.$$
Beg's iterative LDDMM algorithm has fixed points which satisfy the necessary optimizer conditions. The iterative algorithm is given in Beg's LDDMM algorithm for dense image matching.
Hamiltonian LDDMM in the reduced advected state
Denote the image $I$, with state $q_t \doteq I \circ \varphi_t^{-1}$ and the dynamics relating state and control given by the advective term $\dot{q}_t = -\nabla q_t \cdot v_t$. The endpoint matching condition gives the variational problem

$$\min_{v} \; \frac{1}{2}\int_0^1 \|v_t\|_V^2 \, dt + \frac{1}{2} \left\| q_1 - I_1 \right\|_{L^2}^2\,.$$
Vialard's iterative Hamiltonian LDDMM has fixed points which satisfy the necessary optimizer conditions.
Diffusion tensor image matching in computational anatomy
Dense LDDMM tensor matching takes the images as 3x1 vectors and 3x3 tensors, solving the variational problem matching between coordinate systems based on the principal eigenvectors of the diffusion tensor MRI image (DTI), consisting of the tensor at every voxel. Several of the group actions are defined based on the Frobenius matrix norm between square matrices. Shown in the accompanying figure is a DTI image illustrated via its color map, depicting the eigenvector orientations of the DTI matrix at each voxel, with color determined by the orientation of the directions.
Denote the tensor image $M(x), x \in \mathbb{R}^3$, with eigen-elements $\{\lambda_i(x), e_i(x),\ i = 1, 2, 3\}$, $\lambda_1 \geq \lambda_2 \geq \lambda_3$. Coordinate-system transformation based on DTI imaging has exploited two actions: one based on the principal eigenvector, and one based on the entire matrix.
LDDMM matching based on the principal eigenvector of the diffusion tensor matrix
takes the image as a unit vector field defined by the first eigenvector. The group action becomes
LDDMM matching based on the entire tensor matrix
has group action given by the transformed, re-orthonormalized eigenvectors:

$$\varphi \cdot M(x) = \left( \sum_{i=1}^{3} \lambda_i \, \hat{e}_i \hat{e}_i^{T} \right) \circ \varphi^{-1}(x)\,,$$

where $\hat{e}_1 = \frac{(D\varphi)\, e_1}{\|(D\varphi)\, e_1\|}$ and $\hat{e}_2, \hat{e}_3$ are obtained from $(D\varphi)\, e_2, (D\varphi)\, e_3$ by Gram–Schmidt orthonormalization against $\hat{e}_1$.

The variational problem matching onto the principal eigenvector or onto the entire matrix is described in LDDMM Tensor Image Matching.
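The eigenvector transport step can be made concrete with a small voxelwise sketch; the finite-strain-style reorientation below (normalizing the pushed-forward principal direction) is one common choice, shown only to illustrate the action, with hypothetical array shapes.

```python
import numpy as np

def reorient_principal_eigenvector(e1, J):
    """Voxelwise transport of a principal-eigenvector field under a
    diffeomorphism: v -> (D phi) v / |(D phi) v|.
    e1: (N, 3) unit vectors; J: (N, 3, 3) Jacobians D phi per voxel."""
    w = np.einsum("nij,nj->ni", J, e1)
    return w / np.linalg.norm(w, axis=1, keepdims=True)

# tiny example: a shear Jacobian tilts and renormalizes the fiber direction
e1 = np.array([[0.0, 1.0, 0.0]])
J = np.array([[[1.0, 0.3, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]])
print(reorient_principal_eigenvector(e1, J))
```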
High angular resolution diffusion image (HARDI) matching in computational anatomy
High angular resolution diffusion imaging (HARDI) addresses the well-known limitation of DTI, namely that DTI can only reveal one dominant fiber orientation at each location. HARDI measures diffusion along uniformly distributed directions on the sphere and can characterize more complex fiber geometries. HARDI can be used to reconstruct an orientation distribution function (ODF) that characterizes the angular profile of the diffusion probability density function of water molecules. The ODF is a function defined on the unit sphere, $\mathbb{S}^2$.
Dense LDDMM ODF matching takes the HARDI data as ODFs at each voxel and solves the LDDMM variational problem in the space of ODFs. In the field of information geometry, the space of ODFs forms a Riemannian manifold with the Fisher-Rao metric. For the purpose of LDDMM ODF mapping, the square-root representation is chosen because it is one of the most efficient representations found to date, as the various Riemannian operations, such as geodesics, exponential maps, and logarithm maps, are available in closed form. In the following, denote the square-root ODF as \(\psi = \sqrt{p}\), where \(\psi\) is non-negative to ensure uniqueness and \(\|\psi\| = 1\). The variational problem for matching assumes that two ODF volumes can be generated from one to another via flows of diffeomorphisms, which are solutions of ordinary differential equations starting from the identity map. Denote the action of the diffeomorphism on the template ODF, with coordinates respectively on the unit sphere and the image domain, and with the target indexed similarly.
The group action of the diffeomorphism on the template is given in terms of the Jacobian of the affine-transformed ODF.
This group action of diffeomorphisms on ODF reorients the ODF and reflects changes in both the magnitude of and the sampling directions of due to affine transformation. It guarantees that the volume fraction of fibers oriented toward a small patch must remain the same after the patch is transformed.
The LDDMM variational problem is defined as
where the logarithm of \(\psi_2\) at \(\psi_1\) is defined in closed form on the sphere as
\[ \log_{\psi_1}(\psi_2) = \frac{\theta}{\sin\theta}\left(\psi_2 - \psi_1\cos\theta\right), \qquad \theta = \arccos\langle \psi_1, \psi_2\rangle, \]
where \(\langle \cdot,\cdot \rangle\) is the normal dot product between points on the sphere under the metric.
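The closed-form Riemannian operations mentioned above reduce, for ODFs sampled on a finite set of directions, to ordinary sphere geometry on unit vectors. The following is a minimal sketch under that discretization; the toy ODFs and function names are assumptions for illustration.

```python
import numpy as np

def sphere_log(psi1, psi2, eps=1e-12):
    """Log map on the unit sphere of square-root ODFs:
    the tangent vector at psi1 pointing toward psi2."""
    c = np.clip(np.dot(psi1, psi2), -1.0, 1.0)
    theta = np.arccos(c)                     # geodesic (Fisher-Rao) distance
    if theta < eps:
        return np.zeros_like(psi1)
    return theta / np.sin(theta) * (psi2 - c * psi1)

def sphere_exp(psi1, u, eps=1e-12):
    """Exp map: shoot from psi1 along the tangent vector u."""
    n = np.linalg.norm(u)
    if n < eps:
        return psi1
    return np.cos(n) * psi1 + np.sin(n) * u / n

# Two toy ODFs sampled on 4 directions, converted to square-root form
p = np.array([0.4, 0.3, 0.2, 0.1]); q = np.array([0.25, 0.25, 0.25, 0.25])
psi_p, psi_q = np.sqrt(p), np.sqrt(q)        # unit vectors since p, q sum to 1
u = sphere_log(psi_p, psi_q)
print(np.allclose(sphere_exp(psi_p, u), psi_q))   # True: exp inverts log
```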
This LDDMM-ODF mapping algorithm has been widely used to study brain white matter degeneration in aging, Alzheimer's disease, and vascular dementia. The brain white matter atlas generated based on ODF is constructed via Bayesian estimation. Regression analysis on ODF has also been developed in the ODF manifold space.
Metamorphosis
The principal mode of variation represented by the orbit model is change of coordinates. For settings in which pairs of images are not related by diffeomorphisms but have photometric variation or image variation not represented by the template, active appearance modelling has been introduced, originally by Edwards, Cootes and Taylor, and later in 3D medical imaging. In the context of computational anatomy, in which metrics on the anatomical orbit have been studied, metamorphosis for modelling structures such as tumors and photometric changes which are not resident in the template was introduced for magnetic resonance image models, with many subsequent developments extending the metamorphosis framework.
For image matching, the image metamorphosis framework enlarges the action so that the template itself can vary along the flow, \(t \mapsto (\varphi_t, I_t)\), with action \(\varphi_t \cdot I_t\). In this setting, metamorphosis combines both the diffeomorphic coordinate system transformation of computational anatomy and the early morphing technologies which only faded or modified the photometric or image intensity alone.
Then the matching problem takes a form with equality boundary conditions:
Matching landmarks, curves, surfaces
Transforming coordinate systems based on landmark point or fiducial marker features dates back to Bookstein's early work on small deformation spline methods for interpolating correspondences defined by fiducial points to the two-dimensional or three-dimensional background space in which the fiducials are defined. Large deformation landmark methods arrived in the late 1990s. The above figure depicts a series of landmarks associated with three brain structures: the amygdala, entorhinal cortex, and hippocampus.
Matching geometrical objects like unlabelled point distributions, curves or surfaces is another common problem in computational anatomy. Even in the discrete setting where these are commonly given as vertices with meshes, there are no predetermined correspondences between points, as opposed to the situation of landmarks described above. From the theoretical point of view, while any submanifold can be parameterized in local charts, all reparametrizations of these charts give geometrically the same manifold. Therefore, early on in computational anatomy, investigators identified the necessity of parametrization-invariant representations. One indispensable requirement is that the end-point matching term between two submanifolds is itself independent of their parametrizations. This can be achieved via concepts and methods borrowed from geometric measure theory, in particular currents and varifolds, which have been used extensively for curve and surface matching.
Landmark or point matching with correspondence
Denote the landmarked shape \(X = \{x_1, \dots, x_n\}\) with endpoint targets \(y_i\); the variational problem becomes
\[ \min_v \; \frac{1}{2}\int_0^1 \|v_t\|_V^2 \, dt + \frac{1}{2\sigma^2} \sum_{i} \|\varphi_1(x_i) - y_i\|^2 . \]
The geodesic Eulerian momentum is a generalized function supported on the landmarked set in the variational problem. The endpoint condition with conservation of momentum determines the initial momentum at the identity of the group.
The iterative algorithm for large deformation diffeomorphic metric mapping for landmarks is given in the LDDMM literature.
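A minimal sketch of geodesic shooting for landmarks follows, assuming a scalar Gaussian kernel and plain explicit Euler integration of the Hamiltonian equations; the kernel width, step count, and toy configuration are illustrative choices, and a production LDDMM code would use a better integrator and optimize the initial momentum against the endpoint term.

```python
import numpy as np

def gauss_kernel(q, sigma):
    """Gram matrix k(q_i, q_j) = exp(-|q_i - q_j|^2 / (2 sigma^2))."""
    d2 = ((q[:, None, :] - q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def shoot(q0, p0, sigma=0.5, n_steps=50):
    """Integrate the landmark Hamiltonian system
        dq_i/dt =  sum_j k(q_i, q_j) p_j
        dp_i/dt = -dH/dq_i,   H(q, p) = (1/2) sum_ij k(q_i, q_j) p_i . p_j
    from t = 0 to t = 1 with explicit Euler steps."""
    q, p, dt = q0.copy(), p0.copy(), 1.0 / n_steps
    for _ in range(n_steps):
        K = gauss_kernel(q, sigma)
        pp = p @ p.T                               # inner products p_i . p_j
        diff = q[:, None, :] - q[None, :, :]       # displacements q_i - q_j
        dq = K @ p
        dp = ((K * pp)[:, :, None] * diff).sum(axis=1) / sigma ** 2
        q, p = q + dt * dq, p + dt * dp
    return q, p

# Two landmarks pushed apart by opposite initial momenta
q0 = np.array([[0.0, 0.0], [1.0, 0.0]])
p0 = np.array([[0.0, 1.0], [0.0, -1.0]])
q1, p1 = shoot(q0, p0)
print(q1)    # landmark positions at t = 1 along the geodesic
```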
Measure matching: unregistered landmarks
Glaunes and co-workers first introduced diffeomorphic matching of pointsets in the general setting of matching distributions. As opposed to landmarks, this includes in particular the situation of weighted point clouds with no predefined correspondences and possibly different cardinalities. The template and target discrete point clouds are represented as two weighted sums of Diracs living in the space of signed measures. The space is equipped with a Hilbert metric obtained from a real positive kernel, giving the following norm:
The matching problem between a template and target point cloud may then be formulated using this kernel metric for the endpoint matching term:
where the template distribution is transported by the deformation.
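The resulting endpoint term can be expanded into three kernel sums and computed directly from the points and weights. The following minimal sketch, assuming a scalar Gaussian kernel and hypothetical toy clouds, illustrates this; note that no point-to-point correspondence is needed and the two clouds may have different sizes.

```python
import numpy as np

def k(x, y, sigma=1.0):
    """Gaussian kernel matrix between point sets x (n, d) and y (m, d)."""
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def measure_distance2(x, a, y, b, sigma=1.0):
    """Squared kernel norm || mu_S - mu_T ||^2 between two weighted sums of
    Diracs mu_S = sum_i a_i delta_{x_i} and mu_T = sum_j b_j delta_{y_j}."""
    return (a @ k(x, x, sigma) @ a
            - 2.0 * a @ k(x, y, sigma) @ b
            + b @ k(y, y, sigma) @ b)

# Template with 3 points, target with 2, equal total mass, no correspondences
x, a = np.random.rand(3, 2), np.full(3, 1.0 / 3.0)
y, b = np.random.rand(2, 2), np.full(2, 1.0 / 2.0)
print(measure_distance2(x, a, y, b))   # endpoint matching term
```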
Curve matching
In the one-dimensional case, a curve in 3D can be represented by an embedding of the unit interval \([0,1]\) into \(\mathbb{R}^3\), and the group action of Diff becomes composition, \(\varphi \cdot q = \varphi \circ q\). However, the correspondence between curves and embeddings is not one to one, as any reparametrization \(q \circ \gamma\), for \(\gamma\) a diffeomorphism of the interval [0,1], represents geometrically the same curve. In order to preserve this invariance in the end-point matching term, several extensions of the previous 0-dimensional measure matching approach can be considered.
Curve matching with currents
In the situation of oriented curves, currents give an efficient setting to construct invariant matching terms. In such a representation, curves are interpreted as elements of a functional space dual to the space of vector fields, and compared through kernel norms on these spaces. Matching of two curves can eventually be written as the variational problem
with the endpoint term obtained from the current norm, the derivative being the tangent vector to the curve, for a given matrix kernel. Such expressions are invariant to any positive reparametrizations of the two curves, but still depend on their orientations.
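Discretely, a polyline contributes one vector-weighted Dirac per segment, placed at the segment midpoint with weight the oriented edge vector. The sketch below computes the resulting current distance under a scalar Gaussian kernel; the sampling and kernel width are illustrative assumptions.

```python
import numpy as np

def segments(pts):
    """Midpoints and (oriented) tangent vectors of a polyline's segments."""
    return 0.5 * (pts[1:] + pts[:-1]), pts[1:] - pts[:-1]

def current_inner(cx, tx, cy, ty, sigma=1.0):
    """<C, C'> = sum_ij k(c_i, c'_j) tau_i . tau'_j with a Gaussian kernel k."""
    d2 = ((cx[:, None, :] - cy[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2 / (2.0 * sigma ** 2)) * (tx @ ty.T)).sum()

def current_distance2(curve1, curve2, sigma=1.0):
    c1, t1 = segments(curve1); c2, t2 = segments(curve2)
    return (current_inner(c1, t1, c1, t1, sigma)
            - 2.0 * current_inner(c1, t1, c2, t2, sigma)
            + current_inner(c2, t2, c2, t2, sigma))

# The distance nearly vanishes for a resampled copy of the same oriented curve
t = np.linspace(0, 1, 30); s = np.sqrt(np.linspace(0, 1, 50))
curve_a = np.stack([t, t ** 2], axis=1)
curve_b = np.stack([s, s ** 2], axis=1)     # same curve, different sampling
print(current_distance2(curve_a, curve_b))  # close to 0
```

Reversing the orientation of one curve negates its tangents and changes the distance, which is exactly the orientation dependence noted above.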
Curve matching with varifolds
Varifolds are an alternative to currents when orientation becomes an issue, as for instance in situations involving multiple bundles of curves for which no "consistent" orientation may be defined. Varifolds directly extend 0-dimensional measures by adding an extra tangent space direction to the position of points, leading to a representation of curves as measures on the product of positions and the Grassmannian of all straight lines. The matching problem between two curves then consists in replacing the endpoint matching term with varifold norms of the form:
where the non-oriented line directed by the tangent vector is compared through two scalar kernels, respectively on positions and on the Grassmannian. Due to the inherent non-oriented nature of the Grassmannian representation, such expressions are invariant to positive and negative reparametrizations.
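A minimal modification of the current sketch gives a varifold distance: the directional kernel is replaced by the orientation-invariant Cauchy-Binet kernel \((u \cdot u')^2\) on unit directions, so reversing a curve no longer changes the distance. Names and toy data are again illustrative assumptions.

```python
import numpy as np

def segs(pts):
    """Segment midpoints and edge vectors of a polyline."""
    return 0.5 * (pts[1:] + pts[:-1]), pts[1:] - pts[:-1]

def varifold_inner(cx, tx, cy, ty, sigma=1.0):
    """Varifold inner product: Gaussian kernel on positions times the
    unoriented (u . u')^2 kernel on directions, with mass |tau|."""
    d2 = ((cx[:, None, :] - cy[None, :, :]) ** 2).sum(-1)
    nx = np.linalg.norm(tx, axis=1); ny = np.linalg.norm(ty, axis=1)
    k_dir = (tx @ ty.T) ** 2 / (nx[:, None] * ny[None, :])
    return (np.exp(-d2 / (2.0 * sigma ** 2)) * k_dir).sum()

def varifold_distance2(c1, c2, sigma=1.0):
    a, b = segs(c1), segs(c2)
    return (varifold_inner(*a, *a, sigma) - 2.0 * varifold_inner(*a, *b, sigma)
            + varifold_inner(*b, *b, sigma))

t = np.linspace(0, 1, 40)
curve = np.stack([t, np.sin(t)], axis=1)
flipped = curve[::-1]                       # reversed orientation
print(varifold_distance2(curve, flipped))   # ~0: varifolds ignore orientation
```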
Surface matching
Surface matching shares many similarities with the case of curves. Surfaces in 3D are parametrized in local charts by embeddings of an open set \(U\), with all reparametrizations by a diffeomorphism of \(U\) being geometrically equivalent. Currents and varifolds can also be used to formalize surface matching.
Surface matching with currents
Oriented surfaces can be represented as 2-currents, which are dual to differential 2-forms. In 3D, one can further identify 2-forms with vector fields through the standard wedge product of 3D vectors. In that setting, surface matching writes again:
with the endpoint term given through the current norm, involving the normal vector to the parametrized surface.
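For triangulated surfaces, each face contributes a Dirac at its centre with vector weight the area-weighted normal, computed by a cross product. The sketch below compares two triangulations of the same oriented square; the mesh data and names are illustrative assumptions.

```python
import numpy as np

def triangle_currents(verts, faces):
    """Centres and area-weighted normals N_f = 0.5 (v1 - v0) x (v2 - v0);
    a 2-current is discretised as sum_f N_f delta_{centre_f}."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    centres = (v0 + v1 + v2) / 3.0
    normals = 0.5 * np.cross(v1 - v0, v2 - v0)
    return centres, normals

def surface_current_distance2(m1, m2, sigma=1.0):
    def inner(a, b):
        d2 = ((a[0][:, None, :] - b[0][None, :, :]) ** 2).sum(-1)
        return (np.exp(-d2 / (2.0 * sigma ** 2)) * (a[1] @ b[1].T)).sum()
    return inner(m1, m1) - 2.0 * inner(m1, m2) + inner(m2, m2)

# Unit square split into two triangles, compared with itself re-triangulated
verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
mesh_a = triangle_currents(verts, np.array([[0, 1, 2], [0, 2, 3]]))
mesh_b = triangle_currents(verts, np.array([[0, 1, 3], [1, 2, 3]]))
print(surface_current_distance2(mesh_a, mesh_b))   # small: same oriented surface
```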
This surface mapping algorithm has been validated for brain cortical surfaces against CARET and FreeSurfer. LDDMM mapping for multiscale surfaces has also been discussed in the literature.
Surface matching with varifolds
For non-orientable or non-oriented surfaces, the varifold framework is often more adequate. Identifying the parametric surface with a varifold in the space of measures on the product of positions and the Grassmannian, one simply replaces the previous current metric by:
where the (non-oriented) line is directed by the normal vector to the surface.
Growth and atrophy from longitudinal time-series
There are many settings in which there is a series of measurements, a time-series, to which the underlying coordinate systems will be matched and flowed onto. This occurs, for example, in dynamic growth and atrophy models and in motion tracking. An observed time sequence is given, and the goal is to infer the time flow of geometric change of coordinates carrying the exemplars or templates through the period of observations.
The generic time-series matching problem considers a series of observation times. The flow optimizes a series of costs, one per observation time, giving optimization problems of a summed form over the successive time intervals.
There have been at least three solutions offered thus far, piecewise geodesic, principal geodesic and splines.
The random orbit model of computational anatomy
The random orbit model of computational anatomy first appeared in modelling the change in coordinates associated to the randomness of the group acting on the templates, which induces the randomness on the source of images in the anatomical orbit of shapes and forms and resulting observations through the medical imaging devices. Such a random orbit model, in which randomness on the group induces randomness on the images, was examined for the Special Euclidean Group for object recognition.
Depicted in the figure are the random orbits around each exemplar, generated by randomizing the flow via the initial tangent-space vector field at the identity, and then generating the random object.
The random orbit model induces the prior on shapes and images conditioned on a particular atlas. For this, the generative model generates the mean field as a random change in coordinates of the template, where the diffeomorphic change in coordinates is generated randomly via the geodesic flows. The prior on random transformations is induced by the flow, with the initial vector field constructed as a Gaussian random field prior. The density on the random observables at the output of the sensor is then given by the imaging likelihood.
Shown in the figure on the right is a cartoon orbit: a random spray of subcortical manifolds generated by randomizing the vector fields supported over the submanifolds.
The Bayesian model of computational anatomy
The central statistical model of computational anatomy in the context of medical imaging has been the source-channel model of Shannon theory; the source is the deformable template of images , the channel outputs are the imaging sensors with observables (see Figure).
See The Bayesian model of computational anatomy for discussions of (i) MAP estimation with multiple atlases, (ii) MAP segmentation with multiple atlases, and (iii) MAP estimation of templates from populations.
Statistical shape theory in computational anatomy
Shape in computational anatomy is a local theory, indexing shapes and structures to templates to which they are bijectively mapped. Statistical shape in computational anatomy is the empirical study of diffeomorphic correspondences between populations and common template coordinate systems. This is a strong departure from Procrustes analyses and the shape theories pioneered by David G. Kendall, in that the central groups of Kendall's theories are finite-dimensional Lie groups, whereas the theories of shape in computational anatomy have focused on the diffeomorphism group, which to first order via the Jacobian can be thought of as a field, and thus infinite-dimensional, of low-dimensional Lie groups of scale and rotations.
The random orbit model provides the natural setting to understand empirical shape and shape statistics within computational anatomy, since the non-linearity of the induced probability law on anatomical shapes and forms arises via the reduction to the vector fields at the tangent space at the identity of the diffeomorphism group. The successive flow of the Euler equation induces the random space of shapes and forms.
Performing empirical statistics on this tangent space at the identity is the natural way of inducing probability laws on the statistics of shape. Since both the vector fields and the Eulerian momentum are in a Hilbert space, the natural model is a Gaussian random field, so that given any test function, the inner products with the test functions are Gaussian distributed with a specified mean and covariance.
This is depicted in the accompanying figure, where sub-cortical brain structures are shown in a two-dimensional coordinate system based on inner products of the initial vector fields that generate them from the template, a 2-dimensional span of the Hilbert space.
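A minimal sketch of such tangent-space statistics follows: each subject is encoded by the coefficients of its initial vector field (or momentum), and principal component analysis of these coefficients fits the Gaussian model. The subject count, coefficient dimension, and random toy data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(25, 300))    # 25 subjects x 300 tangent-space coefficients

mean = V.mean(axis=0)
Vc = V - mean
# SVD of the centred data gives the principal modes and their variances,
# i.e. an eigen-decomposition of the empirical covariance in the tangent space
U, s, modes = np.linalg.svd(Vc / np.sqrt(len(V) - 1), full_matrices=False)
variances = s ** 2

# Sample a new tangent vector from the fitted Gaussian random field model;
# shooting it through the geodesic flow would generate a new random shape
z = rng.normal(size=len(s))
v_new = mean + (z * s) @ modes
```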
Template estimation from populations
The study of shape and statistics in populations is a local theory, indexing shapes and structures to templates to which they are bijectively mapped. Statistical shape is then the study of diffeomorphic correspondences relative to the template. A core operation is the generation of templates from populations, estimating a shape that is matched to the population. There are several important methods for generating templates, including methods based on Fréchet averaging and statistical approaches based on the expectation-maximization algorithm and the Bayes random orbit models of computational anatomy. Shown in the accompanying figure is a subcortical template reconstruction from a population of MRI subjects.
Software for diffeomorphic mapping
Software suites containing a variety of diffeomorphic mapping algorithms include the following:
ANTS
DARTEL Voxel-based morphometry
DEFORMETRICA
DEMONS
LDDMM Large deformation diffeomorphic metric mapping
LDDMM based on frame-based kernel
StationaryLDDMM
Cloud software
MRICloud
See also
Bayesian estimation of templates in computational anatomy
Computational neuroanatomy
Geometric data analysis
Large deformation diffeomorphic metric mapping
Procrustes analysis
Riemannian metric and Lie-bracket in computational anatomy
Shape analysis (disambiguation)
Statistical shape analysis
References
Geometry
Fluid mechanics
Bayesian estimation
Neuroscience
Neural engineering
Biomedical engineering
Computational science | Computational anatomy | [
"Mathematics",
"Engineering",
"Biology"
] | 9,264 | [
"Biological engineering",
"Neuroscience",
"Biomedical engineering",
"Applied mathematics",
"Computational science",
"Civil engineering",
"Geometry",
"Fluid mechanics",
"Medical technology"
] |
42,212,055 | https://en.wikipedia.org/wiki/Adventurers%20%28land%20drainage%29 | Adventurers were groups of English engineers and wealthy landowners, who funded large-scale land drainage projects in the seventeenth century, in return for rights to some of the land reclaimed.
Early entrepreneurs
Land drainage works were expensive, and were usually undertaken in sparsely populated areas. In the seventeenth century a number of such schemes were carried out by Adventurers, who acted under parliamentary sanction, but who financed the works carried out themselves. In return, they gained rights to the land reclaimed as a result of the civil engineering works.
One such scheme was the draining of the Bedford Levels. The Bedford Level Corporation was in charge of the works, which when conceived in 1630, would create large tracts of "summer lands", which would be suitable for grazing during the summer months, but would still be liable to flooding in the winter. Funds for the work were provided by the Earl of Bedford and thirteen other Adventurers. Of the land reclaimed, the fourteen men were to receive , to be shared between them, while were to be given to the king, and another were designated to provide income to maintain the works once they were completed. The Dutch engineer Cornelius Vermuyden was employed to carry out the work, and on 12 October 1637, it was judged to be complete, when the Court of Sewers met at St Ives. However, there was dissatisfaction with the decision, and the Royal Commission of Sewers overturned it in 1639, when they met at Huntingdon. An Act of Parliament passed in 1649 authorised the fifth Earl of Bedford to carry out further work, so that the land could be used throughout the year for agriculture. Another Act followed in 1660, but after initial improvements, the schemes gradually deteriorated, and the Bedford Level Corporation found it increasingly difficult to find anyone prepared to invest their money, when the outcome was so full of risk.
As well as the engineering challenges faced by the Adventurers, there was also opposition from those who judged that their livelihood was affected by the works. In 1631, a group of Adventurers led by Sir Anthony Thomas were authorised to drain the East Fen, the West Fen and the Wildmore Fen, to the north and west of Boston, Lincolnshire. They spent some £30,000 on the work, and received of the drained land. They subsequently spent £20,000 on improvements and buildings, and the land generated some £8,000 per year in rent. The land had previously been extra-parochial, on which people from adjacent villages had grazing rights. After seven years, the Commoners rioted in 1642, breaking down the sluices, destroying crops, and demolishing houses. The Adventurers took their case to the House of Lords, who passed a bill for the "relief and security of the drainers", but the House of Commons were less supportive, refusing to take sides. They ordered that the Sheriff and the local Justices of the Peace should act to prevent and suppress riots. The Commoners then took their grievances to court, and won. The outcome was that when the monarchy was restored in 1661, management of the Fens returned to the Court of Sewers, and remained in a poor state until the mid eighteenth century.
To the south of Boston, the Earl of Lindsay and another group of Adventurers faced similar problems. Having reached an agreement with the Court of Sewers, they worked on draining the Lindsay Levels, the main feature of which was the South Forty-Foot Drain, running for from Bourne to Boston. The land reclaimed was suitable for agriculture, and in 1636 they took possession of it, building houses and growing crops. Again, Commoners and Fenmen felt that they had been dispossessed, and attempted to get Parliament to rule in their favour. After three years, they gave up their attempts at a legal solution, and took direct action, destroying the drains, buildings and crops. In the political turmoil of the time, just prior to the start of the Civil War, the Adventurers received no compensation for their loss.
See also
Land reclamation
Twenty, Lincolnshire
Witham Navigable Drains
Bedford Level Corporation
References
Bibliography
Hydraulic engineering
Land drainage in the United Kingdom | Adventurers (land drainage) | [
"Physics",
"Engineering",
"Environmental_science"
] | 835 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
42,212,963 | https://en.wikipedia.org/wiki/Pavement%20milling | Pavement milling (cold planing, asphalt milling, or profiling) is the process of removing at least part of the surface of a paved area such as a road, bridge, or parking lot.
Milling removes anywhere from just enough thickness to level and smooth the surface, up to full-depth removal.
There are a number of different reasons for milling a paved area instead of simply repaving over the existing surface.
Purpose
Recycling of the road surface is one of the main reasons for milling a road surface.
Milling is widely used for pavement recycling today, where the pavement is removed and ground up to be used as the aggregate in new pavement.
For asphalt surfaces the product of milling is reclaimed asphalt pavement (RAP), which can be recycled into new hot mix asphalt (pavement) by combining it with new aggregate and asphalt cement (binder) or a recycling agent.
This reduces the impact that resurfacing has on the environment.
Milling can also remove distresses from the surface, providing a better driving experience and/or longer roadway life.
Some of the issues that milling can remove include:
Raveling: aggregate becoming separated from the binder and loose on the road
Bleeding: the binder (asphalt) coming up to the surface of the road
Rutting: formation of low spots in pavement along the direction of travel usually in the wheel path
Shoving: a washboard like effect transverse to the direction of travel
Ride quality: uneven road surface such as swells, bumps, sags, or depressions
Damage: resulting from accidents and/or fires
It can also be used to control or change the height of part or all of the road.
This can be done to control heights and clearances of other road structures such as: curb reveals, manhole and catch basin heights, shoulder and guardrail heights, and overhead clearances.
It can also be done to change the slope or camber of the road or for grade adjustments which can help with drainage.
Specialty
Specialty milling can be used to form rumble strips which are often used along highways.
Using milling instead of other methods, such as rolling them in, means that the rumble strips can be added at any time after the road surface has hardened.
Another example is to modify the roto-milling head to create slots in concrete slabs for the dowel bar retrofit process. The typical process is to saw cut and jackhammer out the slots for the dowels. Following dowel placement, the slots are then typically backfilled with a non-shrink concrete mixture, and the pavement is diamond-ground to restore smoothness. This special milling process shortens the time to create slots from the traditional method which is labor-intensive.
Types
In the USA, the Asphalt Recycling and Reclaiming Association has defined five classes of cold planing that the Federal Highway Administration has recognized.
The classes are:
Class I – milling to remove surface irregularities
Class II – milling to uniform depth as shown on plans and specifications
Class III – same as class II with the addition of cross slope
Class IV – milling to the base or subgrade (full depth)
Class V – milling to different depths at different locations
Process and machinery
Milling is performed by construction equipment called milling machines or cold planers.
These machines use a large rotating drum to remove and grind the road surface.
The drum consists of scrolls of tool holders.
The scrolls are positioned around the drum such that the ground pavement is moved toward the center and can be loaded onto the machine's conveyor belt.
The tool holders can wear out over time and can be broken if highway structures like manholes are encountered while milling.
The tool holders on the drum hold carbide cutters.
The cutters can be removed and replaced as they wear out.
The amount of wear (and therefore the interval between replacement) varies with the type and consistency of the material being milled; intervals can range from a few hours to several days.
The drum is enclosed in a housing/scraper that is used to contain the milled material so it can be collected and deposited on the conveyor.
The spacing of the tool spirals around the drum affects the end surface of the road, with micro-milling having the tightest spacing.
The majority of milling machines use an up-cut setup, which means that the drum rotates in the direction opposite that of the drive wheels or tracks (i.e., the work surface feeds into the cut).
The speed of the rotating drum should be slower than the forward speed of the machine for a suitable finished surface.
Modern machines generally use a front-loading conveyor system that has the advantage of picking up any material that falls off the conveyor as milling progresses.
Water is generally applied to the drum as it spins, because of the heat generated during the milling process.
Additionally, water helps control the dust created.
In order to control the depth, slopes, and profile of the final milled surface many millers now have automatic depth control using lasers, string-lines, or other methods to maintain milled surfaces to ± of the target height.
Micro milling
Micro milling is also known as carbide grinding.
It is a lower cost alternative to diamond grinding of pavement.
Micro milling uses a specialty drum with three to four times as many cutting teeth as a standard milling drum.
Micro milling can be used either as the final surface or as a treatment before applying a thin overlay.
Micro milling can be used to remove many of the same distresses that standard milling can remove, although usually to a shallower depth.
A micro milled surface has a uniform finish with reduced road noise compared to standard milling.
References
External links
Asphalt Recycling and Reclaiming Association
Engineering vehicles
Road construction | Pavement milling | [
"Engineering"
] | 1,142 | [
"Construction",
"Engineering vehicles",
"Road construction"
] |
42,213,230 | https://en.wikipedia.org/wiki/Adapted%20automobile | An adapted automobile is an automobile adapted for ease of use by people with disabilities. Automobiles, whether cars or vans, can be adapted for a range of physical disabilities.
Hand controls
Foot pedals can be raised, relocated (for instance swapped to be used by the opposite leg) or replaced with hand-controlled devices. The common form of hand controls consists of a push-pull handle mounted below and projecting to the side of the steering wheel housing. The bar connects by levers to the accelerator and brake pedals, and is typically pivoted so that pushing applies the accelerator while pulling applies the brake. As there is no facility to work a clutch pedal, hand controls must generally be used in cars with automatic transmissions. One exception is the GuidoSimplex Semi-Automatic Syncro Drive Clutch System along with an over/under ring accelerator and hand controlled brake, a car with a manual gearbox can be adapted. With one hand continuously engaged working the hand controls, the steering wheel will generally also be fitted with a steering knob to allow one-handed use. More complex fittings may also connect into the electronic circuitry of the vehicle to place indicator and other switches in easy reach of the driver without requiring them to release the hand controls or steering knob. A guard plate may be fitted to prevent inadvertent contact between the driver's feet and the pedals. Extension levers or adapted grips may also be fitted to the parking brake to allow it to be applied by a driver with limited hand or arm strength.
Adaptations may be individually customized and in more extensive adaptations the traditional pedals and steering wheel may be entirely replaced by a joystick control, or by a secondary mini-steering wheel adapted for users with restricted grip and/or arm movement. Steering knobs may also be adapted for users with restricted grip, using a three-pronged tetra-grip, or for users with a prosthetic hook.
Ergonomic adaptations, such as repositioned mirrors and adapted seating may also be needed and some larger vehicles may be fitted to allow them to be driven directly from a wheelchair.
Wheelchair and mobility device access
Standard vehicles are not fitted for wheelchair or mobility device access, leaving users of mobility devices with the choice of either transferring out of their mobility device, or purchasing a vehicle adapted for mobility device access via a lift or ramp, commonly referred to as a Wheelchair Accessible Vehicle (WAV). A range of vehicles can be adapted to fit a lift or a ramp, together with appropriate restraints to secure the mobility device, if necessary. Some users of mobility devices will transfer directly from their mobility device into the vehicle with the use of a lift, or they may be able to do a standing transfer. Some may be able to walk the distance between the boot of the vehicle and the doors of the vehicle. Some may actually drive the vehicle from their wheelchair. In any case, their mobility device may still need to be placed in or on the vehicle. While some users are able to lift their mobility device into the vehicle manually, stowing it either in the boot, on the front passenger seat, or behind the front seats, others may require the assistance of a hoist to lift it into the vehicle, onto the roof, or onto a trailer behind the vehicle.
Financing (United Kingdom)
Generally, the more limiting the disability, the more expensive the adaptation needed for the vehicle. Financial assistance is available through some organizations, such as Motability in the United Kingdom, which requires a contribution by the prospective vehicle lessee; Motability also has a grants team who may be able to help with initial deposits and/or adaptation costs. Motability makes vehicles available for lease to disabled users in receipt of the Higher Rate Mobility Component of Disability Living Allowance (DLA) or its successor, Personal Independence Payment (PIP).
If a UK-based employee with a disability requires an adapted car for work use, this would potentially be considered grounds for a "reasonable adjustment" by the employer in accordance with the Equality Act 2010. In this case the responsibility for funding the adaptation would either lie with the employer or potentially be covered by the government-operated Access to Work scheme.
The Motability scheme is unique to the United Kingdom and is not replicated anywhere else in the world.
Disabled people who cannot access assisted purchase schemes must generally pay for their own vehicles to be adapted. This will add considerably to the cost of a vehicle, doubly so as adaptations generally require the purchase of a more expensive automatic model rather than one with a manual transmission, which may restrict choice to more expensive ranges, as may the need for the vehicle to have sufficient space to accommodate a wheelchair in the boot or at the driver's position. In the case of a second hand vehicle the cost of typical adaptations could well exceed the value of the vehicle. Adapting a vehicle may negatively affect the resale value, as adaptations are considered unattractive to non-disabled users, in the case of a low value vehicle sometimes rendering it worthless.
Rental
A challenge for mobility-impaired drivers is renting a vehicle when they travel. Organizations that specialize in adaptive tourism can assist in finding a vehicle, when possible. In New Zealand, Enable Tourism is an organization that helps drivers with disabilities to locate car rentals offering adapted cars or vans. In France, adapted cars with hand-controls are available from leading car rental businesses, however, it is advisable for drivers with disabilities to reserve a car well in advance of travelling. Several designs of portable push-pull hand-controls are also available which may be quickly connected to a new vehicle by screwing clamps to the pedals, however these may not be suitable for drivers with more extensive requirements.
See also
Automatic transmission
Cars for wheelchair users
Disabled parking permit
GO technologies
ISO/IEC 17025
Modified car
Wheelchair/Platform lift
Wheelchair accessible van
References
External links
Braunability Worldwide Vehicle Accessibility Solutions
Paravan Space Drive system, drive by joystick and Various other controls.
"GUIDOSIMPLEX" hand controls and Semi-Automatic Syncro Drive Clutch System
Handicap Hand Control Kits.
Carospeed hand controls
Veigel hand controls
Kenguru car: instant cruising for the disabled
Safety
Wheelchair stowage
Stowage systems for wheelchairs
Accessibility
Ergonomics
Transportation planning | Adapted automobile | [
"Engineering"
] | 1,263 | [
"Accessibility",
"Design"
] |
53,716,876 | https://en.wikipedia.org/wiki/Zen%20%28recommendation%20system%29 | Zen () is a personal recommender system that uses machine learning technology.
It was created by Yandex and launched in 2015. In September 2022, Yandex sold the service to VK.
Zen creates a feed of content that automatically adjusts to the interests of a user. The selection of content is based on the analysis of browsing history, user-specified preferences, location, time of day and other factors.
In March 2022, the average monthly site traffic was around 59 million people.
Technology
Zen is an example of the implementation of a weak artificial intelligence technology. It uses artificial intelligence to adapt to a user.
To analyze the interests and preferences of users, Yandex used information about sites that have been visited, as well as user-specified interests.
The system analyzes the user's favorite sites and other behaviors with the aim of creating a unique model of the user's preferences. With an increasing amount of data about the user, the system can offer the user more relevant and topical content, including content from sources unfamiliar to the user. Zen adapts to the changing interests of the user. For example, if a user begins to read about architecture, content on this subject will appear in their content feed more often.
The technology that underlies Zen was adapted by Yandex and CERN for use in the Large Hadron Collider. It is used to provide in-depth analysis of the results of physics experiments taking place at the LHC.
Media platform
In 2017, Yandex announced the launch of a platform that allows companies and independent authors to publish media content (articles, photos, videos) directly to Zen. The platform also allows popular authors to earn money by using micropayment channels and ads.
In August 2021, the "Videos" section was launched in Zen. It contains videos up to 1 minute long created by Zen bloggers.
In 2021, the company paid out over 2 billion rubles to authors of publications.
Prior to the launch of the platform, Zen feeds consisted only of publications selected from publicly available sources.
After buying Zen, VK began to implement changes. In January 2023, the limit on uploaded videos was increased from 10 GB to 30 GB. In February 2023, users got an opportunity to withdraw money through the VK Pay service.
Monetization
The opportunity for monetization came when, in 2017, Zen became a platform for creating content, not just distributing it. Zen-registered bloggers could get paid for their posts if they collected a minimum of 7,000 reads in a week.
In 2019, Zen paid more than 1 billion rubles to authors for placing advertisements in articles.
In May 2020, bloggers on the platform had the opportunity to place widgets with goods from Yandex Market: at that time, such social commerce was implemented only in articles. In November 2020, the platform signed an agreement with the marketplace "Joom" with similar conditions for adding widgets, and in April 2021, after successful testing, placement of Auto.ru widgets became available to all authors of channels with connected monetization. The general principle of earning money from such widgets is that bloggers get paid for clicks on the widgets they post.
In March 2021, widgets began appearing not only directly in bloggers' content, but also on article cards in the feed, as well as under the articles themselves.
At the first stage of the launch of the short video feed Yandex allocated 50 million rubles to reward bloggers.
History
In 1997, Yandex began research into natural language processing, machine learning and recommendation systems. In 2009, the proprietary machine learning algorithm MatrixNet was developed by Yandex, becoming one of the key components that Zen functions on.
The first Yandex service to introduce the use of recommendation technology was Yandex.Music, which was launched in September 2014. This technology was then implemented in Yandex.Market and Yandex.Radio.
In June 2015, a beta version of Zen became available. At first, the Zen content feed showed only content from the media, and the service was only available to the 5% of users of Yandex Browser on Android that had registered a Yandex account. Prior to this, Zen was available in an experimental form on the webpage zen.yandex.ru.
In the following months, other types of content were added to Zen, such as image galleries, articles, blogs, forums, videos from YouTube, etc.
In 2017, Zen launched Narratives, a special content format for mobile devices. Narrative is a set of slides with texts, photos, videos and GIFs. In January 2018, the format became available to Zen authors.
In September 2022, Yandex sold the service to VK.
Finances
The platform has an ad-based business model. In September 2017, Zen's revenue amounted to 200 million rubles.
In 2020, the planned revenue amounted to 13.1 billion rubles. According to the results of the fourth quarter of 2021, the planned revenue amounted to 18.9 billion rubles.
References
Recommender systems
Yandex | Zen (recommendation system) | [
"Technology"
] | 1,044 | [
"Information systems",
"Recommender systems"
] |
53,718,600 | https://en.wikipedia.org/wiki/Rayleigh%20problem | In fluid dynamics, Rayleigh problem also known as Stokes first problem is a problem of determining the flow created by a sudden movement of an infinitely long plate from rest, named after Lord Rayleigh and Sir George Stokes. This is considered as one of the simplest unsteady problems that have an exact solution for the Navier-Stokes equations. The impulse movement of semi-infinite plate was studied by Keith Stewartson.
Flow description
Consider an infinitely long plate located at \(y = 0\) in an infinite domain of fluid that is initially at rest everywhere, suddenly made to move with constant velocity \(U\) in the \(x\) direction. The incompressible Navier-Stokes equations reduce to
\[ \frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2}, \]
where \(\nu\) is the kinematic viscosity. The initial and no-slip conditions on the wall are
\[ u(y, 0) = 0, \qquad u(0, t) = U \ \text{for } t > 0, \qquad u(y \to \infty, t) = 0, \]
the last condition being due to the fact that the motion at \(y = 0\) is not felt at infinity. The flow is only due to the motion of the plate; there is no imposed pressure gradient.
Self-similar solution
The problem on the whole is similar to the one-dimensional heat conduction problem. Hence a self-similar variable can be introduced,
\[ \eta = \frac{y}{2\sqrt{\nu t}}, \qquad u = U f(\eta). \]
Substituting this into the partial differential equation reduces it to the ordinary differential equation
\[ f''(\eta) + 2 \eta f'(\eta) = 0 \]
with boundary conditions
\[ f(0) = 1, \qquad f(\eta \to \infty) = 0. \]
The solution to the above problem can be written in terms of the complementary error function:
\[ u(y, t) = U \, \operatorname{erfc}\!\left( \frac{y}{2\sqrt{\nu t}} \right). \]
The force per unit area exerted on the plate is
\[ F = \mu \left. \frac{\partial u}{\partial y} \right|_{y=0} = -\frac{\mu U}{\sqrt{\pi \nu t}}, \]
a drag that decays like \(1/\sqrt{t}\).
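The closed-form solution is straightforward to evaluate numerically. The following sketch, assuming a water-like kinematic viscosity and illustrative parameter values, evaluates the velocity profile and the decaying wall shear stress.

```python
import numpy as np
from scipy.special import erfc

def rayleigh_u(y, t, U=1.0, nu=1e-6):
    """Velocity profile u(y, t) = U erfc(y / (2 sqrt(nu t)))."""
    return U * erfc(y / (2.0 * np.sqrt(nu * t)))

def wall_shear(t, U=1.0, nu=1e-6, rho=1000.0):
    """Magnitude of the wall shear stress, mu U / sqrt(pi nu t)."""
    mu = rho * nu
    return mu * U / np.sqrt(np.pi * nu * t)

t = 10.0                         # seconds after the plate starts moving
y = np.linspace(0.0, 0.02, 5)    # distances from the plate in metres
print(rayleigh_u(y, t))          # decays monotonically away from the wall
print(wall_shear(t))             # wall stress decays like 1/sqrt(t)
```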
Arbitrary wall motion
Instead of using a step boundary condition for the wall movement, the velocity of the wall can be prescribed as an arbitrary function of time, i.e., \(u(0, t) = U(t)\). Then the solution is given by the Duhamel superposition
\[ u(y, t) = \frac{y}{2\sqrt{\pi \nu}} \int_0^t \frac{U(s)}{(t - s)^{3/2}} \exp\!\left( -\frac{y^2}{4 \nu (t - s)} \right) ds, \]
which reduces to the complementary error function solution when \(U\) is constant.
Rayleigh's problem in cylindrical geometry
Rotating cylinder
Consider an infinitely long cylinder of radius \(a\) that starts rotating suddenly at time \(t = 0\) with angular velocity \(\Omega\). Then the velocity in the azimuthal direction is given by an expression involving the modified Bessel function of the second kind. As \(t \to \infty\), the solution approaches that of a rigid vortex. The force per unit area exerted on the cylinder is given by an expression involving the modified Bessel function of the first kind.
Sliding cylinder
An exact solution is also available when the cylinder starts to slide in the axial direction with constant velocity. Taking the cylinder axis along the \(z\) direction, the solution is again given in terms of modified Bessel functions.
See also
Stokes problem
References
Fluid dynamics | Rayleigh problem | [
"Chemistry",
"Engineering"
] | 435 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
40,794,947 | https://en.wikipedia.org/wiki/Growth%20factor%20receptor%20inhibitor | Growth factor receptor inhibitors (growth factor inhibitors, growth factor receptor blockers, growth factor blockers, growth factor receptor antagonists, growth factor antagonists) are drugs that target the growth factor receptors of cells. They interfere with binding of the growth factor to the corresponding growth factor receptors, impeding cell growth and are used medically to treat cancer.
Drugs of this type include those that target the epidermal growth factor receptors of epidermal cells (EGFR inhibitors) and those that target vascular endothelial growth factor receptors (VEGFR inhibitors).
Growth factor receptor inhibitors in cancer treatment and research
In cancer treatment, growth factor receptor inhibitors have been used to target cancer cells.
In cancer research, growth factor receptor inhibitors have been applied to protect normal cells selectively from the toxic side-effects of chemotherapy targeted against cancer cells.
References
Further reading
External links
EGFR inhibitors at the Drugs.com website.
HER2 inhibitors at the Drugs.com website.
VEGF/VEGFR inhibitors at the Drugs.com website.
Cancer growth blockers at the Cancer Research UK website.
Status of Epidermal Growth Factor Receptor Antagonists in the Biology and Treatment of Cancer at the Journal of Clinical Oncology website.
Exploiting Cancer Cell Cycling for Selective Protection of Normal Cells at the Cancer Research (Journal) website.
Tyrosine kinase inhibitors
Receptor antagonists
Growth factors | Growth factor receptor inhibitor | [
"Chemistry"
] | 292 | [
"Neurochemistry",
"Growth factors",
"Receptor antagonists",
"Signal transduction"
] |
40,796,886 | https://en.wikipedia.org/wiki/Decanter%20centrifuge | A centrifuge is a device that employs a high rotational speed to separate components of different densities. This becomes relevant in the majority of industrial jobs where solids, liquids and gases are merged into a single mixture and the separation of these different phases is necessary. A decanter centrifuge (also known as solid bowl centrifuge) separates continuously solid materials from liquids in the slurry, and therefore plays an important role in the wastewater treatment, chemical, oil, and food processing industries. There are several factors that affect the performance of a decanter centrifuge, and some design heuristics are to be followed which are dependent upon given applications.
Operating principle
The operating principle of a decanter centrifuge is based on separation via buoyancy. Naturally, a component with a higher density would fall to the bottom of a mixture, while the less dense component would be suspended above it. A decanter centrifuge increases the rate of settling through the use of continuous rotation, producing a G-force equivalent to between 1000 and 4000 G's. This reduces the settling time of the components by a large magnitude, whereby mixtures previously having to take hours to settle can be settled in a matter of seconds using a decanter centrifuge. This form of separation enables more rapid and controllable results.
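As a rough numeric illustration of the quoted G-force range, the relative centrifugal force of a rotating bowl follows directly from its speed and radius; the bowl size and speed below are illustrative assumptions, not a specific machine's specification.

```python
import math

def g_force(rpm, bowl_diameter_m):
    """Relative centrifugal force G = omega^2 r / g for a rotating bowl."""
    omega = 2.0 * math.pi * rpm / 60.0        # angular speed in rad/s
    r = bowl_diameter_m / 2.0
    return omega ** 2 * r / 9.81

# A 500 mm bowl at 3600 rpm sits near the middle of the 1000-4000 G range
print(round(g_force(3600, 0.50)))   # ~3600 G
```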
How does it work
The feed product is pumped into the decanter centrifuge through the inlet. The feed goes into a horizontal bowl, which rotates. The bowl is composed of a cylindrical part and a conical part. The separation takes place in the cylindrical part of the bowl. The fast rotation generates centrifugal forces of up to 4000 x g. Under these forces, the solid particles with higher density are collected and compacted on the wall of the bowl. A scroll (also screw or screw conveyor) rotates inside the bowl at a slightly different speed. This speed difference is called the differential speed. In this way the scroll transports the settled particles along the cylindrical part of the bowl and up the conical part. At the smallest end of the conical part of the bowl, the dewatered solids leave the bowl via the discharge openings. The clarified liquid leaves through a paring disc (internal centripetal pump).
3-phase separation with a decanter
With a 3 phase decanter centrifuge, it is possible to separate 3 phases from each other in one process step only. For example, two liquids which cannot be mixed because of different densities (e.g. oil and water) are separated from a solids phase. The heavy liquid (water) collects in the middle between the oil and the solids layer. Thus the two liquids separated from each other can be drawn off from the decanter. The solids are transported via the scroll to the discharge openings as it happens also in 2-phase separation.
Typical applications of 3-phase separation are the production of edible oils such as olive oil, oil sludge processing, the production of biodiesel etc.
Parameters and influencing factors of the separation
Feed, throughput and residence time
Through the feed, the separation medium to be processed can be input into the centre of the infeed chamber of the scroll, where it is accelerated. The throughput will have an influence on the residence time.
Acceleration
The separation medium reaches its maximum speed in the decanter bowl, causing the solids to settle on the bowl inner diameter. A characteristic feature of the bowl is its cylindrical/conical shape.
Differential speed
There is a differential speed between the decanter bowl and the scroll, which is created by a gear unit on the industrial decanter centrifuges. The differential speed determines the solid content in the outfeed.
Filling volume / weir discs or overflow weir
Pond depth / weir discs
The clarified liquid flows to the cylindrical end of the bowl in the decanter centrifuge, from where it runs out through openings in the bowl cover. These openings contain precisely adjustable weir discs/weir plates by means of which the pond depth in the bowl can be set. The weir discs determine the filling volume of the bowl.
Range of applications
The main application of decanter centrifuges is to separate large amounts of solids from liquids on a continuous basis. They are also used to wash and dry various solids in industry, such as polystyrene beads, clarify liquids and concentrate solids.
Advantages and limitations over competitive processes
Generally the decanter centrifuge has more advantages than disadvantages; however, there are some limitations when compared to other processes.
Advantages:
Decanter centrifuges have a clean appearance and have little to no odour problems.
Not only is the device easy to install and fast at starting up and shutting down but also only requires a small area for operation compared to other competitive processes.
The decanter centrifuge is versatile as different lengths of the cylindrical bowl section and the cone angle can be selected for different applications. Also, the system can be pre-programmed with various design curves to predict the sludge type, while some competitive processes, such as a belt filter press, cannot change the belt type to operate for different sludge types. Its versatility allows the machine to have various functions such as operating for thickening or dewatering.
The machine can operate with a higher throughput capacity than smaller machines. This also reduces the number of units required.
The device is simple to optimise and operate as it has few major variables and reliable feedback information.
The decanter centrifuge has reduced labour costs compared to other processes, as it requires low continuous maintenance and operator attention.
Compared to some competitive process such as the belt filter process, the decanter centrifuge has more process flexibility and higher levels of performance.
Limitations:
The decanter centrifuge cannot separate biological solids with very small density differences, such as cells and viruses. A competitive process that is capable of separating these difficult-to-separate solids is the tubular-bowl centrifuge.
The machine can be very noisy and can cause vibration.
The device has a high-energy consumption due to high G-forces.
The decanter centrifuge has high equipment capital costs. Hard surfacing and abrasion protection materials are required for the scroll to reduce wear and therefore reduce the maintenance of the scroll wear.
Designs available
The main types of decanter centrifuges are the vertical orientation, horizontal orientation and conveyor/scroll.
In vertical decanter centrifuges, the rotating assembly is mounted vertically with its weight supported by a single bearing at the bottom or suspended from the top. The gearbox and bowl are suspended from the drive head, which is connected to the frame. The vertical decanter allows for high temperature and/or high-pressure operation due to the orientation and the rotational seals provided at one end. However, this makes the device more expensive than the horizontal decanter centrifuge, which is non-pressurised and open. The advantage of the vertical machine over the horizontal machine is that the noise emitted during production is much lower due to less vibration.
In horizontal decanter centrifuges, as shown in figure 1, the rotating assembly is mounted horizontally with bearings on each end to a rigid frame, which provides a good sealing surface for high-pressure applications. The feed enters through one end of the bearings, while the gearbox is attached to the other end and is operated below the critical speed. Capacities range up to of solids per hour with liquid feed rates of up to per minute. The horizontal machine is arranged in a way that slurry can be introduced at the centre of a rotating horizontal cylindrical bowl. The scroll discharge screw forces the solids to one end of the bowl as it is collected on the walls. This orientation is the most common design implemented in the industry.
In conveyor decanter centrifuges the conveyor or scroll fits inside a rotating bowl and carries the solids settled against the wall, pushing them across a beach towards the underflow where the solids discharge. The conveyor allows for an increase in separation efficiency and feed capacity.
Decanter centrifuges process characteristics
The separation process in a decanter centrifuge relies on a few process characteristics such as centrifugal force or G-force, sedimentation rate and separating factor, differential speed between the conveyor and bowl, and clarity of the liquid discharge.
Decanter centrifuges require a centrifugal force for the separation of the solids from the liquid. This characteristic is dependent on the radius of the centrifuge and its angular rotational speed. A decanter centrifuge applies a force equivalent to several thousand G's, which reduces the settling time of the particles. It is also favoured to maintain a large G-force, which will result in an improved separation.
The rate at which sedimentation occurs is an important characteristic of the decanter centrifuge separation process. The sedimentation rate is influenced by the particle size, the shape of the particles, the density differential between solid and liquid, and the viscosity of the liquid. This process characteristic can be improved by utilizing flocculating agents. The sedimentation rate is also dependent on the separating factor of the decanter centrifuge, which is related to the centrifugal force.
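For small particles in the Stokes regime, the dependence on particle size, density differential, and viscosity can be made explicit by replacing gravity with the centripetal acceleration in Stokes' law. The sketch below, with illustrative particle and machine parameters, shows the resulting speed-up over gravity settling.

```python
import math

def settling_velocity(d, rho_s, rho_l, mu, accel):
    """Stokes settling velocity v = d^2 (rho_s - rho_l) a / (18 mu), with the
    acceleration a equal to g under gravity or omega^2 r in the centrifuge."""
    return d ** 2 * (rho_s - rho_l) * accel / (18.0 * mu)

d, rho_s, rho_l, mu = 20e-6, 2500.0, 1000.0, 1e-3   # 20-micron mineral in water
omega = 2.0 * math.pi * 3000 / 60.0                 # 3000 rpm
r = 0.2                                             # bowl radius in metres

v_gravity = settling_velocity(d, rho_s, rho_l, mu, 9.81)
v_bowl = settling_velocity(d, rho_s, rho_l, mu, omega ** 2 * r)
print(v_bowl / v_gravity)   # separating factor ~2000: hours shrink to seconds
```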
The exterior bowl and the scroll conveyor rotate at different high speeds. This differential speed between the two is responsible for the sedimentation throughout the decanter centrifuge cylinder. A high differential speed results in a smaller residence time of the cake settlement, so it is necessary to keep the cake thickness to a minimum to avoid impairing the discharge quality. Keeping the cake thickness to a minimum also aids in the cake dewatering process. For this reason, it is necessary to obtain an optimal differential speed to balance the cake thickness and quality.
The characteristics above all affect the clarity of the liquid output, which is dependent on the volumetric throughput rate: a higher flow rate results in poorer liquid clarity. Another characteristic that influences the clarity of the liquid output is the differential speed. A low differential speed results in better clarity, therefore aiding the separation process. The G-force also plays a role in the clarity of the liquid discharge. A higher G-force results in an increase in the separation of the solid particles from the liquid and yields better clarity.
Design heuristics
Design heuristics are methods based on experience which serve the purpose of reducing the need for calculations with regards to equipment sizing, operating parameters or performance.
One of the important design heuristics to be considered when employing decanter centrifuges is the scale of the process. Decanter centrifuges should ideally be used in large scale processes. This is to optimise economic value since smaller scale processes do not necessarily require such costly equipment to attain the desired product.
Another design heuristic to be considered is the length to diameter ratio of the decanter centrifuge. A length to diameter ratio of 2, 3 and 4 are commonly used. Decanter centrifuges with the same diameter but the longer length would have a higher capacity for conveying solids and attain a larger suspension volume, which would enhance the settling out of fine solids.
The beach angle at the conical section of a decanter centrifuge is a design heuristic, which must also be taken into consideration. The slippage force acting on solids in the direction of the liquid pool increases by a large magnitude when solids exit the pool onto the beach. A decanter centrifuge possessing a small cone angle is able to produce a lower slippage force compared to a large cone angle. A low cone angle is beneficial when solids do not compact properly and possess a soft texture. Additionally, low cone angles result in a lower wear rate on the scroll and are beneficial when being used with very compact solids requiring a large magnitude of torque to move.
The magnitude of centrifugal force being used must also be considered. Centrifugal force aids with dewatering but hinders the transport of cake in the dry beach. Hence, a tradeoff exists between cake conveyance and cake dewatering. A balance between the two is necessary for setting the pool and G-force for a particular application. Additionally, a larger centrifuge will produce better separation than a smaller centrifuge with the same bowl speed as a greater G-force would be produced.
In the cylindrical section of the decanter centrifuge, the pool should ideally be shallow in order to maximise G-force for separation. Alternatively, a deeper pool is advantageous when the cake layer is too thick and the finer particles entrain into the fast liquid stream since a thicker buffer liquid layer is present to help settle suspended solids. The compromise between cake dryness and clarity of centrate is to be considered. The reason behind this trade-off is that in losing fine solids to centrate, the cake with bigger particles is able to dewater more effectively which results in a drier cake. Optimal pool for a particular application should be identified through the conduction of tests.
Another important heuristic is the differential speed, which controls cake transport. A high differential speed would give rise to a high solids throughput. A high differential speed also reduces cake residence time.
Post-treatment systems
The production of a waste stream is small in comparison to the overall process output; however can still pose a number of significant problems. Firstly, the volume of waste in the process reduces the available volume to be used for the process. Direct disposal into the environment of especially oil wastes can be detrimental to the surroundings if a treatment is not applied. The post-treatment system applied to the waste product should depend on the specific treated product required. The objectives of post-treatment can range from achieving a product that can be safely disposed, recycled into the refining process or requires an adequate water phase to be re-used in the process.
The objectives of post-treatment vary between different industries where in order to perform an efficient and economical process; the decanter centrifuge must be tailored to the task at hand. In the food manufacturing industry, decanter centrifuges are utilised in oil extraction machines. An oil extraction machine can process up to fifteen metric tonnes per hour of organic wastes and are found either within the process plant or outdoors if designed for the climate. The waste material enters the inlet chute and is softened into a sludge which is then steam heated. This mixture then enters a three-phase decanter centrifuge, also known as a tricanter centrifuge.
A tricanter centrifuge operates on a similar principle to a decanter centrifuge but separates three phases, consisting of a suspended solids phase and two immiscible liquids. Sedimentation of the suspended solids occurs as normal: they accumulate on the wall of the bowl and are conveyed out of the centrifuge. The two liquid phases are separated using a dual discharge system in which the lighter liquid phase, such as oil, is separated over a ring dam via gravity, and water, commonly the heavier liquid phase, is discharged under pressure using a stationary impeller. Each of the three components (solid, oil and water) is distributed to a different storage tank.
There are numerous manufacturers specialising in mechanical separation technology that have adopted these new designs into industry-standard equipment. This advanced technology has allowed decanter centrifuges to operate at up to 250 cubic metres per hour and has produced numerous designs, such as the Z8E decanter, known as the world's largest decanter centrifuge with an adjustable impeller, which supplies a torque of 24,000 newton metres. Other designs can reduce power consumption by up to thirty percent thanks to a large slurry discharge, and are best utilised in the water treatment industry.
New development
The rapid development of the decanter centrifuge over the 20th century saw it expand into a vast range of over 100 industrial applications. Further development since then has seen the refinement of machine design and control methods, improving its overall performance, which allows the system to respond quickly to varying feed conditions. The newest development in decanter centrifuge technology aims to achieve enhanced control of the separation process occurring inside the decanter. The way in which manufacturers aim to address this is by utilising variable mechanical devices in the rotating part of the decanter centrifuge. To control the separation process, the operational parameters should be transferred from the rotating part to the stationary part of the decanter whilst also constantly controlling and maintaining the mechanical device inside the process region. This can be achieved using hydraulic and electronic transfer systems. A hydraulic drive motor is easily able to access the rotating area of the decanter centrifuge.
Another area of development in recent years is the implementation of functional safety measures, aimed at providing a better and safer working environment. Functional safety measures such as SIL-2 certified vibration monitoring protect both personnel and machinery by facilitating a safety shutdown before, for example, vibrations reach a dangerous level.
References
Centrifuges
Water treatment
Waste treatment technology | Decanter centrifuge | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,563 | [
"Centrifugation",
"Water treatment",
"Chemical equipment",
"Water pollution",
"Environmental engineering",
"Water technology",
"Waste treatment technology",
"Centrifuges"
] |
40,797,018 | https://en.wikipedia.org/wiki/Circulating%20fluidized%20bed | The circulating fluidized bed (CFB) is a type of fluidized bed combustion that utilizes a recirculating loop for even greater efficiency of combustion, while achieving lower emission of pollutants. Reports suggest that up to 95% of pollutants can be absorbed before being emitted into the atmosphere. The technology is limited in scale, however, by its extensive use of limestone and by the fact that it produces waste byproducts.
Introduction
Fluidization is the phenomenon by which solid particles are transported into a fluid-like state through suspension in a gas or liquid. The resultant mixing of gas and solids promotes rapid heat transfer and chemical reactions within the bed. Power plants that use this technology are capable of burning low grade fuels at high efficiency and without the need for expensive fuel preparation. They are also smaller than the equivalent conventional furnace, so may offer significant advantages in terms of cost and flexibility.
The circulating fluidized bed is a relatively new technology with the ability to achieve lower emissions of pollutants. Extensive research has been conducted on this technology over the past 15 years due to increasing concerns over the pollution caused by traditional methods of combusting coal and over their sustainability. The importance of this technology has grown recently because of tightened environmental regulations on pollutant emissions.
The Mercury and Air Toxics Standards (MATS), enacted in December 2011 in the United States by the Environmental Protection Agency, together with comparable regulations in Europe, require that emissions of metals, acid gases, organic compounds and other flue gas pollutants from power plants and industrial facilities meet the limits set by the EPA, and facilities that do not meet the standards must be upgraded. As a result, the demand for circulating fluidized bed technology is predicted to rise.
In 1923, Winkler's coal gasifier represented the first significant large-scale industrial application of the fluidized bed (Kunii and Levenspiel, 1991). CFB combustion technology continues to grow strongly in large utility power plant applications, as CFB boiler technology has grown from small-scale industrial applications to large ultra-supercritical power plants in less than 20 years. Prime examples, both provided by Sumitomo SHI FW, are the 460 MW supercritical CFB power plant operating since 2009 in Lagisza, Poland, and the 2200 MW ultra-supercritical Samcheok (Korea) Green Power Plant, successfully running since 2016.
Fluidization regimes and classification
As noted above, fluidization transports solid particles into a fluid-like state through suspension in a gas or liquid, and there is a simple and precise way to classify the various fluid-particle beds (Winaya et al., 2003; Souza-Santos, 2004; Basu, 2006). Most of the CFB operating and environmental characteristics are direct results of the hydrodynamic behaviour, and numerous researchers have studied the hydrodynamics of CFB (Yang, 1998; Basu, 2006; Rhodes, 2008; Scala, 2013). Fluidization is a function of several parameters, such as the particles' shape, size and density, the velocity of the gas, and the bed geometry. Kunii and Levenspiel (1991), Oka and Dekker (2004), and Souza-Santos (2004) defined the regimes of fluidization as described below:
(a) Fixed Bed: When the fluid is passed through the bottom of the bed at a low flow rate, the fluid merely percolates through the void spaces between stationary particles.
(b) Minimum fluidization: When the gas velocity reaches the minimum fluidization velocity (Umf), all the particles are just suspended by the upward-flowing fluid.
(c) Bubbling Fluid Bed: When the flow rate increases beyond the minimum fluidization velocity, the bed starts bubbling. The gas-solid system shows large instabilities, with bubbling and gas channelling, as the flow rate rises beyond minimum fluidization. Such a bed is called an aggregative, heterogeneous, or bubbling fluidized bed.
(d) Turbulent Fluidized Bed: When the gas flow rate increases sufficiently and the terminal velocity (Utr) of the solids is exceeded, the upper surface of the bed disappears and entrainment becomes appreciable instead of bubbling.
(e) Fast Fluidized Bed: With a further increase in gas velocity, solids are carried out of the bed with the gas, forming a lean-phase fluidized bed; this regime is used for operating a CFB, and the pressure drop decreases dramatically in it.
(f) Pneumatic Transport: Beyond the circulating fluidized bed operating regime lies the pneumatic transport region, in which the pressure drop increases.
Geldart (1973) made an appreciated contribution by classifying particles into four groups, viz. C, A, B and D, based on size and density. Group B (particle size dp between 40–500 μm and density ρs of roughly 1400–4000 kg/m3) is commonly used for CFB. Yang (2007) modified Geldart's classification using the Archimedes number Ar, under elevated pressure and temperature, and a non-dimensional density.
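The boundaries between these regimes can be estimated from standard correlations. The sketch below uses the widely cited Wen and Yu (1966) form of the force balance to estimate the Archimedes number and the minimum fluidization velocity Umf; the particle and gas properties are illustrative values only.

    import math

    def minimum_fluidization(dp, rho_s, rho_g, mu_g):
        # Wen & Yu (1966): Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7
        # dp particle diameter (m), rho_s/rho_g solid/gas density (kg/m3),
        # mu_g gas viscosity (Pa.s)
        g = 9.81
        ar = rho_g * (rho_s - rho_g) * g * dp ** 3 / mu_g ** 2  # Archimedes number
        re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7
        return re_mf * mu_g / (rho_g * dp), ar

    # Geldart group B sand (300 um, 2600 kg/m3) in hot combustion gas:
    umf, ar = minimum_fluidization(300e-6, 2600.0, 0.4, 4.0e-5)
    print(f"Ar = {ar:.0f}, Umf = {umf:.3f} m/s")  # Ar = 172, Umf = 0.035 m/s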
Pressure and Pressure Drop
The flow in a CFB is multiphase. The unrecoverable pressure drop along the riser height is a basic design value; it results from the solid particle distribution, voidage, gas viscosity, gas velocity, gas density and solid density.
Basis of technology
During the combustion phase, upward jets of air suspend the solid fuel, ensuring that the gas and solids mix turbulently for better heat transfer and chemical reaction. The fuel is burnt at 1400 °F (760 °C) to 1700 °F (927 °C) to prevent nitrogen oxides from forming. During burning, flue gases such as sulfur dioxide are released. At the same time, a sulfur-absorbing chemical such as limestone or dolomite is mixed with the fuel particles in the fluidization phase and absorbs almost 95% of the sulfur pollutants.
Alternatively, the sulfur-absorbing chemical and fuel can be recycled, increasing the efficiency of producing higher-quality steam as well as lowering the emission of pollutants. Circulating fluidized bed technology therefore makes it possible to burn fuel in a much more environmentally friendly way than other conventional processes.
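The in-bed sulfur capture described above can be summarised by two standard reactions, calcination of the limestone followed by sulfation of the resulting lime (written in the same style as the gasification equations later in this article):

CaCO3 = CaO + CO2

CaO + SO2 + 1/2 O2 = CaSO4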
Range of applications
Circulating fluidized bed technology can be implemented in many different fields ranging from oil and gas to power stations. This technology is highly sought after due to its numerous benefits. Some of the popular applications of circulating fluidized bed are circulating fluidized bed scrubber and circulating fluidized bed gasification system.
Circulating fluidized bed scrubber
One application of a circulating fluidized bed scrubber is at power stations, where a dry sorbent, usually Ca(OH)2, is used to reduce pollutants such as HF, HCl, SO2 and SO3 in a flue gas stream. Basin Electric Power Cooperative is currently the only company operating the best available circulating fluidized bed scrubbing technology, at a coal-fired boiler plant near Gillette, Wyoming, since 2011.
The three major components of the circulating fluidized bed scrubber in power plants are:
Circulating fluidized bed absorber
Fabric filter
Dry lime hydration system.
In the circulating fluidized bed scrubber process, flue gas enters the reactor from the bottom of the vessel. Simultaneously, hydrated lime is injected into the circulating fluidized bed absorber, where it reacts to convert the SO2 and SO3 in the flue gas to calcium sulfate and calcium sulfite. Water is injected at the same time to control the operating temperature for maximum absorption capacity. The flue gas is then sent to the baghouse for further filtration, where a series of air valves across the filters produces bursts of compressed air to ensure more efficient solids and dust collection. Lastly, the clean flue gas is directed to the stack with minimal pollutants remaining in the stream. A schematic diagram of the process is shown in Figure 1.
Circulating fluidized bed gasification system
Gasification is the process of converting biodegradable waste materials into synthetic gas without combustion. The process was first used at the Güssing power plant in Austria, based on the steam gasification of biomass in an internally circulating fluidized bed.
In the gasification process, fuel will be gasified at 850 °C in the presence of steam to produce a nitrogen-free and clean synthetic gas. Charcoal will be burnt with air in the combustion chamber to provide the heating for the gasification process as it is an endothermic process. Thermal transfer will take place between the gasification and combustion chamber. The illustrated gasification process is presented in Figure 2.
The chemical reactions that take place in gasification are shown in equations [1] and [2], whereas the reaction in the combustion chamber is represented by equation [3].
Gasification;
C + H2O = CO + H2 [1]
C + CO2 = 2CO [2]
Combustion;
C + O2 = CO2 [3]
Dolomite or limestone can also be used to increase the hydrogen concentration, by absorbing carbon dioxide, and thereby enhance the combustion process.
Advantages and limitations
Wet flue gas desulfurization (wet FGD) has typically been used to capture pollutant gases. However, this machinery is expensive and hard to maintain, and it takes up a lot of space in a power plant. Wet FGD also uses a lot of water, yet captures only marginal amounts of metals such as mercury and of acid gases such as HCl, HF, SO2 and SO3.
The use of CFB's and dry scrubbers in the Virginia City Hybrid Energy Center allows it to capture up to 99.6% of the SO2 emitted.
The circulating fluidized bed scrubber (CFBS) is a newer technology, introduced circa 1984. Its turbulator wall design ensures thorough mixing and the ability to capture various pollutants. The use of alloy metals has been replaced with a carbon steel design, reducing the installation cost, and the unit's compact size reduces capital costs. Water usage can be reduced with the design of plug-free water spray nozzles, and the CFBS can undergo a self-cleaning process, reducing the cost of maintenance. The operating temperature is lower, so the production of nitrogen oxides, a contributor to smog, is lower as well.
Despite all these advantages, the CFBS is limited to 400 MW per unit. The limestone used in the CFBS is expensive and must be kept in a concrete or steel silo rather than in a pile. The machinery also produces by-products, for instance calcium chloride, that do not have many uses owing to their properties.
Another type of CFB is circulating fluidized bed gasification (CFBG), which is preferable to other types of gasifiers. CFBG has high mass and heat transfer rates as well as highly efficient gas-solid contacting. At the low operating temperature of CFBG, a longer solid residence time can be achieved, leading to a higher gasification yield. The CFBG process is energy efficient even though gasification is endothermic: only the heat required to maintain the process at the optimum temperature is generated, and practically all the heat produced is utilized throughout the process, as it is adiabatic and isothermal.
Although the CFBG process can manage a huge range of fuels, a high gasification yield cannot be achieved for less reactive fuels such as anthracite and pet coke, because of the low operating temperature. The flow is also a complex multiphase one, and each distinct particle class needs to be scaled up in a different way.
Available design
Several designs have been developed for the CFBS, for example the circulating dry scrubber (CDS) developed by the Clyde Bergemann Power Group. This type of CFBS contains three distinct feedback control loops, for temperature, pressure drop and sulphur dioxide emission. To minimize erosion, its injection was designed to be above the venturi. The CDS also contains fewer moving parts than other types of CFBS, leading to a lower maintenance cost. Major components of the CDS are shown in Figure 3.
Similar to the CFBS, several CFBG designs are available with specifications tailored to various industrial demands. One type, developed by Phoenix BioEnergy, combines several technologies and implements an auger gasifier in one design. A large-diameter auger is placed horizontally on top of the fluidized bed; this configuration increases the gasification efficiency by assisting the heat transfer from the suspended aggregate into the biofuel. The full design of this CFBG is shown in Figure 4.
Main process characteristics
Circulating fluidized bed reactors have been widely used in various industrial processes such as gasification and coal combustion. Although circulating fluidized beds are widely used, the flow in them, which can be described by non-uniform flow patterns and thorough back-mixing, still possesses significant radial gradients in particle density and a lower solids holdup in the riser interior than at the reactor wall. These effects result in low contact efficiency.
In catalytic gas-phase reaction processes, gas back-mixing should be avoided, since the reacted product is in the gas phase. Another characteristic of the circulating fluidized bed is that a significantly high gas velocity in the riser is needed to promote a short contact time between gas and solid catalyst and plug flow; this high gas velocity is also required by the catalytic gas-phase reaction itself.
Design and operation
The circulating fluidized bed involves basically two balancing characteristics of the gas-solid system: its design and its operational characteristics.
Design: A recirculating loop of particles forms when entrained particles, which carry a substantial flux, are efficiently separated from their carrying fluid externally to the large core reactor (riser) and returned to the bottom of the riser. The carrying fluid circulates around this loop only once, whereas a particle passes through several times before finally leaving the system.
Operational: The system is usually operated under a high particle flux and a high superficial gas velocity, typically 10–1000 kg/(m2·s) and 2–12 m/s respectively. This operating condition is chosen to avoid a distinct interface between the dilute region and the dense bed inside the riser; thus gas velocities above the bubbling point are chosen for contacting. Standard operating conditions for the circulating fluidized bed can be seen in Table 1.
Process characteristics assessments
The circulating fluidized bed (CFB) uses a high fluid velocity to provide better gas-solid contact through more intense mixing of the fluid, so that a better quality of product can be obtained. However, the high gas velocities and the recirculation of solids may make the CFB system much more expensive in terms of power requirements and investment compared to conventional fluidized bed reactors.
CFBs have been widely used in the field of solid catalyzed gas phase reactions in two situations below.
Continuous regeneration of a catalyst that deactivates rapidly. The solid is maintained in constant circulation, so the catalyst is continuously regenerated and returned to the reactor.
Heat must be brought in or removed from a reactor. A continuous circulation of solids between vessels can efficiently transport heat from one vessel to another since solids have relatively large heat capacity compared to gases.
One important factor of circulating systems is the ability to control the feed circulation rate, which is set by the gas velocity in the bed; this velocity determines the flow regime and the density of the bed. All circulating systems can be characterized by the solids circulation rate (kg/s) and by the transfer ratio of the suspended materials being exchanged between vessels.
For circulating fluidized bed coal combustion, the bed needs a greater fluidizing velocity, so that the particles remain suspended in the flue gases while moving across the combustion chamber and into the cyclone. During combustion, a dense bed is required to mix the fuel even though the solids are dispersed evenly over the unit. The bigger particles are extracted and returned to the combustion chamber for further processing, which requires a relatively long particle residence time. A total carbon conversion efficiency above 98% indicates a good separation process that leaves only a minor proportion of unburned char in the residues. Throughout the whole process, the operating conditions of the combustor are relatively uniform.
Possible design heuristics
In designing a circulating fluidized bed with a constant temperature distribution, for either endothermic or exothermic reactions, a good approximation of the heat transfer rates is necessary to determine the appropriate cooling or heating arrangement, so that the reactor can be controlled well enough to change its performance for different operating conditions. For a highly exothermic reactor, it is recommended to keep the conversion of material low and to recycle any cooled unreacted material. It is also recommended to separate the components in order of decreasing percentage of material in the feed; this helps reduce the cost of the subsequent separation steps.
In many industrial processes that involve small, porous or light particles which have to be fluidized with a more viscous fluid in the presence of gas, a gas–liquid–solid circulating fluidized bed (GLSCFB) is preferred over a conventional system because it can minimize dead zones and increase the contacting efficiency among the gas, liquid and solid phases by improving the shear stress between them. A GLSCFB can also provide higher gas holdup, more uniform bubble size, better interphase contact, and good heat and mass transfer capabilities. Its flexibility allows the fluidized bed to operate at a liquid velocity much higher than the minimum fluidization velocity, which in turn increases the fractional conversion as well as the production efficiency per unit cross-sectional area of the bed. Moreover, deactivated catalyst in the GLSCFB can be regenerated continuously using the circulating loop, reducing the operating cost of replacing the catalyst frequently.
Circulating fluidized bed scrubbers (CFBS) are preferred in industry for their ability to produce a higher-purity product while avoiding corrosion issues. The CFBS is also preferred because it requires a low installation cost, captures metals well, needs little maintenance, tolerates a wide range of fuel sulphur contents and responds quickly to changes in operating conditions. Some modification is necessary at the inlet to eliminate the loss of solid material at the bottom of the bed during low-load operation. For better product quality, it is advisable to purify the feed stream when an impurity that is difficult to separate from the desired product is present in large amounts.
This enables the fluidized bed to operate over the full capacity range in a stable manner. Every CFBS needs large boilers connected to several cyclones in parallel to remove the solids for recirculation. A CFBS also needs a heat recovery unit, since recovering some of the heat from the bottom ash is economically feasible and lowers the operating cost. Ash coolers are prone to fouling the bed, and the heat transfer tubes in the fluidized bed are prone to erosion; both problems can be mitigated by the use of some fluidizing air.
New development
More new clean technology has to be implemented to maintain the sustainability of the earth, and bigger reactors with lower pollutant emissions have to be developed to meet global demand. One of the best clean technologies available is circulating fluidized bed technology.
In-bed heat exchanger
Another major field currently being investigated is the further development of the in-bed heat exchanger used with circulating fluidized bed technology. In this design, the bed materials fill the in-bed heat exchanger through the open top of the circulating fluidized bed furnace, which enables control of the material flow through the in-bed heat exchanger. By controlling the material throughput rate, better control of heat absorption as well as of the bed temperature in the furnace is achievable. With further development in this field, it will be possible to fully utilize the energy required to drive the furnace with minimal energy wastage.
U-beam separator design
The U-beam separator design has been improved for better efficiency, reliability as well as maintainability and it is now in the 4th generation of its design as shown in Figure 6.
Improved design has brought numerous benefits to the circulating fluidized bed technology. Some of the benefits are as follows:
High solids collection efficiency
Controlled furnace temperature
Low auxiliary power
Smaller footprint
Minimal refractory use
Low maintenance
References
External links
CFB Boiler Process Video
Industrial furnaces
Fluidization | Circulating fluidized bed | [
"Chemistry"
] | 4,342 | [
"Metallurgical processes",
"Fluidization",
"Industrial furnaces"
] |
40,797,269 | https://en.wikipedia.org/wiki/Circulation%20evaporator | Circulation evaporators are a type of evaporating unit designed to separate mixtures that cannot be evaporated by a conventional evaporating unit. Circulation evaporation incorporates the use of both heat exchangers and flash separation units, in conjunction with circulation of the solvent, to remove liquid from mixtures without conventional boiling. There are two types of circulation evaporation: natural circulation evaporators and forced circulation evaporators. Both are still used in industry today, although forced circulation systems, which have a circulation pump as opposed to natural systems with no driving force, have a much wider range of appropriate uses.
Design of natural/forced circulation evaporators
Evaporators are designed with two key questions in mind: is the equipment selected best suited for the duty, and is the arrangement the most efficient and economical? Heat transfer greatly affects evaporator design, as it represents the greatest cost in operation; the most suitable evaporator has the highest heat transfer coefficient per dollar of equipment cost. In optimising the design of an evaporator, another important consideration is the steam economy (kilograms of solvent evaporated per kilogram of steam used). The best way to achieve high economies (which can be well over 100%) is to use a multiple-effect evaporator, whereby the vapour from one evaporator, or effect, is used to heat the feed in the next effect, where boiling occurs at lower pressure and temperature. Thermo-compression of the vapour, whereby the vapour is compressed so that it condenses at a temperature high enough to be reused for the next effect, also increases efficiency. However, increased energy efficiency can only be achieved through higher capital costs, and a general rule is that the larger the system, the more it will pay back to increase the thermal efficiency of the evaporator.
Heat transfer is not the sole design criterion, however, as the most appropriate evaporator also depends on the properties of the feed and products. Crystallisation, salting and scaling, product quality and its heat sensitivity, the foaming potential of the solution, and the viscosity of the feed (which increases with evaporation) and its nature (slurry or concentrate) all need to be considered. For single-effect evaporators, used in small-scale processes with low material throughput, material and energy balances can be used to design and optimise the process. In designing multiple-effect evaporators, trial-and-error methods with many iterations are usually the fastest and most efficient. The general steps in design are as follows and would typically be carried out in Excel for ease of calculation; other design software such as Aspen Plus, with built-in functions for process equipment, could also be used.
1) Estimate temperature distribution in the evaporator, taking into account boiling-point elevations. If all heating surfaces are to be equal, temperature drop across each effect will be approximately inversely proportional to the heat transfer coefficient in that effect.
2) Determine total evaporation required, and estimate steam consumption for the number of effects chosen.
3) Calculate evaporation in the first effect from assumed feed temperature or flowrate. Repeat for following effects, and check that initial and intermediate assumptions are still valid. Also, determine whether product quality has met required specifications at the last effect.
4) Check to see if the heat requirements have been met and product meets desired specifications. If not, repeat previous steps with different assumption of steam flow into the first effect.
5) Now that the concentrations in each effect are known, recalculate the boiling point rises to determine the heat loads. Using this information, revise the assumed temperature differences and heat transfer coefficients, then determine the heating surface requirements.
6) Given enough data, based on the above conditions, heat transfer coefficients can then be calculated more rigorously, and surface heating requirements adjusted accordingly to give a more reliable design representative of the physical system itself.
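A minimal Python sketch of the first pass through steps 2–4 above; the feed rate, concentrations and the first-guess rule that the steam economy roughly equals the number of effects are illustrative assumptions, and a real design would iterate with boiling-point rises, enthalpy data and heat transfer coefficients as in steps 5 and 6.

    def multiple_effect_first_pass(feed_kg_h, x_feed, x_product, n_effects):
        # Total evaporation from an overall solids balance:
        # solids in = solids out, so product = feed * x_feed / x_product
        total_evaporation = feed_kg_h * (1.0 - x_feed / x_product)
        evap_per_effect = total_evaporation / n_effects  # assume equal effects
        steam_estimate = evap_per_effect  # first guess: economy ~ n_effects
        economy = total_evaporation / steam_estimate
        return total_evaporation, evap_per_effect, steam_estimate, economy

    total, per_effect, steam, economy = multiple_effect_first_pass(
        feed_kg_h=10000.0, x_feed=0.10, x_product=0.50, n_effects=3)
    print(f"evaporation {total:.0f} kg/h, steam ~{steam:.0f} kg/h, "
          f"economy ~{economy:.1f}")  # evaporation 8000 kg/h, steam ~2667 kg/h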
Once the evaporator components themselves have been designed, ancillary equipment such as pumps (particularly for forced circulation evaporators) and heaters would need to be designed and/or specified for the system to give a reliable performance and cost estimate of the system as a whole. These would be based on the specifications determined in the calculations above.
Main characteristics
The main process characteristics are those based around evaporation specifically through heat exchange and pressure manipulation. It is a flash separation procedure that includes the heating of a base liquid mixture and forced circulation through the system via pumping.
Physical characteristics
Forced/natural circulation evaporation is used when boiling of the base liquid is undesired. It was developed specifically for processing and separating liquids in which crystallising and scaling occur. The evaporator uses separate parts to create the overall system: a heat exchanger, a separation tank and, for the forced circulation system (as opposed to the natural circulation system), a circulation pump are standard, although the arrangement can change depending on the liquid properties of the mixtures being separated and the specific design. The units in the heat exchanger (where thermal transfer takes place) are called the heating units, or calandria for single-tube heat exchangers. The liquid-vapor separation tank is called a flash separator, flash chamber or flash vessel. The basic module of an evaporator is known as the "body" of the evaporator and refers to the calandria and the flash chamber. The term "effect" is used to describe a body where vapor is extracted from the raw material and which operates at a single boiling point.
System characteristics
Evaporation is the elimination of solvent, in the form of vapor, from a solution. For most evaporation systems the solvent is water and the heat is provided by steam condensation. In forced circulation evaporation, liquid is constantly circulated through the system. The mixture moves through the heat exchanger, where it is superheated under pressure. To avoid fouling, a high circulation rate is used, typically 1.5–4 m/s, although this ultimately depends on the component properties and is easily manipulated by the circulation pump. The liquid is kept pressurized through the heat exchanger either externally, by pressure stabilisers such as valves or orifices, or hydrostatically within the system.
Heating of the liquid across the heat exchanger is kept minimal, with a standard temperature difference of 2–3 K. As the liquid enters the flash vessel, the pressure is reduced to slightly below that of the heat exchanger and flash evaporation occurs. The vapor stream is separated from the liquid stream. This vapor is usually not the desired product of the evaporation unit, so it can be either collected or disposed of, depending on the system. The enriched liquid solution is then either collected in the same way as the vapor or recirculated through the system.
This results in a high recirculation ratio, within the range of 100–150 kg of liquid (solvent) recirculated per kg of vapor removed. These high recirculation rates produce high liquor velocities through the tubes, which minimize the buildup of crystals and other deposits and in turn minimize fouling. It is important to note that in crystallisation applications, crystallisation still occurs in the flash separator, and in some systems a further separation of solid particles from the recirculated slurry is needed.
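The quoted recirculation ratio follows directly from an energy balance on the flash step: each kilogram of circulating liquor can flash off only the vapor corresponding to its few kelvin of superheat. A rough check in Python, assuming water-like properties:

    cp = 4.2          # liquid heat capacity, kJ/(kg*K)
    latent = 2260.0   # latent heat of vaporisation, kJ/kg
    superheat = 2.5   # superheat released in the flash vessel, K

    vapour_per_kg = cp * superheat / latent   # kg vapor per kg circulated
    print(f"~{1.0 / vapour_per_kg:.0f} kg recirculated per kg of vapor")
    # ~215 kg/kg for 2.5 K of superheat, the same order as the quoted 100-150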
Assessment of characteristics
When designing a forced circulation evaporator there are three considerations to address: the heat transferred, the liquid-vapor separation and the energy consumption efficiency. All of these need to be optimised to create an efficient system. As circulation and heating are maintained, liquid temperatures and flow rates can be controlled specifically to suit the product requirements, so optimum tube velocities can be reached, resulting in an efficiently designed system that addresses the design considerations.
Forced circulation evaporators have a high liquid velocity and therefore high turbulence, which equates to high heat transfer coefficients. The system provides positive circulation and freedom from heavy fouling, scaling or salting, and is suitable for corrosive and viscous solutions.
The operating characteristics are manipulated to fit the application criteria. Forced circulation evaporators are nevertheless versatile and can be used in a wide variety of applications (see applications); for instance, they are ideal for crystallising operations. Forced circulation evaporators can handle feeds with dissolved salts at concentrations beyond the limits of conventional tubular evaporators, and they are often used as finishing evaporators to concentrate liquids to a high solids content following low-solids multi-stage, TVR or MVR evaporation.
Multiple heating effects can be used to increase thermal efficiency. In this system design extracted vapor is used as a heating medium for the 2nd heating effect at a lower pressure than the first effect. This can be repeated for multiple effects.
Natural Circulation evaporator characteristics
Natural circulation evaporation is essentially based upon natural convection currents, manipulated through the system piping to create circulation. Circulation through convection is achieved through bubble formation: bubbles are of lower density and rise through the liquid, promoting upward lift into the evaporating vessel.
Physically, natural circulation evaporators use either a short tube bundle within the batch pan or an external shell and tube heat exchanger outside of the main vessel (as shown in the diagram). External heating through heat exchangers is normally used, as it has the advantage of not being dependent on the calandria size or shape, so larger capacities for the flash separation tank can be obtained.
Removing the circulation pump reduces operating costs; however, because of the system characteristics mentioned above, the evaporator has a long residence time and low flow rates, making its uses far more limited than those of a forced circulation evaporator.
The most common application of Natural Circulation evaporation is as a reboiler for distillation columns.
System designs available
Currently, a wide range of forced circulation evaporators are available that are specifically tailored to carry out distinct applications.
Plate forced circulation evaporators utilize a centrifugal pump which forces liquid to circulate through the plate structures and heat exchanger. The flexibility of this design is a major advantage, as the rate of evaporation can be manipulated by adding or removing plates, allowing the unit to perform a wide range of duties that require a greater heat transfer coefficient. More specifically, products with higher viscosity are better suited to this design, with the plate forced circulation evaporator demonstrating higher performance and improved evaporation in comparison with the tubular forced circulation system. The liquid is superheated under pressure, exceeding its original boiling point by a large degree, which forces rapid evaporation on pressure release. In addition to flexibility, this system is compact, needing only a small space, and is easy to clean and maintain, as the plates are readily accessible. With regard to suitability, this design is currently used in processes involving liquids with low to medium evaporation rates that contain minute portions of undissolved solutes and have little capacity to induce fouling.
Tubular forced circulation evaporators employ an axial circulation pump, which drives the liquid in a circular path through the system's heat exchanger, where it is superheated. When the liquid reaches the separator, the pressure decreases dramatically, forcing a portion of the liquid to boil off rapidly. This design is intended for products and/or particulates with a diameter of over 2 mm. As the evaporation occurs only in the separator and not in the heat exchanger, fouling is reduced despite the higher turbulence in the design. Another design parameter is the optimisation of the liquid velocity on the tube side, which is regulated by the circulation pump.
Forced circulation evaporators in the food industry use modified designs that mimic the original system but add secondary steam units to enhance forced circulation flow. While the single-effect design employs a condenser unit to condense the vapour inflow from the heat exchanger, the double-effect design performs a similar duty, with the extra component acting to reduce the overall pressure in the system. The triple-effect system is used when high levels of effective evaporation are needed with minimum labour: the liquid enters the third effect at a low temperature, moves to the second effect, where the concentration increases owing to the previous evaporation effect, and finally reaches the optimum product concentration in the first effect.
With regard to the design components within forced circulation evaporation systems, the heat exchangers can vary. Shell and tube exchangers are the most widely used as a result of a flexible design that can accommodate various pressures and temperatures. Forced circulation systems can employ either horizontal or vertical shell and tube heat exchangers, allowing the exchange of heat between the fluids inside and outside the tubes (within the heat exchanger). Liquids with high solute levels usually require vertical heat exchangers, which are the more commonly used.
Waste
Evaporation generally deals with the removal of water from a mixture or solution containing another liquid or fine solids. The concentrated stream is in most cases the product, and the only waste stream is then pure water, which poses no risk to the environment and may be disposed of into the stormwater/sewage system. Where the concentrate is the waste stream, as in the evaporation of saltwater to produce potable water, the salt concentrate should be dispersed back into the ocean, or further dried and sent off for disposal or use in other operations. In most cases there are no hazardous waste streams associated with natural and forced circulation evaporators.
Advantages and limitations
Advantages
Natural/forced circulation evaporators have many advantages, making them the more popular choice of evaporator in industry.
The liquid entering the circulation evaporator will boil in the separator, not on a heating surface, hence minimising fouling, whereas with plate evaporators, boiling will occur on a heating surface. It is for this reason that circulation evaporators are preferred for liquids with a higher tendency to foul. Minimal fouling also means that the cleaning cycles are not as frequent as with other evaporators such as plate evaporators.
Circulation evaporators are fairly compact and are easy to clean and operate. They can also be easily adapted according to the product that needs to be obtained. They have a high heat transfer coefficient as well as a high circulation flow, which both work to increase the efficiency of the evaporator.
Limitations
One of the main limitations of forced/natural circulation evaporators is cost. Circulation evaporators have particularly high construction costs, whereas falling film evaporators have a low investment cost. Falling film evaporators have no rotating internal parts and hence experience no mechanical deterioration, whilst circulation evaporators have high maintenance costs.
Although previously described as an advantage, there is also a downside to the high circulation flow: the increased velocity can corrode the equipment at a faster rate, which increases the overall cost of running the evaporator, considering how expensive it is to maintain compared to other evaporators.
Applications
Natural/forced circulation evaporators have a major role in the food and beverage industry. Specifically, they can be used for processes that produce tomato juice concentrate, (tropical and berry) fruit concentrate, and when water needs to be removed from certain raw materials in such a way as to maintain the raw material properties.
In general, forced circulation evaporators are required when the fouling characteristics of a liquid will cause problems if the liquid boils on a heating surface. These evaporators are also used for liquids with a high solids content and a high viscosity.
There are several other processes that require the use of forced circulation evaporators, which work particularly well as crystallising evaporators. These include processes that produce salt, corn steep water and calcium carbonate.
Natural circulation evaporators are used in other processes such as those that produce anhydrous sodium hydroxide (caustic), sugar beet, liquors that are particularly foamy, or those that have a low to moderate viscosity, and precipitating liquids.
Natural/forced circulation evaporators are also necessary in effluent treatment plants, and in both the chemical and pharmaceutical industry.
New developments
Improvements in the design of forced/natural circulation evaporators have had significant implications for industrial products and processes. The advent of self-cleaning exchanger installations, containing an external circulating motion for particles, has drastically reduced fouling. Moreover, the use of forced circulation evaporators in multi-effect evaporation plants, as described earlier in the designs available section, has significantly broadened the applications for liquids that have high viscosities, deposit easily or require higher concentrations. Further evidence comes from the case study of a landfill in northern Italy, in which biogas heating in a single-effect evaporator could not completely evaporate the leachate; as a result, a triple-effect forced circulation evaporator was utilized.
References
Evaporators | Circulation evaporator | [
"Chemistry",
"Engineering"
] | 3,502 | [
"Chemical equipment",
"Distillation",
"Evaporators"
] |
40,797,969 | https://en.wikipedia.org/wiki/Circle-throw%20vibrating%20machine | A circle-throw vibrating machine is a screening machine employed in processes involving particle separation. In particle processes, screening refers to the separation of larger from smaller particles in a given feed, using only the materials' physical properties. Circle-throw machines have a simple structure with high screening efficiency and volume; however, they have limitations on the types of feed that can be processed smoothly. Some characteristics of circle-throw machines, such as frequency, vibration amplitude and the angle of the inclined deck, also affect output.
Applications
They are widely used for screening quarry stone and classifying products in the mining, sand, gold, energy and chemical industries. The target substance is predominantly the finer particles, which can be directed into a separation unit such as a hydrocyclone, or materials that can be removed and used directly. Removed materials are often formed intentionally and are classified by their shape, size and physical properties. For example, construction wastes are sorted and sieved by a circular vibrating screen into coarse and fine particles, which are then used to make concrete, architectural bricks and road base materials.
Competitive processes
Circle-throw vibrating screens operate on an inclined surface. A deck moves in a circle. It operates with a continuous feed rather than in batches, leading to much greater output. The incline allows the feed to move through the device.
Circle-throw machines are larger than other screening units and may require greater space. Fine, wet, sticky materials require a water spray under spray bars to wash the fine materials through. Circle-throws have a large stroke, which allows heavy components to circulate and interfere with the screen box. A powerful motor is needed, whereas other separators may not need one.
Circle throw separation does not produce a separate waste stream. The feed is separated into multiple streams, with the number of exit streams matching the number of decks. Circle throw separation usually follows a grinding process. The coarser upper deck can be directly re-fed into the grinding units due to continuous operation, thus reducing transport time, costs and storage.
Design
The standard unit is a single-shaft, double-bearing unit constructed with a sieving box, mesh, vibration exciter and damper spring. The screen framing is steel side plates and cross-members that brace static and dynamic forces. At the center of the side plates, two roller bearings with counterweights are connected to run the drive. Four sets of springs are fixed on the base of the unit to overcome the lengthwise or crosswise tension from sieves and panels and to dampen movement. An external vibration exciter (motor) is mounted on the lateral (side) plate of the screen box with a cylindrical eccentric shaft and stroke adjustment unit. At the screen outlet, the flows are changed in direction, usually to 90 degrees or alternate directions, which reduces the exiting stream speed. Strong, ring-grooved lock bolts connect components.
Variations in this design regard the positioning of the vibration components. One alternative is top mounted vibration, in which the vibrators are attached to the top of the unit frame and produce an elliptical stroke. This decreases efficiency in favor of increased capacity by increasing the rotational speed, which is required for rough screening procedures where a high flow rate must be maintained.
A refinement adds a counter-flow top mounting vibration, in which the sieving is more efficient because the material bed is deeper and the material stays on the screen for a longer time. It is employed in processes where higher separation efficiency per pass is required.
A dust hood or enclosure can be added to handle particularly loose particles. Water sprays may be attached above the top deck and the separation can be converted into a wet screening process.
Characteristics
Screen deck inclination angle
The circle-throw vibrating screen generates a rotating acceleration vector, and the screen must maintain a steep throwing angle to prevent transportation along the screen deck. The deck is commonly constructed with an angle in the range of 10° to 18°, in order to develop adequate particle movement. An increase in deck angle speeds particle motion, in proportion to particle size; this decreases residence time and size stratification along the mesh screen. However, if the angle is greater than 20°, efficiency decreases owing to the reduction of effective mesh area. The effect of deck angle on efficiency is also influenced by particle density. In mining, the optimal inclination angle is about 15°; exceptions are dewatering screens at 3° to 5° and steep screens at 20° to 40°.
Short distribution time
On average, 1.5 seconds is required for the screen process to reach a steady state and for particles to cover the screen. This is induced by the circular motion. The rotary acceleration has a loosening effect on the particles on the deck. Centrifugal forces spread particles across the screen. With the combination of the gravitational component, the efficiency of small particle passing through aperture is improved, and large size particles are carried forward towards the discharge end.
Vibration separation
Under vibration, particles of different sizes segregate (the Brazil nut effect). Vibration lifts and segregates particles on the inclined screen. When the vibration amplitude is within the range of 3 to 3.5 mm, the equipment segregates the large and small particles with the best efficiency. If the amplitude is too high, the contact area between particles and screen surface is reduced and energy is wasted; if too low, particles block the apertures, causing poor separation.
A higher frequency of vibration improves component stratification along the screen and leads to better separation efficiency. Circle-throw gear is designed for 750 to 1050 rpm, which screens large materials. However, frequencies that are too high vibrate particles excessively, so the effective contact area between the mesh surface and the particles decreases.
Characteristics of feed
Moisture in the feed coagulates small particles into larger ones, which reduces sieve efficiency; however, the centrifugal force and vibration act to prevent aperture blockage and agglomerated particle formation. Feed particles are classified as fine, near-size and oversize particles; most near-size and fine particles pass through the apertures rapidly. The ratio of fine and near-size particles to oversize should be maximized to obtain high screening rates.
The rate of feed is proportional to the efficiency and capacity of the screen: a high feed rate reaches steady state and results in better screening rates. However, an optimum bed thickness should be maintained for consistently high efficiency.
Stable efficiency
Steady state screening efficiency is sensitive to the vibration amplitude. Good screening performance usually occurs when the amplitude is 3–3.5 mm. Particle velocity should be no more than 0.389 m/s; if the speed is too great, poor segregation and low efficiency follow. Eo, the efficiency of undersize removal from the oversize at steady state, is

Eo = (F·fx − O·ox) / (F·fx)

where F is the mass flowrate (short tons per hour) of feed ore, O is that of oversize solids discharging as screen oversize, fx is the cumulative weight fraction of feed finer than size x, and ox is the cumulative weight fraction of oversize finer than x. Eu, the efficiency of undersize recovery, is based on U, the mass rate of solids in the undersize stream.

Thus, with U = F − O by overall mass balance and the undersize stream taken as essentially all finer than x,

Eu = U / (F·fx)
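A small worked example of these expressions in Python; the flowrates and size fractions are illustrative values chosen to satisfy the mass balance.

    def screen_efficiencies(F, O, fx, ox):
        # F, O: feed and oversize mass flowrates (short ton/h)
        # fx, ox: cumulative weight fractions finer than the cut size x
        U = F - O                            # undersize rate by mass balance
        Eo = (F * fx - O * ox) / (F * fx)    # undersize removed from oversize
        Eu = U / (F * fx)                    # undersize recovery (ux ~ 1)
        return Eo, Eu

    Eo, Eu = screen_efficiencies(F=100.0, O=60.0, fx=0.45, ox=0.10)
    print(f"Eo = {Eo:.2f}, Eu = {Eu:.2f}")   # Eo = 0.87, Eu = 0.89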
Design heuristics
Vibration design
Circle-throw vibrating units rely on operating the screen component at a resonant frequency to sieve efficiently. While properly selected vibration frequencies drastically improve filtration, a deflection factor occurs as the vibration displaces smaller particles, which then fail to pass properly through the screen because of the excess movement. This is a property of the system's natural frequency, which is preferably Fn = 188·√(1/d) cycles per minute, so that the corresponding static deflection is d = (188/Fn)² inches. Vibration isolation is a control principle employed to mitigate transmission. On circle-throw vibrating screens, passive vibration isolation, in the form of mechanical springs and suspension, is employed at the base of the unit, providing stability and control of motor vibration. A rule of thumb relating the targeted static deflection to operating RPM is provided in the table below.
Critical installations refer to roof-mounted units. Weight, loading and weight distribution are all elements which must be considered.
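A short Python sketch of the deflection sizing above; the target ratio of operating speed to natural frequency used here is an assumed illustrative value, not a figure taken from this article.

    import math

    def natural_frequency_cpm(static_deflection_in):
        # Fn = 188*sqrt(1/d), d in inches, Fn in cycles per minute
        return 188.0 * math.sqrt(1.0 / static_deflection_in)

    def deflection_for_isolation(operating_rpm, speed_to_fn_ratio=3.0):
        # Pick d so the natural frequency sits well below the operating speed
        fn_target = operating_rpm / speed_to_fn_ratio
        return (188.0 / fn_target) ** 2

    d = deflection_for_isolation(900.0)
    print(f"d = {d:.2f} in, Fn = {natural_frequency_cpm(d):.0f} cpm")
    # d = 0.39 in, Fn = 300 cpm for 900 rpm operation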
Roller bearing design
A circle-throw vibrating screen with a shaft and bearing system requires consideration of the loading the unit will undergo. The extra loading on the screen box created by the centrifugal force, due to the circular motion of the load as it passes through the unit, is also a factor, and the bearings must be designed to accommodate the extra stress. The bearing load due to the screen box centrifugal force (Fr) takes the standard form

Fr = m·r·ω²

where m is the vibrating mass of the screen box and its load, r is the throw (stroke) radius and ω is the angular velocity.

A supplementary factor of Fz ≈ 1.2 is used to account for unfavorable dynamic stressing, giving the equivalent bearing load:

P = Fz·Fr

The index of dynamic stressing FL and the speed factor Fn are used to calculate the minimum required dynamic load rating (kN):

C = (FL / Fn)·P

FL is generally taken between 2.5 and 3, corresponding to a nominal fatigue life of 11,000–20,000 hours as part of a usual design.
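Putting the bearing expressions together in Python; every numerical input here, including the speed factor, is an illustrative assumption rather than manufacturer data.

    import math

    def min_dynamic_load_rating(m_box_kg, stroke_radius_m, rpm,
                                Fz=1.2, FL=2.75, Fn=0.3):
        # Fr = m*r*omega^2, shared by two bearings; C = (FL/Fn) * P
        omega = 2.0 * math.pi * rpm / 60.0
        Fr = m_box_kg * stroke_radius_m * omega ** 2 / 1000.0  # kN
        P = Fz * Fr / 2.0        # equivalent load per bearing
        return (FL / Fn) * P     # minimum required rating, kN

    print(f"C >= {min_dynamic_load_rating(3500.0, 0.004, 1200.0):.0f} kN")
    # ~1216 kN with these assumed values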
Structural support of vibrating equipment
The unit's processing ability is related to the vibration, requiring care in the design of the structural and support elements. An inadequate structural design cannot stabilize the unit and produces excess vibrations, leading to higher deflection and reduced effectiveness.
The total static force applied to the supports is the weight of the screen box plus its load, and the spring stiffness follows from the static deflection:

k = Fstatic / δ

where δ is the static deflection of the springs.

When the dynamic forces of the loading are considered, an amplitude magnification factor (MF) must be included. An estimation of the magnification factor for a system with one degree of freedom may be gained using

MF = 1 / √[(1 − (fd/fn)²)² + (2ζ·fd/fn)²]

where ζ is the damping ratio. Most structural mechanical systems are lightly damped; if the damping term is neglected,

MF = 1 / |1 − (fd/fn)²|

where fd/fn represents the frequency ratio (frequency due to the dynamic force, fd, over the natural frequency of the unit, fn).
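The magnification factor is easily evaluated; designing the support so that the forcing frequency lies well above the natural frequency keeps MF small. A brief sketch with illustrative frequencies:

    import math

    def magnification_factor(fd, fn, zeta=0.0):
        # One-degree-of-freedom amplitude magnification factor
        r = fd / fn
        return 1.0 / math.sqrt((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2)

    print(f"{magnification_factor(1000.0, 300.0):.3f}")             # 0.099: well above resonance
    print(f"{magnification_factor(330.0, 300.0, zeta=0.05):.2f}")   # 4.22: near resonance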
Screen length/width
Once the area of the unit is known, the screen's length and width must be calculated so that a ratio of 2–3 length (L) to 1 width (W) is maintained. Capacity is controlled by adjusting the width, and efficiency by adjusting the length.
The bed depth D (ft) at the discharge end should, as a rule of thumb, be no more than about four times the desired cut size Xs.
Starting deck angles can be estimated from the ideal oversize flowrate F. Standard widths for circle-throw machines are 24, 36, 48, 60, 72, 84 and 96 inches; calculated dimensions should be matched to available off-the-shelf units to reduce capital cost.
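A minimal sizing sketch in Python based on the ratio rule above; the required screening area is an assumed input, as in practice it would come from a separate capacity calculation.

    def screen_dimensions(area_ft2, length_to_width=2.5):
        # Split the required area into W and L keeping L:W in the 2-3 range
        width = (area_ft2 / length_to_width) ** 0.5
        return width, length_to_width * width

    w, l = screen_dimensions(60.0)
    print(f"width {w:.1f} ft ({w * 12:.0f} in) x length {l:.1f} ft")
    # width 4.9 ft (59 in) -> round up to the 60 in standard width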
Aperture size and shape
At a fixed screen capacity, efficiency is likely to decrease as aperture size decreases. In general, particles need not be separated precisely at their aperture size; however, efficiency is improved if the screen is designed to filter as close to the intended cut size as possible, and the aperture type is selected accordingly.
Bearings
Most processes have employed two-bearing screens. Two-bearing circular vibrating screens with a screen box weight of 35 kN and a speed of 1200 RPM were common. The centroid axis of the screen box and unbalanced load does not change during rotation.
A four-bearing vibrating screen (F-Class) was developed to meet demands, especially in the iron ore, phosphate and limestone production industries. The F-Class features a HUCK-bolted screen body, connected for extra strength and rigidity, with carbon steel side plates for high strength. The shaft is strengthened with a reinforcing plate, which attaches to the side plate and screen panels.
Four-bearing screens provide much greater unit stability, so higher vibration amplitudes and/or frequencies may be used without excess isolation or dampening, lowering overall plant noise emission. The new design gives accurate, fast sizing classification for materials ranging in cut size from 0.15 to 9.76 inches, with high tonnage output that can reach 5000 tons per hour.
References
Mechanics
Mineral processing
Mining equipment | Circle-throw vibrating machine | [
"Physics",
"Engineering"
] | 2,395 | [
"Mechanics",
"Mining equipment",
"Mechanical engineering"
] |
40,798,385 | https://en.wikipedia.org/wiki/Flash%20reactor | As an extension of the fluidized bed family of separation processes, the flash reactor (FR) (or transport reactor) employs turbulent fluid introduced at high velocities to encourage chemical reactions with feeds and subsequently achieve separation through the chemical conversion of desired substances to different phases and streams. A flash reactor consists of a main reaction chamber and an outlet for separated products to enter downstream processes.
FR vessels facilitate a low gas and solid retention (and hence reactant contact time) in industrial applications, which gives rise to high throughput and a pure product, but a less than ideal thermal distribution compared with other fluidized bed reactors. Owing to these properties, as well as its relative simplicity, the FR has potential for use in pre-treatment and post-treatment processes where these strengths are prioritized the most.
Various designs of a FR (e.g. pipeline FR, centrifugal FR, vessel FR) exist and are currently used in pilot industrial plants for further development. These designs allow for a wide range of current and future applications, including water treatment sterilization, recovery and recycling of steel mill dust, pre-treatment and roasting of metals, chemical looping combustion as well as hydrogen production from biomass.
Properties
The vessel flash reactor is a design commonly used and is shown in the figure to the right. Gas is introduced from the bottom at an elevated temperature and high velocity, with a slight drop in velocity experienced at the central part of the vessel. Chamber A is designed to be "egg shaped", with a relatively narrow bottom cross sectional area and a wide upper cross sectional area. This configuration is designed to increase the fluid's velocity at the chamber's bottom, allowing for heavy feed particles to be in a continuous circulation that promotes a reaction site for separation processes.
The method of feed delivery varies depending on its phase. Solids may be delivered using a conveyor B, whilst fluids are vaporized and sprayed directly into the FR. The feed is then contacted with the continuously circulating hot gas that was introduced in section C. This gas interacts throughout the chamber with the incoming feed, with reactions at the particle surfaces generating insoluble salts. The product mixture is then separated through E, where an exhaust vent emits gaseous products. The temperature of this stream is controlled by a coolant emitted from the vessel's spray nozzles D.
Design characteristics and heuristics
Whilst a variety of applications are available for a flash reactor, they follow a broadly similar set of operating parameters and heuristics. The following lists the important parameters to consider when designing a FR:
Fluid velocity and flow configuration
A relatively fast fluid velocity (10–30 m/s) is usually required in FR operations to encourage a continuous particle distribution throughout the reactor's vessel. This minimizes the column's slip velocity (the average velocity difference between the phases in the pipe), providing a positive impact on heat and mass transfer rates and allowing the use of smaller diameter vessels, which can lower operating costs. Also, a vertical fluid flow configuration limits feed particle mixing in the horizontal and vertical directions, discouraging particle interactions that would reduce product purity. A rough check of the transport condition is sketched below.
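This transport check can be sketched numerically. The following is a hedged estimate, not a design calculation: it assumes Newton's-law drag (Cd ≈ 0.44, strictly valid only at high particle Reynolds numbers), and the particle size, densities and gas velocity are illustrative assumptions.

```python
import math

def terminal_velocity(d_p_m: float, rho_p: float, rho_g: float,
                      cd: float = 0.44, g: float = 9.81) -> float:
    """Terminal settling velocity of a sphere under Newton's-law drag."""
    return math.sqrt(4.0 * g * d_p_m * (rho_p - rho_g) / (3.0 * cd * rho_g))

u_gas = 15.0                                # operating gas velocity, m/s
u_t = terminal_velocity(d_p_m=500e-6,       # 500 um particle
                        rho_p=2600.0,       # particle density, kg/m^3
                        rho_g=0.45)         # hot gas density, kg/m^3
print(f"u_t = {u_t:.2f} m/s, slip = {u_gas - u_t:.2f} m/s, "
      f"transported upward: {u_gas > u_t}")
```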
Solid retention time
The use of a fast fluid velocity, as described above, also ensures a short solid feed retention time. This caters for reactions that require a purer product and higher throughput. However, if the operating conditions for a certain application require an extended reaction time, this can be implemented by introducing a cyclical operation: by employing a backflow line, the fluid in the FR can be recirculated with the feed to allow additional contact time, as sketched below.
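The effect of recirculation on contact time can be sketched with a simple geometric argument: if a fraction R of the stream is returned through the backflow line on each pass, the mean number of passes is 1/(1 − R). The single-pass time and recycle fractions below are illustrative assumptions.

```python
def mean_contact_time(t_single_pass_s: float, recycle_fraction: float) -> float:
    """Mean total contact time when a fraction of flow is recirculated per pass."""
    if not 0.0 <= recycle_fraction < 1.0:
        raise ValueError("recycle fraction must lie in [0, 1)")
    # Geometric number of passes: mean passes = 1 / (1 - R)
    return t_single_pass_s / (1.0 - recycle_fraction)

for r in (0.0, 0.5, 0.8):
    print(f"R = {r:.1f}: mean contact time = {mean_contact_time(2.0, r):.1f} s")
```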
Refractory lining material
Due to the high temperature requirements for FR operations, a refractory lining is required to reinforce and maintain vessel integrity over time. Also, a refractory lining serves to isolate the chamber's high temperature from ambient temperature. For example, in the Reco-Dust process, the FR is lined with two separate refractory materials: aluminum oxide bricks for the combustion chamber, and silicon carbide bricks for the conical outlet part. In addition, design of the vessel can vary in shapes and sizes (i.e. from pipeline to an egg-like shape) that aims to promote the vertical circulation of the gases and particulate matter.
Feed and fluid type
To minimize hold-up of material in the reactor, a dense gas with light solids is recommended for the operation of the FR. The solid feed fed into the reactor can consist only of heat-resistant materials and performs best when only a short retention time is required. It is also desirable for the solid feed to be dry, pourable and of a well-defined grain size.
Flash reactor types
Centrifugal flash reactor
Unlike other FR designs, the powdered feed is contacted with a solid heat carrier rather than a gaseous carrier. It involves the use of a heated rotating plate that disperses the feed powder particles for a short duration. This is achieved by the use of centrifugal forces, which compress the powder onto the plate's surface, allowing direct contact between the particles and the hot metal, which enables a higher heat transfer rate. The figure on the right illustrates the TSE-FLAR set-up, with the arrows indicating the direction of the feed travelling from the feed tank, to the metering unit, to the rotating plate, and finally to the cooling water unit.
Pipeline flash reactor
A pipeline flash reactor (PFR) is a relatively new device developed from the principles of a FR, and thus possesses most of its characteristics, functions and properties. As inferred from its name, the reactor takes the form of a pipe. Even though it is a new derivative of an older technology, it is being trialled in industrial-scale operations. Pipeline flash reactors are used as a tertiary or post-treatment step in wastewater treatment, either integrated in new plants or retrofitted into existing developments; the PFR's shape allows it to be easily incorporated in both cases to improve the overall system's efficiency. Due to its shape, modifications and extensions can be easily added to the PFR to accommodate the requirements of certain processes.
In the PFR, the reactants come into contact with each other in the pipe rather than a mixing vessel in conventional mixing systems, such as a continuously stirred tank reactor. This eliminates the need for extra mixing tanks which saves space but as a trade-off, the actual reaction site will be dependent on the pipe specifications and velocity of the fluid. The PFR also eliminates the need of bulky cascade systems or tanks used by other technologies in existing developments which can reduce maintenance costs. Due to the nature of the device, the reactants processed in PFRs will have short retention times, however, adding backflows into the system is a technique which can increase retention time if required. Unlike conventional mixing systems, a turbulent mixing chamber can be realized without producing pressure drops. Also, PFRs, like most flash reactors, are highly efficient with a small footprint.
Applications
The versatility of flash/transport reactors makes them suitable for a wide range of quality-sensitive separation processes. The following describes the main applications for the flash reactor; note that most flash reactor applications do not require post-treatment or pre-treatment systems, due to the small amount of waste generated.
Ozone injection for water treatment sterilization
The pipeline flash reactor (PFR) is a growing technology with applications in improving the efficiency of certain processes such as wastewater treatment. A pilot reactor was installed in California as part of the Castaic Lake Water Agency (CLWA) expansion plan. The PFR serves as an auxiliary mixing and contact device to promote ozone absorption in treated water, using customized nozzles to inject the ozone/water mixture at high velocity back into the bulk of the treated fluid. The use of PFRs, such as the reactor in the CLWA expansion, in water treatment is becoming more popular since PFRs eliminate the need for additional tanks that would have been required for processes such as chlorination. Smaller basins are sufficient to provide the contact time between reactants for microbial inactivation, thus reducing installation footprints in new developments. Also, the reactants leave the PFR quickly due to the short retention time; it was found that effective dispersion of the side stream into the bulk fluid was accomplished in as little as 1 second.
Treatment of steel mill dust to recover zinc
Since 2010, a flash-reactor pilot plant has been operating successfully at the Montanuniversität in Leoben, Austria. Known as the RecoDust process, the setup was designed to recover zinc from the dust collected in steel operations. Whilst tests have proven the functionality of this process, further research and implementation in industry were halted due to the steel industry's uncertain economic outlook.
Nonetheless, research has shown a great potential for the use of the FR in recovering zinc from steel mill dust as it provides a strong oxidizing and reducing condition in the reaction vessel, with no waste materials produced. The large reaction surface area of the dust material input as well as not having an inner Zn-cycle and not requiring pretreatment processes has proven the effectiveness and efficiency of the RecoDust process.
A typical RecoDust process requires temperatures of 1600-1650 °C with a dry, pourable, well-defined grain sized raw material input of approximately 300 kg/h. In one experiment, 94% of the chlorine, 93% of the fluorine and 92% of the lead were eliminated from the steel mill dust, with a 97% recovery of zinc.
Rapid thermal treatment of powdered materials
The use of a rapid thermal heating process followed by their quenching/cooling is essential in many chemical engineering fields. For example, the aluminum hydroxide powder (i.e. gibbsite) used for the preparation of an alumina-based catalyst goes through the process of thermochemical activation (TCA) to form a thermally activated product, Al2O3∙nH2O. A centrifugal FR, TSEFLAR can be employed to heat the powder up to 400-900 K with a plate temperature of 1000 K and a speed of 90-250 turns per minute. Such settings have shown to produce a product output of 40 dm3/hr with a thermal treatment of less than 1.5 s.
Metallurgy
Flash reactors have enormous potential for replacing or assisting existing primary ore oxidation, reduction or other pre-treatment conditioning processes (e.g. calcining) in metal refining. The simplicity and throughput of a flash reactor can provide a cost-effective way to ease the load on existing, expensive and rigorous processes.
Preheating
Preheating of crushed or fine ores can be carried out within a FR, utilising the short retention times to rapidly raise temperatures to the conditions required in later processes. For iron and ilmenite ores, high FR throughputs allow a substantial overall reduction in operating energy consumption, as well as providing a mixing site with other reactants such as hydrogen for briquetting in the main refining process.
Roasting
The oxidation of crushed particulate ores and the removal of sulfide, arsenic or other contaminants is a crucial separation process in the purification of metals, and one which can be carried out within a FR. The oxidation of sulfide ores results in the conversion of small solid sulfide ore particles to oxides and residual sulfur dioxide gas, culminating in a separation by converting unwanted sulfides into a gaseous phase. These contaminants can then undergo post-treatment to create useful products from the waste stream, such as sulfuric acid via the contact process.
The equation below displays some examples of roasting oxidation reactions used in refining zinc from sphalerite and other ores.
2AS (s) + 3O2 (g) → 2AO (s) + 2SO2 (g)

where A = Cu, Zn, Pb
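A worked stoichiometric sketch for the zinc case (A = Zn) is shown below. The 1000 kg/h feed rate is an assumed figure for illustration, and the molar masses are standard values.

```python
# 2 ZnS(s) + 3 O2(g) -> 2 ZnO(s) + 2 SO2(g)
M_ZNS, M_O2, M_SO2, M_ZNO = 97.45, 32.00, 64.07, 81.38   # g/mol

def roast_balance(zns_kg_per_h: float):
    """O2 demand, SO2 off-gas and ZnO product for a given ZnS feed rate."""
    n_zns = zns_kg_per_h / M_ZNS        # kmol/h
    return (1.5 * n_zns * M_O2,         # 3 mol O2 per 2 mol ZnS
            n_zns * M_SO2,              # 1 mol SO2 per mol ZnS
            n_zns * M_ZNO)              # 1 mol ZnO per mol ZnS

o2, so2, zno = roast_balance(1000.0)
print(f"O2 demand: {o2:.0f} kg/h, SO2 off-gas: {so2:.0f} kg/h, "
      f"ZnO product: {zno:.0f} kg/h")
```

Mass closes to within rounding: 1000 kg/h ZnS plus about 493 kg/h O2 yields roughly 835 kg/h ZnO and 657 kg/h SO2.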
In ilmenite roasting to produce synthetic rutile, the magnetic properties of the ore are changed at high temperatures as ferrite compounds within the ore are oxidized. This results in the separation of oxidized ferric compounds from paramagnetic chromite components within the ore at the reactor outlet, where the product may be further refined to synthesize iron or rutile downstream. In roasting gold-bearing sulfide ores, sulfur or arsenic diffusion gradients encourage the migration of gold towards mineral pores. Hence, continual roasting and volatilisation of sulfur and arsenic allows the coalescence of gold at the surface of mineral particles, which can then be separated efficiently by downstream processes such as leaching.
In a FR, the high throughput implies a high particle concentration per unit volume of gas and hence a large contact area for mass transfer. Further, the tolerance of this reaction to short retention times makes the process ideal for carrying out industrial roasting. This allows lower-grade feed materials to be utilised, improving both product capacity and quality compared to conventional treatment. Hence, the simplicity of FR implementation and its high product output optimize the costs of the roasting pre-treatment.
Advantages and limitations over competitive processes
Future developments
Chemical looping combustion
Chemical looping combustion (CLC) is a method that uses a combination of CFB and flash reactors to remove nitrogen and impurities from the air before the oxidation of the fuel, using an oxidation and reduction cycle of a metal such as nickel. In CLC, hot air is injected onto a metal which acts as a catalyst and an oxygen carrier, such as Fe2O3 or metallic nickel or copper. A flash reactor is used in the air injection process at the beginning of the loop. The use of flash reactors in this scenario allows the use of lower-grade feed materials and a substantial increase in capacity as well as product purity compared to conventional processing.
CLC can theoretically also be used to recover hydrogen from biomass during syngas synthesis, as explained under hydrogen production below.
Hydrogen production from biomass
Hydrogen production is an emerging technology in the field of renewable energy. As hydrogen demand is expected to grow rapidly in the chemical, hydrocarbon and semiconductor industries, new sources of hydrogen must be found. Flash reactors, in tandem with steam methane reforming and gasification, use waste biomass such as a mixture of cellulose, lignin and other plant-derived organics to produce hydrogen gas. The most commonly used biomass waste is oil palm waste, a by-product of the palm oil industry.
Flash reactors can also be used in the drying section to quickly remove water content from the biomass by injecting high-velocity heated air, acting as a pretreatment to the actual pyrolysis reaction, which also occurs in a flash reactor. After the grinding of the biomass, a flash reactor converts it, with the addition of extreme heat, into a mixture of bio-oil, char and ash. The ash and char produced from this reaction are later removed, because their catalytic properties would interfere with the steam reforming.
References
Chemical reactors | Flash reactor | [
"Chemistry",
"Engineering"
] | 3,106 | [
"Chemical reactors",
"Chemical reaction engineering",
"Chemical equipment"
] |
40,798,692 | https://en.wikipedia.org/wiki/Trommel%20screen | A trommel screen, also known as a rotary screen, is a mechanical screening machine used to separate materials, mainly in the mineral and solid-waste processing industries. It consists of a perforated cylindrical drum that is normally elevated at an angle at the feed end. Physical size separation is achieved as the feed material spirals down the rotating drum, where the undersized material smaller than the screen apertures passes through the screen, while the oversized material exits at the other end of the drum.
Summary
Trommel screens can be used in a variety of applications such as classification of solid waste and recovery of valuable minerals from raw materials. Trommels come in many designs such as concentric screens and series or parallel arrangements, and each component has a few configurations. However, depending on the application, trommels have several advantages and limitations compared with other screening processes such as vibrating screens, grizzly screens, roller screens, curved screens and gyratory screen separators.
Some of the main governing equations for a trommel screen include the screening rate, screening efficiency and residence time of particles in the screen. These equations can be applied in the rough calculations done in the initial phases of a design process. However, design is largely based on heuristics; therefore, design rules are often used in place of the governing equations in the design of a trommel screen. When designing a trommel screen, the main factors affecting the screening efficiency and production rate are the rotational velocity of the drum, the mass flow rate of feed particles, the size of the drum, and the inclination of the trommel screen. Depending on the desired application of the trommel screen, a balance has to be struck between screening efficiency and production rate.
Range of application
Municipal and industrial waste
Trommel screens are used by the municipal waste industry in the screening process to classify sizes of solid waste. Besides that, they can also be used to improve the recovery of refuse-derived fuel. This is done by removing inorganic materials such as moisture and ash from the air-classified light fraction segregated from shredded solid waste, thereby increasing the quality of the product fuel. In addition, trommel screens are used for the treatment of wastewater. For this particular application, solids from the entering flow settle onto the screen mesh, and the drum rotates once the liquid reaches a certain level. The clean area of the screen is submerged into the liquid while the trapped solids fall onto a conveyor, where they are further processed before removal.
Mineral processing
Trommel screens are also used for the grading of raw materials to recover valuable minerals. The screen will segregate minuscule materials which are not in the suitable range of size to be used in the crushing stage. It also helps to get rid of dust particles which will otherwise impair the performance of the subsequent machineries in the downstream processes.
Other applications
Other applications of trommel screens can be seen in the screening process of composts as an enhancement technique. It selects composts of variable size fractions to get rid of contaminants and incomplete composted residues, forming end products with a variety of uses. Besides this, the food industries use trommel screens to sort dry food of different sizes and shapes. The classification process will help to achieve the desired mass or heat transfer rate and avoid under or over-processing. It also screens tiny food such as peas and nuts that are strong enough to resist the rotational force of the drum.
Designs available
One of the available designs of trommel screens is concentric screens, with the coarsest screen located at the innermost section. Trommels can also be arranged in parallel, in which objects exit one stream and enter the following. A trommel in series is a single drum whereby each section has different aperture sizes, arranged from the finest to the coarsest.
The trommel screen has many different configurations. For the drum component, an internal screw is fitted when the placement of the drum is flat or elevated at an angle less than 5°. The internal screw facilitates the movement of objects through the drum by forcing them to spiral.
For an inclined drum, objects are lifted and then dropped with the help of lifter bars, moving them further down the drum than they would otherwise travel by rolling. Furthermore, the lifter bars shake the objects to segregate them. Lifter bars are not considered in the presence of heavy objects, as these may break the screen.
As for the screens, perforated plate screens or mesh screens are usually used. Perforated plate screens are rolled and welded for strength. This design contains fewer ridges, which makes the cleaning process easier. On the other hand, mesh screens are replaceable, as they are more susceptible to wear and tear than perforated screens. In addition, screen cleaning work for this design is more intensive, as objects tend to get wedged in the mesh ridges.
The screen's apertures come in either square or round shapes, the choice being determined by many operating factors such as:
The required dimension of the undersized product.
The aperture area. A round aperture contributes a smaller open area than a square-shaped one.
The magnitude of the agitation of product.
Cleanup of drum.
Advantages and limitations over competitive processes
Vibrating screen
Trommel screens are cheaper to produce than vibrating screens. They are vibration-free, which causes less noise than vibrating screens. Trommel screens are also more mechanically robust than vibrating screens, allowing them to last longer under mechanical stress.
However more material can be screened at once for a vibrating screen compared to a trommel screen. This is because only one part of the screen area of the trommel screen is utilised during the screening process whilst the entire screen is used for a vibrating screen. Trommel screens are also more susceptible to plugging and blinding, especially when different sized screen apertures are in series. Plugging is when material larger than the aperture may become stuck or wedged into the apertures and then may be forced through which is undesirable. Blinding is when wet material clump up and stick to the surface of the screen. The vibrations in the vibrating screens reduce the risk of plugging and blinding.
Grizzly screen
A grizzly screen is a grid or set of parallel metal bars set in an inclined stationary frame. The slope and the path of the material are usually parallel to the length of the bars. The length of the bar may be up to 3 m and the spacing between the bars ranges from 50 to 200 mm. Grizzly screens are typically used in mining to limit the size of material passing into a conveyance or size reduction stage.
Construction
The material of construction of the bars is usually manganese steel to reduce wear. Usually, the bar is shaped in such a way that its top is wider than the bottom, and hence the bars can be made fairly deep for strength without being choked by lumps passing partway through them.
Working
A coarse feed (say from a primary crusher) is fed at the upper end of the grizzly. Large chunks roll and slide to the lower end (tail discharge), while small lumps having sizes less than the openings in the bars fall through the grid into a separate collector.
Roller screen
Roller screens are preferred to trommel screens when the feed rate required is high. They also cause less noise than trommel screens and require less head room. Viscous and sticky materials are easier to be separated using a roller screen than with a trommel screen.
Curved screen
Curved screens are able to separate finer particles (200-3000 μm) than trommel screens. However, blinding may occur if the particle size is less than 200 μm, which will affect the separation efficiency. The screening rate of a curved screen is also much higher than that of a trommel screen, as the whole surface area of the screen is utilised. Furthermore, for curved screens, the feed flows parallel to the apertures. This allows any loose material to break away from the jagged surface of the larger materials, resulting in more undersized particles passing through.
Gyratory screen separators
A gyratory separator can separate finer particle sizes (down to about 40 μm) than a trommel screen. The size of the gyratory screen separator can be adjusted through removable trays, whereas the trommel screen is usually fixed. Gyratory separators can also separate dry and wet materials, like trommel screens. However, it is common for gyratory separators to separate either dry or wet materials only, because different parameters are required for the gyratory screen to achieve its best separation efficiency. Therefore, two separators would be required for the separation of dry and wet materials, while one trommel screen would be able to do the same job.
Main process characteristics
Screening rate
One of the main process characteristics of interest is the screening rate of the trommel. The screening rate is related to the probability of the undersized particles passing through the screen apertures upon impact. Based on the assumption that the particle falls perpendicularly onto the screen surface, the probability of passage, P, is given as

P = f_a (1 − x/a)²    (1)

where x refers to the particle size, a refers to the size of the aperture (diameter or length) and f_a refers to the ratio of aperture area to the total screen area. Equation (1) holds for both square and circular apertures. However, for rectangular apertures, the equation becomes:

P = f_a (1 − x/a₁)(1 − x/a₂)    (2)

where a₁ and a₂ refer to the rectangular dimensions of the aperture. After determining the probability of passage of a given size interval of particles through the screen, the fraction of particles remaining on the screen, φ, can be found using:

φ = (1 − P)ⁿ    (3)

where n is the number of impingements of the particles on the screen. After making the assumption that the number of impingements per unit time, N_i, is constant, equation (3) becomes:

φ = (1 − P)^(N_i t)    (4)

An alternative way of expressing the fraction of particles remaining on the screen is in terms of the particle weight:

w/w₀ = (1 − P)^(N_i t)    (5)

where w is the weight of a given size interval of particles remaining on the screen at any given time and w₀ is the initial weight of the feed. Therefore, from equations (4) and (5), the screening rate can be expressed as:

−dw/dt = −N_i ln(1 − P) w    (6)
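The relations above translate directly into code. The sketch below assumes a square aperture (equation (1)); the aperture size, open-area fraction and impingement rate used are illustrative assumptions.

```python
def passage_probability(x_mm: float, a_mm: float, f_a: float) -> float:
    """P = f_a * (1 - x/a)^2 for x < a (square or circular apertures)."""
    return f_a * (1.0 - x_mm / a_mm) ** 2 if x_mm < a_mm else 0.0

def fraction_remaining(x_mm: float, a_mm: float, f_a: float,
                       n_i_per_s: float, t_s: float) -> float:
    """phi = (1 - P)^(N_i * t): fraction of a size class still on the screen."""
    p = passage_probability(x_mm, a_mm, f_a)
    return (1.0 - p) ** (n_i_per_s * t_s)

# 4 mm particles on a 10 mm aperture screen, 50% open area, 2 impingements/s
for t in (1.0, 5.0, 10.0):
    phi = fraction_remaining(4.0, 10.0, 0.5, 2.0, t)
    print(f"t = {t:4.1f} s: fraction remaining = {phi:.3f}")
```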
Separation efficiency
Screening efficiency can also be calculated using mass fractions in the following way:

E = c(f − u)(1 − u)(c − f) / [f(c − u)²(1 − f)]

where f, c and u are the mass fractions of undersize material in the feed, the fine (underflow) stream and the coarse (overflow) stream respectively.
Apart from the screening rate, another characteristic of interest is the separation efficiency of the trommel screen. Assuming that the size distribution function of the undersized particles to be removed, f(x), is known, the cumulative probability of all particles ranging from size x = 0 to x_c that are separated after n impingements is simply:

Φ = ∫₀^(x_c) [1 − (1 − P(x))ⁿ] f(x) dx

Furthermore, the total number fraction of particles within this size range in the feed can be expressed as follows:

Φ_f = ∫₀^(x_c) f(x) dx

Therefore, the separation efficiency, which is defined as the ratio of the fraction of particles removed to the total fraction of particles in the feed, can be determined as follows:

η = Φ / Φ_f
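The mass-fraction efficiency formula quoted above lends itself to a short calculator. The sample stream compositions below are illustrative assumptions.

```python
def screen_efficiency(f: float, c: float, u: float) -> float:
    """E = c(f - u)(1 - u)(c - f) / [f (c - u)^2 (1 - f)].

    f, c, u: mass fractions of undersize material in the feed, the fine
    (underflow) stream and the coarse (overflow) stream respectively.
    """
    return (c * (f - u) * (1.0 - u) * (c - f)) / (f * (c - u) ** 2 * (1.0 - f))

# Feed 40% undersize; fines stream 95% undersize; oversize stream 5% undersize
print(f"E = {screen_efficiency(f=0.40, c=0.95, u=0.05):.3f}")   # ~0.894
```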
There are a number of factors that affect the separation efficiency of the trommel, which include:
Speed of rotation of the trommel screen
Feed rate
Residence time in the rotating drum
Angle of inclination of drum
Number and size of screen apertures
Characteristics of the feed
Residence time in the screen
Two simplifying assumptions are made in the equation presented in this section for the residence time of materials in a rotating screen. First, it is assumed that there is no slippage of particles on the screen. In addition, the particles dislodging from the screen are in free fall. When the drum rotates, particles are kept in contact with the rotating wall by centrifugal force. As the particles reach near the top of the drum, the gravitational force acting in the radial direction overcomes the centrifugal force, causing the particles to fall from the drum in a cataracting motion. The force components acting on the particle at the point of departure are illustrated in Figure 6.

The departure angle α can be determined through a force balance, which gives:

cos α = r ω² / (g cos β)

where r is the drum radius, ω is the rotational velocity in radians per second, g is the gravitational acceleration and β is the angle of inclination of the drum. The residence time of particles in the rotating screen then follows from the cataract geometry; it depends on the screen length L, the rotation rate of the screen N in revolutions per minute, and the departure angle α in degrees. A numerical sketch of the force balance is given below.
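The force balance above can be checked numerically. The sketch below also flags the onset of centrifuging (where rω² ≥ g cos β and the particle never detaches); the drum radius, speeds and inclination used are illustrative assumptions.

```python
import math

def departure_angle_deg(r_m: float, rpm: float, incline_deg: float,
                        g: float = 9.81):
    """alpha from cos(alpha) = r*w^2 / (g*cos(beta)); None if centrifuging."""
    w = 2.0 * math.pi * rpm / 60.0                     # rad/s
    ratio = r_m * w * w / (g * math.cos(math.radians(incline_deg)))
    return None if ratio >= 1.0 else math.degrees(math.acos(ratio))

for rpm in (10.0, 20.0, 30.0):
    alpha = departure_angle_deg(r_m=1.0, rpm=rpm, incline_deg=3.0)
    label = "centrifuging" if alpha is None else f"alpha = {alpha:.1f} deg"
    print(f"{rpm:4.1f} rpm: {label}")
```

For a 1 m radius drum this places the centrifuging threshold just under 30 rpm, consistent with the motion regimes described in the following section.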
Design and heuristics
Trommel screens are used widely in industries for its efficiency in material size separation. The trommel screening system is governed by the rotational velocity of the drum, mass flow rate of feed particles, size of the drum and inclination of trommel screen.
Particle rotational velocity behaviour
Considering that the mesh sizes of the rotating drum are larger than the particle sizes, as shown in Figure 7, the particle motion velocity v can be broken down into two components: a vertical component v_y and a horizontal component v_x. Denoting θ to be the angle between the particle motion and the vertical component, the vertical and horizontal velocities can be written as:

v_y = v cos θ
v_x = v sin θ

When the velocity components carry a particle into an open aperture, it escapes through the mesh in the rotating drum; otherwise, the particle is retained within the rotating drum. Larger granules will be retained inside the trommel screen until an aperture of the desired size is met, and then follow the same particle behaviour.
Particle motion mechanisms
With varying rotational velocities, the effect of screening efficiency and production rate varies according to different types of motion mechanisms. These mechanisms include slumping, cataracting and centrifuging.
Slumping
This occurs when the rotational velocity of the drum is low. The particles are lifted slightly from the bottom of the drum before tumbling down the free surface, as shown in Figure 8. As only smaller-sized filter granules near the wall of the trommel body are able to be screened, this results in a lower screening efficiency.
Cataracting
As rotational velocity increases, slumping transitions to cataracting motion where particles detach near the top of the rotating drum as shown in Figure 9. Larger granules segregate near the inner surface due to the Brazil nut effect while smaller granules stay near the screen surface, thereby allowing smaller filter granules to pass through. This motion generates turbulent flow of particles, resulting in a higher screening efficiency compared to slumping.
Centrifuging
As the rotational velocity increases further, cataracting motion will transition to centrifuging motion which will result in a lower screening efficiency. This is due to particles attaching to the wall of the rotating drum caused by centrifugal forces as shown in Figure 10.
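These regimes can be classified roughly with the Froude number Fr = ω²r/g: Fr ≥ 1 corresponds to centrifuging, while the slumping/cataracting split used below (Fr ≈ 10⁻³) is only an indicative assumption, since the real transition also depends on fill level and material properties.

```python
import math

def drum_regime(r_m: float, rpm: float, g: float = 9.81) -> str:
    """Rough motion-regime classification from the Froude number."""
    w = 2.0 * math.pi * rpm / 60.0
    fr = w * w * r_m / g
    if fr >= 1.0:
        return f"centrifuging (Fr = {fr:.2f})"
    regime = "cataracting" if fr > 1e-3 else "slumping"
    return f"{regime} (Fr = {fr:.4f})"

for rpm in (0.5, 15.0, 35.0):
    print(f"{rpm:5.1f} rpm -> {drum_regime(r_m=1.0, rpm=rpm)}")
```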
Feed flow rate
According to Ottino and Khakhar, increasing the feed flow rate of particles results in a decrease in screening efficiency. The reason is not well understood; however, it is suggested that this effect is influenced by the thickness of filter granules packed in the trommel body.
At higher feed flow rates, only smaller-sized particles at the lower layer of the packed bed are able to be screened at the designated apertures, while the remaining small-sized particles adhere to larger particles. On the other hand, it is easier for smaller-sized particles to pass through the granule bed thickness in the trommel system at lower feed rates.
Size of the drum
Increasing the area of material exposed to screening allows more particles to be filtered out. Therefore, features that increase the surface area will result in a much higher screening efficiency and production rate. The surface area can be increased by:
Increasing the length and diameter of the drum
Increasing the size of the apertures and number of apertures
Reducing the number of gaps/area between the apertures
Using lifting bars to increase spread of particles
Inclination angle of drum
When designing the trommel screen, it should be taken into account that a higher inclination angle results in a higher production rate due to an increase in the particle velocity, v, as illustrated in Figure 7. However, this comes at the cost of a lower screening efficiency. On the other hand, decreasing the inclination angle results in a much longer residence time of particles within the trommel system, which increases the screening efficiency.
Since screening efficiency is directly proportional to the length of the trommel, a shorter trommel screen would be needed at a smaller inclination angle to achieve a desired screening efficiency. It is suggested that the inclination angle should not be below 2°, because the efficiency and production rate are unknown beyond this point. A phenomenon exists below 2° such that, for a given set of operating conditions, decreasing the inclination angle increases the bed depth, resulting in a lower screening efficiency; however, it also simultaneously increases the residence time, which increases the screening efficiency. It is unclear which effect dominates at inclination angles less than 2°.
Example of post-treatment
In the wastewater treatment industry, the solids that exit the trommel are compressed and dewatered as they travel along the conveyor. Most often a post-washing treatment such as a jet wash is used after the trommel screen to break down faecal and unwanted semi-solid matter. The volume of the solids will decrease by up to 40%, depending on their properties before removal.
Notes
References
Brentwood Recycling Systems (2013). "Trommels 101: Understanding Trommel Screen Design" Retrieved 5 October 2013
Fellows, P. J. (2009). "Food Processing Technology - Principles and Practice (3rd Edition)". Woodhead Publishing.
Glaub, J.C., Jones, D.B. & Savage, G.M. (1982). "The Design and Use of Trommel Screens for Processing Municipal Solid Waste", Cal Recovery Systems, Inc.
Gupta, A. Yan, D. (2006) "Mineral Processing Design and Operation - An Introduction". Elsevier.
Halder, S.K. (2012) "Mineral Exploration: Principles and Applications". Elsevier.
Hester, R.E. & Harrison, R.M. (2002). "Environmental and Health Impact of Solid Waste Management Activities". Royal Society of Chemistry.
Johnsons Screens (2011). "Inclined Rotary Screens" Retrieved 7 October 2013
Neikov, O. D. Stanislav, I. Mourachova, I. B. Gopienko, V.G. Frishberg, I.V. Lotskot, D.V. (2009) "Handbook of Non-Ferrous Metal Powders: Technologies and Applications". Elsevier.
Pichtel, J. (2005). "Waste Management Practices: Municipal, Hazardous, and Industrial", CRC Press, Boca Raton.
Richardson, J.F. Harker, J.H. Backhurst, J.R. (2002). "Coulson and Richardson's Chemical Engineering Volume 2 - Particle Technology and Separation Processes (5th Edition)". Elsevier.
Sutherland, K.S. (2011) "Filters and Filtration Handbook". Elsevier.
Tarleton, S. Wakeman, R. (2006) "Solid/Liquid Separation: Equipment Selection and Process Design: Equipment". Elsevier.
West, G. Fookes, P.G. Lay, J. Sims, I. Smith, M.R. Collis, L. (2001). "Aggregates: Sand, Gravel and Crushed Rock Aggregates for Construction Purposes (3rd Edition)". Geological Society of London.
Wills, B.A Napier-Munn, T. (2011) "Wills' Mineral Processing Technology: An Introduction to the Practical". Elsevier.
Waste treatment technology
Mechanical biological treatment
Mining equipment
Industrial processes
Material-handling equipment
Solid-solid separation | Trommel screen | [
"Chemistry",
"Engineering"
] | 4,015 | [
"Solid-solid separation",
"Mining equipment",
"Separation processes by phases",
"Water treatment",
"Environmental engineering",
"Waste treatment technology"
] |
40,798,889 | https://en.wikipedia.org/wiki/Solid%20bowl%20centrifuge | A solid bowl centrifuge is a type of centrifuge that uses the principle of sedimentation. A centrifuge is used to separate a mixture that consists of two substances with different densities by using the centrifugal force resulting from continuous rotation. It is normally used to separate solid-liquid, liquid-liquid, and solid-solid mixtures. Solid bowl centrifuges are widely used in various industrial applications, such as wastewater treatment, coal manufacturing, and polymer manufacturing. One advantage of solid bowl centrifuges for industrial uses is the simplicity of installation compared to other types of centrifuge. There are three design types of solid bowl centrifuge, which are conical, cylindrical, and conical-cylindrical.
Range of applications
Wastewater sludge treatment
During the industrial process of wastewater treatment, a huge quantity of sludge is produced which needs to be disposed of or treated further. One of the treatment methods available is thickening the sludge using solid bowl centrifuges. While the incoming sludge has a concentration of around 0.5-1% dry solids, after the thickening process it contains up to 5-6% dry solids. This process reduces the volume of waste activated sludge by more than 80%, as well as reducing the amount of sludge for digestion by 30-40%. Furthermore, less disposal sludge also lowers the cost of polymer and improves the dewatering characteristics.
Coal treatment underflow slurry
Coal slurry, which contains around 6% solids by weight and nearly 60% of 10 mm material, is thickened using solid bowl centrifuges. Using this centrifuge technique, the concentration of the end product can reach up to 55-60% solids without extra chemicals added. Additionally, solid bowl centrifuges are used in the removal of water from waste slurry obtained from coal-cleaning facilities.
Polymer manufacture
Solid bowl centrifuges are used in the manufacture of polymers to recover acetates from polymer slurry. The conical beach is used for internal washing to improve acetate recovery. Previously, the centrifuge consisted of a single-lead conveyor, which was later improved to a double-lead conveyor in order to increase capacity. Furthermore, an extra double-lead conveyor with reduced pitch is used to improve the acetate yield.
Advantages and disadvantages
Advantages
Rapid start-up and shut down.
Relatively simple installation.
Compact design.
Functions automatically with minimal monitoring and control units.
Flexible usage for both thickening and dewatering process.
Needs relatively low polymer quantity compared to other types, except basket centrifuges.
Large liquid capacity and able to deal with more concentrated slurry.
Produces drier solids and has better solids retention, depending on bowl size and RPM.
Disadvantages
Less surface area for clarification compared to disk stack.
May generate heat that could damage thermally sensitive products.
High maintenance, especially for the scroll, which is a wear part; a hard surface and abrasion protection are recommended.
Produces unwanted noise, especially for high-G centrifuges.
Occasionally produce vibration that disturbs electronic control and structural components.
High power consumption.
Requires pretesting for selecting optimum machine setting before starting the normal service.
Designs available
Solid bowl centrifuge designs are divided into three different types based on the solid bowl shapes, which are conical, cylindrical, and cylindrical-conical. The choice of the centrifuge design in a particular industry is determined by the characteristics of the slurry and solids.
Solid bowl centrifuge based on shapes design
Conical solid bowl centrifuge
Among the three designs, the conical bowl was initially the most preferred design due to its maximum allowance of water removal and its excellent classifying ability. However, this design is less effective in achieving a high centrate quality, which makes it a poor clarifier.
Cylindrical solid bowl centrifuge
Unlike conical bowl design, cylindrical bowl design does not allow maximum water removal, and thus mainly produces wet cakes. In addition, it is also a less effective classifier. However, cylindrical bowl design is more effective in achieving a high centrate quality compared to the conical bowl design, which makes it a better dewatering device compared to the conical bowl design.
Conical-cylindrical solid bowl centrifuge
Conical-cylindrical bowl design was developed based on the conical and cylindrical bowl designs. This design is basically a combination of the best individual characteristics of the previous two designs, and therefore is a more advanced design. This particular design allows efficient dewatering ability, effective clarification, and fairly good classification within one unit. It has the ability to change and control the balance between the water removal and the centrate quality by the adjustment of its pool length, depending on the required product. Thus, conical-cylindrical bowl design is the most widely used in the industry today.
A typical conical-cylindrical solid bowl centrifuge design contains a rotating bowl unit connected to a conveyor by a gear system. The gear system allows the rotating bowl and the conveyor to rotate at different speeds but in the same direction. Commonly, the conveyor operates at speeds between 1900 and 2400 rotations per minute, while the bowl unit operates about 100 rotations per minute faster.
The performances of the shape-based centrifuge designs can be compared in the table below:
Solid bowl centrifuge based on exit stream design
Based on the exit stream of the solid cake and liquid centrate, there are two types of solid bowl centrifuge designs, which are:
Concurrent design
For this design, the solid cake and the liquid centrate leave the centrifuge bowl at the same end.
Counter current design
This design allows the solid cake and the liquid centrate to leave the centrifuge bowl at opposite ends. For this design, the conveyor pushes the sludge towards the end streams and the supernatant liquid is allowed to exit over the weirs.
Process characteristics
Process description

With the help of a helical screw conveyor, solid bowl centrifuges separate two substances of different densities by the centrifugal force formed under fast rotation. Feed slurry enters through the conveyor and is delivered into the rotating bowl through discharge ports. There is a slight speed difference between the rotation of the conveyor and the bowl, causing the solids to convey from the stationary zone, where the wastewater is introduced, to the bowl wall. Under centrifugal force, the collected solids move along the bowl wall, out of the pool and up the dewatering beach located at the tapered end of the bowl. Finally, the separated solids go to the solids discharge while the liquid goes to the liquid discharge. The clarified liquid flows through the conveyor in the opposite direction and exits through adjustable overflow ports.

(Reference: FSA Environment (2002). Case Study 10 Centrifuge Decanter, Solid Separation Systems for the Pig Industry. http://www.fsaconsulting.net/pdfs/Case%20Study%2010%20-%20Centrifuge.pdf, accessed 10 October 2013.)
Main process characteristics

(Reference: Smidth, F.L. (2010). Decanter Solid Bowl Centrifuge. Cement Technologies and Corporate Matters. http://www.flsmidth.com/enUS/Products/Product+Index/All+Products/Classification/Centrifuges/SolidbowlCentrifuge/Solidbowl+Centrifuge, accessed 11 October 2013.)
Feed rate range is between 1.5-12 L/s (25-200 gal/min).
Rotational speed is in the range of 1000-6000 rpm.
Flow rate range is between 3.5-15 m3/(d·kW) (0.5-2 gal/(min·hp)).
G factor (ratio of centrifugal force to gravitational force) is in the range of 2000-3000.
Examples of common gearbox ratios used are 20, 40, 116, 130, and 140:1.
Dewatering sludge is more effectively processed using centrifuges with larger pool volumes.
Pool depth (radial height of liquid) can be changed in most centrifuges.
Solid concentration range in the cake is between 4-6% in thickening operations, and 10-35% in dewatering operations.
The length to diameter ratio is in the range of 2.5:1 to 4:1.
The solid bowl centrifuge performance is determined by the quality of the solids in the effluent and the cake dryness. However, the centrifuge efficiency is usually measured by the percentage solids recovery, with the formula:

Recovery (%) = 100 × Cs(Fs − Es) / [Fs(Cs − Es)]

where Fs, Cs and Es are the solids concentrations (% by weight) of the feed, cake and effluent (centrate) streams respectively.
The table below shows the percentage of solids separation achieved for different sludges, and the cake solids formed, considering the effect of adding polymer:
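The recovery formula above can be sketched as a one-line calculator; the feed, cake and centrate solids values below are illustrative assumptions.

```python
def solids_recovery_pct(feed_pct: float, cake_pct: float,
                        centrate_pct: float) -> float:
    """R = 100 * Cs (Fs - Es) / [Fs (Cs - Es)], all solids in wt%."""
    return (100.0 * cake_pct * (feed_pct - centrate_pct)
            / (feed_pct * (cake_pct - centrate_pct)))

# e.g. a 4% solids feed thickened to a 25% cake with a 0.3% solids centrate
print(f"solids recovery = {solids_recovery_pct(4.0, 25.0, 0.3):.1f} %")  # ~93.6
```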
Assessment of the characteristics
The length of the dewatering beach and the differential speed between the bowl and conveyor can affect the solids content of the separated solids. The optimum residence time of solids in the centrifuge and the water content of the separated solids can be achieved by adjusting the differential speed. A high differential speed tends to increase the solid cake moisture and decrease solids in the effluent, as the residence time is lower. However, in some cases it is also possible that the solids in the effluent increase due to stirring effects.
Sludges with a higher proportion of fine and hydrous particles are most likely to resist being conveyed up to the solids discharge point. Heavier particles are preferred; therefore, gravity thickening is used initially to treat most sludges, along with the addition of organic polyelectrolytes.
When the rotational speed of the bowl is higher, more solid separation occurs, as the settling rate of solids increases with the square of the rotational speed. However, the maintenance cost increases proportionally with increasing rotational speed. The speed of a centrifuge with a variable-speed drive motor can be adjusted while the machine is operating. Otherwise, the speed is fixed by the design of the motor and sheave sizes.
Increasing flow rate reduces the residence time of the slurry in the bowl, causing a rise in the amount of solids in the effluent phase. Thus, the separation efficiency will decrease as a result. Moreover, the pool depth increases at the same time due to head that overflows the weir plates.
The pool depth is controlled by the weir plates on the liquid end of the centrifuge. The pool volume and the residence time of slurry in the bowl are proportional to the pool depth. Lowering the pool depth decreases the centrifuge efficiency, reducing the g factor and at the same time increasing the amount of solids in the effluent phase. Additionally, more area at the dewatering beach is left uncovered by the pool, which leads to a decrease in cake solids moisture.
Heuristics
The tested material is considered as an ideal material for the solid bowl centrifuge if the volume of solids and the volume of compaction come together within 90–120 seconds with a clear effluent.
The spinning of difficult material may require a higher G force, e.g. 2500 × g.
If the settled solids are not transported out at a sufficient rate of pool, the bowl fills up with solids and no separation occurs.
A longer retention time allows a higher solids recovery. It can be achieved by using a bowl of larger diameter while slowing down the machine rotation to maintain the centrifugal force, and by increasing the height of the liquid annulus in the centrifuge (pool depth).
Centrifuges should be heated for four to five hours and held at a constant pressure, and then cooled down before production can be started.
15-20% solids will be present in the dewatered primary sludge when all of the settings follow the centrifuge's characteristics.
In general, total solids recovery without polymer addition ranges from 74-84%.
The effects of the process variables can be seen in the table below:
In order to improve solid recovery, the machine variables can be controlled as follows:
Production of waste stream
The post-treatment of the waste stream produced by the solid bowl centrifuge varies depending on the industrial application. Since different industries feed different materials to the centrifuge system, the waste streams differ as well, and thus require different post-treatments. Below are some examples of waste stream production and the necessary post-treatment in various applications in industry.
Wastewater sludge treatment
In water treatment, the wanted product is the clean water, while the waste is the sludge containing dissolved organic and inorganic materials, fibrous matter, and extracellular polymer (ECP). The sludge is commonly discarded down the sewer or to landfill. Occasionally, the sludge is used in the production of bricks and concrete, in agriculture as a soil additive, or for land reclamation. In this application, solid bowl centrifuge is used as the final step of the water treatment sludge before disposal in order to reduce landfill charges and transport costs.
Coal treatment underflow slurry
A solid bowl centrifuge is used, along with a plate-and-frame filter press, for dewatering coal waste slurry in coal manufacturing before it is disposed of. The slurry feed is obtained from the underflow of a functioning bituminous coal-cleaning thickener device. The waste stream is usually disposed of into slurry cells or abandoned underground mine sites if available, or more commonly in slurry impoundments.
Polymer manufacture
In this industry, acetate is recovered during the manufacturing of polymers. In this case, the wanted product is actually the polymer; however, the acetate is not a waste either, since it is recovered. While the polymer solids are discharged through the solid exit port and further processed, the acetate is recovered through the liquid exit port and separated from the washing liquid to recover pure acetate.
New development
There are various aspects of current solid bowl centrifuges that can be improved in order to increase performance and reliability. To allow more control and simpler operation, support systems such as feed equipment, a chemical dosing facility and better transfer pumps have been designed and added. Moreover, the operating parameters can be adjusted to optimize the sludge dewatering process, which relies on rotational force to throw the solids out and make the sludge stick to the outer wall surfaces. The use of a metal screen or other suitable filtering material can also be added to achieve better solids dryness.
External links
The Flottweg Decanter Centrifuge – Parameters and influencing factors of a decanter that ensure the best possible separation result including decanter centrifuge video
References
Centrifuges | Solid bowl centrifuge | [
"Chemistry",
"Engineering"
] | 3,047 | [
"Chemical equipment",
"Centrifugation",
"Centrifuges"
] |
36,589,154 | https://en.wikipedia.org/wiki/Rayleigh%E2%80%93B%C3%A9nard%20convection | In fluid thermodynamics, Rayleigh–Bénard convection is a type of natural convection, occurring in a planar horizontal layer of fluid heated from below, in which the fluid develops a regular pattern of convection cells known as Bénard cells. Such systems were first investigated by Joseph Valentin Boussinesq and Anton Oberbeck in the 19th century. This phenomenon can also manifest where a species denser than the electrolyte is consumed from below and generated at the top. Bénard–Rayleigh convection is one of the most commonly studied convection phenomena because of its analytical and experimental accessibility. The convection patterns are the most carefully examined example of self-organizing nonlinear systems. Time-dependent self-similar analytic solutions are known for the velocity fields and for the temperature distribution as well.
Buoyancy, and hence gravity, are responsible for the appearance of convection cells. The initial movement is the upwelling of less-dense fluid from the warmer bottom layer. This upwelling spontaneously organizes into a regular pattern of cells.
Physical processes
The features of Bénard convection can be obtained by a simple experiment first conducted by Henri Bénard, a French physicist, in 1900.
Development of convection
The experimental set-up uses a layer of liquid, e.g. water, between two parallel planes. The height of the layer is small compared to the horizontal dimension. At first, the temperature of the bottom plane is the same as the top plane. The liquid will then tend towards an equilibrium, where its temperature is the same as its surroundings. (Once there, the liquid is perfectly uniform: to an observer it would appear the same from any position. This equilibrium is also asymptotically stable: after a local, temporary perturbation of the outside temperature, it will go back to its uniform state, in line with the second law of thermodynamics).
Then, the temperature of the bottom plane is increased slightly yielding a flow of thermal energy conducted through the liquid. The system will begin to have a structure of thermal conductivity: the temperature, and the density and pressure with it, will vary linearly between the bottom and top plane. A uniform linear gradient of temperature will be established. (This system may be modelled by statistical mechanics).
Once conduction is established, the microscopic random movement spontaneously becomes ordered on a macroscopic level, forming Bénard convection cells, with a characteristic correlation length.
Convection features
The rotation of the cells is stable and will alternate from clockwise to counter-clockwise horizontally; this is an example of spontaneous symmetry breaking. Bénard cells are metastable. This means that a small perturbation will not be able to change the rotation of the cells, but a larger one could affect the rotation; they exhibit a form of hysteresis.
Moreover, the deterministic law at the microscopic level produces a non-deterministic arrangement of the cells: if the experiment is repeated, a particular position in the experiment will be in a clockwise cell in some cases, and a counter-clockwise cell in others. Microscopic perturbations of the initial conditions are enough to produce a non-deterministic macroscopic effect. That is, in principle, there is no way to calculate the macroscopic effect of a microscopic perturbation. This inability to predict long-range conditions and sensitivity to initial-conditions are characteristics of chaotic or complex systems (i.e., the butterfly effect).
If the temperature of the bottom plane was to be further increased, the structure would become more complex in space and time; the turbulent flow would become chaotic.
Convective Bénard cells tend to approximate regular right hexagonal prisms, particularly in the absence of turbulence, although certain experimental conditions can result in the formation of regular right square prisms or spirals.
The convective Bénard cells are not unique and will usually appear only in surface-tension-driven convection. In general, the solutions to the Rayleigh and Pearson analysis (linear theory), assuming an infinite horizontal layer, give rise to degeneracy, meaning that many patterns may be obtained by the system. Assuming uniform temperature at the top and bottom plates, when a realistic system is used (a layer with horizontal boundaries) the shape of the boundaries will mandate the pattern. More often than not the convection will appear as rolls or a superposition of them.
Rayleigh–Bénard instability
Since there is a density gradient between the top and the bottom plate, gravity acts, trying to pull the cooler, denser liquid from the top to the bottom. This gravitational force is opposed by the viscous damping force in the fluid. The balance of these two forces is expressed by a non-dimensional parameter called the Rayleigh number. The Rayleigh number is defined as:

Ra = g β (Tb − Tu) L³ / (ν α)

where
Tu is the temperature of the top plate
Tb is the temperature of the bottom plate
L is the height of the container
g is the acceleration due to gravity
ν is the kinematic viscosity
α is the thermal diffusivity
β is the thermal expansion coefficient.
As the Rayleigh number increases, the gravitational forces become more dominant. At a critical Rayleigh number of 1708, instability sets in and convection cells appear.
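As a check, the Rayleigh number defined above can be evaluated for a thin water layer and compared with the critical value of 1708. This is a minimal sketch; the property values below are typical room-temperature figures for water and are assumptions for illustration.

```python
def rayleigh_number(dT_K: float, L_m: float, beta: float = 2.1e-4,
                    nu: float = 1.0e-6, alpha: float = 1.4e-7,
                    g: float = 9.81) -> float:
    """Ra = g * beta * (Tb - Tu) * L^3 / (nu * alpha)."""
    return g * beta * dT_K * L_m ** 3 / (nu * alpha)

# 5 mm water layer: expansion beta, kinematic viscosity nu, diffusivity alpha
for dT in (0.1, 1.0, 5.0):
    ra = rayleigh_number(dT, L_m=5e-3)
    print(f"dT = {dT:3.1f} K: Ra = {ra:8.0f}, convecting: {ra > 1708}")
```

The strong cubic dependence on layer height L is why even modest temperature differences trigger convection in all but the thinnest layers.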
The critical Rayleigh number can be obtained analytically for a number of different boundary conditions by doing a perturbation analysis on the linearized equations in the stable state. The simplest case is that of two free boundaries, which Lord Rayleigh solved in 1916, obtaining Ra = 27π⁴/4 ≈ 657.51. In the case of a rigid boundary at the bottom and a free boundary at the top (as in the case of a kettle without a lid), the critical Rayleigh number comes out as Ra = 1,100.65.
Effects of surface tension
In case of a free liquid surface in contact with air, buoyancy and surface tension effects will also play a role in how the convection patterns develop. Liquids flow from places of lower surface tension to places of higher surface tension. This is called the Marangoni effect. When applying heat from below, the temperature at the top layer will show temperature fluctuations. With increasing temperature, surface tension decreases. Thus a lateral flow of liquid at the surface will take place, from warmer areas to cooler areas. In order to preserve a horizontal (or nearly horizontal) liquid surface, cooler surface liquid will descend. This down-welling of cooler liquid contributes to the driving force of the convection cells. The specific case of temperature gradient-driven surface tension variations is known as thermo-capillary convection, or Bénard–Marangoni convection.
History and nomenclature
In 1870, the Irish-Scottish physicist and engineer James Thomson (1822–1892), elder brother of Lord Kelvin, observed water cooling in a tub; he noted that the soapy film on the water's surface was divided as if the surface had been tiled (tessellated). In 1882, he showed that the tessellation was due to the presence of convection cells. In 1900, the French physicist Henri Bénard independently arrived at the same conclusion. This pattern of convection, whose effects are due solely to a temperature gradient, was first successfully analyzed in 1916 by Lord Rayleigh. Rayleigh assumed boundary conditions in which the vertical velocity component and temperature disturbance vanish at the top and bottom boundaries (perfect thermal conduction). Those assumptions resulted in the analysis losing any connection with Henri Bénard's experiment. This resulted in discrepancies between theoretical and experimental results until 1958, when John Pearson (1930– ) reworked the problem based on surface tension. This is what was originally observed by Bénard. Nonetheless, in modern usage "Rayleigh–Bénard convection" refers to the effects due to temperature, whereas "Bénard–Marangoni convection" refers specifically to the effects of surface tension. Davis and Koschmieder have suggested that the convection should be rightfully called the "Pearson–Bénard convection".
Rayleigh–Bénard convection is also sometimes known as "Bénard–Rayleigh convection", "Bénard convection", or "Rayleigh convection".
See also
Hydrodynamic stability
Marangoni effect
Natural convection
Giant's Causeway and Causeway Coast
Rayleigh–Taylor instability
References
Further reading
B. Saltzman (ed., 1962). Selected Papers on the Theory of Thermal Convection, with Special Application to the Earth's Planetary Atmosphere (Dover). ASIN B000IM1NYC
Subrahmanyan Chandrasekhar (1982). Hydrodynamic and Hydromagnetic Stability (Dover).
E.L. Koschmieder (1993). Bénard Cells and Taylor Vortices (Cambridge University Press).
A.V. Getling (1998). Rayleigh-Bénard Convection: Structures and Dynamics (World Scientific). ISBN 981-02-2657-8
R. Meyer-Spasche (1999). Pattern Formation in Viscous Flows: The Taylor-Couette Problem and Rayleigh-Bénard Convection (Birkhäuser Basel). ISBN 978-3-0348-9738-9
P.G. Drazin and W.H. Reid (2004). Hydrodynamic Stability, second edition (Cambridge University Press). ISBN 978-0631525417
E.S.C. Ching (2014). Statistics and Scaling in Turbulent Rayleigh-Bénard Convection (Springer).
D. Goluskin (2015). Internally Heated Convection and Rayleigh-Bénard Convection (Springer).
R.Kh. Zeytounian (2009). Convection in Fluids: A Rational Analysis and Asymptotic Modelling (Springer). ISBN 978-90-481-2432-9
External links
A. Getling, O. Brausch: Cellular flow patterns
K. Daniels, B. Plapp, W. Pesch, O. Brausch, E. Bodenschatz: Undulation Chaos in Inclined Layer Convection
Karen E. Daniels, Oliver Brausch, Werner Pesch, Eberhard Bodenschatz: Competition and bistability of ordered undulations and undulation chaos in inclined layer convection (PDF; 608 kB)
P. Subramanian, O. Brausch, E. Bodenschatz, K. Daniels, T. Schneider, W. Pesch: Spatio-temporal Patterns in Inclined Layer Convection (PDF; 5.3 MB)
Convection
Fluid dynamic instabilities
Articles containing video clips | Rayleigh–Bénard convection | [
"Physics",
"Chemistry"
] | 2,125 | [
"Transport phenomena",
"Physical phenomena",
"Fluid dynamic instabilities",
"Convection",
"Thermodynamics",
"Fluid dynamics"
] |
36,590,968 | https://en.wikipedia.org/wiki/Uses%20of%20radioactivity%20in%20oil%20and%20gas%20wells | Radioactive sources are used for logging formation parameters. Radioactive tracers, along with the other substances in hydraulic-fracturing fluid, are sometimes used to determine the injection profile and location of fractures created by hydraulic fracturing.
Use of radioactive sources for logging
Sealed radioactive sources are routinely used in formation evaluation of both hydraulically fractured and non-fracked wells. The sources are lowered into the borehole as part of the well logging tools, and are removed from the borehole before any hydraulic fracturing takes place. Measurement of formation density is made using a sealed caesium-137 source, which bombards the formation with high-energy gamma rays. The attenuation of these gamma rays gives an accurate measure of formation density; this has been a standard oilfield tool since 1965. Another source is the americium–beryllium (Am–Be) neutron source, used in evaluating the porosity of the formation; it has been used since 1950. In a drilling context, these sources are used by trained personnel, and radiation exposure of those personnel is monitored. Usage is covered by licenses following International Atomic Energy Agency (IAEA) guidelines, European Union protocols, and the Environment Agency in the UK. Licenses are required for access, transport, and use of radioactive sources. These sources are of high activity, and the potential for their use in a 'dirty bomb' means security issues are considered important. Under normal usage there is no risk to the public or to water supplies. The sources are transported to a well site in shielded containers, which means exposure to the public is very low, much lower than the background radiation dose received in one day.
Radiotracers and markers
The oil and gas industry in general uses unsealed radioactive solids (powder and granular forms), liquids and gases to investigate or trace the movement of materials. The most common use of these radiotracers is at the well head for the measurement of flow rate for various purposes. A 1995 study found that radioactive tracers were used in over 15% of stimulated oil and gas wells.
Use of these radioactive tracers is strictly controlled. It is recommended that the radiotracer be chosen to have readily detectable radiation, appropriate chemical properties, and a half-life and toxicity level that will minimize initial and residual contamination. Operators are to ensure that licensed material will be used, transported, stored, and disposed of in such a way that members of the public will not receive more than 1 mSv (100 mrem) in one year, and that the dose in any unrestricted area will not exceed 0.02 mSv (2 mrem) in any one hour. They are required to secure stored licensed material against access, removal, or use by unauthorized personnel, and to maintain constant surveillance of licensed material when it is in use and not in storage. Federal and state nuclear regulatory agencies keep records of the radionuclides used.
As of 2003, the isotopes antimony-124, argon-41, cobalt-60, iodine-131, iridium-192, lanthanum-140, manganese-56, scandium-46, sodium-24, silver-110m, technetium-99m, and xenon-133 were most commonly used by the oil and gas industry because they are easily identified and measured. Bromine-82, carbon-14, hydrogen-3, and iodine-125 are also used.
Examples of amounts used are:
In hydraulic fracturing, plastic pellets coated with silver-110m or sand labelled with iridium-192 may be added to a proppant when it is required to evaluate whether a fracturing process has penetrated rocks in the pay zone. Some radioactivity may be brought to the surface at the well head during testing to determine the injection profile and the location of fractures. Typically this uses very small (50 kBq) cobalt-60 sources, and dilution factors are such that the activity concentrations will be very low in the topside plant and equipment.
Regulation in the US
The NRC and approved state agencies regulate the use of injected radionuclides in hydraulic fracturing in the United States.
The US EPA sets radioactivity standards for drinking water. Federal and state regulators do not require sewage treatment plants that accept gas well wastewater to test for radioactivity. In Pennsylvania, where the hydraulic fracturing drilling boom began in 2008, most drinking-water intake plants downstream from those sewage treatment plants have not tested for radioactivity since before 2006. The EPA has asked the Pennsylvania Department of Environmental Protection to require community water systems in certain locations, and centralized wastewater treatment facilities to conduct testing for radionuclides.
See also
List of additives for hydraulic fracturing
Hydraulic fracturing proppants
References
Hydraulic fracturing
Radioactivity | Uses of radioactivity in oil and gas wells | [
"Physics",
"Chemistry"
] | 965 | [
"Petroleum technology",
"Natural gas technology",
"Nuclear physics",
"Hydraulic fracturing",
"Radioactivity"
] |
49,809,733 | https://en.wikipedia.org/wiki/Institute%20of%20Wood%20Science | The Institute of Wood Science (IWSc) was incorporated in 1955 as a professional body for the timber industries and allied professions. In 2009 it merged with the Institute of Materials, Minerals and Mining (IOM3), and became known as The Wood Technology Society. Following restructuring and rebranding of the IOM3 it changed its name to the Wood Technology Group in 2021.
The IWSc aimed to promote and encourage a better understanding of timber, wood-based materials and associated timber processes and products in the United Kingdom and beyond. It represented people employed within the timber importing, merchanting, manufacturing and user industries, together with those in education and research. In particular, it represented the interests and know-how of wood scientists and wood technologists. IWSc organised training and conferences. Through local groups it held meetings and visits to keep members up to date in an era when large technological advances were occurring in the wood products sector. This activity continues as the Wood Technology Group of IOM3.
History
The Institute of Wood Science (IWSc) was formed in 1955 by several leading members of the UK wood industry and wood scientists, who recognised that if timber was to compete effectively with other materials there needed to be an understanding of wood science and wood technologies. At this time there were dramatic changes in timber supply, innovations in timber use, and the introduction of new timber products, and there was a clear need for an organisation to encourage and support this advancement of knowledge. The IWSc was originally based in the City of London, UK, followed by many years at Hughenden Valley, near High Wycombe. It was registered as a charity in 1998. There were active local branches throughout the UK, as well as in Ireland, Australia and Canada.
In March 1958 the IWSc published the first edition of the Journal of the Institute of Wood Science (now the International Wood Products Journal).
Among the former presidents of the IWSc is Jean Marion Taylor, BSc FIWSc, who served from 1986 to 1988. Taylor, an entomologist working on chemical preservatives for wood, was the first female president of any of the Institutes that now make up the IOM3.
Function
The function of the IWSc was to provide a forum for the timber trade and timber research to come together, thereby furthering wood science and technology to the wider community. It provided recognised education and training qualifications in wood science and technology through courses and examinations. Membership grades were:
Hon Fellow (Hon FIWSc)
Fellow (FIWSc)
Associate (AIWSc)
Retired Member
Member (MIWSc)
Ordinary Member
Certificated Member (CMIWSc)
Student
The IWSc acted as the examining body for the UK timber trade, awarding qualifications at Certificate (intermediate) and Associate levels. Both levels were based on a workbook concept and were designed to provide information supporting self-directed study towards completion. The theory was complemented by several practical exercises. People successful at the Certificate level were able to use the post-nominal letters CMIWSc, and those at Associate level AIWSc. These post-nominals are still used today by some former members to signify the qualifications.
See also
Alice Holt Research Station
Furniture Industry Research Association
List of forest research institutes
Forestry Commission
Timber Trade Federation
Journal: Wood Science and Technology
References
External links
The Wood Technology Group of IOM3
1955 establishments in the United Kingdom
British furniture
British research associations
Environmental research institutes
Forest research institutes
Forestry in the United Kingdom
High Wycombe
Materials science institutes
Research institutes established in 1955
Research institutes in Buckinghamshire
Timber industry
Wycombe District | Institute of Wood Science | [
"Materials_science",
"Environmental_science"
] | 742 | [
"Environmental research",
"Materials science organizations",
"Environmental research institutes",
"Materials science institutes"
] |
49,810,193 | https://en.wikipedia.org/wiki/Vladimir%20Pentkovski | Vladimir Mstislavovich Pentkovski (Russian: Владимир Мстиславович Пентковский; March 18, 1946, Moscow, Soviet Union – December 24, 2012, Folsom, California, United States) was a Soviet-American computer scientist, a graduate of the Moscow Institute of Physics and Technology and winner of the highest former Soviet Union's USSR State Prize (1987). He was one of the leading architects of the Soviet Elbrus supercomputers and the high-level programming language El-76. At the beginning of 1990s, he immigrated to the United States where he worked at Intel and led the team that developed the architecture for the Pentium III processor. According to a popular legend, Pentium processors were named after Vladimir Pentkovski.
Biography
Pentkovski was born in Moscow, USSR, into the family of the mathematician Mstislav Pentkovskii (1911–1968), Doctor of Physical and Mathematical Sciences, full professor (1955), full member of the National Academy of Sciences of the Republic of Kazakhstan (1958), and an author of works on the application of nomograms in engineering.
After graduating from the Moscow Institute of Physics and Technology (1970), he completed his PhD and Doctorate of Science. From 1970 to 1992 Pentkovski worked at the Lebedev Institute of Precision Mechanics and Computer Engineering, designing the supercomputers Elbrus-1 and Elbrus-2 and leading the development of the high-level programming language El-76.
Starting in 1986, he led the research and development of the 32-bit microprocessor El-90, which combined RISC concepts with the Elbrus-2 architecture. The logical design of the El-90 processor was finished by 1987, with a prototype launched in 1990. At the same time Pentkovski started designing the El-91C microprocessor based on the El-90 design, but the project was closed due to the changes in the Russian political and economic systems.
In February 1993 Pentkovski started his career at Intel and rose to the level of Senior Principal Engineer. He focused mainly on CPU architecture, working on several generations of x86 from single-core to multi-core to many-core. From the beginning of the 2000s he led the Russian CPU development team working on a new processor with a Vector Instruction Pointer (VIP) architecture.
In 2010, under Pentkovski's leadership, Intel and the Moscow Institute of Physics and Technology (MIPT) won the contest of university proposals to launch major world-class research initiatives with the participation of prominent international scientists, conducted by the Ministry of Education and Science of the Russian Federation, and received a grant of 150 million rubles. A team of Intel engineers led by Pentkovski, in collaboration with MIPT researchers, launched a lab targeting research and development of computationally intensive applications. The lab, iSCALARE, focused primarily on problem-oriented, highly parallel hardware and software architectures for bioinformatics, drug design, and pharmaceuticals.
After Pentkovski's death in 2012, he was survived by his two children: his son Mstislav Pentkovsky, who works as an opera stage director at the Mariinsky Theatre, and his daughter Maria Pentkovski. His final resting place is in Folsom, California.
Selected publications
Пентковский В. М. Автокод Эльбрус. Эль-76. Принципы построения языка и руководство к использованию [Pentkovskii V.M. Avtokod el'brus: printsipy postroeniia iazyka i rukovodstvo k pol'zovaniiu] под редакцией Ершова А. П. – М.: Наука, 1982. – 352 с. – OCLC 22953931.
Пентковский В. М. Язык программирования Эль-76. Принципы построения языка и руководство к пользованию. – 2-е изд, испр. и доп. – М.: Наука, 1989. – 364 с. – .
Paul M. Zagacki, Deep Buch, Emile Hsieh, Daniel Melaku, Vladimir M. Pentkovski, Hsien-Hsin S. Lee: Architecture of a 3D Software Stack for Peak Pentium III Processor Performance. // Intel Technology Journal. – 1999. – Т. 3. – No. 2.
Jagannath Keshava and Vladimir Pentkovski: Pentium® III Processor Implementation Tradeoffs. // Intel Technology Journal. – 1999. – Т. 3. – No. 2.
Srinivas K. Raman, Vladimir M. Pentkovski, Jagannath Keshava: Implementing Streaming SIMD Extensions on the Pentium III Processor. // IEEE Micro, Volume 20, Number 1, January/February 2000: 47–57 (2000)
Deep K. Buch, Vladimir M. Pentkovski: Experience of Characterization of Typical Multi-Tier e-Business System Using Operational Analysis. / 27th International Computer Measurement Group Conference, 2001: 671–682.
Patents
Shared cache structure for temporal and non-temporal instructions
Efficient utilization of write-combining buffers
Pipelined processing of short data streams using data prefetching
Processing polygon meshes using mesh pool window
System and method for cache sharing
Method and apparatus for shared cache coherency for a chip multiprocessor or multiprocessor system
Method and apparatus for floating point operations and format conversion operations
Multiprocessor-scalable streaming data server arrangement
Method and apparatus for performing cache segment flush and cache segment invalidation operations
Method and system for efficient handlings of serial and parallel java operations
Selective interrupt delivery to multiple processors having independent operating systems
Executing partial-width packed data instructions
Method and apparatus for processing 2D operations in a tiled graphics architecture
Method and apparatus for mapping address space of integrated programmable devices within host system memory
Method and apparatus for efficiently processing vertex information in a video graphics system
Method and apparatus for prefetching data into cache
Conversion between packed floating point data and packed 32-bit integer data in different architectural registers
Conversion from packed floating point data to packed 8-bit integer data in different architectural registers
Notes
Moscow Institute of Physics and Technology alumni
1946 births
Supercomputers
2012 deaths
Russian computer scientists
American computer scientists
American people of Russian descent
Soviet computer scientists | Vladimir Pentkovski | [
"Technology"
] | 1,469 | [
"Supercomputers",
"Supercomputing"
] |
49,810,721 | https://en.wikipedia.org/wiki/Alpelisib | Alpelisib, sold under the brand name Piqray among others, is a medication used to treat certain types of breast cancer. It is used together with fulvestrant. It is taken by mouth. It is marketed by Novartis.
Common side effects include high blood sugar, kidney problems, diarrhea, rash, low blood cells, liver problems, pancreatitis, vomiting, and hair loss. It is an alpha-specific PI3K inhibitor. It was approved for medical use in the United States in May 2019.
Medical uses
Alpelisib is indicated in combination with fulvestrant for the treatment of postmenopausal women, and men, with hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative, PIK3CA-mutated, advanced or metastatic breast cancer as detected by an FDA-approved test following progression on or after an endocrine-based regimen.
In the European Union, alpelisib is indicated in combination with fulvestrant for the treatment of postmenopausal women, and men, with hormone receptor (HR)‑positive, human epidermal growth factor receptor 2 (HER2)‑negative, locally advanced or metastatic breast cancer with a PIK3CA mutation after disease progression following endocrine therapy as monotherapy.
In April 2022, the indication for alpelisib was expanded in the US to include the treatment of severe manifestations of PIK3CA-related overgrowth spectrum (PROS) in those who require systemic therapy.
History
In May 2019, alpelisib was approved in the United States for use in combination with the endocrine therapy fulvestrant, to treat postmenopausal women, and men, with hormone receptor (HR)-positive, human epidermal growth factor receptor 2 (HER2)-negative, PIK3CA-mutated, advanced or metastatic breast cancer following progression on or after an endocrine-based regimen.
The U.S. Food and Drug Administration (FDA) also approved the companion diagnostic test, therascreen PIK3CA RGQ PCR Kit, to detect the PIK3CA mutation in a tissue and/or a liquid biopsy.
The efficacy of alpelisib was studied in the SOLAR-1 trial (NCT02437318), a randomized trial of 572 postmenopausal women and men with HR-positive, HER2-negative, advanced or metastatic breast cancer whose cancer had progressed while on or after receiving an aromatase inhibitor.
The FDA granted the application for alpelisib priority review designation and granted approval of Piqray to Novartis. The FDA granted approval of the therascreen PIK3CA RGQ PCR Kit to Qiagen Manchester, Ltd.
On 28 May 2020, the Committee for Medicinal Products for Human Use (CHMP) of the European Medicines Agency (EMA) adopted a positive opinion, recommending the granting of a marketing authorization for the medicinal product alpelisib (Piqray), intended for the treatment of locally advanced or metastatic breast cancer with a PIK3CA mutation. The applicant for this medicinal product is Novartis Europharm Limited. Alpelisib was approved for medical use in the European Union in July 2020.
Society and culture
Legal status
Alpelisib was approved for medical use in the United States in May 2019, in Australia in March 2020, and in the European Union in July 2020.
References
External links
Amides
Drugs developed by Novartis
Phosphoinositide 3-kinase inhibitors
Pyridines
Pyrrolidines
Thiazoles
Trifluoromethyl compounds
Ureas | Alpelisib | [
"Chemistry"
] | 786 | [
"Organic compounds",
"Amides",
"Functional groups",
"Ureas"
] |
49,821,898 | https://en.wikipedia.org/wiki/Polymer%20sponge | Taking clues from spongy toddler toys that can absorb water and inflate to bigger sizes, scientists at Mayo Clinical Research Centre, Rochester, Minnesota, United States have developed biodegradable polymer grafts that, when surgically placed in damaged vertebrae, intended to grow such that it is just the right size and shape to fix the spinal column.
Any problem with the spinal column is a potential disability: it can limit a person's mobility, cause significant pain, and lead to mental distress. This has been the subject of research by Lichun Lu and Xifeng Liu, scientists from Mayo Clinic's college of medicine, who have developed a novel spinal graft that, once surgically placed in the body, grows to just the right size and shape to repair the spinal column. They presented their work at the 251st National Meeting & Exposition of the non-profit organization American Chemical Society (ACS).
Problem
Current treatments for spinal tumours are considered expensive and invasive. When cancer metastasizes, it predominantly tends to settle in the spinal column, so a different approach to replacing damaged vertebrae has been investigated. The polymer sponge researchers were reported to be presenting their work in March 2016 at a meeting of the American Chemical Society (ACS).
Solution
Doctors can cut out the infected bone tissue (or replace it outright, as was done in the Sydney case), but that leaves large gaps in the spine. Normally, doctors would either have to open the chest cavity and access the spine from the far side (which entails a lengthy recovery and a high probability of complications), or make a small incision in the neck or back and inject expandable titanium rods into the bone gap (which is expensive because of the titanium). This new technique combines the easy access and short recovery of the titanium rod method with the low cost of the open-chest operation. The use of sponges for the treatment of such problems has long been suggested.
Procedure
Doctors cut a small hole in the patient's neck or back and inject a hydrogel polymer into the bone gap, much the same way they would insert a titanium rod. The polymer absorbs fluids from within the wound and grows to fill the gap. Doctors control how far the polymer expands in any direction by first inserting a "cage", a pre-expanded shell that the polymer fills in as it spreads, much like the wooden formwork that keeps freshly poured concrete in place until it hardens. Once the polymer fills the cage, which takes 5 to 10 minutes on average, it sets and hardens into a viable prosthetic. From there, surrounding bone tissue grows into and through the polymer, reinforcing and cementing it in place.
Process
The sponge-like polymer polycaprolactone (PCL) shows promise as a medical material that can be used to fill gaps in human bones and serve as a scaffold to promote new bone growth. Injuries, birth defects (such as cleft lips and palates), or the removal of tumours in the case of bone cancer can create gaps in bone that are too large to heal naturally. When they occur in the head, face, or jaw, such gaps may dramatically alter a person's appearance.
Transplant rejection
While there is a possibility that a graft is rejected, such complications may be averted by techniques such as bone marrow transplantation, blood transfusion, and T-lymphocyte modification.
The use of polymer sponges in this field is still in its infancy, and research into the biotechnological applications needed to make the concept available to humans and animals may require more substantial funding.
See also
Sponge (animal)
Sponge (material)
Transplant rejection
Vertebral column
Vertebrates
References
Bionics
Implants (medicine)
Medical equipment
Medical devices | Polymer sponge | [
"Engineering",
"Biology"
] | 816 | [
"Bionics",
"Medical devices",
"Medical equipment",
"Medical technology"
] |
60,048,566 | https://en.wikipedia.org/wiki/PyClone | PyClone is a software that implements a Hierarchical Bayes statistical model to estimate cellular frequency patterns of mutations in a population of cancer cells using observed alternate allele frequencies, copy number, and loss of heterozygosity (LOH) information. PyClone outputs clusters of variants based on calculated cellular frequencies of mutations.
Background
According to the Clonal Evolution model proposed by Peter Nowell, a mutated cancer cell can accumulate more mutations as it progresses to create sub-clones. These cells divide and mutate further to give rise to other sub-populations. In compliance with the theory of natural selection, some mutations may be advantageous to the cancer cells and thus make the cell immune to previous treatment. Heterogeneity within a single cancer tumour can arise from single nucleotide polymorphism/variation (SNP/SNV) events, microsatellite shifts and instability, loss of heterozygosity (LOH), copy number variation, and karyotypic variations including chromosome structural aberrations and aneuploidy. Because current methods of molecular analysis lyse and sequence a mixed population of cancer cells, heterogeneity within the tumour cell population is under-detected. This results in a lack of information on the clonal composition of cancer tumours; more knowledge in this area would aid treatment decisions.
PyClone is a hierarchical Bayes statistical model that uses measurements of allele frequency and allele-specific copy number to estimate the proportion of tumour cells harbouring a mutation. By using deeply sequenced data to find putative clonal clusters, PyClone estimates the cellular prevalence, the proportion of cancer cells harbouring a mutation, in the input sample. Progress has been made in measuring variant allele frequency with deep sequencing data, but statistical approaches to cluster mutations into biologically relevant groups remain underdeveloped. The commonness of a mutation among cells is difficult to measure because the proportion of cells that harbour a mutation does not relate simply to allelic prevalence: allelic prevalence depends on multiple factors, such as the proportion of 'contaminating' normal cells in the sample, the proportion of tumour cells harbouring the mutation, the number of allelic copies of the mutation in each cell, and sources of technical noise. PyClone is among the first methods to incorporate variant allele frequencies (VAFs) with allele-specific copy numbers. It also accounts for allelic imbalance, where alleles of a gene are expressed at different levels in a given cell, which may occur due to segmental CNVs and normal cell contamination.
Workflow
Input
PyClone requires 2 inputs:
A set of deeply sequenced mutations from one or more samples derived from a single patient. Deep sequencing, also referred to as high throughput sequencing, uses methods such as sequencing by synthesis to sequence a genomic region with high coverage in order to detect rare clonal types and contaminating normal cells that comprise as little as 1% of the sample.
A measure of allele specific copy number at each mutation location. This is obtained from microarray-based comparative genomic hybridization or whole genome sequencing methods to detect chromosomal or copy number changes.
Statistical modeling
For each mutation, the PyClone model divides the input sample into three sub-populations. The three sub-populations are the normal (non-malignant) population consisting of normal cells, the reference cancer population consisting of cancer cells wild type for the mutation, and the variant cancer cell population consisting of the cancer cells with at least one variant allele of the mutation.
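To make this three-population decomposition concrete, the sketch below computes the variant allele frequency expected from a mixture of the three sub-populations, weighting each by its cell fraction and copy number. This is a simplified rendering of the kind of relationship the PyClone model inverts; all parameter values are invented for illustration:

```python
def expected_vaf(phi, t, cn_n=2.0, cn_r=2.0, cn_v=2.0,
                 mu_n=0.0, mu_r=0.0, mu_v=0.5):
    """Expected variant allele frequency under a three-population mixture.

    phi  -- cellular prevalence: fraction of cancer cells carrying the mutation
    t    -- tumour content: fraction of sampled cells that are cancerous
    cn_* -- total copy number at the locus in the normal (n), reference (r)
            and variant (v) populations
    mu_* -- probability of sampling a variant allele from each population
            (0.5 for one mutant allele out of two copies)
    """
    # Each population contributes sequencing reads in proportion to its
    # cell fraction times its copy number at the locus.
    w_n = (1 - t) * cn_n        # normal (non-malignant) cells
    w_r = t * (1 - phi) * cn_r  # cancer cells wild type for the mutation
    w_v = t * phi * cn_v        # cancer cells carrying the mutation
    return (w_n * mu_n + w_r * mu_r + w_v * mu_v) / (w_n + w_r + w_v)

# A heterozygous diploid mutation in 80% of cancer cells at 60% tumour purity:
print(expected_vaf(phi=0.8, t=0.6))  # ~0.24, well below the naive phi/2 = 0.4
```

This illustrates why observed allelic prevalence cannot be read directly as cellular prevalence: normal-cell contamination and copy number both dilute the signal.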
PyClone implements four advances in its statistical model that were tested on simulated datasets:
Beta-binomial emission densities
Beta-binomial emission densities are used by PyClone and are more effective than the binomial models used by previous tools. Beta-binomial emission densities more accurately model input datasets that have more variance in allelic prevalence measurements. Higher accuracy in modeling this variance translates to higher confidence in the clusterings outputted by PyClone.
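The practical difference between the two emission models can be seen with SciPy; this is a minimal illustration under made-up read counts, not PyClone's actual implementation:

```python
from scipy.stats import binom, betabinom

def beta_params(mean, precision):
    """Beta(a, b) shape parameters with the given mean; lower precision
    means a wider Beta and hence a more overdispersed beta-binomial."""
    return mean * precision, (1.0 - mean) * precision

n, k = 1000, 300   # total and variant read counts at a locus (invented)
m = 0.25           # expected allelic prevalence under some model state

# Binomial: all variance comes from finite read sampling.
print("binomial log-lik:     ", binom.logpmf(k, n, m))

# Beta-binomial: extra variance absorbs technical noise in the measurement,
# so moderately discrepant counts are penalized far less harshly.
a, b = beta_params(m, precision=100.0)
print("beta-binomial log-lik:", betabinom.logpmf(k, n, a, b))
```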
Priors
PyClone acknowledges that some geometrical structures and properties of the clonal population to be reconstructed, such as copy number, are known. When not enough information is available or taken into account, the reconstruction is usually of low confidence and many solutions are possible. PyClone uses flexible prior probability estimates over possible mutational genotypes to link allelic prevalence measurements to zygosity and copy number variants, and is one of the first methods to incorporate variant allele frequencies (VAFs) with allele-specific copy numbers.
Bayesian nonparametric clustering
Instead of fixing the number of clusters prior to clustering, Bayesian nonparametric clustering is used to discover groupings of mutations and the number of groups simultaneously. This allows for cellular prevalence estimates to reflect uncertainty in this parameter.
Section sequencing
Multiple samples from the same patient can be analyzed at the same time to leverage the scenario in which clonal populations are shared across samples. When multiple samples are sequenced, subclonal populations that are similar in allelic prevalence in some samples but not in others can be differentiated from each other.
Output
PyClone outputs posterior densities of cellular prevalences for the mutations in the sample and a matrix containing the probability any two mutations occur in the same cluster. Estimates of clonal populations from differing cellular prevalences of mutations are then generated from the posterior densities.
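The co-clustering matrix can be reproduced in miniature from any trace of posterior clusterings: for each pair of mutations, count the fraction of posterior samples in which they share a cluster label. The trace below is invented for illustration:

```python
import numpy as np

# Hypothetical MCMC trace: each row is one posterior sample assigning five
# mutations to clusters (the labels themselves are arbitrary).
trace = np.array([
    [0, 0, 1, 1, 2],
    [0, 0, 1, 1, 1],
    [0, 0, 2, 2, 1],
    [0, 0, 1, 1, 2],
])

# Posterior probability that mutations i and j fall in the same cluster.
same_cluster = (trace[:, :, None] == trace[:, None, :]).mean(axis=0)
print(same_cluster)  # mutations 0,1 always co-cluster, as do 2,3
```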
Applications
PyClone is used to analyze deeply sequenced (over 100× coverage) mutations to identify and quantify clonal populations in tumors. Some applications include:
Xenografting is used as a reasonable model to study human breast cancer, but the consequences of engraftment and genomic propagation of xenografts have not been examined at single-cell resolution. PyClone can be used to follow the clonal dynamics of initial grafts and serial propagation of primary and metastatic human breast cancers in immunodeficient mice, and can predict how clonal dynamics differ after initial engraftment and over serial passage generations.
Circulating tumour DNA (plasma DNA) analysis can be used to track tumour burden and analyse cancer genomes non-invasively, but the extent to which it represents metastatic heterogeneity is unknown. PyClone can be used to compare the clonal population structures present in tumour and plasma samples from amplicon sequencing data. Stem and metastatic-clade mutation clusters can be inferred using PyClone and then compared to results from clonal ordering.
Serial Time Point Sequencing: PyClone can be used to study the evolution of mutational clusters as cancer progresses. With samples taken from different time points, PyClone can identify the expansion and decline of initial clones and discover newly acquired subclones that arise during treatment. Understanding clonal dynamics improves understanding of how related cancers such as MDS, MPN and sAML compare in risk, and gives insight into the clinical significance of somatic mutations.
Section sequencing: PyClone is most effective for section sequencing of tumor DNA, in which samples are taken from different portions of a single tumour to infer clonal structure from differential cellular prevalence. Advantages of section sequencing include greater statistical power and information on the spatial position and interactions of the clones, uncovering how tumors evolve in space.
Assumptions
A key assumption of the PyClone model is that all cells within a clonal population have the same genotype. This assumption is likely false since copy number alterations and loss of heterozygosity events are common in cancer cells. The amount of error introduced by this assumption depends on the variability of genotype of cells in the location of interest. For example, in solid tumors the cells of a sample are spatially close together resulting in a small error rate, but for liquid tumors the assumption may introduce more error as cancer cells are mobile.
Another assumption made is that the sample follows a perfect and persistent phylogeny. This means that no site mutates more than once in a clonal population and each site has at most one mutant genotype. Mutations that revert to normal genotype, deletions of segments of DNA harbouring mutations and recurrent mutations are not accounted for in PyClone as it would lead to unidentifiable explanations for some observed data.
Limitations
In order to obtain input data for PyClone, cell lysis is a required step to prepare bulk sample sequencing. This results in the loss of information on the complete set of mutations defining a clonal population. PyClone can distinguish and identify the frequency of different clonal populations but can not identify exact mutations defining these populations.
Instead of clustering cells by mutational composition, PyClone clusters mutations that have similar cellular frequencies. Sub-clones that have similar cellular frequencies will therefore be mistakenly clustered together. The chance of making this error decreases when using targeted deep sequencing with high coverage and joint analysis of multiple samples.
A confounding factor in the PyClone model is imprecise input information on the genotype of the sample and the depth of sequencing. Insufficient information on the genotypes of mutations and on sequencing depth produces uncertainty in the posterior densities, leaving interpretation and clustering of the sample to rely on the assumptions made by the PyClone model.
Similar tools
SciClone- SciClone is a Bayesian clustering method on single nucleotide variants (SNVs).
Clomial – Clomial is a Bayesian clustering method with a decomposition process. Both Clomial and SciClone limit analysis to SNVs located in copy-number-neutral regions. The tumor is physically divided into subsections and deep sequenced to measure normal and variant alleles. Their inference model uses the Expectation–Maximization algorithm.
GLClone – GLClone uses a hierarchical probabilistic model and Bayesian posteriors to calculate copy number alterations in sub-clones.
Cloe - Cloe uses a phylogenetic latent feature model for analyzing sequencing data to distinguish the genotypes and the frequency of clones in a tumor.
PhyC - PhyC uses an unsupervised learning approach to identify subgroups of patients by clustering their cancer evolutionary trees. In simulation analyses it identified patterns of different evolutionary modes, and on real datasets it successfully detected phenotype-related and cancer-type-related subgroups and characterized the tree structures within them.
PhyloWGS - PhyloWGS reconstructs tumor phylogenies and characterizes the subclonal populations present in a tumor sample using both SSMs and CNVs.
References
Computational biology | PyClone | [
"Biology"
] | 2,260 | [
"Computational biology"
] |
60,048,801 | https://en.wikipedia.org/wiki/RopB%20transcriptional%20regulator | RopB transcriptional regulator, also known as RopB/Rgg transcriptional regulator is a transcriptional regulator protein that regulates expression of the extracellularly secreted cysteine protease streptococcal pyrogenic exotoxin B (speB or streptopain), which is an important virulence factor of Streptococcus pyogenes and is responsible for the dissemination of a host of infectious diseases including strep throat, impetigo, streptococcal toxic shock syndrome, necrotizing fasciitis, and scarlet fever. Functional studies suggest that the ropB multigene regulon is responsible for not only global regulation of virulence but also a wide range of functions from stress response, metabolic function, and two-component signaling. Structural studies implicate ropB's regulatory action being reliant on a complex interaction involving quorum sensing with the leaderless peptide signal speB-inducing peptide (SIP) acting in conjunction with a pH sensitive histidine switch.
Discovery
Observations of an extracellularly secreted glucosyltransferase (gtfG) sequentially proximal to, and activated by, an rgg gene with inverted repeats in the intergenic region of Streptococcus gordonii served as a basis for studying its homology with Streptococcus pyogenes. It was discovered that S. pyogenes also has an rgg/ropB gene located directly next to the subject of its transcriptional regulation, in this case the speB protease, with inverted repeats in the intergenic region. Confirmation of the linkage between rgg/ropB and activation of speB secretion was achieved by insertional disruption of ropB, which resulted in decreased speB production.
Structure
Gene location
The ropB gene is located directly and sequentially proximal to the subject of its transcriptional regulation, speB, which lies downstream of a 941 bp intergenic region between the two. Transcription of the ropB gene appears to require a promoter within a series of sequences between 238 and 480 bp, and up to 800 bp, upstream of the gene itself inside the highly repetitive intergenic region.
Protein binding location
The ropB protein binding location lies adjacent to speB promoter 1, which is also located within the highly repetitive intergenic region, although the ropB gene and the speB gene are transcribed in opposite directions. The -10 and -35 regions of speB promoter 1 have poor consensus; to compensate, ropB aids RNA polymerase binding with the help of a polyU polypyrimidine tract inside the palindromic inverted repeat region, in a fashion similar to intrinsic termination in E. coli.
Protein domains
N-Terminal
The N-terminal domain consists of amino acids 1–56; it is an amino-terminal DNA-binding domain and a key mediator of the linkage to the C-terminal domain of the opposite subunit of the dimer. Dimer interface II has its I255 side chain located in the N-terminal domain.
C-Terminal
The C-terminal domain, also known as ropB-CTD, is a carboxy-terminal ligand-binding domain made up of amino acids 56–280. RopB-CTD houses 5 TPR motifs and attaches to the SIP peptide in the innermost part of the SIP binding pocket in a sequence-specific manner, without induction of polymerization.
TPR domain
The tetratricopeptide repeat (TPR) domain provides the concave surface required for SIP recognition. RopB-CTD houses 5 stacked TPR motifs, each having sets of paired antiparallel helices that aid in the formation of a concave inner pathway and a convex exterior. The base of the recognition site is constructed by alpha helices α6 and α8, while the supporting walls are constructed from helices α2 and α12. The exterior portion of the recognition site is flanked by asparagines N152 and N192, providing a ridge of support for the peptide–protein complex.
Dimer interface
The dimer interfaces of ropB are constructed by a union of the α8–α12 helices of the N-terminal and C-terminal domains. Interface I is forged from three side chains (C22, Y224, and R226) and Interface II from one side chain (I255); together with the N-terminal domains, these are responsible for dimerizing the ropB protein subunits.
Peptide binding pocket
The SIP peptide binding pocket is the docking station of the eight-amino-acid leaderless peptide signal, speB-inducing peptide (SIP). The binding pocket is a tripartite construction of the C-terminal domain's α12 helix, which is a capping helix; TPR3's α6 helix, which has hydrophobic interplay with SIP sidechains; and TPR4's α8 helix, which electrostatically stabilizes SIP. Variations in pH alter the strength of adherence between SIP and the SIP binding pocket, with acidic pH between 5.5 and 6.5 enhancing adherence and pH between 7 and 9 reducing it.
Histidine switch
Though the ropB protein has seven histidines (H12, H81, H93, H144, H265, H266, and H277) structurally present, the ropB histidine switch operates primarily through a single functionally involved histidine (H144), conveniently placed to associate with ropB sidechains (Y176 and E185) that draw near each other upon protonation of H144 under acidic conditions. Only one histidine (H12) is located in the N-terminal domain, while the rest lie in the C-terminal domain.
Regulon kinetics
Streptococcus pyogenes has evolved an interwoven complex of gene regulatory mechanisms in the SIP signaling pathway by implanting a pH-sensitive histidine switch onto the quorum-sensing ropB protein. Under neutral to basic pH conditions, whether synthetically induced or naturally caused by low population density of S. pyogenes, the interaction of the unprotonated functionally involved histidine (H144) with relevant sidechains (Y176, Y182, E185) in the SIP binding pocket domain is impaired, and speB protease expression is inhibited. On the other hand, as extracellular pH becomes more acidic in cases of high population density, and because S. pyogenes has no elaborate pH homeostatic capabilities relative to non-lactic bacteria, intracellular cytosolic pH more easily comes to resemble extracellular pH. Cytosolic acidification mobilizes the SIP pathway, allowing the SIP-ropB protein complex to form and increasing SIP production. Furthermore, increased cytosolic acidity enhances the maturation of the speB zymogen (speBz) into the mature speB protease (speBm), dramatically increasing its proteolytic activity and virulence.
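As a back-of-the-envelope illustration of the pH side of this switch, the Henderson–Hasselbalch relation gives the protonated fraction of a titratable side chain as a function of pH. A pKa of about 6, typical for a histidine side chain, is assumed here; the article does not give the actual pKa of H144:

```python
def fraction_protonated(ph, pka=6.0):
    """Henderson-Hasselbalch: fraction of a side chain protonated at a
    given pH (pKa ~6 is a typical value for histidine)."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (5.5, 6.5, 7.5):
    print(f"pH {ph}: {fraction_protonated(ph):.0%} protonated")
# pH 5.5: 76%   pH 6.5: 24%   pH 7.5: 3%
```

The steep drop between mildly acidic and neutral pH is consistent with the SIP binding behaviour described above, where adherence is enhanced between pH 5.5 and 6.5 and reduced between pH 7 and 9.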
Homology
Rgg family
Rgg-like transcriptional regulators can be found in a variety of gram-positive bacteria. Where ropB regulates speB protease production in S. pyogenes, roughly equivalent secretory control mechanisms can be seen in Rgg's regulation of gtfG glucosyltransferase production in S. gordonii, gadR's regulation of acid resistance in Lactococcus lactis, lasX's regulation of expression of the lantibiotic lactocin S in Lactobacillus sakei, and mutR's regulation of mutacin in S. mutans. Sequentially, these genes are all localized contiguously to their respective subjects of regulation and share promoters localized contiguously to inverted repeat regions.
RRNPP family
Characterization of the RRNPP family of quorum-sensing regulators (named for the proteins Rap, Rgg, NprR, PrgX, and PlcR) was used in comparisons with ropB to postulate its structural functions. The Rap proteins of Bacilli regulate sporulation, the NprR protein in Bacillus thuringiensis regulates necrotrophism, the PrgX protein regulates conjugation in Enterococcus faecalis, and the PlcR protein regulates transcription of virulence factors in both Bacillus thuringiensis and Bacillus cereus. Similarities were observed between conserved asparagine residues on the TPR motifs of each of these proteins and those of ropB.
Quorum sensing
Quorum sensing regulates a wide range of processes in Bacillota, including the production of ropB-like proteins in Streptococcus pneumoniae and S. pyogenes. Similarities in the pH sensitivity of the cell signaling mechanisms have also been found in pneumococci, S. mutans, and Staphylococcus aureus.
pH sensitive histidine switch
Amongst Rgg-like proteins, it has been observed that the pH-sensitive histidine (particularly H144) and interacting amino acids (Y176, Y182, and E185) of ropB of Streptococcus pyogenes are conserved in S. porcinus, S. pseudoporcinus, S. salivarius, L. pentosus, L. aviarius, L. reuteri, and Enterococcus species including E. faecalis, suggesting the usage of a pH-sensitive histidine switch in complex with gene-regulating effector molecules in a slew of other bacteria (see also: allosteric regulation).
Pathogenesis
RopB regulation of speB is a key determinant of the expression of the speB proteinase, which is a primary virulence factor and the most abundant extracellular protein in streptococcal secretions. SpeB cleaves host serum proteins that make up the human extracellular matrix, as well as bacterial proteins including other secreted streptococcal proteins. As previously mentioned, it is responsible for the dissemination of a host of infectious diseases including, but not limited to, pharyngitis, impetigo, streptococcal toxic shock syndrome, necrotizing fasciitis, and scarlet fever. Therefore, the study of the inactivation of speB's many functional pathways and regulators is of critical importance in developing potential novel therapeutics.
See also
Erythrogenic toxins
Allosteric regulation
Alpha helix
References
Gene expression | RopB transcriptional regulator | [
"Chemistry",
"Biology"
] | 2,224 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
60,054,667 | https://en.wikipedia.org/wiki/ROSE%20test | The resistivity of solvent extract (ROSE) test is a test for the presence and average concentration of soluble ionic contaminants, for example on a printed circuit board (PCB). It was developed in the early 1970s. Some manufacturers use it as part of Six Sigma processes.
Some modern fluxes have low solubility in traditional ROSE solvents such as water and isopropyl alcohol, and therefore require the use of different solvents.
References
Chemical tests
Printed circuit board manufacturing | ROSE test | [
"Chemistry",
"Engineering"
] | 101 | [
"Chemical tests",
"Electronic engineering",
"Analytical chemistry stubs",
"Electrical engineering",
"Printed circuit board manufacturing"
] |