Dataset columns: id (int64, 39 to 79M), url (string, 32 to 168 chars), text (string, 7 to 145k chars), source (string, 2 to 105 chars), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
31,103,500
https://en.wikipedia.org/wiki/Reasoning%20system
In information technology, a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems. By the everyday usage definition of the phrase, all computer systems are reasoning systems in that they all automate some type of logic or decision. In typical use in the information technology field, however, the phrase is usually reserved for systems that perform more complex kinds of reasoning: not for systems that do fairly straightforward types of reasoning, such as calculating a sales tax or customer discount, but for systems that make logical inferences about a medical diagnosis or a mathematical theorem. Reasoning systems come in two modes: interactive and batch processing. Interactive systems interface with the user to ask clarifying questions or otherwise allow the user to guide the reasoning process. Batch systems take in all the available information at once and generate the best answer possible without user feedback or guidance. Reasoning systems have a wide field of application that includes scheduling, business rule processing, problem solving, complex event processing, intrusion detection, predictive analytics, robotics, computer vision, and natural language processing. History The first reasoning systems were theorem provers: systems that represent axioms and statements in first-order logic and then use rules of logic such as modus ponens to infer new statements. Another early type of reasoning system was the general problem solver. These were systems such as the General Problem Solver designed by Newell and Simon. General problem solvers attempted to provide a generic planning engine that could represent and solve structured problems. They worked by decomposing problems into smaller, more manageable sub-problems, solving each sub-problem, and assembling the partial answers into one final answer. Another example of a general problem solver was the SOAR family of systems. In practice these theorem provers and general problem solvers were seldom useful for practical applications and required specialized users with knowledge of logic. The first practical application of automated reasoning was expert systems. Expert systems focused on much more well-defined domains than general problem solving, such as medical diagnosis or analyzing faults in an aircraft. Expert systems also focused on more limited implementations of logic. Rather than attempting to implement the full range of logical expressions, they typically focused on modus ponens implemented via IF-THEN rules. Focusing on a specific domain and allowing only a restricted subset of logic improved the performance of such systems so that they were practical for use in the real world and not merely as research demonstrations, as most previous automated reasoning systems had been. The engines used for automated reasoning in expert systems were typically called inference engines. Those used for more general logical inferencing are typically called theorem provers. With the rise in popularity of expert systems, many new types of automated reasoning were applied to diverse problems in government and industry. Some, such as case-based reasoning, were offshoots of expert systems research. Others, such as constraint satisfaction algorithms, were influenced by fields such as decision technology and linear programming.
A completely different approach, one based not on symbolic reasoning but on a connectionist model, has also been extremely productive. This latter type of automated reasoning is especially well suited to pattern matching and signal detection types of problems, such as text searching and face matching. Use of logic The term reasoning system can be applied to just about any kind of sophisticated decision support system, as illustrated by the specific areas described below. However, the most common use of the term reasoning system implies the computer representation of logic. Various implementations demonstrate significant variation in terms of systems of logic and formality. Most reasoning systems implement variations of propositional and symbolic (predicate) logic. These variations may be mathematically precise representations of formal logic systems (e.g., FOL), or extended and hybrid versions of those systems (e.g., Courteous logic). Reasoning systems may explicitly implement additional logic types (e.g., modal, deontic, temporal logics). However, many reasoning systems implement imprecise and semi-formal approximations to recognised logic systems. These systems typically support a variety of procedural and semi-declarative techniques in order to model different reasoning strategies. They emphasise pragmatism over formality and may depend on custom extensions and attachments in order to solve real-world problems. Many reasoning systems employ deductive reasoning to draw inferences from available knowledge. These inference engines support forward reasoning or backward reasoning to infer conclusions via modus ponens. The recursive reasoning methods they employ are termed 'forward chaining' and 'backward chaining', respectively. Although reasoning systems widely support deductive inference, some systems employ abductive, inductive, defeasible and other types of reasoning. Heuristics may also be employed to determine acceptable solutions to intractable problems. Reasoning systems may employ the closed world assumption (CWA) or open world assumption (OWA). The OWA is often associated with ontological knowledge representation and the Semantic Web. Different systems exhibit a variety of approaches to negation. As well as logical or bitwise complement, systems may support existential forms of strong and weak negation, including negation-as-failure and 'inflationary' negation (negation of non-ground atoms). Different reasoning systems may support monotonic or non-monotonic reasoning, stratification and other logical techniques. Reasoning under uncertainty Many reasoning systems provide capabilities for reasoning under uncertainty. This is important when building situated reasoning agents which must deal with uncertain representations of the world. There are several common approaches to handling uncertainty. These include the use of certainty factors, probabilistic methods such as Bayesian inference or Dempster–Shafer theory, multi-valued ('fuzzy') logic and various connectionist approaches. Types of reasoning system This section provides a non-exhaustive and informal categorisation of common types of reasoning system. These categories are not absolute. They overlap to a significant degree and share a number of techniques, methods and algorithms. Constraint solvers Constraint solvers solve constraint satisfaction problems (CSPs). They support constraint programming. A constraint is a condition which must be met by any valid solution to a problem.
Constraints are defined declaratively and applied to variables within given domains. Constraint solvers use search, backtracking and constraint propagation techniques to find solutions and determine optimal solutions. They may employ forms of linear and nonlinear programming. They are often used to perform optimization within highly combinatorial problem spaces. For example, they may be used to calculate optimal scheduling, design efficient integrated circuits or maximise productivity in a manufacturing process. Theorem provers Theorem provers use automated reasoning techniques to determine proofs of mathematical theorems. They may also be used to verify existing proofs. In addition to academic use, typical applications of theorem provers include verification of the correctness of integrated circuits, software programs, engineering designs, etc. Logic programs Logic programs (LPs) are software programs written using programming languages whose primitives and expressions provide direct representations of constructs drawn from mathematical logic. An example of a general-purpose logic programming language is Prolog. LPs represent the direct application of logic programming to solve problems. Logic programming is characterised by highly declarative approaches based on formal logic, and has wide application across many disciplines. Rule engines Rule engines represent conditional logic as discrete rules. Rule sets can be managed and applied separately from other functionality. They have wide applicability across many domains. Many rule engines implement reasoning capabilities. A common approach is to implement production systems to support forward or backward chaining. Each rule ('production') binds a conjunction of predicate clauses to a list of executable actions. At run-time, the rule engine matches productions against facts and executes ('fires') the associated action list for each match. If those actions remove or modify any facts, or assert new facts, the engine immediately re-computes the set of matches. Rule engines are widely used to model and apply business rules, to control decision-making in automated processes and to enforce business and technical policies. Deductive classifier Deductive classifiers arose slightly later than rule-based systems and were a component of a new type of artificial intelligence knowledge representation tool known as frame languages. A frame language describes the problem domain as a set of classes, subclasses, and relations among the classes. It is similar to the object-oriented model. Unlike object-oriented models, however, frame languages have a formal semantics based on first-order logic. They utilize this semantics to provide input to the deductive classifier. The classifier in turn can analyze a given model (known as an ontology) and determine if the various relations described in the model are consistent. If the ontology is not consistent, the classifier will highlight the declarations that are inconsistent. If the ontology is consistent, the classifier can then do further reasoning and draw additional conclusions about the relations of the objects in the ontology. For example, it may determine that an object is actually a subclass or instance of additional classes beyond those described by the user. Classifiers are an important technology in analyzing the ontologies used to describe models in the Semantic Web. Machine learning systems Machine learning systems evolve their behavior over time based on experience.
This may involve reasoning over observed events or example data provided for training purposes. For example, machine learning systems may use inductive reasoning to generate hypotheses for observed facts. Learning systems search for generalised rules or functions that yield results in line with observations and then use these generalisations to control future behavior. Case-based reasoning systems Case-based reasoning (CBR) systems provide solutions to problems by analysing similarities to other problems for which known solutions already exist. Case-based reasoning uses the top (superficial) levels of similarity; namely, the object, feature, and value criteria. This distinguishes case-based reasoning from analogical reasoning, in that analogical reasoning uses only the "deep" similarity criterion, i.e. relationships or even relationships of relationships, and need not find similarity on the shallower levels. This difference makes case-based reasoning applicable only among cases of the same domain, because similar objects, features, and/or values must be in the same domain, while the "deep" similarity criterion of "relationships" makes analogical reasoning applicable across domains, where only the relationships are similar between the cases. CBR systems are commonly used in customer/technical support and call centre scenarios and have applications in industrial manufacture, agriculture, medicine, law and many other areas. Procedural reasoning systems A procedural reasoning system (PRS) uses reasoning techniques to select plans from a procedural knowledge base. Each plan represents a course of action for achievement of a given goal. The PRS implements a belief–desire–intention model by reasoning over facts ('beliefs') to select appropriate plans ('intentions') for given goals ('desires'). Typical applications of PRS include management, monitoring and fault detection systems. References Deductive reasoning Problem solving Automated reasoning Inductive reasoning Cognitive architecture Rule engines Expert systems Automated theorem proving Constraint programming Applied machine learning
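The production-system match-fire loop described under "Rule engines" above can be sketched in a few lines. The following is a minimal illustration only, not the API of any particular engine; the fact names and rule set are invented for the example.

```python
# Minimal forward-chaining sketch: each rule pairs a set of condition
# facts with a set of facts to assert. The engine keeps firing rules
# whose conditions hold until no rule can add anything new, i.e. until
# the fact base reaches a fixed point.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusions in rules:
            # Fire (modus ponens via an IF-THEN rule) only if the rule
            # matches and would still assert at least one new fact.
            if conditions <= facts and not conclusions <= facts:
                facts |= conclusions
                changed = True
    return facts

# Hypothetical diagnostic rules, echoing the medical-diagnosis example.
rules = [
    (frozenset({"fever", "rash"}), frozenset({"suspect_measles"})),
    (frozenset({"suspect_measles"}), frozenset({"order_serology"})),
]
print(forward_chain({"fever", "rash"}, rules))
# {'fever', 'rash', 'suspect_measles', 'order_serology'}
```

Backward chaining would instead start from a goal fact and recursively search for rules whose conclusions contain it.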
Reasoning system
[ "Mathematics", "Technology", "Engineering" ]
2,261
[ "Automated theorem proving", "Mathematical logic", "Cognitive architecture", "Computational mathematics", "Information systems", "Artificial intelligence engineering", "Expert systems" ]
31,104,438
https://en.wikipedia.org/wiki/K-tree
In graph theory, a k-tree is an undirected graph formed by starting with a (k + 1)-vertex complete graph and then repeatedly adding vertices in such a way that each added vertex v has exactly k neighbors U such that, together, the k + 1 vertices formed by v and U form a clique. Characterizations The k-trees are exactly the maximal graphs with a treewidth of k ("maximal" means that no more edges can be added without increasing their treewidth). They are also exactly the chordal graphs all of whose maximal cliques are the same size k + 1 and all of whose minimal clique separators are also all the same size k. Related graph classes 1-trees are the same as trees. 2-trees are maximal series–parallel graphs, and include also the maximal outerplanar graphs. Planar 3-trees are also known as Apollonian networks. The graphs that have treewidth at most k are exactly the subgraphs of k-trees, and for this reason they are called partial k-trees. The graphs formed by the edges and vertices of k-dimensional stacked polytopes, polytopes formed by starting from a simplex and then repeatedly gluing simplices onto the faces of the polytope, are k-trees when k ≥ 3. This gluing process mimics the construction of k-trees by adding vertices to a clique. A k-tree is the graph of a stacked polytope if and only if no three (k + 1)-vertex cliques have k vertices in common. References Graph minor theory Trees (graph theory) Perfect graphs Graph families
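The growth process in the definition translates directly into code. A minimal sketch, assuming networkx as the graph container; the random choice of which k-clique to extend is an illustrative assumption (the definition permits any choice).

```python
import itertools
import random
import networkx as nx

def random_k_tree(k, n, seed=None):
    """Grow a k-tree on n >= k + 1 vertices by the defining process."""
    rng = random.Random(seed)
    G = nx.complete_graph(k + 1)          # the initial (k+1)-vertex clique
    # All k-subsets of the initial clique are k-cliques we may attach to.
    k_cliques = [frozenset(c)
                 for c in itertools.combinations(range(k + 1), k)]
    for v in range(k + 1, n):
        U = rng.choice(k_cliques)         # neighbors of the new vertex v
        G.add_edges_from((v, u) for u in U)
        # v and U form a (k+1)-clique; its k-subsets containing v are
        # the new k-cliques available for later additions.
        k_cliques.extend(frozenset({v}) | frozenset(c)
                         for c in itertools.combinations(U, k - 1))
    return G

G = random_k_tree(2, 10, seed=0)  # a 2-tree: a maximal series-parallel graph
```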
K-tree
[ "Mathematics" ]
349
[ "Graph minor theory", "Mathematical relations", "Graph theory" ]
31,104,610
https://en.wikipedia.org/wiki/Apollonian%20network
In combinatorial mathematics, an Apollonian network is an undirected graph formed by a process of recursively subdividing a triangle into three smaller triangles. Apollonian networks may equivalently be defined as the planar 3-trees, the maximal planar chordal graphs, the uniquely 4-colorable planar graphs, and the graphs of stacked polytopes. They are named after Apollonius of Perga, who studied a related circle-packing construction. Definition An Apollonian network may be formed, starting from a single triangle embedded in the Euclidean plane, by repeatedly selecting a triangular face of the embedding, adding a new vertex inside the face, and connecting the new vertex to each vertex of the face containing it. In this way, the triangle containing the new vertex is subdivided into three smaller triangles, which may in turn be subdivided in the same way. Examples The complete graphs on three and four vertices, K3 and K4, are both Apollonian networks. K3 is formed by starting with a triangle and not performing any subdivisions, while K4 is formed by making a single subdivision before stopping. The Goldner–Harary graph is an Apollonian network that forms the smallest non-Hamiltonian maximal planar graph. Another, more complicated Apollonian network was used to provide an example of a 1-tough non-Hamiltonian maximal planar graph. Graph-theoretic characterizations As well as being defined by recursive subdivision of triangles, the Apollonian networks have several other equivalent mathematical characterizations. They are the chordal maximal planar graphs, the chordal polyhedral graphs, and the planar 3-trees. They are the uniquely 4-colorable planar graphs, and the planar graphs with a unique Schnyder wood decomposition into three trees. They are the maximal planar graphs with treewidth three, a class of graphs that can be characterized by their forbidden minors or by their reducibility under Y-Δ transforms. They are the maximal planar graphs with degeneracy three. They are also the planar graphs on a given number of vertices that have the largest possible number of triangles, the largest possible number of tetrahedral subgraphs, the largest possible number of cliques, and the largest possible number of pieces after decomposing by separating triangles. Chordality Apollonian networks are examples of maximal planar graphs, graphs to which no additional edges can be added without destroying planarity, or equivalently graphs that can be drawn in the plane so that every face (including the outer face) is a triangle. They are also chordal graphs, graphs in which every cycle of four or more vertices has a diagonal edge connecting two non-consecutive cycle vertices, and the reverse of the order in which vertices are added in the subdivision process that forms an Apollonian network is an elimination ordering for it as a chordal graph. This forms an alternative characterization of the Apollonian networks: they are exactly the chordal maximal planar graphs, or equivalently the chordal polyhedral graphs. In an Apollonian network, every maximal clique is a complete graph on four vertices, formed by choosing any vertex and its three earlier neighbors. Every minimal clique separator (a clique that partitions the graph into two disconnected subgraphs) is one of the subdivided triangles. A chordal graph in which all maximal cliques and all minimal clique separators have the same size is a k-tree, and Apollonian networks are examples of 3-trees. Not every 3-tree is planar, but the planar 3-trees are exactly the Apollonian networks.
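The defining subdivision process is short to simulate. A minimal sketch for illustration: the random choice of which face to subdivide is an assumption (any choice is allowed), and the graph is kept as a plain edge set.

```python
import random

def apollonian_network(n_subdivisions, seed=None):
    """Subdivide triangular faces n_subdivisions times, starting from a
    single triangle on vertices 0, 1, 2."""
    rng = random.Random(seed)
    edges = {(0, 1), (0, 2), (1, 2)}
    faces = [(0, 1, 2)]                     # current triangular faces
    for v in range(3, 3 + n_subdivisions):  # each step adds one vertex
        a, b, c = faces.pop(rng.randrange(len(faces)))
        edges |= {(a, v), (b, v), (c, v)}
        # The chosen face is replaced by three smaller triangles.
        faces += [(a, b, v), (a, c, v), (b, c, v)]
    return edges

edges = apollonian_network(3, seed=1)       # a 6-vertex Apollonian network
```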
Unique colorability Every Apollonian network is also a uniquely 4-colorable graph. Because it is a planar graph, the four color theorem implies that it has a graph coloring with only four colors, but once the three colors of the initial triangle are selected, there is only one possible choice for the color of each successive vertex, so up to permutation of the set of colors it has exactly one 4-coloring. It is more difficult to prove, but also true, that every uniquely 4-colorable planar graph is an Apollonian network. Therefore, Apollonian networks may also be characterized as the uniquely 4-colorable planar graphs. Apollonian networks also provide examples of planar graphs having as few k-colorings as possible for k > 4. The Apollonian networks are also exactly the maximal planar graphs that (once an exterior face is fixed) have a unique Schnyder wood, a partition of the edges of the graph into three interleaved trees rooted at the three vertices of the exterior face. Treewidth The Apollonian networks do not form a family of graphs that is closed under the operation of taking graph minors, as removing edges but not vertices from an Apollonian network produces a graph that is not an Apollonian network. However, the planar partial 3-trees, subgraphs of Apollonian networks, are minor-closed. Therefore, according to the Robertson–Seymour theorem, they can be characterized by a finite number of forbidden minors. The minimal forbidden minors for the planar partial 3-trees are the four minimal graphs among the forbidden minors for the planar graphs and the partial 3-trees: the complete graph K5, the complete bipartite graph K3,3, the graph of the octahedron, and the graph of the pentagonal prism. The Apollonian networks are the maximal graphs that do not have any of these four graphs as a minor. A Y-Δ transform, an operation that replaces a degree-three vertex in a graph by a triangle connecting its neighbors, is sufficient (together with the removal of parallel edges) to reduce any Apollonian network to a single triangle, and more generally the planar graphs that can be reduced to a single edge by Y-Δ transforms, removal of parallel edges, removal of degree-one vertices, and compression of degree-two vertices are exactly the planar partial 3-trees. The dual graphs of the planar partial 3-trees form another minor-closed graph family and are exactly the planar graphs that can be reduced to a single edge by Δ-Y transforms, removal of parallel edges, removal of degree-one vertices, and compression of degree-two vertices. Degeneracy In every subgraph of an Apollonian network, the most recently added vertex has degree at most three, so Apollonian networks have degeneracy three. The order in which the vertices are added to create the network is therefore a degeneracy ordering, and the Apollonian networks coincide with the 3-degenerate maximal planar graphs. Extremality Another characterization of the Apollonian networks involves their connectivity. Any maximal planar graph may be decomposed into 4-vertex-connected maximal planar subgraphs by splitting it along its separating triangles (triangles that are not faces of the graph): given any non-facial triangle, one can form two smaller maximal planar graphs, one consisting of the part inside the triangle and the other consisting of the part outside the triangle.
The maximal planar graphs without separating triangles that may be formed by repeated splits of this type are sometimes called blocks, although that name has also been used for the biconnected components of a graph that is not itself biconnected. An Apollonian network is a maximal planar graph in which all of the blocks are isomorphic to the complete graph K4. In extremal graph theory, Apollonian networks are also exactly the n-vertex planar graphs in which the number of blocks achieves its maximum possible value, and the planar graphs in which the number of triangles achieves its maximum possible value. Since each K4 subgraph of a planar graph must lie within a block, these are also the planar graphs in which the number of K4 subgraphs achieves its maximum possible value, and the graphs in which the number of cliques of any type achieves its maximum possible value. Geometric realizations Construction from circle packings Apollonian networks are named after Apollonius of Perga, who studied the Problem of Apollonius of constructing a circle tangent to three other circles. One method of constructing Apollonian networks is to start with three mutually tangent circles and then repeatedly inscribe another circle within the gap formed by three previously drawn circles. The fractal collection of circles produced in this way is called an Apollonian gasket. If the process of producing an Apollonian gasket is stopped early, with only a finite set of circles, then the graph that has one vertex for each circle and one edge for each pair of tangent circles is an Apollonian network. The existence of a set of tangent circles whose tangencies represent a given Apollonian network forms a simple instance of the Koebe–Andreev–Thurston circle-packing theorem, which states that any planar graph can be represented by tangent circles in the same way. Polyhedra Apollonian networks are planar 3-connected graphs and therefore, by Steinitz's theorem, can always be represented as the graphs of convex polyhedra. The convex polyhedron representing an Apollonian network is a 3-dimensional stacked polytope. Such a polytope can be obtained from a tetrahedron by repeatedly gluing additional tetrahedra one at a time onto its triangular faces. Therefore, Apollonian networks may also be defined as the graphs of stacked 3d polytopes. It is possible to find a representation of any Apollonian network as a convex 3d polyhedron in which all of the coordinates are integers of polynomial size, better than what is known for other planar graphs. Triangle meshes The recursive subdivision of triangles into three smaller triangles was investigated as an image segmentation technique in computer vision, where it was called the ternary scalene triangle decomposition. It was observed that, by placing each new vertex at the centroid of its enclosing triangle, the triangulation could be chosen in such a way that all triangles have equal areas, although they do not all have the same shape. More generally, Apollonian networks may be drawn in the plane with any prescribed area in each face; if the areas are rational numbers, so are all of the vertex coordinates. It is also possible to carry out the process of subdividing a triangle to form an Apollonian network in such a way that, at every step, the edge lengths are rational numbers; it is an open problem whether every planar graph has a drawing with this property.
It is possible in polynomial time to find a drawing of a planar 3-tree with integer coordinates minimizing the area of the bounding box of the drawing, and to test whether a given planar 3-tree may be drawn with its vertices on a given set of points. Properties and applications Matching-free graphs Plummer used Apollonian networks to construct an infinite family of maximal planar graphs with an even number of vertices but with no perfect matching. Plummer's graphs are formed in two stages. In the first stage, starting from a triangle abc, one repeatedly subdivides the triangular face of the subdivision that contains edge bc: the result is a graph consisting of a path from a to the final subdivision vertex together with an edge from each path vertex to each of b and c. In the second stage, each of the triangular faces of the resulting planar graph is subdivided one more time. If the path from a to the final subdivision vertex of the first stage has even length, then the number of vertices in the overall graph is also even. However, approximately 2/3 of the vertices are the ones inserted in the second stage; these form an independent set, and cannot be matched to each other, nor are there enough vertices outside the independent set to find matches for all of them. Although Apollonian networks themselves may not have perfect matchings, the planar dual graphs of Apollonian networks are 3-regular graphs with no cut edges, so by a theorem of Petersen they are guaranteed to have at least one perfect matching. However, in this case more is known: the duals of Apollonian networks always have an exponential number of perfect matchings. László Lovász and Michael D. Plummer conjectured that a similar exponential lower bound holds more generally for every 3-regular graph without cut edges, a result that was later proven. Power law graphs Andrade et al. studied power laws in the degree sequences of a special case of networks of this type, formed by subdividing all triangles the same number of times. They used these networks to model packings of space by particles of varying sizes. Based on their work, other authors introduced random Apollonian networks, formed by repeatedly choosing a random face to subdivide, and they showed that these also obey power laws in their degree distribution and have small average distances. Alan M. Frieze and Charalampos E. Tsourakakis analyzed the highest degrees and the eigenvalues of random Apollonian networks. Andrade et al. also observed that their networks satisfy the small world effect, that all vertices are within a small distance of each other. Based on numerical evidence they estimated the average distance between randomly selected pairs of vertices in an n-vertex network of this type to be proportional to (log n)^(3/4), but later researchers showed that the average distance is actually proportional to log n. Angle distribution It has been observed that if each new vertex is placed at the incenter of its triangle, so that the edges to the new vertex bisect the angles of the triangle, then the set of triples of angles of triangles in the subdivision, when reinterpreted as triples of barycentric coordinates of points in an equilateral triangle, converges in shape to the Sierpinski triangle as the number of levels of subdivision grows. Hamiltonicity It was once claimed, erroneously, that all Apollonian networks have Hamiltonian cycles; however, the Goldner–Harary graph provides a counterexample.
If an Apollonian network has toughness greater than one (meaning that removing any set of vertices from the graph leaves a smaller number of connected components than the number of removed vertices), then it necessarily has a Hamiltonian cycle, but there exist non-Hamiltonian Apollonian networks whose toughness is equal to one. Enumeration The combinatorial enumeration problem of counting Apollonian triangulations has been studied; these networks have the simple generating function f(x) described by the equation f(x) = 1 + x·f(x)^3. In this generating function, the term of degree n counts the number of Apollonian networks with a fixed outer triangle and n + 3 vertices. Thus, the numbers of Apollonian networks (with a fixed outer triangle) on 3, 4, 5, ... vertices are: 1, 1, 3, 12, 55, 273, 1428, 7752, 43263, 246675, ..., a sequence that also counts ternary trees and dissections of convex polygons into odd-sided polygons. For instance, there are 12 six-vertex Apollonian networks: three formed by subdividing the outer triangle once and then subdividing two of the resulting triangles, and nine formed by subdividing the outer triangle once, subdividing one of its triangles, and then subdividing one of the resulting smaller triangles. History An early paper used a dual form of Apollonian networks, the planar maps formed by repeatedly placing new regions at the vertices of simpler maps, as a class of examples of planar maps with few colorings. Geometric structures closely related to Apollonian networks have been studied in polyhedral combinatorics since at least the early 1960s, when they were used to describe graphs that can be realized as the graph of a polytope in only one way, without dimensional or combinatorial ambiguities, and to find simplicial polytopes with no long paths. In graph theory, the close connection between planarity and treewidth goes back to Robertson and Seymour, who showed that every minor-closed family of graphs either has bounded treewidth or contains all of the planar graphs. Planar 3-trees, as a class of graphs, have been explicitly considered by many authors. The name "Apollonian network" was given by Andrade et al. to the networks they studied in which the level of subdivision of triangles is uniform across the network; these networks correspond geometrically to a type of stacked polyhedron called a Kleetope. Other authors applied the same name more broadly to planar 3-trees in their work generalizing the model of Andrade et al. to random Apollonian networks. The triangulations generated in this way have also been named "stacked triangulations" or "stack-triangulations". See also Barycentric subdivision, a different method of subdividing triangles into smaller triangles Loop subdivision surface, yet another method of subdividing triangles into smaller triangles Notes References External links Matlab Simulation Code Graph families Perfect graphs Planar graphs Triangulation (geometry)
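The counting sequence quoted in the Enumeration section can be checked by extracting power-series coefficients from the generating-function equation as given above (f = 1 + x·f^3); the fixed-point iteration below is one standard way to do that, sketched here for illustration.

```python
def apollonian_counts(n_terms):
    """Coefficients of f satisfying f(x) = 1 + x * f(x)**3, computed by
    fixed-point iteration on truncated power series (coefficient lists).
    The degree-n coefficient counts Apollonian networks with a fixed
    outer triangle and n + 3 vertices."""
    f = [1] + [0] * (n_terms - 1)
    for _ in range(n_terms):  # n_terms iterations stabilize all terms
        f2 = [sum(f[i] * f[k - i] for i in range(k + 1))
              for k in range(n_terms)]              # f**2, truncated
        f3 = [sum(f2[i] * f[k - i] for i in range(k + 1))
              for k in range(n_terms)]              # f**3, truncated
        f = [1] + f3[:n_terms - 1]                  # 1 + x * f**3
    return f

print(apollonian_counts(7))  # [1, 1, 3, 12, 55, 273, 1428]
```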
Apollonian network
[ "Mathematics" ]
3,562
[ "Triangulation (geometry)", "Planes (geometry)", "Planar graphs" ]
36,284,621
https://en.wikipedia.org/wiki/Wu%20experiment
The Wu experiment was a particle and nuclear physics experiment conducted in 1956 by the Chinese American physicist Chien-Shiung Wu in collaboration with the Low Temperature Group of the US National Bureau of Standards. The experiment's purpose was to establish whether or not conservation of parity (P-conservation), which had previously been established in the electromagnetic and strong interactions, also applied to weak interactions. If P-conservation were true, a mirrored version of the world (where left is right and right is left) would behave as the mirror image of the current world. If P-conservation were violated, then it would be possible to distinguish between a mirrored version of the world and the mirror image of the current world. The experiment established that conservation of parity was violated (P-violation) by the weak interaction, providing a way to operationally define left and right without reference to the human body. This result was not expected by the physics community, which had previously regarded parity as a symmetry applying to all forces of nature. Tsung-Dao Lee and Chen-Ning Yang, the theoretical physicists who originated the idea of parity nonconservation and proposed the experiment, received the 1957 Nobel Prize in Physics for this result. Although Chien-Shiung Wu was not awarded the Nobel Prize, her role in the discovery was mentioned in Yang and Lee's Nobel Prize acceptance speech; she was not honored until 1978, when she was awarded the inaugural Wolf Prize. History In 1927, Eugene Wigner formalized the principle of the conservation of parity (P-conservation), the idea that the current world and one built like its mirror image would behave in the same way, with the only difference being that left and right would be reversed (for example, a clock which spins clockwise would spin counterclockwise if a mirrored version of it were built). This principle was widely accepted by physicists, and P-conservation was experimentally verified in the electromagnetic and strong interactions. However, during the mid-1950s, certain decays involving kaons could not be explained by existing theories in which P-conservation was assumed to be true. There seemed to be two types of kaons, one which decayed into two pions, and the other which decayed into three pions. This was known as the τ–θ puzzle. Theoretical physicists Tsung-Dao Lee and Chen-Ning Yang did a literature review on the question of parity conservation in all fundamental interactions. They concluded that in the case of the weak interaction, experimental data neither confirmed nor refuted P-conservation. Shortly after, they approached Chien-Shiung Wu, who was an expert on beta decay spectroscopy, with various ideas for experiments. They settled on the idea of testing the directional properties of beta decay in cobalt-60. Wu realized the potential for a breakthrough experiment and began work in earnest at the end of May 1956, cancelling a planned trip to Geneva and the Far East with her husband, wanting to beat the rest of the physics community to the punch. Most physicists, such as her close friend Wolfgang Pauli, thought parity violation was impossible and expressed skepticism regarding the Yang–Lee proposal. To perform her experiment, Wu contacted Henry Boorse and Mark W. Zemansky, who had extensive experience in low-temperature physics. At the behest of Boorse and Zemansky, Wu contacted Ernest Ambler, of the National Bureau of Standards, who arranged for the experiment to be carried out in 1956 at the NBS' low-temperature laboratories.
After several months of work overcoming technical difficulties, Wu's team observed an asymmetry indicating parity violation in December 1956. Lee and Yang, whose theoretical work prompted the Wu experiment, were awarded the Nobel Prize in Physics in 1957, shortly after the experiment was performed. Wu's role in the discovery was mentioned in the prize acceptance speech, but she was not honored until 1978, when she was awarded the inaugural Wolf Prize. Many were outraged, from her close friend Wolfgang Pauli to Lee and Yang, with 1988 Nobel laureate Jack Steinberger labeling it the biggest mistake in the Nobel committee's history. Wu did not publicly discuss her feelings about the prize, but in a letter she wrote to Steinberger, she said, "Although I did not do research just for the prize, it still hurts me a lot that my work was overlooked for certain reasons." Theory If a particular interaction respects parity symmetry, it means that if left and right were interchanged, the interaction would behave exactly as it did before the interchange. Another way this is expressed is to imagine that two worlds are constructed that differ only by parity: the "real" world and the "mirror" world, where left and right are swapped. If an interaction is parity symmetric, it produces the same outcomes in both "worlds". The aim of Wu's experiment was to determine whether this was the case for the weak interaction by looking at whether the decay products of cobalt-60 were being emitted preferentially in one direction or not. This would signify the violation of parity symmetry, because if the weak interaction were parity conserving, the decay emissions should be emitted with equal probability in all directions. As Wu et al. explained, the reason for this is that the cobalt-60 nucleus carries spin, and spin does not change direction under parity (because angular momentum is an axial vector). Conversely, the direction in which the decay products are emitted is changed under parity, because momentum is a polar vector. In other words, in the "real" world, if the cobalt-60 nuclear spin and the decay product emissions were both in roughly the same direction, then in the "mirror" world, they would be in roughly opposite directions, because the emission direction would have been flipped, but the spin direction would not. This would be a clear difference in the behaviour of the weak interaction between both "worlds", and hence the weak interaction could not be said to be parity symmetric. The only way that the weak interaction could be parity symmetric is if there were no preference in the direction of emission, because then a flip in the direction of emissions in the "mirror" world would look no different from the "real" world, as there would be equal numbers of emissions in both directions anyway. Experiment The experiment monitored the decay of cobalt-60 (60Co) atoms that were aligned by a uniform magnetic field (the polarizing field) and cooled to near absolute zero so that thermal motions did not ruin the alignment. Cobalt-60 is an unstable isotope of cobalt that decays by beta decay to the stable isotope nickel-60 (60Ni). During this decay, one of the neutrons in the cobalt-60 nucleus decays to a proton by emitting an electron (e−) and an electron antineutrino (ν̄e). The resulting nickel nucleus, however, is in an excited state and promptly decays to its ground state by emitting two gamma rays (γ).
Hence the overall nuclear equation of the reaction is: 60Co → 60Ni + e− + ν̄e + 2γ. Gamma rays are photons, and their release from the nickel-60 nucleus is an electromagnetic (EM) process. This is important because EM was known to respect parity conservation, and therefore the gamma rays would be emitted roughly equally in all directions (they would be distributed roughly "isotropically"). Hence, the distribution of the emitted electrons could be compared to the distribution of the emitted gamma rays in order to determine whether they too were being emitted isotropically. In other words, the distribution of the gamma rays acted as a control for the distribution of the emitted electrons. Another benefit of the emitted gamma rays was that it was known that the degree to which they were not distributed perfectly equally in all directions (the "anisotropy" of their distribution) could be used to determine how well the cobalt-60 nuclei had been aligned (how well their spins were aligned). If the cobalt-60 nuclei were not aligned at all, then no matter how the electron emission was truly distributed, it would not be detected by the experiment. This is because an unaligned sample of nuclei could be expected to be oriented randomly, and thus the electron emissions would be random and the experiment would detect equal numbers of electron emissions in all directions, even if they were being emitted from each individual nucleus in only one direction. The experiment then essentially counted the rate of emission for gamma rays and electrons in two distinct directions and compared their values. This rate was measured over time and with the polarizing field oriented in opposite directions. If the counting rates for the electrons did not differ significantly from those of the gamma rays, then there would have been evidence to suggest that parity was indeed conserved by the weak interaction. If, however, the counting rates were significantly different, then there would be strong evidence that the weak interaction does indeed violate parity conservation. Materials and methods The experimental challenge in this experiment was to obtain the highest possible polarization of the 60Co nuclei. Due to the very small magnetic moments of the nuclei as compared to electrons, strong magnetic fields were required at extremely low temperatures, far lower than could be achieved by liquid helium cooling alone. The low temperatures were achieved using the method of adiabatic demagnetization. Radioactive cobalt was deposited as a thin surface layer on a crystal of cerium-magnesium nitrate, a paramagnetic salt with a highly anisotropic Landé g-factor. The salt was magnetized along the axis of high g-factor, and the temperature was decreased to 1.2 K by pumping the helium to low pressure. Shutting off the horizontal magnetic field resulted in the temperature decreasing to about 0.003 K. The horizontal magnet was opened up, allowing room for a vertical solenoid to be introduced and switched on to align the cobalt nuclei either upwards or downwards. Only a negligible increase in temperature was caused by the solenoid magnetic field, since the magnetic field orientation of the solenoid was in the direction of low g-factor. This method of achieving high polarization of 60Co nuclei had been originated by Gorter and Rose. The production of gamma rays was monitored using equatorial and polar counters as a measure of the polarization. Gamma ray polarization was continuously monitored over the next quarter-hour as the crystal warmed up and anisotropy was lost.
Likewise, beta-ray emissions were continuously monitored during this warming period. Results In the experiment carried out by Wu, the gamma ray anisotropy was approximately 0.6. That is, approximately 60% of the gamma rays were emitted in one direction, whereas 40% were emitted in the other. If parity were conserved in beta decay, the emitted electrons would have had no preferred direction of decay relative to the nuclear spin, and the asymmetry in emission direction would have been close to the value for the gamma rays. However, Wu observed that the electrons were emitted in a direction preferentially opposite to that of the gamma rays, with an asymmetry significantly greater than the gamma ray anisotropy value. That is, most of the electrons favored a very specific direction of decay, specifically opposite to that of the nuclear spin. The observed electron asymmetry also did not change sign when the polarizing field was reversed, meaning that the asymmetry was not being caused by remanent magnetization in the samples. It was later established that parity violation was in fact maximal. The results greatly surprised the physics community. Several researchers then scrambled to reproduce the results of Wu's group, while others reacted with disbelief at the results. Wolfgang Pauli, upon being informed by Georges M. Temmer, who also worked at the NBS, that parity conservation could no longer be assumed to be true in all cases, exclaimed "That's total nonsense!" Temmer assured him that the experiment's result confirmed this was the case, to which Pauli curtly replied "Then it must be repeated!" By the end of 1957, further research confirmed the original results of Wu's group, and P-violation was firmly established. Mechanism and consequences The results of the Wu experiment provide a way to operationally define the notion of left and right. This is inherent in the nature of the weak interaction. Previously, if the scientists on Earth were to communicate with a newly discovered planet's scientists, and they had never met in person, it would not have been possible for each group to determine unambiguously the other group's left and right. With the Wu experiment, it is possible to communicate to the other group what the words left and right mean exactly and unambiguously. The Wu experiment finally solved the Ozma problem, which is to give an unambiguous definition of left and right scientifically. At the fundamental level (as can be depicted in a Feynman diagram), beta decay is caused by the conversion of the negatively charged (−1/3 e) down quark to the positively charged (+2/3 e) up quark by emission of a W− boson; the W− boson subsequently decays into an electron and an electron antineutrino: d → u + e− + ν̄e. The quark has a left part and a right part. As it propagates through spacetime, it oscillates back and forth from the right part to the left part and from the left part to the right part. From analyzing the Wu experiment's demonstration of parity violation, it can be deduced that only the left part of down quarks decays and that the weak interaction involves only the left part of quarks and leptons (or the right part of antiquarks and antileptons). The right part of the particle simply does not feel the weak interaction. If the down quark did not have mass it would not oscillate, and its right part would be quite stable by itself. Yet, because the down quark is massive, it oscillates and decays. Overall, as the temperature approaches absolute zero, the strong magnetic field vertically polarizes the nuclei so that their spins point in the +z direction.
Since the decay conserves angular momentum and the spin of the daughter nickel-60 nucleus (4) is one unit lower than that of cobalt-60 (5), the emitted electron and antineutrino must together carry away one unit of spin along the nuclear spin direction, so both are emitted with their spins pointing in the +z direction. Thus, the concentration of beta rays in the negative-z direction indicated a preference for left-handed quarks and electrons. From experiments such as the Wu experiment and the Goldhaber experiment, it was determined that massless neutrinos must be left-handed, while massless antineutrinos must be right-handed. Since it is currently known that neutrinos have a small mass, it has been proposed that right-handed neutrinos and left-handed antineutrinos could exist. These neutrinos would not couple with the weak Lagrangian and would interact only gravitationally, possibly forming a portion of the dark matter in the universe. Impact and influence The discovery set the stage for the development of the Standard Model, as the model relied on the idea of symmetry of particles and forces and how particles can sometimes break that symmetry. The wide coverage of her discovery prompted Otto Robert Frisch, one of the discoverers of nuclear fission, to mention that people at Princeton would often say that her discovery was the most significant since the Michelson–Morley experiment that inspired Einstein's theory of relativity. The AAUW called it the "solution to the number-one riddle of atomic and nuclear physics". Beyond showing that the weak interaction differs in character from the other three conventional forces of interaction, the discovery eventually led to general CP violation, the violation of the charge conjugation–parity symmetry. This violation meant that researchers could distinguish matter from antimatter, and it suggested an explanation for the existence of a universe filled with matter: the lack of symmetry opens the possibility of a matter–antimatter imbalance that would allow matter to survive from the Big Bang to the present day. In recognition of their theoretical work, Lee and Yang were awarded the Nobel Prize for Physics in 1957. Regarding the discovery's impact, Nobel laureate Abdus Salam quipped: "If any classical writer had ever considered giants (cyclops) with only the left eye, [one] would confess that one-eyed giants have been described and [would have] supplied me with a full list of them; but they always sport their solitary eye in the middle of the forehead. In my view what we have found is that space is a weak left-eyed giant." Wu's discovery paved the way for the unified electroweak theory, which Salam helped establish, and which is theorized to merge with the strong force in a Grand Unified Theory. See also The Ambidextrous Universe by Martin Gardner, a book containing a lengthy popular discussion of parity and the Wu experiment Fermi's interaction References Further reading Chien-Shiung Wu Electroweak theory Physics experiments 1956 in science 1956 in Washington, D.C. Asymmetry
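The axial-versus-polar-vector argument in the Theory section can be checked numerically: under parity both position and momentum flip sign, so their cross product (angular momentum) does not, while the spin-momentum correlation, the observable Wu measured, does flip sign. A small sketch with arbitrary example vectors.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])    # arbitrary position (polar vector)
p = np.array([0.3, -1.0, 2.0])   # arbitrary momentum (polar vector)

L = np.cross(x, p)               # angular momentum, an axial vector
L_mirror = np.cross(-x, -p)      # the same quantity in the mirrored world

print(np.allclose(L, L_mirror))  # True: the spin direction survives parity

# The spin-momentum correlation is a pseudoscalar: it changes sign under
# parity, which is why a nonzero measured value signals parity violation.
print(np.dot(L, p), np.dot(L_mirror, -p))  # equal magnitude, opposite sign
```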
Wu experiment
[ "Physics" ]
3,432
[ "Physical phenomena", "Physics experiments", "Electroweak theory", "Experimental physics", "Fundamental interactions", "Asymmetry", "Symmetry" ]
36,291,111
https://en.wikipedia.org/wiki/Henry%20adsorption%20constant
The Henry adsorption constant is the constant appearing in the linear adsorption isotherm, which formally resembles Henry's law; therefore, it is also called Henry's adsorption isotherm. It is named after British chemist William Henry. This is the simplest adsorption isotherm, in that the amount of surface adsorbate is represented as being proportional to the partial pressure of the adsorptive gas: X = KH·P, where X is the surface coverage, P is the partial pressure, and KH is Henry's adsorption constant. For solutions, concentrations or activities are used instead of partial pressures. The linear isotherm can be used to describe the initial part of many practical isotherms. It is typically taken as valid at low surface coverages, with the adsorption energy being independent of the coverage (lack of inhomogeneities on the surface). The Henry adsorption constant can be defined as the ratio KH = σ/ρ in the low-coverage limit, where ρ is the number density in the free phase and σ is the surface number density. Application at a permeable wall If a solid body is modeled by a constant field and the structure of the field is such that it has a penetrable core, then KH takes a form determined by the field. Here x0 denotes the position of the dividing surface, u(x) the external force field simulating the solid, u0 the field value deep in the solid, kB the Boltzmann constant, and T the temperature. Introducing "the surface of zero adsorption", the problem of determining KH is reduced to the calculation of the corresponding integral over the field. Taking into account that the Henry adsorption constant can be expressed through the number density inside the solid, we arrive at a parametric dependence. Application at a static membrane If a static membrane is modeled by a constant field and the structure of the field is such that it has a penetrable core and vanishes far from the membrane, then in this case the sign and value of KH depend on the potential and temperature only. Application at an impermeable wall If a solid body is modeled by a constant hard-core field, the same analysis applies with the hard core in place of the penetrable core. For the hard solid potential, x0 is naturally taken at the position of the potential discontinuity. Choice of the dividing surface The choice of the dividing surface, strictly speaking, is arbitrary; however, it is very desirable to take into account the type of external potential u(x). Otherwise, these expressions are at odds with the generally accepted concepts and common sense. First, x0 must lie close to the transition layer (i.e., the region where the number density varies); otherwise, it would mean attributing the bulk properties of one of the phases to the surface. Second, in the case of weak adsorption, for example when the potential is close to stepwise, it is logical to choose x0 close to the step (in some cases shifting it by a particle radius, excluding the "dead" volume). In the case of pronounced adsorption it is advisable to choose x0 close to the right border of the transition region. In this case all particles from the transition layer will be attributed to the solid, and KH is always positive. Trying to place the dividing surface as in the weak-adsorption case would lead to a strong shift of the result toward the solid body domain, which is clearly unphysical. Conversely, if the fluid lies on the left, it is advisable to choose x0 on the left side of the transition layer. In this case the surface particles are once again attributed to the solid, and KH is again positive. Thus, except in the case of a static membrane, we can always avoid "negative adsorption" for one-component systems.
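As a worked illustration of the linear isotherm X = KH·P: a minimal sketch; the numerical value of KH and the pressures are invented for the example.

```python
def henry_coverage(partial_pressure, k_h):
    """Linear (Henry) adsorption isotherm: coverage proportional to the
    partial pressure of the adsorptive gas. Valid only at low coverage,
    where the surface is uniform and adsorbate molecules do not interact."""
    return k_h * partial_pressure

K_H = 2.5e-4                     # hypothetical constant, coverage per Pa
for P in (10.0, 100.0, 1000.0):  # illustrative partial pressures, Pa
    print(P, henry_coverage(P, K_H))
# Coverage grows linearly with P; the model stops being meaningful once
# the predicted coverage is no longer small compared with a monolayer.
```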
See also Freundlich equation Langmuir adsorption model Brunauer–Emmett–Teller (BET) theory References Physical chemistry Statistical mechanics
Henry adsorption constant
[ "Physics", "Chemistry" ]
773
[ "Physical chemistry", "Statistical mechanics", "Applied and interdisciplinary physics", "nan" ]
36,292,499
https://en.wikipedia.org/wiki/N-%282-Hydroxypropyl%29%20methacrylamide
N-(2-Hydroxypropyl)methacrylamide, or N-HPMA, is the monomer used to make the polymer poly(N-(2-hydroxypropyl)methacrylamide) (pHPMA). The polymer is water-soluble (highly hydrophilic), non-immunogenic and non-toxic, and persists well in blood circulation. Thus, it is frequently used as a macromolecular carrier for low-molecular-weight drugs (especially anti-cancer chemotherapeutic agents) to enhance therapeutic efficacy and limit side effects. Poly(HPMA)-drug conjugates preferentially accumulate in tumor tissues via passive targeting (the so-called EPR effect). Due to these favorable characteristics, HPMA polymers and copolymers are also commonly used to produce synthetic biocompatible medical materials such as hydrogels. The development of pHPMA as an anti-cancer drug delivery vehicle was initiated by Dr. Jindřich Kopeček and colleagues at the Czechoslovak Academy of Sciences in Prague in the mid-1970s. Prior to this, it was used as a plasma expander. The Kopeček laboratory designed and developed HPMA copolymer-drug conjugates as a lysosomal delivery vehicle to cancer cells. The concept of using pHPMA as a polymeric drug carrier opened a new perspective in modern pharmaceutical science and developed into the first polymer-drug conjugate to enter clinical trials (PK1, an HPMA copolymer-doxorubicin conjugate). The HPMA copolymers are also used as a scaffold for iBodies, polymer-based antibody mimetics. References See also Polymer-drug conjugates Acrylamides Monomers
N-(2-Hydroxypropyl) methacrylamide
[ "Chemistry", "Materials_science" ]
381
[ "Monomers", "Polymer chemistry" ]
36,293,587
https://en.wikipedia.org/wiki/Ionium%E2%80%93thorium%20dating
Ionium–thorium dating is a technique for determining the age of marine sediments based upon the quantities present of the nearly stable thorium-232 and the more radioactive thorium-230. (230Th was once known as ionium, before it was realised that it was the same element as 232Th.) Uranium (in nature, predominantly uranium-238) is soluble in water. However, when it decays into thorium, the latter element is insoluble and so precipitates out to become part of the sediment. Thorium-232 has a half-life of about 14 billion years, but thorium-230 has a half-life of only 75,200 years, so the ratio of the two is useful for dating sediments up to about 400,000 years old. Conversely, this technique can be used to determine the rate of ocean sedimentation over time. The ionium–thorium method of dating assumes that the proportion of thorium-230 to thorium-232 was constant during the period in which the sediment layer was formed. Likewise, both thorium-230 and thorium-232 are assumed to precipitate out in a constant ratio; no chemical process favors one form over the other. It must also be assumed that the sediment does not contain any pre-existing particles of eroded rock, known as detritus, that already contain thorium isotopes. Finally, there must not be a process that causes the thorium to shift its position within the sediment. If these assumptions are correct, this dating technique can produce accurate results. References Radiometric dating Thorium
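Under the stated assumptions (constant initial ratio, no detritus, no thorium migration), the age follows from simple exponential decay of 230Th relative to the effectively stable 232Th. A minimal sketch using the half-life quoted above; the initial ratio is a hypothetical calibration value.

```python
import math

TH230_HALF_LIFE_YR = 75_200   # half-life of thorium-230, as quoted above

def ionium_thorium_age(ratio_now, ratio_initial):
    """Sediment age from the decay of the 230Th/232Th ratio, assuming
    232Th is effectively stable and the ratio at deposition is known."""
    decay_const = math.log(2) / TH230_HALF_LIFE_YR
    return math.log(ratio_initial / ratio_now) / decay_const

# Example: a measured ratio at one quarter of the assumed initial value
# corresponds to two half-lives, about 150,400 years.
print(ionium_thorium_age(0.25, 1.0))
```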
Ionium–thorium dating
[ "Physics", "Chemistry" ]
323
[ "Radiometric dating", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Nuclear physics", "Radioactivity" ]
33,668,371
https://en.wikipedia.org/wiki/Ceresin
Ceresin (also cerin, cerasin, cerosin, ceresin wax or ceresine) is a wax derived from ozokerite by a purifying process. The purifying process of the ozokerite commonly comprises a treatment with heat and sulfuric acid, but other processes are also in use. Uses include an alternative to beeswax in ointments and, historically, laboratory-supply bottles for small amounts of hydrofluoric acid, which were made of ceresin wax before polyethylene became commonplace. External links AKROCHEM® CERESIN WAX, Akrochem product information Waxes
Ceresin
[ "Physics" ]
140
[ "Materials", "Matter", "Waxes" ]
33,673,568
https://en.wikipedia.org/wiki/Oxygen%20equivalent
Oxygen equivalent compares the relative amount of oxygen available for respiration at a variable pressure to that available at SATP. As external respiration depends on the exchange of gases due to partial pressures across a semipermeable membrane and normally occurs at SATP, an oxygen equivalent may aid in recognizing and managing variable oxygen availability during procedures such as hyperbaric oxygen therapy or medical air transport. It does so by expressing oxygen concentration as a ratio of the partial pressure of oxygen at a given altitude or pressure to standard atmospheric pressure, rather than as a ratio of the PO2 at a given pressure to the total pressure of the gas mixture. The latter would generally be 0.2095, the atmospheric concentration of O2 by volume, although FO2 and Patm vary for extraterrestrial atmospheres. Calculations proceed as follows. Let O2E be the oxygen equivalent, FO2 the fractional concentration of oxygen, Patm the standard atmospheric pressure (generally 760 mmHg), Pb the barometric pressure, and dP the change in pressure at a given altitude or depth. Then: O2E = FO2(Pb + dP)/Patm. It is worth noting that pressures may often be expressed in units of distance, such as feet, when diving. For this, note that descending 33 ft in salt water or 33.9 ft in fresh water results in a change of 1 atm, so distance and pressure are used interchangeably in this context. References Williams, Paul. 'Lectures on Respiratory Physics'. Ed. J. Brown et al. (London, ON: FC, 2011). Respiration Respiratory physiology Diffusion Equivalent units
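The formula above translates directly into code. A minimal sketch; the example pressures are illustrative.

```python
def oxygen_equivalent(f_o2, p_b, d_p=0.0, p_atm=760.0):
    """O2E = FO2 * (Pb + dP) / Patm, with all pressures in mmHg."""
    return f_o2 * (p_b + d_p) / p_atm

# Breathing air at sea level: the oxygen equivalent is just FO2.
print(oxygen_equivalent(0.2095, 760.0))             # ~0.2095

# Breathing air at 33 ft of salt water adds one atmosphere (+760 mmHg),
# so the same mixture delivers twice the oxygen available at the surface.
print(oxygen_equivalent(0.2095, 760.0, d_p=760.0))  # ~0.419
```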
Oxygen equivalent
[ "Physics", "Chemistry", "Mathematics" ]
340
[ "Transport phenomena", "Physical phenomena", "Equivalent quantities", "Diffusion", "Quantity", "Equivalent units", "Units of measurement" ]
41,925,824
https://en.wikipedia.org/wiki/Hwp1
Hwp1 (Hyphal wall protein 1) is a protein (glycoprotein) located on the surface of the opportunistic diploid fungus Candida albicans. The "hyphal" denomination reflects the fact that Hwp1 appears exclusively on the surface of the projections, called hyphae, that emerge from the surface of this fungus. Hwp1 is particularly important because it is a substrate of mammalian transglutaminase. This property has two implications: one, in fungal pathogenicity, is proven; the other, a potential pathogenicity of food proteins, is hypothetical. Fungus pathogenicity Hwp1 has been proven to be involved in oral candidiasis. Through the use of transglutaminase from the host (human beings, for example), Hwp1 allows Candida albicans to adhere to human epithelial cells with the strength of a covalent isopeptide bond (the same kind of bond from which human body proteins are built). This ability is strongly associated with Candida albicans being the prevalent Candida species in all types of candidiasis; other Candida species do not have the Hwp1 protein. Hwp1 - Gluten molecular mimicry Hwp1 of Candida albicans shares amino acid sequence homology with the gliadins (α- and γ-gliadins) of the gluten protein. This homology appears between fragments of the Hwp1 sequence and α-gliadin and γ-gliadin T-cell epitopes in celiac disease. See also Adhesin molecule (immunoglobulin -like) Bacterial adhesin Cell adhesion Fungal adhesin References Fungal proteins Glycoproteins
Hwp1
[ "Chemistry" ]
378
[ "Glycoproteins", "Glycobiology" ]
41,926,407
https://en.wikipedia.org/wiki/Order-4%20120-cell%20honeycomb
In the geometry of hyperbolic 4-space, the order-4 120-cell honeycomb is one of five compact regular space-filling tessellations (or honeycombs). With Schläfli symbol {5,3,3,4}, it has four 120-cells around each face. Its dual is the order-5 tesseractic honeycomb, {4,3,3,5}. Related honeycombs It is related to the (order-3) 120-cell honeycomb, and order-5 120-cell honeycomb. See also List of regular polytopes References Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213) Honeycombs (geometry)
Order-4 120-cell honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
178
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
41,927,402
https://en.wikipedia.org/wiki/Cubic%20honeycomb%20honeycomb
In the geometry of hyperbolic 4-space, the cubic honeycomb honeycomb is one of two paracompact regular space-filling tessellations (or honeycombs). It is called paracompact because it has infinite facets, whose vertices exist on 3-horospheres and converge to a single ideal point at infinity. With Schläfli symbol {4,3,4,3}, it has three cubic honeycombs around each face, and a {3,4,3} vertex figure. It is dual to the order-4 24-cell honeycomb. Related honeycombs It is related to the Euclidean 4-space 16-cell honeycomb, {3,3,4,3}, which also has a 24-cell vertex figure. It is analogous to the paracompact tesseractic honeycomb honeycomb, {4,3,3,4,3}, in 5-dimensional hyperbolic space, the square tiling honeycomb, {4,4,3}, in 3-dimensional hyperbolic space, and the order-3 apeirogonal tiling, {∞,3}, of 2-dimensional hyperbolic space, each with hypercube honeycomb facets. See also List of regular polytopes References Coxeter, Regular Polytopes, 3rd. ed., Dover Publications, 1973 (Tables I and II: Regular polytopes and honeycombs, pp. 294–296) Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999 (Chapter 10: Regular honeycombs in hyperbolic space, Summary tables II, III, IV, V, pp. 212–213) Honeycombs (geometry)
Cubic honeycomb honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
359
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
49,435,114
https://en.wikipedia.org/wiki/List%20of%20genetically%20modified%20crops
Genetically modified crops are plants used in agriculture, the DNA of which has been modified using genetic engineering techniques. In most cases, the aim is to introduce a new trait to the plant which does not occur naturally in the species. As of 2015, 26 plant species have been genetically modified and approved for commercial release in at least one country. The majority of these species contain genes that make them either tolerant to herbicides or resistant to insects. Other common traits include virus resistance, delayed ripening, modified flower colour or altered composition. In 2014, 28 countries grew GM crops, and 39 countries imported but did not grow them. Background Regulation of the commercialisation of genetically modified crops is mostly conducted by individual countries. For cultivation, environmental approval determines whether a crop can be legally grown. Separate approval is generally required to use GM crops in food for human consumption or as animal feed. GM crops were first planted commercially on a large scale in 1996, in the US, China, Argentina, Canada, Australia, and Mexico. Some countries have approved but not actually cultivated GM crops, due to public uncertainty or further government restrictions, while at the same time, they may import GM foods for consumption. For example, Japan is a leading GM food importer, and permits but has not grown GM food crops. The European Union regulates importation of GM foods, while individual member states determine cultivation. In the US, separate regulatory agencies handle approval for cultivation (USDA, EPA) and for human consumption (FDA). Two genetically modified crops have been approved for food use in some countries, but have not obtained approval for cultivation. A GM melon engineered for delayed senescence was approved in 1999 and a herbicide tolerant GM wheat was approved in 2004. Genetically modified crops cultivated in 2014 In 2014, 181.5 million hectares of genetically modified crops were planted in 28 countries. Half of all GM crops planted were genetically modified soybeans, either for herbicide tolerance or insect resistance. Eleven countries grew modified soybean, with the USA, Brazil and Argentina accounting for 90% of the total hectarage. Of the 111 million hectares of soybean grown worldwide in 2014, 82% was genetically modified in some way. Seventeen countries grew a total of 55.2 million hectares of genetically modified maize, and fifteen grew 23.9 million hectares of genetically modified cotton. Nine million hectares of genetically modified canola were grown, with 8 million of those in Canada. Other GM crops grown in 2014 include alfalfa (862 000 ha), sugar beet (494 000 ha) and papaya (7 475 ha). In Bangladesh a genetically modified eggplant was grown commercially for the first time on 12 ha. The majority of GM crops have been modified to be resistant to selected herbicides, usually a glyphosate or glufosinate based one. In 2014, 154 million hectares were planted with a herbicide resistant crop and 78.8 million hectares had insect resistance. This included 51.4 million hectares planted in thirteen countries with crops that contained both herbicide tolerance and insect resistance. Less than one million hectares contained other traits, which include virus resistance, delayed senescence, modified flower colour and altered plant composition. Drought tolerant maize was planted for just the second year in the USA on 275 000 hectares. 
Herbicide tolerance Genetically modified crops engineered to resist herbicides are now more available than conventionally bred resistant varieties. They comprised 83% of the total GM crop area, equating to just under 8% of the arable land worldwide. Approval has been granted to grow crops engineered to be resistant to the herbicides 2,4-dichlorophenoxyacetic acid, dicamba, glufosinate, glyphosate, sulfonylurea, oxynil, mesotrione and isoxaflutole. Most herbicide resistant GM crops have been engineered for glyphosate tolerance; in the USA, 93% of soybeans and most of the GM maize grown are glyphosate tolerant. Insect resistance Most currently available genes used to engineer insect resistance come from the Bacillus thuringiensis bacterium. Most are delta endotoxin genes encoding what are known as Cry proteins, while a few use the genes that encode vegetative insecticidal proteins. Insect resistant crops target various species of coleopterans (beetles) and lepidopterans (moths). The only gene commercially used to provide insect protection that does not originate from B. thuringiensis is the cowpea trypsin inhibitor (CpTI). CpTI was first approved for use in cotton in 1999 and is currently undergoing trials in rice. Stacked traits Many varieties of GM crops contain more than one resistance gene. This could be in the form of multiple insect resistance genes, multiple herbicide tolerance genes, or a combination of herbicide and insect resistance genes. Smartstax is a brand of GM maize that has eight different genes added to it, making it resistant to two types of herbicides and toxic to six different species of insects. Other modified traits While most crops are engineered to resist insects or tolerate herbicides, some crops have been developed for other traits. Flowers have been engineered to display colours that they cannot produce naturally (in particular, the blue colour of roses). A few crops, like the genetically modified papaya, are engineered to resist viruses. Other modifications alter the plant's composition, with the aim of making it more nutritious, longer lasting or more industrially useful. Recently, crops engineered to tolerate drought have been commercialised. Genetically modified crops that are no longer cultivated Approved genetically modified crops that have not yet been cultivated Genetically modified crops by country The following graph shows the area planted in GM crops in the five largest GM crop producing countries. The area planted is presented along the y axis in thousands of hectares while the year is along the x axis. See also AquAdvantage salmon References and notes Notes References Genetically modified crops Biotechnology Genetically modified organisms in agriculture Genetically modified organisms
List of genetically modified crops
[ "Engineering", "Biology" ]
1,215
[ "Biotechnology", "nan", "Genetic engineering", "Genetically modified organisms" ]
49,442,097
https://en.wikipedia.org/wiki/Gas%20Dynamic%20Trap
The Gas Dynamic Trap is a magnetic mirror machine operated at the Budker Institute of Nuclear Physics in Akademgorodok, Russia. Technical specifications Dimensions The plasma inside the machine fills a cylinder of space, 7 meters long and 28 centimeters in diameter. The magnetic field varies along this tube: in the center the field is low, reaching at most 0.35 tesla, and it rises to as high as 15 tesla at the ends. This variation in field strength is needed to reflect the particles and trap them internally (see: magnetic mirror effect). Heating The plasma is heated using two methods simultaneously. The first is neutral beam injection, in which a hot (25 keV) neutral beam of material is injected into the machine at a power of 5 megawatts. The second is electron cyclotron resonance heating, in which electromagnetic waves are used to heat the plasma, analogous to microwaving it. Performance As of 2016, the machine had achieved a plasma trapping beta of 0.6 for 5 milliseconds. It had reached an electron temperature of 1 keV using electron cyclotron resonance heating, and an ion density of 1×10²⁰ ions/m³. The machine loses material out of the ends of the mirror, but material is replenished at a rate that maintains the density inside the machine. Diagnostics During any given experiment, operators can choose from at least 15 fusion diagnostics to measure the machine's behavior: Thomson Scattering Motional Stark Effect CX Energy Analysis (2) Rutherford Ion Scattering Ion End Loss Analyzer Microwave Interferometer Dispersion Interferometer Diamagnetic Loops Langmuir Probes Pyroelectric Detectors RF Probes Beam Dump Calorimeters NBI Sec. Electron Detectors Neutron Detectors Thermonuclear Proton Detectors References Magnetic confinement fusion devices Budker Institute of Nuclear Physics
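From the field values quoted above, one can estimate the mirror ratio and the loss-cone angle of the trap. The sketch below is an illustrative back-of-the-envelope calculation using the standard magnetic-mirror loss-cone relation, not a figure from the machine's documentation:

```python
import math

B_MIN_T = 0.35  # central field (tesla), from the figures above
B_MAX_T = 15.0  # field at the mirror throats (tesla)

# Mirror ratio: how strongly the end fields pinch the flux tube.
mirror_ratio = B_MAX_T / B_MIN_T  # about 43

# Loss-cone half-angle: midplane particles with pitch angles smaller
# than this escape through the ends instead of being reflected.
loss_cone_deg = math.degrees(math.asin(math.sqrt(1.0 / mirror_ratio)))

print(f"mirror ratio ~ {mirror_ratio:.1f}")                   # ~42.9
print(f"loss-cone half-angle ~ {loss_cone_deg:.1f} degrees")  # ~8.8
```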
Gas Dynamic Trap
[ "Chemistry" ]
395
[ "Particle traps", "Magnetic confinement fusion devices" ]
29,732,232
https://en.wikipedia.org/wiki/Bancroft%20point
A Bancroft point is the temperature at which an azeotrope occurs in a binary system. Although vapor–liquid azeotropy is impossible for binary systems that are rigorously described by Raoult's law, in real systems azeotropy is inevitable at temperatures where the saturation vapor pressures of the two components are equal; such a temperature is called a Bancroft point. However, not all azeotropic binary systems exhibit such a point. A Bancroft point must also lie within the valid temperature ranges of the Antoine equations of both components. The Bancroft point is named after Wilder Dwight Bancroft. See also Raoult's law Vapor–liquid equilibrium Bancroft rule External links Separation of Azeotropic Mixtures Phase transitions Thermodynamics Distillation Temperature
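Numerically, a Bancroft point can be located by finding the temperature at which the two components' Antoine equations give equal saturation pressures. The sketch below is a generic illustration: the coefficients are hypothetical placeholders rather than fitted data, and a real calculation must use Antoine constants that are valid over the temperature window searched.

```python
from scipy.optimize import brentq

def antoine_log10_p(T, A, B, C):
    """Antoine equation: log10(P_sat) = A - B / (C + T)."""
    return A - B / (C + T)

# Hypothetical Antoine coefficients for two components (placeholders,
# not real data); both sets must use the same pressure/temperature units.
COMP1 = (7.0, 1200.0, 220.0)
COMP2 = (6.8, 1100.0, 210.0)

def log_pressure_difference(T):
    """Zero exactly where the two saturation-pressure curves cross."""
    return antoine_log10_p(T, *COMP1) - antoine_log10_p(T, *COMP2)

# Bracket the root inside both equations' validity ranges.
T_bancroft = brentq(log_pressure_difference, 0.0, 150.0)
print(f"Bancroft point near T = {T_bancroft:.1f} (units of the fit)")
```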
Bancroft point
[ "Physics", "Chemistry", "Mathematics" ]
152
[ "Physical phenomena", "Physical quantities", "Phases of matter", "Thermodynamics", "Statistical mechanics", "Dynamical systems", "Phase transitions", "Distillation", "Wikipedia categories named after physical quantities", "Physical chemistry stubs", "Scalar physical quantities", "Temperature",...
29,733,801
https://en.wikipedia.org/wiki/Rouse%20model
The Rouse model is frequently used in polymer physics. It describes the conformational dynamics of an ideal chain: single-chain diffusion is represented by the Brownian motion of beads connected by harmonic springs. There are no excluded-volume interactions between the beads, and each bead is subjected to a random thermal force and a drag force, as in Langevin dynamics. The model was proposed by Prince E. Rouse in 1953. An important extension that includes hydrodynamic interactions mediated by the solvent between different parts of the chain was worked out by Bruno Zimm in 1956. Whilst the Rouse model applies to polymer melts, the Zimm model applies to polymers in solution, where the hydrodynamic interaction is not screened. In solution, the Rouse–Zimm model predicts a diffusion coefficient D ~ 1/N^ν (where N is the chain length and ν the Flory exponent), which is consistent with experiments. In a polymer melt, the Rouse model correctly predicts long-time diffusion only for chains shorter than the entanglement length. For long chains with noticeable entanglement, the Rouse model holds only up to a crossover time τe. For longer times the chain can only move within a tube formed by the surrounding chains; this slow motion is usually approximated by the reptation model. References Polymer physics
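As a concrete illustration of the bead-spring picture, the sketch below integrates the overdamped Langevin equations for a short Rouse chain. All parameter values are arbitrary illustrative choices, not values from the literature, and the script demonstrates the model's ingredients rather than a production simulation.

```python
import numpy as np

# Minimal Brownian dynamics for a Rouse chain: N beads joined by
# harmonic springs, each feeling drag plus random thermal kicks.
rng = np.random.default_rng(0)

N = 32        # number of beads
k = 1.0       # spring constant (arbitrary units)
zeta = 1.0    # drag coefficient per bead
kBT = 1.0     # thermal energy
dt = 1e-3     # time step
steps = 10_000

x = np.zeros((N, 3))  # bead positions, started collapsed at the origin

for _ in range(steps):
    # Harmonic spring force on each bead from its neighbours.
    f = np.zeros_like(x)
    bond = x[1:] - x[:-1]
    f[:-1] += k * bond
    f[1:] -= k * bond
    # Overdamped Langevin update: deterministic drift + Gaussian noise.
    noise = rng.normal(scale=np.sqrt(2 * kBT * dt / zeta), size=x.shape)
    x += (f / zeta) * dt + noise

# The Rouse model predicts centre-of-mass diffusion D = kBT / (N * zeta).
print("predicted centre-of-mass diffusion coefficient:", kBT / (N * zeta))
```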
Rouse model
[ "Physics", "Chemistry", "Materials_science", "Biology" ]
271
[ "Polymer physics", "Applied and interdisciplinary physics", "Biophysics", "Polymer chemistry", "Statistical mechanics" ]
40,489,816
https://en.wikipedia.org/wiki/Rate-of-living%20theory
The rate of living theory postulates that the faster an organism's metabolism, the shorter its lifespan. First proposed by Max Rubner in 1908, the theory was based on his observation that smaller animals had faster metabolisms and shorter lifespans compared to larger animals with slower metabolisms. The theory gained further credibility through the work of Raymond Pearl, who conducted experiments on Drosophila and cantaloupe seeds which supported Rubner's initial observation. Pearl's findings were later published in his book, The Rate of Living, in 1928, in which he expounded upon Rubner's theory and demonstrated a causal relationship between the slowing of metabolism and an increase in lifespan. The theory gained additional credibility with the discovery of Kleiber's law in 1932. Max Kleiber found that an organism's basal metabolic rate could be predicted by raising the organism's body mass to the 3/4 power. This finding was noteworthy because lifespan scales with body mass in a comparable way, with an exponent between about 0.2 and 0.33, and the resulting relationship between size, metabolic rate and lifespan was colloquially called the "mouse-to-elephant" curve. Mechanism Mechanistic evidence was provided by Denham Harman's free radical theory of aging, created in the 1950s. This theory states that organisms age over time due to the accumulation of damage from free radicals in the body, and notes that metabolic processes, specifically those in the mitochondria, are prominent producers of free radicals. This provided a mechanistic link between increased metabolism and Rubner's initial observation of decreased lifespan. Current state of theory Support for this theory has been bolstered by studies linking a lower basal metabolic rate (evident in a lowered heartbeat) to increased life expectancy. This has been proposed by some to be the key to why animals like the giant tortoise can live over 150 years. However, the ratio of resting metabolic rate to total daily energy expenditure can vary between 1.6 and 8.0 between species of mammals. Animals also vary in the degree of coupling between oxidative phosphorylation and ATP production, the amount of saturated fat in mitochondrial membranes, the amount of DNA repair, and many other factors that affect maximum life span. Furthermore, a number of species with high metabolic rates, like bats and birds, are long-lived. A 2007 analysis showed that, when modern statistical methods for correcting for the effects of body size and phylogeny are employed, metabolic rate does not correlate with longevity in mammals or birds. See also DNA damage theory of aging Life history theory Longevity quotient References Rubner, M. (1908). Das Problem der Lebensdauer und seiner Beziehungen zum Wachstum und Ernährung. Munich: Oldenberg. Raymond Pearl. The Rate of Living. 1928. Theories of biological ageing Metabolism
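Kleiber's law can be written as B = B0·M^(3/4), so the mass-specific rate B/M falls off as M^(-1/4): smaller animals burn energy faster per gram, which is the observation the rate-of-living theory builds on. In the quick illustration below, the normalization constant and the body masses are assumptions chosen for the example, not values from the article:

```python
# Kleiber's law: basal metabolic rate scales as body mass to the 3/4 power.
B0 = 3.4  # assumed normalization constant (W / kg**0.75), illustration only

def basal_metabolic_rate(mass_kg: float) -> float:
    return B0 * mass_kg ** 0.75

for name, mass in [("mouse", 0.025), ("human", 70.0), ("elephant", 5000.0)]:
    bmr = basal_metabolic_rate(mass)
    # Mass-specific rate ~ M**(-1/4): highest for the smallest animal.
    print(f"{name:9s} BMR ~ {bmr:8.1f} W, per kg ~ {bmr / mass:6.2f} W/kg")
```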
Rate-of-living theory
[ "Chemistry", "Biology" ]
594
[ "Senescence", "Cellular processes", "Biochemistry", "Theories of biological ageing", "Metabolism" ]
40,491,703
https://en.wikipedia.org/wiki/SIMes
SIMes (or H2Imes) is an N-heterocyclic carbene. It is a white solid that dissolves in organic solvents. The compound is used as a ligand in organometallic chemistry. It is structurally related to the more common ligand IMes, but with a saturated backbone (the S of SIMes indicates the saturated backbone). It is slightly more flexible than IMes and is a component of the second-generation Grubbs catalyst (Grubbs II). It is prepared by alkylation of 2,4,6-trimethylaniline with dibromoethane, followed by ring closure and dehydrohalogenation. References Carbenes
SIMes
[ "Chemistry" ]
129
[ "Inorganic compounds", "Organic compounds", "Carbenes", "Organic compound stubs", "Organic chemistry stubs" ]
40,494,287
https://en.wikipedia.org/wiki/Kaj%20Riska
Kaj Antero Riska (born January 25, 1953, Helsinki, Finland) is a naval architect and engineer with expertise in ice and arctic technology. He has written various publications about ice-going ships and icebreaker design, ice loads, and ice management for arctic offshore floating platforms. He worked at Total S.A. as Senior Ice Engineer. He received the 2019 POAC Founders Lifetime Achievement Award. Education and career Kaj Riska graduated from the Helsinki University of Technology (TKK) in Naval Architecture as M.Sc. in 1978 and D.Sc. in 1988. He worked at the Technical Research Centre of Finland from 1977 to 1988 as the group leader for Arctic Marine Technology. From 1989 to 1991 he was a senior researcher for the Academy of Finland. From 1992 to 1995 he was the director of the Arctic Offshore Research Centre, and from 1995 to 2005 the professor of Arctic Marine Technology at the Helsinki University of Technology. Since 2005 he has been a partner of the company ILS Oy, and since 2006 Professor at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway. He and his Ph.D. students investigated models describing ice action on ships and their application to various aspects of ship design. Exams Kaj Riska passed his student exam (upper secondary school) at Helsingin Yhtenäiskoulu in May 1972. He received his Master of Science degree in naval architecture in June 1978 and his Doctor of Technology degree in September 1988, both issued by the Helsinki University of Technology. Previous professional experience From 1974 to 2005, Kaj Riska worked at many academic institutions as a researcher or professor. These experiences can be summarized as follows: TKK, Laboratory of Mechanics: assistant to professor. Technical Research Centre of Finland, Ship Laboratory: research assistant, research scientist, senior research scientist and head of division. TKK, Laboratory of Naval Architecture and Marine Engineering: senior research scientist. Academy of Sciences in Finland: senior fellow. TKK, Arctic Offshore Research Centre: Director. TKK, Ship Laboratory: acting professor and Professor in Arctic Marine Technology. He has also given lectures at the TKK Laboratory of Naval Architecture and Marine Engineering on ship vibrations, winter navigation and marine technology, and worked as Project Manager from 1989 to 1992 in the same laboratory. In 2005, Kaj Riska joined ILS Ltd as Senior Naval Architect and partner. ILS Ltd is an independent, privately owned consulting and engineering company, specialized in ship design and especially in ship project evaluation and basic design. In 2012, Riska left ILS Ltd to join Total S.A. Present activities Since 2012, Kaj Riska has been working at the supermajor Total in Paris as Senior Ice Engineer. Total is positioned on two large-scale projects in the Arctic: Shtokman and Yamal LNG, both on the Russian Arctic shelf. Riska has also been Professor II at the Norwegian University of Science and Technology (NTNU) in Trondheim, Norway, since 2006. Influence Kaj Riska is a key person in the field of Arctic technology. His works are key references for ISO standards (ISO 19906) and for the classification rules of ice-going ships. He is an active member of many organizations. 
Scientific activities PolarTech (member of the standing committee since 1991) Conference of Port and Ocean Engineering under Arctic Conditions (POAC) (member of the international committee since 1990, chairman 1998-2000) Society of Structural Mechanics (secretary 1984-85, chairman 1996-2001) The Association of Finnish Metal Industries, Shipbuilding group, R & D committee (member and secretary, 1985-1998) 11th and 12th International Ship & Offshore Structures Congress (ISSC) (member of committee I.2 1988-1994, chairman of ice-structure interaction committee 1994-2000) 20th International Towing Tank Conference (ITTC) (member of performance in ice-covered waters committee 1990-1993, chairman 1993-1996) The Board of the Maritime Institute of Finland (member 1994-1999) The Finnish Polar Council (member 1994-1999) International Journal of Polar Engineering and Offshore Mechanics (associate-editor 1995-1998) Oceanic Engineering International - journal (associate-editor 1996-1999) Specific Committee for European Union scientific programme MAST (Marine Science and Technology) (national delegate 1997-1998) European Association of Universities in Marine Technology and Related Sciences (member of the executive committee 2001-2004) European Shipbuilders Association (CESA), the R&D working group of CESA (COREDES) (scientific adviser 2003-2005) The Maritime Institute of Finland (chairman of the board 2004) Expert Pilot Panel of IMO on developing the GBS (member 2007-2008) Partial list of project management Expert for the Finnish Transport Safety Agency in developing the ice class correction factors for EEDI at IMO/MEPC, since 2009 Expert in developing the Finnish-Swedish ice class rules for the Finnish and Swedish Maritime Administrations, since 1995 Updating the performance requirements of the Finnish-Swedish ice class rules, technical background, The Finnish Maritime Administration, since 1993 Coordinator of the EC-funded project SAFEICE (Increasing the Safety of Icebound Shipping), 2004 – 2005 Expert for the Finnish Maritime Administration in the EC-funded PHARE-project Strengthening Enforcement of Maritime Safety, a twinning project between Estonia and Finland, 2003 - 2005 Coordinator of the EC-funded project IRIS (Ice ridging information for decision making in shipping operations), 2003 – 2005 Member of the ad hoc IACS working group to develop the harmonized polar rules, 1993 – 2001 Evaluation of design ice loads of a FPSU operating in the Bohai Bay, 1997-1999, The China Offshore Oil Bohai Corporation and the Finnish Ministry of Trade and Industry. Coordinator of the EC-funded MAST III project ICE STATE concerned with modelling and remote sensing of the ice cover, 1996-1999 Safety of RORO vessels, ship collision risks in the Finnish waters and external collision dynamics, 1996. International joint industry project coordinated by Det norske Veritas. Ice interaction with the Bohai Bay production station, 1992, China Offshore Oil Design and Engineering Corporation. Model tests to design the Finnish multipurpose icebreaker, 1991–92, The Board of Navigation and Finnyards. Ice expedition to the Pechora Sea, winter 1992. Offshore Industry Group in Finland. Ice expedition to the Okhotsk Sea, winter 1990 and 1991. Offshore Industry Group in Finland. Ice load evaluations for the various production platform concepts for the Shtockmanovskoye gas field, 1990, Wärtsilä Project Export, 2000, Fortum. Ice load on ships in realistic ice conditions, 1989 - 1993, jointly with National Research Council of Canada. 
The Joint Research Project Arrangement No. 5 (JRPA V) between Finland and Canada. Theoretical modelling of ship/ice interaction, 1987 - 1990. The Technology Development Centre of Finland. Physical modelling of ship/ice interaction, 1986 - 1989, jointly with Canadian Coast Guard. The Joint Research Project Arrangement No. 3 (JRPA III) between Finland and Canada. Ice load penetration model, 1985 - 1987, jointly with National Research Council of Canada. The Joint Research Project Arrangement No. 1 (JRPA I) between Finland and Canada. Estimation of ice loading and strength of shell structure of MV Arctic, 1983, Canadian Coast Guard. Statistical measurement of wave and ice loads on MV Arctic in the Canadian Arctic and North Atlantic, 1982, Canadian Coast Guard. Measurement of ice loads on Canmar Kigoriak in the Beaufort Sea August and October, 1981, Dome Petroleum. Publications Theses Riska, K. 1978. On the Application of Macroscopic Failure Criteria on Columnar-Grained Ice. M.Sc. Thesis, Helsinki University of Technology, Department of General Sciences, Otaniemi 1978. 71 p. (in Finnish) Riska, K. 1987. On the Mechanics of the Ramming Interaction between a Ship and a Massive Ice Floe. Technical Research Centre of Finland, Publications 43, Espoo, 1987, 86 p. Scientific publications Riska, K. & Varsta, P. 1977. State-of-Art Review of Basic Ice Problems for a Naval Architect. Espoo 1977, VTT Ship Laboratory, Report No. 2, 63 p. Riska, K. & Varsta, P. 1977. Failure Process of Ice Edge Caused by Impact with Ships Side. Symposium in connection with 100 Years Celebration of Finnish Winter Navigation, Oulu, Finland, December 16–17, 1977. Publ. Board of Navigation, Helsinki 1979, pp. 235–262. Riska, K. 1980. On the Role of Failure Criterion of Ice in Determining Ice Loads. Espoo 1980, VTT Ship Laboratory, Report No. 7. 31 p. Enkvist, E. & Varsta, P. & Riska, K. 1979. The Ship-Ice Interaction. POAC 1979, Proceedings, vol. 2, Trondheim, August 13–18, 1979, pp. 977–1002. Vuorio, J. & Riska, K. & Varsta, P. 1979. Long Term Measurements of Ice Pressure and Ice-Induced Stresses on the Icebreaker SISU in Winter 1978. Helsinki 1979, Winter Navigation Research Board, Report No. 28, 50 p. Riska, K. & Kujala, P. & Vuorio, J. 1983. Ice Load and Pressure Measurements onboard I.B. SISU. POAC 1983, Proceedings, Vol. 2, Helsinki, April 5–9, 1983, pp. 1055–1069. Nyman, T. & Riska, K. 1984. The Level Ice Resistance - Ideas Stemming from the Model and Full Scale Tests. VTT Symposium 52, Ship Strength and Winter Navigation, Espoo, January 10–11, 1984, pp. 183–200. Kujala, P. & Riska, K. 1983. Evaluation of Some Factors Influencing Material Selection for Arctic Vessels. Proceedings of the Study Session on Fracture Toughness Evaluation of Steels for Arctic Marine Use, October 1983, Ottawa, Canada, Publ. Physical Metallurgy Research Laboratories MRP/PMRL 83-72 (OP-J), pp. 3/1-3/27. Riska, K. & Frederking, R. 1987. Ice Load Penetration Modelling. POAC 1987, proceedings, Vol. 1, Fairbanks, Alaska, August 17–21, 1987, pp. 317–328. Riska, K. 1989. An Analysis of Factors Influencing Ship Response in Collision with Multi-Year Ice Floes. POAC 1989, Proceedings, Vol. 2, Luleå, June 12–16, 1989, pp. 750–763. Riska, K. 1991. Theoretical Modelling of Ice-Structure Interaction. S. Jones & R. McKenna & J. Tillotson & I. Jordaan (Eds.): Ice-Structure Interaction. IUTAM-IAHR Symposium, St. John's, Newfoundland, Canada, Springer-Verlag, Berlin, 1991, pp. 595 – 618. Kujala, P. & Riska, K. & Varsta, P. & Koskivaara, R. & Nyman, T. 1990. 
Results from In-Situ Four Point Bending Tests with Baltic Sea Ice. IAHR Symposium on Ice 1990, Proceedings, Vol. 1, Espoo, Finland, August 20–24, 1990, pp. 261–278. Veitch, B. & Lensu, M. & Riska, K. & Kosloff, P. & Keiley, P. & Kujala, P. 1991. Field Observations of Ridges in the Northern Baltic Sea. 11th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), Proceedings, Vol 1, St. John's, Canada, September 24–28, 1991, pp. 381 – 400. Riska, K. 1991. Observations of the Line-like Nature of Ship-Ice Contact. 11th International Conference on Port and Ocean Engineering under Arctic Conditions, Proceedings, Vol 2, St. John's, Canada, September 24–28, 1991, pp. 785 – 811. Kujala, P. & Varsta, P. & Riska, K. 1993. Full-Scale Observations of Ship Performance in Ice. The 12th International Conference on Port and Ocean Engineering under Arctic Conditions, 17–20 August 1993, Hamburg, Germany, pp. 209–218. Soininen, H. & Nyman, T. & Riska, K. & Lohi, P. & Harjula, A. 1993. The Ice Capability of the Multipurpose Icebreaker FENNICA - Full-Scale Results. The 12th Int. Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 17–20 August 1993, Hamburg, Germany, pp. 259–271. Riska, K. & Baarman, L. 1993. A Model for Ice Induced Vibration of Slender Offshore Structures. The 12th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 17–20 August 1993, Hamburg, Germany, pp. 578–594. Nortala-Hoikkanen, A. & Riska, K. & Salmela, O. & Wilkman, G. 1993. Methods to Map Ice Conditions, to Measure Ice Properties and to Quantify Ice Features. The 12th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 17–20 August 1993, Hamburg, Germany, pp. 921–935. Riska, K. & Bo, Z.C. & Saarikoski, R. 1993. Structure-Ice Interaction for a Bohai Bay Oil Production Project. Sea Ice Symposium, 19–21 October 1993, Beijing, China, pp. 230–246. Riska, K. 1993. Prediction of Ice Action on Offshore Structures. Sea Ice Symposium, 19–21 October 1993, Beijing, China, pp. 207–221. Riska, K. & Jalonen, R. & Veitch, B. & Nortala-Hoikkanen, A. & Wilkman, G. 1994. Assessment of Ice Model Testing Techniques. IceTech '94, 15–18 March 1994, Paper F. Riska, K. & Kukkanen, T. 1994. Speed Dependence of the Natural Modes of an Elastically Scaled Ship Model. Proc. of the Int. Conf. on Hydroelasticity in Marine Tech., 25–27 May 1994, Trondheim, Norway, pp. 157–168. Tuhkuri, J. & Riska, K. 1994. Experimental Investigations on Extrusion of Crushed Ice. IAHR 94, Proc. of the 12th Int. Symp. on Ice, 23–26 August 1994, Trondheim, Norway, Vol. 1, pp. 474–483. Hautaniemi, H. & Oksama, M. & Multala, J. & Leppäranta, M. & Riska, K. & Salmela, O. 1994. Airborne Electromagnetic Mapping of Ice Thickness in the Baltic Sea. IAHR 94, Proc of the 12th Int. Symp. on Ice, 23–26 August 1994, Trondheim, Norway, Vol. 2, pp. 530–539. Riska, K. 1995. Models of Ice-Structure Contact for Engineering Applications. In Mechanics of Geo-material Interfaces, eds. A.P.S. Selvadurai & M.J. Boulon, Elsevier Science B.V., 1995, pp. 77–103. Riska, K. & Kujala, P. & Goldstein, R. & Danilenko, V. & Osipenko, N. 1995. Application of Results from the Research Project 'A Ship in Compressive Ice' to Ship Operability. The 13th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 15–18 August 1995, Murmansk, Russia. Lensu, M. & Heale, S. & Riska, K. & Kujala, P. 1996. Ice Environment and Ship Hull Loading along the NSR. INSROP Working Paper No. 
66 - 1996, I.1.10. Multala, J. & Hautaniemi, H. & Oksama, M. & Leppäranta, M. & Riska, K. & Lensu, M. 1996. An Airborne Electromagnetic System on a Fixed Wing Aircraft for Sea Ice Thickness Mapping. Cold Regions Science and Technology, no. 24, 1996, pp. 355 – 373. Li Zhijun & Riska, K. 1996. On the Measuring Methods of Physical and Mechanical Properties for Fine Granular Ethanol Model Ice. Proc. of the fifth Chinese National Glaciology and Geocryology Conference, Ganshu Cultural Place, China, Vol. I, 1996, pp. 565 – 571. [in Chinese]. Riska, K. 1997. Determination of the Stress Field around a Hole in a Plate. Journal of Structural Mechanics, Vol. 30, 1997, No. 2, pp. 18 – 39. (in Finnish). Daley, C. & Tuhkuri, J. & Riska, K. 1998: The Role of Discrete Failures in Local Ice Loads. Cold Regions Science and Technology, 27(1998), pp. 197–211. Riska, K. & Wilhelmson, M. & Englund, K. & Leiviskä, T. 1998: Performance of Merchant Vessels in Ice in the Baltic. Winter Navigation Research Board, Research Report No. 52, Helsinki 1998, 72 p. Tuhkuri, J. & Lensu, M. & Riska, K. & Sandven, S. & Thorkildsen, F. & Haapala, J. & Leppäranta, M. & Doble, M. & Alsenov, Y. & Wadhams, P. & Erlingson, B. 1998. Local Ice Cover Deformation and Mesoscale Ice Dynamics "Ice State". Third European Marine Science and Technology Conference, Lisbon, 23–27 May 1998, Proc. Vol. I, pp. 315–328. Li, Zhijun & Riska, K. 1998. Characteristic Length and Strain Modulus of the Fine Grain Ethanol Model Ice. Marine Environmental Science, 17(1998)4, pp. 42–47 [in Chinese]. Li, Zhijun & Riska, K. 1998. Experimental Study on the Uniaxial Compressive Strength Characteristics of Fine Grain Ethanol Model Ice. Journal of Glaciology and Geocryology, 20(1998)2, pp. 167–171 [in Chinese]. Li, Zhijun & Riska, K. 1998. Uniaxial Compressive Strength of Fine Grain Ethanol Model Ice. Ice in Surface Waters, Proc. of the IAHR Ice Symposium, ed. H-T. Shen, Balkema, Holland, pp. 547–552. Riska, K. & Daley, C. 1999. Harmonization of Polar Class Ship Rules. Structural Design '98, Seminar, Espoo, Finland, 26 March 1998. Proc. Ed. P. Kujala, Helsinki University of Technology, Ship Laboratory, Report M-238, Espoo 1999, pp. 41–56. Riska, K. & Tuhkuri, J. 1999. Analysis of Contact between Level Ice and a Structure. Proc. of the International Workshop on RATIONAL EVALUATION OF ICE FORCES ON STRUCTURES, 2–4 February 1999, Mombetsu, Japan, pp. 103–120. Patey, M. & Riska, K. 1999. Simulation of Ship Transit through Ice. INSROP Working Paper No. 155 - 1999. The Fridtjof Nansen Institute, Norway, 57 p. Nyman, T., Riska, K., Soininen, H., Lensu, M., Jalonen, R., Lohi, P. & Harjula, A. 1999. The Ice Capability of the Multipurpose Icebreaker Botnica – Full Scale Results. The 15th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC99), 23–27 August 1999, Otaniemi, Finland, pp. 631–643. Riska, K., Patey, M., Kishi, S. & Kamesaki, K. 2001. Influence of Ice Conditions on Ship Transit Times in Ice. The 16th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 12–17 August 2001, Ottawa, Canada, pp. 729–746. Riska, K., Leiviskä, T., Nyman, T., Fransson, L., Lehtonen, J., Eronen, H. & Backman, A. 2001. Ice Performance of the Swedish Multipurpose Icebreaker Tor Viking II. The 16th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 12–17 August 2001, Ottawa, Canada, pp. 849–866. Leiviskä, T., Tuhkuri, J. & Riska, K. 2001. 
Model Tests on Resistance in Ice-Free Ice Channels. The 16th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 12–17 August 2001, Ottawa, Canada, pp. 881–890. Iyerusalimski, A., Riska, K. & Minnick, P. 2001: USCGC Healy Ice trials Trafficability Program. The 16th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 12–17 August 2001, Ottawa, Canada, pp. 917–920. St. John, J., Tunik, A., Riska, K. & Sheinberg, R. 2001: Forward Shoulder Ice Impact Loads during the USCGC Healy Ice Trials. The 16th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), 12–17 August 2001, Ottawa, Canada, pp. 965–968. Riska, K. & Uto, S. & Tuhkuri, J. 2002: Pressure Distribution and Response of Multiplate Panels under Ice Loading. Cold Regions Science and Technology, 34(2002), pp. 209–225. Riska, K. & Lohi, P. & Eronen, H. 2005: The Width of the Channel Achieved by an Azimuth Thruster Icebreaker. The 17th International Conference on Port and Ocean Engineering under Arctic Conditions (POAC), June 26–30, 2005, Vol. 2, pp. 647 – 662. Jalonen, R. & Riska, K. & Hänninen, S. 2005: A Preliminary Risk Analysis of Winter Navigation in the Baltic Sea. Winter Navigation Research Board, Research Report No 57, Helsinki, 206 p. Riska, K. & Breivik, K. & Eide, S.I. & Gudmestad, O. 2006: Factors Influencing the Development of Routes for Regular Oil Transport from Dikson. Proc. ICETECH’06, Banff, Canada, Paper 153RF, 7 p. Bridges, R. & Riska, K. & Zhang, S. 2006: Preliminary Results of Investigation on the Fatigue of Ship Hull Structures when Navigating in Ice. Proc. ICETECH’06, Banff, Canada, Paper 142 RF, 4 p. Wang, Ge & Liu, S. & Riska, K. 2006: Recent Advances in Structural Design of Ice-Strengthened Vessels. Proc. ICETECH’06, Banff, Canada, Paper 127 RF, 8 p. Pärn, O. & Haapala, J. & Kouts, T. & Elken, J. & Riska, K. 2007: On the Relationship between Sea Ice Deformation and Ship Damages in the Gulf of Finland in Winter 2003. Proc. Estonian Acad. Sci. Eng. Vol. 13, No. 3, 2007 Tikka, K., Riska, K. & Liu, S. 2008: Tanker Design Considerations for Safety and Environmental Protection of Arctic Waters: Learning from Past Experience. WMU Journal of Maritime Affairs, Vol. 7 (2008), No. 1, pp. 189–204. Eriksson, P., Haapala, J., Heiler, I., Leisti, H., Riska, K. & Vainio, J. 2009: Ships in Compressive Ice – Description and Operative Forecasting of Compression in an Ice Field. Winter Navigation Research Board, Research Report No. 59, Finnish and Swedish Maritime Administrations, 43 p. Kujala, P., Suominen, M. & Riska, K. 2009: Statistics of Ice Loads Measured on MT Uikku in the Baltic. Proc. of the 20th International Conf. on Port and Ocean Engineering under Arctic Conditions (POAC09), June 9–12, Luleå, Sweden. Su, B., Riska, K. & Moan, T. 2010: A numerical method for the prediction of ship performance in level ice. Cold Regions Science and Technology 60 (2010), pp. 177–188. Su, B., Riska, K. & Moan, T. 2010: Numerical Simulation of Ship Turning in Ice. 29th International Conference on Ocean, Offshore and Arctic Engineering (OMAE2010), June 6–11, 2010, Shanghai, China, pp. 783–792. Suyuthi, A., Leira, B. & Riska, K. 2010: Variation of the Short Term Extreme Ice Loads Along a Ship Hull. 29th International Conference on Ocean, Offshore and Arctic Engineering (OMAE2010), June 6–11, 2010, Shanghai, China, pp. 783–792. Su, B., Riska, K. & Moan, T. 
2011: Numerical Simulation of Local Ice Loads in Uniform and Randomly Varying Ice Conditions. Cold Regions Science and Technology 65(2011), pp. 145–159. Su, B., Riska, K. & Moan, T. 2011: Numerical Study of Ice-Induced Loads on Ship Hulls. Marine Structures (in press). Riska, K. & Coche, E. 2013: Station keeping in ice - Challenges and possibilities. Proc. of the 22nd International Conf. on Port and Ocean Engineering under Arctic Conditions (POAC13), June 2013, Espoo, Finland. Laboratory work Riska, K. & Frederking, R. 1985. Constituents for Structure-Ice Interaction Modelling. Joint Research Project Arrangement I, Ice Load Penetration Model, Report 1, National Research Council of Canada and Technical Research Centre of Finland, 1985, 34 p. Riska, K. & Frederking, R. 1987. Modelling Ice Load during Penetration into Ice. Joint Research Project Agreement I, Ice Load Penetration Model, Report 2, National Research Council of Canada and Technical Research Centre of Finland, 1987, 57 p. + 18 app. Riska, K. & Daley, C. 1986. M.V. Arctic ramming Model Test results. Joint Research Project Arrangement III, Physical Modelling of Ship/Ice Interaction, Report 1, Transport Canada and Technical Research Centre of Finland, 1986, 54 p. + 43 app. Riska, K. 1988. Ship Ramming Multi-Year Ice Floes, Model Test Results. Technical Research Centre of Finland, Research Notes 818, Espoo, 1988, 67 p. + 47 app. Riska, K. 1988. Ship/Ice Interaction, Prestudy and Research Plan. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-81, Espoo, 1988, 39 p (in Finnish). Muhonen, A. & Riska, K. 1988. Impact of a Landing Craft Bow on a Presawn Level Ice Edge, Results from the First Model Test Series 3-8-12-1987. Helsinki University of Technology, Lab. of Naval Architecture and Marine Engineering, Report M-84, Espoo, 1988, 30 p.+ 169 app. (in Finnish) Joensuu, A. & Riska, K. 1989. Structure/Ice Contact, Measurement Results from the joint Tests with Wärtsilä Arctic research Centre in Spring 1988. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-88, Espoo, 1989, 57 p. + 154 app. (in Finnish) Gyldén, R. & Riska, K. 1989. Ice Load Measurements onboard MS Kemira, Winter 1989. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-93, Espoo, 1989, 13 p. + 49 app. Riska, K. & Kämäräinen, J. & Hänninen, M. 1990. Ice Impact Model Tests for Three Bow Forms of a Vessel. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-96, Espoo, 1990, Vol. 1, 141 p. + 24 app., Vol 2, 374 p. Riska, K. & Rantala, H. & Joensuu, A. 1990. Full Scale Observations of Ship-Ice Contact. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-97, Espoo, 1990, 54 p. + 293 app. Tuhkuri, J. & Riska, K. 1990. Results from Tests on Extrusion of Crushed Ice. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-98, Espoo, 1990, 47 p. + 33 app. Lindholm, J.-E. & Riska, K. & Joensuu, A. 1990. Contact between Structure and Ice, Results from Ice Crushing Tests with Flexible Indentor. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-101, Espoo, 1990, 30 p. + 117 app. Daley, C. & Riska, K. 1990. Review of Ship-Ice Interaction Mechanics. Helsinki University of Technology, Lab. of Naval Architecture and Marine Eng., Report M-102, Espoo, 1990, 120 p. Riska, K. 1991. Prestudy on the navigation in the Saimaa lake area. 
Helsinki University of Technology, Lab. of Naval Architecture and Marine Engineering, Report M-115, Espoo, 1991, 41 p. (in Finnish) Muhonen, A. & Kärnä, T. & Eranti, E. & Riska, K. & Järvinen, E. and Lehmus, E. 1992. Laboratory Indentation Tests with Thick Freshwater Ice, Vol. I. Technical Research Centre of Finland, Research Notes 1370, Espoo, 1992, 92 p. + app. 103 p. Muhonen, A. & Kärnä, T. & Järvinen, E. & Riska, K. & Lehmus, E. 1992. Laboratory Indentation Tests with Thick Freshwater Ice, Vol. II. Helsinki University of Technology, Laboratory of Naval Architecture and Marine Engineering, Report M-122, Espoo, Finland, 1992, 397 p. Riska, K. & Baarman, L. & Muhonen, A. 1992. Modelling of Ice-Induced Vibration of Slender Offshore Structures. Helsinki University of Technology, Arctic Offshore Research Centre, Report M-172, Otaniemi, Finland, 1992, 33 p. Riska, K. & Kukkanen, T. 1994. Speed Dependence of the Natural Modes of an Elastically Scaled Ship Model, Test Results. Helsinki University of Technology, Arctic Offshore Research Centre, Report M-184, Otaniemi, Finland, 1994, 47 p. Riska, K. & Salmela, O. 1994. Description of Ice Conditions along the North-East Passage. Helsinki University of Technology, Arctic Offshore Research Centre, Report M-192, Otaniemi, Finland, 1994, 26 p. + 41 app. Multala, J. & Hautaniemi, H. & Oksama, M. & Leppäranta, M. & Haapala, J. & Herlevi, A. & Riska, K. & Lensu, M. 1995: Airborne Electromagnetic Surveying of Baltic Sea Ice. University of Helsinki, Department of Geophysics, Report Series in Geophysics No. 31, Helsinki 1995, 58 p. La Prairie, D. & Wilhelmson, M. & Riska, K. 1995. A Transit Simulation Model for Ships in Baltic Ice Conditions. Helsinki University of Technology, Ship Laboratory, Report M-200, Otaniemi, Finland, 1995, 38 p. Li, Zhijun, Riska, K. 1996. Preliminary Study of Physical and Mechanical Properties of Model Ice. Helsinki University of Technology, Ship Laboratory, Report M-212, Otaniemi, Finland, 1996, 100 p. + 120 app. Riska, K. & Windeler, M. 1997. Ice-Induced Stresses in the Shell Plating of Ice-Going Vessels. Helsinki University of Technology, Ship Laboratory, Report M-219, Otaniemi, Finland, 1997, 34 p. Tuhkuri, J. & Riska, K. & Wilhelmson, M. & Kennedy, R. & McCarthy, S. 1997. Indentation of Model Scale Pressure Ridges with a Vertical Indentor. Helsinki University of Technology, Ship Laboratory, Report M-230, Otaniemi, Finland, 1997, 63 p. Leiviskä, T., Kennedy, R., Tuhkuri, J., Herrington, P., Aspelund, A. & Riska, K. 2000. Model Tests on Resistance in Ice-Free Channels. Helsinki University of Technology, Ship Laboratory, Report M-255, Otaniemi, Finland, 2000, 27 p. Aspelund, A., Forsey, H. and Riska, K. 2001. Analysis of Trafficability Observations during USCGC Healy Ice Trials, Spring 2000. Helsinki University of Technology, Ship Laboratory, Report M-262, Otaniemi, Finland, 2001, 33 p. + app. Patey, M., Aspelund, A., Forsey, H. and Riska, K. 2001. Ice Top Profile Measurements during USCGC Healy Ice Trials, Spring 2000. Helsinki University of Technology, Ship Laboratory, Report M-261, Otaniemi, Finland, 2001, 23 p. + app. Hänninen, S. & Riska, K. 2001. Description of the Ice Performance of USCGC Healy. Helsinki University of Technology, Ship Laboratory, Report M-263, Otaniemi, Finland, 2001, 45 p. + app. (in Finnish) Hänninen, S., Lensu, M. & Riska, K. 2001. Analysis of the Ice Load Measurements during USCGC Healy Ice Trials, Spring 2000. 
Helsinki University of Technology, Ship Laboratory, Report M-265, Otaniemi, Finland, 2001, 65 p. + app. Magazine articles, lectures and other publications Riska, K. & Varsta, P. 1980. Design of Offshore Structures for Low Temperatures. INSKO course Steel Structures in Low Temperatures, Helsinki 1980, 47 p. (in Finnish) Sukselainen, J. & Riska, K. 1986. Current Problems in Arctic Vessel Research. International Polar Transportation Conference, Vancouver, Canada, May 4–8, 1986, Proceedings, Vol. 1, pp. 41–65. (Invited Lecture) Riska, K. 1987. Investigation of Factors involved in Longitudinal Strength of Arctic Vessels. Technical Seminar Arranged by CNIIMF and Wärtsilä Marine, Leningrad 29.5.1987, 28 p. (in Finnish and Russian) Riska, K. 1991. Current Research Themes in Arctic Technology. Lecture in the Inauguration of the Finnish Maritime Institute 10.12.1991, Espoo, Finland, 19 p. (in Finnish) Riska, K. & Baarman, L. and Muhonen, A. 1992. Modelling of Ice Induced Vibration of Slender Offshore Structures. Third International Conference on Ice Technology, Cambridge, Mass., USA, 11–13 August 1992, 33 p. [unpublished]. Wilkman, G. & Riska, K. 1992. Possibilities to Use Model Tests in Ice for the Development of the Northern Sea Route. Conference on Opening the Northern Sea Route, Trondheim, Norway, 2–4 September 1992, 18 p. Riska, K. 1992. The Economic Use of the North-East Passage, Memorandum. The Northern Sea Route Expert Meeting, Tromsö 13–14 October 1992, pp. 243–248. Riska, K. 1994. Research on Arctic Technology. Tiedepolitiikka 2(1994)19, pp. 44–46. (in Finnish) Riska, K. 1994. A Century of Icebreakers. Form Function Finland 4(1994), pp. 30–34. Riska, K. & Tuhkuri, J. 1995. Application of Ice Cover Mechanics in Design and Operations of Marine Structures. Proc. of the Sea Ice Mechanics and Arctic Modeling Workshop, April 25–28, 1995, Anchorage, Alaska. Mälkki, P. & Riska, K. & Tuhkuri, J. 1998. Finland: Ice, Environment, Cold Seas - Themes for Marine S&T. Sea Technology, August 1998, pp. 18–21. Riska, K. 1998. The Significance of Winter in the Transport Logistics. Satama 98 Port, Seminar in Naantali 14–15 October 1998. Publications from the Centre for Maritime Studies, University of Turku, Report B 102, Turku 1998, pp. 38–54. (in Finnish) Riska, K. 1999. What Happens When a Ship Hits the Ice Edge? Tietoyhteys 2(1999), pp. 29–30. (in Finnish) Riska, K. 1999. Dissemination and Exploitation of Results from Basic RTD: How to Do It Better? Keynote Speech in the Third European Marine Science and Technology Conference, Lisbon, 23–27 May 1998, Conf. Proc. 1999, pp. 379–385. Riska, K. 1999. Arctic Research and Shipping. Ports and Short Sea Shipping, A Baltic Sea Seminar, 14–15 June 1999, Turku, Finland. Publications from the Centre for Maritime Studies, University of Turku, Report A31, pp. 71–80. Riska, K. 1999. The Background of the Powering Requirements in the Finnish-Swedish Ice Class Rules. Maritime Research Seminar '99, Espoo 17.3.1999, in Nyman, T. (ed.) 2000. VTT Manufacturing Technology, pp. 91–106. Saarinen, S., Riska, K. & Saisto, I. 1999. Ice Force Model Tests of a FPSU System. Maritime Research Seminar '99, Espoo 17.3.1999, in Nyman, T. (ed.) 2000. VTT Manufacturing Technology, pp. 66–77. Riska, K. 2001. Factors Influencing the Tanker Traffic Safety in Winter in the Gulf of Finland. Lecture in the Safety at Sea Seminar 3.5.2001, Finlandia Hall, Helsinki, Finland. (in Finnish) Riska, K. 2001. The Environmental Safety of Tanker Traffic during Winter in the Gulf of Finland. 
International Seminar on Combatting Marine Oil Spills in Ice and Cold/Arctic Conditions, Helsinki, Finland 20 – 22 November 2001. Riska, K. 2001. Year-Round Inland Navigation and Short Sea Shipping – a Necessity in Europe, or Is It? A presentation given in the Seminar on Winter Navigation in Coastal and Inland Waterways, Lappeenranta 29–30 November 2001. Riska, K. & Hänninen, S. 2004: Ice Damages of Ice-Strengthened Ships. Lecture at Ice Days, Oulu, February 2004. Riska, K. 2006: Ice Classification of Large Vessels. Lecture at Ice Day, Kemi, February 9–10, 2006. Riska, K. 2008: Forecasting Ice Pressure against Ships. Lecture at Ice Day, Rovaniemi, February 13–14, 2008. Riska, K. 2008: Characteristics of Pollution Response Vessel for the Gulf of Finland. Arctic Shipping Summit, St. Petersburg, April 7–10, 2008. Riska, K. 2009: Implementation of Research Results into Icebreaker Design. Invited lecture at POAC09, 20th International Conf. on Port and Ocean Engineering under Arctic Conditions (POAC09), June 9–12, Luleå, Sweden. Martyuk, G. & Riska, K. 2010: SV Toboy and IB Varandey, Full Scale Trials. Arctic Shipping Summit, Helsinki, April 27–29, 2010. Riska, K. 2010: Bulk Carriers for Northern Baltic; Design Considerations. Lecture at Ice Day, Tornio, February 10–11, 2010. Riska, K. 2011: Propulsion in Ice – An Introduction. Arctic Shipping Summit, Helsinki, April 12–15, 2011. References 1953 births Living people TotalEnergies people 20th-century Finnish engineers Ice in transportation Engineers from Helsinki Academic staff of the Helsinki University of Technology
Kaj Riska
[ "Physics" ]
8,634
[ "Physical systems", "Transport", "Ice in transportation" ]
40,496,327
https://en.wikipedia.org/wiki/Extended%20theories%20of%20gravity
Extended theories of gravity are alternative theories of gravity developed from the exact starting points first investigated by Albert Einstein and David Hilbert. These are theories describing gravity which are metric theories, theories based on a linear connection (related affine theories), or metric-affine gravitation theories. Rather than trying to discover correct calculations for the matter side of the Einstein field equations (which include inflation, dark energy, dark matter, large-scale structure, and possibly quantum gravity), it is instead proposed to change the gravitational side of the equations. Proposed theories Hernández et al. One such theory is an extension of general relativity and Newton's law of universal gravitation, first proposed in 2010 by the Mexican astronomers Xavier Hernández Doring, Sergio Mendoza Ramos et al., researchers at the Astronomy Institute of the National Autonomous University of Mexico. This theory is in accordance with observations of the kinematics of the solar system, extended binary stars, and all types of galaxies and galaxy groups and clusters. It also reproduces the gravitational lensing effect without the need to postulate dark matter. There is some evidence that it could also explain the dark energy phenomenon and give a solution to the initial conditions problem. These results can be classified as a metric f(R) gravity theory, more properly an f(R,T) theory, derived from an action principle. This approach to solving the dark matter problem takes the Tully–Fisher relation into account as an empirical law that always applies at scales larger than the Milgrom radius. See also Modified Newtonian Dynamics Alternatives to general relativity References Further reading External links Sergio Mendoza's web page News El universal.com, La jornada.mx, La crónica.com Theories of gravity General relativity
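For intuition about the Milgrom radius mentioned above: it is the scale r_M = sqrt(GM/a0) beyond which the Newtonian acceleration drops below Milgrom's constant a0, and in the deep modified regime a Tully–Fisher-like flat rotation velocity v^4 = G·M·a0 emerges. The sketch below is a generic deep-MOND illustration of those two formulas, not the specific Hernández et al. field equations, and the galaxy mass is an assumed example value.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10      # Milgrom's acceleration scale, m s^-2
M_SUN = 1.989e30  # solar mass, kg
M_PER_KPC = 3.086e19

def milgrom_radius(mass_kg: float) -> float:
    """Radius beyond which the Newtonian acceleration drops below a0."""
    return math.sqrt(G * mass_kg / A0)

def flat_rotation_velocity(mass_kg: float) -> float:
    """Asymptotic rotation speed in the deep-MOND regime: v^4 = G*M*a0."""
    return (G * mass_kg * A0) ** 0.25

# A Milky-Way-like baryonic mass of ~6e10 solar masses (assumed value).
M = 6e10 * M_SUN
print(f"Milgrom radius ~ {milgrom_radius(M) / M_PER_KPC:.1f} kpc")             # ~8.3
print(f"flat rotation velocity ~ {flat_rotation_velocity(M) / 1e3:.0f} km/s")  # ~176
```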
Extended theories of gravity
[ "Physics" ]
351
[ "General relativity", "Theoretical physics", "Theory of relativity", "Theories of gravity" ]
32,084,423
https://en.wikipedia.org/wiki/Jantzen%20filtration
In representation theory, a Jantzen filtration is a filtration of a Verma module of a semisimple Lie algebra, or a Weyl module of a reductive algebraic group of positive characteristic. Jantzen filtrations were introduced by Jantzen (1979). Jantzen filtration for Verma modules If M(λ) is a Verma module of a semisimple Lie algebra with highest weight λ, then the Jantzen filtration is a decreasing filtration M(λ) = M(λ)0 ⊇ M(λ)1 ⊇ M(λ)2 ⊇ ⋯ It has the following properties: M(λ)1 = N(λ), the unique maximal proper submodule of M(λ). The quotients M(λ)i/M(λ)i+1 have non-degenerate contravariant bilinear forms. The Jantzen sum formula holds: Σi>0 ch M(λ)i = Σα ch M(sα·λ), where ch denotes the formal character, the sum on the right runs over the positive roots α for which ⟨λ+ρ,α∨⟩ is a positive integer, ρ is the Weyl vector, and sα·λ = λ − ⟨λ+ρ,α∨⟩α is the dot action. References Lie algebras Representation theory
Jantzen filtration
[ "Mathematics" ]
184
[ "Representation theory", "Fields of abstract algebra" ]
32,084,687
https://en.wikipedia.org/wiki/Weyl%20module
In algebra, a Weyl module is a representation of a reductive algebraic group, introduced by Carter and Lusztig (1974) and named after Hermann Weyl. In characteristic 0 these representations are irreducible, but in positive characteristic they can be reducible, and their decomposition into irreducible components can be hard to determine. See also Borel–Weil–Bott theorem Garnir relations Further reading Representation theory Algebraic groups
Weyl module
[ "Mathematics" ]
84
[ "Representation theory", "Fields of abstract algebra" ]
32,084,694
https://en.wikipedia.org/wiki/Kneader%20reactor
A kneader reactor (or kneading reactor) is a device used for mixing and kneading substances with high viscosity. Many industries, such as food processing, utilize kneader reactors to produce goods such as polymers or chewing gum. Although the machine has existed for decades, kneader reactors are only recently gaining popularity in the processing industry. Description The kneading reactor is a horizontal mixing machine with two Sigma, or Z-type, blades. These blades are driven by separate gears at different speeds, one running 1.5 times faster than the other. The reactor has one powerful motor and a speed reducer to drive the two blades. The kneader reactor usually has a W-type barrel with a hydraulic tilt that turns it, and a heating jacket outside. Usage The kneader reactor processes very high viscosity materials such as chewing gum, dough, toffee, Plasticine, rubber, silicone, adhesive or resin. These materials have viscosities of approximately 1,000,000 centipoise (cP). They are mixed with reactants such as liquids, powders or slurries; the reaction mass does not undergo a physical phase change while the reaction takes place. How to select If a phase change does occur during processing, the conventional technology requires the use of diluents (or dilutants). Diluents are solvents which decrease the viscosity of the reaction mass, enabling mixing in the reactor, and help to control the reaction temperature. More recently, manufacturers have sought technological solutions that allow synthesis in the concentrated phase, minimizing or eliminating the use of solvents and thus intensifying the process. This "dry" process is possible in a kneader reactor. History The Sigma kneader was developed by Heinz List, a pioneer of modern industrial processing technology. List recognized that processing in the concentrated phase with little to no solvent, also known as "dry processing", would increase process yield per unit volume and would therefore be more profitable. List developed the reactor to overcome the technical complexities of processing in the concentrated phase. Technology advantages Kneader reactors offer a number of technological advantages for dry processing: Excellent mixing and kneading performance during wet, pasty and viscous phases Large working volume reactors efficiently handling large product volumes Large heat-exchange surface areas yielding the highest possible surface-to-volume ratio Maximum self-cleaning Narrow residence-time distribution for plug flow operation Adaptive for a wide range of residence times Closed design for cleaner production environment Robust design for high viscosity processing Compact design maximizing process yield per performance volume and minimizing space requirement Kneader reactor technology has long been used for what is known as "Process Intensification", where multiple processing steps are performed in the same unit. Such units are characterized by high yield per performance volume and also have the flexibility to produce different grades and/or products. References Witte, Dr. Daniel U. "New Devolatilization Process for Thermosensitive and Highly Viscous Polymers in High Volume Kneader Reactors". ANTEC 2011 Technical Conference. Kunkel, Roland. "A Clever Alternative". Process Worldwide, Issue 5-2010, pages 28–29. Fleury, Pierre-Alain. "Bulk Polymerisation or Copolymerisation in a Novel Continuous Kneader Reactor". Macromolecular Symposia, Special Issue: Contributions from Polymer Reaction Engineering VI. Volume 243, Issue 1, pages 287–298, November 2006. 
Chemical reactors
Kneader reactor
[ "Chemistry", "Engineering" ]
726
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
32,085,457
https://en.wikipedia.org/wiki/Quasielastic%20scattering
In physics, quasielastic scattering designates a limiting case of inelastic scattering, characterized by energy transfers being small compared to the incident energy of the scattered particles. The term was originally coined in nuclear physics. It was applied to thermal neutron scattering by Léon Van Hove and Pierre-Gilles de Gennes (quasielastic neutron scattering, QENS). Finally, it is sometimes used for dynamic light scattering (also known by the more expressive term photon correlation spectroscopy). References Nuclear physics Neutron scattering
Quasielastic scattering
[ "Physics", "Chemistry" ]
101
[ "Scattering", "Neutron scattering", "Nuclear physics" ]
32,085,758
https://en.wikipedia.org/wiki/NEMA%20enclosure%20types
The National Electrical Manufacturers Association (NEMA) defines standards used in North America for various grades of electrical enclosures typically used in industrial applications. Each is rated to protect against personal access to hazardous parts and against additional, type-dependent environmental conditions. A typical NEMA enclosure might be rated to provide protection against environmental hazards such as water, dust, oil or coolant, or atmospheres containing corrosive agents such as acetylene or gasoline. A full list of NEMA enclosure types is available for download from the NEMA website. Enclosure types Below is a list of NEMA enclosure types; these types are further defined in NEMA 250, Enclosures for Electrical Equipment. Each type specifies characteristics of an enclosure, but not, for example, a specific enclosure size. Note that a higher type number does not imply that the enclosure also satisfies the requirements of lower-numbered types. For example, types 3, 4 and 6 are intended for outdoor use, but type 5 is not. A NEMA enclosure rating does not mean that it also meets the same UL enclosure rating. NFPA is the National Fire Protection Association, and NEC is the National Electrical Code (U.S.A.) See also Electrical equipment in hazardous areas NEMA connector – Another common, but mostly unrelated, set of standards from NEMA References NEMA standards Electrical enclosures
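A minimal sketch of how such a type selection might be encoded in software. The characteristics table below is an informal, illustrative subset of commonly cited type descriptions, not the normative NEMA 250 definitions, and the helper function is hypothetical.

```python
# Illustrative sketch only: suggest candidate NEMA enclosure types from a few
# environmental requirements. The descriptions below are an informal subset of
# commonly cited type characteristics, not the normative NEMA 250 text.

CANDIDATES = {
    "1":  {"outdoor": False, "washdown": False, "submersible": False},  # indoor, general purpose
    "3R": {"outdoor": True,  "washdown": False, "submersible": False},  # outdoor, rain resistant
    "4":  {"outdoor": True,  "washdown": True,  "submersible": False},  # watertight, hose-directed water
    "4X": {"outdoor": True,  "washdown": True,  "submersible": False},  # as 4, plus corrosion resistance
    "6":  {"outdoor": True,  "washdown": True,  "submersible": True},   # temporary submersion
    "12": {"outdoor": False, "washdown": False, "submersible": False},  # indoor dust/drip
}

def suggest_types(outdoor: bool, washdown: bool, submersible: bool) -> list[str]:
    """Return enclosure types whose (informal) characteristics cover the needs."""
    return [t for t, c in CANDIDATES.items()
            if c["outdoor"] >= outdoor
            and c["washdown"] >= washdown
            and c["submersible"] >= submersible]

if __name__ == "__main__":
    print(suggest_types(outdoor=True, washdown=True, submersible=False))
    # e.g. ['4', '4X', '6'] under the assumptions encoded above
```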
NEMA enclosure types
[ "Engineering" ]
262
[ "Electrical enclosures", "Electrical engineering" ]
32,086,016
https://en.wikipedia.org/wiki/Stomatal%20conductance
Stomatal conductance, usually measured in mmol m−2 s−1 by a porometer, estimates the rate of gas exchange (i.e., carbon dioxide uptake) and transpiration (i.e., water loss as water vapor) through the leaf stomata as determined by the degree of stomatal aperture (and therefore the physical resistances to the movement of gases between the air and the interior of the leaf). The stomatal conductance, or its inverse, stomatal resistance, is under the direct biological control of the leaf through its guard cells, which surround the stomatal pore. The turgor pressure and osmotic potential of guard cells are directly related to the stomatal conductance. Stomatal conductance is a function of stomatal density, stomatal aperture, and stomatal size. Stomatal conductance is integral to leaf level calculations of transpiration. Multiple studies have shown a direct correlation between the use of herbicides and changes in physiological and biochemical growth processes in plants, particularly non-target plants, resulting in a reduction in stomatal conductance and turgor pressure in leaves. Relation to stomatal opening For mechanism, see: Stomatal opening and closingStomatal conductance is a function of the density, size and degree of opening of the stomata; with more open stomata allowing greater conductance, and consequently indicating that photosynthesis and transpiration rates are potentially higher. Therefore, stomatal opening and closing has a direct relationship to stomatal conductance. Light-dependent stomatal opening Light-dependent stomatal opening occurs in many species and under many different conditions. Light is a major stimulus involved in stomatal conductance, and has two key elements that are involved in the process: 1) the stomatal response to blue light, and 2) photosynthesis in the chloroplast of the guard cell. In C3 and C4 plants, the stomata open when there is an increase in light, and they close when there is a decrease in light. In CAM plants, however, the stomata open when there is a decrease in light.For more details about CAM plant stomatal conductance, see: CAM Plants Stomatal response to blue light Stomatal opening occurs as a response to blue light. Blue light activates the blue light receptor on the guard cell membrane which induces the pumping of protons out of the guard cell. This efflux of protons creates an electrochemical gradient that causes free floating potassium (K+) and other ions to enter the guard cells via a channel. This increase in solutes within the guard cells leads to a decrease in the osmotic potential of the cells, resulting in a decrease in water potential. Then, because water flows from a system with higher water potential to a system with lower water potential, water floods into the guard cells, causing the guard cells to become enlarged and therefore causes the stomata to open. Studies showed that stomata responded greatly to blue light, even when in a red-light background (see Figure 1). In one study, the experiment began once stomatal opening had reached its saturation in red-light. Then, when blue light was added, stomatal opening increased even further, showing that a different photoreceptor system, stimulated by blue light, mediates the additional increases in opening. Photosynthesis in the chloroplast The second key element involved in light-dependent stomatal opening is photosynthesis in the chloroplast of the guard cell. In response to carbon dioxide (CO2) entering the chloroplasts, photosynthesis occurs. 
This increases the amount of solutes that are being produced by the chloroplast which are then released into the cytosol of the guard cell. This causes a decrease in osmotic potential, causing a decrease in the water potential inside the guard cells. Again, this decrease in water potential causes water to enter into the guard cells. The guard cells subsequently swell up with water and the stomata is opened. Recent studies have looked at the stomatal conductance of fast growing tree species to identify the water use of various species. Through their research it was concluded that the predawn water potential of the leaf remained consistent throughout the months while the midday water potential of the leaf showed a variation due to the seasons. For example, canopy stomatal conductance had a higher water potential in July than in October. The studies conducted for this experiment determined that the stomatal conductance allowed for a constant water use per unit leaf area. Another study also showed that stomatal opening is dependent on guard cell photosynthesis. This was carried out by isolating guard cells that were localized to the lower surface of the Adiantum leaves used in the study. It was thus hypothesized that if guard cell chloroplasts are responsible for stomatal opening, it would be expected that light applied to the lower leaf surface would be much more effective at increasing stomatal conductance than light applied to the upper surface. And indeed, when red light was applied to the lower surface, stomatal conductance increased at a light intensity of <5 μmol m−2 s−1 and continued to increase with increasing light intensity, reaching a maximum at about 20 μmol m−2 s−1. Nocturnal stomatal opening Nocturnal stomatal conductance (gn) across both C3 and C4 plants remains a highly researched topic, as the biological function of this phenomenon is ambiguous. Since photosynthesis does not occur at night, gn contributes to significant water loss at night without fixing any carbon in both C3 and C4 plants. Recent studies have compiled extensive literature/data sets that reveal relative growth rate is positively correlated with nocturnal stomatal conductance. However, gn does not directly correlate with positive growth; in fact, the direct effects of nocturnal stomatal conductance lead to higher transpiration rate, which decreases turgor pressure and consequently growth. Thus, it is likely that the indirect effects of gn are what lead to a positive growth rate, as predawn stomatal priming reduces the time it takes to reach complete stomatal responses to illumination. Further studies are needed to see how nocturnal stomatal conductance shortens the time to reach operating daytime stomatal conductance, and whether faster stomatal responses upon illumination correlate to an increase in carbon assimilation that lead to a significant contribution to the growth of the plant. Studies have shown that nocturnal conductance is not the result of stomatal leakiness. As there is extensive genetic variation across a variety of C3 and C4 plants, gn has most likely been selected for during evolution of said plants. Additionally, experiments have revealed that nocturnal stomatal conductance is regulated in an active manner, as there is a temporal change witnessed due to the presence of a circadian clock. Finally, it has been witnessed that gn declines during drought, demonstrating an active response to drought. These reasons disprove the theory that stomatal leakiness causing nocturnal stomatal conductance. 
Lastly, there is no consistent evidence across plant species that the main functions of gn are to get rid of surplus CO2 (which could limit growth), improve oxygen delivery, or aid in nutrient supply. Stomatal transpiration Regulating stomatal conductance is critical to controlling the amount of transpiration, or water loss, from the plant. Since over 95% of water loss comes directly from the stomatal pore, changes in stomatal resistance are critical to regulating water loss. Stomatal conductance also assists in the regulation of CO2 uptake from the atmosphere. Regulation of stomatal transpiration is especially important when transpiration rates are high. High transpiration rates can lead to cavitation events, in which the tension in the xylem increases to the point where air bubbles begin to fill the xylem vessels. This is harmful to the plant because these air bubbles can block the flow of water up the xylem to the aerial parts of the plant. Recent studies have investigated the relationship between stomatal conductance, cavitation, and water potential. Cavitation events have been shown to decrease stomatal conductance while maintaining a stable water potential. In other words, cavitation events cause stomata to close to different extents. This limits transpiration and allows the plant to begin to repair the damaged, cavitated xylem. Similarly, some studies have explored the relationship between drought stress and stomatal conductance. Recent studies have found that drought-resistant plants regulate their transpiration rate via stomatal conductance. The hormone abscisic acid (ABA) is produced in response to drought conditions and can assist in closing the stomata. This minimizes water loss and allows the plant to survive under low water conditions. However, closing the stomata can also lead to low photosynthetic rates because of limited CO2 uptake from the atmosphere. Methods for measuring Stomatal conductance can be measured in several ways: Steady-state porometers: A steady-state porometer measures stomatal conductance using a sensor head with a fixed diffusion path to the leaf. It measures the vapor concentration at two different locations in the diffusion path. It computes vapor flux from the vapor concentration measurements and the known conductance of the diffusion path using the following equation: (C_L − C_1)/(R_s + R_1) = (C_1 − C_2)/R_2, where C_L is the vapor concentration at the leaf, C_1 and C_2 are the concentrations at the two sensor locations, R_s is the stomatal resistance, and R_1 and R_2 are the resistances at the two sensors. If the temperatures of the two sensors are the same, concentration can be replaced with relative humidity, giving (1 − h_1)/(R_s + R_1) = (h_1 − h_2)/R_2. Stomatal conductance is the reciprocal of resistance, therefore g_s = 1/R_s. A dynamic porometer measures how long it takes for the humidity to rise from one specified value to another in an enclosed chamber clamped to a leaf. The resistance is then determined from a relation of the form R_s = Δt(1 − h)/(l Δh) − A, where Δt is the time required for the cup humidity to change by Δh, h is the cup humidity, l is the cup "length" (chamber volume per unit leaf area), and A is an offset constant. Null balance porometers maintain a constant humidity in an enclosed chamber by regulating the flow of dry air through the chamber and find stomatal resistance from the following equation: R_s = A_l(1 − h)/(f h) − R_b, where R_s is the stomatal resistance, R_b is the boundary layer resistance, A_l is the leaf area, f is the flow rate of dry air, and h is the chamber humidity. The resistance values found by these equations are typically converted to conductance values. 
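A minimal numerical sketch of the steady-state porometer calculation above. The humidity readings and path resistances are illustrative assumptions, not data from any particular instrument.

```python
# Minimal sketch of the steady-state porometer calculation described above.
# The example readings and resistance values are illustrative assumptions.

def stomatal_conductance(h1, h2, r1, r2):
    """Solve (1 - h1) / (r_s + r1) = (h1 - h2) / r2 for g_s = 1 / r_s.

    h1, h2 : relative humidities at the two sensors (leaf side first), 0..1
    r1, r2 : known diffusion resistances of the two path segments (s m^-1)
    returns g_s in m s^-1
    """
    flux_term = (h1 - h2) / r2          # proportional to the vapor flux
    r_s = (1.0 - h1) / flux_term - r1   # stomatal resistance (s m^-1)
    return 1.0 / r_s

if __name__ == "__main__":
    g_s = stomatal_conductance(h1=0.62, h2=0.48, r1=80.0, r2=120.0)
    # Multiply by roughly 41 mol m^-3 (air at about 20 C, 101 kPa)
    # to convert m s^-1 into the molar units (mol m^-2 s^-1) used above.
    print(f"g_s = {g_s * 1000:.2f} mm s^-1")
```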
Models A number of models of stomatal conductance exist. Ball-Berry-Leuning model The Ball-Berry-Leuning model was formulated by Ball, Woodrow and Berry in 1987, and improved by Leuning in the early 1990s. The model formulates stomatal conductance, g_sw, as g_sw = g_0 + a_1 A_n / ((c_s − Γ)(1 + D_s/D_0)), where g_sw is the stomatal conductance for diffusion, g_0 is the value of g_sw at the light compensation point, A_n is the assimilation rate of the leaf, D_s is the vapour pressure deficit, c_s is the leaf-surface CO2 concentration, Γ is the CO2 compensation point, and a_1 and D_0 are empirical coefficients. See also Canopy conductance Ecohydrology Transpiration References Plant physiology
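A short sketch of the Ball-Berry-Leuning relation given above. The default parameter values are illustrative assumptions in the typical range reported for C3 leaves, not calibrated constants.

```python
# Sketch of the Ball-Berry-Leuning stomatal conductance model as written above.
# All parameter values below are illustrative assumptions, not calibrated data.

def leuning_conductance(A_n, c_s, D_s, g0=0.01, a1=9.0, D0=1.5, gamma=40.0):
    """g_sw = g0 + a1 * A_n / ((c_s - gamma) * (1 + D_s / D0)).

    A_n   : net assimilation rate (umol m^-2 s^-1)
    c_s   : CO2 concentration at the leaf surface (umol mol^-1)
    D_s   : vapour pressure deficit (kPa)
    g0    : residual conductance at the light compensation point (mol m^-2 s^-1)
    a1    : empirical slope coefficient (dimensionless)
    D0    : empirical humidity sensitivity coefficient (kPa)
    gamma : CO2 compensation point (umol mol^-1)
    """
    return g0 + a1 * A_n / ((c_s - gamma) * (1.0 + D_s / D0))

if __name__ == "__main__":
    # Example: moderate light, ambient CO2, mild vapour pressure deficit.
    print(round(leuning_conductance(A_n=12.0, c_s=380.0, D_s=1.0), 3), "mol m^-2 s^-1")
```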
Stomatal conductance
[ "Biology" ]
2,328
[ "Plant physiology", "Plants" ]
32,086,877
https://en.wikipedia.org/wiki/Novel%20ecosystem
Novel ecosystems are human-built, modified, or engineered niches of the Anthropocene. They exist in places that have been altered in structure and function by human agency. Novel ecosystems are part of the human environment and niche (including urban, suburban, and rural), they lack natural analogs, and they have extended an influence that has converted more than three-quarters of wild Earth . These anthropogenic biomes include technoecosystems that are fuelled by powerful energy sources (fossil and nuclear) including ecosystems populated with technodiversity, such as roads and unique combinations of soils called technosols. Vegetation associations on old buildings or along field boundary stone walls in old agricultural landscapes are examples of sites where research into novel ecosystem ecology is developing. Overview Human society has transformed the planet to such an extent that we may have ushered in a new epoch known as the anthropocene. The ecological niche of the anthropocene contains entirely novel ecosystems that include technosols, technodiversity, anthromes, and the technosphere. These terms describe the human ecological phenomena marking this unique turn in the evolution of Earth's history. The total human ecosystem (or anthrome) describes the relationship of the industrial technosphere to the ecosphere. Technoecosystems interface with natural life-supporting ecosystems in competitive and parasitic ways. Odum (2001) attributes this term to a 1982 publication by Zev Naveh: "Current urban-industrial society not only impacts natural life-support ecosystems, but also has created entirely new arrangements that we can call techno-ecosystems, a term believed to be first suggested by Zev Neveh (1982). These new systems involve new, powerful energy sources (fossil and atomic fuels), technology, money, and cities that have little or no parallels in nature." The term technoecosystem, however, appears earliest in print in a 1976 technical report and also appears in a book chapter (see in Lamberton and Thomas (1982) written by Kenneth E. Boulding). Novel Ecosystems Novel ecosystems "differ in composition and/or function from present and past systems". Novel ecosystems are the hallmark of the recently proposed anthropocene epoch. They have no natural analogs due to human alterations on global climate systems, invasive species, a global mass extinction, and disruption of the global nitrogen cycle. Novel ecosystems are creating many different kinds of dilemmas for terrestrial and marine conservation biologists. On a more local scale, abandoned lots, agricultural land, old buildings, field boundary stone walls or residential gardens provide study sites on the history and dynamics of ecology in novel ecosystems. Anthropogenic biomes Ellis (2008) identifies twenty-one different kinds of anthropogenic biomes that sort into the following groups: 1) dense settlements, 2) villages, 3) croplands, 4) rangeland, 5) forested, and 6) wildlands. These anthropogenic biomes (or anthromes for short) create the technosphere that surrounds us and are populated with diverse technologies (or technodiversity for short). Within these anthromes the human species (one species out of billions) appropriates 23.8% of the global net primary production. "This is a remarkable impact on the biosphere caused by just one species." Noosphere Noosphere (sometimes noösphere) is the "sphere of human thought". 
The word is derived from the Greek νοῦς (nous "mind") + σφαῖρα (sphaira "sphere"), in lexical analogy to "atmosphere" and "biosphere". It was introduced by Pierre Teilhard de Chardin in 1922 in his Cosmogenesis. Another possibility is the first use of the term by Édouard Le Roy, who together with Teilhard de Chardin was attending lectures by Vladimir Vernadsky at the Sorbonne. In 1936 Vernadsky presented the idea of the Noosphere in a letter to Boris Leonidovich Lichkov (though he states that the concept derives from Le Roy). Technosphere The technosphere is the part of the environment on Earth where technodiversity extends its influence into the biosphere. "For the development of suitable restoration strategies, a clear distinction has to be made between different functional classes of natural and cultural solar-powered biosphere and fossil-powered technosphere landscapes, according to their inputs and throughputs of energy and materials, their organisms, their control by natural or human information, their internal self-organization and their regenerative capacities." The weight of Earth's technosphere has been suggested to be 30 trillion tons, a mass greater than 50 kilograms for every square metre of the planet's surface. Technoecosystems The concept of technoecosystems has been pioneered by ecologists Howard T. Odum and Zev Naveh. Technoecosystems interfere with and compete against natural systems. They have advanced technology (or technodiversity), money-based market economies, and large ecological footprints. Technoecosystems have far greater energy requirements than natural ecosystems, excessive water consumption, and release toxic and eutrophicating chemicals. Other ecologists have defined the extensive global network of road systems as a type of technoecosystem. Technoecotypes "Bio-agro- and techno-ecotopes are spatially integrated in larger, regional landscape units, but they are not structurally and functionally integrated in the ecosphere. Because of the adverse impacts of the latter and the great human pressures on bio-ecotopes, they are even antagonistically related and therefore cannot function together as a coherent, sustainable ecological system." Technosols Technosols are a new Reference Soil Group in the World Reference Base for Soil Resources (WRB). Technosols are "mainly characterised by anthropogenic parent material of organic and mineral nature and which origin can be either natural or technogenic." Technodiversity Technodiversity refers to the variety of technological artifacts that exist in technoecosystems. References Ecology Systems ecology Ecosystems
Novel ecosystem
[ "Biology", "Environmental_science" ]
1,263
[ "Symbiosis", "Systems ecology", "Ecology", "Ecosystems", "Environmental social science" ]
32,088,502
https://en.wikipedia.org/wiki/Coxeter%20notation
In geometry, Coxeter notation (also Coxeter symbol) is a system of classifying symmetry groups, describing the angles between fundamental reflections of a Coxeter group in a bracketed notation expressing the structure of a Coxeter-Dynkin diagram, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter, and has been more comprehensively defined by Norman Johnson. Reflectional groups For Coxeter groups, defined by pure reflections, there is a direct correspondence between the bracket notation and Coxeter-Dynkin diagram. The numbers in the bracket notation represent the mirror reflection orders in the branches of the Coxeter diagram. It uses the same simplification, suppressing 2s between orthogonal mirrors. The Coxeter notation is simplified with exponents to represent the number of branches in a row for linear diagram. So the An group is represented by [3n−1], to imply n nodes connected by n−1 order-3 branches. Example A2 = [3,3] = [32] or [31,1] represents diagrams or . Coxeter initially represented bifurcating diagrams with vertical positioning of numbers, but later abbreviated with an exponent notation, like [...,3p,q] or [3p,q,r], starting with [31,1,1] or [3,31,1] = or as D4. Coxeter allowed for zeros as special cases to fit the An family, like A3 = [3,3,3,3] = [34,0,0] = [34,0] = [33,1] = [32,2], like = = . Coxeter groups formed by cyclic diagrams are represented by parentheseses inside of brackets, like [(p,q,r)] = for the triangle group (p q r). If the branch orders are equal, they can be grouped as an exponent as the length the cycle in brackets, like [(3,3,3,3)] = [3[4]], representing Coxeter diagram or . can be represented as [3,(3,3,3)] or [3,3[3]]. More complicated looping diagrams can also be expressed with care. The paracompact Coxeter group can be represented by Coxeter notation [(3,3,(3),3,3)], with nested/overlapping parentheses showing two adjacent [(3,3,3)] loops, and is also represented more compactly as [3[ ]×[ ]], representing the rhombic symmetry of the Coxeter diagram. The paracompact complete graph diagram or , is represented as [3[3,3]] with the superscript [3,3] as the symmetry of its regular tetrahedron coxeter diagram. For the affine and hyperbolic groups, the subscript is one less than the number of nodes in each case, since each of these groups was obtained by adding a node to a finite group's diagram. Unconnected groups The Coxeter diagram usually leaves order-2 branches undrawn, but the bracket notation includes an explicit 2 to connect the subgraphs. So the Coxeter diagram = A2×A2 = 2A2 can be represented by [3]×[3] = [3]2 = [3,2,3]. Sometimes explicit 2-branches may be included either with a 2 label, or with a line with a gap: or , as an identical presentation as [3,2,3]. Rank and dimension Coxeter point group rank is equal to the number of nodes which is also equal to the dimension. A single mirror exists in 1-dimension, [ ], , while in 2-dimensions [1], or [ ]×[ ]+. The 1 is a place-holder, not an actual branch order, but a marker for an orthogonal inactive mirror. The notation [n,1], represents a rank 3 group, as [n]×[ ]+ or . Similarly, [1,1] as [ ]×[ ]+×[ ]+ or order 2 and [1,1]+ as [ ]+×[ ]+×[ ]+ or , order 1! Subgroups Coxeter's notation represents rotational/translational symmetry by adding a + superscript operator outside the brackets, [X]+ which cuts the order of the group [X] in half, thus an index 2 subgroup. 
This operator implies an even number of operators must be applied, replacing reflections with rotations (or translations). When applied to a Coxeter group, this is called a direct subgroup because what remains are only direct isometries without reflective symmetry. The + operators can also be applied inside of the brackets, like [X,Y+] or [X,(Y,Z)+], and creates "semidirect" subgroups that may include both reflective and nonreflective generators. Semidirect subgroups can only apply to Coxeter group subgroups that have even order branches adjacent to it. Elements by parentheses inside of a Coxeter group can be give a + superscript operator, having the effect of dividing adjacent ordered branches into half order, thus is usually only applied with even numbers. For example, [4,3+] and [4,(3,3)+] (). If applied with adjacent odd branch, it doesn't create a subgroup of index 2, but instead creates overlapping fundamental domains, like [5,1+] = [5/2], which can define doubly wrapped polygons like a pentagram, {5/2}, and [5,3+] relates to Schwarz triangle [5/2,3], density 2. Groups without neighboring + elements can be seen in ringed nodes Coxeter-Dynkin diagram for uniform polytopes and honeycomb are related to hole nodes around the + elements, empty circles with the alternated nodes removed. So the snub cube, has symmetry [4,3]+ (), and the snub tetrahedron, has symmetry [4,3+] (), and a demicube, h{4,3} = {3,3} ( or = ) has symmetry [1+,4,3] = [3,3] ( or = = ). Note: Pyritohedral symmetry can be written as , separating the graph with gaps for clarity, with the generators {0,1,2} from the Coxeter group , producing pyritohedral generators {0,12}, a reflection and 3-fold rotation. And chiral tetrahedral symmetry can be written as or , [1+,4,3+] = [3,3]+, with generators {12,0120}. Halving subgroups and extended groups Johnson extends the + operator to work with a placeholder 1+ nodes, which removes mirrors, doubling the size of the fundamental domain and cuts the group order in half. In general this operation only applies to individual mirrors bounded by even-order branches. The 1 represents a mirror so [2p] can be seen as [2p,1], [1,2p], or [1,2p,1], like diagram or , with 2 mirrors related by an order-2p dihedral angle. The effect of a mirror removal is to duplicate connecting nodes, which can be seen in the Coxeter diagrams: = , or in bracket notation:[1+,2p, 1] = [1,p,1] = [p]. Each of these mirrors can be removed so h[2p] = [1+,2p,1] = [1,2p,1+] = [p], a reflective subgroup index 2. This can be shown in a Coxeter diagram by adding a + symbol above the node: = = . If both mirrors are removed, a quarter subgroup is generated, with the branch order becoming a gyration point of half the order: q[2p] = [1+,2p,1+] = [p]+, a rotational subgroup of index 4. = = = = . For example, (with p=2): [4,1+] = [1+,4] = [2] = [ ]×[ ], order 4. [1+,4,1+] = [2]+, order 2. The opposite to halving is doubling which adds a mirror, bisecting a fundamental domain, and doubling the group order. = [2p] Halving operations apply for higher rank groups, like tetrahedral symmetry is a half group of octahedral group: h[4,3] = [1+,4,3] = [3,3], removing half the mirrors at the 4-branch. The effect of a mirror removal is to duplicate all connecting nodes, which can be seen in the Coxeter diagrams: = , h[2p,3] = [1+,2p,3] = [(p,3,3)]. If nodes are indexed, half subgroups can be labeled with new mirrors as composites. 
Like , generators {0,1} has subgroup = , generators {1,010}, where mirror 0 is removed, and replaced by a copy of mirror 1 reflected across mirror 0. Also given , generators {0,1,2}, it has half group = , generators {1,2,010}. Doubling by adding a mirror also applies in reversing the halving operation: = [4,3], or more generally = [2p,q]. Radical subgroups Johnson also added an asterisk or star * operator for "radical" subgroups, that acts similar to the + operator, but removes rotational symmetry. The index of the radical subgroup is the order of the removed element. For example, [4,3*] ≅ [2,2]. The removed [3] subgroup is order 6 so [2,2] is an index 6 subgroup of [4,3]. The radical subgroups represent the inverse operation to an extended symmetry operation. For example, [4,3*] ≅ [2,2], and in reverse [2,2] can be extended as [3[2,2]] ≅ [4,3]. The subgroups can be expressed as a Coxeter diagram: or ≅ . The removed node (mirror) causes adjacent mirror virtual mirrors to become real mirrors. If [4,3] has generators {0,1,2}, [4,3+], index 2, has generators {0,12}; [1+,4,3] ≅ [3,3], index 2 has generators {010,1,2}; while radical subgroup [4,3*] ≅ [2,2], index 6, has generators {01210, 2, (012)3}; and finally [1+,4,3*], index 12 has generators {0(12)20, (012)201}. Trionic subgroups A trionic subgroup is an index 3 subgroup. Johnson defines a trionic subgroup with operator ⅄, index 3. For rank 2 Coxeter groups, [3], the trionic subgroup, [3⅄] is [ ], a single mirror. And for [3p], the trionic subgroup is [3p]⅄ ≅ [p]. Given , with generators {0,1}, has 3 trionic subgroups. They can be differentiated by putting the ⅄ symbol next to the mirror generator to be removed, or on a branch for both: [3p,1⅄] = = , = , and [3p⅄] = = with generators {0,10101}, {01010,1}, or {101,010}. Trionic subgroups of tetrahedral symmetry: [3,3]⅄ ≅ [2+,4], relating the symmetry of the regular tetrahedron and tetragonal disphenoid. For rank 3 Coxeter groups, [p,3], there is a trionic subgroup [p,3⅄] ≅ [p/2,p], or = . For example, the finite group [4,3⅄] ≅ [2,4], and Euclidean group [6,3⅄] ≅ [3,6], and hyperbolic group [8,3⅄] ≅ [4,8]. An odd-order adjacent branch, p, will not lower the group order, but create overlapping fundamental domains. The group order stays the same, while the density increases. For example, the icosahedral symmetry, [5,3], of the regular polyhedra icosahedron becomes [5/2,5], the symmetry of 2 regular star polyhedra. It also relates the hyperbolic tilings {p,3}, and star hyperbolic tilings {p/2,p} For rank 4, [q,2p,3⅄] = [2p,((p,q,q))], = . For example, [3,4,3⅄] = [4,3,3], or = , generators {0,1,2,3} in [3,4,3] with the trionic subgroup [4,3,3] generators {0,1,2,32123}. For hyperbolic groups, [3,6,3⅄] = [6,3[3]], and [4,4,3⅄] = [4,4,4]. Trionic subgroups of tetrahedral symmetry ] Johnson identified two specific trionic subgroups of [3,3], first an index 3 subgroup [3,3]⅄ ≅ [2+,4], with [3,3] ( = = ) generators {0,1,2}. It can also be written as [(3,3,2⅄)] () as a reminder of its generators {02,1}. This symmetry reduction is the relationship between the regular tetrahedron and the tetragonal disphenoid, represent a stretching of a tetrahedron perpendicular to two opposite edges. Secondly he identifies a related index 6 subgroup [3,3]Δ or [(3,3,2⅄)]+ (), index 3 from [3,3]+ ≅ [2,2]+, with generators {02,1021}, from [3,3] and its generators {0,1,2}. These subgroups also apply within larger Coxeter groups with [3,3] subgroup with neighboring branches all even order. 
] For example, [(3,3)+,4], [(3,3)⅄,4], and [(3,3)Δ,4] are subgroups of [3,3,4], index 2, 3 and 6 respectively. The generators of [(3,3)⅄,4] ≅ ≅ [8,2+,8], order 128, are {02,1,3} from [3,3,4] generators {0,1,2,3}. And [(3,3)Δ,4] ≅ , order 64, has generators {02,1021,3}. As well, [3⅄,4,3⅄] ≅ [(3,3)⅄,4]. Also related [31,1,1] = [3,3,4,1+] has trionic subgroups: [31,1,1]⅄ = [(3,3)⅄,4,1+], order 64, and 1=[31,1,1]Δ = [(3,3)Δ,4,1+] ≅ [[4,2+,4]]+, order 32. Central inversion A central inversion, order 2, is operationally differently by dimension. The group [ ]n = [2n−1] represents n orthogonal mirrors in n-dimensional space, or an n-flat subspace of a higher dimensional space. The mirrors of the group [2n−1] are numbered . The order of the mirrors doesn't matter in the case of an inversion. The matrix of a central inversion is , the Identity matrix with negative one on the diagonal. From that basis, the central inversion has a generator as the product of all the orthogonal mirrors. In Coxeter notation this inversion group is expressed by adding an alternation + to each 2 branch. The alternation symmetry is marked on Coxeter diagram nodes as open nodes. A Coxeter-Dynkin diagram can be marked up with explicit 2 branches defining a linear sequence of mirrors, open-nodes, and shared double-open nodes to show the chaining of the reflection generators. For example, [2+,2] and [2,2+] are subgroups index 2 of [2,2], , and are represented as (or ) and (or ) with generators {01,2} and {0,12} respectively. Their common subgroup index 4 is [2+,2+], and is represented by (or ), with the double-open marking a shared node in the two alternations, and a single rotoreflection generator {012}. Rotations and rotary reflections Rotations and rotary reflections are constructed by a single single-generator product of all the reflections of a prismatic group, [2p]×[2q]×... where gcd(p,q,...)=1, they are isomorphic to the abstract cyclic group Zn, of order n=2pq. The 4-dimensional double rotations, [2p+,2+,2q+] (with gcd(p,q)=1), which include a central group, and are expressed by Conway as ±[Cp×Cq], order 2pq. From Coxeter diagram , generators {0,1,2,3}, requires two generator for [2p+,2+,2q+], as {0123,0132}. Half groups, [2p+,2+,2q+]+, or cyclic graph, [(2p+,2+,2q+,2+)], expressed by Conway is [Cp×Cq], order pq, with one generator, like {0123}. If there is a common factor f, the double rotation can be written as [2pf+,2+,2qf+] (with gcd(p,q)=1), generators {0123,0132}, order 2pqf. For example, p=q=1, f=2, [4+,2+,4+] is order 4. And [2pf+,2+,2qf+]+, generator {0123}, is order pqf. For example, [4+,2+,4+]+ is order 2, a central inversion. In general a n-rotation group, [2p1+,2,2p2+,2,...,pn+] may require up to n generators if gcd(p1,..,pn)>1, as a product of all mirrors, and then swapping sequential pairs. The half group, [2p1+,2,2p2+,2,...,pn+]+ has generators squared. n-rotary reflections are similar. Commutator subgroups Simple groups with only odd-order branch elements have only a single rotational/translational subgroup of order 2, which is also the commutator subgroup, examples [3,3]+, [3,5]+, [3,3,3]+, [3,3,5]+. For other Coxeter groups with even-order branches, the commutator subgroup has index 2c, where c is the number of disconnected subgraphs when all the even-order branches are removed. 
For example, [4,4] has three independent nodes in the Coxeter diagram when the 4s are removed, so its commutator subgroup is index 23, and can have different representations, all with three + operators: [4+,4+]+, [1+,4,1+,4,1+], [1+,4,4,1+]+, or [(4+,4+,2+)]. A general notation can be used with +c as a group exponent, like [4,4]+3. Example subgroups Rank 2 example subgroups Dihedral symmetry groups with even-orders have a number of subgroups. This example shows two generator mirrors of [4] in red and green, and looks at all subgroups by halfing, rank-reduction, and their direct subgroups. The group [4], has two mirror generators 0, and 1. Each generate two virtual mirrors 101 and 010 by reflection across the other. Rank 3 Euclidean example subgroups The [4,4] group has 15 small index subgroups. This table shows them all, with a yellow fundamental domain for pure reflective groups, and alternating white and blue domains which are paired up to make rotational domains. Cyan, red, and green mirror lines correspond to the same colored nodes in the Coxeter diagram. Subgroup generators can be expressed as products of the original 3 mirrors of the fundamental domain, {0,1,2}, corresponding to the 3 nodes of the Coxeter diagram, . A product of two intersecting reflection lines makes a rotation, like {012}, {12}, or {02}. Removing a mirror causes two copies of neighboring mirrors, across the removed mirror, like {010}, and {212}. Two rotations in series cut the rotation order in half, like {0101} or {(01)2}, {1212} or {(02)2}. A product of all three mirrors creates a transreflection, like {012} or {120}. Hyperbolic example subgroups The same set of 15 small subgroups exists on all triangle groups with even order elements, like [6,4] in the hyperbolic plane: Parabolic subgroups A parabolic subgroup of a Coxeter group can be identified by removing one or more generator mirrors represented with a Coxeter diagram. For example the octahedral group has parabolic subgroups , , , , , . In bracket notation [4,3] has parabolic subgroups [4],[2],[3], and a single mirror []. The order of the subgroup is known, and always an integer divisor group order, or index. Parabolic subgroups can also be written with x nodes, like =[4,3] subgroup by removing second mirror: or = = [4,1×,3] = [2]. Petrie subgroup A petrie subgroup of an irreducible coxeter group can be created by the product of all of the generators. It can be seen in the skew regular petrie polygon of a regular polytope. The order of the new group is called the Coxeter number of the original Coxeter group. The Coxeter number of a Coxeter group is 2m/n, where n is the rank, and m is the number of reflections. A petrie subgroup can be written with a superscript. For example, [3,3] is the petrie subgroup of a tetrahedral group, cyclic group order 4, generated by a rotoreflection. A rank 4 Coxeter group will have a double rotation generator, like [4,3,3] is order 8. Extended symmetry Coxeter's notation includes double square bracket notation, to express automorphic symmetry within a Coxeter diagram. Johnson added alternative doubling by angled-bracket <[X]>. Johnson also added a prefix symmetry modifier [Y[X]], where Y can either represent symmetry of the Coxeter diagram of [X], or symmetry of the fundamental domain of [X]. For example, in 3D these equivalent rectangle and rhombic geometry diagrams of : and , the first doubled with square brackets, or twice doubled as [2[3[4]]], with [2], order 4 higher symmetry. 
To differentiate the second, angled brackets are used for doubling, <[3[4]]> and twice doubled as <2[3[4]]>, also with a different [2], order 4 symmetry. Finally a full symmetry where all 4 nodes are equivalent can be represented by [4[3[4]]], with the order 8, [4] symmetry of the square. But by considering the tetragonal disphenoid fundamental domain the [4] extended symmetry of the square graph can be marked more explicitly as [(2+,4)[3[4]]] or [2+,4[3[4]]]. Further symmetry exists in the cyclic and branching , , and diagrams. has order 2n symmetry of a regular n-gon, {n}, and is represented by [n[3[n]]]. and are represented by [3[31,1,1]] = [3,4,3] and [3[32,2,2]] respectively while by [(3,3)[31,1,1,1]] = [3,3,4,3], with the diagram containing the order 24 symmetry of the regular tetrahedron, {3,3}. The paracompact hyperbolic group = [31,1,1,1,1], , contains the symmetry of a 5-cell, {3,3,3}, and thus is represented by [(3,3,3)[31,1,1,1,1]] = [3,4,3,3,3]. An asterisk * superscript is effectively an inverse operation, creating radical subgroups removing connected of odd-ordered mirrors. Examples: Looking at generators, the double symmetry is seen as adding a new operator that maps symmetric positions in the Coxeter diagram, making some original generators redundant. For 3D space groups, and 4D point groups, Coxeter defines an index two subgroup of , , which he defines as the product of the original generators of [X] by the doubling generator. This looks similar to +, which is the chiral subgroup of . So for example the 3D space groups + (I432, 211) and (Pmn, 223) are distinct subgroups of (Imm, 229). Rank one groups In one dimension, the bilateral group [ ] represents a single mirror symmetry, abstract Dih1 or Z2, symmetry order 2. It is represented as a Coxeter–Dynkin diagram with a single node, . The identity group is the direct subgroup [ ]+, Z1, symmetry order 1. The + superscript simply implies that alternate mirror reflections are ignored, leaving the identity group in this simplest case. Coxeter used a single open node to represent an alternation, . Rank two groups In two dimensions, the rectangular group [2], abstract D22 or D4, also can be represented as a direct product [ ]×[ ], being the product of two bilateral groups, represents two orthogonal mirrors, with Coxeter diagram, , with order 4. The 2 in [2] comes from linearization of the orthogonal subgraphs in the Coxeter diagram, as with explicit branch order 2. The rhombic group, [2]+ ( or ), half of the rectangular group, the point reflection symmetry, Z2, order 2. Coxeter notation to allow a 1 place-holder for lower rank groups, so [1] is the same as [ ], and [1+] or [1]+ is the same as [ ]+ and Coxeter diagram . The full p-gonal group [p], abstract dihedral group D2p, (nonabelian for p>2), of order 2p, is generated by two mirrors at angle π/p, represented by Coxeter diagram . The p-gonal subgroup [p]+, cyclic group Zp, of order p, generated by a rotation angle of π/p. Coxeter notation uses double-bracking to represent an automorphic doubling of symmetry by adding a bisecting mirror to the fundamental domain. For example, [[p]] adds a bisecting mirror to [p], and is isomorphic to [2p]. In the limit, going down to one dimensions, the full apeirogonal group is obtained when the angle goes to zero, so [∞], abstractly the infinite dihedral group D∞, represents two parallel mirrors and has a Coxeter diagram . 
The apeirogonal group [∞]+, , abstractly the infinite cyclic group Z∞, isomorphic to the additive group of the integers, is generated by a single nonzero translation. In the hyperbolic plane, there is a full pseudogonal group [iπ/λ], and pseudogonal subgroup [iπ/λ]+, . These groups exist in regular infinite-sided polygons, with edge length λ. The mirrors are all orthogonal to a single line. Rank three groups Point groups in 3 dimensions can be expressed in bracket notation related to the rank 3 Coxeter groups: In three dimensions, the full orthorhombic group or orthorectangular [2,2], abstractly Z23, order 8, represents three orthogonal mirrors, (also represented by Coxeter diagram as three separate dots ). It can also can be represented as a direct product [ ]×[ ]×[ ], but the [2,2] expression allows subgroups to be defined: First there is a "semidirect" subgroup, the orthorhombic group, [2,2+] ( or ), abstractly Z2×Z2, of order 4. When the + superscript is given inside of the brackets, it means reflections generated only from the adjacent mirrors (as defined by the Coxeter diagram, ) are alternated. In general, the branch orders neighboring the + node must be even. In this case [2,2+] and [2+,2] represent two isomorphic subgroups that are geometrically distinct. The other subgroups are the pararhombic group [2,2]+ ( or ), also order 4, and finally the central group [2+,2+] ( or ) of order 2. Next there is the full ortho-p-gonal group, [2,p] (), abstractly Z2×D2p, of order 4p, representing two mirrors at a dihedral angle π/p, and both are orthogonal to a third mirror. It is also represented by Coxeter diagram as . The direct subgroup is called the para-p-gonal group, [2,p]+ ( or ), abstractly D2p, of order 2p, and another subgroup is [2,p+] () abstractly Z2×Zp, also of order 2p. The full gyro-p-gonal group, [2+,2p] ( or ), abstractly D4p, of order 4p. The gyro-p-gonal group, [2+,2p+] ( or ), abstractly Z2p, of order 2p is a subgroup of both [2+,2p] and [2,2p+]. The polyhedral groups are based on the symmetry of platonic solids: the tetrahedron, octahedron, cube, icosahedron, and dodecahedron, with Schläfli symbols {3,3}, {3,4}, {4,3}, {3,5}, and {5,3} respectively. The Coxeter groups for these are: [3,3] (), [3,4] (), [3,5] () called full tetrahedral symmetry, octahedral symmetry, and icosahedral symmetry, with orders of 24, 48, and 120. In all these symmetries, alternate reflections can be removed producing the rotational tetrahedral [3,3]+(), octahedral [3,4]+ (), and icosahedral [3,5]+ () groups of order 12, 24, and 60. The octahedral group also has a unique index 2 subgroup called the pyritohedral symmetry group, [3+,4] ( or ), of order 12, with a mixture of rotational and reflectional symmetry. Pyritohedral symmetry is also an index 5 subgroup of icosahedral symmetry: --> , with virtual mirror 1 across 0, {010}, and 3-fold rotation {12}. The tetrahedral group, [3,3] (), has a doubling (which can be represented by colored nodes ), mapping the first and last mirrors onto each other, and this produces the [3,4] ( or ) group. The subgroup [3,4,1+] ( or ) is the same as [3,3], and [3+,4,1+] ( or ) is the same as [3,3]+. Affine In the Euclidean plane there's 3 fundamental reflective groups generated by 3 mirrors, represented by Coxeter diagrams , , and , and are given Coxeter notation as [4,4], [6,3], and [(3,3,3)]. The parentheses of the last group imply the diagram cycle, and also has a shorthand notation [3[3]]. 
[[4,4]] as a doubling of the [4,4] group produces the same symmetry rotated π/4 from the original set of mirrors. Direct subgroups of rotational symmetry are: [4,4]+, [6,3]+, and [(3,3,3)]+. [4+,4] and [6,3+] are semidirect subgroups. Given in Coxeter notation (orbifold notation), some low index affine subgroups are: Rank four groups Point groups Rank four groups define the 4-dimensional point groups: Subgroups Space groups Line groups Rank four groups also define the 3-dimensional line groups: Duoprismatic group Rank four groups define the 4-dimensional duoprismatic groups. In the limit as p and q go to infinity, they degenerate into 2 dimensions and the wallpaper groups. Wallpaper groups Rank four groups also define some of the 2-dimensional wallpaper groups, as limiting cases of the four-dimensional duoprism groups: Subgroups of [∞,2,∞], (*2222), can be expressed down to its index 16 commutator subgroup: Complex reflections Coxeter notation has been extended to complex space, Cn, where nodes are unitary reflections of period 2 or greater. Nodes are labeled by an index, assumed to be 2 for an ordinary real reflection if suppressed. Complex reflection groups are called Shephard groups rather than Coxeter groups, and can be used to construct complex polytopes. In C1, a rank 1 Shephard group, order p, is represented as p[ ], [ ]p or ]p[. It has a single generator, representing a 2π/p radian rotation in the complex plane. Coxeter writes the rank 2 complex group as p[q]r. The p and r should only be suppressed if both are 2, which is the real case [q]. The order of a rank 2 group p[q]r is 8/q · (1/p + 2/q + 1/r − 1)^−2. The rank 2 solutions that generate complex polygons are: p[4]2 (p is 2,3,4,...), 3[3]3, 3[6]2, 3[4]3, 4[3]4, 3[8]2, 4[6]2, 4[4]3, 3[5]3, 5[3]5, 3[10]2, 5[6]2, and 5[4]3. Infinite groups are 3[12]2, 4[8]2, 6[6]2, 3[6]3, 6[4]3, 4[4]4, and 6[3]6. Index 2 subgroups exist by removing a real reflection: p[2q]2 → p[q]p. Also index r subgroups exist for 4 branches: p[4]r → p[r]p. For the infinite family p[4]2, for any p = 2, 3, 4,..., there are two subgroups: p[4]2 → [p], index p, and p[4]2 → p[ ]×p[ ], index 2. Computation with reflection matrices as symmetry generators A Coxeter group, represented by its Coxeter diagram, is given Coxeter notation [p,q] for the branch orders. Each node in the Coxeter diagram represents a mirror, by convention called ρi (and matrix Ri). The generators of this group [p,q] are reflections: ρ0, ρ1, and ρ2. Rotational subsymmetry is given as products of reflections: by convention, σ0,1 (and matrix S0,1) = ρ0ρ1 represents a rotation of angle π/p, σ1,2 = ρ1ρ2 is a rotation of angle π/q, and σ0,2 = ρ0ρ2 represents a rotation of angle π/2. [p,q]+ is an index 2 subgroup represented by two rotation generators, each a product of two reflections: σ0,1 and σ1,2, representing rotations of π/p and π/q angles respectively. With one even branch, [p+,2q] is another subgroup of index 2, represented by the rotation generator σ0,1 and the reflection ρ2. With even branches, [2p+,2q+] is a subgroup of index 4 with two generators, constructed as products of all three reflection matrices: by convention ψ0,1,2 and ψ1,2,0, which are rotary reflections, each combining a reflection with a rotation. In the case of affine Coxeter groups, one mirror, usually the last, is translated off the origin. 
A translation generator τ0,1 (and matrix T0,1) is constructed as the product of two (or an even number of) reflections, including the affine reflection. A transreflection (reflection plus a translation) can be the product of an odd number of reflections φ0,1,2 (and matrix V0,1,2), like the index 4 subgroup : [4+,4+] = . Another composite generator, by convention as ζ (and matrix Z), represents the inversion, mapping a point to its inverse. For [4,3] and [5,3], ζ = (ρ0ρ1ρ2)h/2, where h is 6 and 10 respectively, the Coxeter number for each family. For 3D Coxeter group [p,q] (), this subgroup is a rotary reflection [2+,h+]. Coxeter groups are categorized by their rank, being the number of nodes in its Coxeter-Dynkin diagram. The structure of the groups are also given with their abstract group types: In this article, the abstract dihedral groups are represented as Dihn, and cyclic groups are represented by Zn, with Dih1=Z2. Rank 2 Example, in 2D, the Coxeter group [p] () is represented by two reflection matrices R0 and R1, The cyclic symmetry [p]+ () is represented by rotation generator of matrix S0,1. Rank 3 The finite rank 3 Coxeter groups are [1,p], [2,p], [3,3], [3,4], and [3,5]. To reflect a point through a plane (which goes through the origin), one can use , where is the 3×3 identity matrix and is the three-dimensional unit vector for the vector normal of the plane. If the L2 norm of and is unity, the transformation matrix can be expressed as: [p,2] The reducible 3-dimensional finite reflective group is dihedral symmetry, [p,2], order 4p, . The reflection generators are matrices R0, R1, R2. R02=R12=R22=(R0×R1)3=(R1×R2)3=(R0×R2)2=Identity. [p,2]+ () is generated by 2 of 3 rotations: S0,1, S1,2, and S0,2. An order p rotoreflection is generated by V0,1,2, the product of all 3 reflections. [3,3] The simplest irreducible 3-dimensional finite reflective group is tetrahedral symmetry, [3,3], order 24, . The reflection generators, from a D3=A3 construction, are matrices R0, R1, R2. R02=R12=R22=(R0×R1)3=(R1×R2)3=(R0×R2)2=Identity. [3,3]+ () is generated by 2 of 3 rotations: S0,1, S1,2, and S0,2. A trionic subgroup, isomorphic to [2+,4], order 8, is generated by S0,2 and R1. An order 4 rotoreflection is generated by V0,1,2, the product of all 3 reflections. [4,3] Another irreducible 3-dimensional finite reflective group is octahedral symmetry, [4,3], order 48, . The reflection generators matrices are R0, R1, R2. R02=R12=R22=(R0×R1)4=(R1×R2)3=(R0×R2)2=Identity. Chiral octahedral symmetry, [4,3]+, () is generated by 2 of 3 rotations: S0,1, S1,2, and S0,2. Pyritohedral symmetry [4,3+], () is generated by reflection R0 and rotation S1,2. A 6-fold rotoreflection is generated by V0,1,2, the product of all 3 reflections. [5,3] A final irreducible 3-dimensional finite reflective group is icosahedral symmetry, [5,3], order 120, . The reflection generators matrices are R0, R1, R2. R02=R12=R22=(R0×R1)5=(R1×R2)3=(R0×R2)2=Identity. [5,3]+ () is generated by 2 of 3 rotations: S0,1, S1,2, and S0,2. A 10-fold rotoreflection is generated by V0,1,2, the product of all 3 reflections. Rank 4 There are 4 irreducible Coxeter groups in 4 dimensions: [3,3,3], [4,3,3], [31,1,1], [3,4,4], [5,3,3], as well as an infinite family of duoprismatic groups [p,2,q]. [p,2,q] The duprismatic group, [p,2,q], has order 4pq. [[p,2,p]] The duoprismatic group can double in order, to 8p2, with a 2-fold rotation between the two planes. 
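Returning to the rank 3 reflection-matrix construction described above, the following is a minimal numerical sketch for the octahedral group [4,3]. The choice of mirror normals (the simple roots of B3) is one convenient assumption among many; the verification of the defining relations and the enumeration of the 48 group elements do not depend on that choice.

```python
# Minimal numerical sketch of the reflection-matrix construction described
# above, using the octahedral group [4,3] as an example.
import numpy as np
from itertools import product

def reflection(n):
    """Householder reflection R = I - 2 n n^T for a (normalized) normal n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return np.eye(3) - 2.0 * np.outer(n, n)

# Mirror normals chosen so that (R0 R1)^4 = (R1 R2)^3 = (R0 R2)^2 = I.
R0 = reflection([0, 0, 1])
R1 = reflection([0, 1, -1])
R2 = reflection([1, -1, 0])

def order(M, limit=20):
    """Smallest k with M^k = I (within floating-point tolerance)."""
    P = np.eye(3)
    for k in range(1, limit + 1):
        P = P @ M
        if np.allclose(P, np.eye(3), atol=1e-9):
            return k
    raise ValueError("order not found")

assert order(R0 @ R1) == 4 and order(R1 @ R2) == 3 and order(R0 @ R2) == 2

def closure(gens):
    """Enumerate the whole group by closing the generators under multiplication."""
    def key(M):
        return (np.round(M, 6) + 0.0).tobytes()  # +0.0 maps -0.0 to 0.0
    elems = {key(np.eye(3)): np.eye(3)}
    frontier = list(elems.values())
    while frontier:
        new = []
        for A, B in product(frontier, gens):
            C = A @ B
            if key(C) not in elems:
                elems[key(C)] = C
                new.append(C)
        frontier = new
    return list(elems.values())

group = closure([R0, R1, R2])
print(len(group))                         # 48, the order of [4,3]
rotations = [M for M in group if np.isclose(np.linalg.det(M), 1.0)]
print(len(rotations))                     # 24, the order of the subgroup [4,3]+
```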
[3,3,3] Hypertetrahedral symmetry, [3,3,3], order 120, is easiest to represent with 4 mirrors in 5-dimensions, as a subgroup of [4,3,3,3]. [[3,3,3]] The extended group [[3,3,3]], order 240, is doubled by a 2-fold rotation matrix T, here reversing coordinate order and sign: There are 3 generators {T, R0, R1}. Since T is self-reciprocal R3=TR0T, and R2=TR1T. [4,3,3] A irreducible 4-dimensional finite reflective group is hyperoctahedral group (or hexadecachoric group (for 16-cell), B4=[4,3,3], order 384, . The reflection generators matrices are R0, R1, R2, R3. R02=R12=R22=R32=(R0×R1)4=(R1×R2)3=(R2×R3)3=(R0×R2)2=(R1×R3)2=(R0×R3)2=Identity. Chiral hyperoctahedral symmetry, [4,3,3]+, () is generated by 3 of 6 rotations: S0,1, S1,2, S2,3, S0,2, S1,3, and S0,3. Hyperpyritohedral symmetry [4,(3,3)+], () is generated by reflection R0 and rotations S1,2 and S2,3. An 8-fold double rotation is generated by W0,1,2,3, the product of all 4 reflections. [3,31,1] A half group of [4,3,3] is [3,31,1], , order 192. It shares 3 generators with [4,3,3] group, but has two copies of an adjacent generator, one reflected across the removed mirror. [3,4,3] A irreducible 4-dimensional finite reflective group is Icositetrachoric group (for 24-cell), F4=[3,4,3], order 1152, . The reflection generators matrices are R0, R1, R2, R3. R02=R12=R22=R32=(R0×R1)3=(R1×R2)4=(R2×R3)3=(R0×R2)2=(R1×R3)2=(R0×R3)2=Identity. Chiral icositetrachoric symmetry, [3,4,3]+, () is generated by 3 of 6 rotations: S0,1, S1,2, S2,3, S0,2, S1,3, and S0,3. Ionic diminished [3,4,3+] group, () is generated by reflection R0 and rotations S1,2 and S2,3. A 12-fold double rotation is generated by W0,1,2,3, the product of all 4 reflections. [[3,4,3]] The group [[3,4,3]] extends [3,4,3] by a 2-fold rotation, T, doubling order to 2304. [5,3,3] The hyper-icosahedral symmetry, [5,3,3], order 14400, . The reflection generators matrices are R0, R1, R2, R3. R02=R12=R22=R32=(R0×R1)5=(R1×R2)3=(R2×R3)3=(R0×R2)2=(R0×R3)2=(R1×R3)2=Identity. [5,3,3]+ () is generated by 3 rotations: S0,1 = R0×R1, S1,2 = R1×R2, S2,3 = R2×R3, etc. Rank 8 [34,2,1] The E8 Coxeter group, [34,2,1], , has 8 mirror nodes, order 696729600 (192x10!). E7 and E6, [33,2,1], , and [32,2,1], can be constructed by ignoring the first mirror or the first two mirrors respectively. Affine rank 2 Affine matrices are represented by adding an extra row and column, the last row being zero except last entry 1. The last column represents a translation vector. [∞] The affine group [∞], , can be given by two reflection matrices, x=0 and x=1. Affine rank 3 [4,4] The affine group [4,4], , (p4m), can be given by three reflection matrices, reflections across the x axis (y=0), a diagonal (x=y), and the affine reflection across the line (x=1). [4,4]+ () (p4) is generated by S0,1 S1,2, and S0,2. [4+,4+] () (pgg) is generated by 2-fold rotation S0,2 and glide reflection (transreflection) V0,1,2. [4+,4] () (p4g) is generated by S0,1 and R3. The group [(4,4,2+)] () (cmm), is generated by 2-fold rotation S1,3 and reflection R2. [3,6] The affine group [3,6], , (p6m), can be given by three reflection matrices, reflections across the x axis (y=0), line y=(√3/2)x, and vertical line x=1. [3[3]] The affine group [3[3]] can be constructed as a half group of . R2 is replaced by R'2 = R2×R1×R2, presented by the hyperplane: y+(√3/2)x=2. The fundamental domain is an equilateral triangle with edge length 2. Affine rank 4 [4,3,4] The affine group is [4,3,4] (), can be given by four reflection matrices. Mirror R0 can be put on z=0 plane. 
Mirror R1 can be put on plane y=z. Mirror R2 can be put on x=y plane. Mirror R3 can be put on x=1 plane. [4,3,4]+ () is generated by S0,1, S1,2, and S2,3. [[4,3,4]] The extended group [[4,3,4]] doubles the group order, adding with a 2-fold rotation matrix T, with a fixed axis through points (1,1/2,0) and (1/2,1/2,1/2). The generators are {R0,R1,T}. R2 = T×R1×T and R3 = T×R0×T. [4,31,1] The group [4,31,1] can be constructed from [4,3,4], by computing [4,3,4,1+], , as R'3=R3×R2×R3, with new R'3 as an image of R2 across R3. [3[4]] The group [3[4]] can be constructed from [4,3,4], by removing first and last mirrors, [1+,4,3,4,1+], , by R'1=R0×R1×R0 and R'3=R3×R2×R3. Notes References H.S.M. Coxeter: Kaleidoscopes: Selected Writings of H.S.M. Coxeter, editied by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 22) (Paper 23) (Paper 24) Norman Johnson Uniform Polytopes, Manuscript (1991) N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. (1966) Norman W. Johnson and Asia Ivic Weiss Quadratic Integers and Coxeter Groups PDF Can. J. Math. Vol. 51 (6), 1999 pp. 1307–1336 N. W. Johnson: Geometries and Transformations, (2018) PDF John H. Conway and Derek A. Smith, On Quaternions and Octonions, 2003, The Symmetries of Things 2008, John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, Ch.22 35 prime space groups, ch.25 184 composite space groups, ch.26 Higher still, 4D point groups Symmetry Group theory
Coxeter notation
[ "Physics", "Mathematics" ]
11,332
[ "Group theory", "Fields of abstract algebra", "Geometry", "Symmetry" ]
32,089,853
https://en.wikipedia.org/wiki/Greninger%20chart
In crystallography, a Greninger chart is a chart that allows angular relations between zones and planes in a crystal to be read directly from an X-ray diffraction photograph. The Greninger chart is a simple trigonometric tool to determine the angles γ and δ for a fixed sample-to-film distance. (If one uses a 2D detector, the problem of determining γ and δ can be solved mathematically using the equations which generate the Greninger chart.) A new chart must be generated for different sample-to-detector distances. (2σ is 2θ for the diffraction peak, and tan μ is x/y for the Cartesian coordinates of the diffraction peak.) The Greninger chart gives directly the two angles needed to plot poles on the Wulff net. It is critical to keep track of the relative arrangement of the sample and the film; if photographic film is used, this is achieved by cutting a corner of the film. For Polaroid film one must make a note of the arrangement of the face of the film in the camera. See also Bernal chart References Sources Greninger A. B. (1935). Zeitschrift für Kristallographie 91: 424. External links http://www.answers.com/topic/greninger-chart http://www.eng.uc.edu/~gbeaucag/Classes/XRD/Labs/Lab3Laue.html http://www-xray.fzu.cz/xraygroup/www/grchart.html Trigonometry X-rays Diffraction
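A sketch of the back-reflection Laue trigonometry that underlies such a chart. This works directly from the spot coordinates rather than reproducing the chart's own equations, and the geometric conventions (incident beam along +z, crystal at the origin, film normal to the beam at distance D on the source side) are assumptions for illustration only.

```python
# Illustrative sketch of back-reflection Laue geometry related to the
# Greninger chart. Conventions assumed here are for illustration; this is
# not the original chart construction.
import numpy as np

def pole_from_spot(x, y, D):
    """Return the reflecting-plane unit normal, the Bragg angle (degrees),
    and the spot azimuth for a spot at film coordinates (x, y) with
    sample-to-film distance D (same length units as x, y)."""
    s0 = np.array([0.0, 0.0, 1.0])                 # incident beam direction
    s = np.array([x, y, -D])
    s /= np.linalg.norm(s)                         # back-reflected beam direction
    n = s - s0
    n /= np.linalg.norm(n)                         # plane normal bisects s and -s0
    two_theta = np.degrees(np.arccos(np.clip(s @ s0, -1.0, 1.0)))
    bragg = two_theta / 2.0                        # scattering angle is 2*theta_B
    azimuth = np.degrees(np.arctan2(x, y))         # tan(mu) = x / y, as above
    return n, bragg, azimuth

if __name__ == "__main__":
    n, theta_B, mu = pole_from_spot(x=12.0, y=25.0, D=30.0)   # e.g. millimetres
    print(np.round(n, 3), round(theta_B, 1), round(mu, 1))
```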
Greninger chart
[ "Physics", "Chemistry", "Materials_science" ]
341
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Diffraction", "Crystallography", "Spectroscopy" ]
35,290,643
https://en.wikipedia.org/wiki/Reverse%20pharmacology
In the field of drug discovery, reverse pharmacology, also known as target-based drug discovery (TDD), is an approach in which a hypothesis is first made that modulation of the activity of a specific protein target, thought to be disease-modifying, will have beneficial therapeutic effects. Screening of chemical libraries of small molecules is then used to identify compounds that bind with high affinity to the target. The hits from these screens are then used as starting points for drug discovery. This method became popular after the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins. It is the most widely used method in drug discovery today. Unlike classical (forward) pharmacology, in the reverse pharmacology approach the assessment of in vivo efficacy of identified active (lead) compounds is usually performed in the final drug discovery stages. See also Classical pharmacology Reverse vaccinology References Drug discovery
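As a hedged caricature of the target-based screening step described above (not part of the article), the hit-picking logic might look as follows; every compound name and affinity value here is hypothetical.

```python
# Sketch: keep only library members that bind the target tightly enough to be "hits".
binding_affinity_nM = {          # hypothetical dissociation constants Kd versus the target
    "cmpd_001": 12_000.0,
    "cmpd_002": 85.0,
    "cmpd_003": 430.0,
    "cmpd_004": 3.2,
}

HIT_THRESHOLD_NM = 100.0         # arbitrary cut-off for "high affinity"

hits = sorted(
    (name for name, kd in binding_affinity_nM.items() if kd <= HIT_THRESHOLD_NM),
    key=binding_affinity_nM.get,
)
print(hits)  # ['cmpd_004', 'cmpd_002'] -> starting points for lead optimisation
```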
Reverse pharmacology
[ "Chemistry", "Biology" ]
182
[ "Pharmacology", "Life sciences industry", "Drug discovery", "Medicinal chemistry stubs", "Medicinal chemistry", "Pharmacology stubs" ]
35,292,764
https://en.wikipedia.org/wiki/Stig%20Stenholm
Stig Torsten Stenholm (26 February 1939 – 30 September 2017) was a theoretical physicist who formerly held an Academy of Finland professorship. Education and career Stenholm obtained an engineering degree at the Helsinki University of Technology (HUT), and a master of science degree in mathematics at the University of Helsinki, both in 1964. He then earned his Dr. phil. at Oxford in 1967 on the topic of quantum liquids under the supervision of Dirk ter Haar. From 1967 to 1968, he performed postdoctoral work at Yale University. He obtained a position as professor at the University of Helsinki in 1974. In 1980, Stenholm was appointed as the scientific director of the Research Institute for Theoretical Physics (TFT). His colleague Kalle-Antti Suominen later affirmed: "As a director Stig was very broad-minded and without this the happy atmosphere of TFT could not have existed." In the 1990s, the TFT was replaced, and the Helsinki Institute of Physics (HIP) took its place. In 1997, Stenholm moved to the Royal Institute of Technology in Stockholm, Sweden. He retired in 2005. He delivered the presentation speech for the 2005 Nobel Prize in Physics at the Stockholm Concert Hall. Work Stenholm specialised in quantum optics and worked among other topics on laser cooling, Bose–Einstein condensation and quantum information. Honors He received an Academy of Finland professorship for the work he performed from 1992 to 1997, and he was a member of the Royal Swedish Academy of Sciences, the Austrian Academy of Sciences, the Finnish Society of Sciences and Letters and the Swedish Academy of Engineering Sciences in Finland. Books Stenholm, Stig: The Quest for Reality. Bohr and Wittgenstein – two complementary views. Oxford University Press. 2011. (abstract) Stenholm, Stig & Suominen, Kalle-Antti: Quantum Approach to Informatics. Wiley, 2005. Stenholm, Stig: The Foundations of Laser Spectroscopy. Wiley / Dover Books on Physics, 1984. Stenholm, Stig: The semiclassical theory of the gas laser. Pergamon Press, 1971. References External links Stig Stenholm. Scientific Commons Quantum complex systems: entanglement and decoherence from nano- to macroscales – QUACS. Research Leader: Professor emeritus Stig Stenholm, KTH Research Projects Database. Theoretical physicists 1939 births 2017 deaths Academic staff of the University of Helsinki Academic staff of the KTH Royal Institute of Technology 20th-century Finnish physicists University of Helsinki alumni Members of the Royal Swedish Academy of Sciences Members of the Austrian Academy of Sciences
Stig Stenholm
[ "Physics" ]
552
[ "Theoretical physics", "Theoretical physicists" ]
35,298,389
https://en.wikipedia.org/wiki/Dalian%20Institute%20of%20Chemical%20Physics
The Dalian Institute of Chemical Physics (DICP) (), also called Huawusuo (), is a research centre specialized in physical chemistry, chemical physics, biophysical chemistry, chemical engineering and materials science belonging to the Chinese Academy of Sciences. It is located in Dalian, Liaoning, China. General Information Having its origin in South Manchuria Railway's research department, which later became the Central Research Centre, the Dalian Institute of Chemical Physics was thus named in 1961 and moved its location from 129 Street (at Zhongshan Road) to the current address in 1995. Dalian Institute of Chemical Physics is one of the leading research institutes in China. In the past half century, the institute has become internationally recognised for its research in catalytic chemistry, chemical engineering, chemical laser and molecular reaction dynamics, organic synthesis and chromatography for modern analytical chemistry and biotechnology. The institute houses one national laboratory, two state key laboratories, and five national engineering research centres. The Dalian National Lab of Clean Energy (DNL) is the first national laboratory in the field of energy research and integrates laboratories across DICP and other institutions. DNL is subdivided into 10 divisions and its research is focused on the efficient conversion and optimal utilisation of fossil energy, clean energy conversion technologies and the economically viable use of solar and biomass energy. DICP's other main laboratories include the Laboratory of Instrumentation and Analytical Chemistry, the Laboratory of Fine Chemicals, the State Key Laboratory of Catalysis, the Laboratory of Chemical Lasers, the State Key Laboratory of Molecular Reaction Dynamics, the Laboratory of Aerospace Catalysis and New Materials, and the Laboratory of Biotechnology. In 1979, Chinese scientists at the Dalian Institute of Chemical Physics first proposed the structure of the nitroamine explosive Hexanitrohexaazaisowurtzitane, an explosive with greater energy than conventional HMX or RDX. In December 2019, a Chinese team involving scientists from the Dalian Institute of Chemical Physics and the company Feye UAV Technology developed a methanol-powered fuel system that kept a drone in the air for 12 hours, the FY-36. Fuel cell research at the institute had first started in the 1960s. Since mid-2010 the Institute and its spin-off company Rongke Power have been the world's leading developer and manufacturer of vanadium redox flow batteries. Basic Data Name: Dalian Institute of Chemical Physics, Chinese Academy of Sciences Established: 1949 Director: Liu Zhongmin () Address: No. 457, Zhongshan Road, Shahekou District, Dalian, Liaoning, China. Postal code: 116023 Transportation Bus: Huawusuo Stop, No. 16, 22, 23, 28, 37, 406, 531, 901 Tramway: Huawusuo Stop, No. 202 Line (between Xinghai Square and Dalian Medical University's No. 2 Hospital) See also South Manchuria Railway Chinese Academy of Sciences Dalian Hi-Tech Zone References External links Education in Dalian Research institutes of the Chinese Academy of Sciences 1949 establishments in China Chemical physics Physics research institutes
Dalian Institute of Chemical Physics
[ "Physics", "Chemistry" ]
641
[ "nan", "Applied and interdisciplinary physics", "Chemical physics" ]
35,300,443
https://en.wikipedia.org/wiki/Imidic%20acid
In chemistry, an imidic acid is any molecule that contains the -C(=NH)-OH functional group. It is the tautomer of an amide and the isomer of an oxime. The term "imino acid" is an obsolete term for this group that should not be used in this context because it has a different molecular structure. Imidic acids can be formed by metal-catalyzed dehydrogenation of geminal amino alcohols. For example, methanolamine, the parent compound of the amino alcohols, can be dehydrogenated to methanimidic acid, the parent compound of the imidic acids. H2NCH2OH → HNCHOH + H2 (tautomer of formamide) Geminal amino alcohols with side chains similarly form imidic acids with the same side chains: H2NCHROH → HNCROH + H2 Another way to form imidic acids is the reaction of carboxylic acids with azanone. For example, the reaction for carbamic acid: H2NCOOH + HNO → H2NCNHOH + O2 (tautomer of urea) And the general reaction for substituted imidic acids: RCOOH + R'NO → RCNR'OH + O2 Another mechanism is the reaction of carboxylic acids with diazene or other azo compounds, forming azanone. RCOOH + HNNH → RCNHOH + HNO Imidic acids tautomerize to amides by a hydrogen shift from the oxygen to the nitrogen atom. Amides are more stable in an environment with oxygen or water, whereas imidic acids dominate the equilibrium in solution with ammonia or methane. HNCHOH ⇌ HCONH2 RNCR'OH ⇌ R'CONHR See also Imidate Alkanolamine Hemiaminal References Amides Functional groups
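A hedged illustration (not from the article): using the open-source RDKit toolkit, one can confirm that formamide and its imidic acid tautomer, methanimidic acid, share the molecular formula CH3NO and differ only in where the mobile hydrogen sits. The SMILES strings chosen below are assumptions of this sketch.

```python
# Sketch: amide and imidic acid tautomers have the same molecular formula.
from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

formamide   = Chem.MolFromSmiles("NC=O")   # H2N-CH=O, the amide form
imidic_acid = Chem.MolFromSmiles("N=CO")   # HN=CH-OH, the imidic acid tautomer

for mol in (formamide, imidic_acid):
    print(Chem.MolToSmiles(mol), rdMolDescriptors.CalcMolFormula(mol))
# Both lines report the formula CH3NO: same atoms, different tautomer.
```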
Imidic acid
[ "Chemistry" ]
402
[ "Amides", "Functional groups" ]
45,196,112
https://en.wikipedia.org/wiki/Charge-shift%20bond
In theoretical chemistry, the charge-shift bond is a proposed new class of chemical bonds that sits alongside the three familiar families of covalent, ionic, and metallic bonds, in which electrons are shared or transferred. The charge-shift bond derives its stability from the resonance of ionic forms rather than from the covalent sharing of electrons, which is often depicted as electron density accumulated between the bonded atoms. A feature of the charge-shift bond is that the predicted electron density between the bonded atoms is low. It has long been known from experiment that the accumulation of electric charge between the bonded atoms is not necessarily a feature of covalent bonds. An example where charge-shift bonding has been used to explain the low electron density found experimentally is in the central bond between the inverted tetrahedral carbon atoms in [1.1.1]propellanes. Theoretical calculations on a range of molecules have indicated that a charge-shift bond is present, a striking example being fluorine, F2, which is normally described as having a typical covalent bond. The charge-shift bond (CSB) has also been shown to exist at the cation-anion interface of protic ionic liquids (PILs). Researchers have also shown how CSB character in PILs correlates with their physicochemical properties. Valence bond description The valence bond view of chemical bonding that owes much to the work of Linus Pauling is familiar to many, if not all, chemists. The basis of Pauling's description of the chemical bond is that an electron pair bond involves the mixing (resonance) of one covalent and two ionic structures. In bonds between two atoms of the same element, homonuclear bonds, Pauling assumed that the ionic structures make no appreciable contribution to the overall bonding. This assumption followed on from published calculations for the hydrogen molecule in 1933 by Weinbaum and by James and Coolidge that showed that the contribution of ionic forms amounted to only a small percentage of the H−H bond energy. For heteronuclear bonds, A−X, Pauling estimated the covalent contribution to the bond dissociation energy as being the mean of the bond dissociation energies of homonuclear A−A and X−X bonds. The difference between the mean and the observed bond energy was assumed to be due to the ionic contribution. An illustrative calculation for HCl is sketched below. The ionic contribution to the overall bond dissociation energy was attributed to the difference in electronegativity between A and X, and these differences were the starting point for Pauling's calculation of the individual electronegativities of the elements. The proponents of charge-shift bonding re-examined the validity of Pauling's assumption that ionic forms make no appreciable contribution to the overall bond dissociation energies of homonuclear bonds. What they found using modern valence bond methods was that in some cases the contribution of ionic forms was significant, the most striking example being F2, fluorine, where their calculations indicate that the bond energy of the F−F bond is due wholly to the ionic contribution. Calculated bond energies The contribution of ionic resonance structures has been termed the charge-shift resonance energy, REcs, and values have been calculated for a number of single bonds. The results show that for homonuclear bonds the charge-shift resonance energy can be significant, and for F2 and Cl2 it is the attractive component, whereas the covalent contribution is repulsive.
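As a hedged, purely numerical sketch of the Pauling-style partition referred to above (the calculation for HCl), using approximate textbook bond dissociation energies rather than the source's original figures:

```python
# Sketch: Pauling's covalent/ionic partition for H-Cl (values in kJ/mol, approximate).
D_HH, D_ClCl, D_HCl = 436.0, 242.0, 431.0

covalent_estimate = (D_HH + D_ClCl) / 2          # mean of the two homonuclear bond energies
ionic_contribution = D_HCl - covalent_estimate   # excess attributed to ionic resonance

print(covalent_estimate)    # ~339 kJ/mol
print(ionic_contribution)   # ~92 kJ/mol, linked by Pauling to the H/Cl electronegativity gap
```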
The reduced density along the bond axis is apparent using ELF, the electron localization function, a tool for analysing electron density. The bridge bond in a propellane The bridge bond (the inverted bond between the bridgehead atoms which is common to the three cycles) in a substituted [1.1.1]propellane has been examined experimentally. A theoretical study on [1.1.1]propellane has shown that it has a significant REcs stabilisation energy. Factors causing charge shift bonding Analysis of a number of compounds where the charge-shift resonance energy is significant shows that in many cases elements with high electronegativities are involved; these have smaller orbitals and are lone-pair rich. Factors that reduce the covalent contribution to the bond energy include poor overlap of bonding orbitals, and the lone pair bond weakening effect, where repulsion due to the Pauli exclusion principle is the main factor. There is no correlation between the charge-shift resonance energy REcs and the difference between the electronegativities of the bonded atoms, as might be expected from the Pauling bonding model; however, there is a global correlation between REcs and the sum of their electronegativities, which can be accounted for in part by the lone pair bond weakening effect. The charge-shift nature of the inverted bond in [1.1.1]propellanes has been ascribed to Pauli repulsion from the adjacent "wing" bonds destabilising the covalent contribution. Experimental evidence for charge-shift bonds The interpretation of experimentally determined electron density in molecules often uses AIM theory. In this, the electron density between the atomic nuclei along the bond path is calculated, and the bond critical point, where the density is at a minimum, is determined. The factors that determine the type of chemical bond are the Laplacian and the electron density at the bond critical point. At the bond critical point a typical covalent bond has significant density and a large negative Laplacian. In contrast, a "closed shell" interaction as in an ionic bond has a small electron density and a positive Laplacian. A charge-shift bond is expected to have a positive or small Laplacian. Only a limited number of experimental determinations have been made; bonds found to have a positive Laplacian include the N–N bond in solid N2O4 and the (Mg−Mg)2+ diatomic unit. References Chemical bonding
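A hedged sketch (not from the article) of the AIM-style reading just described, expressed as a small classifier over the density and Laplacian at the bond critical point; the numeric thresholds and example values are illustrative assumptions only.

```python
# Sketch: classify a bond from (density, Laplacian) at the bond critical point.
def classify_bcp(density: float, laplacian: float) -> str:
    if density > 0.15 and laplacian < -0.1:
        return "typical covalent bond (large density, clearly negative Laplacian)"
    if density < 0.05 and laplacian > 0.0:
        return "closed-shell / ionic-like interaction (small density, positive Laplacian)"
    if laplacian >= -0.1:
        return "charge-shift candidate (positive or only slightly negative Laplacian)"
    return "intermediate case"

print(classify_bcp(0.25, -0.8))   # covalent-like
print(classify_bcp(0.03, 0.2))    # closed-shell-like
print(classify_bcp(0.12, 0.05))   # charge-shift candidate (hypothetical values)
```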
Charge-shift bond
[ "Physics", "Chemistry", "Materials_science" ]
1,216
[ "Chemical bonding", "Condensed matter physics", "nan" ]
45,197,219
https://en.wikipedia.org/wiki/Escalation%20archetype
The escalation archetype is one of the possible types of system behaviour known as system archetypes. The escalation archetype is common in situations of non-cooperative games where each player can make their own decisions and these decisions lead to an outcome for the player. However, when both players try to maximize their output (at the expense of the other one) they can get into a loop where each player tries harder and harder to surpass the opponent. While this can have favourable consequences, it can also lead to self-destructive behaviour. Structure Elements of the archetype An escalation archetype system can be described using causal loop diagrams, which may consist of balancing and reinforcing loops. Balancing loop A balancing loop is a structure representing a negative feedback process. In such a structure, a change in the system leads to actions that usually eliminate the effect of that change, which means that the system tends to remain stable over time. Reinforcing loop A reinforcing loop is a structure representing a positive feedback process. This reinforcing feedback means that even a small change in the system can lead to huge disturbances, e.g. variable A is increased, which leads to an increase of variable B, which leads to another increase of A, and so there might be exponential growth over time. Escalation archetype as balancing loops Viewed as two balancing loops, the archetype works as follows. When X makes an action, it leads to a change in the results of X relative to the results of Y. Y then makes an action to equalize the situation, and the result again changes the balance and induces another action by X. As this repeats, the actions taken by X and Y become bigger and bigger to keep up with the other's actions and results. Escalation archetype as a reinforcing loop The archetype can also be read as a single reinforcing loop: more action by X creates bigger results for X. The bigger the results of X, the bigger the difference between X's and Y's results. The bigger difference means more action by Y, and more action by Y leads to bigger results for Y. The bigger results of Y lead to a smaller difference between X and Y, but the smaller this difference is, the bigger the action of X will be, and it starts all over again. In simplified form, increased activity by X leads to an increased threat for Y, which leads to increased activity by Y. Increased activity by Y leads to an increased threat for X, which creates further potential for the activity of X to grow. Examples Arms race A well-known example of the escalation archetype is the arms race. The idea is that in the arms race two (or more) parties are competing to have the strongest army and weapons. An example is the race in producing nuclear weapons by the United States and the Soviet Union, which was an important factor in the Cold War. Over time, each party can temporarily get a slight advantage, but then the other one produces, or obtains in some other way, more weapons and gets the advantage on its side, temporarily. In the end, both parties have great military power that is very threatening not only to them but also to non-aligned parties. Picking apples in an orchard The escalation archetype can turn up in situations such as picking fruit in an orchard. Imagine a large apple orchard with a bountiful crop. The owner of such a plantation cannot pick the whole harvest himself because there are simply too many trees.
Therefore, he employs fruit pickers to do the work for him. He tries to figure out a way to measure the work they do so that he can reward them fairly. As he is suspicious that workers might work slowly, he is hesitant to link the wage only to hours worked, and he comes up with an idea. He divides workers into two groups and declares that the group which harvests more apples will get a bonus, in addition to their regular wage. Both groups start harvesting apples with passion and energy. First, group X collects a pallet load a little bit sooner than the second group, Y. Therefore, the Y-group motivates those members who were a little bit slower to increase their pace. Now Y is a little bit better, so they not only catch up, but even finish the second pallet load before the X-group. Then X comes up with an idea that they should assign roles to their members – some will pick apples from the upper part of the trees using ladders, while some will collect those that are in the lower part of the trees; others will load boxes, and one person will organise the work and help where necessary. This advantage enables the X-group to again get ahead of Y. Y then adapts X's model, makes some modifications to the procedures, and soon becomes the leading group. This improvement of processes continues in several iterations until both parties are exceptionally effective in harvesting apples. The owner can be satisfied with the situation, as pallets are quickly being loaded with apples. Should everything continue this way, the orchard could be harvested in a few hours, with the X-group beating the Y-group by a tenth of a percent. The owner could reward only the winning team or reward both teams, because they were almost equally hard-working. However, due to the fact that one group was always a little bit behind, the situation in the middle of the day is bad for the group which is slightly slower; let's say it is the Y-group. They can continue working at the same rate, and they would finish second, with a loss of a tenth of a percent. Or, they can come up with another innovation, which would enable them to increase their production output. They have an idea that harvesting the topmost apples is slower than harvesting the rest of them. Because of that, they decide to skip these apples and only collect the others. This way, the situation has escalated problematically. While Y could win the competition now, there will be a considerable quantity of apples left on the trees. Or, if both groups are instructed not to leave a single apple in the orchard, they will have to stay much longer to finish these apples, and the owner will have higher costs for their wages. The owner could, of course, set a condition that no team could leave a tree until it is fully harvested. That would help in some way to break the escalation archetype, unless workers realize they are not punished for some other undesirable behaviour, for example being careless regarding tree condition after the harvest. As can be seen in this example, the escalation archetype might bring positive results (faster harvesting) but it is necessary to monitor the behaviour of the affected system to ensure long-term profitability. The attention fight To avoid naming real people in this example of the escalation archetype, the behaviour of the imaginary persons Amy (A), Betty (B), Cecil (C) and Daniel (D) is described. Amy, Betty, Cecil and Daniel want to become famous celebrities with a lot of money, hordes of admirers and amazing children.
They already have many friends and everybody in the town knows each one of them. They all work hard to be the best in their field of expertise and soon they are known to the whole country. They know of each other and try to work harder to become more famous than each other. This is when an escalation archetype comes into play. They become the most famous celebrities in the country, and each one of them realizes that to draw more attention than the others they need to be unique and very special. As A starts to work even harder than before, B notices that A's fame is growing more than hers and starts to work harder than A. This is noted by C and he does what must be done - starts working more than anyone else. But there is also D, whose ambitions are no smaller; he wants to be the most famous celebrity, so he starts working even harder than anyone else. As A notices her effort is not sufficient, she does what is obvious - starts to work more than before. Now, this cycle could repeat continuously until Amy, Betty, Cecil or Daniel dies of exhaustion. In the meantime, some of them could start taking drugs with the presumption that it could boost their productivity and ability to concentrate, or with the aim of getting rid of depression from working all the time. Another solution they might consider feasible would be some way of eliminating their opponents - by false accusation or by pointing out their faults. Or, if they found it impossible to be better by simply working more, they could try to figure out some way to attract attention by qualitative change instead of merely quantitative change. This way, A could say something shocking on TV; B could simply follow by saying something even more shocking or controversial. Then C would feel threatened and come up with the idea of making controversial photographs. Then D would try to surpass everyone and do something that attracts the attention of the media and the public. They would escalate this to an extreme situation. While, at the beginning, the competitiveness was beneficial for Amy, Betty, Cecil, Daniel and the public, in the long term, many negative consequences result. What could be a meaningful solution for these people? They could have set some limits for themselves beforehand, for example, how much time they are willing to work to achieve their desire to be a famous celebrity and what is acceptable behaviour and what is not. If they are not able to do so, there has to be some mechanism from outside to stop them - e.g., family or friends giving them cautionary advice. Competing children The tendency of parents to compare their children creates a nourishing environment for the escalation archetype. Parents tend to compare their kids with other children and among their own kids. This creates pressure on children as they are expected to perform well. Imagine a family with two kids named, for example, Lisa and Bartolomeo. Their parents are very much children-focused and hope their kids will get a proper education and live what they consider successful lives. They invest significant portions of both their family budget and time into both children and hope that this investment will pay off in the form of Lisa and Bartolomeo being successful in school and later in life. Lisa and Bartolomeo are ordinary children with some hobbies who study diligently but without extreme passion. They simply do what they have to do. Their results are good but not perfect.
So their parents come and start the escalation archetype by telling the kids that the one with better marks will go on a special holiday of their choice. As both Lisa and Bartolomeo like travelling a lot, they start to study hard. To the satisfaction of their parents, the children's marks soon get better, which is a very positive outcome. Yet a problem arises. As they both study really hard to keep pace with the other one, something might go wrong. For example, Bartolomeo may be a very creative and skilful painter but not so talented in studying usual subjects like maths or languages. Sooner or later he will reach his limits. Then, to keep up the good marks, he will have to abandon his painting hobby and his talent will be wasted. Or he will try to sustain great marks and will start cheating during exams. However, even when no negative effect happens, there will be a difficulty in how to reward the children after Bartolomeo finishes second by a very close margin. Should their parents appreciate only Lisa's effort, or also acknowledge Bartolomeo's achievements? If they reward only Lisa, Bartolomeo could easily become demotivated, and then it would be impossible to make use of the positive effects of the escalation archetype. On the other hand, rewarding both could have the same effect on both children, as they realise that their extra effort does not necessarily mean a better outcome. There is also an alternative version of how competition amongst children can be affected by the escalation archetype. When all parents motivate children to improve in comparison to their peers, they will all study harder and harder while the differences amongst the participating kids remain relatively stable (and if teachers increase requirements they will even only retain their marks). Under such simple circumstances most children might benefit from the competition; nevertheless, children with weaker intellectual skills may become isolated when they are no longer able to keep up with the others. Conversely, in another alternative scenario where all children are demotivated to study for some reason, their results get worse and worse (and if teachers decrease requirements they will retain their marks while being less educated), and the downward spiral works in a way that makes the situation worse and worse. Risks and opportunities The dangers of systems with escalation archetypes are various. First, it might be difficult to identify the existence of the archetype at first sight. Then the behaviour of the system might look desirable at first and therefore seem to require no immediate action. Another risk is the possibility of exponential growth within such a structure. Finally, the system might have different outcomes in the short term and the long term. The escalation archetype also comes with the possibility of making a big change in the system with a little input or a small action at the beginning (due to the fact that it behaves like a reinforcing loop). Solution and optimization To remove the downward or upward spiral effect of the escalation archetype, a change in the system is necessary which breaks the ongoing pattern. That change is typically switching the actors from a non-cooperative game mode to cooperative game behaviour, so that they stop escalating their actions to keep up with the others and instead find a mutual solution and direction. See also Attractiveness principle Fixes that fail Growth and underinvestment Limits to growth References Causality Conflict (process) Systems theory
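A hedged, minimal simulation (not part of the article) of the reinforcing escalation loop described in the Structure section: each side raises its effort in proportion to the gap it perceives, so both efforts keep climbing even though neither gains a lasting lead. The coefficients are illustrative assumptions.

```python
# Sketch: discrete-time escalation between two parties X and Y.
def escalate(steps: int = 10, reaction: float = 1.2):
    x, y = 1.0, 1.0                              # initial "results" of X and Y
    history = []
    for _ in range(steps):
        x += reaction * max(y - x, 0.0) + 0.5    # X responds to any deficit, plus a baseline push
        y += reaction * max(x - y, 0.0) + 0.5    # Y responds in kind
        history.append((round(x, 2), round(y, 2)))
    return history

for step, (x, y) in enumerate(escalate(), start=1):
    print(step, x, y)   # both efforts keep climbing while the gap between them stays small
```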
Escalation archetype
[ "Physics", "Biology" ]
2,818
[ "Behavior", "Aggression", "Human behavior", "Conflict (process)" ]
45,197,429
https://en.wikipedia.org/wiki/Modified%20aldol%20tandem%20reaction
Modified aldol tandem reaction is a sequential chemical transformation that combines an aldol reaction with other chemical reactions that generate enolates. Enolates are a common building block in chemical syntheses and are typically formed by the addition of base to a ketone or aldehyde. Modified aldol tandem reactions allow similar reactivity to be produced without the need for a base, which may have adverse effects in a given chemical synthesis. A representative example is the decarboxylative aldol reaction (Figure "Modified aldol tandem reaction, decarboxylative aldol reaction as an example"), where the enolate is generated via a decarboxylation reaction mediated by either transition metals or organocatalysts. A key advantage of this reaction over other types of aldol reaction is the selective generation of an enolate in the presence of aldehydes. This allows for the directed aldol reaction to produce a desired cross aldol. Transition metals have been used to mediate the modified aldol tandem reaction. Allyl β-keto carboxylates can be used as substrates for the palladium-mediated decarboxylative aldol reaction (Figure "Palladium-mediated decarboxylative aldol reaction with allyl β-keto carboxylates"). The allyl group can be removed by palladium; the subsequent decarboxylation reaction selectively generates the enolate at the β-keto group, which can further react with an aldehyde to generate aldols. Using a decarboxylation reaction to generate an enolate is a common strategy in biosynthetic pathways such as polyketide synthesis, where a malonic acid half thioester can be converted to the corresponding enolate for the Claisen condensation reaction. Inspired by this, a modified tandem aldol reaction has been developed using the malonic acid half thioester as the enolate source. A copper-based catalyst system has been developed for efficient aldol generation under mild conditions (Figure "Decarboxylative aldol reaction with malonic acid half thioester"). References Organic reactions
Modified aldol tandem reaction
[ "Chemistry" ]
435
[ "Organic reactions" ]
45,197,601
https://en.wikipedia.org/wiki/Grieco%20three-component%20condensation
The Grieco three-component condensation is an organic chemistry reaction that produces nitrogen-containing six-membered heterocycles via a multi-component reaction of an aldehyde, a nitrogen component, such as aniline, and an electron-rich alkene. The reaction is catalyzed by trifluoroacetic acid or Lewis acids such as ytterbium trifluoromethanesulfonate (Yb(OTf)3). The reaction is named for Paul Grieco, who first reported it in 1985. In the original paper the nitrogen components were benzylamine, methylamine or ammonium chloride; the reaction now also includes anilines, similar to the earlier Povarov reaction. The reaction process involves the formation of an aryl immonium ion intermediate followed by an aza Diels-Alder reaction with an alkene. Imines are electron-poor, and thus usually function as the dienophile. Here, however, the alkene is electron-rich, so it reacts well with the immonium diene in an inverse electron-demand Diels–Alder reaction. Researchers have extended the Grieco three-component reaction to reactants or catalysts immobilized on solid support, which greatly expands the application of this reaction to various combinatorial chemistry settings. Kiselyov and Armstrong were the first to report a solid-supported version of this reaction; they found that the reaction works well with each of the reactants immobilized on solid support. Kobayashi and co-workers showed that a polymer-supported scandium catalyst catalyzes the Grieco reaction with high efficiency. Given the effectiveness of the reaction and the commercial availability of various Grieco partners, the Grieco three-component coupling is very useful for preparing quinoline libraries for drug discovery. See also Povarov reaction References Organic reactions Multiple component reactions Name reactions
Grieco three-component condensation
[ "Chemistry" ]
398
[ "Name reactions", "Organic reactions" ]
45,199,011
https://en.wikipedia.org/wiki/Bois%20du%20Cazier
The Bois du Cazier () was a coal mine in what was then the town of Marcinelle, near Charleroi, in Belgium, which today is preserved as an industrial heritage site. It is best known as the location of a major mining disaster that took place on August 8, 1956 in which 262 men, including a large number of Italian labourers, were killed. Aside from memorials to the disaster, the site features a small woodland park, preserved headframes and buildings, as well as an Industrial Museum and Glass Museum. The museum features on the European Route of Industrial Heritage and is one of the four Walloon mining sites listed by UNESCO as a World Heritage Site in 2012. History The history of coal mining on the site of the Bois du Cazier dates back to a concession awarded by royal decree on 30 September 1822; a transcription error caused the name of the site to be changed from Bois de Cazier. After 1898, the site was owned by the charbonnages d'Amercœur company and operated by the Société anonyme du Charbonnage du Bois du Cazier. The site had two mine shafts reaching and deep. A third shaft, known as the Foraky shaft, was begun in the mid-1950s. By 1955, the mine produced of coal annually and employed a total of 779 workers, many of whom were not Belgian but migrant workers from Italy and elsewhere. They were housed by the mining companies, which in reality meant they moved into Nissen huts in former prisoner of war camps in the region. On 8 August 1956, a major mining accident occurred and a fire destroyed the mine; 262 workers of 12 nationalities were killed. In the aftermath of the disaster, Italian immigration stopped, mining safety regulations were revised all across Europe, and a Mines Safety Commission was established. Full production at the Bois du Cazier resumed the following year. The company was liquidated in January 1961 and the mine finally closed in December 1967. It was listed as a national monument on 28 May 1990 and opened as a museum in 2002. Marcinelle disaster of 1956 On 8 August 1956, a major mining disaster occurred at the Bois du Cazier. The accident began at 8:10 AM when the hoist mechanism in one of the shafts was started before the coal wagon had been completely loaded into the cage. Electric cables ruptured, starting an underground fire within the shaft. The moving cage also ruptured oil and air pipes, which made the fire worse and destroyed much of the winch mechanism. Smoke and carbon monoxide spread down the mine, killing all the miners trapped by the fire. Despite an attempted rescue from the surface, only 13 of the miners who had been underground at the time of the accident survived; 262 were killed, making it the worst mining accident in Belgian history. Because of the guest worker programme then in force, only 96 of those killed in the accident were Belgian nationals; in total 12 nationalities were represented among the dead, including 136 Italians. The remains of the last miners, trapped at the bottom of the mine, were only found on 23 August 1956. The excavators famously reported that they were "all corpses" (tutti cadaveri) inside the mine. The disaster is considered a major moment in Belgian and Italian post-war history and was the subject of a 2003 documentary film, Inferno Below, which won an award at the Festival International de Programmes Audiovisuels. Museums Since March 2002, the Bois du Cazier has been open to the public as a museum complex.
Most of the original site of the mine is preserved except the derelict Foraky headframe, dating to the 1960s, which was demolished in 2004. The mine buildings house a small Industrial Museum (Musée d'Industrie), displaying artefacts relating to Belgium's industrial history. The Glass Museum of Charleroi (Musée du Verre de Charleroi) also reopened in the same site in 2007, displaying its collection of historic glassware. There are several spaces with memorials to the 1956 disaster. The slag heaps around the mine have been landscaped and can also be visited by the public. The museum is one of the four sites inscribed as a UNESCO World Heritage Site under the Major Mining Sites of Wallonia listing. It also features on the European Route of Industrial Heritage. In 2006, the Bois du Cazier received 46,000 visitors. See also Tiberio Murgia, Italian actor who worked at the mine in the mid-1950s Salvatore Adamo, Belgian singer whose father migrated from Italy to work at Marcinelle Elio Di Rupo, Belgian Prime Minister and the son of an Italian miner Mining accident, including a list References Further reading External links Official website Protocollo italo-belga Industry museums in Belgium Museums in Hainaut (province) Mining museums Glass museums and galleries European Route of Industrial Heritage Anchor Points Coal mines in Belgium
Bois du Cazier
[ "Materials_science", "Engineering" ]
990
[ "Glass engineering and science", "Glass museums and galleries" ]
45,204,119
https://en.wikipedia.org/wiki/Drilling%20jumbo
A drilling jumbo or drill jumbo is a rock drilling machine. Use Drilling jumbos are usually used in underground mining, if mining is done by drilling and blasting. They are also used in tunnelling, if rock hardness prevents use of tunnelling machines. It is considered a powerful tool for facilitating the labor-intensive process of mineral extraction. Description A drilling jumbo consists of one, two or three rock drill carriages and sometimes a platform, which the miner stands on to load the holes with explosives, clear the face of the tunnel or do something else. The carriages are bolted onto the chassis, which supports the miner's cabin as well as the engine. Although modern drilling jumbos are relatively large, there are smaller ones for use in cramped conditions. Whereas modern jumbos are usually fitted with rubber tires and diesel-powered, there also exist variants with steel wheels that ride on rails, and even single-carriage sled-mounted ones. Electric power is also common, and historic jumbos were powered by compressed air. Electricity and compressed air produce little to no exhaust gases, which is preferable if work is done in smaller tunnels where good ventilation is difficult. The drilling jumbo was invented in 1849 by J. J. Couch of Philadelphia. References Mining equipment
Drilling jumbo
[ "Engineering" ]
259
[ "Mining equipment" ]
26,456,640
https://en.wikipedia.org/wiki/Membrane
A membrane is a selective barrier; it allows some things to pass through but stops others. Such things may be molecules, ions, or other small particles. Membranes can be generally classified into synthetic membranes and biological membranes. Biological membranes include cell membranes (outer coverings of cells or organelles that allow passage of certain constituents); nuclear membranes, which cover a cell nucleus; and tissue membranes, such as mucosae and serosae. Synthetic membranes are made by humans for use in laboratories and industry (such as chemical plants). This concept of a membrane has been known since the eighteenth century but was used little outside of the laboratory until the end of World War II. Drinking water supplies in Europe had been compromised by the war and membrane filters were used to test for water safety. However, due to the lack of reliability, slow operation, reduced selectivity and elevated costs, membranes were not widely exploited. The first use of membranes on a large scale was with microfiltration and ultrafiltration technologies. Since the 1980s, these separation processes, along with electrodialysis, have been employed in large plants and, today, several experienced companies serve the market. The degree of selectivity of a membrane depends on the membrane pore size. Depending on the pore size, they can be classified as microfiltration (MF), ultrafiltration (UF), nanofiltration (NF) and reverse osmosis (RO) membranes. Membranes can also be of various thicknesses, with homogeneous or heterogeneous structure. Membranes can be neutral or charged, and particle transport can be active or passive. The latter can be facilitated by pressure, concentration, chemical or electrical gradients of the membrane process. Membrane process classifications Microfiltration (MF) Microfiltration removes particles larger than 0.08-2 μm and operates within a range of 7-100 kPa. Microfiltration is used to remove residual suspended solids (SS), to remove bacteria in order to condition the water for effective disinfection and as a pre-treatment step for reverse osmosis. Relatively recent developments are membrane bioreactors (MBR), which combine microfiltration and a bioreactor for biological treatment. Ultrafiltration (UF) Ultrafiltration removes particles larger than 0.005-2 μm and operates within a range of 70-700 kPa. Ultrafiltration is used for many of the same applications as microfiltration. Some ultrafiltration membranes have also been used to remove dissolved compounds with high molecular weight, such as proteins and carbohydrates. Also, they can remove viruses and some endotoxins. Nanofiltration (NF) Nanofiltration is also known as "loose" RO and can reject particles smaller than 0.002 μm. Nanofiltration is used for the removal of selected dissolved constituents from wastewater. NF is primarily developed as a membrane softening process which offers an alternative to chemical softening. Likewise, nanofiltration can be used as a pre-treatment before reverse osmosis. The main objectives of NF pre-treatment are: (1) to minimize particulate and microbial fouling of the RO membranes by removal of turbidity and bacteria, (2) to prevent scaling by removal of the hardness ions, and (3) to lower the operating pressure of the RO process by reducing the feed-water total dissolved solids (TDS) concentration. Reverse osmosis (RO) Reverse osmosis is commonly used for desalination.
RO is also commonly used for the removal of dissolved constituents from wastewater remaining after advanced treatment with microfiltration. RO excludes ions but requires high pressures to produce deionized water (850–7000 kPa). RO is the most widely used desalination technology because of its simplicity of use and relatively low energy costs compared with distillation, which uses technology based on thermal processes. Note that RO membranes remove water constituents at the ionic level. To do so, most current RO systems use a thin-film composite (TFC), mainly consisting of three layers: a polyamide layer, a polysulphone layer and a polyester layer. Nanostructured membranes An emerging class of membranes relies on nanostructured channels to separate materials at the molecular scale. These include carbon nanotube membranes, graphene membranes, membranes made from polymers of intrinsic microporosity (PIMs), and membranes incorporating metal–organic frameworks (MOFs). These membranes can be used for size-selective separations such as nanofiltration and reverse osmosis, but also for adsorption-selective separations, such as olefins from paraffins and alcohols from water, that traditionally have required expensive and energy-intensive distillation. Membrane configurations In the membrane field, the term module is used to describe a complete unit composed of the membranes, the pressure support structure, the feed inlet, the outlet permeate and retentate streams, and an overall support structure. The principal types of membrane modules are: Tubular, where membranes are placed inside supporting porous tubes, and these tubes are placed together in a cylindrical shell to form the unit module. Tubular devices are primarily used in micro- and ultrafiltration applications because of their ability to handle process streams with high solids and high viscosity properties, as well as for their relative ease of cleaning. Hollow fiber membrane, which consists of a bundle of hundreds to thousands of hollow fibers. The entire assembly is inserted into a pressure vessel. The feed can be applied to the inside of the fiber (inside-out flow) or the outside of the fiber (outside-in flow). Spiral wound, where a flexible permeate spacer is placed between two flat membrane sheets. A flexible feed spacer is added and the flat sheets are rolled into a circular configuration. In recent developments, surface patterning techniques have allowed for the integration of permeable feed spacers directly into the membrane, giving rise to the concept of an integrated membrane. Plate and frame, consisting of a series of flat membrane sheets and support plates. The water to be treated passes between the membranes of two adjacent membrane assemblies. The plate supports the membranes and provides a channel for the permeate to flow out of the unit module. Ceramic and polymeric flat sheet membranes and modules. Flat sheet membranes are typically built into submerged vacuum-driven filtration systems, which consist of stacks of modules, each with several sheets. The filtration mode is outside-in, where the water passes through the membrane and is collected in permeate channels. Cleaning can be performed by aeration, backwash and CIP. Membrane process operation The key elements of any membrane process are the influence of the following parameters on the overall permeate flux: The membrane permeability (k) The operational driving force per unit membrane area (Trans Membrane Pressure, TMP) The fouling and subsequent cleaning of the membrane surface.
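As a hedged illustration (not part of the article) of how these key parameters fit together numerically — their defining relations are spelled out in the subsection that follows — consider the sketch below; all input values and unit choices are assumptions.

```python
# Sketch: evaluating the basic operating relations for an illustrative cross-flow module.
A   = 40.0                        # membrane area, m^2
Fw  = 0.014                       # water flux, kg·m^-2·s^-1
Pf, Pc, Pp = 300.0, 270.0, 50.0   # feed, concentrate, permeate pressures, kPa
Cf, Cp = 500.0, 25.0              # feed and permeate concentrations, mg/L
Qf = 1.2                          # feed flow, kg·s^-1

Qp  = Fw * A                      # permeate flow, kg·s^-1
TMP = (Pf + Pc) / 2 - Pp          # trans-membrane pressure, kPa
k   = Fw / TMP                    # permeability, kg·m^-2·s^-1·kPa^-1
r   = 1 - Cp / Cf                 # rejection
S   = Qp / Qf                     # recovery ("conversion")

print(f"Qp={Qp:.3f} kg/s  TMP={TMP:.1f} kPa  k={k:.2e}  r={r:.2%}  S={S:.2%}")
```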
Flux, pressure, permeability The total permeate flow from a membrane system is given by the following equation: Qp = Fw × A, where Qp is the permeate stream flowrate [kg·s−1], Fw is the water flux rate [kg·m−2·s−1] and A is the membrane area [m2]. The permeability (k) of a membrane is given by: k = Fw / PTMP. The trans-membrane pressure (TMP) is given by the following expression: PTMP = (Pf + Pc)/2 − Pp, where PTMP is the trans-membrane pressure [kPa], Pf the inlet pressure of the feed stream [kPa], Pc the pressure of the concentrate stream [kPa], and Pp the pressure of the permeate stream [kPa]. The rejection (r) could be defined as the fraction of particles that have been removed from the feedwater: r = (Cf − Cp)/Cf, where Cf and Cp are the solute concentrations in the feed and permeate streams. The corresponding mass balance equations are: Qf = Qp + Qc and Qf·Cf = Qp·Cp + Qc·Cc, where Qf and Qc are the feed and concentrate flowrates and Cc is the concentrate concentration. To control the operation of a membrane process, two modes, concerning the flux and the TMP, can be used. These modes are (1) constant TMP and (2) constant flux. The operation modes will be affected when the rejected materials and particles in the retentate tend to accumulate in the membrane. At a given TMP, the flux of water through the membrane will decrease and, at a given flux, the TMP will increase, reducing the permeability (k). This phenomenon is known as fouling, and it is the main limitation to membrane process operation. Dead-end and cross-flow operation modes Two operation modes for membranes can be used. These modes are: Dead-end filtration, where all the feed applied to the membrane passes through it, obtaining a permeate. Since there is no concentrate stream, all the particles are retained in the membrane. Raw feed-water is sometimes used to flush the accumulated material from the membrane surface. Cross-flow filtration, where the feed water is pumped with a cross-flow tangential to the membrane and concentrate and permeate streams are obtained. This model implies that, for a flow of feed-water across the membrane, only a fraction is converted to permeate product. This parameter is termed "conversion" or "recovery" (S). The recovery will be reduced if the permeate is further used for maintaining process operation, usually for membrane cleaning. Filtration leads to an increase in the resistance against the flow. In the case of the dead-end filtration process, the resistance increases according to the thickness of the cake formed on the membrane. As a consequence, the permeability (k) and the flux rapidly decrease, in proportion to the solids concentration, thus requiring periodic cleaning. For cross-flow processes, the deposition of material will continue until the forces binding the cake to the membrane are balanced by the forces of the fluid. At this point, cross-flow filtration will reach a steady-state condition, and thus the flux will remain constant with time. Therefore, this configuration will demand less periodic cleaning. Fouling Fouling can be defined as the potential deposition and accumulation of constituents in the feed stream on the membrane. The loss of RO performance can result from irreversible organic and/or inorganic fouling and chemical degradation of the active membrane layer. Microbiological fouling, generally defined as the consequence of irreversible attachment and growth of bacterial cells on the membrane, is also a common reason for discarding old membranes. A variety of oxidative solutions, cleaning and anti-fouling agents is widely used in desalination plants, and their repetitive and incidental exposure can adversely affect the membranes, generally through the decrease of their rejection efficiencies. Fouling can take place through several physicochemical and biological mechanisms which are related to the increased deposition of solid material onto the membrane surface. The main mechanisms by which fouling can occur are: Build-up of constituents of the feedwater on the membrane, which causes a resistance to flow. This build-up can be divided into different types: Pore narrowing, which consists of solid material that has become attached to the interior surface of the pores. Pore blocking, which occurs when the particles of the feed-water become stuck in the pores of the membrane. Gel/cake layer formation, which takes place when the solid matter in the feed is larger than the pore sizes of the membrane. Formation of chemical precipitates, known as scaling. Colonization of the membrane, or biofouling, which takes place when microorganisms grow on the membrane surface. Fouling control and mitigation Since fouling is an important consideration in the design and operation of membrane systems, as it affects pre-treatment needs, cleaning requirements, operating conditions, cost and performance, it should be prevented and, if necessary, removed. Optimizing the operation conditions is important to prevent fouling. However, if fouling has already taken place, it should be removed by using physical or chemical cleaning. Physical cleaning techniques for membranes include membrane relaxation and membrane backwashing. Back-washing or back-flushing consists of pumping the permeate in the reverse direction through the membrane. Back-washing successfully removes most of the reversible fouling caused by pore blocking. Backwashing can also be enhanced by flushing air through the membrane. Backwashing increases the operating costs since energy is required to achieve a pressure suitable for permeate flow reversion. Membrane relaxation consists of pausing the filtration during a period, and thus there is no need for permeate flow reversion. Relaxation allows filtration to be maintained for a longer period before the chemical cleaning of the membrane. Back pulsing: high-frequency back pulsing results in efficient removal of the dirt layer. This method is most commonly used for ceramic membranes. Recent studies have assessed combining relaxation and backwashing for optimum results. Chemical cleaning. Relaxation and backwashing effectiveness will decrease with operation time as more irreversible fouling accumulates on the membrane surface. Therefore, besides the physical cleaning, chemical cleaning may also be recommended. It includes: Chemical enhanced backwash, that is, a low concentration of chemical cleaning agent is added during the backwashing period. Chemical cleaning, where the main cleaning agents are sodium hypochlorite (for organic fouling) and citric acid (for inorganic fouling). Every membrane supplier proposes their chemical cleaning recipes, which differ mainly in terms of concentration and methods. Optimizing the operation condition. Several mechanisms can be carried out to optimize the operating conditions of the membrane to prevent fouling, for instance: Reducing flux. Reducing the flux always reduces fouling but it impacts on capital cost since it demands more membrane area. It consists of working at a sustainable flux, which can be defined as the flux for which the TMP increases gradually at an acceptable rate, such that chemical cleaning is not necessary. Using cross-flow filtration instead of dead-end.
In cross-flow filtration, only a thin layer is deposited on the membrane since not all the particles are retained on the membrane, but the concentrate removes them. Pre-treatment of the feed water is used to reduce the suspended solids and bacterial content of the feed-water. Flocculants and coagulants are also used, like ferric chloride and aluminium sulphate, that, once dissolved in the water, adsorb materials such as suspended solids, colloids and soluble organics. Numerical models have also been introduced in order to optimize transport phenomena. Membrane alteration. Recent efforts have focused on eliminating membrane fouling by altering the surface chemistry of the membrane material to reduce the likelihood that foulants will adhere to the membrane surface. The exact chemical strategy used is dependent on the chemistry of the solution that is being filtered. For example, membranes used in desalination might be made hydrophobic to resist fouling via accumulation of minerals, while membranes used for biologics might be made hydrophilic to reduce protein/organic accumulation. Modification of surface chemistry via thin film deposition can thereby largely reduce fouling. One drawback to using modification techniques is that, in some cases, the flux rate and selectivity of the membrane process can be negatively impacted. Recycling of RO membranes Waste prevention Once a membrane reaches a significant performance decline, it is discarded. Discarded RO membrane modules are currently classified worldwide as inert solid waste and are often disposed of in landfills, although they can also be energetically recovered. However, various efforts have been made over the past decades to avoid this, such as waste prevention, direct reapplication, and ways of recycling. In this regard, membranes also follow the waste management hierarchy. This means that the most preferable action is to upgrade the design of the membrane, which leads to a reduction in use for the same application, and the least preferred action is disposal and landfilling. RO membranes have some environmental challenges that must be resolved in order to comply with the circular economy principles. Mainly, they have a short service life of 5–10 years. Over the past two decades, the number of RO desalination plants has increased by 70%. The size of these RO plants has also increased significantly, with some reaching a production capacity exceeding 600,000 m3 of water per day. This means a generation of 14,000 tonnes of membrane waste that is landfilled every year. To increase the lifespan of a membrane, different prevention methods have been developed: combining the RO process with the pre-treatment process to improve efficiency; developing anti-fouling techniques; and developing suitable procedures for cleaning the membranes. Pre-treatment processes lower the operating costs because of the smaller amounts of chemical additives in the saltwater feed and the lower operational maintenance required for the RO system. Four types of fouling are found on RO membranes: (i) inorganic (salt precipitation), (ii) organic, (iii) colloidal (particle deposition in suspension), and (iv) microbiological (bacteria and fungi). Thereby, an appropriate combination of pre-treatment procedures and chemical dosing, as well as an efficient cleaning plan that tackles these types of fouling, should enable the development of an effective anti-fouling technique. Most plants clean their membranes every week (CEB – Chemically Enhanced Backwash).
Most plants clean their membranes every week (CEB – Chemically Enhanced Backwash). In addition to this maintenance cleaning, an intensive cleaning (CIP) is recommended two to four times annually. Reuse Reuse of RO membranes includes the direct reapplication of modules in other separation processes with less stringent specifications. The conversion of an RO TFC membrane into a porous membrane is possible by degrading the dense polyamide layer. Converting RO membranes by chemical treatment with different oxidizing solutions aims at removing the active layer of the polyamide membrane so that the module can be reused in applications such as MF or UF. This extends the membrane's life by approximately two years. A very limited number of reports have mentioned the potential of direct RO reuse. In one autopsy investigation, hydraulic permeability, salt rejection, and morphological and topographical characteristics were assessed using field emission scanning electron microscopy and atomic force microscopy. The old RO element's performance resembled that of nanofiltration (NF) membranes, so it was not surprising to see the permeability increase from 1.0 to 2.1 L m-2 h-1 bar-1 and the NaCl rejection drop from >90% to 35-50%. On the other hand, in order to maximize the overall efficiency of the process, it has lately become common practice to combine RO elements of varying performance within the same pressure vessel, which is called multi-membrane vessel design. In principle, this innovative hybrid system recommends using high-rejection, low-productivity membranes in the upstream segment of the filtration train, followed by high-productivity, low-energy membranes in the downstream section. There are two ways in which this design can help: either by decreasing energy use due to decreased pressure needs, or by increasing output. Since this concept would reduce the number of modules and pressure vessels needed for a given application, it has the potential to significantly reduce initial investment costs. It has been proposed to adapt this original concept by internally reusing older RO membranes within the same pressure vessel. Recycle Recycling of materials is a general term that involves physically transforming the material or its components so that they can be regenerated into other useful products. Membrane modules are complex structures consisting of a number of different polymeric components and, potentially, the individual components can be recovered for other purposes. Plastic solid waste treatment and recycling can be separated into mechanical recycling, chemical recycling and energy recovery. Recycling techniques Mechanical recycling characteristics: A first separation of the components of interest is needed. Prior washing to avoid property deterioration during the process. Grinding of the polymeric materials to a suitable size (loss of about 5% of the material). Possible subsequent washing. Melting and extrusion (loss of about 10% of the material). Membrane components that can be recycled (thermoplastics): PP, polyester, etc. Membrane sheets: constructed from a number of different polymers and additives and therefore inherently difficult to separate accurately and efficiently. Main advantage: it displaces virgin plastic production. Main disadvantages: the need to separate all components, and a large enough amount of material for the process to be viable. Chemical recycling characteristics: Breaking down the polymers into smaller molecules using depolymerisation and degradation techniques. Cannot be used with contaminated materials. Chemical recycling processes are tailored for specific materials.
Advantage: heterogeneous polymers can be processed with only limited pre-treatment. Disadvantage: more expensive and complex than mechanical recycling. Polyester materials (such as in the permeate spacer and components of the membrane sheet) are suitable for chemical recycling processes, and hydrolysis is used to reverse the poly-condensation reaction used to make the polymer, with the addition of water to cause decomposition. Energetic recovery characteristics: Volume reduction by 90–99%, reducing the strain on landfill. Waste incinerators can generally operate from 760 °C to 1100 °C and would therefore be capable of removing all combustible material, with the exception of the residual inorganic filler in the fiberglass casing. Heat energy can be recovered and used for electricity generation or other heat-related processes, and can also offset the greenhouse gas emissions from traditional energy. If not properly controlled, incineration can emit greenhouse gases as well as other harmful products. Post-treatment After applying the chosen technique, it is necessary to carry out a post-treatment process to ensure that the membrane can function normally again. The first step in post-treatment involves removing all residual waste from the equipment. This ensures that no contaminants remain that could affect the membrane's performance. Separation techniques are employed to recover valuable materials from reverse osmosis membranes, such as polyamide or polysulfone, which can be recycled and reused in the production of new membranes or other products. During the material recovery stage, physical or chemical separation processes are conducted to isolate and purify these materials, ensuring their quality and facilitating their reintroduction into the production chain. Following waste removal, the membrane is tested in a pilot system. During this phase, its performance is carefully analyzed to determine if the output meets the defined parameters and limits. This step is crucial to verify that the membrane operates efficiently and effectively after treatment. Advantages of RO membrane recycling Implementing a recycling process for RO membranes can incur additional costs, which many companies or organizations may be hesitant to accept. Moreover, recycled membranes often exhibit lower performance and efficiency. However, one significant advantage of recycling is the reduction of the environmental impact associated with producing new membranes from raw materials. RO membranes contain polymers derived from petroleum, a major source of greenhouse gases (GHGs) that contribute to climate change. Additionally, these polymers are not biodegradable, making them challenging to recycle. Recycling RO membranes reduces the need for new materials, thereby lessening the environmental footprint. Producing new membranes from petroleum-derived polymers increases GHG emissions; recycling existing membranes helps mitigate this impact by reusing materials that would otherwise contribute to environmental degradation. The demand for RO membranes has surged due to stricter regulations on wastewater discharge. This demand could potentially surpass supply, making the recycling of current RO membranes a viable solution to address this challenge. The increasing demand for RO membranes has also led to higher prices. In contrast, the recycling process is generally more cost-effective than purchasing new membranes. This cost advantage can help offset the initial investment required for setting up recycling operations.
Applications Distinct features of membranes are responsible for the interest in using them as an additional unit operation in fluid separation processes. Some advantages noted include: Less energy-intensive, since they do not require major phase changes Do not demand adsorbents or solvents, which may be expensive or difficult to handle Equipment simplicity and modularity, which facilitates the incorporation of more efficient membranes Membranes are used with pressure as the driving force in membrane filtration of solutes and in reverse osmosis. In dialysis and pervaporation the chemical potential along a concentration gradient is the driving force. Perstraction, a membrane-assisted extraction process, also relies on a gradient in chemical potential. A submerged flexible mound breakwater, as one type of membrane application, can be employed for wave control in shallow water as an advanced alternative to conventional rigid submerged designs. However, their overwhelming success in biological systems is not matched by their application. The main reasons for this are: Fouling – the decrease of function with use Prohibitive cost per membrane area Lack of solvent resistant materials Scale-up risks See also Collodion bag References Bibliography Metcalf and Eddy. Wastewater Engineering, Treatment and Reuse. McGraw-Hill Book Company, New York. Fourth Edition, 2004. Paula van den Brink, Frank Vergeldt, Henk Van As, Arie Zwijnenburg, Hardy Temmink, Mark C. M. van Loosdrecht. "Potential of mechanical cleaning of membranes from a membrane bioreactor". Journal of Membrane Science, 429, 2013, 259–267. Simon Judd. The Membrane Bioreactor Book: Principles and Applications of Membrane Bioreactors for Water and Wastewater Treatment. Elsevier, 2010. Fouling Water technology Water treatment Membrane technology
Membrane
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
5,144
[ "Separation processes", "Water treatment", "Water pollution", "Membrane technology", "Environmental engineering", "Water technology", "Materials degradation", "Fouling" ]
26,457,064
https://en.wikipedia.org/wiki/Binary%20collision%20approximation
In condensed-matter physics, the binary collision approximation (BCA) is a heuristic used to more efficiently simulate the penetration depth and defect production by energetic ions (with kinetic energies in the kilo-electronvolt (keV) range or higher) in solids. In the method, the ion is approximated to travel through a material by experiencing a sequence of independent binary collisions with sample atoms (nuclei). Between the collisions, the ion is assumed to travel in a straight path, experiencing electronic stopping power, but losing no energy in collisions with nuclei. Simulation approaches In the BCA approach, a single collision between the incoming ion and a target atom (nucleus) is treated by solving the classical scattering integral between two colliding particles for the impact parameter of the incoming ion. Solution of the integral gives the scattering angle of the ion as well as its energy loss to the sample atoms, and hence what the energy is after the collision compared to before it. The scattering integral is defined in the centre-of-mass coordinate system (two particles reduced to one single particle with one interatomic potential) and relates the angle of scatter with the interatomic potential. It is also possible to solve the time integral of the collision to know what time has elapsed during the collision. This is necessary at least when BCA is used in the "full cascade" mode, see below. The energy loss to electrons, i.e. electronic stopping power, can be treated either with impact-parameter dependent electronic stopping models, by subtracting a stopping power dependent on the ion velocity only between the collisions, or by a combination of the two approaches. The selection method for the impact parameter divides BCA codes into two main varieties: "Monte Carlo" BCA and crystal-BCA codes. In the so-called Monte Carlo BCA approach, the distance to and impact parameter of the next colliding atom is chosen randomly from a probability distribution which depends only on the atomic density of the material. This approach essentially simulates ion passage in a fully amorphous material. (Note that some sources call this variety of BCA just Monte Carlo, which is misleading since the name can then be confused with other, completely different Monte Carlo simulation varieties). SRIM and SDTrimSP are Monte Carlo BCA codes. It is also possible (although more difficult) to implement BCA methods for crystalline materials, such that the moving ion has a defined position in a crystal, and the distance and impact parameter to the next colliding atom is determined to correspond to an atom in the crystal. In this approach BCA can also be used to simulate atom motion during channelling. Codes such as MARLOWE operate with this approach. The binary collision approximation can also be extended to simulate dynamic composition changes of a material due to prolonged ion irradiation, i.e. due to ion implantation and sputtering. At low ion energies, the approximation of independent collisions between atoms starts to break down. This issue can be mitigated to some extent by solving the collision integral for multiple simultaneous collisions.
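As an illustration of the Monte Carlo BCA selection step described above, the sketch below samples the free flight length and impact parameter for an amorphous target from its atomic density alone, and subtracts a velocity-dependent (Lindhard–Scharff-like) electronic stopping term between collisions. It is a minimal sketch, not the algorithm of any particular code: the target density, the stopping prefactor, the constant mean free path of one atomic spacing and the nuclear energy-loss stub are all simplifying assumptions, and a real code would evaluate the scattering integral where the stub stands.

```python
import math
import random

# Illustrative, assumed values (not from any particular BCA code or material table).
N_DENSITY = 6.0e28        # atomic density of the target, atoms/m^3 (assumed)
K_ELECTRONIC = 2.0e8      # velocity-proportional stopping prefactor, eV/(m*sqrt(eV)) (assumed)

def mean_free_path(n_density):
    """Average distance between collisions, taken as one atomic spacing, n^(-1/3)."""
    return n_density ** (-1.0 / 3.0)

def sample_impact_parameter(n_density):
    """Sample an impact parameter uniformly over the cross-sectional area associated
    with one target atom per free flight (pi*p_max^2 * mfp * n = 1)."""
    p_max = n_density ** (-1.0 / 3.0) / math.sqrt(math.pi)
    return p_max * math.sqrt(random.random())

def electronic_loss(energy_ev, path_length):
    """Continuous electronic energy loss along the free flight (assumed model)."""
    return K_ELECTRONIC * math.sqrt(energy_ev) * path_length

def nuclear_loss(energy_ev, impact_parameter):
    """Stub for the scattering-integral solution; a real BCA code solves the classical
    scattering integral for the chosen interatomic potential here (placeholder only)."""
    return 0.01 * energy_ev * math.exp(-impact_parameter / 1.0e-10)

energy = 10_000.0   # 10 keV ion (assumed)
depth = 0.0
while energy > 10.0:                        # follow the ion until it is nearly stopped
    step = mean_free_path(N_DENSITY)
    p = sample_impact_parameter(N_DENSITY)
    energy -= electronic_loss(energy, step)  # continuous loss between collisions
    energy -= nuclear_loss(energy, p)        # discrete loss in the binary collision
    depth += step                            # straight flight between collisions
print(f"estimated penetration depth: {depth * 1e9:.1f} nm")
```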
However, at very low energies (below ~1 keV) the BCA approximation always breaks down, and one should use molecular dynamics ion irradiation simulation approaches because these can, per design, handle many-body collisions of arbitrarily many atoms. The MD simulations can either follow only the incoming ion (recoil interaction approximation or RIA) or simulate all atoms involved in a collision cascade. BCA collision cascade simulations The BCA simulations can be further subdivided by type depending on whether they only follow the incoming ion, or also follow the recoils produced by the ion (full cascade mode, e.g., in the popular BCA code SRIM). If the code does not account for secondary collisions (recoils), the number of defects is then calculated using the Robinson extension of the Kinchin-Pease model. If the initial recoil/ion mass is low, and the material where the cascade occurs has a low density (i.e. the recoil-material combination has a low stopping power), the collisions between the initial recoil and sample atoms occur rarely, and can be understood well as a sequence of independent binary collisions between atoms. This kind of cascade can be theoretically well treated using BCA. Damage production estimates The BCA simulations naturally give the ion penetration depth, lateral spread and nuclear and electronic deposition energy distributions in space. They can also be used to estimate the damage produced in materials, by using the assumption that any recoil which receives an energy higher than the threshold displacement energy of the material will produce a stable defect. However, this approach should be used with great caution for several reasons. For instance, it does not account for any thermally activated recombination of damage, nor the well known fact that in metals the damage production at high energies is only something like 20% of the Kinchin-Pease prediction. Moreover, this approach only predicts the damage production as if all defects were isolated Frenkel pairs, while in reality in many cases collision cascades produce defect clusters or even dislocations as the initial damage state. BCA codes can, however, be extended with damage clustering and recombination models that improve their reliability in this respect. Finally, the average threshold displacement energy is not very accurately known in most materials. BCA codes SRIM offers a graphical user interface and is likely the most used BCA code now. It can be used to simulate linear collision cascades in amorphous materials for all ions in all materials up to ion energies of 1 GeV. Note, however, that SRIM does not treat effects such as channelling, damage due to electronic energy deposition (necessary, e.g., to describe swift heavy ion damage in materials) or damage produced by excited electrons. The calculated sputter yields may be less accurate than those from other codes. MARLOWE is a large code that can handle crystalline materials and support numerous different physics models. TRIDYN, newer versions known as SDTrimSP, is a BCA code capable of handling dynamic composition changes. DART is a French code developed by the CEA (Commissariat à l'Énergie Atomique) in Saclay. It differs from SRIM in its electronic stopping power and its analytical resolution of the scattering integral (the amount of defects produced is determined from the elastic cross sections and the atomic concentrations of atoms). The nuclear stopping power comes from the universal interatomic potential (ZBL potential), while the electronic stopping power is derived from Bethe's equation for protons and Lindhard-Scharff for ions. See also Collision cascade Molecular dynamics COSIRES conference References External links Condensed matter physics Nuclear physics Computational physics
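The damage estimates mentioned above are usually expressed through the Robinson (NRT) modification of the Kinchin–Pease relation, which counts stable Frenkel pairs from the damage energy and the threshold displacement energy. The snippet below is a minimal sketch of that standard relation only; the 40 eV threshold and 8 keV damage energy are illustrative assumptions, and the ~20% efficiency quoted for metals in the text is applied as an optional, separate correction factor.

```python
def nrt_defects(damage_energy_ev, e_d_ev=40.0, efficiency=1.0):
    """Stable Frenkel pairs from the Robinson / modified Kinchin-Pease (NRT) relation.
    damage_energy_ev: energy deposited in nuclear collisions (electronic losses removed).
    e_d_ev: assumed average threshold displacement energy of the material.
    efficiency: optional rescaling (e.g. ~0.2 for metals at high energies, as noted above)."""
    if damage_energy_ev < e_d_ev:
        return 0.0                           # not enough energy to displace any atom
    if damage_energy_ev < 2.0 * e_d_ev / 0.8:
        return 1.0                           # exactly one stable displacement
    return efficiency * 0.8 * damage_energy_ev / (2.0 * e_d_ev)

# Example: 8 keV of damage energy with a 40 eV threshold (assumed values)
print(nrt_defects(8000.0))                   # NRT prediction: 0.8*8000/(2*40) = 80 defects
print(nrt_defects(8000.0, efficiency=0.2))   # with the ~20% efficiency observed in metals: 16
```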
Binary collision approximation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,383
[ "Phases of matter", "Materials science", "Computational physics", "Condensed matter physics", "Nuclear physics", "Matter" ]
26,459,697
https://en.wikipedia.org/wiki/Light%20valve
A light valve (LV) is a device for varying the quantity of light from a source that reaches a target. Examples of targets are computer screen surfaces, or a wall screen in the case of a light projector. There are two basic principles for achieving this. One is by deflecting the light on its way to the target (a reflective LV). The other method is to block the light (a transmissive LV). The blocking method has found its way into liquid crystal displays (LCDs), video projectors and rear projection TVs. In these types of screens and projectors, the source light is first polarised by a filter in one direction and then passed on to another filter, filled with liquid crystals. By changing the voltage applied to this crystal filter, it works as a switchable polarising filter, producing different gray scales in the light coming out. The light is changed only once for each image frame. The light valve thus consists of the two polarising filters, where one has a voltage-controlled switch function thanks to the properties of the liquid crystals. This type of valve is often referred to as a liquid crystal light valve. The other principle, the reflective LV, works by either reflecting the light towards the target or deflecting it away. The portion of light that is reflected onto the target determines the gray scale. This reflection and deflection occurs many times a second. Should this happen at too low a frequency, the human eye and brain would perceive it as flickering, but at a sufficiently high frequency a human is "tricked" into viewing it as a continuum, a smooth shift in brightness. Examples of the reflective LV type are the digital micromirror device (DMD), Eidophor's oil-film based system, and the grating light valve. See also Femtosecond pulse shaping Multiphoton intrapulse interference phase scan Spatial light modulator References Display technology Optics
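The reflective principle described above sets the perceived gray level by the fraction of each frame during which the light is steered toward the target rather than away from it. The sketch below is a minimal, assumed illustration of that duty-cycle (binary pulse-width) idea for a DMD-style mirror; the frame rate, bit depth and the purely linear brightness model are simplifying assumptions, not a description of any specific device.

```python
# Gray scale from a binary (on/off) reflective light valve via pulse-width modulation.
# All parameters are illustrative assumptions.

FRAME_TIME_S = 1.0 / 60.0   # one image frame at an assumed 60 Hz refresh
BIT_DEPTH = 8               # 256 gray levels

def mirror_schedule(gray_level):
    """Return (time toward target, time deflected away) within one frame for a requested
    gray level, assuming perceived brightness is proportional to on-time."""
    duty = gray_level / (2 ** BIT_DEPTH - 1)
    on_time = duty * FRAME_TIME_S
    return on_time, FRAME_TIME_S - on_time

for level in (0, 64, 128, 255):
    on_t, off_t = mirror_schedule(level)
    print(f"gray {level:3d}: {on_t*1e3:.2f} ms toward target, {off_t*1e3:.2f} ms deflected away")
```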
Light valve
[ "Physics", "Chemistry", "Engineering" ]
405
[ "Applied and interdisciplinary physics", "Optics", "Electronic engineering", " molecular", "Display technology", "Atomic", " and optical physics" ]
25,061,776
https://en.wikipedia.org/wiki/Simon%E2%80%93Glatzel%20equation
The Simon–Glatzel equation is an empirical correlation describing the pressure dependence of the melting temperature of a solid. The pressure dependence of the melting temperature is small for small pressure changes because the volume change during fusion or melting is rather small. However, at very high pressures higher melting temperatures are generally observed, as the liquid usually occupies a larger volume than the solid, making melting more thermodynamically unfavorable at elevated pressure. If the liquid has a smaller volume than the solid (as for ice and liquid water) a higher pressure leads to a lower melting point. The equation and its variations The equation is commonly written as T_m = T_ref (1 + (P - P_ref)/a)^(1/b). T_ref and P_ref are normally the temperature and the pressure of the triple point, but the normal melting temperature at atmospheric pressure is also commonly used as a reference point because the normal melting point is much more easily accessible. Typically P_ref is then set to 0. a and b are component-specific parameters. The Simon–Glatzel equation can be viewed as a combination of the Murnaghan equation of state and the Lindemann law, and an alternative form was proposed by J. J. Gilvarry (1956), in which the coefficients are expressed in terms of the bulk modulus at the reference pressure, its pressure derivative at the reference pressure, the Grüneisen ratio, and the coefficient in the Morse potential. Example parameters For methanol, component-specific parameters a and b can be fitted, with the reference temperature Tref = 174.61 K and the reference pressure Pref set to 0 kPa. Methanol is a component for which the Simon–Glatzel equation works well in the given validity range. Extensions and generalizations The Simon–Glatzel equation is a monotonically increasing function. It can only describe melting curves that rise indefinitely with increasing pressure. It may fail to describe melting curves with a negative pressure dependence or local maxima. A damping term exp(-c (P - P_ref)) that asymptotically slopes down under pressure (c is another component-specific parameter) was introduced by Vladimir V. Kechin to extend the Simon–Glatzel equation so that all melting curves (rising, falling, and flattening, as well as curves with a maximum) can be described by a unified equation: T_m = T_ref (1 + (P - P_ref)/a)^(1/b) exp(-c (P - P_ref)), where the first factor is the Simon–Glatzel form (rising) and the exponential is the damping term (falling or flattening). This form predicts that all solids have a maximum melting temperature at a positive or (fictitious) negative pressure. References Phase transitions Equations Thermodynamics
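A short numerical sketch of the reconstructed melting-curve forms above is given below. The values chosen for a, b and c are purely illustrative assumptions (the fitted methanol coefficients referred to in the text are not reproduced here), so the output only demonstrates the qualitative behaviour: the Simon–Glatzel branch rises monotonically, while the Kechin damping factor bends the curve over to a maximum.

```python
import math

T_REF = 174.61       # reference melting temperature, K (methanol value quoted in the text)
P_REF = 0.0          # reference pressure, MPa

# Component-specific coefficients: purely illustrative assumptions, not fitted values.
A = 500.0            # pressure scale a, MPa
B = 4.0              # dimensionless exponent parameter b
C = 2.0e-5           # Kechin damping coefficient c, 1/MPa

def simon_glatzel(p_mpa):
    """Monotonically rising melting curve T_m(P)."""
    return T_REF * (1.0 + (p_mpa - P_REF) / A) ** (1.0 / B)

def kechin(p_mpa):
    """Simon-Glatzel branch multiplied by the damping term exp(-c*(P - P_ref))."""
    return simon_glatzel(p_mpa) * math.exp(-C * (p_mpa - P_REF))

for p in (0.0, 1000.0, 5000.0, 12000.0, 20000.0, 40000.0):
    print(f"P = {p:8.0f} MPa   T_SG = {simon_glatzel(p):6.1f} K   T_Kechin = {kechin(p):6.1f} K")
```

With these assumed coefficients the Kechin curve peaks near P = 1/(b*c) - a, after which the damping term dominates and the melting temperature falls again.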
Simon–Glatzel equation
[ "Physics", "Chemistry", "Mathematics" ]
496
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Mathematical objects", "Equations", "Thermodynamics", "Statistical mechanics", "Matter", "Dynamical systems" ]
25,066,275
https://en.wikipedia.org/wiki/ATF/CREB
The ATF/CREB family is a group of transcription factors consisting of different ATFs (Activating transcription factors), CREB (cAMP response element binding protein), CREM (cAMP response element modulator) and related proteins. Among the transcription factors assigned to this group, some are more related to CREB-like factors, whereas others exhibit closer similarity to the AP-1 transcription factor components c-Jun or c-Fos. Common features are a basic leucine zipper type of DNA-binding domain and binding as a dimer to DNA recognition sequences such as 5'-TGACGTCA-3' or 5'-TGA(C/G)TCA-3'. References Transcription factors
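To make the recognition sequences concrete, the snippet below scans a DNA string for the two motifs mentioned above, treating 5'-TGA(C/G)TCA-3' as a degenerate site with either C or G at the fourth position. The example sequence is invented purely for illustration and is not from any real promoter.

```python
import re

# Recognition sequences mentioned above: the palindromic full site and the
# degenerate (C/G) variant, both written 5'->3'.
CRE_FULL = re.compile("TGACGTCA")
CRE_DEGENERATE = re.compile("TGA[CG]TCA")

def find_sites(dna):
    """Return (start, matched sequence) for every matching site on the given strand."""
    dna = dna.upper()
    hits = [(m.start(), m.group()) for m in CRE_FULL.finditer(dna)]
    hits += [(m.start(), m.group()) for m in CRE_DEGENERATE.finditer(dna)]
    return sorted(set(hits))

# Invented example fragment, not a real sequence.
example = "ccataTGACGTCAggttgaTGAGTCAcctga"
for position, site in find_sites(example):
    print(f"site {site} at position {position}")
```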
ATF/CREB
[ "Chemistry", "Biology" ]
153
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
25,066,324
https://en.wikipedia.org/wiki/Design%20for%20All%20%28in%20ICT%29
Design for All in the context of information and communications technology (ICT) is the conscious and systematic effort to proactively apply principles, methods and tools to promote universal design in computer-related technologies, including Internet-based technologies, thus avoiding the need for a posteriori adaptations, or specialised design. Design for All is design for human diversity (such as that described in the diversity in the workplace or business), social inclusion and equality. It should not be conceived of as an effort to advance a single solution for everybody, but as a user-centred approach to providing products that can automatically address the possible range of human abilities, skills, requirements, and preferences. Consequently, the outcome of the design process is not intended to be a singular design, but a design space populated with appropriate alternatives, together with the rationale underlying each alternative, that is, the specific user and usage context characteristics for which each alternative has been designed. Traditionally, accessibility problems have been solved with adaptations and the use of assistive technology products has been a technical approach to obtain adaptations. Universal Access implies the accessibility and usability of information and telecommunications technologies by anyone at any place and at any time and their inclusion in any living context. It aims to enable equitable access and active participation of potentially all people in existing and emerging computer-mediated human activities, by developing universally accessible and usable products and services and suitable support functionalities in the environment. These products and services must be capable of accommodating individual user requirements in different contexts of use, independent of location, target machine, or runtime environment. Therefore, the approach aiming to grant the use of equipment or services is generalized, seeking to give access to the Information Society as such. Citizens are supposed to live in environments populated with intelligent objects, where the tasks to be performed and the way of performing them are completely redefined, involving a combination of activities of access to information, interpersonal communication, and environmental control. Citizens must be given the possibility of carrying them out easily and pleasantly. For a thorough discussion of the challenges and benefits of Design for All in the context of ICT, see also the EDeAN White Paper (2005) and the "Report on the impact of technological developments on eAccessibility" of the DfA@eInclusion project. Benefits and challenges The European Commission Communication on e-Accessibility, identified a core of practical challenges, as well as market, legal and policy issues towards improving eAccessibility and e-Inclusion in Europe, and elaborated a three-fold approach based on: accessibility requirements in public procurement accessibility certification and better use of existing legislation. In that respect, the challenges that need to be addressed include: the introduction of specific legislative measures to complement and enhance existing legislation, addressing and motivating the industry, effective benchmarking, providing harmonised standardisation, the creation of a curriculum for DfA and, addressing future research activities. Legislative and regulative background The present policy context of accessibility in the Information Society in Europe is the i2010 initiative. 
The "i2010 – A European Information Society for growth and employment" initiative was launched by the European Commission as a framework for addressing the main challenges and developments in the information society and media sectors up to 2010. It promotes an open and competitive digital economy and emphasises ICT as a driver of inclusion and quality of life. The initiative contains a range of EU policy instruments to encourage the development of the digital economy, such as regulatory instruments, research and partnerships with stakeholders. Equality and non-discrimination The goal of the European Union Disability Strategy is a society that is open and accessible to all. The barriers need to be identified and removed. The European Union Disability Strategy has three main focuses: co-operation between the Commission and the Member States, full participation of people with disabilities, and mainstreaming disability in policy formulation. Non-discrimination is also one of the general principles of the "Convention on the Rights of Persons with Disabilities", adopted by the United Nations General Assembly on 13 December 2006 and was opened for signatures on 30 March 2007. Telecommunications and information society There is a long tradition of European legislation with regard to telecommunications. In 2002, the European Union adopted a new regulatory framework for electronic communications networks and services, covering all forms of fixed and wireless telecoms, data transmission and broadcasting. From a Design for All perspective, the most important Directives are the Directive on a common regulatory framework and the Directive on universal service and users' rights relating to electronic communications networks and services (Universal Service Directive). Public procurement Public procurement is an important economic force, and therefore it is an important tool to promote accessibility. The legislative package of public procurement Directives, approved in 2004 by the European Parliament and the EU's Council of Ministers, will help simplify and modernize procurement procedures. The new directives make it possible to take accessibility needs into account at several stages of a procurement process. It is most convenient to refer to standards when making technical specifications. There are already many CEN, ETSI and ITU standards which can be used for this purpose and many sources which can be useful in practice. Likewise, guidelines like the WAI guidelines, for example, or national guidelines have been used. In the future it will be easier to find suitable standards. Mandate M/376 has been given by the European Commission to the European Standardisation Organisations CEN, CENELEC and ETSI, to come up with a solution for common requirements and conformance assessment. Copyright Not all products are accessible for persons with disabilities. When producing audio books, or certain other accessible works, an additional copy is created, and copyright can be a problem in this situation. On the other hand, copyright is an essential part of the sustainability of a creative society. This conflict of interests must be solved somehow in order to ensure the Information Society is a Society for All. There is international and European legislation in this field. 
The objectives of the Directive on the harmonisation of certain aspects of copyright and related rights in the information society are to adapt legislation on copyright and related rights to reflect technological developments and to transpose into Community law the main international obligations arising from the two treaties on copyright and related rights adopted within the framework of the World Intellectual Property Organisation (WIPO) in December 1996. Protection of privacy The relationship between design and privacy is not necessarily obvious. Modern technology, which is a result of design, is able to collect significant amounts of personal information. The user has an interest in that information being correct and in it being used appropriately. The person may want to keep something confidential and have access to the information that has been collected. In other words, privacy is desired. In 1995 the European Union adopted a Directive on the processing of personal data. This directive established the basic principles for the collection, storage and use of personal data which should be respected by governments, businesses and any other organizations or individuals engaged in handling personal data. Within the context of Design for All (in ICT), privacy protection is called Privacy by Design. Relevant guidelines and standards In the US, Australia, Japan and in the European Union more and more legislative actions are put in place to require public bodies and companies to make sure that their products and services are accessible and usable not only by "standard" users but also by others such as elderly persons or people with an impairment. As it would be unwise to write down technical – and therefore time-bound – requirements into a law, legislative texts preferably refer to (international) standards. Standardisation: general overview Standardisation, i.e., in very general terms, producing a "standard" (French: ; German: ; Spanish: ) is a voluntary action set up in the past, almost uniquely, by commercial partners who believe that the standardisation will permit easier exchanges of products and goods. This implied very often that the acceptance of the standards is also voluntary and triggered by expected commercial benefits. Only to a very limited extent consumer representatives did participate in standardisation. On the other hand, laws in many countries are referring more and more to the required acceptance of several standards (e.g. on safety or on ecological aspects). The net result of this need for standards is that nowadays many standardisation initiatives are stimulated (= subsidised) by public bodies or, in Europe, directly and indirectly by the European Commission. Also many guidelines have been created by stakeholder groups. Recent developments in DfA related standardisation (formal standards) As DfA standardisation was explicitly mentioned in the eEurope2002 and i2010 Action Plans of the European Union, several new actions were established since then. Four major recent strategies can be distinguished: the set up of coordinating working groups and organisations; the democratisation of the standardisation processes themselves; the increasing impact of non-formal standardisation bodies and; the establishment of standardisation related discussion fora open for non-specialists. DfA in ICT related standards ETSI EG 202 116 V1.2.2 (2009-03) ETSI Guide Human Factors (HF); Guidelines for ICT products and services; "Design for All". 
Web Content Accessibility Guidelines 2.0 The Web Content Accessibility Guidelines (WCAG) 2.0 is a technical standard that covers a wide range of recommendations for making Web content more accessible. Following these guidelines will make content accessible to a wider range of people with disabilities, including blindness and low vision, deafness and hearing loss, learning disabilities, cognitive limitations, limited movement, speech disabilities, photosensitivity and combinations of these. Following these guidelines will also often make your Web content more usable to users in general. BS 8878:2010 Web accessibility – Code of Practice BS 8878:2010 Web accessibility – Code of Practice provides guidance on how to embed accessibility concerns into organisation's policies and digital production processes. The Standard provides non-technical website owners a better understanding of the value of inclusive design, and a framework for how to use guidelines like WCAG 2.0 to help them create products which are Designed for All. The Standard's lead-author, Jonathan Hassell, has created a summary of BS 8878 to help organisations better understand how the standard can help them. Application domains The application domains of Design for All in the context of ICT, practically include every field involving Information and Communication Technologies. The significance of the application domains reflects their role in establishing a coherent and socially acceptable Information Society, but also the diverse range of human activities affected. The critical application domains for Design for All, can be summarised as follows: Life-long learning Public information systems, terminals and information appliances (e.g. kiosks, smart home environments) Transaction services (e.g., banking) Electronic commerce applications and services Social services for the citizens (e.g., administration, elderly, transport, health care, awareness) Tools to allow for added-value information services (e.g., creation, storage, retrieval and exchange of user experiences, traces and views) Security The White Paper "Toward an Information Society for All: An International R&D Agenda" (1998) published by the International Scientific Forum "Towards an Information Society for All" (ISF-IS4ALL), has discussed the significance of these application domains: Education and training One major lever to improve awareness and practice in Design for All is the development of education and training programs. Professionals are needed who have acquired comprehensive specialist knowledge and skills in Design for All; in addition those professionals who currently work in ICT industry need to acquire additional knowledge and skills concerning Design for All. Little evidence can be found of university degree programmes that specialize in Design for All (or Universal Design) or that explicitly includes a module about this. This lack was tackled in the project DfA@eInclusion, which devised curricula: A bachelor level introductory course which aims to enable students to have an understanding of the ethical and social issues of Design for All, and the role of Design for All as an enabler of accessibility and participation in the information society A masters level programme which aims to enable students to have the relevant knowledge, personal and professional skills & competencies to design, develop, implement, evaluate and manage a wide range of ICT systems products and services that adhere to the principles and practices of Design for All. 
The implementation of such programmes is already under way in a few places, for example at Oslo and Akershus University College of Applied Sciences, the Middlesex University, UK, University of Linz, Austria and the University of Trás-os-Montes e Alto Douro, Portugal. Core topics include an understanding of the principles of human rights, the development of standards, regulations and legislation, the design and development of assistive technologies as well as improved access of mainstream products and services. Web accessibility is an important component of accessing the information society and information and guidance is offered by the World Wide Web Consortium's Web Accessibility Initiative (WAI) as well as online tutorials (for example, Opera's Web Standards Curriculum). The complementary approach of training for professionals in ICT industry has also been tackled by the DfA@eInclusion project. A comprehensive curriculum for such trainings has been recommended and is currently subject to a CEN workshop negotiation. The CEN workshop "Curriculum for training professionals in Universal Design (UD-Prof)" has been implemented in May 2009. Following the general rules for CEN workshops, it offers all interested stakeholders an opportunity to discuss and improve this DfA curriculum for ICT professionals. Examples of good practice Opera (web browser) was designed with the commitment to be used by as many people as possible thus following a Design for All approach. Audiobooks are good examples for Design for All because they enable people to read a book. Virtually anyone who does not have a hearing disability can use audiobooks for leisure, learning, and information. e-Government uses information and communication (ICT) technology to provide and improve government services, transactions and interactions with citizens, businesses, and other arms of government. Elevators provide an alternative way to reach different floor levels. Modern accessible elevators use information and communication technology to adapt themselves to any user imaginable. The closing speed of the doors is adjustable so people can safely enter quickly or slowly as required. Controls of the elevator provide visual and audible feedback to the user so that people with different sensory abilities can operate the elevator without assistance. Blind people profit from tactile keys. Braille labeling is located besides the keys so that they are not accidentally pushed while reading them. The emergency intercom system operates aurally and visually. Wireless tagging (e.g. RFID), facial recognition, remote controls further enhance the capabilities of a modern elevator which can be used by almost anyone. The Inclusive Design Toolkit presents examples of how Design for All principles can be implemented. Other examples of Design for All in ICT are presented in EDeAN's Education and Training Resource. Related networks and projects European Design for all eAccessibility Network The European Design for All e-Accessibility Network – EDeAN is a network of 160 organisations in European Union member states. The goal of the network is to support all citizens' access to the Information Society. 
EDeAN provides: a European forum for Design for All issues, supporting EU's e-inclusion goals; awareness raising in the public and private sectors; and online resources on Design for All. The network is coordinated by the EDeAN Secretariat, which rotates annually, and by the corresponding National Contact Centres, which are the contact points for EDeAN in each EU member state. Design for All Europe EIDD – Design for All Europe is a 100% self-financed European organisation that covers the entire area of theory and practice of Design for All, from the built environment and tangible products to communication, service and system design. Originally set up in 1993 as the European Institute for Design and Disability (EIDD), to enhance the quality of life through Design for All, it changed its name in 2006 to bring it into line with its core business. EIDD – Design for All Europe disseminates the application of Design for All to business and administration communities previously unaware of its benefits and currently (2009) has active member organisations in 22 European countries. The aim of EIDD is to encourage active interaction and communication between professionals interested in the theory and practice of Design for All and to build bridges between, on the one hand, these and other members of the design community and, on the other hand, all those other communities where Design for All can make a real difference to the quality of life for everyone. Examples of EU-funded research projects addressing ICT and inclusion Design for all for e-Inclusion This is a support project to EDeAN. The project aims to develop an exemplary training course for Design for All targeted at industry, course structures and curricula for studying Design for All at undergraduate and postgraduate levels, as well as an online knowledge base on Design for All. DIADEM: Delivering Inclusive Access for Disabled or Elderly Members of the Community The project aims to develop an adaptable web browser interface for people with reduced cognitive skills, which can be used at home and at work. I2Home: Intuitive interaction for everyone with home appliances based on industry standards The project seeks to develop a universal remote console that will allow networked access to everyday appliances in the home. SHARE-IT: Supported Human Autonomy for Recovery and Enhancement of cognitive and motor abilities using Information Technologies This project is developing scalable and adaptive 'add-ons' which will allow assistive technologies to be integrated into intelligent ICTs for the home. HaH: Hearing at Home This project is looking at the next generation of assistive devices which will help hearing-impaired people to participate fully in the Information Society. CogKnow: Helping people with mild dementia navigate their day CogKnow aims to develop and prototype a cognitive prosthetic device to help those struggling with dementia to perform their daily activities. MonAmi: Mainstreaming Ambient Intelligence The project seeks to mainstream the accessibility of consumer goods and services. The aim is to develop technology platforms that allow elderly and disabled people to continue living in their own homes and stay in their communities. USEM: User Empowerment in Standardisation The project aims to train end-users in standardisation-related issues and to enable them to participate in standardisation activities in the area of ICT.
VAALID: Accessibility and Usability Validation Framework for AAL Interaction Design Process The project aims at creating modeling and simulation support tools to optimize user interaction design and the accessibility and usability validation process when developing Ambient Assisted Living solutions. PERSONA: Perceptive Spaces promoting Independent Aging The project aims to further develop Ambient Assisted Living products and services that are affordable, easy to use and commercially viable. The project develops an integrated technological platform that seamlessly links up the different products and services for social inclusion, for support in daily life activities, for early risk detection, for personal protection from health and environmental risks, and for support in mobility and displacement within the user's neighbourhood or town, all of which help people live freely within their families and within society. See also Design for All (design philosophy) Universal Design Computer accessibility Accessibility Knowbility References External links Website of the EU-funded Project "DfA@eInclusion" Website of the European Design for All e-Accessibility Network (EDeAN) Website of EIDD – Design for All Europe European Commission – Information Society Portal, Design for All Computing and society Information technology Accessibility
Design for All (in ICT)
[ "Technology", "Engineering" ]
3,938
[ "Information and communications technology", "Information technology", "Accessibility", "Computing and society", "Design" ]
25,066,611
https://en.wikipedia.org/wiki/Nos%C3%A9%E2%80%93Hoover%20thermostat
The Nosé–Hoover thermostat is a deterministic algorithm for constant-temperature molecular dynamics simulations. It was originally developed by Shuichi Nosé and was improved further by William G. Hoover. Although the heat bath of the Nosé–Hoover thermostat consists of only one imaginary particle, simulation systems achieve a realistic constant-temperature condition (canonical ensemble). Therefore, the Nosé–Hoover thermostat has been commonly used as one of the most accurate and efficient methods for constant-temperature molecular dynamics simulations. Introduction In classical molecular dynamics, simulations are done in the microcanonical ensemble; the number of particles, the volume, and the energy have constant values. In experiments, however, the temperature is generally controlled instead of the energy. The ensemble of this experimental condition is called a canonical ensemble. Importantly, the canonical ensemble is different from the microcanonical ensemble from the viewpoint of statistical mechanics. Several methods have been introduced to keep the temperature constant while using the microcanonical ensemble. Popular techniques to control temperature include velocity rescaling, the Andersen thermostat, the Nosé–Hoover thermostat, Nosé–Hoover chains, the Berendsen thermostat and Langevin dynamics. The central idea is to simulate in such a way that we obtain a canonical ensemble, where we fix the particle number N, the volume V and the temperature T. This means that these three quantities are fixed and do not fluctuate. The temperature of the system is connected to the average kinetic energy via the equipartition relation, which for N particles in three dimensions reads <E_kin> = (3/2) N k_B T, with k_B the Boltzmann constant. Although the temperature and the average kinetic energy are fixed, the instantaneous kinetic energy fluctuates (and with it the velocities of the particles). Description In the approach of Nosé, a Hamiltonian with an extra degree of freedom s for the heat bath is introduced: H = sum_i P_i^2/(2 m_i s^2) + U(R) + p_s^2/(2Q) + g k_B T ln s, where g is the number of independent momentum degrees of freedom of the system, R and P represent all coordinates and momenta, p_s is the momentum conjugate to s, and Q is a parameter which determines the timescale on which the rescaling occurs. Improper choice of Q can lead to ineffective thermostatting or the introduction of nonphysical temperature oscillations. The coordinates R, P and t in this Hamiltonian are virtual. They are related to the real coordinates by R' = R, P' = P/s and dt' = dt/s, where the coordinates with an accent are the real coordinates. With the proper choice of g, the ensemble average of the above Hamiltonian is equal to the canonical ensemble average. Hoover (1985) used the phase-space continuity equation, a generalized Liouville equation, to establish what is now known as the Nosé–Hoover thermostat. This approach does not require the scaling of the time (or, in effect, of the momentum) by s. The Nosé–Hoover algorithm is nonergodic for a single harmonic oscillator. In simple terms, this means that the algorithm fails to generate a canonical distribution for a single harmonic oscillator. This feature of the Nosé–Hoover algorithm has prompted the development of newer thermostatting algorithms, such as the kinetic moments method that controls the first two moments of the kinetic energy, the Bauer–Bulgac–Kusnezov scheme, Nosé–Hoover chains, etc. Using a similar method, other techniques like the Braga–Travis configurational thermostat and the Patra–Bhattacharya full phase thermostat have been proposed. References Literature External links Berendsen and Nosé-Hoover thermostats A simple (c++) implementation of the Nosé-Hoover chains thermostat Molecular dynamics
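The article text above does not spell out the Hoover form of the equations of motion, so the sketch below uses the commonly quoted real-variable formulation as an assumed reference: dr/dt = p/m, dp/dt = F - zeta*p, dzeta/dt = (sum p^2/m - g k_B T)/Q. It integrates a single harmonic oscillator with a simple explicit update purely to illustrate how the friction variable zeta steers the kinetic energy toward g k_B T / 2; the oscillator parameters, the target temperature, Q and the crude integrator are all illustrative choices (and, as noted above, a single harmonic oscillator is precisely the case where the method is nonergodic).

```python
# Illustrative Nose-Hoover thermostatting of one harmonic oscillator, reduced units (k_B = 1).
# All parameter values are assumptions for demonstration, not taken from the article.

MASS, SPRING_K = 1.0, 1.0     # harmonic oscillator
T_TARGET = 1.0                # target temperature
Q = 2.0                       # thermostat "mass" (sets the coupling timescale)
G = 1.0                       # number of momentum degrees of freedom (1D oscillator)
DT = 0.005

x, p, zeta = 1.0, 0.0, 0.0    # position, momentum, thermostat friction variable

kinetic_sum, steps = 0.0, 200_000
for step in range(steps):
    force = -SPRING_K * x
    # Nose-Hoover equations of motion in real variables, simple explicit update:
    #   dp/dt = F - zeta*p,  dx/dt = p/m,  dzeta/dt = (p^2/m - g*k_B*T)/Q
    p += DT * (force - zeta * p)
    x += DT * p / MASS
    zeta += DT * (p * p / MASS - G * T_TARGET) / Q
    kinetic_sum += p * p / (2.0 * MASS)

print(f"long-time average kinetic energy: {kinetic_sum / steps:.3f}")
print(f"equipartition target g*k_B*T/2:   {G * T_TARGET / 2.0:.3f}")
```

Under these assumptions the long-time kinetic-energy average approaches the equipartition target, which is the feedback behaviour the extra degree of freedom is designed to provide.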
Nosé–Hoover thermostat
[ "Physics", "Chemistry" ]
713
[ "Molecular dynamics", "Computational chemistry", "Molecular physics", "Computational physics" ]
50,590,211
https://en.wikipedia.org/wiki/Bioremediation%20of%20radioactive%20waste
Bioremediation of radioactive waste or bioremediation of radionuclides is an application of bioremediation based on the use of biological agents (bacteria, plants and fungi, natural or genetically modified) to catalyze chemical reactions that allow the decontamination of sites affected by radionuclides. These radioactive particles are by-products generated as a result of activities related to nuclear energy and constitute a pollution and radiotoxicity problem (with serious health and ecological consequences) due to their instability and their emission of ionizing radiation. The techniques for bioremediation of environmental media such as soil, water and sediments contaminated by radionuclides are diverse and are currently being established as an ecological and economical alternative to traditional procedures. Conventional physico-chemical strategies are based on the extraction of waste by excavating and drilling, with subsequent long-range transport for final confinement. Such works and transport often have unacceptably high estimated operating costs, which could exceed a trillion dollars in the US and 50 million pounds in the UK. The species involved in these processes have the ability to influence properties of radionuclides such as solubility, bioavailability and mobility, in order to accelerate their stabilization. Their action is largely influenced by electron donors and acceptors, the nutrient medium, the complexation of radioactive particles with the material, and environmental factors. These measures can be applied at the source of contamination (in situ) or in controlled, contained facilities, in order to follow the biological process more accurately and to combine it with other systems (ex situ). Areas contaminated by radioactivity Typology of radionuclides and polluting waste The presence of radioactive waste in the environment may cause long-term effects due to the activity and half-life of the radionuclides, leading their impact to grow with time. These particles exist in various oxidation states and are found as oxides, coprecipitates, or as organic or inorganic complexes, according to their origin and the way they were released. Most commonly they are found in oxidized form, which makes them more soluble in water and thus more mobile. Unlike organic contaminants, however, they cannot be destroyed and must be converted into a stable form or extracted from the environment. The sources of radioactivity are not exclusively human. Natural radioactivity, which does not come from human sources, accounts for up to three fourths of the total radioactivity in the world and has its origins in the interaction of terrestrial elements with high energy cosmic rays (cosmogenic radionuclides) or in the materials existing on Earth since its formation (primordial radionuclides). In this regard, there are differences in the levels of radioactivity throughout the Earth's crust. India and mountains like the Alps are among the areas with the highest level of natural radioactivity due to the composition of their rocks and sand. The most frequent naturally occurring radionuclides in soils are radium-226 (226Ra), radon-222 (222Rn), thorium-232 (232Th), uranium-238 (238U) and potassium-40 (40K). Potassium-40 (up to 88% of total activity), carbon-14 (14C), radium-226, uranium-238 and rubidium-87 (87Rb) are found in ocean waters. Moreover, radium radioisotopes such as radium-226 and radium-228 (228Ra) abound in groundwater. Radionuclides of uranium, thorium and potassium (the latter also common in wood) are likewise habitual in building materials.
At the same time, anthropogenic radionuclides (caused by humans) arise from thermonuclear reactions resulting from explosions and nuclear weapons tests, discharges from nuclear facilities, accidents during the reprocessing of commercial fuel, waste storage from these processes and, to a lesser extent, nuclear medicine. Some sites polluted by these radionuclides are the US DOE facilities (such as the Hanford Site), the Chernobyl and Fukushima exclusion zones and the affected area of Chelyabinsk Oblast due to the Kyshtym disaster. In ocean waters, the presence of tritium (3H), cesium-137 (137Cs), strontium-90 (90Sr), plutonium-239 (239Pu) and plutonium-240 (240Pu) has significantly increased due to anthropogenic causes. In soils, technetium-99 (99Tc), carbon-14, strontium-90, cobalt-60 (60Co), iodine-129 (129I), iodine-131 (131I), americium-241 (241Am), neptunium-237 (237Np) and various forms of radioactive plutonium and uranium are the most common radionuclides. The classification of radioactive waste established by the International Atomic Energy Agency (IAEA) distinguishes six levels according to equivalent dose, specific activity, heat released and half-life of the radionuclides: Exempt waste (EW): Waste that meets the criteria for exclusion from regulatory control for radiation protection purposes. Very short lived waste (VSLW): Waste with very short half-lives (often used for research and medical purposes) that can be stored over a limited period of up to a few years and subsequently cleared from regulatory control. Very low level waste (VLLW): Waste like soil and rubble (with low levels of activity concentration) that may also contain other hazardous waste. Low level waste (LLW): Waste that is above clearance levels and requires robust isolation and containment for periods of up to a few hundred years and is suitable for disposal in engineered near surface facilities. LLW includes short lived radionuclides at higher levels of activity concentration and also long lived radionuclides, but only at relatively low levels of activity concentration. Intermediate level waste (ILW): Waste with long lived radionuclides that requires a greater degree of containment and isolation at greater depths. High level waste (HLW): Waste with large amounts of long lived radionuclides that needs to be stored in deep, stable geological formations usually several hundred metres or more below the surface. Ecological and human health consequences Radioactive contamination is a potential danger for living organisms and results in external hazards, concerning radiation sources outside the body, and internal hazards, as a result of the incorporation of radionuclides inside the body (often by inhalation of particles or ingestion of contaminated food). In humans, single doses from 0.25 Sv upward first produce anomalies in the leukocyte count. This effect is accentuated if the absorbed dose is between 0.5 and 2 Sv, a range in which the first tissue damage, nausea and hair loss occur. The band between 2 and 5 Sv is considered the most serious and includes bleeding, ulcers and risk of death; values exceeding 5 Sv involve immediate death. If radiation, likewise, is received in small doses over long periods of time, the consequences can be equally severe. It is difficult to quantify the health effects for doses below 10 mSv, but it has been shown that there is a direct relationship between prolonged exposure and cancer risk (although there is not a very clear dose-response relationship from which to establish clear limits of exposure).
The information available on the effect of natural background radiation, compared with anthropogenic pollution, on wildlife is scarce and refers to very few species. It is very difficult to estimate from the available data the total doses that can accumulate during specific stages of the life cycle (embryonic development or reproductive age), their effect on changes in behavior, or their dependence on environmental factors such as seasonality. The phenomena of radioactive bioaccumulation, bioconcentration and biomagnification, however, are especially well known at the marine level. They are caused by the uptake and retention of radioisotopes by bivalves, crustaceans, corals and phytoplankton, which then pass them up the rest of the food chain at low concentration factors. The radiobiological literature and the IAEA establish a safe limit of absorbed dose of 0.001 Gy/d for terrestrial animals and 0.01 Gy/d for plants and marine biota, although this limit should be reconsidered for long-lived species with low reproductive capacity. The effects of high radiation on animals and plants determined in radiation tests on model organisms include: Chromosomal aberrations. DNA damage. Cancer, particularly leukemia. Leukopenia. Growth reduction. Reproductive deficiencies: sterility, reduction in fecundity, and occurrence of developmental abnormalities or reduction in viability of offspring. Reduced seed germination. Burns in tissues exposed to radiation. Mortality, including both acute lethality and long-term reduction in life span. The effects of radioactivity on bacteria arise, as in eukaryotes, from the ionization of water and the production of reactive oxygen species. These compounds mutate DNA strands and produce genetic damage, inducing cell lysis and subsequent cell death. The action of radiation on viruses, on the other hand, results in damaged nucleic acids and viral inactivation. They have a sensitivity threshold ranging between 1,000 and 10,000 Gy (a range that covers most biological organisms), which decreases with increasing genome size. Bacterial bioremediation The biochemical transformation of radionuclides into stable isotopes by bacterial species differs significantly from the metabolism of organic compounds coming from carbon sources. Radionuclides are highly energetic radioactive forms which can be converted indirectly by the process of microbial energy transfer. Radioisotopes can be transformed directly, through changes in valence state, by acting as acceptors or as cofactors to enzymes. They can also be transformed indirectly by reducing and oxidizing agents produced by microorganisms that cause changes in pH or redox potential. Other processes include precipitation and complexation by surfactants or chelating agents that bind to radioactive elements. Human intervention, on the other hand, can improve these processes through genetic engineering and omics, or by injection of microorganisms or nutrients into the treatment area. Bioreduction Depending on the radioactive element and the specific site conditions, bacteria can enzymatically immobilize radionuclides directly or indirectly. Their redox potential is exploited by some microbial species to carry out reductions that alter the solubility and hence the mobility, bioavailability and radiotoxicity. This waste treatment technique, called bioreduction or enzymatic biotransformation, is very attractive because it can be carried out under environmentally mild conditions, does not produce hazardous secondary waste and has potential as a solution for waste of various kinds.
Direct enzymatic reduction is the change of radionuclides from a higher oxidation state to a lower one, carried out by facultative and obligate anaerobes. The radioisotope interacts with the binding sites of metabolically active cells and is used as a terminal electron acceptor in the electron transport chain, in which compounds such as ethyl lactate act as electron donors under anaerobic respiration. The periplasm plays a very important role in these bioreductions. In the reduction of uranium(VI) to insoluble uranium(IV), carried out by Shewanella putrefaciens, Desulfovibrio vulgaris, Desulfovibrio desulfuricans and Geobacter sulfurreducens, the activity of periplasmic cytochromes is required. The reduction of technetium(VII) to technetium(IV) by S. putrefaciens, G. sulfurreducens, D. desulfuricans, Geobacter metallireducens and Escherichia coli, on the other hand, requires the presence of the formate hydrogenlyase complex, which is also located in this cell compartment. Other radioactive actinides, such as thorium, plutonium, neptunium and americium, are enzymatically reduced by Rhodoferax ferrireducens, S. putrefaciens and several species of Geobacter, and directly form an insoluble mineral phase. Indirect enzymatic reduction is carried out by sulfate-reducing and dissimilatory metal-reducing bacteria through the excretion of metabolites and breakdown products. The oxidation of organic acids excreted by these heterotrophic bacteria is coupled to the reduction of iron or other metals and radionuclides, forming insoluble compounds that can precipitate as oxide and hydroxide minerals. In the case of sulfate-reducing bacteria, hydrogen sulfide is produced, promoting an increased solubility of the polluting radionuclides and their bioleaching (as a liquid waste that can then be recovered). There are several species of reducing microorganisms that act indirectly by producing sequestering agents and specific chelators, such as siderophores. These sequestering agents are crucial in the complexation of radionuclides, increasing their solubility and bioavailability. Microbacterium flavescens, for example, grows in the presence of radioisotopes such as plutonium, thorium, uranium or americium and produces organic acids and siderophores that allow the dissolution and mobilization of radionuclides through the soil. Siderophores on the bacterial surface may also facilitate the entry of these elements into the cell. Pseudomonas aeruginosa likewise secretes chelating agents that bind uranium and thorium when it is grown in a medium containing these elements. In general, enterobactin siderophores have also been found to be extremely effective in solubilizing the actinide oxides of plutonium.
Citrate complexes Citrate is a chelator that binds to certain transition metals and radioactive actinides. Stable complexes such as bidentate and tridentate complexes (ligands binding through more than one atom) and polynuclear complexes (containing several radioactive atoms) can be formed between citrate and radionuclides, and these are subject to microbial action. Anaerobically, Desulfovibrio desulfuricans and species of the genera Shewanella and Clostridium are able to reduce bidentate uranyl(VI)-citrate complexes to uranyl(IV)-citrate and make them precipitate, despite not being able to metabolically degrade the complexed citrate at the end of the process. Under denitrifying and aerobic conditions, however, it has been determined that it is not possible to reduce or degrade these uranium complexes.
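As a concrete picture of the direct pathway above, the uranium step is often written as a simple half-reaction in which soluble uranyl is reduced to an insoluble U(IV) oxide. This is a standard textbook formulation added here for illustration, not an equation taken from the article; the electrons are supplied through the respiratory chain (for example via the periplasmic cytochromes mentioned above), and uraninite is shown as a representative U(IV) product.

```latex
% Simplified half-reaction for direct enzymatic reduction of uranium:
% soluble U(VI), as the uranyl ion, is reduced to an insoluble U(IV) oxide (e.g. uraninite).
\mathrm{UO_2^{2+}\,(aq)} + 2e^{-} \longrightarrow \mathrm{UO_2\,(s)}
```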
Bioreduction does not proceed when the citrate forms mixed-metal complexes, or when the complexes are tridentate, monomeric or polynuclear, since these become recalcitrant and persistent in the environment. Building on this knowledge, a system has been developed that combines the degradation of radionuclide-citrate complexes with the subsequent photodegradation of the remaining reduced uranyl-citrate (not biodegraded beforehand, but sensitive to light), which yields stable precipitates of uranium, and also of thorium, strontium or cobalt, from contaminated land.
Biosorption, bioaccumulation and biomineralization The strategies comprising biosorption, bioaccumulation and biomineralization are closely related to each other, because in one way or another they all involve direct contact between the cell and the radionuclide. These mechanisms can be evaluated accurately using advanced analytical technologies such as electron microscopy, X-ray diffraction, and X-ray spectroscopies such as XANES and EXAFS. Biosorption and bioaccumulation are two processes based on the ability to concentrate radionuclides to over a thousand times the concentration found in the environment. They consist of the complexation of radioactive waste with phosphates, organic compounds and sulfites, so that it becomes insoluble and exposure to its radiotoxicity is reduced. They are particularly useful in biosolids for agricultural purposes and soil amendments, although most properties of these biosolids are unknown. The biosorption method is based on the passive sequestration of positively charged radioisotopes by the lipopolysaccharides (LPS) of the cell membrane (which are negatively charged), in either live or dead bacteria. Its efficiency increases with temperature, and the process takes place over a period of hours, making it a much faster method than direct bioreduction. It occurs through the formation of slimes and capsules, with a preference for binding to phosphate and phosphoryl groups (although binding to carboxyl, amine or sulfhydryl groups also occurs). Bacillota and other bacteria such as Citrobacter freundii have significant biosorption capabilities; Citrobacter does so through the electrostatic interaction of uranium with the phosphates of its LPS. Quantitative analyses indicate that, in the case of uranium, biosorption may vary within a range of 45 to 615 milligrams per gram of cell dry weight. However, the technique requires a large amount of biomass to effect bioremediation, presents saturation problems, and is hindered by other cations that compete for binding to the bacterial surface. Bioaccumulation refers to the uptake of radionuclides into the cell, where they are retained by complexation with negatively charged intracellular components, by precipitation, or by the formation of granules. Unlike biosorption, this is an active process: it depends on an energy-dependent transport system. Some metals or radionuclides can be absorbed by bacteria accidentally because of their resemblance to elements used in metabolic pathways. Several radioisotopes of strontium, for example, are recognized as analogs of calcium and incorporated by Micrococcus luteus. Uranium, however, has no known biological function, and it is believed that its entry into the cell interior may be due to its toxicity (it is able to increase membrane permeability). Furthermore, biomineralization, also known as bioprecipitation, is the precipitation of radionuclides through the generation of microbial ligands, resulting in the formation of stable biogenic minerals.
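The biosorption capacity range quoted above (45 to 615 milligrams of uranium per gram of cell dry weight) makes the biomass problem easy to illustrate with a back-of-the-envelope calculation. The sketch below is such an estimate; the 5 kg uranium inventory is an invented example, not a figure from the article.

```python
# Minimal sketch: rough dry-biomass requirement for uranium biosorption, using the
# capacity range quoted in the text (45-615 mg U per g of cell dry weight). The uranium
# inventory is hypothetical.

def biomass_needed_kg(uranium_g: float, capacity_mg_per_g: float) -> float:
    """Dry biomass (kg) needed to sorb `uranium_g` grams of uranium at the given capacity."""
    uranium_mg = uranium_g * 1000.0
    biomass_g = uranium_mg / capacity_mg_per_g
    return biomass_g / 1000.0

uranium_inventory_g = 5_000.0  # hypothetical: 5 kg of dissolved uranium in an effluent
for capacity in (45.0, 615.0):  # low and high ends of the reported sorption range
    kg = biomass_needed_kg(uranium_inventory_g, capacity)
    print(f"capacity {capacity} mg/g -> ~{kg:.1f} kg dry biomass needed")
```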
These minerals have a very important role in the retention of radioactive contaminants. A highly localized, enzymatically generated ligand concentration is involved, which provides a nucleation site for the onset of biomineral precipitation. This is particularly relevant in the precipitation of biominerals derived from phosphatase activity, in which enzymes cleave molecules such as glycerol phosphate in the periplasm. In the genera Citrobacter and Serratia, this cleavage liberates inorganic phosphate (HPO42−) that precipitates with the uranyl ion (UO22+) and causes the deposition of polycrystalline minerals around the cell wall. Serratia also forms biofilms that promote the precipitation of chernikovite (a uranium-rich mineral) and, additionally, removes up to 85% of cobalt-60 and 97% of cesium-137 by proton substitution in this mineral. In general, biomineralization is a process in which the cells have no saturation limitations and can accumulate up to several times their own weight as precipitated radionuclides. Investigations of terrestrial and marine bacterial isolates belonging to the genera Aeromonas, Bacillus, Myxococcus, Pantoea, Pseudomonas, Rahnella and Vibrio have also demonstrated the removal of uranium radioisotopes as phosphate biominerals under both oxic and anoxic growth conditions.
Biostimulation and bioaugmentation Aside from bioreduction, biosorption, bioaccumulation and biomineralization, which are bacterial strategies for the natural attenuation of radioactive contamination, there are also human interventions that increase the efficiency or speed of the microbial processes. This accelerated natural attenuation involves intervening in the contaminated area to improve the conversion rates of radioactive waste, which tend to be slow. There are two variants: biostimulation and bioaugmentation. Biostimulation is the addition of nutrients, trace elements, electron donors or electron acceptors to stimulate the activity and growth of the natural indigenous microbial communities. It can range from simple fertilization or infiltration (called passive biostimulation) to more aggressive injections into the ground, and is widely used at US DOE sites. Nitrate is used as a nutrient to biostimulate the reduction of uranium, because it serves as a very energetically favorable electron acceptor for metal-reducing bacteria. However, many of these microorganisms (Geobacter, Shewanella or Desulfovibrio) carry heavy-metal resistance genes that limit their ability to bioremediate radionuclides. In these cases, a carbon source such as ethanol is added to the medium to promote the reduction of nitrate first, and then of uranium. Ethanol is also used in soil injection systems with hydraulic recirculation: it raises the pH and promotes the growth of denitrifying and radionuclide-reducing bacteria, which produce biofilms and achieve an almost 90% decrease in the concentration of radioactive uranium. A number of geophysical techniques have been used to monitor the effects of in situ biostimulation trials, including measurement of spectral induced polarization, self-potentials, current density and complex resistivity, as well as reactive transport modelling (RTM), which uses hydrogeological and geochemical parameters to estimate the chemical reactions of the microbial community. Bioaugmentation, on the other hand, is the deliberate addition to the environment of microorganisms with desired traits, to accelerate the bacterial metabolic conversion of radioactive waste.
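The phosphatase-driven pathway just described can be summarized in a simplified two-step scheme. This is a schematic sketch built only from the species named in the text (glycerol phosphate, HPO42−, the uranyl ion), not a balanced equation from the article; hydrogen uranyl phosphate is shown as a stand-in for the chernikovite-type phase and waters of hydration are omitted.

```latex
% Schematic of phosphatase-mediated biomineralization (illustrative, hydration omitted):
% (1) a periplasmic phosphatase cleaves an organophosphate, releasing inorganic phosphate;
% (2) the released phosphate precipitates soluble uranyl as a uranyl phosphate phase.
\begin{align*}
\text{glycerol phosphate} &\xrightarrow{\ \text{phosphatase}\ } \text{glycerol} + \mathrm{HPO_4^{2-}} \\
\mathrm{UO_2^{2+}} + \mathrm{HPO_4^{2-}} &\longrightarrow \mathrm{UO_2HPO_4\,(s)}
\end{align*}
```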
Microorganisms are often added when the species necessary for bioremediation are not present at the treatment site. Field trials over the years have shown that this technique does not offer better results than biostimulation; nor is it clear that introduced species can be distributed effectively through the complex geological structures of most subsurface environments, or that they can compete in the long term with the indigenous microbiota.
Genetic engineering and omics Omics, especially genomics and proteomics, allow the identification and evaluation of genes, proteins and enzymes involved in radionuclide bioremediation, as well as the structural and functional interactions that exist between them and other metabolites. Genome sequencing of various microorganisms has revealed, for example, that Geobacter sulfurreducens possesses more than 100 coding regions for c-type cytochromes involved in radionuclide bioremediation, or that the NiCoT gene is significantly overexpressed in Rhodopseudomonas palustris and Novosphingobium aromaticivorans when they are grown in a medium containing radioactive cobalt. From this information, different genetic engineering and recombinant DNA techniques are being developed to generate bacteria tailored for bioremediation. Some of the constructs expressed in microbial species are phytochelatins, polyhistidines and other polypeptides fused as binding domains to outer-membrane-anchored proteins. Some of these genetically modified strains are derived from Deinococcus radiodurans, one of the most radiation-resistant organisms. D. radiodurans is capable of resisting oxidative stress and DNA damage from radiation, and it also naturally reduces technetium, uranium and chromium. In addition, through the insertion of genes from other species, it has been engineered to precipitate uranyl phosphates and to degrade mercury, using toluene as an energy source to grow while stabilizing other priority radionuclides. Directed evolution of bacterial proteins related to the bioremediation of radionuclides is also a field of research. The YieF enzyme, for example, naturally catalyzes the reduction of chromium over a very wide range of substrates. Following protein engineering, it has also been shown to participate in the reduction of the uranyl ion.
Plant bioremediation The use of plants to remove contaminants from the environment, or to render them less harmful, is called phytoremediation. In the case of radionuclides, it is a viable technology when decontamination times are long and the waste is scattered at low concentrations. Some plant species are able to change the state of radioisotopes (without suffering toxicity), concentrating them in different parts of their structure, taking them up through the roots, volatilizing them, or stabilizing them in the soil. As in bacteria, plant genetic engineering and biostimulation (here called phytostimulation) have improved and accelerated these processes, particularly for fast-growing plants. The use of Agrobacterium rhizogenes, for example, is quite widespread and significantly increases radionuclide uptake by the roots.
Phytoextraction In phytoextraction (also called phytoaccumulation, phytosequestration or phytoabsorption), plants carry radioactive waste from the root system to the vascular tissue, and it becomes concentrated in the shoot biomass. It is a technique that removes radionuclides without destroying the soil structure and with minimal impact on soil fertility, and it is suitable for large areas with a low level of radioactivity.
Its efficiency is evaluated through the bioaccumulation coefficient (BC) or the total removal of radionuclides per m2, and it has been shown to extract cesium-137, strontium-90, technetium-99, cerium-144, plutonium-240, americium-241, neptunium-237 and various radioisotopes of thorium and radium. On the other hand, it requires the production of a large amount of biomass in short periods of time. Species such as common heather or amaranths are able to concentrate cesium-137, the most abundant radionuclide in the Chernobyl Exclusion Zone. In this region of Ukraine, mustard greens could remove up to 22% of the average level of cesium activity in a single growing season. In the same way, bok choy and mustard greens can concentrate 100 times more uranium than other species.
Rhizofiltration Rhizofiltration is the adsorption and precipitation of radionuclides onto plant roots, or their absorption by the roots if they are soluble in effluents. It is highly efficient in the treatment of cesium-137 and strontium-90, particularly by algae and aquatic plants such as those of the genera Cladophora and Elodea, respectively. It is the most efficient bioremediation strategy in wetlands, but requires continuous and rigorous control of pH to perform optimally. Based on this process, strategies have been designed that use sequences of ponds with a slow flow of water to clean water polluted with radionuclides. For flows of 1,000 liters of effluent, such facilities achieve about 95% retention of the radioactivity in the first pond (by plants and sludge), and over 99% in three-pond systems. The most promising plants for rhizofiltration are sunflowers. They are able to remove up to 95% of the uranium from contaminated water in 24 hours, and experiments at Chernobyl have demonstrated that they can concentrate, in 55 kg of plant dry weight, all of the cesium and strontium radioactivity from an area of 75 m2 (a stabilized material suitable for transfer to a nuclear waste repository).
Phytovolatilization Phytovolatilization involves the capture and subsequent transpiration of radionuclides into the atmosphere. It does not remove the contaminants but releases them in a volatile, less harmful form. Although it has few applications for radioactive waste, it is very useful for the treatment of tritium, because it exploits the ability of plants to transpire enormous amounts of water. The treatment applied to tritium (which, shielded by air, produces almost no external radiation exposure, but presents a health hazard when incorporated into water and absorbed into the body) uses polluted effluents to irrigate phreatophytes. The result is a system with low operating and maintenance costs, with savings of about 30% compared with conventional methods of pumping and capping with asphalt.
Phytostabilization Phytostabilization is a particularly suitable strategy for radioactive contamination; it is based on the immobilization of radionuclides in the soil by the action of the roots. This can occur by adsorption, absorption and precipitation within the root zone, and it ensures that the radioactive waste cannot be dispersed by soil erosion or leaching. It is useful for controlling tailings from strip and open-pit uranium mines and helps to restore the ecosystem. However, it has significant drawbacks, such as the large doses of fertilizer needed to reforest the area and the fact that the radioactive source remains in place (which implies long-term maintenance).
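The pond-series figures given for rhizofiltration above (about 95% retention in the first pond and over 99% across three ponds) are consistent with each pond retaining a roughly constant fraction of whatever reaches it. The sketch below works through that arithmetic; assuming an identical retention fraction for every pond is a simplification made only for this illustration.

```python
# Minimal sketch: cumulative retention in a series of treatment ponds, assuming each pond
# independently retains the same fraction of the radioactivity that reaches it. The 95%
# per-pond figure echoes the text; identical per-pond performance is an assumption.

def cumulative_retention(per_pond_retention: float, n_ponds: int) -> float:
    """Fraction of the incoming radioactivity retained after `n_ponds` ponds in series."""
    remaining = (1.0 - per_pond_retention) ** n_ponds
    return 1.0 - remaining

for n in (1, 2, 3):
    r = cumulative_retention(0.95, n)
    print(f"{n} pond(s): {r:.4%} of the radioactivity retained")
# 1 pond  -> 95% retained
# 3 ponds -> ~99.99% retained, consistent with the 'over 99%' figure for three-pond systems
```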
Fungal bioremediation Several fungal species have radiation resistance values equal to or greater than those of the most radioresistant bacteria; the processes they carry out are known as mycoremediation. Some fungi were reported to be able to grow on, feed on, generate spores on and decompose pieces of graphite from the destroyed reactor No. 4 at the Chernobyl Nuclear Power Station, which is contaminated with high concentrations of cesium, plutonium and cobalt radionuclides. They were called radiotrophic fungi. Since then, it has been shown that some species of Penicillium, Cladosporium, Paecilomyces and Xerocomus are able to use ionizing radiation as energy through the electronic properties of melanins. As they feed, they bioaccumulate radioisotopes, creating problems for the concrete walls of deep geological repositories. Other fungi, such as oyster mushrooms, can bioremediate plutonium-239 and americium-241.
Ways of research Current research on bioremediation techniques is fairly advanced and the molecular mechanisms that govern them are well known. However, many doubts remain about the effectiveness and possible adverse effects of these processes in combination with the addition of agrochemicals. In soils, the role of mycorrhizae with respect to radioactive waste is poorly described, and the sequestration patterns of radionuclides are not known with certainty. The long-term durability of some bacterial processes, such as the maintenance of uranium in insoluble form after bioreduction or biomineralization, is unknown. Nor are there clear details about the electron transfer between some radionuclides and these bacterial species. Another important aspect is the transfer of ex situ or laboratory-scale processes to real applications in situ, in which soil heterogeneity and environmental conditions make it difficult to reproduce the optimal biochemical state of the species used, a fact that decreases efficiency. This implies finding the best conditions for carrying out efficient bioremediation in the presence of anions, metals, organic compounds or other chelating radionuclides that can compete with the uptake of the radioactive waste of interest. Nevertheless, in many cases research focuses on the extraction of soil and water and its ex situ biological treatment to avoid these problems. Finally, the potential of GMOs is limited by regulatory agencies in terms of liability and bioethical issues. Their release requires support in the action zone and comparability with indigenous species. Multidisciplinary research is focusing on defining more precisely the genes and proteins needed to establish new cell-free systems, which may avoid possible side effects on the environment from the intrusion of transgenic or invasive species.
See also List of environment topics Living machines Dutch standards Actinides in the environment Restoration ecology Uranium mining debate Radiobiology Nuclear power References External links F. Xavier. Bioremediation of radioactive waste. (PDF) Scientific poster of the Bachelor Thesis related to this article. Autonomous University of Barcelona Digital Repository of Documents. Bioremediation Environmental engineering Radioactive waste
Bioremediation of radioactive waste
[ "Chemistry", "Technology", "Engineering", "Biology", "Environmental_science" ]
6,608
[ "Chemical engineering", "Biodegradation", "Ecological techniques", "Civil engineering", "Environmental impact of nuclear power", "Hazardous waste", "Radioactivity", "Environmental engineering", "Bioremediation", "Environmental soil science", "Radioactive waste" ]
50,592,162
https://en.wikipedia.org/wiki/Differential%20static%20light%20scatter
Differential static light scatter (DSLS) is a term coined to represent the change in the total light scatter of a system over time or temperature in a static environment. Static light scattering (SLS) and its many variants are well described in the literature and provide the base principle for DSLS, which differs specifically in that the difference (before and after) is the focus of the measurement. Typically the system will commence measurement at T0 and, over the course of time, measure the change in light scatter. One of the more practical applications of DSLS is in the area of proteomic research and protein-based chemistry. Solution conditions can be varied across samples of a specific protein in a screening scenario, and the system can be kept at a static temperature or ramped up, or in some cases down. The change is observed over time, and the calculation focuses on the amount of change in signal from T0 to Tfinal. This method of analysis provides researchers with data that helps them predict a protein's or compound's stability under various conditions and, further, in the case of proteomic structural work, can help identify the best protein candidates, and their optimal conditions, to crystallize and thereby undergo X-ray crystallography for structural analysis. There are other technologies and techniques based on similar concepts, such as DLS (dynamic light scattering), which obtain this information with the help of fluorophores and lasers for excitation; however, the primary focus in that arena is on particle sizing. DLS also has a greater focus on flow-based instrumentation. Many proteins are discovered every year, and in the field of drug discovery it is very important to characterize the structure of novel peptides as well as the best conditions in which to keep them in solution. Because of the staggering number of potential therapeutics emerging from this research sector today, there is a strong need for instrumentation to capture this data, and to date there are a few solutions that are DSLS focused. One such instrument, designed for high-throughput scenarios and utilizing standard HTS (high-throughput screening) SBS-format (automation-friendly) plates, is the StarGazer2. Other solutions are also available that have a wider focus, including particle sizing and zeta potential, but they are limited by how many samples can be run at once and are thus non-HTS. As DSLS in principle measures particles as they either aggregate (grow larger) or, in theory, break down and grow smaller, this technology and method of measurement may find a number of applications in the future in the food and beverage or environmental sectors as the technology is stretched into new applications beyond proteomics. References Scattering, absorption and radiative transfer (optics) Scattering
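Because the readout is simply the difference in scatter between the start and end of a run, or the temperature at which scatter begins to climb during a ramp, the analysis can be sketched in a few lines. The example below is illustrative only: the 10% rise threshold, the function names and the ramp data are assumptions, not any instrument's documented analysis.

```python
# Minimal sketch: extracting a DSLS readout from a scatter-vs-temperature trace.
# The 10%-rise threshold used to define the aggregation onset is an arbitrary
# illustrative choice, not a documented instrument convention.

def delta_sls(signal: list[float]) -> float:
    """Change in total scatter between the first and last points of the run."""
    return signal[-1] - signal[0]

def aggregation_onset(temps: list[float], signal: list[float], rise_fraction: float = 0.1) -> float:
    """Temperature at which the scatter first rises by `rise_fraction` of the total change."""
    total = delta_sls(signal)
    threshold = signal[0] + rise_fraction * total
    for t, s in zip(temps, signal):
        if s >= threshold:
            return t
    return temps[-1]

# Hypothetical ramp data: a flat baseline followed by aggregation at higher temperatures.
temps = [25, 35, 45, 50, 55, 60, 65, 70]
scatter = [100, 101, 102, 105, 140, 400, 900, 1200]
print("delta SLS:", delta_sls(scatter))
print("onset temperature (C):", aggregation_onset(temps, scatter))
```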
Differential static light scatter
[ "Physics", "Chemistry", "Materials_science" ]
561
[ " absorption and radiative transfer (optics)", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
41,933,329
https://en.wikipedia.org/wiki/Cereblon%20E3%20ligase%20modulator
Cereblon E3 ligase modulators, also known as immunomodulatory imide drugs (IMiDs), are a class of immunomodulatory drugs (drugs that adjust immune responses) containing an imide group. The IMiD class includes thalidomide and its analogues (lenalidomide, pomalidomide, mezigdomide and iberdomide). These drugs may also be referred to as 'Cereblon modulators'. Cereblon (CRBN) is the protein targeted by this class of drugs. The name "IMiD" alludes to both "IMD" for "immunomodulatory drug" and the forms imide, imido-, imid-, and imid. The development of analogs of thalidomide was precipitated by the discovery of the anti-angiogenic and anti-inflammatory properties of the drug yielding a new way of fighting cancer as well as some inflammatory diseases after it had been banned in 1961. The problems with thalidomide included teratogenic side effects, high incidence of other adverse reactions, poor solubility in water and poor absorption from the intestines. In 1998 thalidomide was approved by the U.S. Food and Drug Administration (FDA) for use in newly diagnosed multiple myeloma (MM) under strict regulations. This has led to the development of a number of analogs with fewer side effects and increased potency which include lenalidomide and pomalidomide, which are currently marketed and manufactured by Celgene. History Thalidomide was originally released in the Federal Republic of Germany (West Germany) under the label of Contergan on October 1, 1957, by Chemie Grünenthal (now Grünenthal). The drug was primarily prescribed as a sedative or hypnotic, but it was also used as an antiemetic for morning sickness in pregnant women. The drug was banned in 1961 after its teratogenic properties were observed. The problems with thalidomide were, aside from the teratogenic side effects, both high incidence of other adverse reactions along with poor solubility in water and absorption from the intestines. Adverse reactions include peripheral neuropathy in large majority of patients, constipation, thromboembolism along with dermatological complications. Four years after thalidomide was withdrawn from the market for its ability to induce severe birth defects, its anti-inflammatory properties were discovered when patients with erythema nodosum leprosum (ENL) used thalidomide as a sedative and it reduced both the clinical signs and symptoms of the disease. Thalidomide was discovered to inhibit tumour necrosis factor-alpha (TNF-α) in 1991 (5a Sampaio, Sarno, Galilly Cohn and Kaplan, JEM 173 (3) 699–703, 1991) . TNF-α is a cytokine produced by macrophages of the immune system, and also a mediator of inflammatory response. Thus the drug is effective against some inflammatory diseases such as ENL (6a Sampaio, Kaplan, Miranda, Nery..... JID 168 (2) 408-414 2008). In 1994 Thalidomide was found to have anti-angiogenic activity and anti-tumor activity which propelled the initiation of clinical trials for cancer including multiple myeloma. The discovery of the anti-inflammatory, anti-angiogenic and anti-tumor activities of thalidomide increased the interest of further research and synthesis of safer analogs. Lenalidomide is the first analog of thalidomide which is marketed. It is considerably more potent than its parent drug with only two differences at a molecular level, with an added amino group at position 4 of the phthaloyl ring and removal of a carbonyl group from the phthaloyl ring. Development of lenalidomide began in the late 1990s and clinical trials of lenalidomide began in 2000. 
In October 2001 lenalidomide was granted orphan status for the treatment of MM. In mid-2002 it entered phase II and by early 2003 phase III. In February 2003 FDA granted fast-track status to lenalidomide for the treatment of relapsed or refractory MM. In 2006 it was approved for the treatment of MM along with dexamethasone and in 2007 by European Medicines Agency (EMA). In 2008, phase II trial observed efficacy in treating Non-Hodgkin's lymphoma. Pomalidomide (3-aminothalidomide) was the second thalidomide analog to enter the clinic being more potent than both of its predecessors. First reported in 2001, pomalidomide was noted to directly inhibit myeloma cell proliferation and thus inhibiting MM both on the tumor and vascular compartments. This dual activity of pomalidomide makes it more efficacious than thalidomide both in vitro and in vivo. This effect is not related to TNF-α inhibition since potent TNF-α inhibitors such as rolipram and pentoxifylline did not inhibit myeloma cell growth nor angiogenesis. Upregulation of interferon gamma, IL-2 and IL-10 have been reported for pomalidomide and may contribute to its anti-angiogenic and anti-myeloma activities. Development The thalidomide molecule is a synthetic derivative of glutamic acid and consists of a glutarimide ring and a phthaloyl ring (Figure 5). Its IUPAC name is 2-(2,6-dioxopiperidin-3-yl)isoindole-1,3-dione and it has one chiral center After thalidomide's selective inhibition of TNF-α had been reported, a renewed effort was put in thalidomide's clinical development. The clinical development led to the discovery of new analogs which strived to have improved activities and decreased side effects. Clinically, thalidomide has always been used as a racemate. Generally the S-isomer is associated with the infamous teratogenic effects of thalidomide and the R-isomer is devoid of the teratogenic properties but conveys the sedative effects, however this view is highly debated and it has been argued that the animal model that these different R- and S-effects were seen in was not sensitive to the thalidomide teratogenic effects. Later reports in rabbits, which is a sensitive species, unveiled teratogenic effects from both isomers. Moreover, thalidomide enantiomers have been shown to be interconversed in vivo due to the acidic chiral hydrogen in the asymmetric center (shown, for the EM-12 analog, in Figure 3), so the plan to administer a purified single enantiomer to avoid the teratogenic effects will most likely be in vain. Development of lenalidomide and pomalidomide One of the analogs of interest was made by isoindolinone replacement of the phthaloyl ring. It was given the name EM-12 (Figure 3). This replacement was thought to increase the bioavailability of the substance because of increased stability. The molecule had been reported to be an even more potent teratogenic agent than thalidomide in rats, rabbits and monkeys. Additionally, these analogs are more potent inhibitors of angiogenesis than thalidomide. As well, the amino-thalidomide and amino-EM-12 were potent inhibitors of TNF-α. These two analogs later got the name lenalidomide, which is the EM-12 amino analog, and pomalidomide, the thalidomide amino analog. Medical use The primary use of IMiDs in medicine is in the treatment of cancers and autoimmune diseases (including one that is a response to the infection leprosy). 
Indications for these agents that have received regulatory approval include:
Myelodysplastic syndrome, a precursor condition to acute myeloid leukaemia
Erythema nodosum, a complication of leprosy
Multiple myeloma
Off-label indications for which they seem promising treatments include:
Hodgkin's lymphoma
Light chain-associated (AL) amyloidosis
Primary myelofibrosis (PMF)
Acute myeloid leukaemia (AML)
Prostate cancer
Metastatic renal cell carcinoma (mRCC)
Thalidomide Thalidomide has been approved by the FDA for ENL and for MM in combination with dexamethasone. The EMA has also approved it to treat MM in combination with prednisone and/or melphalan. Orphan indications granted by the FDA include graft-versus-host disease, mycobacterial infection, recurrent aphthous ulcers, severe recurrent aphthous stomatitis, primary brain malignancies, AIDS-associated wasting syndrome, Crohn's disease, Kaposi's sarcoma, myelodysplastic syndrome and hematopoietic stem cell transplantation.
Lenalidomide Lenalidomide is approved in nearly 70 countries, in combination with dexamethasone, for the treatment of patients with MM who have received at least one prior therapy. Orphan indications include diffuse large B-cell lymphoma, chronic lymphocytic leukemia and mantle cell lymphoma. Lenalidomide is also approved for transfusion-dependent anemia due to low- or intermediate-1-risk myelodysplastic syndromes associated with a deletion 5q cytogenetic abnormality, with or without additional cytogenetic abnormalities, in the U.S., Canada, Switzerland, Australia, New Zealand, Malaysia, Israel and several Latin American countries, while marketing authorization applications are currently being evaluated in a number of other countries. Numerous clinical trials are already in the pipeline or being conducted to explore further uses for lenalidomide, alone or in combination with other drugs. Some of these indications include acute myeloid leukemia, follicular lymphoma, MALT lymphoma, Waldenström macroglobulinemia, lupus erythematosus, Hodgkin's lymphoma, myelodysplastic syndrome and more.
Pomalidomide Pomalidomide was submitted for FDA approval on April 26, 2012, and on 21 June it was announced that the drug would receive a standard FDA review. A marketing authorization application was filed with the EMA on 21 June 2012, where a decision could come as soon as early 2013. The EMA has already granted pomalidomide an orphan designation for primary myelofibrosis, MM, systemic sclerosis, post-polycythaemia and post-essential thrombocythaemia myelofibrosis.
Adverse effects The major toxicities of approved IMiDs are peripheral neuropathy, thrombocytopenia, anaemia and venous thromboembolism. There may be an increased risk of secondary malignancies, especially acute myeloid leukaemia, in those receiving IMiDs.
Teratogenicity Thalidomide's teratogenicity has been a subject of much debate and over the years numerous hypotheses have been proposed. Two of the best known have been the anti-angiogenesis hypothesis and the oxidative stress model hypothesis, with considerable experimental evidence supporting both with regard to thalidomide's teratogenicity. Recently, new findings have emerged that suggest a novel mechanism of teratogenicity. Cereblon is a 51 kDa protein localized in the cytoplasm, nucleus and peripheral membrane of cells in numerous parts of the body.
It acts as a component of the E3 ubiquitin ligase, regulating various developmental processes, including embryogenesis, carcinogenesis and cell cycle regulation, through degradation (ubiquitination) of unknown substrates. Thalidomide has been shown to bind to cereblon, inhibiting the activity of the E3 ubiquitin ligase, resulting in accumulation of the ligase substrates and downregulation of fibroblast growth factor 8 (FGF8) and FGF10. This disrupts the positive feedback loop between the two growth factors, possibly causing both multiple birth defects and anti-myeloma effects. Findings also support the hypothesis that an increase in the expression of cereblon is an essential element of the anti-myeloma effect of both lenalidomide and pomalidomide. Cereblon expression was three times higher in responding patients compared to non-responders and higher cereblon expression was also associated with partial or full response while lower expression was associated with stable or progressive disease. Mechanism of action Their mechanism of action is not entirely clear, but it is known that they inhibit the production of tumour necrosis factor, interleukin 6 and immunoglobulin G and VEGF (which leads to its anti-angiogenic effects), co-stimulates T cells and NK cells and increases interferon gamma and interleukin 2 production. Their teratogenic effects appear to be mediated by binding to cereblon. Thalidomide and its analogs, lenalidomide and pomalidomide, are believed to act in a similar fashion even though their exact mechanism of action is not yet fully understood. It is believed that they work through different mechanisms in various diseases. The net effect is probably due to different mechanisms combined. Mechanism of action will be explained in light of today's knowledge. Thalidomide, lenalidomide and pomalidomide Altering cytokine production Thalidomide and its immune-modulating analogs alter the production of the inflammatory cytokines TNF-α, IL-1, IL-6, IL-12 and anti-inflammatory cytokine IL-10. The analogs are believed to inhibit the production of TNF-α, where the analogs are up to 50.000 times more potent in vitro than the parent drug thalidomide. The mechanism is believed to be through enhanced degradation of TNF-α mRNA, resulting in diminished amounts of this pro-inflammatory cytokine secreted. This explains the effect of thalidomide when given to ENL patients, as they commonly have high levels of TNF-α in their blood and in dermatological lesions. In contrast, in vitro assay demonstrated that TNF-α is actually enhanced in T-cell activation, where CD4+ and CD8+ T lymphocytes were stimulated by anti-CD3 which was later confirmed in an early phase trials involving solid tumors and inflammatory dermatologic diseases. IL-12 is another cytokine both suppressed and enhanced by thalidomide and its analogs. When monocytes are stimulated by lipopolysaccharides, IL-12 production is suppressed but during T-cell stimulation the production is enhanced. Lenalidomide is believed to be about 1000 times more potent in vitro than thalidomide in anti-inflammatory properties and pomalidomide about 10 times more potent than lenalidomide. It is worth noticing however that, when comparing lenalidomide and pomalidomide, clinical relevance of higher in vitro potency is unclear since maximum tolerated dose of pomalidomide is 2 mg daily compared to 25 mg for lenalidomide, leading to 10-100 times lower plasma drug concentration of pomalidomide. 
T-cell activation Thalidomide and its analogs help with the co-stimulation of T-cells through the B7-CD28 complex by phosphorylating tyrosine on the CD28 receptor. In vitro data suggests this co-stimulation leads to increased Th1 type cytokine release of IFN-γ and IL-2 that further stimulates clonal T cell proliferation and natural killer cell proliferation and activity. This enhances natural and antibody dependent cellular cytotoxicity. Lenalidomide and pomalidomide are about 100-1000 times more potent in stimulating T-cell clonal proliferation than thalidomide. In addition, in vitro data suggests pomalidomide reverts Th2 cells into Th1 by enhancing transcription factor T-bet. Anti-angiogenesis Angiogenesis or the growth of new blood vessels has been reported to correspond with MM progression where vascular endothelial growth factor (VEGF) and its receptor, bFGF and IL-6 appear to be required for endothelial cell migration during angiogenesis. Thalidomide and its analogs are believed to suppress angiogenesis through modulation of the above-mentioned factors where potency in anti-angiogenic activity for lenalidomide and pomalidomide was 2-3 times higher than for thalidomide in various in vivo assays, Thalidomide has also been shown to block NF-κB activity through the blocking of IL-6, and NF-κB has been shown to be involved in angiogenesis. Inhibition of TNF-α is not the mechanism of thalidomide's inhibition of angiogenesis since numerous other TNF-α inhibitors do not inhibit angiogenesis. Anti-tumor activity In vivo anti-tumor activity of thalidomide is believed to be due to the potent anti-angiogenic effect and also through changes in cytokine expression. In vitro assays on apoptosis in MM cells have been shown, when treated with thalidomide and its analogs, to upregulate the activity of caspase-8. This causes cross talking of apoptotic signaling between caspase-8 and caspase-9 leading to indirect upregulation of caspase-9 activity. Further anti-tumor activity is mediated through the inhibition of apoptosis protein-2 and pro-survival effects of IGF-1, increasing sensitivity to FAS mediated cell death and enhancement of TNF-related apoptosis inducing ligand. They have also been shown to cause dose dependent G0/G1 cell cycle arrest in leukemia cell lines where the analogs showed 100 times more potency than thalidomide. Bone marrow environment The role of angiogenesis in the support of myeloma was first discovered by Vacca in 1994. They discovered increased bone marrow angiogenesis correlates with myeloma growth and supporting stromal cells are a significant source for angiogenic molecules in myeloma. This is believed to be a main component of the mechanism in vivo by which thalidomide inhibits multiple myeloma. Additionally, inflammatory responses within the bone marrow are believed to foster many hematological diseases. The secretion of IL-6 by bone marrow stromal cells (BMSC) and the secretion of the adhesion molecules VCAM-1, ICAM-1 and LFA, is induced in the presence of TNF-α and the adhesion of MM cells to BMSC. In vitro proliferation of MM cell lines and inhibition of Fas-mediated apoptosis is promoted by IL-6. Thalidomide and its analogs directly decrease the up-regulation of IL-6 and indirectly through TNF-α, thereby reducing the secretion of adhesion molecules leading to fewer MM cells adhering to BMSC. Osteoclasts become highly active during MM, leading to bone resorption and secretion of various MM survival factors. 
They decrease the levels of adhesion molecules paramount to osteoclast activation, decrease the formation of the cells that form osteoclasts and downregulate cathepsin K, an important cysteine protease expressed in osteoclasts. Structure-activity relationship Since the mechanism of action of thalidomide and its analogs is not fully clear and the bioreceptor for these substances has not been identified, the insight into the relationship between the structure and activity of thalidomide and its analogs are mostly derived from molecular modelling and continued research investigation. The information on SAR of thalidomide and its analogs is still in process so any trends detailed here are observed during individual studies. Research has mainly focused on improving the TNF-α and PDE4 inhibition of thalidomide, as well as the anti-angiogenesis activity. TNF-α inhibitors (not via PDE4) Research indicated that a substitution at the phthaloyl ring would increase TNF-α inhibition activity (Figure 5). An amino group substitution was tested at various locations on the phthaloyl ring (C4, C5, C6, C7) of thalidomide and EM-12 (previously described). Amino addition at the C4 location on both thalidomide and EM-12 resulted in much more potent inhibition of TNF-α. This also revealed that the amino group needed to be directly opposite the carbonyl group on the isoindolinone ring system for the most potent activity. These analogs do not inhibit PDE4 and therefore do not act by PDE4 inhibition. Other additions of longer and bigger groups at the C4 and C5 position of the phthaloyl ring system of thalidomide, some with an olefin functionality, have been tested with various results. Increased inhibitory effect, compared to thalidomide, was noticed with the groups that had an oxygen atom attached directly to the C5 or C4 olefin. Iodine and bromine addition at C4 or C5 resulted in equal or decreased activity compared to thalidomide. These groups were not compared with lenalidomide or pomalidomide. PDE4 inhibitors The common structure for analogs that inhibit TNF-α via inhibition of PDE4 is prepared on the basis of hydrolysing the glutarimide ring of thalidomide. These analogs do not have an acidic chiral hydrogen, unlike thalidomide, and would therefore be expected to be chirally stable. On the phenyl ring, a 3,4-dialkoxyphenyl moiety (Figure 6) is a known pharmacophore in PDE4 inhibitors such as rolipram. Optimal activity is achieved with a methoxy group at the 4-position (X2) and a bigger group, such as cyclopentoxy at the 3-position carbon (X3). However the thalidomide PDE4 inhibitory analogs do not follow the SAR of rolipram analogs directly. For thalidomide analogs, an ethoxy group at X3 and a methoxy group at X2, with X1 being just a hydrogen, gave the highest PDE4 and TNF-α inhibition. Substitutes larger than diethoxy at the X2–X3 position had decreased activity. The effects of these substitutions seem to be mediated by steric effects. For the Y-position, a number of groups have been explored. Substituted amides that were larger than methylamide (CONHCH3) decrease PDE4 inhibition activity. Using a carboxylic acid as a starting point, an amide group has similar PDE4 inhibition activity but both groups were shown to be a considerably less potent than a methyl ester group, which had about six-fold increase in PDE4 inhibitory activity. Sulfone group had similar PDE4 inhibition as the methyl ester group. 
The best PDE4 inhibition was observed when a nitrile group was attached, which has 32 times more PDE4 inhibitory activity than the carboxyl acid. Substituents at Y leading to increasing PDE4 inhibitory activity thus followed the order: COOH ≤ CONH2 ≤ COOCH3 ≤ SO2CH3 < CN Substitutions on the phthaloyl ring have been explored and it was noticed that nitro groups at the C4 or C5 location decreased activity but C4 or C5 amino substitution increased it dramatically. When the substitution at the 4 (Z) location on the phthaloyl ring was examined, hydroxyl and methoxy groups seem to make the analog a less potent PDE4 inhibitor. An increase in activity was observed with amino and dimethylamino to a similar extent but a methyl group improved the activity further than the aforementioned groups. A 4-N-acetylamino group had slightly lower PDE4 inhibitory activity, compared with the methyl group, but increased the compound's TNF-α inhibitory activity to a further extent. Substituents at Z leading to increasing PDE4 inhibitory activity thus followed the order: N(CH3)2 ≤ NH2 < NHC(O)CH3 < CH3 Angiogenesis inhibition For angiogenesis inhibition activity, an intact glutarimide ring seems to be required. Different groups were tested in the R position. The substances that had nitrogen salts as the R group showed good activity. The improved angiogenesis inhibitory activity could be due to increased solubility or that the positively charged nitrogen has added interaction with the active site. Tetrafluorination of the phthaloyl ring seems to increase the angiogenesis inhibition. Synthesis Described below are schemes for synthesizing thalidomide, lenalidomide, and pomalidomide, as reported from prominent primary literature. Note that these synthesis schemes do not necessarily reflect the organic synthesis strategies used to synthesize these single chemical entities. Thalidomide Synthesis of thalidomide has usually been performed as seen in scheme 1. This synthesis is a reasonably simplistic three step process. The downside of this process however is that the last step requires a high-temperature melt reaction which demands multiple recrystallizations and is not compliant with standard equipment. Scheme 2 is the newer synthesis route which was designed to make the reaction more direct and to produce better yields. This route uses L-glutamine rather than L-glutamic acid as a starting material and by letting it react with N-carbethoxyphthalimide gives N-phthaloyl-L-glutamine (4), with 50–70% yield. The substance 4 is then stirred in a mixture with carbonyldiimidazole (CDI) with enough 4-dimethylaminopyridine (DMAP) in tetrahydrofuran (THF) to catalyze the reaction and heated to reflux for 15–18 hours. During the reflux thalidomide crystallizes out of the mixture. The final step gives 85–93% yield of thalidomide, bringing the total yield to 43–63%. Lenalidomide and pomalidomide Both of the amino analogs are prepared from the condensation of 3-aminopiperidine-2,6-dione hydrochloride (Compound 3) which is synthesized in a two step reaction from commercially available Cbz-L-glutamine. The Cbz-L-glutamine is treated with CDI in refluxing THF to yield Cbz-aminoglutarimide. To remove the Cbz protecting group hydrogenolysis, under 50–60 psi of hydrogen with 10% Pd/C mixed with ethyl acetate and HCl, was performed. 
The formulated hydrochloride (Compound 3 in Scheme 3) was then reacted with 3-nitrophthalic anhydride in refluxing acetic acid to yield the 4-nitro substituted thalidomide analog and the nitro group then reduced with hydrogenation to give pomalidomide. Lenalidomide is synthesized in a similar way using compound 3 (3-aminopiperidine-2,6-dione) treated with a nitro-substituted methyl 2-(bromomethyl) benzoate, and hydrogenation of the nitro group. Pharmacokinetics Thalidomide Lenalidomide Pomalidomide See also Cancer Multiple myeloma Drug design Thalidomide Lenalidomide Pomalidomide Apremilast Organic chemistry Health crisis Immunomodulation therapy Immunosuppressant Immunomodulatory drug References Glutarimides Immunosuppressants Phthalimides Teratogens Medicinal chemistry PDE4 inhibitors Orphan drugs TNF inhibitors Cereblon E3 ligase modulators
Cereblon E3 ligase modulator
[ "Chemistry", "Biology" ]
5,991
[ "Biochemistry", "Teratogens", "Medicinal chemistry", "nan" ]
41,938,141
https://en.wikipedia.org/wiki/Aluminum%20polymer%20composite
An aluminum polymer composite (APC) material combines aluminum with a polymer to create materials with interesting characteristics. In 2014 researchers used a 3D laser printer to produce a polymer matrix. When coated with a 50–100 nanometer layer of aluminum oxide, the material was able to withstand loads of as much as 280 megapascals, making it stronger than any other known material whose density is less than that of water. Aluminum foam Spherical aluminum foam pieces bonded by polymers produced foams that were 80–95% metal. Such foams were test-manufactured on an automated assembly line and are under consideration as automobile parts. Thermal conductivity Experimentally determined thermal conductivities of specific APCs matched both the Agari and Bruggeman models, indicating that both provide a good estimate of thermal conductivity. The experimental values of both thermal conductivity and diffusivity have shown better heat transport for composites filled with large particles. See also Aluminium composite panels Aluminum foam References External links Composite materials 3D printing
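For the Bruggeman model mentioned above, one common symmetric effective-medium form (assuming spherical inclusions) can be solved numerically for the composite conductivity. The sketch below does this by bisection; the matrix and filler conductivities and the filler fraction are illustrative values, not measurements for any particular APC, and the relation shown is a general effective-medium formula rather than necessarily the exact variant used in the cited experiments.

```python
# Minimal sketch: symmetric Bruggeman effective-medium estimate of composite thermal
# conductivity (spherical inclusions assumed). Input values are made-up illustrative numbers.

def bruggeman_conductivity(k_matrix: float, k_filler: float, filler_fraction: float) -> float:
    """Solve sum_i phi_i * (k_i - k_eff) / (k_i + 2*k_eff) = 0 for k_eff by bisection."""
    def residual(k_eff: float) -> float:
        return ((1 - filler_fraction) * (k_matrix - k_eff) / (k_matrix + 2 * k_eff)
                + filler_fraction * (k_filler - k_eff) / (k_filler + 2 * k_eff))

    lo, hi = min(k_matrix, k_filler), max(k_matrix, k_filler)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # residual is positive when k_eff is below the root, negative when above it
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative values: polymer matrix ~0.2 W/mK, aluminum filler ~200 W/mK, 30% filler.
print(f"k_eff ~ {bruggeman_conductivity(0.2, 200.0, 0.30):.2f} W/mK")
```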
Aluminum polymer composite
[ "Physics" ]
198
[ "Materials", "Composite materials", "Matter" ]
41,939,036
https://en.wikipedia.org/wiki/Termite-inspired%20robots
Termite-inspired robots or TERMES robots are biomimetic autonomous robots capable of building complex structures without a central controller. A prototype team of termite-inspired robots was demonstrated by Harvard University researchers in 2014, following four years of development. Their engineering was inspired by the complex mounds that termites build, and was accomplished by developing simple rules to allow the robots to navigate and move building blocks in their environment. By following these simple rules, the robots could construct complex structures through a process called stigmergy, without requiring constant human instruction or supervision. Background Social insects such as termites are capable of constructing elaborate structures such as mounds with complex tunnel systems. They are capable of doing so without an overall plan for the design of a structure, without directly communicating information with each other about how to construct a structure, and without a leader to guide them in constructing a specific structure. Roboticists have been interested in the problem of whether robots can be designed with the limited sensory and motor capabilities of insects such as termites and yet, by following simple rules, construct elaborate and complex structures. Robots As part of the TERMES project, the Harvard team designed each robot to perform a few simple behaviors: to move forward, move backward, turn, move up or down a step the size of a specially designed brick, and to move while carrying a brick on top of it. To perform these locomotor behaviors, each robot was equipped with whegs. To detect features of its environment and other robots, each robot was equipped with the following sensors: seven infrared sensors to detect black and white patterns for navigation; an accelerometer to detect tilt angle for climbing; and five ultrasound sonar detectors for determining distance from the perimeter and to nearby robots. To lift, lower, and place the specially designed bricks, each robot had an arm with a spring-loaded gripper to hold a brick while carried. The project's specially designed construction bricks were 21.5 cm × 21.5 cm × 4.5 cm, which was bigger than the footprint (17.5 cm × 11.0 cm) of the robots. Constructing structures During the Harvard trials, groups of termite-inspired robots constructed structures by stigmergy. That is, instead of directly communicating with each other, the robots detected features of their environment and followed simple rules for moving and placing bricks in response to the configuration of bricks that existed at a given time. The rules for constructing a given structure were designed to guarantee that different structures emerged depending on the number of robots involved and on the initial placement of bricks. The TERMES robots constitute a proof of concept for the development of relatively simple teams of robots that are capable of constructing complex structures in, for example, remote or hostile locations. The Harvard researchers who built the initial termite-inspired robots have speculated that future robots could be designed to build a base for human habitation on Mars in preparation for the arrival of human astronauts. More immediate possibilities include teams of termite-inspired robots that are capable of moving sandbags and building levees in flood zones. References External links Official website of the Harvard TERMES project Robotics American inventions 2014 robots
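A rough flavour of the stigmergic rule-following described above can be given in a few lines of code: a robot repeatedly senses the local brick configuration and adds a brick only where a local rule permits. Everything in the sketch (the grid representation, the height-difference rule, the traversal order and the target shape) is invented for illustration and is not the TERMES project's actual control algorithm.

```python
# Minimal sketch of a stigmergy-style building loop (illustrative only: the grid, the
# local rule and the traversal order are invented and are not the TERMES algorithm).

from typing import Dict, Tuple

Coord = Tuple[int, int]

def can_place(heights: Dict[Coord, int], site: Coord, target: Dict[Coord, int]) -> bool:
    """Local rule: build at a site only while it is below its target height and its
    current height is within one brick of every neighbouring site of the structure."""
    if heights.get(site, 0) >= target.get(site, 0):
        return False
    x, y = site
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return all(abs(heights.get(site, 0) - heights.get(n, 0)) <= 1
               for n in neighbours if n in target)

def build(target: Dict[Coord, int], max_passes: int = 100) -> Dict[Coord, int]:
    """A single robot sweeps the sites, placing bricks wherever the local rule fires,
    until a full pass places nothing (the structure has emerged from the local rule)."""
    heights: Dict[Coord, int] = {}
    for _ in range(max_passes):
        placed = False
        for site in sorted(target):          # fixed sweep stands in for the robot's path
            if can_place(heights, site, target):
                heights[site] = heights.get(site, 0) + 1
                placed = True
        if not placed:
            break
    return heights

# Hypothetical target: a three-column staircase of heights 1, 2 and 3.
print(build({(0, 0): 1, (1, 0): 2, (2, 0): 3}))
```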
Termite-inspired robots
[ "Engineering" ]
639
[ "Robotics", "Automation" ]
48,148,493
https://en.wikipedia.org/wiki/Divine%20equilibrium
The Divine Equilibrium is a refinement of Perfect Bayesian equilibrium in a signaling game, proposed by Banks and Sobel (1987). One of the most widely applied refinements is the D1-Criterion. It restricts the receiver's beliefs to those sender types for whom deviating to an off-equilibrium message could improve their outcome relative to the equilibrium payoff. In addition to the restriction suggested by the intuitive criterion, the divinity criterion considers only the types that are most likely to send the off-equilibrium message. If more than one sender type could benefit from the deviation, the Intuitive Criterion assigns equal probabilities to all of them, whereas the D1-Criterion allows different probabilities. Example The following example is adapted from the original Banks and Sobel (1987). Consider a case of sequential settlement. The defendant (the sender of messages) has two types: t1 and t2. Type t1 is not negligent and type t2 is negligent. The defendant can offer the plaintiff a high settlement or a low settlement (the messages). The plaintiff (the receiver of messages) can either accept or reject the settlement offer. The payoffs are described below. If the plaintiff accepts the high settlement, the money transfer is larger (5 instead of 3 in the low settlement case). If the plaintiff rejects the offer, the negligent type t2 receives a higher penalty from the court (-11 compared with -6 for type t1). However, since the plaintiff does not know the type, he is better off accepting the settlement when the defendant is not negligent. Consider the prior probability to be half and half, meaning that before seeing the settlement offer the plaintiff believes the defendant is negligent with probability 50 percent. In this case, the game has two pooling equilibria. In the first equilibrium (E1) both types of defendant choose a low settlement, and the plaintiff accepts the offer. In the second equilibrium (E2) both types of defendant choose a high settlement, and the plaintiff accepts the offer. However, in order to support these equilibria, one also has to specify the plaintiff's beliefs if he sees a different type of offer (commonly referred to as an off-equilibrium message). From the payoffs we can see that under E1 the defendant is already getting his highest possible payoff (-3). Thus, regardless of what the plaintiff thinks, the defendant has no incentive to deviate. This reasoning does not hold if the equilibrium is E2. The idea of equilibrium refinement is precisely to provide a reasonable argument for selecting the most "intuitive" outcome. To see how the refinement works, we first need to check what kind of beliefs can support E2. In this case, for the defendant to have no incentive to deviate to the low settlement, the plaintiff has to respond with "reject" when he sees a low settlement, which means that the plaintiff must believe the defendant is negligent (type t2) with probability greater than 60 percent (in that case, "accept" gives the plaintiff a payoff of 3, while "reject" gives an expected payoff of slightly more than 5 × 60% = 3). Implicitly, this belief says that type t2 is more likely to deviate than t1. The D1 criterion (or, more generally, the divinity criterion) is built on the idea of deciding which type is actually more likely to deviate.
If we look again at the payoffs under the low-settlement case, then regardless of how the plaintiff assigns his probability of choosing "accept" or "reject", the payoff for type t1 is higher than for type t2 (for example, if the plaintiff chooses equally between "accept" and "reject", type t1 gets -4.5, whereas type t2 gets -7). Thus, compared with the equilibrium payoff of -5 under E2, one can argue that whenever t2 wants to deviate, type t1 also wants to deviate, so a reasonable belief should assign a higher probability to type t1. The D1 criterion pushes this reasoning to the extreme and requires that the plaintiff believe that the deviation (if observed) comes from t1, as the numerical check below illustrates. As a result, E2 is not plausible because it contradicts the refinement. References Game theory equilibrium concepts
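To make the D1 logic concrete, here is a minimal Python check using only the payoffs quoted in the example (-3 for an accepted low settlement, -6 or -11 for a rejected one, and -5 under E2); the function and variable names are purely illustrative.

```python
import math

EQUILIBRIUM = -5.0      # payoff to either type under E2 (high settlement, accepted)
ACCEPT_PAYOFF = -3.0    # low settlement accepted (both types)
REJECT = {"t1": -6.0, "t2": -11.0}   # low settlement rejected, from the text

def deviation_payoff(p_accept, t):
    """Expected payoff to type t from deviating to the low settlement."""
    return p_accept * ACCEPT_PAYOFF + (1 - p_accept) * REJECT[t]

def threshold(t):
    """Smallest P(accept) at which type t strictly gains by deviating."""
    return (EQUILIBRIUM - REJECT[t]) / (ACCEPT_PAYOFF - REJECT[t])

for t in ("t1", "t2"):
    p = threshold(t)
    assert math.isclose(deviation_payoff(p, t), EQUILIBRIUM)  # indifference point
    print(f"{t} profits from deviating once P(accept) > {p:.3f}")
# t1 needs only P(accept) > 1/3 while t2 needs P(accept) > 3/4, so any plaintiff
# response tempting t2 also tempts t1; D1 therefore attributes a low-settlement
# deviation to t1, overturning the pessimistic belief needed to sustain E2.
```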
Divine equilibrium
[ "Mathematics" ]
943
[ "Game theory", "Game theory equilibrium concepts" ]
48,153,673
https://en.wikipedia.org/wiki/Mobility%20as%20a%20service
Mobility as a service (MaaS) is a type of service that enables users to plan, book, and pay for multiple types of mobility services through an integrated platform. Transportation services from public and private transportation providers are combined through a unified gateway, usually via an app or website, that creates and manages the trip and payments, including subscriptions, with a single account. The key concept behind MaaS is to offer travelers flexible mobility solutions based on their travel needs; "mobility as a service" thus also refers to the broader shift away from personally owned modes of transportation and towards mobility provided as a service. Travel planning typically begins in a journey planner. For example, a trip planner can show that the user can get from one destination to another by using a train/bus combination. The user can then choose their preferred trip based on cost, time, and convenience. At that point, any necessary bookings (e.g. calling a taxi, reserving a seat on a long-distance train) would be performed as a unit. It is expected that this service should allow roaming, that is, the same end-user app should work in different cities, without the user needing to become familiar with a new app or to sign up to new services. Together with other emerging vehicular technologies such as automated driving, connected cars and electric vehicles, MaaS is contributing to a new type of future mobility built around autonomous, connected, electric and shared vehicles. Trend towards MaaS Booming demand for more personalised transport services has created a market space and momentum for MaaS. The movement towards MaaS is fueled by a myriad of innovative new mobility service providers such as carpool and ridesharing companies, bicycle-sharing programs, scooter-sharing systems and carsharing services, as well as on-demand "pop-up" bus services. On the other hand, the trend is motivated by the anticipation of self-driving cars, which puts into question the economic benefit of owning a personal car over using on-demand car services, which are widely expected to become significantly more affordable when cars can drive autonomously. This shift is further enabled by improvements in the integration of multiple modes of transport into seamless trip chains, with bookings and payments managed collectively for all legs of the trip. In London, commuters may use a contactless payment bank card (or a dedicated travel card called an Oyster card) to pay for their travel. Between the multiple modes, trips, and payments, data is gathered and used to help people's journeys become more efficient. In the government space, the same data allows for informed decision-making when considering improvements in regional transit systems. Most MaaS studies have been done in the Global North, but there is demand in the Global South too, where proposals may have different characteristics, such as support for offline access and integration with informal transport. Potential impacts Mobility as a service may cause a decline in car ownership. If average vehicle occupancy for on-road time decreases, total vehicle-kilometres-travelled will increase. MaaS could significantly increase the efficiency and utilization of transit providers that contribute to the overall transit network in a region. 
The predictions were validated by the UbiGo trial in Gothenburg, during which many private cars were deregistered for the duration of the trial and utilization of existing transit services increased the efficiency of the overall network. Ultimately, a more efficient network coupled with new technology such as autonomous vehicles could significantly reduce the cost of public transit. MaaS could improve ridership habits and transit network efficiency, decrease costs to the user, improve utilization of MaaS transit providers, reduce city congestion as more users adopt MaaS as a main source of transit, and reduce emissions as more users rely on the public transit and autonomous-vehicle components of a MaaS network. MaaS equally has many benefits for the business world: understanding the total cost of business mobility could help travel decision makers in the corporate world save hundreds of thousands in travel costs. By analysing data and costs attributed to "business mobility" (e.g. vehicle rental costs, fuel costs, parking charges, train ticket admin fees and even the time taken to book a journey), businesses can make informed decisions about travel policy, fleet management and expense claims. Some MaaS companies suggest that in journey planning alone, it can take up to 9 steps before a simple travel arrangement is booked. However, there are also many anticipated challenges for sustainability and governance stemming from MaaS, ranging from increased energy use and reduced health benefits to conflicts across organizations. MaaS also holds remarkable potential for revolutionizing public transport systems in developing countries. Since developing countries tend to depend heavily on informal and unstructured public transport modes, the concept of MaaS, according to some researchers, could hold the key to providing more efficient, equitable and accessible transportation services. In these contexts, however, MaaS may need to be re-envisioned and tailored to the unique challenges of the developing world in order to create the desired impacts. Payment methods The concept assumes use through a mobile app, although the concept can also be used with any type of payment medium (transit card, ticket, etc.). The concept is then broken down further into two payment models: The Monthly subscription model assumes that enough users consume public transit services on a monthly basis to offer a bundled transit service. Users pay a monthly fee and receive bundled transit services such as unlimited travel on urban public transport in addition to a fixed number of taxi kilometers. The monthly subscription model incorporates a well-funded, commercially operated "MaaS operator" which will purchase transport services in bulk and provide guarantees to users. In Hanover, Germany, the MaaS operator can purchase bulk transit services and act as the middleman through the product Hannovermobil. It is not necessary that the operator include all forms of transport, but just enough to be able to provide reasonable guarantees. A monthly subscription will also provide enough funding for the MaaS operator to purchase transit services in sufficient volume to use market power to achieve competitive prices. In particular, a MaaS operator may improve the problems of low utilization; for example, in Helsinki, taxi drivers spend 75% of their working time waiting for a customer and drive 50% of their kilometers without generating revenue. A MaaS operator can address this problem by guaranteeing a base salary to taxi drivers through existing employers, as the toy sketch below illustrates. 
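The sketch below uses the 75% idle-time figure quoted for Helsinki; the fare rate and the assumed effect of guaranteed demand are hypothetical numbers chosen only for the arithmetic.

```python
# A toy sketch of the utilization argument above. The 75% idle-time figure is
# the Helsinki number quoted in the text; the fare rate and the assumed effect
# of bulk-purchased, guaranteed demand are hypothetical.
HOURS_PER_SHIFT = 8.0
FARE_PER_BUSY_HOUR = 40.0   # hypothetical gross fare rate

def shift_revenue(idle_fraction: float) -> float:
    """Gross revenue for one shift, given the fraction of time spent waiting."""
    return HOURS_PER_SHIFT * (1 - idle_fraction) * FARE_PER_BUSY_HOUR

baseline = shift_revenue(0.75)   # from the text: 75% of working time is idle
with_maas = shift_revenue(0.50)  # hypothetical: guaranteed demand cuts idling to 50%

print(f"baseline revenue per shift : {baseline:.0f}")
print(f"with guaranteed demand     : {with_maas:.0f} "
      f"(+{100 * (with_maas / baseline - 1):.0f}%)")
```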
The Pay-as-you-go model operates well in environments with a high number of "one-off" riders (tourists, transit networks in areas with high car adoption, etc.). Each leg of the booked trip (each train trip, taxi trip, etc.) is priced separately, with the price set by the transport service provider. In this model, mobile applications would operate as search engines, seeking to draw all transport service providers into a single application and enabling users to avoid having to interact with multiple gateways in an attempt to assemble the optimal trip. Many cities, including Vienna and Stuttgart, have cards which pay for intermodal public transport, but none yet includes taxis or on-demand buses in the service. Both models have similar requirements, such as trip planners to construct optimal trip chains, and technical and business relationships with transport service providers (e.g. a taxi booking/payment API, e-ticketing, QR codes on urban buses and metros, etc.). Impact of autonomous vehicles As the development of the autonomous car accelerates, the company Uber has announced that it plans to transition its app to a fully autonomous service and aims to be cheaper than car ownership. Many automobile manufacturers and technology companies have announced plans or are rumored to be developing autonomous vehicles, including Tesla, Mobileye, General Motors, Waymo, Apple, and Local Motors. Autonomous vehicles could allow the public to use roads in low cost-per-kilometre, self-navigating vehicles to a preferred destination at a significantly lower cost than current taxi and ridesharing prices. The vehicles could have a large impact on the quality of life in urban areas and form a critical part of the future of transportation, while benefiting the traveler, the environment, and even other sectors such as healthcare. Modelling scenarios on the deployment of shared autonomous vehicles in the city of Lisbon were conducted by PTV as part of the International Transport Forum's Corporate Partnership Board. This model shows that the positive impacts on transport networks and mobility in congested places will be realised to their greatest extent with increases in shared minibus/bus-scale public transport in addition to ride-sharing, whereas autonomous taxis with individual passengers would see a large increase in vehicle kilometres and congestion. In January 2016, the President of the United States, Barack Obama, secured funding to be used over the following ten years to support the development of autonomous vehicles. 
The idea was that this would become so ubiquitous and seamless that mobility could be "backgrounded" in the urban fabric, similar to other essential utilities or services. It would come to be seen as commonplace as turning on the tap to get water or flicking the light switch to get illumination; hence mobility-as-a-service. The idea then gained widespread publicity through the efforts of Sampo Hietanen, CEO of ITS Finland (later founder and CEO of MaaS Global), and Sonja Heikkila, then a Master's student at Aalto University, and the support of the Finnish Ministry of Transport and Communication. MaaS became a popular topic at the World Congress on Intelligent Transport Systems 2015 in Bordeaux, and subsequently the Mobility as a Service Alliance was formed. In 2017 the MaaS Alliance published its white paper on Mobility as a Service and on how to create the foundation for a thriving MaaS ecosystem. The EU-funded "Mobinet" project has laid some of the groundwork for MaaS, e.g. pan-European identity management of travelers, payments, and links to trip planners. In September 2019, Berlin's public transport authority Berliner Verkehrsbetriebe (BVG) continued Mobility as a Service development by launching "Jelbi", the first large-scale, city-owned project of its kind in the world, together with the Lithuanian mobility startup Trafi. In the United States, the US Department of Transportation began a series of demonstration projects called the "Mobility on Demand Sandbox Program" in 2016. Overseen by the Federal Transit Administration (FTA), the goals of the program included improved efficiency, effectiveness, and customer experience of transportation services. Eleven cities received almost $8 million to conduct demonstration projects, which were evaluated based on performance measures provided by the project partners as well as independent evaluators. The Palo Alto, California "Adaptive Mobility with Reliability and Efficiency" (AMORE) project tested a flexible service for commuting or first/last-mile connections to fixed-route service in relatively high-income, high-vehicle-ownership communities. The flexibility of a transit-hailing private company was melded with the efficiency of a fixed-route bus by grouping customers traveling in similar patterns and allowing quicker connections to the core transit system. The evaluation revealed the AMORE service worked as anticipated in the test environment, but lack of demand during implementation limited its effectiveness. Successor programs are under development. The Puget Sound First/Last Mile Partnership with Via to Transit project was designed to improve mobility by expanding access to transit: developing a partnership with a private-sector mobility company, integrating the services with existing transit services, broadening access to a wider audience (including populations without smartphones, those who need wheelchair-accessible vehicles, unbanked populations, low-income populations, people of color, and populations with limited English proficiency), and informing best practices and FTA guidance for public-private partnerships and novel transit service delivery models. Although the project had to be terminated when the COVID-19 pandemic began, the evaluation found that the service improved and increased access to transit. Through significant public-private coordination, the pilot provided valuable lessons to inform how transit agencies can leverage on-demand first/last-mile services to enhance mobility. 
List of current MaaS systems by country Austria The SMILE (Simply MobILE) project started in 2012 and the trial began in November 2014. Belgium In September 2023, Brussels launched Floya, a MaaS app for booking public transport, scooters, bikes, and cars. Finland Whim started in Helsinki in 2016 and provided 1.8 million trips a year after launch. Germany Qixxit was a nationwide planning app by Deutsche Bahn. It was sold to lastminute.com in 2019. The Netherlands In 2019 seven MaaS projects were being organized around the country. Sweden UbiGo started as a pilot in Gothenburg and then launched in Stockholm. United Kingdom Transport for West Midlands launched a trial in 2018 that was promoted as the first MaaS app in the UK, but it "did not live up to expectations", according to TfWM's Head of Transport Innovation. A new trial is expected to launch in 2024. United States Go Denver was launched in February 2016, and it had over 7,000 users by June 2017. Pittsburgh ran the "Move PGH" two-year pilot program from July 2021 to July 2023. In 2022 Tampa launched a six-month pilot in collaboration with Moovit, recruiting 200 participants to provide feedback. The app included mapping, planning, mobile ticketing, real-time arrival information, and parking options. The pilot was funded with $150,000 each from the Florida Department of Transportation and the city. See also Intelligent transportation system Demand-responsive transport as a service References External links MaaS Alliance As a service Transport culture
Mobility as a service
[ "Physics" ]
2,937
[ "Physical systems", "Transport", "Transport culture" ]
48,160,061
https://en.wikipedia.org/wiki/Unbalanced%20circuit
In electrical engineering, an unbalanced circuit is one in which the transmission properties between the ports of the circuit are different for the two poles of each port. It is usually taken to mean that one pole of each port is bonded to a common potential (single-ended signalling), but more complex topologies are possible. This common point is commonly called ground or earth, but it may well not actually be connected to electrical ground at all. Unbalanced circuits are to be contrasted with balanced circuits, where the transmission paths are impedance-balanced (the impedances are identical). Examples Passive filter The figure shows two versions of a simple low-pass filter: unbalanced version (A) and balanced version (B). Both circuits have exactly the same effect as filters; they have the same transfer function. However, in the unbalanced circuit, the bottom pole of the input port is connected directly to the bottom pole of the output port. Thus, the impedance from input to output between the top poles is greater than the impedance between the bottom poles. For a circuit to be balanced, the impedance of the top leg must be the same as the impedance of the bottom leg so that the transmission paths are identical. To achieve this, the inductor in the balanced version is split into two equal inductors, each with half the original inductance, a change that leaves the transfer function unchanged, as the numeric check below shows. Tuned amplifier The figure shows the circuit of a typical tuned amplifier. The lower pole of the input port is connected directly to the lower pole of the output port. This connection also forms the negative rail of the supply voltage. This scheme is typical of many electronic circuits that are not required to have differential inputs or outputs. An example of a circuit that does not follow this pattern is the differential amplifier. Advantages and disadvantages The basic advantage of using an unbalanced circuit topology, as compared to an equivalent balanced circuit, is that far fewer components are required. The difficulties come when a port of the circuit is to be connected to a transmission line or to an external device that is designed for balanced operation. Many transmission lines are of an intrinsically unbalanced format, such as the widely used coaxial cable. In such cases the circuit can be directly connected to the line. However, connecting an unbalanced circuit to, for instance, a twisted-pair line, which is an intrinsically balanced format, makes the line susceptible to common-mode interference. For this reason, balanced lines are normally driven from balanced circuits. One option is to redesign the circuit so that it is properly impedance-balanced. If that is not possible or desirable, a balun, a device for interfacing balanced and unbalanced circuits, may be used. References Don Davis, Eugene Patronis, Sound System Engineering, p. 433, CRC Press, 2014. Douglas Self, Audio Power Amplifier Design, pp. 649-654, Taylor & Francis, 2013. R.S. Sedha, A Textbook of Electronic Circuits, p. 627, S. Chand, 2008. Electronic circuits
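As a sanity check on the passive-filter example, the following sketch computes the transfer function numerically with the series inductance either kept whole in the top leg or split between the two legs; the component values are arbitrary illustrations, not taken from the article.

```python
# A quick numeric check: moving half of the series inductance into the bottom
# leg leaves the transfer function unchanged, because both half-inductors lie
# in series around the same loop. Component values are hypothetical.
import math

L, C, R_LOAD = 10e-3, 1e-6, 600.0   # hypothetical low-pass into a resistive load

def gain(series_inductors, f):
    """|Vout/Vin| for series inductors feeding a shunt C in parallel with R_LOAD."""
    w = 2 * math.pi * f
    z_series = sum(complex(0, w * lk) for lk in series_inductors)
    zc = 1 / complex(0, w * C)
    z_shunt = zc * R_LOAD / (zc + R_LOAD)
    return abs(z_shunt / (z_series + z_shunt))

for f in (100.0, 1e3, 1e4):
    unbalanced = gain([L], f)           # all inductance in the top leg
    balanced = gain([L / 2, L / 2], f)  # split between top and bottom legs
    assert math.isclose(unbalanced, balanced)
    print(f"{f:>8.0f} Hz  |H| = {unbalanced:.4f}")
```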
Unbalanced circuit
[ "Engineering" ]
620
[ "Electronic engineering", "Electronic circuits" ]
46,558,709
https://en.wikipedia.org/wiki/Dictyophorine
Dictyophorines are a pair of sesquiterpenes isolated from the fungus Phallus indusiatus (Dictyophora indusiata). These compounds are based on the eudesmane skeleton, a common structure found in plant-derived flavors and fragrances, and they are the first eudesmane derivatives isolated from fungi. Dictyophorines A and B promote the synthesis of nerve growth factor in astroglial cells. References Sesquiterpenes Cyclohexenes Cyclic ketones Isopropenyl compounds
Dictyophorine
[ "Chemistry" ]
120
[ "Functional groups", "Organic compounds", "Isopropenyl compounds", "Organic compound stubs", "Organic chemistry stubs" ]
46,564,090
https://en.wikipedia.org/wiki/Pawe%C5%82%20Kuczy%C5%84ski
Paweł Kuczyński is a Polish-born political satirist and philosopher whose artwork carries an anti-war message. Awards 2013: silver plate at the Salon of Antiwar Cartoons in Serbia. 2010: silver prize at the Dicaco International Cartoon Contest. 2008: golden hat at the international Cartoonfestival Knokke-Heist. Eryk Award (named after Eryk Lipiński) by the Association of Polish Cartoonists. Exhibitions He has held an exhibition in Brussels. In 2015 he exhibited at Cartoon Xira soon after the Charlie Hebdo attack. External links Personal web site in Polish www.pawelkuczynski.com References Polish graphic designers Polish caricaturists Polish satirists Polish male writers Draughtsmen 21st-century Polish painters 21st-century male artists 1976 births Political artists Artists from Szczecin Living people Polish male painters
Paweł Kuczyński
[ "Engineering" ]
174
[ "Design engineering", "Draughtsmen" ]
46,564,590
https://en.wikipedia.org/wiki/Dicyanamide
Dicyanamide, also known as dicyanamine, is an anion with the formula [N(CN)2]−. It contains two cyanide groups bound to a central nitrogen anion. The chemical is formed by decomposition of 2-cyanoguanidine. It is used extensively as a counterion of organic and inorganic salts, and also as a reactant for the synthesis of various covalent organic structures. Dicyanamide was used as an anionic component in an organic superconductor that, when reported in 1990, had the highest transition temperature in its structural class. Dean Kenyon has examined the role of this chemical in reactions that can produce peptides. A co-worker then considered this reactive nature and examined the possible role dicyanamide may have had in primordial biogenesis. References Nitriles Anions
Dicyanamide
[ "Physics", "Chemistry" ]
177
[ "Matter", "Anions", "Functional groups", "Nitriles", "Ions" ]
39,204,523
https://en.wikipedia.org/wiki/Energetically%20modified%20cement
Energetically modified cements (EMCs) are a class of cements made from pozzolans (e.g. fly ash, volcanic ash, pozzolana), silica sand, blast furnace slag, or Portland cement (or blends of these ingredients). The term "energetically modified" arises by virtue of the mechanochemistry process applied to the raw material, more accurately classified as "high energy ball milling" (HEBM). At its simplest this means a milling method that invokes high kinetics by subjecting "powders to the repeated action of hitting balls", as compared to (say) the low kinetics of rotating ball mills. This causes, amongst other effects, a thermodynamic transformation in the material that increases its chemical reactivity. For EMCs, the HEBM process used is a unique form of specialised vibratory milling discovered in Sweden and applied only to cementitious materials, here called "EMC Activation". By improving the reactivity of pozzolans, their strength-development rate is increased. This allows for compliance with modern product-performance requirements ("technical standards") for concretes and mortars. In turn, this allows for the replacement of Portland cement in concrete and mortar mixes, which has a number of benefits for their long-term qualities. Energetically modified cements have a wide range of uses. For example, EMCs have been used in concretes for large infrastructure projects in the United States, meeting U.S. concrete standards. Justification The term "energetically modified cement" incorporates a simple thermodynamic descriptor to refer to a class of cements produced using a specialised highly intensive milling process first discovered in 1993 at Luleå University of Technology (LTU) in Sweden. The transformatory process is initiated entirely mechanically, as opposed to heating the materials directly. The mechanisms of mechanochemical transformations are often complex and different from "traditional" thermal or photochemical mechanisms. HEBM can transform both the physical and thermodynamic properties: for example, it "can lead to glass formation from elemental powder mixtures as well as by amorphization of intermetallic compound powders". The effects of HEBM transformation cause a thermodynamic change that resides ultimately in a modified Gibbs energy. The process increases the binding capacity and chemical reactivity rates of the materials transformed. Academic work and research regarding the "self-healing" properties of energetically modified cements is ongoing at LTU. For example, EMC research has received awards from the Elsa och Sven Thysells stiftelse för konstruktionsteknisk forskning (Elsa & Sven Thysell Foundation for Construction Engineering Research) of Sweden. The contribution of EMCs to the domain of mechanochemistry itself has also been recognised. Etymology The term "energetically modified cement" was first used in 1992 by Vladimir Ronin, introduced in a paper by Ronin et al. dated 1993 and presented at a formal meeting of the academic Nordic Concrete Research group. The process was refined by Ronin and others, including Lennart Elfgren (now Professor Emeritus of LTU, Department of Civil, Environmental and Natural Resources Engineering). In 2023, LTU awarded Elfgren the "Vice-Chancellor's Medal for Merit for outstanding and meritorious work" by virtue of his work "...for the spread of new knowledge and understanding of, in particular, the concrete construction field". 
At the 45th World Exhibition of Invention, Research and Innovation, held in 1996 in Brussels, Belgium, EMC Activation was awarded a gold medal with mention by EUREKA, the European inter-governmental (research and development) organisation, for "modification énergique de ciments". The term "energetically modified" has been used elsewhere—for example as recently as 2017—although such usage does not denote that the method used was EMC Activation as defined here. Overview The claims made include: An EMC is a fine powder (typical of all cements) whose colour depends on the material processed. EMCs are produced using only a "fraction" of the energy used in Portland cement production (claimed ~100 kWh/tonne, <8% of Portland cement). No CO2 is released by the process. It is "zero emissions". The purpose of an EMC is to replace the Portland cement requirement in the mortar or concrete being used. More than 70% replacement is claimed. EMC Activation is a dry process. No noxious fumes are released. EMC Activation is a low-temperature process, even though temperatures can be "momentarily extreme" at "sub-micron" scales. EMCs require no chemicals for their thermodynamic transformation. There are several types of EMCs, depending on the raw materials transformed. Depending on user requirements, delivered dry products may also comprise a minority proportion of "high clinker" Portland cement. Each type of EMC has its own performance characteristics, including mechanical load and strength development. Concretes cast from EMCs may yield significant "self-healing" capabilities. The most frequently used EMCs are made from fly ash and natural pozzolans. These are relatively abundant materials, and the performance characteristics can exceed those of Portland cement. In 2009, fly ash EMCs were demonstrated to exceed the 'Grade 120 Slag' benchmark per ASTM C989 — the most reactive form of cementitious blast furnace slag. Silica sand and granite can also be treated by the process to replace Portland cement. EMC products have been extensively tested by independent labs and certified for use by several US DOTs, including in Federal Highway Administration projects. EMCs comply with respective technical standards, such as ASTM C618-19 (U.S.); EN-197, EN-206 and EN 450-1:2012 (CEN territories, including EEA); BS 8615-1:2019 (U.K.). Compared to using Portland cement, the resulting concrete mix using EMC does not require a higher "total cementitious content" to meet strength-development requirements. In testing by BASF, the 28-day strength development for 55% replacement of Portland cement by a natural pozzolanic EMC was 14,000 psi / 96.5 MPa (i.e. > C95). This comprised a "total cementitious content" of 335 kg/m³ (564 lbs/CY) of concrete mix. EMCs as "low carbon" cements Unlike Portland cement, an EMC's production releases no carbon dioxide whatsoever. This makes EMCs "low carbon cements". The first cited claims for EMC's CO2-reduction capabilities were made in 1999, when worldwide Portland cement production stood at 1.6 billion tonnes per year. From 2011 to 2019, worldwide Portland cement production increased from 3.6 to 4.1 billion tonnes per year. Energetically modified cement's potential for contributing to a worldwide reduction of CO2 has been externally recognised since 2002 and is ongoing. Recent recognition has included the 2019 Energy Transitions Commission (Lord Adair Turner and Lord Stern) report Mission Possible sectoral focus: cement (2019). 
Recognition of the "Zero-Carbon" potential was set out by McKinsey & Co in its 2020 report Laying the foundation for zero-carbon cement. In 2023, the contribution offered by EMCs in achieving "low carbon" materials was further acknowledged within the academic domain of mechanochemistry. Production and field-usage No noxious emissions or toxic chemicals during production EMC Activation is purely a mechanical process. As such, it does not involve heating or burning or indeed any chemical treatments. This means no fumes at all are produced during an EMC's manufacture. History EMCs have been produced for project usage since 1992 for a wide range of uses. By 2010, the volume of concrete poured containing EMCs was about 4,500,000 cu yd (3,440,496 m3), largely on US DOT projects. To place this into context, that is more than the entire construction of the Hoover Dam, its associated power plants and appurtenant works, where a total of 4,360,000 cu·yds (3,333,459 m3) of concrete was poured—equivalent to a U.S. standard highway from San Francisco to New York City. Early usage in Sweden An early project used a concrete comprising a 50% Portland cement substitution using a silica sand EMC. This was deployed for the construction of a road bridge in Karungi, Sweden, in 1999, with Swedish construction firm Skanska. The Karungi road bridge has withstood Karungi's harsh subarctic climate and divergent annual and diurnal temperature ranges. Usage in the United States In the United States, energetically modified cements have been approved for usage by a number of state transportation agencies, including PennDOT, TxDOT and CalTrans. In the United States, highway bridges and hundreds of miles of highway paving have been constructed using concretes made from EMC derived from fly ash. These projects include sections of Interstate 10. In these projects, EMC replaced at least 50% of the Portland cement in the concrete poured. This is about 2.5 times more than the typical amount of fly ash in projects where energetic modification is not used. Independent test data showed 28-day strength-development requirements were exceeded in all projects. In 2009, fly ash EMCs were demonstrated to exceed the 'Grade 120 Slag' benchmark per ASTM C989. Another project was the extension of the passenger terminals at the Port of Houston, Texas, where energetically modified cement's ability to yield concretes that exhibit high resistances to chloride– and sulphate–ion permeability (i.e., increased resistance to seawater) was a factor. Developments in 2024 In February 2024 it was jointly announced that a manufacturing plant for EMCs made from volcanic materials will be jointly developed by "EMC Cement" and HES International at the Port of Amsterdam, and further, that the "all-electric zero-emissions plant, of an initial capacity of 1.2 million tonnes, will cut CO2 emissions by 1 million tonnes annually — using less than 10% of the energy of a conventional Portland cement plant". Properties of concretes and mortars made from EMCs Custom design for end-usage The performance of mortars and concretes made from EMCs can be custom-designed. For example, EMC concretes can range from general application (for strength and durability) through to the production of rapid and ultra-rapid hardening high-strength concretes (for example, over 70 MPa / 10,150 psi in 24 hours and over 200 MPa / 29,000 psi in 28 days). This allows energetically modified cements to yield High Performance Concretes. 
Durability of EMC concretes and mortars Any cementitious material undergoing EMC Activation will likely exhibit improved durability—including Portland cement treated with EMC Activation. As regards pozzolanic EMCs, concretes made from pozzolanic EMCs are more durable than concretes made from Portland cement. Treating Portland cement with EMC Activation will yield high-performance concretes (HPCs). These HPCs will be high strength, highly durable, and will exhibit greater strength development in contrast to HPCs made from untreated Portland cement. Treating Portland cement with the EMC Activation process may increase the strength development by nearly 50% and also significantly improve the durability, as measured according to generally accepted methods. Enhanced resistance to saltwater attack Concrete made from ordinary Portland cement without additives has a relatively impaired resistance to saltwater. In contrast, EMCs exhibit high resistances to chloride and sulphate ion attack, together with low alkali-silica reactivities (ASR). For example, durability tests have been performed according to the "Bache method" (see diagram). Samples made of HPC having respective compressive strengths of 180.3 and 128.4 MPa (26,150 and 18,622 psi) after 28 days of curing were then tested using the Bache method. The samples were made of (a) EMC (comprising Portland cement and silica fume both having undergone EMC Activation) and (b) Portland cement. The resulting mass loss was plotted in order to determine durability. As a comparison, the test results showed that whereas the reference Portland cement concrete had "total destruction after about 16 Bache method cycles, in line with Bache's own observations for high-strength concrete", the EMC high-performance concrete showed a "consistent high-level durability" throughout the entire testing period of 80 Bache cycles, with, for example, "practically no scaling of the concrete has been observed". Low leachability of EMC concretes Leachability tests were performed by LTU in 2001 in Sweden, on behalf of a Swedish power production company, on concrete made from an EMC made from fly ash. These tests confirmed that the cast concrete "showed a low surface specific leachability" with respect to "all environmentally relevant metals." EMCs using pozzolans such as volcanic materials (Image caption: demonstrating an EMC's "self-healing" propensity; without intervention, cracks were totally self-filled after 4.5 months.) Self-healing properties of pozzolanic EMCs Natural pozzolanic reactions can cause mortars and concretes containing these materials to "self-heal". The EMC Activation process can increase the likelihood of the occurrence of these pozzolanic reactions. The same tendency has been noted and studied in the various supporting structures of Hagia Sophia, built for the Byzantine emperor Justinian (in present-day Istanbul, Turkey). There, in common with most Roman cements, mortars comprising high amounts of pozzolana were used in order to give what was thought to be an increased resistance to the stress effects caused by earthquakes. 
EMCs made from pozzolanic materials exhibit "biomimetic" self-healing capabilities that can be photographed as they develop (see picture insert). EMCs using California pozzolans Concretes made by replacing at least 50% of the Portland cement with EMCs have yielded consistent field results in high-volume applications. This is also the case for EMC made from natural pozzolans (e.g., volcanic ash). Volcanic ash deposits from Southern California were independently tested; at 50% Portland cement replacement, the resulting concretes exceeded the requirements of the relevant US standard. At 28 days, the compressive strength was 4,180 psi / 28.8 MPa (N/mm²). The 56-day strength exceeded the requirements for 4,500 psi (31.1 MPa) concrete, even taking into account the safety margin as recommended by the American Concrete Institute. The concrete made in this way was workable and sufficiently strong, exceeding the 75% standard of pozzolanic activity at both 7 days and 28 days. The surface smoothness of pozzolans in the concrete was also increased. Effect on pozzolanic reactions EMC Activation is a process that increases a pozzolan's chemical affinity for pozzolanic reactions (see the granted patent "Process for Producing Blended Cements with Reduced Carbon Dioxide Emissions"; Pub. No. WO/2004/041746; International Application No. PCT/SE2003001009; published 21.05.2004; international filing date 16.06.2003). This leads to faster and greater strength development of the resulting concrete, at higher replacement ratios, than untreated pozzolans. These transformed (now highly reactive) pozzolans demonstrate further benefits using known pozzolanic reaction pathways that typically see as their end-goal a range of hydrated products. An NMR study on EMCs concluded that EMC Activation caused "the formation of thin SiO2 layers around C3S crystals", which in turn "accelerates the pozzolanic reaction and promotes growing of more extensive nets of the hydrated products". In simple terms, by using pozzolans in concrete, porous (reactive) Portlandite can be transformed into hard and impermeable (relatively non-reactive) compounds, rather than the porous and soft, relatively reactive calcium carbonate produced using ordinary cement. Many of the end products of pozzolanic chemistry exhibit a hardness greater than 7.0 on the Mohs scale. "Self-healing" capabilities may also contribute to enhanced field-application durabilities where mechanical stresses may be present. In greater detail, the benefits of pozzolanic concrete start with an understanding that in concrete (including concretes with EMCs), Portland cement combines with water to produce a stone-like material through a complex series of chemical reactions, whose mechanisms are still not fully understood. That chemical process, called mineral hydration, forms two cementing compounds in the concrete: calcium silicate hydrate (C-S-H) and calcium hydroxide (Ca(OH)2). This reaction can be noted in three ways, as follows: Standard notation: Ca3SiO5 + H2O → (CaO)·(SiO2)·(H2O) + Ca(OH)2. Balanced: 2Ca3SiO5 + 7H2O → 3CaO·2SiO2·4H2O + 3Ca(OH)2. Cement chemist notation (the hyphenation denotes the variable stoichiometry): C3S + H → C-S-H + CH. The underlying hydration reaction forms two products: Calcium silicate hydrate (C-S-H), which gives concrete its strength and dimensional stability. The crystal structure of C-S-H in cement paste has not been fully resolved yet and there is still ongoing debate over its nanostructure. 
Calcium hydroxide (Ca(OH)2), which in concrete chemistry is known also as Portlandite. In comparison to calcium silicate hydrate, Portlandite is relatively porous, permeable and soft (2 to 3 on the Mohs scale). It is also sectile, with flexible cleavage flakes. Portlandite is soluble in water, to yield an alkaline solution which can compromise a concrete's resistance to acidic attack. Portlandite makes up about 25% of concrete made with Portland cement without pozzolanic cementitious materials. In this type of concrete, carbon dioxide is slowly absorbed to convert the Portlandite into insoluble calcium carbonate (CaCO3), in a process called carbonatation (quantified in the short sketch below): Ca(OH)2 + CO2 → CaCO3 + H2O. In mineral form, calcium carbonate can exhibit a wide range of hardness depending on how it is formed. At its softest, calcium carbonate can form in concrete as chalk (of hardness 1.0 on the Mohs scale). Like Portlandite, calcium carbonate in mineral form can also be porous, permeable and with a poor resistance to acid attack, which causes it to release carbon dioxide. Pozzolanic concretes, including EMCs, however, continue to consume the soft and porous Portlandite as the hydration process continues, turning it into additional hardened concrete as calcium silicate hydrate (C-S-H) rather than calcium carbonate. This results in a denser, less permeable and more durable concrete. This reaction is an acid-base reaction between Portlandite and silicic acid (H4SiO4) that may be represented as follows: Ca(OH)2 + H4SiO4 → Ca²⁺ + H2SiO4²⁻ + 2H2O → CaH2SiO4·2H2O. Further, many pozzolans contain aluminate (Al(OH)4−) that will react with Portlandite and water to form: calcium aluminate hydrates, such as calcium aluminium garnet (hydrogrossular: C4AH13 or C3AH6 in cement chemist notation, hardness 7.0 to 7.5 on the Mohs scale); or, in combination with silica, strätlingite (Ca2Al2SiO7·8H2O, or C2ASH8 in cement chemist notation), which geologically can form as xenoliths in basalt as metamorphosed limestone. Pozzolanic cement chemistry (along with high-aluminate cement chemistry) is complex and per se is not constrained by the foregoing pathways. For example, strätlingite can be formed in a number of ways, including per the following equation, which can add to a concrete's strength: C2AH8 + 2CSH + AH3 + 3H → C2ASH8 (cement chemist notation). The role of pozzolans in a concrete's chemistry is not fully understood. For example, strätlingite is metastable, which in a high-temperature and high-water-content environment (such as can be generated during the early curing stages of concrete) may of itself yield stable calcium aluminium garnet (see first bullet point above). This can be represented per the following equation: 3C2AH8 → 2C3AH6 + AH3 + 9H (cement chemist notation). Per the first bullet point, although the inclusion of calcium aluminium garnet per se is not problematic, if it is instead produced by the foregoing pathway, then micro-cracking and strength loss can occur in the concrete. However, adding high-reactivity pozzolans into the concrete mix prevents such a conversion reaction. In sum, whereas pozzolans provide a number of chemical pathways to form hardened materials, "high-reactivity" pozzolans such as blast furnace slag (GGBFS) can also stabilise certain pathways. In this context, EMCs made from fly ash have been demonstrated to produce concretes that meet the same characteristics as concretes comprising "120 Slag" (i.e., GGBFS) according to U.S. standard ASTM C989. 
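Since the carbonatation reaction above is simple 1:1 stoichiometry, a short sketch can quantify it; the molar masses are standard values and the 1 kg quantity is an arbitrary illustration.

```python
# A small stoichiometric sketch of the carbonatation reaction shown above,
# Ca(OH)2 + CO2 -> CaCO3 + H2O, using standard molar masses. It shows how much
# CO2 a given mass of Portlandite can absorb while converting to calcium
# carbonate; the 1 kg figure is an arbitrary illustration.
M_PORTLANDITE = 74.09   # g/mol, Ca(OH)2
M_CO2 = 44.01           # g/mol
M_CALCITE = 100.09      # g/mol, CaCO3

portlandite_kg = 1.0                        # arbitrary amount to convert
mol = portlandite_kg * 1000 / M_PORTLANDITE
print(f"CO2 absorbed   : {mol * M_CO2 / 1000:.3f} kg")      # ~0.594 kg
print(f"CaCO3 produced : {mol * M_CALCITE / 1000:.3f} kg")  # ~1.351 kg
# Pozzolanic concretes instead consume this Portlandite as C-S-H, which is why
# less of the soft, porous carbonate ends up in the hardened concrete.
```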
Portlandite, when exposed to low temperatures, moist conditions and condensation, can react with sulphate ions to cause efflorescence. In contrast, pozzolanic chemistry reduces the amount of Portlandite available, reducing the proliferation of efflorescence. EMC Activation EMC Activation's purpose is to cause a fundamental destruction of the crystalline structure of the material processed, rendering it amorphous. Although this change increases the processed material's chemical reactivity, no chemical reaction is caused during the EMC Activation process. At its simplest, mechanochemistry can be stated as "a field studying chemical reactions initiated or accelerated by the direct absorption of mechanical energy." More technically, it can be defined as a branch of chemistry concerned with the "chemical and physico-chemical transformation of substances in all states of aggregation produced by the effect of mechanical energy." IUPAC carries no standard definition of the term mechanochemistry, instead defining a "mechanochemical reaction" as a chemical reaction "induced by the direct absorption of mechanical energy", while noting, "shearing, stretching, and grinding are typical methods for the mechano-chemical generation of reactive sites". More narrowly, "mechanical activation" was a term first defined in 1942 as a process "involving an increase in reaction ability of a substance which remains chemically unchanged." Even more narrowly, EMC Activation is a specialised form of mechanical activation limited to the application of high energy ball milling (HEBM) to cementitious materials. More narrowly than that, EMC Activation uses vibratory milling, and even then, only by using its own grinding media. As stated in a 2023 academic textbook limited to mechanochemistry, EMC Activation has "impressively demonstrated" its effects in causing a change to the reactivity of alternate cement material and the resulting physical characteristics of the concrete cast. Thermodynamic justification More particularly, HEBM can be described as increasing the chemical reactivity of a material by increasing its chemical potential energy. In EMC Activation, transferred mechanical energy is stored in the material as lattice defects caused by destroying the material's crystalline structure. Hence, the process transforms solid substances into thermodynamically and structurally more unstable states, allowing an explanation for that increased reactivity as an increase in Gibbs energy: ΔG* = G*_T − G_T, where, for temperature T, the terms G*_T and G_T are the respective Gibbs values in the processed and unprocessed material. At its simplest, HEBM causes the destruction of crystalline bonds, to increase a material's reactivity. From the thermodynamic perspective, any subsequent chemical reaction can decrease the excess energy level in the activated material (i.e. as a reactant) to produce new components comprising both a lower chemical energy and a more stable physical structure. Conversely, to render the pre-processed material into a more reactive physical state, the disordering process during the HEBM process can be justified as being equivalent to a decrystallisation (and hence an entropy increase) that in part yields a volume increase (decrease of bulk density). A reverse process, sometimes called "relaxation", can be almost immediate (10⁻⁷ to 10⁻³ seconds) or take much longer (e.g. 10⁶ seconds). 
Ultimately, any overall retained thermodynamic effect can be justified on the basis that any such reverse process is incapable of reaching an ideal thermodynamic end-state of its own accord. As a result, in the course of the mechanical activation of minerals, reverse "relaxation" processes cannot completely decrease the Gibbs free energy that has been created. Hence, energy remains in the material, stored in the crystal-lattice defects created. Net thermodynamic effect of HEBM Overall, HEBM renders a net thermodynamic effect: The structural disordering implies an increase of both entropy and enthalpy and thus stimulates the crystal properties according to the thermodynamic modifications. Only a small fraction (approximately 10%) of the excess enthalpy of the activated product may be accounted for as surface-area enlargement. Instead, the main part of the excess enthalpy and modified properties can mostly be assigned to the development of thermodynamically unstable states in the material's lattice (and not to a reduction of particle size). Since the activated system is unstable, the process of activation is reversible—resulting in deactivation, re-crystallization, entropy loss and energy output of the system. That reverse ("relaxation") process continues to a thermodynamic equilibrium, but ultimately can never reach an ideal structure (i.e. one free of defects). A more complete description of such an "activation" process factors in enthalpy also, by which, according to the Gibbs-Helmholtz equation, the Gibbs free energy between the activated and non-activated solid states can be represented: ΔG = ΔH − TΔS, where ΔH is the change in enthalpy and ΔS the change in entropy. Resulting crystalline disorder Where the crystal disordering is low, TΔS is very small (if not negligible). In contrast, in highly deformed and disordered crystals, the value of TΔS can have a significant impact on the rendered Gibbs free energy. Leaving aside the heat generated on account of friction etc. occasioned during the activation process, the excess Gibbs free energy retained in the activated material can be justified as being due to two changes, namely an increase in (i) specific surface area and (ii) defect structure. In successful HEBM processes such as EMC Activation: as to (i), only about 10% of the excess energy of such an activated product may be accounted for as a change in surface area; as to (ii), almost all the imparted energy is contained in the actual structural defects in the material processed. An approximation for EMC Activation The relatively low value of (i) as against the high value of (ii) serves to further distinguish HEBM from general grinding or "milling" (where the only aim is to increase the surface area of the materials processed), thereby accounting for the change in entropy of the rendered material in the form of elastic energy (stored in lattice defects that can take years to "relax") that is the "source of excess Gibbs energy and enthalpy". As for the enthalpy ΔH, four descriptors can be derived to provide an overview of the total change during such an activation process: ΔH = ΔH_d + ΔH_p + ΔH_a + ΔH_s, where ΔH_d is a measure of the dislocation density; ΔH_p is a measure of new phases (polymorphic transformation); ΔH_a is a measure of the formation of amorphous material; and ΔH_s is a measure of specific surface area. Because the majority of the work exacted during the EMC Activation process goes to aspect (ii) above, ΔH_s is trivial. 
Hence the major contributions to the change in enthalpy come from ΔH_d and ΔH_a. In EMC Activation, these terms are seen as particularly prominent because of the nature of the changes in the physical structure observed. Hence, the change in enthalpy occasioned during EMC Activation can be approximated to ΔH ≈ ΔH_d + ΔH_a, i.e., ΔH ≈ ½·V·G·b²·ρ + c_A·ΔH_A, where V, b, G and ρ correspond respectively to the molar volume of the material, the Burgers vector, the shear modulus and the dislocation density, and c_A and ΔH_A are respectively the concentration of the amorphous phase and the molar amorphisation energy. Low temperature reactivity From the above thermodynamic construct, EMC Activation results in a highly amorphous phase that can be justified as a large increase in ΔH together with a large entropy increase. The benefit of EMC Activation being large in ΔH means that an EMC's reactivity is less temperature dependent. In terms of any reaction's thermodynamic impetus, a reactant's ΔH is not temperature dependent, meaning that a material having undergone HEBM with a corresponding elevation of ΔH can react at a lower temperature (as the "activated" reactant is rendered less reliant on the temperature-dependent term TΔS for its onward progression). Further, an EMC's reaction can exhibit physical mechanisms at extremely small scales "with the formation of thin SiO2 layers" to aid a reaction's pathway—with the suggestion that EMC Activation increases the ratio of favourable reaction sites. Studies elsewhere have determined that HEBM can significantly lower the temperature required for a subsequent reaction to proceed (up to a three-fold reduction), whereby a major component of the overall reaction dynamics is initiated at a "nanocrystalline or amorphous phase" to exhibit "unusually low or even negative values of the apparent activation energy" required to cause a chemical reaction to occur. Overall, EMCs are likely less temperature dependent for a chemical pathway's onward progression (see section above on pozzolanic reactions), which may explain why EMCs provide self-healing benefits even at low arctic temperatures. Physical justification (amorphisation) Large changes in ΔH, more particularly in the resultant values of ΔH_d and ΔH_a, provide an insight into EMC Activation's efficacy. The amorphisation of crystalline material at high-pressure conditions "is a rather unusual phenomenon", for the simple reason that "most materials actually experience the reverse transformation from amorphous to crystalline at high-pressure conditions". Amorphisation represents a highly distorted "periodicity" of a material's lattice element, comprising a relatively high Gibbs free energy. Indeed, amorphisation may be compared to a quasi-molten state. As a possible explanation of why amorphous silica is more reactive than its crystalline version, thermodynamic treatments may give further insight, even if such approaches cannot fully explain the phenomenon. For example, the so-called "glass transition temperature" increases with an increasing cooling rate, which allows energy to be accumulated as if "frozen in". Thus, by substantially increasing that cooling rate, "glasses with thermodynamic properties can be obtained, substantially different from those of the initial metastable undercooled liquid". At very high cooling rates, the enthalpy frozen into the resulting vitrified system can be equal to (or exceed) the enthalpy of melting, where the cooling rate is of the order of 10⁶ to 10⁹ K/s and upwards. 
Hence, assuming that the shock-wave dynamics of the EMC Activation process hold true, such that focal nanoscale temperature fluctuations are extremely transient (as described generally per the next section), the cooling rates during the HEBM process are of at least similar orders of magnitude, if not more. Hence the enthalpy frozen into the material can equal or exceed the enthalpy of melting (recalling that ΔG = ΔH − TΔS). As a result of the rise in ΔH, the chemical potential of the system is increased and "frozen in", which gives rise to an increase in any subsequent reactivity. All told, in common with other HEBM processes, EMC Activation causes crystalline destruction because of extremely violent and disruptive factors that are occasioned at the nanoscale of the material being processed. Although of short duration and highly focal, the processes are repeated at a high frequency: hence those factors are thought to mimic pressures and temperatures found deep inside the Earth to cause the required phase change. For example, Peter Thiessen developed the magma-plasma model, which assumes localised temperatures—higher than 10³ kelvins—can be generated at the various impact points to induce a momentary excited plasma state in the material, characterized by the ejection of electrons and photons together with the formation of excited fragments (see diagram above). Experimental data gathered from localised crack generation, itself an important component of EMC Activation, has confirmed temperatures in this region as long ago as 1975. Vibratory Ball Mills (VBMs) For EMC activation, the HEBM method used is a vibratory ball mill (VBM). A VBM uses a vertical eccentric drive mechanism to vibrate an enclosed chamber up to many hundreds of cycles per minute. The chamber is filled with the material being processed together with specialised objects called grinding media. In their simplest form, such media can be simple balls made from specialised ceramics. In practical terms, EMC Activation deploys a range of grinding media of different sizes, shapes and composites to achieve the required mechanochemical transformation. It has been suggested that a VBM will grind at 20 to 30 times the rate of a rotary ball mill, reflecting that a VBM's mechanism is especially rapacious. VBM kinetics In simple terms, the compressive force F acting between two identical colliding balls in a VBM scales, per Hertzian impact theory, as F ∝ (E / (1 − ν²))^(2/5) · R^(1/5) · m^(3/5) · v^(6/5), where m is the mass of both balls, R the radius, v the absolute velocity of impact, E the Young's modulus of the balls' material and ν its Poisson ratio. As can be seen, an increase in the velocity of impact increases F. The size and mass of the grinding media also contribute. F's denominator term incorporates ν, meaning that the nature of the material used for the grinding media is an important factor (ν is ultimately squared in F, so its negative value is of no consequence); a short numeric sketch of this scaling follows below. More fundamentally, due to the rapid vibration a high acceleration is imparted to the grinding media, whereupon the continuous, short, sharp impacts on the load result in rapid particle-size reduction. In addition, high pressures and shear stresses facilitate the required phase transition to an amorphous state both at the point of impact and also during the transmission of shock waves that can yield even greater pressures than the impact itself. For example, the contact time of a two-ball collision can be as short as 20 μs, generating a pressure of 3.3 GPa upwards and with an associated ambient temperature increase of 20 kelvins. 
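The scaling law above can be made concrete with a short numeric sketch; the proportionality constant is set to 1 (only force ratios are meaningful), and the ball parameters are illustrative values for a roughly 2.5 cm steel ball, not data from the article.

```python
# A minimal numeric sketch of the Hertzian impact scaling above, with the
# proportionality constant set to 1 so only force ratios are meaningful.
def peak_force_scale(m, R, v, E, nu):
    """Relative peak contact force for two identical colliding balls."""
    return (E / (1 - nu**2))**0.4 * R**0.2 * m**0.6 * v**1.2

steel = dict(m=0.065, R=0.0125, E=200e9, nu=0.30)  # kg, m, Pa (illustrative)

f_slow = peak_force_scale(v=1.0, **steel)
f_fast = peak_force_scale(v=2.0, **steel)
print(f"doubling the impact speed raises the peak force by x{f_fast / f_slow:.2f}")
# prints ~x2.30, i.e. 2**(6/5): force grows faster than linearly with speed,
# while the E/(1 - nu^2) factor rewards stiff grinding-media materials.
```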
Because of the short duration of the impact, the rate of change in momentum is significant—generating a shock wave of duration only 1–100 μs but with an associated pressure of 10 GPa upwards and a highly localised and focal temperature (i.e., at the nanoscale) up to several thousands of kelvins. To place this into context, a pressure of 10 GPa is equivalent to the pressure under about 1,000 kilometers of sea water. As a further example, the impact of two identical steel balls of 2.5 cm diameter at a velocity of 1 m/s will generate a collision energy density of over 10⁹ joules/m², with alumina balls of the same 2.5 cm diameter and velocity of 1 m/s generating an even greater energy density. The collisions occur in a very short timescale and hence the "rate of energy release over the relatively small contact area can be very high". See also Background science to EMC Activation: Academic: Notes References External links , Sweden at lowcarboncement.com Luleå University of Technology, Sweden at LTU.se Future Infrastructure Forum, University of Cambridge, United Kingdom at Fif.construction.cam.ac.uk U.S. Geological Survey (USGS) Cement Statistics and Information at Minerals.usgs.gov U.S. Environmental Protection Agency (EPA), Rule Information for Portland Cement Industry at EPA.gov American Concrete Institute at Concrete.org EDGAR – Emission Database for Global Atmospheric Research at Edgar.jrc.ec.europa.eu Vitruvius: The Ten Books on Architecture online: cross-linked Latin text and English translation at Wbcsdcement.org Cement Swedish inventions Science and technology in Sweden Environmental design Building materials
Energetically modified cement
[ "Physics", "Engineering" ]
7,827
[ "Environmental design", "Building engineering", "Architecture", "Construction", "Materials", "Design", "Matter", "Building materials" ]
39,207,553
https://en.wikipedia.org/wiki/Exponential%20integrate-and-fire
In biology, exponential integrate-and-fire models are compact and computationally efficient nonlinear spiking neuron models with one or two variables. The exponential integrate-and-fire model was first proposed as a one-dimensional model. The most prominent two-dimensional examples are the adaptive exponential integrate-and-fire model and the generalized exponential integrate-and-fire model. Exponential integrate-and-fire models are widely used in the field of computational neuroscience and spiking neural networks because of (i) a solid grounding of the neuron model in the field of experimental neuroscience, (ii) computational efficiency in simulations and hardware implementations, and (iii) mathematical transparency. Exponential integrate-and-fire (EIF) The exponential integrate-and-fire model (EIF) is a biological neuron model, a simple modification of the classical leaky integrate-and-fire model describing how neurons produce action potentials. In the EIF, the threshold for spike initiation is replaced by a depolarizing non-linearity. The model was first introduced by Nicolas Fourcaud-Trocmé, David Hansel, Carl van Vreeswijk and Nicolas Brunel. The exponential nonlinearity was later confirmed by Badel et al. It is one of the prominent examples of a precise theoretical prediction in computational neuroscience that was later confirmed by experimental neuroscience. In the exponential integrate-and-fire model, spike generation is exponential, following the equation τ_m dV/dt = −(V − E_m) + Δ_T exp((V − V_T)/Δ_T) + R I(t), where V is the membrane potential, I(t) is an input current scaled by the membrane resistance R, V_T is the intrinsic membrane potential threshold, τ_m is the membrane time constant, E_m is the resting potential, and Δ_T is the sharpness of action potential initiation, usually around 1 mV for cortical pyramidal neurons. Once the membrane potential crosses V_T, it diverges to infinity in finite time. In numerical simulation the integration is stopped if the membrane potential hits an arbitrary threshold (much larger than V_T) at which the membrane potential is reset to a value V_r. The voltage reset value V_r is one of the important parameters of the model. Two important remarks: (i) The right-hand side of the above equation contains a nonlinearity that can be directly extracted from experimental data. In this sense the exponential nonlinearity is not an arbitrary choice but directly supported by experimental evidence. (ii) Even though it is a nonlinear model, it is simple enough to calculate the firing rate for constant input, and the linear response to fluctuations, even in the presence of input noise. A didactic review of the exponential integrate-and-fire model (including fit to experimental data and relation to the Hodgkin-Huxley model) can be found in Chapter 5.2 of the textbook Neuronal Dynamics. Adaptive exponential integrate-and-fire (AdEx) The adaptive exponential integrate-and-fire neuron (AdEx) is a two-dimensional spiking neuron model in which the above exponential nonlinearity of the voltage equation is combined with a second differential equation for an adaptation variable w, where w denotes an adaptation current with time scale τ_w. Important model parameters are the voltage reset value V_r, the intrinsic threshold V_T, the time constants τ_m and τ_w, as well as the coupling parameters a and b. The adaptive exponential integrate-and-fire model inherits the experimentally derived voltage nonlinearity of the exponential integrate-and-fire model. But going beyond this model, it can also account for a variety of neuronal firing patterns in response to constant stimulation, including adaptation, bursting and initial bursting. 
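As a concrete illustration of the equation and reset rule just described, here is a minimal forward-Euler simulation of the one-dimensional EIF model. The parameter values and function name are generic illustrative choices, not fitted to any particular neuron and not part of the model itself:

```python
import numpy as np

def simulate_eif(I_ext, dt=0.1, T=200.0, tau_m=10.0, E_m=-65.0,
                 V_T=-50.0, delta_T=1.0, V_reset=-68.0, V_max=0.0, R=10.0):
    """Forward-Euler integration of the exponential integrate-and-fire model.
    Voltages in mV, time in ms, input current in nA, resistance in MOhm."""
    steps = int(T / dt)
    V = np.full(steps, E_m, dtype=float)
    spike_times = []
    for t in range(1, steps):
        dV = (-(V[t - 1] - E_m)
              + delta_T * np.exp((V[t - 1] - V_T) / delta_T)
              + R * I_ext) / tau_m
        V[t] = V[t - 1] + dt * dV
        if V[t] >= V_max:            # numerical cutoff, much larger than V_T
            V[t] = V_reset           # reset rule described in the text
            spike_times.append(t * dt)
    return V, spike_times

V, spikes = simulate_eif(I_ext=2.0)
print(f"{len(spikes)} spikes in 200 ms of simulated time at constant input")
```

Reducing the integration step dt improves accuracy near the rapid exponential upswing; the reset to V_reset after the numerical cutoff V_max reproduces the behaviour described in the text.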
The adaptive exponential integrate-and-fire model is remarkable for three aspects: (i) its simplicity since it contains only two coupled variables; (ii) its foundation in experimental data since the nonlinearity of the voltage equation is extracted from experiments; and (iii) the broad spectrum of single-neuron firing patterns that can be described by an appropriate choice of AdEx model parameters. In particular, the AdEx reproduces the following firing patterns in response to a step current input: neuronal adaptation, regular bursting, initial bursting, irregular firing, regular firing. A didactic review of the adaptive exponential integrate-and-fire model (including examples of single-neuron firing patterns) can be found in Chapter 6.1 of the textbook Neuronal Dynamics. Generalized exponential integrate-and-fire model (GEM) The generalized exponential integrate-and-fire model (GEM) is a two-dimensional spiking neuron model where the exponential nonlinearity of the voltage equation is combined with a subthreshold variable x, where b is a coupling parameter, τ_x(V) is a voltage-dependent time constant, and x_∞(V) is a saturating nonlinearity, similar to the gating variable m of the Hodgkin-Huxley model. The coupling term involving x in the first (voltage) equation can be considered as a slow voltage-activated ion current. The GEM is remarkable for two aspects: (i) the nonlinearity of the voltage equation is extracted from experiments; and (ii) the GEM is simple enough to enable a mathematical analysis of the stationary firing-rate and the linear response even in the presence of noisy input. A review of the computational properties of the GEM and its relation to other spiking neuron models can be found in. References Computational neuroscience Ion channels Electrophysiology Nonlinear systems
Exponential integrate-and-fire
[ "Chemistry", "Mathematics" ]
1,047
[ "Nonlinear systems", "Neurochemistry", "Ion channels", "Dynamical systems" ]
39,211,211
https://en.wikipedia.org/wiki/Freeze-casting
Freeze-casting, also frequently referred to as ice-templating, freeze casting, or freeze alignment, is a technique that exploits the highly anisotropic solidification behavior of a solvent (generally water) in a well-dispersed solution or slurry to controllably template directionally porous ceramics, polymers, metals and their hybrids. By subjecting an aqueous solution or slurry to a directional temperature gradient, ice crystals will nucleate on one side and grow along the temperature gradient. The ice crystals will redistribute the dissolved substance and the suspended particles as they grow within the solution or slurry, effectively templating the ingredients that are distributed in the solution or slurry. Once solidification has ended, the frozen, templated composite is placed into a freeze-dryer to remove the ice crystals. The resulting green body contains anisotropic macropores in a replica of the sublimated ice crystals and structures from micropores to nacre-like packing between the ceramic or metal particles in the walls. The walls templated by the morphology of the ice crystals often show unilateral features. These together build a hierarchically structured cellular structure. This structure is often sintered for metals and ceramics, and crosslinked for polymers, to consolidate the particulate walls and provide strength to the porous material. The porosity left by the sublimation of solvent crystals is typically between 2–200 μm. Overview The first observation of cellular structures resulting from the freezing of water goes back over a century, but the first reported instance of freeze-casting, in the modern sense, was in 1954 when Maxwell et al. attempted to fabricate turbosupercharger blades out of refractory powders. They froze extremely thick slips of titanium carbide, producing near-net-shape castings that were easy to sinter and machine. The goal of this work, however, was to make dense ceramics. It was not until 2001, when Fukasawa et al. created directionally porous alumina castings, that the idea of using freeze-casting as a means of creating novel porous structures really took hold. Since that time, research has grown considerably with hundreds of papers coming out within the last decade. The principles of freeze casting are applicable to a broad range of combinations of particles and suspension media. Water is by far the most commonly used suspension media, and by freeze drying is readily conducive to the step of sublimation that is necessary for the success of freeze-casting processes. Due to the high level of control and broad range of possible porous microstructures that freeze-casting can produce, the technique has been adopted in disparate fields such as tissue scaffolds, photonics, metal-matrix composites, dentistry, materials science, and even food science. There are three possible end results to uni-directionally freezing a suspension of particles. First, the ice-growth proceeds as a planar front, pushing particles in front like a bulldozer pushes a pile of rocks. This scenario usually occurs at very low solidification velocities (< 1 μm s−1) or with extremely fine particles because they can move by Brownian motion away from the front. The resultant structure contains no macroporosity. If one were to increase the solidification speed, the size of the particles or solid loading moderately, the particles begin to interact in a meaningful way with the approaching ice front. 
The result is typically a lamellar or cellular templated structure whose exact morphology depends on the particular conditions of the system. It is this type of solidification that is targeted for porous materials made by freeze-casting. The third possibility for a freeze-cast structure occurs when particles are given insufficient time to segregate from the suspension, resulting in complete encapsulation of the particles within the ice front. This occurs when the freezing rates are rapid, particle size becomes sufficiently large, or when the solids loading is high enough to hinder particle motion. To ensure templating, the particles must be ejected from the oncoming front. Energetically speaking, this will occur if there is an overall increase in free energy if the particle were to be engulfed, i.e. Δσ = σps − (σpl + σsl) > 0, where Δσ is the change in free energy of the particle, σps is the surface potential between the particle and interface, σpl is the potential between the particle and the liquid phase and σsl is the surface potential between the solid and liquid phases. This expression is valid at low solidification velocities, when the system is shifted only slightly from equilibrium. At high solidification velocities, kinetics must also be taken into consideration. There will be a liquid film between the front and particle to maintain constant transport of the molecules which are incorporated into the growing crystal. When the front velocity increases, this film thickness (d) will decrease due to increasing drag forces. A critical velocity (vc) occurs when the film is no longer thick enough to supply the needed molecular supply. At this speed the particle will be engulfed. Most authors express vc as a function of particle size, with vc scaling inversely with the particle radius. The transition from a porous (lamellar) morphology to one where the majority of particles are entrapped occurs at vc, which is generally determined as vc = (Δσ d / 3ηR)·(a0/d)^z, where a0 is the average intermolecular distance of the molecule that is freezing within the liquid, d is the overall thickness of the liquid film, η is the solution viscosity, R is the particle radius and z is an exponent that can vary from 1 to 5. As expected, vc decreases as particle radius R goes up. Waschkies et al. studied the structure of dilute to concentrated freeze-casts from low (< 1 μm s⁻¹) to extremely high (> 700 μm s⁻¹) solidification velocities. From this study, they were able to generate morphological maps for freeze-cast structures made under various conditions. Maps such as these are excellent for showing general trends, but they are quite specific to the materials system from which they were derived. For most applications where freeze-casts will be used after freezing, binders are needed to supply strength in the green state. The addition of binder can significantly alter the chemistry within the frozen environment, depressing the freezing point and hampering particle motion leading to particle entrapment at speeds far below the predicted vc. Assuming, however, that we are operating at speeds below vc and above those which produce a planar front, we will achieve some cellular structure with both ice-crystals and walls composed of packed ceramic particles. The morphology of this structure is tied to several variables, but the most influential is the temperature gradient as a function of time and distance along the freezing direction. Freeze-cast structures have at least three apparent morphological regions. 
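The critical-velocity expression above can be evaluated numerically to get a feel for the orders of magnitude involved before turning to those regions. The short sketch below assumes that expression as written, with z treated as a free exponent, and uses made-up but physically plausible values for the surface-energy term, film thickness and viscosity; it is meant only to show how vc falls with particle radius:

```python
def critical_velocity(delta_sigma, d, eta, R, a0, z):
    """Critical front velocity v_c = (delta_sigma * d / (3 * eta * R)) * (a0 / d)**z.
    All quantities in SI units; returns v_c in m/s."""
    return (delta_sigma * d / (3.0 * eta * R)) * (a0 / d) ** z

# Illustrative (assumed) values for a water-based slurry
params = dict(delta_sigma=0.03,   # J/m^2, assumed free-energy difference
              d=1e-8,             # m, assumed liquid-film thickness
              eta=1e-3,           # Pa*s, viscosity of water
              a0=3e-10,           # m, intermolecular distance of water
              z=2)                # dimensionless exponent, 1 <= z <= 5
for R in (1e-7, 1e-6, 1e-5):      # particle radii: 0.1, 1 and 10 microns
    v = critical_velocity(R=R, **params)
    print(f"R = {R * 1e6:5.1f} um  ->  v_c ~ {v * 1e6:8.2f} um/s")
```

With these assumed inputs the predicted critical velocities fall in the 1–1000 μm/s range and drop by an order of magnitude for each tenfold increase in particle radius, consistent with the qualitative trend stated above.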
At the side where freezing initiates is a nearly isotropic region with no visible macropores dubbed the Initial Zone (IZ). Directly after the IZ is the Transition Zone (TZ), where macropores begin to form and align with one another. The pores in this region may appear randomly oriented. The third zone is called the Steady-State Zone (SSZ); macropores in this region are aligned with one another and grow in a regular fashion. Within the SSZ, the structure is defined by a value λ that is the average thickness of a ceramic wall and its adjacent macropore. Initial zone: nucleation and growth mechanisms Although the ability of ice to reject suspended particles in the growth process has long been known, the mechanism remains the subject of some discussion. It was believed initially that during the moments immediately following the nucleation of the ice crystals, particles are rejected from the growing planar ice front, leading to the formation of a constitutionally super-cooled zone directly ahead of the growing ice. This unstable region eventually results in perturbations, breaking the planar front into a columnar ice front, a phenomenon better known as a Mullins-Sekerka instability. After the breakdown, the ice crystals grow along the temperature gradient, pushing ceramic particles from the liquid phase aside so that they accumulate between the growing ice crystals. However, recent in-situ X-ray radiography of directionally frozen alumina suspensions reveals a different mechanism. Transition zone: a changing microstructure As solidification slows and growth kinetics become rate-limiting, the ice crystals begin to exclude the particles, redistributing them within the suspension. A competitive growth process develops between two crystal populations, those with their basal planes aligned with the thermal gradient (z-crystals) and those that are randomly oriented (r-crystals), giving rise to the start of the TZ. There are colonies of similarly aligned ice crystals growing throughout the suspension. There are fine lamellae of aligned z-crystals growing with their basal planes aligned with the thermal gradient. The r-crystals appear in this cross-section as platelets but, in actuality, they are most similar to columnar dendritic crystals cut along a bias. Within the transition zone, the r-crystals either stop growing or turn into z-crystals that eventually become the predominant orientation, and lead to steady-state growth. There are several reasons why this occurs. For one, during freezing, the growing crystals tend to align with the temperature gradient, as this is the lowest energy configuration and thermodynamically preferential. Aligned growth, however, can mean two different things. Assuming the temperature gradient is vertical, the growing crystal will either be parallel (z-crystal) or perpendicular (r-crystal) to this gradient. A crystal that lies horizontally can still grow in line with the temperature gradient, but it will mean growing on its face rather than its edge. Since the thermal conductivity of ice is so small (1.6–2.4 W m⁻¹ K⁻¹) compared with almost every other ceramic (e.g., Al2O3 ≈ 40 W m⁻¹ K⁻¹), the growing ice will have a significant insulative effect on the localized thermal conditions within the slurry. This can be illustrated using simple resistor elements. When ice crystals are aligned with their basal planes parallel to the temperature gradient (z-crystals), they can be represented as two resistors in parallel. 
The thermal resistance of the ceramic is significantly smaller than that of the ice, however, so the apparent resistance can be expressed as the lower R_ceramic. If the ice crystals are aligned perpendicular to the temperature gradient (r-crystals), they can be approximated as two resistor elements in series. For this case, R_ice is limiting and will dictate the localized thermal conditions. The lower thermal resistance for the z-crystal case leads to lower temperatures and greater heat flux at the growing crystals' tips, driving further growth in this direction while, at the same time, the large R_ice value hinders the growth of the r-crystals. Each ice crystal growing within the slurry will be some combination of these two scenarios. Thermodynamics dictate that all crystals will tend to align with the preferential temperature gradient, causing r-crystals to eventually give way to z-crystals, which can be seen from the following radiographs taken within the TZ. When z-crystals become the only significant crystal orientation present, the ice-front grows in a steady-state manner, provided that there are no significant changes to the system conditions. It was observed in 2012 that, in the initial moments of freezing, there are dendritic r-crystals that grow 5–15 times faster than the solidifying front. These shoot up into the suspension ahead of the main ice front and partially melt back. These crystals stop growing at the point where the TZ will eventually fully transition to the SSZ. Researchers determined that this particular point marks the position where the suspension is in an equilibrium state (i.e., the freezing temperature and suspension temperature are equal). We can say then that the size of the initial and transition zones is controlled by the extent of supercooling beyond the already low freezing temperature. If the freeze-casting setup is controlled so that nucleation is favored at only small supercooling, then the TZ will give way to the SSZ sooner. Steady-state growth zone The structure in this final region contains long, aligned lamellae that alternate between ice crystals and ceramic walls. The faster a sample is frozen, the finer its solvent crystals (and its eventual macroporosity) will be. Within the SSZ, the normal speeds which are usable for colloidal templating are 10–100 μm s⁻¹, leading to solvent crystals typically between 2 μm and 200 μm. Subsequent sublimation of the ice within the SSZ yields a green ceramic preform with porosity in a nearly exact replica of these ice crystals. The microstructure of a freeze-cast within the SSZ is defined by its wavelength (λ) which is the average thickness of a single ceramic wall plus its adjacent macropore. Several publications have reported the effects of solidification kinetics on the microstructures of freeze-cast materials. It has been shown that λ follows an empirical power-law relationship with the solidification velocity υ, namely λ = A υ⁻ⁿ, as illustrated in the short fitting sketch below. Both A and n are used as fitting parameters, as currently there is no way of calculating them from first principles, although it is generally believed that A is related to slurry parameters like viscosity and solid loading while n is influenced by particle characteristics. Controlling the porous structure There are two general categories of tools for controlling the architecture of a freeze-cast: Chemistry of the System - freezing medium and chosen particulate material(s), any additional binders, dispersants or additives. Operational Conditions - temperature profile, atmosphere, mold material, freezing surface, etc. 
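As a short illustration of the empirical power law λ = A υ⁻ⁿ discussed above, the sketch below fits A and n to a small set of made-up (velocity, wavelength) pairs by linear regression in log-log space; the data points are invented purely for demonstration and do not come from any published freeze-casting study:

```python
import numpy as np

# Hypothetical (velocity, wavelength) measurements: um/s and um respectively.
v = np.array([5.0, 10.0, 20.0, 40.0, 80.0])       # solidification velocity
lam = np.array([60.0, 38.0, 25.0, 16.0, 10.0])    # microstructural wavelength

# lambda = A * v**(-n)  =>  log(lambda) = log(A) - n * log(v)
slope, intercept = np.polyfit(np.log(v), np.log(lam), 1)
A, n = np.exp(intercept), -slope
print(f"A ~ {A:.1f}, n ~ {n:.2f}")

# Predicted wavelength at an unmeasured velocity, e.g. 30 um/s
print(f"lambda(30 um/s) ~ {A * 30.0 ** (-n):.1f} um")
```

Fitting in log-log space is the usual way such empirical exponents are extracted; the resulting A and n then only apply to the particular slurry and particle system from which the data came.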
Initially, the materials system is chosen based on what sort of final structure is needed. The discussion so far has focused on water as the vehicle for freezing, but there are other solvents that may be used. A notable example is camphene, an organic solvent that is waxy at room temperature. Freezing of this solution produces highly branched dendritic crystals. Once the materials system is settled on, however, the majority of microstructural control comes from external operational conditions such as mold material and temperature gradient. Controlling pore size The microstructural wavelength (average pore + wall thickness) can be described as a function of the solidification velocity v (λ = A v⁻ⁿ) where A is dependent on solids loading. There are then two ways in which the pore size can be controlled: the solidification speed can be changed, which alters the microstructural wavelength, or the solids loading can be changed. In doing so, the ratio of pore size to wall size is changed. It is often more prudent to alter the solidification velocity, seeing as a minimum solid loading is usually desired. Since microstructural size (λ) is inversely related to the velocity of the freezing front, faster speeds lead to finer structures, while slower speeds produce a coarse microstructure. Controlling the solidification velocity is, therefore, crucial to being able to control the microstructure. Controlling pore shape Additives can prove highly useful and versatile in changing the morphology of pores. These work by affecting the growth kinetics and microstructure of the ice in addition to the topology of the ice-water interface. Some additives work by altering the phase diagram of the solvent. For example, water and NaCl have a eutectic phase diagram. When NaCl is added to a freeze-casting suspension, the solid ice phase and liquid regions are separated by a zone where both solids and liquids can coexist. This briny region is removed during sublimation, but its existence has a strong effect on the microstructure of the porous ceramic. Other additives work by either altering the interfacial surface energies between the solid/liquid and particle/liquid, changing the viscosity of the suspension, or changing the degree of undercooling in the system. Studies have been done with glycerol, sucrose, ethanol, acetic acid and more. Static vs. dynamic freezing profiles If a freeze-casting setup with a constant temperature on either side of the freezing system is used (static freeze-casting), the front solidification velocity in the SSZ will decrease over time due to the increasing thermal buffer caused by the growing ice front. When this occurs, more time is given for the anisotropic ice crystals to grow perpendicularly to the freezing direction (c-axis), resulting in a structure with ice lamellae that increase in thickness along the length of the sample. To ensure highly anisotropic, yet predictable solidification behavior within the SSZ, dynamic freezing patterns are preferred. Using dynamic freezing, the velocity of the solidification front, and, therefore, the ice crystal size, can be controlled with a changing temperature gradient. The increasing thermal gradient counters the effect of the growing thermal buffer imposed by the growing ice front. It has been shown that a linearly decreasing temperature on one side of a freeze-cast will result in near-constant solidification velocity, yielding ice crystals with an almost constant thickness along the SSZ of an entire sample. However, as pointed out by Waschkies et al. 
even with constant solidification velocity, the thickness of the ice crystals does increase slightly over the course of freezing. In contrast to that, Flauder et al. demonstrated that an exponential change of the temperature at the cooling plate leads to a constant ice crystal thickness within the complete SSZ, which was attributed to a measurably constant ice-front velocity in a distinct study. This approach enables a prediction of the ice-front velocity from the thermal parameters of the suspension. Consequently, if the exact relationship between the pore diameter and ice-front velocity is known, an exact control over the pore diameter can be achieved. Anisotropy of the interface kinetics Even if the temperature gradient within the slurry is perfectly vertical, it is common to see tilting or curvature of the lamellae as they grow through the suspension. To explain this, it is possible to define two distinct growth directions for each ice crystal. There is the direction determined by the temperature gradient, and the one defined by the preferred growth direction crystallographically speaking. These angles are often at odds with one another, and their balance will describe the tilt of the crystal. The non-overlapping growth directions also help to explain why dendritic textures are often seen in freeze-casts. This texturing is usually found only on the side of each lamella; the direction of the imposed temperature gradient. The ceramic structure left behind shows the negative image of these dendrites. In 2013, Deville et al. made the observation that the periodicity of these dendrites (tip-to-tip distance) actually seems to be related to the primary crystal thickness. Particle packing effects Up until now, the focus has been mostly on the structure of the ice itself; the particles are almost an afterthought to the templating process but in fact, the particles can and do play a significant role during freeze-casting. It turns out that particle arrangement also changes as a function of the freezing conditions. For example, researchers have shown that freezing velocity has a marked effect on wall roughness. Faster freezing rates produce rougher walls since particles are given insufficient time to rearrange. This could be of use when developing permeable gas transfer membranes where tortuosity and roughness could impede gas flow. It also turns out that z- and r-crystals do not interact with ceramic particles in the same way. The z-crystals pack particles in the x-y plane while r-crystals pack particles primarily in the z-direction. R-crystals actually pack particles more efficiently than z-crystals and because of this, the area fraction of the particle-rich phase (1 - area fraction of ice crystals) changes as the crystal population shifts from a mixture of z- and r-crystals to only z-crystals. Starting from where ice crystals first begin to exclude particles, marking the beginning of the transition zone, we have a majority of r-crystals and a high value for the particle-rich phase fraction. We can assume that because the solidification speed is still rapid that the particles will not be packed efficiently. As the solidification rate slows down, however, the area fraction of the particle-rich phase drops indicating an increase in packing efficiency. At the same time, the competitive growth process is taking place, replacing r-crystals with z-crystals. 
At a certain point nearing the end of the transition zone, the particle-rich phase fraction rises sharply since z-crystals are less efficient at packing particles than r-crystals. The apex of this curve marks the point where only z-crystals are present (SSZ). During steady-state growth, after the maximum particle-rich phase fraction is reached, the efficiency of packing increases as steady-state is achieved. In 2011, researchers at Yale University set out to probe the actual spatial packing of particles within the walls. Using small-angle X-ray scattering (SAXS) they characterized the particle size, shape and interparticle spacing of nominally 32 nm silica suspensions that had been freeze-cast at different speeds. Computer simulations indicated that for this system, the particles within the walls should not be touching but rather separated from one another by thin films of ice. Testing, however, revealed that the particles were, in fact, touching and more than that, they attained a packed morphology that cannot be explained by typical equilibrium densification processes. Morphological instabilities In an ideal world, the spatial concentration of particles within the SSZ would remain constant throughout solidification. As it happens, though, the concentration of particles does change during compression, and this process is highly sensitive to solidification speed. At low freezing rates, Brownian motion takes place, allowing particles to move easily away from the solid-liquid interface and maintain a homogeneous suspension. In this situation, the suspension is always warmer than the solidified portion. At fast solidification speeds, approaching VC, the concentration, and concentration gradient at the solid-liquid interface increases because particles cannot redistribute soon enough. When it has built up enough, the freezing point of the suspension is below the temperature gradient in the solution and morphological instabilities can occur. For situations where the particle concentration bleeds into the diffusion layer, both the actual and freezing temperature dip below the equilibrium freezing temperature creating an unstable system. Often, these situations lead to the formation of what are known as ice lenses. These morphological instabilities can trap particles, preventing full redistribution and resulting in inhomogeneous distribution of solids along the freezing direction as well as discontinuities in the ceramic walls, creating voids larger than intrinsic pores within the walls of the porous ceramic. Mechanical properties Most research into the mechanical properties of freeze casted structures focus on the compressive strength of the material and its yielding behavior at increasing stresses. According to Ashby, the mechanical properties of a freeze-casted, open pore structure can be approximately modeled with an anisotropic, cellular solid. These include naturally occurring materials such as cork and wood that have properties that have anisotropic structures, and thus mechanical properties that are directionally dependent. Donius et al. have investigated the anisotropic nature of freeze-casted aerogels, comparing their mechanical strength to isotropically freeze casted aerogels. They found that the Young's modulus of the anisotropic structure was significantly higher than that of the isotropic aerogels, particularly when tested parallel to the freezing direction. 
The Young's modulus is several orders of magnitude higher in the parallel direction as compared to the direction perpendicular to freezing, demonstrating the anisotropic mechanical properties. The mechanical behavior of the freeze casted structure can be classified into distinct regions. At low strains, the lamellae follow a linear elastic behavior. Here, the lamellae bend under a compressive stress, and thus deflect. According to Ashby, this deflection can be calculated from single beam theory, in which each of the cellular sections is idealized to be cubic shaped and each of the cell walls is assumed to be a beam-like member with a square base. Based on this idealization, the amount of bending δ in the cell walls under a compressive force F is given by δ = F l³ / (C₁ E_s I), where l is the length of each cell, I is the second moment of area, E_s is the Young's modulus of the cell wall material and C₁ is a geometry dependent constant. Furthermore, we find that the Young's modulus of the entire structure is proportional to the square of the relative density: E/E_s ∝ (ρ/ρ_s)². This shows that the density of the material is an important factor when designing structures that can withstand loads, and that the Young's modulus of the structure is heavily determined by the porosity of the structure. Past the linear region, the lamellae start to buckle elastically and deform non-linearly. In a stress-strain curve, this is shown as a flat plateau. The critical load at which buckling begins is given by P_crit = n² π² E_s I / l², where n is a constant dependent on the boundary constraints of the structure. This is one of the main failure mechanisms for freeze casted materials. From this, the maximum compressive stress that an anisotropic porous solid can maintain can be expressed in terms of σ_fs, the fracture stress for the bulk material. These models demonstrate that the bulk material selection can drastically impact the mechanical response of freeze casted structures under stress. Other microstructural features such as the lamellar thickness, pore morphology and degree of macroporosity can also heavily influence the compressive strength and Young's modulus of these highly anisotropic structures. Novel freeze-casting techniques Freeze-casting can be applied to produce aligned porous structures from diverse building blocks including ceramics, polymers, biomacromolecules, graphene and carbon nanotubes. As long as there are particles that may be rejected by a progressing freezing front, a templated structure is possible. By controlling cooling gradients and the distribution of particles during freeze casting, using various physical means, the orientation of lamellae in obtained freeze-cast structures can be controlled to provide improved performance in diverse applied materials. Munch et al. showed that it is possible to control the long-range arrangement and orientation of crystals normal to the growth direction by templating the nucleation surface. This technique works by providing lower energy nucleation sites to control the initial crystal growth and arrangement. The orientation of ice crystals can also be affected by applying electromagnetic fields, as was demonstrated in 2010 by Tang et al., in 2012 by Porter et al., and in 2021 by Yin et al. Using specialized setups, researchers have been able to create radially aligned freeze-casts tailored for biomedical applications and filtration or gas separation applications. Inspired by nature, scientists have also been able to use coordinating chemicals and cryopreservation to create remarkably distinctive microstructural architectures. 
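Returning briefly to the scaling relations quoted in the mechanical-properties discussion above, the following sketch evaluates the cellular-solid estimate E ≈ C·E_s·(ρ/ρ_s)² for a few relative densities. The prefactor C and the wall-material modulus are assumed, illustrative values only, and the relation itself is the generic open-cell scaling rather than a fitted model for any specific freeze-cast material:

```python
def cellular_modulus(E_wall_GPa, relative_density, C=1.0):
    """Gibson-Ashby-type estimate of the effective Young's modulus (GPa)
    of an open cellular solid: E ~ C * E_s * (rho/rho_s)**2."""
    return C * E_wall_GPa * relative_density ** 2

E_s = 300.0  # GPa, assumed wall-material modulus (a dense alumina-like ceramic)
for rel_rho in (0.1, 0.2, 0.4):
    print(f"relative density {rel_rho:.1f} -> E ~ {cellular_modulus(E_s, rel_rho):6.1f} GPa")
```

The quadratic dependence is the point of the exercise: halving the relative density cuts the estimated stiffness by a factor of four, which is why porosity so strongly dominates the stiffness of these structures.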
Freeze cast materials Particles that are assembled into aligned porous materials in freeze casting processes are often referred to as building blocks. As freeze casting has become a widespread technique the range of materials used has expanded. In recent years, graphene and carbon nanotubes have been used to fabricate controlled porous structures using freeze casting methods, with materials often exhibiting outstanding properties. Unlike aerogel materials produced without ice-templating, freeze cast structures of carbon nanomaterials have the advantage of possessing aligned pores, allowing, for example unparalleled combinations of low density and high conductivity. Applications of freeze cast materials Freeze casting is unique in its ability to produce aligned pore structures. Such structures are often found in nature, and consequently freeze casting has emerged as a valuable tool to fabricate biomimetic structures. The transport of fluids through aligned pores has led to the use of freeze casting as a method towards biomedical applications including bone scaffold materials. The alignment of pores in freeze cast structures also imparts extraordinarily high thermal resistance in the direction perpendicular to the aligned pores. The freeze casting of aligned porous fibres by spinning processes presents a promising method towards the fabrication of high performance insulating clothing articles. In addition, materials with aligned pores produced via freeze casting from sintered nickel powder have gained significant attention in phase-change systems, such as loop heat pipes (LHPs), due to their excellent thermal properties. In these systems, wicks play a critical role in maintaining liquid-vapor equilibrium, enabling efficient circulation of the working fluid. Traditional wicks are often manufactured separately and integrated later, creating an interface that limits liquid transfer efficiency. To address this limitation, a porous wick with a gradient structure was developed in a single operation using freeze casting. This innovative approach eliminates interfacial resistance, ensuring seamless liquid transport while maintaining the high thermal conductivity and efficient capillary action required for optimal LHP performance. Another emerging and promising application of freeze casting is the production of porous foams for green hydrogen generation through advanced thermochemical processes like Chemical Loop Combustion (CLC) and the Steam Iron Process (SIP). These processes leverage the unique properties of porous metal structures, such as optimized reaction kinetics, enhanced thermal efficiency, and sustainability. In Chemical Loop Combustion (CLC), foams made from materials like iron oxides act as oxygen carriers, enabling fuel combustion without direct air contact, separating CO₂ for capture while producing high-purity hydrogen. Similarly, in Steam Iron Process (SIP), dendritic pore structures ensure efficient water vapor distribution and maximize hydrogen yield. The precise control over porosity and thermal properties afforded by freeze casting, along with the use of eco-friendly solvents like camphene, positions these foams as a vital innovation for scalable and sustainable hydrogen production, contributing to the fight against climate change. See also Freeze gelation Further reading J. Laurie, Freeze Casting: a Modified Sol-Gel Process, University of Bath, UK, Ph.D. Thesis, 1995 M. Statham, Economic Manufacture of Freeze-Cast Ceramic Substrate Shapes for the Spray-Forming Process, Univ. 
Bath, UK, Ph.D. Thesis, 1998 S. Deville, "Freezing Colloids: Observations, Principles, Control, and Use." Springer, 2017 External links A website with large dataset, allowing creation of graphs References Casting (manufacturing) Ceramic engineering Colloids Water ice
Freeze-casting
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
6,301
[ "Ceramic engineering", "Chemical mixtures", "Condensed matter physics", "Colloids" ]
31,108,393
https://en.wikipedia.org/wiki/Single-use%20bioreactor
A single-use bioreactor or disposable bioreactor is a bioreactor with a disposable bag instead of a culture vessel. Typically, this refers to a bioreactor in which the lining in contact with the cell culture will be plastic, and this lining is encased within a more permanent structure (typically, either a rocker or a cuboid or cylindrical steel support). Commercial single-use bioreactors have been available since the end of the 1990s and are now made by several well-known producers (See below) . Single-use at bioreactors Single-use bioreactors are widely used in the field of mammalian cell culture and are now rapidly replacing conventional bioreactors. Instead of a culture vessel made from stainless steel or glass, a single-use bioreactor is equipped with a disposable bag. The disposable bag is usually made of a three-layer plastic foil. One layer is made from Polyethylene terephthalate or LDPE to provide mechanical stability. A second layer made using PVA or PVC acts as a gas barrier. Finally, a contact layer is made from PVA or PP. For medical applications the single-use materials that contact the product must be certified by the European Medicines Agency or similar authorities responsible for other regions. Types of single-use bioreactors In general there are two different approaches for constructing single-use bioreactors, differing in the means used to agitate the culture medium. Some single-use bioreactors use stirrers like conventional bioreactors, but with stirrers that are integrated into the plastic bag. The closed bag and the stirrer are pre-sterilized. In use the bag is mounted in the bioreactor and the stirrer is connected to a driver mechanically or magnetically. Other single-use bioreactors are agitated by a rocking motion. This type of bioreactor does not need any mechanical agitators inside the single-use bag.,. Both the stirred and the rocking motion single-use bioreactors are used up to a scale of 1000 Liters volume. Several variations on these two methods exist. The Kuhner Shaker, was originally designed for media preparation, but is also useful for cell cultivation. The PBS Biotech Air Wheel technology uses buoyancy from the air feed to provide rotational power to a stirrer. Measurement and control Measurement and control of a cell culture process using a single-use bioreactor is challenging, as the bag in which the cultivation will be performed is a closed and pre-sterilized system. Sensors for measuring the temperature, conductivity, glucose, oxygen, or pressure must be built into the bag during the manufacturing prior to sterilization. The sensors can’t be installed prior to use of the bioreactor as in the conventional case. Consequently, some challenges must be taken into consideration. The bag is assembled, delivered and stored dry, with the consequence that the usual pH-electrodes can not be used. Calibration or additional assembly is not possible. These constraints have led to the development of preconfigured bags with new types of analytical probes. The pH value can be measured using a patch that is just a few millimeters in size. This patch consists of a protecting membrane with a pH-sensitive dye behind it. Changing pH in the culture medium changes the pH, and the color, of the dye. The color change can be detected with a laser external to the bag. This and other methods of non-invasive measurement have been developed for single-use bioreactors. 
Single-use bioprocessing Decreasing product contact with parts/systems decreases qualification and validation times when changing from one drug process to another. Since the biopharmaceutical manufacturing process includes many steps other than just the use of bioreactors, single-use technologies are utilized throughout the manufacturing process due to their advantages. Single-use bioprocessing (SUS) steps available are: media and buffer preparation, cell harvesting, filtration, purification and virus inactivation. The major innovation of single-use technologies in this area of processing has been in the construction of 2D/3D bags and tubing welding, reducing the contact of product with non-single-use parts/systems. Advantages and disadvantages Compared with conventional bioreactor systems, the single-use solution has some advantages. Application of single-use technologies reduces cleaning and sterilization demands. Some estimates show cost savings of more than 60% with single-use systems compared to fixed-asset stainless steel bioreactors. In pharmaceutical production, complex qualification and validation procedures can be made easier, and will finally lead to significant cost reductions. The application of single-use bioreactors reduces the risk of cross contamination and enhances the biological and process safety. Single-use applications are especially suitable for any kind of biopharmaceutical product. A major reason single-use bioprocessing (SUS) is popular with pharmaceutical companies and contract manufacturing organizations (CMOs) is that a process area/facility can quickly change from one process (drug product) to another. This is due to, as stated previously, reduced qualification and validation procedures. This increases productivity and reduces costs, since fewer resources and less time are required for changing from one process to another. Since drugs in the clinical and R&D stage (pre-commercialized drugs) are not needed on the same scale as most commercial drugs, they are often produced in single-use suites so the same area/facility can quickly switch from one drug to another. Often when a drug becomes commercialized the advantages of SUSs decrease, since one area/facility can be dedicated to one product, essentially eliminating the need for flexibility, which is the major advantage of SUSs. It is estimated that ≥85% of pre-commercial drug product production utilizes single-use systems-based manufacturing. Stainless steel reusable systems become more advantageous as the demand for the drug product and the batch size increase, often a result of the commercialization of a drug. This is not always the case, as commercialized drugs can be found being produced in single-use suites/facilities. SUSs contain fewer parts compared with conventional biopharmaceutical manufacturing systems, so the initial and maintenance costs are reduced. A limiting factor for the use of some single-use bioreactors is the achievable oxygen transfer, represented by the specific mass transfer coefficient (kL) for the specific phase-boundary area (a), resulting in the volumetric oxygen mass transfer coefficient (kLa). Theoretically this can be influenced by a higher energy input (increasing the stirrer speed or the rocking frequency). However, since single-use bioreactors are mainly used for cell culturing, the energy input is limited by the delicate nature of cells. Higher energy input leads to higher shear forces, causing the risk of cell damage. 
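To make the role of kLa concrete, the short sketch below computes the oxygen transfer rate from the standard film-theory expression OTR = kLa·(C* − C_L). The formula is textbook mass-transfer theory rather than anything specific to a particular single-use system, and the numbers are illustrative assumptions:

```python
def oxygen_transfer_rate(kLa_per_h, c_sat_mg_L, c_liquid_mg_L):
    """Oxygen transfer rate in mg/(L*h) from the volumetric mass transfer
    coefficient kLa and the dissolved-oxygen driving force (C* - C_L)."""
    return kLa_per_h * (c_sat_mg_L - c_liquid_mg_L)

# Illustrative values: modest kLa typical of gently agitated cell culture,
# oxygen saturation ~7 mg/L, culture held at 40% of saturation.
kLa = 10.0              # 1/h, assumed volumetric mass transfer coefficient
c_star, c_L = 7.0, 2.8  # mg/L
otr = oxygen_transfer_rate(kLa, c_star, c_L)
print(f"OTR ~ {otr:.0f} mg O2 per litre per hour")
```

Because raising kLa in a single-use bag generally means more agitation, and hence more shear, the achievable OTR is bounded by how much energy input the cells tolerate, which is exactly the limitation described above.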
Single-use bioreactors are currently available with volumes of up to about 1000 L, so scale-up is limited compared to conventional bioreactors. However, a handful of suppliers are now delivering units at the 2,000 liter scale and some suppliers (Sartorius, Xcellerex, Thermo Scientific HyClone and PBS Biotech) are providing a family of single-use bioreactors from bench-top to full-scale production. Three challenges exist for faster and greater single-use bioreactor adoption: 1) higher quality and lower cost disposable bags and containers, 2) more reusable and disposable sensors and probes that can provide high quality analytics including real-time cell culture level data points, and 3) a family of bioreactors from lab to production that has full scale-up of the bioprocess. Suppliers are working to improve plastic bag materials and performance and also to develop a broader range of sensors and probes that provide scientists greater insight into cell density, quality and other metrics needed to improve yields and product efficacy. New perfusion devices are also becoming popular for certain cell culture applications. Environmental aspects Environmental aspects of single-use bioreactors are important to consider due to the amount of disposable material used compared with conventional bioreactors. A complete life cycle assessment comparing single-use bioreactors and conventional bioreactors does not exist, but many ecological arguments support the concept of single-use bioreactors. For a complete life cycle assessment, not only the manufacturing but also the repeated use needs to be considered. Even in a single-use bioreactor, the main part is not a disposable, but will be continuously reused. The plastic bag that is used instead of a culture vessel is a disposable, as well as all the integrated sub-assemblies like sensors, tubing, and stirrers. The bag and all its parts are mainly made from plastics that are derived from petroleum. Current recycling concepts are mainly focused on incineration, to recover the energy originating from the petroleum as heat and electricity. Most of the petroleum would be burned anyway in power plants or automobiles. Burning the single-use components of bioreactors therefore merely adds a detour through biochemical engineering to the petroleum's life cycle, without a big influence on it. The making of conventional culture vessels from stainless steel or glass requires more energy than making plastic bags. Using conventional bioreactors, the culture vessel needs to be cleaned and sterilized after each fermentation. Cleaning requires large amounts of water, in addition to acids, alkali and detergents. Sterilization with steam at 121 degrees C and 1 bar pressure requires large quantities of energy and large amounts of distilled water. This distilled water (often called "water for injection" in pharmaceutical nomenclature) must be prepared by expending a large amount of energy as well. A comparison of the life cycle assessment of conventional and single-use bioreactors therefore looks more favorable for the single-use bioreactors than might be expected at first. According to a report by A. Sinclair et al., single-use bioreactors will help to save 30% of the electrical energy for operation, 62% of the energy input for the production of the system, 87% of water and finally 95% of detergents, all compared to conventional bioreactors. 
Notes References External links Whitepaper: An environmental life cycle assessment comparison of single-use and conventional bioprocessing technology Bioreactors Bioreactor, Single-use
Single-use bioreactor
[ "Chemistry", "Engineering", "Biology" ]
2,146
[ "Bioreactors", "Biological engineering", "Chemical reactors", "Biochemical engineering", "Microbiology equipment" ]
31,113,949
https://en.wikipedia.org/wiki/RegulonDB
RegulonDB is a database of the regulatory network of gene expression in Escherichia coli K-12. RegulonDB also models the organization of the genes in transcription units, operons and regulons. A total of 120 sRNAs, with 231 interactions that altogether regulate 192 genes, are also included. RegulonDB was founded in 1998 and also contributes data to the EcoCyc database. Transcription factors and sensory-response units In bacteria such as E. coli, genes are regulated by sequence elements in promoters and related binding sites. RegulonDB provides a database of such regulatory elements, their binding sites and the transcription factors that bind to these sites in E. coli. RegulonDB 9.0 includes 184 experimentally determined transcription factors (TFs) as well as 120 computationally predicted TFs, that is, a total of 304. The complete repertoire of 189 genetic sensory-response units (GENSOR units) is reported, integrating their signal, regulatory interactions, and metabolic pathways. A total of 78 GENSOR units have their four components highlighted; 119 include the genetic switch and the response, and 2 contain only the genetic switch. A total of 103 TFs have a known effector in RegulonDB, including 25 two-component systems. There were enough sites to build a motif for 93 TFs and to infer 16,207 predicted TF binding sites. This set of predicted binding sites corresponds to 12,574 TF → gene regulatory interactions; this represents a recovery of 52% of the 1,592 annotated regulatory interactions in the database for the 93 TFs for which RegulonDB has a position-weight matrix (PWM). If only TFs with a good-quality PWM are taken into account, the total number of predicted TF → gene interactions is 8,714, recovering 672 (57%) of annotated interactions for this TF subset. Semi-automatic curation produced a total of 3,195 regulatory interactions for 199 TFs. Definitions Check the glossary for all definitions. Transcription unit (TU) A transcription unit is a set of one or more genes transcribed from a single promoter. A TU may also include regulatory protein binding sites affecting this promoter and a terminator. A complex operon with several promoters contains, therefore, several transcription units. A transcription unit must include all the genes in an operon. Promoters and terminators A promoter is defined in RegulonDB as the nucleotide sequence 60 bases upstream and 20 bases downstream of the precise initiation of transcription, or +1. Terminators are regions where transcription ends, and RNA polymerase unbinds from DNA. Binding site TF binding sites are physical DNA sites recognized by transcription factors within a genome, including enhancer, upstream activator (UAS) and operator sites that may bind repressors or activators. Graphic display in RegulonDB The graphic display of an operon contains all the genes of its different transcription units, as well as all the regulatory elements involved in the transcription and regulation of those TUs. An operon is here conceived as a structural unit encompassing all genes and regulatory elements. An operon with several promoters located near each other may also have dual binding sites, indicating that such a site can activate one particular promoter, but repress a second one. On the same page, the collection of the different TUs is displayed below the operon. 
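Before turning to the graphic display of individual TUs, a brief sketch of how a position-weight matrix of the kind mentioned above is typically used to score candidate binding sites may help; the matrix values and sequence below are invented purely for illustration and are not taken from RegulonDB:

```python
# Toy log-odds PWM for a 4-base motif: one dictionary per position (A, C, G, T).
PWM = [
    {"A": 1.2, "C": -0.8, "G": -1.1, "T": 0.3},
    {"A": -0.5, "C": 1.0, "G": -0.9, "T": 0.1},
    {"A": -1.0, "C": -0.7, "G": 1.3, "T": -0.2},
    {"A": 0.4, "C": -0.6, "G": -0.4, "T": 0.9},
]

def score(site):
    """Sum of per-position log-odds scores for a candidate site."""
    return sum(col[base] for col, base in zip(PWM, site))

def scan(sequence, threshold=1.5):
    """Slide the PWM along the sequence; report windows scoring above threshold."""
    w = len(PWM)
    return [(i, sequence[i:i + w], score(sequence[i:i + w]))
            for i in range(len(sequence) - w + 1)
            if score(sequence[i:i + w]) >= threshold]

print(scan("TTACGTACGTAA"))
```

Scanning a genome with such a matrix and keeping windows above a score threshold is how "predicted TF binding sites" of the kind counted above are generated; the quality of the matrix (how many experimentally confirmed sites went into it) determines how many known interactions the predictions recover.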
The graphic display of a TU will always contain only one promoter -when known- with the binding sites that regulate its activity, followed by the transcribed genes. Note that dual sites are frequently displayed at a TU as repressors or activators. This is because the site will have a particular effect on the promoter of that TU. References External links http://regulondb.ccg.unam.mx/ Biological databases Gene expression
RegulonDB
[ "Chemistry", "Biology" ]
837
[ "Gene expression", "Bioinformatics", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Biological databases" ]
31,115,267
https://en.wikipedia.org/wiki/Nowotny%20phase
In inorganic chemistry, a Nowotny chimney ladder phase (NCL phase) is a particular intermetallic crystal structure found with certain binary compounds. NCL phases are generally tetragonal and are composed of two separate sublattices. The first is a tetragonal array of transition metal atoms, generally from group 4 through group 9 of the periodic table. Contained within this array of transition metal atoms is a second network of main group atoms, typically from group 13 (boron group) or group 14 (carbon group). The transition metal atoms form a chimney with a helical zigzag chain. The main-group elements form a ladder spiraling inside the transition metal helix. The phase is named after one of the early investigators, H. Nowotny. Examples are RuGa2, Mn4Si7, Ru2Ge3, Ir3Ga5, Ir4Ge5, V17Ge31, Cr11Ge19, Mn11Si19, Mn15Si26, Mo9Ge16, Mo13Ge23, Rh10Ga17, and Rh17Ge22. In RuGa2 the ruthenium atoms in the chimney are separated by 329 pm. The gallium atoms spiral around the Ru chimney with a Ga–Ga intrahelix distance of 257 pm. The view perpendicular to the chimney axis is that of a hexagonal lattice with gallium atoms occupying the vertices and ruthenium atoms occupying the center. Each gallium atom bonds to 5 other gallium atoms forming a distorted trigonal bipyramid. The gallium atoms carry a positive charge and the ruthenium atoms have a formal charge of −2 (filled 4d shell). In Ru2Sn3 the ruthenium atoms spiral around the tin inner helix. In two dimensions the Ru atoms form a tetragonal lattice with the tin atoms appearing as triangular units in the Ru channels. The occurrence of an NCL phase can be predicted by the so-called 14 electron rule: the total number of valence electrons per transition metal atom is 14. References Intermetallics
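The 14 electron rule lends itself to a quick arithmetic check. The sketch below counts valence electrons per transition-metal atom for a few of the compositions listed above, using conventional group-number electron counts (7 for Mn, 8 for Ru, 9 for Ir, 3 for Ga, 4 for Si and Ge); the helper function is only an illustration of the counting, not part of any published formalism:

```python
VALENCE = {"Mn": 7, "Ru": 8, "Ir": 9, "Ga": 3, "Si": 4, "Ge": 4}

def electrons_per_tm(tm, n_tm, mg, n_mg):
    """Valence electrons per transition-metal atom for a T(n_tm)E(n_mg) phase."""
    return (n_tm * VALENCE[tm] + n_mg * VALENCE[mg]) / n_tm

for tm, n_tm, mg, n_mg in [("Ru", 1, "Ga", 2), ("Mn", 4, "Si", 7),
                           ("Ru", 2, "Ge", 3), ("Ir", 4, "Ge", 5)]:
    label = f"{tm}{n_tm if n_tm > 1 else ''}{mg}{n_mg}"
    print(f"{label}: {electrons_per_tm(tm, n_tm, mg, n_mg):.2f} electrons per {tm}")
```

For RuGa2, for example, the count is 8 + 2 × 3 = 14 electrons per Ru atom, and the other listed stoichiometries in the loop come out at 14.00 as well, which is the regularity the rule captures.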
Nowotny phase
[ "Physics", "Chemistry", "Materials_science" ]
431
[ "Inorganic compounds", "Metallurgy", "Alloys", "Intermetallics", "Condensed matter physics" ]
31,117,483
https://en.wikipedia.org/wiki/Aggregate%20modulus
In relation to biomechanics, the aggregate modulus (HA) is a measurement of the stiffness of a material at equilibrium, when fluid has ceased flowing through it. The aggregate modulus can be calculated from the Young's modulus (E) and the Poisson ratio (ν). The aggregate modulus of a specimen is determined from a unidirectional deformational testing configuration, i.e., the only non-zero strain component is E11. This configuration contrasts with the one used for the Young's modulus, which is determined from a unidirectional loading testing configuration, i.e., the only non-zero stress component is, say, in the e1 direction; in this test, the only non-zero component of the stress tensor is T11. References Biomechanics Motor control Physical quantities
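The relation between the aggregate modulus and the more familiar elastic constants is the standard confined-compression (constrained modulus) expression HA = E(1 − ν)/[(1 + ν)(1 − 2ν)]. A small sketch evaluating it, with illustrative soft-tissue-like values rather than data from any specific study:

```python
def aggregate_modulus(E, nu):
    """Aggregate (confined compression) modulus from Young's modulus E and
    Poisson's ratio nu: H_A = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu))."""
    return E * (1.0 - nu) / ((1.0 + nu) * (1.0 - 2.0 * nu))

# Illustrative values in MPa, loosely in the range reported for articular cartilage
print(f"H_A = {aggregate_modulus(E=0.7, nu=0.1):.2f} MPa")
```

As ν approaches 0.5 the denominator goes to zero, reflecting that a nearly incompressible solid becomes very stiff in confined compression; this is why HA and E can differ noticeably even for modest Poisson ratios.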
Aggregate modulus
[ "Physics", "Mathematics", "Biology" ]
175
[ "Biomechanics", "Physical phenomena", "Behavior", "Physical quantities", "Quantity", "Classical mechanics stubs", "Motor control", "Classical mechanics", "Mechanics", "Physical properties" ]
31,117,841
https://en.wikipedia.org/wiki/EOn
eOn was a volunteer computing project running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform, which used theoretical chemistry techniques to solve problems in condensed matter physics and materials science. It was a project of the Institute for Computational Engineering and Sciences at the University of Texas. Traditional molecular dynamics can accurately model events that occur within a fraction of a millisecond. In order to model events that take place on much longer timescales, eOn combines transition state theory with kinetic Monte Carlo. The result is a combination of classical mechanics and quantum methods like density functional theory. Since the generation of new work units depended on the results of previous units, the project could only give each host a few units at a time. On May 26, 2014, it was announced that eOn would be retiring from BOINC. See also List of volunteer computing projects References Science in society Free science software Volunteer computing projects
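The text notes that eOn couples transition state theory with kinetic Monte Carlo (KMC) to reach long timescales. The sketch below shows the generic rejection-free KMC step (pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time). It is a textbook illustration of the method, not code from the eOn project, and the rates are invented:

import math, random

def kmc_step(rates):
    """One rejection-free kinetic Monte Carlo step.

    rates: list of escape rates (e.g. from transition state theory) for the
    processes available in the current state. Returns (chosen index, time increment).
    """
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(1.0 - random.random()) / total  # exponential waiting time
    return i, dt

# invented rates (1/s) for three possible transitions out of the current state
event, dt = kmc_step([1.0e3, 2.5e2, 7.0e1])
print(event, dt)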
EOn
[ "Physics", "Chemistry", "Materials_science", "Technology" ]
186
[ "Materials science stubs", "Condensed matter physics", "Computing stubs", "Condensed matter stubs", "Physical chemistry stubs" ]
31,119,235
https://en.wikipedia.org/wiki/Roadworthiness
Roadworthiness or streetworthiness is the property or ability of a car, bus, truck or any kind of automobile to be in a suitable operating condition or to meet acceptable standards for safe driving and the transport of people, baggage or cargo on roads or streets, and therefore to be street-legal. In Europe, roadworthy inspection is regulated by: Directive 2014/45/EU, on periodic roadworthiness tests for motor vehicles and their trailers, Directive 2014/46/EU, on the registration documents for vehicles, Directive 2014/47/EU, on the technical roadside inspection of the roadworthiness of commercial vehicles. Certificate A Certificate of Roadworthiness (also known as a ‘roadworthy’ or ‘RWC’) attests that a vehicle is safe enough to be used on public roads. A roadworthy is required in the selling of a vehicle in some countries. It may also be required when the vehicle is re-registered, and to clear some problematic notices. Inspection Roadworthy inspection is designed to check the vehicle to make sure that its important parts are in good (though not necessarily top) condition, sufficient for safe road use. It includes: mirrors wheels and tires vehicle structure lights and reflectors seats and seat belts steering, suspensions and braking systems windscreen, and windows including front wipers and washers other safety related items on the body, chassis or engine Roadworthy inspection in Europe Directive 2014/45/EU regulates the periodic testing for various kinds of vehicles: transport of people (M1, M2, M3) transport of goods (N1, N2, N3) trailers of more than 3.5 tonnes (O3, O4) tractors of category T5 and, since January 2022, two- or three-wheel vehicles in categories L3e, L4e, L5e and L7e with an engine displacement of more than 125 cm3. 18 of 27 EU member states have required motorcycle owners to have their vehicles checked for road-worthiness. Directive 2014/45/EU defines obligations and responsibilities, minimum requirements concerning road-worthiness tests, administrative provisions and cooperation and exchange of information. Minimum requirements concerning road-worthiness tests encompass date and frequency of testing, contents and methods of testing, assessment of deficiencies, road-worthiness certificate, follow-up of deficiencies and proof of test. See also Airworthiness Crashworthiness Cyberworthiness Railworthiness Seaworthiness Spaceworthiness Street-legal vehicle Vehicle inspection Reference list Transport law Mechanical engineering Motor vehicle maintenance
Roadworthiness
[ "Physics", "Engineering" ]
513
[ "Applied and interdisciplinary physics", "Transport stubs", "Transport law", "Physical systems", "Transport", "Mechanical engineering" ]
31,120,170
https://en.wikipedia.org/wiki/California%20Green%20Building%20Standards%20Code
The California Green Building Standards Code (CALGreen Code) is Part 11 of the California Building Standards Code and is the first statewide "green" building code in the US. Background and purpose The purpose of CALGreen is to improve public health, safety and general welfare by enhancing the design and construction of buildings through the use of building concepts having a reduced negative impact or positive environmental impact and encouraging sustainable construction practices in the following categories: Planning and design Energy efficiency Water efficiency and conservation Material conservation and resource efficiency Environmental quality To achieve CALGreen Tier 1, buildings must comply with the latest edition of "Savings By Design, Healthcare Modeling Procedures". To achieve CALGreen Tier 2, buildings must exceed the latest edition of "Savings By Design, Healthcare Modeling Procedures" by a minimum of 15%. The provisions of this code are directed to: State-owned buildings, including buildings constructed by the Trustees of the California State University, and to the extent permitted by California law, buildings designed and constructed by the Regents of the University of California and regulated by the California Building Standards Commission. Energy efficiency standards regulated by the California Energy Commission. Low-rise residential buildings constructed throughout California, including hotels, motels, lodging houses, apartment houses, dwellings, dormitories, condominiums, shelters for homeless persons, congregate residences, employee housing, factory-built housing and other types of dwellings containing sleeping accommodations. Public elementary and secondary schools, and community college buildings regulated by the Division of the State Architect within the California Department of General Services. Qualified historical buildings and structures and their associated sites regulated by the State Historical Building Safety Board within the Division of the State Architect within the California Department of General Services. General acute care hospitals, acute psychiatric hospitals, skilled nursing and/or intermediate care facilities, clinics licensed by the Department of Public Health and correctional treatment centers regulated by the California Office of Statewide Health Planning and Development within the California Health and Human Services Agency. Graywater systems regulated by the California Department of Water Resources and the California Department of Housing and Community Development. Land use In the US, urban land area quadrupled from 1945 to 2002, increasing at about twice the rate of population growth over this period. The estimated area of rural land used for residential purposes increased by 29% from 1997 to 2002. Water use Water is a precious natural resource. At least two-thirds of the United States have experienced or are bracing for local, regional, or statewide water shortages. The US population, and the California population in particular, has grown steadily over recent decades, so using water wisely is crucial in order to provide enough water for future generations as well. During the 20th century, water diverted south through the California Aqueduct was economically essential to Los Angeles. But fisheries, wildlife and water quality in the bay and delta paid a heavy price. Water is becoming an increasingly important resource throughout California and the United States.
The largest single use of potable water in California is water used for agricultural irrigation. The largest remaining segment of water use is that of public water supplies. Air and atmosphere Buildings in the United States contribute 38.9% of the nation's total carbon dioxide emissions, including 20.8% from the residential sector and 18.0% from the commercial sector (2008). On average, the energy use for typical buildings is assumed to consist of 67% electricity and 33% natural gas. The annual mean air temperature of a city with 1 million people or more can be 1.8–5.4 °F (1–3 °C) warmer than its surroundings. In the evening, the difference can be as high as 22 °F (12 °C). Heat islands can increase summertime peak energy demand, air conditioning costs, air pollution and greenhouse gas emissions, heat-related illness and mortality. One study estimates that the heat island effect is responsible for 5–10% of peak electricity demand for cooling buildings in cities. HVAC systems are required to use MERV 13 filtration, up from MERV 8 (2019 CALGreen, effective January 1, 2020). Materials and waste Approximately 170 million tons of building-related C&D materials were generated in the U.S. during 2003. This is a 25% increase in generation from the 1996 estimate of 136 million tons (which was 25% to 40% of the national solid waste stream). Provisions The residential mandatory measures are provided in chapter 4 and the non-residential ones in chapter 5 of the CALGreen Code. Among the residential mandatory measures, the Code provides for storm water drainage and retention systems intended to prevent flooding of adjacent properties and to prevent pollution from storm water runoff by retaining soil on-site or by providing filtering to restrict sedimentation from reaching storm water drainage systems and receiving streams or rivers. To comply, a retention basin has to be sized and shown on the site plan, and water has to be filtered and routed to a public drainage system. The new residential structure also has to comply with local storm water ordinances. The drainage system has to be shown on the site plan (swales, drain piping, retention areas, ground water recharge). CALGreen does not regulate energy efficiency (for either residential or non-residential structures), instead deferring it to the California Energy Commission (CEC) and its California Energy Code. Concerning the water issue, the code requires a 20% reduction of indoor water use and it allows both a prescriptive and a performance method. The prescriptive method provides some technical requirements that have to be followed: Showerheads ≤ 2.0 gpm (gallons per minute) @ 80 psi Lavatory faucets ≤ 1.5 gpm @ 60 psi Kitchen faucets ≤ 1.8 gpm @ 60 psi Urinals ≤ 0.5 gal/flush Water closets ≤ 1.28 gallons effective flush rate The performance method uses the performance calculation worksheets in Chapter 8 (or other calculation acceptable to the enforcing agency). CALGreen also specifies acceptable performance standards for plumbing fixtures with reduced water usage. Fixtures can be installed if they meet standards listed in Table 4.303.3. Outdoor water usage is also regulated: the Code requires irrigation controls to be weather- or soil moisture-based and to automatically adjust irrigation in response to changes in plants' needs as weather conditions change, or to have rain sensors or communication systems that account for local rainfall.
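As an illustration of the prescriptive indoor-water method described above, the fixture limits quoted in the text can be expressed as a simple compliance check. This is a hypothetical sketch for illustration only, not an official CALGreen calculation tool, and the fixture data in the example project are invented:

# prescriptive limits quoted in the text (flow in gpm, flush volumes in gallons)
PRESCRIPTIVE_LIMITS = {
    "showerhead": 2.0,       # gpm @ 80 psi
    "lavatory_faucet": 1.5,  # gpm @ 60 psi
    "kitchen_faucet": 1.8,   # gpm @ 60 psi
    "urinal": 0.5,           # gal/flush
    "water_closet": 1.28,    # gal effective flush
}

def check_fixtures(fixtures):
    """Return the fixtures that exceed the prescriptive limits (empty list = compliant)."""
    return [(kind, value) for kind, value in fixtures
            if value > PRESCRIPTIVE_LIMITS[kind]]

# invented example project
project = [("showerhead", 1.8), ("kitchen_faucet", 2.2), ("water_closet", 1.28)]
print(check_fixtures(project))  # [('kitchen_faucet', 2.2)] -> not compliant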
About construction waste reduction, disposal, and recycling, the code requires that at least 50% of nonhazardous construction and demolition debris be recycled and/or salvaged. This has to be done through the development of a waste management plan submitted for approval to the enforcing agency. CALGreen Appendix A4 contains the voluntary measures (Tier 1 and Tier 2) that were developed in response to numerous stakeholder requests for a statewide method of enhancing green construction practices beyond the Code's minimum levels. To meet Tier 1 or Tier 2, designers, builders, or property owners must increase the number of green building measures and further reduce percentages of water and energy use and waste to landfills in order to meet the threshold levels for each tier (these measures are listed in Section A4.601.4.2 (Tier 1) and Section A4.601.5.2 (Tier 2)). Also for non-residential structures, CALGreen demands 20% savings of potable water, standards for plumbing fixtures and fittings, a construction waste management plan, and a construction waste reduction of at least 50%. The Code also requires finish material pollutant control and acoustical control for exterior noise transmission and interior sound. The CALGreen 2010 Code was adopted by the California Building Standards Commission (CBSC), the California Department of Housing and Community Development (HCD), the Division of the State Architect (DSA) within the California Department of General Services, and the Office of Statewide Health Planning and Development (OSHPD) within the California Health and Human Services Agency. CBSC has the responsibility to administer the program and review building standards proposed by state agencies, develop building standards for occupancies where no other state agency has the authority (non-residential) and adopt and approve building standards for publication. The targets of the Code are designers, architects, builders, property owners, and also businesses and the government, which have to take the new standards into consideration when they decide to build new structures. History Several legislative bills like AB 35, AB 888, and AB 1058 were introduced during the 2007–2008 legislative session to require green building standards for state-owned or leased buildings, commercial buildings, and residential buildings respectively. Development of CALGreen began in 2007 and, during the rulemaking process, CBSC collaborated with the Department of Housing and Community Development (HCD), stakeholder groups and others. The first result of this cooperation was the adoption of the 2008 California Green Building Standards Code (CGBC) that became effective on August 1, 2009. The initial 2008 California Green Building Code publication provided a framework and first step toward establishing green building standards for low-rise residential structures. The 2008 CGBC was used as a base document, analyzed and evaluated for necessary updates that led to the 2010 CALGreen Code, but this is not the last step of the process: as new materials, technologies, and designs are developed and become available, CALGreen also has to develop. There are some enhancements from the 2008 Code to the 2010 one, among them: The previous code said that energy efficiency was regulated by the California Energy Code. Section 4.201.1 of CALGreen 2010 clarifies instead that the CEC adopts regulations to establish the minimum level of energy efficiency a structure that is heated or cooled must meet or exceed.
About indoor water use, HCD adopted maximum flush rates for toilets and the CEC adopted appliance standards which limit the water use of appliances and fixtures. Section 4.303.1 of 2010 CALGreen reduces indoor water use by at least 20% and it also provides a prescriptive and a performance method to meet the requirements. CALGreen 2010 also covers items that weren't covered before, like multiple showerheads and irrigation controllers. Policy tools CALGreen 2010 uses prescriptive regulation (it provides technical characteristics that have to be met in the construction of new buildings). Economists and industry often criticize this kind of regulation because it provides little reason for innovation once the regulated party has achieved the required standard. However, CALGreen provides just the minimum standard to achieve, and it allows local authorities to raise the level of the standards that apply (depending on the particular characteristics of the local area). This tool is effective in solving, in the long run, the environmental problems addressed by this policy because, by fixing technical characteristics that have to be followed in the construction process, it assures that all new buildings will have certain desirable efficiency characteristics. CALGreen 2010 alleviates the environmental problems connected with residential and non-residential structures, but it does not solve them entirely because it is limited to new buildings. Stakeholders There are many players interested in CALGreen and in its evolution. Policy targets are important stakeholders: designers, architects, builders, property owners and, in general, also businesses, the government and its agencies. Some of the stakeholders try to influence the evolution of the policy by participating in CBSC's and HCD's Green Building Focus Groups (stakeholder focus groups). They are: building officials; representatives from the construction industry; representatives from the environmental community; state agency representatives and public members. There are also government agencies involved. Apart from CBSC, HCD, DSA, and OSHPD, the following agencies contribute to the formulation of the policy: the Air Resources Board (for standards concerning air pollutants), the California Integrated Waste Management Board (CalRecycle) (for matters concerning landfill disposal), the Department of General Services, the Department of Public Health, the Department of Water Resources and the Energy Resources Conservation and Development Commission (Energy Commission). Building officials are interested in the policy because they want to know what the new standards are and what processes led to them, in order to do their job as well as possible. The construction industry is very interested in influencing the policy because changes in the standards could mean changes in suppliers and possibly increased costs, and the industry is mainly interested in minimizing costs. Producers of plumbing fixtures and companies that produce insulation systems for houses are interested in CALGreen because it modifies their sectors (as it also modifies the construction industry); the change can be encouraged by companies that produce energy- or water-saving products or obstructed by those that are not yet ready. The environmentalists are interested in maximizing the level of the mandatory provisions contained in the policy in order to maximize the benefits for the environment.
State agencies like the Office of Statewide Health Planning and Development are instead interested in protecting particular benefits for the community (in the case of OSHPD, health). Policy evaluation HCD organizes annual and triennial focus group meetings among stakeholders to check the effectiveness of the policy adopted and to discuss proposed changes to the code. CALGreen 2010 has been effective since January 1, 2011, so it is still too early to evaluate its effectiveness. However, both Build It Green (BIG) and LEED are successful standards similar to CALGreen. The system developed by Build It Green is called the GreenPoint Rated Climate Calculator, and initial project run-throughs using the Climate Calculator found emissions reductions of about 20% over conventional new construction built to code. In March 2008 a study by the New Buildings Institute found that, on average, LEED-NC buildings deliver anticipated savings and that LEED energy use is similar to predictions: 25–30% better than the national average (with average savings increasing for the higher LEED levels). See also California Energy Code Green building Green building in the United States Autonomous building Zero-energy building EPA LEED References This article incorporates text from publications of the California Department of Housing and Community Development, which is in the public domain. External links 2010 California Green Building Standards Code Green Building Standards Code Building codes Standards of the United States
California Green Building Standards Code
[ "Engineering" ]
2,880
[ "Building engineering", "Building codes" ]
31,120,858
https://en.wikipedia.org/wiki/Mendocino%20County%20GMO%20Ban
Mendocino County, California, was the first jurisdiction in the United States to ban the cultivation, production or distribution of genetically modified organisms (GMOs). The ordinance, entitled Measure H, was passed by referendum on March 2, 2004. Initiated by the group "GMO Free Mendocino", the campaign was a highly publicized grassroots effort by local farmers and environmental groups who contend that the potential risks of GMOs to human health and the ecosystem have not yet been fully understood. The measure was met with opposition from several interest groups representing the biotechnology industry: the California Plant Health Association (now the Western Plant Health Association) and CropLife America, a Washington-based consortium whose clients represent some of the largest food distributors in the nation, including Monsanto, DuPont and Dow Chemical. Since the enactment of the ordinance, Mendocino County has been added to an international list of "GMO free zones." Pre-emptive statutes barring local municipalities from enacting such ordinances have since become widespread, with adoption in sixteen states. Background A GMO is commonly considered to be any organism whose DNA has been modified by human intervention. Agricultural practices, however, have long used selective breeding techniques for the same purpose as modern biotechnology. Researchers now define genetically engineered organisms (GEOs) as those that are produced from a range of recombinant DNA technologies, which introduce a transgene into the genome of a host cell. The most widely practiced method involves the use of bacteria, which are able to penetrate the cell membrane of the host. Other methods include a "gene gun" or "biolistic particle delivery system". Recombinant DNA technology allows for the creation of synthetic genes with specific traits that have anthropogenic benefits. The term GEOs will be used here to describe organisms produced by the recombinant DNA technology commonly referred to by usage of the term GMOs. Environmental concerns The geographic and temporal scope of the regulatory debate regarding transgenic organisms and recombinant DNA technology is vast. Environmental risk assessments must weigh unquantifiable long-term risks against high and quantifiable short-term benefits. There is widespread concern amongst environmental groups, organic farmers and the international community that the introduction of transgenic organisms into local ecosystems may cause irreversible loss of biodiversity when the new synthetic strains become predominant. Further concern comes from members of the medical community who warn of the risk that the antibiotics used in the production of many GEOs may give rise to more resistant strains of bacteria. While the scientific community generally acknowledges the possibility of these risks, the scenarios are hard to quantify, particularly in a Risk-Cost Benefit Analysis (RCBA) model commonly used in public policy. Proponents of the technology dispute many of these findings or the significance of the risk factors. They generally contend that the GE strains have no greater evolutionary advantage than any other new strain introduced into a local ecosystem and that they behave in the same way. They also cite a reduced need for pesticides compared with conventional non-organic crops.
Opponents, by contrast, adhere to the precautionary principle, which advocates waiting until further study is done and puts the burden of proof on the producer to demonstrate that their productive activities pose no threat to the environment or human health before continuing. The precautionary principle has become the foundation of environmental policy for the EU. Causes The promulgation of agricultural biotechnology follows a trajectory that began in what is commonly referred to as the "green revolution". In the twentieth century, industrial methods were increasingly applied to agriculture for the mass production of monocultures, large tracts of land used for the production of single high yield crops with the use of fertilizers and pesticides. The actual physical problem GEOs seek to address, most commonly, is the destruction of crops from pests. The gradual shift towards monoculture has increased the frequency and severity of pest invasion and infestation because of the lack of agricultural diversity. Many pests that threaten different crops are natural predators to each other, which helps to offset their impacts. When many miles of the same crop are planted, however, it leaves the local habitat more vulnerable to the threat of a single pest. GEOs have brought short-term benefits in the control of pests by having genetically programmed immunities to the pesticides that are produced to be used along with them, such as the well-known post-emergence herbicide Roundup produced by Monsanto. The primary forces that are driving the production of GEOs, however, are social and economic. GEOs are the latest development in the drive to produce higher yields with fewer inputs, according to the profit maximization model. Proponents of the technology see it as an answer to growing food shortages in the face of rising global population. They also cite potential benefits, such as the creation of healthier strains of produce, aquaculture or livestock, with higher nutrient content and less fat. Opponents of GEOs, however, argue that world hunger is caused by economic and political dynamics rather than scarcity, so regardless of whether the yield is increased, the produce will not flow through the supply chain to those in need. Policy According to the ordinance, it is "unlawful for any person, firm, or corporation to propagate, cultivate, raise, or grow genetically modified organisms in Mendocino County." The measure is careful to define transgenic organisms as dependent on biotechnology as opposed to traditional methods of selective breeding. It also excludes micro-organisms from the prohibition. The complex geographic and spatial dimensions of the issue are highlighted by the fact that the ordinance only affects unincorporated areas of the county. City, state, federal and tribal lands are exempt from the prohibition and are free to grow and distribute GEOs. Measure H uses the traditional regulatory approach as its only policy tool. The policy targets are producers or distributors of genetically engineered organisms. Ostensibly, the farming industry is the stakeholder primarily targeted, though the law affects any person or entity. Due to the recognized inability to limit all GEO propagation within geographical proximity to the unincorporated areas affected by the law, the policy goals are rather to limit the expansion of the biotechnology industry in the county and to make it harder for seed companies to sell GE seed to local farmers.
Andrew Kimbrell of the Center for Food Safety, who backed the measure, stated that local municipalities have "no alternative but to try to halt" the spread of GE crops. In this context, the purpose is not to completely eradicate any GEOs in the county but rather to counteract the prevailing trend of the agricultural industry. History The "GMO Free Mendocino" campaign was started by Els Cooperrider, a retired cancer researcher and founding member of "The Mendocino Organic Network." Initially, the coalition sought to enact local legislation requiring the labeling of GEO products. However, since national efforts to push for labeling had been largely unsuccessful, it was decided to advocate the prohibition of GEO production and propagation within county limits instead. Janie Sheppard, a local attorney, and Dr. Ron Epstein, a research professor, were signed proponents on Measure H, alongside Cooperrider. More money was spent on Measure H than on any other ballot measure in Mendocino County’s history. In total, "No on H" supporters spent over $700,000, with $600,000 of it coming from CropLife America. The "Yes on H" coalition raised $135,000 by the end of the campaign. The measure passed with 57% of the vote and was portrayed in the media as a "David vs. Goliath" battle between a small grassroots coalition of community activists and a deep-pocketed special interest group in Washington. Stakeholders Mendocino County’s Measure H highlights a localized battle of stakeholders over a contentious public policy debate that is international in scope. The organic farming industry is the fastest growing sector of the US agricultural market. It accounts for approximately one third of Mendocino’s agriculture, the majority of which consists of wineries. The organic farming industry in California has been the most organized lobby against GMOs due to concerns about cross-pollination. Patented strains of "Roundup Ready" seed, which are resistant to the herbicide "Roundup" produced by Monsanto, have been found to disperse onto neighboring farms, creating legal battles over proprietary rights such as the famous test case in the Canadian Supreme Court, Monsanto Canada Inc. v. Schmeiser. With a growing demand for organic products in European and Japanese markets, the prospect of cross-pollination is perceived as a significant economic threat to the organic industry. Mendocino’s wine industry was especially concerned about losing Japanese markets and has since used the ordinance as a marketing tool. The United States Department of Agriculture (USDA) has assured organic farmers that they will not lose their certification if contamination occurs. This has not erased the perception that the integrity of their industry and the ecosystems that they are dependent upon are at risk from the gradual introduction of GEOs. Measure H’s passing was considered a victory for environmental groups and the local organic farming industry and brought Mendocino international attention. Sharp criticism, however, came from industry insiders who have accepted widespread adoption of GEOs. According to rancher and Mendocino County Farm Bureau Director Peter Bradford, the measure was motivated by "a fear of science and big corporations". Nationwide, 90% of the soybeans, 73% of the corn and 87% of the cotton produced in the US come from genetically engineered seed. The largest financial sector of the industry views biotechnology as the natural progression of trade techniques, which have passed adequate safety standards.
The Food and Drug Administration (FDA) and USDA have so far agreed. The FDA drafted new guidelines in 1991 stating that GEOs and non-GEOs were "substantially equivalent." The trend towards federal deregulation of GEOs leaves local municipalities facing a much tougher challenge in prohibiting them. Jurisdictional issues bring into question the legal standing of county ordinances regulating GEOs. Federal pre-emption statutes may override them if challenged. GE crops are regulated by the EPA under the pesticide guidelines in FIFRA (Federal Insecticide, Fungicide and Rodenticide Act), according to which "…a state shall not impose any requirements for labeling or packaging in addition to or different from those under" FIFRA. Some legal experts contend that any regulation of GEOs must take place at the federal level because of this statute. Since the passage of Measure H, counties in California followed Mendocino’s example, with eight similar initiatives making it to the ballot. Of the eight counties that voted on anti-GEO initiatives, four passed and four were defeated. In addition, eleven counties passed pro-GEO ordinances banning their prohibition. Table-1 lists the counties and their voting percentages. In 2005, state senator Dean Florez attempted to pass a state preemptive bill prohibiting counties from banning GEOs. The bill passed in the Assembly but stalled in the Senate, where it had previously passed. California remains a challenging regulatory environment for GE producers and the farmers who wish to use their seed. The Federal District Court of Northern California has been the venue of a protracted regulatory battle with the USDA regarding two crops in particular, GE alfalfa and GE sugar beets. In 2005, the USDA deregulated Roundup Ready Alfalfa (RRA). Two years later, in response to a lawsuit filed by Earthjustice and the Center for Food Safety, the district court ruled that the deregulation was in violation of the National Environmental Policy Act (NEPA) because an environmental impact statement (EIS) had not been done. In Geertson Farms Inc., et al. v. Mike Johanns, et al., Judge Charles R. Breyer imposed an injunction on planting any further seed. In June 2010, the Supreme Court overturned the injunction, stating that it was unnecessary because the USDA’s deregulation was, in fact, in violation of NEPA and thus there was no legal standing to plant the seed in the first place, obviating the need for an injunction. It was ordered that an EIS be done, which was projected to be complete in 2012. Another injunction was ordered by Judge Jeffrey White against the planting of GE sugar beets in August 2010. When it was discovered that GE sugar beets had been planted in September, in violation of the injunction, Judge White ordered the destruction of the crops. It was the first time that GEO crops were ever ordered to be destroyed by a US court. Farmers in the sugar beet industry reported that there was not enough non-GE seed left. The government warned that the US was faced with a potential 20% reduction in sugar production. On February 4, 2011, at the request of Monsanto and a German seed company named KWS, the USDA proceeded with a "partial deregulation" that will allow planting to continue until the EIS is complete and a final ruling is made. Local environmental activists were dismayed by this decision and took it as a defeat. The partial deregulation requires farmers of GE seed to take measures to prevent cross-pollination.
They are not allowed to plant within three miles of non-GE crops, for instance, and they are subject to government inspections. Opponents of the decision contend that these protections will be inadequate. The ruling came the day after a consortium of the nation's largest organic food distributors, including Whole Foods, Organic Valley and Stonyfield Farm, agreed to no longer oppose the propagation of RRA and GE crops in general. Evaluation As yet, there is little available data on the outcome of the measure. It has remained on the books and has continued to be enforced within its jurisdictional boundaries. Appropriate measures of evaluation stem from the measure’s stated goals: "The people of Mendocino County wish to protect the county’s agriculture, environment, economy, and private property from genetic pollution by genetically modified organisms." This policy goal lacks a definition of the term "genetic pollution". Taken at face value, from an empirical standpoint, it could be seen as having failed in that GEOs certainly have migrated across jurisdictional boundaries. Many commercial food products contain GEOs, which residents of the county have been buying unknowingly since there are still no federal laws requiring their labeling. GE corn products, in particular, have become ubiquitous as additives in food processing. Furthermore, since the term "genetic pollution" is left undefined, even if cross-pollination has occurred, whether or not it is considered pollution will vary according to the stakeholder. In the broader sense of the policy’s purpose as a political tool to impede the advancement of the biotechnology industry and the spread of "GE crops" generally, the ensuing nationwide regulatory debate over jurisdictional issues with similar county prohibitions could be seen as a success. Mendocino is now cited internationally as a center of the organic movement and a catalyst for anti-GEO movements that have, in some cases, impeded their growth, particularly in California where GE crops have actually been ordered to be destroyed by a federal judge. Civic agriculture Mendocino is famous for being a bastion of rural counter-culture where many liberal activists and members of California’s hippie generation led a back-to-the-land movement during the 1970s. The Measure H campaign reaffirmed these sensibilities and has been studied as an example of "civic agriculture." The agenda-setting phase of the policy cycle was highly localized. Public policy experts and social historians contend that the implications of the "GMO free Mendocino" movement extended beyond the empirical basis of the ordinance or the larger political debate regarding GEOs. The social forces animating the conflict were embedded in localized rural values of stewardship and decentralization. The community’s self-conception as a synthesis of its counter-cultural legacy and rural working class ethos fostered a powerful sense of local collective action that was pitted against the perception of top-down "command and control" of local agriculture by a distant monolithic nexus of multinational power. See also Genetic engineering in the United States References 2004 in California 2004 in the environment Genetically modified organisms Genetic engineering in the United States Mendocino County, California California law Environmental issues in California Environmental law in the United States
Mendocino County GMO Ban
[ "Engineering", "Biology" ]
3,312
[ "Genetic engineering", "Genetically modified organisms" ]
31,121,373
https://en.wikipedia.org/wiki/Memistor
A memistor is a nanoelectronic circuit element used in parallel computing memory technology. Essentially a resistor with memory, able to perform logic operations and store information, it is a three-terminal implementation of the memristor. History While the memristor is defined in terms of a two-terminal circuit element, there was an implementation of a three-terminal device called a memistor developed by Bernard Widrow in 1960. Memistors formed basic components of a neural network architecture called ADALINE developed by Widrow. The memistor was also used in MADALINE. Essence In one of the technical reports the memistor was described as a resistive element whose conductance is controlled by the time integral of its current. Since the conductance was described as being controlled by the time integral of current, as in Chua's theory of the memristor, the memistor of Widrow may be considered a form of memristor having three instead of two terminals. However, one of the main limitations of Widrow's memistors was that they were made from an electroplating cell rather than as a solid-state circuit element. Solid-state circuit elements were required to achieve the scalability of the integrated circuit, which was gaining popularity around the same time as the invention of Widrow's memistor. An article on arXiv suggests that the floating-gate MOSFET as well as other 3-terminal "memory transistors" may be modeled using dynamical systems equations in a similar fashion to the memristive systems of memristors. See also Memristor Trancitor References External links Memistor - Research at Cisco Electrical components Electronic circuits in computer storage
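A minimal numerical sketch of the memristive behaviour described above: the element's conductance is taken to be a bounded function of the time integral of the current (i.e. the accumulated charge) through its control terminal. This is a toy model for illustration, not a model of Widrow's electroplating cell, and the parameter values are invented:

def simulate_memistor(currents, dt=1e-3, g0=1e-3, k=5.0, g_min=1e-4, g_max=1e-2):
    """Toy memristive element: conductance depends on the integral of control current.

    currents: sequence of control currents (A) applied at each time step of length dt (s)
    Returns the conductance trace (S).
    """
    q = 0.0           # integrated control current (charge)
    trace = []
    for i in currents:
        q += i * dt
        g = g0 + k * q                     # linear dependence on charge (toy choice)
        g = min(max(g, g_min), g_max)      # physical bounds on conductance
        trace.append(g)
    return trace

# invented drive: positive then negative control current
drive = [1e-3] * 100 + [-1e-3] * 100
print(simulate_memistor(drive)[:3])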
Memistor
[ "Technology", "Engineering" ]
350
[ "Electrical engineering", "Electrical components", "Components" ]
33,685,939
https://en.wikipedia.org/wiki/Pharmacognosy%20Research
Pharmacognosy Research is a peer-reviewed open-access medical journal published on behalf of the Pharmacognosy Network Worldwide. The journal publishes articles on the subject of pharmacognosy, natural products, and phytochemistry and is indexed with CASPUR, EBSCO, ProQuest, and Scopus. External links Pharmacognosy Network Worldwide Open access journals Biannual journals English-language journals Pharmacology journals Academic journals established in 2007 Medknow Publications academic journals Pharmacognosy
Pharmacognosy Research
[ "Chemistry" ]
113
[ "Pharmacology", "Pharmacognosy" ]
33,686,793
https://en.wikipedia.org/wiki/Ferdinand%20Bernhard%20Vietz
Ferdinand Bernhard Vietz (18 November 1772 in Vienna – 15 December 1815 in Vienna) was an Austrian pharmacologist, a Doctor of the Healing Arts and Professor of Forensic Medicine at the University of Vienna, and is best known for Icones Plantarum Medico-Oeconomico-Technologicarum cum Earum Fructus ususque Descriptione (1800–1822), an 11-volume compilation of medicinal, culinary and decorative plant species consulted by pharmacologists during the early 1800s. The noted cartographic engraver Ignaz Alberti worked on the 1100 hand-coloured copperplate engravings on laid-watermarked paper, and the work was completed after the early death of Vietz. Volumes 1 and 2 were printed in Latin and German in adjacent columns. Volumes 3–10 have the title in German only. Volume 11 is a supplementary volume by Joseph Lorenz Kendl. In the introduction to Volume 1, Vietz lists a lengthy bibliography of consulted works, an enormous number of sponsors and a dedication to Maria Theresa, Empress of Austria. Vietz's monumental work is extremely rare, and the British Natural History Museum writes: The work is “not being held in any other of the United Kingdom's national or public library collections. Only three copies have been found in North American libraries, of which two are certainly fragile and in need of conservation. One copy is in the Austrian National Library.” On his death, Vietz was succeeded by Joseph Bernt (1770–1842) as professor of state medicine. External links Biodiversity Heritage Library (online) References Pharmacologists 1772 births 1815 deaths 18th-century Austrian scientists Scientists from the Austrian Empire Scientists from Vienna Academic staff of the University of Vienna
Ferdinand Bernhard Vietz
[ "Chemistry" ]
352
[ "Pharmacology", "Biochemists", "Pharmacologists" ]
33,688,973
https://en.wikipedia.org/wiki/Microscanner
A microscanner, or micro scanning mirror, is a microoptoelectromechanical system (MOEMS) in the category of micromirror actuators for dynamic light modulation. Depending upon the type of microscanner, the modulatory movement of a single mirror can be either translatory or rotational, on one or two axes. In the first case, a phase shifting effect takes place. In the second case, the incident light wave is deflected. Microscanners are different from spatial light modulators and other micromirror actuators which need a matrix of individually addressable mirrors in order to accomplish the desired modulation at any yield. If a single array mirror accomplishes the desired modulation but is operated in parallel with other array mirrors to increase light yield, then the term microscanner array is used. Characteristics Common chip dimensions are 4 mm × 5 mm for mirror diameters between 1 and 3 mm. Larger mirror apertures with side measurements of up to approx. 10 mm × 3 mm can also be produced. The scan frequencies depend upon the design and mirror size and range between 0.1 and 50 kHz. The deflection movement is either resonant or quasi-static. With microscanners that are capable of tilting movement, light can be directed over a projection plane. Many applications require that a surface be addressed instead of only a single line. For these applications, actuation using a Lissajous pattern can accomplish sinusoidal scan motion, or double resonant operation. Mechanical deflection angles of micro scanning devices reach up to ±30°. Translational (piston type) microscanners can attain a mechanical stroke of up to approx. ±500 μm. This configuration is energy efficient, but requires complicated control electronics. For high-end display applications the common choice is raster scanning, where a resonant scanner (for the longer display dimension) is paired with a quasi-static scanner (for the shorter dimension). Drive principles The required drive forces for the mirror movement can be provided by various physical principles. In practice, the relevant principles for driving such a mirror are the electromagnetic, electrostatic, thermoelectric, and piezoelectric effects. Because the physical principles differ in their advantages and disadvantages, the driving principle is chosen according to the application. Specifically, the mechanical solutions required for resonant scanning are very different from those for quasi-static scanning. Thermoelectric actuators are not applicable for high-frequency resonant scanners, but the other three principles can be applied to the full spectrum of applications. For resonant scanners, one often employed configuration is the indirect drive. In an indirect drive, a small motion in a larger mass is coupled to a large motion in a smaller mass (the mirror) through mechanical amplification at a favorable mode shape. This is in contrast to the more common direct drive, where the actuator mechanism moves the mirror directly. Indirect drives have been implemented for electromagnetic and electrostatic as well as piezoelectric actuators. Existing piezoelectric scanners are more efficient using direct drive. Electrostatic actuators offer high power similar to electromagnetic drives. In contrast to an electromagnetic drive, the resulting drive force between the drive structures cannot be reversed in polarity. For the realization of quasi-static components with positive and negative effective direction, two drives with positive and negative polarity are required.
As a rule of thumb, vertical comb drives are utilized here. Nevertheless, the highly non-linear drive characteristics in some parts of the deflection area can hinder proper control of the mirror. For that reason many highly developed microscanners today utilize a resonant mode of operation, where an eigenmode is activated. Resonant operation is the most energy-efficient. For beam positioning and applications which are to be static-actuated or linearized-scanned, quasi-static drives are required and therefore of great interest. Magnetic actuators offer very good linearity of the tilt angle versus the applied signal amplitude, both in static and dynamic operation. The working principle is that a metallic coil is placed on the moving MEMS mirror itself and, as the mirror is placed in a magnetic field, the alternating current flowing in the coil generates a Lorentz force that tilts the mirror. Magnetic actuation can be used for actuating either 1D or 2D MEMS mirrors. Another characteristic of the magnetically actuated MEMS mirror is the fact that low voltage is required (below 5 V), making this actuation compatible with standard CMOS voltage. An advantage of such an actuation type is that the MEMS behaviour does not present hysteresis, unlike electrostatically actuated MEMS mirrors, which makes it very simple to control. Power consumption of magnetically actuated MEMS mirrors can be as low as 0.04 mW. Thermoelectric drives produce high driving forces, but they present a few technical drawbacks inherent to their fundamental principle. The actuator has to be thermally well insulated from the environment, as well as being preheated in order to prevent thermal drift due to environmental influences. That is why the necessary heat output and power consumption for a thermal bimorph actuator are relatively high. One further disadvantage is the comparably low displacement, which needs to be leveraged to reach usable mechanical deflections. Thermal actuators are also not suitable for high-frequency operation due to significant low-pass behaviour. Piezoelectric drives produce high force, but as with electrothermal actuators the stroke length is short. Piezoelectric drives are, however, less susceptible to thermal environmental influences and can also transmit high-frequency drive signals well. To achieve the desired angle, some mechanism utilizing mechanical amplification will be required for most applications. This has proven to be difficult for quasi-static scanners, although there are promising approaches in the literature using long meandering flexures for deflection amplification. For resonant rotational scanners, on the other hand, scanners using piezoelectric actuation combined with an indirect drive are the highest performers in terms of scan angle and working frequency. However, the technology is newer than electrostatic and electromagnetic drives and remains to be implemented in commercial products. Fields of Application Applications for tilting microscanners are numerous and include: Projection displays Image recording, e.g.
for technical and medical endoscopes Bar code scanning Spectroscopy Laser marking and material processing Object measurement / triangulation 3D cameras Object recognition 1D and 2D light grid Confocal microscopy / OCT Fluorescence microscopy Laser wavelength modulation Some of the applications for piston type microscanners are: Fourier transform infrared spectrometer Confocal microscopy Focus variation Manufacture Microscanners are usually manufactured with surface or bulk micromechanical processes. As a rule, silicon or BSOI (bonded silicon on insulator) are used. Advantages and disadvantages of microscanners Microscanners are smaller, have lower mass, and consume less power than macroscopic light modulators such as galvanometer scanners. Additionally, microscanners can be integrated with other electronic components such as position sensors. Microscanners are resistant to environmental influences, and can tolerate humidity, dust and physical shocks (in some models up to 2500 g), and can operate in temperatures from -20 °C to +80 °C. With current manufacturing technology microscanners can suffer from high costs and long lead times to delivery. This is an active area of process improvement. References External links Scanning Micromirrors. Mirrorcle Technologies Gimbal-less, Two-axis scanning micromirrors MEMS Scanners. Fraunhofer Institute for Photonic Microsystems ARI MEMS Micromirror Demonstration Devices. Adriatic Research Institute Getting Started with Analog Mirrors. Texas Instruments (Product Page) Magnetic MEMS micromirrors. Lemoptix (Technology description Page) MEMS Laser Scanning Mirrors. Maradin Ltd Microtechnology Microelectronic and microelectromechanical systems
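The Characteristics section above notes that two-axis scanners are often driven so that the beam traces a Lissajous pattern, with each axis following a sinusoid at its own frequency. The following is a minimal sketch of how such a trajectory can be generated from two sinusoidal axis drives; the frequencies, amplitudes and sample rate are illustrative values only, not parameters of any particular device:

import math

def lissajous_trajectory(fx, fy, ax, ay, phase=math.pi / 2, rate=1e5, duration=1e-2):
    """Sample the (x, y) deflection of a two-axis scanner driven sinusoidally.

    fx, fy: drive frequencies of the two axes (Hz)
    ax, ay: mechanical deflection amplitudes (degrees)
    rate: sample rate (Hz); duration: trajectory length (s)
    """
    n = int(rate * duration)
    pts = []
    for k in range(n):
        t = k / rate
        x = ax * math.sin(2 * math.pi * fx * t)
        y = ay * math.sin(2 * math.pi * fy * t + phase)
        pts.append((x, y))
    return pts

# illustrative values: fast axis 18 kHz, slow axis 1.1 kHz, +/-10 and +/-7 degrees
traj = lissajous_trajectory(18e3, 1.1e3, 10.0, 7.0)
print(len(traj), traj[0])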
Microscanner
[ "Materials_science", "Engineering" ]
1,651
[ "Microelectronic and microelectromechanical systems", "Materials science", "Microtechnology" ]
49,449,732
https://en.wikipedia.org/wiki/PROTO%20%28fusion%20reactor%29
PROTO is a proposed nuclear fusion reactor to be implemented after 2050, a successor to the ITER and DEMO projects. It is part of the European Commission's long-term strategy for fusion energy research. PROTO would act as a prototype power station, taking in any technology refinements from earlier projects, and demonstrating electricity generation on a commercial basis. It may or may not be a second part of the DEMO/PROTO experiment. References Tokamaks Proposed fusion reactors ITER
PROTO (fusion reactor)
[ "Physics" ]
96
[ "Plasma physics stubs", "Plasma physics" ]
49,450,154
https://en.wikipedia.org/wiki/Melanoleuca%20cognata
Melanoleuca cognata, commonly known as the spring cavalier, is an edible species of agaric fungus. It is found in Europe and North America in forests, meadows, and parks. The species may be difficult to identify without analysis of its microscopic features. The mushroom is fairly tall for species of its genus. The cap is orange to red-brown and semi-viscid. The gills are a shade of ochre. The odour is mild to sweetish. References External links Enigmatic Agaricales taxa Fungi of Europe Fungi of North America Fungi described in 1874 Taxa named by Elias Magnus Fries Fungus species
Melanoleuca cognata
[ "Biology" ]
131
[ "Fungi", "Fungus species" ]
49,451,315
https://en.wikipedia.org/wiki/Star%20quad%20cable
In electrical engineering, star-quad cable is a four-conductor electrical cable that has a special quadrupole geometry which provides magnetic immunity when used in a balanced line. Four conductors are used to carry the two legs of the balanced line. All four conductors must be an equal distance from a common point (usually the center of the cable). The four conductors are arranged in a four-pointed star (forming a square). Opposite points of the star are connected together at each end of the cable to form each leg of the balanced circuit. Star quad cables often use filler elements to hold the conductor centers in a symmetric four-point arrangement about the cable axis. All points of the star must lie at equal distances from the center of the star. When opposite points are connected together, they act as if they are one conductor located at the center of the star. This configuration places the geometric center of each of the two legs of the balanced circuit in the center of the star. To a magnetic field, both legs of the balanced circuit appear to be in the exact center of the star. This means that both legs of the balanced circuit will receive exactly the same interference from the magnetic field and a common-mode interference signal will be produced. This common-mode interference signal will be rejected by the balanced receiver. The magnetic immunity of star quad cable is a function of the accuracy of the star-quad geometry, the accuracy of the impedance balancing, and the common-mode rejection ratio of the balanced receiver. Star-quad cable typically provides a 10 dB to 30 dB reduction in magnetically-induced interference. Advantages When star-quad cable is used for a single balanced line, such as professional audio applications and two-wire telephony, two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together. Interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by a coupling transformer or differential amplifier. The combined benefits of twisting, differential signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low-signal-level applications such as long microphone cables, even when installed very close to a power cable. It is particularly beneficial compared to twisted pair when AC magnetic field sources are in close proximity, for example a stage cable that can lie against an inline power transformer. Disadvantages The disadvantage is that star quad, in combining two conductors, typically has more capacitance than similar two-conductor twisted and shielded audio cable. High capacitance causes an increasing loss of high frequencies as distance increases. The high-frequency loss is due to the RC filter formed by the output impedance of the cable driver and the capacitance of the cable. In some cases an increase in distortion can occur in the cable driver if it has difficulty driving the higher cable capacitance. The capacitance of a four-conductor quad-star cable is roughly equal to the capacitance of a standard two-conductor cable about 1.5 times as long. The increased capacitance of the star quad cable is not usually a problem with short cable runs, but it can be an issue for long cable runs. For example, an 8 m (25 ft) star-quad cable has a capacitance of 150 pF/m for a total capacitance of 1200 pF for the entire length of cable. 
With a 150 ohm source impedance and 1200 pF load capacitance, the frequency response of this RC circuit is approximately -0.002 dB at 20 kHz (the corner frequency is roughly 880 kHz). If the cable were 80 m instead of 8 m, then the frequency response would be -0.2 dB at 20 kHz, and -3 dB at 88 kHz. Other applications for star quad cable While the above discussion focuses on preventing noise from getting in (e.g. into a microphone cable), the same star-quad quadrupole configuration is useful for audio speaker cable, for split-phase electric power wiring, and even for open-wire star quad transmission lines. In these cases, the purpose of the star quad configuration is reversed. The star-quad geometry partially cancels the magnetic fields that are produced by the two pairs of conductors. This cancellation reduces the magnetic emissions of the cable. To work properly, the cable must be wired in the same fashion as the microphone cable example above. Wires on opposite sides of the star must be shorted together at each end of the cable. This means that four conductors are required for a two-wire circuit. Furthermore, this scheme only works if the two pairs of conductors carry equal and opposite currents. If a ground conductor is also needed, it must be added in a way that will not interfere with the star-quad geometry. It should also be added in a geometric configuration that exposes the ground conductor to equal interference from all four star-quad conductors. The most common solution is to wrap the star quad with a cylindrical ground conductor. Star-quad cable can be used for two circuits, such as four-wire telephony and other telecommunications applications, but it will not provide magnetic immunity in this application. In this configuration each pair uses two non-adjacent conductors. Because the conductors are always the same distance from each other, crosstalk is reduced relative to cables with two separate twisted pairs. Each conductor of one pair sees an equal capacitance to both wires in the other pair. This cancels the capacitive crosstalk between the two pairs. The geometry also cancels the magnetic interference between the two pairs. References Electrical wiring
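The attenuation figures in the Disadvantages section follow from treating the driver output impedance and the cable capacitance as a first-order RC low-pass filter. A minimal sketch of that calculation, using the values quoted in the text (150 ohm source, 150 pF per metre of cable):

import math

def rc_loss_db(freq_hz, source_ohms, cable_pf):
    """Attenuation in dB of a first-order RC low-pass at the given frequency."""
    fc = 1.0 / (2 * math.pi * source_ohms * cable_pf * 1e-12)  # corner frequency
    return 10 * math.log10(1 + (freq_hz / fc) ** 2)

for length_m in (8, 80):
    c = 150 * length_m                     # 150 pF per metre
    print(length_m, "m:",
          round(rc_loss_db(20e3, 150, c), 3), "dB at 20 kHz,",
          round(rc_loss_db(88e3, 150, c), 2), "dB at 88 kHz")
# 8 m  -> about 0.002 dB at 20 kHz
# 80 m -> about 0.22 dB at 20 kHz and about 3 dB at 88 kHz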
Star quad cable
[ "Physics", "Engineering" ]
1,139
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
29,744,149
https://en.wikipedia.org/wiki/Nickel-dependent%20hydrogenase
Hydrogenases are enzymes that catalyze the reversible activation of hydrogen and occur widely in prokaryotes as well as in some eukaryotes. There are various types of hydrogenases, but all of them seem to contain at least one iron-sulphur cluster. They can be broadly divided into two groups: hydrogenases containing nickel and, in some cases, also selenium (the [NiFe] and [NiFeSe] hydrogenases), and those lacking nickel (the [Fe] hydrogenases). The [NiFe] and [NiFeSe] hydrogenases are heterodimers consisting of a small subunit, which contains a signal peptide, and a large subunit. All the known large subunits seem to be evolutionarily related; they contain two Cys-x-x-Cys motifs, one at their N-terminal end and the other at their C-terminal end. These four cysteines are involved in the binding of nickel. In the [NiFeSe] hydrogenases, the first cysteine of the C-terminal motif is a selenocysteine, which has been shown experimentally to be a nickel ligand. References Protein domains Enzymes
Nickel-dependent hydrogenase
[ "Biology" ]
253
[ "Protein domains", "Protein classification" ]
29,744,467
https://en.wikipedia.org/wiki/Isocitrate%20lyase%20family
Isocitrate lyase family is a family of evolutionarily related proteins. Isocitrate lyase is an enzyme that catalyzes the conversion of isocitrate to succinate and glyoxylate. This is the first step in the glyoxylate bypass, an alternative to the tricarboxylic acid cycle in bacteria, fungi and plants. A cysteine, a histidine and a glutamate or aspartate have been found to be important for the enzyme's catalytic activity. Only one cysteine residue is conserved between the sequences of the fungal, plant and bacterial enzymes; it is located in the middle of a conserved hexapeptide. Other enzymes also belong to this family, including carboxyvinyl-carboxyphosphonate phosphorylmutase, which catalyses the conversion of 1-carboxyvinyl carboxyphosphonate to 3-(hydrohydroxyphosphoryl)pyruvate and carbon dioxide, and phosphoenolpyruvate mutase, which is involved in the biosynthesis of phosphinothricin tripeptide antibiotics. Subfamilies Isocitrate lyase Methylisocitrate lyase Carboxyvinyl-carboxyphosphonate phosphorylmutase References Protein domains Protein families
Isocitrate lyase family
[ "Chemistry", "Biology" ]
298
[ "Protein stubs", "Protein classification", "Biochemistry stubs", "Protein domains", "Protein families" ]
29,746,234
https://en.wikipedia.org/wiki/Hector%20%28cloud%29
Hector is a cumulonimbus thundercloud cluster that forms regularly nearly every afternoon on the Tiwi Islands in the Northern Territory of Australia, from approximately September to March each year. Hector, or sometimes Hector the Convector, is known as one of the world's most consistently large thunderstorms; specifically, a small mesoscale convective system (MCS) or large multicellular thunderstorm. It reaches heights of approximately . History Named by pilots during the Second World War, the recurring position of the thunderstorm made it a navigational beacon for pilots and mariners in the region. A mesoscale phenomenon, Hector is caused primarily by a collision of several sea breeze boundaries across the Tiwi Islands and is known for its consistency and intensity. Lightning flash rates and updraft speeds are notable aspects of this thunderstorm and during the 1990s National Geographic magazine published a comprehensive study of the storm with pictures of damaged trees and details of updraft speeds and references to tornadic events. The consistency of the phenomenon is caused by frequently occurring atmospheric conditions due to the sea and due to topography, and the underlying atmospheric environment constitutes a distinct microclimate (which are common with islands, especially ones exhibiting significant topographic relief). Since the late 1980s the thunderstorm complex has been the subject of many meteorological studies, many centred on Hector itself, but also utilising the consistency of the storm cell to study other aspects of thunderstorms, lightning, atmospheric boundaries, and marine and terrain effects on the atmosphere. See also List of cloud types Morning Glory cloud Catatumbo lightning References Regional climate effects Clouds Climate of Australia Tiwi Islands Anomalous weather
Hector (cloud)
[ "Physics" ]
341
[ "Weather", "Physical phenomena", "Anomalous weather" ]
29,753,527
https://en.wikipedia.org/wiki/Dithiol
In organic chemistry, a dithiol is a type of organosulfur compound with two thiol (SH) functional groups. Their properties are generally similar to those of monothiols in terms of solubility, odor, and volatility. They can be classified according to the relative location of the two thiol groups on the organic backbone. Geminal dithiols Geminal dithiols have the formula RR'C(SH)2. They are derived from aldehydes and ketones by the action of hydrogen sulfide. Their stability contrasts with the rarity of geminal diols. Examples include methanedithiol, ethane-1,1-dithiol, and cyclohexane-1,1-dithiol. Upon heating, gem-dithiols often release hydrogen sulfide, giving the transient thioketone or thial, which typically converts to oligomers. 1,2-Dithiols Compounds containing thiol groups on adjacent carbon centers are common. Ethane-1,2-dithiol reacts with aldehydes and ketones to give 1,3-dithiolanes: (HS)2C2H4 + R-CHO -> R-CHS2C2H4 + H2O. Some dithiols are used in chelation therapy, i.e. the removal of heavy metal poisons. Examples include dimercaptopropanesulfate (DMPS), dimercaprol ("BAL"), and meso-2,3-dimercaptosuccinic acid. Enedithiols Enedithiols, with the exception of aromatic examples, are rare. The parent aromatic example is benzenedithiol. The dithiol of the 1,3-dithiole-2-thione-4,5-dithiolate dianion is also known. 1,3-Dithiols Propane-1,3-dithiol is the parent member of this series. It is employed as a reagent in organic chemistry, since it forms 1,3-dithianes upon treatment with ketones and aldehydes. When derived from aldehydes, the methine (CH) group is sufficiently acidic that it can be deprotonated, and the resulting anion can be C-alkylated. The process is the foundation of the umpolung phenomenon. Like 1,2-ethanedithiol, propanedithiol forms complexes with metals, for example with triiron dodecacarbonyl. A naturally occurring 1,3-dithiol is dihydrolipoic acid. 1,3-Dithiols oxidize to give 1,2-dithiolanes. 1,4-Dithiols A common 1,4-dithiol is dithiothreitol (DTT), HSCH2CH(OH)CH(OH)CH2SH, sometimes called Cleland's reagent, which is used to reduce protein disulfide bonds. Oxidation of DTT results in a stable six-membered heterocyclic ring with an internal disulfide bond. References Functional groups Organosulfur compounds
Dithiol
[ "Chemistry" ]
675
[ "Organic compounds", "Organosulfur compounds", "Thiols", "Functional groups" ]
43,414,656
https://en.wikipedia.org/wiki/Vincent%20average
In applied statistics, Vincentization was described by Ratcliff (1979), and is named after biologist S. B. Vincent (1912), who used something very similar to it for constructing learning curves at the beginning of the 1900s. It basically consists of averaging subjects' estimated or elicited quantile functions in order to define group quantiles from which a group distribution function can be constructed. To cast it in its greatest generality, let F_1, ..., F_n represent arbitrary (empirical or theoretical) distribution functions and define their corresponding quantile functions by Q_i(p) = inf{x : F_i(x) >= p} for 0 < p < 1. The Vincent average of the F_i's is then computed as Q(p) = w_1 Q_1(p) + ... + w_n Q_n(p), where the non-negative numbers w_1, ..., w_n have a sum of 1. References Applied statistics
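As a concrete illustration of the averaging step just defined, the Python sketch below Vincent-averages the empirical quantile functions of several subjects using equal weights. The subject data, sample sizes and weights are made up purely for illustration; only the quantile-averaging step itself follows the definition above.

```python
import numpy as np

def vincent_average(samples_per_subject, probs, weights=None):
    """Average the subjects' empirical quantile functions at the given
    probability levels: Q_bar(p) = sum_i w_i * Q_i(p)."""
    n = len(samples_per_subject)
    weights = np.full(n, 1.0 / n) if weights is None else np.asarray(weights, float)
    if not np.isclose(weights.sum(), 1.0) or (weights < 0).any():
        raise ValueError("weights must be non-negative and sum to 1")
    # Each row holds one subject's quantile function evaluated at probs.
    quantiles = np.array([np.quantile(s, probs) for s in samples_per_subject])
    return weights @ quantiles

# Hypothetical reaction-time data for three subjects (illustration only).
rng = np.random.default_rng(0)
subjects = [rng.gamma(shape=k, scale=60.0, size=200) + 250.0 for k in (2.0, 3.0, 4.0)]
probs = [0.1, 0.3, 0.5, 0.7, 0.9]
print(dict(zip(probs, np.round(vincent_average(subjects, probs), 1))))
```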
Vincent average
[ "Mathematics" ]
130
[ "Applied mathematics", "Applied statistics" ]
40,502,817
https://en.wikipedia.org/wiki/Maxwell%E2%80%93J%C3%BCttner%20distribution
In physics, the Maxwell–Jüttner distribution, sometimes called Jüttner–Synge distribution, is the distribution of speeds of particles in a hypothetical gas of relativistic particles. Similar to the Maxwell–Boltzmann distribution, the Maxwell–Jüttner distribution considers a classical ideal gas where the particles are dilute and do not significantly interact with each other. The distinction from Maxwell–Boltzmann's case is that effects of special relativity are taken into account. In the limit of low temperatures much less than (where is the mass of the kind of particle making up the gas, is the speed of light and is Boltzmann constant), this distribution becomes identical to the Maxwell–Boltzmann distribution. The distribution can be attributed to Ferencz Jüttner, who derived it in 1911. It has become known as the Maxwell–Jüttner distribution by analogy to the name Maxwell–Boltzmann distribution that is commonly used to refer to Maxwell's or Maxwellian distribution. Definition As the gas becomes hotter and approaches or exceeds , the probability distribution for in this relativistic Maxwellian gas is given by the Maxwell–Jüttner distribution: where and is the modified Bessel function of the second kind. Alternatively, this can be written in terms of the momentum as where . The Maxwell–Jüttner equation is covariant, but not manifestly so, and the temperature of the gas does not vary with the gross speed of the gas. Jüttner distribution graph A visual representation of the distribution in particle velocities for plasmas at four different temperatures: Where thermal parameter has been defined as . The four general limits are: ultrarelativistic temperatures relativistic temperatures: , weakly (or mildly) relativistic temperatures: , low temperatures: , Limitations Some limitations of the Maxwell–Jüttner distributions are shared with the classical ideal gas: neglect of interactions, and neglect of quantum effects. An additional limitation (not important in the classical ideal gas) is that the Maxwell–Jüttner distribution neglects antiparticles. If particle-antiparticle creation is allowed, then once the thermal energy is a significant fraction of , particle-antiparticle creation will occur and begin to increase the number of particles while generating antiparticles (the number of particles is not conserved, but instead the conserved quantity is the difference between particle number and antiparticle number). The resulting thermal distribution will depend on the chemical potential relating to the conserved particle–antiparticle number difference. A further consequence of this is that it becomes necessary to incorporate statistical mechanics for indistinguishable particles, because the occupation probabilities for low kinetic energy states becomes of order unity. For fermions it is necessary to use Fermi–Dirac statistics and the result is analogous to the thermal generation of electron–hole pairs in semiconductors. For bosonic particles, it is necessary to use the Bose–Einstein statistics. Perhaps most significantly, the basic distribution has two main issues: it does not extend to particles moving at relativistic speeds, and  it assumes anisotropic temperature (where each DoF does not have the same translational kinetic energy). While the classic Maxwell–Jüttner distribution generalizes for the case of special relativity, it fails to consider the anisotropic description. 
Derivation The Maxwell–Boltzmann () distribution describes the velocities or the kinetic energy of the particles at thermal equilibrium, far from the limit of the speed of light, i.e.: Or, in terms of the kinetic energy: where is the temperature in speed dimensions, called thermal speed, and d denotes the kinetic degrees of freedom of each particle. (Note that the temperature is defined in the fluid's rest frame, where the bulk speed is zero. In the non-relativistic case, this can be shown by using . The relativistic generalization of Eq. (1a), that is, the Maxwell–Jüttner () distribution, is given by: where and . (Note that the inverse of the unitless temperature is the relativistic coldness , Rezzola and Zanotti, 2013.) This distribution (Eq. 2) can be derived as follows. According to the relativistic formalism for the particle momentum and energy, one has While the kinetic energy is given by . The Boltzmann distribution of a Hamiltonian is In the absence of a potential energy, is simply given by the particle energy , thus: (Note that is the sum of the kinetic and inertial energy ). Then, when one includes the -dimensional density of states: So that: Where denotes the -dimensional solid angle. For isotropic distributions, one has or Then, so that: Or: Now, because . Then, one normalises the distribution . One sets And the angular integration: Where is the surface of the unit d-dimensional sphere. Then, using the identity one has: and Where one has defined the integral: The Macdonald function (Modified Bessel function of the II kind) (Abramowitz and Stegun, 1972, p.376) is defined by: So that, by setting one obtains: Hence, Or The inverse of the normalization constant gives the partition function Therefore, the normalized distribution is: Or one may derive the normalised distribution in terms of: Note that can be shown to coincide with the thermodynamic definition of temperature. Also useful is the expression of the distribution in the velocity space. Given that , one has: Hence Take (the “classic case” in our world): And Note that when the distribution clearly deviates from the distribution of the same temperature and dimensionality, one can misinterpret and deduce a different distribution that will give a good approximation to the distribution. This new distribution can be either: a convected distribution, that is, an distribution with the same dimensionality, but with different temperature and bulk speed (or bulk energy ) an distribution with the same bulk speed, but with different temperature and degrees of freedom . These two types of approximations are illustrated. Other properties The probability density function is given by: This means that a relativistic non-quantum particle with parameter has a probability of of having its Lorentz factor in the interval . The cumulative distribution function is given by: That has a series expansion at : By definition , regardless of the parameter . To find the average speed, , one must compute , where is the speed in terms of its Lorentz factor. The integral simplifies to the closed- form expression: This closed formula for has a series expansion at : Or substituting the definition for the parameter : Where the first term of the expansion, which is independently of , corresponds to the average speed in the Maxwell–Boltzmann distribution, , whilst the following are relativistic corrections. 
This closed formula for has a series expansion at : Or substituting the definition for the parameter : Where it follows that is an upper limit to the particle's speed, something only present in a relativistic context, and not in the Maxwell–Boltzmann distribution. References Gases Special relativity
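A commonly quoted form of the distribution over the Lorentz factor γ is f(γ) = γ²β e^(−γ/θ) / (θ K₂(1/θ)), with β = √(1 − γ⁻²) and θ = k_BT/(mc²), where K₂ is the modified Bessel function of the second kind mentioned above. The Python sketch below evaluates this form and checks its normalization numerically; it assumes this textbook convention rather than reproducing any particular derivation, and the temperature values are arbitrary.

```python
import numpy as np
from scipy.special import kv          # modified Bessel function of the second kind
from scipy.integrate import quad

def juttner_pdf(gamma, theta):
    """Maxwell-Juttner density over the Lorentz factor, f(gamma) for gamma >= 1,
    with theta = kT / (m c^2)."""
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    return gamma**2 * beta * np.exp(-gamma / theta) / (theta * kv(2, 1.0 / theta))

for theta in (0.1, 1.0, 10.0):
    norm, _ = quad(juttner_pdf, 1.0, np.inf, args=(theta,))
    mean_gamma, _ = quad(lambda g: g * juttner_pdf(g, theta), 1.0, np.inf)
    print(f"theta = {theta:5.1f}: integral = {norm:.6f}, <gamma> = {mean_gamma:.3f}")
```

In the low-temperature limit θ ≪ 1 the computed mean Lorentz factor approaches 1 + 3θ/2, matching the non-relativistic Maxwell–Boltzmann kinetic energy.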
Maxwell–Jüttner distribution
[ "Physics", "Chemistry" ]
1,487
[ "Matter", "Particle statistics", "Phases of matter", "Special relativity", "Theory of relativity", "Statistical mechanics", "Gases" ]
40,503,196
https://en.wikipedia.org/wiki/ZBTB48
Zinc finger and BTB domain containing 48 (ZBTB48), also known as telomeric zinc-finger associated protein (TZAP), is a protein that directly binds to the double-stranded repeat sequence of telomeres. In humans it is encoded by the ZBTB48 gene. Loss of ZBTB48 has been shown to lead to telomere elongation both in cells with long and short telomeres. In addition, overexpression of ZBTB48 in cancer cells maintaining their telomeres based on the Alternative Lengthening of Telomeres (ALT) mechanism leads to trimming of telomeres. Beyond its telomeric function, ZBTB48 acts as a transcriptional activator on a small set of target genes, including mitochondrial fission process 1 (MTFP1) and CDKN2A. ZBTB48 localizes to chromosome 1p36, a region that is frequently rearranged (leiomyoma & leukaemia) or deleted (neuroblastoma, melanoma, Merkel cell carcinoma, pheochromocytoma, and carcinomas of colon and breast) in different human cancers and therefore might be a putative tumour suppressor, but not without dispute. References Further reading Genes on human chromosome 1 Telomeres Transcription factors
ZBTB48
[ "Chemistry", "Biology" ]
281
[ "Gene expression", "Signal transduction", "Senescence", "Induced stem cells", "Telomeres", "Transcription factors" ]
40,503,515
https://en.wikipedia.org/wiki/Gauge%20vector%E2%80%93tensor%20gravity
Gauge vector–tensor gravity (GVT) is a relativistic generalization of Mordehai Milgrom's modified Newtonian dynamics (MOND) paradigm where gauge fields cause the MOND behavior. The former covariant realizations of MOND such as the Bekenestein's tensor–vector–scalar gravity and the Moffat's scalar–tensor–vector gravity attribute MONDian behavior to some scalar fields. GVT is the first example wherein the MONDian behavior is mapped to the gauge vector fields. The main features of GVT can be summarized as follows: As it is derived from the action principle, GVT respects conservation laws; In the weak-field approximation of the spherically symmetric, static solution, GVT reproduces the MOND acceleration formula; It can accommodate gravitational lensing. It is in total agreement with the Einstein–Hilbert action in the strong and Newtonian gravities. Its dynamical degrees of freedom are: Two gauge fields: ; A metric, . Details The physical geometry, as seen by particles, represents the Finsler geometry–Randers type: This implies that the orbit of a particle with mass can be derived from the following effective action: The geometrical quantities are Riemannian. GVT, thus, is a bi-geometric gravity. Action The metric's action coincides to that of the Einstein–Hilbert gravity: where is the Ricci scalar constructed out from the metric. The action of the gauge fields follow: where L has the following MOND asymptotic behaviors and represent the coupling constants of the theory while are the parameters of the theory and Coupling to the matter Metric couples to the energy-momentum tensor. The matter current is the source field of both gauge fields. The matter current is where is the density and represents the four velocity. Regimes of the GVT theory GVT accommodates the Newtonian and MOND regime of gravity; but it admits the post-MONDian regime. Strong and Newtonian regimes The strong and Newtonian regime of the theory is defined to be where holds: The consistency between the gravitoelectromagnetism approximation to the GVT theory and that predicted and measured by the Einstein–Hilbert gravity demands that which results in So the theory coincides to the Einstein–Hilbert gravity in its Newtonian and strong regimes. MOND regime The MOND regime of the theory is defined to be So the action for the field becomes aquadratic. For the static mass distribution, the theory then converts to the AQUAL model of gravity with the critical acceleration of So the GVT theory is capable of reproducing the flat rotational velocity curves of galaxies. The current observations do not fix which is supposedly of order one. Post-MONDian regime The post-MONDian regime of the theory is defined where both of the actions of the are aquadratic. The MOND type behavior is suppressed in this regime due to the contribution of the second gauge field. See also Dark energy Dark fluid Dark matter General theory of relativity Law of universal gravitation Modified Newtonian dynamics Nonsymmetric gravitational theory Pioneer anomaly Scalar – scalar field Scalar–tensor–vector gravity Tensor Vector References Theories of gravity Astrophysics
Gauge vector–tensor gravity
[ "Physics", "Astronomy" ]
670
[ "Astronomical sub-disciplines", "Theoretical physics", "Astrophysics", "Theories of gravity" ]
40,504,424
https://en.wikipedia.org/wiki/Gravity%20current%20intrusion
The term gravity current intrusion denotes the fluid mechanics phenomenon within which a fluid intrudes with a predominantly horizontal motion into a separate stratified fluid, typically along a plane of neutral buoyancy. This behaviour distinguishes the difference between gravity current intrusions and gravity currents, as intrusions are not restrained by a well-defined boundary surface. As with gravity currents, intrusion flow is driven within a gravity field by density differences typically small enough to allow for the Boussinesq approximation. The driving density difference between fluids that produces intrusion motion could simply be due to chemical composition. However variations can also be caused by differences in respective fluid temperatures, dissolved matter concentrations and by particulate matter suspended in flows. Examples of particulate suspension intrusions include sediment laden river outflows within oceans, 'short-circuit' sewage sedimentation tank intrusions and turbidity current flows over hypersaline Mediterranean pools. Examples also exist of particulate intrusions caused by the lateral spread of thermals or plumes along planes of neutral buoyancy; such as intrusions containing metalliferous sediments formed from deep ocean hydrothermal vents. Or equally crystal laden intrusions formed by plumes within volcanic magma chambers. Arguably the most striking of all gravitational intrusions, is the atmospheric gravity current generated from a large, 'Plinean' volcanic eruption. In which case the volcano's overhanging 'umbrella' is an example of an intrusion laterally intruding into the stratified Troposphere. Research Work analysing gravity currents propagating within a single fluid host was broadened to consider intrusions within sharply stratified fluids by Hoyler & Huppert in 1980. Since then there have been further significant analytical and experimental advancements into understanding specifically particle laden intrusions by researchers including Bonnecaze, et al., (1993, 1995, 1996), Rimoldi et al. (1996), and Rooij, et al. (1999). As of 2012 the most recent rigorous analytical analysis, designed to determine the propagation speed of a classically extending intrusion, was performed by Flynn and Linden. Practical experimentation into intrusions has typically employed a lock exchange to study intrusion dynamics. Structure The basic structure of a gravity intrusion is approximate to that of a classic current with a roughly elliptical 'head' followed by a tail which stretches with increased current length, it is within the rear half of the intrusion head that the majority of mixing with ambient fluids takes place. As with gravity currents, intrusions display the same 'slumping', 'self –similar' and 'viscous' phases as gravity currents during propagation. References Fluid dynamics
Gravity current intrusion
[ "Chemistry", "Engineering" ]
538
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
28,309,002
https://en.wikipedia.org/wiki/Isogenic%20human%20disease%20models
Isogenic human disease models are a family of cells that are selected or engineered to accurately model the genetics of a specific patient population, in vitro. They are provided with a genetically matched 'normal cell' to provide an isogenic system to research disease biology and novel therapeutic agents. They can be used to model any disease with a genetic foundation. Cancer is one such disease for which isogenic human disease models have been widely used. Historical models Human isogenic disease models have been likened to 'patients in a test-tube', since they incorporate the latest research into human genetic diseases and do so without the difficulties and limitations involved in using non-human models. Historically, cells obtained from animals, typically mice, have been used to model cancer-related pathways. However, there are obvious limitations inherent in using animals for modelling genetically determined diseases in humans. Despite a large proportion of genetic conservation between humans and mice, there are significant differences between the biology of mice and humans that are important to cancer research. For example, major differences in telomere regulation enable murine cells to bypass the requirement for telomerase upregulation, which is a rate-limiting step in human cancer formation. As another example, certain ligand-receptor interactions are incompatible between mice and humans. Additionally, experiments have demonstrated important and significant differences in the ability to transform cells, compared with cells of murine origin. For these reasons, it remains essential to develop models of cancer that employ human cells. Targeting vectors Isogenic cell lines are created via a process called homologous gene-targeting. Targeting vectors that utilize homologous recombination are the tools or techniques that are used to knock-in or knock-out the desired disease-causing mutation or SNP (single nucleotide polymorphism) to be studied. Although disease mutations can be harvested directly from cancer patients, these cells usually contain many background mutations in addition to the specific mutation of interest, and a matched normal cell line is typically not obtained. Subsequently, targeting vectors are used to 'knock-in' or 'knock out' gene mutations enabling a switch in both directions; from a normal to cancer genotype; or vice versa; in characterized human cancer cell lines such as HCT116 or Nalm6. There are several gene targeting technologies used to engineer the desired mutation, the most prevalent of which are briefly described, including key advantages and limitations, in the summary table below. Homologous recombination in cancer cell disease models Homologous recombination (HR) is a kind of genetic recombination in which genetic sequences are exchanged between two similar segments of DNA. HR plays a major role in eukaryotic cell division, promoting genetic diversity through the exchange between corresponding segments of DNA to create new, and potentially beneficial combinations of genes. HR performs a second vital role in DNA repair, enabling the repair of double-strand breaks in DNA which is a common occurrence during a cell's lifecycle. It is this process which is artificially triggered by the above technologies and bootstrapped in order to engender 'knock-ins' or 'knockouts' in specific genes 5, 7. 
A recent key advance was the use of AAV-based homologous recombination vectors, which increase the low natural rates of HR in differentiated human cells when combined with gene-targeting vector sequences. Commercialization Factors leading to the recent commercialization of isogenic human cancer cell disease models for the pharmaceutical industry and research laboratories are twofold. Firstly, successful patenting of enhanced targeting vector technology has provided a basis for commercialization of the cell models that eventuate from the application of these technologies. Secondly, the trend of relatively low success rates in pharmaceutical R&D and the enormous costs involved have created a real need for new research tools that reveal how patient sub-groups will respond positively to, or be resistant to, targeted cancer therapeutics based upon their individual genetic profile. See also AAV FLP-FRT recombination Genome engineering Homologous recombination in viruses Technological applications Cancer therapy Plasmid Recombinant AAV mediated genome engineering Synthetic lethality Zinc finger nuclease References Sources Endogenous Expression of Oncogenic PI3K Mutation Leads to Activated PI3K Signaling and an Invasive Phenotype Poster Presented at AACR/EORTC Molecular Targets and Cancer Therapeutics, Boston, USA, Nov. 2009 Endogenous Expression of Oncogenic PI3K Mutation Leads to accumulation of anti-apoptotic proteins in mitochondria Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 The use of 'X-MAN' isogenic cell lines to define PI3-kinase inhibitor activity profiles Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 The use of 'X-MAN' mutant PI3CA increases the expression of individual tubulin isoforms and promoted resistance to anti-mitotic chemotherapy drugs Poster Presented at AACR 2010, Washington, D.C., USA, April. 2010 Human genetics Genetic engineering Genetics experiments Gene banks DNA
Isogenic human disease models
[ "Chemistry", "Engineering", "Biology" ]
1,040
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
28,310,124
https://en.wikipedia.org/wiki/Modular%20decomposition
In graph theory, the modular decomposition is a decomposition of a graph into subsets of vertices called modules. A module is a generalization of a connected component of a graph. Unlike connected components, however, one module can be a proper subset of another. Modules therefore lead to a recursive (hierarchical) decomposition of the graph, instead of just a partition. There are variants of modular decomposition for undirected graphs and directed graphs. For each undirected graph, this decomposition is unique. This notion can be generalized to other structures (for example directed graphs) and is useful to design efficient algorithms for the recognition of some graph classes, for finding transitive orientations of comparability graphs, for optimization problems on graphs, and for graph drawing. Modules As the notion of modules has been rediscovered in many areas, modules have also been called autonomous sets, homogeneous sets, stable sets, clumps, committees, externally related sets, intervals, nonsimplifiable subnetworks, and partitive sets . Perhaps the earliest reference to them, and the first description of modular quotients and the graph decomposition they give rise to appeared in (Gallai 1967). A module of a graph is a generalization of a connected component. A connected component has the property that it is a set of vertices such that every member of is a non-neighbor of every vertex not in . (It is a union of connected components if and only if it has this property.) More generally, is a module if, for each vertex , either every member of is a non-neighbor of or every member of is a neighbor of . Equivalently, is a module if all members of have the same set of neighbors among vertices not in . Contrary to the connected components, the modules of a graph are the same as the modules of its complement, and modules can be "nested": one module can be a proper subset of another. Note that the set of vertices of a graph is a module, as are its one-element subsets and the empty set; these are called the trivial modules. A graph may or may not have other modules. A graph is called prime if all of its modules are trivial. Despite these differences, modules preserve a desirable property of connected components, which is that many properties of the subgraph induced by a connected component are independent of the rest of the graph. A similar phenomenon also applies to the subgraphs induced by modules. The modules of a graph are therefore of great algorithmic interest. A set of nested modules, of which the modular decomposition is an example, can be used to guide the recursive solution of many combinatorial problems on graphs, such as recognizing and transitively orienting comparability graphs, recognizing and finding permutation representations of permutation graphs, recognizing whether a graph is a cograph and finding a certificate of the answer to the question, recognizing interval graphs and finding interval representations for them, defining distance-hereditary graphs (Spinrad, 2003) and for graph drawing (Papadopoulos, 2006). They play an important role in Lovász's celebrated proof of the perfect graph theorem (Golumbic, 1980). For recognizing distance-hereditary graphs and circle graphs, a further generalization of modular decomposition, called the split decomposition, is especially useful (Spinrad, 2003). To avoid the possibility of ambiguity in the above definitions, we give the following formal definitions of modules. Let be a graph. 
A set is a module of if the vertices of cannot be distinguished by any vertex in , i.e., , either is adjacent to both and or is neither adjacent to nor to . This condition can be succinctly written as for all . Here, denotes the set of neighbours of . For example, , and all the singletons for are modules. They are called trivial modules. A graph is prime if all its modules are trivial. Connected components of a graph , or of its complement graph are also modules of . is a strong module of a graph if it does not overlap any other module of : module of , either or or . Modular quotients and factors If and are disjoint modules, then it is easy to see that either every member of is a neighbor of every element of , or no member of is adjacent to any member of . Thus, the relationship between two disjoint modules is either adjacent or nonadjacent. No relationship intermediate between these two extremes can exist. Because of this, modular partitions of where each partition class is a module are of particular interest. Suppose is a modular partition. Since the partition classes are disjoint, their adjacencies constitute a new graph, a quotient graph , whose vertices are the members of . That is, each vertex of is a module of G, and the adjacencies of these modules are the edges of . In the figure below, vertex 1, vertices 2 through 4, vertex 5, vertices 6 and 7, and vertices 8 through 11 are a modular partition. In the upper right diagram, the edges between these sets depict the quotient given by this partition, while the edges internal to the sets depict the corresponding factors. The partitions and are the trivial modular partitions. is just the one-vertex graph, while . Suppose is a nontrivial module. Then and the one-elements subsets of are a nontrivial modular partition of . Thus, the existence of any nontrivial modules implies the existence of nontrivial modular partitions. In general, many or all members of can be nontrivial modules. If is a nontrivial modular partition, then is a compact representation of all the edges that have endpoints in different partition classes of . For each partition class in , the subgraph induced by is called a factor and gives a representation of all edges with both endpoints in . Therefore, the edges of can be reconstructed given only the quotient graph and its factors. The term prime graph comes from the fact that a prime graph has only trivial quotients and factors. When is a factor of a modular quotient , it is possible that can be recursively decomposed into factors and quotients. Each level of the recursion gives rise to a quotient. As a base case, the graph has only one vertex. Collectively, can be reconstructed inductively by reconstructing the factors from the bottom up, inverting the steps of the decomposition by combining factors with the quotient at each level. In the figure below, such a recursive decomposition is represented by a tree that depicts one way of recursively decomposing factors of an initial modular partition into smaller modular partitions. A way to recursively decompose a graph into factors and quotients may not be unique. (For example, all subsets of the vertices of a complete graph are modules, which means that there are many different ways of decomposing it recursively.) Some ways may be more useful than others. The modular decomposition Fortunately, there exists such a recursive decomposition of a graph that implicitly represents all ways of decomposing it; this is the modular decomposition. 
It is itself a way of decomposing a graph recursively into quotients, but it subsumes all others. The decomposition depicted in the figure below is this special decomposition for the given graph. The following is a key observation in understanding the modular decomposition: If is a module of and is a subset of , then is a module of , if and only if it is a module of . In (Gallai, 1967), Gallai defined the modular decomposition recursively on a graph with vertex set , as follows: As a base case, if only has one vertex, its modular decomposition is a single tree node. Gallai showed that if is connected and so is its complement, then the maximal modules that are proper subsets of are a partition of . They are therefore a modular partition. The quotient that they define is prime. The root of the tree is labeled a prime node, and these modules are assigned as children of . Since they are maximal, every module not represented so far is contained in a child of . For each child of , replacing with the modular decomposition tree of gives a representation of all modules of , by the key observation above. If is disconnected, its complement is connected. Every union of connected components is a module of . All other modules are subsets of a single connected component. This represents all modules, except for subsets of connected components. For each component , replacing by the modular decomposition tree of gives a representation of all modules of , by the key observation above. The root of the tree is labeled a parallel node, and it is attached in place of as a child of the root. The quotient defined by the children is the complement of a complete graph. If the complement of is disconnected, is connected. The subtrees that are children of are defined in a way that is symmetric with the case where is disconnected, since the modules of a graph are the same as the modules of its complement. The root of the tree is labeled a serial node, and the quotient defined by the children is a complete graph. The final tree has one-element sets of vertices of as its leaves, due to the base case. A set of vertices of is a module if and only if it is a node of the tree or a union of children of a series or parallel node. This implicitly gives all modular partitions of . It is in this sense that the modular decomposition tree "subsumes" all other ways of recursively decomposing into quotients. Algorithmic issues A data structure for representing the modular decomposition tree should support the operation that inputs a node and returns the set of vertices of that the node represents. An obvious way to do this is to assign to each node a list of the vertices of that it represents. Given a pointer to a node, this structure could return the set of vertices of that it represents in time. However, this data structure would require space in the worst case. An -space alternative that matches this performance is obtained by representing the modular decomposition tree using any standard rooted-tree data structure and labeling each leaf with the vertex of that it represents. The set represented by an internal node is given by the set of labels of its leaf descendants. It is well known that any rooted tree with leaves has at most internal nodes. One can use a depth-first search starting at to report the labels of leaf-descendants of in time. Each node is a set of vertices of and, if is an internal node, the set of children of is a partition of where each partition class is a module. They therefore induce the quotient in . 
The vertices of this quotient are the elements of , so can be represented by installing edges among the children of . If and are two members of and and , then and are adjacent in if and only if and are adjacent in this quotient. For any pair of vertices of , this is determined by the quotient at children of the least common ancestor of and in the modular decomposition tree. Therefore, the modular decomposition, labeled in this way with quotients, gives a complete representation of . Many combinatorial problems can be solved on by solving the problem separately on each of these quotients. For example, is a comparability graph if and only if each of these quotients is a comparability graph (Gallai, 67; Möhring, 85). Therefore, to find whether a graph is a comparability graph, one need only find whether each of the quotients is. In fact, to find a transitive orientation of a comparability graph, it suffices to transitively orient each of these quotients of its modular decomposition (Gallai, 67; Möhring, 85). A similar phenomenon applies for permutation graphs, (McConnell and Spinrad '94), interval graphs (Hsu and Ma '99), perfect graphs, and other graph classes. Some important combinatorial optimization problems on graphs can be solved using a similar strategy (Möhring, 85). Cographs are the graphs that only have parallel or series nodes in their modular decomposition tree. The first polynomial algorithm to compute the modular decomposition tree of a graph was published in 1972 (James, Stanton & Cowan 1972) and now linear algorithms are available (McConnell & Spinrad 1999, Tedder et al. 2007, Cournier & Habib 1994). Generalizations Modular decomposition of directed graphs can be done in linear time . With a small number of simple exceptions, every graph with a nontrivial modular decomposition also has a skew partition . References External links A Perl implementation of a modular decomposition algorithm A Java implementation of a modular decomposition algorithm A Julia implementation of a modular decomposition algorithm Graph theory objects
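The definition of a module can be made concrete with a direct test: a vertex set M is a module exactly when every vertex outside M is adjacent either to all of M or to none of M. The Python sketch below implements this test and lists all modules of a small example graph by brute force. The example graph is invented for illustration, and the enumeration is exponential in the number of vertices, unlike the linear-time modular decomposition algorithms cited above.

```python
from itertools import combinations

def is_module(adj, m):
    """True if vertex set m is a module of the graph given by adjacency sets:
    every vertex outside m sees either all of m or none of m."""
    m = set(m)
    for v in adj:
        if v in m:
            continue
        seen = len(adj[v] & m)
        if seen != 0 and seen != len(m):
            return False
    return True

def all_modules(adj):
    """Brute-force enumeration of all non-empty modules (exponential time)."""
    vertices = list(adj)
    found = []
    for k in range(1, len(vertices) + 1):
        for subset in combinations(vertices, k):
            if is_module(adj, subset):
                found.append(set(subset))
    return found

# Hypothetical example: a triangle on vertices 1, 2, 3 plus an isolated vertex 4.
# {2, 3} is a non-trivial module: 1 is adjacent to both members, 4 to neither.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}, 4: set()}
for m in all_modules(adj):
    print(sorted(m))
```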
Modular decomposition
[ "Mathematics" ]
2,653
[ "Mathematical relations", "Graph theory", "Graph theory objects" ]
35,304,616
https://en.wikipedia.org/wiki/Spherical%20roller%20bearing
A spherical roller bearing is a rolling-element bearing that permits rotation with low friction, and permits angular misalignment. Typically these bearings support a rotating shaft in the bore of the inner ring that may be misaligned in respect to the outer ring. The misalignment is possible due to the spherical internal shape of the outer ring and spherical rollers. Despite what their name may imply, spherical roller bearings are not truly spherical in shape. The rolling elements of spherical roller bearings are mainly cylindrical in shape, but have a (barrel like) profile that makes them appear like cylinders that have been slightly over-inflated (i.e. like a barrel). Construction Spherical roller bearings consist of an inner ring with two raceways inclined at an angle to the bearing axis, an outer ring with a common spherical raceway, spherical rollers, cages and, in certain designs, also internal guide rings or center rings. These bearings can also be sealed. History The spherical roller bearing was invented by engineer Arvid Palmgren and was introduced on the market 1919 by SKF. The design of the bearing that Arvid Palmgren invented is similar to the design that is still in use in modern machines. Designs Most spherical roller bearings are designed with two rows of rollers, allowing them to take very heavy radial loads and heavy axial loads. There are also designs with one row of rollers, suitable for lower radial loads and virtually no axial load. These are also called "barrel roller bearings" or "Tonnenlager" and are typically available in the 202- and 203-series. The internal design of the bearing is not standardised by ISO, so it varies between different manufacturers and different series. Some features that may or may not exist in different bearings are: Lubrication features in inner or outer ring Central flange Guide ring or center ring Integrated seals Cage Dimensions External dimensions of spherical roller bearings are standardised by ISO in the standard ISO 15:1998. Some of the common series of spherical roller bearings are: 213, 222, 223, 230, 231, 232, 238, 239, 240, 241, 248, 249. Materials Bearing rings and rolling elements can be made of a number of different materials, but the most common is "chrome steel", (high carbon chromium) a material with approximately 1.5% chrome content. Such "chrome steel" has been standardized by a number of authorities, and there are therefore a number of similar materials, such as: AISI 52100 (USA), 100CR6 (Germany), SUJ2 (Japan) and GCR15 (China). Some common materials for bearing cages: Sheet steel (stamped or laser-cut) Polyamide (injection molded) Brass (stamped or machined) Steel (machined) The choice of material is mainly done by the manufacturing volume and method. For large-volume bearings, cages are often of stamped sheet-metal or injection molded polyamide, whereas low volume manufacturers or low volume series often have cages of machined brass or machined steel. For some specific applications, special material for coating (e.g. PTFE coated cylindrical bore for vibratory applications) is adopted. Manufacturers Some manufacturers of spherical roller bearings are SKF, Schaeffler, Timken Company, NSK Ltd., NTN Corporation and JTEKT. Since SKF introduced the spherical roller bearing in 1919, spherical roller bearings have purposefully been refined through the decades to improve carrying capacity and to reduce operational friction. 
This has been possible by playing with a palette of parameters such as materials, internal geometry, tolerance and lubricant. Nowadays, spherical roller bearing manufacturers are striving to refine the bearing knowledge towards more environmentally-friendly and energy-efficient solutions. Applications Spherical bearings are used in countless industrial applications where there are heavy loads, moderate speeds and possibly misalignment. Some common application areas are: Gearboxes Wind turbines Continuous casting machines Material handling Pumps Mechanical fans and blowers Mining and construction equipment Pulp and paper processing equipment Marine propulsion and offshore drilling Off-road vehicles See also References Bearings (mechanical) Rolling-element bearings Mechanical engineering Swedish inventions
Spherical roller bearing
[ "Physics", "Engineering" ]
845
[ "Applied and interdisciplinary physics", "Mechanical engineering" ]
35,310,202
https://en.wikipedia.org/wiki/Cagniard%E2%80%93De%20Hoop%20method
In the mathematical modeling of seismic waves, the Cagniard–De Hoop method is a sophisticated mathematical tool for solving a large class of wave and diffusive problems in horizontally layered media. The method is based on the combination of a unilateral Laplace transformation with the real-valued and positive transform parameter and the slowness field representation. It is named after Louis Cagniard and Adrianus de Hoop; Cagniard published his method in 1939, and De Hoop published an ingenious improvement on it in 1960. Initially, the Cagniard–De Hoop technique was of interest to the seismology community only. Thanks to its versatility, however, the technique has become popular in other disciplines and is nowadays widely accepted as the benchmark for the computation of wavefields in layered media. In its applications to calculating wavefields in general N-layered stratified media, the Cagniard–De Hoop technique is also known as the generalized ray theory. The complete generalized-ray theory, including the pertaining wave-matrix formalism for the layered medium with arbitrary point sources, has been developed by De Hoop (with his students) for acoustics waves, elastic waves and electromagnetic waves. Early applications of the Cagniard-DeHoop technique were limited to the wavefield propagation in piecewise homogeneous, loss-free layered media. To circumvent the limitations, a number of extensions enabling the incorporation of arbitrary dissipation and loss mechanisms and continuously-layered media were introduced. More recently, the Cagniard–De Hoop technique has been employed to put forward a fundamentally new time-domain integral-equation technique in computational electromagnetics, the so-called Cagniard–De Hoop Method of Moments (CdH-MoM), for time-domain modeling of wire and planar antennas. References Further reading Aki, K., & Richards, P. G. (2002). Quantitative Seismology. Chew, W. C. (1995). Waves and Fields in Inhomogeneous Media. IEEE Press. Fourier analysis Wave mechanics Computational electromagnetics
Cagniard–De Hoop method
[ "Physics", "Mathematics" ]
427
[ "Physical phenomena", "Computational electromagnetics", "Mathematical analysis", "Mathematical analysis stubs", "Classical mechanics", "Computational physics", "Waves", "Wave mechanics" ]
32,097,685
https://en.wikipedia.org/wiki/Newmark%27s%20sliding%20block
Newmark's sliding block analysis is an engineering method that calculates the permanent displacements of soil slopes (also embankments and dams) during seismic loading. Newmark analysis does not calculate the actual displacement; rather, it yields an index value that can be used to indicate the structure's likelihood of failure during a seismic event. It is also simply called Newmark's analysis or the sliding block method of slope stability analysis. History The method is an extension of Newmark's direct integration method originally proposed by Nathan M. Newmark in 1943. It was applied to the sliding block problem by Newmark in the British Geotechnical Association's 5th Rankine Lecture, delivered in London in 1965 and published later in the Association's scientific journal Geotechnique. The extension owes a great deal to Nicholas Ambraseys, whose doctoral thesis on the seismic stability of earth dams at Imperial College London in 1958 formed the basis of the method. At his Rankine Lecture, Newmark himself acknowledged Ambraseys' contribution to this method through various discussions between the two researchers while the latter was a visiting professor at the University of Illinois. Method According to Kramer, the Newmark method is an improvement over the traditional pseudo-static method, which considered seismic slope failure only at limiting conditions (i.e. when the Factor of Safety, FOS, becomes equal to 1), providing information about the collapse state but no information about the induced deformations. The new method points out that when the FOS momentarily drops below 1, "failure" does not necessarily occur, as the time for which this happens is very short. However, each time the FOS falls below unity, some permanent deformation occurs, and these deformations accumulate whenever FOS < 1. The method further suggests that a failing mass from the slope may be considered as a block of mass sliding (hence "sliding block") on an inclined surface, which slides only when the inertial force (acceleration × mass) acting on it is equal to or higher than the force required to cause sliding. Following these assumptions, the method suggests that whenever the acceleration (i.e. the seismic load) is higher than the critical acceleration required to cause collapse, which may be obtained from the traditional pseudo-static method (such as the Sarma method), permanent displacements will occur. The magnitude of these displacements is obtained by integrating the difference between the applied acceleration and the critical acceleration twice with respect to time (acceleration being the second time derivative of displacement). Modern alternatives The method is still widely used in engineering practice to assess the consequences of earthquakes on slopes. In the special case of earth dams, it is used in conjunction with the shear beam method, which can provide the acceleration time history at the level of the failure surface. It has been shown to give reasonable results that are quite comparable to measured data. However, Newmark's sliding block assumes rigid, perfectly plastic behaviour, which is not realistic. It also cannot really take account of the pore water pressure build-up during cyclic loading, which can lead to the initiation of liquefaction and to failures other than simple distinct slip surfaces. As a result, more rigorous methods have been developed and are used nowadays in order to overcome these shortcomings.
Numerical methods such as finite difference and finite element analysis are now used; these can employ more complicated elasto-plastic constitutive models that simulate pre-yield elasticity. See also Slope stability Slope stability analysis Earthquake engineering Finite element analysis References Bibliography Kramer, S. L. (1996) Geotechnical Earthquake Engineering. Prentice Hall, New Jersey. Soil mechanics Landslide analysis, prevention and mitigation Geological techniques Earthquake engineering
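The double integration described in the Method section can be sketched numerically: while the ground acceleration exceeds the critical acceleration, the relative acceleration of the block is integrated once to obtain a relative velocity and again to accumulate permanent displacement, with sliding stopping when the relative velocity returns to zero. The Python sketch below uses a synthetic sinusoidal ground motion and a made-up critical acceleration of 0.1 g purely for illustration, and it implements only the simplest one-way rigid-block bookkeeping, not any of the established engineering implementations.

```python
import numpy as np

def newmark_displacement(accel, dt, a_crit):
    """Permanent sliding displacement of a rigid block (one-way sliding).
    Sliding starts when the ground acceleration exceeds a_crit and stops
    when the relative velocity returns to zero."""
    rel_vel = 0.0
    disp = 0.0
    for a in accel:
        rel_acc = a - a_crit          # relative acceleration while sliding
        if rel_vel > 0.0 or rel_acc > 0.0:
            rel_vel = max(rel_vel + rel_acc * dt, 0.0)  # no back-sliding
            disp += rel_vel * dt
    return disp

# Synthetic ground motion: four seconds of a 1 Hz sine with 0.3 g peak (illustrative).
g = 9.81
dt = 0.005
t = np.arange(0.0, 4.0, dt)
accel = 0.3 * g * np.sin(2 * np.pi * 1.0 * t)

# Made-up critical acceleration of 0.1 g, e.g. as obtained from a pseudo-static analysis.
print(f"permanent displacement ~ {newmark_displacement(accel, dt, 0.1 * g):.3f} m")
```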
Newmark's sliding block
[ "Physics", "Engineering", "Environmental_science" ]
735
[ "Structural engineering", "Applied and interdisciplinary physics", " prevention and mitigation", "Soil mechanics", "Civil engineering", "Earthquake engineering", "Environmental soil science", "Landslide analysis" ]
32,102,172
https://en.wikipedia.org/wiki/Sarma%20method
The Sarma method is a method used primarily to assess the stability of soil slopes under seismic conditions. Using appropriate assumptions the method can also be employed for static slope stability analysis. It was proposed by Sarada K. Sarma in the early 1970s as an improvement over the other conventional methods of analysis which had adopted numerous simplifying assumptions. History Sarma worked in the area of seismic analysis of earth dams under Ambraseys at Imperial College for his doctoral studies in the mid 1960s. The methods for seismic analysis of dams available at that time were based on the Limit Equilibrium approach and were restricted to planar or circular failures surfaces adopting several assumptions regarding force and moment equilibrium (usually satisfying one of the two) and about the magnitude of the forces (such as interslice forces being equal to zero). Sarma looked into the various available methods of analysis and developed a new method for analysis in seismic conditions and calculating the permanent displacements due to strong shaking. His method was published in the 1970s (the very first publication was in 1973 and later improvements came in 1975 and 1979 ). Method Assumptions The method satisfies all conditions of equilibrium, (i.e. horizontal and vertical force equilibrium and moment equilibrium for each slice). It may be applied to any shape of slip surface as the slip surfaces are not assumed to be vertical, but they may be inclined. It is assumed that magnitudes of vertical side forces follow prescribed patterns. For n slices (or wedges), there are 3n equations and 3n unknowns, and therefore it statically determinate without the need of any further additional assumptions. Advantages The Sarma method is called an advanced and rigorous method of static and seismic slope stability analysis. It is called advanced because it can take account of non-circular failure surfaces. Also, the multi-wedge approach allows for non-vertical slices and irregular slope geometry. It is called a rigorous method because it can satisfy all the three conditions of equilibrium, horizontal and vertical forces and moments. The Sarma method is nowadays used as a verification to finite element programs (also FE limit analysis) and it is the standard method used for seismic analysis. Use The method is used mainly for two purposes, to analyse earth slopes and earth dams. When used to analyse seismic slope stability it can provide the factor of safety against failure for a given earthquake load, i.e. horizontal seismic force or acceleration (critical acceleration). Besides, it can provide the required earthquake load (force or acceleration) for which a given slope will fail, i.e. the factor of safety will be equal to 1. When the method is used in the analysis of earth dams (i.e. the slopes of the dam faces), the results of the analysis, i.e. the critical acceleration is used in the Newmark's sliding block analysis in order to calculate the induced permanent displacements. This follows the assumption that displacements will result if the earthquake induced accelerations exceed the value of the critical acceleration for stability. Accuracy General acceptance The Sarma method has been extensively used in seismic analysis software for many years and has been the standard practice until recently for seismic slope stability for many years (similar to the Mononobe–Okabe method for retaining walls). 
Its accuracy has been verified by various researchers and it has been proved to yield results quite similar to the modern safe Lower Bound numerical stability Limit Analysis methods (e.g. the 51st Rankine Lecture). Modern alternatives However, nowadays modern numerical analysis software employing usually the finite element, finite difference and boundary element methods are more widely used for special case studies. Particular attention has been recently given to the finite element method which can provide very accurate results through the release of several assumptions usually adopted by the conventional methods of analysis. Special boundary conditions and constitutive laws can model the case in a more realistic fashion. See also Earthquake engineering Finite element method Slope stability analysis References Bibliography Kramer, S. L. (1996) Geotechnical Earthquake Engineering. Prentice Hall, New Jersey. External links Dr Sarada K Sarma Landslide analysis, prevention and mitigation Earthquake engineering
Sarma method
[ "Engineering", "Environmental_science" ]
833
[ "Structural engineering", " prevention and mitigation", "Civil engineering", "Earthquake engineering", "Environmental soil science", "Landslide analysis" ]
32,103,592
https://en.wikipedia.org/wiki/Langlands%E2%80%93Deligne%20local%20constant
In mathematics, the Langlands–Deligne local constant, also known as the local epsilon factor or local Artin root number (up to an elementary real function of s), is an elementary function associated with a representation of the Weil group of a local field. The functional equation L(ρ,s) = ε(ρ,s)L(ρ∨,1−s) of an Artin L-function has an elementary function ε(ρ,s) appearing in it, equal to a constant called the Artin root number times an elementary real function of s, and Langlands discovered that ε(ρ,s) can be written in a canonical way as a product ε(ρ,s) = Π ε(ρv, s, ψv) of local constants ε(ρv, s, ψv) associated to primes v. Tate proved the existence of the local constants in the case that ρ is 1-dimensional in Tate's thesis. Dwork proved the existence of the local constant ε(ρv, s, ψv) up to sign. The original proof of the existence of the local constants, due to Langlands, used local methods; it was rather long and complicated, and was never published. Deligne later discovered a simpler proof using global methods. Properties The local constants ε(ρ, s, ψE) depend on a representation ρ of the Weil group and a choice of character ψE of the additive group of E. They satisfy the following conditions: If ρ is 1-dimensional then ε(ρ, s, ψE) is the constant associated to it by Tate's thesis as the constant in the functional equation of the local L-function. ε(ρ1⊕ρ2, s, ψE) = ε(ρ1, s, ψE)ε(ρ2, s, ψE). As a result, ε(ρ, s, ψE) can also be defined for virtual representations ρ. If ρ is a virtual representation of dimension 0 and E contains K then ε(ρ, s, ψE) = ε(IndE/Kρ, s, ψK). Brauer's theorem on induced characters implies that these three properties characterize the local constants. Deligne showed that the local constants are trivial for real (orthogonal) representations of the Weil group. Notational conventions There are several different conventions for denoting the local constants. The parameter s is redundant and can be combined with the representation ρ, because ε(ρ, s, ψE) = ε(ρ⊗||·||s, 0, ψE), where ||·||s denotes the s-th power of the norm character. Deligne includes an extra parameter dx consisting of a choice of Haar measure on the local field. Other conventions omit this parameter by fixing a choice of Haar measure: either the Haar measure that is self-dual with respect to ψ (used by Langlands), or the Haar measure that gives the integers of E measure 1. These different conventions differ by elementary terms that are positive real numbers. References External links Representation theory Zeta and L-functions Class field theory
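For readability, the functional equation, the product decomposition, and the two characterizing identities quoted above can be typeset as follows; this is only a restatement of the relations already given in the text, with K the base field and E a finite extension of K.

% Functional equation and product decomposition
L(\rho, s) = \varepsilon(\rho, s)\, L(\rho^{\vee}, 1 - s),
\qquad
\varepsilon(\rho, s) = \prod_{v} \varepsilon(\rho_v, s, \psi_v)

% Additivity and inductivity of the local constants
\varepsilon(\rho_1 \oplus \rho_2, s, \psi_E)
   = \varepsilon(\rho_1, s, \psi_E)\,\varepsilon(\rho_2, s, \psi_E),
\qquad
\varepsilon(\rho, s, \psi_E) = \varepsilon\!\left(\operatorname{Ind}_{E/K}\rho,\; s,\; \psi_K\right)
   \quad (\dim \rho = 0,\ K \subseteq E)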
Langlands–Deligne local constant
[ "Mathematics" ]
637
[ "Representation theory", "Fields of abstract algebra" ]
32,104,707
https://en.wikipedia.org/wiki/Cellular%20noise
Cellular noise is random variability in quantities arising in cellular biology. For example, cells which are genetically identical, even within the same tissue, are often observed to have different expression levels of proteins, different sizes and structures. These apparently random differences can have important biological and medical consequences. Cellular noise was originally, and is still often, examined in the context of gene expression levels – either the concentration or copy number of the products of genes within and between cells. As gene expression levels are responsible for many fundamental properties in cellular biology, including cells' physical appearance, behaviour in response to stimuli, and ability to process information and control internal processes, the presence of noise in gene expression has profound implications for many processes in cellular biology. Definitions The most frequent quantitative definition of noise is the coefficient of variation: ηX = σX / μX, where ηX is the noise in a quantity X, μX is the mean value of X and σX is the standard deviation of X. This measure is dimensionless, allowing a relative comparison of the importance of noise, without necessitating knowledge of the absolute mean. Other quantities often used for mathematical convenience are the Fano factor: FX = σX² / μX, and the normalized variance: ηX² = σX² / μX². Experimental measurement The first experimental account and analysis of gene expression noise in prokaryotes is from Becskei & Serrano and from Alexander van Oudenaarden's lab. The first experimental account and analysis of gene expression noise in eukaryotes is from James J. Collins's lab. Intrinsic and extrinsic noise Cellular noise is often investigated in the framework of intrinsic and extrinsic noise. Intrinsic noise refers to variation in identically regulated quantities within a single cell: for example, the intra-cell variation in expression levels of two identically controlled genes. Extrinsic noise refers to variation in identically regulated quantities between different cells: for example, the cell-to-cell variation in expression of a given gene. Intrinsic and extrinsic noise levels are often compared in dual reporter studies, in which the expression levels of two identically regulated genes (often fluorescent reporters like GFP and YFP) are plotted for each cell in a population. An issue with the general depiction of extrinsic noise as a spread along the main diagonal in dual-reporter studies is the assumption that extrinsic factors cause positive expression correlations between the two reporters. In fact, when the two reporters compete for binding of a low-copy regulator, the two reporters become anomalously anticorrelated, and the spread is perpendicular to the main diagonal. Indeed, any deviation of the dual-reporter scatter plot from circular symmetry indicates extrinsic noise. Information theory offers a way to avoid this anomaly. Sources Note: These lists are illustrative, not exhaustive, and identification of noise sources is an active and expanding area of research.
Intrinsic noise Low copy-number effects (including discrete birth and death events): the random (stochastic) nature of production and degradation of cellular components means that noise is high for components at low copy number (as the magnitude of these random fluctuations is not negligible with respect to the copy number); Diffusive cellular dynamics: many important cellular processes rely on collisions between reactants (for example, RNA polymerase and DNA) and other physical criteria which, given the diffusive dynamic nature of the cell, occur stochastically. Noise propagation: Low copy-number effects and diffusive dynamics result in each of the biochemical reactions in a cell occurring randomly. Stochasticity of reactions can be either attenuated or amplified. Contribution each reaction makes to the intrinsic variability in copy numbers can be quantified via Van Kampen's system size expansion. Extrinsic noise Cellular age / cell cycle stage: cells in a dividing population that is not synchronised will, at a given snapshot in time, be at different cell cycle stages, with corresponding biochemical and physical differences; Cell growth: variations in growth rates leading to concentration variations between cells; Physical environment (temperature, pressure, ...): physical quantities and chemical concentrations (particularly in the case of cell-to-cell signalling) may vary spatially across a population of cells, provoking extrinsic differences as a function of position; Organelle distributions: random factors in the quantity and quality of organelles (for example, the number and functionality of mitochondria) lead to significant cell-to-cell differences in a range of processes (as, for example, mitochondria play a central role in the energy budget of eukaryotic cells); Inheritance noise: uneven partitioning of cellular components between daughter cells at mitosis can result in large extrinsic differences in a dividing population. Regulator competition: Regulators competing to bind downstream promoters can cause negative correlations: when one promoter is bound the other is not and vice versa. Note that extrinsic noise can affect levels and types of intrinsic noise: for example, extrinsic differences in the mitochondrial content of cells lead, through differences in ATP levels, to some cells transcribing faster than others, affecting the rates of gene expression and the magnitude of intrinsic noise across the population. Effects Note: These lists are illustrative, not exhaustive, and identification of noise effects is an active and expanding area of research. Gene expression levels: noise in gene expression causes differences in the fundamental properties of cells, limits their ability to biochemically control cellular dynamics, and directly or indirectly induce many of the specific effects below; Energy levels and transcription rate: noise in transcription rate, arising from sources including transcriptional bursting, is a significant source of noise in expression levels of genes. Extrinsic noise in mitochondrial content has been suggested to propagate to differences in the ATP concentrations and transcription rates (with functional relationships implied between these three quantities) in cells, affecting cells' energetic competence and ability to express genes; Phenotype selection: bacterial populations exploit extrinsic noise to choose a population subset to enter a quiescent state. 
In a bacterial infection, for example, this subset will not propagate quickly but will be more robust when the population is threatened by antibiotic treatment: the rapidly replicating, infectious bacteria will be killed more quickly than the quiescent subset, which may be capable of restarting the infection. This phenomenon is why courses of antibiotics should be finished even when symptoms seem to have disappeared; Development and stem cell differentiation: developmental noise in biochemical processes which need to be tightly controlled (for example, patterning of gene expression levels that develop into different body parts) during organismal development can have dramatic consequences, necessitating the evolution of robust cellular machinery. Stem cells differentiate into different cell types depending on the expression levels of various characteristic genes: noise in gene expression can clearly perturb and influence this process, and noise in transcription rate can affect the structure of the dynamic landscape that differentiation occurs on. There are review articles summarizing these effects from bacteria to mammalian cells; Drug resistance: noise improves short-term survival and long-term evolution of drug resistance at high levels of drug treatment, but has the opposite effect at low levels of drug treatment; Cancer treatments: recent work has found extrinsic differences, linked to gene expression levels, in the response of cancer cells to anti-cancer treatments, potentially linking the phenomenon of fractional killing (whereby each treatment kills some but not all of a tumour) to noise in gene expression. Because individual cells could repeatedly and stochastically perform transitions between states associated with differences in responsiveness to a therapeutic modality (chemotherapy, targeted agent, radiation, etc.), therapy might need to be administered frequently (to ensure cells are treated soon after entering a therapy-responsive state, before they can rejoin the therapy-resistant subpopulation and proliferate) and over long times (to treat even those cells emerging late from the final residue of the therapy-resistant subpopulation). Evolution of the genome: genomes are covered by chromatin, which can be roughly classified as "open" (also known as euchromatin) or "closed" (also known as heterochromatin). Open chromatin leads to less noise in transcription compared to heterochromatin. "Housekeeping" proteins (proteins that carry out tasks required for cellular survival) often work in large multiprotein complexes. If the noise in the proteins of such complexes is too discoordinated, it can lead to reduced production of the complexes, with potentially deleterious effects. Reduction in noise may therefore provide an evolutionary selection pressure for the movement of essential genes into open chromatin; Information processing: as cellular regulation is performed with components that are themselves subject to noise, the ability of cells to process information and perform control is fundamentally limited by intrinsic noise. Analysis As many quantities of cell biological interest are present in discrete copy number within the cell (single DNAs, dozens of mRNAs, hundreds of proteins), tools from discrete stochastic mathematics are often used to analyse and model cellular noise. In particular, master equation treatments – where the probabilities of observing the system in a given state at a given time are linked through ODEs – have proved particularly fruitful.
A canonical model for noisy gene expression, where the processes of DNA activation, transcription and translation are all represented as Poisson processes with given rates, gives a master equation which may be solved exactly (with generating functions) under various assumptions or approximated with stochastic tools like Van Kampen's system size expansion. Numerically, the Gillespie algorithm or stochastic simulation algorithm is often used to create realisations of stochastic cellular processes, from which statistics can be calculated. The problem of inferring the values of parameters in stochastic models (parametric inference) for biological processes, which are typically characterised by sparse and noisy experimental data, is an active field of research, with methods including Bayesian MCMC and approximate Bayesian computation proving adaptable and robust. For the two-state model, a moment-based method has been described for parameter inference from mRNA distributions. References Cell biology Biophysics Molecular biology Biostatistics Randomness
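As a concrete illustration of the stochastic simulation approach mentioned above, the sketch below applies the Gillespie algorithm to the simplest birth–death model of mRNA copy number (constant production, first-order degradation) and estimates the noise measures introduced in the Definitions section. The rate constants are arbitrary, and the model is deliberately simpler than the full activation–transcription–translation scheme discussed in the text.

import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k_prod=10.0, k_deg=1.0, t_end=1000.0):
    """Exact stochastic simulation of  0 -> mRNA (rate k_prod),  mRNA -> 0 (rate k_deg * n)."""
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        rates = np.array([k_prod, k_deg * n])
        total = rates.sum()
        t += rng.exponential(1.0 / total)        # waiting time to the next reaction
        if rng.random() < rates[0] / total:      # choose which reaction fires
            n += 1
        else:
            n -= 1
        times.append(t)
        counts.append(n)
    return np.array(times), np.array(counts)

t, n = gillespie_birth_death()
# Time-weighted moments (the copy number is held constant between reaction events)
dt = np.diff(t)
x = n[:-1]
mean = np.average(x, weights=dt)
var = np.average((x - mean) ** 2, weights=dt)
print(f"mean ~ {mean:.2f}, CV ~ {np.sqrt(var) / mean:.3f}, Fano factor ~ {var / mean:.3f}")

For this model the stationary copy-number distribution is Poisson, so the estimated Fano factor should come out close to 1.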
Cellular noise
[ "Physics", "Chemistry", "Biology" ]
2,074
[ "Cell biology", "Applied and interdisciplinary physics", "Biophysics", "Molecular biology", "Biochemistry" ]
32,106,812
https://en.wikipedia.org/wiki/Convergence%20%28logic%29
In mathematics, computer science and logic, convergence is the idea that different sequences of transformations come to a conclusion in a finite amount of time (the transformations are terminating), and that the conclusion reached is independent of the path taken to get to it (they are confluent). More formally, a preordered set of term rewriting transformations is said to be convergent if it is confluent and terminating. See also Logical equality Logical equivalence Rule of replacement References Rewriting systems
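A small, self-contained illustration of the idea (not taken from the article): the one-rule string rewriting system below, which moves every "a" to the left of every "b", is terminating (each step removes one inversion) and confluent (the unique normal form is the sorted string), so applying the rules in any order reaches the same result. The helper names are arbitrary.

import random

RULES = [("ba", "ab")]   # terminating and confluent: the unique normal form is the sorted string

def rewrite_once(s):
    """Return all strings reachable from s by a single rule application."""
    out = []
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.append(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def normalize(s, rng):
    """Apply applicable rules in a random order until no rule applies (termination)."""
    while True:
        successors = rewrite_once(s)
        if not successors:
            return s            # s is a normal form
        s = rng.choice(successors)

rng = random.Random()
start = "babba"
# Confluence: every maximal rewrite sequence from `start` ends in the same normal form.
print({normalize(start, rng) for _ in range(20)})   # -> {'aabbb'}

Running the script prints a single normal form, {'aabbb'}, no matter which of the randomized rewrite sequences is followed.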
Convergence (logic)
[ "Mathematics" ]
102
[ "Mathematical logic stubs", "Mathematical logic" ]
32,106,995
https://en.wikipedia.org/wiki/Belinfante%E2%80%93Rosenfeld%20stress%E2%80%93energy%20tensor
In mathematical physics, the Belinfante–Rosenfeld tensor is a modification of the stress–energy tensor that is constructed from the canonical stress–energy tensor and the spin current so as to be symmetric yet still conserved. In a classical or quantum local field theory, the generator of Lorentz transformations can be written as an integral of a local current Here is the canonical stress–energy tensor satisfying , and is the contribution of the intrinsic (spin) angular momentum. The anti-symmetry implies the anti-symmetry Local conservation of angular momentum requires that Thus a source of spin-current implies a non-symmetric canonical stress–energy tensor. The Belinfante–Rosenfeld tensor is a modification of the stress–energy tensor that is constructed from the canonical stress–energy tensor and the spin current so as to be symmetric yet still conserved, i.e., An integration by parts shows that and so a physical interpretation of Belinfante tensor is that it includes the "bound momentum" associated with gradients of the intrinsic angular momentum. In other words, the added term is an analogue of the "bound current" associated with a magnetization density . The curious combination of spin-current components required to make symmetric and yet still conserved seems totally ad hoc, but it was shown by both Rosenfeld and Belinfante that the modified tensor is precisely the symmetric Hilbert stress–energy tensor that acts as the source of gravity in general relativity. Just as it is the sum of the bound and free currents that acts as a source of the magnetic field, it is the sum of the bound and free energy–momentum that acts as a source of gravity. Belinfante–Rosenfeld and the Hilbert energy–momentum tensor The Hilbert energy–momentum tensor is defined by the variation of the action functional with respect to the metric as or equivalently as (The minus sign in the second equation arises because because ) We may also define an energy–momentum tensor by varying a Minkowski-orthonormal vierbein to get Here is the Minkowski metric for the orthonormal vierbein frame, and are the covectors dual to the vierbeins. With the vierbein variation there is no immediately obvious reason for to be symmetric. However, the action functional should be invariant under an infinitesimal local Lorentz transformation , , and so should be zero. As is an arbitrary position-dependent skew symmetric matrix, we see that local Lorentz and rotation invariance both requires and implies that . Once we know that is symmetric, it is easy to show that , and so the vierbein-variation energy–momentum tensor is equivalent to the metric-variation Hilbert tensor. We can now understand the origin of the Belinfante–Rosenfeld modification of the Noether canonical energy momentum tensor. Take the action to be where is the spin connection that is determined by via the condition of being metric compatible and torsion free. The spin current is then defined by the variation the vertical bar denoting that the are held fixed during the variation. 
The "canonical" Noether energy momentum tensor is the part that arises from the variation where we keep the spin connection fixed: Then Now, for a torsion-free and metric-compatible connection, we have that where we are using the notation Using the spin-connection variation, and after an integration by parts, we find Thus we see that corrections to the canonical Noether tensor that appear in the Belinfante–Rosenfeld tensor occur because we need to simultaneously vary the vierbein and the spin connection if we are to preserve local Lorentz invariance. As an example, consider the classical Lagrangian for the Dirac field Here the spinor covariant derivatives are We therefore get There is no contribution from if we use the equations of motion, i.e. we are on shell. Now if are distinct and zero otherwise. As a consequence is totally anti-symmetric. Now, using this result, and again the equations of motion, we find that Thus the Belinfante–Rosenfeld tensor becomes The Belinfante–Rosenfeld tensor for the Dirac field is therefore seen to be the symmetrized canonical energy–momentum tensor. Weinberg's definition Steven Weinberg defined the Belinfante tensor as where is the Lagrangian density, the set {Ψ} are the fields appearing in the Lagrangian, the non-Belinfante energy momentum tensor is defined by and are a set of matrices satisfying the algebra of the homogeneous Lorentz group . References Tensors in general relativity
Belinfante–Rosenfeld stress–energy tensor
[ "Physics", "Engineering" ]
941
[ "Tensors in general relativity", "Tensors", "Tensor physical quantities", "Physical quantities" ]
37,748,394
https://en.wikipedia.org/wiki/2-Vinylpyridine
2-Vinylpyridine is an organic compound with the formula CH2CHC5H4N. It is a derivative of pyridine with a vinyl group in the 2-position, next to the nitrogen. It is a colorless liquid, although samples are often brown. It is used industrially as a precursor to specialty polymers and as an intermediate in the chemical, pharmaceutical, dye, and photo industries. Vinylpyridine is sensitive to polymerization. It may be stabilized with a polymerisation inhibitor such as tert-butylcatechol. Owing to its tendency to polymerize, samples are typically refrigerated. Synthesis It was first synthesized in 1887. A contemporary preparation entails condensation of 2-methylpyridine with formaldehyde, followed by dehydration of the intermediate alcohol. The reaction is carried out between 150–200 °C in an autoclave. The conversion is kept relatively low. After removal of unreacted 2-methylpyridine by distillation, concentrated aqueous sodium hydroxide is added to the residue and the resultant mixture is distilled under reduced pressure. During distillation, the dehydration of 2-(2-pyridyl)ethanol occurs to give 2-vinylpyridine, which can be purified further by fractional distillation under reduced pressure in the presence of an inhibitor such as 4-tert-butylcatechol. CH3C5H4N + CH2O → HOCH2CH2C5H4N HOCH2CH2C5H4N → CH2=CHC5H4N + H2O An alternative synthesis involves the reaction of acrylonitrile and acetylene below 130–140 ̊C in the presence of organocobalt compounds as a catalyst. Acrylonitrile is the solvent for the reaction. Uses Polymeric derivatives 2-Vinylpyridine is readily polymerized or copolymerized with styrene, butadiene, isobutylene, methyl methacrylate, and other compounds in the presence of radical, cationic, or anionic initiators. The homopolymer is soluble in organic solvents such as methanol and acetone, whereas cross-linked copolymers are insoluble in organic solvents. An important application of 2-vinylpyridine involves the production of a latex terpolymer of 2-vinylpyridine, styrene, and butadiene, for use as a tire-cord binder. The tire cord is treated first with a resorcinol-formaldehyde polymer and then with a terpolymer made from 15% 2-vinylpyridine, styrene, and butadiene. This treatment gives the close bonding of tire cord to rubber. 2-Vinylpyridine is a co-monomer for acrylic fibers. Between 1–5% of copolymerized 2-vinylpyridine provide the reactive sites for dyes. Organic synthesis Due to the electron-withdrawing effect of the ring nitrogen atom, 2-vinylpyridine adds nucleophiles such as methoxide, cyanide, hydrogen sulfide at the vinylic site to give addition products. The addition product of methanol to 2-vinylpyridine, 2-(2-methoxyethyl)pyridine is a veterinary anthelmintic. Treating 2-vinylpyridine with 4-pyridinecarbonitrile and hydrogen chloride gives 1-[2-(2-pyridyl)ethyl]-4-cyanopyridinium chloride, which then can be used to prepare dimethylaminopyridine (DMAP), a widely used base catalyst. 2-Vinylpyridine is used in the production of Axitinib, a pharmaceutical. See also 4-Vinylpyridine References 2-Pyridyl compounds Monomers Vinyl compounds
2-Vinylpyridine
[ "Chemistry", "Materials_science" ]
842
[ "Monomers", "Polymer chemistry" ]
37,749,002
https://en.wikipedia.org/wiki/C28H48O2
{{DISPLAYTITLE:C28H48O2}} The molecular formula C28H48O2 (molar mass: 416.68 g/mol, exact mass: 416.3654 u) may refer to: β-Tocopherol γ-Tocopherol
C28H48O2
[ "Chemistry" ]
65
[ "Isomerism", "Set index articles on molecular formulas" ]
37,749,068
https://en.wikipedia.org/wiki/Extreme%20mass%20ratio%20inspiral
In astrophysics, an extreme mass ratio inspiral (EMRI) is the orbit of a relatively light object around a much heavier (by a factor 10,000 or more) object, that gradually spirals in due to the emission of gravitational waves. Such systems are likely to be found in the centers of galaxies, where stellar mass compact objects, such as stellar black holes and neutron stars, may be found orbiting a supermassive black hole. In the case of a black hole in orbit around another black hole this is an extreme mass ratio binary black hole. The term EMRI is sometimes used as a shorthand to denote the emitted gravitational waveform as well as the orbit itself. The main reason for scientific interest in EMRIs is that they are one of the most promising sources for gravitational wave astronomy using future space-based detectors such as the Laser Interferometer Space Antenna (LISA). If such signals are successfully detected, they will allow accurate measurements of the mass and angular momentum of the central object, which in turn gives crucial input for models for the formation and evolution of supermassive black holes. Moreover, the gravitational wave signal provides a detailed map of the spacetime geometry surrounding the central object, allowing unprecedented tests of the predictions of general relativity in the strong gravity regime. Overview Scientific potential If successfully detected, the gravitational wave signal from an EMRI will carry a wealth of astrophysical data. EMRIs evolve slowly and complete many (~10,000) cycles before eventually plunging. Therefore, the gravitational wave signal encodes a precise map of the spacetime geometry of the supermassive black hole. Consequently, the signal can be used as an accurate test of the predictions of general relativity in the regime of strong gravity; a regime in which general relativity is completely untested. In particular, it is possible to test the hypothesis that the central object is indeed a supermassive black hole to high accuracy by measuring the quadrupole moment of the gravitational field to an accuracy of a fraction of a percent. In addition, each observation of an EMRI system will allow an accurate determination of the parameters of the system, including: The mass and angular momentum of the central object to an accuracy of 1 in 10,000. By gathering the statistics of the mass and angular momentum of a large number of supermassive black holes, it should be possible to answer questions about their formation. If the angular momentum of the supermassive black holes is large, then they probably acquired most of their mass by swallowing gas from their accretion disc. Moderate values of the angular momentum indicate that the object is most likely formed from the merger of several smaller objects with a similar mass, while low values indicate that the mass has grown by swallowing smaller objects coming in from random directions. The mass of the orbiting object to an accuracy of 1 in 10,000. The population of these masses could yield interesting insights in the population of compact objects in the nuclei of galaxies. The eccentricity (1 in 10,000) and the (cosine of the) inclination (1 in 100-1000) of the orbit. The statistics for the values concerning the shape and orientation of the orbit contains information about the formation history of these objects. (See the Formation section below.) The luminosity distance (5 in 100) and position (with an accuracy of 10−3 steradian) of the system. 
Because the shape of the signal encodes the other parameters of the system, we know how strong the signal was when it was emitted. Consequently, one can infer the distance of the system from the observed strength of the signal (since it diminishes with the distance travelled). Unlike other means of determining distances of the order of several billion light-years, the determination is completely self-contained and does not rely on the cosmic distance ladder. If the system can be matched with an optical counterpart, then this provides a completely independent way of determining the Hubble parameter at cosmic distances. Testing the validity of the Kerr conjecture. This hypothesis states that all black holes are rotating black holes of the Kerr or Kerr–Newman types. Formation It is currently thought that the centers of most (large) galaxies consist of a supermassive black hole of 10^6 to 10^9 solar masses surrounded by a cluster of 10^7 to 10^8 stars maybe 10 light-years across, called the nucleus. The orbits of the objects around the central supermassive black hole are continually perturbed by two-body interactions with other objects in the nucleus, changing the shape of the orbit. Occasionally, an object may pass close enough to the central supermassive black hole for its orbit to produce large amounts of gravitational waves, significantly affecting the orbit. Under specific conditions such an orbit may become an EMRI. In order to become an EMRI, the back-reaction from the emission of gravitational waves must be the dominant correction to the orbit (compared to, for example, two-body interactions). This requires that the orbiting object passes very close to the central supermassive black hole. A consequence of this is that the inspiralling object cannot be a large heavy star, because it will be ripped apart by the tidal forces. However, if the object passes too close to the central supermassive black hole, it will make a direct plunge across the event horizon. This will produce a brief violent burst of gravitational radiation which would be hard to detect with currently planned observatories. Consequently, the creation of an EMRI requires a fine balance between objects passing too close and too far from the central supermassive black hole. Currently, the best estimates are that a typical supermassive black hole will capture an EMRI once every 10^6 to 10^8 years. This makes witnessing such an event in our Milky Way unlikely. However, a space-based gravitational wave observatory like LISA will be able to detect EMRI events up to cosmological distances, leading to an expected detection rate somewhere between a few and a few thousand per year. Extreme mass ratio inspirals created in this way tend to have very large eccentricities (e > 0.9999). The initial, high eccentricity orbits may also be a source of gravitational waves, emitting a short burst as the compact object passes through periapsis. These gravitational wave signals are known as extreme mass ratio bursts. As the orbit shrinks due to the emission of gravitational waves, it becomes more circular. When it has shrunk enough for the gravitational waves to become strong and frequent enough to be continuously detectable by LISA, the eccentricity will typically be around 0.7. Since the distribution of objects in the nucleus is expected to be approximately spherically symmetric, there is expected to be no correlation between the initial plane of the inspiral and the spin of the central supermassive black hole.
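To give a rough feel for the gravitational-wave-driven orbital decay invoked above, the sketch below evaluates the standard leading-order (Peters 1964) inspiral time for a circular orbit. This Newtonian quadrupole estimate is not the relativistic calculation actually needed for EMRI waveforms, and the masses and radii are chosen purely for illustration.

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8            # speed of light [m/s]
M_sun = 1.989e30       # solar mass [kg]

def peters_circular_merge_time(a, m1, m2):
    """Leading-order (quadrupole) inspiral time for a circular orbit of radius a [m]."""
    return 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * (m1 + m2))

M = 1e6 * M_sun                    # central supermassive black hole (illustrative)
m = 10.0 * M_sun                   # stellar-mass compact object (illustrative)
r_s = 2.0 * G * M / c**2           # Schwarzschild radius of the central object
for a in (10.0 * r_s, 100.0 * r_s):
    years = peters_circular_merge_time(a, M, m) / 3.156e7
    print(f"a = {a / r_s:5.0f} r_s  ->  inspiral time ~ {years:.2e} years")

The decay time falls steeply with orbital radius, illustrating why only orbits that pass very close to the central black hole can shrink through radiation before two-body encounters disturb them.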
In 2011, an impediment to the formation of EMRIs was proposed. The "Schwarzschild Barrier" was thought to be an upper limit to the eccentricity of orbits near a supermassive black hole. Gravitational scattering, driven by torques from the slightly asymmetric distribution of mass in the nucleus ("resonant relaxation"), would result in a random walk in each star's eccentricity. When a star's eccentricity became sufficiently large, the orbit would begin to undergo relativistic precession, and the effectiveness of the torques would be quenched. It was believed that there would be a critical eccentricity, at each value of the semi-major axis, at which stars would be "reflected" back to lower eccentricities. However, it is now clear that this barrier is nothing but an illusion, probably originating from an animation based on numerical simulations, as described in detail in two works. The role of the spin It was realised that the role of the spin of the central supermassive black hole in the formation and evolution of EMRIs is crucial. For a long time it had been believed that any EMRI originating farther away than a certain critical radius of about a hundredth of a parsec would either be scattered away from the capture orbit or directly plunge into the supermassive black hole on an extremely radial orbit. These events would lead to one or a few bursts, but not to a coherent set of thousands of them. Indeed, when the spin is taken into account, it has been shown that these capture orbits accumulate thousands of cycles in the detector band. Since they are driven by two-body relaxation, which is chaotic in nature, they are ignorant of anything related to a potential Schwarzschild barrier. Moreover, since they originate in the bulk of the stellar distribution, the rates are larger. Additionally, due to their larger eccentricity, they are louder, which enhances the detection volume. It is therefore expected that EMRIs originate at these distances, and that they dominate the rates.
Finally, supermassive black holes are often accompanied by an accretion disc of matter spiraling towards the black hole. If this disc contains enough matter, instabilities can collapse to form new stars. If massive enough, these can collapse to form compact objects, which are automatically on a trajectory to become an EMRI. Extreme mass ratio inspirals created in this way are characterized by the fact that their orbital plane is strongly correlated with the plane of the accretion disc and the spin of the supermassive black hole. Intermediate mass ratio inspirals Besides stellar black holes and supermassive black holes, it is speculated that a third class of intermediate mass black holes with masses between 10^2 and 10^4 solar masses also exists. One way that these may possibly form is through a runaway series of collisions of stars in a young cluster of stars. If such a cluster forms within a thousand light years from the galactic nucleus, it will sink towards the center due to dynamical friction. Once it is close enough, the stars are stripped away through tidal forces and the intermediate mass black hole may continue on an inspiral towards the central supermassive black hole. Such a system with a mass ratio around 1000 is known as an intermediate mass ratio inspiral (IMRI). There are many uncertainties in the expected frequency for such events, but some calculations suggest there may be up to several tens of these events detectable by LISA per year. If these events do occur, they will result in an extremely strong gravitational wave signal that can easily be detected. Another possible way for an intermediate mass ratio inspiral is for an intermediate mass black hole in a globular cluster to capture a stellar mass compact object through one of the processes described above. Since the central object is much smaller, these systems will produce gravitational waves with a much higher frequency, opening the possibility of detecting them with the next generation of Earth-based observatories, such as Advanced LIGO and Advanced VIRGO. Although the event rates for these systems are extremely uncertain, some calculations suggest that Advanced LIGO may see several of them per year. Modelling Although the strongest gravitational wave signals from EMRIs may easily be distinguished from the instrumental noise of the gravitational wave detector, most signals will be deeply buried in the instrumental noise. However, since an EMRI will go through many cycles of gravitational waves (~10^5) before making the plunge into the central supermassive black hole, it should still be possible to extract the signal using matched filtering. In this process, the observed signal is compared with a template of the expected signal, amplifying components that are similar to the theoretical template. To be effective, this requires accurate theoretical predictions for the wave forms of the gravitational waves produced by an extreme mass ratio inspiral. This, in turn, requires accurate modelling of the trajectory of the EMRI. The equations of motion in general relativity are notoriously hard to solve analytically. Consequently, one needs to use some sort of approximation scheme. Extreme mass ratio inspirals are well suited for this, as the mass of the compact object is much smaller than that of the central supermassive black hole. This allows it to be ignored or treated perturbatively.
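The matched-filtering step described in the Modelling paragraph can be illustrated with a toy time-domain example: a known template is slid across noisy data, and the correlation peaks at the injection time. Real searches work in the frequency domain and weight by the detector noise spectrum; the waveform, amplitudes, and sample rate below are purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
fs = 1024                                     # sample rate [Hz]
t = np.arange(0.0, 1.0, 1.0 / fs)

# Toy "chirp" template with slowly increasing frequency (not a relativistic waveform)
template = np.sin(2.0 * np.pi * (30.0 + 40.0 * t) * t)
template /= np.linalg.norm(template)          # unit-normalised template

# Bury the template in 8 seconds of white noise at a known offset;
# the per-sample amplitude is well below the noise level.
n = 8 * fs
data = rng.normal(0.0, 1.0, n)
offset = 3 * fs
data[offset:offset + template.size] += 8.0 * template

# Matched filter: correlate the data with the template; the peak marks the arrival time
correlation = np.correlate(data, template, mode="valid")
print(f"recovered offset: {correlation.argmax()} samples (injected at {offset})")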
Issues with traditional binary modelling approaches Post-Newtonian expansion One common approach is to expand the equations of motion for an object in terms of its velocity divided by the speed of light, v/c. This approximation is very effective if the velocity is very small, but becomes rather inaccurate if v/c becomes larger than about 0.3. For binary systems of comparable mass, this limit is not reached until the last few cycles of the orbit. EMRIs, however, spend their last thousand to a million cycles in this regime, making the post-Newtonian expansion an inappropriate tool. Numerical relativity Another approach is to completely solve the equations of motion numerically. The non-linear nature of the theory makes this very challenging, but significant success has been achieved in numerically modelling the final phase of the inspiral of binaries of comparable mass. The large number of cycles of an EMRI make the purely numerical approach prohibitively expensive in terms of computing time. Gravitational self force The large value of the mass ratio in an EMRI opens another avenue for approximation: expansion in one over the mass ratio. To zeroth order, the path of the lighter object will be a geodesic in the Kerr spacetime generated by the supermassive black hole. Corrections due to the finite mass of the lighter object can then be included, order-by-order in the mass ratio, as an effective force on the object. This effective force is known as the gravitational self force. In the last decade or so, a lot of progress has been made in calculating the gravitational self force for EMRIs. Numerical codes are available to calculate the gravitational self force on any bound orbit around a non-rotating (Schwarzschild) black hole. And significant progress has been made for calculating the gravitational self force around a rotating black hole. Notes References Further reading External links The Schwarzschild Barrier Black holes Binary systems Gravitational-wave astronomy
Extreme mass ratio inspiral
[ "Physics", "Astronomy" ]
3,124
[ "Black holes", "Physical phenomena", "Physical quantities", "Binary systems", "Unsolved problems in physics", "Astrophysics", "Density", "Stellar phenomena", "Astronomical objects", "Gravitational-wave astronomy", "Astronomical sub-disciplines" ]
37,749,393
https://en.wikipedia.org/wiki/Copper%20in%20heat%20exchangers
Heat exchangers are devices that transfer heat to achieve desired heating or cooling. An important design aspect of heat exchanger technology is the selection of appropriate materials to conduct and transfer heat fast and efficiently. Copper has many desirable properties for thermally efficient and durable heat exchangers. First and foremost, copper is an excellent conductor of heat. This means that copper's high thermal conductivity allows heat to pass through it quickly. Other desirable properties of copper in heat exchangers include its corrosion resistance, biofouling resistance, maximum allowable stress and internal pressure, creep rupture strength, fatigue strength, hardness, thermal expansion, specific heat, antimicrobial properties, tensile strength, yield strength, high melting point, alloy, ease of fabrication, and ease of joining. The combination of these properties enable copper to be specified for heat exchangers in industrial facilities, HVAC systems, vehicular coolers and radiators, and as heat sinks to cool computers, disk drives, televisions, computer monitors, and other electronic equipment. Copper is also incorporated into the bottoms of high-quality cookware because the metal conducts heat quickly and distributes it evenly. Non-copper heat exchangers are also available. Some alternative materials include aluminum, carbon steel, stainless steel, nickel alloys, and titanium. This article focuses on beneficial properties and common applications of copper in heat exchangers. New copper heat exchanger technologies for specific applications are also introduced. History Heat exchangers using copper and its alloys have evolved along with heat transfer technologies over the past several hundred years. Copper condenser tubes were first used in 1769 for steam engines. Initially, the tubes were made of unalloyed copper. By 1870, Muntz metal, a 60% Cu-40% Zn brass alloy, was used for condensers in seawater cooling. Admiralty metal, a 70% Cu-30% Zn yellow brass alloy with 1% tin added to improve corrosion resistance, was introduced in 1890 for seawater service. By the 1920s, a 70% Cu-30% Ni alloy was developed for naval condensers. Soon afterwards, a 2% manganese and 2% iron copper alloy was introduced for better erosion resistance. A 90% Cu-10% Ni alloy first became available in the 1950s, initially for seawater piping. This alloy is now the most widely used copper-nickel alloy in marine heat exchangers. Today, steam, evaporator, and condenser coils are made from copper and copper alloys. These heat exchangers are used in air conditioning and refrigeration systems, industrial and central heating and cooling systems, radiators, hot water tanks, and under-floor heating systems. Copper-based heat exchangers can be manufactured with copper tube/aluminum fin, cupro-nickel, or all-copper constructions. Various coatings can be applied to enhance corrosion resistance of the tubes and fins. Beneficial properties of copper heat exchangers Thermal conductivity Thermal conductivity (k, also denoted as λ or κ) is a measure of a material's ability to conduct heat. Heat transfer across materials of high thermal conductivity occurs at a higher rate than across materials of low thermal conductivity. In the International System of Units (SI), thermal conductivity is measured in watts per meter Kelvin (W/(m•K)). In the Imperial System of Measurement (British Imperial, or Imperial units), thermal conductivity is measured in Btu/(hr•ft⋅F). Copper has a thermal conductivity of 231 Btu/(hr-ft-F). 
This is higher than all other metals except silver, a precious metal. Copper has a 60% better thermal conductivity rating than aluminum and almost 30 times more thermal conductivity than stainless steel. Further information about the thermal conductivity of selected metals is available. Corrosion resistance Corrosion resistance is essential in heat transfer applications where fluids are involved, such as in hot water tanks, radiators, etc. The only affordable material that has similar corrosion resistance to copper is stainless steel. However, the thermal conductivity of stainless steel is only about 1/30th that of copper. Aluminum tubes are not suitable for potable or untreated water applications because aluminum corrodes at pH < 7.0, releasing hydrogen gas. Protective films can be applied to the inner surface of copper alloy tubes to increase corrosion resistance. For certain applications, the film is composed of iron. In power plant condensers, duplex tubes consisting of an inner titanium layer with outer copper-nickel alloys are employed. This enables the use of copper's beneficial mechanical and chemical properties (e.g., with regard to stress corrosion cracking and ammonia attack) along with titanium's excellent corrosion resistance. A duplex tube with inner aluminium brass or copper-nickel and outer stainless or mild steel can be used for cooling in the oil refining and petrochemical industries. Biofouling resistance Copper and copper-nickel alloys have a high natural resistance to biofouling relative to alternative materials. Other metals used in heat exchangers, such as steel, titanium and aluminum, foul readily. Protection against biofouling, particularly in marine structures, can be accomplished over long periods of time with copper metals. Copper-nickel alloys have been proven over many years in sea water pipework and other marine applications. These alloys resist biofouling in open seas where they do not allow microbial slime to build up and support macrofouling. Researchers attribute copper's resistance to biofouling, even in temperate waters, to two possible mechanisms: 1) a retarding sequence of colonization through slow release of copper ions during the corrosion process, thereby inhibiting the attachment of microbial layers to marine surfaces; and/or, 2) separating layers that contain corrosive products and the larvae of macro-encrusting organisms. The latter mechanism deters the settlement of pelagic larval stages on the metal surface, rather than killing the organisms. Antimicrobial properties Due to copper's strong antimicrobial properties, copper fins can inhibit bacterial, fungal and viral growths that commonly build up in air conditioning systems. Hence, the surfaces of copper-based heat exchangers are cleaner for longer periods of time than heat exchangers made from other metals. This benefit offers a greatly expanded heat exchanger service life and contributes to improved air quality. Heat exchangers fabricated separately from antimicrobial copper and aluminum in a full-scale HVAC system have been evaluated for their ability to limit microbial growth under conditions of normal flow rates using single-pass outside air. Commonly used aluminum components developed stable biofilms of bacteria and fungi within four weeks of operation. During the same time period, antimicrobial copper was able to limit bacterial loads associated with the copper heat exchanger fins by 99.99% and fungal loads by 99.74%.
Copper fin air conditioners have been deployed on buses in Shanghai to rapidly and completely kill bacteria, viruses and fungi that were previously thriving on non-copper fins and permitted to circulate around the systems. The decision to replace aluminum with copper followed antimicrobial tests by the Shanghai Municipal Center for Disease Control and Prevention (SCDC) from 2010 to 2012. The study found that microbial levels on copper fin surfaces were significantly lower than on aluminum, thereby helping to protect the health of bus passengers. Further information about the benefits of antimicrobial copper in HVAC systems is available. Ease of inner grooving Internally grooved copper tube of smaller diameters is more thermally efficient, materially efficient, and easier to bend and flare and otherwise work with. It is generally easier to make inner grooved tubes out of copper, a very soft metal. Common applications for copper heat exchangers Industrial facilities and power plants Copper alloys are extensively used as heat exchanger tubing in fossil and nuclear steam generating electric power plants, chemical and petrochemical plants, marine services, and desalination plants. The largest use of copper alloy heat exchanger tubing on a per unit basis is in utility power plants. These plants contain surface condensers, heaters, and coolers, all of which contain copper tubing. The main surface condenser that accepts turbine-steam discharges uses the most copper. Copper nickel is the group of alloys that are commonly specified in heat exchanger or condenser tubes in evaporators of desalination plants, process industry plants, air cooling zones of thermal power plants, high-pressure feed water heaters, and sea water piping in ships. The composition of the alloys can vary from 90% Cu–10% Ni to 70% Cu–30% Ni. Condenser and heat exchanger tubing of arsenical admiralty brass (Cu-Zn-Sn-As) once dominated the industrial facility market. Aluminum brass later rose in popularity because of its enhanced corrosion resistance. Today, aluminum-brass, 90%Cu-10%Ni, and other copper alloys are widely used in tubular heat exchangers and piping systems in seawater, brackish water and fresh water. Aluminum-brass, 90% Cu-10% Ni and 70% Cu-30% Ni alloys show good corrosion resistance in hot de-aerated seawater and in brines in multi-stage flash desalination plants. Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. Copper alloy tubes can be supplied either with a bright metallic surface (CuNiO) or with a thin, firmly attached oxide layer (aluminum brass). These finish types allow for the formation of a protective layer. The protective oxide surface is best achieved when the system is operated for several weeks with clean, oxygen containing cooling water. While the protective layer forms, supportive measures can be carried out to enhance the process, such as the addition of iron sulfate or intermittent tube cleaning. The protective film that forms on Cu-Ni alloys in aerated seawater becomes mature in about three months at 60 °F and becomes increasingly protective with time. The film is resistant to polluted waters, irregular velocities, and other harsh conditions. Further details are available. The biofouling resistance of Cu-Ni alloys enables heat exchange units to operate for several months between mechanical cleanings. 
Cleanings are nevertheless needed to restore original heat transfer capabilities. Chlorine injection can extend the mechanical cleaning intervals to a year or more without detrimental effects on the Cu-Ni alloys. Further information about copper alloy heat exchangers for industrial facilities is available. Solar thermal water systems Solar water heaters can be a cost-effective way to generate hot water for homes in many regions of the world. Copper heat exchangers are important in solar thermal heating and cooling systems because of copper's high thermal conductivity, resistance to atmospheric and water corrosion, sealing and joining by soldering, and mechanical strength. Copper is used both in receivers and in primary circuits (pipes and heat exchangers for water tanks) of solar thermal water systems. Various types of solar collectors for residential applications are available with either direct circulation (i.e., heats water and brings it directly to the home for use) or indirect circulation (i.e., pumps a heat transfer fluid through a heat exchanger, which then heats water that flows into the home) systems. In an evacuated tube solar hot water heater with an indirect circulation system, the evacuated tubes contain a glass outer tube and metal absorber tube attached to a fin. Solar thermal energy is absorbed within the evacuated tubes and is converted into usable concentrated heat. Evacuated glass tubes have a double layer. Inside the glass tube is the copper heat pipe. It is a sealed hollow copper tube that contains a small amount of thermal transfer fluid (water or glycol mixture) which under low pressure boils at a very low temperature. The copper heat pipe transfers thermal energy from within the solar tube into a copper header. As the solution circulates through the copper header, the temperature rises. Other components in solar thermal water systems that contain copper include solar heat exchanger tanks and solar pumping stations, along with pumps and controllers. HVAC systems Air conditioning and heating in buildings and motor vehicles are two of the largest applications for heat exchangers. While copper tube is used in most air conditioning and refrigeration systems, typical air conditioning units currently use aluminum fins. These systems can harbor bacteria and mold and develop odors and fouling that can make them function poorly. Stringent new requirements including demands for increased operating efficiencies and the reduction or elimination of harmful emissions are enhancing copper's role in modern HVAC systems. Copper’s antimicrobial properties can enhance the performance of HVAC systems and associated indoor air quality. After extensive testing, copper became a registered material in the U.S. for protecting heating and air conditioning equipment surfaces against bacteria, mold, and mildew. Furthermore, testing funded by the U.S. Department of Defense is demonstrating that all-copper air conditioners suppress the growth of bacteria, mold and mildew that cause odors and reduce system energy efficiency. Units made with aluminum have not been demonstrating this benefit. Copper can cause a galvanic reaction in the presence of other alloys, leading to corrosion. Gas water heaters Water heating is the second largest energy use in the home. Gas-water heat exchangers that transfer heat from gaseous fuels to water between 3 and 300 kilowatts thermal (kWth) have widespread residential and commercial use in water heating and heating boiler appliance applications. 
Demand is increasing for energy-efficient compact water heating systems. Tankless gas water heaters produce hot water when needed. Copper heat exchangers are the preferred material in these units because of their high thermal conductivity and ease of fabrication. To protect these units in acidic environments, durable coatings or other surface treatments are available. Acid-resistant coatings are capable of withstanding temperatures of 1000 °C. Forced air heating and cooling Air-source heat pumps have been used for residential and commercial heating and cooling for many years. These units rely on air-to-air heat exchange through evaporator units similar to those used for air conditioners. Finned water to air heat exchangers are most commonly used for forced air heating and cooling systems, such as with indoor and outdoor wood furnaces, boilers, and stoves. They can also be suitable for liquid cooling applications. Copper is specified in supply and return manifolds and in tube coils. Direct Exchange (DX) Geothermal Heating/Cooling Geothermal heat pump technology, variously known as "ground source," "earth-coupled," or "direct exchange," relies on circulating a refrigerant through buried copper tubing for heat exchange. These units, which are considerably more efficient than their air-source counterparts, rely on the constancy of ground temperatures below the frost zone for heat transfer. The most efficient ground source heat pumps use ACR, Type L or special-size copper tubing buried into the ground to transfer heat to or from the conditioned space. Flexible copper tube (typically 1/4-inch to 5/8-inch) can be buried in deep vertical holes, horizontally in a relatively shallow grid pattern, in a vertical fence-like arrangement in medium-depth trenches, or as custom configurations. Further information is available. Electronic systems Copper and aluminum are used as heat sinks and heat pipes in electronic cooling applications. A heat sink is a passive component that cools semiconductor and optoelectronic devices by dissipating heat into the surrounding air. Heat sinks have temperatures higher than their surrounding environments so that heat can be transferred into the air by convection, radiation, and conduction. Aluminum is the most prominently used heat sink material because of its lower cost. Copper heat sinks are a necessity when higher levels of thermal conductivity are needed. An alternative to all-copper or all-aluminum heat sinks is the joining of aluminum fins to a copper base. Copper heat sinks are die-cast and bound together in plates. They spread heat quickly from the heat source to copper or aluminum fins and into the surrounding air. Heat pipes are used to move heat away from central processing units (CPUs) and graphics processing units (GPUs) and towards heat sinks, where thermal energy is dissipated into the environment. Copper and aluminum heat pipes are used extensively in modern computer systems where increased power requirements and associated heat emissions result in greater demands on cooling systems. A heat pipe typically consists of a sealed pipe or tube at both the hot and cold ends. Heat pipes utilize evaporative cooling to transfer thermal energy from one point to another by the evaporation and condensation of a working fluid or coolant. They are fundamentally better at heat conduction over larger distances than heat sinks because their effective thermal conductivity is several orders of magnitude greater than that of the equivalent solid conductor. 
When it is desirable to maintain junction temperatures below 125–150 °C, copper/water heat pipes are typically used. Copper/methanol heat pipes are used if the application requires heat pipe operation below 0 °C.

New technologies

Internally grooved tube

The benefits of smaller-diameter, internally grooved copper tube for heat transfer are well documented. Smaller-diameter coils have better rates of heat transfer than conventionally sized coils, and they can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Smaller-diameter coils also have lower material costs because they require less refrigerant, fin, and coil material, and they enable the design of smaller and lighter high-efficiency air conditioners and refrigerators because the evaporator and condenser coils are smaller and lighter. MicroGroove tube uses a grooved inner surface to increase the surface-to-volume ratio and to increase turbulence, mixing the refrigerant and homogenizing temperatures across the tube (a simple numerical illustration of the diameter effect follows at the end of this section).

3D printing

Additive manufacturing (3D printing) is a newer technology for making heat exchangers. 3D printing can create complex forms and internal channels, which results in high-performance heat exchangers intended mainly for industrial use. Such heat exchangers can be printed in pure copper and in CuCrZr and CuNi2SiCr alloys.
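The diameter effect mentioned above for internally grooved tube can be seen from simple geometry: for a smooth round tube, the inner surface area per unit length is pi*d while the contained volume per unit length is pi*d^2/4, so the surface-to-volume ratio is 4/d and grows as the diameter shrinks. The sketch below uses round example tube sizes, not specific MicroGroove product dimensions, and ignores the additional area contributed by the grooves themselves.

# Surface-to-volume ratio of a smooth round tube: (pi*d*L) / (pi*d^2/4 * L) = 4/d.
# Halving the inner diameter doubles the heat transfer surface per unit of
# refrigerant volume; grooving the inner wall increases it further.

def surface_to_volume_per_m(inner_diameter_mm):
    d = inner_diameter_mm / 1000.0   # metres
    return 4.0 / d                   # ratio in 1/m

for d_mm in (9.52, 7.0, 5.0):        # example tube sizes in millimetres
    print(d_mm, "mm:", round(surface_to_volume_per_m(d_mm)), "per metre")
# 9.52 mm: 420, 7.0 mm: 571, 5.0 mm: 800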
Copper in heat exchangers
[ "Physics", "Chemistry", "Engineering" ]
3,763
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Corrosion prevention", "Chemical equipment", "Corrosion", "Thermodynamics", "Heat exchangers", "Heat conduction" ]
37,750,468
https://en.wikipedia.org/wiki/Target%20angle
Target angle is the relative bearing of the observing station from the vehicle being observed. It may be used to compute the point of aim for a fire-control problem when the vehicle's range and speed can be estimated from other information.

Target angle is best explained by the example of a submarine preparing to launch a straight-run (non-homing) torpedo at a moving target ship. Since the torpedo travels relatively slowly, its course must be set not toward the target, but toward where the target will be when the torpedo reaches it. Target angle is used to estimate the target's course. The submarine observer estimating target angle pictures himself on the target ship looking back at the submarine. The relative bearing of the submarine is the clockwise angle in degrees from the heading of the target ship to a straight line drawn from the target ship to the submarine.

When the target angle is 0° (or 360°), the target ship is coming directly toward the submarine. Target angles between 0° and 90° indicate the target ship is moving toward and to the right of the submarine. Target angles between 90° and 180° indicate the target ship is moving to the right and away from the submarine. When the target angle is 180°, the target ship is moving directly away from the submarine. Target angles between 180° and 270° indicate the target ship is moving away from and to the left of the submarine. Target angles between 270° and 360° indicate the target ship is moving to the left and toward the submarine.

A target passing a stationary observer from left to right might have target angles progressing from 45° to 135°, with the broadside aspect of 90° marking the minimum distance between target and observer. A target moving from right to left on the same track would have target angles progressing downward from 315° to 225°, with the closest point of approach occurring at 270°.

Angle on the bow

Angle on the bow is a variation of target angle used by naval submarines. It is measured over an arc of 180° clockwise from the bow if viewing the starboard side of the target, or counterclockwise from the bow if viewing the port side of the target. Target angles from 0° to 180° are reported as "starboard [target angle]", while target angles from 180° to 360° are reported as "port [360° − target angle]".

Angle on the bow provided the basis for submarine attack decisions through the world wars. When the angle on the bow was less than 90°, the submarine would continue a submerged approach toward the target, launching torpedoes when the angle on the bow increased to 90°, which indicated the minimum-range launch opportunity for the given target course and speed. Unless the target was already within torpedo range, an angle on the bow greater than 90° required the submarine to surface and run around the target beyond visual range, then submerge ahead of it. As a practical matter, the speed differential required to run around a target meant most warships and ocean liners could not be attacked when the angle on the bow was greater than 90°.

Estimation of target angle relies on the observer's visual identification of target features, such as distinguishing the bow from the stern. Dazzle camouflage is a form of ship camouflage intended to impair an observer's recognition of exactly these features.
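As a concrete illustration of these conventions, the sketch below computes a target angle from a target heading and the true bearing from the target to the observer, and converts it to the angle-on-the-bow reporting form described above. The function and variable names are illustrative, not drawn from any fire-control doctrine, and the example headings are assumed values.

# Target angle = bearing of the observer measured clockwise from the target's heading.

def target_angle(target_heading_deg, bearing_target_to_observer_deg):
    """Clockwise angle (0-360 degrees) from the target's bow to the observer."""
    return (bearing_target_to_observer_deg - target_heading_deg) % 360.0

def angle_on_the_bow(ta_deg):
    """Report a target angle in 'starboard'/'port' form (0-180 on each side)."""
    if ta_deg <= 180.0:
        return f"starboard {ta_deg:.0f}"
    return f"port {360.0 - ta_deg:.0f}"

# Target steering 090 (due east); the observer bears 135 true from the target.
ta = target_angle(90.0, 135.0)    # 45 degrees: target closing, observer on its starboard bow
print(ta, angle_on_the_bow(ta))   # 45.0 starboard 45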
Target angle
[ "Physics" ]
677
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Wikipedia categories named after physical quantities", "Angle" ]
37,752,313
https://en.wikipedia.org/wiki/Rapid%20Climate%20Change-Meridional%20Overturning%20Circulation%20and%20Heatflux%20Array
The Rapid Climate Change-Meridional Overturning Circulation and Heatflux Array (RAPID or MOCHA) program is a collaborative research project between the National Oceanography Centre (Southampton, U.K.), the University of Miami's Rosenstiel School of Marine, Atmospheric, and Earth Science (RSMAS), and NOAA's Atlantic Oceanographic and Meteorological Laboratory (AOML) that measures the meridional overturning circulation (MOC) and ocean heat transport in the North Atlantic Ocean. The array was deployed in March 2004 to continuously monitor the MOC and the ocean heat transport primarily associated with the thermohaline circulation across the basin at 26°N. The RAPID-MOCHA array is planned to continue through 2014 to provide a continuous time series of a decade or longer.

The continuous observations are made by an array of instruments along 26°N. The monitoring array directly measures the transport of the Gulf Stream in the Florida Strait using an undersea cable, while moored instruments measure bottom pressure and water column density (temperature and salinity) at the western and eastern boundaries and on either side of the Mid-Atlantic Ridge (MAR). Absolute transports, including the barotropic circulation, are monitored using precision bottom pressure gauges. "Dynamic height" moorings are used to estimate the spatially averaged geostrophic velocity profile and the associated transports across relatively wide mooring separations. The dynamic height moorings require measurements only on the two sides of the current field, rather than requiring both the horizontal and vertical structure of the current field to be resolved in detail to estimate transports. The basin-wide MOC strength and vertical structure are estimated by combining Ekman transports derived from satellite scatterometer measurements with the geostrophic and direct current observations.

RAPID-MOCHA is funded by the Natural Environment Research Council (NERC) and the National Science Foundation (NSF).

MOC Observations

Kanzow and colleagues (2007) demonstrated the effectiveness of the array, reporting that the sum of the transports into the North Atlantic from March 2004 to March 2005 varied with a root-mean-square value of only 3.4 Sv (where 1 Sv is a flow of ocean water of 10⁶ cubic meters per second), compared with expected measurement errors of 2.7 Sv. In another study using observations from March 2004 to March 2005, Cunningham et al. (2007) reported a year-long average MOC of 18.7 ± 5.6 Sv, with a large variability ranging from 4.4 to 35.3 Sv over the course of the year. Johns et al. (2009) concluded that the meridional heat transport was highly correlated with changes in the strength of the MOC, with the overturning circulation accounting for nearly 90% of the total heat transport; the remainder was contained in a quasi-stationary gyre pattern, with little net contribution from mesoscale eddies. Johns et al. (2009) reported the average annual mean meridional heat transport from 2004 to 2007 to be 1.33 ± 0.14 petawatts (PW).

References

External links
Meridional Overturning Circulation and Heatflux Array (MOCHA)
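For a rough sense of how the volume transports and heat transports reported above relate, the bulk estimate Q ≈ rho * c_p * V * dT can be used, where V is the overturning transport and dT is the temperature contrast between the warm upper limb and the cold returning limb. This is a minimal sketch: the temperature contrast below is an illustrative assumption chosen for the example, not a value reported by the array.

# Rough bulk estimate of meridional heat transport from an overturning
# volume transport (Q ~ rho * c_p * V * dT); all figures are illustrative.

RHO = 1025.0     # kg/m^3, approximate seawater density
C_P = 3990.0     # J/(kg*K), approximate seawater specific heat
SV = 1.0e6       # m^3/s per Sverdrup

def heat_transport_pw(transport_sv, delta_t_k):
    """Return heat transport in petawatts for a given overturning transport."""
    q_watts = RHO * C_P * transport_sv * SV * delta_t_k
    return q_watts / 1.0e15

# 18.7 Sv of overturning with an assumed ~17 K upper-to-lower limb temperature contrast
print(round(heat_transport_pw(18.7, 17.0), 2), "PW")   # about 1.3 PW

With that assumed contrast, the estimate lands near the 1.33 PW annual mean reported by Johns et al. (2009), illustrating why heat transport at 26°N tracks the strength of the overturning so closely.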
Rapid Climate Change-Meridional Overturning Circulation and Heatflux Array
[ "Physics", "Environmental_science" ]
650
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]