Natural selection for least action
Ville R. I. Kaila, Arto Annila
http://rspa.royalsocietypublishing.org/content/464/2099/3055.full
Abstract
The second law of thermodynamics is a powerful imperative that has acquired several expressions during the past centuries. Connections between two of its most prominent forms, i.e. the evolutionary principle by natural selection and the principle of least action, are examined. Although no fundamentally new findings are provided, it is illuminating to see how the two principles rationalizing natural motions reconcile to one law. The second law, when written as a differential equation of motion, describes evolution along the steepest descents in energy and, when it is given in its integral form, the motion is pictured to take place along the shortest paths in energy. In general, evolution is a non-Euclidian energy density landscape in flattening motion.
1. Introduction
The principle of least action (de Maupertuis 1744, 1746; Euler 1744; Lagrange 1788) and the evolutionary principle by natural selection (Darwin 1859) account for many motions in nature. The calculus of variation, i.e. ‘take the shortest path’, explains diverse physical phenomena (Feynman & Hibbs 1965; Landau & Lifshitz 1975; Taylor & Wheeler 2000; Hanc & Taylor 2004). Likewise, the theory of evolution by natural selection, i.e. ‘take the fittest unit’, rationalizes various biological courses. Although the two old principles both describe natural motions, they seem to be far apart from each other, not least because still today the formalism of physics and the language of biology differ from each other. However, it is reasonable to suspect that the two principles are in fact one and the same, since for a long time science has failed to recognize any demarcation line between the animate and the inanimate.
In order to reconcile the two principles to one law, the recent formulation of the second law of thermodynamics as an equation of motion (Sharma & Annila 2007) is used. Evolution, when stated in terms of statistical physics, is a probable motion. The natural process directs along the steepest descents of an energy landscape by equalizing differences in energy via various transport and transformation processes, e.g. diffusion, heat flows, electric currents and chemical reactions (Kondepudi & Prigogine 1998). These flows of energy, as they channel down along various paths, propel evolution. In a large and complicated system, the flows are viewed to explore diverse evolutionary paths, e.g. by random variation, and those that lead to a faster entropy increase, equivalent to a more rapid decrease in the free energy, become, in terms of physics, naturally selected (Sharma & Annila 2007). The abstract formalism has been applied to rationalize diverse evolutionary courses as energy transfer processes (Grönholm & Annila 2007; Jaakkola et al. 2008a,b; Karnani & Annila in press).
The theory of evolution by natural selection, when formulated in terms of chemical thermodynamics, is easy to connect with the principle of least action, which also is well established in terms of energy (Maslov 1991). In accordance with Hamilton's principle (Hamilton 1834, 1835), the equivalence of the differential equation of evolution and the integral equation of dissipative motion is provided here, starting from the second law of thermodynamics (Boltzmann 1905; Stöltzner 2003). In this way, the similarity of the fitness criterion (‘take the steepest gradient in energy’) and the ubiquitous imperative (‘take the shortest path in energy’) becomes evident. The two formulations are equivalent ways of picturing the energy landscape in flattening motion. Thus, there are no fundamentally new results. However, as once pointed out by Feynman (1948), there is a pleasure in recognizing old things from a new point of view.
2. The probable motion
Probability is the concise concept to denote the state of a system. Forces, i.e. potential energy gradients and differences, drive the system towards more probable states via flows of energy that diminish the differences. The principle is general, but how the differences are abolished depends on the particular potentials and mechanisms of energy transfer. A small system may evolve rapidly by equalizing its potentials with its surroundings, whereas a large system may evolve over aeons in the quest for a stationary state in its surroundings.
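As a toy illustration of this relaxation (our construction, not the paper's), a single potential difference that is drained in proportion to its own magnitude decays exponentially towards the surrounding level:

```python
# Sketch (illustrative names and values of ours): a small system
# equalizing its potential mu with a fixed surrounding potential
# mu_env. Taking the flow proportional to the gradient, i.e.
# d(mu)/dt = -k * (mu - mu_env), gives exponential relaxation
# toward the stationary state mu = mu_env.
def relax(mu0, mu_env, k=0.5, dt=0.01, steps=2000):
    """Explicit Euler integration of d(mu)/dt = -k*(mu - mu_env)."""
    mu = mu0
    for _ in range(steps):
        mu += -k * (mu - mu_env) * dt
    return mu

final = relax(10.0, 2.0)   # ends essentially at the surrounding level
```

The same qualitative behaviour holds whether the system starts above or below its surroundings; only the direction of the energy flow differs.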
Usually, motions are described by differential equations. Examples are Newton's equation of motion, the time-dependent Schrödinger and the Liouville–von Neumann equations. Alternatively, motion may be described by integral equations, e.g. as a Lagrangian or an action. For example, Newtonian mechanics and Maxwell's equations (Landau & Lifshitz 1975) can be derived from the principle of least action, which can also be used in the theory of relativity (Taylor & Wheeler 2000).
Recently, the second law of thermodynamics was expressed as a differential equation of motion (Sharma & Annila 2007) for the probability P (equation (2.1)), where the propagator (equation (2.2a)) drives the transport (dxk/dt = −Σdxj/dt) between general coordinates xk and xj, e.g. as diffusion and currents, by draining the potential energy gradients ∂μk/∂xj and the fields ∂Qk/∂xj that couple to the jk-transport process. The notation kBT for the average energy per particle implies a sufficiently statistical system (Kullback 1959), i.e. a set of repositories of energy within which lost and acquired quanta are rapidly dispersed. After each dissipative event, i.e. an emission or absorption, the system settles via interactions to a new partition corresponding to a new value of kBT, the common reference. The system evolves by dissipation, i.e. by energy efflux or influx, in the quest to reduce the gradients in equation (2.2a) and to attain a stationary state in its surroundings.
Similarly to transport processes, diverse transformation processes, e.g. chemical reactions converting Nk substrates to Nj products or vice versa, are driven by the propagator (Sharma & Annila 2007) given in equation (2.2b). The chemical potential difference between μk and μj is for convenience denoted as a gradient ∂μk/∂Nj, although this field is not spatially resolved by the observer. When the surrounding density-in-energy couples to the jk-transformation, it contributes by ∂Qk/∂Nj. For chemical reactions, the average energy RT = NAkBT is, as usual, given per mole via Avogadro's number NA and Boltzmann's constant kB. The chemical potential (Atkins & de Paula 2006), written as μk = kBT ln ϕk, where the density-in-energy (Gibbs 1993–1994) is defined as ϕk = Nk exp(Gk/kBT) for discrete entities k, serves to compare the levels of diverse repositories of energy with each other (figure 1). The Gibbs free energy Gk contains internal and surrounding potential gradients, e.g. in the form of the Coulomb force or an electromagnetic radiation field.
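The bookkeeping μk = kBT ln ϕk with ϕk = Nk exp(Gk/kBT) reduces to μk = kBT ln Nk + Gk, which is easy to evaluate numerically. A minimal sketch (the pool sizes, free energies and the RT-like constant are illustrative values of ours):

```python
import math

# Sketch of the chemical-potential comparison in the text:
# mu_k = kB*T*ln(phi_k), phi_k = N_k * exp(G_k/(kB*T)),
# which simplifies to mu_k = kB*T*ln(N_k) + G_k.
kB_T = 2.5  # roughly RT in kJ/mol near room temperature (assumed units)

def chemical_potential(N, G):
    """mu = kB*T*ln(N) + G for a pool of N entities with free energy G."""
    return kB_T * math.log(N) + G

mu_k = chemical_potential(N=1000.0, G=-10.0)  # hypothetical substrate pool
mu_j = chemical_potential(N=10.0, G=-20.0)    # hypothetical product pool

# A dissipative k -> j transformation may proceed while mu_k > mu_j.
direction = "k -> j" if mu_k > mu_j else "j -> k"
```

Note that a large pool at a high free energy can drive a transformation even towards a lower-entropy partition, because it is the full density-in-energy, not the entity number alone, that sets the potential.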
Figure 1
An energy-level diagram depicts schematically transitions between two energy densities ϕk and ϕj at the positions xk and xj that are characterized by the entity numbers Nk and Nj and the Gibbs free energies Gk and Gj, respectively. In the quest for a stationary state, the partition evolves by equalizing the potential energy difference μk − μj, indicated by the black vertical arrow. The resulting motion appears as kinetic and dissipated flows of energy. The flow of kinetic energy is depicted by the blue arrow. The dissipated (emitted or absorbed) quanta, indicated by the red arrow, bridge the jk-transformation from Nk to Nj. The dissipation stems from the changes in the interaction energy when masses mk transform to mj. The invariant part of the mass is represented by blue circles that are the constituents of the j and k densities. Note that the potentials are given in relation to each other; the line at the bottom is drawn only for illustrative purposes.
The evolution, as given by equation (2.2b), is essentially a restatement of the Gibbs–Duhem equation (Atkins & de Paula 2006) that relates a decrease in the chemical potential of one substance to an increase in the chemical potentials of the other substances. In accordance with Le Chatelier's principle, the system will evolve towards a stationary state by acquiring quanta from or emitting quanta to its surroundings (figure 1). In the dynamic equilibrium, the gradients vanish but diverse pools of energy, indexed by j and k, continuously convert into one another without net dissipation. These stationary motions along isergonic trajectories are conserved.
In chemical reactions, substrates Nk are distinguished from products Nj when the reaction coordinates differ, i.e. the energies Gk(xk) ≠ Gj(xj). A chemical reaction, which is a movement along the reaction coordinate, is dissipative if it is endergonic (μk < μj) or exergonic (μk > μj). Thus, when ∂μk/∂Nj − ∂Qk/∂Nj ≠ 0, a transformation process dNk/dt = −dNj/dt may proceed, whereas when the densities-in-energy ϕk and ϕj and the associated gradients are equal, the entities are indistinguishable from each other.
Likewise, a spatial position xk differs from another position xj, i.e. xk ≠ xj, when the motion from one coordinate to another is dissipative. Thus, when ∂μk/∂xj − ∂Qk/∂xj ≠ 0, a transport process dxk/dt = −dxj/dt may advance. The dissipative detection itself may impose the energy gradient, i.e. a field with a sufficient resolution to distinguish one compound from another, just as one coordinate from another. On the other hand, when the densities-in-energy and their gradients are equal at xk and xj, the two coordinates are indistinguishable from each other. They may, nevertheless, differ by a relative phase φ along an isergonic contour. The strength of the potential μk determines the invariant rate ω = dφ/dt of conserved motion. Then, a coordinate transformation may be found that renders the precession time independent. Conversely, a change in phase serves to determine μk or Nk when Gk is known, subject to the uncertainty condition on ΔNkΔφ (Aharonov & Bohm 1959; Gleyzes et al. 2007).
The essential difference between equations (2.2a) and (2.2b) is the ability versus inability, respectively, of an observer to resolve an energy transfer process. When distinct entities within the system are resolved, e.g. by a spatial energy gradient, equation (2.2a) can be used, whereas when not, equation (2.2b) is the appropriate form. For example, chemical reactions are customarily monitored only at the level of ensembles but trajectories of individual reactants, if resolved, are expected to be similar to those tracked during simulations or calculations. In the same way, processes in distant stellar objects couple to the observer only via dissipated quanta. Therefore, even if the kinetic energy of the whole ensemble vanishes, the dissipation informs us about internal motions that devour the potential energy gradients in transformations between distinguishable entities within the unresolved system.
For the resolved (equation (2.2a)) as well as for the unresolved systems (equation (2.2b)), the shortest path of motion along the steepest descent in energy will be obtained by minimizing the kinetic energy (2K) or the Lagrangian, i.e. the combination of the kinetic and potential energies (KU), or dissipation (Q) because the three measures of energy, as will be described in the following sections, are interdependent due to the conservation of the total energy 2K+U=Q.
Evolution as an energy transfer process aims at an equilibrium where gradients and differences have vanished (figure 2). The process is described from the observer's viewpoint so that the densities-in-energy are given by partitions whereas surroundings are denoted cursorily as fields. The open system evolves via the energy transfer from a partition to another more probable one until a stationary state in its surroundings is attained. Although transitions within the system are often of interest, it is the surrounding forces that drive the evolution. Specifically, a closed or stationary system is not subjected to the evolutionary forces. It does not evolve and hence time does not advance either. In a historical perspective, it seems that the concept of a closed system appeared when Lagrange multipliers, corresponding to a fixed entity number and total energy, were employed to determine the maximum entropy state where L=0. By contrast, the equation of motion (equation (2.1)) that is derived directly from the probability calculation (Sharma & Annila 2007) elucidates explicitly the driving forces of evolution by L≠0.
Figure 2
The evolving energy landscape is depicted schematically by a series of grey lines as it levels due to the flows of energy towards the stationary state. The flows of energy are driven by the potential energy difference μk − μj or the gradient ∂μk/∂xj between the sites k and j, down along the steepest descents. During the energy transfer, the directed arc s between xk and xj shortens at the rate ds/dt. At equilibrium, the net dissipation vanishes and the subsequent stationary motions are conserved.
3. The second law as an equation of motion
The second law of thermodynamics given by equation (2.1) is the view by the system. When the surroundings are lower in energy density, the system undergoes dissipative jk-transitions from μk to μj by emitting quanta that are then no longer part of the system. Hence, the energy content of the system is decreasing. Likewise, when the surroundings are higher in energy density, the system undergoes jk-transitions from μj to μk by absorbing quanta that become an integral part of the system. Hence, the energy content of the system is increasing. Thus, the inequality dS/dt ≥ 0 in the second law, i.e. the principle of increasing entropy S, means that the open system is evolving towards a more probable partition.
The equation of motion, in terms of the logarithmic probability, i.e. entropy dS = kB d(ln P), is given for the resolved transformations by equation (3.1a), where the flow vk = dxk/dt. When the potential μk at xk is higher (lower) than μj at xj, including the surrounding energy density that couples to the jk-transformation, then the gradient and the flow will be negative (positive), i.e. ∂μk/∂xj − ∂Qk/∂xj < 0 (>0) and vk < 0 (>0). Thus, dS/dt > 0 until the gradients have vanished and a stationary state dS/dt = 0 has been reached. Likewise, for the unresolved transformations, the entropy increases at the rate given by equation (3.1b).
During evolution, given by equations (3.1a) and (3.1b), energy is not conserved within an open system, hence dS/dt>0. The change in the average energy kBT resulting from the dissipation is communicated within the system via its interactions. The net dissipation renders the process irreversible and gives the direction of time (Boltzmann 1905; Eddington 1928; Sharma & Annila 2007). When the energy flows affect the driving forces μk/xjQk/xj, the forces, in turn, redirect the flows vk. In the absence of invariants of motion, there is no transformation that would make the evolution time independent. In general, trajectories of non-conserved motions cannot be traced (Sharma & Annila 2007) but for symmetrical systems (Noether 1918) analytical solutions (Schwarzschild 1916) can be found. Although the equation of motion (equation (2.1)) is non-integrable for any non-trivial system, the energy transfer can be simulated step by step and the evolving energy landscape can be examined in a piecewise manner.
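The step-by-step integration mentioned above can be sketched minimally (our simplifying assumptions: two pools with μ = ln N in units of kBT, a flow proportional to the potential difference, no external fields). The driving force is consumed by the very flow it generates, while the entropy-production term stays non-negative until the stationary state is reached:

```python
import math

# Piecewise (step-by-step) simulation of energy transfer between two
# pools N_k and N_j. The potential is taken as mu = ln(N) (in units of
# kB*T) and the flow as N_k * (mu_k - mu_j) * dt -- a simple choice of
# ours, not the paper's explicit mechanism.
Nk, Nj, dt = 900.0, 100.0, 0.01
rates = []
for _ in range(5000):
    force = math.log(Nk) - math.log(Nj)  # mu_k - mu_j, the driving force
    flow = Nk * force * dt               # amount transferred this step
    Nk -= flow                           # the flow consumes its own force
    Nj += flow
    rates.append(flow * force)           # ~ entropy production, >= 0
```

Because flow and force always share a sign, each term flow*force is non-negative, mirroring dS/dt ≥ 0; the pools converge to equal potentials (here Nk = Nj) where the production vanishes.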
The probability via equation (2.1), and hence also the entropy via equations (3.1a) and (3.1b), associates with energy but not with other attributes, e.g. disorder, which is often, but one-sidedly, linked with entropy (Schrödinger 1948). Obviously, disorder increases by isergonic phase dispersal due to the sporadic exchange of quanta with incoherent surroundings. Nevertheless, the probability Pk = 〈ψk|ψk〉 of the wave function ψk remains the same in the coherent and decoherent configurations.
4. The differential equation of motion
An increase in entropy (equations (3.1a) and (3.1b)) corresponds to a decrease in the free energy for both the resolved and unresolved energy transfer processes. For the resolved transformations, the flows between distinct potentials μk and μj result in a decrease in the free energy (equation (4.1a)). The equation states that a flow of kinetic energy from xk to xj stems from the decreasing potential μk (∂μk/∂xj < 0) concurrent with the dissipation ∂Qk/∂t = −Σvj ∂Qk/∂xj when interactions, defining mk, break apart and yield mj (figure 1). Likewise, the kinetic energy flow directs from xj to xk and increases μk when the influx of energy from the surroundings is bound in interactions and yields mk from mj.
The directional derivatives D = v·∇ in equation (4.1a) describe a manifold of energy by time-dependent tangent vectors (Lee 2003). The landscape is levelling when the flows direct from the convex regions along the steepest gradients down to the concave regions. The change in the free energy amounts from the projection of the potential −v·∇μk and the projection of the dissipation v·∇Qk (figure 3). Specifically, when there is no dissipative flow, d(TS)/dt = 0, the parallel transport Djμk = 0 (Carroll 2004) does not change the free energy between the densities-in-energy indexed by j and k.
Figure 3
The energy landscape is depicted schematically at time t (solid line), a moment dt earlier (dashed line) and a moment dt later (dotted line), when an energy flow directs from the high potential μk at xk down along the steepest gradient (blue arrow) towards the low potential μj at xj. The expansion illustrates the differential form of the equation of motion: the potential gradient −∂μk/∂xj (black vertical arrow) translates the mass mk into the acceleration ak along a curved path due to the concurrent dissipation ∂Qk/∂xj (red horizontal arrow). During evolution, the potential energy gradient is diminishing and the dissipation stems from the changes in interactions that show up as a decrease in mass vk dmk/dt. The curvilinear motion adds up, according to equations (4.2a) and (4.2b), from the projection of the potential gradient and the dissipation, indicated by the normal (dotted) of the arc (blue).
Likewise, for the unresolved densities-in-energy that appear as point sources and sinks of the manifold, transformations equalize differences in energy according to equation (4.1b), but, as was pointed out earlier, the kinetic trajectories remain unresolved and only the dissipation is detected.
The gradient ∂Qk/∂xj is the dissipative force that corresponds to the second term of the time derivative of the momentum pk = mkvk (Newton 1687) in equation (4.2a), where ak = d2xk/dt2 is the acceleration and the energy released from interactions is denoted by the mass loss dmk/dt. In nuclear reactions, the mass change is apparent, whereas in chemical reactions it is almost negligible. During many transport processes, the potential energy remains nearly constant. Consequently, the dissipation is extremely small but non-zero, i.e. the landscape is almost flat. Owing to the dissipation, energy is transferred from μk at xk towards μj at xj along a curved path. The dissipative force is pictured orthogonal to the potential gradient (figures 1 and 3). When the two are combined into one, the orthogonality of the components in −∇V = −∇(U − iQ) is best denoted by i.
When equation (4.2a) is multiplied by the velocities dx/dt and integrated over time, the familiar conservation 2K = −U + Q of energy in the forms of kinetic (K), attractive potential (U) and dissipated (Q) energy is obtained. Its time derivative d(2K)/dt = −v·∇(U − Q), when written using the continuity ∂/∂t = v·∇, is the flow equation (equations (4.1a) and (4.1b)). The energy flow is expelled from the open system to its surroundings as matter dmk at the velocity vk and/or radiated at the speed of light c.
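The bookkeeping behind such integrated balances can be illustrated numerically (a loose sketch of ours, not the paper's system): for a damped oscillator, whatever leaves the kinetic-plus-potential budget accumulates as dissipation Q, so (K + U) + Q stays constant within the integration error:

```python
# Energy bookkeeping for a damped harmonic oscillator (parameters and
# system entirely illustrative): m*a = -k*x - c*v, with the drag force
# dissipating power c*v**2. Semi-implicit Euler integration.
m, k, c, dt = 1.0, 4.0, 0.3, 1e-4
x, v, Q = 1.0, 0.0, 0.0
E0 = 0.5 * m * v**2 + 0.5 * k * x**2   # initial K + U
for _ in range(200000):
    a = (-k * x - c * v) / m
    Q += c * v * v * dt                # cumulative dissipated energy
    v += a * dt
    x += v * dt
E = 0.5 * m * v**2 + 0.5 * k * x**2    # remaining K + U
# Budget check: remaining (K + U) plus dissipated Q equals the start.
```

The mechanical energy decays while Q grows by the same amount, which is the discrete analogue of the statement that the flows expel energy from the open system to its surroundings.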
A conserved system that complies with the integrated condition 2K + U = 0 is without net dissipation, 〈Q〉 = 0. In other words, the mass is invariant, dmk/dt = 0, and the forces balance as Σmkak = −Σ∂μk/∂xj. Then it is, at least in principle, possible to find a solution to the equation of motion (the Liouville equation) by a transformation that renders the Hamiltonian time independent.
Likewise, for the unresolved transformations, the forces due to the unresolved spatial potential gradient and the detected energy flux add to each other as vectors (equation (4.2b)), where mk denotes the mass in motion from compounds Nk to Nj. For example, in chemical reactions, the mass change dmk relates to electronic restructuring, whereas the nuclear masses mk remain intact. The flow rate is proportional to the potential energy gradient and to the mechanisms of energy transduction (Sharma & Annila 2007).
Although the motions are not resolved, the notion of kinetic energy is not meaningless (Gyarmati 1970). The observed dissipation discloses that energy does flow from higher to lower potentials within the system in the quest for a stationary state with its surroundings (figure 3). The dissipated quanta may, on their way, be influenced by new gradients before being absorbed |1〉→|0〉 in a detecting potential where the particular energy transfer process ends. The equation of motion for the electromagnetic energy transfer, corresponding to equations (4.1a) and (4.1b), is due to Poynting (1920). It can be used to picture the propagation of light analogously to figure 3, but is not pursued further here.
Customarily, non-dissipative motions are described by the Euler–Lagrange equation, which can be derived by varying the point (x, t) in the middle of an infinitesimal space–time interval (Hanc et al. 2004) to determine the stationary trajectory (figure 4). Dissipative motions are described by the dissipative Euler–Lagrange equation (Nesbet 2003) given in equation (4.3), where the Lagrangian Lk = Kk − Uk for the transport processes contains the kinetic energy Kk and the potential energy Uk = μk. The emitted or absorbed quanta ΣkQk ≠ 0 in the kj-transitions are included. However, equation (4.3) does not explicitly identify the changes in interactions as the sources of emission or sinks of absorption, unlike equation (4.2a), which shows the mass changes. When identities are created or destroyed, dissipation invariably accompanies the process. Owing to the finite transformation rates (c, vk or dNk/dt), it takes time to distinguish xk from xj (Nk from Nj) on the basis of an energy difference (Gk − Gj) by a dissipative process (Brillouin 1963). The dissipative flow of energy 〈Q〉 ≠ 0 manifests itself as the flow of time.
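The body of equation (4.3) is not reproduced above. One common way of writing a dissipative Euler–Lagrange equation is the Rayleigh form, sketched here in our notation (not necessarily the paper's exact expression), in which the dissipation enters as an additional generalized force:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L_k}{\partial \dot{x}_k}
  \;-\; \frac{\partial L_k}{\partial x_k}
  \;=\; -\,\frac{\partial Q_k}{\partial \dot{x}_k},
\qquad L_k = K_k - U_k ,
```

where Qk here plays the role of a dissipation function whose velocity gradient gives the frictional force; setting Qk to zero recovers the familiar non-dissipative stationary-trajectory condition.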
Figure 4
An expanded fraction of an energy landscape illustrates the Euler–Lagrange differential form of the integral equation of motion. The shortest path is found by minimizing the distance L = K − U (dotted line) to the point (x, t) on the infinitesimally short path. Along this optimal path of energy dispersal, the change in the potential U transfers into the kinetic energy K and the dissipation Q.
Both the resolved and unresolved transformations direct from high potentials down to low potentials along the steepest gradients in energy, in terms of both ∂μk/∂xj and ∂Qk/∂xj. In the following, it is argued that these paths, also known as geodesics, are the shortest in energy, i.e. in space and time.
5. The integral equation of motion
The integral form of the differential equation (equation (4.2a)) for the resolved processes is known as the abbreviated action. The r.h.s. (equation (5.1a)) amounts from the potential energy U and the dissipation Q, when the energy flows down along the gradients during the time period dt and over the spatial distance dx. Likewise, for the unresolved transformations (equation (4.2b)), the integral form is given by equation (5.1b), where the free energy is consumed in dissipative transformations during dt. The l.h.s. of equation (4.2a) gives the form proposed by Maupertuis (equation (5.2a)) that amounts from the kinetic energy during t. When equations (5.1a) and (5.2a) are equated, the integral form of continuity, actio est reactio (the interaction principle), is obtained. The integrands sum as 2K + U = Q. For the non-dissipative motions along isergonic trajectories, 〈Q〉 = 0, the familiar condition of a stationary state 2K + U = 0 is recovered.
In general, the integral equation of motion, just as the differential form, cannot be solved because the driving forces, i.e. the potential gradients, are consumed by the flows. There is no invariant of motion, and evolution may redirect its course. Therefore, to allow for a changed destination, the integrals are left indefinite.
It is of interest to examine a short stretch of the path where energy is in motion from μk to μj. The directed arc s interconnecting the two repositories on the energy landscape is shortened by ds during dt (figure 5). When the short arc ds is approximated by a straight chord, the dissipative directional step can be carried out with complex numbers, so that the differential dσ, associated with the potential energy change, adds to the dissipation-associated differential ic dt along the orthogonal direction, indicated by i, to give the differential ds = −dσ + ic dt that associates with the kinetic energy. When multiplying by the complex conjugate ds*, the familiar expression dσ2 = ds2 − c2dt2 for the local metric is obtained (Taylor & Wheeler 2000; Berry 2001). The squared differentials associate with the attractive potential energy U = −m(dσ/dt)2, the kinetic energy 2K = m(ds/dt)2 = mv2 and the dissipated energy Q = mc2(dt/dt)2. Thus, after multiplying with dt2/mc2, it is observed that it is the conservation of energy U = Q − 2K that defines the Lorentzian manifold with the familiar metric dτ2 = dt2 − ds2/c2, where the proper distance σ is related to the proper time τ by c2dτ2 = −dσ2.
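The chord construction can be written out step by step (our transcription of the argument above, with the same symbols):

```latex
\mathrm{d}s = -\mathrm{d}\sigma + i\,c\,\mathrm{d}t
\;\;\Rightarrow\;\;
\mathrm{d}s\,\mathrm{d}s^{*} = \mathrm{d}\sigma^{2} + c^{2}\mathrm{d}t^{2}
\;\;\Rightarrow\;\;
\mathrm{d}\sigma^{2} = \mathrm{d}s^{2} - c^{2}\mathrm{d}t^{2} .
```

Multiplying the last relation by \(m/\mathrm{d}t^{2}\) and identifying \(U = -m(\mathrm{d}\sigma/\mathrm{d}t)^{2}\), \(2K = m(\mathrm{d}s/\mathrm{d}t)^{2}\) and \(Q = mc^{2}\) gives \(-U = 2K - Q\), i.e. the conservation \(U = Q - 2K\) stated in the text.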
Figure 5
When the energy flows from the high potential μk down along the shortest path (solid line) towards the low potential μj, the directed path s between xk and xj shortens at the rate ds/dt. The expansions describe the local metric of the curved landscape. (a) The momentum vector v = u + iq is a sum of the momentum due to the potential energy change and the concurrent dissipation along the orthogonal direction. The metric u2 = v2 − q2 of the landscape is Lorentz covariant with respect to a change in the frame of reference (dashed line). (b) When a short local path ds on the non-Euclidian manifold is approximated by a straight chord, the kinetic energy (ds/dt)2 (blue line) amounts from the change in the potential (dσ/dt)2 (black vertical line) and the dissipation (c dt/dt)2 (red horizontal line) according to Pythagoras' theorem.
The unresolved jk-transformations convert the energy associated with mk to Σmj with concomitant dissipation. When interactions break apart, some mass Δmk is dissipated, most apparent in nuclear reactions but also non-zero in other evolutionary processes. Since the system as a whole is not observed to move, only the dissipation-associated momentum is detected. Integration over dt (equation (5.2b)) gives the net dissipation due to the flows of energy between distinct potentials during t. An efflux will consume a high potential, whereas an influx will build up a low potential until the stationary state in the surroundings is attained. When the net energy flow from and to the system has vanished, 〈Q〉 = 0, the through-flux supporting the stationary state is at a maximum. This is the maximum power principle (Lotka 1922). The principle of minimum (net) dissipation (Moiseev 1987) refers to the state of minimum free energy. In this sense, it too re-expresses the second law.
Customarily, the dissipative Lagrange form of action (equation (5.3)) is preferred over the Maupertuis form (equation (5.2a)). However, since 2K, L = K − U and Q are interdependent by the conservation of energy (figure 4), identical results are obtained by minimizing A0 for 2K (or Q) or A for L. The shortest path in energy is the one where the kinetic energy 2K as the integrand of the abbreviated action, or the Lagrangian L = K − U, or the dissipation Q is at a minimum.
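That minimizing the Lagrangian action singles out the classical path can be checked with a small discretization (entirely our construction: a mass in a uniform field, fixed endpoints, plain gradient descent on the interior path points):

```python
# Minimize the discretized action A = sum_i [ m/2*((x_{i+1}-x_i)/dt)**2
#   - m*g*x_i ] * dt for a mass in a uniform field g, with fixed
# endpoints x(0) = x(T) = 0. The minimizer should be the classical
# parabola x(t) = (g*t/2)*(T - t), whose midpoint is g*T**2/8.
m, g, T, N = 1.0, 9.81, 1.0, 20
dt = T / N
x = [0.0] * (N + 1)                 # initial guess: the straight path
for _ in range(20000):
    for i in range(1, N):           # endpoints stay fixed
        # gradient dA/dx_i of the discrete action
        grad = m * (2 * x[i] - x[i - 1] - x[i + 1]) / dt - m * g * dt
        x[i] -= 0.01 * grad
mid = x[N // 2]                     # compare with g*T**2/8
```

Setting the gradient to zero reproduces the discrete equation of motion (x[i+1] − 2x[i] + x[i−1])/dt² = −g, so the descent converges on the trajectory that Newton's equation would give directly, illustrating the equivalence of the differential and integral pictures.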
6. Manifold in motion
The thermodynamic description of an energy landscape in motion, as outlined above, expresses the basic conservation laws for energy and its differential, i.e. force, as well as for the energy integral, i.e. action, and its differential, i.e. momentum. These equations of evolution state that the time-dependent manifold of energy densities is continuous. The flows of energy are most voluminous between the well-connected reservoirs. The connected densities-in-energy make an affine manifold (Lee 2003; Carroll 2004), i.e. a system where a flow from a high density necessarily passes through the neighbouring coordinates on its way towards a low density.
The evolving landscape is described by the calculus of variations, which is equipped with the powerful mathematical machinery conceived by Gauss and elaborated further by Riemann (Weinberg 1972; Carroll 2004). The mathematical counterpart of the thermodynamic evolution is the geometric evolution ∂tgjk = −2Rjk (Chow & Knopf 2004). The Riemannian manifold deforms via Ricci flows that are driven by the curvature tensor Rjk. In analogy to the surrounding fields that are included along with the thermodynamic potential gradients, the Ricci flows may also be powered by additional vector fields. The negative sign signifies that the Ricci flows contract the positively curved regions and expand the negatively curved regions of the manifold, in accordance with the thermodynamic evolution that flattens heights and fills depths of energy densities (figures 2–5), as well as stretches the saddle regions. Since our description provides mathematically nothing new, we will not make excursions to rephrase the well-established results. Rather, we will picture an evolving space–time as an energy transfer process at a formal level, without reference to the explicit forms of diverse potentials and their gradients, i.e. fields.
According to thermodynamics, space is not empty but energized. If there is no energy in the form of U and no radiation Q, the space–time is empty (does not exist), i.e. 2K = 0 (Foster & Nightingale 1994). The manifold, given above as a tangential vector field, is customarily given by the elements gjk of the metric tensor g = JTJ, which is obtained from the Jacobian J. Since the jk-path is directional, a Jacobian element Jjk < 0 means that the density at xk (or Nk) is decreasing during evolution in favour of xj (or Nj) until the energy gradients (or differences) have vanished. The corresponding transposed element Jkj > 0 is illustrative in identifying the coordinate xj as the site of μj, in the same way as the coordinate xk associates with μk. Thus, the transposition literally means a change in the viewpoint on the common landscape. As usual, the trace, determinant and discriminant of J disclose the motional modes of the manifold (Strogatz 2000).
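The remark about the trace, determinant and discriminant can be made concrete with the textbook classification of a 2×2 Jacobian (a standard sketch; the example matrices are ours):

```python
# Classify the local motional mode of a 2x2 Jacobian J by its trace,
# determinant and discriminant, as in standard linear stability
# analysis (cf. Strogatz 2000).
def classify(J):
    (a, b), (c, d) = J
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4 * det
    if det < 0:
        return "saddle"
    if tr == 0 and det > 0:
        return "centre (conserved motion)"        # cf. Tr(J) = 0 above
    kind = "node" if disc >= 0 else "spiral"
    return ("stable " if tr < 0 else "unstable ") + kind

examples = {
    "saddle": [[1, 0], [0, -1]],
    "centre (conserved motion)": [[0, 1], [-1, 0]],
    "stable spiral": [[-1, 1], [-1, -1]],
}
```

The traceless case corresponds to the isergonic contours of the text, where 〈Q〉 = 0 and the motion is conserved rather than evolutionary.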
At the stationary state, there is no dissipation and there is no directionality of time and no evolution of the metric either. The space is flat without net forces and all dynamics are conserved during the period of integration. Along an isergonic contour where 〈Q〉=0, dS/dt=0 and Tr(J)=0. The dynamics of the manifold about a local point is revealed by the discriminant of the characteristic equation. The conserved motions retain all energies at xk(Nk) or return to it without net dissipation after t has elapsed in cyclic or sporadic excursions about xk. Hence, the potential (dσ/dt)2 equals the energy in motion (ds/dt)2 in accordance with the familiar condition U+2K=〈Q〉=0. For example, the constant curvature, i.e. a fixed radius r, means that 2K=m(ds/dt)2=mr2(dφ/dt)2=mr2ω2=−U, where the characteristic frequency ω is determined by the strength of the potential. Likewise, the basic wave equation is an expression for the geodesic in a constant potential.
According to thermodynamics, time is only a convenient way to compare the relative rates of dissipative processes that are flattening the energy landscape. The manifold is levelled by the flows that may also be viewed as motions that dilute energy density. The famous invariant of motion dt/dτ=(1−v2/c2)−1/2=E/mc2 relates, for example, the source of quanta to the sink that is receding with high velocity v. Thus, the general coordinates, customarily given by space and time, intermingle with each other in dissipative motions that unfold the manifold of energy densities. Since the rate of dissipation depends on the surrounding energy densities that may differ radically from those which we are accustomed to on Earth, some observations may appear peculiar and counter-intuitive to us.
7. Discussion
Ludwig Boltzmann expressed his unrelenting desire to connect the second law with the principle of least action, as late as 1899 when closing his lectures at Clark University by saying ‘It turns out that the analogies with the second law are neither simply identical to the principle of least action, nor to Hamilton's principle, but that they are closely related to each of them.’ (Boltzmann 1905). Apparently, Boltzmann yearned also to express Darwin's theory of evolution by natural selection in terms of statistical mechanics when saying that the existence of animate beings is a struggle for entropy (Boltzmann 1974).
In retrospect, it seems that both of Boltzmann's objectives were somehow concealed early on. Apparently, the primary objective at that time was to find the equilibrium-state partition that is characterized by the well-known Boltzmann factors. Since the equilibrium by definition has zero free energy, the driving force of evolution that led to the stationary state remained obscure. Therefore, in many cases, the Boltzmann factors are still today imposed ad hoc to command the system to a stationary state, rather than allowing the system to find its own way via natural motions to equilibrium with its surroundings. Perhaps the elegant mathematical machinery due to Joseph-Louis Lagrange, which is also employed today to determine the equilibrium partition, disguised the physics of evolution, i.e. the probabilities are not invariant but relate to the free energy (Sharma & Annila 2007). Thus, irreversibility is based exclusively on reasons of probability (Ritz & Einstein 1909; Zeh 2007).
The contemporary obsession to predefine the steady state also appears in the desire for normalized probabilities. A norm associates with symmetry and facilitates calculus. However, probabilities keep changing with a changing energy landscape. This is also the basic idea of Bayesian inference (Bayes 1763), yet another early and conceptually sound account of evolution, in particular when augmented with the steepest-ascent imperative (Jaynes 2003). In general, all paths are explored (Feynman & Hibbs 1965) and energy flows distribute through them according to the principle of least action. Excursions on the energy manifold, e.g. by random variation, will sooner or later naturally converge on the most probable, shortest paths that follow the steepest gradients in energy.
The principle of increasing entropy, equivalent to that of decreasing free energy, is pure and austere, but its mechanical manifestations can be complex and intricate. The energy transfer involving coherent motions between a small system and its surroundings may display complicated phenomena (Sudarshan & Misra 1977; Schieve et al. 1989), whereas energy transduction in a large hierarchical system, channelling via numerous paths, becomes easily intractable. For example, the numerous enzymes that constitute the metabolic machinery of a cell are viewed here as mechanisms that transform chemical energy from one compound pool to another. Likewise, the species of an ecosystem form a chain of energy transduction mechanisms that distribute the solar energy acquired by photosynthesis. Since the mechanisms of energy transduction are also themselves repositories of energy, other mechanisms may, in turn, tap into and draw from them. Therefore, the evolutionary courses of many ecosystems and their responses to environmental changes are difficult to predict precisely. Technically speaking, although the equation of motion is known, it is non-integrable.
Particularly intriguing phenomena may emerge when a high-energy source, such as the Sun, is powering a large energy transduction network, such as that on Earth. When a steady stream of external energy falls on an open system, there is a driving force to assemble mechanisms from the available ingredients and to improve on them in order to acquire more energy in the quest for a stationary state. The driving force makes no difference between abiotic and biotic mechanisms of energy transduction but favours all those that disperse energy more and more effectively. Therefore, the large global system is, in the language of thermodynamics, an energy manifold in myriad motional modes, most of which are referred to as life. It has taken aeons for the large global system, which apparently has a suitable mixture of ingredients to couple to the high-energy influx, to evolve in energy transduction. Although the abstract description of evolution provided by statistical physics results in a holistic view of nature, it is not equipped to say specifically how energy transduction mechanisms, i.e. species, have emerged. These questions can be addressed by appropriate models. The present formalism emphasizes the imperatives in evolution.
The role of (genetic) information is undoubtedly important in evolution, but it has not been elaborated in this study. However, considering the close connection between mathematical communication theory (Shannon 1948) and statistical physics (Kullback 1959), it is not surprising that a piece of information, owing to its physical representation (Landauer 1961), is identified by thermodynamics with a deviation from the average energy density. Since deviations are consumed during the probable motion, thermodynamics sheds light on the origin of (genetic) information as a powerful mechanism to increase energy transduction. Thus, not only has life emerged on Earth, but the globe has also evolved into a living planet (Lovelock 1988). Extending the thermodynamic description to biotic systems is not new (Lotka 1925) and is consistent with many earlier studies based on the principle of increasing entropy or the reduction of gradients (Ulanowicz & Hannon 1987; Brooks & Wiley 1988; Salthe 1993; Schneider & Kay 1994; Chaisson 1998; Lorenz 2002).
The established connections between the differential and integral equations of evolutionary motions may appear naive and the conclusions may seem simple by modern standards. On the other hand, the presentation, using the basic concepts of physics, is in accord with the inspiring ideas about the evolving nature that appeared in various forms during the past centuries. In summary, tracks of evolution are non-deterministic because the energy flows will affect potentials that, in turn, will alter the flows. The trajectories are integrable only when a stationary state without net dissipation is reached. The contrast between the non-conserved dissipative motions and the conserved stationary-state stance has been phrased so that the subjective non-Euclidian world, i.e. a curved landscape, happens (evolves), whereas the objective Euclidian world, i.e. a flat landscape, simply is (stationary; Weyl 1949).
Acknowledgments
We are grateful to Mahesh Karnani, Janne Nuutinen, Tuomas Pernu, Kimmo Pääkkönen and Vivek Sharma for their many informative and insightful corrections and comments.
# Krull ring
In commutative algebra, a Krull ring or Krull domain is a commutative ring with a well-behaved theory of prime factorization. They were introduced by Wolfgang Krull (1931). They are a higher-dimensional generalization of Dedekind domains, which are exactly the Krull domains of dimension at most 1.
## Formal definition
Let $A$ be an integral domain and let $P$ be the set of all prime ideals of $A$ of height one, that is, the set of all prime ideals properly containing no nonzero prime ideal. Then $A$ is a Krull ring if
1. $A_{\mathfrak{p}}$ is a discrete valuation ring for all $\mathfrak{p} \in P$,
2. $A$ is the intersection of these discrete valuation rings (considered as subrings of the quotient field of $A$).
3. Any nonzero element of $A$ is contained in only a finite number of height 1 prime ideals.
## Properties
A Krull domain is a unique factorization domain if and only if every prime ideal of height one is principal.[1]
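A standard example of this criterion failing (my addition, not part of the article): the ring $\mathbb{Z}[\sqrt{-5}]$ is an integrally closed noetherian domain of dimension 1, hence a Krull (indeed Dedekind) domain, yet it is not a UFD because a height-one prime is not principal:

```latex
6 = 2 \cdot 3 = (1+\sqrt{-5})(1-\sqrt{-5}),
\qquad
\mathfrak{p} = (2,\ 1+\sqrt{-5}) \ \text{has height one, } \mathfrak{p}^2 = (2), \ \text{but } \mathfrak{p} \text{ is not principal.}
```

The two factorizations of 6 into irreducibles witness the failure of unique factorization.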
Let A be a Zariski ring (e.g., a local noetherian ring). If the completion $\widehat{A}$ is a Krull domain, then A is a Krull domain.[2]
## Examples
1. Every integrally closed noetherian domain is a Krull ring. In particular, Dedekind domains are Krull rings. Conversely, Krull rings are integrally closed, so a noetherian domain is Krull if and only if it is integrally closed.
2. If $A$ is a Krull ring then so is the polynomial ring $A[x]$ and the formal power series ring $A[[x]]$.
3. The polynomial ring $R[x_1, x_2, x_3, \ldots]$ in infinitely many variables over a unique factorization domain $R$ is a Krull ring which is not noetherian. In general, any unique factorization domain is a Krull ring.
4. Let $A$ be a Noetherian domain with quotient field $K$, and $L$ be a finite algebraic extension of $K$. Then the integral closure of $A$ in $L$ is a Krull ring (Mori–Nagata theorem).[3]
## The divisor class group of a Krull ring
A (Weil) divisor of a Krull ring A is a formal integral linear combination of the height 1 prime ideals, and these form a group D(A). A divisor of the form div(x) for some non-zero x in A is called a principal divisor, and the principal divisors form a subgroup of the group of divisors. The quotient of the group of divisors by the subgroup of principal divisors is called the divisor class group of A.
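In symbols (a sketch under the notation above): a nonzero $x \in A$ determines $\operatorname{div}(x) = \sum_{\mathfrak{p}} v_{\mathfrak{p}}(x)\,\mathfrak{p}$, where $v_{\mathfrak{p}}$ is the discrete valuation of the localization $A_{\mathfrak{p}}$ and the sum runs over the height-one primes, which is finite by condition 3. Taking the Krull ring $\mathbb{Z}$ as an example of my own:

```latex
\operatorname{div}(12) = 2\,(2) + 1\,(3) \;\in\; D(\mathbb{Z}),
```

and since every height-one prime of $\mathbb{Z}$ is principal, every divisor is principal and the divisor class group of $\mathbb{Z}$ is trivial.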
A Cartier divisor of a Krull ring is a locally principal (Weil) divisor. The Cartier divisors form a subgroup of the group of divisors containing the principal divisors. The quotient of the Cartier divisors by the principal divisors is a subgroup of the divisor class group, isomorphic to the Picard group of invertible sheaves on Spec(A).
Example: in the ring $k[x,y,z]/(xy-z^2)$ the divisor class group has order 2, generated by the divisor $y=z$, but the Picard subgroup is the trivial group.
# Alice in Wonderland
## Chapter XI: Who Stole the Tarts?
The King and Queen of Hearts were seated on their throne when they arrived, with a great crowd assembled about them--all sorts of little birds and beasts, as well as the whole pack of cards: the Knave was standing before them, in chains, with a soldier on each side to guard him; and near the King was the White Rabbit, with a trumpet in one hand, and a scroll of parchment in the other. In the very middle of the court was a table, with a large dish of tarts upon it: they looked so good, that it made Alice quite hungry to look at them--'I wish they'd get the trial done,' she thought, 'and hand round the refreshments!' But there seemed to be no chance of this, so she began looking at everything about her, to pass away the time.

Alice had never been in a court of justice before, but she had read about them in books, and she was quite pleased to find that she knew the name of nearly everything there. 'That's the judge,' she said to herself, 'because of his great wig.'

The judge, by the way, was the King; and as he wore his crown over the wig, (look at the frontispiece if you want to see how he did it,) he did not look at all comfortable, and it was certainly not becoming.

'And that's the jury-box,' thought Alice, 'and those twelve creatures,' (she was obliged to say 'creatures,' you see, because some of them were animals, and some were birds,) 'I suppose they are the jurors.' She said this last word two or three times over to herself, being rather proud of it: for she thought, and rightly too, that very few little girls of her age knew the meaning of it at all. However, 'jury-men' would have done just as well.

The twelve jurors were all writing very busily on slates. 'What are they doing?' Alice whispered to the Gryphon. 'They can't have anything to put down yet, before the trial's begun.'

'They're putting down their names,' the Gryphon whispered in reply, 'for fear they should forget them before the end of the trial.'

'Stupid things!' Alice began in a loud, indignant voice, but she stopped hastily, for the White Rabbit cried out, 'Silence in the court!' and the King put on his spectacles and looked anxiously round, to make out who was talking.

Alice could see, as well as if she were looking over their shoulders, that all the jurors were writing down 'stupid things!' on their slates, and she could even make out that one of them didn't know how to spell 'stupid,' and that he had to ask his neighbour to tell him. 'A nice muddle their slates'll be in before the trial's over!' thought Alice.
One of the jurors had a pencil that squeaked. This of course, Alice could not stand, and she went round the court and got behind him, and very soon found an opportunity of taking it away. She did it so quickly that the poor little juror (it was Bill, the Lizard) could not make out at all what had become of it; so, after hunting all about for it, he was obliged to write with one finger for the rest of the day; and this was of very little use, as it left no mark on the slate.
'Herald, read the accusation!' said the King.

On this the White Rabbit blew three blasts on the trumpet, and then unrolled the parchment scroll, and read as follows:--

'The Queen of Hearts, she made some tarts,
All on a summer day:
The Knave of Hearts, he stole those tarts,
And took them quite away!'

'Consider your verdict,' the King said to the jury.

'Not yet, not yet!' the Rabbit hastily interrupted. 'There's a great deal to come before that!'

'Call the first witness,' said the King; and the White Rabbit blew three blasts on the trumpet, and called out, 'First witness!'

The first witness was the Hatter. He came in with a teacup in one hand and a piece of bread-and-butter in the other. 'I beg pardon, your Majesty,' he began, 'for bringing these in: but I hadn't quite finished my tea when I was sent for.'

'You ought to have finished,' said the King. 'When did you begin?'

The Hatter looked at the March Hare, who had followed him into the court, arm-in-arm with the Dormouse. 'Fourteenth of March, I think it was,' he said.

'Fifteenth,' said the March Hare.

'Sixteenth,' added the Dormouse.

'Write that down,' the King said to the jury, and the jury eagerly wrote down all three dates on their slates, and then added them up, and reduced the answer to shillings and pence.

'Take off your hat,' the King said to the Hatter.

'It isn't mine,' said the Hatter.

'Stolen!' the King exclaimed, turning to the jury, who instantly made a memorandum of the fact.

'I keep them to sell,' the Hatter added as an explanation; 'I've none of my own. I'm a hatter.'
Here the Queen put on her spectacles, and began staring at the Hatter, who turned pale and fidgeted.
'Give your evidence,' said the King; 'and don't be nervous, or I'll have you executed on the spot.'
This did not seem to encourage the witness at all: he kept shifting from one foot to the other, looking uneasily at the Queen, and in his confusion he bit a large piece out of his teacup instead of the bread-and-butter.
Just at this moment Alice felt a very curious sensation, which puzzled her a good deal until she made out what it was: she was beginning to grow larger again, and she thought at first she would get up and leave the court; but on second thoughts she decided to remain where she was as long as there was room for her.
'I wish you wouldn't squeeze so,' said the Dormouse, who was sitting next to her. 'I can hardly breathe.'

'I can't help it,' said Alice very meekly: 'I'm growing.'

'You've no right to grow here,' said the Dormouse.

'Don't talk nonsense,' said Alice more boldly: 'you know you're growing too.'

'Yes, but I grow at a reasonable pace,' said the Dormouse: 'not in that ridiculous fashion.' And he got up very sulkily and crossed over to the other side of the court.

All this time the Queen had never left off staring at the Hatter, and, just as the Dormouse crossed the court, she said to one of the officers of the court, 'Bring me the list of the singers in the last concert!' on which the wretched Hatter trembled so, that he shook both his shoes off.

'Give your evidence,' the King repeated angrily, 'or I'll have you executed, whether you're nervous or not.'

'I'm a poor man, your Majesty,' the Hatter began, in a trembling voice, '--and I hadn't begun my tea--not above a week or so--and what with the bread-and-butter getting so thin--and the twinkling of the tea--'

'The twinkling of the what?' said the King.

'It began with the tea,' the Hatter replied.

'Of course twinkling begins with a T!' said the King sharply. 'Do you take me for a dunce? Go on!'

'I'm a poor man,' the Hatter went on, 'and most things twinkled after that--only the March Hare said--'

'I didn't!' the March Hare interrupted in a great hurry.

'You did!' said the Hatter.

'I deny it!' said the March Hare.

'He denies it,' said the King: 'leave out that part.'

'Well, at any rate, the Dormouse said--' the Hatter went on, looking anxiously round to see if he would deny it too: but the Dormouse denied nothing, being fast asleep.

'After that,' continued the Hatter, 'I cut some more bread-and-butter--'

'But what did the Dormouse say?' one of the jury asked.

'That I can't remember,' said the Hatter.

'You MUST remember,' remarked the King, 'or I'll have you executed.'

The miserable Hatter dropped his teacup and bread-and-butter, and went down on one knee. 'I'm a poor man, your Majesty,' he began.

'You're a very poor speaker,' said the King.
Here one of the guinea-pigs cheered, and was immediately suppressed by the officers of the court. (As that is rather a hard word, I will just explain to you how it was done. They had a large canvas bag, which tied up at the mouth with strings: into this they slipped the guinea-pig, head first, and then sat upon it.)
'I'm glad I've seen that done,' thought Alice. 'I've so often read in the newspapers, at the end of trials, "There was some attempts at applause, which was immediately suppressed by the officers of the court," and I never understood what it meant till now.'

'If that's all you know about it, you may stand down,' continued the King.

'I can't go no lower,' said the Hatter: 'I'm on the floor, as it is.'

'Then you may SIT down,' the King replied.
Here the other guinea-pig cheered, and was suppressed.
'Come, that finished the guinea-pigs!' thought Alice. 'Now we shall get on better.'

'I'd rather finish my tea,' said the Hatter, with an anxious look at the Queen, who was reading the list of singers.

'You may go,' said the King, and the Hatter hurriedly left the court, without even waiting to put his shoes on.

'--and just take his head off outside,' the Queen added to one of the officers: but the Hatter was out of sight before the officer could get to the door.

'Call the next witness!' said the King.

The next witness was the Duchess's cook. She carried the pepper-box in her hand, and Alice guessed who it was, even before she got into the court, by the way the people near the door began sneezing all at once.

'Give your evidence,' said the King.

'Shan't,' said the cook.

The King looked anxiously at the White Rabbit, who said in a low voice, 'Your Majesty must cross-examine THIS witness.'

'Well, if I must, I must,' the King said, with a melancholy air, and, after folding his arms and frowning at the cook till his eyes were nearly out of sight, he said in a deep voice, 'What are tarts made of?'

'Pepper, mostly,' said the cook.

'Treacle,' said a sleepy voice behind her.

'Collar that Dormouse,' the Queen shrieked out. 'Behead that Dormouse! Turn that Dormouse out of court! Suppress him! Pinch him! Off with his whiskers!'

For some minutes the whole court was in confusion, getting the Dormouse turned out, and, by the time they had settled down again, the cook had disappeared.

'Never mind!' said the King, with an air of great relief. 'Call the next witness.' And he added in an undertone to the Queen, 'Really, my dear, YOU must cross-examine the next witness. It quite makes my forehead ache!'
Alice watched the White Rabbit as he fumbled over the list, feeling very curious to see what the next witness would be like, --'for they haven't got much evidence YET,' she said to herself. Imagine her surprise, when the White Rabbit read out, at the top of his shrill little voice, the name 'Alice!'
# Thread: Subgroup question
1. ## Subgroup question
Let H1, H2, H3, ... be a (possibly infinite) sequence of subgroups of a group with the property that H1 ⊆ H2 ⊆ H3 ⊆ ... . Prove that the union of the sequence is also a subgroup.
Would I use a subgroup test on this or what? I will be very grateful for any help.
2. Originally Posted by wutang
Let H1, H2, H3, ... be a (possibly infinite) sequence of subgroups of a group with the property that H1 ⊆ H2 ⊆ H3 ⊆ ... . Prove that the union of the sequence is also a subgroup.
Would I use a subgroup test on this or what? I will be very grateful for any help.
Lemma: A non-empty subset H of a group G is a subgroup of G iff $\forall a,b\in H\,,\,\,ab^{-1}\in H$ .
With the above result your problem is trivial.
Tonio
3. could you explain this to me, I am still not getting it.
Thanks
4. Originally Posted by wutang
could you explain this to me, I am still not getting it.
Thanks
Prove the lemma: show that such a subset H is actually a group.
Tonio
5. tonio, I am still unsure what I would let H equal. Would I let H equal the union of all H1, H2, H3, ...? I am still unsure what to do.
Thanks
6. Originally Posted by wutang
tonio, I am still unsure what I would let H equal. Would I let H equal the union of all H1, H2, H3, ...? I am still unsure what to do.
Thanks
Yes, that'd be H and then with the lemma it is trivial that the subset H is actually a subgroup.
Tonio
7. I am still lost. Here is what I have so far:
Pf/
Let H = the union of the subgroups H1, H2, H3, .... Obviously H is a subset of the group G containing all the subgroups H1, H2, H3, .... The defining property of H is that H1 is contained in H2, H2 is contained in H3, ....
I don't know how I could prove that the identity is in my group, or that for all elements a,b in H that a(b^-1) is in H.
Do I use the fact that since H1, H2, H3, ... are all subgroups, and that H1 is contained in all the subgroups H2 to Hn, that the identity element of H1 is the same for all H2 to Hn? How do I show that for all a, b in H, a(b^-1) is in H? Is it due to closure of the subgroups H1, H2, H3, ...?
Thanks again for your help
8. Would your lemma work due to the fact that since each H1, H2, ..., Hn is a subgroup, then a(b^-1) will be in each H1, H2, ..., Hn by definition of a subgroup? Then taking the union of all these subgroups, we get all a(b^-1) that were in H1, H2, H3, ..., Hn in our new subgroup H?
9. To prove closure: for any a,b, pick a big $H_n$ (which is certainly a subset of H) such that $a,b\in H_n$.
To prove identity: obvious
To prove inverse: also obvious
Then what else do you need to do?
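A concrete sketch of the argument (my own illustration, not from the thread): inside $(\mathbb{Q},+)$ take the increasing chain $H_n = 2^{-n}\mathbb{Z}$, whose union is the group of dyadic rationals. The one-step test, additively $a,b \in H \Rightarrow a-b \in H$, works because any two elements land in a common $H_{\max(i,j)}$:

```python
from fractions import Fraction

def in_Hn(x: Fraction, n: int) -> bool:
    """Membership in H_n = 2^{-n} * Z, i.e. x * 2^n is an integer."""
    return (x * 2**n).denominator == 1

def in_union(x: Fraction) -> bool:
    """x lies in the union of the chain iff its denominator is a power of two."""
    d = x.denominator
    return d & (d - 1) == 0   # power-of-two bit trick

# Subgroup test a*b^{-1} (additively: a - b) on sampled elements of the union:
samples = [Fraction(3, 8), Fraction(5, 2), Fraction(-7, 16), Fraction(1)]
closed = all(in_union(a - b) for a in samples for b in samples)

# The key step of the proof: a in H_i and b in H_j imply both lie in H_max(i,j),
# because the chain is increasing.
a, b = Fraction(3, 8), Fraction(5, 2)   # a in H_3, b in H_1
assert in_Hn(a, 3) and in_Hn(b, 1)
assert in_Hn(a, 3) and in_Hn(b, 3)
print(closed)
```

Note that for a union of subgroups that do *not* form a chain, the test fails in general, which is why the nesting hypothesis matters.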
# What's the difference between running up a hill and running up an inclined treadmill?
Clearly there will be differences like air resistance; I'm not interested in that. It seems like you're working against gravity when you're actually running in a way that you're not if you're on a treadmill, but on the other hand it seems like one should be able to take a piece of the treadmill's belt as an inertial reference point. What's going on here?
Speaking as a some-time "fell runner", I observe that setting the treadmill to 10% vs running up a road at 10%, I do the same distance in times that differ by less than 5%, often in favour of the road as it happens. i.e. if the surfaces are even I do not observe very much difference at all. Running up actual hills, as opposed to roads, is much slower. Running on the flat I do times that are consistently 2-3% faster on the treadmill, which I presumed was just wind resistance. – Rob Jeffries Dec 12 '14 at 9:03
For me it is axiomatic that machine miles are easier than real miles, but let's analyze the situation.
Assume the runner maintains a constant velocity up the hill, or remains stationary in the frame of the gym on the treadmill. In both cases the runner's acceleration is zero, so we know that her legs must provide a constant force with upward magnitude $mg$, and they have to do this against a surface passing by at an angle $\theta$ below the horizontal and moving with a velocity $v$.
The kinematics in the runners frame of reference look the same. This is not the cause of the difference in perceived difficulty.
I have always assumed that the difference in difficulty was twofold:
• Wind resistance is not really negligible.
• The treadmill presents a very uniform, reliable surface and the runner need not lift her legs as high to ensure non-tripping progress.
Also, modern treadmills are designed to be relatively easy on the knees, and they accomplish this with a slightly springy surface which presumably returns some energy to the runner.
From experience, I agree that treadmill miles are easier. Aside from the possibility of miscalibration, I have a guess that the belt slows down a bit when your foot impacts it, then picks back up the lost speed mostly while you're in the air. Hence, you're not pushed back at quite the speed the treadmill is set to. – Mark Eichenlaub Dec 4 '10 at 22:54
@Mark: That's a good observation, to prevent it you'd have to really overbuild the whole thing which would drive the price up to the point that no one would buy it. – dmckee Dec 4 '10 at 22:57
I'm inclined to believe this answer, but how does it take into account the conservation of energy? You'll have to excuse me if this is silly; I haven't taken physics since high school (and it's been a few years). – aaron Dec 8 '10 at 9:44
@aaron: It is not a silly question at all, and the answer is wrapped up in (1) the difference between "work" meaning $W = \vec{f} \cdot d\vec{x}$ and "work" meaning "man, this is tiring" (see Pavel's answer) and (2) the question of what happens if you just stop running: on the hill, you just stand there; but on the treadmill you come to a stop and stand on the belt and fly back into the gym, land on your butt (if you're lucky) and look really silly because the platform isn't static---the extra energy is expended to maintain your place in the gym despite the motion of the belt. – dmckee Dec 8 '10 at 23:58
Couldn't the m of moving my body up a hill be different than the m of moving my legs underneath me on an incline treadmill? – Dave Feb 2 '12 at 16:15
The word 'difference' may be ambiguous, but let's look at the situation from several points of view.
Energy balance: Indeed, your potential energy does increase in case 1 and not in case 2. Muscles clearly perform the same work, so the energy must go somewhere? Yes, to the electric grid. The treadmill device's engine, to maintain constant velocity, will consume less electrical power to do so (or might even push energy back into the grid, in case of an efficient motor) because your legs are actually pulling it now downwards. If you do the math, you see it exactly compensates.
Muscle work: work, in thermodynamical sense, is not just F*dx. One has to take a machine and consider all interfaces. For example, a spring or a muscle have two ends, and dx in the formula is actually the difference between two paths. Muscle expansion/contraction will be the same, and so is the force. Therefore, they are doing the same work. This work is, the amount of chemical internal energy stored within the muscle converted to mechanical work.
Assume that the hill and the treadmill have the same angle of elevation (are inclined identically), and that two identical persons A and B are running on them at the same speed $v$. Here the speed of the person B running on the treadmill is obviously zero with respect to the ground, but we will consider the speed of the treadmill's belt to be $-v$.
Let's assume that B and the treadmill are now in a truck which is driving up the hill parallel to A, at the same speed $v$ as A. The truck is arranged so that B and the treadmill are not inclined further while the truck is running up the hill. By our hypothesis, the speed of the upper part of the belt is zero with respect to the ground. This will not affect the effort exerted by runner B, because the truck is moving with constant velocity. That is, there are no extra forces caused by inertia, because the truck doesn't accelerate, decelerate or change direction.
Looking now at both runners A and B, we see that they are moving parallel with the same speed. They can even make the same moves in synchrony. The angle of elevation is the same for both of them too. B may not even realize that he is climbing a hill; he may think that he is in a windowless room which doesn't move. The conditions are identical. So, there is no difference between them.
If our intuition still says that the treadmill guy is burning fewer calories, let's imagine that the road on which person A is running up the hill is itself a very long treadmill belt. Imagine that underneath the belt there is a treadmill which is doing two things: it is moving with the same speed $v$ towards the top of the hill, and the upper side of its belt is moving backwards with $-v$, so that the belt appears to be fixed with respect to the ground. Seen from outside, the belt doesn't move (and for runner A too). Now it should be clear that there is no difference.
In the above I assumed that there is no wind, that the treadmill and the runners move with uniform speed, and that the runners themselves are identical. I also assumed that gravity is not weaker towards the top of the hill.
-
I think this argument is isomorphic with mine concerning what doesn't account for the perceived difference. I like the truck arrangement: that's clever. – dmckee Dec 12 '10 at 18:50
I get the impression this is something that doesn't need complicated physics to explain if you apply a quick common-sense test first. If you step up a hill, you have to push the weight of your body upward with each step, or you do not continue moving forward. If you are on a treadmill, you may place your foot forward, but where on a hill you would be pushing against the higher point on the ground to move yourself uphill, the treadmill is conveniently lowering your foot back down to the starting point, so you don't have to actually push yourself uphill very much before you move on to the next step. And it just continues like this, with the treadmill continuously re-lowering your step partway before you ever have the chance to expend the energy you'd need to genuinely push yourself uphill. Does anyone else see this?
-
Let's get one thing out of the way: The work done against gravity is the same whether you are running on an inclined treadmill or running (at the same velocity) up a hill. You see this by considering the motion in the frame of reference of the runner - you can't tell whether you are moving up or the hill is moving down.
Yet the treadmill is easier for two reasons: wind resistance and coefficient of restitution. Let's start with wind resistance:
Approximate cross-sectional area of the body: 0.5 m^2. Speed: 2.5 m/s (uphill...). Coefficient of drag: 1.2; density of air: 1.2 kg/m^3. The drag force is $$F=\frac12\rho v^2 A c_d= 2.3\,\text{N}$$ so the power loss due to wind resistance (treadmill vs real life) is only about 6 W - a very small fraction of typical power expended.
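That estimate is easy to reproduce in a few lines (a quick sketch using only the numbers assumed above):

```python
# Drag force F = (1/2) * rho * v^2 * A * c_d and the resulting power loss,
# using the same assumed inputs as in the answer above.
rho = 1.2   # air density, kg/m^3
v = 2.5     # running speed, m/s
A = 0.5     # frontal cross-sectional area, m^2
c_d = 1.2   # drag coefficient

F = 0.5 * rho * v**2 * A * c_d   # drag force, N (~2.3 N)
P = F * v                        # power lost to drag, W (~6 W)

print(f"F = {F:.2f} N, P = {P:.1f} W")
```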
The real difference lies in the way that the belt "stores" energy. When your foot strikes, it typically is just ahead of your body. With good form the contact point will be very close and little energy needs to be absorbed as the leg "shortens" while under stress. This is the major loss mechanism in running. When the treadmill replaces the road surface, two things happen: the belt has lower inertia, so the runner's body is not decelerated as much by the impact; and the belt stretches and stores some of the energy elastically. This is where the energy saving comes from - what dmckee called "easy on the knees".
-
Let's estimate some of the contributions discussed in dmckee's answer.
## Gravity
We can compute the power spent gaining altitude.
$$W_\text{grav} = mg \dot h = m g v \sin \theta \sim m g v \theta$$ for small angles, with some typical numbers $$W_\text{grav} = ( 180 \text{ lbs} ) ( 9.8 \text{ m/s}^2 ) ( 1 \text{ mile} / 10 \text{ minutes}) ( 5 \text{ degrees} ) \sim 200 \text{ kcal/hr}$$ If I look up the energy burned running on the web, I get $800 \text{ kcal/hr}$.
So, it would seem, running a 10 minute mile on a 5 degree incline outside takes some 25% more work than on flat ground.
## Air resistance
Next let's consider air resistance. We have for its work
$$W_{\text{air}} = \frac{C_d}{2} \rho A v^3$$ with typical numbers $$W_{\text{air}} \sim (0.5) ( 1 \text{ kg/m}^3 ) ( 2 \text{ m} \times 0.5 \text{ m} ) \times ( 1 \text{ mile} / 10 \text{ min} )^3 \sim 4 \text{ kcal/hr}$$
which is a much smaller effect: about 0.5%.
## Non-uniform surface
If we assume that running outside requires us to lift our legs an extra 3 inches on average, the power contribution would be $$W_{\text{rough}} = m_{\text{legs}} g h_{\text{extra}} f_{\text{stride}}$$ with some typical numbers $$W_{\text{rough}} = ( 0.3 \times 180 \text{ lbs}) ( 9.8 \text{ m/s}^2 ) ( 3 \text{ inches } ) ( 2 / \text{ s}) \sim 30 \text{ kcal/hr}$$ which is larger than wind resistance but only a 4% increase over our base running number.
## Springiness of the landing
What if the coefficient of restitution were different for the treadmill versus land? Then every time you impact you'd need to put in less energy springing off; the power saving should be $$W_{\text{spring}} = (\Delta r) m g (\Delta h_{\text{center of mass}}) f$$ with some numbers $$W_{\text{spring}} \sim ( 10 \% )( 180 \text{ lbs} )( 9.8 \text{ m/s}^2 )( 6 \text{ inch} )( 2 / \text{ s} ) \sim 20 \text{ kcal/hr}$$
which is a roughly 3% change.
Not sure what to make of these yet...
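For what it's worth, the four estimates above can be re-run in a short script (a sketch; the unit conversions are mine, the inputs are the ones assumed in this answer, and the air term lands somewhat above the ~4 kcal/hr quoted, which is within the roughness of these figures):

```python
import math

# Rough re-computation of the four contributions estimated above.
# Inputs (180 lbs, 10-minute mile, 5 degrees, 3 in, 6 in, 2 strides/s)
# follow the text; results are order-of-magnitude only.
LB, MILE, INCH = 0.4536, 1609.34, 0.0254   # unit conversions to SI
W_TO_KCAL_HR = 3600 / 4184                 # watts -> kcal/hr

m = 180 * LB                # runner mass, kg
g = 9.8                     # gravity, m/s^2
v = MILE / (10 * 60)        # 10-minute mile, m/s
theta = math.radians(5)     # incline angle

P_grav = m * g * v * math.sin(theta)          # gaining altitude
P_air = 0.5 * 1.0 * (2 * 0.5) * v**3          # drag, C_d ~ 1, rho ~ 1 kg/m^3
P_rough = (0.3 * m) * g * (3 * INCH) * 2      # lifting legs on uneven ground
P_spring = 0.10 * m * g * (6 * INCH) * 2      # 10% restitution difference

for name, P in [("gravity", P_grav), ("air", P_air),
                ("rough ground", P_rough), ("belt springiness", P_spring)]:
    print(f"{name:>16}: {P:6.1f} W ~ {P * W_TO_KCAL_HR:4.0f} kcal/hr")
```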
-
Again, if this analysis held then tilting the bed of the treadmill wouldn't make it harder to run. Tilting the bed does make it harder to run. – dmckee Jul 18 '14 at 3:35
@dmckee You are probably more right than wrong. I've added estimates for the other points you discuss and removed my assertion. But... it seems like gravity has to make a contribution, because I would say running outside is more than 5% harder. – alemi Jul 18 '14 at 3:58
@dmckee Tilting the bed does make it harder. I never meant to argue against that, but I still want to believe actually gaining altitude should take more work than not. – alemi Jul 18 '14 at 4:01
I think the most significant difference between work done on an inclined treadmill and work done on a real incline is the gain in potential energy on the real incline. There is no real $\Delta(mgz)$ on a treadmill, whereas if you fell back to your starting height from a real incline, you'd certainly notice a large amount of stored energy being turned into kinetic energy!
-
(Running up a treadmill) = (expend energy to keep feet moving at a constant speed) + (other effects)
(Running up a hill) = (expend energy to keep feet moving at a constant speed) + (energy to lift center of gravity by hill height) + (other effects)
-
If this analysis was viable tilting the bed of the treadmill would not make the running any harder. That is not the case (try it), and the reason is you have to support the body while pushing against a surface that falls away from you. – dmckee Dec 6 '10 at 17:21
Maybe you expend energy lifting your center of gravity and then release it (fall back) without recovering it, causing a net expenditure. I can account for this in my balance by adding higher-order terms to it. – ja72 Dec 6 '10 at 18:17
Well, yes, but you can take that argument to the infinitesimal limit and recover an inertial frame for both cases. Check my answer for an argument based on the kinematics relative to the ground as seen in the runner's frame of reference. – dmckee Dec 6 '10 at 20:35
To explain this within the framework you proposed: You need to add another term to your first equation (expend energy to push down the treadmill). At first, you might think that you're not pushing down the treadmill (it's supposed to roll back on its own after all), but remember that if you weren't pushing it down then you wouldn't be sustained in place by it. – Malabarba Dec 14 '10 at 4:52
https://zenodo.org/record/3380047/export/csl | Journal article Open Access
# Performances of micro credits among small scale maize farmers in Kagarko local government area, Kaduna State, Nigeria
Barau, B.,; Allimi, H. M.; Maiwada, A. A.; Funmi, A. A.; Ajayi, S. E.; Abdullahi, I. D
### Citation Style Language JSON Export
{
"publisher": "Zenodo",
"DOI": "10.5281/zenodo.3380047",
"container_title": "Direct Research Journal of Agriculture and Food Science",
"title": "Performances of micro credits among small scale maize farmers in Kagarko local government area, Kaduna State, Nigeria",
"issued": {
"date-parts": [
[
2019,
8,
28
]
]
},
"abstract": "<p>This study was carried out to analyze the performances of micro credit among small scale maize farmers in Kagarko local government area of Kaduna State. Random sampling technique was employed and 40 maize farmers (20 loan beneficiaries and 20 non loan beneficiaries) were purposively selected to analyze the performances of the credit. Data collected using a structure questionnaire to analyze the performances and results were interpreted using descriptive statistics. As shown, most of the respondents were between the age bracket of 31-40 and 41-50, for with and without loan respectively. In addition about 60% and 40% of the respondent for the two sets production (with loan and without loan respectively) had formal education. Furthermore, majority of the beneficiary of the loan had 11-15 year of farming experience and 1-10 for non-loan formers. Loan granted for small scale farmers ranges between N500, 000 to N100, 000 for those that accessed the loan. However, the size of the loan does not give a significant impact on increasing production because of its volume and time of disbursement. It is recommended that loan should be given close to production cycle and according to the size of farm, farmers should be linked with high quality inputs dealers for the purchase of agricultural inputs at time.</p>",
"author": [
{
"family": "Barau, B.,"
},
{
"family": "Allimi, H. M."
},
{
"family": "Maiwada, A. A."
},
{
"family": "Funmi, A. A."
},
{
"family": "Ajayi, S. E."
},
{
"family": "Abdullahi, I. D"
}
],
"page": "236-242",
"volume": "7",
"type": "article-journal",
"issue": "8",
"id": "3380047"
}
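As an aside, exports in this CSL JSON form are plain JSON and easy to post-process. A minimal sketch (the record below is trimmed to the fields actually used, and the one-line citation format is ad hoc):

```python
import json

# Format a CSL JSON record as a one-line citation. The field names
# (author/family, issued/date-parts, container_title, ...) are exactly
# those visible in the export above.
record = json.loads("""
{
  "DOI": "10.5281/zenodo.3380047",
  "title": "Performances of micro credits among small scale maize farmers in Kagarko local government area, Kaduna State, Nigeria",
  "container_title": "Direct Research Journal of Agriculture and Food Science",
  "issued": {"date-parts": [[2019, 8, 28]]},
  "author": [{"family": "Barau, B.,"}, {"family": "Allimi, H. M."}],
  "volume": "7", "issue": "8", "page": "236-242"
}
""")

authors = "; ".join(a["family"] for a in record["author"])
year = record["issued"]["date-parts"][0][0]
citation = (f"{authors} ({year}). {record['title']}. "
            f"{record['container_title']}, {record['volume']}"
            f"({record['issue']}), {record['page']}. doi:{record['DOI']}")
print(citation)
```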
http://aimsciences.org/article/doi/10.3934/dcdsb.2003.3.469 | # American Institute of Mathematical Sciences
2003, 3(3): 469-477. doi: 10.3934/dcdsb.2003.3.469
## Western boundary currents versus vanishing depth
1 Laboratoire de Mathématiques Appliquées, UMR6620, 24 avenue des Landais, 63177 Aubière, France
Received May 2002 Revised January 2003 Published May 2003
In the case of a constant depth, western intensification of currents in oceanic basins was mathematically recovered in various models (such as Stommel, Munk or quasi-geostrophic ones) as a boundary layer appearing when the solution of the equations converges to the solution of a pure transport equation. This convergence is linked to the fact that any characteristic line of the transport vector field included in the equations crosses the boundary, and the boundary layer is located at the outgoing points.
Here we recover such a boundary layer for the vertical-geostrophic model with a general bathymetry. More precisely, we allow the depth to vanish on the shore, in which case the above-mentioned characteristic lines no longer cross the boundary. However, a boundary layer still appears because the transport vector field $a$ (which is tangential to the boundary) locally converges to a vector field $\overline{a}$ with characteristic lines crossing the boundary.
Citation: Didier Bresch, Jacques Simon. Western boundary currents versus vanishing depth. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 469-477. doi: 10.3934/dcdsb.2003.3.469
2016 Impact Factor: 0.994
http://mathhelpforum.com/calculus/138149-calculating-normal-acceleration.html | 1. ## Calculating Normal Acceleration
Hi guys,
I have a 3D curve that is defined by a parameterization in the form of x(t), y(t), and z(t). In the context of the problem the curve is a waterslide and I need to calculate the normal acceleration at various points along the curve. I can calculate the velocity at any point by a simple conservation of energy from the top of the slide to the point of interest, but what do I need to do to calculate normal acceleration?
2. You have x(t), y(t), z(t).
Velocity vector v = (dx/dt,dy/dt,dz/dt).
Acceleration vector a = dv/dt = $\left(\frac{d^2x}{dt^2},\frac{d^2y}{dt^2},\frac{d^2z}{dt^2}\right)$.
The direction of v is tangential to the curve.
Find the unit vector $\tau = v/|v|$, with $|\tau| = 1$, tangential to the curve.
Tangential acceleration
$a_t = a \cdot \tau$
Normal acceleration
${a_n}^2=a^2-{a_t}^2$.
3. Thank you for helping, but I must admit that I do not totally get it yet. Perhaps someone could help me with a simple example and then I will be able to apply it to my more complicated problem.
Imagine that my curve is defined by the simple parametrization:
x(t)=sin(t)
y(t)=cos(t)
z(t)=.01t
I am interested in the point where t=90 degrees, which represents one specific point on the curve. Based on conservation of energy and the height at that point, I know that the rider will be traveling at 10 m/s. How do I calculate his normal acceleration?
4. $r(t) = ( \sin t, \cos t, 0.01t )$
$r'(t) = \frac{dr(t)}{dt} = ( \cos t, -\sin t, 0.01 )$
$a(t) = r''(t) = \frac{d^2r(t)}{dt^2} = ( -\sin t, -\cos t, 0 )$
$T = \frac{r'(t)}{|r'(t)|}$ - unit vector tangential to the curve
$N = \frac{T'}{|T'|}$ - unit vector normal to the curve
$a(t) = Ta_T + Na_N$ where
$a_T = T\cdot a(t) = \frac{r'(t) \cdot a(t)}{|r'(t)|}$
$|r'(t)| = \sqrt{1.0001}$.
$r'(t) \cdot a(t) = ( \cos t, -\sin t, 0.01 ) \cdot ( -\sin t, -\cos t, 0 ) = 0$, so there is no tangential component, only a normal one, thus
$a(t) = Na_N = ( -\sin t, -\cos t, 0 )$ and $|a(t)| = 1$.
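To double-check the algebra, here is a quick numerical sketch (finite differences; NumPy assumed):

```python
import numpy as np

# Finite-difference check of the worked example: r(t) = (sin t, cos t, 0.01 t),
# evaluated at t = pi/2 (the t = 90 degrees point asked about).
def r(t):
    return np.array([np.sin(t), np.cos(t), 0.01 * t])

t, h = np.pi / 2, 1e-4
v = (r(t + h) - r(t - h)) / (2 * h)            # r'(t)
a = (r(t + h) - 2 * r(t) + r(t - h)) / h**2    # r''(t)

a_T = np.dot(a, v) / np.linalg.norm(v)         # tangential component
a_N = np.sqrt(np.linalg.norm(a)**2 - a_T**2)   # normal component

print(a_T, a_N)  # ~0 and ~1, as derived above
```

Note that $a_N = 1$ here is the normal acceleration of the parametrization, not of the rider, since the parameter $t$ is not physical time. For a rider at an actual speed of 10 m/s, use the curvature $\kappa = a_N/|r'(t)|^2 \approx 1\ \text{m}^{-1}$ (treating the coordinates as metres), giving a physical normal acceleration of roughly $\kappa v^2 \approx 100\ \text{m/s}^2$.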
http://rspa.royalsocietypublishing.org/content/224/1157/209.short | # The Hollow-Cathode Effect and the Theory of Glow Discharges
P. F. Little, A. von Engel
## Abstract
The nature of the processes in the cathode dark space and the negative glow of a glow discharge is not well understood. Moreover, the existing theory leading to relations between the cathode fall in potential, the current density, the width of the dark space and the electric field distribution in it is based on dubious assumptions and does not indicate the important physical processes in operation. Thus further experimental evidence would be valuable in developing the theory. By exploring the electric field between two plane-parallel cathodes with an electron beam, and observing simultaneously the other discharge parameters, new information was obtained. A double (hollow) cathode was used because in a conventional glow discharge the dark space, cathode fall and current density are interdependent; here the cathode separation controls the width of the dark space. When the separation is sufficiently reduced the two negative glows coalesce and the light emitted as well as the cathode current density rise greatly. This is the hollow-cathode effect. Results show that the field in the two dark spaces of a hollow cathode falls linearly with the distance from the cathode, and thus the net space-charge density is constant, as it is known to be in the conventional discharge. From the same observations the dark-space length is found. The conclusions drawn from these results lead to an elementary theory which covers both the hollow and the conventional glow discharge in various gases as indeed it should, since with increasing cathode separation the first goes over into the second type. The main feature is the contribution of the ultra-violet quanta from the glow to the photo-electric emission from the cathodes which is regarded as the essential factor in secondary electron emission. Another result comes from a reconsideration of the motion of positive ions in the dark space based on atomic beam studies and the modern theory of elastic collisions between ions and atoms. 
The discrepancy between earlier experiments showing that ions of energy of the order of the cathode fall in potential arrive at the cathode and classical calculations leading to low ion energies is resolved by allowing for small-angle scattering and charge transfer.
https://www.hpmuseum.org/forum/thread-9366-page-2.html | Finally acquired a Woodstock
01-09-2018, 03:47 AM
Post: #21
rprosperi Senior Member Posts: 3,576 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-09-2018 03:14 AM)mfleming Wrote: I struggled quite a bit to set up a Shapeways shop to make the design available for order (no markup) or download. Quite a pain! Try this link and PM me if there are problems. You'll like the case - no tape to hold the halves together.
https://www.shapeways.com/product/6NQAST...d=64514022
Mark,
I believe I somehow missed this case when you introduced it, and just saw this now. A 1-piece case is a very nice answer for this nagging problem.
Is this case designed for 'normal' cells with the button top on the positive end, or for 'flat top' cells like the originals that came with a Woodstock?
Also, do you have a recommended source for the required spring for the 'far' end of the holder? I've seen similar springs for '41 battery holders, but not sure if the case geometry is the same.
Nice!
--Bob Prosperi
01-09-2018, 03:48 PM
Post: #22
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
Hi Bob,
I'm using normal buttontop Eneloop cells and the replacement springs for the 41 battery case found on TAS. The battery holder snaps into the cavity with a nice fit and spring pressure keeps the batteries pressed tightly against the contacts inside. The holder requires a firm push downward to remove, so there's very little chance the holder will pop out of the calculator. The TAS springs use a thicker wire than the stock 41 springs and are almost fully compressed when the batteries are inserted.
The only change I made to the original design was to remove a couple of floating pieces. At the bottom of the case are two projections that slip into the well behind the two battery contacts and help hold the bottom of the case in place (excellent design!). It looked like the original designer wanted a hook to rise from the end of the two projections to provide further latching, but these were rejected during the design check. Not really needed in my opinion.
Having used an original battery holder and another two-piece Shapeways design, I can definitely recommend this one. It's a very clever piece of work by the original designer! Still haven't found time to match the original case color though
~Mark
Who decides?
01-09-2018, 04:04 PM (This post was last modified: 01-09-2018 04:29 PM by rprosperi.)
Post: #23
rprosperi Senior Member Posts: 3,576 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-09-2018 03:48 PM)mfleming Wrote: I'm using normal buttontop Eneloop cells and the replacement springs for the 41 battery case found on TAS. The battery holder snaps into the cavity with a nice fit and spring pressure keeps the batteries pressed tightly against the contacts inside. The holder requires a firm push downward to remove, so there's very little chance the holder will pop out of the calculator. The TAS springs use a thicker wire than the stock 41 springs and are almost fully compressed when the batteries are inserted.
Excellent, thanks for these details Mark. Gonna go get me one (well, 2).
Do you have a link or item # for the springs on TAS (or elsewhere)? I've done a few searches, and besides finding plenty of 41 battery cases (ranging up to $75+ !) I don't see a listing for just the springs. I'll search old posts as well; IIRC, Sylvain found some somewhere a couple years ago. Update - Found it: https://www.ebay.com/itm/6x-Copper-Spring-Coil-and-6x-Contact-For-Dual-18650-Lithium-Batteries/253193465702 Does this look like it will work (often the strand that connects across the bottom is diagonal, this one is straight)? --Bob Prosperi 01-09-2018, 04:19 PM (This post was last modified: 01-10-2018 02:56 PM by Krauts In Space.) Post: #24 Krauts In Space Junior Member Posts: 18 Joined: Jan 2018 RE: Finally acquired a Woodstock I fished my Woody in the bay right 10 minutes ago. Googled for HP25, lead me to a fresh auction. 4 guys "just visiting", no bids (peeeeew!!!), buyed it, got it. A fairly good HP25C with both manuals, charger (US, maybe UK, no European), pouch, box and 2 batt holders for 150 €/179 U$
Last auction ended at 161 €/192 U$ - without manuals and box.
HP: 20S 25C 32S 33E 33s 35s 41CV 42S 39GS 48SX 71B
Casio: FX702P
Swissmicros: DM15L
Need Forth71B
01-09-2018, 05:17 PM
Post: #25
Dave Frederickson Senior Member Posts: 1,678 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-09-2018 03:48 PM)mfleming Wrote: Still haven't found time to match the original case color though
I was less than successful dyeing some 71B port covers dark brown. Do you have a recipe that works?
Also, if you like, I can add your battery holder to my Shapeways shop.
https://www.shapeways.com/shops/hpparts
Dave
01-09-2018, 05:34 PM
Post: #26
Krauts In Space Junior Member Posts: 18 Joined: Jan 2018
RE: Finally acquired a Woodstock
(01-09-2018 05:17 PM)Dave Frederickson Wrote:
(01-09-2018 03:48 PM)mfleming Wrote: Still haven't found time to match the original case color though
I was less than successful dyeing some 71B port covers dark brown. Do you have a recipe that works?
Also, if you like, I can add your battery holder to my Shapeways shop.
https://www.shapeways.com/shops/hpparts
Dave
Give it a try with black tea, test it before to see if the color meets your mood
01-09-2018, 05:52 PM (This post was last modified: 01-10-2018 04:30 AM by Dave Frederickson.)
Post: #27
Dave Frederickson Senior Member Posts: 1,678 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-09-2018 05:34 PM)Krauts In Space Wrote:
(01-09-2018 05:17 PM)Dave Frederickson Wrote: I was less than successful dying some 71B port covers dark brown. Do you have a recipe that works?
Give it a try with black tea, test it before to see if the color meets your mood
That's a novel idea. I was using Rit dye, but if the color I'm going for is brown then that opens up other options. I wonder if coffee would work - maybe a dark roast like a Kona peaberry.
Dave
01-09-2018, 07:36 PM
Post: #28
larthurl Member Posts: 102 Joined: Nov 2017
RE: Finally acquired a Woodstock
(01-09-2018 03:14 AM)mfleming Wrote:
(01-08-2018 10:06 PM)larthurl Wrote: Hello Mark:
do you have a link to the shapeways version of the HP-25 battery holder you did?
I'd like to get one for my Dad's HP-25.
Thanks
.....Art
Hi Art,
I struggled quite a bit to set up a Shapeways shop to make the design available for order (no markup) or download. Quite a pain! Try this link and PM me if there are problems. You'll like the case - no tape to hold the halves together.
https://www.shapeways.com/product/6NQAST...d=64514022
~Mark
Thank you Mark. I just placed my order. Looking forward to seeing if new batteries will bring this heirloom to life.
btw, do you recommend AA alkaline or Eneloop AA rechargeables?
.....Art
01-09-2018, 09:13 PM
Post: #29
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
(01-09-2018 04:04 PM)rprosperi Wrote: ...snip...
Update - Found it:
https://www.ebay.com/itm/6x-Copper-Spring-Coil-and-6x-Contact-For-Dual-18650-Lithium-Batteries/253193465702
Does this look like it will work (often the strand that connects across the bottom is diagonal, this one is straight)?
That's the one. I think you can order just the springs if you don't need the other parts. I'm thinking of shortening the spring a coil or two so it's not completely compressed. I'll post a picture so you can see what I mean.
(01-09-2018 05:17 PM)Dave Frederickson Wrote: ...snip...
Also, if you like, I can add your battery holder to my Shapeways shop.
https://www.shapeways.com/shops/hpparts
Dave
That would be great (I've bought a few things from there!) I like the idea of HP parts in as few places as possible, and I'm unlikely to make any of my own. Just provide a reference to the original author, Chris Osborn (aka "FozzTexx") and the Creative Commons license (I'm going to add those details soon if I leave this up for a while).
(01-09-2018 05:34 PM)Krauts In Space Wrote:
(01-09-2018 05:17 PM)Dave Frederickson Wrote: I was less than successful dying some 71B port covers dark brown. Do you have a recipe that works?
Give it a try with black tea, test it before to see if the color meets your mood
Love it
(01-09-2018 07:36 PM)larthurl Wrote: Thank you Mark. I just placed my order. Looking forward to seeing if new batteries will bring this heirloom to life.
btw, do you recommend AA alkaline or Eneloop AA rechargeables?
.....Art
Ka-ching! Now I can frame and display on the wall my first non-existent dollar bill from my first no-markup sale!
I'd go with the rechargeable batteries. Alkalines may last a little longer, but we're talking battery life of only a few hours. That'd get pretty expensive if you're buying a pair every day or two. A big plus with the Eneloop (or Amazon Basics-labeled version) is the low discharge characteristic. Holds a charge for many months without trickling away...
~Mark
Who decides?
01-09-2018, 09:57 PM (This post was last modified: 01-09-2018 10:01 PM by mfleming.)
Post: #30
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
Here are a couple of pictures to illustrate the battery case with springs. The first has a 41 battery holder using the TAS springs to show them in a fully extended position. There are five coil loops for each. The battery case is flush with the back of the calculator, there are grips at the top to push the case down, and the gap at the bottom is the distance it must traverse for the top latch to come free.
This is the battery case removed and rotated 180 degrees. You can see how compressed the spring is above the battery button on the right. On that one I may clip a coil loop off and bend the tip inwards for contact with the battery.
Hope the pictures help!
~Mark
Attached File(s) Thumbnail(s)
01-10-2018, 03:01 AM
Post: #31
rprosperi Senior Member Posts: 3,576 Joined: Dec 2013
RE: Finally acquired a Woodstock
Thanks a lot Mark, all very helpful.
--Bob Prosperi
01-10-2018, 07:54 AM (This post was last modified: 01-10-2018 02:55 PM by Krauts In Space.)
Post: #32
Krauts In Space Junior Member Posts: 18 Joined: Jan 2018
RE: Finally acquired a Woodstock
(01-09-2018 05:52 PM)Dave Frederickson Wrote:
(01-09-2018 05:34 PM)Krauts In Space Wrote: Give it a try with black tea, test it before to see if the color meets your mood
That's a novel idea. I was using Rit dye, but if the color I'm going for is brown then that opens up other options. I wonder if coffee would work - maybe a dark roast like a Kona peaberry.
Dave
I could imagine lots more "green" dyes, like: red wine, black coffee, black tea, red beets, brown balsamic vinegar, "molassed" (chewing) tobacco. Or try some sort of cola (Pepsi or Coke - it's your choice ). (Chewing) tobacco might work best, but ... the smell??? X)
Let's go scientific! Try any sort of dyeing liquid or stuff and see what comes out.
The resulting color might depend on the specific combination of plastic and dyeing fluid.
Even pink could be possible B)
HP: 20S 25C 32S 33E 33s 35s 41CV 42S 39GS 48SX 71B
Casio: FX702P
Swissmicros: DM15L
Need Forth71B
01-10-2018, 06:30 PM
Post: #33
larthurl Member Posts: 102 Joined: Nov 2017
RE: Finally acquired a Woodstock
(01-09-2018 09:13 PM)mfleming Wrote: Ka-ching! Now I can frame and display on the wall my first non-existent dollar bill from my first no-markup sale!
I'd go with the rechargeable batteries. Alkalines may last a little longer, but we're talking battery life of only a few hours. That'd get pretty expensive if you're buying a pair every day or two. A big plus with the Eneloop (or Amazon Basics-labeled version) is the low discharge characteristic. Holds a charge for many months without trickling away...
~Mark
Do you think there'd be any less leakage with Eneloops than with traditional alkalines?
It took me many years to learn my lessons on leaving batteries in electronic devices for long periods.
.....Art
01-10-2018, 06:33 PM
Post: #34
larthurl Member Posts: 102 Joined: Nov 2017
RE: Finally acquired a Woodstock
(01-09-2018 09:57 PM)mfleming Wrote: Here are a couple of pictures to illustrate the battery case with springs. The first has a 41 battery holder using the TAS springs to show them in a fully extended position. There are five coil loops for each. The battery case is flush with the back of the calculator, there are grips at the top to push the case down, and the gap at the bottom is the distance it must traverse for the top latch to come free.
This is the battery case removed and rotated 180 degrees. You can see how compressed the spring is above the battery button on the right. On that one I may clip a coil loop off and bend the tip inwards for contact with the battery.
Hope the pictures help!
~Mark
Let us know how the clipping goes.
I ordered the springs too, and am curious what tool needed to cut the springs to size.
.....Art
01-11-2018, 06:56 AM
Post: #35
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
(01-10-2018 06:30 PM)larthurl Wrote: Do you think any less leakage with Enerloops over traditional Alkaline?
It took me many years to learn my lessons on leaving batteries in electronic devices for long periods.
.....Art
The HP25 Owner's Manual states that a rechargeable battery pack should last from 2 to 6 hours. I've experienced battery life at the low end of that range with Eneloop rechargeables. Alkaline batteries wouldn't last much longer.
I know what you mean about leaving batteries, even fairly new ones, in a lightly used electronic device. The only lesson I've been able to discern after decades of experience is: the more expensive the device, the more likely a leak. Seriously. It's like they want to ruin your whole day.
If you've ordered springs and also bought the contacts, see if the contacts also work if bent properly. I suspect they might be a good alternative to the springs, but I don't have any to test that assumption. For my part, I'll let you know how trimming down the spring coils goes...
~Mark
01-11-2018, 10:10 AM (This post was last modified: 01-11-2018 10:12 AM by Dieter.)
Post: #36
Dieter Senior Member Posts: 2,398 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-11-2018 06:56 AM)mfleming Wrote: The HP25 Owners Manual states that a rechargable battery pack should last from 2 to 6 hours. I've experienced battery life at the low end of that range with Eneloop rechargables.
The operating time in the manual refers to a capacity of 400...500 mAh, as was usual for NiCds in the Seventies. The Eneloops have 2000 mAh, i.e. at least 4x this capacity and thus 4x the expected operating time. So if you only get 3 hours from a correctly charged set of Eneloops, something must have gone wrong.
(01-11-2018 06:56 AM)mfleming Wrote: Alkaline batteries wouldn't last mich longer.
AA size Alkalines have a somewhat higher nominal capacity than NiMHs (2800 or 3000 mAh are typical values) but this does not always mean that the device can be powered 40 or 50% longer.
But again, if you say you can operate the calculator for only 3 hours from a set of freshly and correctly charged AA Eneloops, there definitely is something wrong. This would mean that the device constantly draws a current of about 700 mA (!). How do you charge these batteries?
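A quick back-of-the-envelope check of that figure, using the numbers above (an illustrative Python snippet, not part of the original post):

```python
# Rough current-draw estimate: draining a nominally 2000 mAh cell
# in about 3 hours implies an average draw of capacity / time.
capacity_mah = 2000        # nominal AA Eneloop capacity, as stated above
observed_hours = 3         # reported operating time
draw_ma = capacity_mah / observed_hours
print(round(draw_ma), "mA")   # about 667 mA, i.e. on the order of 700 mA
```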
Dieter
01-11-2018, 03:36 PM (This post was last modified: 01-11-2018 03:53 PM by mfleming.)
Post: #37
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
(01-11-2018 10:10 AM)Dieter Wrote: ...snip...
But again, if you say you can operate the calculator only 3 hours from a set of freshly and correctly charged AA Eneloops there definitely is something wrong. This would mean that the device constantly draws a current of 700 mA (!). How do you charge these batteries?
Dieter
Well, the batteries were recharged but unused for several months which may have affected total capacity. Also, not exactly a stock HP25, as evidenced by the IR diode at upper right. I had the ACT replacement configured for full speed, internal ROM as I worked my way through all of the on-board Application Program examples, making occasional program listings. Not sure what the IR diode current draw is, but it has a decent print range.
The calculator went a little wonky just past the two-hour usage mark, which I (perhaps hastily) interpreted as time for a recharge. I charge the batteries with an external Panasonic BQ-CC17 charger. If given a vintage wall charger, I'd heave it as hard and as far away as possible
~Mark
01-13-2018, 12:20 AM
Post: #38
mfleming Senior Member Posts: 440 Joined: Jul 2015
RE: Finally acquired a Woodstock
(01-10-2018 06:33 PM)larthurl Wrote: Let us know how the clipping goes.
I ordered the springs too, and am curious what tool needed to cut the springs to size.
.....ARt
Here's the result of clipping two full turns from the spring, as shown in the left-hand side picture and the amount of spring compression with batteries installed on the right-hand side picture. The spring was cut with heavy duty wire cutters, and note how the end of the right-hand spring was bent inwards to make contact with the battery button.
I'll have to say this was a useful modification. The battery case is much easier to snap in and out, but the spring tension is still enough to maintain good physical and electrical contact between battery and spring. That likely puts less stress on the calculator battery contacts as well.
~Mark
Attached File(s) Thumbnail(s)
01-13-2018, 12:37 AM
Post: #39
rprosperi Senior Member Posts: 3,576 Joined: Dec 2013
RE: Finally acquired a Woodstock
(01-13-2018 12:20 AM)mfleming Wrote: I'll have to say this was a useful modification. The battery case is much easier to snap in and out, but the spring tension is still enough to maintain good physical and electrical contact between battery and spring. That likely puts less stress on the calculator battery contacts as well.
Thanks for sharing your results Mark.
Did you cut both sides, or only the contact that touches the positive 'button' terminal? I presume both, but the photo could imply just one (or more likely was used to show the difference before/after trimming). Just wanted to make sure, as my parts are on their way.
--Bob Prosperi
01-13-2018, 01:33 AM
Post: #40
Bill Duncan Member Posts: 62 Joined: Jan 2016
RE: Finally acquired a Woodstock
Nostalgia time. Finally got my hp-29c going again after a few years. Fortunately I had taken the battery out, so the little leakage encountered didn't impact the machine.
What I finally ended up doing was cutting the crossbar out of the original battery. Spread the sides out a bit and pop the old cells out. Clean everything up a bit, pink eraser on the spring and the calculator contacts. Pop some new cells in. I had to shim out the ends a bit with some aluminum foil. Presto, the "Error" display. Back in business..
https://support.10xgenomics.com/de-novo-assembly/software/pipelines/latest/using/demultiplex
# Generating FASTQs with supernova demux
The supernova demux pipeline is the first step in analyzing a Chromium sequencer run. It takes an Illumina BCL output folder, demultiplexes based on the 8 bp sample index read, and generates FASTQs for the R1 and R2 paired-end reads as well as the sample index.
While this pipeline runs Illumina bcl2fastq as one of its stages, it produces a FASTQ output folder whose layout is better optimized for parallelized analysis than the standard file layout produced by bcl2fastq alone.
If your sequencing run includes extra bases in barcode or sample index reads, or you only want to demultiplex part of your flowcell, you will need to follow the instructions on Generating FASTQs with bcl2fastq.
For the following example, it is assumed that you have already installed:
• Supernova package such that supernova demux --help returns without errors.
• Illumina's bcl2fastq such that either bcl2fastq --version (v2.17 and higher) or configureBclToFastq.pl --help (v1.8.4) returns without errors.
The supernova demux command requires only the path to a BCL sequencer output folder:
```
$ supernova demux --run=/sequencing/140101_D00123_0111_AHAWT7ADXX

supernova demux
Copyright (c) 2016 10x Genomics, Inc.  All rights reserved.
-----------------------------------------------------------------------------
Martian Runtime - v2.3.3

Running preflight checks (please wait)...
```

(Martian is 10x Genomics' pipeline execution framework.)

supernova demux will first run "preflight checks" to ensure that there are no critical errors with the arguments you provided or in your environment settings. Following the preflight checks, the runtime will begin running pipeline stages:

```
Running preflight checks (please wait)...
2016-05-01 12:00:00 [runtime] (ready)          ID.HAWT7ADXX.BCL_PROCESSOR_CS.BCL_PROCESSOR.ANALYZE_RUN
2016-05-01 12:00:00 [runtime] (ready)          ID.HAWT7ADXX.BCL_PROCESSOR_CS.BCL_PROCESSOR.BARCODE_AWARE_BCL2FASTQ
2016-05-01 12:00:03 [runtime] (split_complete) ID.HAWT7ADXX.BCL_PROCESSOR_CS.BCL_PROCESSOR.ANALYZE_RUN
2016-05-01 12:00:03 [runtime] (run:local)      ID.HAWT7ADXX.BCL_PROCESSOR_CS.BCL_PROCESSOR.ANALYZE_RUN.fork0.chnk0.main
...
```

If you encounter any preflight errors, please refer to the Troubleshooting page.

Once the supernova demux pipeline has successfully completed, the output can be found in a new folder named with the serial number of the flowcell processed by supernova demux. The flowcell serial number in this example is HAWT7ADXX:

```
$ ls -l
drwxr-xr-x 4 jdoe jdoe     4096 May  1 13:39 HAWT7ADXX
```

The demultiplexed FASTQ files can be found in outs/fastq_path:

```
$ ls -l HAWT7ADXX/outs/fastq_path/
-rw-r--r-- 1 jdoe jdoe  3071801 May  1 13:39 read-I1_si-AAACGTAC_lane-001-chunk-000.fastq.gz
...
-rw-r--r-- 1 jdoe jdoe 52246181 May  1 13:39 read-RA_si-GTGGAATT_lane-001-chunk-000.fastq.gz
-rw-r--r-- 1 jdoe jdoe  3759265 May  1 13:39 read-RA_si-X_lane-001-chunk-000.fastq.gz
```
It is important not to change the naming of these FASTQ files, as the supernova pipeline depends on the specific file structure produced by supernova demux. The layout of the pipestance output folder is described in more detail in the Pipestance Structure section.
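Because the pipeline depends on this exact naming, scripts that post-process the output often need to pick the fields back out of the filenames. Below is a hypothetical helper (not part of supernova; the pattern is simply inferred from the listing above):

```python
import re

# Filename convention inferred from the listing above:
#   read-<type>_si-<sample index>_lane-<lane>-chunk-<chunk>.fastq.gz
# The sample index is either a base sequence or "X" for unresolved reads.
FASTQ_RE = re.compile(
    r"read-(?P<read>[A-Z0-9]+)"
    r"_si-(?P<index>[ACGTX]+)"
    r"_lane-(?P<lane>\d+)-chunk-(?P<chunk>\d+)\.fastq\.gz$"
)

def parse_demux_name(name):
    """Split a supernova demux FASTQ filename into its fields."""
    m = FASTQ_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected FASTQ name: {name}")
    fields = m.groupdict()
    fields["lane"] = int(fields["lane"])
    fields["chunk"] = int(fields["chunk"])
    return fields

print(parse_demux_name("read-RA_si-GTGGAATT_lane-001-chunk-000.fastq.gz"))
```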
https://opendsa-server.cs.vt.edu/ODSA/Books/Everything/html/ProblemSolving.html
# 21.2. An Introduction to Problem Solving
## 21.2.1. An Introduction to Problem Solving
This document presents a brief overview of selected material from four textbooks (see [FL95, Lev94, WL99, Zei07] in the bibliography). Reading any of these books should help you to become a better problem solver.
To successfully solve any problem, the most important issue is to get actively involved. Levine [Lev94] calls this “The Principle of Intimate Engagement”. Writers on problem solving often use terms like “roll up your sleeves” and “get your hands dirty”. It means actively engaging with the problem and doing some work toward a solution. For easier problems, you will “see” an answer fairly quickly once you actively engage, and the issue then is to work through to completion. For most problems, the ones that matter most, you won't “see” an answer right away. For these problems, you will have to use various strategies even to get started on a potential solution.
Problem solvers can be categorized as either “engagers” or “dismissers”. Engagers typically have a history of success with problem solving; dismissers have a history of failure. Of course, you might be an engager for one type of problem and a dismisser for another. Many students do significant problem solving for recreation: Sudoku puzzles, computer games with meaningful problem-solving tasks, and all sorts of “puzzles”. They might spend hours engaged with “interesting” problems. Yet these same students might dismiss math and analytical computer science problems due to a historical lack of success. If you have this problem, then to be successful in life you will need to find ways to get over what is obviously a mental block. You need to learn to transfer successful problem-solving strategies from one part of your life to other parts.
Levine uses examples of trying to repair a clothes dryer or a wobbly table. How to solve the problem might not be immediately obvious. The first step is to take the effort to look at the problem. In this example, it starts by opening the back of the dryer, or looking under the table. This initial investigation can often lead to a solution. It is a matter of adopting the mental attitude of being willing to take the risk and make the effort. Then it is a matter of working with the problem for a while to see what can be done. At that point, a possible solution path might open up. But nothing can be solved unless you are willing to take the time and make the effort. All of the heuristics for solving problems start with that.
Fogler and LeBlanc [FL95] discuss the differences between effective and ineffective problem solvers.
The most important factors that distinguish between ineffective and effective problem solvers are the attitudes with which they approach the problem, their aggressiveness in the problem-solving process, their concern for accuracy, and the solution procedures they use. For example, effective problem solvers believe that problems can be solved through the use of heuristics and careful persistent analysis, while ineffective problem solvers think, “You either know it or you don’t”.
Effective problem solvers become very active in the problem-solving process: They draw figures, make sketches, and ask questions of themselves and others. Ineffective problem solvers don’t seem to understand the level of personal effort needed to solve the problem. Effective problem solvers take great care to understand all the facts and relationships accurately. Ineffective problem solvers make judgments without checking for accuracy… By approaching a situation using the characteristic attitudes and actions of an effective problem solver, you will be well on your way to finding the real problem and generating an outstanding solution.
### 21.2.1.1. Investigation and Argument
Problem solving has two parts [Zei07]: the investigation and the argument. Students are too used to seeing only the argument in their textbooks and lectures. Unfortunately, to be successful in school (and in life after school), one needs to be good at both, and to understand the differences between these two phases of the process. To solve the problem, you must investigate successfully. Then, to give the answer to your client (solution on homework or exam, or report to boss), you need to be able to make the argument in a way that gets the solution across clearly and succinctly. The argument phase involves good technical writing skills—the ability to make a clear, logical argument. Understanding standard proof techniques can help you. The three most-used proof techniques are deduction (direct proof), contradiction, and induction.
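As a small illustration of the argument phase (a standard example, not from the text), here is a direct induction argument for the first closed-form summation most students meet:

```latex
% Claim: \sum_{k=1}^{n} k = n(n+1)/2.
% Base case n = 1: the sum is 1 = 1(1+1)/2.
% Inductive step: assume the claim holds for n. Then
\sum_{k=1}^{n+1} k \;=\; \frac{n(n+1)}{2} + (n+1)
                   \;=\; \frac{(n+1)(n+2)}{2},
% which is exactly the claim for n+1.
```

Note how the argument is short and self-contained, even though the investigation that first suggested the formula might have been much messier.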
### 21.2.1.2. Heuristics for Problem Solving “In the Small”
Write it down After motivation and mental attitude, the most important limitation on your ability to solve problems is biological: While you have lots of storage capacity, your “working memory” is tiny. For active manipulation, you can only store $7\pm 2$ pieces of information. You can’t change this biological fact. All you can do is take advantage of your environment to get around it. That means, you must put things into your environment to manipulate them. Most often, that translates to writing things down, and doing it in a way that lets you manipulate aspects of the problem (correct representation).
Look for special features Examples include cryptogram addition problems. You might recognize that a digit on one end must be a 1, or in other circumstances that one of the digits must be a zero. Consider the following cryptogram puzzle, where you must replace the letters with digits to make a valid addition problem. In this case, we should recognize two special features: (1) the leading digit of the answer must be a one, and (2) one of the rightmost digits must be a zero. Recognizing these special features puts us well on the way to solving the full problem.
A D
+ D I
-----
D I D
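These special features can also be confirmed by brute force, in the spirit of “get your hands dirty” (a quick Python sketch; the search itself is not part of the text):

```python
from itertools import permutations

# Brute-force the cryptarithm AD + DI = DID, with A, D, I distinct digits
# and no leading zeros. AD = 10*A + D, DI = 10*D + I, DID = 101*D + 10*I.
solutions = []
for a, d, i in permutations(range(10), 3):
    if a == 0 or d == 0:
        continue  # leading digits may not be zero
    if (10 * a + d) + (10 * d + i) == 101 * d + 10 * i:
        solutions.append((a, d, i))

print(solutions)  # [(9, 1, 0)]: 91 + 10 = 101
```

The single solution exhibits both features noted above: the leading digit D of the answer is 1, and I is 0.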
Go to the extremes Study boundary conditions of the problem. For lots of problems, it helps to start with the small cases, which are one form of boundary condition.
Simplify A version of going to extremes is to simplify the problem. This might give a partial solution that can be extended to the original problem.
Penultimate step What precondition must take place before the final solution step is possible? If you recognize this, then getting to the penultimate step leads to the final solution, and solving the penultimate problem might be easier. Towers of Hanoi gives an excellent example of finding a solution from looking at the penultimate step.
Lateral thinking Be careful about being led into a blind alley. Using an inappropriate problem-solving strategy might blind you to the solution.
Get your hands dirty Sometimes you need to just “play around” with the problem to get some initial insight. For example, when trying to see the closed-form solution to a summation, it's often a good idea to start by writing the first few partial sums down.
Wishful thinking A version of simplifying the problem. Sometimes you can transform the problem into something easy, or see how to get the start position to something that you could “wish” was the solution. That might be a smaller step to the actual solution.
Symmetry Look for symmetries in the problem. They might give clues to the solution.
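The penultimate-step heuristic is easy to see in code for Towers of Hanoi (an illustrative Python sketch, not from the text):

```python
# Towers of Hanoi: before the largest disk can move from src to dst
# (the final step for that disk), the penultimate step must already be
# done -- the n-1 smaller disks must all sit on the spare peg.
def hanoi(n, src, dst, spare, moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, spare, dst, moves)  # penultimate step: clear the way
    moves.append((src, dst))              # now the largest disk can move
    hanoi(n - 1, spare, dst, src, moves)  # rebuild the smaller disks on top
    return moves

print(len(hanoi(3, 'A', 'C', 'B')))  # 7 moves, i.e. 2**3 - 1
```

Recognizing the penultimate subproblem (move n-1 disks to the spare peg) is what makes the whole recursion fall out.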
### 21.2.1.3. Problem Solving “In the Large”
There are lots of standard techniques for solving larger and messier “real-world” problems (the type of problems often encountered by engineers in their professional lives). Fogler and LeBlanc [FL95] discuss such techniques in detail. Here is a brief outline of an overall process for disciplined problem solving of “real world” problems.
Problem Definition The client for a problem will often not state it in the correct way. Your first step toward a solution is often to define the “real” problem that needs to be solved. It might not be obvious what this is. To get at the “real” problem, you will need to begin by studying it, collecting information about it, and talking to people familiar with the problem. You might consider restating the problem in a number of ways. Define the desired state. Then make restatements of the current problem formulation that can trigger new insights. Consider looking at the problem statement by making the opposite statement. Alternatively, perhaps we can change the surrounding situation such that the current problem can be “made OK” rather than solved directly.
Generate solutions Once you have settled on a problem statement, you need to generate and analyze a range of possible solutions. Blockbusting and brainstorming techniques can generate a list of possible solutions to study.
Decide the Course of Action There are a number of standard techniques for selecting from a given list of potential actions (e.g., situation analysis, Pareto analysis, K.T. problem analysis, decision analysis).
Implement the Solution Getting approval may be the necessary first step to implementation. Once that is taken care of, again there are a number of standard techniques for planning implementations (e.g., Gantt charts, critical path analysis).
Evaluation Evaluation should be built into all phases of the problem solving process.
### 21.2.1.4. Pairs Problem Solving
Whimbey & Lochhead [WL99] discuss a technique for pair problem solving that separates the pair into a solver and a listener. The listener plays an active role, being responsible for keeping the problem solver on track and requiring the problem solver to vocalize their process. The listener is actively checking for errors by the problem solver. See the handout for more details on this.
### 21.2.1.5. Errors in Reasoning
Again from Whimbey & Lochhead [WL99] comes a description of how people go wrong in problem solving. Specifically related to homework and tests, typical problems stem from failing to read the problem carefully. Thus, students will often fail to use all relevant facts, or plainly misinterpret the problem. Other typical mistakes come from failing to be systematic, or worse yet, being just plain careless. All of this indicates that many of the points lost by students on tests and homework are not caused by “not knowing the material”, but rather by not executing the problem-solving process effectively. Those are points that don't need to be lost.
Comprehension in reading is a major factor in success. Proper comprehension of technical material requires careful reading, and often re-reading. There is no such thing as speed reading with comprehension. The advice of the speed reading advocates, such as “read in thought groups”, “skim for concepts”, and “don't re-read”, is all ineffective.
### 21.2.1.6. References
[FL95] H. Scott Fogler and Steven E. LeBlanc. Strategies for Creative Problem Solving. Prentice Hall, 1995.
[Lev94] Marvin Levine. Effective Problem Solving. Prentice Hall, second edition, 1994.
[WL99] Arthur Whimbey and Jack Lochhead. Problem Solving & Comprehension. Lawrence Erlbaum Associates, sixth edition, 1999.
[Zei07] Paul Zeitz. The Art and Craft of Problem Solving. John Wiley & Sons, second edition, 2007.
http://planetmath.org/transfiniterecursion
# transfinite recursion
Transfinite recursion, roughly speaking, is a statement about the ability to define a function recursively using transfinite induction. In its most general and intuitive form, it says
###### Theorem 1.
Let $G$ be a (class) function on $V$, the class of all sets. Then there exists a unique (class) function $F$ on $\mathbf{On}$, the class of ordinals, such that
$F(\alpha)=G(F|\alpha)$
where $F|\alpha$ is the function whose domain is $\operatorname{seg}(\alpha):=\{\beta\mid\beta<\alpha\}$ and whose values coincide with $F$ on every $\beta\in\operatorname{seg}(\alpha)$. In other words, $F|\alpha$ is the restriction of $F$ to $\operatorname{seg}(\alpha)$.
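A standard illustration (not part of the original entry): ordinal addition $\alpha+\beta$ is defined by transfinite recursion on $\beta$, where the function $G$ of the theorem reads the required values off the restriction $F|\beta$:

```latex
% Ordinal addition, defined by transfinite recursion on \beta:
\begin{align*}
  \alpha + 0           &= \alpha,\\
  \alpha + (\beta + 1) &= (\alpha + \beta) + 1,\\
  \alpha + \lambda     &= \sup_{\beta < \lambda}\,(\alpha + \beta)
      \quad\text{for limit ordinals } \lambda.
\end{align*}
```

At a limit ordinal $\lambda$ the definition genuinely uses the entire restriction $F|\lambda$, not just a single preceding value, which is why the theorem feeds $G$ the function $F|\alpha$ rather than just the previous value.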
Notice that the theorem above is not provable in ZF set theory as stated, since $G$ and $F$ are both classes, not sets. In order to prove a version of this statement, one way of getting around the difficulty is to convert both $G$ and $F$ into formulas and modify the statement, as follows:
Let $\varphi(x,y)$ be a formula such that
$\forall x\exists y\forall z(\varphi(x,z)\leftrightarrow z=y).$
Think of $G=\{(x,y)\mid\varphi(x,y)\}$. Then there is a unique formula $\psi(\alpha,z)$ (think of $F$ as $\{(\alpha,z)\mid\psi(\alpha,z)\}$) such that the following two sentences are derivable using ZF axioms:
1. $\forall x\exists y\forall z\big(\mathbf{On}(x)\rightarrow(\psi(x,z)\leftrightarrow z=y)\big),$ where $\mathbf{On}(x)$ means “$x$ is an ordinal”,
2. $\forall x\forall y\Big(\mathbf{On}(x)\rightarrow\big(\psi(x,y)\leftrightarrow\exists f(A\wedge B\wedge C\wedge D)\big)\Big)$, where
• $A$ is the formula “$f$ is a function”,
• $B$ is the formula “$\operatorname{dom}(f)=x$”,
• $C$ is the formula $\forall z\big(z\in x\rightarrow\varphi(f|z,f(z))\big)$, and
• $D$ is the formula $\varphi(f,y)$.
A stronger form of the transfinite recursion theorem says:
###### Theorem 2.
Let $\varphi(x,y)$ be any formula (in the language of set theory). Then the following is a theorem: assume that $\varphi$ satisfies the property that, for every $x$, there is a unique $y$ such that $\varphi(x,y)$. If $A$ is a well-ordered set (well-ordered by $\leq$), then there is a unique function $f$ defined on $A$ such that
$\varphi(f|\operatorname{seg}(s),f(s))$
for every $s\in A$. Here, $\operatorname{seg}(s):=\{t\in A\mid t<s\}$, the initial segment of $s$ in $A$.
The above theorem is actually a collection of theorems, known as a theorem schema, where each theorem corresponds to a formula. The other difference between this and the previous theorem is that this theorem is provable in ZF, because the domain of the function $f$ is now a set.
Title: transfinite recursion. Author: CWoo (3771). Last modified: 2013-03-22. Classification: msc 03E45, msc 03E10. Related: WellFoundedRecursion, TransfiniteInduction.
https://www.physicsforums.com/threads/basis-of-a-vector-space.202746/ | # Basis of a vector space
1. Dec 5, 2007
### mathboy
Maximal subspace
Problem: Prove that every vector space V has a maximal subspace, i.e. a proper subspace that is not properly contained in a proper subspace of V.
I let A be the collection of all proper subspaces of V, but I can't prove that every totally ordered subcollection of A has an upper bound in A. The problem is that the union of proper subspaces is not necessarily a proper subspace of V. What do I do now?
Last edited: Dec 5, 2007
2. Dec 5, 2007
### morphism
But the union of a chain of subspaces is a subspace.
3. Dec 5, 2007
### JasonRox
Think basis elements.
4. Dec 5, 2007
### andytoh
But it has to be a proper subspace of V.
For example { span{1}, span{1,x}, span{1,x,x^2}, span{1,x,x^2,x^3}, ..... } is a chain of proper subspaces of R[x], but its union is all of R[x], which is not a proper subspace of R[x].
Last edited: Dec 5, 2007
5. Dec 5, 2007
### andytoh
JasonRox's idea is good, take a basis of V and delete one element. The span of that would have to be a maximal subspace.
But I'm assuming that mathboy wants to use Zorn's lemma. In that case choose any v in V, and let A be the collection of all subspaces not containing v. This time the upper bound of any chain will be a proper subspace. The maximal element of A would be a maximal subspace of V.
Last edited: Dec 5, 2007
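As a sanity check, andytoh's poset can be enumerated exhaustively in the smallest interesting case, GF(2)², a hypothetical finite stand-in chosen here for illustration (Zorn's lemma is only genuinely needed when V is infinite-dimensional):

```python
from itertools import combinations, product

# All vectors of GF(2)^2.
vectors = list(product([0, 1], repeat=2))

def add(u, w):
    return tuple((a + b) % 2 for a, b in zip(u, w))

def is_subspace(S):
    # Over GF(2), a set containing 0 and closed under addition is a subspace
    # (scalar multiplication by 0 and 1 is automatic).
    return (0, 0) in S and all(add(u, w) in S for u in S for w in S)

subspaces = [frozenset(S)
             for r in range(1, len(vectors) + 1)
             for S in combinations(vectors, r)
             if is_subspace(set(S))]

v = (0, 1)
A = [S for S in subspaces if v not in S]            # the poset in the argument
maximal = [S for S in A if not any(S < T for T in A)]

# Every maximal element of A is a maximal *proper* subspace of the whole space.
full = frozenset(vectors)
for M in maximal:
    assert M != full and not any(M < T < full for T in subspaces)
print(sorted(len(M) for M in maximal))  # [2, 2]: the two lines avoiding v
```

Here the maximal elements of A are exactly the two one-dimensional subspaces not containing v, and each is indeed a maximal proper subspace of GF(2)².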
6. Dec 5, 2007
### morphism
Oops! I should learn to read! Thanks for pointing that out.
7. Dec 5, 2007
### Office_Shredder
Staff Emeritus
I don't think the problem implies there IS a basis of V (unless it turns out all vector spaces have a basis, and I just don't know that yet)
8. Dec 5, 2007
### JasonRox
Is V finite-dimensional? Is the book assuming that?
Do you know what finite-dimensional is?
9. Dec 5, 2007
### andytoh
Every vector space V has a basis, whether it is finite-dimensional or not. In mathboy's problem V can be infinite-dimensional and the result is still true.
If you want to prove that V has a basis when V is infinite-dimensional, you would have to use Zorn's lemma as well. Ultimately, mathboy's problem rests on Zorn's Lemma.
My approach to mathboy's problem is: Choose any v in V, and let A be the collection of all subspaces not containing v and then use Zorn's lemma. But I'm trying to figure out if there is a better partially ordered set to use, because my A seems a little clumsy (though I believe it would still get the job done).
10. Dec 5, 2007
### JasonRox
Of course I know this!
Ok, a vector space has a basis {v_1,...}, now delete one vector from there and span that set. What do you get?
Voila!
11. Dec 5, 2007
### mathboy
Thanks guys. I forgot to say that I have to use Zorn's Lemma. But I know how to proceed now. I will use the collection of all proper subspaces that do not contain some fixed v in V.
http://www.astronomy.swin.edu.au/cms/astro/cosmos/R/Resolution | ## Resolution
• The resolution of a telescope is its ability to separate two point sources into separate images. Under ideal conditions, such as above the atmosphere where there is no turbulence (seeing), the resolving power is limited by diffraction effects.
For a circular aperture, such as the objective lens of a refracting telescope or the primary mirror of a reflecting telescope, a point source will appear as a disk surrounded by many thin, faint rings. These rings are produced by Fraunhofer diffraction, and the shape of the rings is given by the equation:

$I(\theta)=I(0)\left[\frac{2J_{1}(ka\sin\theta)}{ka\sin\theta}\right]^{2}$

where I(θ) is the irradiance at an angle θ, I(0) is the peak irradiance at the centre of the diffraction pattern, D = 2a is the diameter of the aperture, k is the wave number and J1(u) is the first-order Bessel function.
The central disk is called the Airy disk, and it has an angular radius (angle between the peak and the first minimum) of:

$\sin\theta=1.22\frac{\lambda}{D}$

or

$\theta\approx 1.22\frac{\lambda}{D}$

using the small angle approximation, sin θ ≈ θ for small θ.
According to the Rayleigh criterion, two point sources cannot be resolved if their separation is less than the radius of the Airy disk.
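As a quick worked example with assumed numbers (a 0.1 m aperture observing visible light at λ = 550 nm), the Rayleigh limit follows directly from the Airy-disk radius:

```python
import math

wavelength = 550e-9   # m, visible light (assumed value)
D = 0.1               # m, aperture diameter (assumed value)

theta = 1.22 * wavelength / D          # Rayleigh limit, radians
arcsec = math.degrees(theta) * 3600    # convert radians to arcseconds

print(f"{theta:.3e} rad = {arcsec:.2f} arcsec")
```

This gives about 6.7 microradians, or roughly 1.4 arcseconds — comparable to the seeing at a good ground-based site, which is why diffraction-limited performance matters most above the atmosphere.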
The overlapping irradiance patterns from two stars. The blue line shows the diffraction pattern for each star, while the green line shows the (normalised) combined diffraction pattern.
left: These two stars are on the limit of resolution according to the Rayleigh criterion.
right: These two stars are on the limit of resolution according to the Sparrow criterion, as the "dip" has disappeared from the combined diffraction pattern.
If we look at the combined profile of the diffraction patterns produced by two point sources, we can see that when the Rayleigh criterion is satisfied, there is a dip between the two peaks. In practice, a telescope is able to resolve sources that are closer together than $\theta=1.22\lambda/D$,
so an alternative criterion for resolution was proposed by C. Sparrow (the Sparrow criterion). The minimum resolvable case now occurs when the central dip in the combined diffraction pattern just disappears.
https://worldwidescience.org/topicpages/s/system+programs+design.html | #### Sample records for system programs design
1. Large Coil Program magnetic system design study
International Nuclear Information System (INIS)
Moses, S.D.; Johnson, N.E.
1977-01-01
The primary objective of the Large Coil Program (LCP) is to demonstrate the reliable operation of large superconducting coils to provide a basis for the design principles, materials, and fabrication techniques proposed for the toroidal magnets for THE NEXT STEP (TNS) and other future tokamak devices. This paper documents a design study of the Large Coil Test Facility (LCTF) in which the structural response of the Toroidal Field (TF) Coils and the supporting structure was evaluated under simulated reactor conditions. The LCP test facility structural system consists of six TF Coils, twelve coil-to-coil torsional restraining beams (torque rings), a central bucking post with base, and a Pulse Coil system. The NASTRAN Finite Element Structural Analysis computer code was utilized to determine the distribution of deflections, forces, and stresses for each of the TF Coils, torque rings, and the central bucking post. Eleven load conditions were selected to represent probable test operations. Pulse Coils suspended in the bore of the test coil were energized to simulate the pulsed field environment characteristic of the TNS reactor system. The TORMAC computer code was utilized to develop the magnetic forces in the TF Coils for each of the eleven loading conditions examined, with or without the Pulse Coils energized. The TORMAC computer program output forces were used directly as input load conditions for the NASTRAN analyses. Results are presented which demonstrate the reliability of the LCTF under simulated reactor operating conditions
2. Program computes single-point failures in critical system designs
Science.gov (United States)
Brown, W. R.
1967-01-01
Computer program analyzes the designs of critical systems that will either prove the design is free of single-point failures or detect each member of the population of single-point failures inherent in a system design. This program should find application in the checkout of redundant circuits and digital systems.
3. Design of All Digital Flight Program Training Desktop Application System
Directory of Open Access Journals (Sweden)
Li Yu
2017-01-01
Full Text Available An all-digital flight program training desktop application system has simple operating requirements and can tie aircrew theory learning closely to operational training, improving training efficiency and effectiveness. This paper studies the application field and design requirements of flight program training systems. Based on a WINDOWS operating system desktop application, the design idea and system architecture of the all-digital flight program training system are put forward. Flight characteristics, key airborne systems and the aircraft cockpit are simulated. Finally, by comparing a flight training simulator with the specific script program training system, the characteristics and advantages of the training system are analyzed.
4. Advanced Turbine Systems (ATS) program conceptual design and product development
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-08-31
Achieving the Advanced Turbine Systems (ATS) goals of 60% efficiency, single-digit NO{sub x}, and 10% electric power cost reduction imposes competing characteristics on the gas turbine system. Two basic technical issues arise from this. The turbine inlet temperature of the gas turbine must increase to achieve both the efficiency and cost goals. However, higher temperatures move in the direction of increased NO{sub x} emission. Improved coatings and materials technologies along with creative combustor design can result in solutions to achieve the ultimate goal. GE's view of the market, in conjunction with the industrial and utility objectives, requires the development of Advanced Gas Turbine Systems which encompass two potential products: a new aeroderivative combined-cycle system for the industrial market, and a combined-cycle system for the utility sector that is based on an advanced frame machine. The GE Advanced Gas Turbine Development program is focused on two specific products: (1) a 70 MW class industrial gas turbine based on the GE90 core technology utilizing an innovative air cooling methodology; (2) a 200 MW class utility gas turbine based on an advanced GE heavy-duty machine utilizing advanced cooling and enhancements in component efficiency. Both of these activities required the identification and resolution of technical issues critical to achieving ATS goals. The emphasis for the industrial ATS was placed upon innovative cycle design and low emission combustion. The emphasis for the utility ATS was placed on developing a technology base for advanced turbine cooling, while utilizing demonstrated and planned improvements in low emission combustion. Significant overlap in the development programs will allow common technologies to be applied to both products. GE Power Systems is solely responsible for offering GE products for the industrial and utility markets.
5. Student Attitudes toward Information Systems Graduate Program Design and Delivery
Science.gov (United States)
Thouin, Mark F.; Hefley, William E.; Raghunathan, Srinivasan
2018-01-01
This study examines student preferences regarding graduate management information systems (MIS) education. One hundred and eighty four graduate students responded to a survey exploring student attitudes towards degree program content, delivery format, and peer group interaction. Study results indicate that students prefer a program with an even…
6. LLL's Quality Assurance Program and the design of specific systems: Tritium Handling Facility
International Nuclear Information System (INIS)
Dow, J.P.
1975-01-01
Lawrence Livermore Laboratory operates a Tritium Handling Facility for several programs. Besides the tritium work for the weapons program, basic research is conducted on all phases of tritium. Additional work is being conducted for the laser fusion program and the controlled thermonuclear program. The Quality Assurance Program for the tritium facility and how it is being implemented on specific tritium handling systems are described. The program is intended to prevent or mitigate the consequences of accidents by rigidly controlling the design, fabrication, procurement, construction and operation of safety-related critical structures, systems, and components of such facilities. (CH)
7. Programming biological operating systems: genome design, assembly and activation.
Science.gov (United States)
Gibson, Daniel G
2014-05-01
The DNA technologies developed over the past 20 years for reading and writing the genetic code converged when the first synthetic cell was created 4 years ago. An outcome of this work has been an extraordinary set of tools for synthesizing, assembling, engineering and transplanting whole bacterial genomes. Technical progress, options and applications for bacterial genome design, assembly and activation are discussed.
8. Application programs written by using customizing tools of a computer-aided design system
Energy Technology Data Exchange (ETDEWEB)
Li, X.; Huang, R.; Juricic, D. [Univ. of Texas, Austin, TX (United States). Mechanical Engineering Dept.
1995-12-31
Customizing tools of computer-aided design systems have been developed to such a degree as to become equivalent to powerful higher-level programming languages that are especially suitable for graphics applications. Two examples of application programs written using AutoCAD's customizing tools are given in some detail to illustrate their power. One uses the AutoLISP list-processing language to develop an application program that produces four views of a given solid model. The other uses the AutoCAD Development System, based on program modules written in C, to produce an application program that renders a freehand sketch from a given CAD drawing.
9. NASIS data base management system - IBM 360/370 OS MVT implementation. 4: Program design specifications
Science.gov (United States)
1973-01-01
The design specifications for the programs and modules within the NASA Aerospace Safety Information System (NASIS) are presented. The purpose of the design specifications is to standardize the preparation of the specifications and to guide the program design. Each major functional module within the system is a separate entity for documentation purposes. The design specifications contain a description of, and specifications for, all detail processing which occurs in the module. Sub-modules, reference tables, and data sets which are common to several modules are documented separately.
10. Control system design and analysis using the INteractive Controls Analysis (INCA) program
Science.gov (United States)
Bauer, Frank H.; Downing, John P.
1987-01-01
The INteractive Controls Analysis (INCA) program was developed at the Goddard Space Flight Center to provide a user friendly efficient environment for the design and analysis of linear control systems. Since its inception, INCA has found extensive use in the design, development, and analysis of control systems for spacecraft, instruments, robotics, and pointing systems. Moreover, the results of the analytic tools imbedded in INCA have been flight proven with at least three currently orbiting spacecraft. This paper describes the INCA program and illustrates, using a flight proven example, how the package can perform complex design analyses with relative ease.
11. Design and Implementation of Practical Constraint Logic Programming Systems
Science.gov (United States)
1992-08-24
also helps to separate out the part of our analysis that is directly relevant to λProlog, where all computation happens at levels 0 and 1 (due to the...Pasero, and P. Roussel. Un système de communication homme-machine en français. Technical report, Groupe Intelligence Artificielle, Université Aix
12. Learning Information Systems: Designing Education Programs Using Letrinhas
Directory of Open Access Journals (Sweden)
Célio Gonçalo Marques
2017-01-01
Full Text Available The Letrinhas information system contributes to the improvement of students' reading literacy by combining the potential of mobile devices with the specific needs of students and teachers. This information system emerged within the framework of a partnership established between the Instituto Politécnico de Tomar (IPT) and the Artur Gonçalves Cluster of Schools, in Torres Novas, Portugal. Three years after the creation of the tool and its use in a real learning environment, the evaluation already carried out suggests a high degree of satisfaction on the part of teachers and students, as well as a very positive impact on improving the reading skills of the students involved in the project. The latest version of Letrinhas has new features that address the specific challenges and needs of the teachers in the above-mentioned cluster of schools. Thus, in addition to the evaluation and improvement of reading skills, the new version provides features that enable the creation of educational scenarios, promoting learning environments that enhance not only the autonomy of students, but also their motivation.
13. Efficient System Design and Sustainable Finance for China's Village Electrification Program: Preprint
Energy Technology Data Exchange (ETDEWEB)
Ma, S.; Yin, H.; Kline, D. M.
2006-08-01
This paper describes a joint effort of the Institute for Electrical Engineering of the Chinese Academy of Sciences (IEE), and the U.S. National Renewable Energy Laboratory (NREL) to support China's rural electrification program. This project developed a design tool that provides guidelines both for off-grid renewable energy system designs and for cost-based tariff and finance schemes to support them. This tool was developed to capitalize on lessons learned from the Township Electrification Program that preceded the Village Electrification Program. We describe the methods used to develop the analysis, some indicative results, and the planned use of the tool in the Village Electrification Program.
14. The computer program system for structural design of nuclear power plants
International Nuclear Information System (INIS)
Aihara, S.; Atsumi, K.; Sasagawa, K.; Satoh, S.
1979-01-01
In recent years, the design of nuclear power plants has become more complex than in the past. The Finite Element Method (FEM) applied to the analysis of nuclear power plants especially requires more computer use. Recent computers have made remarkable progress, so that the manpower and time necessary for analysis in design work have been reduced considerably. However, the volume of outputs to be arranged has increased tremendously. Therefore, a computer program system was developed for performing all of the processes, from data preparation to output arrangement and rebar evaluation. This report introduces the computer program system pertaining to the design flow of the Reactor Building. (orig.)
15. Feasibility study for objective oriented design of system thermal hydraulic analysis program
International Nuclear Information System (INIS)
Chung, Bub Dong; Jeong, Jae Jun; Hwang, Moon Kyu
2008-01-01
The system safety analysis codes, such as RELAP5, TRAC, CATHARE, etc., have been developed in Fortran during the past few decades. Refactoring of conventional codes has also been performed to improve code readability and maintenance. However, the programming paradigm in software technology has changed toward object-oriented programming (OOP), which is based on several techniques, including encapsulation, modularity, polymorphism, and inheritance. In this work, an object-oriented program for a system safety analysis code has been attempted, utilizing a modernized C language. The analysis, design, implementation and verification steps for OOP system code development are described with some implementation examples. The system code SYSTF, based on a three-fluid thermal hydraulic solver, has been developed with an OOP design. The feasibility is verified with simple fundamental problems and plant models. (author)
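The OOP techniques named in the abstract (encapsulation, inheritance, polymorphism) can be illustrated with a minimal, hypothetical component hierarchy; the class names and toy residual formulas below are invented for illustration and are not the actual SYSTF design (sketched here in Python rather than the C-family language the abstract mentions):

```python
class Component:
    """Base class: one hydraulic component in a system network."""
    def __init__(self, name, volume_m3):
        self.name = name
        self._volume = volume_m3          # encapsulated state

    def residual(self, pressure_pa):
        # Each concrete component supplies its own balance equation.
        raise NotImplementedError

class Pipe(Component):
    def residual(self, pressure_pa):
        return 0.01 * pressure_pa / self._volume          # toy residual

class Pump(Component):
    def __init__(self, name, volume_m3, head_pa):
        super().__init__(name, volume_m3)                 # inheritance
        self.head_pa = head_pa

    def residual(self, pressure_pa):
        return 0.01 * (pressure_pa - self.head_pa) / self._volume

# Polymorphism: a solver loop treats every component uniformly.
network = [Pipe("p1", 2.0), Pump("pm1", 1.0, head_pa=5e4)]
print([round(c.residual(1e5), 6) for c in network])
```

The point of the pattern is that the solver iterates over `network` without knowing concrete types, so new component models can be added without touching the solver loop.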
16. Designing PV Incentive Programs to Promote System Performance: AReview of Current Practice
Energy Technology Data Exchange (ETDEWEB)
Barbose, Galen; Wiser, Ryan; Bolinger, Mark
2006-11-12
rather than the rated capacity of the modules or system, are often suggested as one possible strategy. Somewhat less recognized are the many other program design options also available, each with its particular advantages and disadvantages. To provide a point of reference for assessing the current state of the art, and to inform program design efforts going forward, we examine the approaches to encouraging PV system performance - including, but not limited to, PBIs - used by 32 prominent PV incentive programs in the U.S. (see Table 1). We focus specifically on programs that offer an explicit subsidy payment for customer-sited PV installations. PV support programs that offer other forms of financial support or that function primarily as a mechanism for purchasing renewable energy credits (RECs) through energy production-based payments are outside the scope of our review. The information presented herein is derived primarily from publicly available sources, including program websites and guidebooks, program evaluations, and conference papers, as well as from a limited number of personal communications with program staff. The remainder of this report is organized as follows. The next section presents a simple conceptual framework for understanding the issues that affect PV system performance and provides an overview of the eight general strategies to encourage performance used among the programs reviewed in this report. The subsequent eight sections discuss in greater detail each of these program design strategies and describe how they have been implemented among the programs surveyed. Based on this review, we then offer a series of recommendations for how PV incentive programs can effectively promote PV system performance.
17. The Impact of Programming Experience on Successfully Learning Systems Analysis and Design
Science.gov (United States)
Wong, Wang-chan
2015-01-01
In this paper, the author reports the results of an empirical study on the relationship between a student's programming experience and their success in a traditional Systems Analysis and Design (SA&D) class where technical skills such as dataflow analysis and entity relationship data modeling are covered. While it is possible to teach these…
Science.gov (United States)
Farrell, C. E.
1982-01-01
The LSS preliminary and conceptual design requires extensive iterative analysis because of the effects of structural, thermal, and control intercoupling. A computer aided design program that will permit integrating and interfacing of required large space system (LSS) analyses is discussed. The primary objective of this program is the implementation of modeling techniques and analysis algorithms that permit interactive design and tradeoff studies of LSS concepts. Eight software modules were added to the program. The existing rigid body controls module was modified to include solar pressure effects. The new model generator modules and appendage synthesizer module are integrated (interfaced) to permit interactive definition and generation of LSS concepts. The mass properties module permits interactive specification of discrete masses and their locations. The other modules permit interactive analysis of orbital transfer requirements, antenna primary beam n, and attitude control requirements.
19. Design and implementation of the control system for nuclear plant VVER-1000. Instrumentation (program technical complexes)
International Nuclear Information System (INIS)
Siora, A.; Tokarev, V.; Bakhmach, E.
2004-01-01
Program-technical complexes (PTC) are designed as control and protection systems in water-moderated atomic reactors, including emergency and preventive systems, automatic control, unloading, reactor capacity limitation and accelerated preventive protection systems. Utilization of programmable logic integrated circuits from world-leading manufacturers makes the complexes simple in structure, compact, with low energy demands and mutually independent for key and supporting functions. The results of PTC assessment and implementation in Ukraine are outlined. Opportunities for future development of the RADIJ company in the area of control and protection systems for VVER reactors are also discussed.
20. Advanced turbine systems program conceptual design and product development. Annual report, August 1993--July 1994
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-11-01
This Yearly Technical Progress Report covers the period August 3, 1993 through July 31, 1994 for Phase 2 of the Advanced Turbine Systems (ATS) Program by Solar Turbines Incorporated under DOE Contract No. DE-AC421-93MC30246. As allowed by the Contract (Part 3, Section J, Attachment B) this report is also intended to fulfill the requirements for a fourth quarterly report. The objective of Phase 2 of the ATS Program is to provide the conceptual design and product development plan for an ultra-high efficiency, environmentally superior and cost-competitive industrial gas turbine system to be commercialized in the year 2000. During the period covered by this report, Solar has completed three of eight program tasks and has submitted topical reports. These three tasks included a Project Plan, the submission of information required by NEPA, and the selection of a Gas-Fueled Advanced Turbine System (GFATS). In the latest of the three tasks, Solar's engineering team identified an intercooled and recuperated (ICR) gas turbine as the eventual outcome of DOE's ATS program coupled with Solar's internal New Product Introduction (NPI) program. This machine, designated ATS50, will operate at a thermal efficiency (turbine shaft power/fuel LHV) of 50 percent, will emit less than 10 parts per million of NOx and will reduce the cost of electricity by 10 percent. It will also demonstrate levels of reliability, availability, maintainability, and durability (RAMD) equal to or better than those of today's gas turbine systems. Current activity is concentrated in three of the remaining five tasks: a Market Study, GFATS System Definition and Analysis, and the Design and Test of Critical Components.
International Nuclear Information System (INIS)
1993-01-01
The purpose of this report is to provide a status of the progress that was made towards Design Certification of System 80+trademark during the US government's 1993 fiscal year. The System 80+ Advanced Light Water Reactor (ALWR) is a 3931 MWt (1350 MWe) Pressurized Water Reactor (PWR). The design consists of an essentially complete plant. It is based on evolutionary improvements to the Standardized System 80 nuclear steam supply system in operation at Palo Verde Units 1, 2, and 3, and the Duke Power Company P-81 balance-of-plant (BOP) that was designed and partially constructed at the Cherokee plant site. The System 80/P-81 original design has been substantially enhanced to increase conformance with the EPRI ALWR Utility Requirements Document (URD). Some design enhancements incorporated in the System 80+ design are included in the four units currently under construction in the Republic of Korea. These units form the basis of the Korean standardization program. The full System 80+ standard design has been offered to the Republic of China, in response to their recent bid specification. The ABB-CE Standard Safety Analysis Report (CESSAR-DC) was submitted to the NRC and a Draft Safety Evaluation Report was issued by the NRC in October 1992. CESSAR-DC contains the technical basis for compliance with the EPRI URD for simplified emergency planning. The Nuclear Steam Supply System (NSSS) is the standard ABB-Combustion Engineering two-loop arrangement with two steam generators, two hot legs and four cold legs, each with a reactor coolant pump. The System 80+ standard plant includes a spherical steel containment vessel which is enclosed in a concrete shield building, thus providing the safety advantages of a dual containment.
2. A two-stage stochastic programming model for the optimal design of distributed energy systems
International Nuclear Information System (INIS)
Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.
2013-01-01
Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using genetic algorithm and Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
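The two-stage structure described above can be sketched compactly: first-stage design variables (e.g. installed capacity) are fixed before uncertainty is revealed, and second-stage operating decisions respond to each sampled demand/supply scenario. The sketch below is a minimal illustration of that decomposition, using a plain grid search in place of the paper's genetic algorithm and common random scenarios for the Monte Carlo stage; all names and cost figures are invented for the example.

```python
import random

random.seed(0)

# Sample demand/price scenarios once (common random numbers), as the
# second-stage Monte Carlo evaluation would.
scenarios = [(max(random.gauss(50.0, 10.0), 0.0),   # uncertain demand
              max(random.gauss(1.0, 0.2), 0.0))     # uncertain fuel price
             for _ in range(2000)]

def second_stage_cost(capacity, demand, fuel_price):
    # Recourse decision: run own plant up to capacity, buy any shortfall
    # from the grid at an assumed 3x price premium.
    generated = min(capacity, demand)
    shortfall = demand - generated
    return generated * fuel_price + shortfall * 3.0 * fuel_price

def expected_total_cost(capacity):
    # First-stage (capital) cost plus Monte Carlo expected operating cost.
    capital = 1.0 * capacity
    operating = sum(second_stage_cost(capacity, d, p) for d, p in scenarios)
    return capital + operating / len(scenarios)

# First-stage search over capacity (a genetic algorithm in the paper;
# a simple grid search here).
best_capacity = min(range(0, 101, 5), key=expected_total_cost)
```

Because extra capacity only pays off in scenarios where demand exceeds it, the optimum sits near the median demand; storage facilities and the grid connection, which the paper credits for the design's robustness, would enter as additional recourse options inside `second_stage_cost`.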
3. Development of Thermal Design Program for an Electronic Telecommunication System Using Heat Sink
International Nuclear Information System (INIS)
Lee, Jung Hwan; Kim, Jong Man; Chun, Ji Hwan; Bae, Chul Ho; Suh, Myung Won
2007-01-01
The purpose of this study is to investigate the cooling performance of heat sinks for an electronic telecommunication system under natural convection. Heat generation rates of electronic components and the temperature distributions of heat sinks and surrounding air are analyzed experimentally and numerically. A program was developed to perform the heat transfer analysis for the thermal design of the telecommunication system. The program uses a graphical user interface to determine the arrangement of heat sources, interior fan capacity, and heat sink configuration. The simulation results showed that the heat sinks were able to achieve a cooling capacity of up to 230 W at a maximum temperature difference of 19 °C. To verify the results from the numerical simulation, an experiment was conducted under the same conditions as the numerical simulation, and the results were compared. The design program gave good predictions of the effects of the various parameters involved in the design of heat sinks for an electronic telecommunication system.
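The reported operating point (230 W rejected at a 19 °C rise) fixes the overall thermal conductance the sinks must provide, a quantity worth checking early in such a design. A minimal sketch, using only figures from the record; the natural-convection coefficient is an assumed, illustrative value:

```python
def sink_conductance(q_watts, delta_t_kelvin):
    # Overall conductance G = Q / dT; thermal resistance is its inverse.
    return q_watts / delta_t_kelvin

g = sink_conductance(230.0, 19.0)   # overall conductance, W/K
r_thermal = 1.0 / g                 # sink-to-air thermal resistance, K/W

# With an assumed natural-convection coefficient h ~ 7 W/(m^2 K),
# the implied total fin surface area is A = G / h.
h_assumed = 7.0
area_m2 = g / h_assumed
```

Comparing this implied area against the fin geometry the GUI produces is a quick sanity check on any candidate heat sink configuration.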
4. Design and implementation of a modular program system for the carrying-through of statistical analyses
International Nuclear Information System (INIS)
Beck, W.
1984-01-01
The complexity of computer programs for the solution of scientific and technical problems gives rise to many questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of output data to input data, and the substitution of complex models by simpler ones which provide equivalent results in certain ranges. Those questions have a general practical meaning, and principled answers may be found by statistical methods based on the Monte Carlo method. In this report such statistical methods are chosen, described and evaluated. They are implemented in the modular program system STAR, which is itself a component of the program system RSYST. The design of STAR accommodates: users with different knowledge of data processing and statistics; a variety of statistical methods and of generating and evaluating procedures; the processing of large data sets in complex structures; coupling to other components of RSYST and to programs outside RSYST; and easy modification and enlargement of the system. Four examples are given which demonstrate the application of STAR. (orig.)
5. BEAMR: An interactive graphic computer program for design of charged particle beam transport systems
Science.gov (United States)
Leonard, R. F.; Giamati, C. C.
1973-01-01
A computer program for a PDP-15 is presented which calculates, to first order, the characteristics of a charged-particle beam as it is transported through a sequence of focusing and bending magnets. The maximum dimensions of the beam envelope normal to the transport system axis are continuously plotted on an oscilloscope as a function of distance along the axis. Provision is made to iterate the calculation by changing the types of magnets, their positions, and their field strengths. The program is especially useful for transport system design studies because of the ease and rapidity of altering parameters from panel switches. A typical calculation for a system with eight elements is completed in less than 10 seconds. An IBM 7094 version containing more detailed printed output but no oscilloscope display is also presented.
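First-order programs of this kind reduce each magnet or drift to a transfer matrix and obtain the beam at any point by matrix multiplication. A minimal sketch of that machinery in one transverse plane; the element lengths and focal length are made-up values, not taken from BEAMR:

```python
def drift(length_m):
    # Field-free drift: x grows by length * x', the angle x' is unchanged.
    return [[1.0, length_m], [0.0, 1.0]]

def thin_quad(focal_m):
    # Thin-lens quadrupole; focusing in this plane for focal_m > 0.
    return [[1.0, 0.0], [-1.0 / focal_m, 1.0]]

def track(elements, x, xp):
    # Apply each element's 2x2 matrix to the ray (x, x') in beamline order.
    for m in elements:
        x, xp = m[0][0] * x + m[0][1] * xp, m[1][0] * x + m[1][1] * xp
    return x, xp

# Illustrative line: 1 m drift, f = 0.5 m quadrupole, 0.5 m drift.
line = [drift(1.0), thin_quad(0.5), drift(0.5)]

# A ray entering 1 cm off-axis and parallel crosses the axis one focal
# length downstream of the lens.
x_out, xp_out = track(line, 0.01, 0.0)  # -> (0.0, -0.02)
```

An envelope plot like BEAMR's oscilloscope display amounts to tracking the extreme rays of the entering phase-space region through the same matrices and recording the maximum excursion after each element.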
6. Oak Ridge TNS Program: reference design and program plan for a TNS ECH startup system
International Nuclear Information System (INIS)
Rosenfeld, R.
1979-04-01
The use of microwave radio frequency (rf) heating in The Next Step (TNS) is considered to be a viable approach to accomplishing reliable preionization while significantly lowering the peak power requirements and cost of the ohmic heating power supply system. Electron cyclotron heating (ECH) is a promising type of rf heating in which high power microwave energy is deposited into the plasma region. The proposed system is based on a configuration of five 200-kW gyroklystrons which will deliver 1 MW at 120 GHz to the plasma area for pulse periods of up to 6.0 sec. Completion of an operational system could be targeted for December 1989 at an estimated cost (in 1978 dollars) of $4 million. A discussion and description of a conceptual system are presented. Estimates of costs, schedules, and research and development (R and D) needs are included 7. Improving human reliability through better nuclear power plant system design: Program for advanced nuclear power studies International Nuclear Information System (INIS) Golay, M.W. 1993-01-01 The project on ''Development of a Theory of the Dependence of Human Reliability upon System Designs as a Means of Improving Nuclear Power Plant Performance'' was been undertaken in order to address the problem of human error in advanced nuclear power plant designs. Lack of a mature theory has retarded progress in reducing likely frequencies of human errors. Work being pursued in this project is to perform a set of experiments involving human subjects who are required to operate, diagnose and respond to changes in computer-simulated systems, relevant to those encountered in nuclear power plants, which are made to differ in complexity in a systematic manner. The computer program used to present the problems to be solved also records the response of the operator as it unfolds 8. 
Objective Oriented Design of System Thermal Hydraulic Analysis Program and Verification of Feasibility International Nuclear Information System (INIS) Chung, Bub Dong; Jeong, Jae Jun; Hwang, Moon Kyu 2008-01-01 The system safety analysis code, such as RELAP5, TRAC, CATHARE etc. have been developed based on Fortran language during the past few decades. Refactoring of conventional codes has been also performed to improve code readability and maintenance. TRACE, RELAP5-3D and MARS codes are examples of these activities. The codes were redesigned to have modular structures utilizing Fortran 90 features. However the programming paradigm in software technology has been changed to use objects oriented programming (OOP), which is based on several techniques, including encapsulation, modularity, polymorphism, and inheritance. It was not commonly used in mainstream software application development until the early 1990s. Many modern programming languages now support OOP. Although the recent Fortran language also support the OOP, it is considered to have limited functions compared to the modern software features. In this work, objective oriented program for system safety analysis code has been tried utilizing modern C language feature. The advantage of OOP has been discussed after verification of design feasibility 9. REFERENCE MANUAL FOR RASSMIT VERSION 2.1: SUB-SLAB DEPRESSURIZATION SYSTEM DESIGN PERFORMANCE SIMULATION PROGRAM Science.gov (United States) The report is a reference manual for RASSMlT Version 2.1, a computer program that was developed to simulate and aid in the design of sub-slab depressurization systems used for indoor radon mitigation. The program was designed to run on DOS-compatible personal computers to ensure ... 10. 
System programs design of motors; Sistema de programas de diseno de motores Energy Technology Data Exchange (ETDEWEB) Diaz Gonzalez Palomas, Oscar; Ciprian Avila, Fernando [Instituto de Investigaciones Electricas, Cuernavaca (Mexico) 1988-12-31 This paper describes the objective of creating the program system for induction motors design SIPRODIMO, its scope, its general characteristics, its structure and the results obtained with its application, as well as the service capacity developed by the Motors Area of the Instituto de Investigaciones Elctricas. [Espanol] En este articulo se describe el objetivo de crear el sistema de programas de diseno de motores de induccion, Siprodimo, su alcance, sus caracteristicas generales, su estructura y los resultados obtenidos con su aplicacion, asi como la capacidad de servicio desarrollada por el area de motores, del Instituto de Investigaciones Electricas. 11. Third-order TRANSPORT: A computer program for designing charged particle beam transport systems International Nuclear Information System (INIS) Carey, D.C.; Brown, K.L.; Rothacker, F. 1995-05-01 TRANSPORT has been in existence in various evolutionary versions since 1963. The present version of TRANSPORT is a first-, second-, and third-order matrix multiplication computer program intended for the design of static-magnetic beam transport systems. This report discusses the following topics on TRANSPORT: Mathematical formulation of TRANSPORT; input format for TRANSPORT; summaries of TRANSPORT elements; preliminary specifications; description of the beam; physical elements; other transformations; assembling beam lines; operations; variation of parameters for fitting; and available constraints -- the FIT command 12. Third-order TRANSPORT: A computer program for designing charged particle beam transport systems Energy Technology Data Exchange (ETDEWEB) Carey, D.C. [Fermi National Accelerator Lab., Batavia, IL (United States); Brown, K.L.; Rothacker, F. 
[Stanford Linear Accelerator Center, Menlo Park, CA (United States) 1995-05-01 TRANSPORT has been in existence in various evolutionary versions since 1963. The present version of TRANSPORT is a first-, second-, and third-order matrix multiplication computer program intended for the design of static-magnetic beam transport systems. This report discusses the following topics on TRANSPORT: Mathematical formulation of TRANSPORT; input format for TRANSPORT; summaries of TRANSPORT elements; preliminary specifications; description of the beam; physical elements; other transformations; assembling beam lines; operations; variation of parameters for fitting; and available constraints -- the FIT command. 13. Advanced turbine systems program conceptual design and product development. Annual report, August 1994--July 1995 Energy Technology Data Exchange (ETDEWEB) NONE 1995-11-01 This report summarizes the tasks completed under this project during the period from August 1, 1994 through July 31, 1994. The objective of the study is to provide the conceptual design and product development plan for an ultra high efficiency, environmentally superior and cost-competitive industrial gas turbine system to be commercialized by the year 2000. The tasks completed include a market study for the advanced turbine system; definition of an optimized recuperated gas turbine as the prime mover meeting the requirements of the market study and whose characteristics were, in turn, used for forecasting the total advanced turbine system (ATS) future demand; development of a program plan for bringing the ATS to a state of readiness for field test; and demonstration of the primary surface recuperator ability to provide the high thermal effectiveness and low pressure loss required to support the proposed ATS cycle. 14. Student Support and Advising in a New Online Ed.D. 
of Instructional Systems Technology Program: A Design Case Science.gov (United States) Exter, Marisa; Korkmaz, Nilufer; Boling, Elizabeth 2014-01-01 This design case describes an online Ed.D. in Instructional Systems Technology (IST) launched in 2012. We will focus on a key aspect of the design: program advising and students' relationship with their advisors. While the design was responsive in its earliest stages to organizational constraints, legislative requirements and the known… 15. Third-Order Transport with MAD Input: A Computer Program for Designing Charged Particle Beam Transport Systems Energy Technology Data Exchange (ETDEWEB) Brown, Karl 1998-10-28 TRANSPORT has been in existence in various evolutionary versions since 1963. The present version of TRANSPORT is a first-, second-, and third-order matrix multiplication computer program intended for the design of static-magnetic beam transport systems. 16. TRANSPORT: a computer program for designing charged particle beam transport systems International Nuclear Information System (INIS) Brown, K.L.; Rothacker, F.; Carey, D.C.; Iselin, C. 1977-05-01 TRANSPORT is a first- and second-order matrix multiplication computer program intended for the design of static-magnetic beam transport systems. It has been in existence in various evolutionary versions since 1963. The present version, described in the manual given, includes both first- and second-order fitting capabilities. TRANSPORT will step through the beam line, element by element, calculating the properties of the beam or other quantities, described below, where requested. Therefore one of the first elements is a specification of the phase space region occupied by the beam entering the system. Magnets and intervening spaces and other elements then follow in the sequence in which they occur in the beam line. 
Specifications of calculations to be done or of configurations other than normal are placed in the same sequence, at the point where their effect is to be made 17. Advanced turbine systems program -- Conceptual design and product development. Final report Energy Technology Data Exchange (ETDEWEB) NONE 1996-07-26 This Final Technical Report presents the accomplishments on Phase 2 of the Advanced Turbine Systems (ATS). The ATS is an advanced, natural gas fired gas turbine system that will represent a major advance on currently available industrial gas turbines in the size range of 1--20 MW. This report covers a market-driven development. The Market Survey reported in Section 5 identified the customers performance needs. This market survey used analyses performed by Solar turbine Incorporated backed up by the analyses done by two consultants, Research Decision Consultants (RDC) and Onsite Energy Corporation (Onsite). This back-up was important because it is the belief of all parties that growth of the ATS will depend both on continued participation in Solars traditional oil and gas market but to a major extent on a new market. This new market is distributed electrical power generation. Difficult decisions have had to be made to meet the different demands of the two markets. Available resources, reasonable development schedules, avoidance of schedule or technology failures, probable acceptance by the marketplace, plus product cost, performance and environmental friendliness are a few of the complex factors influencing the selection of the Gas Fired Advanced Turbine System described in Section 3. Section 4 entitled Conversion to Coal was a task which addresses the possibility of a future interruption to an economic supply of natural gas. System definition and analysis is covered in Section 6. Two major objectives were met by this work. The first was identification of those critical technologies that can support overall attainment of the program goals. 
Separate technology or component programs were begun to identify and parameterize these technologies and are described in Section 7. The second objective was to prepare parametric analyses to assess performance sensitivity to operating variables and to select design approaches to meet the overall program goals. 18. Design of the CART data system for the US Department of Energy's ARM Program International Nuclear Information System (INIS) Melton, R.B.; Campbell, A.P.; Edwards, D.M.; Kanciruk, P.; Tichler, J.L. 1991-01-01 The Department of Energy (DOE) has initiated a major atmospheric research effort to reduce the uncertainties found in general circulation and other models due to the effects of clouds and radiation. The objective of the Atmospheric Radiation Measurement Program (ARM) is to provide an experimental testbed for the study of important atmospheric effects, particularly cloud and radiative processes, and testing parameterizations of the processes for use in atmospheric models. This experimental testbed, known as the Clouds and Radiation Testbed (CART), will include a complex data system, the CART Data Environment (CDE). The major functions of the CDE will be to: acquire environments from instruments and external data sources; perform quality assessments of the data streams; create data streams of known quality to be used as model input compared to model output; execute the models and capture their predictions; and make data streams associated with model tests available to ARM investigators in near real-time. The CDE will also be expected to capture ancillary information (''meta-data'') associated with the data streams, provide data management facilities for design of ARM experiments, and provide for archival data storage. The first section of this paper presents background information on CART. 
Next the process for the functional design of the system is described, the functional requirements summarized, and the conceptual architecture of the CDE is presented. Finally, the status of the CDE design activities is summarized, and major technical challenges are discussed 19. Possibilities of CoDeSys Programming System during Software Development and Designing of Micro-Processor Control Systems Directory of Open Access Journals (Sweden) S. O. Novikov 2009-01-01 Full Text Available A great attention is presently paid to technologies pertaining to software development for systems which are applied for control of industrial automatic equipment designed on the basis of programmable logical controllers (PLC and practical programming using languages of International Electrotechnical Commission (IEC 61131-3 standard.A programming CoDeSys complex is one of the systems for PLC software development. This complex has been developed by 3S-Smart Software Solutions GmbH (3S company. Its main purpose is to program PLC and industrial computers in accordance with the IEC 61131-3 standard. A number of unordinary 3S solutions have led to the fact that the CoDeSys is considered now as a standard PLC programming tool of the leading European manufacturers: ABB, Beckhoff, Beck IPC, Berger Lahr, Bosch Rexroth, ifm, Keb, Kontron, Lenze, Moeller, WAGO, Fastwel и др.An introduction of the standard has served as a foundation for creation of the unified school for specialists’ training. A person who is trained in accordance with the program including the IEC 61131-3 standard shall be able to work with PLC of any company. At the same time if he/she has had some experience of work with any PLC then his/her skills shall be helpful and significantly simplify studying process of new possibilities. 20. Advanced turbine systems program conceptual design and product development. Quarterly report, February 1995--April 1995 Energy Technology Data Exchange (ETDEWEB) Karstensen, K.W. 
1995-07-01 This Quarterly Technical Progress Report covers the period February 1, 1995, through April 30, 1995, for Phase II of the Advanced Turbine Systems (ATS) Program by Solar Turbines Incorporated under DOE contract No. DE-AC21-93MC30246. The objective of Phase II of the ATS Program is to provide the conceptual design and product development plan for an ultra high efficiency, environmentally superior and cost competitive industrial gas turbine system to be commercialized by the year 2000. A secondary objective is to begin early development of technologies critical to the success of ATS. Tasks 1, 2, 3, 5, 6 and 7 of Phase II have been completed in prior quarters. Their results have been discussed in the applicable quarterly reports and in their respective topical reports. With the exception of Task 7, final editions of these topical reports have been submitted to the DOE. This quarterly report, then, addresses only Task 4 and the nine subtasks included in Task 8, {open_quotes}Design and Test of Critical Components.{close_quotes} These nine subtasks address six ATS technologies as follows: (1) Catalytic Combustion - Subtasks 8.2 and 8.5, (2) Recuperator - Subtasks 8.1 and 8.7, (3) Autothermal Fuel Reformer - Subtask 8.3, (4) High Temperature Turbine Disc - Subtask 8.4, (5) Advanced Control System (MMI) - Subtask 8.6, and (6) Ceramic Materials - Subtasks 8.8 and 8.9. Major technological achievements from Task 8 efforts during the quarter are as follows: (1) The subscale catalytic combustion rig in Subtask 8.2 is operating consistently at 3 ppmv of NO{sub x} over a range of ATS operating conditions. (2) The spray cast process used to produce the rim section of the high temperature turbine disc of Subtask 8.4 offers additional and unplanned spin-off opportunities for low cost manufacture of certain gas turbine parts. 1. Simulation programs for ph.D. 
study of analysis, modeling and optimum design of solar domestic hot water systems Energy Technology Data Exchange (ETDEWEB) Lin Qin 1998-12-31 The design of solar domestic hot water (DHW) systems is a complex process, due to characteristics inherent in the solar heating technology. Recently, computer simulation has become a widely used technique to improve the understanding of the thermal processes in such systems. One of the main objects of the Ph.D. study of Analysis, Modelling and optimum Design of Solar Domestic Hot Water Systems is to develop and verify programs for carrying out the simulation and evaluation of the dynamic performance of solar DHW systems. During this study, simulation programs for hot water distribution networks and for certain types of solar DHW systems were developed. (au) 2. Digital signal processing system design LabVIEW-bases hybrid programming CERN Document Server Kehtarnavaz, Nasser; Peng, Qingzhong 2008-01-01 Reflecting LabView's new MathScripting feature, the new edition of this book combines textual and graphical programming to form a hybrid programming approach, enabling a more effective means of building and analyzing DSP systems. The hybrid programming approach allows the use of previously developed textual programming solutions to be integrated into LabVIEW's highly interactive and visual environment, providing an easier and quicker method for building DSP systems.Features * The only DSP laboratory book that combines both textual and graphical programming * 12 lab experime 3. [Assessment of patient needs to design a patient education program in systemic lupus erythematosus]. Science.gov (United States) Hervier, B; Devilliers, H; Amiour, F; Ayçaguer, S; Neves, Y; Ganem, M-C; Amoura, Z; Antignac, M 2014-05-01 The aim of this study was to collect information to design a patient education program (PEP) for patients with systemic lupus erythematosus (SLE), based as much as possible on their expectations. 
Three different approaches were used for addressing patients' needs: 1) A questionnaire on their expectations in terms of a PEP was sent to the members of SLE associations and offered to patients at the French reference center for SLE, 2) A patients' focus group was conducted, and 3) After the teaching sessions, satisfaction questionnaires were also evaluated. The patients who answered the expectation questionnaire (n=422, women/men sex-ratio: 12.6) indicated a major interest in the PEP (70.4%). Their expectations were broad, and covered the topics of pregnancy (90% of the women under the age of 40), the outcome of the disease (80.8%), the respective roles of the different treatments (70.4%), and also the management of everyday symptoms: fatigue and pain (66.4%). The focus group (eight people) highlighted the need for improving how the diagnosis of the disease was delivered, and also revealed the loneliness and the guilty feeling experienced by some patients toward their relatives. Satisfaction questionnaires confirmed these expectations for the PEP, and even extended them to new topics: the mechanisms behind SLE, travel and leisure, and possible accommodations in the workplace. The direct consultation of patients with SLE targeted by a specific PEP program allowed us to confirm and adapt the topics and the content of a program designed by medical staff. Copyright © 2013 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved. 4. Designing computer programs CERN Document Server Haigh, Jim 1994-01-01 This is a book for students at every level who are learning to program for the first time - and for the considerable number who learned how to program but were never taught to structure their programs. The author presents a simple set of guidelines that show the programmer how to design in a manageable structure from the outset. 
The method is suitable for most languages, and is based on the widely used 'JSP' method, to which the student may easily progress if it is needed at a later stage.Most language specific texts contain very little if any information on design, whilst books on des 5. Operating experience and systems analysis at Trillo NPP: A program intended for systematic review of plant safety systems to assess design basis requirements compliance International Nuclear Information System (INIS) Vega, R. de la 1996-01-01 The program was defined to apply to all plant safety systems and/or systems included in plant Technical Specifications. The goal of the program was to ensure, by systematic design, construction, and commissioning review, the adequacy of safety systems, structures and components to fulfill their safety functions. Also, as a result of the program, it was established that a complete, unambiguous, systematic, design basis definition shall take place. And finally, a complete documental review of the plant design shall result from the program execution 6. Development of verification program for safety evaluation of KNGR on-site and off-site power system design Energy Technology Data Exchange (ETDEWEB) Kim, Kem Joong; Ryu, Eun Sook; Choi, Jang Hong; Lee, Byung Il; Han, Hyun Kyu; Oh, Seong Kyun; Kim, Han Kee; Park, Chul Woo; Kim, Min Jeong [Chungnam National Univ., Taejon (Korea, Republic of) 2001-04-15 In order to verify the adequacy of the design and analysis of the on-site and off-site power system, we developed the regulatory analysis program. We established the methodology for electric power system and constructed the algorithm of steady-state load flow analysis, fault analysis, transient stability analysis. The developed program to be an advantage of GUI and C++ programming technique. The design of input made easy to access the common use PSS/E format and that of output made users to work with Excel spreadsheet. 
The performance of program was verified to compare with PSS/E results. The case studies as follows. The verification of load flow analysis of KNGR on-site power system. The evaluation of load flow and transient stability analysis of off-site power system of KNGR. The verification of load flow and transient stability analysis. The frequency drop analysis of loss of generation. 7. Advanced Turbine Systems (ATS) program conceptual design and product development. Quarterly progress report, December 1, 1995--February 29, 1996 Energy Technology Data Exchange (ETDEWEB) NONE 1997-06-01 This report describes the overall program status of the General Electric Advanced Gas Turbine Development program, and reports progress on three main task areas. The program is focused on two specific products: (1) a 70-MW class industrial gas turbine based on the GE90 core technology, utilizing a new air cooling methodology; and (2) a 200-MW class utility gas turbine based on an advanced GE heavy-duty machine, utilizing advanced cooling and enhancement in component efficiency. The emphasis for the industrial system is placed on cycle design and low emission combustion. For the utility system, the focus is on developing a technology base for advanced turbine cooling while achieving low emission combustion. The three tasks included in this progress report are on: conversion to a coal-fueled advanced turbine system, integrated program plan, and design and test of critical components. 13 figs., 1 tab. 8. CAL--ERDA program manual. [Building Design Language; LOADS, SYSTEMS, PLANT, ECONOMICS, REPORT, EXECUTIVE, CAL-ERDA Energy Technology Data Exchange (ETDEWEB) Hunn, B. D.; Diamond, S. C.; Bennett, G. A.; Tucker, E. F.; Roschke, M. A. 1977-10-01 A set of computer programs, called Cal-ERDA, is described that is capable of rapid and detailed analysis of energy consumption in buildings. 
A new user-oriented input language, named the Building Design Language (BDL), has been written to allow simplified manipulation of the many variables used to describe a building and its operation. This manual provides the user with information necessary to understand in detail the Cal-ERDA set of computer programs. The new computer programs described include: an EXECUTIVE Processor to create computer system control commands; a BDL Processor to analyze input instructions, execute computer system control commands, perform assignments and data retrieval, and control the operation of the LOADS, SYSTEMS, PLANT, ECONOMICS, and REPORT programs; a LOADS analysis program that calculates peak (design) zone and hourly loads and the effect of the ambient weather conditions, the internal occupancy, lighting, and equipment within the building, as well as variations in the size, location, orientation, construction, walls, roofs, floors, fenestrations, attachments (awnings, balconies), and shape of a building; a Heating, Ventilating, and Air-Conditioning (HVAC) SYSTEMS analysis program capable of modeling the operation of HVAC components including fans, coils, economizers, humidifiers, etc.; 16 standard configurations and operated according to various temperature and humidity control schedules. A plant equipment program models the operation of boilers, chillers, electrical generation equipment (diesel or turbines), heat storage apparatus (chilled or heated water), and solar heating and/or cooling systems. An ECONOMIC analysis program calculates life-cycle costs. A REPORT program produces tables of user-selected variables and arranges them according to user-specified formats. A set of WEATHER ANALYSIS programs manipulates, summarizes and plots weather data. Libraries of weather data, schedule data, and building data were prepared. 9. 
Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions Science.gov (United States) Majumdar, A. K.; Hedayat, A. 2015-01-01 This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching Design of Thermal Systems class at University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage the instructors to utilize the code for the class assignments as well as senior design projects. 10. The Development of Online Tutorial Program Design Using Problem-Based Learning in Open Distance Learning System Science.gov (United States) Said, Asnah; Syarif, Edy 2016-01-01 This research aimed to evaluate of online tutorial program design by applying problem-based learning Research Methods currently implemented in the system of Open Distance Learning (ODL). The students must take a Research Methods course to prepare themselves for academic writing projects. Problem-based learning basically emphasizes the process of… 11. A New Skid Trail Pattern Design for Farm Tractors Using Linear Programing and Geographical Information Systems Directory of Open Access Journals (Sweden) Selcuk Gumus 2016-12-01 Full Text Available Farm tractor skidding is one of the common methods of timber extraction in Turkey. However, the absence of an optimal skidding plan covering the entire production area can result in time loss and negative environmental impacts. 
In this study, the timber extraction by farm tractors was analyzed, and a new skid trail pattern design was developed using Linear Programming (LP) and Geographical Information Systems (GIS). First, a sample skidding operation was evaluated with a time study, and an optimum skidding model was generated with LP. Then, the new skidding pattern was developed from the optimum skidding model and GIS analysis. At the end of the study, the newly developed skid trail pattern was implemented in the study area and tested by running a time study. Using the newly developed “Direct Skid Trail Pattern (DSTP)” model, a 16.84% increase in working time performance was observed when the products were extracted by farm tractors compared to the existing practices. On the other hand, the average soil compaction value measured in the study area at depths of 0–5 cm and 5–10 cm was found to be greater in the sample area skid trails than in the control points. The average density of the skid trails was 281 m/ha, while it decreased to 187 m/ha by using the developed pattern. It was also found that 44,829 ton/ha of soil losses were prevented by using the DSTP model; therefore, environmental damage was reduced. 12. Summary of the fuel rod support system (grids) design for LWBR (LWBR development program) International Nuclear Information System (INIS) Richardson, K.D. 1979-02-01 Design features of the fuel rod support system (grids) for the Light Water Breeder Reactor (LWBR) installed in the Shippingport Atomic Power Station, Shippingport, Pennsylvania, are described. The grids are fabricated from AM-350 stainless steel and provide lateral support of the fuel rods in the three regions (seed, blanket, and reflector) of the reactor. A comparison is made of the LWBR grids, whose cells are arranged in triangular-pitched arrays, with rod support systems employed in commercial light water reactors 13.
Army Gas-Cooled Reactor Systems program: alternator final design report Energy Technology Data Exchange (ETDEWEB) 1964-06-01 The development and testing of a demonstration brushless alternator for the ML-1 mobile nuclear power plant is described. The brushless concept was selected after it became apparent that a conventional power generator could not satisfy the ML-1 weight and size requirements. The demonstration alternator fabricated and tested under this program did not meet all performance specifications; the efficiency was low and the unit could not be operated for significant periods of time without overheating. However, a large body of useful data was accumulated during the extensive development program. Of special interest are data on the rotor and stator design, the cooling requirements and on the distribution of eddy current losses. Analysis of the data indicates that a brushless alternator, only slightly larger and heavier than was specified for the ML-1, could be developed with a modest additional effort. 14. Implementing evidence-based patient self-management programs in the Veterans Health Administration: perspectives on delivery system design considerations. Science.gov (United States) Damush, T M; Jackson, G L; Powers, B J; Bosworth, H B; Cheng, E; Anderson, J; Guihan, M; LaVela, S; Rajan, S; Plue, L 2010-01-01 While many patient self-management (PSM) programs have been developed and evaluated for effectiveness, less effort has been devoted to translating and systematically delivering PSM in primary and specialty care. Therefore, the purpose of this paper is to review delivery system design considerations for implementing self-management programs in practice. 
As lessons are learned about implementing PSM programs in the Veterans Health Administration (VHA), resource allocation by healthcare organizations for formatting PSM programs, providing patient access, facilitating PSM, and incorporating support tools to foster PSM among its consumers can be refined and tailored. Redesigning the system to deliver and support PSM will be important as implementation researchers translate evidence-based PSM practices into routine care and evaluate their impact on the health-related quality of life of veterans living with chronic disease. 15. RECOMMENDED SUB-SLAB DEPRESSURIZATION SYSTEMS DESIGN STANDARD OF THE FLORIDA RADON RESEARCH PROGRAM Science.gov (United States) The report recommends sub-slab depressurization systems design criteria to the State of Florida's Department of Community Affairs for their building code for radon resistant houses. Numerous details are set forth in the full report. Primary criteria include: (1) the operating soi... 16. Space Power Program, Instrumentation and Control System Architecture, Pre-conceptual Design, for Information Energy Technology Data Exchange (ETDEWEB) JM Ross 2005-10-20 The purpose of this letter is to forward the Prometheus preconceptual Instrumentation and Control (I&C) system architecture (Enclosure (1)) to NR for information as part of the Prometheus closeout work. The preconceptual I&C system architecture was considered a key planning document for development of the I&C system for Project Prometheus. This architecture was intended to set the technical approach for the entire I&C system. It defines interfaces to other spacecraft systems, defines hardware blocks for future development, and provides a basis for accurate cost and schedule estimates. Since the system requirements are not known at this time, it was anticipated that the architecture would evolve as the design of the reactor module matured. 17.
Space Power Program, Instrumentation and Control System Architecture, Preconceptual Design, for Information International Nuclear Information System (INIS) JM Ross 2005-01-01 The purpose of this letter is to forward the Prometheus preconceptual Instrumentation and Control (I and C) system architecture (Enclosure (1)) to NR for information as part of the Prometheus closeout work. The preconceptual I and C system architecture was considered a key planning document for development of the I and C system for Project Prometheus. This architecture was intended to set the technical approach for the entire I and C system. It defines interfaces to other spacecraft systems, defines hardware blocks for future development, and provides a basis for accurate cost and schedule estimates. Since the system requirements are not known at this time, it was anticipated that the architecture would evolve as the design of the reactor module matured 18. Advanced Turbine Systems Program: Conceptual design and product development. Quarterly report, February--April 1994 Energy Technology Data Exchange (ETDEWEB) Benjamin, G.J. 1994-06-01 The objective (Phase II) is to develop an industrial gas turbine system to operate at a thermal efficiency of 50% (ATS50), with efficiency enhancements to be added as they become possible. During this quarter, Solar's engine design team refined both the 1- and 2-spool cycle concepts to determine sensitivity to key component efficiencies, cooling air usage and origin, and location of compressor surge lines. The refined analysis included more detailed component work such as compressor and turbine design; different speed trade-offs for the low- and high-pressure compressors in the 1-spool configuration were examined for the best overall compressor efficiency. High-temperature and creep testing of recuperator candidate materials continued.
Creep, yield, and proportional limit were measured for foil thicknesses 0.0030--0.0050 for Type 347 ss, Inconel 625, and Haynes 230. Combustor design work included preliminary layout of a multi-can annular combustor integrated into the main engine layout. During the subscale catalytic combustion rig testing, NOx emissions < 5 ppmv were measured. Integration of the engine concept designs into the full power plant system designs has started. 19. Human Systems Design Criteria DEFF Research Database (Denmark) Rasmussen, Jens 1982-01-01 This paper deals with the problem of designing more humanised computer systems. This problem can be formally described as the need for defining human design criteria, which — if used in the design process - will secure that the systems designed get the relevant qualities. That is not only...... the necessary functional qualities but also the needed human qualities. The author's main argument is, that the design process should be a dialectical synthesis of the two points of view: Man as a System Component, and System as Man's Environment. Based on a man's presentation of the state of the art a set...... of design criteria is suggested and their relevance discussed. The point is to focus on the operator rather than on the computer. The crucial question is not to program the computer to work on its own conditions, but to “program” the operator to function on human conditions.... 20. Technology Development Program for an Advanced Potassium Rankine Power Conversion System Compatible with Several Space Reactor Designs Energy Technology Data Exchange (ETDEWEB) Yoder, G.L. 2005-10-03 This report documents the work performed during the first phase of the National Aeronautics and Space Administration (NASA), National Research Announcement (NRA) Technology Development Program for an Advanced Potassium Rankine Power Conversion System Compatible with Several Space Reactor Designs. 
The document includes an optimization of both 100-kW{sub e} and 250-kW{sub e} (at the propulsion unit) Rankine cycle power conversion systems. In order to perform the mass optimization of these systems, several parametric evaluations of different design options were investigated. These options included feed and reheat, vapor superheat levels entering the turbine, three different material types, and multiple heat rejection system designs. The overall masses of these Nb-1%Zr systems are approximately 3100 kg and 6300 kg for the 100-kW{sub e} and 250-kW{sub e} systems, respectively, each with two totally redundant power conversion units, including the mass of the single reactor and shield. Initial conceptual designs for each of the components were developed in order to estimate component masses. In addition, an overall system concept was presented that was designed to fit within the launch envelope of a heavy lift vehicle. A technology development plan is presented in the report that describes the major efforts that are required to reach a technology readiness level of 6. A 10-year development plan was proposed. 1. Experience in the review of utility control room design review and safety parameter display system programs International Nuclear Information System (INIS) Moore, V.A. 1985-01-01 The Detailed Control Room Design Review (DCRDR) and the Safety Parameter Display System (SPDS) had their origins in the studies and investigations conducted as the result of the TMI-2 accident. The President's Commission (Kemeny Commission) criticized NRC for not examining the man-machine interface, over-emphasizing equipment, ignoring human beings, and tolerating outdated technology in control rooms. The Commission's Special Inquiry Group (Rogovin Report) recommended greater application of human factors engineering including better instrumentation displays and improved control room design.
The NRC Lessons Learned Task Force concluded that licensees should review and improve control rooms using NRC Human engineering guidelines, and install safety parameter display systems (then called the safety staff vector). The TMI Action Plan Item I.D.1 and I.D.2 were based on these recommendations 2. Methodology for optimal energy system design of Zero Energy Buildings using mixed-integer linear programming OpenAIRE Lindberg, Karen Byskov; Doorman, Gerard L.; Fischer, David; Korpås, Magnus; Ånestad, Astrid; Sartori, Igor 2016-01-01 According to EU’s Energy Performance of Buildings Directive (EPBD), all new buildings shall be nearly Zero Energy Buildings (ZEB) from 2018/2020. How the ZEB requirement is defined has large implications for the choice of energy technology when considering both cost and environmental issues. This paper presents a methodology for determining ZEB buildings’ cost optimal energy system design seen from the building owner’s perspective. The added value of this work is the inclusion of peak load ta... 3. Programmed Tool for Quantifying Reliability and Its Application in Designing Circuit Systems Directory of Open Access Journals (Sweden) N. S. S. Singh 2014-01-01 Full Text Available As CMOS technology scales down to nanotechnologies, reliability continues to be a decisive subject in the design entry of nanotechnology-based circuit systems. As a result, several computational methodologies have been proposed to evaluate reliability of those circuit systems. However, the process of computing reliability has become very time consuming and troublesome as the computational complexity grows exponentially with the dimension of circuit systems. Therefore, being able to speed up the task of reliability analysis is fast becoming necessary in designing modern logic integrated circuits. 
For this purpose, the paper firstly looks into developing a MATLAB-based automated reliability tool by incorporating the generalized form of the existing computational approaches that can be found in the current literature. Secondly, a comparative study involving those existing computational approaches is carried out on a set of standard benchmark test circuits. Finally, the paper continues to find the exact error bound for individual faulty gates as it plays a significant role in the reliability of circuit systems. 4. Program management system manual International Nuclear Information System (INIS) 1989-08-01 OCRWM has developed a program management system (PMS) to assist in organizing, planning, directing and controlling the Civilian Radioactive Waste Management Program. A well-defined management system is necessary because: (1) the Program is a complex technical undertaking with a large number of participants, (2) the disposal and storage facilities to be developed by the Program must be licensed by the Nuclear Regulatory Commission (NRC) and hence are subject to rigorous quality assurance (QA) requirements, (3) the legislation mandating the Program creates a dichotomy between demanding schedules of performance and a requirement for close and continuous consultation and cooperation with external entities, (4) the various elements of the Program must be managed as parts of an integrated waste management system, (5) the Program has an estimated total system life cycle cost of over $30 billion, and (6) the Program has a unique fiduciary responsibility to the owners and generators of the nuclear waste for controlling costs and minimizing the user fees paid into the Nuclear Waste Fund. This PMS Manual is designed and structured to facilitate strong, effective Program management by providing policies and requirements for organizing, planning, directing and controlling the major Program functions.
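The gate-level reliability computation surveyed above can be sketched by exact enumeration over independent gate-fault patterns (a von Neumann-style flip model). The two-NAND circuit, the fault model, and the fault probability are illustrative assumptions, not details from the paper:

```python
from itertools import product

def circuit(inputs, faults):
    """Tiny example circuit: out = NAND(NAND(a, b), c).
    A set fault bit flips the corresponding gate's output."""
    a, b, c = inputs
    g1 = 1 - (a & b)
    g1 ^= faults[0]
    g2 = 1 - (g1 & c)
    g2 ^= faults[1]
    return g2

def reliability(inputs, eps, n_gates=2):
    """Exact probability that the circuit output matches the fault-free
    output, with each gate flipping independently with probability eps."""
    golden = circuit(inputs, (0,) * n_gates)
    r = 0.0
    for faults in product((0, 1), repeat=n_gates):
        p = 1.0
        for f in faults:
            p *= eps if f else 1.0 - eps
        if circuit(inputs, faults) == golden:
            r += p
    return r

r = reliability((1, 1, 1), eps=0.01)  # double faults can cancel, so r > (1 - eps)**2
```

The exponential cost the abstract mentions is visible here: the enumeration grows as 2^n in the number of gates, which is why faster approximate approaches are worth comparing.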
5. Design of Health Monitoring Program for Filling System based on Data Level Fusion
OpenAIRE
Cheng Long; Xing Xiaochen; Xie Weiqi; Cheng Rui; Wang Lei
2016-01-01
Aiming at filling-system health monitoring, a partition of health monitoring types based on data-level fusion is studied. The health monitoring types are divided into two parts: fusion-threshold monitoring based on single-sensor data, and fusion monitoring based on multi-sensor data of the same type. On this basis, single-sensor fusion monitoring based on the RTS-TA algorithm and multi-sensor fusion monitoring based on an improved weighted fusion algorithm are designed. For multi-sensor data f...
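The abstract does not specify its "improved weighted fusion algorithm"; a minimal sketch of the classic baseline it builds on, inverse-variance weighted fusion of same-type sensors (the readings and variances below are made-up values):

```python
def fuse(readings, variances):
    """Inverse-variance weighted fusion of same-type sensor readings.
    Returns the fused estimate and its (reduced) variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * x for w, x in zip(weights, readings)) / total
    return estimate, 1.0 / total

# Three same-type sensors; the third is twice as precise (illustrative).
est, var = fuse([10.2, 9.8, 10.0], [0.04, 0.04, 0.02])
```

The fused variance is always smaller than the best single sensor's, which is the point of fusing before comparing against a health threshold.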
6. Design and Evaluation of the User-Adapted Program Scheduling system based on Bayesian Network and Constraint Satisfaction
Science.gov (United States)
Iwasaki, Hirotoshi; Sega, Shinichiro; Hiraishi, Hironori; Mizoguchi, Fumio
In recent years, large amounts of music content can be stored in mobile computing devices such as portable digital music players and car navigation systems. Moreover, various information content, like news or traffic information, can be acquired anytime and anywhere via cellular communication and wireless LAN. However, usability issues arise from the simple interfaces of mobile computing devices, and retrieving and selecting such content poses safety issues, especially while driving. Thus, it is important for a mobile system to recommend content automatically adapted to the user's preference and situation. In this paper, we present user-adapted program scheduling, which generates sequences of content (a program) suiting the user's preference and situation based on a Bayesian network and the Constraint Satisfaction Problem (CSP) technique. We also describe the design and evaluation of its realization system, the Personal Program Producer (P3). First, preferences such as the genre ratio of content in a program are learned as a Bayesian network model from simple operations such as skip behavior. A model that includes each content item tends to become large; to keep it small, we present a model separation method that losslessly compresses the model. Using the model, probabilistic distributions of preference are inferred to generate constraints. Finally, a program satisfying the constraints is produced. This kind of CSP has the issue that the number of variables is not fixed; to handle this, we propose a method using metavariables. To evaluate the above methods, we applied them to P3 on a car navigation system. User evaluations helped us clarify that P3 can produce a program that the user prefers and adapt it to the user.
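The constraint-satisfaction step of such a scheduler can be sketched as a small backtracking search over genre quotas. The catalogue, quotas, and the `schedule` helper are hypothetical, and the Bayesian preference model that would generate the quotas is omitted:

```python
def schedule(catalogue, length, genre_quota):
    """Backtracking search for an ordered program of `length` items whose
    genre counts exactly meet `genre_quota` (preference weights omitted)."""
    def search(program, remaining, quota):
        if len(program) == length:
            return program if all(v == 0 for v in quota.values()) else None
        for i, (title, genre) in enumerate(remaining):
            if quota.get(genre, 0) > 0:
                quota[genre] -= 1
                found = search(program + [title],
                               remaining[:i] + remaining[i + 1:], quota)
                if found:
                    return found
                quota[genre] += 1      # undo and try the next candidate
        return None
    return search([], list(catalogue), dict(genre_quota))

items = [("song A", "rock"), ("news 1", "news"),
         ("song B", "rock"), ("traffic", "news")]
prog = schedule(items, 3, {"rock": 2, "news": 1})  # two songs, one news item
```

An infeasible quota (more items of a genre than the catalogue holds) simply returns `None`, which is where the paper's metavariable technique for a variable number of CSP variables would come in.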
7. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING, AND COST DATA FOR MERCURY CONTROL SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Michael D. Durham
2004-10-01
PG&E NEG Salem Harbor Station Unit 1 was successfully tested for applicability of activated carbon injection as a mercury control technology. Test results from this site have enabled a thorough evaluation of mercury control at Salem Harbor Unit 1, including performance, estimated cost, and operation data. This unit has very high native mercury removal, thus it was important to understand the impacts of process variables on native mercury capture. The team responsible for executing this program included plant and PG&E headquarters personnel, EPRI and several of its member companies, DOE, ADA, Norit Americas, Inc., Hamon Research-Cottrell, Apogee Scientific, TRC Environmental Corporation, Reaction Engineering, as well as other laboratories. The technical support of all of these entities came together to make this program achieve its goals. Overall the objectives of this field test program were to determine the mercury control and balance-of-plant impacts resulting from activated carbon injection into a full-scale ESP on Salem Harbor Unit 1, a low sulfur bituminous-coal-fired 86 MW unit. It was also important to understand the impacts of process variables on native mercury removal (>85%). One half of the gas stream was used for these tests, or 43 MWe. Activated carbon, DARCO FGD supplied by NORIT Americas, was injected upstream of the cold side ESP, just downstream of the air preheater. This allowed for approximately 1.5 seconds residence time in the duct before entering the ESP. Conditions tested in this field evaluation included the impacts of the Selective Non-Catalytic Reduction (SNCR) system on mercury capture, of unburned carbon in the fly ash, of adjusting ESP inlet flue gas temperatures, and of boiler load on mercury control. The field evaluation conducted at Salem Harbor looked at several sorbent injection concentrations at several flue gas temperatures. It was noted that at the mid temperature range of 322-327 F, the LOI (unburned carbon) lost some of its
8. Ocean Thermal Energy Conversion power system development. Phase I: preliminary design. Final report. [ODSP-3 code; OTEC Steady-State Analysis Program
Energy Technology Data Exchange (ETDEWEB)
1978-12-04
The following appendices are included: Dynamic Simulation Program (ODSP-3); sample results of dynamic simulation; trip report - NH/sub 3/ safety precautions/accident records; trip report - US Coast Guard Headquarters; OTEC power system development, preliminary design test program report; medium turbine generator inspection point program; net energy analysis; bus bar cost of electricity; OTEC technical specifications; and engineering drawings. (WHK)
9. Design of combi systems
DEFF Research Database (Denmark)
Andersen, Elsa; Shah, Louise Jivan; Furbo, Simon
2001-01-01
Investigations have shown that the thermal performance of Danish combi systems varies greatly from system to system. Some systems perform well; however, more systems perform poorly. [Ellehauge K et al (2000)]. Most of the combined systems that are installed...... is determined. The calculations are based on the simulation program TrnSys [Klein S.A et al. (1996)] and weather data from the Danish Design Reference Year, DRY. The paper will present and compare measured and calculated thermal performances and solar fractions of different combi systems and the main reasons...... in Denmark correspond to the system illustrated in Figure 1. The control system operates the three-way valve in the solar collector circuit so that solar heat is supplied either to the storage tank or to the heat exchanger between the collector loop and the space-heating loop. [Ellehauge K, Shah L.J. (2000...
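The solar fractions compared above reduce to a simple energy ratio over the simulated or measured periods; a minimal sketch, with illustrative per-period energy totals that are not taken from the paper:

```python
def solar_fraction(q_solar, q_aux):
    """Solar fraction: solar heat delivered over total heat demand.
    Inputs are per-period (e.g. monthly) energy totals in kWh."""
    return sum(q_solar) / (sum(q_solar) + sum(q_aux))

# Illustrative three-period totals for a combi system (space heat + hot water).
f = solar_fraction([600.0, 900.0, 300.0], [500.0, 700.0, 1500.0])  # -> 0.4
```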
10. Embedded software design and programming of multiprocessor system-on-chip simulink and system C case studies
CERN Document Server
Popovici, Katalin; Jerraya, Ahmed A; Wolf, Marilyn
2010-01-01
Current multimedia and telecom applications require complex, heterogeneous multiprocessor system-on-chip (MPSoC) architectures with specific communication infrastructure in order to achieve the required performance. A heterogeneous MPSoC includes different types of processing units (DSP, microcontroller, ASIP) and different communication schemes (fast links, non-standard memory organization and access). Programming an MPSoC requires the generation of efficient software running on the MPSoC from a high-level environment, by using the characteristics of the architecture. This task is known to be tediou
11. System programming languages
OpenAIRE
Šmit, Matej
2016-01-01
Most operating systems are written in the C programming language. The same holds for system software, for example, device drivers, compilers, debuggers, disk checkers, etc. Recently, some new programming languages have emerged which are supposed to be suitable for system programming. In this thesis we present the programming languages D, Go, Nim and Rust. We defined the criteria which are important for deciding whether a programming language is suitable for system programming. We examine programming langua...
12. Design basis reconstitution for an effective design control program
International Nuclear Information System (INIS)
Banerjee, A.K.
1987-01-01
Configuration management is a new buzz word in the nuclear power industry. Whatever its definition, everyone agrees that the configuration of a nuclear power plant must be managed effectively. In layman's terms, configuration management means that a plant must be built, operated, and maintained in a manner consistent with its design basis. Thus, control of the design basis is the most important element in any configuration management program. Until recently, the US Nuclear Regulatory Commission's (NRC's) review of design basis focused on the plants that were about to get operating licenses. However, incidents at a few operating nuclear plants and NRC inspections (Safety System Functional Inspection and Safety System Outage Modification Inspection) have indicated weaknesses in older operating plant design basis documentation and design change control programs. Thus, reconstitution of plant design basis has become an important issue. This paper presents the major element of a design basis reconstitution program, which can be an immense undertaking for some of the older operating plants
13. Lightweight Cooling Component Development (LCCD) Program. Polymeric LVS Cooling System Design Report
National Research Council Canada - National Science Library
1998-01-01
The purpose of this analysis was to discuss the design of the new polymeric cooling components and comment on the expected performance and durability of these units over the field trial and in-service use...
14. NASA-GIT predoctoral design training program. [systems and mechanical engineering
Science.gov (United States)
1974-01-01
The training program is discussed briefly, and the quantity and quality of academic achievement of those students who were supported by the traineeships are summarized. Dissertations which were completed or on which substantial progress was made are listed, along with a short description of the activities and status of each of the former trainees.
15. Design of the superconducting coil system in JT-60 modification program aimed at achieving high performance plasmas
International Nuclear Information System (INIS)
Tsuchiya, K.; Kizu, K.; Tamai, H.; Matsukawa, M.; Ando, T.
2006-01-01
The modification program of the JT-60 tokamak aims to establish the scientific and technological bases of an economically and environmentally attractive DEMO by achieving steady-state high-beta plasma. For economic feasibility, the aspect ratio of a fusion power plant tends to become lower to achieve high mass power density. Therefore, the design of a future experimental device is required to cover a broad operational space in the aspect ratio and the plasma shape parameter, which strongly correlate with the critical beta value for the ideal MHD limit. In the modified JT-60 tokamak, the superconducting coil system is also designed with this concept in mind. In this device, the toroidal field (TF) coil system consists of 18 coils, and the poloidal field (PF) coil system has 4 central solenoid (CS) modules and 7 equilibrium field (EF) coils. In the latest design of the superconducting coil system, the number of EF coils is increased from 6, and the positions of the EF coils are optimized to broaden the operational space. Consequently, the flexibility in triangularity becomes broad enough to cover the ITER configuration, so that we obtain flexibility of plasma configuration, e.g. ITER-similar operation or high plasma current (I_p = 5.5 MA) operation at the lowest aspect ratio (A = 2.6). The CS design is also revised to supply sufficient flux for the designed time duration. Within the space designed for the CS, it is found that 17.3 Wb of flux can be provided at a maximum field of 10 T. Therefore, the CS conductor should adopt a Nb3Sn strand with a Cu/non-Cu ratio of 2.8. For the conductors of the superconducting coils in this device, the cable-in-conduit (CIC) type conductor is adopted. In particular, the CS is operated with variable coil current in a strong magnetic field, so that the evaluation of the fatigue appearing at the conduit in order to
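The quoted CS figures (17.3 Wb at 10 T) can be sanity-checked with a zeroth-order solenoid estimate, phi = B * pi * r**2. This ignores the real double-swing flux waveform and multi-module CS geometry, so it is only a rough consistency check, not the paper's calculation:

```python
import math

def flux_radius(phi_wb, b_max_t):
    """Effective solenoid radius implied by phi = B * pi * r**2."""
    return math.sqrt(phi_wb / (math.pi * b_max_t))

r_eff = flux_radius(17.3, 10.0)  # about 0.74 m of effective bore radius
```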
16. Multi-Agent Programming Contest 2013: The Teams and the Design of Their Systems
DEFF Research Database (Denmark)
Ahlbrecht, Tobias; Bender-Saebelkampf, Christian; Brito, Maiquel
2013-01-01
Five teams participated in the Multi-Agent Programming Contest in 2013: All of them gained experience in 2012 already. In order to better understand which paradigms they used, which techniques they considered important and how much work they invested, the organisers of the contest compiled together...... a detailed list of questions (circa 50). This paper collects all answers to these questions as given by the teams....
17. The Implications of Program Genres for the Design of Social Television Systems
NARCIS (Netherlands)
D. Geerts (David); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick)
2008-01-01
In this paper, we look at how television genres can play a role in the use of social interactive television systems (social iTV). Based on a user study of a system for sending and receiving enriched video fragments to and from a range of devices, we discuss which genres are preferred for
18. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions
Science.gov (United States)
Majumdar, A. K.; Hedayat, A.
2015-01-01
This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching Design of Thermal Systems class at University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage the instructors to utilize the code for the class assignments as well as senior design projects. The need for a generalized computer program for thermofluid analysis in a flow network has been felt for a long time in aerospace industries. Designers of thermofluid systems often need to know pressures, temperatures, flow rates, concentrations, and heat transfer rates at different parts of a flow circuit for steady state or transient conditions. Such applications occur in propulsion systems for tank pressurization, internal flow analysis of rocket engine turbopumps, chilldown of cryogenic tanks and transfer lines, and many other applications of gas-liquid systems involving fluid transients and conjugate heat and mass transfer. Computer resource requirements to perform time-dependent, three-dimensional Navier-Stokes computational fluid dynamic (CFD) analysis of such systems are prohibitive and therefore are not practical. Available commercial codes are generally suitable for steady state, single-phase incompressible flow. Because of the proprietary nature of such codes, it is not possible to extend their capability to satisfy the above-mentioned needs. Therefore, the Generalized Fluid System Simulation Program (GFSSP1) has been developed at NASA
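The node-and-branch pressure balancing a network code like GFSSP performs can be illustrated on a single junction. The quadratic-loss branch model and the bisection solver below are a toy sketch under assumed loss coefficients, not GFSSP's actual finite-volume algorithm:

```python
import math

def branch_flow(p_up, p_down, k):
    """Flow through a branch with quadratic loss dP = k * Q**2."""
    dp = p_up - p_down
    return math.copysign(math.sqrt(abs(dp) / k), dp)

def junction_pressure(p_a, p_b, k_in, k_out, tol=1e-10):
    """Bisect for the junction pressure that balances inflow and outflow."""
    lo, hi = min(p_a, p_b), max(p_a, p_b)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if branch_flow(p_a, mid, k_in) > branch_flow(mid, p_b, k_out):
            lo = mid               # excess inflow: raise junction pressure
        else:
            hi = mid
    return 0.5 * (lo + hi)

p_j = junction_pressure(200.0, 100.0, k_in=1.0, k_out=4.0)  # about 180
```

A real network solver does the same balancing simultaneously at every internal node (with energy and species equations coupled in), which is what makes a generalized code useful compared to hand calculations.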
Energy Technology Data Exchange (ETDEWEB)
Gregory Gaul
2004-04-21
Natural gas combustion turbines are rapidly becoming the primary technology of choice for generating electricity. At least half of the new generating capacity added in the US over the next twenty years will be combustion turbine systems. The Department of Energy has cosponsored, with Siemens Westinghouse, a program to maintain the technology lead in gas turbine systems. The very ambitious eight-year program was designed to demonstrate a highly efficient and commercially acceptable power plant, with the ability to fire a wide range of fuels. The main goal of the Advanced Turbine Systems (ATS) Program was to develop ultra-high efficiency, environmentally superior and cost effective competitive gas turbine systems for base load application in utility, independent power producer and industrial markets. Performance targets were focused on natural gas as a fuel and included: System efficiency that exceeds 60% (lower heating value basis); Less than 10 ppmv NO{sub x} emissions without the use of post combustion controls; Busbar electricity costs 10% less than state-of-the-art systems; Reliability-Availability-Maintainability (RAM) equivalent to current systems; Water consumption minimized to levels consistent with cost and efficiency goals; and Commercial systems by the year 2000. In a parallel effort, the program was to focus on adapting the ATS engine to coal-derived or biomass fuels. In Phase 1 of the ATS Program, preliminary investigations of different gas turbine cycles demonstrated that net plant LHV-based efficiency greater than 60% was achievable. In Phase 2 the more promising cycles were evaluated in greater detail and the closed-loop steam-cooled combined cycle was selected for development because it offered the best solution with least risk for achieving the ATS Program goals for plant efficiency, emissions, cost of electricity and RAM. Phase 2 also involved conceptual ATS engine and plant design and technology developments in aerodynamics, sealing
20. The design of the MAD Design Program
International Nuclear Information System (INIS)
Niederer, J.
1992-01-01
The study of long-term stability in particle accelerators has long been served by a group of widely circulated computer programs. The progress in these programs has mirrored the growth and versatility in accelerator size, complexity, and purpose, as well as evolving technologies in computing software and hardware. A number of large accelerator projects during the last decade were designed with the aid of physics programs either written for, or tailored to, the project at hand, each invariably benefiting from contributions of previous workers. This paper outlines the recent history of one example of an accelerator lattice model tool kit, the Methodical Accelerator Design (MAD) Program, which has tried to knit together this collective wisdom of the accelerator community. The ideas behind the software design of the program itself are traced here; the accelerator physics contents and origins are thoroughly documented elsewhere. These informal notes have a Brookhaven flavor, in part because of early BNL efforts to generalize the ways that technical problems are organized and presented to computers. Some recent BNL applications not covered in the extensive CERN documentation are also included.
1. Building 834 -- Cost-effective and innovative design of remediation systems using surplus equipment from former weapons programs
Energy Technology Data Exchange (ETDEWEB)
Daley, P.F.; Landgraf, R.K.; Lima, M.R.; Lamarre, A.L.
1995-07-01
The Building 834 Complex at the Lawrence Livermore National Laboratory (LLNL) Site 300 has been used by the weapons development programs at LLNL as a testing facility for measuring component response to environmental stresses such as extreme temperature. The heat-exchange system at the facility used trichloroethene (TCE), at times with adjuvants, as the primary heat-transfer medium for over 20 years. Accidental spills, pipe failures, and seal blowouts over that period contributed to a substantial contaminant plume in a perched water-bearing zone underlying the Complex. Individual wells near the source area have produced ground water samples with TCE concentrations exceeding 800,000 ppb. In the last several years, the authors have developed a modular ground water and soil vapor extraction system for remediating the plume source area. The modular facility design permits the testing of new technologies to expedite remediation and/or reduce the quantity of hazardous wastes generated as byproducts of the primary remedial activities. To contain costs, the authors have used equipment and components recycled from the original Building 834 Complex heat-exchange system, and surplus equipment from other LLNL divisions. The authors have executed two large-scale tests of energy injection systems for TCE destruction in air (a free-air electron beam and a pulsed, ultraviolet photolysis system), and a soil heating test for accelerating vapor extraction. New work plans for this unique site are being prepared, incorporating the lessons learned in developing new technology with recycled equipment.
2. A Program Manager’s Methodology for Developing Structured Design in Embedded Weapons Systems.
Science.gov (United States)
1983-12-01
launch testing. 4. Pre-launch sequencing and timing. 5. Data formatting and transfer synchronization. The DCU processes all digital and analog signal...system specifications from their totally graphic form into a short narrative form. This transition is a necessary first step toward using an SDL, an
3. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING, AND COST DATA FOR MERCURY CONTROL SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Michael D. Durham
2005-03-17
Brayton Point Unit 1 was successfully tested for applicability of activated carbon injection as a mercury control technology. Test results from this site have enabled a thorough evaluation of the impacts of future mercury regulations to Brayton Point Unit 1, including performance, estimated cost, and operation data. This unit has variable (29-75%) native mercury removal, thus it was important to understand the impacts of process variables and activated carbon on mercury capture. The team responsible for executing this program included: (1) Plant and PG&E National Energy Group corporate personnel; (2) Electric Power Research Institute (EPRI); (3) United States Department of Energy National Energy Technology Laboratory (DOE/NETL); (4) ADA-ES, Inc.; (5) NORIT Americas, Inc.; (6) Apogee Scientific, Inc.; (7) TRC Environmental Corporation; (8) URS Corporation; (9) Quinapoxet Solutions; (10) Energy and Environmental Strategies (EES); and (11) Reaction Engineering International (REI). The technical support of all of these entities came together to make this program achieve its goals. Overall, the objectives of this field test program were to determine the impact of activated carbon injection on mercury control and balance-of-plant processes on Brayton Point Unit 1. Brayton Point Unit 1 is a 250-MW unit that fires a low-sulfur eastern bituminous coal. Particulate control is achieved by two electrostatic precipitators (ESPs) in series. The full-scale tests were conducted on one-half of the flue gas stream (nominally 125 MW). Mercury control sorbents were injected in between the two ESPs. The residence time from the injection grid to the second ESP was approximately 0.5 seconds. In preparation for the full-scale tests, 12 different sorbents were evaluated in a slipstream of flue gas via a packed-bed field test apparatus for mercury adsorption. Results from these tests were used to determine the five carbon-based sorbents that were tested at full-scale. Conditions of interest
4. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING, AND COST DATA FOR MERCURY CONTROL SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Michael D. Durham
2003-05-01
With the Nation's coal-burning utilities facing the possibility of tighter controls on mercury pollutants, the U.S. Department of Energy is funding projects that could offer power plant operators better ways to reduce these emissions at much lower costs. Mercury is known to have toxic effects on the nervous system of humans and wildlife. Although it exists only in trace amounts in coal, mercury is released when coal burns and can accumulate on land and in water. In water, bacteria transform the metal into methylmercury, the most hazardous form of the metal. Methylmercury can collect in fish and marine mammals in concentrations hundreds of thousands of times higher than the levels in surrounding waters. One of the goals of DOE is to develop technologies by 2005 that will be capable of cutting mercury emissions 50 to 70 percent at well under one-half of today's costs. ADA Environmental Solutions (ADA-ES) is managing a project to test mercury control technologies at full scale at four different power plants from 2000--2003. The ADA-ES project is focused on those power plants that are not equipped with wet flue gas desulfurization systems. ADA-ES has developed a portable system that will be tested at four different utility power plants. Each of the plants is equipped with either electrostatic precipitators or fabric filters to remove solid particles from the plant's flue gas. ADA-ES's technology will inject a dry sorbent, such as activated carbon, which removes the mercury and makes it more susceptible to capture by the particulate control devices. A fine water mist may be sprayed into the flue gas to cool its temperature to the range where the dry sorbent is most effective. PG&E National Energy Group is providing two test sites that fire bituminous coals and both are equipped with electrostatic precipitators and carbon/ash separation systems. Wisconsin Electric Power Company is providing a third test site that burns Powder River Basin (PRB) coal and
5. MS PHD'S PDP: Vision, Design, Implementation, and Outcomes of a Minority-Focused Earth System Sciences Program
Science.gov (United States)
Habtes, S. Y.; Mayo, M.; Ithier-Guzman, W.; Pyrtle, A. J.; Williamson Whitney, V.
2007-05-01
Minorities are predicted to comprise at least 33% of the US population by the year 2010, yet their representation in the STEM fields, including the ocean sciences, remains low. In order to advance the goal of better decision making, the Ocean Sciences community must achieve greater levels of diversity in membership. To achieve this objective of greater diversity in the sciences, the Minorities Striving and Pursuing Higher Degrees of Success in Earth System Science® Professional Development Program (MS PHD'S PDP), which was launched in 2003, is supported via grants from NASA's Office of Earth Science and NSF's Directorate for Geosciences. The MS PHD'S PDP is designed to provide professional and mentoring experiences that facilitate the advancement of minorities committed to achieving outstanding Earth System Science careers. The MS PHD'S PDP is structured in three phases, connected by engagement in a virtual community, continuous peer and mentor-to-mentee interactions, and the professional support necessary for ensuring the educational success of the student participants. Since the pilot program in 2003, the MS PHD'S PDP, housed at the University of South Florida's College of Marine Science, has produced 4 cohorts of students. Seventy-five have completed the program; of those, 6 have earned their doctoral degrees. Of the 45 current participants, 10 are graduate students in Marine Science and 15 are still undergraduates; the remaining 10 participants are graduate students in other STEM fields. Since the implementation of the MS PHD'S PDP, a total of 87 students and 33 scientist mentors have become part of the MS PHD'S virtual community, helping to improve the learning environment for current and future participants as well as build a community of minority students who encourage each other to pursue their academic degrees.
6. Design and implementation of an integrated, continuous evaluation, and quality improvement system for a state-based home-visiting program.
Science.gov (United States)
McCabe, Bridget K; Potash, Dru; Omohundro, Ellen; Taylor, Cathy R
2012-10-01
To describe the design and implementation of an evaluation system to facilitate continuous quality improvement (CQI) and scientific evaluation in a statewide home visiting program, and to provide a summary of the system's progress in meeting intended outputs and short-term outcomes. Help Us Grow Successfully (HUGS) is a statewide home visiting program that provides services to at-risk pregnant/post-partum women, children (0-5 years), and their families. The program goals are to improve parenting skills and connect families to needed services and thus improve the health of the service population. The evaluation system is designed to: (1) integrate evaluation into daily workflow; (2) utilize standardized screening and evaluation tools; (3) facilitate a culture of CQI in program management; and (4) facilitate scientifically rigorous evaluations. The review of the system's design and implementation occurred through a formative evaluation process (reach, dose, and fidelity). Data were collected through electronic and paper surveys, administrative data, notes from management meetings, and medical chart review. In the design phase, four process and forty outcome measures were selected and are tracked using standardized screening and monitoring tools. During implementation, the reach and dose of training were adequate to successfully launch the evaluation/CQI system. All staff (n = 165) use the system for management of families; the supervisors (n = 18) use the system to track routine program activities. Data quality and availability are sufficient to support periodic program reviews at the region and state level. In the first 7 months, the HUGS evaluation system tracked 3,794 families (7,937 individuals). System use and acceptance is high. A successful implementation of a structured evaluation system with a strong CQI component is feasible in an existing, large statewide program. The evaluation/CQI system is an effective mechanism to drive modest change in management
7. Designing information systems
CERN Document Server
Blethyn, Stanley G
2014-01-01
Designing Information Systems focuses on the processes, methodologies, and approaches involved in designing information systems. The book first describes systems, management and control, and how to design information systems. Discussions focus on documents produced from the functional construction function, users, operators, analysts, programmers and others, process management and control, levels of management, open systems, design of management information systems, and business system description, partitioning, and leveling. The text then takes a look at functional specification and functiona
8. Time history solution program, L225 (TEV126). Volume 2: Supplemental system design and maintenance document. [for airplane dynamic response using frequency response data
Science.gov (United States)
Tornallyay, A.; Clemmons, R. E.; Kroll, R. I.
1979-01-01
The time history solution program L225 (TEV126) is described. The program calculates the time responses of a linear system by convolving the impulse response functions with the time-dependent excitation. The convolution is performed as a multiplication in the frequency domain. Fast Fourier transform techniques are used to transform the product back into the time domain to obtain response time histories. The design and structure of the program are presented.
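The convolution-via-FFT scheme summarized above is generic enough to sketch. The snippet below is an illustrative NumPy reconstruction of the idea, not the L225 code itself; the function name and test signals are invented:

```python
import numpy as np

def time_response(h, f):
    """Convolve an impulse response h with an excitation f via the FFT.

    Zero-padding to len(h) + len(f) - 1 samples before transforming
    avoids the wrap-around error of a bare circular convolution.
    """
    n = len(h) + len(f) - 1
    H = np.fft.rfft(h, n)          # transform of impulse response
    F = np.fft.rfft(f, n)          # transform of excitation
    return np.fft.irfft(H * F, n)  # product back to the time domain

# Sanity check against direct time-domain convolution
h = np.array([1.0, 0.5, 0.25])       # made-up impulse response
f = np.array([0.0, 1.0, 0.5, 0.0])   # made-up excitation
assert np.allclose(time_response(h, f), np.convolve(h, f))
```

For long excitation records, the frequency-domain product costs O(n log n) rather than the O(n^2) of direct convolution, which is the usual motivation for this approach.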
9. N286.7-99, A Canadian standard specifying software quality management system requirements for analytical, scientific, and design computer programs and its implementation at AECL
International Nuclear Information System (INIS)
Abel, R.
2000-01-01
Analytical, scientific, and design computer programs (referred to in this paper as 'scientific computer programs') are developed for use in a large number of ways by the user-engineer to support and prove engineering calculations and assumptions. These computer programs are subject to frequent modifications inherent in their application and are often used for critical calculations and analysis relative to safety and functionality of equipment and systems. N286.7-99(4) was developed to establish appropriate quality management system requirements to deal with the development, modification, and application of scientific computer programs. N286.7-99 provides particular guidance regarding the treatment of legacy codes
10. System Critical Design Audit (CDA). Books 1, 2 and 3; [Small Satellite Technology Initiative (SSTI Lewis Spacecraft Program)
Science.gov (United States)
1995-01-01
Small Satellite Technology Initiative (SSTI) Lewis Spacecraft Program is evaluated. Spacecraft integration, test, launch, and spacecraft bus are discussed. Payloads and technology demonstrations are presented. Mission data management system and ground segment are also addressed.
11. Programming languages for circuit design.
Science.gov (United States)
Pedersen, Michael; Yordanov, Boyan
2015-01-01
This chapter provides an overview of a programming language for Genetic Engineering of Cells (GEC). A GEC program specifies a genetic circuit at a high level of abstraction through constraints on otherwise unspecified DNA parts. The GEC compiler then selects parts which satisfy the constraints from a given parts database. GEC further provides more conventional programming language constructs for abstraction, e.g., through modularity. The GEC language and compiler is available through a Web tool which also provides functionality, e.g., for simulation of designed circuits.
12. Design Minimalism in Robotics Programming
Directory of Open Access Journals (Sweden)
Anthony Cowley
2006-03-01
With the increasing use of general robotic platforms in different application scenarios, modularity and reusability have become key issues in effective robotics programming. In this paper, we present a minimalist approach for designing robot software, in which very simple modules, with well-designed interfaces and very little redundancy, can be connected through a strongly typed framework to specify and execute different robotics tasks.
14. Photovoltaic systems. Program summary
Energy Technology Data Exchange (ETDEWEB)
None
1978-12-01
Each of the Department of Energy's Photovoltaic Systems Program projects funded and/or in existence during fiscal year 1978 (October 1, 1977 through September 30, 1978) are described. The project sheets list the contractor, principal investigator, and contract number and funding and summarize the programs and status. The program is divided into various elements: program assessment and integration, research and advanced development, technology development, system definition and development, system application experiments, and standards and performance criteria. (WHK)
15. Extension of the hybrid linear programming method to optimize simultaneously the design and operation of groundwater utilization systems
Science.gov (United States)
2015-04-01
This article proposes a hybrid linear programming (LP-LP) methodology for the simultaneous optimal design and operation of groundwater utilization systems. The proposed model is an extension of an earlier LP-LP model proposed by the authors for the optimal operation of a set of existing wells. The proposed model can be used to optimally determine the number, configuration and pumping rates of the operational wells out of potential wells with fixed locations to minimize the total cost of utilizing a two-dimensional confined aquifer under steady-state flow conditions. The model is able to take into account the well installation, piping and pump installation costs in addition to the operational costs, including the cost of energy and maintenance. The solution to the problem is defined by well locations and their pumping rates, minimizing the total cost while satisfying a downstream demand, lower/upper bound on the pumping rates, and lower/upper bound on the water level drawdown at the wells. A discretized version of the differential equation governing the flow is first embedded into the model formulation as a set of additional constraints. The resulting mixed-integer highly constrained nonlinear optimization problem is then decomposed into two subproblems with different sets of decision variables, one with a piezometric head and the other with the operational well locations and the corresponding pumping rates. The binary variables representing the well locations are approximated by a continuous variable leading to two LP subproblems. Having started with a random value for all decision variables, the two subproblems are solved iteratively until convergence is achieved. The performance and ability of the proposed method are tested against a hypothetical problem from the literature and the results are presented and compared with those obtained using a mixed-integer nonlinear programming method. The results show the efficiency and effectiveness of the proposed method for
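The alternating-LP decomposition described above (freeze one group of variables, solve an LP in the other, and iterate to convergence) can be illustrated on a toy bilinear problem. Everything below is invented for illustration; it is not the authors' groundwater model:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 1.5, size=(3, 3))  # made-up bilinear cost matrix

def solve_lp(c):
    """Minimize c.v over the simplex sum(v) = 1, 0 <= v <= 1."""
    res = linprog(c, A_eq=np.ones((1, len(c))), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * len(c), method="highs")
    return res.x

# Start from a uniform guess (the paper starts from random values),
# then alternate between the two LP subproblems until y stops changing.
y = np.full(3, 1.0 / 3.0)
for _ in range(50):
    x = solve_lp(A @ y)        # LP subproblem in x, with y frozen
    y_new = solve_lp(A.T @ x)  # LP subproblem in y, with x frozen
    if np.allclose(y_new, y, atol=1e-9):
        break
    y = y_new
```

As in the article, each subproblem is an easy LP once the other variable group is fixed; the price paid is that the iteration converges to a fixed point that is not guaranteed to be the global optimum of the coupled problem.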
16. Double degree master program: Optical Design
Science.gov (United States)
Bakholdin, Alexey; Kujawinska, Malgorzata; Livshits, Irina; Styk, Adam; Voznesenskaya, Anna; Ezhova, Kseniia; Ermolayeva, Elena; Ivanova, Tatiana; Romanova, Galina; Tolstoba, Nadezhda
2015-10-01
Modern tendencies in higher education require the development of master programs providing achievement of learning outcomes corresponding to quickly changing job market needs. ITMO University, represented by the Applied and Computer Optics Department and the Optical Design and Testing Laboratory, jointly with Warsaw University of Technology, represented by the Institute of Micromechanics and Photonics at the Faculty of Mechatronics, have developed a novel international double-degree master program, "Optical Design", accumulating the expertise of both universities, including experienced teaching staff, educational technologies, and experimental resources. The program presents studies targeting research and professional activities in high-tech fields connected with optical and optoelectronic devices, optical engineering, numerical methods and computer technologies. This master program deals with the design of optical systems of various types, assemblies and layouts using computer modeling means; investigation of light distribution phenomena; image modeling and formation; and development of optical methods for image analysis and optical metrology, including optical testing, materials characterization, NDT, and industrial control and monitoring. The goal of this program is to train graduates capable of solving a wide range of research and engineering tasks in optical design and metrology leading to modern manufacturing and innovation. Variability of the program structure provides its flexibility and adaptation to current job market demands and personal learning paths for each student. In addition, a considerable proportion of internship and research work expands practical skills. Some special features of the "Optical Design" program, which implements the best practices of both universities, and the challenges and lessons learnt during its realization are presented in the paper.
17. General aviation and community development; Summer Faculty Fellowship Program in Engineering Systems Design, Hampton, Va., June 2-August 15, 1975, Report
Science.gov (United States)
Sincoff, M. Z.; Dajani, J. S.
1975-01-01
The document summarizes the results of a faculty program in engineering systems design whose primary aim was to provide a framework for communication and collaboration between academic personnel, research engineers, and scientists in government agencies and private industry. Other objectives were to provide a useful study of a broadly based societal problem, requiring the coordinated efforts of a multidisciplinary team, and to generate experience in the development of systems design and multidisciplinary activities. The success of the program is evidenced by the resulting study of general aviation and community development, characterized by thorough scrutiny of ideas, philosophies, and academic perspectives.
18. Control system design guide
Energy Technology Data Exchange (ETDEWEB)
Sellers, David; Friedman, Hannah; Haasl, Tudi; Bourassa, Norman; Piette, Mary Ann
2003-05-01
The ''Control System Design Guide'' (Design Guide) provides methods and recommendations for the control system design process and control point selection and installation. Control systems are often the most problematic system in a building. A good design process that takes into account maintenance, operation, and commissioning can lead to a smoothly operating and efficient building. To this end, the Design Guide provides a toolbox of templates for improving control system design and specification. HVAC designers are the primary audience for the Design Guide. The control design process it presents will help produce well-designed control systems that achieve efficient and robust operation. The spreadsheet examples for control valve schedules, damper schedules, and points lists can streamline the use of the control system design concepts set forth in the Design Guide by providing convenient starting points from which designers can build. Although each reader brings their own unique questions to the text, the Design Guide contains information that designers, commissioning providers, operators, and owners will find useful.
19. The PACKTRAM database on national competent authorities' approval certificates for package design, special form material and shipment of radioactive material. User's guide for compiled system program
International Nuclear Information System (INIS)
1995-01-01
The PACKTRAM system program enables Member States to prepare data diskettes on national competent authorities' approval certificates for package design, special form material and shipment of radioactive material, for submission to the IAEA, and facilitates data manipulation and report preparation for the IAEA. The system program is provided as a 424 kbyte executable file, for which this document is the User Guide. The system is fully menu-driven and requires an IBM-compatible personal computer with a minimum of 640 kbyte random access memory, a hard drive and one 3-1/2 inch diskette drive. 3 refs, 6 tabs
20. Instructional Design and the Media Program.
Science.gov (United States)
Hug, William E.
Designed for training school library/media specialists to establish media programs as an integral part of the school curriculum, this text is divided into four general areas. The first two chapters focus on what society expects of the schools and how educators respond. Systems principles are then shown to apply to the building of an educational…
1. Designing photovoltaic systems
Energy Technology Data Exchange (ETDEWEB)
Jones, G.J.
1987-03-22
Photovoltaic system design understanding has matured rapidly in the last decade. Initially the design process emphasized detailed modeling, load match, and on-site energy storage. This entire approach ended once the systems were allowed to operate interactively with the utility. Current design thinking emphasizes system energy cost in relation to utility avoided cost. This leads to a new logic that allows for much simplified design procedures. This paper reviews these procedures for the two types of grid-connected photovoltaic systems and presents a brief discussion of balance-of-system options.
2. Clothing Systems Design Lab
Data.gov (United States)
Federal Laboratory Consortium — The Clothing Systems Design Lab houses facilities for the design and rapid prototyping of military protective apparel. Other focuses include: creation of patterns and...
3. Reactor System Design
International Nuclear Information System (INIS)
Chi, S. K.; Kim, G. K.; Yeo, J. W.
2006-08-01
SMART NPP (Nuclear Power Plant) has been developed for a dual purpose: electricity generation and energy supply for seawater desalination. The objective of this project is to design the reactor system of the SMART pilot plant (SMART-P), which will be built and operated for the integrated technology verification of SMART. SMART-P is an integral reactor in which the primary components of the reactor coolant system are enclosed in a single pressure vessel without connecting pipes. The major components installed within the vessel include a core, twelve steam generator cassettes, a low-temperature self-pressurizer, twelve control rod drives, and two main coolant pumps. SMART-P reactor system design was categorized into reactor core design, fluid system design, reactor mechanical design, major component design and MMIS design. Reactor safety analysis and performance analysis were performed for the developed SMART-P reactor system. Also, preparation of the safety analysis report and technical support for licensing acquisition are performed.
4. A Summary Description of a Computer Program Concept for the Design and Simulation of Solar Pond Electric Power Generation Systems
Science.gov (United States)
1984-01-01
The plant comprises a solar pond electric power generation subsystem, an electric power transformer and switch yard, a large solar pond, a water treatment plant, and numerous storage and evaporation ponds. Because a solar pond stores thermal energy over a long period of time, plant operation at any point in time is dependent upon past operation and future perceived generation plans. This time or past-history factor introduces a new dimension in the design process. The design optimization of a plant must go beyond examination of operational state points and consider the seasonal variations in solar input, solar pond energy storage, and the desired plant annual duty-cycle profile. Models or design tools will be required to optimize a plant design. These models should be developed to include a proper but not excessive level of detail. The model should be targeted to a specific objective and not conceived as a do-everything analysis tool, i.e., system design and not gradient-zone stability.
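The past-history factor noted above, where stored thermal energy couples each month's output to every earlier month, can be shown with a toy monthly energy balance. All numbers are invented for illustration and have no connection to the program described:

```python
import math

CAPACITY = 100.0   # storage capacity, arbitrary units
store = 50.0       # initial stored thermal energy
history = []
for month in range(12):
    # Seasonal solar input varies sinusoidally over the year in this toy model
    solar_in = 10.0 + 6.0 * math.sin(2 * math.pi * month / 12)
    demand = 9.0
    store = min(CAPACITY, store + solar_in)   # charge, clipped at capacity
    delivered = min(store, demand)            # discharge to meet demand
    store -= delivered
    history.append(store)
# The state at any month depends on the whole preceding sequence, so a
# design cannot be judged from a single operational state point.
```

A real design tool would replace the constants with solar, load, and pond-stratification models, but the sequential dependence, which is the point the abstract makes, is already visible here.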
5. Optical system design
CERN Document Server
Fischer, Robert F
2008-01-01
Honed for more than 20 years in an SPIE professional course taught by renowned optical systems designer Robert E. Fischer, Optical System Design, Second Edition brings you the latest cutting-edge design techniques and more than 400 detailed diagrams that clearly illustrate every major procedure in optical design. This thoroughly updated resource helps you work better and faster with computer-aided optical design techniques, diffractive optics, and the latest applications, including digital imaging, telecommunications, and machine vision. No need for complex, unnecessary mathematical derivations-instead, you get hundreds of examples that break the techniques down into understandable steps. For twenty-first century optical design without the mystery, the authoritative Optical Systems Design, Second Edition features: Computer-aided design use explained through sample problems Case studies of third-millennium applications in digital imaging, sensors, lasers, machine vision, and more New chapters on optomechanic...
6. Designing and Programming CICS Applications
CERN Document Server
Horswill, John
2011-01-01
CICS is an application server that delivers industrial-strength, online transaction management for critical enterprise applications. Proven in the market for over 30 years with many of the world's leading businesses, CICS enables today's customers to modernize and extend their applications to take advantage of the opportunities provided by e-business while maximizing the benefits of their existing investments. Designing and Programming CICS Applications will benefit a diverse audience. It introduces new users of IBM's mainframe (OS/390) to CICS features. It shows experienced users how t
7. Advanced light water reactor plants System 80+{trademark} design certification program. Annual progress report, October 1, 1995--September 30, 1996
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-12-31
The purpose of this report is to provide a status of the progress that was made towards Design Certification of System 80+{trademark} during the US government's 1996 fiscal year. The System 80+ Advanced Light Water Reactor (ALWR) is a 3931 MW (1350 MWe) Pressurized Water Reactor (PWR). The design covers an essentially complete plant. It is based on EPRI ALWR Utility Requirements Document (URD) improvements to the Standardized System 80 Nuclear Steam Supply System (NSSS) in operation at Palo Verde Units 1, 2 and 3. The NSSS is a traditional two-loop arrangement with two steam generators, two hot legs and four cold legs, each with a reactor coolant pump. The System 80+ standard design houses the NSSS in a spherical steel containment vessel which is enclosed in a concrete shield building, thus providing the safety advantages of a dual barrier to radioactivity release. Other major features include an all-digital, human-factors-engineered control room, an alternate electrical AC power source, an In-Containment Refueling Water Storage Tank (IRWST), and plant arrangements providing complete separation of redundant trains in safety systems.
8. Advanced light water reactor plants System 80+{trademark} design certification program. Annual progress report, October 1, 1994--September 30, 1995
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-09-01
The purpose of this report is to provide the status of the progress that was made towards Design Certification of System 80+{trademark} during the US government's 1995 fiscal year. The System 80+ Advanced Light Water Reactor (ALWR) is a 3931 MW (1350 MWe) Pressurized Water Reactor (PWR). The design covers an essentially complete plant. It is based on EPRI ALWR Utility Requirements Document (URD) improvements to the Standardized System 80 Nuclear Steam Supply System (NSSS) in operation at Palo Verde Units 1, 2, and 3. The NSSS is a traditional two-loop arrangement with two steam generators, two hot legs and four cold legs, each with a reactor coolant pump. The System 80+ standard design houses the NSSS in a spherical steel containment vessel which is enclosed in a concrete shield building, thus providing the safety advantages of a dual barrier to radioactivity release. Other major features include an all-digital, human-factors-engineered control room, an alternate electrical AC power source, an In-Containment Refueling Water Storage Tank (IRWST), and plant arrangements providing complete separation of redundant trains in safety systems.
9. Advanced light water reactor plants System 80+trademark design certification program. Annual progress report, October 1, 1994 - September 30, 1995
International Nuclear Information System (INIS)
1998-01-01
The purpose of this report is to provide the status of the progress that was made towards Design Certification of System 80+trademark during the US government's 1995 fiscal year. The System 80+ Advanced Light Water Reactor (ALWR) is a 3931 MW (1350 MWe) Pressurized Water Reactor (PWR). The design covers an essentially complete plant. It is based on EPRI ALWR Utility Requirements Document (URD) improvements to the Standardized System 80 Nuclear Steam Supply System (NSSS) in operation at Palo Verde Units 1, 2, and 3. The NSSS is a traditional two-loop arrangement with two steam generators, two hot legs and four cold legs, each with a reactor coolant pump. The System 80+ standard design houses the NSSS in a spherical steel containment vessel which is enclosed in a concrete shield building, thus providing the safety advantages of a dual barrier to radioactivity release. Other major features include an all-digital, human-factors-engineered control room, an alternate electrical AC power source, an In-Containment Refueling Water Storage Tank (IRWST), and plant arrangements providing complete separation of redundant trains in safety systems
10. Advanced light water reactor plants System 80+trademark design certification program. Annual progress report, October 1, 1995 - September 30, 1996
International Nuclear Information System (INIS)
1996-01-01
The purpose of this report is to provide a status of the progress that was made towards Design Certification of System 80+trademark during the US government's 1996 fiscal year. The System 80+ Advanced Light Water Reactor (ALWR) is a 3931 MW (1350 MWe) Pressurized Water Reactor (PWR). The design covers an essentially complete plant. It is based on EPRI ALWR Utility Requirements Document (URD) improvements to the Standardized System 80 Nuclear Steam Supply System (NSSS) in operation at Palo Verde Units 1, 2 and 3. The NSSS is a traditional two-loop arrangement with two steam generators, two hot legs and four cold legs, each with a reactor coolant pump. The System 80+ standard design houses the NSSS in a spherical steel containment vessel which is enclosed in a concrete shield building, thus providing the safety advantages of a dual barrier to radioactivity release. Other major features include an all-digital, human-factors-engineered control room, an alternate electrical AC power source, an In-Containment Refueling Water Storage Tank (IRWST), and plant arrangements providing complete separation of redundant trains in safety systems
11. Designing automatic resupply systems.
Science.gov (United States)
Harding, M L
1999-02-01
This article outlines the process for designing and implementing autoresupply systems. The planning process includes determination of goals and appropriate participation. Different types of autoresupply mechanisms include kanban, breadman, consignment, systems contracts, and direct shipping from an MRP schedule.
12. HVAC systems design handbook
CERN Document Server
Haines, Roger W
2010-01-01
Thoroughly updated with the latest codes, technologies, and practices, this all-in-one resource provides details, calculations, and specifications for designing efficient and effective residential, commercial, and industrial HVAC systems. HVAC Systems Design Handbook, Fifth Edition, features new information on energy conservation and computer usage for design and control, as well as the most recent International Code Council (ICC) Mechanical Code requirements. Detailed illustrations, tables, and essential HVAC equations are also included. This comprehensive guide contains everything you need to design, operate, and maintain peak-performing HVAC systems.
13. Design of the cryogenic systems for the Near and Far LAr-TPC detectors of the Short-Baseline Neutrino program (SBN) at Fermilab
Energy Technology Data Exchange (ETDEWEB)
Geynisman, M. [Fermilab; Bremer, J. [CERN; Chalifour, M. [CERN; Delaney, M. [Fermilab; Dinnon, M. [Fermilab; Doubnik, R. [Fermilab; Hentschel, S. [Fermilab; Kim, M. J. [Fermilab; Montanari, C. [INFN, Pavia; Montanari, D. [Fermilab; Nichols, T. [Fermilab; Norris, B. [Fermilab; Sarychev, M. [Fermilab; Schwartz, F. [Fermilab; Tillman, J. [Fermilab; Zuckerbrot, M. [Fermilab
2017-08-31
The Short-Baseline Neutrino (SBN) physics program at Fermilab and Neutrino Platform (NP) at CERN are part of the international Neutrino Program leading to the development of the Long-Baseline Neutrino Facility/Deep Underground Neutrino Experiment (LBNF/DUNE) science project. The SBN program, consisting of three Liquid Argon Time Projection Chamber (LAr-TPC) detectors positioned along the Booster Neutrino Beam (BNB) at Fermilab, includes an existing detector known as MicroBooNE (170-ton LAr-TPC) plus two new experiments known as SBN’s Near Detector (SBND, ~260 tons) and SBN’s Far Detector (SBN-FD, ~760 tons). All three detectors have distinctly different cryostat designs, thus defining specific requirements for the cryogenic systems. Fermilab has already built two new facilities to house the SBND and SBN-FD detectors. The cryogenic systems for these detectors are in various stages of design and construction, with CERN and Fermilab being responsible for delivery of specific sub-systems. This contribution presents specific design requirements and typical implementation solutions for each sub-system of the SBND and SBN-FD cryogenic systems.
14. Design of the cryogenic systems for the Near and Far LAr-TPC detectors of the Short-Baseline Neutrino program (SBN) at Fermilab
Science.gov (United States)
Geynisman, M.; Bremer, J.; Chalifour, M.; Delaney, M.; Dinnon, M.; Doubnik, R.; Hentschel, S.; Kim, M. J.; Montanari, C.; Montanari, D.; Nichols, T.; Norris, B.; Sarychev, M.; Schwartz, F.; Tillman, J.; Zuckerbrot, M.
2017-12-01
The Short-Baseline Neutrino (SBN) physics program at Fermilab and Neutrino Platform (NP) at CERN are part of the international Neutrino Program leading to the development of the Long-Baseline Neutrino Facility/Deep Underground Neutrino Experiment (LBNF/DUNE) science project. The SBN program, consisting of three Liquid Argon Time Projection Chamber (LAr-TPC) detectors positioned along the Booster Neutrino Beam (BNB) at Fermilab, includes an existing detector known as MicroBooNE (170-ton LAr-TPC) plus two new experiments known as SBN’s Near Detector (SBND, ∼260 tons) and SBN’s Far Detector (SBN-FD, ∼760 tons). All three detectors have distinctly different cryostat designs, thus defining specific requirements for the cryogenic systems. Fermilab has already built two new facilities to house the SBND and SBN-FD detectors. The cryogenic systems for these detectors are in various stages of design and construction, with CERN and Fermilab being responsible for delivery of specific sub-systems. This contribution presents specific design requirements and typical implementation solutions for each sub-system of the SBND and SBN-FD cryogenic systems.
15. Advanced Turbine Systems (ATS) program conceptual design and product development. Quarterly report, August 25--November 30, 1993
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-06-01
GE has achieved a leadership position in the worldwide gas turbine industry in both industrial/utility markets and in aircraft engines. This design and manufacturing base plus our close contact with the users provides the technology for creation of the next generation advanced power generation systems for both the industrial and utility industries. GE has been active in the definition of advanced turbine systems for several years. These systems will leverage the technology from the latest developments in the entire GE gas turbine product line. These products will be USA-based in engineering and manufacturing and are marketed through GE Industrial and Power Systems. Achieving the advanced turbine system goals of 60% efficiency, 8 ppmvd NOx and 10% electric power cost reduction imposes competing characteristics on the gas turbine system. Two basic technical issues arise from this. The turbine inlet temperature of the gas turbine must increase to achieve both efficiency and cost goals. However, higher temperatures move in the direction of increased NOx emission. Improved coating and materials technologies along with creative combustor design can result in solutions to achieve the ultimate goal.
16. Advanced Turbine Systems (ATS) program conceptual design and product development. Quarterly report, December 1, 1993--February 28, 1994
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-06-01
GE has achieved a leadership position in the worldwide gas turbine industry in both industrial/utility markets and in aircraft engines. This design and manufacturing base plus our close contact with the users provides the technology for creation of the next generation advanced power generation systems for both the industrial and utility industries. GE has been active in the definition of advanced turbine systems for several years. These systems will leverage the technology from the latest developments in the entire GE gas turbine product line. These products will be USA-based in engineering and manufacturing and are marketed through GE Industrial and Power Systems. Achieving the advanced turbine system goals of 60% efficiency, 8 ppmvd NOx and 10% electric power cost reduction imposes competing characteristics on the gas turbine system. Two basic technical issues arise from this. The turbine inlet temperature of the gas turbine must increase to achieve both efficiency and cost goals. However, higher temperatures move in the direction of increased NOx emission. Improved coating and materials technologies along with creative combustor design can result in solutions to achieve the ultimate goal.
17. Design of A Vibration and Stress Measurement System for an Advanced Power Reactor 1400 Reactor Vessel Internals Comprehensive Vibration Assessment Program
International Nuclear Information System (INIS)
Ko, Doyoung; Kim, Kyuhyung
2013-01-01
In accordance with the US Nuclear Regulatory Commission (US NRC), Regulatory Guide 1.20, the reactor vessel internals comprehensive vibration assessment program (RVI CVAP) has been developed for an Advanced Power Reactor 1400 (APR1400). The purpose of the RVI CVAP is to verify the structural integrity of the reactor internals to flow-induced loads prior to commercial operation. The APR1400 RVI CVAP consists of four programs (analysis, measurement, inspection, and assessment). Thoughtful preparation is essential to the measurement program, because data acquisition must be performed only once. The optimized design of a vibration and stress measurement system for the RVI CVAP is essential to verify the integrity of the APR1400 RVI. We successfully designed a vibration and stress measurement system for the APR1400 RVI CVAP based on the design materials, the hydraulic and structural analysis results, and performance tests of transducers in an extreme environment. The measurement system designed in this paper will be utilized for the APR1400 RVI CVAP as part of the first construction project in Korea
18. DESIGN OF A VIBRATION AND STRESS MEASUREMENT SYSTEM FOR AN ADVANCED POWER REACTOR 1400 REACTOR VESSEL INTERNALS COMPREHENSIVE VIBRATION ASSESSMENT PROGRAM
Directory of Open Access Journals (Sweden)
DO-YOUNG KO
2013-04-01
Full Text Available In accordance with the US Nuclear Regulatory Commission (US NRC), Regulatory Guide 1.20, the reactor vessel internals comprehensive vibration assessment program (RVI CVAP) has been developed for an Advanced Power Reactor 1400 (APR1400). The purpose of the RVI CVAP is to verify the structural integrity of the reactor internals to flow-induced loads prior to commercial operation. The APR1400 RVI CVAP consists of four programs (analysis, measurement, inspection, and assessment). Thoughtful preparation is essential to the measurement program, because data acquisition must be performed only once. The optimized design of a vibration and stress measurement system for the RVI CVAP is essential to verify the integrity of the APR1400 RVI. We successfully designed a vibration and stress measurement system for the APR1400 RVI CVAP based on the design materials, the hydraulic and structural analysis results, and performance tests of transducers in an extreme environment. The measurement system designed in this paper will be utilized for the APR1400 RVI CVAP as part of the first construction project in Korea.
19. Psychology of system design
CERN Document Server
Meister, D
2014-01-01
This is a book about systems, including: systems in which humans control machines; systems in which humans interact with humans and the machine component is relatively unimportant; systems which are heavily computerized and those that are not; and governmental, industrial, military and social systems. The book deals with both traditional systems like farming, fishing and the military, and with systems just now tentatively emerging, like the expert and the interactive computer system. The emphasis is on the system concept and its implications for analysis, design and evaluation of these many di
20. The Role of Aerospace Technology in Agriculture. The 1977 Summer Faculty Fellowship Program in Engineering Systems Design
Science.gov (United States)
1977-01-01
Possibilities were examined for improving agricultural productivity through the application of aerospace technology. An overview of agriculture and of the problems of feeding a growing world population are presented. The present state of agriculture, of plant and animal culture, and agri-business are reviewed. Also analyzed are the various systems for remote sensing, particularly applications to agriculture. The report recommends additional research and technology in the areas of aerial application of chemicals, of remote sensing systems, of weather and climate investigations, and of air vehicle design. Also considered in detail are the social, legal, economic, and political results of intensification of technical applications to agriculture.
1. Control system design method
Science.gov (United States)
Wilson, David G [Tijeras, NM; Robinett, III, Rush D.
2012-02-21
A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
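The record describes the method only at this patent-claim level of generality, but the core idea (classify elements of the Hamiltonian representation as power generators, dissipators, or storage devices, then use the Hamiltonian as a stability certificate) can be illustrated with a deliberately simple sketch. The system below is a damped mass-spring oscillator with all parameter values and function names invented for illustration, not taken from the patent: it has one storage element (the spring) and one dissipator (the damper) but no generator, so its Hamiltonian can only decrease, which is the shape of stability argument the method formalizes.

```python
def simulate(steps=5000, dt=1e-3, m=1.0, k=4.0, c=0.5):
    """Semi-implicit Euler integration of a damped mass-spring system.

    Hamiltonian H = p^2/(2m) + k x^2/2 is the stored energy; the damper
    dissipates power at rate c v^2, and there is no power generator,
    so H(t) should decay toward zero (a Lyapunov-style argument).
    """
    x, p = 1.0, 0.0  # initial displacement and momentum
    H = lambda x, p: p * p / (2 * m) + 0.5 * k * x * x
    energies = [H(x, p)]
    for _ in range(steps):
        p += dt * (-k * x - (c / m) * p)  # dp/dt = -dH/dx - damping force
        x += dt * (p / m)                 # dx/dt =  dH/dp
        energies.append(H(x, p))
    return energies
```

Checking that the returned energy trace decays is the numerical counterpart of the analytical stability condition; adding a generator term would require the dissipators to dominate it for the same conclusion to hold.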
2. Program Management System manual
International Nuclear Information System (INIS)
1986-01-01
The Program Management System (PMS), as detailed in this manual, consists of all the plans, policies, procedures, systems, and processes that, taken together, serve as a mechanism for managing the various subprograms and program elements in a cohesive, cost-effective manner. The PMS is consistent with the requirements of the Nuclear Waste Policy Act of 1982 and the ''Mission Plan for the Civilian Radioactive Waste Management Program'' (DOE/RW-0005). It is based on, but goes beyond, the Department of Energy (DOE) management policies and procedures applicable to all DOE programs by adapting these directives to the specific needs of the Civilian Radioactive Waste Management program. This PMS Manual describes the hierarchy of plans required to develop and maintain the cost, schedule, and technical baselines at the various organizational levels of the Civilian Radioactive Waste Management Program. It also establishes the management policies and procedures used in the implementation of the Program. These include requirements for internal reports, data, and other information; systems engineering management; regulatory compliance; safety; quality assurance; and institutional affairs. Although expanded versions of many of these plans, policies, and procedures are found in separate documents, they are an integral part of this manual. The PMS provides the basis for the effective management that is needed to ensure that the Civilian Radioactive Waste Management Program fulfills the mandate of the Nuclear Waste Policy Act of 1982. 5 figs., 2 tabs
3. Ocean Thermal Energy Conversion power system development. Phase I: preliminary design. Final report. [OSAP-1 code; OTEC Steady-State Analysis Program
Energy Technology Data Exchange (ETDEWEB)
Westerberg, Arthur
1978-12-04
The following appendices are included: highlights of direction and correspondence; user manual for OTEC Steady-State Analysis Program (OSAP-1); sample results of OSAP-1; surface condenser installations; double-clad systems; aluminum alloy seawater piping; references searched for ammonia evaluation; references on stress-corrosion for ammonia; references on anhydrous ammonia storage; references on miscellaneous ammonia items; OTEC materials testing; test reports; OTEC technical specification chlorination system; OTEC technical specification AMERTAP system; OTEC optimization program users guide; concrete hull construction; weight and stability estimates; packing factor data; machinery and equipment list; letter from HPTI on titanium tubes; tables on Wolverine Korodense tubes; evaporator and condenser enhancement tables; code weld titanium tube price and weight tables; Alcoa tubing tables; Union Carbide tubing pricing tables; turbotec tubing pricing tables; Wolverine tubing pricing tables; Union Carbide tubing characteristics and pricing; working fluids and turbines for OTEC power system; and hydrodynamic design of prototype OTEC cold and warm seawater pumps. (WHK)
4. Resilient computer system design
CERN Document Server
Castano, Victor
2015-01-01
This book presents a paradigm for designing new generation resilient and evolving computer systems, including their key concepts, elements of supportive theory, methods of analysis and synthesis of ICT with new properties of evolving functioning, as well as implementation schemes and their prototyping. The book explains why new ICT applications require a complete redesign of computer systems to address challenges of extreme reliability, high performance, and power efficiency. The authors present a comprehensive treatment for designing the next generation of computers, especially addressing safety-critical, autonomous, real time, military, banking, and wearable health care systems. § Describes design solutions for new computer system - evolving reconfigurable architecture (ERA) that is free from drawbacks inherent in current ICT and related engineering models § Pursues simplicity, reliability, scalability principles of design implemented through redundancy and re-configurability; targeted for energy-,...
5. Remote Systems Design & Deployment
Energy Technology Data Exchange (ETDEWEB)
Bailey, Sharon A.; Baker, Carl P.; Valdez, Patrick LJ
2009-08-28
The Pacific Northwest National Laboratory (PNNL) was tasked by Washington River Protection Solutions, LLC (WRPS) to provide information and lessons learned relating to the design, development and deployment of remote systems, particularly remote arm/manipulator systems. This report reflects PNNL’s experience with remote systems and lays out the most important activities that need to be completed to successfully design, build, deploy and operate remote systems in radioactive and chemically contaminated environments. It also contains lessons learned from PNNL’s work experiences, and the work of others in the national laboratory complex.
6. Preliminary design and manufacturing feasibility study for a machined Zircaloy triangular pitch fuel rod support system (grids) (AWBA development program)
International Nuclear Information System (INIS)
Horwood, W.A.
1981-07-01
General design features and manufacturing operations for a high precision machined Zircaloy fuel rod support grid intended for use in advanced light water prebreeder or breeder reactor designs are described. The grid system consists of a Zircaloy main body with fuel rod and guide tube cells machined using wire EDM, a separate AM-350 stainless steel insert spring which fits into a full length T-slot in each fuel rod cell, and a thin (0.025'' or 0.040'' thick) wire EDM machined Zircaloy coverplate laser welded to each side of the grid body to retain the insert springs. The fuel rods are placed in a triangular pitch array with a tight rod-to-rod spacing of 0.063 inch nominal. Two dimples are positioned at the mid-thickness of the grid (single level) with a 90° included angle. Data is provided on the effectiveness of the manufacturing operations chosen for grid machining and assembly.
7. System-Reliability Cumulative-Binomial Program
Science.gov (United States)
Scheuer, Ernest M.; Bowerman, Paul N.
1989-01-01
Cumulative-binomial computer program, NEWTONP, one of a set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. NEWTONP, CUMBIN (NPO-17555), and CROSSER (NPO-17557) are used independently of one another. Program finds the probability required to yield a given system reliability. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Program written in C.
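NEWTONP itself is a C program whose source is not part of this record; purely as an illustration of the computation the abstract describes (the function names and the bisection search below are my own assumptions, not NEWTONP's actual interface), a k-out-of-n system-reliability version of the task can be sketched as follows:

```python
from math import comb

def cumulative_binomial(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the probability that at least
    k of n identical, independent components succeed."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def required_component_reliability(n, k, target, tol=1e-12):
    """Bisect for the component success probability p at which a
    k-out-of-n system just meets the target system reliability.
    Works because cumulative_binomial is monotone increasing in p."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cumulative_binomial(n, k, mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, `required_component_reliability(5, 3, 0.999)` returns the single-component reliability at which a 3-out-of-5 system just reaches 99.9% system reliability.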
8. Embedded systems circuits and programming
CERN Document Server
Sanchez, Julio
2012-01-01
During the development of an engineered product, developers often need to create an embedded system--a prototype--that demonstrates the operation/function of the device and proves its viability. Offering practical tools for the development and prototyping phases, Embedded Systems Circuits and Programming provides a tutorial on microcontroller programming and the basics of embedded design. The book focuses on several development tools and resources: Standard and off-the-shelf components, such as input/output devices, integrated circuits, motors, and programmable microcontrollers The implementat
9. Design criteria for piping and nozzles program
International Nuclear Information System (INIS)
Moore, S.E.; Bryson, J.W.
1977-01-01
This report reviews the activities and accomplishments of the Design Criteria for Piping and Nozzles program being conducted by the Oak Ridge National Laboratory for the period July 1, 1975, to September 30, 1976. The objectives of the program are to conduct integrated experimental and analytical stress analysis studies of piping system components and isolated and closely-spaced pressure vessel nozzles in order to confirm and/or improve the adequacy of structural design criteria and analytical methods used to assure the safe design of nuclear power plants. Activities this year included the development of a finite-element program for analyzing two closely spaced nozzles in a cylindrical pressure vessel; a limited-parameter study of vessels with isolated nozzles, finite-element studies of piping elbows, a fatigue test of an out-of-round elbow, summary and evaluation of experimental studies on the elastic-response and fatigue failure of tees, parameter studies on the behavior of flanged joints, publication of fifteen topical reports and papers on various experimental and analytical studies; and the development and acceptance of a number of design rules changes to the ASME Code. 2 figures, 2 tables
10. Optimal Control Strategy Design Based on Dynamic Programming for a Dual-Motor Coupling-Propulsion System
Science.gov (United States)
Zhang, Shuo; Zhang, Chengning; Han, Guangwei; Wang, Qinghui
2014-01-01
A dual-motor coupling-propulsion electric bus (DMCPEB) is modeled, and its optimal control strategy is studied in this paper. The necessary dynamic features of energy loss for the subsystems are modeled. A dynamic programming (DP) technique is applied to find the optimal control strategy, including the upshift threshold, downshift threshold, and power split ratio between the main motor and auxiliary motor. Improved control rules are extracted from the DP-based control solution, forming near-optimal control strategies. Simulation results demonstrate that a significant improvement in reducing the energy loss of the running dual-motor coupling-propulsion system (DMCPS) is realized without increasing the frequency of mode switches. PMID:25540814
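The abstract does not give the paper's loss models or threshold values; to show the shape of the backward DP recursion it refers to, here is a toy dynamic program over a two-mode drivetrain, where the mode semantics, the quadratic loss coefficients, and the switch cost are all invented for illustration:

```python
def optimal_modes(demand, switch_cost=0.8):
    """Toy backward dynamic programming over a two-mode drivetrain.

    Mode 0 (single motor) is cheaper at low power, mode 1 (dual,
    coupled) at high power; changing mode costs switch_cost. The
    recursion computes the cost-to-go per mode and then rolls the
    optimal mode sequence forward from mode 0.
    """
    def loss(mode, power):  # illustrative quadratic loss models
        return 0.05 * power * power if mode == 0 else 1.0 + 0.02 * power * power

    T = len(demand)
    cost = [[0.0, 0.0] for _ in range(T + 1)]  # cost-to-go per current mode
    best = [[0, 0] for _ in range(T)]          # argmin successor mode
    for t in range(T - 1, -1, -1):
        for m in (0, 1):
            cands = [loss(n, demand[t])
                     + (switch_cost if n != m else 0.0)
                     + cost[t + 1][n] for n in (0, 1)]
            best[t][m] = cands.index(min(cands))
            cost[t][m] = min(cands)
    m, modes = 0, []
    for t in range(T):
        m = best[t][m]
        modes.append(m)
    return modes
```

On a demand profile with one sustained high-power stretch, the recursion switches into the coupled mode only for that stretch, because the per-step saving outweighs the two switch penalties; this is the same trade-off the paper's upshift/downshift thresholds encode as extracted rules.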
11. Army Gas-Cooled Reactor Systems Program. ML-1 analytical design report. Volume II. Systems analysis: heat transfer and fluid flow
Energy Technology Data Exchange (ETDEWEB)
None
1961-01-01
The analysis preceding and supporting the design of the cooling system of the ML-1, a mobile, low-power, nuclear power plant, is described in sufficient detail for an engineer to follow the development of the design. Test results and similar data are used to support the calculations whenever possible.
12. Design of adaptive programming teaching tools
OpenAIRE
Urbonienė, Jūratė
2014-01-01
The dissertation examines the adaptation of programming teaching material to the learner's individual qualities, i.e. the learning style according to Herrmann's learning style classification. To that purpose, programming teaching features, existing programming teaching environments and systems, adaptive learning systems, software agents and agent systems, students' learning styles, learning material elements (learning objects - LO) and their repositories, and learning systems development methods were exami...
13. NRC Seismic Design Margins Program Plan
International Nuclear Information System (INIS)
Cummings, G.E.; Johnson, J.J.; Budnitz, R.J.
1985-08-01
Recent studies estimate that seismically induced core melt comes mainly from earthquakes in the peak ground acceleration range from 2 to 4 times the safe shutdown earthquake (SSE) acceleration used in plant design. However, from the licensing perspective of the US Nuclear Regulatory Commission, there is a continuing need for consideration of the inherent quantitative seismic margins because of, among other things, the changing perceptions of the seismic hazard. This paper discusses a Seismic Design Margins Program Plan, developed under the auspices of the US NRC, that provides the technical basis for assessing the significance of design margins in terms of overall plant safety. The Plan will also identify potential weaknesses that might have to be addressed, and will recommend technical methods for assessing margins at existing plants. For the purposes of this program, a general definition of seismic design margin is expressed in terms of how much larger than the design basis earthquake an earthquake must be to compromise plant safety. In this context, margin needs to be determined at the plant, system/function, structure, and component levels. 14 refs., 1 fig
14. A Program of Research and Education to Advance the Design, Synthesis, and Optimization of Aero-Space System Concepts
Science.gov (United States)
Sandusky, Robert
2002-01-01
Since its inception in December 1999, the program has provided support for a total of 11 Graduate Research Scholar Assistants, of these, 6 have completed their MS degree program. The program has generated 3 MS theses and a total of 4 publications/presentations.
15. Market Aspects of an Interior Design Program.
Science.gov (United States)
Gold, Judy E.
A project was conducted to evaluate a proposed interior design program in order to determine the marketability (job availability in the field of interior design and home furnishings merchandising) and the feasibility (educational requirements for entrance into the interior design and home furnishings merchandising job market) of the program. To…
16. Conventional RF system design
International Nuclear Information System (INIS)
Puglisi, M.
1994-01-01
The design of a conventional RF system is always complex and must fit the needs of the particular machine for which it is planned. It follows that many different design criteria should be considered and analyzed, thus exceeding the narrow limits of a lecture. For this reason only the fundamental components of an RF system, including the generators, are considered in this short seminar. The most common formulas are simply presented in the text, while their derivations are shown in the appendices to facilitate, if desired, a more advanced level of understanding. (orig.)
17. USAF Mobility Program Water Treatment System.
Science.gov (United States)
also be necessary. Water treatment systems are presented which can be developed to yield potable water from these sources. The proposed systems can be designed to meet the requirements of the Bare Base Mobility Program. (Author)
18. Distributed System Design Checklist
Science.gov (United States)
Hall, Brendan; Driscoll, Kevin
2014-01-01
This report describes a design checklist targeted to fault-tolerant distributed electronic systems. Many of the questions and discussions in this checklist may be generally applicable to the development of any safety-critical system. However, the primary focus of this report covers the issues relating to distributed electronic system design. The questions that comprise this design checklist were created with the intent to stimulate system designers' thought processes in a way that hopefully helps them to establish a broader perspective from which they can assess the system's dependability and fault-tolerance mechanisms. While best effort was expended to make this checklist as comprehensive as possible, it is not (and cannot be) complete. Instead, we expect that this list of questions and the associated rationale for the questions will continue to evolve as lessons are learned and further knowledge is established. In this regard, it is our intent to post the questions of this checklist on a suitable public web-forum, such as the NASA DASHLink AFCS repository. From there, we hope that it can be updated, extended, and maintained after our initial research has been completed.
19. Designing Deliberation Systems
DEFF Research Database (Denmark)
Rose, Jeremy; Sæbø, Øystein
2010-01-01
In a liberal democracy, the evolution of political agendas and formation of policy involves deliberation: serious consideration of political issues. Modern day political participation is dependent on widespread deliberation supported by information and communication technologies, which also offer the potential to revitalize and transform citizen engagement in democracy. Although the majority of web 2.0 systems enable these discourses to some extent, government institutions commission and manage specialized deliberation systems (information systems designed to support participative discourse). In this article we analyze the issues involved in establishing political deliberation systems under four headings: stakeholder engagement, web platform design, service management, political process re-shaping, and evaluation and improvement. We review the existing literature and present a longitudinal case study.
20. The proposed human factors engineering program plan for man-machine interface system design of the next generation NPP in Korea
International Nuclear Information System (INIS)
Oh, I.S.; Lee, H.C.; Seo, S.M.; Cheon, S.W.; Park, K.O.; Lee, J.W.; Sim, B.S.
1994-01-01
Human factors application to nuclear power plant (NPP) design, especially to man-machine interface system (MMIS) design, has become an important issue among the licensing requirements. Recently, the nuclear regulatory bodies have required evidence of systematic human factors application to the MMIS design. The Human Factors Engineering Program Plan (HFEPP) serves as the basis and the central document for the application of human factors by the MMIS designers. This paper describes the framework of the HFEPP for the MMIS design of the next generation NPP (NG-NPP) in Korea. This framework provides an integral plan and some bases for the systematic application of human factors to the MMIS design, and consists of purpose and scope, codes and standards, human factors organization, human factors tasks, engineering control methodology, human factors documentation, and milestones. The proposed HFEPP is a top-level document that defines and describes human factors tasks, based on each step of the MMIS design process, from the viewpoint of how, what, when, and by whom they are to be performed. (author). 11 refs, 1 fig
1. Development of the Symbolic Manipulator Laboratory modeling package for the kinematic design and optimization of the Future Armor Rearm System robot. Ammunition Logistics Program
Energy Technology Data Exchange (ETDEWEB)
March-Leuba, S.; Jansen, J.F.; Kress, R.L.; Babcock, S.M. [Oak Ridge National Lab., TN (United States); Dubey, R.V. [Tennessee Univ., Knoxville, TN (United States). Dept. of Mechanical and Aerospace Engineering
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models, as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Important consideration has been given to simplifying the closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained in trigonometrically reduced form is among the most significant results of this work and the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, on-line help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degree-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
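As a rough illustration of the kind of symbolic kinematic modeling SML automates (this sketch uses SymPy rather than Mathematica, and a hypothetical 2-DOF planar arm rather than the FARS geometry), chained link transforms can be generated symbolically and trigonometrically reduced:

```python
import sympy as sp

# Joint angles and link lengths of a hypothetical 2-DOF planar arm.
q1, q2, l1, l2 = sp.symbols('q1 q2 l1 l2')

def link_transform(q, l):
    """Planar homogeneous transform: rotate by q, then translate l along local x."""
    return sp.Matrix([[sp.cos(q), -sp.sin(q), l * sp.cos(q)],
                      [sp.sin(q),  sp.cos(q), l * sp.sin(q)],
                      [0,          0,         1]])

# Forward kinematics: chain the link transforms, then reduce products of
# sines and cosines to sums-of-angles form, mirroring the "trigonometrically
# reduced" models the abstract emphasizes.
T = (link_transform(q1, l1) * link_transform(q2, l2)).applyfunc(sp.trigsimp)
x_tip, y_tip = T[0, 2], T[1, 2]
print(x_tip)  # tip x-coordinate, reduced to l1*cos(q1) + l2*cos(q1 + q2)
```

The reduced form has fewer trigonometric evaluations than the raw product, which is the source of the "reduced numerical complexity" claimed for such symbolic models.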
2. IGDS/TRAP Interface Program (ITIP). Software Design Document
Science.gov (United States)
Jefferys, Steve; Johnson, Wendell
1981-01-01
The preliminary design of the IGDS/TRAP Interface Program (ITIP) is described. The ITIP is implemented on the PDP 11/70 and interfaces directly with the Interactive Graphics Design System and the Data Management and Retrieval System. The program provides an efficient method for developing a network flow diagram. Performance requirements, operational requirements, and design requirements are discussed along with sources and types of input and destination and types of output. Information processing functions and data base requirements are also covered.
3. Maglev system design considerations
Energy Technology Data Exchange (ETDEWEB)
Coffey, H.T.
1991-01-01
Although efforts are now being made to develop magnetic levitation technologies in the United States, they have been underway for two decades in Germany and Japan. The characteristics of maglev systems being considered for implementation in the United States are speculative. A conference was held at Argonne National Laboratory on November 28--29, 1990, to discuss these characteristics and their implications for the design requirements of operational systems. This paper reviews some of the factors considered during that conference.
4. AN EXPERT SYSTEM USED IN DESIGN
Directory of Open Access Journals (Sweden)
Hüdayim BAŞAK
2001-03-01
Full Text Available In this work, an expert system used in computer-aided design has been developed. In the developed program, the features used in models prepared by a feature-based design program are evaluated by the expert system module and used in part modeling after their compatibility with the rules has been determined. The program informs and directs users, particularly those who know little or nothing about manufacturing stages. It also warns the user about design mistakes made during modeling.
5. Automating software design system DESTA
Science.gov (United States)
Lovitsky, Vladimir A.; Pearce, Patricia D.
1992-01-01
'DESTA' is the acronym for the Dialogue Evolutionary Synthesizer of Turnkey Algorithms by means of a natural language (Russian or English) functional specification of algorithms or software being developed. DESTA represents the computer-aided and/or automatic artificial intelligence 'forgiving' system which provides users with software tools support for algorithm and/or structured program development. The DESTA system is intended to provide support for the higher levels and earlier stages of engineering design of software in contrast to conventional Computer Aided Design (CAD) systems which provide low level tools for use at a stage when the major planning and structuring decisions have already been taken. DESTA is a knowledge-intensive system. The main features of the knowledge are procedures, functions, modules, operating system commands, batch files, their natural language specifications, and their interlinks. The specific domain for the DESTA system is a high level programming language like Turbo Pascal 6.0. The DESTA system is operational and runs on an IBM PC computer.
6. Aerial measuring systems program
International Nuclear Information System (INIS)
Jobst, J.E.
1979-01-01
EG and G, Inc., has developed for the Department of Energy (DOE) an Aerial Measuring Systems (AMS) program dedicated to environmental research at facilities of interest to DOE, the Nuclear Regulatory Commission (NRC), and other federal agencies. The AMS was originally created to measure nuclear radiation; the program scope has been broadened dramatically to include a wide variety of remote sensors: multispectral and mapping cameras, optical and infrared multispectral scanners, air-sampling systems, and meteorological sensors. The AMS maintains seven aircraft as survey platforms, both fixed-wing aircraft and helicopters. Photography, scanner imagery, and radiation data are processed in dedicated, modern laboratories and used for a broad range of environmental impact studies. A graphic overview system has been developed for effective presentation of all types of remotely sensed data obtained at a facility of interest.
7. Clean air program : design guidelines for bus transit systems using compressed natural gas as an alternative fuel
Science.gov (United States)
1996-06-01
This report documents design guidelines for the safe use of Compressed Natural Gas (CNG). The report is designed to provide guidance, information on safe industry practices, applicable national codes and standards, and reference data that transit age...
8. SPACE-R Thermionic Space Nuclear Power System: Design and Technology Demonstration Program. Semiannual technical progress report for period ending March 1993
Energy Technology Data Exchange (ETDEWEB)
1993-05-01
This Semiannual Technical Progress Report summarizes the technical progress and accomplishments for the Thermionic Space Nuclear Power System (TI-SNPS) Design and Technology Demonstration Program of the Prime Contractor, Space Power Incorporated (SPI), its subcontractors, and supporting National Laboratories during the first half of Government Fiscal Year (GFY) 1993. SPI's subcontractors and supporting National Laboratories include: Babcock & Wilcox for the reactor core and externals; Space Systems/Loral for the spacecraft integration; Thermocore for the radiator heat pipes and the heat exchanger; INERTEK of the CIS for the TFE, core elements, and nuclear tests; Argonne National Laboratory for nuclear safety, physics, and control verification; and Oak Ridge National Laboratory for materials testing. Parametric trade studies are near completion. However, technical input from INERTEK has yet to be provided to determine some of the baseline design configurations. The INERTEK subcontract is expected to be initiated soon. The Point Design task has been initiated. The thermionic fuel element (TFE) is undergoing several design iterations. The reactor core vessel analysis and design has also been started.
9. Design status of Hyper system
International Nuclear Information System (INIS)
Park, Won S.; Hwang, Woan; Kom, Yong G.; Tak, Nam Il; Song, Tae T.
2000-01-01
The Korea Atomic Energy Research Institute (KAERI) has been performing accelerator-driven-system research and development (R&D), called HYPER, for the transmutation of nuclear waste and for energy production through the transmutation process. The HYPER program is within the framework of the national mid- and long-term nuclear research plan. KAERI aims to develop the system concept and a road map by the year 2001 and to complete the conceptual design of the HYPER system by the year 2006. Some major design features of the HYPER system have been developed. On-power fueling concepts are employed to compensate for the rapid drop of core reactivity. In order to increase proliferation resistance, the whole TRU, without any actinide separation, will be transmuted in the HYPER system. The long-lived fission products such as Tc-99 and I-129 will be destroyed separately in HYPER using localized thermal neutrons. A hollow cylinder-type metal fuel (TRU-Zr) has been chosen because of its high compatibility with the pyro-chemical process. Pb-Bi is adopted as the coolant and spallation target material. The heat removal system is designed based on a 3-loop concept. A 1 GeV, 6 mA proton beam is to be provided for HYPER. HYPER is to transmute about 380 kg of TRU a year and produce 1000 MWth of power. The support ratio of HYPER is believed to be 5-6. (author)
10. BWID System Design Study
International Nuclear Information System (INIS)
O'Brien, M.C.; Rudin, M.J.; Morrison, J.L.; Richardson, J.G.
1991-01-01
The mission of the Buried Waste Integrated Demonstration (BWID) System Design Study is to identify and evaluate technology process options for the cradle-to-grave remediation of Transuranic (TRU)-Contaminated Waste Pits and Trenches buried at the Idaho National Engineering Laboratory (INEL). Emphasis is placed upon evaluating system configuration options and associated functional and operational requirements for retrieving and treating the buried wastes. A Performance-Based Technology Selection Filter was developed to evaluate the identified remediation systems and their enabling technologies based upon system requirements and quantification of technical Comprehensive Environmental Response, Compensation, and Liability (CERCLA) balancing criteria. Remediation systems will also be evaluated with respect to regulatory and institutional acceptance and cost-effectiveness
11. Computer-aided system design
Science.gov (United States)
Walker, Carrie K.
1991-01-01
A technique has been developed for combining features of a systems architecture design and assessment tool and a software development tool. This technique reduces simulation development time and expands simulation detail. The Architecture Design and Assessment System (ADAS), developed at the Research Triangle Institute, is a set of computer-assisted engineering tools for the design and analysis of computer systems. The ADAS system is based on directed graph concepts and supports the synthesis and analysis of software algorithms mapped to candidate hardware implementations. Greater simulation detail is provided by the ADAS functional simulator. With the functional simulator, programs written in either Ada or C can be used to provide a detailed description of graph nodes. A Computer-Aided Software Engineering tool developed at the Charles Stark Draper Laboratory (CSDL CASE) automatically generates Ada or C code from engineering block diagram specifications designed with an interactive graphical interface. A technique to use the tools together has been developed, which further automates the design process.
12. An ion synchrotron design program
International Nuclear Information System (INIS)
Yoshida, Katsuhisa; Ishi, Yoshihiro
1995-01-01
Ion synchrotrons have promising applications in medical and other commercial settings as well as in physics research. Mitsubishi Electric has developed a program to facilitate efficiency studies on processes such as ion injection, radio-frequency capture and acceleration, and beam extraction. The integration method used in the particle-orbit calculations maintains the symplectic character of Hamiltonian dynamics, making it possible to simulate long-term phenomena reliably. The article introduces this program and several of its applications. (author)
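The abstract's point about symplectic integration can be illustrated with a minimal sketch (not from the article; the oscillator, step size, and step count are illustrative): a symplectic Euler step keeps a harmonic oscillator's energy error bounded over many turns, whereas an ordinary explicit Euler step lets the energy grow without limit.

```python
# Symplectic (semi-implicit) Euler vs. explicit Euler for a unit harmonic
# oscillator: dq/dt = p, dp/dt = -q.  Symplectic methods keep the energy
# error bounded, which is why long-term particle tracking relies on them.

def explicit_euler(q, p, dt):
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    p = p - dt * q      # kick first...
    q = q + dt * p      # ...then drift with the updated momentum
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

dt, steps = 0.05, 20000
qe, pe = 1.0, 0.0   # explicit-Euler state
qs, ps = 1.0, 0.0   # symplectic-Euler state
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)

# The explicit-Euler energy has grown enormously; the symplectic-Euler
# energy still oscillates close to the initial value of 0.5.
print(energy(qe, pe), energy(qs, ps))
```

The explicit map multiplies the energy by (1 + dt²) every step, while the symplectic map exactly conserves a slightly modified Hamiltonian, which is what makes long-term simulations reliable.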
13. Institutional capacity for health systems research in East and Central Africa schools of public health: enhancing capacity to design and implement teaching programs.
Science.gov (United States)
Nangami, Mabel N; Rugema, Lawrence; Tebeje, Bosena; Mukose, Aggrey
2014-06-02
The role of health systems research (HSR) in informing and guiding national programs and policies has been increasingly recognized. Yet, many universities in sub-Saharan African countries have relatively limited capacity to teach HSR. Seven schools of public health (SPHs) in East and Central Africa undertook an HSR institutional capacity assessment, which included a review of current HSR teaching programs. This study determines the extent to which SPHs are engaged in teaching HSR-relevant courses and assesses their capacities to effectively design and implement HSR curricula whose graduates are equipped to address HSR needs while helping to strengthen public health policy. This study used a cross-sectional study design employing both quantitative and qualitative approaches. An organizational profile tool was administered to senior staff across the seven SPHs to assess existing teaching programs. A self-assessment tool included nine questions relevant to teaching capacity for HSR curricula. The analysis triangulates the data, with reflections on the responses from within and across the seven SPHs. Proportions and averages of values from the Likert scale are compared to determine strengths and weaknesses, while themes relevant to the objectives are identified and clustered to elicit in-depth interpretation. None of the SPHs offer an HSR-specific degree program; however, all seven offer courses in the Master of Public Health (MPH) degree that are relevant to HSR. The general MPH curricula partially embrace principles of competency-based education. Different strengths in curricula design and staff interest in HSR at each SPH were exhibited, but a number of common constraints were identified, including out-of-date curricula, face-to-face delivery approaches, inadequate staff competencies, and limited access to materials. Opportunities to align health system priorities to teaching programs include existing networks. Each SPH has key strengths that can be leveraged to
14. Institutional capacity for health systems research in East and Central Africa schools of public health: enhancing capacity to design and implement teaching programs
Science.gov (United States)
2014-01-01
Background The role of health systems research (HSR) in informing and guiding national programs and policies has been increasingly recognized. Yet, many universities in sub-Saharan African countries have relatively limited capacity to teach HSR. Seven schools of public health (SPHs) in East and Central Africa undertook an HSR institutional capacity assessment, which included a review of current HSR teaching programs. This study determines the extent to which SPHs are engaged in teaching HSR-relevant courses and assesses their capacities to effectively design and implement HSR curricula whose graduates are equipped to address HSR needs while helping to strengthen public health policy. Methods This study used a cross-sectional study design employing both quantitative and qualitative approaches. An organizational profile tool was administered to senior staff across the seven SPHs to assess existing teaching programs. A self-assessment tool included nine questions relevant to teaching capacity for HSR curricula. The analysis triangulates the data, with reflections on the responses from within and across the seven SPHs. Proportions and averages of values from the Likert scale are compared to determine strengths and weaknesses, while themes relevant to the objectives are identified and clustered to elicit in-depth interpretation. Results None of the SPHs offer an HSR-specific degree program; however, all seven offer courses in the Master of Public Health (MPH) degree that are relevant to HSR. The general MPH curricula partially embrace principles of competency-based education. Different strengths in curricula design and staff interest in HSR at each SPH were exhibited, but a number of common constraints were identified, including out-of-date curricula, face-to-face delivery approaches, inadequate staff competencies, and limited access to materials. Opportunities to align health system priorities to teaching programs include existing networks. Conclusions Each SPH has key
15. General-purpose RFQ design program
International Nuclear Information System (INIS)
1984-01-01
We have written a general-purpose, radio-frequency quadrupole (RFQ) design program that allows maximum flexibility in picking design algorithms. This program optimizes the RFQ on any combination of design parameters while simultaneously satisfying mutually compatible, physically required constraint equations. It can be very useful for deriving various scaling laws for RFQs. This program has a friendly user interface in addition to checking the consistency of the user-defined requirements and is written to minimize the effort needed to incorporate additional constraint equations. We describe the program and present some examples
16. A NEW EXHAUST VENTILATION SYSTEM DESIGN SOFTWARE
Directory of Open Access Journals (Sweden)
2007-09-01
Full Text Available A Microsoft Windows based ventilation software package is developed to reduce the time-consuming and tedious procedure of exhaust ventilation system design. The program assures accurate and reliable air-pollution-control calculations. Herein, the package is tentatively named Exhaust Ventilation Design Software; it is developed in the VB6 programming environment. The most important features of Exhaust Ventilation Design Software, which are ignored in formerly developed packages, are collector design and fan dimension data calculations. Automatic system balancing is another feature of this package. The design algorithm of Exhaust Ventilation Design Software is based on two methods: balance by design (static pressure balance) and design by blast gate. The most important section of the software is a spreadsheet designed on the basis of the American Conference of Governmental Industrial Hygienists calculation sheets. Exhaust Ventilation Design Software is developed so that engineers familiar with the American Conference of Governmental Industrial Hygienists datasheet can easily employ it for ventilation system design. Other sections include a collector design section (settling chamber, cyclone, and packed tower), a fan geometry and dimension data section, a unit converter section (that helps engineers deal with units), a hood design section, and a Persian HTML help. Psychrometric correction is also considered in Exhaust Ventilation Design Software. In the design process, efforts were focused on improving the GUI (graphical user interface) and on the use of programming standards in software design. The reliability of the software has been evaluated, and the results show acceptable accuracy.
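The "balance by design" (static pressure balance) method named in the abstract can be sketched as follows. This is an illustrative stand-alone implementation of the standard square-root flow correction for duct junctions, not code from the package itself, and `balance_junction` is a hypothetical helper name:

```python
import math

def balance_junction(q_a, sp_a, q_b, sp_b):
    """Balance two duct branches meeting at a junction.

    q_*  : design volumetric flows (e.g. m^3/s)
    sp_* : static pressure losses of each branch at its design flow (e.g. Pa)

    Duct losses scale roughly with the square of flow, so raising the
    lower-loss branch's flow by sqrt(SP_high / SP_low) equalizes the two
    branch static pressures without dampers ("balance by design").
    """
    if sp_a >= sp_b:
        q_b *= math.sqrt(sp_a / sp_b)   # boost the weaker branch
    else:
        q_a *= math.sqrt(sp_b / sp_a)
    return q_a, q_b

# Hypothetical example: branch B needs roughly 9.5% more flow to balance
# against branch A (sqrt(600/500) ≈ 1.095).
print(balance_junction(1.0, 600.0, 0.8, 500.0))
```

The alternative "design by blast gate" method instead keeps the design flows and relies on adjustable gates to absorb the pressure difference at each junction.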
17. Systems engineering agile design methodologies
CERN Document Server
Crowder, James A
2013-01-01
This book examines the paradigm of the engineering design process. The authors discuss agile systems and engineering design. The book captures the entire design process (function bases), context, and requirements to effect real reuse. It provides a methodology for an engineering design process foundation for modern and future systems design. The book captures design patterns with context for actual systems engineering design reuse and contains a new paradigm in design knowledge management.
18. How do eHealth Programs for Adolescents With Depression Work? A Realist Review of Persuasive System Design Components in Internet-Based Psychological Therapies
Science.gov (United States)
McGrath, Patrick J
2017-01-01
Background Major depressive disorders are common among adolescents and can impact all aspects of their daily life. Traditional therapies, cognitive behavioral therapy (CBT), and interpersonal psychotherapy (IPT) have been delivered face-to-face. However, Internet-based (online) delivery of these therapies is emerging as an option for adolescents. Internet-based CBT and IPT involve therapeutic content, interaction between the user and the system, and different technological features embedded into the online program (eg, multimedia). Studies of Internet-based CBT and IPT for adolescent depression differ on all three aspects, and variable, positive therapy effects have been reported. A better understanding of the treatment conditions that influence therapy outcomes is important to designing and evaluating these novel therapies. Objective Our aim was to examine the technological and program delivery features of Internet-based CBT and IPT for adolescent depression and to document their potential relation to treatment outcomes and program use. Methods We performed a realist synthesis. We started with an extensive search of published and gray literature. We included intervention studies that evaluated Internet-based CBT or IPT for adolescent depression. We included mixed-methods and qualitative studies, theoretical papers, and policy/implementation documents if they included a focus on how Internet-based psychological therapy is proposed to work for adolescents with depression/depressive symptoms. We used the Mixed-Methods Appraisal Tool to assess the methodological quality of studies. We used the Persuasive System Design (PSD) model as a framework for data extraction and analysis to examine how Internet-based CBT and IPT, as technology-based systems, influence the attitudes and behaviors of system users. PSD components described for the therapies were linked to reported outcomes using a cross-case comparison method and thematic synthesis. Results We identified 19
19. Overall primary system design
International Nuclear Information System (INIS)
Schulz, H.
1980-01-01
The general design goals for the primary system are: plant safety, meaning protection against catastrophic failure; and plant availability, meaning a leak-tight system with a high reliability of active components (such as pumps and valves) and internal structures. The purpose of this lecture is to show how these general goals are translated into technical requirements. The related criteria, rules, and guidelines necessary for the evaluation of the system are mentioned and discussed in detail as needed. Special requirements such as break assumptions and pipe-whip protection are pointed out. The main topic of the lecture is devoted to the focal points in the safety review. The present state of operational experience will be briefly discussed. (orig./RW)
20. Electronic automation of LRFD design programs.
Science.gov (United States)
2010-03-01
The study provided electronic programs to WisDOT for designing pre-stressed girders and piers using the Load and Resistance Factor Design (LRFD) methodology. The software provided is intended to ease the transition to LRFD for WisDOT design engineers...
1. Interfacing Computer-Assisted Drafting and Design with the Building Loads Analysis and System Thermodynamics (BLAST) Program
Science.gov (United States)
1992-10-01
…architectural engineering community, its status as a de facto standard CADD package, and its extensibility via AutoLISP (Autodesk's proprietary subset of …). … actually invokes AutoCAD and the AutoLISP program portion of Drawing Navigator. By using a pointing device such as a mouse, the user interacts with … shown in Figure 4 (for the developed prototype) is AutoCAD. The Drawing Navigator box shows three subcomponents. The embedded code is the AutoLISP
2. Clean Air Program : Design Guidelines for Bus Transit Systems Using Alcohol Fuel (Methanol and Ethanol) as an Alternative Fuel
Science.gov (United States)
1996-08-01
Although there are over one thousand transit buses in revenue service in the U.S. that are powered by alternative fuels, there are no comprehensive guidelines for the safe design and operation of alternative fuel facilities and vehicles for transit s...
3. Engineering Design Education Program for Graduate School
Science.gov (United States)
Ohbuchi, Yoshifumi; Iida, Haruhiko
New educational methods for engineering design have been attempted to improve mechanical engineering education for graduate students through collaboration between engineers and designers in teaching. The education program is based on lectures and practical exercises concerning product design, and has engineering themes and design-process themes, i.e. project management, QFD, TRIZ, robust design (Taguchi method), ergonomics, usability, marketing, conception, etc. In the final exercise, all students were able to design a new product related to their own research theme by applying the learned knowledge and techniques. Through this method of engineering design education, we have confirmed that graduate students are able to experience technological and creative interest.
4. Digital system design with VHDL
International Nuclear Information System (INIS)
Kang, Jin Gu; Lee, Da Young; Song, Je Chel
2000-09-01
This book comprises eleven chapters: a review of basic logic design, including combinational logic circuits, Karnaugh maps, hazards in combinational circuits, Mealy sequential circuit design, and synchronous design; an introduction to VHDL, including a VHDL module for a multiplexer and VHDL functions; design with programmable logic devices (PLDs); circuit design for arithmetic operations; digital design using SM charts; FPGA and CPLD design; floating-point calculation; further issues in VHDL; VHDL modules for memory and buses; design for hardware testing; and design examples such as a UART design and an M68HC05 microcontroller.
5. Effective safety training program design
International Nuclear Information System (INIS)
Chilton, D.A.; Lombardo, G.J.; Pater, R.F.
1991-01-01
Changes in the oil industry require new strategies to reduce costs and retain valuable employees. Training is a potentially powerful tool for changing the culture of an organization, resulting in improved safety awareness, lower-risk behaviors and, ultimately, statistical improvements. Too often, safety training falters, especially when applied to pervasive, long-standing problems. Stepping, Handling and Lifting (SHL) injuries, more commonly known as back injuries and slips, trips, and falls, have plagued mankind throughout the ages. They are also a major problem throughout the petroleum industry. Although not as widely publicized as other immediately fatal accidents, injuries from stepping, materials handling, and lifting are among the leading causes of employee suffering, lost time, and diminished productivity throughout the industry. Traditional approaches have not turned the tide of these widespread injuries. A systematic safety training program, developed by Anadrill Schlumberger with the input of new training technology, has the potential to simultaneously reduce costs, preserve employee safety, and increase morale. This paper reviews the components of an example safety training program and illustrates how a systematic approach to safety training can make a positive impact on Stepping, Handling and Lifting injuries.
6. System tests and applications photovoltaic program
Energy Technology Data Exchange (ETDEWEB)
1979-05-01
A summary of all the photovoltaic system tests and application experiments that have been initiated since the start of the US DOE Photovoltaics Program in 1975 is presented. They are organized in the following manner for ease of reference: (1) application experiments: these are independently designed and constructed projects which are funded by DOE; (2) system field tests: projects designed and monitored by the national laboratories involved in the photovoltaic program; (3) exhibits: designed to acquaint the general public to photovoltaics; (4) component field tests: real time endurance testing conducted to monitor module reliability under actual environmental conditions; and (5) test facilities: descriptions of the four national laboratories involved in the photovoltaic program.
7. Successful Bullying Prevention Programs: Influence of Research Design, Implementation Features, and Program Components
Directory of Open Access Journals (Sweden)
Bryanna Hahn Fox
2012-12-01
Full Text Available Bullying prevention programs have been shown to be generally effective in reducing bullying and victimization. However, the effects are relatively small in randomized experiments and greater in quasi-experimental and age-cohort designs. Programs that are more intensive and of longer duration (for both children and teachers) are more effective, as are programs containing more components. Several program components are associated with large effect sizes, including parent training or meetings and teacher training. These results should inform the design and evaluation of anti-bullying programs in the future, and a system of accreditation of effective programs.
8. Final report bridge design system analysis and modernization.
Science.gov (United States)
2016-09-27
The Bridge Design System (BDS) is an in-house software program developed by the Michigan Department of Transportation's (MDOT) Bridge Design Unit. The BDS designs bridges according to the required specifications, and outputs corresponding design ...
9. Airport Information Retrieval System (AIRS) System Design
Science.gov (United States)
1974-07-01
This report presents the system design for a prototype air traffic flow control automation system developed for the FAA's Systems Command Center. The design was directed toward the immediate automation of airport data for use in traffic load predicti...
10. Using Intervention Mapping for Program Design and Production of iCHAMPSS: An Online Decision Support System to Increase Adoption, Implementation, and Maintenance of Evidence-Based Sexual Health Programs
Directory of Open Access Journals (Sweden)
Melissa F. Peskin
2017-08-01
Full Text Available In Texas and across the United States, unintended pregnancy, HIV, and sexually transmitted infections (STIs among adolescents remain serious public health issues. Sexual risk-taking behaviors, including early sexual initiation, contribute to these public health problems. Over 35 sexual health evidence-based programs (EBPs have been shown to reduce sexual risk behaviors and/or prevent teen pregnancies or STIs. Because more than half of these EBPs are designed for schools, they could reach and impact a considerable number of adolescents if implemented in these settings. Most schools across the U.S. and in Texas, however, do not implement these programs. U.S. school districts face many barriers to the successful dissemination (i.e., adoption, implementation, and maintenance of sexual health EBPs, including lack of knowledge about EBPs and where to find them, perceived lack of support from school administrators and parents, lack of guidance regarding the adoption process, competing priorities, and lack of specialized training on sexual health. Therefore, this paper describes how we used intervention mapping (Steps 3 and 4, in particular, a systematic design framework that uses theory, empirical evidence, and input from the community to develop CHoosing And Maintaining Effective Programs for Sex Education in Schools (iCHAMPSS, an online decision support system to help school districts adopt, implement, and maintain sexual health EBPs. Guided by this systematic intervention design approach, iCHAMPSS has the potential to increase dissemination of sexual health EBPs in school settings.
11. Instructional Design of a Programming Course
DEFF Research Database (Denmark)
Caspersen, Michael Edelgaard; Bennedsen, Jens
2007-01-01
The object-oriented programming course is designed according to results from cognitive science and educational psychology in general, and cognitive load theory and cognitive skill acquisition in particular; the principal techniques applied are worked examples, scaffolding, faded guidance, cognitive apprenticeship, and emphasis of patterns to aid schema creation and improve learning. As part of the presentation of the course, we provide a characterization of model-driven programming, the approach we have adopted in the introductory programming course. The result is an introductory programming course emphasizing a pattern-based approach to programming and schema acquisition in order to improve learning.
12. Preliminary recommendations on the design of the characterization program for the Hanford Site single-shell tanks: A system analysis
Energy Technology Data Exchange (ETDEWEB)
Buck, J.W.; Peffers, M.S.; Hwang, S.T.
1991-11-01
The work described in this volume was conducted by Pacific Northwest Laboratory to provide preliminary recommendations on data quality objectives (DQOs) to support the Waste Characterization Plan (WCP) and closure decisions for the Hanford Site single-shell tanks (SSTs). The WCP describes the first of a two-phase characterization program that will obtain information to assess and implement disposal options for SSTs. This work was performed for the Westinghouse Hanford Company (WHC), the current operating contractor on the Hanford Site. The preliminary DQOs contained in this volume deal with the analysis of SST wastes in support of the WCP and final closure decisions. These DQOs include information on significant contributors and detection limit goals (DLGs) for SST analytes based on public health risk.
13. Advanced turbine systems program conceptual design and product development Task 8.3 - autothermal fuel reformer (ATR). Topical report
Energy Technology Data Exchange (ETDEWEB)
NONE
1996-11-01
Autothermal fuel reforming (ATR) consists of reacting a hydrocarbon fuel such as natural gas or diesel with steam to produce a hydrogen-rich "reformed" fuel. This work has been designed to investigate the fuel reformation and the product gas combustion under gas turbine conditions. The hydrogen-rich gas has a high flammability with a wide range of combustion stability. Being lighter and more reactive than methane, the hydrogen-rich gas mixes readily with air and can be burned at low fuel/air ratios, producing inherently low emissions. The reformed fuel also has a low ignition temperature, which makes low-temperature catalytic combustion possible. ATR can be designed for use with a variety of alternative fuels, including heavy crudes, biomass, and coal-derived fuels. When the steam required for fuel reforming is raised using energy from the gas turbine exhaust, cycle efficiency is improved because the steam and fuel are chemically recuperated. Reformation of natural gas or diesel fuels to a homogeneous hydrogen-rich fuel has been demonstrated. Performance tests screening various reforming catalysts and operating conditions were conducted on a batch-tube reactor. Over 70 percent hydrogen (on a dry basis) in the product stream was obtained using natural gas as a feedstock. Hydrogen concentration is seen to increase with temperature, but less rapidly above 1300°F. The percent reforming increases as the steam-to-carbon ratio is increased. Two basic groups of reforming catalysts, nickel- and platinum-based, have been tested for reforming activity.
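The reformation step summarized in this abstract rests on standard steam-reforming chemistry; as a sketch (the governing textbook reactions, not equations reproduced from the report):

```latex
\underbrace{\mathrm{CH_4} + \mathrm{H_2O} \;\rightarrow\; \mathrm{CO} + 3\,\mathrm{H_2}}_{\text{steam reforming (endothermic)}}
\qquad
\underbrace{\mathrm{CO} + \mathrm{H_2O} \;\rightarrow\; \mathrm{CO_2} + \mathrm{H_2}}_{\text{water--gas shift}}
```

The endothermic reforming step is what allows turbine exhaust heat to be "chemically recuperated" into the fuel, as the abstract notes.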
14. A Framework for Systemic Design
Directory of Open Access Journals (Sweden)
Alex Ryan
2014-12-01
Full Text Available As designers move upstream from traditional product and service design to engage with challenges characterised by complexity, uniqueness, value conflict, and ambiguity over objectives, they have increasingly integrated systems approaches into their practice. This synthesis of systems thinking with design thinking is forming a distinct new field of systemic design. This paper presents a framework for systemic design as a mindset, methodology, and set of methods that together enable teams to learn, innovate, and adapt to a complex and dynamic environment. We suggest that a systemic design mindset is inquiring, open, integrative, collaborative, and centred. We propose a systemic design methodology composed of six main activities: framing, formulating, generating, reflecting, inquiring, and facilitating. We view systemic design methods as a flexible and open-ended set of procedures for facilitating group collaboration that are both systemic and designerly.
15. APPROACH TO ADAPTIVE LEARNING MANAGEMENT SYSTEM DESIGN
Directory of Open Access Journals (Sweden)
Vitaly A. Gaevoy
2014-01-01
Full Text Available In this paper, we describe how to increase the efficiency of learning management systems by using an adaptive approach. We summarize the existing systems, identify the absence of adaptability as a problem, and propose a programming and architectural approach to the design of an adaptive learning management system.
16. Systems Analysis and Design: Know Your Audience
Science.gov (United States)
Reinicke, Bryan A.
2012-01-01
Systems analysis and design (SAD) classes are required in both Information Systems and Accounting programs, but these audiences have very different needs for these skills. This article will review the requirements for SAD within each of these disciplines and compare and contrast the different requirements for teaching systems analysis and design…
17. Program for three-phase power transformer design
Directory of Open Access Journals (Sweden)
Olivian Chiver
2011-12-01
Full Text Available This paper presents a program developed for designing three-phase power transformers used in power systems. The program was developed in Visual Basic because this programming language allows us to realize a friendly and suggestive interface with minimum effort. The second, and most important, reason for using Visual Basic is that this language is recognized by the finite element analysis (FEA) software used, MagNet, produced by Infolytica. This software package is designed for calculation of the magnetic field of electromagnetic devices and machines. The 3D components of the numerical model are carried out automatically using the CATIA program, based on the calculated main geometric data.
18. Designing magnetic systems for reliability
International Nuclear Information System (INIS)
Heitzenroeder, P.J.
1991-01-01
Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that the predominance of magnet failures tend not to be in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and magnet design and fabrication practices which have been found to contribute to magnet reliability.
19. Design progress of hyper system
International Nuclear Information System (INIS)
Park, W.S.; Hwang, W.; Kim, Y.H.; Tak, N.I.; Song, T.Y.
2003-01-01
The Korea Atomic Energy Research Institute (KAERI) has been performing accelerator driven system related research and development, called HYPER, for the transmutation of nuclear waste and energy production through the transmutation process. The HYPER program is being performed within the framework of the national mid- and long-term nuclear research plan. KAERI is aiming to develop a system concept and type of road map by the year 2001 and complete the conceptual design of the HYPER system by the year 2006. Some major design features of the HYPER system have been developed. The burnable poison concept is being developed to keep the core reactivity swing less than 10%. In order to increase proliferation resistance, a pyro-chemical process is employed for the separation. The trade-off studies for fuel fabrication are being performed. A dispersion-type fuel is believed to have advantages in terms of achieving high discharge burnup. The long-lived fission products such as Tc-99 and I-129 will be destroyed using localized thermal neutrons separately in the HYPER. A calcium hydride is employed as moderator. SSC-H (Super System Code-HYPER) is being developed to simulate the behaviour of coolant systems. The thermal hydraulic properties of Pb-Bi are implemented in the SSC-H. Design optimization of the target and beam window is being performed using the FLUENT and ANSYS computer codes. In addition, beam irradiation testing is performed to estimate the hardening of the window material (9Cr-2WVTa) due to protons, using a keV-order accelerator. Beam diameter and window thickness are optimized based on the simulation results. (author)
20. Design progress of HYPER system
International Nuclear Information System (INIS)
Park, Won S.; Hwang, Woan; Kim, Yong H.; Nam-Il Tak; Song, Tae Y.
2001-01-01
The Korea Atomic Energy Research Institute (KAERI) has been performing accelerator driven system related research and development, called HYPER, for the transmutation of nuclear waste and energy production through the transmutation process. The HYPER program is within the framework of the national mid- and long-term nuclear research plan. KAERI is aiming to develop the system concept and a type of roadmap by the year 2001 and complete the conceptual design of the HYPER system by the year 2006. Some major design features of the HYPER system have been developed. A burnable poison concept is being developed to keep the core reactivity swing less than 10%. In order to increase the proliferation resistance, a pyrochemical process is employed for the separation. The trade-off studies for the fuel fabrication are being performed. A dispersion-type fuel is believed to have advantages in terms of achieving high discharge burnup. The long-lived fission products such as Tc-99 and I-129 will be destroyed using the localized thermal neutrons separately in the HYPER. A calcium hydride is employed as moderator. SSC-H (Super System Code-HYPER) is being developed to simulate the behavior of coolant systems. The thermal hydraulic properties of Pb-Bi are implemented in SSC-H. The design optimization of the target and beam window is performed using the FLUENT and ANSYS computer codes. In addition, beam irradiation testing is performed to estimate the hardening of the window material (9Cr-2WVTa) due to protons, using a keV-order accelerator. Beam diameter and window thickness are optimized based on the simulation results. (author)
Science.gov (United States)
Diallo, Lamine; Gerhardt, Kris
2017-01-01
With a growing number of leadership programs in universities and colleges in North America, leadership educators and researchers are engaged in a wide ranging dialogue to propose clear processes, content, and designs for providing academic leadership education. This research analyzes the curriculum design of 52 institutions offering a "Minor…
2. FFTF Heat Transport System (HTS) component and system design
International Nuclear Information System (INIS)
Young, M.W.; Edwards, P.A.
1980-01-01
The FFTF Heat Transport Systems and Components designs have been completed and successfully tested at isothermal conditions up to 427°C (800°F). General performance has been as predicted in the design analyses. Operational flexibility and reliability have been outstanding throughout the test program. The components and systems have been demonstrated ready to support reactor powered operation testing planned later in 1980.
3. An assessment of design control practices and design reconstitution programs in the nuclear power industry
International Nuclear Information System (INIS)
Imbro, E.V.
1991-02-01
The US Nuclear Regulatory Commission (NRC) and the utilities have identified shortcomings involving the maintenance of well-defined design bases and the availability of the necessary supporting design documentation. Many utilities have embarked on design-document reconstitution programs although there has been no clear consensus regarding what information should be included in design-bases documents, what is the minimum set of necessary design documents to support the design bases, or how missing or deficient design documentation should be handled. The NRC initiated a survey to ascertain the status of design control programs within the industry and the approaches to design-bases documentation used by some utilities. The survey scope included six utilities and one nuclear steam supply system vendor. Conclusions and observations resulting from the survey assessments are provided so that utilities and the NRC can consider actions to improve these programs. 12 refs
4. How do eHealth Programs for Adolescents With Depression Work? A Realist Review of Persuasive System Design Components in Internet-Based Psychological Therapies.
Science.gov (United States)
Wozney, Lori; Huguet, Anna; Bennett, Kathryn; Radomski, Ashley D; Hartling, Lisa; Dyson, Michele; McGrath, Patrick J; Newton, Amanda S
2017-08-09
Major depressive disorders are common among adolescents and can impact all aspects of their daily life. Traditional therapies, namely cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT), have been delivered face-to-face. However, Internet-based (online) delivery of these therapies is emerging as an option for adolescents. Internet-based CBT and IPT involve therapeutic content, interaction between the user and the system, and different technological features embedded into the online program (eg, multimedia). Studies of Internet-based CBT and IPT for adolescent depression differ on all three aspects, and variable, positive therapy effects have been reported. A better understanding of the treatment conditions that influence therapy outcomes is important to designing and evaluating these novel therapies. Our aim was to examine the technological and program delivery features of Internet-based CBT and IPT for adolescent depression and to document their potential relation to treatment outcomes and program use. We performed a realist synthesis. We started with an extensive search of published and gray literature. We included intervention studies that evaluated Internet-based CBT or IPT for adolescent depression. We included mixed-methods and qualitative studies, theoretical papers, and policy/implementation documents if they included a focus on how Internet-based psychological therapy is proposed to work for adolescents with depression/depressive symptoms. We used the Mixed-Methods Appraisal Tool to assess the methodological quality of studies. We used the Persuasive System Design (PSD) model as a framework for data extraction and analysis to examine how Internet-based CBT and IPT, as technology-based systems, influence the attitudes and behaviors of system users. PSD components described for the therapies were linked to reported outcomes using a cross-case comparison method and thematic synthesis. We identified 19 Internet-based CBT programs in 59 documents.
5. Development of intellectual reactor design system IRDS
International Nuclear Information System (INIS)
Tsuchihashi, K.; Nakagawa, M.; Mori, T.; Kugo, T.
1992-01-01
An intellectual reactor design system, IRDS, has been developed as a prototype of the ADES program. The objective is to support feasibility studies and pre-conceptual design of new types of reactor cores. The design process is achieved by sequential steps that get/put information from/to the design model. It works on an EWS by utilizing its capabilities of menu window display for interactive usage and of graphic display for visualization of the input and output of simulations. An object-oriented architecture is realized in the system control, the integration of simulation modules, and the structure of the design database. (author)
6. Incremental approximate dynamic programming for nonlinear flight control design
NARCIS (Netherlands)
Zhou, Y.; Van Kampen, E.J.; Chu, Q.P.
2015-01-01
A self-learning adaptive flight control design for non-linear systems allows reliable and effective operation of flight vehicles in a dynamic environment. Approximate dynamic programming (ADP) provides a model-free and computationally effective process for designing adaptive linear optimal
7. Designing of Loss Optimum Regulator for Control of D.C. Electric Drive with Varying Inertia Moment in CoDeSys Programming System
OpenAIRE
S. O. Novikov; A. V. Paschenko
2009-01-01
The CoDeSys programming system is considered the most complete version of software for programmable logic controllers (PLCs) that meets the requirements of the IEC 61131-3 standard. The given software is the most suitable for simulation and development of control system algorithms and for execution of semi-full-scale tests without involvement of an actual object. The programming environment runs on a personal computer under Windows. As CoDeSys produces machine code, it is rather easy to support its program...
8. Knowledge-based optical system design
Science.gov (United States)
Nouri, Taoufik
1992-03-01
This work is a new approach for the design of start optical systems and represents a new contribution of artificial intelligence techniques in the optical design field. A knowledge-based optical-systems design (KBOSD), based on artificial intelligence algorithms, first order logic, knowledge representation, rules, and heuristics on lens design, is realized. This KBOSD is equipped with optical knowledge in the domain of centered dioptrical optical systems used at low aperture and small field angles. It generates centered dioptrical, on-axis and low-aperture optical systems, which are used as start systems for the subsequent optimization by existing lens design programs. This KBOSD produces monochromatic or polychromatic optical systems, such as singlet lens, doublet lens, triplet lens, reversed singlet lens, reversed doublet lens, reversed triplet lens, and telescopes. In the design of optical systems, the KBOSD takes into account many user constraints such as cost, resistance of the optical material (glass) to chemical, thermal, and mechanical effects, as well as the optical quality such as minimal aberrations and chromatic aberrations corrections. This KBOSD is developed in the programming language Prolog and has knowledge on optical design principles and optical properties. It is composed of more than 3000 clauses. Inference engine and interconnections in the cognitive world of optical systems are described. The system uses neither a lens library nor a lens data base; it is completely based on optical design knowledge.
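The rule-based character of such a knowledge-based design system can be illustrated with a minimal sketch. This is plain Python rather than the Prolog the KBOSD is actually written in, and the rules and thresholds below are invented for illustration only; the real knowledge base comprises over 3000 clauses:

```python
# Illustrative sketch of rule-based start-system selection, loosely
# modeled on how a knowledge-based optical design system might choose
# a starting configuration. The rules and thresholds are hypothetical,
# not taken from the KBOSD described in the abstract.

def select_start_system(focal_length_mm: float, chromatic_correction: bool) -> str:
    """Pick a starting lens configuration from simple design rules."""
    if chromatic_correction and focal_length_mm > 500:
        return "triplet lens"   # polychromatic use at long focal length
    if chromatic_correction:
        return "doublet lens"   # achromatic pair for color correction
    return "singlet lens"       # monochromatic case: simplest start system

print(select_start_system(1000, True))   # -> triplet lens
```

The real system refines such a start system further against user constraints (cost, glass durability, aberration correction) before handing it to a conventional lens design program for optimization.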
9. Liner system design
International Nuclear Information System (INIS)
Hutchinson, I.P.G.; Ellison, R.D.
1992-01-01
This paper discusses one of the most important regulatory and design decisions, which is determining the type of liner system. The liner system includes a combination of low-hydraulic-conductivity and leakage control materials to be provided beneath a mine waste management unit to avoid seepage losses, which could result in an unacceptable threat to beneficial uses of ground water. This is more difficult for mine wastes than for other types of waste disposal because: the physical and chemical properties of mine wastes vary widely; the sizes (volume and areal extent) of mine waste management units are often very large, so that the costs of liners can impact the economic feasibility of some operations. The U.S. Congress considered the differences between mine wastes and other types of wastes when it passed the Bevill amendment to the Resource Conservation and Recovery Act (RCRA) in 1980. That amendment exempted most mine wastes from hazardous waste regulation until the United States Environmental Protection Agency (EPA) conducted a study to determine the appropriate degree of regulation for mine wastes. In 1986, the EPA issued a report recognizing that, with a few exceptions for certain processed materials, mine wastes do not present the same level of threat as other wastes and therefore should be regulated differently. An additional important factor which differentiates mine waste disposal management units from other solid waste disposal units is that, except in unusual circumstances, mine and process facilities are located where the mineral resource is being extracted. Therefore, the location of mine waste disposal facilities cannot be based solely upon a site selection study. As a result, some mines are located where the distance or depth to a valuable water resource is relatively small, while others are located in remote desert areas with no contiguous surface water resources and deep ground water of limited quantity and/or quality.
10. Logic-programming language enriches design processes
Energy Technology Data Exchange (ETDEWEB)
Kitson, B.; Ow-Wing, K.
1984-03-22
With the emergence of a set of high-level CAD tools for programmable logic devices, designers can translate logic into functional custom devices simply and efficiently. The core of the package is a block-structured hardware description language called PLPL, for ''programmable-logic programming language.'' The chief advantage of PLPL lies in its multiple input formats, which permit different design approaches for a variety of design problems. The higher the level of the approach, the closer PLPL will come to directly specifying the desired function. Intermediate steps in the design process can be eliminated, along with the errors that might have been generated during those steps.
11. Structural design by CAD system
International Nuclear Information System (INIS)
Kim, Jhin Wung; Shim, Jae Ku; Kim, Sun Hoon; Kim, Dae Hong; Lee, Kyung Jin; Choi, Kyu Sup; Choi, In Kil; Lee, Dong Yong
1988-12-01
CAD systems are now widely used for the design of many engineering problems involving static, dynamic, and thermal stress analyses of structures. In order to apply CAD systems to structural analysis and design, the functions of the hardware and software necessary for CAD systems must be understood. The purpose of this study is to introduce the basic elements that are indispensable in the application of CAD systems to the analysis and design of structures, and to give design engineers a thorough understanding of CAD systems so that they can participate in further technological developments of CAD systems. Due to the complexity and variety of the shape and size of present-day structures, the need for new design technologies is growing for more efficient, accurate, and economical design of structures. The application of CAD systems to structural engineering fields makes it possible to improve structural engineering analysis and design technologies and also to achieve standardization of the design process. An active introduction of rapidly developing CAD technologies will contribute to analyzing and designing structures more efficiently and reliably. Based on this report of the current status of the application of CAD systems to structural analysis and design, the next goal is to develop an expert system that can perform the design of structures by CAD systems, from the preliminary conceptual design to the final detail drawings, automatically. (Author)
12. Radioisotope Power Systems Program: A Program Overview
Science.gov (United States)
Hamley, John A.
2016-01-01
NASA's Radioisotope Power Systems (RPS) Program continues to plan and mature research in energy conversion, and partners with the Department of Energy (DOE) to make RPS ready and available to support the exploration of the solar system in environments where the use of conventional solar or chemical power generation is impractical or impossible for potential future mission needs. Recent program responsibilities include providing investment recommendations to NASA stakeholders on emerging thermoelectric and Stirling energy conversion technologies, and insight on NASA investments at DOE in readying a generator for the Mars 2020 mission. This presentation provides an overview of the RPS Program content and status, and the approach used to maintain the readiness of RPS to support potential future NASA missions.
13. Programming models for energy-aware systems
Science.gov (United States)
Zhu, Haitao
Energy efficiency is an important goal of modern computing, with direct impact on system operational cost, reliability, usability and environmental sustainability. This dissertation describes the design and implementation of two innovative programming languages for constructing energy-aware systems. First, it introduces ET, a strongly typed programming language to promote and facilitate energy-aware programming, with a novel type system design called Energy Types. Energy Types is built upon a key insight into today's energy-efficient systems and applications: despite the popular perception that energy and power can only be described in joules and watts, real-world energy management is often based on discrete phases and modes, which in turn can be reasoned about by type systems very effectively. A phase characterizes a distinct pattern of program workload, and a mode represents an energy state the program is expected to execute in. Energy Types is designed to reason about energy phases and energy modes, bringing programmers into the optimization of energy management. Second, the dissertation develops Eco, an energy-aware programming language centering around sustainability. A sustainable program built from Eco is able to adaptively adjust its own behavior to stay on a given energy budget, avoiding both a deficit that would lead to battery drain or CPU overheating, and a surplus that could have been used to improve the quality of the program output. Sustainability is viewed as a form of supply and demand matching, and a sustainable program consistently maintains the equilibrium between supply and demand. ET is implemented as a prototyped compiler for smartphone programming on Android, and Eco is implemented as a minimal extension to Java. Programming practices and benchmarking experiments in these two new languages showed that ET can lead to significant energy savings for Android Apps and Eco can efficiently promote battery awareness and temperature awareness in real
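The supply-and-demand view of sustainability described in this abstract can be illustrated with a small sketch. This is plain Python, not the Eco language itself, and the budget model, function name, and numbers are invented for illustration:

```python
# Illustrative sketch (not the actual Eco language): a program that
# adapts its work quality at each step to stay exactly on a fixed
# energy budget, matching "supply" (fair share of remaining budget)
# against "demand" (energy cost of the chosen quality level).

def run_on_budget(total_budget: float, steps: int, cost_per_quality: float = 2.0):
    """Return the quality chosen at each step and the total energy spent.

    Spending tracks the budget: no deficit (overspend) and no surplus
    (leftover energy that could have improved output quality).
    """
    spent = 0.0
    qualities = []
    for step in range(steps):
        remaining = total_budget - spent
        supply = remaining / (steps - step)   # fair share for this step
        quality = supply / cost_per_quality   # demand matched to supply
        spent += quality * cost_per_quality
        qualities.append(quality)
    return qualities, spent

qualities, spent = run_on_budget(total_budget=100.0, steps=10)
print(qualities[0], spent)   # -> 5.0 100.0
```

With a constant per-unit cost, the equilibrium is a uniform quality level and the budget is spent exactly; a real system would re-plan as costs and supply fluctuate.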
14. Computer System Design System-on-Chip
CERN Document Server
Flynn, Michael J
2011-01-01
The next generation of computer system designers will be less concerned about details of processors and memories, and more concerned about the elements of a system tailored to particular applications. These designers will have a fundamental knowledge of processors and other elements in the system, but the success of their design will depend on the skills in making system-level tradeoffs that optimize the cost, performance and other attributes to meet application requirements. This book provides a new treatment of computer system design, particularly for System-on-Chip (SOC), which addresses th
15. Software tools to aid Pascal and Ada program design
Energy Technology Data Exchange (ETDEWEB)
Jankowitz, H.T.
1987-01-01
This thesis describes a software tool which analyses the style and structure of Pascal and Ada programs by ensuring that some minimum design requirements are fulfilled. The tool is used in much the same way as a compiler is used to teach students the syntax of a language, only in this case issues related to the design and structure of the program are of paramount importance. The tool operates by analyzing the design and structure of a syntactically correct program, automatically generating a report detailing changes that need to be made in order to ensure that the program is structurally sound. The author discusses how the model gradually evolved from a plagiarism detection system which extracted several measurable characteristics in a program to a model that analyzed the style of Pascal programs. In order to incorporate more sophisticated concepts such as data abstraction, information hiding and data protection, this model was then extended to analyze the composition of Ada programs. The Ada model takes full advantage of facilities offered in the language, and by using this tool the standard and quality of written programs is raised whilst the fundamental principles of program design are grasped through a process of self-tuition.
16. Disease management: program design, development, and implementation.
Science.gov (United States)
Harvey, N; DePue, D M
1997-06-01
Disease management is an emerging approach to patient management, customer satisfaction, and cost containment that comprises disease modeling; patient segmentation and risk assessment; clinical protocols; and wellness, self-management, and education. Implementing a disease management program poses significant challenges to healthcare organizations. To successfully implement a disease management program, a tightly integrated continuum of care, sophisticated information systems, and disease management support systems must be in place. Strategic partnerships with outside vendors may speed program implementation and provide opportunities to develop risk-sharing relationships.
17. Embedded Systems Design: Optimization Challenges
DEFF Research Database (Denmark)
Pop, Paul
2005-01-01
The complexity of embedded systems makes designing such systems increasingly important and difficult at the same time. New automated design optimization techniques are needed, which are able to: successfully manage the complexity of embedded systems, meet the constraints imposed by the application domain, shorten the time-to-market, and reduce development and manufacturing costs. In this paper, the author introduces several embedded systems design problems, and shows how they can be formulated as optimization problems. Solving such challenging design optimization problems is the key to the success of the embedded systems design process.
18. System Design for Telecommunication Gateways
CERN Document Server
Bachmutsky, Alexander
2010-01-01
System Design for Telecommunication Gateways provides a thorough review of designing telecommunication network equipment based on the latest hardware designs and software methods available on the market. Focusing on high-end efficient designs that challenge all aspects of the system architecture, this book helps readers to understand a broader view of the system design, analyze all its most critical components, and select the parts that best fit a particular application. In many cases new technology trends, potential future developments, system flexibility and capability extensions are outlined.
19. Superconducting magnet systems in EPR designs
International Nuclear Information System (INIS)
Knobloch, A.F.
1976-10-01
Tokamak experiments have reached a stage where large scale application of superconductors can be envisaged for machines becoming operational within the next decade. Existing designs for future devices already indicate some of the tasks and problems associated with large superconducting magnet systems. Using this information the coming magnet system requirements are summarized, some design considerations given and in conclusion a brief survey describes already existing Tokamak magnet development programs. (orig.)
20. Flight Path Recovery System (FPRS) design study
International Nuclear Information System (INIS)
1978-09-01
The study contained herein presents a design for a Flight Path Recovery System (FPRS) for use in the NURE Program which will be more accurate than systems presently used, provide position location data in digital form suitable for automatic data processing, and provide for flight path recovery in a more economic and operationally suitable manner. The design is based upon the use of presently available hardware and technology, and presents little, if any, development risk. In addition, a Flight Test Plan designed to test the FPRS design concept is presented.
1. Flight Path Recovery System (FPRS) design study
Energy Technology Data Exchange (ETDEWEB)
1978-09-01
The study contained herein presents a design for a Flight Path Recovery System (FPRS) for use in the NURE Program which will be more accurate than systems presently used, provide position location data in digital form suitable for automatic data processing, and provide for flight path recovery in a more economic and operationally suitable manner. The design is based upon the use of presently available hardware and technology, and presents little, if any, development risk. In addition, a Flight Test Plan designed to test the FPRS design concept is presented.
2. A radiation protection training program designed to reduce occupational radiation dose to individuals using pneumatic-transfer systems at the Oregon State TRIGA reactor
International Nuclear Information System (INIS)
Johnson, A.G.; Anderson, T.V.; Pratt, D.; Dodd, B.; Carpenter, W.T.
1984-01-01
In order to keep personnel doses as low as reasonably achievable, and also to help satisfy requirements of NRC regulations contained in 10 CFR 19, a training program was established to qualify all individuals prior to their use of the OSTR PT systems. Program objectives are directed mainly towards minimizing the spread of radioactive contamination and reducing the potential for unnecessary and inappropriate personnel radiation exposure; however, other operational and emergency procedures are also covered. The PT systems training program described in this report was established approximately 8 to 10 years ago, but recently there has been increased interest in using it. Whether or not a PT system training program should be implemented at a specific TRIGA operation (assuming the facility is equipped with a PT system) will undoubtedly be influenced heavily by the nature and frequency of the PT system's use, by who uses the system, and by whether the system is one of the automatic loading and unloading types, or one of the more commonly encountered manually operated systems. However, from our experience we feel that training commensurate with the type of PT system operation being conducted is a wise investment, and should be a requirement for all system operators.
3. CASKS (Computer Analysis of Storage casKS): A microcomputer based analysis system for storage cask design review. User's manual to Version 1b (including program reference)
International Nuclear Information System (INIS)
Chen, T.F.; Gerhard, M.A.; Trummer, D.J.; Johnson, G.L.; Mok, G.C.
1995-02-01
4. Advanced thermionic reactor systems design code
International Nuclear Information System (INIS)
Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.
1991-01-01
An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance
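The abstract describes a modular design-code architecture in which each module computes specific system parameters and later modules consume earlier results (e.g., thermal hydraulics using the core size fixed by the MCNP-based neutronics module). A sketch of that pipeline pattern, with module names echoing the abstract but all formulas and numbers invented placeholders:

```python
# Hypothetical sketch of a modular design-code pipeline: each module reads
# the parameters computed so far and contributes its own results.
class Module:
    name = "base"
    def run(self, params: dict) -> dict:
        raise NotImplementedError

class Neutronics(Module):
    name = "neutronics"
    def run(self, params):
        # Stand-in for a criticality calculation (e.g., driven by MCNP).
        params["core_radius_cm"] = 30.0 * (params["power_kwe"] / 25.0) ** 0.5
        return params

class ThermalHydraulics(Module):
    name = "thermal_hydraulics"
    def run(self, params):
        # Uses upstream results; the flow correlation here is invented.
        params["coolant_flow_kg_s"] = 0.02 * params["power_kwe"]
        return params

def design(power_kwe, modules):
    params = {"power_kwe": power_kwe}
    for m in modules:            # fixed execution order, upstream first
        params = m.run(params)
    return params

result = design(25.0, [Neutronics(), ThermalHydraulics()])
print(result)
```

The shared parameter dictionary plays the role of the data the real code passes between its FORTRAN modules; the object-oriented split mirrors the one-module-per-discipline structure described above.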
5. Material control system simulator program reference manual
International Nuclear Information System (INIS)
Hollstien, R.B.
1978-01-01
A description is presented of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. Although MCSS may be used independently in the design or analysis of material handling and processing systems, it has been tailored toward the determination of material accountability and the response of material control systems to adversary action sequences
6. Material control system simulator program reference manual
Energy Technology Data Exchange (ETDEWEB)
Hollstien, R.B.
1978-01-24
A description is presented of a Material Control System Simulator (MCSS) program for determination of material accounting uncertainty and system response to particular adversary action sequences that constitute plausible material diversion attempts. The program is intended for use in situations where randomness, uncertainty, or interaction of adversary actions and material control system components make it difficult to assess safeguards effectiveness against particular material diversion attempts. Although MCSS may be used independently in the design or analysis of material handling and processing systems, it has been tailored toward the determination of material accountability and the response of material control systems to adversary action sequences.
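The simulator described in the two records above evaluates system response to adversary action sequences where randomness makes analytic assessment hard. A toy Monte Carlo sketch of that idea, with per-step detection probabilities invented for illustration:

```python
import random

# Hypothetical per-step detection probabilities for one diversion sequence.
sequence = {"access_vault": 0.4, "remove_material": 0.7, "exit_portal": 0.9}

def undetected(seq, rng):
    # The attempt succeeds only if every step evades detection.
    return all(rng.random() > p for p in seq.values())

rng = random.Random(42)        # fixed seed for a repeatable estimate
trials = 100_000
hits = sum(undetected(sequence, rng) for _ in range(trials))
print(hits / trials)           # estimate of P(diversion goes undetected)
```

The exact answer here is 0.6 * 0.3 * 0.1 = 0.018; a real MCSS-style model would add interacting material-control components and accounting uncertainty, which is where simulation earns its keep.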
7. Lofar information system design
NARCIS (Netherlands)
Valentijn, E.; Belikov, A. N.
2009-01-01
The Lofar Information System is a solution for the Lofar Long Term Archive that is capable of storing and handling petabytes of raw and processed data. The newly created information system is based on Astro-WISE - the information system for wide field astronomy. We review an adaptation of Astro-WISE for the new
Energy Technology Data Exchange (ETDEWEB)
Sy Ali
2002-03-01
The market for power generation equipment is undergoing a tremendous transformation. The traditional electric utility industry is restructuring, promising new opportunities and challenges for all facilities to meet their demands for electric and thermal energy. Now more than ever, facilities have a host of options to choose from, including new distributed generation (DG) technologies that are entering the market as well as existing DG options that are improving in cost and performance. The market is beginning to recognize that some of these users have needs beyond traditional grid-based power. Together, these changes are motivating commercial and industrial facilities to re-evaluate their current mix of energy services. One of the emerging generating options is a new breed of advanced fuel cells. While there are a variety of fuel cell technologies being developed, the solid oxide fuel cells (SOFC) and molten carbonate fuel cells (MCFC) are especially promising, with their electric efficiency expected around 50-60 percent and their ability to generate either hot water or high quality steam. In addition, they both have the attractive characteristics of all fuel cells--relatively small siting footprint, rapid response to changing loads, very low emissions, quiet operation, and an inherently modular design lending itself to capacity expansion at predictable unit cost with reasonably short lead times. The objectives of this project are to:(1) Estimate the market potential for high efficiency fuel cell hybrids in the U.S.;(2) Segment market size by commercial, industrial, and other key markets;(3) Identify and evaluate potential early adopters; and(4) Develop results that will help prioritize and target future R&D investments. The study focuses on high efficiency MCFC- and SOFC-based hybrids and competing systems such as gas turbines, reciprocating engines, fuel cells and traditional grid service. Specific regions in the country have been identified where these
9. Advanced Design Program (ARIES) Final Report
Energy Technology Data Exchange (ETDEWEB)
Tillack, Mark [Univ. of California, San Diego, CA (United States)
2016-02-16
Progress is reported for the ARIES 3-year research program at UC San Diego, including three main tasks: 1. Completion of ARIES research on PMI/PFC issues. 2. Detailed engineering design and analysis of divertors and first wall/blankets. 3. Mission & requirements of FNSF.
10. Reduced Power Laser Designation Systems
National Research Council Canada - National Science Library
Sherlock, Barry G
2008-01-01
This work contributes to the Micropulse Laser Designation (MPLD) project. The objective of MPLD is to develop a 6-lb eye-safe micro-pulse laser system to locate, identify, range, mark, and designate stationary and moving targets...
11. Reduced Power Laser Designation Systems
National Research Council Canada - National Science Library
Sherlock, Barry
2009-01-01
This work contributes to the Micropulse Laser Designation (MPLD) project. The objective of this project is to develop a 6-lb eye-safe micro-pulse laser system to locate, identify, range, mark, and designate stationary and moving targets...
12. Intelligent Systems for Active Program Diagnosis
Directory of Open Access Journals (Sweden)
2000-12-01
Full Text Available Intelligent program diagnosis systems are computer programs capable of analyzing logical and design-level errors and misconceptions in programs. Upon discovering the errors, these systems provide intelligent feedback and thus guide the users in the problem-solving process. Intelligent program diagnosis systems are classified by their primary means of program analysis. The most distinct split is between those systems that are unable to analyze partial code segments as they are provided by the user and must wait until the entire solution code is completed before attempting any diagnosis, and those that are capable of analyzing partial solutions and providing proper guidance whenever an error or misconception is encountered. This paper gives an overview of the field and then critically compares work accomplished on several closely related active diagnosis systems, emphasizing such issues as the representation techniques used to capture the domain knowledge required for the diagnosis, ability to handle the diagnosis of partial code segments of the solutions, features of the user interfaces, and methodologies used in conducting the diagnosis process. Finally the paper presents a detailed discussion on issues related to active program diagnosis along with various design considerations to improve the engineering of this approach to intelligent diagnosis. The discussion presented in this paper tackles the issues referred above within the context of DISCOVER, an intelligent system for programming by discovery.
13. SASD-tools for program design
International Nuclear Information System (INIS)
Gather, K.S.
1989-01-01
An overview of Structured Analysis Structured Design (SASD) methodology is given. Some emphasis is put on the time needed to start in a HEP environment with software design methodologies, and on the motivation for SASD. The need for tools is indicated, and examples of their usefulness in analysis and design steps are discussed. Limitations of certain design methods are indicated and additional tools are briefly discussed. Criteria for the selection of tools to be used in large systems design are discussed, and some attention is given to implications for management structures. (orig.)
14. Program Helps Design Tests Of Developmental Software
Science.gov (United States)
Hops, Jonathan
1994-01-01
Computer program called "A Formal Test Representation Language and Tool for Functional Test Designs" (TRL) provides automatic software tool and formal language used to implement category-partition method and produce specification of test cases in testing phase of development of software. Category-partition method useful in defining input, outputs, and purpose of test-design phase of development and combines benefits of choosing normal cases having error-exposing properties. Traceability maintained quite easily by creating test design for each objective in test plan. Effort to transform test cases into procedures simplified by use of automatic software tool to create cases based on test design. Method enables rapid elimination of undesired test cases from consideration and facilitates review of test designs by peer groups. Written in C language.
15. Systems engineering requirements impacting MHTGR circulator design
International Nuclear Information System (INIS)
Chi, H.W.; Baccaglini, G.M.; Potter, R.C.; Shenoy, A.S.
1988-01-01
At the initiation of the MHTGR program, an important task involved translating the plant users' requirements into design conditions. This was particularly true in the case of the heat transport and shutdown cooling systems since these embody many components. This paper addresses the two helium circulators in these systems. An integrated approach is being used in the development of design and design documentation for the MHTGR plant. It is an organized and systematic development of plant functions and requirements, determined by top-down design, performance, and cost trade-off studies and analyses, to define the overall plant systems, subsystems, components, and human actions. These studies, that led to the identification of the major design parameters for the two circulators, are discussed in this paper. This includes the performance information, steady state and transient data, and the various interface requirements. The design of the circulators used in the MHTGR is presented. (author). 1 ref., 17 figs
16. The art of programming embedded systems
CERN Document Server
Ganssle, Jack
1992-01-01
Embedded systems are products such as microwave ovens, cars, and toys that rely on an internal microprocessor. This book is oriented toward the design engineer or programmer who writes the computer code for such a system. There are a number of problems specific to the embedded systems designer, and this book addresses them and offers practical solutions.Key Features* Offers cookbook routines, algorithms, and design techniques* Includes tips for handling debugging management and testing* Explores the philosophy of tightly coupling software and hardware in programming and dev
17. Designing a leadership development program for surgeons.
Science.gov (United States)
Jaffe, Gregory A; Pradarelli, Jason C; Lemak, Christy Harris; Mulholland, Michael W; Dimick, Justin B
2016-01-01
18. Accelerating Science Driven System Design With RAMP
Energy Technology Data Exchange (ETDEWEB)
Wawrzynek, John [Univ. of California, Berkeley, CA (United States)
2015-05-01
Researchers from UC Berkeley, in collaboration with the Lawrence Berkeley National Lab, are engaged in developing an Infrastructure for Synthesis with Integrated Simulation (ISIS). The ISIS Project was a cooperative effort for “application-driven hardware design” that engages application scientists in the early parts of the hardware design process for future generation supercomputing systems. This project served to foster development of computing systems that are better tuned to the application requirements of demanding scientific applications and result in more cost-effective and efficient HPC system designs. In order to overcome long conventional design-cycle times, we leveraged reconfigurable devices to aid in the design of high-efficiency systems, including conventional multi- and many-core systems. The resulting system emulation/prototyping environment, in conjunction with the appropriate intermediate abstractions, provided both a convenient user programming experience and retained flexibility, and thus efficiency, of a reconfigurable platform. We initially targeted the Berkeley RAMP system (Research Accelerator for Multiple Processors) as that hardware emulation environment to facilitate and ultimately accelerate the iterative process of science-driven system design. Our goal was to develop and demonstrate a design methodology for domain-optimized computer system architectures. The tangible outcome is a methodology and tools for rapid prototyping and design-space exploration, leading to highly optimized and efficient HPC systems.
19. Program status 3. quarter -- FY 1990: Confinement systems programs
Energy Technology Data Exchange (ETDEWEB)
NONE
1990-07-24
Highlights of the DIII-D Research Operations task are: completed five weeks tokamak operations; initiated summer vent; achievement of 10.7% beta; carried out first dimensionless transport scaling experiment; completed IBW program; demonstrated divertor heat reduction with gas puffing; field task proposals presented to OFE; presentation of DIII-D program to FPAC; made presentation to Admiral Watkins; and SAN safety review. Summaries are given on research programs, operations, program development, hardware development, operations support and collaborative efforts. Brief summaries of progress on the International Cooperation task include: TORE SUPRA, ASDEX, JFT-2M, and JET. Funding for work on CIT physics was received this quarter. Several physics R and D planning tasks were initiated. Earlier in FY90, a poloidal field coil shaping system (PFC) was found for IGNITOR. This quarter more detailed analysis has been done to optimize the design of the PFC system.
20. Inductive Communication System Design Summary
Science.gov (United States)
1978-09-01
The report documents the experience obtained during the design and development of the Inductive Communications System used in the Morgantown People Mover. The Inductive Communications System is used to provide wayside-to-vehicle and vehicle-to-waysid...
1. Designing of Loss Optimum Regulator for Control of D.C. Electric Drive with Varying Inertia Moment in CoDeSys Programming System
Directory of Open Access Journals (Sweden)
S. O. Novikov
2009-01-01
Full Text Available The CoDeSys programming system is considered the most complete version of software for programmable logic controllers (PLC) that meets the requirements of the IEC 61131-3 standard. The given software is the most suitable for simulation and development of control system algorithms and for execution of semi-full-scale tests without involvement of an actual object. The programming environment runs on a personal computer under Windows. As CoDeSys produces machine code, it is rather easy to program, and its minimum support presupposes only selection of I/O and program debugging functions. The investigations have been directed at the optimization of electric drive operation with a varying moment of inertia. The whole software developed for the solution of the considered problem has been written and realized in the CoDeSys programming system. The «modified maximum principle» of V. I. Panasiuk is applied as the mathematical apparatus, which makes it possible to obtain positive results in the investigations.
2. Embedded Systems Design with FPGAs
CERN Document Server
Pnevmatikatos, Dionisios; Sklavos, Nicolas
2013-01-01
This book presents methodologies for modern applications of embedded systems design, using field programmable gate array (FPGA) devices. Coverage includes state-of-the-art research from academia and industry on a wide range of topics, including advanced electronic design automation (EDA), novel system architectures, embedded processors, arithmetic, dynamic reconfiguration and applications. Describes a variety of methodologies for modern embedded systems design; Implements methodologies presented on FPGAs; Covers a wide variety of applications for reconfigurable embedded systems, including Bioinformatics, Communications and networking, Application acceleration, Medical solutions, Experiments for high energy physics, Astronomy, Aerospace, Biologically inspired systems and Computational fluid dynamics (CFD).
3. Another Program Simulates A Modular Manufacturing System
Science.gov (United States)
Schroer, Bernard J.; Wang, Jian
1996-01-01
SSE5 computer program provides simulation environment for modeling manufacturing systems containing relatively small numbers of stations and operators. Designed to simulate manufacturing of apparel, also used in other manufacturing domains. Valuable for small or medium-size firms, including those lacking expertise to develop detailed mathematical models or have only minimal knowledge in describing manufacturing systems and in analyzing results of simulations on mathematical models. Two other programs available bundled together as SSE (MFS-26245). Each program models slightly different manufacturing scenario. Written in Turbo C v2.0 for IBM PC-series and compatible computers running MS-DOS and successfully compiled using Turbo C++ v3.0.
4. HYPER system design study
Energy Technology Data Exchange (ETDEWEB)
Park, Won S.; Han, Seok J.; Song, Tae Y. [Korea Atomic Energy Research Institute, Taejon (Korea)
1999-04-01
KAERI is developing an ADS, named HYPER, for the transmutation of nuclear waste. HYPER is designed to produce 1000 MWth with a subcriticality of 0.97. HYPER adopts a hollow-cylinder-type metal fuel and requires 1.0 GeV, 16 mA proton beams. Pb-Bi is used as the coolant, and the inlet and outlet temperatures are 340 deg C and 510 deg C, respectively. In addition, the Pb-Bi coolant also serves as the spallation target. HYPER is expected to incinerate about 380 kg of TRU a year, which corresponds to a support ratio of 5-6. 23 refs., 50 figs., 31 tabs. (Author)
5. Mars oxygen production system design
Science.gov (United States)
Cotton, Charles E.; Pillow, Linda K.; Perkinson, Robert C.; Brownlie, R. P.; Chwalowski, P.; Carmona, M. F.; Coopersmith, J. P.; Goff, J. C.; Harvey, L. L.; Kovacs, L. A.
1989-01-01
The design and construction phase is summarized of the Mars oxygen demonstration project. The basic hardware required to produce oxygen from simulated Mars atmosphere was assembled and tested. Some design problems still remain with the sample collection and storage system. In addition, design and development of computer compatible data acquisition and control instrumentation is ongoing.
6. Compressive Feedback Control Design for Spatially Distributed Systems
Science.gov (United States)
2017-01-03
AFRL-AFOSR-VA-TR-2017-0004: Compressive Feedback Control Design for Spatially Distributed Systems. Lehigh University. Program Manager: Dr. Frederick A. Leve; Principal Investigator: Nader Motee. The report summarizes accomplishments and research results on systemic performance and robustness.
7. Degaussing System Design Optimization
NARCIS (Netherlands)
Bekers, D.J.; Lepelaars, E.S.A.M.
2013-01-01
Steel ships with a magnetic signature requirement are equipped with a degaussing system to reduce their perceptibility for magnetic influence mines. To be able to reduce the magnetic signature accurately, a proper distribution of coils over the ship is essential. Finding the best distribution of
8. Fundamentals of electronic systems design
CERN Document Server
Lienig, Jens
2017-01-01
This textbook covers the design of electronic systems from the ground up, from drawing and CAD essentials to recycling requirements. Chapter by chapter, it deals with the challenges any modern system designer faces: the design process and its fundamentals, such as technical drawings and CAD, electronic system levels, assembly and packaging issues and appliance protection classes, reliability analysis, thermal management and cooling, electromagnetic compatibility (EMC), all the way to recycling requirements and environmental-friendly design principles. Enables readers to face various challenges of designing electronic systems, including coverage from various engineering disciplines; Written to be accessible to readers of varying backgrounds; Uses illustrations extensively to reinforce fundamental concepts; Organized to follow essential design process, although chapters are self-contained and can be read in any order.
9. Design of object processing systems
NARCIS (Netherlands)
Grigoras, D.R.; Hoede, C.
Object processing systems are met rather often in every day life, in industry, tourism, commerce, etc. When designing such a system, many problems can be posed and considered, depending on the scope and purpose of design. We give here a general approach which involves graph theory, and which can
10. Large coil program support structure conceptual design
International Nuclear Information System (INIS)
Litherland, P.S.
1977-01-01
The purpose of the Large Coil Program (LCP) is to perform tests on both pool boiling and force cooled superconducting toroidal field coils. The tests will attempt to approximate conditions anticipated in an ignition tokamak. The test requirements resulted in a coil support design which accommodates up to six (6) test coils and is mounted to a structure capable of resisting coil interactions. The steps leading to the present LCP coil support structure design, details on selected structural components, and the basic assembly sequence are discussed
11. Systems design for remote healthcare
CERN Document Server
Bonfiglio, Silvio
2014-01-01
This book provides a multidisciplinary overview of the design and implementation of systems for remote patient monitoring and healthcare. Readers are guided step-by-step through the components of such a system and shown how they could be integrated in a coherent framework for deployment in practice. The authors explain planning from subsystem design to complete integration and deployment, given particular application constraints. Readers will benefit from descriptions of the clinical requirements underpinning the entire application scenario, physiological parameter sensing techniques, information processing approaches and overall, application dependent system integration. Each chapter ends with a discussion of practical design challenges and two case studies are included to provide practical examples and design methods for two remote healthcare systems with different needs. · Provides a multi-disciplinary overview of next-generation mobile healthcare system design; · Includes...
12. Computer programming and computer systems
CERN Document Server
Hassitt, Anthony
1966-01-01
Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten
13. An Expert System for Designing Fire Prescriptions
Science.gov (United States)
Elizabeth Reinhardt
1987-01-01
Managers use prescribed fire to accomplish a variety of resource objectives. The knowledge needed to design successful prescriptions is both quantitative and qualitative. Some of it is available through publications and computer programs, but much of the knowledge of expert practitioners has never been collected or published. An expert system being developed at the,...
14. Issues and Approaches in the Design of Distributed Ada Programs.
Science.gov (United States)
1989-10-11
Final report, 11 October 1989 (funding number PR E45146). The transport layer establishes connections between end systems and provides end-to-end flow control and error control, and was designed according to the ISO standards. "Out-of-band" data transfer bypasses the flow control mechanisms of the transport layer; message control types were included in the kernel.
15. Vacuum system design
International Nuclear Information System (INIS)
Mathewson, A.G.
1994-01-01
In this paper the basic terms used by the vacuum engineer are presented and some useful formulae are also given. The concept of bakeout is introduced and the physics behind it explained. We concentrate on the effects in electron and proton storage rings which are due to energetic particle bombardment of the vacuum system walls and the ensuing gas desorption which may detrimentally affect the running of the machine. In addition, the problems associated with proton storage rings where the vacuum chamber is at cryogenic temperature are described
16. Planning-Programming-Budgeting Systems.
Science.gov (United States)
Tudor, Dean
Planning Programming and Budgeting Systems (PPBS) have been considered as either synonymous with abstract, advanced, mathematical systems analysis or as an advanced accounting and control system. If PPBS is to perform a useful function, both viewpoints must be combined such that a number of standardized procedures and reports are required and…
17. Design-reliability assurance program application to ACP600
International Nuclear Information System (INIS)
Zhichao, Huang; Bo, Zhao
2012-01-01
ACP600 is a new nuclear power plant technology developed by CNNC in China, based on Generation III NPP design experience and general safety goals. The ACP600 Design Reliability Assurance Program (D-RAP) is implemented as an integral part of the ACP600 design process. A RAP is a formal management system which assures the collection of important characteristic information about plant performance throughout each phase of its life and directs the use of this information in analytical and management processes which are specifically designed to meet two specific objectives: confirm the plant goals and identify cost-effective improvements. In general, a typical reliability assurance program has four broad functional elements: 1) goals and performance criteria; 2) management system and implementing procedures; 3) analytical tools and investigative methods; and 4) information management. In this paper we use the D-RAP technical and risk-informed requirements, and establish RAM and PSA models to optimize the ACP600 design. Compared with the previous design process, the D-RAP is better suited to higher design targets and requirements, allowing more creativity through easier implementation of technical breakthroughs. By using D-RAP, the plant goals, system goals, performance criteria and safety criteria can be realized more easily, and the design can be optimized and made more rational.
18. Modular system design and evaluation
CERN Document Server
Levin, Mark Sh
2015-01-01
This book examines seven key combinatorial engineering frameworks (composite schemes consisting of algorithms and/or interactive procedures) for hierarchical modular (composite) systems. These frameworks are based on combinatorial optimization problems (e.g., knapsack problem, multiple choice problem, assignment problem, morphological clique problem), with the author’s version of the morphological design approach – Hierarchical Morphological Multicriteria Design (HMMD) – providing a conceptual lens with which to elucidate the examples discussed. This approach is based on ordinal estimates of design alternatives for system parts/components; however, the book also puts forward an original version of HMMD that is based on new interval multiset estimates for the design alternatives, with special attention paid to the aggregation of modular solutions (system versions). The second part of ‘Modular System Design and Evaluation’ provides ten information technology case studies that enrich understanding of th...
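The multiple choice problem underlying composite-system synthesis — select exactly one design alternative per component so that the summed ordinal estimates are maximal within a resource bound — can be sketched as follows. This is a brute-force illustration with hypothetical component data, not the book's HMMD procedure:

```python
from itertools import product

def best_composition(alternatives, budget):
    """Choose one design alternative per component, maximizing the
    total ordinal quality estimate subject to a total-cost budget.
    Exhaustive search; adequate only for small component sets."""
    best, best_quality = None, -1
    for combo in product(*alternatives):
        cost = sum(c for _, c, _ in combo)
        quality = sum(q for _, _, q in combo)
        if cost <= budget and quality > best_quality:
            best, best_quality = combo, quality
    return best, best_quality

# Hypothetical alternatives: (name, cost, ordinal quality estimate).
cpu = [("cpu-a", 5, 3), ("cpu-b", 8, 5)]
ram = [("ram-a", 3, 2), ("ram-b", 6, 4)]
combo, quality = best_composition([cpu, ram], budget=12)
```

The full HMMD approach additionally aggregates compatibility between chosen alternatives; the sketch captures only the selection step.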
19. Unattended Monitoring System Design Methodology
International Nuclear Information System (INIS)
Drayer, D.D.; DeLand, S.M.; Harmon, C.D.; Matter, J.C.; Martinez, R.L.; Smith, J.D.
1999-01-01
A methodology for designing Unattended Monitoring Systems starting at a systems level has been developed at Sandia National Laboratories. This proven methodology provides a template that describes the process for selecting and applying appropriate technologies to meet unattended system requirements, as well as providing a framework for development of both training courses and workshops associated with unattended monitoring. The design and implementation of unattended monitoring systems is generally intended to respond to some form of policy-based requirements resulting from international agreements or domestic regulations. Once the monitoring requirements are established, a review of the associated process and its related facilities enables identification of strategic monitoring locations and development of a conceptual system design. The detailed design effort results in the definition of detection components as well as the supporting communications network and data management scheme. The data analyses then enable a coherent display of the knowledge generated during the monitoring effort. The resultant knowledge is then compared to the original system objectives to ensure that the design adequately addresses the fundamental principles stated in the policy agreements. Implementation of this design methodology will ensure that comprehensive unattended monitoring system designs provide appropriate answers to those critical questions imposed by specific agreements or regulations. This paper describes the main features of the methodology and discusses how it can be applied in real-world situations
20. Generative Representations for Computer-Automated Design Systems
Science.gov (United States)
Hornby, Gregory S.
2004-01-01
With the increasing computational power of computers, software design systems are progressing from tools with which architects and designers express their ideas to tools capable of creating designs under human guidance. One of the main limitations of these computer-automated design programs is the representation with which they encode designs. If the representation cannot encode a certain design, the design program cannot produce it; similarly, a poor representation makes some types of designs extremely unlikely to be created. Here we define generative representations as those representations that can create and reuse organizational units within a design, and argue that reuse is necessary for design systems to scale to more complex and interesting designs. To support our argument we describe GENRE, an evolutionary design program that uses both a generative and a non-generative representation, and compare the results of evolving designs with both types of representations.
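The defining property of a generative representation — an organizational unit is defined once and reused across the design — can be illustrated with a toy rewriting grammar. The rule set below is a hypothetical example, not GENRE's actual encoding:

```python
def expand(symbol, rules, depth):
    """Expand a generative representation: each rule maps a symbol to
    a sequence of sub-symbols, so an organizational unit defined once
    can be reused at several places in the final design."""
    if depth == 0 or symbol not in rules:
        return [symbol]
    out = []
    for s in rules[symbol]:
        out.extend(expand(s, rules, depth - 1))
    return out

# Hypothetical design grammar: a "wing" unit defined once, reused twice.
rules = {"craft": ["wing", "body", "wing"], "wing": ["spar", "rib", "rib"]}
design = expand("craft", rules, depth=2)
```

Mutating the single "wing" rule changes both wings at once — the reuse property the paper argues is needed for scaling to complex designs.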
1. Design Theory in Information Systems
Directory of Open Access Journals (Sweden)
Shirley Gregor
2002-11-01
Full Text Available The aim of this paper is to explore an important category of information systems knowledge that is termed “design theory”. This knowledge is distinguished as the fifth of five types of theory: (i) theory for analysing and describing, (ii) theory for understanding, (iii) theory for predicting, (iv) theory for explaining and predicting, and (v) theory for design and action. Examples of design theory in information systems are provided, with associated research methods. The limited understanding and recognition of this type of theory in information systems indicates that further debate concerning its nature and role in our discipline is needed.
2. NASA System Engineering Design Process
Science.gov (United States)
Roman, Jose
2011-01-01
This slide presentation reviews NASA's use of systems engineering for the complete life cycle of a project. Systems engineering is a methodical, disciplined approach for the design, realization, technical management, operations, and retirement of a system. Each phase of a NASA project is terminated with a Key decision point (KDP), which is supported by major reviews.
3. SNAP-21 program, Phase II. Deep sea radioisotope-fueled thermoelectric generator power supply system. Final design description, 10-watt system
Energy Technology Data Exchange (ETDEWEB)
Wickenberg, R.F.; Harris, W.W.
1969-10-01
The SNAP-21 10-W system provides electrical power for use under the surface of the sea. It functions by converting the heat from a decaying radioisotope fuel into useful electrical energy. This heat energy is converted into electrical energy by a thermoelectric generator. Semiconductor-type thermoelectric materials, maintained in a temperature gradient, accomplish the conversion. The isotopic fuel supplies heat to the thermoelectric materials and sea water acts as the heat sink to maintain the temperature gradient. Other components are employed to increase efficiency and condition the electrical output to the desired form. The components performing these functions are enclosed in a pressure vessel which protects them from sea water pressure and exposure. No external inputs are required to maintain operation of the system. With this type of mechanically-static, unsupported operation, long life with no maintenance is achieved.
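The conversion chain described above reduces to a back-of-the-envelope estimate: decay heat falls off exponentially with the isotope's half-life, and the thermoelectric couples convert a few percent of it to electricity. The figures below are illustrative assumptions, not the SNAP-21 specification:

```python
import math

def rtg_power_w(heat_w0, efficiency, halflife_yr, t_yr):
    """Electrical output of a radioisotope thermoelectric generator
    after t years: initial decay heat, decayed exponentially, times
    the thermoelectric conversion efficiency."""
    decay = math.exp(-math.log(2) * t_yr / halflife_yr)
    return heat_w0 * efficiency * decay

# Illustrative assumptions: 250 W of Sr-90 decay heat (28.8 yr
# half-life) and 5% conversion efficiency.
p_start = rtg_power_w(250.0, 0.05, 28.8, 0.0)   # beginning of life
p_10yr = rtg_power_w(250.0, 0.05, 28.8, 10.0)   # after ten years at sea
```

The gradual decay is why such systems are sized with margin above the rated output at beginning of life.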
4. Tritium systems test assembly quality assurance program
International Nuclear Information System (INIS)
Kerstiens, F.L.; Wilhelm, R.C.
1986-07-01
A quality assurance program should establish the planned and systematic actions necessary to provide adequate confidence that fusion facilities and their subsystems will perform satisfactorily in service. The Tritium Systems Test Assembly (TSTA) Quality Assurance Program has been designed to assure that the designs, tests, data, and interpretive reports developed at TSTA are valid, accurate, and consistent with formally specified procedures and reviews. The quality consideration in all TSTA activities is directed toward the early detection of quality problems, coupled with timely and positive disposition and corrective action
5. System Design of the SWRL Financial System.
Science.gov (United States)
Ikeda, Masumi
To produce various management and accounting reports in order to maintain control of SWRL (Southwest Regional Laboratory) operational and financial activities, a computer-based SWRL financial system was developed. The system design is outlined, and various types of system inputs described. The kinds of management and accounting reports generated…
6. General Systems Theory and Instructional Systems Design.
Science.gov (United States)
Salisbury, David F.
1990-01-01
Describes basic concepts in the field of general systems theory (GST) and identifies commonalities that exist between GST and instructional systems design (ISD). Models and diagrams that depict system elements in ISD are presented, and two matrices that show how GST has been used in ISD literature are included. (11 references) (LRW)
7. System design projects for undergraduate design education
Science.gov (United States)
Batill, S. M.; Pinkelman, J.
1993-01-01
Design education has received considerable attention in the recent past. This paper is intended to address one aspect of undergraduate design education, namely the selection and development of the design project for a capstone design course. Specific goals for a capstone design course are presented and their influence on project selection is discussed. The evolution of a series of projects based upon the design of remotely piloted aircraft is presented, along with students' perspectives on the capstone experience.
8. Design and analysis of environmental monitoring programs
DEFF Research Database (Denmark)
Lophaven, Søren Nymand
2005-01-01
This thesis describes statistical methods for modelling space-time phenomena. The methods were applied to data from the Danish marine monitoring program in the Kattegat, measured in the five-year period 1993-1997. The proposed model approaches are characterised as relatively simple methods, which … into account. Thus, it serves as a compromise between existing methods. The space-time model approaches and geostatistical design methods used in this thesis are generally applicable, i.e. with minor modifications they could equally well be applied within areas such as soil and air pollution.
9. Effectiveness of predictive computer programs in the design of noise barriers : a before and after approach, part I, the data acquisition system.
Science.gov (United States)
1978-01-01
A digital data acquisition system has been designed to meet the need for a long duration noise analysis capability. By sampling the DC outputs from sound level meters, it has been possible to make twenty-four hour or longer recordings, in contrast to...
10. Embedded Systems Design: Optimization Challenges
DEFF Research Database (Denmark)
Pop, Paul
2005-01-01
Summary form only given. Embedded systems are everywhere: from alarm clocks to PDAs, from mobile phones to cars, almost all the devices we use are controlled by embedded systems. Over 99% of the microprocessors produced today are used in embedded systems, and recently the number of embedded systems in use has become larger than the number of humans on the planet. The complexity of embedded systems is growing at a very high pace and the constraints in terms of functionality, performance, low energy consumption, reliability, cost and time-to-market are getting tighter. Therefore, the task of designing such systems is becoming increasingly important and difficult at the same time. New automated design optimization techniques are needed, which are able to: successfully manage the complexity of embedded systems, meet the constraints imposed by the application domain, shorten the time...
11. Repository simulation system (REPSIMS) for design analyses
International Nuclear Information System (INIS)
Griesmeyer, J.M.; Dennis, A.W.
1989-01-01
The Repository Simulation System (REPSIMS) combines graphic programming and interactive simulation to facilitate early identification of acceptable design concepts for a nuclear waste repository. REPSIMS is an object-oriented, menu-driven, versatile computer modeling system that allows the facility designer to create visual models of proposed facilities, graphically define operations, and using simulation analyses, determine the efficiencies of proposed designs and their operations. Hierarchical representations of both physical facilities and operations allow REPSIMS to be used early in the evaluation of conceptual designs as well as for the analysis of mature designs. High-level models of conceptual designs can be used to identify critical facility layout and operation issues. These preliminary models can then be refined to investigate those issues and to incorporate additional information as it becomes available. REPSIMS thus supports the typical top-down design process in which general specifications for major systems and operations are successively refined as the design progresses. REPSIMS has been used to determine the impact of using robotic, manual contact, or master/slave operations on cask turnaround times, throughput, and equipment utilization, and to investigate the impact of the ratio between truck and rail shipments to the repository. An analysis of alternative designs for the waste-handling building at Yucca Mountain has begun
12. CR mammography: Design and implementation of a quality control program
Energy Technology Data Exchange (ETDEWEB)
Moreno-Ramirez, A.; Brandan, M. E.; Villasenor-Navarro, Y.; Galvan, H. A.; Ruiz-Trejo, C. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, DF 04510 (Mexico); Departamento de Radiodiagnostico, Instituto Nacional de Cancerologia, DF 14080 (Mexico); Instituto de Fisica, Universidad Nacional Autonoma de Mexico, DF 04510 (Mexico)
2012-10-23
Despite the recent acquisition of significant quantities of computed radiography (CR) equipment for mammography, Mexican regulations do not specify the performance requirements for digital systems such as those of the CR type. The design of a quality control program (QCP) specific to CR mammography systems was thus considered relevant. International protocols were taken as reference to define tests, procedures and acceptance criteria. The designed QCP was applied in three CR mammography facilities. Important deficiencies in spatial resolution, noise, image receptor homogeneity, artifacts and breast thickness compensation were detected.
13. Computer-aided control system design
International Nuclear Information System (INIS)
Lebenhaft, J.R.
1986-01-01
Control systems are typically implemented using conventional PID controllers, which are then tuned manually during plant commissioning to compensate for interactions between feedback loops. As plants increase in size and complexity, such controllers can fail to provide adequate process regulations. Multivariable methods can be utilized to overcome these limitations. At the Chalk River Nuclear Laboratories, modern control systems are designed and analyzed with the aid of MVPACK, a system of computer programs that appears to the user like a high-level calculator. The software package solves complicated control problems, and provides useful insight into the dynamic response and stability of multivariable systems
14. Refining System Requirements to Program Specifications
DEFF Research Database (Denmark)
Olderog, Ernst-Ruediger; Ravn, Anders P.; Skakkebæk, Jens Ulrik
1996-01-01
A coherent and mathematically well-founded approach to the design of real-time and hybrid systems is presented. It covers requirements analysis and specification, design of controlling automata satisfying the requirements, and derivation of occam-like communicating programs from these automata. The generalized railroad crossing due to Heitmeyer and Lynch illustrates the approach. Requirements are analyzed within a conventional dynamic systems model of a plant, where states are functions of the reals, representing time. The requirements are specified in an assumption-commitment style using Duration Calculus … to component descriptions in a systems design language that uses timed trace assertions over state transition events to constrain control flow. Components can under certain conditions be transformed to occam-like communicating programs.
15. Licensing management system prototype system design
International Nuclear Information System (INIS)
Immerman, W.H.; Arcuni, A.A.; Elliott, J.M.; Chapman, L.D.
1983-11-01
This report is a design document for a prototype implementation of a licensing management system (LMS) as defined in SAND83-7080. It describes the concept of operations for full implementation of an LMS in accordance with the previously defined functional requirements. It defines a subset of a full LMS suitable for meeting prototype implementation goals, and proposes a system design for this subset. The report describes overall system design considerations consistent with, but more explicit than, the general characteristics required by the LMS functional definition. A high-level design is presented for just those functions selected for prototype implementation. The report also provides a data element dictionary describing the structured logical data elements required to implement the selected functions
16. Design & development of a 20-MW flywheel-based frequency regulation power plant: a study for the DOE Energy Storage Systems program.
Energy Technology Data Exchange (ETDEWEB)
Rounds, Robert (Beacon Power, Tyngsboro, MA); Peek, Georgianne Huff
2009-01-01
This report describes the successful efforts of Beacon Power to design and develop a 20-MW frequency regulation power plant based solely on flywheels. Beacon's Smart Matrix (Flywheel) Systems regulation power plant, unlike coal or natural gas generators, will not burn fossil fuel or directly produce particulates or other air emissions and will have the ability to ramp up or down in a matter of seconds. The report describes how data from the scaled Beacon system, deployed in California and New York, proved that the flywheel-based systems provided faster responding regulation services in terms of cost-performance and environmental impact. Included in the report is a description of Beacon's design package for a generic, multi-MW flywheel-based regulation power plant that allows accurate bids from a design/build contractor and Beacon's recommendations for site requirements that would ensure the fastest possible construction. The paper concludes with a statement about Beacon's plans for a lower cost, modular-style substation based on the 20-MW design.
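The energy a flywheel rotor stores follows directly from E = ½Iω². A quick estimate, using illustrative rotor figures rather than Beacon's actual specification:

```python
import math

def flywheel_energy_mj(mass_kg, radius_m, rpm):
    """Kinetic energy of a uniform solid-disc flywheel in megajoules:
    E = 1/2 * I * omega^2 with I = 1/2 * m * r^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2.0 * math.pi / 60.0  # rpm -> rad/s
    return 0.5 * inertia * omega ** 2 / 1e6

# Illustrative rotor: 1000 kg, 0.5 m radius, spinning at 16000 rpm.
energy = flywheel_energy_mj(1000.0, 0.5, 16000.0)
```

This comes to roughly 175 MJ; a regulation plant aggregates many such rotors, and only the energy between the maximum and minimum operating speeds is usable, which is why energy scales with the square of speed and favors high-rpm designs.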
17. The Malemute development program. [rocket upper stage engine design
Science.gov (United States)
Bolster, W. J.; Hoekstra, P. W.
1976-01-01
The Malemute vehicle systems are two-stage systems based on utilizing a new high performance upper stage motor with two existing military boosters. The Malemute development program is described relative to program structure, preliminary design, vehicle subsystems, and the Malemute motor. Two vehicle systems, the Nike-Malemute and Terrier-Malemute, were developed which are capable of transporting comparatively large diameter (16 in.) 200-lb payloads to altitudes of 500 and 700 km, respectively. These vehicles provide relatively low-cost transportation with two-stage reliability and launch simplicity. Flight tests of both vehicle systems revealed their performance capabilities, with the Terrier-Malemute system exhibiting a unique Malemute motor spin-sensitivity problem. It is suggested that the vehicles can be successfully flown by lowering the burnout spin rate.
18. Evolution of safeguards systems design
International Nuclear Information System (INIS)
Shipley, J.P.; Christensen, E.L.; Dietz, R.J.
1979-01-01
Safeguards systems play a vital detection and deterrence role in current nonproliferation policy. These safeguards systems have developed over the past three decades through the evolution of three essential components: the safeguards/process interface, safeguards performance criteria, and the technology necessary to support effective safeguards. This paper discusses the background and history of this evolutionary process, its major developments and status, and the future direction of safeguards system design
19. The software design of area γ radiation monitoring system
International Nuclear Information System (INIS)
Song Chenxin; Deng Changming; Cheng Chang; Ren Yi; Meng Dan; Liu Yun
2008-01-01
This paper mainly introduces the system structure, software architecture, and design ideas of the area γ radiation monitoring system, and describes in detail some programming techniques for computer communication with the local display unit. (authors)
1. Off-line programming (OLP) system comparison
International Nuclear Information System (INIS)
Holliday, M.A.
1993-01-01
Off-line programming (OLP) systems are being used to conceptualize, design, simulate, and now control automated robotic workcells. Currently available systems by Deneb, SILMA, and Cimetrix are being used at the Lawrence Livermore National Laboratory (LLNL) to simulate and control automated robotic systems for radioactive material processing and hazardous waste sorting. The differences in system architectures, workcell and robot calibration procedures, operator interface, and graphical output capability of each will be discussed. The relative strengths and weaknesses of these attributes will be discussed as they relate to varying applications in robotic workcell development and control
2. Tritium glovebox stripper system seismic design evaluation
Energy Technology Data Exchange (ETDEWEB)
Grinnell, J. J. [Savannah River Site (SRS), Aiken, SC (United States); Klein, J. E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-09-01
The use of glovebox confinement at US Department of Energy (DOE) tritium facilities has been discussed in numerous publications. Glovebox confinement protects the workers from radioactive material (especially tritium oxide), provides an inert atmosphere for prevention of flammable gas mixtures and deflagrations, and allows recovery of tritium released from the process into the glovebox when a glovebox stripper system (GBSS) is part of the design. Tritium recovery from the glovebox atmosphere reduces emissions from the facility and the radiological dose to the public. Location of US DOE defense programs facilities away from public boundaries also aids in reducing radiological doses to the public. This is a study based upon design concepts to identify issues and considerations for design of a Seismic GBSS. Safety requirements and analysis should be considered preliminary. Safety requirements for design of GBSS should be developed and finalized as a part of the final design process.
3. Instrumentation and control system design
International Nuclear Information System (INIS)
Saito, Kenji; Sawahata, Hiroaki; Homma, Fumitaka; Kondo, Makoto; Mizushima, Toshihiko
2004-01-01
The instrumentation and control system of the high temperature engineering test reactor consists of the instrumentation, control equipment and safety protection systems. There are not many differences in instrumentation and control equipment design between the HTTR and light water reactors, except for some features. Various kinds of R and D on reactor instrumentation were performed taking into account the HTTR operational conditions, and a plant dynamic analysis was carried out considering the operational conditions of the HTTR in order to design the control system. These systems are required to have high reliability with respect to safety. In the rise-to-power test it was confirmed that the instrumentation has high reliability and the control system has high stability and reasonably damped characteristics for various disturbances
4. SMART core protection system design
International Nuclear Information System (INIS)
Lee, J. K.; Park, H. Y.; Koo, I. S.; Park, H. S.; Kim, J. S.; Son, C. H.
2003-01-01
SMART COre Protection System (SCOPS) is designed with a real-time Digital Signal Processor (DSP) board and a Network Interface Card (NIC) board. SCOPS has a Control Rod POSition (CRPOS) software module, whereas the Core Protection Calculator System (CPCS) in a commercial nuclear plant consists of Core Protection Calculators (CPCs) and Control Element Assembly (CEA) Calculators (CEACs). Independent cabinets are not necessary for SCOPS because SCOPS is physically very small, so SCOPS is designed to share the cabinets with the Plant Protection System (PPS) of SMART. The system is also very easy to maintain because the CRPOS module is used instead of a computer with an operating system
5. Advanced Transport Operating Systems Program
Science.gov (United States)
White, John J.
1990-01-01
NASA-Langley's Advanced Transport Operating Systems Program employs a heavily instrumented, B 737-100 as its Transport Systems Research Vehicle (TRSV). The TRSV has been used during the demonstration trials of the Time Reference Scanning Beam Microwave Landing System (TRSB MLS), the '4D flight-management' concept, ATC data links, and airborne windshear sensors. The credibility obtainable from successful flight test experiments is often a critical factor in the granting of substantial commitments for commercial implementation by the FAA and industry. In the case of the TRSB MLS, flight test demonstrations were decisive to its selection as the standard landing system by the ICAO.
6. Pilot chargeback system program plan
International Nuclear Information System (INIS)
Smith, P.
1997-03-01
This planning document outlines the steps necessary to develop, test, evaluate, and potentially implement a pilot chargeback system at the Idaho National Engineering and Environmental Laboratory for the treatment, storage, and disposal of current waste. This pilot program will demonstrate one system that can be used to charge onsite generators for the treatment and disposal of low-level radioactive waste. In FY 1997, mock billings will begin by July 15, 1997. Assuming approvals are received to do so, FY 1998 activities will include modifying the associated automated systems, testing and evaluating system performance, and estimating the amount generators will spend for waste storage, treatment, and disposal in FY 1999. If the program is fully implemented in FY 1999, generators will pay actual, automated bills for waste management services from funds transferred to their budgets from Environmental Management
7. Basic design of parallel computational program for probabilistic structural analysis
International Nuclear Information System (INIS)
Kaji, Yoshiyuki; Arai, Taketoshi; Gu, Wenwei; Nakamura, Hitoshi
1999-06-01
In our laboratory, as part of 'development of damage evaluation method of structural brittle materials by microscopic fracture mechanics and probabilistic theory' (nuclear computational science cross-over research), we examine computational methods for a massively parallel computation system coupled with a material strength theory based on microscopic fracture mechanics for latent cracks and a continuum structural model, in order to develop new structural reliability evaluation methods for ceramic structures. This technical report reviews probabilistic structural mechanics theory, basic terms and formulae, and parallel computation programming methods related to the principal items in the basic design of the computational mechanics program. (author)
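The probabilistic element of such an analysis is often a Monte Carlo estimate of fracture probability under a Weibull strength model. A minimal sketch with hypothetical material parameters, not the report's actual method or data:

```python
import math
import random

def failure_probability(stress_mpa, weibull_m, sigma0_mpa, n=100_000, seed=1):
    """Monte Carlo estimate of brittle-fracture probability: draw
    component strengths from a Weibull distribution (modulus m, scale
    sigma0) by inverse-CDF sampling and count draws weaker than the
    applied stress."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        strength = sigma0_mpa * (-math.log(1.0 - rng.random())) ** (1.0 / weibull_m)
        if strength < stress_mpa:
            fails += 1
    return fails / n

# Hypothetical ceramic: Weibull modulus 10, scale 300 MPa, 200 MPa applied.
p_fail = failure_probability(200.0, 10, 300.0)
```

Each sample is independent, so the loop parallelizes trivially across processors — the property that makes this class of computation a natural fit for the parallel systems the report targets.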
8. Tools for designing industrial vision systems
Science.gov (United States)
Batchelor, Bruce G.
1991-09-01
The cost of commissioning and installing a machine vision system is almost always dominated by that of designing it. Indeed, the cost of design and the shortage of skilled vision systems engineers are together likely to be two of the most important factors limiting the future adoption of this technology by manufacturing industry. The article describes several software tools that have been developed for making the design process easier, cheaper and faster. These include: (a) An extension of Prolog, called Prolog+. This is intended for prototyping intelligent image processing, as well as for programming future target systems. (b) A knowledge-based program intended to assist an engineer to select a suitable lighting and image acquisition sub-system. This is called the Lighting Advisor. (c) A knowledge-based program which advises an engineer on how to select a suitable lens. This is called the Lens Advisor. (d) A knowledge-based program which assists an engineer to choose a suitable camera. This is called the Camera Advisor. Ideally, items (b) to (d) should be integrated with Prolog+, so that a programmer has access to all of them in one unified working environment. Prolog+ is able to accept simple natural language descriptions (i.e., in a simple sub-set of English) of the objects/scenes that are to be inspected and is able to generate a recognition program automatically. A range of inspection tasks is described, in which Automated Visual Inspection has, to date, made no real impact. Amongst these is the inspection of products that are made in very small quantities. An electro-mechanical arrangement, called a Flexible Inspection Cell, is described. This is intended to provide a "general purpose" inspection facility for small-batch artifacts. Such a cell is controlled using Prolog+.
9. Networking systems design and development
CERN Document Server
Chao, Lee
2009-01-01
Effectively integrating theory and hands-on practice, Networking Systems Design and Development provides students and IT professionals with the knowledge and skills needed to design, implement, and manage fully functioning network systems using readily available Linux networking tools. Recognizing that most students are beginners in the field of networking, the text provides step-by-step instruction for setting up a virtual lab environment at home. Grounded in real-world applications, this book provides the ideal blend of conceptual instruction and lab work to give students and IT professional
10. Design Concept Evaluation Using System Throughput Model
International Nuclear Information System (INIS)
Sequeira, G.; Nutt, W. M.
2004-01-01
The U.S. Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently developing the technical bases to support the submittal of a license application for construction of a geologic repository at Yucca Mountain, Nevada to the U.S. Nuclear Regulatory Commission. The Office of Repository Development (ORD) is responsible for developing the design of the proposed repository surface facilities for the handling of spent nuclear fuel and high level nuclear waste. Preliminary design activities are underway to sufficiently develop the repository surface facilities design for inclusion in the license application. The design continues to evolve to meet mission needs and to satisfy both regulatory and program requirements. A system engineering approach is being used in the design process since the proposed repository facilities are dynamically linked by a series of sub-systems and complex operations. In addition, the proposed repository facility is a major system element of the overall waste management process being developed by the OCRWM. Such an approach includes iterative probabilistic dynamic simulation as an integral part of the design evolution process. A dynamic simulation tool helps to determine if: (1) the mission and design requirements are complete, robust, and well integrated; (2) the design solutions under development meet the design requirements and mission goals; (3) opportunities exist where the system can be improved and/or optimized; and (4) proposed changes to the mission, and design requirements have a positive or negative impact on overall system performance and if design changes may be necessary to satisfy these changes. This paper will discuss the type of simulation employed to model the waste handling operations. It will then discuss the process being used to develop the Yucca Mountain surface facilities model. The latest simulation model and the results of the simulation and how the data were used in the design
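A probabilistic dynamic simulation of waste-handling throughput reduces, at its simplest, to a single-server queue: casks arrive stochastically and wait for a handling bay. The sketch below uses illustrative parameters, not figures from the Yucca Mountain model:

```python
import random

def mean_turnaround_h(n_casks, mean_arrival_h, service_h, seed=7):
    """Single-bay discrete-event sketch: casks arrive with exponential
    inter-arrival times and queue for a fixed-duration handling
    operation; returns the mean turnaround (wait + service) in hours."""
    rng = random.Random(seed)
    t = bay_free = total = 0.0
    for _ in range(n_casks):
        t += rng.expovariate(1.0 / mean_arrival_h)  # arrival time
        start = max(t, bay_free)                    # wait if bay is busy
        bay_free = start + service_h                # bay occupied until done
        total += bay_free - t                       # turnaround of this cask
    return total / n_casks

# Illustrative parameters: one cask every 10 h on average, 6 h handling.
avg = mean_turnaround_h(n_casks=1000, mean_arrival_h=10.0, service_h=6.0)
```

Even this toy model shows the design questions the paper describes: changing the service time (e.g., robotic versus master/slave handling) or the arrival mix (truck versus rail) shifts turnaround and equipment utilization in ways that are hard to see from static requirements alone.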
11. System of automated design of conveyor transportation
Energy Technology Data Exchange (ETDEWEB)
Zamula, V.G.
1981-01-01
The SAPR KT automated design system, developed by Giprokoks, permits multi-variant evaluation of belt conveyor transportation and selection of the optimum solution. Use of SAPR KT saves Giprokoks 266,000 rubles annually. The system increases the labor productivity of design personnel by 20% and reduces investment cost by about 27%. Designing a variant of belt conveyor operation using the computer program takes 10 to 15 minutes. SAPR KT can be used to design conveyors with a belt 0.65 to 1.6 m wide, driven by one electric motor. Such conveyors are used in coking plants. A scheme of the design system is given. The most important blocks are characterized: TRASS (elements of conveyor scheme geometry), BV (width and speed of belt), NB (power of the motor), PRIVB (dimensions of driving drum), LENTA (belt design), DVIG (parameters of electric motor), SNEMA (dimensions of conveyor system), OBOR (idlers) and METAL (elements of steel construction). (In Russian)
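A motor-power block like NB typically evaluates the standard textbook estimate: power to lift the material flow plus frictional power for the load and moving belt mass. The sketch below is a hedged illustration with assumed parameters, not Giprokoks' actual formulae:

```python
def belt_drive_power_kw(capacity_tph, lift_m, length_m, speed_ms,
                        friction=0.025, moving_mass_kgm=50.0):
    """Textbook belt-conveyor drive power estimate: lifting power for
    the material flow plus frictional power for the load and the
    moving belt/idler mass along the conveyor length."""
    g = 9.81
    flow_kgs = capacity_tph * 1000.0 / 3600.0  # mass flow, kg/s
    lift_w = flow_kgs * g * lift_m             # power to raise the material
    load_kgm = flow_kgs / speed_ms             # material mass per metre of belt
    friction_w = friction * g * length_m * (load_kgm + moving_mass_kgm) * speed_ms
    return (lift_w + friction_w) / 1000.0

# Illustrative case: 500 t/h over 100 m with a 10 m lift at 2 m/s.
power = belt_drive_power_kw(500.0, 10.0, 100.0, 2.0)
```

With the belt width/speed fixed by the BV block, such an estimate lets the system select a motor (DVIG) with an appropriate margin in seconds, which is how a full variant evaluation fits in 10 to 15 minutes.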
12. Business System Planning Project, Preliminary System Design
International Nuclear Information System (INIS)
EVOSEVICH, S.
2000-01-01
CH2M HILL Hanford Group, Inc. (CHG) is currently performing many core business functions including, but not limited to, work control, planning, scheduling, cost estimating, procurement, training, and human resources. Other core business functions are managed by or dependent on Project Hanford Management Contractors including, but not limited to, payroll, benefits and pension administration, inventory control, accounts payable, and records management. In addition, CHG has business relationships with its parent company CH2M HILL, the U.S. Department of Energy, the Office of River Protection and other River Protection Project contractors, government agencies, and vendors. The Business Systems Planning (BSP) Project, under the sponsorship of the CH2M HILL Hanford Group, Inc. Chief Information Officer (CIO), has recommended information system solutions that will support CHG business areas. The Preliminary System Design was developed using the recommendations from the Alternatives Analysis, RPP-6499, Rev 0, and will become the design base for any follow-on implementation projects. The Preliminary System Design presents a high-level system design, providing an overview of the Commercial-Off-The-Shelf (COTS) modules and identifying internal and external relationships. This document does not define data structures, user interface components (screens, reports, menus, etc.), business rules or processes. These in-depth activities will be accomplished at implementation planning time
13. Design of Racing Electric Control System Based on AVR SCM
Directory of Open Access Journals (Sweden)
Shuang WAN
2014-10-01
Full Text Available A racing car's instrument system, signal system and monitoring system were designed based on the rules of the competition (FSAE, Formula SAE). The main components of the instrument system were selected by comparing the advantages and disadvantages of various instrument systems. The circuit diagram and PCB diagram of the instrument system were drawn with Altium Designer. Then, the instrument system, with a Single Chip Microcomputer (SCM) as its main body, was built according to the circuit diagram. Programs were written according to the functions of the instrument system, and the instrument system was debugged. For the signal system and monitoring system, the circuit diagrams were drawn according to the racing design requirements and rules. Currently, the instrument system has been successfully debugged, and the design of the circuit diagrams of the signal system and monitoring system has been completed.
14. Requirements analysis and system design
CERN Document Server
Maciaszek, Leszek A
2007-01-01
An examination of the methods and techniques used in the analysis and design phases of Information System development. Emphasis is placed upon the application of object technology in enterprise information systems (EIS) with UML being used throughout. Through its excellent balance of practical explanation and theoretical insight the book manages to avoid unnecessary, complicating details without sacrificing rigor. Examples of real-world scenarios are used throughout, giving the reader an understanding of what really goes on within the field of Software Engineering.
15. Programming guidelines for computer systems of NPPs
International Nuclear Information System (INIS)
Suresh babu, R.M.; Mahapatra, U.
1999-09-01
Software quality is assured by systematic development and adherence to established standards. All national and international software quality standards have made it mandatory for the software development organisation to produce programming guidelines as part of software documentation. This document contains a set of programming guidelines for detailed design and coding phases of software development cycle. These guidelines help to improve software quality by increasing visibility, verifiability, testability and maintainability. This can be used organisation-wide for various computer systems being developed for our NPPs. This also serves as a guide for reviewers. (author)
16. Design of interpretable fuzzy systems
CERN Document Server
Cpałka, Krzysztof
2017-01-01
This book shows that the term “interpretability” goes far beyond the concept of readability of a fuzzy set and fuzzy rules. It focuses on novel and precise operators of aggregation, inference, and defuzzification leading to flexible Mamdani-type and logical-type systems that can achieve the required accuracy using a less complex rule base. The individual chapters describe various aspects of interpretability, including appropriate selection of the structure of a fuzzy system, focusing on improving the interpretability of fuzzy systems designed using both gradient-learning and evolutionary algorithms. It also demonstrates how to eliminate various system components, such as inputs, rules and fuzzy sets, whose reduction does not adversely affect system accuracy. It illustrates the performance of the developed algorithms and methods with commonly used benchmarks. The book provides valuable tools for possible applications in many fields including expert systems, automatic control and robotics.
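A minimal Mamdani-type inference step of the kind whose operators the book generalizes can be sketched as follows; the two-rule base, membership shapes, and variable ranges are invented for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mamdani_fan_speed(temp):
    """Two-rule Mamdani controller (illustrative, not from the book):
    IF temp is cold THEN fan is slow; IF temp is hot THEN fan is fast.
    Inference: min for implication, max for aggregation, centroid defuzzification."""
    w_cold = tri(temp, 0, 10, 25)    # firing strength of rule 1
    w_hot = tri(temp, 15, 30, 40)    # firing strength of rule 2
    num = den = 0.0
    for speed in range(0, 101):      # discretised output universe 0..100 %
        mu = max(min(w_cold, tri(speed, 0, 20, 50)),
                 min(w_hot, tri(speed, 50, 80, 100)))
        num += speed * mu
        den += mu
    return num / den if den else 0.0

assert mamdani_fan_speed(5) < 50 < mamdani_fan_speed(35)
```

Interpretability questions of the sort the book studies show up even here: each rule, each fuzzy set, and the aggregation operators are individually readable, and pruning a rule has a visible, local effect on the output surface.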
17. Consistent Design of Dependable Control Systems
DEFF Research Database (Denmark)
Blanke, M.
1996-01-01
Design of fault handling in control systems is discussed, and a method for consistent design is presented.
18. NASA universities advanced space design program, focus on nuclear engineering
International Nuclear Information System (INIS)
Lyon, W.F. III; George, J.A.; Alred, J.W.; Peddicord, K.L.
1987-01-01
In January 1985, the National Aeronautics and Space Administration (NASA), in affiliation with the Universities Space Research Association (USRA), inaugurated the NASA Universities Advanced Space Design Program. The purpose of the program was to encourage participating universities to utilize design projects for the senior and graduate level design courses that would focus on topics relevant to the nation's space program. The activities and projects being carried out under the NASA Universities Advanced Space Design Program are excellent experiences for the participants. This program is a well-conceived, well-planned effort to achieve the maximum benefit out of not only the university design experience but also of the subsequent summer programs. The students in the university design classes have the opportunity to investigate dramatic and new concepts, which at the same time have a place in a program of national importance. This program could serve as a very useful model for the development of university interaction with other federal agencies
19. Integrated Aeropropulsion Control System Design
Science.gov (United States)
Lin, C. -F.; Hurley, Francis X.; Huang, Jie; Hadaegh, F. Y.
1996-01-01
Presented at the International Conference on Control and Information, Hong Kong, June 1995 (http://jpltrs.jpl.nasa.gov/1995/95-0658.pdf). An integrated intelligent control approach is proposed to design a high performance control system for aeropropulsion systems based on advanced sensor processing, nonlinear control and neural fuzzy control integration. The approach features the following innovations: the complexity and uncertainty issues are addressed via the distributed parallel processing, learning, and online reoptimization properties of neural networks; the nonlinear dynamics and the severe coupling can be naturally incorporated into the design framework; and the knowledge base and decision making logic furnished by fuzzy systems lead to a human intelligence enhanced control scheme. In addition, fault tolerance, health monitoring and reconfigurable control strategies will be accommodated by this approach to ensure stability, graceful degradation and reoptimization in the case of failures, malfunctions and damage.
20. Fuel Flexible Turbine System (FFTS) Program
Energy Technology Data Exchange (ETDEWEB)
None, None
2012-12-31
In this fuel flexible turbine system (FFTS) program, the Parker gasification system was further optimized, the fuel composition of the biomass gasification process was characterized, and the feasibility of running Capstone MicroTurbine(TM) systems on gasification syngas fuels was evaluated. With high hydrogen content, the gaseous fuel from gasification of various feedstocks such as switchgrass and corn stover has high reactivity and high flashback propensity when run in the current lean premixed injectors. The research concluded that the existing C65 microturbine combustion system, which is designed for natural gas, is not able to burn the high hydrogen content syngas due to insufficient resistance to flashback (undesired flame propagation upstream within the fuel injector). A comprehensive literature review was conducted on high-hydrogen fuel combustion and its main issues. For Capstone's lean premixed injector, the main mechanisms of flashback were identified to be boundary layer flashback and bulk flow flashback. Since the existing microturbine combustion system is not able to operate on high-hydrogen syngas fuels, new hardware needed to be developed. The new hardware developed and tested included (1) a series of injectors with a reduced propensity for boundary layer flashback and (2) two new combustion liner designs (Combustion Liner Design A and B) that lead to the desired primary zone air flow split to meet the overall bulk velocity requirement to mitigate the risk of core flashback inside the injectors. The new injector designs were evaluated in both test apparatus and C65/C200 engines. Some of the new injector designs did not provide satisfactory performance in burning target syngas fuels, particularly in improving resistance to flashback. The combustion system configuration of the FFTS-4 injector and Combustion Liner Design A was found promising to enable the C65 microturbine system to run on high hydrogen biomass syngas. The FFTS-4 injector
1. NASA Space Engineering Research Center for VLSI systems design
Science.gov (United States)
1991-01-01
This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.
2. Nuclear Engine System Simulation (NESS). Volume 1: Program user's guide
Science.gov (United States)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.
1993-01-01
A Nuclear Thermal Propulsion (NTP) engine system design analysis tool is required to support current and future Space Exploration Initiative (SEI) propulsion and vehicle design studies. Currently available NTP engine design models are those developed during the NERVA program in the 1960's and early 1970's and are highly unique to that design or are modifications of current liquid propulsion system design models. To date, NTP engine-based liquid design models lack integrated design of key NTP engine design features in the areas of reactor, shielding, multi-propellant capability, and multi-redundant pump feed fuel systems. Additionally, since the SEI effort is in the initial development stage, a robust, verified NTP analysis design tool could be of great use to the community. This effort developed an NTP engine system design analysis program (tool), known as the Nuclear Engine System Simulation (NESS) program, to support ongoing and future engine system and stage design study efforts. In this effort, Science Applications International Corporation's (SAIC) NTP version of the Expanded Liquid Engine Simulation (ELES) program was modified extensively to include Westinghouse Electric Corporation's near-term solid-core reactor design model. The ELES program has extensive capability to conduct preliminary system design analysis of liquid rocket systems and vehicles. The program is modular in nature and is versatile in terms of modeling state-of-the-art component and system options as discussed. The Westinghouse reactor design model, which was integrated in the NESS program, is based on the near-term solid-core ENABLER NTP reactor design concept. This program is now capable of accurately modeling (characterizing) a complete near-term solid-core NTP engine system in great detail, for a number of design options, in an efficient manner. The following discussion summarizes the overall analysis methodology, key assumptions, and capabilities associated with the NESS presents an
3. Design of longwall mining system
Energy Technology Data Exchange (ETDEWEB)
Curth, E. A.
1979-01-01
A premining investigation includes core drill holes, seismic studies, satellite imagery and, where the underground is accessible, bearing strength tests of roof and floor, and residual stress determination. Longwall retreat is the preferred method in the USA. Exceptions are few. The objective of panel design is to maintain ground stability and to provide optimum resource recovery with the most cost-effective panel configuration. Environmental considerations, such as surface utilization, hydrology, oil and gas wells, affect panel location and layout. Roof support selection criteria stem from premining data. Prototype roof supports must satisfy an intensive testing program. Mining equipment, such as cutter loaders, face conveyors and stage loaders, is designed to match the production goal and anticipated face conditions. Coal clearance, power supply, men and supply transportation, communications, and equipment maintenance programs must be commensurate with the extraction potential. Face lighting must comply with mandatory standards. Operational monitoring and analysis include operational data extracted and summarized from shift and monthly records, and from ground control and subsidence observations.
4. Decontamination systems information and research program -- Literature review in support of development of standard test protocols and barrier design models for in situ formed barriers project
International Nuclear Information System (INIS)
1994-12-01
The US Department of Energy is responsible for approximately 3,000 sites in which contaminants such as carbon tetrachloride, trichloroethylene, perchloroethylene, non-volatile and soluble organics, and insoluble organics (PCBs and pesticides) are encountered. In specific areas of these sites radioactive contaminants are stored in underground storage tanks which were originally designed and constructed with a 30-year projected life. Many of these tanks are now 10 years beyond the design life, and failures have occurred allowing the basic liquids (pH of 8 to 9) to leak into the unconsolidated soils below. Nearly one half of the storage tanks located at the Hanford Washington Reservation are suspected of leaking and contaminating the soils beneath them. The Hanford site is located in a semi-arid climate region with rainfall of less than 6 inches annually, and studies have indicated that very little of this water finds its way to the groundwater to move down gradient toward the Columbia River. This provides the government with time to develop a barrier system to prevent further contamination of the groundwater, and to develop and test remediation systems to stabilize or remove the contaminant materials. In parallel to remediation efforts, confinement and containment technologies are needed to retard or prevent the advancement of contamination plumes through the environment until the implementation of remediation technology efforts is completed. This project examines the various confinement and containment technologies and protocols for testing the materials in relation to their function in-situ
5. Decontamination systems information and research program -- Literature review in support of development of standard test protocols and barrier design models for in situ formed barriers project
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-12-01
The US Department of Energy is responsible for approximately 3,000 sites in which contaminants such as carbon tetrachloride, trichloroethylene, perchloroethylene, non-volatile and soluble organics, and insoluble organics (PCBs and pesticides) are encountered. In specific areas of these sites radioactive contaminants are stored in underground storage tanks which were originally designed and constructed with a 30-year projected life. Many of these tanks are now 10 years beyond the design life, and failures have occurred allowing the basic liquids (pH of 8 to 9) to leak into the unconsolidated soils below. Nearly one half of the storage tanks located at the Hanford Washington Reservation are suspected of leaking and contaminating the soils beneath them. The Hanford site is located in a semi-arid climate region with rainfall of less than 6 inches annually, and studies have indicated that very little of this water finds its way to the groundwater to move down gradient toward the Columbia River. This provides the government with time to develop a barrier system to prevent further contamination of the groundwater, and to develop and test remediation systems to stabilize or remove the contaminant materials. In parallel to remediation efforts, confinement and containment technologies are needed to retard or prevent the advancement of contamination plumes through the environment until the implementation of remediation technology efforts is completed. This project examines the various confinement and containment technologies and protocols for testing the materials in relation to their function in-situ.
6. ARGOS laser system mechanical design
Science.gov (United States)
Deysenroth, M.; Honsberg, M.; Gemperlein, H.; Ziegleder, J.; Raab, W.; Rabien, S.; Barl, L.; Gässler, W.; Borelli, J. L.
2014-07-01
ARGOS, a multi-star adaptive optics system, is designed for the wide-field imager and multi-object spectrograph LUCI on the LBT (Large Binocular Telescope). Based on Rayleigh scattering, the laser constellation images 3 artificial stars (at 532 nm) per each of the 2 eyes of the LBT, focused at a height of 12 km (Ground Layer Adaptive Optics). The stars are nominally positioned on a circle 2' in radius, but each star can be moved by up to 0.5' in any direction. The following main subsystems are necessary for these needs: 1. A laser system with 3 lasers (Nd:YAG, ~18 W each) delivering the strong collimated light indispensable for laser guide stars. 2. The launch system, a 40 cm telescope projecting 3 beams per main mirror to the sky. 3. The wavefront sensor with a dichroic mirror. 4. The dichroic mirror unit to grab and interpret the data. 5. A calibration unit to adjust the system independently, also during daytime. 6. Racks and platforms for the WFS units. 7. Platforms and ladders for secure access. This paper mainly demonstrates how the ARGOS laser system is configured and designed to support all the other systems.
7. IPAD applications to the design, analysis, and/or machining of aerospace structures. [Integrated Program for Aerospace-vehicle Design
Science.gov (United States)
Blackburn, C. L.; Dovi, A. R.; Kurtze, W. L.; Storaasli, O. O.
1981-01-01
A computer software system for the processing and integration of engineering data and programs, called IPAD (Integrated Programs for Aerospace-Vehicle Design), is described. The ability of the system to relieve the engineer of the mundane task of input data preparation is demonstrated by the application of a prototype system to the design, analysis, and/or machining of three simple structures. Future work to further enhance the system's automated data handling and its ability to handle larger and more varied design problems is also presented.
8. Learners Programming Language a Helping System for Introductory Programming Courses
Directory of Open Access Journals (Sweden)
2016-07-01
Full Text Available Programming is the core of computer science, and because of this importance special care is taken in designing the curriculum of programming courses. Substantial work has been conducted on the definition of programming courses, yet introductory programming courses still face high attrition, low retention and lack of motivation. This paper introduces a tiny pre-programming language called LPL (Learners Programming Language) as a ZPL (Zeroth Programming Language) to acquaint novice students with the elementary concepts of introductory programming before the first imperative programming course. The overall objective and design philosophy of LPL are based on the hypothesis that a soft introduction to a simple, paradigm-specific textual programming language can increase the motivation of novice students, reduce the congenital complexities and hardness of the first programming course, and eventually improve the retention rate and reduce the dropout/failure level. LPL also generates equivalent high-level programs from the user's source program, which is very helpful for understanding the syntax of introductory programming languages. To overcome the inherent complexities of the unusual and rigid syntax of introductory programming languages, LPL provides elementary programming concepts in the form of algorithmic, plain-natural-language-based computational statements. The initial results obtained after the introduction of LPL are very encouraging in motivating novice students and improving the retention rate.
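The core idea, interpreting natural-language-flavoured statements while also emitting equivalent high-level code, can be sketched in a few lines; the statement forms below are invented for illustration and are not actual LPL syntax:

```python
# Hypothetical LPL-like source program (invented statement forms).
PROGRAM = [
    "set total to 0",
    "repeat 3 times add 2 to total",
    "show total",
]

def run_lpl(lines):
    """Interpret the tiny language above and, in parallel, emit the
    equivalent Python source, mimicking LPL's code-generation feature."""
    env, py, out = {}, [], []
    for line in lines:
        w = line.split()
        if w[0] == "set":                         # set X to N
            env[w[1]] = int(w[3])
            py.append(f"{w[1]} = {w[3]}")
        elif w[0] == "repeat":                    # repeat N times add M to X
            n, m, var = int(w[1]), int(w[4]), w[6]
            env[var] += n * m
            py.append(f"for _ in range({n}): {var} += {m}")
        elif w[0] == "show":                      # show X
            out.append(env[w[1]])
            py.append(f"print({w[1]})")
    return out, py

out, generated = run_lpl(PROGRAM)
assert out == [6]
assert generated[0] == "total = 0"
```

The point mirrors the paper's: the learner reads near-natural statements, while the generated Python exposes the syntax of the "real" language they will meet next.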
9. Algebraic Varieties and System Design
DEFF Research Database (Denmark)
Aabrandt, Andreas
Design and analysis of networks have many applications in the engineering sciences. This dissertation seeks to contribute to the methods used in the analysis of networks with a view towards assisting decision making processes. Networks are initially considered as objects in the category of graphs and later as objects in the category of hypergraphs. The connection with the category of simplicial pairs becomes apparent when the topology is analyzed using homological algebra. A topological ranking is developed that measures the ability of the network to stay path-connected. Combined with the analysis of cover ideals of hypergraphs, the topological ranking demonstrates the non-trivial decisions that need to be considered in system design. All the methods developed here have an underlying common structure, namely that they all appear as solution sets for systems of polynomials.
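A crude combinatorial stand-in for a ranking by ability to stay path-connected can be sketched by scoring each node by how badly its removal fragments the graph; this is an assumed illustration, not the dissertation's homological construction:

```python
from collections import deque

def components(nodes, edges):
    """Count connected components with breadth-first search."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, count = set(), 0
    for start in nodes:
        if start in seen:
            continue
        count += 1
        queue = deque([start])
        seen.add(start)
        while queue:
            for w in adj[queue.popleft()]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
    return count

def fragility_ranking(nodes, edges):
    """Rank nodes by the number of components their removal leaves:
    articulation points float to the top."""
    def after_removal(n):
        rest = [m for m in nodes if m != n]
        kept = [(u, v) for u, v in edges if n not in (u, v)]
        return components(rest, kept)
    return sorted(nodes, key=after_removal, reverse=True)

# In a path graph 1-2-3, the middle node is the critical one.
assert fragility_ranking([1, 2, 3], [(1, 2), (2, 3)])[0] == 2
```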
10. Software design for resilient computer systems
CERN Document Server
Schagaev, Igor
2016-01-01
This book addresses the question of how system software should be designed to account for faults, and which fault tolerance features it should provide for highest reliability. The authors first show how the system software interacts with the hardware to tolerate faults. They analyze and further develop the theory of fault tolerance to understand the different ways to increase the reliability of a system, with special attention on the role of system software in this process. They further develop the general algorithm of fault tolerance (GAFT) with its three main processes: hardware checking, preparation for recovery, and the recovery procedure. For each of the three processes, they analyze the requirements and properties theoretically and give possible implementation scenarios and system software support required. Based on the theoretical results, the authors derive an Oberon-based programming language with direct support of the three processes of GAFT. In the last part of this book, they introduce a simulator...
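GAFT's three processes (checking, preparation for recovery, recovery) can be caricatured in a few lines of Python; this is a toy sketch under assumed fault semantics, not the authors' Oberon-based system software:

```python
def run_with_gaft(steps, fault_at=None):
    """Execute additive steps under a toy general algorithm of fault
    tolerance: checkpoint before each step (preparation for recovery),
    detect an injected transient fault (checking), and roll back and
    retry the failed step (recovery)."""
    state, checkpoint = 0, 0
    already_faulted = set()
    i = 0
    while i < len(steps):
        checkpoint = state                      # preparation: save state
        try:
            if i == fault_at and i not in already_faulted:
                already_faulted.add(i)          # transient: fires only once
                raise RuntimeError("hardware fault detected")   # checking
            state += steps[i]
        except RuntimeError:
            state = checkpoint                  # recovery: roll back
            continue                            # ...and retry the same step
        i += 1
    return state

# A transient fault at step 1 is masked; the result is unchanged.
assert run_with_gaft([1, 2, 3], fault_at=1) == 6
```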
11. Design of SPring-8 control system
International Nuclear Information System (INIS)
Wada, T.; Kumahara, T.; Yonehara, H.; Yoshikawa, H.; Masuda, T.; Wang Zhen
1992-01-01
The control system of the SPring-8 facility is designed as a distributed computer system with three hierarchical levels, all linked by computer networks. The upper-level network is a high-speed multi-media LAN such as FDDI, linking the sub-system control computers; the middle level consists of Ethernet or MAP networks linking front-end processors (FEPs) such as VME systems; the lowest is a field-level bus linking the VME systems and the controlled devices. Workstations (WS) or X-terminals serve as man-machine interfaces. For operating systems, UNIX is used on the upper-level computers and real-time OSs on the FEPs. We will select hardware and operating systems whose specifications are close to international standards. Since the cost of software has recently become higher than that of hardware, we introduce as many computer-aided tools as possible for program development. (author)
12. ISABELLE control system: design concepts
International Nuclear Information System (INIS)
Humphrey, J.W.
1979-01-01
ISABELLE is a Department of Energy funded proton accelerator/storage ring being built at Brookhaven National Laboratory (Upton, Long Island, New York). It is large (3.8 km circumference) and complicated (approx. 30,000 monitor and control variables). It is based on superconducting technology. Following the example of previous accelerators, ISABELLE will be operated from a single control center. The control system will be distributed and will incorporate a local computer network. An overview of the conceptual design of the ISABELLE control system will be presented
13. Object-oriented design and programming in medical decision support.
Science.gov (United States)
Heathfield, H; Armstrong, J; Kirkham, N
1991-12-01
The concept of object-oriented design and programming has recently received a great deal of attention from the software engineering community. This paper highlights the realisable benefits of using the object-oriented approach in the design and development of clinical decision support systems. These systems seek to build a computational model of some problem domain and therefore tend to be exploratory in nature. Conventional procedural design techniques do not support either the process of model building or rapid prototyping. The central concepts of the object-oriented paradigm are introduced, namely encapsulation, inheritance and polymorphism, and their use illustrated in a case study, taken from the domain of breast histopathology. In particular, the dual roles of inheritance in object-oriented programming are examined, i.e., inheritance as a conceptual modelling tool and inheritance as a code reuse mechanism. It is argued that the use of the former is not entirely intuitive and may be difficult to incorporate into the design process. However, inheritance as a means of optimising code reuse offers substantial technical benefits.
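The paper's contrast between the two roles of inheritance can be made concrete; the domain names below are invented for illustration (loosely echoing the breast-histopathology case study), not taken from the paper:

```python
import json

class Lesion:
    """Inheritance as a conceptual modelling tool: a Carcinoma *is a*
    Lesion, so the class hierarchy mirrors the domain taxonomy."""
    def grade(self):
        return 0
    def describe(self):
        return f"{type(self).__name__} at grade {self.grade()}"

class Carcinoma(Lesion):
    def grade(self):              # polymorphism: the subtype refines behaviour
        return 3

class JSONReportMixin:
    """Inheritance as a code-reuse mechanism: this mixin is *not* part of
    the domain taxonomy; it only donates a serialisation method."""
    def to_json(self):
        return json.dumps({"finding": self.describe()})

class ReportedCarcinoma(Carcinoma, JSONReportMixin):
    """Combines the is-a hierarchy with the reused reporting code."""

c = ReportedCarcinoma()
assert c.describe() == "ReportedCarcinoma at grade 3"
assert "grade 3" in c.to_json()
```

The mixin illustrates the paper's point: reuse-motivated inheritance is technically convenient but, unlike `Carcinoma(Lesion)`, it does not correspond to anything in the conceptual model.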
14. Operations Monitoring Assistant System Design
Science.gov (United States)
1986-07-01
An operations monitoring assistant (OMA) system is designed that combines operations research, artificial intelligence, and human reasoning techniques. Candidate implementation tools include KnowledgeCraft (from Carnegie Group) and S.1 (from Teknowledge); these tools incorporate the best methods of applied artificial intelligence.
15. Embedded Systems Programming: Accessing Databases from Esterel
Directory of Open Access Journals (Sweden)
2009-03-01
Full Text Available A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel’s synchrony hypothesis, or remotely along several of Esterel’s execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel’s facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs’ utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot’s controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse’s floor layout into account, both of which are stored in a MySQL database.
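The two processing scenarios the article distinguishes, a result set consumed "synchronously" within one reaction and one that arrives over several execution cycles, can be mimicked in plain Python with SQLite; this is a host-language analogy with invented names, not the Esterel-side APIs the article defines:

```python
import sqlite3

def controller_cycles(conn, n_cycles):
    """Toy cyclic controller: one query is answered inside the same cycle
    (the synchrony-hypothesis case), another is issued as a 'task' whose
    result is only picked up some cycles later (the remote case)."""
    pending = None
    log = []
    for cycle in range(n_cycles):
        if cycle == 0:
            # Local/synchronous access: block this cycle until the row is back.
            row = conn.execute("SELECT 41 + 1").fetchone()
            log.append(("sync", cycle, row[0]))
            # Task-like access: issue now, poll for the result later.
            pending = ("SELECT 7 * 6", cycle + 2)   # ready two cycles hence
        if pending and cycle >= pending[1]:
            row = conn.execute(pending[0]).fetchone()
            log.append(("task", cycle, row[0]))
            pending = None
    return log

log = controller_cycles(sqlite3.connect(":memory:"), 4)
assert log == [("sync", 0, 42), ("task", 2, 42)]
```

The design tension is the same as in the article: the synchronous path is simpler but stretches the cycle, while the task path keeps cycles short at the cost of explicit completion handling.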
16. Embedded Systems Programming: Accessing Databases from Esterel
Directory of Open Access Journals (Sweden)
White David
2008-01-01
Full Text Available Abstract A current limitation in embedded controller design and programming is the lack of database support in development tools such as Esterel Studio. This article proposes a way of integrating databases and Esterel by providing two application programming interfaces (APIs which enable the use of relational databases inside Esterel programs. As databases and Esterel programs are often executed on different machines, result sets returned as responses to database queries may be processed either locally and according to Esterel's synchrony hypothesis, or remotely along several of Esterel's execution cycles. These different scenarios are reflected in the design and usage rules of the two APIs presented in this article, which rely on Esterel's facilities for extending the language by external data types, external functions, and procedures, as well as tasks. The APIs' utility is demonstrated by means of a case study modelling an automated warehouse storage system, which is constructed using Lego Mindstorms robotics kits. The robot's controller is programmed in Esterel in a way that takes dynamic ordering information and the warehouse's floor layout into account, both of which are stored in a MySQL database.
17. Reactor Design for Bioelectrochemical Systems
KAUST Repository
Mohanakrishna, G.
2017-12-01
Bioelectrochemical systems (BES) are novel hybrid systems designed to generate renewable energy from low-cost substrates in a sustainable way. Microbial fuel cells (MFCs) are the best-studied BES application, generating electricity from a wide variety of organic components and wastewaters. The MFC mechanism deals with the microbial oxidation of organic molecules for the production of electrons and protons. The MFC design helps to build the electrochemical gradient between anode and cathode which leads to bioelectricity generation. Because all MFC reactions happen under mild environmental and operating conditions and use waste organics as the substrate, MFCs are regarded as a sustainable alternative for global energy needs and have attracted worldwide researchers to this area. Apart from MFCs, BES have other applications such as microbial electrolysis cells (MECs) for biohydrogen production, microbial desalination cells (MDCs) for water desalination, and microbial electrosynthesis (MES) cells for value-added product formation. All these applications are designed to perform efficiently under mild operational conditions. Specific strains of bacteria, or specifically enriched microbial consortia, act as the biocatalyst for the oxidation and reduction reactions of BES. The detailed function of the biocatalyst is discussed in other chapters of this book.
18. 100 kW PV system design
International Nuclear Information System (INIS)
Khan, N.; Abas, N.
2010-01-01
This paper reports the design of a 100 kW photovoltaic (PV) system for a 10,000 ft² roof building. It explains the necessary design steps for a medium-power PV system and the specifications of associated accessories for large-building solar electrification. A comparison between various solar technologies and their ancillary anchors is provided to guide young electric power and renewable energy engineers. It aims to show that hybrid photovoltaic thermal (PVT) technology is more efficient than simple photovoltaic (PV), photovoltaic concentrator (PVC) or stand-alone solar thermal heating systems. We have been using fossil fuels from 1859 to 2009. Oil triggered population growth, which in turn increased energy demand for oil. A 200-year closed-loop positive feedback has amplified the oil production rate from a few thousand barrels to 86-87 million barrels per day. Today the world population is burning oil at a rate of 1000 barrels/sec. The oil reserves are likely to end by 2050 (worst case) or 2100 (best case). At a 1.7% growth rate the current global population might double to 12 billion, and electric power demand will increase well beyond the current 15 TW. Unfortunately, oil reserves would then be breathing their last and global warming would be at its climax. To cope with the upcoming power, water and energy cataclysms, it is more than essential to pursue sustainable and renewable energy education and lifestyles. I hope this design venture will create interest among power and energy students, engineers and professionals. (author)
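The growth figures quoted above can be sanity-checked with a couple of lines. The 1.7% rate and the 1000 barrels/sec figure are taken from the abstract; the doubling-time formula is the standard compound-growth rule:

```python
import math

# Compound-growth doubling time: a quantity growing at rate r per year
# doubles in ln(2) / ln(1 + r) years.
growth_rate = 0.017                      # 1.7 % annual growth (from the abstract)
doubling_time = math.log(2) / math.log(1 + growth_rate)

# Cross-check of the consumption figure: 1000 barrels per second.
barrels_per_day = 1000 * 86400           # 86400 seconds in a day

print(f"doubling time at 1.7%/yr: {doubling_time:.1f} years")
print(f"1000 barrels/s = {barrels_per_day / 1e6:.1f} million barrels/day")
```

The doubling time comes out near 41 years, and 1000 barrels/sec corresponds to about 86.4 million barrels per day, consistent with the production figure in the abstract.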
19. University Program Management Information System
Science.gov (United States)
Gans, Gary (Technical Monitor)
2004-01-01
As basic policy, NASA believes that colleges and universities should be encouraged to participate in the nation's space and aeronautics program to the maximum extent practicable. Indeed, universities are considered as partners with government and industry in the nation's aerospace program. NASA's objective is to have them bring their scientific, engineering, and social research competence to bear on aerospace problems and on the broader social, economic, and international implications of NASA's technical and scientific programs. It is expected that, in so doing, universities will strengthen both their research and their educational capabilities to contribute more effectively to the national well being. This annual report is one means of documenting the NASA-university relationship, frequently denoted, collectively, as NASA's University Program. This report is consistent with agency accounting records, as the data is obtained from NASA's Financial and Contractual Status (FACS) System, operated by the Financial Management Division and the Procurement Office. However, in accordance with interagency agreements, the orientation differs from that required for financial or procurement purposes. Any apparent discrepancies between this report and other NASA procurement or financial reports stem from the selection criteria for the data.
20. Distributed Persistent Identifiers System Design
Directory of Open Access Journals (Sweden)
Pavel Golodoniuc
2017-06-01
Full Text Available The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementation, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have, by and large, catered for identifier uniqueness, integrity, and persistence, regardless of the identifier's application domain. Trustworthiness of these systems has been measured by the criteria first defined by Bütikofer (2009) and further elaborated by Golodoniuc et al. (2016) and Car et al. (2017). Since many PID systems have been conceived and developed by a single organisation, they have faced challenges for widespread adoption and, most importantly, for surviving changes of technology. We believe that one cause of once-successful PID systems fading away is the centralisation of support infrastructure, both organisational and computing and data storage systems. In this paper, we propose a PID system design that implements the pillars of a trustworthy system: ensuring identifiers' independence of any particular technology or organisation, implementation of core PID system functions, separation from data delivery, and enabling the system to adapt to future change. We propose decentralisation at all levels, namely persistent identifier and information object registration, resolution, and data delivery, using Distributed Hash Tables and traditional peer-to-peer networks with information replication and caching mechanisms, thus eliminating the need for a central PID data store. This will increase overall system fault tolerance, thus ensuring its trustworthiness. We also discuss important aspects of the distributed system's governance, such as the notion of the authoritative source and data integrity.
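The decentralised resolution idea above, identifiers placed on a Distributed Hash Table with replication so that no central PID store exists, can be sketched minimally. The node names, replica count, and hashing scheme below are illustrative assumptions, not the paper's actual implementation:

```python
import hashlib
from bisect import bisect

def ring_hash(key: str) -> int:
    """Stable 64-bit hash used to place both nodes and identifiers on the ring."""
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")

class PidRing:
    """Toy DHT ring: each persistent identifier is stored on the next
    `replicas` nodes clockwise from its hash position, so no single node
    (or organisation) is a central point of failure."""

    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((ring_hash(n), n) for n in nodes)

    def nodes_for(self, pid: str):
        keys = [k for k, _ in self.ring]
        start = bisect(keys, ring_hash(pid)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.replicas, len(self.ring)))]

ring = PidRing(["node-a.example", "node-b.example",
                "node-c.example", "node-d.example"])
print(ring.nodes_for("urn:example:10.1234/abcd"))  # the three replica holders
```

Because placement depends only on the hash, any peer can resolve an identifier to its replica set without consulting a central registry, which is the fault-tolerance property the abstract argues for.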
1. A CAD (Classroom Assessment Design) of a Computer Programming Course
Science.gov (United States)
Hawi, Nazir S.
2012-01-01
This paper presents a CAD (classroom assessment design) of an entry-level undergraduate computer programming course "Computer Programming I". CAD has been the product of a long experience in teaching computer programming courses including teaching "Computer Programming I" 22 times. Each semester, CAD is evaluated and modified…
2. Study and design of cryogenic propellant acquisition systems. Volume 1: Design studies
Science.gov (United States)
Burge, G. W.; Blackmon, J. B.
1973-01-01
An in-depth study and selection of practical propellant surface tension acquisition system designs for two specific future cryogenic space vehicles, an advanced cryogenic space shuttle auxiliary propulsion system and an advanced space propulsion module is reported. A supporting laboratory scale experimental program was also conducted to provide design information critical to concept finalization and selection. Designs using localized pressure isolated surface tension screen devices were selected for each application and preliminary designs were generated. Based on these designs, large scale acquisition prototype hardware was designed and fabricated to be compatible with available NASA-MSFC feed system hardware.
3. System 80+™ Standard Design: CESSAR design certification. Volume 11: Amendment I
Energy Technology Data Exchange (ETDEWEB)
1990-12-21
This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This volume 11 discusses Radiation Protection, Conduct of Operations, and the Initial Test Program.
4. The Assessment Agent System: Design, Development, and Evaluation
Science.gov (United States)
Liu, Jianhua
2013-01-01
This article reports the design, development, and evaluation of an online software application for assessing students' understanding of curricular content based on concept maps. This computer-based assessment program, called the Assessment Agent System, was designed by following an agent-oriented software design method. The Assessment Agent System…
5. Energy-aware design of digital systems
Energy Technology Data Exchange (ETDEWEB)
Gruian, F.
2000-02-01
Power and energy consumption are important issues in many digital applications, for reasons such as packaging cost and battery life-span. With the development of portable computing and communication, an increasing number of research groups are addressing power and energy related issues at various stages during the design process. Most of the work done in this area focuses on lower abstraction levels, such as the gate or transistor level. Ideally, a power- and energy-efficient design flow should consider power and energy issues at every stage in the design process. Therefore, power- and energy-aware methods applicable early in the design process are required. Following this trend, the thesis presents two high-level design methods addressing power and energy consumption minimization. The first of the two approaches we describe targets power consumption minimization during behavioral synthesis. This is carried out by minimizing the switching activity, while taking the correlations between signals into account. The second approach performs energy consumption minimization during system-level design, by choosing the most energy-efficient schedule and configuration of resources. Both methods make use of the constraint programming paradigm to model the problems in an elegant manner. The experimental results presented in this thesis show the impact of addressing power- and energy-related issues early in the design process.
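The second approach described above, picking the most energy-efficient schedule subject to timing constraints, can be illustrated with a toy search. The task count, power/time figures, and deadline below are invented for illustration; a real flow would use a constraint programming solver rather than brute-force enumeration:

```python
from itertools import product

# Two hypothetical operating modes per task: (power in W, execution time).
# All numbers are invented for illustration only.
MODES = {"fast": (4.0, 1.0), "slow": (1.0, 2.5)}

def best_schedule(n_tasks=4, deadline=8.0):
    """Enumerate every mode assignment, keep those meeting the deadline
    constraint, and return the (energy, assignment) pair with minimum energy."""
    best = None
    for assign in product(MODES, repeat=n_tasks):
        total_time = sum(MODES[m][1] for m in assign)
        if total_time > deadline:               # deadline constraint violated
            continue
        energy = sum(p * t for p, t in (MODES[m] for m in assign))
        if best is None or energy < best[0]:
            best = (energy, assign)
    return best

print(best_schedule())   # minimum-energy assignment that meets the deadline
```

With these numbers the slow mode costs less energy per task, so the search runs as many tasks slow as the deadline allows (two slow, two fast), which is exactly the trade-off the thesis's system-level method formalises.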
6. Design of a continuous quality improvement program to prevent falls among community-dwelling older adults in an integrated healthcare system
Directory of Open Access Journals (Sweden)
Yano Elizabeth M
2009-11-01
Full Text Available Abstract Background Implementing quality improvement programs that require behavior change on the part of health care professionals and patients has proven difficult in routine care. Significant randomized trial evidence supports creating fall prevention programs for community-dwelling older adults, but adoption in routine care has been limited. Nationally-collected data indicated that our local facility could improve its performance on fall prevention in community-dwelling older people. We sought to develop a sustainable local fall prevention program, using theory to guide program development. Methods We planned program development to include important stakeholders within our organization. The theory-derived plan consisted of (1) an initial leadership meeting to agree on whether creating a fall prevention program was a priority for the organization, (2) focus groups with patients and health care professionals to develop ideas for the program, (3) monthly workgroup meetings with representatives from key departments to develop a blueprint for the program, and (4) a second leadership meeting to confirm that the blueprint developed by the workgroup was satisfactory, and also to solicit feedback on ideas for program refinement. Results The leadership and workgroup meetings occurred as planned and led to the development of a functional program. The focus groups did not occur as planned, mainly due to the complexity of obtaining research approval for focus groups. The fall prevention program uses an existing telephonic nurse advice line to (1) place outgoing calls to patients at high fall risk, (2) assess these patients' risk factors for falls, and (3) triage these patients to the appropriate services. The workgroup continues to meet monthly to monitor the progress of the program and improve it. Conclusion A theory-driven program development process has resulted in the successful initial implementation of a fall prevention program.
7. Automatic seismic support design of piping system by an object oriented expert system
International Nuclear Information System (INIS)
Nakatogawa, T.; Takayama, Y.; Hayashi, Y.; Fukuda, T.; Yamamoto, Y.; Haruna, T.
1990-01-01
The seismic support design of piping systems of nuclear power plants requires many experienced engineers and plenty of man-hours, because the seismic design conditions are very severe, the bulk volume of the piping systems is huge and the design procedures are very complicated. Therefore we have developed a piping seismic design expert system, which utilizes the piping design database of a 3-dimensional CAD system and automatically determines the piping support locations and support styles. The database of this system contains the maximum allowable seismic support span lengths for straight piping and the span length reduction factors for bends, branches, concentrated masses in the piping, and so forth. The system automatically produces the support design according to the design knowledge extracted and collected from expert design engineers, using design information such as piping specifications, which give diameters and thicknesses, and piping geometric configurations. The automatic seismic support design provided by this expert system achieves a reduction in design man-hours, improvement of design quality, verification of design results, optimization of support locations and prevention of input duplication. In developing this system, we had to derive the design logic from expert design engineers, and it could not be simply expressed descriptively. We also had to write programs for different kinds of design knowledge. For these reasons we adopted the object-oriented programming paradigm (Smalltalk-80), which is suitable for combining programs and carrying out the design work.
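The database rule described above, a maximum allowable span for straight pipe derated by reduction factors for bends, branches and concentrated masses, can be sketched as a few lines of lookup-and-multiply. Every numeric value here is a made-up placeholder, not an actual seismic design figure:

```python
# Base allowable support span per nominal pipe diameter, and reduction
# factors per feature. All numbers are made-up placeholders, not real
# seismic design data.
BASE_SPAN_M = {25: 2.1, 50: 3.0, 100: 4.3}    # diameter (mm) -> span (m)
REDUCTION = {"bend": 0.75, "branch": 0.80, "mass": 0.65}

def allowable_span(diameter_mm, features=()):
    """Start from the straight-pipe span and derate once per feature present."""
    span = BASE_SPAN_M[diameter_mm]
    for feature in features:
        span *= REDUCTION[feature]
    return span

print(allowable_span(50))                     # straight run
print(allowable_span(50, ("bend", "mass")))   # derated for a bend plus a mass
```

Walking the CAD piping geometry and placing a support wherever the accumulated run length reaches this allowable span is, in outline, what lets such a system propose support locations automatically.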
8. Content of system design descriptions
International Nuclear Information System (INIS)
1998-10-01
A System Design Description (SDD) describes the requirements and features of a system. This standard provides guidance on the expected technical content of SDDs. The need for such a standard was recognized during efforts to develop SDDs for safety systems at DOE Hazard Category 2 nonreactor nuclear facilities. Existing guidance related to the corresponding documents in other industries is generally not suitable to meet the needs of DOE nuclear facilities. Across the DOE complex, different contractors have guidance documents, but they vary widely from site to site. While such guidance documents are valuable, no single guidance document has all the attributes that DOE considers important, including a reasonable degree of consistency or standardization. This standard is a consolidation of the best of the existing guidance. This standard has been developed with a technical content and level of detail intended to be most applicable to safety systems at DOE Hazard Category 2 nonreactor nuclear facilities. Notwithstanding that primary intent, this standard is recommended for other systems at such facilities, especially those that are important to achieving the programmatic mission of the facility. In addition, application of this standard should be considered for systems at other facilities, including non-nuclear facilities, on the basis that SDDs may be beneficial and cost-effective
9. A Symbiosis between Instructional Systems Design and Project Management
Science.gov (United States)
Pan, Cheng-Chang
2012-01-01
This study is intended to explore a complementary relationship between instructional systems design (ISD) and project management in an attempt to build a plausible case for integrating project management as a distinct course in the core of the graduate instructional systems design programs. It is argued that ISD and project management should form…
10. Selecting, adapting, and sustaining programs in health care systems.
Science.gov (United States)
Zullig, Leah L; Bosworth, Hayden B
2015-01-01
Practitioners and researchers often design behavioral programs that are effective for a specific population or problem. Despite their success in a controlled setting, relatively few programs are scaled up and implemented in health care systems. Planning for scale-up is a critical, yet often overlooked, element in the process of program design. Equally as important is understanding how to select a program that has already been developed, and adapt and implement the program to meet specific organizational goals. This adaptation and implementation requires attention to organizational goals, available resources, and program cost. We assert that translational behavioral medicine necessitates expanding successful programs beyond a stand-alone research study. This paper describes key factors to consider when selecting, adapting, and sustaining programs for scale-up in large health care systems and applies the Knowledge to Action (KTA) Framework to a case study, illustrating knowledge creation and an action cycle of implementation and evaluation activities.
11. A knowledge-based system design/information tool for aircraft flight control systems
Science.gov (United States)
Mackall, Dale A.; Allen, James G.
1991-01-01
Research aircraft have become increasingly dependent on advanced electronic control systems to accomplish program goals. These aircraft are integrating multiple disciplines to improve performance and satisfy research objectives. This integration is being accomplished through electronic control systems. System design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This will provide the engineers with the information needed to thoroughly ground test the system and thereby reduce the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using the techniques from the top-level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences of three highly complex, integrated aircraft programs are reviewed: the X-29 forward swept wing; the advanced fighter technology integration (AFTI) F-16; and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors which caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.
12. Mechanistic flexible pavement overlay design program.
Science.gov (United States)
2009-07-01
The current Louisiana Department of Transportation and Development (LADOTD) overlay thickness design method follows the Component : Analysis procedure provided in the 1993 AASHTO pavement design guide. Since neither field nor laboratory tests a...
13. Safety parameter display system (SPDS) for Russian-designed NPPs
International Nuclear Information System (INIS)
Anikanov, S.S.; Catullo, W.J.; Pelusi, J.L.
1997-01-01
As part of the programs aimed at improving the safety of Russian-designed reactors, the US DoE has sponsored a project of providing a safety parameter display system (SPDS) for nuclear power plants with such reactors. The present paper is focused mostly on the system architecture design features of SPDS systems for WWER-1000 and RBMK-1000 reactors. The function and the operating modes of the SPDS are outlined, and a description of the display system is given. The system architecture and system design of both an integrated and a stand-alone I&C system are explained. (A.K.)
14. Seismic design of piping systems
International Nuclear Information System (INIS)
Anglaret, G.; Beguin, J.L.
1986-01-01
This paper deals with the method used in France for the PWR nuclear plants to derive locations and types of supports of auxiliary and secondary piping systems taking earthquake in account. The successive steps of design are described, then the seismic computation method and its particular conditions of applications for piping are presented. The different types of support (and especially seismic ones) are described and also their conditions of installation. The method used to compare functional tests results and computation results in order to control models is mentioned. Some experiments realised on site or in laboratory, in order to validate models and methods, are presented [fr
15. Planning an Injection Mold Design Training Program.
Science.gov (United States)
Allyn, Edward P.
With the increased use of plastics worldwide the shortage of trained personnel in moldmaking and design for plastic injection molds is becoming critical. Local schools and community colleges should provide courses in mold design and mold making, since most workers presently learn while working under experienced designers on the job. Following this…
16. Reliability program requirements for aeronautical and space system contractors
Science.gov (United States)
1987-01-01
General reliability program requirements for NASA contracts involving the design, development, fabrication, test, and/or use of aeronautical and space systems including critical ground support equipment are prescribed. The reliability program requirements require (1) thorough planning and effective management of the reliability effort; (2) definition of the major reliability tasks and their place as an integral part of the design and development process; (3) planning and evaluating the reliability of the system and its elements (including effects of software interfaces) through a program of analysis, review, and test; and (4) timely status indication by formal documentation and other reporting to facilitate control of the reliability program.
17. A CONCEPT OF SOLAR TRACKER SYSTEM DESIGN
OpenAIRE
Meita Rumbayan *, Muhamad Dwisnanto Putro
2017-01-01
Improving solar panel efficiency has been an active research topic in recent years. Maximizing the output power by integrating the panel with a solar tracker system has become a point of interest in this research. This paper presents a concept for designing a solar tracker system applied to a solar panel. The development of the solar panel tracker system design consists of a system display prototype design, hardware design, and algorithm design. This concept is useful as the control system for a solar tracker to improve...
18. Design Exception In-Service Monitoring Program Development
Science.gov (United States)
2017-05-01
This study evaluates various possible program designs for in-service monitoring of design exceptions (DEs) for the Georgia Department of Transportation. The study recommends a multitier stepwise approach to the evaluation of DEs. Specifically, the pr...
19. Financial innovation and system design
Directory of Open Access Journals (Sweden)
Mario Tonveronachi
2010-01-01
Full Text Available The official regulatory responses to the current crisis do not alter the laissez faire approach to the production and allocation of financial risks which characterises the existing regulatory framework. The stated goal remains that of maintaining the freedom for the private sector to introduce financial innovations, whose nature is consistent with the system design pursued by the official authorities. I argue that adopting a systemic perspective the crucial point is not just the nature of innovations but their quantitative dimension and dynamics, which are responsible for the endogenous creation of financial fragility. The new official proposals do not appear capable of changing this picture. A radical revision of the regulatory approach is necessary, of which an outline is presented.
20. Nuclear integrated database and design advancement system
Energy Technology Data Exchange (ETDEWEB)
Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young
1997-01-01
The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks, detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs.
1. Nuclear integrated database and design advancement system
International Nuclear Information System (INIS)
Ha, Jae Joo; Jeong, Kwang Sub; Kim, Seung Hwan; Choi, Sun Young.
1997-01-01
The objective of NuIDEAS is to computerize design processes through an integrated database by eliminating the current work style of delivering hardcopy documents and drawings. The major research contents of NuIDEAS are the advancement of design processes by computerization, the establishment of a design database, and 3-dimensional visualization of design data. KSNP (Korea Standard Nuclear Power Plant) is the target of the legacy database and 3-dimensional model, so that they can be utilized in the next plant design. In the first year, the blueprint of NuIDEAS is proposed, and its prototype is developed by applying rapidly evolving computer technology. The major results of the first-year research were to establish the architecture of the integrated database ensuring data consistency, and to build the design database of the reactor coolant system and heavy components. Also, various software tools were developed to search, share and utilize the data through networks, detailed 3-dimensional CAD models of nuclear fuel and heavy components were constructed, and a walk-through simulation using the models was developed. This report contains the major additions and modifications to the object-oriented database and associated programs, using methods and JavaScript. (author). 36 refs., 1 tab., 32 figs
2. Program Simulates A Modular Manufacturing System
Science.gov (United States)
Schroer, Bernard J.; Wang, Jian
1996-01-01
SSE computer program provides simulation environment for modeling manufacturing systems containing relatively small numbers of stations and operators. Designed to simulate manufacturing of apparel, also used in other manufacturing domains. Excellent for small or medium-size firms, including those lacking expertise to develop detailed models or having only minimal knowledge of describing manufacturing systems and analyzing results of simulations on mathematical models. User does not need to know simulation language to use SSE. Used to design new modules and to evaluate existing modules. Originally written in Turbo C v2.0 for IBM PC-compatible computers running MS-DOS and successfully implemented by use of Turbo C++ v3.0.
3. Design and Development of a Learning Design Virtual Internship Program
Science.gov (United States)
Ruggiero, Dana; Boehm, Jeff
2016-01-01
Incorporation of practical experience in learning design and technology education has long been accepted as an important step in the developmental process of future learning designers. The proliferation of adult online education has increased the number of graduate students who are in need of a practical internship placement but have limited…
4. Fermilab Recycler Collimation System Design
Energy Technology Data Exchange (ETDEWEB)
Brown, B. C. [Fermilab; Adamson, P. [Fermilab; Ainsworth, R. [Fermilab; Capista, D. [Fermilab; Hazelwood, K. [Fermilab; Kourbanis, I. [Fermilab; Mokhov, N. V. [Fermilab; Morris, D. K. [Fermilab; Murphy, M. [Fermilab; Sidorov, V. [Fermilab; Stern, E. [Fermilab; Tropin, I. [Fermilab; Yang, M-J. [Fermilab
2016-10-04
To provide 700 kW proton beams for neutrino production in the NuMI facility, we employ slip stacking in the Recycler with transfer to the Main Injector for recapture and acceleration. Slip stacking with 12 Booster batches per 1.33 sec cycle of the Main Injector has been implemented and briefly tested while extensive operation with 8 batches and 10 batches per MI cycle has been demonstrated. Operation in this mode since 2013 shows that loss localization is an essential component for long term operation. Beam loss in the Recycler will be localized in a collimation region with design capability for absorbing up to 2 kW of lost protons in a pair of 20-Ton collimators (absorbers). This system will employ a two stage collimation with a thin molybdenum scattering foil to define the bottom edge of both the injected and decelerated-for-slipping beams. Optimization and engineering design of the collimator components and radiation shielding are based on comprehensive MARS15 simulations predicting high collimation efficiency as well as tolerable levels of prompt and residual radiation. The system installation during the Fermilab 2016 facility shutdown will permit commissioning in the subsequent operating period.
5. Seca Coal-Based Systems Program
International Nuclear Information System (INIS)
Alinger, Matthew
2008-01-01
This report summarizes the progress made during the August 1, 2006 - May 31, 2008 award period under Cooperative Agreement DE-FC26-05NT42614 for the U. S. Department of Energy/National Energy Technology Laboratory (USDOE/NETL) entitled 'SECA Coal Based Systems'. The initial overall objective of this program was to design, develop, and demonstrate multi-MW integrated gasification fuel cell (IGFC) power plants with >50% overall efficiency from coal (HHV) to AC power. The focus of the program was to develop low-cost, high performance, modular solid oxide fuel cell (SOFC) technology to support coal gas IGFC power systems. After a detailed GE internal review of the SOFC technology, the program was de-scoped at GE's request. The primary objective of this program was then focused on developing a performance degradation mitigation path for high performing, cost-effective solid oxide fuel cells (SOFCs). There were two initial major objectives in this program. These were: (1) Develop and optimize a design of a >100 MWe integrated gasification fuel cell (IGFC) power plant; (2) Resolve identified barrier issues concerning the long-term economic performance of SOFC. The program focused on designing and cost estimating the IGFC system and resolving technical and economic barrier issues relating to SOFC. In doing so, manufacturing options for SOFC cells were evaluated, options for constructing stacks based upon various cell configurations identified, and key performance characteristics were identified. Key factors affecting SOFC performance degradation for cells in contact with metallic interconnects were be studied and a fundamental understanding of associated mechanisms was developed using a fixed materials set. Experiments and modeling were carried out to identify key processes/steps affecting cell performance degradation under SOFC operating conditions. Interfacial microstructural and elemental changes were characterized, and their relationships to observed degradation
6. System 80+™ Standard Design: CESSAR design certification
International Nuclear Information System (INIS)
1990-01-01
This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This volume 10 discusses the Steam and Power Conversion System and Radioactive Waste Management
7. System 80+™ Standard Design: CESSAR design certification
International Nuclear Information System (INIS)
1990-01-01
This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This volume 9 discusses Electric Power and Auxiliary Systems
8. System 80+trademark standard design: CESSAR design certification
International Nuclear Information System (INIS)
1990-01-01
This report, entitled Combustion Engineering Standard Safety Analysis Report--Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These documents describe the Combustion Engineering, Inc. System 80+trademark Standard Design. This report, Volume 13, documents increase and decrease of reactor cooling system inventory and radioactive material release from a subsystem or component
9. Design Patterns for Functional Strategic Programming
OpenAIRE
Laemmel, Ralf; Visser, Joost
2002-01-01
In previous work, we introduced the fundamentals and a supporting combinator library for 'strategic programming'. This is an idiom for generic programming based on the notion of a 'functional strategy': a first-class generic function that can not only be applied to terms of any type, but also allows generic traversal into subterms and can be customized with type-specific behaviour. This paper seeks to provide practicing functional programmers with pragmatic guidance in crafting th...
10. Designing Microporus Carbons for Hydrogen Storage Systems
Energy Technology Data Exchange (ETDEWEB)
Alan C. Cooper
2012-05-02
An efficient, cost-effective hydrogen storage system is a key enabling technology for the widespread introduction of hydrogen fuel cells to the domestic marketplace. Air Products, an industry leader in hydrogen energy products and systems, recognized this need and responded to the DOE 'Grand Challenge' solicitation (DOE Solicitation DE-PS36-03GO93013) under Category 1 as an industry partner and steering committee member with the National Renewable Energy Laboratory (NREL) in their proposal for a center-of-excellence on Carbon-Based Hydrogen Storage Materials. This center was later renamed the Hydrogen Sorption Center of Excellence (HSCoE). Our proposal, entitled 'Designing Microporous Carbons for Hydrogen Storage Systems,' envisioned a highly synergistic 5-year program with NREL and other national laboratory and university partners.
11. System 80+trademark Standard Design: CESSAR design certification
International Nuclear Information System (INIS)
1990-01-01
This report, entitled Combustion Engineering Standard Safety Analysis Report - Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+trademark Standard Design. This Volume 16 details the application of Human Factors Engineering in the design process
12. DESIGN OF A VIBRATION AND STRESS MEASUREMENT SYSTEM FOR AN ADVANCED POWER REACTOR 1400 REACTOR VESSEL INTERNALS COMPREHENSIVE VIBRATION ASSESSMENT PROGRAM
OpenAIRE
KO, DO-YOUNG; KIM, KYU-HYUNG
2013-01-01
In accordance with the US Nuclear Regulatory Commission (US NRC), Regulatory Guide 1.20, the reactor vessel internals comprehensive vibration assessment program (RVI CVAP) has been developed for an Advanced Power Reactor 1400 (APR1400). The purpose of the RVI CVAP is to verify the structural integrity of the reactor internals to flow-induced loads prior to commercial operation. The APR1400 RVI CVAP consists of four programs (analysis, measurement, inspection, and assessment). Thoughtful prepa...
13. Design and Data Management System
Science.gov (United States)
Messer, Elizabeth; Messer, Brad; Carter, Judy; Singletary, Todd; Albasini, Colby; Smith, Tammy
2007-01-01
The Design and Data Management System (DDMS) was developed to automate the NASA Engineering Order (EO) and Engineering Change Request (ECR) processes at the Propulsion Test Facilities at Stennis Space Center for efficient and effective Configuration Management (CM). Prior to the development of DDMS, the CM system was a manual, paper-based system that required an EO or ECR submitter to walk the changes through the acceptance process to obtain necessary approval signatures. This approval process could take up to two weeks, and was subject to a variety of human errors. The process also requires that the CM office make copies and distribute them to the Configuration Control Board members for review prior to meetings. At any point, there was a potential for an error or loss of the change records, meaning the configuration of record was not accurate. The new Web-based DDMS eliminates unnecessary copies, reduces the time needed to distribute the paperwork, reduces time to gain the necessary signatures, and prevents the variety of errors inherent in the previous manual system. After implementation of the DDMS, all EOs and ECRs can be automatically checked prior to submittal to ensure that the documentation is complete and accurate. Much of the configuration information can be documented in the DDMS through pull-down forms to ensure consistent entries by the engineers and technicians in the field. The software also can electronically route the documents through the signature process to obtain the necessary approvals needed for work authorization. The workflow of the system allows for backups and timestamps that determine the correct routing and completion of all required authorizations in a more timely manner, as well as assuring the quality and accuracy of the configuration documents.
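The electronic signature routing described in this record can be illustrated with a minimal state machine. This is a toy sketch only: the role names, chain order, and class layout below are invented for illustration and are not taken from the actual DDMS.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Invented approval chain for illustration; the real DDMS roles may differ.
APPROVAL_CHAIN = ["originator", "design_engineer", "cm_office", "ccb_chair"]

@dataclass
class ChangeRequest:
    doc_id: str
    signatures: dict = field(default_factory=dict)  # role -> timestamp

    def sign(self, role: str) -> None:
        # Enforce routing order: only the next role in the chain may sign.
        expected = APPROVAL_CHAIN[len(self.signatures)]
        if role != expected:
            raise ValueError(f"out of order: expected {expected!r}, got {role!r}")
        self.signatures[role] = datetime.now()  # timestamp each approval

    @property
    def approved(self) -> bool:
        # Work is authorized only when every role has signed.
        return len(self.signatures) == len(APPROVAL_CHAIN)

ecr = ChangeRequest("ECR-0001")
for role in APPROVAL_CHAIN:
    ecr.sign(role)
print(ecr.approved)  # True once every role has signed in order
```

Timestamping each signature is what lets such a system audit routing and completion, as the record describes.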
14. Embedded Systems Design with 8051 Microcontrollers
DEFF Research Database (Denmark)
Karakahayov, Zdravko; Winther, Ole; Christensen, Knud Smed
Textbook on embedded microcontrollers. Example microcontroller family: Intel 8051 with special emphasis on Philips 80C552. Structure, design examples and programming in C and assembler. Hardware - software codesign. EProm emulator.
15. The System 80+ Standard Plant design control document. Volume 15
International Nuclear Information System (INIS)
1997-01-01
This Design Control Document (DCD) is a repository of information comprising the System 80+trademark Standard Plant Design. The DCD also provides that design-related information to be incorporated by reference in the design certification rule for the System 80+ Standard Plant Design. Applicants for a combined license pursuant to 10 CFR 52 must ensure that the final Design Certification Rule and the associated Statements of Consideration are used when making all licensing decisions relevant to the System 80+ Standard Plant Design. The Design Control Document contains the DCD introduction, the Certified Design Material (CDM) [i.e., ''Tier 1''] and the Approved Design Material (ADM) [i.e., ''Tier 2''] for the System 80+ Standard Plant Design. The CDM includes the following sections: (1) introductory material; (2) Certified Design Material for System 80+ systems and structures; (3) Certified Design Material for non-system-based aspects of the System 80+ certified design; (4) interface requirements; and (5) site parameters. The ADM, to the extent applicable for the System 80+ Standard Plant Design, includes: (1) the information required for the final safety analysis report under 10 CFR 50.34; (2) other relevant information required by 10 CFR 52.47; and (3) emergency operations guidelines. This volume contains all five parts of section 12 (Radiation Protection) of the ADM Design and Analysis. Topics covered are: ALARA exposures; radiation sources; radiation protection; dose assessment; and health physics program. All six parts and appendices A and B of section 13 (Conduct of Operations) of the ADM Design and Analysis are also contained in this volume. Topics covered are: organizational structure; training program; emergency planning; review and audit; plant procedures; industrial security; sabotage protection (App 13A); and vital equipment list (App 13B)
16. development of a computer program for the design of auger
African Journals Online (AJOL)
User
A program was developed for the above processes to remove the constraints of the classical ... Results of evaluation tests show that the program is efficient in the ... [FORTRAN listing fragment: C AUGER CONVEYOR DESIGN FOR CHART A MATERIALS; 100 WRITE(*,2); WRITE(*,4)'DESIGN OF SCREW FOR CHART A MATERIALS'; WRITE(* ...]
17. An Integer Programming Approach to Item Bank Design.
Science.gov (United States)
van der Linden, Wim J.; Veldkamp, Bernard P.; Reese, Lynda M.
2000-01-01
Presents an integer programming approach to item bank design that can be used to calculate an optimal blueprint for an item bank in order to support an existing testing program. Demonstrates the approach empirically using an item bank designed for the Law School Admission Test. (SLD)
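The integer-programming idea behind such blueprint calculations can be sketched with a toy model: choose a subset of pooled items that meets per-category quotas while maximizing total information. All item data, category names, and quotas below are invented, and exhaustive search stands in for a real IP solver; this is not the authors' actual model.

```python
from itertools import combinations

# Hypothetical item pool: (item_id, content_category, information_value)
pool = [
    (1, "logic", 0.9), (2, "logic", 0.7), (3, "reading", 0.8),
    (4, "reading", 0.6), (5, "logic", 0.5), (6, "reading", 0.4),
]
quota = {"logic": 2, "reading": 1}   # blueprint: items required per category
size = sum(quota.values())

def feasible(items):
    # Every content category must hit its quota exactly.
    counts = {}
    for _, cat, _ in items:
        counts[cat] = counts.get(cat, 0) + 1
    return counts == quota

# Exhaustive search over subsets stands in for an integer-programming solver.
best = max(
    (c for c in combinations(pool, size) if feasible(c)),
    key=lambda c: sum(info for _, _, info in c),
)
print(sorted(i for i, _, _ in best))  # → [1, 2, 3]
```

A real item bank design problem would hand the same 0/1 decision variables and quota constraints to an IP solver rather than enumerating subsets.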
18. System 80+trademark standard design: CESSAR design certification
International Nuclear Information System (INIS)
1990-01-01
This report has been prepared in support of the industry effort to standardize nuclear plant designs. The documents in this series describe the Combustion Engineering, Inc. System 80+ TM Standard Design
19. GRAPHIC, time-sharing magnet design computer programs at Argonne
International Nuclear Information System (INIS)
Lari, R.J.
1974-01-01
This paper describes three magnet design computer programs in use at the Zero Gradient Synchrotron of Argonne National Laboratory. These programs are used in the time sharing mode in conjunction with a Tektronix model 4012 graphic display terminal. The first program is called TRIM, the second MAGNET, and the third GFUN. (U.S.)
20. Warehouses information system design and development
Science.gov (United States)
Darajatun, R. A.; Sukanta
2017-12-01
The materials/goods handling industry is fundamental for companies to ensure the smooth running of their warehouses. Efficiency and organization within every aspect of the business is essential in order to gain a competitive advantage. The purpose of this research is the design and development of a Kanban system for inventory storage and delivery. The application aims to make inventory stock checks more efficient and effective. Users easily input finished goods from the production department, warehouse, customers, and also suppliers. The master data was designed to be as complete as possible so that the application can be used in a variety of warehouse logistics process variations. The author uses the Java programming language to develop the application, which is used for building Java Web applications, while the database used is MySQL. The system development methodology used is the Waterfall methodology, which has several stages: Analysis, System Design, Implementation, Integration, and Operation and Maintenance. In the process of collecting data, the author uses observation, interviews, and a literature review.
1. Design of multistable systems via partial synchronization
Many researchers introduce schemes for designing multistable systems by coupling two identical systems. In this paper, we introduce a generalized scheme for designing multistable systems by coupling two different dynamical systems. The basic idea of the scheme is to design partial synchronization of states between the ...
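The idea of coupling two different dynamical systems through only some of their states can be illustrated with a toy drive-response sketch. The two damped oscillators, the coupling gain, and all parameters below are invented for illustration; this is not the scheme proposed in the record.

```python
# Toy sketch: two *different* damped oscillators, coupled only through
# their first state (partial coupling); all parameters are invented.
def step(u, v, k=5.0, dt=0.001):
    du = (u[1], -1.0 * u[0] - 0.1 * u[1])          # drive system
    dv = (v[1] + k * (u[0] - v[0]),                # response: coupled state
          -2.0 * v[0] - 0.3 * v[1])                # uncoupled second state
    u = (u[0] + dt * du[0], u[1] + dt * du[1])     # forward-Euler update
    v = (v[0] + dt * dv[0], v[1] + dt * dv[1])
    return u, v

u, v = (1.0, 0.0), (-1.0, 0.0)                     # start far apart
err0 = abs(u[0] - v[0])
for _ in range(20000):                             # integrate to t = 20
    u, v = step(u, v)
err_final = abs(u[0] - v[0])                       # coupled states converge
```

With the coupling gain set to zero the two trajectories evolve independently; with it switched on, the mismatch in the coupled state shrinks even though the systems differ.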
2. Design and components of photovoltaic systems
NARCIS (Netherlands)
Sark, W.G.J.H.M. van
2012-01-01
This chapter provides an overview of the various aspects of photovoltaic (PV) system components and design. The basic performance of cells, modules, and inverters and how this is used in PV system design is described. Two case studies illustrating PV system design are presented: a hybrid system on
3. Stress analysis program system for nuclear vessel: STANSAS
International Nuclear Information System (INIS)
Okamoto, Asao; Michikami, Shinsuke
1979-01-01
IHI has developed a computer system of stress analysis and evaluation for nuclear vessels: STANSAS (STress ANalysis System for Axi-symmetric Structure). The system consists of more than twenty independent programs divided into the following six parts. 1. Programs for opening design by code rule. 2. Calculation model generating programs. 3. Load defining programs. 4. Structural analysis programs. 5. Load data/calculation results plotting programs. 6. Stress evaluation programs. Each program is connected with its pre- or post-processor through three data-bases which enable automatic data transfer. The user can make his choice of structural analysis programs in accordance with the problem to be solved. The interface to STANSAS can be easily installed in generalized structural analysis programs such as NASTRAN and MARC. For almost all tables and figures in the stress report, STANSAS has the function to print or plot out. The complicated procedures of ''Design by Analysis'' for pressure vessels have been well standardized by STANSAS. The system will give a high degree of efficiency and confidence to the design work. (author)
4. MW-Class Electric Propulsion System Designs
Science.gov (United States)
LaPointe, Michael R.; Oleson, Steven; Pencil, Eric; Mercer, Carolyn; Distefano, Salvador
2011-01-01
Electric propulsion systems are well developed and have been in commercial use for several years. Ion and Hall thrusters have propelled robotic spacecraft to encounters with asteroids, the Moon, and minor planetary bodies within the solar system, while higher power systems are being considered to support even more demanding future space science and exploration missions. Such missions may include orbit raising and station-keeping for large platforms, robotic and human missions to near earth asteroids, cargo transport for sustained lunar or Mars exploration, and at very high-power, fast piloted missions to Mars and the outer planets. The Advanced In-Space Propulsion Project, High Efficiency Space Power Systems Project, and High Power Electric Propulsion Demonstration Project were established within the NASA Exploration Technology Development and Demonstration Program to develop and advance the fundamental technologies required for these long-range, future exploration missions. Under the auspices of the High Efficiency Space Power Systems Project, and supported by the Advanced In-Space Propulsion and High Power Electric Propulsion Projects, the COMPASS design team at the NASA Glenn Research Center performed multiple parametric design analyses to determine solar and nuclear electric power technology requirements for representative 300-kW class and pulsed and steady-state MW-class electric propulsion systems. This paper describes the results of the MW-class electric power and propulsion design analysis. Starting with the representative MW-class vehicle configurations, and using design reference missions bounded by launch dates, several power system technology improvements were introduced into the parametric COMPASS simulations to determine the potential system level benefits such technologies might provide. Those technologies providing quantitative system level benefits were then assessed for technical feasibility, cost, and time to develop. Key assumptions and primary
5. System 80+trademark Standard Design: CESSAR design certification. Volume 8
International Nuclear Information System (INIS)
1997-01-01
This report has been prepared in support of the industry effort to standardize nuclear plant designs. This document describes the Combustion Engineering, Inc. System 80+trademark Standard Design. This volume contains Chapter 7 -- Instrumentation and Controls. Topics covered are: reactor protection system; engineered safety feature actuation systems; systems required for safe shutdown; safety-related display instrumentation; all other instrumentation for safety; and control systems not required for safety. Appendix 7A is included
6. High-level verification of system designs
OpenAIRE
Kundu, Sudipta
2009-01-01
Given the growing size and heterogeneity of Systems on Chip (SOC), the design process from initial specification to chip fabrication has become increasingly complex. The growing complexity provides incentive for designers to use high-level languages such as C, SystemC, and SystemVerilog for system-level design. While a major goal of these high-level languages is to enable verification at a higher level of abstraction, allowing early exploration of system-level designs, the focus so far has ...
7. Documentation of Calculation Program and Guideline for Optimal Window Design
DEFF Research Database (Denmark)
Vanhoutteghem, Lies; Svendsen, Svend
A user-friendly calculation program based on simple input data has recently been developed to assist engineers and architects during the process of selecting suitable windows for residential building design. The program is organised in four steps, which together represent an analysis of how windows in a specific building design perform with regard to energy consumption, thermal indoor environment, and cost. The analyses in the steps gradually increase in level of detail and support the design decisions throughout the design process. This document presents work done to validate the program and demonstrates...
8. ADVANCED TURBINE SYSTEM FEDERAL ASSISTANCE PROGRAM
Energy Technology Data Exchange (ETDEWEB)
Frank Macri
2003-10-01
Rolls-Royce Corporation has completed a cooperative agreement under Department of Energy (DOE) contract DE-FC21-96MC33066 in support of the Advanced Turbine Systems (ATS) program to stimulate industrial power generation markets. This DOE contract was performed during the period of October 1995 to December 2002. This final technical report, which is a program deliverable, describes all associated results obtained during Phases 3A and 3B of the contract. Rolls-Royce Corporation (formerly Allison Engine Company) initially focused on the design and development of a 10-megawatt (MW) high-efficiency industrial gas turbine engine/package concept (termed the 701-K) to meet the specific goals of the ATS program, which included single digit NOx emissions, increased plant efficiency, fuel flexibility, and reduced cost of power (i.e., $/kW). While a detailed design effort and associated component development were successfully accomplished for the 701-K engine, capable of achieving the stated ATS program goals, in 1999 Rolls-Royce changed its focus to developing advanced component technologies for product insertion that would modernize the current fleet of 501-K and 601-K industrial gas turbines. This effort would also help to establish commercial venues for suppliers and designers and assist in involving future advanced technologies in the field of gas turbine engine development. This strategy change was partly driven by the market requirements that suggested a low demand for a 10-MW aeroderivative industrial gas turbine, a change in corporate strategy for aeroderivative gas turbine engine development initiatives, and a consensus that a better return on investment (ROI) could be achieved under the ATS contract by focusing on product improvements and technology insertion for the existing Rolls-Royce small engine industrial gas turbine fleet. 9. TPX: Contractor preliminary design review. 
Volume 2, PF systems engineering International Nuclear Information System (INIS) Calvin, H.A. 1995-01-01 This system development specification covers the Poloidal Field (PF) Magnet System, WBS 14 in the Princeton Plasma Physics Laboratory TPX Program to build a tokamak fusion reactor. This specification establishes the performance, design, development and test requirements of the PF Magnet System 10. TPX: Contractor preliminary design review. Volume 2, PF systems engineering Energy Technology Data Exchange (ETDEWEB) Calvin, H.A. [Lawrence Livermore National Lab., CA (United States) 1995-07-28 This system development specification covers the Poloidal Field (PF) Magnet System, WBS 14 in the Princeton Plasma Physics Laboratory TPX Program to build a tokamak fusion reactor. This specification establishes the performance, design, development and test requirements of the PF Magnet System. 11. Status of the HTR 500 design program International Nuclear Information System (INIS) Baust, E.; Arndt, E. 1987-01-01 Since 1982 BBC/HRB have offered the HTR 500 as the follow-on project of the THTR 300, the first large pebble bed reactor. The technical concept of the HTR-500 largely corresponds to the THTR 300 which has been in operation for almost 2 years now. In developing the design concept of the HTR 500 the ideas and demands of the reactor users in the FRG interested in the HTR have been taken into consideration to a large extent. In 1982 these potential users formed a working group 'Arbeitsgemeinschaft Hochtemperaturreaktor' (AHR), representing 16 power indusry companies and in early 1983, awarded a contract to HRB to perform a conceptual design study on the HTR 500. Within this conceptual design study BBC/HRB developed the safety concept of the HTR 500, prepared a detailed description of the overall power plant, and performed a cost calculation. These activities were completed in 1984. 
Based on the positive results of this conceptual design study, BBC/HRB are expecting to be granted a design contract by the users company Hochtemperaturreaktor GmbH (HRG) to establish the final complete design plans and documents for the HTR 500. (author) 12. Conceptual design of advanced central receiver power system. Final report Energy Technology Data Exchange (ETDEWEB) Tracey, T. R. 1978-09-01 The design of a 300 MWe tower focus power plant which uses molten salt heat transfer fluids and sensible heat storage is described in detail. The system consists of nine heliostat fields with 7711 heliostats in each. Four cavity receivers are located at the top of a 155-meter tower. Tasks include: (1) review and analysis of preliminary specification; (2) parametric analysis; (3) selection of preferred configuration; (4) commercial plant conceptual design; (5) assessment of commercial-sized advanced central power system; (6) development plan; (7) program plan; (8) reports and data; (9) program management; (10) safety analysis; and (11) material study and test program. (WHK) 13. Unified Program Design: Organizing Existing Programming Models, Delivery Options, and Curriculum Science.gov (United States) Rubenstein, Lisa DaVia; Ridgley, Lisa M. 2017-01-01 A persistent problem in the field of gifted education has been the lack of categorization and delineation of gifted programming options. To address this issue, we propose Unified Program Design as a structural framework for gifted program models. This framework defines gifted programs as the combination of delivery methods and curriculum models.… 14. 
Multidisciplinary systems engineering architecting the design process CERN Document Server Crowder, James A; Demijohn, Russell 2016-01-01 This book presents Systems Engineering from a modern, multidisciplinary engineering approach, providing the understanding that all aspects of systems design, systems, software, test, security, maintenance and the full life-cycle must be factored in to any large-scale system design; up front, not factored in later. It lays out a step-by-step approach to systems-of-systems architectural design, describing in detail the documentation flow throughout the systems engineering design process. It provides a straightforward look and the entire systems engineering process, providing realistic case studies, examples, and design problems that will enable students to gain a firm grasp on the fundamentals of modern systems engineering. Included is a comprehensive design problem that weaves throughout the entire text book, concluding with a complete top-level systems architecture for a real-world design problem. 15. Quality assurance of analytical, scientific, and design computer programs for nuclear power plants International Nuclear Information System (INIS) 1994-06-01 This Standard applies to the design and development, modification, documentation, execution, and configuration management of computer programs used to perform analytical, scientific, and design computations during the design and analysis of safety-related nuclear power plant equipment, systems, structures, and components as identified by the owner. 2 figs 16. Aircraft System Design and Integration Directory of Open Access Journals (Sweden) D. P. 
Coldbeck 2000-01-01 Full Text Available In the 1980's the British aircraft industry changed its approach to the management of projects from a system where a project office would manage a project and rely on a series of specialist departments to support them to a more process oriented method, using systems engineering models, whose most outwardly visible signs were the introduction of multidisciplinary product teams. One of the problems with the old method was that the individual departments often had different priorities and projects would get uneven support. The change in the system was only made possible for complex designs by the electronic distribution of data giving instantaneous access to all involved in the project. In 1997 the Defence and Aerospace Foresight Panel emphasised the need for a system engineering approach if British industry was to remain competitive. The Royal Academy of Engineering recognised that the change in working practices also changed what was required of a chartered engineer and redefined their requirements in 1997 [1]. The result of this is that engineering degree courses are now judged against new criteria with more emphasis placed on the relevance to industry rather than on purely academic content. At the University of Glasgow it was realized that the students ought to be made aware of current working practices and that there ought to be a review to ensure that the degrees give students the skills required by industry. It was decided to produce a one week introduction course in systems engineering for Masters of Engineering (MEng students to be taught by both university lecturers and practitioners from a range of companies in the aerospace industry with the hope of expanding the course into a module. The reaction of the students was favourable in terms of the content but it seems ironic that the main criticism was that there was not enough discussion involving the students. 
This paper briefly describes the individual teaching modules and discusses the 17. Systemization of Design and Analysis Technology for Advanced Reactor International Nuclear Information System (INIS) Kim, Keung Koo; Lee, J.; Zee, S. K. 2009-01-01 The present study is performed to establish the base for the license application of the original technology by systemization and enhancement of the technology that is indispensable for the design and analysis of the advanced reactors including integral reactors. Technical reports and topical reports are prepared for this purpose on some important design/analysis methodology; design and analysis computer programs, structural integrity evaluation of main components and structures, digital I and C systems and man-machine interface design. PPS design concept is complemented reflecting typical safety analysis results. And test plans and requirements are developed for the verification of the advanced reactor technology. Moreover, studies are performed to draw up plans to apply to current or advanced power reactors the original technologies or base technologies such as patents, computer programs, test results, design concepts of the systems and components of the advanced reactors. Finally, pending issues are studied of the advanced reactors to improve the economics and technology realization 18. Analysing Student Programs in the PHP Intelligent Tutoring System Science.gov (United States) Weragama, Dinesha; Reye, Jim 2014-01-01 Programming is a subject that many beginning students find difficult. The PHP Intelligent Tutoring System (PHP ITS) has been designed with the aim of making it easier for novices to learn the PHP language in order to develop dynamic web pages. Programming requires practice. This makes it necessary to include practical exercises in any ITS that… 19. 
Programming Embedded Systems With C and GNU Development Tools CERN Document Server Barr, Michael 2009-01-01 Whether you're writing your first embedded program, designing the latest generation of hand-held whatchamacalits, or managing the people who do, this book is for you. Programming Embedded Systems will help you develop the knowledge and skills you need to achieve proficiency with embedded software. 20. A Theory of Available-by-Design Communicating Systems OpenAIRE López, Hugo A.; Nielson, Flemming; Nielson, Hanne Riis 2016-01-01 Choreographic programming is a programming-language design approach that drives error-safe protocol development in distributed systems. Starting from a global specification (choreography) one can generate distributed implementations. The advantages of this top-down approach lie in the correctness-by-design principle, where implementations (endpoints) generated from a choreography behave according to the strict control flow described in the choreography, and do not deadlock. Motivated by chall... 1. On the design of reversible QDCA systems. Energy Technology Data Exchange (ETDEWEB) DeBenedictis, Erik P.; Frank, Michael P. (Florida State University, Tallahassee, FL); Ottavi, Marco; Frost-Murphy, Sarah E. (University of Notre Dame, Notre Dame, IN) 2006-10-01 This work is the first to describe how to go about designing a reversible QDCA system. The design space is substantial, and there are many questions that a designer needs to answer before beginning to design. This document begins to explicate the tradeoffs and assumptions that need to be made and offers a range of approaches as starting points and examples. This design guide is an effective tool for aiding designers in creating the best quality QDCA implementation for a system. 2. Participatory simulation in hospital work system design OpenAIRE Andersen, Simone Nyholm; Broberg, Ole; Havn, Erling C. 
2016-01-01 When ergonomic considerations are integrated into the design of work systems, both overall system performance and employee well-being improve. A central part of integrating ergonomics in work system design is to benefit from emplo y-ees’ knowledge of existing work systems. Participatory simulation (PS) is a method to access employee knowledge; namely employees are involved in the simulation and design of their own future work systems through the exploration of models representing work system ... 3. Systems engineering programs for geologic nuclear waste disposal Energy Technology Data Exchange (ETDEWEB) Klett, R. D.; Hertel, Jr., E. S.; Ellis, M. A. 1980-06-01 The design sequence and system programs presented begin with general approximate solutions that permit inexpensive analysis of a multitude of possible wastes, disposal media, and disposal process properties and configurations. It then continues through progressively more precise solutions as parts of the design become fixed, and ends with repository and waste form optimization studies. The programs cover both solid and gaseous waste forms. The analytical development, a program listing, a users guide, and examples are presented for each program. Sensitivity studies showing the effects of disposal media and waste form thermophysical properties and repository layouts are presented as examples. 4. Game Programming Course - Creative Design and Development Directory of Open Access Journals (Sweden) Jaak Henno 2008-10-01 Full Text Available Rapid developments of the Electronic Entertainment - computer and video games, virtual environments, the "Games 3.0" revolution - influences also courses about Games and Virtual Environments. 
In the following is discussed the course “Games and Virtual Environments” presented in the fall 2007 term in Tallinn University of Technology; the main emphasis of the course was not on programming technology, but on understanding games as a special form of communication and exploring specific features of this form. 5. TFTR neutral beam systems conceptual design Energy Technology Data Exchange (ETDEWEB) 1976-03-01 The functions, design requirements, and design descriptions of the injection system are described. Cost summaries are given for each system and subsystem. The costs presented are for: materials procurement; and shipping, assembly, and installation at the Princeton site. (MOW) 6. Biomass energy systems program summary Energy Technology Data Exchange (ETDEWEB) None 1980-07-01 Research programs in biomass which were funded by the US DOE during fiscal year 1978 are listed in this program summary. The conversion technologies and their applications have been grouped into program elements according to the time frame in which they are expected to enter the commercial market. (DMC) 7. Development of design program for air handling units Energy Technology Data Exchange (ETDEWEB) Ham, J.K.; Kim, J.H.; Kim, Y.K.; Kim, Y.I.; Kang, P.Y. [Hyundai Heavy Industries Co., Ltd. (Korea) 2000-11-01 An air handling unit(AHU) has been usually designed by manual calculations. Drawing works together with design calculations should be redone for every designing work, and also be needed to make some corrections of them. In order to design the AHU more efficiently, an AHU program has been developed. The developed program on the Windows environment is operated by the graphic user interface(GUI) realized using the Visual Basic Interpreter. The program provides calculation sheet of coils, weights and pressures in a MS-Excel file format as well as design drawing of the AHU in a Auto CAD file format idealized by AutoLISP. 
These files, in common commercial formats, make it easier for a designer to transfer design results to another company for bidding via e-mail. (author). 5 refs., 9 figs., 3 tabs.

9. NRC review of passive reactor design certification testing programs: Overview, progress, and regulatory perspective. Energy Technology Data Exchange (ETDEWEB). Levin, A.E. 1995-09-01. New reactor designs, employing passive safety systems, are currently under development by reactor vendors for certification under the U.S. Nuclear Regulatory Commission's (NRC's) design certification rule. The vendors have established testing programs to support the certification of the passive designs and to meet regulatory requirements for demonstration of passive safety system performance. The NRC has therefore developed a process for the review of the vendors' testing programs and for incorporation of the results of those reviews into the safety evaluations for the passive plants.
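The kind of coil calculation sheet such an AHU program exports can be sketched in a few lines. This is an illustrative sketch only: the function names, the sensible-heat inputs, and the CSV stand-in for the MS-Excel export are assumptions, not the published program's actual design.

```python
import csv
import io

def coil_heating_load(airflow_m3_s, t_in_c, t_out_c, rho=1.2, cp=1005.0):
    """Sensible coil heating load in kW: Q = rho * V * cp * dT / 1000."""
    return rho * airflow_m3_s * cp * (t_out_c - t_in_c) / 1000.0

def write_calc_sheet(rows, stream):
    """Write a coil calculation sheet as CSV (a stand-in for the Excel export)."""
    writer = csv.writer(stream)
    writer.writerow(["tag", "airflow_m3_s", "t_in_C", "t_out_C", "load_kW"])
    for tag, v, t_in, t_out in rows:
        writer.writerow([tag, v, t_in, t_out,
                         round(coil_heating_load(v, t_in, t_out), 2)])

buf = io.StringIO()
write_calc_sheet([("AHU-01", 2.5, 10.0, 30.0)], buf)
print(buf.getvalue().splitlines()[1])  # AHU-01,2.5,10.0,30.0,60.3
```

A real tool would write a genuine .xlsx workbook and a drawing file; the point is only that each design row reduces to a deterministic calculation plus a file export.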
This paper discusses progress in the test program reviews and also addresses unique regulatory aspects of those reviews.

10. Designing PV Incentive Programs to Promote Performance: A Review of Current Practice. Energy Technology Data Exchange (ETDEWEB). Barbose, Galen; Wiser, Ryan; Bolinger, Mark. 2007-06-01. Increasing levels of financial support for customer-sited photovoltaic (PV) systems, provided through publicly funded incentive programs, have heightened concerns about the long-term performance of these systems. Given the barriers that customers face in ensuring that their PV systems perform well, and the responsibility that PV incentive programs bear to ensure that public funds are prudently spent, these programs should, and often do, play a critical role in ensuring that PV systems receiving incentives perform well. To provide a point of reference for assessing the current state of the art, and to inform program design efforts going forward, we examine the approaches to encouraging PV system performance used by 32 prominent PV incentive programs in the U.S. We identify eight general strategies, or groups of related strategies, that these programs have used to address performance issues, and highlight important differences in the implementation of these strategies among programs.

11. Windows Calorimeter Control (WinCal) program computer software design description. International Nuclear Information System (INIS). Pertzborn, N.F. 1997-01-01. The Windows Calorimeter Control (WinCal) Program System Design Description contains a discussion of the design details of the WinCal product. Information in this document will assist a developer in maintaining the WinCal system. The content of this document follows the guidance in WHC-CM-3-10, Software Engineering Standards, Standard for Software User Documentation.

12. Hanford Site waste tank farm facilities design reconstitution program plan. International Nuclear Information System (INIS). Vollert, F.R.
1994-01-01. Throughout the commercial nuclear industry, the lack of design reconstitution programs prior to the mid-1980s has resulted in documentation inadequate to support configuration changes or safety evaluations at operating facilities. As a result, many utilities have completed or have ongoing design reconstitution programs and have discovered that, without sufficient pre-planning, such a program can be very expensive and may produce end products inconsistent with the facility's needs or expectations. A design reconstitution program plan is developed here for the Hanford waste tank farms facility as a consequence of the DOE standard on operational configuration management. This design reconstitution plan provides for the recovery or regeneration of design requirements and bases, the compilation of Design Information Summaries, and a methodology to disposition items open for regeneration that were discovered during the development of the Design Information Summaries. Implementation of this plan will culminate in an end product of about 30 Design Information Summary documents. These documents will be developed to identify tank farms facility design requirements and design bases and thereby capture the technical baselines of the facility. This plan identifies the methodology necessary to systematically recover documents that are sources of design input information, and to evaluate and disposition open items or regeneration items discovered during the development of the Design Information Summaries or during the verification and validation processes. These development activities will be governed and implemented by three procedures and a guide that are to be developed as an outgrowth of this plan.

13.
Designing fractional factorial split-plot experiments using integer programming. DEFF Research Database (Denmark). Capehart, Shay R.; Keha, Ahmet; Kulahci, Murat. 2011-01-01. ... factorial (FF) design, with the restricted randomisation structure to account for the whole plots and subplots. We discuss the formulation of FFSP designs using integer programming (IP) to achieve various design criteria. We specifically look at the maximum number of clear two-factor interactions...

14. COMPLEX DESIGN OF INTEGRATED MATERIAL FLOW SYSTEMS. OpenAIRE. PÉTER TELEK; TAMÁS BÁNYAI. 2013-01-01. Material flow systems in general have a very complex structure and relations. During the design, building, and operation of complex systems there are many different problems. This paper shows some usable solutions for the design and selection process of material flow systems. We give a short overview of the theoretical principles of the design process, then describe the basic design tasks, the possibilities of using heuristic algorithms, and the modelling of material flow systems. At the en...

15. Risk Informed Design as Part of the Systems Engineering Process. Science.gov (United States). Deckert, George. 2010-01-01. This slide presentation reviews the importance of Risk Informed Design (RID) as a feature of the systems engineering process. RID is based on the principle that risk is a design commodity, such as mass, volume, cost, or power. It also reviews Probabilistic Risk Assessment (PRA) as it is used in the product life cycle in the development of NASA's Constellation Program.

16.
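The regular two-level fractional factorial construction underlying split-plot designs like those in entry 13 can be sketched in a few lines. This is an illustration only: the paper's actual contribution, choosing generators and the whole-plot structure via integer programming to maximize clear two-factor interactions, is not reproduced here, and the function and generator names are invented.

```python
from itertools import product

def fractional_factorial(basic, generators):
    """Build a regular two-level fractional factorial design.

    'basic' is the number of independent +/-1 factors; each generator is a
    tuple of basic-factor indices whose product defines an extra column
    (e.g. (0, 1, 2) means D = ABC).
    """
    runs = []
    for levels in product((-1, 1), repeat=basic):
        row = list(levels)
        for gen in generators:
            col = 1
            for i in gen:
                col *= levels[i]  # generated factor is a product of basics
            row.append(col)
        runs.append(tuple(row))
    return runs

# A 2^(4-1) design with D = ABC (defining relation I = ABCD)
design = fractional_factorial(3, [(0, 1, 2)])
print(len(design))  # 8 runs instead of the 16 of a full 2^4 factorial
```

An IP formulation would then search over such generator choices subject to whole-plot constraints; the enumeration above is only the building block.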
Application and design of solar photovoltaic system. International Nuclear Information System (INIS). Li Tianze; Lu Hengwei; Jiang Chuan; Hou Luan; Zhang Xia. 2011-01-01. Solar modules; power electronic equipment including the charge-discharge controller, the inverter, test instrumentation, and computer monitoring; and the storage battery or other energy storage and auxiliary generating plant make up the photovoltaic system described in the thesis. PV system design should meet the load supply requirements, keep system cost low, give careful consideration to the design of software and hardware, and complete the general software design before the hardware design. Taking the design of a PV system as an example, the paper analyses the design of the system software and hardware, the economic benefit, and the basic ideas and steps of the installation and connection of the system. It elaborates on information acquisition, the software and hardware design of the system, and the evaluation and optimization of the system. Finally, it discusses the application and prospects of photovoltaic technology in outer space, solar lamps, freeways, and communications.

17. Integrated design for space transportation system. CERN Document Server. Suresh, B N. 2015-01-01. The book addresses the overall integrated design aspects of a space transportation system involving several disciplines such as propulsion, vehicle structures, aerodynamics, flight mechanics, navigation, guidance and control systems, stage auxiliary systems, and thermal systems, and discusses the systems approach to design, trade-off analysis, system life cycle considerations, important aspects of mission management, risk assessment, etc. Several books have been written to describe the design aspects of individual areas, viz., propulsion, aerodynamics, structures, control, etc., but there is no book which presents space transportation system (STS) design in an integrated manner.
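The load-driven sizing step that entry 16 says PV system design should start from can be illustrated with a rule-of-thumb calculation. This is a hedged sketch, not the thesis's method: the function name, the efficiency and autonomy figures, and the battery parameters are illustrative assumptions.

```python
def size_pv_system(daily_load_wh, peak_sun_hours, system_eff=0.75,
                   autonomy_days=2, depth_of_discharge=0.5, battery_v=24.0):
    """Rule-of-thumb stand-alone PV sizing (illustrative figures only).

    Returns (array peak watts, battery capacity in Ah) so the array covers
    the daily load and the battery rides through the autonomy period.
    """
    array_wp = daily_load_wh / (peak_sun_hours * system_eff)
    battery_ah = daily_load_wh * autonomy_days / (depth_of_discharge * battery_v)
    return round(array_wp), round(battery_ah)

# 2.4 kWh/day load with 4 peak sun hours
print(size_pv_system(2400, 4.0))  # (800, 400)
```

A real design would iterate this against component catalogues, temperature derating, and cost, which is the software/hardware co-design the abstract describes.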
This book attempts to fill this gap by addressing the systems approach to STS design, highlighting the integrated design aspects, the interactions between various subsystems, and their interdependencies. The main focus is on the complex integrated design needed to arrive at an optimal, robust, and cost-effective space transportation system. The orbit...

19. The NASA computer aided design and test system. Science.gov (United States). Gould, J. M.; Juergensen, K. 1973-01-01. A family of computer programs facilitating the design, layout, evaluation, and testing of digital electronic circuitry is described.
CADAT (the computer aided design and test system) is intended for use by NASA and its contractors and is aimed predominantly at providing cost-effective microelectronic subsystems based on custom-designed metal oxide semiconductor (MOS) large scale integrated circuits (LSICs). CADAT software can be easily adopted by installations with a wide variety of computer hardware configurations. Its structure permits ease of update to more powerful component programs and to newly emerging LSIC technologies. The components of the CADAT system are described, stressing the interaction of programs rather than details of coding or algorithms. The CADAT system provides computer aids to derive and document the design intent, includes powerful automatic layout software, permits detailed geometry checks and performance simulation based on mask data, and furnishes test pattern sequences for hardware testing.

20. Space Launch System Ascent Flight Control Design. Science.gov (United States). Orr, Jeb S.; Wall, John H.; VanZwieten, Tannen S.; Hall, Charles E. 2014-01-01. A robust and flexible autopilot architecture for NASA's Space Launch System (SLS) family of launch vehicles is presented. The SLS configurations represent a potentially significant increase in complexity and performance capability when compared with other manned launch vehicles. It was recognized early in the program that a new, generalized autopilot design should be formulated to fulfill the needs of this new space launch architecture. The present design concept is intended to leverage existing NASA and industry launch vehicle design experience and to maintain the extensibility and modularity necessary to accommodate multiple vehicle configurations, while relying on proven and flight-tested control design principles for large boost vehicles.
The SLS flight control architecture combines a digital three-axis autopilot with traditional bending filters to support robust active or passive stabilization of the vehicle's bending and sloshing dynamics, using optimally blended measurements from multiple rate gyros on the vehicle structure. The algorithm also relies on a pseudo-optimal control allocation scheme to maximize the performance capability of multiple vectored engines while accommodating throttling and engine failure contingencies in real time with negligible impact on stability characteristics. The architecture supports active in-flight disturbance compensation through the use of nonlinear observers driven by acceleration measurements. Envelope expansion and robustness enhancement are obtained through the use of a multiplicative forward gain modulation law based upon a simple model reference adaptive control scheme.

1. Large coil test facility instrumentation system design. International Nuclear Information System (INIS). Walstrom, P.L.; Fletcher, W.M.; Goddard, J.S.; Murphy, J.L. 1979-01-01. The design of the instrumentation system for the Large Coil Test Facility (LCTF) is described. Sensors are divided into two categories: coil diagnostic sensors, installed in the test coils, and facility sensors, installed in the various systems of the test facility in order to monitor their performance. After signal conditioning, data from the "fast" channels are multiplexed, digitized, and stored in four microcomputer systems programmed to operate in a ring buffer mode, recording data before and after receipt of a random trigger from the normal-zone detection circuitry. "Slow" channels are digitized by a scanner and buffered by a microcomputer. Selected data channels are continuously displayed digitally or recorded on strip chart recorders. The microcomputer systems are interfaced to a central minicomputer system for display and archival storage. Facility variables are digitized by a separate scanner system.
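The pre/post-trigger ring-buffer recording just described for the LCTF "fast" channels (continuously overwrite old samples until a trigger, then capture a fixed number of post-trigger samples) can be sketched as follows. This is a minimal illustration of the scheme, not the facility's actual microcomputer code; the class and parameter names are invented.

```python
from collections import deque

class TriggeredRecorder:
    """Ring buffer keeping 'pre' samples before a trigger, then 'post' after it."""

    def __init__(self, pre, post):
        self.ring = deque(maxlen=pre)  # oldest samples are discarded automatically
        self.post = post
        self.capture = None            # becomes the frozen record once triggered

    def push(self, sample):
        if self.capture is None:
            self.ring.append(sample)   # pre-trigger: keep overwriting
        elif len(self.capture) < len(self.ring) + self.post:
            self.capture.append(sample)  # post-trigger: fill the fixed tail

    def trigger(self):
        self.capture = list(self.ring)  # freeze the pre-trigger history

rec = TriggeredRecorder(pre=3, post=2)
for s in range(10):
    rec.push(s)
    if s == 6:
        rec.trigger()  # e.g. the normal-zone detector fires here
print(rec.capture)  # [4, 5, 6, 7, 8]
```

The appeal of the scheme is that the trigger can arrive at a random time and the record still contains the samples leading up to the event.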
Certain critical fault variables are compared with set-point values and, if they are out of range, cause a programmable logic controller to initiate an emergency coil energy dump. 2 refs.

2. System 80+™ Standard Design: CESSAR design certification. International Nuclear Information System (INIS). 1990-01-01. This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This Volume 8 provides a description of instrumentation and controls.

3. System 80+™ Standard Design: CESSAR design certification. International Nuclear Information System (INIS). 1990-01-01. This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This Volume 18 provides Appendix B, Probabilistic Risk Assessment.

4. System 80+™ Standard Design: CESSAR design certification. International Nuclear Information System (INIS). 1990-01-01. This report, entitled Combustion Engineering Standard Safety Analysis Report -- Design Certification (CESSAR-DC), has been prepared in support of the industry effort to standardize nuclear plant designs. These volumes describe the Combustion Engineering, Inc. System 80+™ Standard Design. This Volume 17 provides Appendix A of this report, closure of Unresolved and Generic Safety Issues.

5. Interactive computer program for optimal designs of longitudinal cohort studies. Science.gov (United States). Tekle, Fetene B; Tan, Frans E S; Berger, Martijn P F. 2009-05-01. Many large-scale longitudinal cohort studies have been carried out or are ongoing in different fields of science.
Such studies need careful planning to obtain the desired quality of results with the available resources. In the past, a number of studies have examined optimal designs for longitudinal research. However, no computer program was yet available to help researchers plan their longitudinal cohort design in an optimal way. A new interactive computer program for the optimization of designs of longitudinal cohort studies is therefore presented. The computer program helps users to identify the optimal cohort design, with an optimal number of repeated measurements per subject and an optimal allocation of time points within a given study period. Further, users can compute the loss in relative efficiency of any alternative design compared with the optimal one. The computer program is described and illustrated using a practical example.

6. Operating system design: the Xinu approach, Linksys version. CERN Document Server. Press, CRC. 2011-01-01. Contents include: Introduction and Overview; Operating Systems; Approach Used in the Text; A Hierarchical Design; The Xinu Operating System; What an Operating System Is Not; An Operating System Viewed from the Outside; Remainder of the Text; Concurrent Execution and Operating System Services; Programming Models for Multiple Activities; Operating System Services; Concurrent Processing Concepts and Terminology; Distinction Between Sequential and Concurrent Programs; Multiple Processes Sharing a Single Piece of Code; Process Exit and Process Termination; Shared Memory, Race Conditions, and Synchronization; Semaphores and Mutual Exclusion; Type Nam...

7. A Learning Tool and Program Development for Mechatronics Design Education. Science.gov (United States). Iribe, Masatsugu; Shirahata, Akihiro; Kita, Hiromasa; Sasashige, Yousuke; Dasai, Ryoichi. In this paper we propose a new type of educational program for mechatronics design, one which contributes to developing the physical sense and problem-solving ability of students who study mechatronics design.
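The relative-efficiency comparison performed by the cohort-design program in entry 5 can be illustrated with a toy calculation. This sketch is an assumption-laden stand-in for the paper's mixed-model machinery: it uses the simple OLS slope variance for a single subject with independent errors, and the time allocations shown are invented.

```python
def slope_variance(time_points, sigma2=1.0):
    """Var of the OLS slope for one subject measured at the given times,
    assuming independent errors: sigma^2 / sum((t - tbar)^2)."""
    tbar = sum(time_points) / len(time_points)
    ssx = sum((t - tbar) ** 2 for t in time_points)
    return sigma2 / ssx

# Two 4-measurement allocations over a unit study period
optimal = [0, 0, 1, 1]           # extreme allocation (best for a linear trend)
equispaced = [0, 1/3, 2/3, 1]    # common equidistant alternative

# Relative efficiency of the alternative vs. the optimal design
re = slope_variance(optimal) / slope_variance(equispaced)
print(round(re, 3))  # 0.556
```

Read this as: the equispaced design is only about 56% as efficient for estimating the time trend, i.e. it would need roughly 1/0.556 times as many subjects for the same precision, which is exactly the kind of loss the program lets users quantify.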
For this program we provide a new handicraft kit for a 4-wheeled car composed of inexpensive, commonplace parts; the performance of the assembled car is sensitive to its assembly arrangement. We then implemented the program with the handicraft kit for university freshmen, verified its effectiveness, and report the results of the program.

8. The System 80+ Standard Plant design control document. Volume 16. International Nuclear Information System (INIS). 1997-01-01. This Design Control Document (DCD) is a repository of information comprising the System 80+™ Standard Plant Design. The DCD also provides the design-related information to be incorporated by reference in the design certification rule for the System 80+ Standard Plant Design. Applicants for a combined license pursuant to 10 CFR 52 must ensure that the final Design Certification Rule and the associated Statements of Consideration are used when making all licensing decisions relevant to the System 80+ Standard Plant Design. The Design Control Document contains the DCD introduction, the Certified Design Material (CDM) [i.e., "Tier 1"] and the Approved Design Material (ADM) [i.e., "Tier 2"] for the System 80+ Standard Plant Design. The CDM includes the following sections: (1) introductory material; (2) Certified Design Material for System 80+ systems and structures; (3) Certified Design Material for non-system-based aspects of the System 80+ certified design; (4) interface requirements; and (5) site parameters. The ADM, to the extent applicable for the System 80+ Standard Plant Design, includes: (1) the information required for the final safety analysis report under 10 CFR 50.34; (2) other relevant information required by 10 CFR 52.47; and (3) emergency operations guidelines. This volume contains all 3 parts of Section 14 (Initial Test Program) of the ADM Design and Analysis. Topics covered are: PSAR information; FSAR information; certified design material.
Part 1 of Section 15 (Accident Analysis) of the ADM Design and Analysis is also included in this volume. The topic of Part 1 is increase in heat removal.

9. Quality assurance program for isotopic power systems. International Nuclear Information System (INIS). Hannigan, R.L.; Harnar, R.R. 1982-12-01. This report summarizes the Sandia National Laboratories quality assurance program that applies to non-weapon (reimbursable) radioisotopic thermoelectric generators. The program has been implemented over the past 16 years on power supplies used in various space and terrestrial systems. The quality assurance (QA) activity of the program is in support of the Department of Energy, Office of Space Nuclear Projects. Basic elements of the program are described in the report, and examples of program documentation are presented.

10. Auditing and financial system. Preliminary systems design: management summary report. Energy Technology Data Exchange (ETDEWEB). 1980-01-01. The decade of the 1980s will see an unprecedented mobilization of the economic and technological resources of the United States in an attempt to regain energy independence. Deregulation of domestic petroleum prices, along with a worldwide energy shortage, has created a search for oil and gas of incomparable magnitude in this country. This is paralleled by a massive effort to convert to alternative fuels, such as coal, and to develop new energy sources, such as oil shale. The Federal government will play a significant role in this effort in many ways. One of the most important is increased leasing of Federal lands for energy exploration. The Geological Survey's Conservation Division is responsible for regulating the development of mineral resources on Federally leased lands, and for collecting the rents and royalties due from these lands. Effective management and administration in these volatile areas of development present a significant challenge to the Conservation Division.
The objective of the Division is to reduce the regulatory burden on industry while effectively and efficiently discharging its responsibility. The development of the Improved Royalty Management Program is a major step in accomplishing these goals. This Management Summary Report marks the completion of the preliminary systems design of the Auditing and Financial System, the first phase of the Improved Royalty Management Program (IRMP). The purpose of this document is to summarize information about the design and implementation of the Auditing and Financial System.

11. Anatomy of a control system; a system designer's view. International Nuclear Information System (INIS). Magyary, S. 1993-05-01. The Advanced Light Source (ALS) control system is quite unconventional in its design and implementation. This paper discusses the system design considerations, the actual implementation, hardware and software costs, and the measured performance across all layers of the system.

12. Program status, 3rd quarter, FY 1994: Confinement systems programs. Energy Technology Data Exchange (ETDEWEB). 1994-07-19. Highlights of DIII-D research operations: began experimental research operations; successfully passed the radiative divertor project review; presented papers at the PSI, Diagnostics, and EPS meetings and prepared IAEA synopses; a new computer speeds up data acquisition; completed installation of the FWCD antennas with Faraday shields; and completed the report of the radiative divertor preliminary design with the review committee. Summaries are given of progress in research programs; operations; mechanical engineering; electrical engineering; the upgrade project; operations support; and collaborative efforts. Brief summaries are given of progress on the International Cooperation task, which includes JET, ASDEX, TEXTOR, TORE SUPRA, JAERI, TRINTI, T-10, and ARIES support. Work in support of the development plan for the TPX (Tokamak Physics Experiment) goals and milestones continued.
Progress in improving existing models and codes, leading to improved understanding of experiments, is reported. Highlights from the User Service Center: 18 gigabytes of disks were purchased for exclusive fusion use; a Hewlett-Packard 9000 Series 800 T500 computer was selected as the fusion complete server; the first VAX was removed from the USC cluster; a security vulnerability in the HP VUE software was corrected; and a cleanup script was developed for the NERSC Cray system. A list of personnel and their assignments is given for the ITER Design Engineering task.

13. Detecting Data Concealment Programs Using Passive File System Analysis. Science.gov (United States). Davis, Mark; Kennedy, Richard; Pyles, Kristina; Strickler, Amanda; Shenoi, Sujeet. Individuals who wish to avoid leaving evidence on computers and networks often use programs that conceal data from conventional digital forensic tools. This paper discusses the application of passive file system analysis techniques to detect trace evidence left by data concealment programs. In addition, it describes the design and operation of Seraph, a tool that determines whether certain encryption, steganography, and erasing programs were used to hide or destroy data.

14. A framework for AI-based nuclear design support system. International Nuclear Information System (INIS). Furuta, Kazuo; Kondo, Shunsuke. 1991-01-01. Many computer programs are being developed and used for the analytic tasks in nuclear reactor design, but experienced designers are still responsible for most of the synthetic tasks, which are not amenable to algorithmic computer processes. Artificial intelligence (AI) is a promising technology for dealing with these intractable tasks in design. In the development of AI-based design support systems, it is desirable to choose a comprehensive framework based on the scientific theory of design.
In this work a framework for AI-based design support systems for nuclear reactor design is proposed, based on an exploration model of design. The fundamental architecture of this framework is described, with emphasis on knowledge representation, context management, and design planning. (author)

15. Framework for AI-based nuclear reactor design support system. International Nuclear Information System (INIS). Furuta, Kazuo; Kondo, Shunsuke. 1992-01-01. Many computer programs are being developed and used for the analytic tasks in nuclear reactor design, but experienced designers are still responsible for most of the synthetic tasks, which are not amenable to algorithmic computer processes. Artificial intelligence (AI) is a promising technology for dealing with these intractable tasks in design. In the development of AI-based design support systems, it is desirable to choose a comprehensive framework based on the scientific theory of design. In this work a framework for AI-based design support systems for nuclear reactor design is proposed, based on an explorative abduction model of design. The fundamental architecture of this framework is described, with emphasis on knowledge representation, context management, and design planning. (author)

16. System level ESD co-design. CERN Document Server. Gossner, Harald. 2015-01-01. An effective and cost-efficient protection of electronic systems against ESD stress pulses specified by IEC 61000-4-2 is paramount for any system design. This pioneering book presents the collective knowledge of system designers and system testing experts and state-of-the-art techniques for achieving efficient system-level ESD protection with minimum impact on system performance. All categories of system failures, ranging from 'hard' to 'soft' types, are considered in reviewing the simulation and tool applications that can be used.
The principal focus of System Level ESD Co-Design is defining and establishing the importance of co-design efforts from both the IC supplier and system builder perspectives. ESD designers often face challenges in meeting customers' system-level ESD requirements and, therefore, a clear understanding of the techniques presented here will facilitate effective simulation approaches leading to better solutions without compromising system performance. With contributions from Robert Asht...

17. Nonfunctional requirements in systems analysis and design. CERN Document Server. Adams, Kevin MacG. 2015-01-01. This book will help readers gain a solid understanding of the non-functional requirements inherent in systems design endeavors. It contains essential information for those who design, use, and maintain complex engineered systems, including experienced designers, teachers of design, system stakeholders, and practicing engineers. Coverage approaches non-functional requirements in a novel way by presenting a framework of four systems concerns into which the 27 major non-functional requirements fall: sustainment, design, adaptation, and viability. Within this model, the text proceeds to define each non-functional requirement, to specify how each is treated as an element of the system design process, and to develop an associated metric for its evaluation. Systems are designed to meet specific functional needs. Because non-functional requirements are not directly related to tasks that satisfy these proposed needs, designers and stakeholders often fail to recognize the importance of such attributes as availability, su...

18. XTAL system of crystallographic programs: programmer's manual. International Nuclear Information System (INIS). Hall, S.R.; Stewart, J.M.; Norden, A.P.; Munn, R.J.; Freer, S.T. 1980-02-01. This document establishes the basis for the collaborative writing of transportable computer programs for X-ray crystallography.
The concepts and general-purpose utility subroutines described here can be readily adapted to other scientific calculations. The complete system of crystallographic programs and subroutines is called XTAL and replaces the XRAY (6,7,8) system of programs. The coding language for the XTAL system is RATMAC (5). The XTAL system contains routines for controlling the execution of application programs. In this sense it forms a sub-operating system that presents the same computational environment to the user and programmer irrespective of the operating system in use at a particular installation. These control routines replace all FORTRAN I/O code, supply character reading and writing, supply binary file reading and writing, serve as a support library for application programs, and provide for interprogram communication.

19. Empowerment and programs designed to address domestic violence. Science.gov (United States). Kasturirangan, Aarati. 2008-12-01. Programs designed to address domestic violence often name empowerment of women as a major program goal. However, programs do not necessarily define what empowerment for survivors of domestic violence entails. This review examines the literature on empowerment, including characteristics of an empowerment process and critiques of empowerment. Diversity of goals for empowerment and differences in access to resources for women experiencing domestic violence are explored as two major factors that should inform program development. Recommendations are offered for developing programs to address domestic violence that support women engaged in an empowerment process.

20. Design Process-System and Methodology of Design Research. Science.gov (United States). Bashier, Fathi. 2017-10-01. Studies have recognized the failure of the traditional design approach, both in practice and in the studio.
They showed that design problems today are too complex for the traditional approach to cope with, and they reflected a new interest in better-quality design services to meet the challenges of our time. In the mid-1970s and early 1980s there was a significant shift in focus within the field of design research towards the aim of creating a 'design discipline'. The problem, as will be discussed, is the lack of an integrated theory of design knowledge that can explicitly describe the design process in a coherent way. As a consequence, the traditional approach fails to operate systematically, in a disciplinary manner. Addressing this problem is the primary goal of the research study in the design process currently being conducted in the research-based master studio at Wollega University, Ethiopia. The research study seeks to contribute towards a disciplinary approach through a proper understanding of the mechanism of knowledge development within design process systems. This is the task of the 'theory of design knowledge'. In this article the research project is introduced, and a model of the design process-system is developed in the studio as a research plan and a tool of design research at the same time. Based on data drawn from students' research projects, the theory of design knowledge is developed and empirically verified through the research project.

1. High-speed digital system design
CERN Document Server
Davis, Justin
2006-01-01
High-Speed Digital System Design bridges the gap from theory to implementation in the real world. Systems with clock speeds in the low megahertz range qualify as high-speed. Proper design results in quality digital transmissions and lowers the chance for errors. This book is for computer and electrical engineers who may or may not have learned electromagnetic theory. The presentation style allows readers to quickly begin designing their own high-speed systems and diagnosing existing designs for errors.

2.
An intelligent design methodology for nuclear power systems
International Nuclear Information System (INIS)
Nassersharif, B.; Martin, R.P.; Portal, M.G.; Gaeta, M.J.
1989-01-01
The goal of this investigation is to research possible methodologies for automating the design of, specifically, nuclear power facilities; the work is, however, relevant to all thermal power systems. The strategy of this research has been to concentrate on individual areas of the thermal design process, investigate the procedures performed, develop methodology to emulate that behavior, and prototype it in the form of a computer program. The design process has been generalized as follows: problem definition, design definition, component selection procedure, optimization and engineering analysis, and testing and final design, with the problem definition defining constraints that are applied to the selection procedure as well as to the design definition. The result of this research is a prototype computer program applying an original procedure for the selection of the best set of real components that would be used in constructing a system with desired performance characteristics. The mathematical model used for the selection procedure is possibility theory

3. Report to the Board of Regents State University System of Florida. Review of Programs: Architecture, Architectural Technology, Landscape Architecture, Interior Design, Construction and Construction Technology, Building Construction, Urban and Regional Planning
Science.gov (United States)
McMinn, William G.
An evaluation and report were done on the status of programs in architecture and related fields in the Florida State University System as a follow-up to a 1983 evaluation. The evaluation involved self-studies prepared by each program and a series of site visits to each of seven campuses and two centers with programs under review. These institutions…

4.
Making embedded systems: design patterns for great software
CERN Document Server
White, Elecia
2011-01-01
Interested in developing embedded systems? Since they don't tolerate inefficiency, these systems require a disciplined approach to programming. This easy-to-read guide helps you cultivate a host of good development practices, based on classic software design patterns and new patterns unique to embedded programming. Learn how to build system architecture for processors, not operating systems, and discover specific techniques for dealing with hardware difficulties and manufacturing requirements. Written by an expert who's created embedded systems ranging from urban surveillance and DNA scanner

5. Challenges in Designing Mechatronic Systems
DEFF Research Database (Denmark)
Torry-Smith, Jonas; Qamar, Ahsan; Achiche, Sofiane
2013-01-01
Development of mechatronic products is traditionally carried out by several design experts from different design domains. Performing development of mechatronic products is thus greatly challenging. In order to tackle this, the critical challenges in mechatronics have to be well understood and well...

6. Designing Camera Networks by Convex Quadratic Programming
KAUST Repository
Ghanem, Bernard
2015-05-04
In this paper, we study the problem of automatic camera placement for computer graphics and computer vision applications. We extend the problem formulations of previous work by proposing a novel way to incorporate visibility constraints and camera-to-camera relationships. For example, the placement solution can be encouraged to have cameras that image the same important locations from different viewing directions, which can enable reconstruction and surveillance tasks to perform better. We show that the general camera placement problem can be formulated mathematically as a convex binary quadratic program (BQP) under linear constraints.
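The BQP formulation named in this abstract can be illustrated with a toy sketch. This is not code from the cited paper: the objective matrix, per-camera costs, and camera-count constraint below are invented for illustration, and the instance is solved by brute-force enumeration (real instances require a dedicated BQP solver).

```python
from itertools import product

# Toy binary quadratic program in the spirit of camera placement:
# Q[i][j] < 0 rewards selecting candidate sites i and j together
# (e.g. two views of the same region), c[i] is a per-camera cost,
# and the linear constraint fixes the number of cameras placed.
Q = [[0.0, -1.0, 0.2],
     [-1.0, 0.0, -0.5],
     [0.2, -0.5, 0.0]]
c = [0.1, 0.3, 0.2]
k = 2  # place exactly k cameras

def solve_bqp_bruteforce(Q, c, k):
    """Minimise x^T Q x + c^T x over binary x with sum(x) == k by
    enumeration (feasible only for tiny n)."""
    n = len(c)
    best_x, best_val = None, float("inf")
    for x in product([0, 1], repeat=n):
        if sum(x) != k:
            continue
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        val += sum(c[i] * x[i] for i in range(n))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

x, val = solve_bqp_bruteforce(Q, c, k)
print(x, val)  # picks the pair of sites with the strongest mutual reward
```

For this instance the selected pair is the one with the most negative coupling term, which is the kind of behavior the abstract's "camera-to-camera relationships" encode.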
Moreover, we propose an optimization strategy with a favorable trade-off between speed and solution quality. Our solution is almost as fast as a greedy treatment of the problem, but the quality is significantly higher, so much so that it is comparable to exact solutions that take orders of magnitude more computation time. Because it is computationally attractive, our method also allows users to explore the space of solutions for variations in input parameters. To evaluate its effectiveness, we show a range of 3D results on real-world floorplans (garage, hotel, mall, and airport).

7. Exploring Open-Ended Design Space of Mechatronic Systems
DEFF Research Database (Denmark)
Fan, Zhun; Wang, J.; Goodman, E.
2004-01-01
To realize design automation of mechatronic systems, there are two major issues to be dealt with: open-topology generation of mechatronic systems and simulation or analysis of those models. For the first issue, we exploit the strong topology exploration capability of genetic programming to create...

8. Automatic design of optical systems by digital computer
Science.gov (United States)
Casad, T. A.; Schmidt, L. F.
1967-01-01
Computer program uses geometrical optical techniques and a least squares optimization method employing computing equipment for the automatic design of optical systems. It evaluates changes in various optical parameters, provides comprehensive ray-tracing, and generally determines the acceptability of the optical system characteristics.

9. What Should Instructional Designers Know about General Systems Theory?
Science.gov (United States)
Salisbury, David F.
1989-01-01
Describes basic concepts in the field of general systems theory (GST) and explains the relationship between instructional systems design (ISD) and GST. Benefits of integrating GST into the curriculum of ISD graduate programs are discussed, and a short bibliography on GST is included. (LRW)

10.
Designing a Flood-Risk Education Program in the Netherlands
Science.gov (United States)
Bosschaart, Adwin; van der Schee, Joop; Kuiper, Wilmad
2016-01-01
This study focused on designing a flood-risk education program to enhance 15-year-old students' flood-risk perception. In the flood-risk education program, learning processes were modeled in such a way that the arousal of moderate levels of fear should prompt experiential and analytical information processing. In this way, understanding of flood…

11. Designing a flood-risk education program in the Netherlands
NARCIS (Netherlands)
Bosschaert, A.; van der Schee, J.; Kuiper, W.
2016-01-01
This study focused on designing a flood-risk education program to enhance 15-year-old students' flood-risk perception. In the flood-risk education program, learning processes were modeled in such a way that the arousal of moderate levels of fear should prompt experiential and analytical information

12. Perceptions of Interior Design Program Chairs Regarding Credentials for Faculty
Science.gov (United States)
Miller, Beth R.
2017-01-01
The purpose of this study was to determine whether program chairs in interior design have a preferred degree credential for candidates seeking a full-time, tenure-track position or other full-time position at their institution and to determine if there is a correlation between this preference and the program chair's university's demographics,…

13. Situated Research Design and Methodological Choices in Formative Program Evaluation
Science.gov (United States)
Supovitz, Jonathan
2013-01-01
Design-based implementation research offers the opportunity to rethink the relationships between intervention, research, and situation to better attune research and evaluation to the program development process. Using a heuristic called the intervention development curve, I describe the rough trajectory that programs typically follow as they…

14.
Scrap your boilerplate: a practical design pattern for generic programming
NARCIS (Netherlands)
R. Lämmel (Ralf); S. Peyton Jones
2003-01-01
We describe a design pattern for writing programs that traverse data structures built from rich mutually-recursive data types. Such programs often have a great deal of 'boilerplate' code that simply walks the structure, hiding a small amount of 'real' code that constitutes the reason for

15. Designing and Deploying Programming Courses: Strategies, Tools, Difficulties and Pedagogy
Science.gov (United States)
Xinogalos, Stelios
2016-01-01
Designing and deploying programming courses is undoubtedly a challenging task. In this paper, an attempt is made to analyze important aspects of a sequence of two courses on imperative-procedural and object-oriented programming in a non-CS majors department. This analysis is based on a questionnaire filled in by fifty students in a voluntary…

16. Contribution of the VISTA ITL program for the standard design approval of the SMART design
Energy Technology Data Exchange (ETDEWEB)
Park, Hyun Sik; Chung, Young Jong; Joo, Hyung Kook; Lee, Won Jae; Kim, Hark Rho; Song, Chul Hwa; Yi, Sung Jae [KAERI, Daejeon (Korea, Republic of)]
2012-10-15
A small-scale integral effect test (IET) program was performed by the Korea Atomic Energy Research Institute (KAERI) using the VISTA integral test loop (VISTA ITL). It has the capability of simulating a small-break loss-of-coolant accident (SBLOCA), a complete loss of reactor coolant system (RCS) flow (CLOF), and passive residual heat removal system (PRHRS) performance for the SMART design. The reference plant of the VISTA ITL is a 330 MWth integral pressurized water reactor (iPWR), SMART, which was developed by KAERI. Its standard design was approved by the Korean regulatory authority in July 2012.
The SMART reactor is characterized by the introduction of simplified and improved safety systems, such as the passive residual heat removal system (PRHRS), and by the integral arrangement of the reactor vessel assembly. The integral reactor design eliminates the large pipe connections between major components. It thus excludes the occurrence of a large-break loss-of-coolant accident (LBLOCA), and an SBLOCA is one of the major concerns for safety analysis. Therefore, the VISTA ITL was used to investigate various thermal-hydraulic phenomena during the SBLOCA. The break flow rate, safety injection flow rate, and thermal-hydraulic behaviors of major components were measured for a typical break size and break locations. The acquired data were used to validate the related thermal-hydraulic models of the safety analysis code TASS/SMR S, which is used to validate the safety of the SMART design in coping with SBLOCA scenarios. A set of tests for SBLOCAs, CLOF, and PRHRS performance was performed to understand the general behavior and to assess the safety of the SMART design using the VISTA ITL facility. The test results were used to validate the TASS/SMR S code. This paper also introduces regulatory issues concerning the VISTA ITL tests and how they were resolved. A scoping analysis was performed to resolve regulatory issues using a best-estimate system analysis code, MARS KS, developed by KAERI.

17. NASA's Space Launch System Program Update
Science.gov (United States)
May, Todd; Lyles, Garry
2015-01-01
Hardware and software for the world's most powerful launch vehicle for exploration is being welded, assembled, and tested today in high bays, clean rooms, and test stands across the United States. NASA's Space Launch System (SLS) continued to make significant progress in 2014, with more planned for 2015, including firing tests of both main propulsion elements and the program Critical Design Review (CDR).
Developed with the goals of safety, affordability, and sustainability, SLS will still deliver unmatched capability for human and robotic exploration. The initial Block 1 configuration will deliver more than 70 metric tons of payload to low Earth orbit (LEO). The evolved Block 2 design will deliver some 130 metric tons to LEO. Both designs offer enormous opportunity and flexibility for larger payloads, simplifying payload design as well as ground and on-orbit operations, shortening interplanetary transit times, and decreasing overall mission risk. Over the past year, every vehicle element has manufactured or tested hardware. An RS-25 liquid propellant engine was hotfire-tested at NASA's Stennis Space Center, Miss., for the first time since 2009, exercising and validating the new engine controller, the renovated A-1 test stand, and the test teams. Four RS-25s will power the SLS core stage. A qualification five-segment solid rocket motor incorporating several design, material, and process changes was scheduled to be test-fired in March at the prime contractor's facility in Utah. The booster also successfully completed its Critical Design Review (CDR), validating the planned design. All six major manufacturing tools for the core stage are in place at the Michoud Assembly Facility in Louisiana and have been used to build numerous pieces of confidence, qualification, and even flight hardware, including barrel sections, domes, and rings used to assemble the world's largest rocket stage. SLS Systems Engineering accomplished several key tasks including vehicle avionics software

18. Photovoltaic energy systems.
Program summary
Energy Technology Data Exchange (ETDEWEB)
None
1982-01-01
The ongoing research, development, and demonstration efforts of the Photovoltaics Program are highlighted, and each of the US Department of Energy's current photovoltaics projects initiated or renewed during fiscal year 1981 is described, including its title, directing organization, project engineer, contractor, principal investigator, contract period, funding, and objectives. The Photovoltaics Program is briefly summarized, including its history and organization and highlights of the research and development and of planning, assessment, and integration. Also summarized is the Federal Photovoltaic Utilization Program. An exhaustive bibliography is included. (LEW)

19. Examination of Web-Based PVGIS and SUNNY Design Web Photovoltaic System Simulation Programs and Assessment of Reliability of the Results
OpenAIRE
HAYDAROĞLU, Cem; Gümüş, Bilal
2018-01-01
Due to the polluting effect of fossil fuels on the environment and their exhaustible nature, investments in renewable energy resources continue to increase. In order to benefit from solar energy, which is one of these energy resources, 50 GW of new power plants were installed in 2015 alone. Following the "regulation on unlicensed electricity generation" issued to benefit from the renewable energy potential available in Turkey, the installation of systems that generate electricity from solar ener...

20. Utility oversight of Cask System Development Program
International Nuclear Information System (INIS)
Vincent, J.A.; Jordan, J.M.; Schwartz, M.H.
1993-01-01
This paper will present the electric utility industry's perspective on the status and scope of the DOE's Office of Civilian Radioactive Waste Management's (DOE/OCRWM) transportation cask systems development activities, including the Cask Systems Development Program (CSDP) Initiative I transportation cask projects.
This presentation is particularly timely because the CSDP Independent Management Review Group (IMRG), of which one of the authors is a member, completed an objective assessment of OCRWM's transportation cask system development activities and issued its first report in late August 1992. The perspective on these cask systems development activities reflects conclusions based on (1) the industry's review of the CSDP Preliminary and Draft Final Design Reports for the Initiative I cask projects, (2) the activities of one of the authors as a member of the IMRG, and (3) the positions that the industry has consistently taken on what it believes to be the appropriate scope and pace of the CSDP and its integration with other OCRWM activities. Background information on the OCRWM transportation cask systems development activities and the relevant industry activities will also be provided

1. Introduction of circuit design on RFID system
International Nuclear Information System (INIS)
Pak, Sunho
2007-06-01
This is a case study of research at the Fujitsu company and of basic electronic circuit design. It is composed of two parts. The first part deals with an introduction to RFID system design, covering basic knowledge of ubiquitous computing, a glossary of high-frequency terms, design of impedance-matching circuits, the RFID system, types and design of filters, modulators and transmission, and RFID system design. The second part deals with research and development at the Fujitsu company, including the Fujitsu RFID middleware RFID CONNECT, the Fujitsu sensor network, and handling techniques of the RFID system.

2. Engineering Software Suite Validates System Design
Science.gov (United States)
2007-01-01
EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts.
These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed, using EDAstar-created models. Initial commercialization for EDAstar included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers

3. Simple adaptive control system design trades
NARCIS (Netherlands)
Mooij, E.
2017-01-01
In the design of a Model Reference Adaptive Control system, a reference model serves as the (well-known) basis through which system and user requirements can find their way into the design. By tuning the design parameters, the response of the actual vehicle should track the response of the

4. Superconducting magnets. B. Superconducting magnet systems in EPR designs
International Nuclear Information System (INIS)
Knobloch, A.F.
1978-01-01
Tokamak experiments have reached a stage where large-scale application of superconductors can be envisaged for machines becoming operational within the next decade. Existing designs for future devices already indicate some of the tasks and problems associated with large superconducting magnet systems.
Using this information, the coming magnet system requirements are summarized, some design considerations are given, and in conclusion a brief survey describes already existing Tokamak magnet development programs

5. General design methodology applied to the research domain of physical programming for computer illiterate
CSIR Research Space (South Africa)
Smith, Andrew C
2011-09-01
Full Text Available: programs. We distilled this as being the challenge of enabling computer illiterates to program a computer without using a keyboard or mouse. If realised, such mechanisms will "push the computer into the background" (Weiser, 1991:66-75) and allow... Systems. Available WWW: http://desrist.org/design-research-in-information-systems (accessed May 2011). Weiser, M. 1991. The computer for the 21st century. IEEE Pervasive computing: Mobile and ubiquitous systems, 66-75. The challenges...

6. Programming Guidelines for FBD Programs in Reactor Protection System Software
International Nuclear Information System (INIS)
Jung, Se Jin; Lee, Dong Ah; Kim, Eui Sub; Yoo, Jun Beom; Lee, Jang Su
2014-01-01
Properties of programming languages, such as reliability, traceability, etc., play important roles in software development to improve safety. Several studies have proposed programming guidelines to increase the dependability of software developed for safety-critical systems. MISRA C is a widely accepted set of programming guidelines for the C language, especially in the vehicle industry. NUREG/CR-6463 helps engineers in the nuclear industry develop software for nuclear power plant systems more dependably. FBD (Function Block Diagram), one of the programming languages defined in the IEC 61131-3 standard, is often used for software development for PLCs (programmable logic controllers) in nuclear power plants. Software development for critical systems using FBD needs strict guidelines, because FBD is a general language and has easily mistakable elements.
There is prior research on guidelines for the IEC 61131-3 programming languages. It does not, however, specify details about how to use the languages. This paper proposes new guidelines for FBD based on NUREG/CR-6463. The paper introduces a CASE (Computer-Aided Software Engineering) tool to check FBD programs against the new guidelines and shows its applicability with a case study using an FBD program in a reactor protection system. The paper is organized as follows

7. Design and Analysis of Decision Rules via Dynamic Programming
KAUST Repository
Amin, Talha M.
2017-04-24
The areas of machine learning, data mining, and knowledge representation have many different formats used to represent information. Decision rules, among these formats, are the most expressive and most easily understood by humans. In this thesis, we use dynamic programming to design decision rules and analyze them. The use of dynamic programming allows us to work with decision rules in ways that were previously possible only for brute-force methods. Our algorithms allow us to describe the set of all rules for a given decision table. Further, we can perform multi-stage optimization by repeatedly reducing this set to contain only rules that are optimal with respect to selected criteria. One way we apply this study is to generate small systems with short rules by simulating a greedy algorithm for the set cover problem. We also compare maximum path lengths (depth) of deterministic and non-deterministic decision trees (a non-deterministic decision tree is effectively a complete system of decision rules) with regard to Boolean functions. Another area of advancement is the presentation of algorithms for constructing Pareto optimal points for rules and rule systems. This allows us to study the existence of "totally optimal" decision rules (rules that are simultaneously optimal with regard to multiple criteria).
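The Pareto optimal points named in this abstract can be sketched with a minimal example. This is not code from the thesis: the rule statistics below are made up for illustration. Each candidate rule is scored by two criteria to be minimised, e.g. (length, misclassifications), and the non-dominated pairs form the Pareto front describing the trade-off.

```python
def pareto_front(points):
    """Return the points not dominated by any other point, where p
    dominates q if p is <= q in both coordinates and p != q
    (both criteria are minimised)."""
    front = []
    for p in points:
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical (rule length, misclassification count) pairs for
# candidate decision rules; illustrative numbers only.
rules = [(2, 5), (3, 3), (4, 3), (5, 1), (6, 1), (7, 0)]
print(pareto_front(rules))
```

Here (4, 3) and (6, 1) drop out because a shorter rule achieves the same or better accuracy; the remaining points are the "reasonable" rules the abstract refers to, each trading length against accuracy.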
We also utilize Pareto optimal points to compare and rate greedy heuristics with regard to two criteria at once. Another application of Pareto optimal points is the study of trade-offs between cost and uncertainty, which allows us to find reasonable systems of decision rules that strike a balance between length and accuracy.

8. System Definition and Analysis: Power Plant Design and Layout
International Nuclear Information System (INIS)
1996-01-01
This is the topical report for Task 6.0, Phase 2 of the Advanced Turbine Systems (ATS) Program. The report describes work by Westinghouse and the subcontractor, Gilbert/Commonwealth, in the fulfillment of completing Task 6.0. A conceptual design for critical and noncritical components of the gas-fired combustion turbine system was completed. The conceptual design included specifications for the flange-to-flange gas turbine, power plant components, and balance-of-plant equipment. The ATS engine used in the conceptual design is an advanced 300 MW class combustion turbine incorporating many design features and technologies required to achieve ATS Program goals. Design features of power plant equipment and balance-of-plant equipment are described. Performance parameters for these components are explained. A site arrangement and electrical single-line diagrams were drafted for the conceptual plant. ATS advanced features include design refinements in the compressor, inlet casing and scroll, combustion system, airfoil cooling, secondary flow systems, rotor, and exhaust diffuser. These improved features, integrated with prudent selection of power plant and balance-of-plant equipment, have provided the conceptual design of a system that meets or exceeds ATS Program emissions, performance, reliability-availability-maintainability, and cost goals.

9. The environment power system analysis tool development program
Science.gov (United States)
Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Stevens, N.
John; Putnam, Rand M.; Roche, James C.; Wilcox, Katherine G.
1990-01-01
The Environment Power System Analysis Tool (EPSAT) is being developed to provide space power system design engineers with an analysis tool for determining system performance of power systems in both naturally occurring and self-induced environments. The program is producing an easy-to-use computer-aided engineering (CAE) tool general enough to provide a vehicle for technology transfer from space scientists and engineers to power system design engineers. The results of the project after two years of a three-year development program are given. The EPSAT approach separates the CAE tool into three distinct functional units: a modern user interface to present information, a data dictionary interpreter to coordinate analysis, and a database for storing system designs and results of analysis.

10. Records Center Program Billing System
Data.gov (United States)
National Archives and Records Administration
RCPBS supports the Records Center Programs (RCP) in producing invoices for the storage (NARS-5) and servicing of the National Archives and Records Administration's...

11. A computer simulator for development of engineering system design methodologies
Science.gov (United States)
Padula, S. L.; Sobieszczanski-Sobieski, J.
1987-01-01
A computer program designed to simulate and improve engineering system design methodology is described. The simulator mimics the qualitative behavior and data couplings occurring among the subsystems of a complex engineering system. It eliminates the engineering analyses in the subsystems by replacing them with judiciously chosen analytical functions. With the cost of analysis eliminated, the simulator is used for experimentation with a large variety of candidate algorithms for multilevel design optimization to choose the best ones for the actual application. Thus, the simulator serves as a development tool for multilevel design optimization strategy.
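The idea described in this abstract, replacing expensive subsystem analyses with cheap analytical stand-ins that preserve the data couplings, can be sketched as follows. This is a minimal illustration, not the simulator itself: the subsystem functions and coupling coefficients are invented for illustration.

```python
import math

# Cheap analytic stand-ins for two coupled subsystem analyses.
# x is a design variable; each subsystem also depends on the
# other's output, mimicking the data couplings of a real system.
def subsystem_a(x, y_b):
    return 0.5 * x + 0.1 * y_b

def subsystem_b(x, y_a):
    return math.sin(x) + 0.2 * y_a

def coupled_response(x, tol=1e-12, max_iter=100):
    """Resolve the a<->b coupling by fixed-point iteration, the way
    coupled engineering analyses exchange data until they agree."""
    y_a = y_b = 0.0
    for _ in range(max_iter):
        y_a_new = subsystem_a(x, y_b)
        y_b_new = subsystem_b(x, y_a_new)
        if abs(y_a_new - y_a) < tol and abs(y_b_new - y_b) < tol:
            break
        y_a, y_b = y_a_new, y_b_new
    return y_a_new, y_b_new

# A candidate multilevel optimization algorithm can now be exercised
# against this model at negligible cost before facing real analyses.
y_a, y_b = coupled_response(1.0)
print(y_a, y_b)
```

Because each evaluation is nearly free, many candidate optimization strategies can be compared on the same coupled model, which is the experimentation role the abstract assigns to the simulator.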
The simulator concept, implementation, and status are described and illustrated with examples.

12. Optimal design of passive gravity compensation system for articulated robots
Energy Technology Data Exchange (ETDEWEB)
Park, Jin Gyun; Lee, Jae Young; Kim, Sang Hyun; Kim, Sung Rak [Hyundai Heavy Industries Co. Ltd., Daejeon (Korea, Republic of)]
2012-01-15
In this paper, the optimal design of a spring-type gravity compensation system for an articulated robot is presented. Sequential quadratic programming (SQP) is adopted to resolve various nonlinear constraints in spring design, such as stress, buckling, and fatigue constraints, and to reduce computation time. In addition, a continuous relaxation method is used to handle the integer-valued design variables. The simulation results show that the gravity compensation system designed by the proposed method improves performance effectively, without additional weight gain, in the main workspace.

13. Discourse in Systemic Operational Design
National Research Council Canada - National Science Library
DiPasquale, Joseph A
2007-01-01
... of discourse's role in design. To answer this question, the author conducts a structured inquiry into the nature of discourse from the perspectives of agency, narrative and artifact structure, and socio-cultural relationships...

14. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING AND COST DATA FOR MERCURY CONTROL SYSTEMS ON NON-SCRUBBED COAL-FIRED BOILERS
International Nuclear Information System (INIS)
C. Jean Bustard
2001-01-01
With the Nation's coal-burning utilities facing the possibility of tighter controls on mercury pollutants, the U.S. Department of Energy is funding projects that could offer power plant operators better ways to reduce these emissions at much lower costs. Mercury is known to have toxic effects on the nervous system of humans and wildlife.
Although it exists only in trace amounts in coal, mercury is released when coal burns and can accumulate on land and in water. In water, bacteria transform the metal into methylmercury, the most hazardous form of the metal. Methylmercury can collect in fish and marine mammals in concentrations hundreds of thousands of times higher than the levels in surrounding waters. One of the goals of DOE is to develop technologies by 2005 that will be capable of cutting mercury emissions 50 to 70 percent at well under one-half of today's costs. ADA Environmental Solutions (ADA-ES) is managing a project to test mercury control technologies at full scale at four different power plants from 2000 to 2003. The ADA-ES project is focused on those power plants that are not equipped with wet flue gas desulfurization systems. ADA-ES will develop a portable system that will be moved to four different utility power plants for field testing. Each of the plants is equipped with either electrostatic precipitators or fabric filters to remove solid particles from the plant's flue gas. ADA-ES's technology will inject a dry sorbent, such as fly ash or activated carbon, that removes the mercury and makes it more susceptible to capture by the particulate control devices. A fine water mist may be sprayed into the flue gas to cool its temperature to the range where the dry sorbent is most effective. PG and E National Energy Group is providing two test sites that fire bituminous coals and are both equipped with electrostatic precipitators and carbon/ash separation systems. Wisconsin Electric Power Company is providing a third test site that burns Powder River Basin coal

15. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING AND COST DATA FOR MERCURY CONTROL SYSTEMS ON NON-SCRUBBED COAL-FIRED BOILERS
International Nuclear Information System (INIS)
Richard Schlager
2002-01-01
With the Nation's coal-burning utilities facing the possibility of tighter controls on mercury pollutants, the U.S.
Department of Energy is funding projects that could offer power plant operators better ways to reduce these emissions at much lower costs. Mercury is known to have toxic effects on the nervous system of humans and wildlife. Although it exists only in trace amounts in coal, mercury is released when coal burns and can accumulate on land and in water. In water, bacteria transform the metal into methylmercury, the most hazardous form of the metal. Methylmercury can collect in fish and marine mammals in concentrations hundreds of thousands times higher than the levels in surrounding waters. One of the goals of DOE is to develop technologies by 2005 that will be capable of cutting mercury emissions 50 to 70 percent at well under one-half of today's costs. ADA Environmental Solutions (ADA-ES) is managing a project to test mercury control technologies at full scale at four different power plants from 2000-2003. The ADA-ES project is focused on those power plants that are not equipped with wet flue gas desulfurization systems. ADA-ES will develop a portable system that will be moved to four different utility power plants for field testing. Each of the plants is equipped with either electrostatic precipitators or fabric filters to remove solid particles from the plant's flue gas. ADA-ES's technology will inject a dry sorbent, such as fly ash or activated carbon, that removes the mercury and makes it more susceptible to capture by the particulate control devices. A fine water mist may be sprayed into the flue gas to cool its temperature to the range where the dry sorbent is most effective. PG and E National Energy Group is providing two test sites that fire bituminous coals and are both equipped with electrostatic precipitators and carbon/ash separation systems. Wisconsin Electric Power Company is providing a third test site that burns Powder River Basin (PRB 16. 
FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING AND COST DATA FOR MERCURY CONTROL SYSTEMS ON NON-SCRUBBED COAL-FIRED BOILERS International Nuclear Information System (INIS) C. Jean Bustard 2001-01-01 With the Nation's coal-burning utilities facing the possibility of tighter controls on mercury pollutants, the U.S. Department of Energy is funding projects that could offer power plant operators better ways to reduce these emissions at much lower costs. Mercury is known to have toxic effects on the nervous system of humans and wildlife. Although it exists only in trace amounts in coal, mercury is released when coal burns and can accumulate on land and in water. In water, bacteria transform the metal into methylmercury, the most hazardous form of the metal. Methylmercury can collect in fish and marine mammals in concentrations hundreds of thousands times higher than the levels in surrounding waters. One of the goals of DOE is to develop technologies by 2005 that will be capable of cutting mercury emissions 50 to 70 percent at well under one-half of today's costs. ADA Environmental Solutions (ADA-ES) is managing a project to test mercury control technologies at full scale at four different power plants from 2000 to 2003. The ADA-ES project is focused on those power plants that are not equipped with wet flue gas desulfurization systems. ADA-ES will develop a portable system that will be moved to four different utility power plants for field testing. Each of the plants is equipped with either electrostatic precipitators or fabric filters to remove solid particles from the plant's flue gas. ADA-ES's technology will inject a dry sorbent, such as fly ash or activated carbon, that removes the mercury and makes it more susceptible to capture by the particulate control devices. A fine water mist may be sprayed into the flue gas to cool its temperature to the range where the dry sorbent is most effective. 
PG and E National Energy Group is providing two test sites that fire bituminous coals and are both equipped with electrostatic precipitators and carbon/ash separation systems. Wisconsin Electric Power Company is providing a third test site that burns Powder River Basin (PRB 17. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING AND COST DATA FOR MERCURY CONTROL SYSTEMS ON NON-SCRUBBED COAL-FIRED BOILERS International Nuclear Information System (INIS) C. Jean Bustard 2001-01-01 With the Nation's coal-burning utilities facing the possibility of tighter controls on mercury pollutants, the U.S. Department of Energy is funding projects that could offer power plant operators better ways to reduce these emissions at much lower costs. Mercury is known to have toxic effects on the nervous system of humans and wildlife. Although it exists only in trace amounts in coal, mercury is released when coal burns and can accumulate on land and in water. In water, bacteria transform the metal into methylmercury, the most hazardous form of the metal. Methylmercury can collect in fish and marine mammals in concentrations hundreds of thousands times higher than the levels in surrounding waters. One of the goals of DOE is to develop technologies by 2005 that will be capable of cutting mercury emissions 50 to 70 percent at well under one-half of today's costs. ADA Environmental Solutions (ADA-ES) is managing a project to test mercury control technologies at full scale at four different power plants from 2000-2003. The ADA-ES project is focused on those power plants that are not equipped with wet flue gas desulfurization systems. ADA-ES will develop a portable system that will be moved to four different utility power plants for field testing. Each of the plants is equipped with either electrostatic precipitators or fabric filters to remove solid particles from the plant's flue gas. 
ADA-ES's technology will inject a dry sorbent, such as fly ash or activated carbon, that removes the mercury and makes it more susceptible to capture by the particulate control devices. A fine water mist may be sprayed into the flue gas to cool its temperature to the range where the dry sorbent is most effective. PG and E National Energy Group is providing two test sites that fire bituminous coals and are both equipped with electrostatic precipitators and carbon/ash separation systems. Wisconsin Electric Power Company is providing a third test site that burns Powder River Basin (PRB 18. Systems Analysis for Program Planning and Cost Effectiveness. (An Application). Science.gov (United States) van Gigch, John P.; Hill, Richard E. This paper describes an effort to implement a cost-effectiveness program using systems analysis in an elementary school district, the Rio Linda Union School District in California. The systems design cycle employed has three phases, policy-making evaluation, and action-implementation. During the first phase, the general philosophy or mission of… 19. ALCATOR DCT MAGNETIC SYSTEMS DESIGN OpenAIRE Montgomery, D.; Schultz, O.; Thome, R. 1984-01-01 A 2 meter major radius tokamak with 24 (150 x 200 cm) 10 telsa peak field superconducting coils and an all superconductor PF system is described. All coil systems utilize internally-cooled conductor concepts. 20. Toroidal transformer design program with application to inverter circuitry Science.gov (United States) Dayton, J. A., Jr. 1972-01-01 Estimates of temperature, weight, efficiency, regulation, and final dimensions are included in the output of the computer program for the design of transformers for use in the basic parallel inverter. 
The program, written in FORTRAN 4, selects a tape-wound toroidal magnetic core and, taking temperature, materials, core geometry, skin depth, and ohmic losses into account, chooses the appropriate wire sizes and numbers of turns for the center-tapped primary and single secondary coils. Using the program, 2- and 4-kilovolt-ampere transformers are designed for frequencies from 200 to 3200 Hz, and the efficiency of a basic transistor inverter is estimated.

1. Designing an Elderly Assistance Program Based on Home Care
Science.gov (United States)
Umusya'adah, L.; Juwaedah, A.; Jubaedah, Y.; Ratnasusanti, H.; Puspita, R. H.
2018-02-01
PKH (Program Keluarga Harapan) is a program of Indonesia's Government, through the Ministry of Social Affairs, to accelerate poverty reduction and the achievement of Millennium Development Goals (MDGs) targets, as well as policy development in the social protection and social welfare domain; it is commonly referred to as Indonesia's Conditional Cash Transfer (CCT) program. This research is motivated by the fact that the participation of existing family expectation program (PKH) members in Sumedang, Indonesia, especially in South Sumedang, in the social welfare component is limited to health checks, while the Home Care-based elderly assistance program has not been structured and systematic, even though the elderly still need assistance, especially from the family and community environment. This study uses a Research and Development method with the ADDIE model, which includes analysis, design, development, implementation, and evaluation. Participants in this study were chosen using purposive sampling, selecting PKH families who provide active assistance to the elderly, with 82 participants. The designed program consists of the following components: objectives, goals, forms of assistance, and the institutions organizing and implementing the program; in addition, the program modules cover assisting the elderly.
Forms of assistance for the elderly cover physical, social, mental, and spiritual needs. It is recommended that PKH families and companions implement the program to meet the various needs of the elderly, and that the elderly themselves pay attention to their health and follow the advice recommended by the relevant parties.

2. Exploring Open-Ended Design Space of Mechatronic Systems
DEFF Research Database (Denmark)
Fan, Zhun; Wang, J.; Goodman, E.
2004-01-01
To realize design automation of mechatronic systems, there are two major issues to be dealt with: open-topology generation of mechatronic systems and simulation or analysis of those models. For the first issue, we exploit the strong topology exploration capability of genetic programming to create... to generate multiple solutions, allowing the designer more latitude in choosing a model to implement. ... when they represent mixed-energy-domain systems. We take advantage of bond graphs as a tool for multi- or mixed-domain modeling and simulation of mechatronic systems. Because there are many considerations in mechatronic system design that are not completely captured by a bond graph, we would like... The approach in this paper is capable of providing a variety of design choices to the designer for further analysis, comparison and trade-off. The approach is shown to be efficient and effective in an example...

3. Multimedia programming using Max/MSP and TouchDesigner
CERN Document Server
Lechner, Patrik
2014-01-01
If you want to learn how to use Max 6 and/or TouchDesigner, or work in audio-visual real-time processing, this is the book for you. It is intended for intermediate users of both programs and can be helpful for artists, designers, musicians, VJs, and researchers. A basic understanding of audio principles is advantageous.

4.
Defense Logistics Agency's Weapons Systems Support Program
National Research Council Canada - National Science Library
1994-01-01
The Defense Logistics Agency's (DLA) Weapons Systems Support Program was established to enhance the Military Departments' weapons systems readiness and sustainability by providing enhanced supply support levels for DLA-managed items...

5. Harnessing VLSI System Design with EDA Tools
CERN Document Server
Kamat, Rajanish K; Gaikwad, Pawan K; Guhilot, Hansraj
2012-01-01
This book explores various dimensions of EDA technologies for achieving different goals in VLSI system design. Although the scope of EDA is very broad and comprises diversified hardware and software tools to accomplish different phases of VLSI system design, such as design, layout, simulation, testability, prototyping and implementation, this book focuses only on demystifying the code, a.k.a. firmware development, and its implementation with FPGAs. Since there are a variety of languages for system design, this book covers various issues related to VHDL, Verilog and SystemC synergized with EDA tools, using a variety of case studies such as testability, verification and power consumption. * Covers aspects of VHDL, Verilog and Handel-C in one text; * Enables designers to judge the appropriateness of each EDA tool for relevant applications; * Omits discussion of design platforms and focuses on design case studies; * Uses design case studies from diversified application domains such as network on chip, hospital on...

6. FIELD TEST PROGRAM TO DEVELOP COMPREHENSIVE DESIGN, OPERATING AND COST DATA FOR MERCURY CONTROL SYSTEMS ON NON-SCRUBBED COAL-FIRED BOILERS
Energy Technology Data Exchange (ETDEWEB)
Richard Schlager
2002-08-01

7.
Design of Thermal Systems Using Topology Optimization
DEFF Research Database (Denmark)
Haertel, Jan Hendrik Klaas
The goal of this thesis is to apply topology optimization to the design of different thermal systems, such as heat sinks and heat exchangers, in order to improve the thermal performance of these systems compared to conventional designs. The design of thermal systems is a complex task that has... of optimized designs are presented within this thesis. The main contribution of the thesis is the development of several numerical optimization models that are applied to different design challenges within thermal engineering. Topology optimization is applied in an industrial project to design the heat... The design of 3D-printed dry-cooled power plant condensers using a simplified thermofluid topology optimization model is presented in another study. A benchmarking of the optimized geometries against a conventional heat exchanger design is conducted, and the topology-optimized designs show a superior...

8. System 80+™ Standard Design: CESSAR design certification. Volume 25
International Nuclear Information System (INIS)
1997-01-01
This report has been prepared in support of the industry effort to standardize nuclear plant designs. This document describes the Combustion Engineering, Inc. System 80+™ Standard Design. This volume contains sections 12 through 16 of Chapter 19 -- Probabilistic Risk Assessment. Topics covered are: containment response analysis; consequence analysis; containment response and consequence analysis sensitivity analysis; summary and conclusions; and references.

9. Generative Design: Visualize, Program, and Create with Processing
CERN Document Server
Bohnacker, Hartmut; Laub, Julia; Lazzeroni, Claudius
2012-01-01
Generative design is a revolutionary new method of creating artwork, models, and animations from sets of rules, or algorithms.
By using accessible programming languages such as Processing, artists and designers are producing extravagant, crystalline structures that can form the basis of anything from patterned textiles and typography to lighting, scientific diagrams, sculptures, films, and even fantastical buildings. Opening with a gallery of thirty-five illustrated case studies, Generative Design takes users through specific, practical instructions on how to create their own visual experiments by combining simple-to-use programming code with basic design principles. A detailed handbook of advanced strategies provides visual artists with all the tools to achieve proficiency. Both a how-to manual and a showcase for recent work in this exciting new field, Generative Design is the definitive study and reference book that designers have been waiting for.

10. CREDIT SYSTEM AND CREDIT GUARANTEE PROGRAMS
OpenAIRE
Turgay GECER
2012-01-01
A credit system is an integrated architecture consisting of financial information, credit rating, credit risk management, receivables and credit insurance systems, credit derivative markets, and credit guarantee programs. The main purpose of the credit system is to ensure the functioning of all credit channels and to ease access to the credit sources demanded by all real and legal persons in any economic system. The credit guarantee program, one of the prominent elements of the credit syst...

11. Introduction to Space Systems: Design and Synthesis
CERN Document Server
Aguirre, Miguel A
2013-01-01
The definition of all space systems starts with the establishment of their fundamental parameters: requirements to be fulfilled, overall system and satellite design, analysis and design of the critical elements, developmental approach, cost, and schedule. There are only a few texts covering early design of space systems, and none of them has been specifically dedicated to it. Furthermore, all existing space engineering books concentrate on analysis.
None of them deals with space system synthesis – with the interrelations between all the elements of the space system. Introduction to Space Systems concentrates on understanding the interaction between all the forces, both technical and non-technical, which influence the definition of a space system. This book refers to the entire system: space and ground segments, mission objectives, as well as cost, risk, and mission success probabilities. Introduction to Space Systems is divided into two parts. The first part analyzes the process of space system design in an ab...

12. GESAT: System for management and evaluation of training programs
International Nuclear Information System (INIS)
Arjona, O.; Venegas, M.; Rodriguez, L.; Lopez, M.
1997-01-01
This paper describes the criteria considered in designing the GESAT system, the elements considered in selecting the relational model, the selection of the database language, and the main features and possibilities of the system. GESAT allows the management of training programs based on the Systematic Approach to Training (SAT). It includes information related to all SAT phases: the results of job analysis, training plan design, development of materials, training implementation, and the subsequent evaluation.

13. 77 FR 70409 - System Safety Program
Science.gov (United States)
2012-11-26
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF TRANSPORTATION Federal Railroad Administration 49 CFR Part 270 2130-AC31 System Safety Program AGENCY: Federal Railroad... commuter and intercity passenger railroads to develop and implement a system safety program (SSP) to...

14. Process of system design and analysis
International Nuclear Information System (INIS)
Gardner, B.
1995-01-01
The design of an effective physical protection system includes the determination of the physical protection system objectives, the initial design of a physical protection system, the evaluation of the design, and, probably, a redesign or refinement of the system. To develop the objectives, the designer must begin by gathering information about facility operations and conditions, such as a comprehensive description of the facility, operating states, and the physical protection requirements. The designer then needs to define the threat. This involves considering factors about potential adversaries: class of adversary, adversary's capabilities, and range of adversary's tactics. Next, the designer should identify targets. Determination of whether or not nuclear materials are attractive targets is based mainly on the ease or difficulty of acquisition and the desirability of the material. The designer now knows the objectives of the physical protection system, that is, ''what to protect against whom.'' The next step is to design the system by determining how best to combine such elements as fences, vaults, sensors, procedures, communication devices, and protective force personnel to meet the objectives of the system. Once a physical protection system is designed, it must be analyzed and evaluated to ensure it meets the physical protection objectives. Evaluation must allow for features working together to assure protection rather than regarding each feature separately. Due to the complexity of protection systems, an evaluation usually requires modeling techniques. If any vulnerabilities are found, the initial system must be redesigned to correct the vulnerabilities and a reevaluation conducted.

15. Ergonomics: an aid to system design
International Nuclear Information System (INIS)
McCafferty, D.B.
1990-01-01
In recent years, the engineering community has recognized that ergonomics can make significant contributions to system design.
Working together, engineers and ergonomists can create designs that effectively meet system goals. By considering the role of humans and technology in the context of systems and by reducing the potential for errors, gains can be made in overall system reliability. Such efforts can reduce the need for costly backfits and increase system efficiency. (author)

16. Optimized low-cost-array field designs for photovoltaic systems
Science.gov (United States)
Post, H. N.; Carmichael, D. C.; Castle, J. A.
A comprehensive program to define and develop array field subsystems which can achieve the lowest possible life-cycle costs is discussed. The major activity of this program is described, namely, the design and development of optimized, modular array fields for photovoltaic (PV) systems. As part of this activity, design criteria and performance requirements for specific array subsystems, including support structures, foundations, intermodule connections, field wiring, lightning protection, system grounding, site preparation, and monitoring and control, were defined and evaluated. Similarly, fully integrated flat-panel array field designs, optimized for lowest life-cycle costs, were developed for system sizes ranging from 20 to 500 kWp. Key features, subsystem requirements, and projected costs for these array field designs are presented and discussed.

17. [Design of an HACCP program for a cocoa processing facility].
Science.gov (United States)
López D'Sola, Patrizia; Sandia, María Gabriela; Bou Rached, Lizet; Hernández Serrano, Pilar
2012-12-01
The HACCP plan is a food safety management tool used to control physical, chemical and biological hazards associated with food processing throughout the processing chain.
The aim of this work is to design an HACCP plan for a Venezuelan cocoa processing facility. The production of safe food products requires that the HACCP system be built upon a solid foundation of prerequisite programs, such as Good Manufacturing Practices (GMP) and Sanitation Standard Operating Procedures (SSOP). The existence and effectiveness of these prerequisite programs were previously assessed. Good Agricultural Practices (GAP) audits of cocoa nib suppliers were performed. To develop the HACCP plan, the five preliminary tasks and the seven HACCP principles were accomplished according to Codex Alimentarius procedures. Three Critical Control Points (CCPs) were identified using a decision tree: winnowing (control of ochratoxin A), roasting (Salmonella control) and metallic particle detection. For each CCP, critical limits, monitoring procedures, corrective actions, and procedures for verification were established, along with documentation concerning all procedures and records appropriate to these principles and their application. Implementing and maintaining a HACCP plan for this processing plant is suggested. Recently, ochratoxin A (OTA) has been associated with cocoa beans. Although separation of the shell from the nib has been reported as an effective measure to control this chemical hazard, a study of ochratoxin prevalence in cocoa beans produced in the country is recommended, as well as validation of the winnowing step.

18. Internet based remote cooperative engineering system for NSSS system design
International Nuclear Information System (INIS)
Kim, Y. S.; Lee, S. L.
2000-01-01
Implementation of information technology systems through the nuclear power plant life cycle -- which covers site selection, design, construction, operation and decommissioning -- has been suggested continually by reports and guidelines from NIRMA, INPO, NUMARC, USNRC and EPRI since the late 1980s, and some of them have actually been implemented and applied partially to the practical design process.
However, for NSSS system design, a high-level activity of the nuclear power plant design phase, no such efforts at implementing an information system have been reported. In Korea, KAERI studied NuIDEAS (Nuclear Integrated Database and Design Advancement System) in 1995, and KEPRI (Korea Electric Power Research Institute) worked with CENP (Combustion Engineering Nuclear Power) on the KNGR IMS (Information Management System) in 1997, as trials to adopt information systems for NSSS system design. In this paper, after reviewing the two previously studied information systems, we introduce an implementation of an information system for NSSS system design which is compatible with ongoing design work and can be used as a means of concurrent engineering through the Internet. With this electronic design system, we expect an increase in design efficiency and productivity by switching from a hard-copy-based design flow to an Internet-based system. In addition, reliability and traceability of the design data are greatly improved by keeping the native document file together with all the review, comment and resolution history in one database.

19. DBADOSE: a PC program for stack design and the PRR-1 design basis accident
International Nuclear Information System (INIS)
Leopando, L.S.
1994-01-01
DBADOSE is a program written to be used as a tool to verify the adequacy of the design of the stack of the Philippine Research Reactor-1 (PRR-1) under design basis accident conditions. DBADOSE runs on IBM-compatible personal computers. In the design basis accident, a substantial amount of fission products is released into the air inside the reactor building. The emergency ventilation system is assumed to function, creating a negative air pressure inside the building that will prevent the uncontrolled release of fission products into the atmosphere. The emergency ventilation system will drive filtered building air through a stack to create the negative pressure.
Unavoidably, some of the fission products will pass through the filter and will be discharged. The fission products will be carried by the wind beyond the reactor site and will cause some exposure of the public to radiation. DBADOSE may be used to calculate the exposure doses for various stack configurations and meteorological conditions at given distances from the reactor. The exposure doses may be compared with acceptable limits. The source code of DBADOSE contains approximately 3000 lines of FORTRAN-77 (written for the Microsoft Fortran 4.10 compiler) and 300 lines of assembler. DBADOSE.EXE is only 58 kB in size and needs only about 71 kB of RAM to run. A math coprocessor is not needed but will speed up runs considerably. (author). 8 refs., 8 tabs.

20. Fundamental attributes of a practical configuration management program for nuclear plant design control
International Nuclear Information System (INIS)
Klein, S.M.
1988-06-01
This report summarizes the results of an evaluation of findings identified during a number of Safety System Functional Inspections and Safety System Outage Modification Inspections which are related to configuration management for nuclear plant design control. A computerized database of these findings was generated from a review of the design inspection reports. Based on the results of the evaluation, attributes of a configuration management program were developed which are responsive to minimizing these types of inspection findings. Incorporation of these key attributes is considered good practice in the development of a configuration management program for design control at operating nuclear plants.

1. The Engineering Compliance Program development process and its role in design
International Nuclear Information System (INIS)
1997-12-01
This paper presents an overview of the Engineering Compliance Program (ECP) development process and its role in design.
The ECP is a formal program to assess Nuclear Regulatory Commission (NRC) regulatory guidance in terms of precedence, industry experience documents, and codes and standards to determine their applicability to Mined Geologic Disposal System (MGDS) design. These determinations are documented in ECP Guidance Packages for MGDS Structures, Systems and Components (SSCs). This ensures that the license application appropriately reflects the MGDS design and facilitates NRC acceptance and compliance review.

2. A design procedure and handling quality criteria for lateral directional flight control systems
Science.gov (United States)
Stein, G.; Henke, A. H.
1972-01-01
A practical design procedure for aircraft augmentation systems is described, based on quadratic optimal control technology and handling-quality-oriented cost functionals. The procedure is applied to the design of a lateral-directional control system for the F4C aircraft. The design criteria, design procedure, and final control system are validated with a program of formal pilot evaluation experiments.

3. 78 FR 19261 - Safe Drinking Water Act Sole Source Aquifer Program; Designation of Bainbridge Island, Washington...
Science.gov (United States)
2013-03-29
... principal source of drinking water for the citizens of Bainbridge Island and that this aquifer system, if... designation. II. Basis for Determination EPA defines a sole or principal source aquifer as an aquifer or... ENVIRONMENTAL PROTECTION AGENCY Safe Drinking Water Act Sole Source Aquifer Program; Designation...

4. Development of intellectual reactor design system IRDS
International Nuclear Information System (INIS)
Kugo, T.; Tsuchihashi, K.; Nakagawa, M.; Mori, T.
1993-01-01
An intellectual reactor design system, IRDS, has been developed to support feasibility studies and conceptual design of new types of reactors in the fields of reactor core design, including neutronics, thermal-hydraulics and fuel design.
IRDS is an integrated software system in which a variety of computer codes in the different fields are installed. An integration of simulation modules are performed by the information transfer between modules through design model in which the design information of the current design work is stored. An object oriented architecture is realized in frame representation of core configuration in a design data base. The knowledge relating to design tasks to be performed are encapsulated, to support the conceptual design work. The system is constructed on an engineering workstation, and supports efficiently design work through man-machine interface adopting the advanced information processing technologies. Optimization methods for design parameters with use of the artificial intelligence technique are now under study, to reduce the parametric study work. A function to search design window in which design is feasible is realized in the fuel pin design. (orig.) 5. Usable Design of Civil Engineer Information Systems National Research Council Canada - National Science Library Kastenholz, Gunther 2005-01-01 .... Data was collected from a literature review of pertinent Civil Engineer information system design documents, and conclusions drawn about the existing level of specification of usability engineering principles... 6. Programming languages and operating systems used in data base systems International Nuclear Information System (INIS) Radulescu, T.G. 1977-06-01 Some apsects of the use of the programming languages and operating systems in the data base systems are presented. There are four chapters in this paper. In the first chapter we present some generalities about the programming languages. In the second one we describe the use of the programming languages in the data base systems. A classification of the programming languages used in data base systems is presented in the third one. An overview of the operating systems is made in the last chapter. (author) 7. 
The Air Program Information Management System (APIMS) Science.gov (United States) 2011-11-02 Technology November 2, 2011 The Air Program Information Management System (APIMS) Frank Castaneda, III, P.E. APIMS Program Manager AFCEE/TDNQ APIMS...NOV 2011 2. REPORT TYPE 3. DATES COVERED 00-00-2011 to 00-00-2011 4. TITLE AND SUBTITLE The Air Program Information Management System (APIMS... Information Management System : Sustainability of Enterprise air quality management system • Aspects and Impacts to Process • Auditing and Measurement 8. Programs for Testing Processor-in-Memory Computing Systems Science.gov (United States) Katz, Daniel S. 2006-01-01 The Multithreaded Microbenchmarks for Processor-In-Memory (PIM) Compilers, Simulators, and Hardware are computer programs arranged in a series for use in testing the performances of PIM computing systems, including compilers, simulators, and hardware. The programs at the beginning of the series test basic functionality; the programs at subsequent positions in the series test increasingly complex functionality. The programs are intended to be used while designing a PIM system, and can be used to verify that compilers, simulators, and hardware work correctly. The programs can also be used to enable designers of these system components to examine tradeoffs in implementation. Finally, these programs can be run on non-PIM hardware (either single-threaded or multithreaded) using the POSIX pthreads standard to verify that the benchmarks themselves operate correctly. [POSIX (Portable Operating System Interface for UNIX) is a set of standards that define how programs and operating systems interact with each other. pthreads is a library of pre-emptive thread routines that comply with one of the POSIX standards. 9. 
Systemic Design: Two Canadian Case Studies Directory of Open Access Journals (Sweden) Alex Ryan 2014-12-01 Full Text Available This paper introduces two novel applications of systemic design to facilitate a comparison of alternative methodologies that integrate systems thinking and design. In the first case study, systemic design helped the Procurement Department at the University of Toronto re-envision how public policy is implemented and how value is created in the broader university purchasing ecosystem. This resulted in an estimated$1.5 million in savings in the first year, and a rise in user retention rates from 40% to 99%. In the second case study, systemic design helped the clean energy and natural resources group within the Government of Alberta to design a more efficient and effective resource management system and shift the way that natural resource departments work together. This resulted in the formation of a standing systemic design team and contributed to the creation of an integrated resource management system. A comparative analysis of the two projects identifies a shared set of core principles for systemic design as well as areas of differentiation that reveal potential for learning across methodologies. Together, these case studies demonstrate the complementarity of systems thinking and design thinking, and show how they may be integrated to guide positive change within complex sociotechnical systems.
10. Reeducation for Design Engineers in Fukuoka System LSI College
Science.gov (United States)
Hirakawa, Kazuyuki; Sasao, Tsutomu; Fukuda, Akira; Ito, Fumiaki
The Silicon Sea Belt Project started in 2001 in the context of East Asian economic growth. Fukuoka System LSI College, a subsidiary of the project, opened in December to supply retrained design engineers to the semiconductor industries, after trialing System LSI design training programs in cooperation with industry, academia, and government. The college adopts the PDCA (Plan, Do, Check, Act) techniques that underpin quality control methodologies in manufacturing, and has applied the PDCA techniques to improving the quality of the training programs. The major semiconductor companies have adopted our programs for eight years from 2004, and have given our programs excellent scores. We hope our PDCA process will also be useful for human resource development in other technological fields.
11. Embedded systems design with special arithmetic and number systems
CERN Document Server
Sousa, Leonel; Chang, Chip-Hong
2017-01-01
This book introduces readers to alternative approaches to designing efficient embedded systems using unconventional number systems. The authors describe various systems that can be used for designing efficient embedded and application-specific processors, such as the Residue Number System, Logarithmic Number System, Redundant Binary Number System, Double-Base Number System, Decimal Floating Point Number System and Continuous Valued Number System. Readers will learn the strategies and trade-offs of using unconventional number systems in application-specific processors and be able to apply and design appropriate arithmetic operations from these number systems to boost the performance of digital systems. • Serves as a single-source reference to designing embedded systems with unconventional number systems • Covers theory as well as implementation on application-specific processors • Explains mathematical concepts in a manner accessible to readers with diverse backgrounds.
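The carry-free arithmetic that motivates the Residue Number System mentioned in this entry can be sketched in a few lines. This is an illustrative sketch only, not taken from the book: the moduli (3, 5, 7) are an assumed textbook choice, and `to_rns`/`from_rns` are hypothetical helper names.

```python
from math import prod

# Pairwise-coprime moduli; (3, 5, 7) is an assumed textbook example.
MODULI = (3, 5, 7)

def to_rns(x, moduli=MODULI):
    """Encode an integer as its residue in each channel."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli=MODULI):
    """Decode via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(., -1, m): modular inverse (Python 3.8+)
    return x % M

# Addition is carry-free: each residue channel is added independently,
# which is the property RNS-based processors exploit.
a, b = 17, 30
s = tuple((ra + rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(s) == (a + b) % prod(MODULI)
```

The absence of carry propagation between channels is what makes RNS adders attractive for application-specific processors; the price is that comparison, overflow detection, and conversion back to binary (the CRT step above) become harder.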
12. Small Water System Management Program: 100 K Area
International Nuclear Information System (INIS)
Hunacek, G.S. Jr.
1995-01-01
Purposes of this document are: to provide an overview of the service and potable water system presently in service at the Hanford Site's 100 K Area; to provide future system forecasts based on anticipated DOE activities and programs; to delineate performance, design, and operations criteria; and to describe planned improvements. The objective of the small water system management program is to assure the water system is properly and reliably managed and operated, and continues to exist as a functional and viable entity in accordance with WAC 246-290-410
13. Decentralized systems with design constraints
CERN Document Server
Mahmoud, Magdi S
2014-01-01
This volume provides a rigorous examination of the analysis, stability and control of large-scale systems, and addresses the difficulties that arise because of dimensionality, information structure constraints, parametric uncertainty and time-delays.
14. Simulation systems: design and applications
Directory of Open Access Journals (Sweden)
Liudmila Burtseva
1996-09-01
Full Text Available In this paper the history of the Simulation System Group's investigations is presented. Important achievements, past and present, are highlighted. The directions of future investigations are discussed in the fourth section of the paper.
15. Architectural design of flue gas continuous emission monitoring system
Science.gov (United States)
Zhou, Hongfu; Jiang, Liangzhong; Tang, Yong; Yao, Xifan
2008-10-01
The paper presents the architectural design of a flue gas continuous emission monitoring system (CEMS), which uses a computer, an acquisition card and a serial port communication card as the hardware. The CEMS monitors dust, SO2 and NOx in the flue gas, as well as emission parameters including mass flow, pressure, and temperature. For the software, the monitoring program is designed in VC++ and realizes flue gas monitoring within this architecture.
16. Integrated CAE system for nuclear power plants. Development of piping design check system
International Nuclear Information System (INIS)
Narikawa, Noboru; Sato, Teruaki
1994-01-01
Toshiba Corporation has developed and operates an integrated CAE system for nuclear power plants, the core of which is an engineering database that accurately and efficiently manages an enormous amount of data on machinery, equipment and piping. As the first step in putting a knowledge base system to practical use, a piping design check system has been developed. By automatically checking piping designs, this system aims at preventing overlooked mistakes, making design work efficient, and improving the overall quality of design. The system is based on the philosophy that it supports designers, with final decisions made by the designers themselves. It is composed of the integrated database, a two-dimensional CAD system and a three-dimensional CAD system. The piping design check system is one of the application systems of the integrated CAE system. Object-oriented programming is the base of the piping design check system, and design knowledge and CAD data are necessary. As to the method of realizing the check system, the flow of piping design, the checkup functions, the checking of interference and attributes, and the integration of the system are explained. (K.I)
17. Design of low noise imaging system
Science.gov (United States)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a low noise imaging system under the global shutter mode, a complete imaging system is designed based on the SCMOS (Scientific CMOS) image sensor CIS2521F. The paper introduces the hardware circuit and software system design. Based on the analysis of key indexes and technologies of the imaging system, the paper makes chip selections and decides on SCMOS + FPGA + DDRII + Camera Link as the processing architecture. It then introduces the entire system workflow and the power supply and distribution unit design. As for the software system, which consists of the SCMOS control module, image acquisition module, data cache control module and transmission control module, the paper presents a design in Verilog and drives it to work properly on a Xilinx FPGA. The imaging experimental results show that the imaging system exhibits a 2560×2160 pixel resolution and a maximum frame rate of 50 fps. The imaging quality of the system satisfies the requirement of the index.
18. Screening candidate systems engineers: a research design
CSIR Research Space (South Africa)
Goncalves, DP
2009-07-01
Full Text Available engineering screening methodology that could be used to screen potential systems engineers. According to their design, this can be achieved by defining a system engineering profile according to specific psychological attributes, and using this profile...
19. Axiomatic Design of Space Life Support Systems
Science.gov (United States)
Jones, Harry W.
2017-01-01
Systems engineering is an organized way to design and develop systems, but the initial system design concepts are usually seen as the products of unexplained but highly creative intuition. Axiomatic design is a mathematical approach to produce and compare system architectures. The two axioms are: (1) maintain the independence of the functional requirements; and (2) minimize the information content (or complexity) of the design. The first axiom generates good system design structures and the second axiom ranks them. The closed system human life support architecture now implemented in the International Space Station has been essentially unchanged for fifty years. In contrast, brief missions such as Apollo and Shuttle have used open loop life support. As mission length increases, greater system closure and increased recycling become more cost-effective. Closure can be gradually increased, first recycling humidity condensate, then hygiene wastewater, urine, carbon dioxide, and water recovery brine. A long term space station or planetary base could implement nearly full closure, including food production. Dynamic systems theory supports the axioms by showing that fewer requirements, fewer subsystems, and fewer interconnections all increase system stability. If systems are too complex and interconnected, reliability is reduced and operations and maintenance become more difficult. Using axiomatic design shows how the mission duration and other requirements determine the best life support system design, including the degree of closure.
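The second axiom named in this entry is commonly quantified (following Suh's axiomatic design) as an information content I = Σ log2(1/p_i), where p_i is the probability that functional requirement i is satisfied. A minimal sketch, with assumed, purely hypothetical success probabilities for two candidate life-support architectures:

```python
from math import log2

def information_content(success_probs):
    # Axiom 2: I = sum over functional requirements of log2(1/p_i);
    # the design with lower information content is ranked better.
    return sum(log2(1.0 / p) for p in success_probs)

# Hypothetical per-requirement success probabilities (illustration only).
open_loop = information_content([0.95, 0.90, 0.99])
closed_loop = information_content([0.90, 0.80, 0.95])
preferred = "open loop" if open_loop < closed_loop else "closed loop"
```

A certain design (all p_i = 1) has zero information content; any uncertainty adds bits, so Axiom 2 penalizes complexity in a directly comparable unit across architectures.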
20. Comparative analysis of nuclear reactor control system designs
International Nuclear Information System (INIS)
Russcher, G.E.
1975-01-01
Control systems are vital to the safe operation of nuclear reactors. Their seismic design requirements are some of the most important criteria governing reactor system design evaluation. Consequently, the seismic analysis for nuclear reactors is directed to include not only the mechanical and structural seismic capabilities of a reactor, but the control system functional requirements as well. In the study described, an alternate conceptual design of a safety rod system was compared with a prototypic system design to assess their relative functional reliabilities under design seismic conditions. The comparative methods utilized standard success tree and decision tree techniques to determine the relative figures of merit. The study showed: (1) the methodology utilized can provide both qualitative and quantitative bases for design decisions regarding seismic functional capabilities of two systems under comparison; (2) the process emphasizes the visibility of particular design features that are subject to common mode failure while under seismic loading; and (3) minimal improvement was shown to be available in overall system seismic performance of an independent conceptual design; however, it also showed the system would be subject to a new set of operational uncertainties which would have to be resolved by extensive development programs
1. Power plant system assessment. Final report. SP-100 Program
International Nuclear Information System (INIS)
Anderson, R.V.; Atkins, D.F.; Bost, D.S.
1983-01-01
The purpose of this assessment was to provide system-level insights into 100-kWe-class space reactor electric systems. Using these insights, Rockwell was to select and perform conceptual design studies on a ''most attractive'' system that met the preliminary design goals and requirements of the SP-100 Program. About 4 of the 6 months were used in the selection process. The remaining 2 months were used for the system conceptual design studies. Rockwell completed these studies at the end of FY 1983. This report summarizes the results of the power plant system assessment and describes our choice for the most attractive system - the Rockwell SR-100G System (Space Reactor, 100 kWe, Growth) - a lithium-cooled UN-fueled fast reactor/Brayton turboelectric converter system
2. Power plant system assessment. Final report. SP-100 Program
Energy Technology Data Exchange (ETDEWEB)
Anderson, R.V.; Atkins, D.F.; Bost, D.S.; Berman, B.; Clinger, D.A.; Determan, W.R.; Drucker, G.S.; Glasgow, L.E.; Hartung, J.A.; Harty, R.B.
1983-10-31
The purpose of this assessment was to provide system-level insights into 100-kWe-class space reactor electric systems. Using these insights, Rockwell was to select and perform conceptual design studies on a ''most attractive'' system that met the preliminary design goals and requirements of the SP-100 Program. About 4 of the 6 months were used in the selection process. The remaining 2 months were used for the system conceptual design studies. Rockwell completed these studies at the end of FY 1983. This report summarizes the results of the power plant system assessment and describes our choice for the most attractive system - the Rockwell SR-100G System (Space Reactor, 100 kWe, Growth) - a lithium-cooled UN-fueled fast reactor/Brayton turboelectric converter system.
3. Aviation System Analysis Capability Executive Assistant Design
Science.gov (United States)
Roberts, Eileen; Villani, James A.; Osman, Mohammed; Godso, David; King, Brent; Ricciardi, Michael
1998-01-01
In this technical document, we describe the design developed for the Aviation System Analysis Capability (ASAC) Executive Assistant (EA) Proof of Concept (POC). We describe the genesis and role of the ASAC system, discuss the objectives of the ASAC system and provide an overview of components and models within the ASAC system, and describe the design process and the results of the ASAC EA POC system design. We also describe the evaluation process and results for applicable COTS software. The document has six chapters, a bibliography, three appendices and one attachment.
4. Advanced topics in security computer system design
International Nuclear Information System (INIS)
Stachniak, D.E.; Lamb, W.R.
1989-01-01
The capability, performance, and speed of contemporary computer processors, plus the associated performance capability of the operating systems accommodating the processors, have enormously expanded the scope of possibilities for designers of nuclear power plant security computer systems. This paper addresses the choices that could be made by a designer of security computer systems working with contemporary computers and describes the improvement in functionality of contemporary security computer systems based on an optimally chosen design. Primary initial considerations concern the selection of (a) the computer hardware and (b) the operating system. Considerations for hardware selection concern processor and memory word length, memory capacity, and numerous processor features
5. Hydrological Monitoring System Design and Implementation Based on IOT
Science.gov (United States)
Han, Kun; Zhang, Dacheng; Bo, Jingyi; Zhang, Zhiguang
In this article, an embedded system development platform based on GSM communication is proposed. Through its application in hydrology monitoring management, the authors discuss communication reliability and lightning protection, suggest detailed solutions, and analyze the design and realization of the upper computer software. Finally, the communication program is given. A hydrology monitoring system built on a wireless communication network is a typical practical application of embedded systems, realizing intelligent, modern, efficient and networked hydrology monitoring management.
6. The design and implementation of vehicle scrapping programs
International Nuclear Information System (INIS)
Sahu, R.; Baxter, R.A.
1993-01-01
A number of metropolitan air basins in the US are currently faced with increased difficulty in attaining national and regional clean air standards. Significant controls on stationary sources over the years have allowed mobile sources to become the primary source of air emissions in many areas. Programs allowing the use of mobile source offsets for stationary source emissions by removal of older, higher emitting vehicles through scrappage programs are, therefore, conceptually attractive and are starting to be implemented. However, achieving success in such scrappage programs is a challenge given the associated technical, economic and social issues. This paper presents a discussion of the important issues that must be considered if vehicle scrappage programs are to be successful, including recent guidance and views of the EPA and state governments on the credits associated with the programs. Although the main focus of such programs is the reduction of criteria pollutants (CO, ROG, NOx, and PM10), the impact on air toxics also has to be considered. The paper then focuses on the technical design of vehicle scrappage programs such that the resulting credits are real, verifiable, enforceable, and cost-effective. Information available under existing vehicle I/M programs, along with economic, vehicle maintenance, and geographic data, will be used with statistical techniques in order to meet predetermined program goals regarding emissions reduction and cost-effectiveness. A later case-study paper will discuss the actual implementation of such a program in an ozone non-attainment area
7. Design and development of virtual TXP control system software
International Nuclear Information System (INIS)
Wang Yunwei; Leng Shan; Liu Zhisheng; Wang Qiang; Shang Yanxia
2008-01-01
Taking the distributed control system (DCS) of Siemens TELEPERM-XP (TXP) as the simulation object, a Virtual TXP (VTXP) control system based on a virtual DCS with high fidelity and reliability was designed and developed on the Windows platform. In the development process, object-oriented modeling and modular program design are adopted; the C++ language and technologies such as multithreading, ActiveX controls, and Socket network communication are used to realize wide-range dynamic simulation and recreate the functions of the hardware and software of the real TXP. This paper puts emphasis on the design and realization of the Control server and the Communication server. The development of the Virtual TXP control system software greatly benefits the construction of simulation systems and the design, commissioning, verification and maintenance of control systems in large-scale power plants, nuclear power plants and combined cycle power plants. (authors)
8. Model-Based Design for Embedded Systems
CERN Document Server
Nicolescu, Gabriela
2009-01-01
Model-based design allows teams to start the design process from a high-level model that is gradually refined through abstraction levels to ultimately yield a prototype. This book describes the main facets of heterogeneous system design. It focuses on multi-core methodological issues, real-time analysis, and modeling and validation
9. Designing optimal mixtures using generalized disjunctive programming: Hull relaxations
OpenAIRE
2016-01-01
A general modeling framework for mixture design problems, which integrates Generalized Disjunctive Programming (GDP) into the Computer-Aided Mixture/blend Design (CAMbD) framework, was recently proposed (S. Jonuzaj, P.T. Akula, P.-M. Kleniati, C.S. Adjiman, 2016. AIChE Journal 62, 1616–1633). In this paper we derive Hull Relaxations (HR) of GDP mixture design problems as an alternative to the big-M (BM) approach presented in this earlier work. We show that in restricted mixture design probl...
10. Selecting, adapting, and sustaining programs in health care systems
Directory of Open Access Journals (Sweden)
Zullig LL
2015-04-01
Full Text Available Leah L Zullig,1,2 Hayden B Bosworth1–4 1Center for Health Services Research in Primary Care, Durham Veterans Affairs Medical Center, Durham, NC, USA; 2Department of Medicine, Duke University Medical Center, Durham, NC, USA; 3School of Nursing, 4Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA Abstract: Practitioners and researchers often design behavioral programs that are effective for a specific population or problem. Despite their success in a controlled setting, relatively few programs are scaled up and implemented in health care systems. Planning for scale-up is a critical, yet often overlooked, element in the process of program design. Equally as important is understanding how to select a program that has already been developed, and adapt and implement the program to meet specific organizational goals. This adaptation and implementation requires attention to organizational goals, available resources, and program cost. We assert that translational behavioral medicine necessitates expanding successful programs beyond a stand-alone research study. This paper describes key factors to consider when selecting, adapting, and sustaining programs for scale-up in large health care systems and applies the Knowledge to Action (KTA) Framework to a case study, illustrating knowledge creation and an action cycle of implementation and evaluation activities. Keywords: program sustainability, diffusion of innovation, information dissemination, health services research, intervention studies
11. European passive plant program A design for the 21st century
International Nuclear Information System (INIS)
1998-01-01
In 1994, a group of European utilities initiated, together with Westinghouse and its industrial partner GENESI (an Italian consortium including ANSALDO and FIAT), a program designated EPP (European Passive Plant) to evaluate Westinghouse passive nuclear plant technology for application in Europe. The following major tasks were accomplished: (1) the impacts of the European utility requirements (EUR) on the Westinghouse nuclear island design were evaluated; and (2) a 1000 MWe passive plant reference design (EP1000) was established which conforms to the EUR and is expected to be licensable in Europe. With respect to safety systems and containment, the reference plant design closely follows that of the Westinghouse simplified pressurized water reactor (SPWR) design, while the AP600 plant design has been taken as the basis for the EP1000 reference design in the auxiliary system design areas. However, the EP1000 design also includes features required to meet the EUR, as well as key European licensing requirements. (orig.)
12. Design and fabrication of an automated temperature programmed ...
A completely automated temperature-programmed reaction (TPR) system for carrying out gas-solid catalytic reactions under atmospheric flow conditions is fabricated to study CO and hydrocarbon oxidation, and NO reduction. The system consists of an all-stainless steel UHV system, quadrupole mass spectrometer SX200 ...
13. Design and fabrication of an automated temperature programmed ...
Unknown
Abstract. A completely automated temperature-programmed reaction (TPR) system for carrying out gas–solid catalytic reactions under atmospheric flow conditions is fabricated to study CO and hydrocarbon oxidation, and NO reduction. The system consists of an all-stainless steel UHV system, quadrupole mass.
14. Design for a Program Visualization System.
Science.gov (United States)
1981-01-01
uses color bars to indicate the type of command, for example. Still another possibility could be to use color to differentiate between declarations... defined and visible; moreover, the conditions on process boxes embedded within... They have added data definition constructs... to create... that contain data definitions and embedded PL/1... programming techniques. The absence of a way to represent GOTOs effectively precludes...
15. Probabilistic Design of Offshore Structural Systems
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
Probabilistic design of structural systems is considered in this paper. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisf... ...-analytical derivatives. Finally an example of probabilistic design of an offshore structure is considered.
16. Probabilistic Design of Offshore Structural Systems
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
1988-01-01
Probabilistic design of structural systems is considered in this paper. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisf... ...-analytical derivatives. Finally an example of probabilistic design of an offshore structure is considered.
17. Implantable intraocular pressure monitoring systems: Design considerations
KAUST Repository
2013-12-01
Design considerations and limitations of implantable Intraocular Pressure Monitoring (IOPM) systems are presented in this paper. Detailed comparison with the state of the art is performed to highlight the benefits and challenges of the proposed design. The system-on-chip, presented here, is battery free and harvests energy from incoming RF signals. This low-cost design, in standard CMOS process, does not require any external components or bond wires to function. This paper provides useful insights to the designers of implantable wireless sensors in terms of design choices and associated tradeoffs. © 2013 IEEE.
18. The system design of TRIO cinema Mission
Science.gov (United States)
Jin, Ho; Seon, Jongho; Kim, Khan-Hyuk; Lee, Dong-Hun; Kim, Kap-Sung; Lin, Robert; Parks, George; Tindall, Craig; Horbury, T. S.; Larson, Davin; Sample, John
TRIO (Triplet Ionospheric Observatory) CINEMA (Cubesat for Ion, Neutral, Electron, MAgnetic fields) is a space science mission with three identical cubesats. The main scientific objectives are multi-point observation of ionospheric ENA (Energetic Neutral Atom) imaging, ionospheric signatures of suprathermal electrons and ions, and complementary measurements of magnetic fields for particle data. For this, the main payloads consist of a suprathermal electron, ion, neutral (STEIN) instrument and a 3-axis magnetometer of magnetoresistive sensors. CINEMA is a 3-unit CubeSat, which translates to 10 cm x 10 cm x 30 cm in volume and no more than four kilograms in mass. An attitude control system (ACS) uses torque coils, a sun sensor and the magnetometers, and spins the CINEMA spacecraft at 4 rpm with the spin axis perpendicular to the ecliptic plane. CINEMA will be placed into a high inclination low earth orbit that crosses the auroral zone and cusp. Three institutes are collaborating to develop the CINEMA cubesats: i) two cubesats by Kyung Hee University (KHU) under their World Class University (WCU) program, ii) one cubesat by UC Berkeley under NSF support, and iii) three magnetometers provided by Imperial College. In this paper, we describe the system design and performance of the TRIO CINEMA mission. TRIO CINEMA's development of miniature instruments and spacecraft spinning operation will play an important role for future nanosatellite space missions
19. The CANDU 9 distributed control system design process
International Nuclear Information System (INIS)
Harber, J.E.; Kattan, M.K.; Macbeth, M.J.
1997-01-01
Canadian designed CANDU pressurized heavy water nuclear reactors have been world leaders in electrical power generation. The CANDU 9 project is AECL's next reactor design. Plant control for the CANDU 9 station design is performed by a distributed control system (DCS) as compared to centralized control computers, analog control devices and relay logic used in previous CANDU designs. The selection of a DCS as the platform to perform the process control functions and most of the data acquisition of the plant, is consistent with the evolutionary nature of the CANDU technology. The control strategies for the DCS control programs are based on previous CANDU designs but are implemented on a new hardware platform taking advantage of advances in computer technology. This paper describes the design process for developing the CANDU 9 DCS. Various design activities, prototyping and analyses have been undertaken in order to ensure a safe, functional, and cost-effective design. (author)
20. Doppler Lidar System Design via Interdisciplinary Design Concept at NASA Langley Research Center - Part III
Science.gov (United States)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint, and mass. A preliminary design iteration reduced power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky, over-specified components with smaller components custom-designed for the power system. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
https://worldwidescience.org/topicpages/q/quasiaverages+broken+symmetry.html

#### Sample records for quasiaverages broken symmetry
1. Quasiaverages, symmetry breaking and irreducible Green functions method
Directory of Open Access Journals (Sweden)
A.L.Kuzemsky
2010-01-01
The development and applications of the method of quasiaverages to quantum statistical physics and to quantum solid state theory and, in particular, to the quantum theory of magnetism, were considered. The role of symmetry (and the breaking of symmetries), in combination with the degeneracy of the system, was reanalyzed and essentially clarified within the framework of the method of quasiaverages. The problem of finding the ferromagnetic, antiferromagnetic and superconducting "symmetry broken" solutions of the correlated lattice fermion models was discussed within the irreducible Green functions method. A unified scheme for the construction of generalized mean fields (elastic scattering corrections) and self-energy (inelastic scattering) in terms of the equations of motion and the Dyson equation was generalized in order to include the "source fields". This approach complements previous studies of the microscopic theory of antiferromagnetism and clarifies the concepts of Neel sublattices for localized and itinerant antiferromagnetism and "spin-aligning fields" of correlated lattice fermions.
2. A broken symmetry ontology: Quantum mechanics as a broken symmetry
International Nuclear Information System (INIS)
Buschmann, J.E.
1988-01-01
The author proposes a new broken symmetry ontology to be used to analyze the quantum domain. This ontology is motivated and grounded in a critical epistemological analysis, and an analysis of the basic role of symmetry in physics. Concurrently, he is led to consider nonheterogeneous systems, whose logical state space contains equivalence relations not associated with the causal relation. This allows him to find a generalized principle of symmetry and a generalized symmetry-conservation formalisms. In particular, he clarifies the role of Noether's theorem in field theory. He shows how a broken symmetry ontology already operates in a description of the weak interactions. Finally, by showing how a broken symmetry ontology operates in the quantum domain, he accounts for the interpretational problem and the essential incompleteness of quantum mechanics. He proposes that the broken symmetry underlying this ontological domain is broken dilation invariance
3. Broken color symmetry and weak currents
International Nuclear Information System (INIS)
Stech, B.
1976-01-01
Broken colour symmetry predicts a very rich spectrum of new particles. If broken colour is relevant at all, charged psi-particles should be found, in particular in the 4 GeV region. For the weak hadronic currents no completely satisfactory suggestion exists. Broken colour symmetry describes qualitatively several of the new effects observed recently. (BJ)
4. Broken SU(4) symmetry and new resonance
International Nuclear Information System (INIS)
Ueda, Y.
1975-11-01
Weinberg's spectral function sum rules are modified to accommodate broken-symmetry effects of SU(4). With a simple choice of the symmetry-breaking term, the spectral function sum rules yield the observed vector meson mass spectrum as well as sum rules for the e+e- decay rates of vector mesons. In particular, a new mass formula, which can be interpreted as the broken-symmetry version of the Schwinger formula, is derived; the agreement with experiment is excellent. (Ueda, Y.)
5. Nobel Prize for work on broken symmetries
CERN Multimedia
2008-01-01
The 2008 Nobel Prize for Physics goes to three physicists who have worked on broken symmetries in particle physics. The announcement of the 2008 Nobel Prize for physics was transmitted to the Globe of Science and Innovation via webcast on the occasion of the preview of the Nobel Accelerator exhibition. On 7 October it was announced that the Royal Swedish Academy of Sciences had awarded the 2008 Nobel Prize for physics to three particle physicists for their fundamental work on the mechanisms of broken symmetries. Half the prize was awarded to Yoichiro Nambu of Fermilab for "the discovery of the mechanism of spontaneous broken symmetry in subatomic physics". The other half is shared by Makoto Kobayashi of Japan's KEK Institute and Toshihide Maskawa of the Yukawa Institute at the University of Kyoto "for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in Nature". At th...
6. Bag model with broken chiral symmetry
International Nuclear Information System (INIS)
Efrosinin, V.P.; Zaikin, D.A.
1986-01-01
A variant of the bag model in which chiral symmetry is broken and which provides a description of all the experimental data on the light hadrons, including the pion, is discussed. The pion and kaon decay constants are calculated in this model. The problem of taking into account the center-of-mass motion in bag models and the boundary conditions in the bag model with broken chiral symmetry are also discussed
7. Spontaneously broken global symmetries and cosmology
International Nuclear Information System (INIS)
Shafi, Q.; Vilenkin, A.
1984-01-01
Phase transitions associated with spontaneously broken global symmetries, in case these occur in nature, can have important cosmological implications. This is illustrated through two examples. The first one shows how the spontaneous breaking of a global U(1) symmetry, present, for instance, in the minimal SU(5) model, can lead to an inflationary phase. The second example illustrates how topologically stable strings associated with the breaking of U(1) symmetry make an appearance at (or near) the end of the inflationary era
8. Neutrino masses and spontaneously broken flavor symmetries
International Nuclear Information System (INIS)
Staudt, Christian
2014-01-01
We study the phenomenology of supersymmetric flavor models. We show how the predictions of models based on spontaneously broken non-Abelian discrete flavor symmetries are altered when we include so-called Kaehler corrections. Furthermore, we discuss anomaly-free discrete R symmetries which are compatible with SU(5) unification. We find a set of symmetries compatible with suppressed Dirac neutrino masses and a unique symmetry consistent with the Weinberg operator. We also study a pseudo-anomalous U(1) R symmetry which explains the fermion mass hierarchies and, when amended with additional singlet fields, ameliorates the fine-tuning problem.
9. The Broken Symmetry of Time
International Nuclear Information System (INIS)
Kastner, Ruth E.
2011-01-01
This paper seeks to clarify features of time asymmetry in terms of symmetry breaking. It is observed that, in general, a contingent situation or event requires the breaking of an underlying symmetry. The distinction between the universal anisotropy of temporal processes and the irreversibility of certain physical processes is clarified. It is also proposed that the Transactional Interpretation of quantum mechanics offers an effective way to explain general thermodynamic asymmetry in terms of the time asymmetry of radiation, where prior such efforts have fallen short.
10. The Broken Symmetry of Time
Science.gov (United States)
Kastner, Ruth E.
2011-11-01
This paper seeks to clarify features of time asymmetry in terms of symmetry breaking. It is observed that, in general, a contingent situation or event requires the breaking of an underlying symmetry. The distinction between the universal anisotropy of temporal processes and the irreversibility of certain physical processes is clarified. It is also proposed that the Transactional Interpretation of quantum mechanics offers an effective way to explain general thermodynamic asymmetry in terms of the time asymmetry of radiation, where prior such efforts have fallen short.
11. Renormalizable models with broken symmetries
International Nuclear Information System (INIS)
Becchi, C.; Rouet, A.; Stora, R.
1975-10-01
The results of renormalized perturbation theory, in the absence of massless quanta, are summarized. Global symmetry breaking is studied, and the associated currents are discussed in terms of the coupling to a classical Yang-Mills field. Gauge theories are discussed; it is most likely that the natural setting is the theory of fiber bundles, and that making a choice of field coordinates obscures the situation. An attempt is made to clarify the meaning of the Slavnov symmetry which characterizes gauge field theories.
12. Soft Terms from Broken Symmetries
CERN Document Server
Buican, Matthew
2010-01-01
In theories of physics beyond the Standard Model (SM), visible sector fields often carry quantum numbers under additional gauge symmetries. One could then imagine a scenario in which these extra gauge symmetries play a role in transmitting supersymmetry breaking from a hidden sector to the Supersymmetric Standard Model (SSM). In this paper we present a general formalism for studying the resulting hidden sectors and calculating the corresponding gauge-mediated soft parameters. We find that a large class of generic models features a leading universal contribution to the soft scalar masses that only depends on the scale of Higgsing, even if the model is strongly coupled. As a by-product of our analysis, we elucidate some IR aspects of the correlation functions in General Gauge Mediation. We also discuss possible phenomenological applications.
13. N=1 superstrings with spontaneously broken symmetries
International Nuclear Information System (INIS)
Ferrara, S.
1988-01-01
We construct N=1 chiral superstrings with spontaneously broken gauge symmetry in four space-time dimensions. These new string solutions are obtained by a generalized coordinate-dependent Z_2 orbifold compactification of some non-chiral five-dimensional N=1 and N=2 superstrings. The scale of symmetry breaking is arbitrary (at least classically) and it can be chosen hierarchically smaller than the string scale (α')^(-1/2). (orig.)
14. Broken colour symmetry and liberated quarks
International Nuclear Information System (INIS)
Ma, E.
1976-01-01
A quark model of hadrons is presented and discussed, in which local SU(3) gauge symmetry is completely broken and yet asymptotic freedom is preserved. There is no infrared slavery in this model, and isolated quarks are free to exist. Colour becomes a global symmetry which is only approximate under SU(3) but nearly exact under SU(2) x U(1), as far as the usual hadron spectroscopy is concerned. (Auth.)
15. Quantum Space-Time Deformed Symmetries Versus Broken Symmetries
CERN Document Server
Amelino-Camelia, G
2002-01-01
Several recent studies have concerned the fate of classical symmetries in quantum space-time. In particular, it appears likely that quantum (discretized, noncommutative, ...) versions of Minkowski space-time would not enjoy the classical Lorentz symmetries. I compare two interesting cases: the case in which the classical symmetries are "broken", i.e. at the quantum level some classical symmetries are lost, and the case in which the classical symmetries are "deformed", i.e. the quantum space-time has as many symmetries as its classical counterpart but the nature of these symmetries is affected by the space-time quantization procedure. While some general features, such as the emergence of deformed dispersion relations, characterize both the symmetry-breaking case and the symmetry-deformation case, the two scenarios are also characterized by sharp differences, even concerning the nature of the new effects predicted. I illustrate this point within an illustrative calculation concerning the role of space-time symm...
16. Holography with broken Poincaré symmetry
NARCIS (Netherlands)
Korovins, J.
2014-01-01
This thesis deals with the extensions of the holographic dualities to the situations where part of the Poincaré group has been broken. Such theories are particularly relevant for applications of gauge/gravity dualities to condensed matter systems, which usually exhibit non-relativistic symmetry.
17. Ratchet device with broken friction symmetry
DEFF Research Database (Denmark)
Norden, Bengt; Zolotaryuk, Yaroslav; Christiansen, Peter Leth
2002-01-01
An experimental setup (gadget) has been made for demonstration of a ratchet mechanism induced by broken symmetry of a dependence of dry friction on external forcing. This gadget converts longitudinal oscillating or fluctuating motion into a unidirectional rotation, the direction of which is in accordance with given theoretical arguments. Despite the setup being three dimensional, the ratchet rotary motion is proved to be described by one simple dynamic equation. This kind of motion is a result of the interplay of friction and inertia.
18. Random-phase approximation and broken symmetry
International Nuclear Information System (INIS)
Davis, E.D.; Heiss, W.D.
1986-01-01
The validity of the random-phase approximation (RPA) in broken-symmetry bases is tested in an appropriate many-body system for which exact solutions are available. Initially the regions of stability of the self-consistent quasiparticle bases in this system are established and depicted in a 'phase' diagram. It is found that only stable bases can be used in an RPA calculation. This is particularly true for those RPA modes which are not associated with the onset of instability of the basis; it is seen that these modes do not describe any excited state when the basis is unstable, although from a formal point of view they remain acceptable. The RPA does well in a stable broken-symmetry basis provided one is not too close to a point where a phase transition occurs. This is true for both energies and matrix elements. (author)
19. Quantum restoration of broken symmetry in one-dimensional loop ...
Pramana – Journal of Physics, Volume 82, Issue 6. Keywords: non-local transformation; broken symmetry; sine-Gordon; sech interaction. A specific type of classically broken symmetry is restored in quantum theory. One-dimensional sine-Gordon system and ...
20. Effective theories with broken flavour symmetry
International Nuclear Information System (INIS)
Miller, R.D.C.; McKellar, B.H.J.
1981-07-01
The work of Ovrut and Schnitzer on effective theories derived from a non-Abelian gauge theory is generalised to include the physically interesting case of broken flavour symmetry. The calculations are performed at the 1-loop level. It is shown that at an intermediate stage in the calculations two distinct renormalised gauge coupling constants appear, one describing gauge field coupling to heavy particles and the other describing coupling to light particles. Appropriately modified Slavnov-Taylor identities are shown to hold. A simple alternative to the Ovrut-Schnitzer rules for calculating with effective theories is also considered.
1. Broken symmetries and the Cabibbo angle
International Nuclear Information System (INIS)
Lanik, J.
1975-04-01
Under the assumption that the SU(3) symmetry is broken by the strong and electromagnetic interactions, a phenomenological theory of the Cabibbo angle θ is proposed. In this theory the angle θ is fixed by linking together the Cabibbo rotation in the SU(3) space and the complete SU(3) breaking, consisting of both the SU(3) Hamiltonian and vacuum non-invariances. Assuming that the value of θ is zero in the soft-pion limit and that, in this limit, the only forces responsible for the isotopic symmetry breaking are the usual photonic forces, it is shown that the usual electromagnetic interactions can contribute to the value of θ only through the non-vanishing vacuum expectation value of a certain scalar field. Within the framework of the (3, 3̄)+(3̄, 3) chiral symmetry-breaking model and through the use of the experimental value of the ratio Γ(K→μν)/Γ(π→μν), the presented Cabibbo angle theory predicts the value sin θ = 0.25, which is in good agreement with experiment. (Lanik, J.)
2. Ratchet due to broken friction symmetry
DEFF Research Database (Denmark)
Norden, Bengt; Zolotaryuk, Yaroslav; Christiansen, Peter Leth
2002-01-01
A ratchet mechanism that occurs due to asymmetric dependence of the friction of a moving system on its velocity or a driving force is reported. For this kind of ratchet, instead of a particle moving in a periodic potential whose dynamics have broken space-time symmetry, the system must be provided with some internal structure realizing such a velocity- or force-friction dependence. For demonstration of a ratchet mechanism of this type, an experimental setup (gadget) that converts longitudinal oscillating or fluctuating motion into a unidirectional rotation has been built and experiments with it have been carried out. In this device, an asymmetry of friction dependence on an applied force appears, resulting in rectification of rotary motion. In experiments, our setup is observed to rotate only in one direction, which is in accordance with given theoretical arguments. Despite the setup being three dimensional, the ratchet rotary motion is proved to be described by one simple dynamic equation.
3. Neutrino mixing: from the broken μ-τ symmetry to the broken Friedberg–Lee symmetry
International Nuclear Information System (INIS)
Xing, Zhizhong
2007-01-01
I argue that the observed flavor structures of leptons and quarks might imply the existence of certain flavor symmetries. The latter should be a good starting point for building realistic models towards a deeper understanding of the fermion mass spectra and flavor mixing patterns. The μ-τ permutation symmetry serves as such an example, interpreting the almost maximal atmospheric neutrino mixing angle (θ_23 ≈ 45°) and the strongly suppressed CHOOZ neutrino mixing angle (θ_13 < 10°). In this talk I highlight a new kind of flavor symmetry, the Friedberg–Lee symmetry, for the effective Majorana neutrino mass operator. Luo and I have shown that this symmetry can be broken in an oblique way, such that the lightest neutrino remains massless but an experimentally favored neutrino mixing pattern is achievable. We obtain a novel prediction for θ_13 in the CP-conserving case: sin θ_13 = tan θ_12 |(1 - tan θ_23)/(1 + tan θ_23)|. Our scenario can simply be generalized to accommodate CP violation and be combined with the seesaw mechanism. Finally I stress the importance of probing possible effects of μ-τ symmetry breaking either in terrestrial neutrino oscillation experiments or with ultrahigh-energy cosmic neutrino telescopes. (author)
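As a rough numerical illustration of the quoted CP-conserving relation sin θ_13 = tan θ_12 |(1 - tan θ_23)/(1 + tan θ_23)|, the sketch below evaluates it for sample angles. The input values are illustrative assumptions, not values taken from the abstract:

```python
import math

def theta13_prediction(theta12_deg, theta23_deg):
    """Evaluate the quoted CP-conserving relation and return theta_13 in degrees."""
    t12 = math.tan(math.radians(theta12_deg))
    t23 = math.tan(math.radians(theta23_deg))
    sin13 = t12 * abs((1.0 - t23) / (1.0 + t23))
    return math.degrees(math.asin(sin13))

# Illustrative inputs (assumptions): theta_12 = 33.4 deg, theta_23 = 41 deg.
# Exactly maximal theta_23 = 45 deg makes the prediction vanish.
print(theta13_prediction(33.4, 41.0))
print(theta13_prediction(33.4, 45.0))
```

Note how the prediction ties θ_13 to the deviation of θ_23 from maximality: at θ_23 = 45° the tan θ_23 = 1 factor kills it entirely, consistent with the suppressed CHOOZ angle discussed in the abstract.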
4. Superconducting cosmic strings in models with spontaneously broken family symmetry
International Nuclear Information System (INIS)
Bibilashvili, T.M.; Dvali, G.R.
1990-01-01
It is shown that superconducting cosmic strings with some specific properties naturally exist in models of spontaneously broken family symmetry. Superconductivity may be of both types - bosonic and fermionic. There exists a possible mechanism of string conservation. (orig.)
5. Symmetry restoration in spontaneously broken induced gravity
International Nuclear Information System (INIS)
Amati, D.; Russo, J.
1990-01-01
We investigate the recovery of expected invariant behaviours in a non-metric gravity theory in which the full general relativistic invariance is broken spontaneously. We show how dangerous increasing-energy behaviours of physical amplitudes cancel in a highly non-trivial way. This demonstrates the expected loss of the vacuum-generated scale in the UV regime and supports the consistency of spontaneously broken gravity theories. (orig.)
6. Multiquark baryons with broken flavour symmetry
International Nuclear Information System (INIS)
Wroldsen, J.
The calculation of the spectrum of 4qq multiquark baryons is carried out, taking into account that SU(3) flavour is broken. To handle this problem, which includes manipulation of giant expressions for the wavefunctions, methods suitable for programming in SCHOONSCHIP are developed and employed. (Auth)
7. Elastoconductivity as a probe of broken mirror symmetries
Energy Technology Data Exchange (ETDEWEB)
Hlobil, Patrik; Maharaj, Akash V.; Hosur, Pavan; Shapiro, M. C.; Fisher, I. R.; Raghu, S.
2015-07-27
We propose the possible detection of broken mirror symmetries in correlated two-dimensional materials by elastotransport measurements. Using linear response theory we calculate the "shear conductivity" Γ_{xx,xy}, defined as the linear change of the longitudinal conductivity σ_xx due to a shear strain ε_xy. This quantity can only be nonvanishing when in-plane mirror symmetries are broken, and we discuss how candidate states in the cuprate pseudogap regime (e.g., various loop-current or charge orders) may exhibit a finite shear conductivity. We also provide a realistic experimental protocol for detecting such a response.
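The definition of the shear conductivity as a linear response can be sketched numerically: Γ_{xx,xy} is the derivative of σ_xx with respect to ε_xy at zero strain, so it vanishes whenever σ_xx is an even function of the shear. The toy model functions below are invented for illustration and are not from the paper:

```python
def shear_conductivity(sigma_xx, eps=1e-4):
    """Central-difference estimate of Gamma_{xx,xy} = d(sigma_xx)/d(eps_xy) at zero strain."""
    return (sigma_xx(eps) - sigma_xx(-eps)) / (2.0 * eps)

# Toy conductivity models (pure assumptions):
# mirror-symmetric: sigma_xx is even in eps_xy, so Gamma vanishes.
mirror_symmetric = lambda e: 1.0 + 0.5 * e**2
# mirror-broken: an odd-in-strain term gives a finite Gamma = 0.3.
mirror_broken = lambda e: 1.0 + 0.3 * e + 0.5 * e**2

print(shear_conductivity(mirror_symmetric))
print(shear_conductivity(mirror_broken))
```

The central difference cancels the even part exactly, which mirrors the symmetry argument in the abstract: a nonzero Γ_{xx,xy} requires an odd (mirror-symmetry-breaking) dependence of σ_xx on ε_xy.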
8. Non-Gaussianity from Broken Symmetries
CERN Document Server
Kolb, Edward W; Vallinotto, A; Kolb, Edward W.; Riotto, Antonio; Vallinotto, Alberto
2006-01-01
Recently we studied inflation models in which the inflaton potential is characterized by an underlying approximate global symmetry. In the first work we pointed out that in such a model curvature perturbations are generated after the end of the slow-roll phase of inflation. In this work we develop further the observational implications of the model and compute the degree of non-Gaussianity predicted in the scenario. We find that the corresponding nonlinearity parameter, $f_{NL}$, can be as large as 10^2.
9. Weak interaction models with spontaneously broken left-right symmetry
International Nuclear Information System (INIS)
Mohapatra, R.H.
1978-01-01
The present status of weak interaction models with spontaneously broken left-right symmetry is reviewed. The theoretical basis for asymptotic parity conservation, manifest left-right symmetry in charged current weak interactions, natural parity conservation in neutral currents and CP violation in the context of SU(2)_L × SU(2)_R × U(1) models are outlined in detail. Various directions for further research on the theoretical and experimental side are indicated.
10. Discrete symmetries: A broken look at QCD
International Nuclear Information System (INIS)
Goldman, T.
1996-01-01
The alphabet soup of discrete symmetries is briefly surveyed with a view towards those which can be tested at LISS and two particularly interesting cases are called out. A LISS experiment may be able to distinguish CP violation that is not due to the QCD θ term. The elements of a model of parity violation in proton-nucleon scattering, which is consistent with lower energy LAMPF and ANL results, are reviewed in the light of new information on diquarks and the proton spin fraction carried by quarks. The prediction that the parity violating total cross section asymmetry should be large at LISS energies is confirmed. The results of such an experiment can be used both to obtain new information about the diquark substructure of the nucleon and to provide bounds on new right-chiral weak interactions
11. Light hadrons in the bag model with broken chiral symmetry
International Nuclear Information System (INIS)
Efrosinin, V.P.; Zaikin, D.A.
1987-01-01
A version of the bag model with broken chiral symmetry is proposed. A satisfactory description of the experimental data on light hadrons including the pion is obtained. The estimate of the pion-nucleon σ term is given in the framework of this model. The pion and kaon decay constants are calculated. The centre-of-mass motion problem in bag models is discussed
12. Spontaneous Broken Local Conformal Symmetry and Dark Energy Candidate
International Nuclear Information System (INIS)
Liu, Lu-Xin
2013-01-01
The local conformal symmetry is spontaneously broken down to local Lorentz invariance through the approach of nonlinear realization. The resulting effective Lagrangian, in the unitary gauge, describes a cosmological vector field non-minimally coupled to the gravitational field. As a result of the Higgs mechanism, the vector field absorbs the dilaton and becomes massive, but with an independent energy scale. The Proca-type vector field can be modelled as a dark energy candidate. The possibility that it further triggers Lorentz symmetry violation is also pointed out.
13. Heavy axions from strong broken horizontal gauge symmetry
International Nuclear Information System (INIS)
Elliott, T.; King, S.F.
1993-01-01
We study the consequences of the existence and breaking of a Peccei-Quinn symmetry within the context of a dynamical model of electroweak symmetry breaking based on broken gauged flavour symmetries. We perform an estimate of the axion mass by including flavour instanton effects and show that, for low cut-offs, the axion is sufficiently massive to prevent it from being phenomenologically unacceptable. We conclude with an examination of the strong CP problem and show that our axion cannot solve the problem, though we indicate ways in which the model can be extended so that the strong CP problem is solved. (orig.)
14. Conformal bootstrap with slightly broken higher spin symmetry
Energy Technology Data Exchange (ETDEWEB)
Alday, Luis F. [Mathematical Institute, University of Oxford,Andrew Wiles Building, Radcliffe Observatory Quarter,Woodstock Road, Oxford, OX2 6GG (United Kingdom); Zhiboedov, Alexander [Center for the Fundamental Laws of Nature,Harvard University, Cambridge, MA 02138 (United States)
2016-06-16
We consider conformal field theories with slightly broken higher spin symmetry in arbitrary spacetime dimensions. We analyze the crossing equation in the double light-cone limit and solve for the anomalous dimensions γ_s of higher spin currents with large spin s. The result depends on the symmetries and the spectrum of the unperturbed conformal field theory. We reproduce all known results and make further predictions. In particular we make a prediction for the anomalous dimensions of higher spin currents in the 3d Ising model.
15. Massive Kaluza-Klein theories and their spontaneously broken symmetries
International Nuclear Information System (INIS)
Hohm, O.
2006-07-01
In this thesis we investigate the effective actions for massive Kaluza-Klein states, focusing on the massive modes of spin-3/2 and spin-2 fields. To this end we determine the spontaneously broken gauge symmetries associated to these 'higher-spin' states and construct the unbroken phase of the Kaluza-Klein theory. We show that for the particular background AdS_3 x S^3 x S^3 a consistent coupling of the first massive spin-3/2 multiplet requires an enhancement of local supersymmetry, which in turn will be partially broken in the Kaluza-Klein vacuum. The corresponding action is constructed as a gauged maximal supergravity in D=3. Subsequently, the symmetries underlying an infinite tower of massive spin-2 states are analyzed in the case of a Kaluza-Klein compactification of four-dimensional gravity to D=3. It is shown that the resulting gravity-spin-2 theory is given by a Chern-Simons action of an affine algebra and also allows a geometrical interpretation in terms of 'algebra-valued' differential geometry. The global symmetry group is determined, which contains an affine extension of the Ehlers group. We show that the broken phase can in turn be constructed via gauging a certain subgroup of the global symmetry group. Finally, deformations of the Kaluza-Klein theory on AdS_3 x S^3 x S^3 and the corresponding symmetry breakings are analyzed as possible applications for the AdS/CFT correspondence. (Orig.)
16. Massive Kaluza-Klein theories and their spontaneously broken symmetries
Energy Technology Data Exchange (ETDEWEB)
Hohm, O.
2006-07-15
In this thesis we investigate the effective actions for massive Kaluza-Klein states, focusing on the massive modes of spin-3/2 and spin-2 fields. To this end we determine the spontaneously broken gauge symmetries associated to these 'higher-spin' states and construct the unbroken phase of the Kaluza-Klein theory. We show that for the particular background AdS_3 x S^3 x S^3 a consistent coupling of the first massive spin-3/2 multiplet requires an enhancement of local supersymmetry, which in turn will be partially broken in the Kaluza-Klein vacuum. The corresponding action is constructed as a gauged maximal supergravity in D=3. Subsequently, the symmetries underlying an infinite tower of massive spin-2 states are analyzed in the case of a Kaluza-Klein compactification of four-dimensional gravity to D=3. It is shown that the resulting gravity-spin-2 theory is given by a Chern-Simons action of an affine algebra and also allows a geometrical interpretation in terms of 'algebra-valued' differential geometry. The global symmetry group is determined, which contains an affine extension of the Ehlers group. We show that the broken phase can in turn be constructed via gauging a certain subgroup of the global symmetry group. Finally, deformations of the Kaluza-Klein theory on AdS_3 x S^3 x S^3 and the corresponding symmetry breakings are analyzed as possible applications for the AdS/CFT correspondence. (Orig.)
17. Entanglement entropy in quantum spin chains with broken reflection symmetry
International Nuclear Information System (INIS)
2010-01-01
We investigate the entanglement entropy of a block of L sites in quasifree translation-invariant spin chains concentrating on the effect of reflection-symmetry breaking. The Majorana two-point functions corresponding to the Jordan-Wigner transformed fermionic modes are determined in the most general case; from these, it follows that reflection symmetry in the ground state can only be broken if the model is quantum critical. The large L asymptotics of the entropy are calculated analytically for general gauge-invariant models, a computation that until now had been carried out only for the reflection-symmetric sector. Analytical results are also derived for certain nongauge-invariant models (e.g., for the Ising model with Dzyaloshinskii-Moriya interaction). We also study numerically finite chains of length N with a nonreflection-symmetric Hamiltonian and report that the reflection symmetry of the entropy of the first L spins is violated but the reflection-symmetric Calabrese-Cardy formula is recovered asymptotically. Furthermore, for noncritical reflection-symmetry-breaking Hamiltonians, we find an anomaly in the behavior of the saturation entropy as we approach the critical line. The paper also provides a concise but extensive review of the block-entropy asymptotics in translation-invariant quasifree spin chains with an analysis of the nearest-neighbor case and the enumeration of the yet unsolved parts of the quasifree landscape.
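As a toy numerical check of the reflection symmetry S(L) = S(N − L) of block entropies discussed in this abstract (a dense exact-diagonalization sketch of a small reflection-symmetric transverse-field Ising chain, not the paper's quasifree Jordan-Wigner machinery; the model size and field strength are illustrative choices):

```python
import numpy as np

def tfim_ground_state(N, g):
    """Dense ground state of an open transverse-field Ising chain,
    H = -sum_i sx_i sx_{i+1} - g sum_i sz_i (reflection symmetric)."""
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def site_op(op, i):
        out = np.array([[1.]])
        for j in range(N):
            out = np.kron(out, op if j == i else np.eye(2))
        return out

    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):
        H -= site_op(sx, i) @ site_op(sx, i + 1)
    for i in range(N):
        H -= g * site_op(sz, i)
    return np.linalg.eigh(H)[1][:, 0]

def block_entropy(psi, N, L):
    """Von Neumann entropy of the first L spins of a pure state."""
    s = np.linalg.svd(psi.reshape(2**L, 2**(N - L)), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

N = 8
psi = tfim_ground_state(N, g=1.5)
for L in range(1, N):
    print(L, round(block_entropy(psi, N, L), 6))
```

For a pure state S(first L) equals S(last N − L), so the equality S(L) = S(N − L) printed here holds precisely because the Hamiltonian is reflection symmetric; a reflection-breaking term (e.g., a Dzyaloshinskii-Moriya-like interaction) would spoil it, as the paper discusses.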
18. Spontaneously broken spacetime symmetries and the role of inessential Goldstones
Science.gov (United States)
Klein, Remko; Roest, Diederik; Stefanyszyn, David
2017-10-01
In contrast to internal symmetries, there is no general proof that the coset construction for spontaneously broken spacetime symmetries leads to universal dynamics. One key difference lies in the role of Goldstone bosons, which for spacetime symmetries includes a subset which are inessential for the non-linear realisation and hence can be eliminated. In this paper we address two important issues that arise when eliminating inessential Goldstones. The first concerns the elimination itself, which is often performed by imposing so-called inverse Higgs constraints. Contrary to claims in the literature, there are a series of conditions on the structure constants which must be satisfied to employ the inverse Higgs phenomenon, and we discuss which parametrisation of the coset element is the most effective in this regard. We also consider generalisations of the standard inverse Higgs constraints, which can include integrating out inessential Goldstones at low energies, and prove that under certain assumptions these give rise to identical effective field theories for the essential Goldstones. Secondly, we consider mappings between non-linear realisations that differ both in the coset element and the algebra basis. While these can always be related to each other by a point transformation, remarkably, the inverse Higgs constraints are not necessarily mapped onto each other under this transformation. We discuss the physical implications of this non-mapping, with a particular emphasis on the coset space corresponding to the spontaneous breaking of the Anti-De Sitter isometries by a Minkowski probe brane.
19. Two phases of the anyon gas and broken T symmetry
International Nuclear Information System (INIS)
Canright, G.S.; Rojo, A.G.
1991-01-01
This paper reports the first exact finite-temperature study of anyons. The authors' method is an extension to finite T of earlier numerical work with small numbers of anyons on a lattice. We study the spontaneous magnetization M_0(T), since this quantity has been identified as a key signature of broken T symmetry for anyon models. Our results confirm the two-phase picture suggested by earlier work: the authors find a low-temperature regime where M_0 is very small or zero, and a high-temperature regime where M_0 is of O(0.1 μ_B) per particle. In the high-temperature regime the authors can obtain an excellent estimate of M_0(T) in the thermodynamic limit (which we call M_0^∞), since our finite-size results extrapolate smoothly with little scatter. The authors' values for M_0^∞ can then be compared with the results of μSR experiments on high-temperature superconductors, which set an upper experimental bound on the internal fields from such moments. The authors find that M_0^∞ in a bulk material of many planes will almost certainly give a signal well above this threshold if (and only if) the planes are ordered ferromagnetically. In the antiferromagnetic case (which is strongly favored energetically) the signal from M_0^∞ is probably undetectable. Finally, we estimate the transition temperature T_c from our finite-size studies, obtaining a value on the order of a few hundred kelvin.
20. Gauge-Higgs unification with broken flavour symmetry
Energy Technology Data Exchange (ETDEWEB)
Olschewsky, M.
2007-05-15
We study a five-dimensional Gauge-Higgs unification model on the orbifold S^1/Z_2 based on the extended standard model (SM) gauge group SU(2)_L x U(1)_Y x SO(3)_F. The group SO(3)_F is treated as a chiral gauged flavour symmetry. Electroweak, flavour and Higgs interactions are unified in one single gauge group SU(7). The unified gauge group SU(7) is broken down to SU(2)_L x U(1)_Y x SO(3)_F by orbifolding and imposing Dirichlet and Neumann boundary conditions. The compactification scale of the theory is O(1) TeV. Furthermore, the orbifold S^1/Z_2 is put on a lattice. This setting gives a well-defined starting point for renormalisation group (RG) transformations. As a result of the RG flow, the bulk is integrated out and the extra dimension will consist of only two points: the orbifold fixed points. The model obtained this way is called an effective bilayered transverse lattice model. Parallel transporters (PTs) in the extra dimension become nonunitary as a result of the blockspin transformations. In addition, a Higgs potential V(Φ) emerges naturally. The PTs can be written as a product e^{A_y} e^η e^{A_y} of unitary factors e^{A_y} and a selfadjoint factor e^η. The reduction 48 → 35 + 6 + anti-6 + 1 of the adjoint representation of SU(7) with respect to SU(6), which contains SU(2)_L x U(1)_Y x SO(3)_F, leads to three SU(2)_L Higgs doublets: one for the first, one for the second and one for the third generation. Their zero modes serve as a substitute for the SM Higgs. When the extended SM gauge group SU(2)_L x U(1)_Y x SO(3)_F is spontaneously broken down to U(1)_em, an exponential gauge boson mass splitting occurs naturally. At a first step SU(2)_L x U(1)_Y x SO(3)_F is broken to SU(2)_L x U(1)_Y by VEVs for the selfadjoint factor e^η. This breaking leads to masses of flavour changing SO(3)_F
1. Gauge-Higgs unification with broken flavour symmetry
International Nuclear Information System (INIS)
Olschewsky, M.
2007-05-01
We study a five-dimensional Gauge-Higgs unification model on the orbifold S^1/Z_2 based on the extended standard model (SM) gauge group SU(2)_L x U(1)_Y x SO(3)_F. The group SO(3)_F is treated as a chiral gauged flavour symmetry. Electroweak, flavour and Higgs interactions are unified in one single gauge group SU(7). The unified gauge group SU(7) is broken down to SU(2)_L x U(1)_Y x SO(3)_F by orbifolding and imposing Dirichlet and Neumann boundary conditions. The compactification scale of the theory is O(1) TeV. Furthermore, the orbifold S^1/Z_2 is put on a lattice. This setting gives a well-defined starting point for renormalisation group (RG) transformations. As a result of the RG flow, the bulk is integrated out and the extra dimension will consist of only two points: the orbifold fixed points. The model obtained this way is called an effective bilayered transverse lattice model. Parallel transporters (PTs) in the extra dimension become nonunitary as a result of the blockspin transformations. In addition, a Higgs potential V(Φ) emerges naturally. The PTs can be written as a product e^{A_y} e^η e^{A_y} of unitary factors e^{A_y} and a selfadjoint factor e^η. The reduction 48 → 35 + 6 + anti-6 + 1 of the adjoint representation of SU(7) with respect to SU(6), which contains SU(2)_L x U(1)_Y x SO(3)_F, leads to three SU(2)_L Higgs doublets: one for the first, one for the second and one for the third generation. Their zero modes serve as a substitute for the SM Higgs. When the extended SM gauge group SU(2)_L x U(1)_Y x SO(3)_F is spontaneously broken down to U(1)_em, an exponential gauge boson mass splitting occurs naturally. At a first step SU(2)_L x U(1)_Y x SO(3)_F is broken to SU(2)_L x U(1)_Y by VEVs for the selfadjoint factor e^η. This breaking leads to masses of flavour changing SO(3)_F gauge bosons much above the compactification scale. Such a behaviour has no counterpart within the customary approximation scheme of an ordinary orbifold theory. This way tree
2. Finite-temperature effective potential of a system with spontaneously broken symmetry
Energy Technology Data Exchange (ETDEWEB)
Zemskov, E.P. [Yaroslavl State Technical Univ. (Russian Federation)]
1995-12-01
A quantum-mechanical system with spontaneously broken symmetry is considered; the effective potential is determined, and it is shown that, as the temperature is lowered, the system undergoes a first-order phase transition.
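A small numerical sketch of how a first-order transition shows up in a finite-temperature effective potential. The cubic-plus-quartic form below is the standard textbook illustration, not Zemskov's specific quantum-mechanical model, and the parameters D, E, lam, T0 are arbitrary illustrative values:

```python
import numpy as np

# Illustrative finite-T effective potential with a cubic term,
#   V(phi, T) = D (T^2 - T0^2) phi^2 - E T phi^3 + (lam/4) phi^4.
# The cubic term creates a barrier between phi = 0 and the broken
# minimum, the hallmark of a first-order transition.
D, E, lam, T0 = 0.2, 0.1, 0.15, 1.0

def V(phi, T):
    return D*(T**2 - T0**2)*phi**2 - E*T*phi**3 + 0.25*lam*phi**4

phis = np.linspace(1e-3, 5.0, 4000)

def broken_minimum(T):
    """Location and value of the lowest point of V at phi > 0."""
    vals = V(phis, T)
    i = int(np.argmin(vals))
    return phis[i], vals[i]

# Degeneracy of the two minima fixes the critical temperature; for
# this potential one finds Tc^2 = T0^2 / (1 - E^2 / (lam * D)).
Tc = T0 / np.sqrt(1.0 - E**2 / (lam * D))
phi_c, v_c = broken_minimum(Tc)
print(f"Tc = {Tc:.4f}; V at the broken minimum there = {v_c:.2e}")
print("below Tc the broken minimum lies lower:", broken_minimum(0.95*Tc)[1] < 0)
```

At Tc the symmetric point phi = 0 and the broken minimum are degenerate (both at V = 0), while just below Tc the broken minimum drops lower and just above it the symmetric minimum wins: the order parameter jumps discontinuously, i.e. a transition of the first kind.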
3. Broken dynamical symmetries in quantum mechanics and phase transition phenomena
International Nuclear Information System (INIS)
Guenther, N.J.
1979-12-01
This thesis describes applications of dynamical symmetries to problems in quantum mechanics and many-body physics where the latter is formulated as a Euclidean scalar field theory in d-space dimensions. By invoking the concept of a dynamical symmetry group a unified understanding of apparently disparate results is achieved. (author)
4. Fluctuation relations for equilibrium states with broken discrete or continuous symmetries
International Nuclear Information System (INIS)
Lacoste, D; Gaspard, P
2015-01-01
Isometric fluctuation relations are deduced for the fluctuations of the order parameter in equilibrium systems of condensed-matter physics with broken discrete or continuous symmetries. These relations are similar to their analogues obtained for non-equilibrium systems where the broken symmetry is time reversal. At equilibrium, these relations show that the ratio of the probabilities of opposite fluctuations goes exponentially with the symmetry-breaking external field and the magnitude of the fluctuations. These relations are applied to the Curie–Weiss, Heisenberg, and XY models of magnetism where the continuous rotational symmetry is broken, as well as to the q-state Potts model and the p-state clock model where discrete symmetries are broken. Broken symmetries are also considered in the anisotropic Curie–Weiss model. For infinite systems, the results are calculated using large-deviation theory. The relations are also applied to mean-field models of nematic liquid crystals where the order parameter is tensorial. Moreover, their extension to quantum systems is also deduced. (paper)
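A minimal exact check of the discrete (Z2) case of such equilibrium fluctuation relations, using a small Curie-Weiss Ising model; this is an illustration of the relation's content, not the paper's general derivation, and the parameter values are arbitrary:

```python
import itertools
import math

# For a Curie-Weiss Ising model in a field h, flipping every spin maps
# a configuration of magnetization M onto one of -M with energy shifted
# by 2 h M, so the magnetization distribution obeys exactly
#   ln[P(M)/P(-M)] = 2 * beta * h * M.
# N, J, h, beta are illustrative values.
N, J, h, beta = 10, 1.0, 0.3, 0.7

probs = {}  # unnormalized P(M), summed over configurations
for spins in itertools.product([-1, 1], repeat=N):
    M = sum(spins)
    E_cfg = -(J / N) * (M * M - N) / 2.0 - h * M   # pair energy + field term
    probs[M] = probs.get(M, 0.0) + math.exp(-beta * E_cfg)

for M in range(2, N + 1, 2):
    lhs = math.log(probs[M] / probs[-M])
    print(M, round(lhs, 6), round(2 * beta * h * M, 6))
```

The ratio of probabilities of opposite fluctuations grows exponentially with both the symmetry-breaking field h and the fluctuation magnitude M, which is exactly the structure the abstract describes for broken discrete symmetries.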
5. Nonreciprocal Linear Transmission of Sound in a Viscous Environment with Broken P Symmetry
Science.gov (United States)
Walker, E.; Neogi, A.; Bozhko, A.; Zubov, Yu.; Arriaga, J.; Heo, H.; Ju, J.; Krokhin, A. A.
2018-05-01
Reciprocity is a fundamental property of the wave equation in a linear medium that originates from time-reversal symmetry, or T symmetry. For electromagnetic waves, reciprocity can be violated by an external magnetic field. It is much harder to realize nonreciprocity for acoustic waves. Here we report the first experimental observation of linear nonreciprocal transmission of ultrasound through a water-submerged phononic crystal consisting of asymmetric rods. Viscosity of water is the factor that breaks the T symmetry. Asymmetry, or broken P symmetry along the direction of sound propagation, is the second necessary factor for nonreciprocity. Experimental results are in agreement with numerical simulations based on the Navier-Stokes equation. Our study demonstrates that a medium with broken PT symmetry is acoustically nonreciprocal. The proposed passive nonreciprocal device is cheap, robust, and does not require an energy source.
6. Broken SU(8) symmetry and the new particles
International Nuclear Information System (INIS)
Kramer, G.; Schiller, D.H.
1976-05-01
We study the mass spectra and wave functions for vector and pseudoscalar mesons in broken SU(8) (SU(8) ⊃ SU(4)_F x SU(2)_J), where F stands for flavour and J for usual spin. The connection with the standard mass breaking in SU(4)_F is worked out. We find that even in the presence of strong SU(8) breaking the ideal mixing scheme for the vector mesons can be approximately retained. For the pseudoscalar mesons the mixing of the singlet with the 63-plet representation of SU(8) turns out to be essential and strongly nonideal. (orig.)
7. Phenomenology of the standard model under conditions of spontaneously broken mirror symmetry
Energy Technology Data Exchange (ETDEWEB)
Dyatlov, I. T., E-mail: dyatlov@thd.pnpi.spb.ru [National Research Center Kurchatov Institute, Petersburg Nuclear Physics Institute (Russian Federation)]
2017-03-15
Spontaneously broken mirror symmetry is able to reproduce the observed qualitative properties of weak mixing for quarks and leptons. Under conditions of broken mirror symmetry, the phenomenology of leptons—that is, small neutrino masses and a mixing character other than that in the case of quarks—requires the Dirac character of the neutrinos and the existence of processes violating the total lepton number. Such processes involve heavy mirror neutrinos; that is, they proceed at very high energies. Here, CP violation implies that a P-even mirror-symmetric Lagrangian must simultaneously be T-odd and, according to the CPT theorem, C-odd. All these properties create preconditions for the occurrence of leptogenesis, which is a mechanism of the emergence of the baryon–lepton asymmetry of the universe in models featuring broken mirror symmetry.
8. Broken chiral symmetry and the structure of hadrons
International Nuclear Information System (INIS)
Spence, W.L.
1982-01-01
The spontaneous breaking of chiral symmetry plays a decisive role in the structure of hadrons composed of light quarks. The formalism by which the dynamics of chiral symmetry breaking and its implications for hadronic structure can be explored in a simplified world, in which fully relativistic zero-bare-mass quarks interact through a chirally symmetric instantaneous confining potential, is presented. By thus modeling the essentials of the chiral-limit, N_c → ∞ limit of QCD, contact is made with the successes of existing semiphenomenological models of hadrons, while assumptions which explicitly violate chiral symmetry are avoided. This revised approach then makes possible a unification of the dynamics of hadron structure with the mechanism of spontaneous chiral breaking and guarantees the appearance of the correct Goldstone excitations. The chiral breaking order parameter |⟨ψ̄ψ⟩|, the effective quark mass, and the Goldstone boson wave function are obtainable by solving a single non-linear integral equation once a potential has been prescribed. The stability of the chirally asymmetric vacuum must then be established by studying the linear eigenvalue problem which determines the spectrum of states with vacuum quantum numbers. The nature of the instability of the chirally symmetric vacuum that leads to spontaneous symmetry breaking is explained, and its apparent contingency on details of the dynamics is emphasized. It is argued that a single massless fermion in a chirally symmetric potential does form bound states, for which a semi-classical description is given. Coupling to vacuum pairs of such bound states occasions the possibility of chiral symmetry breakdown.
9. Contraction of broken symmetries via Kac-Moody formalism
International Nuclear Information System (INIS)
Daboul, Jamil
2006-01-01
I investigate contractions via the Kac-Moody formalism. In particular, I show how the symmetry algebra of the standard two-dimensional Kepler system, which was identified by Daboul and Slodowy as an infinite-dimensional Kac-Moody loop algebra, and was denoted by H_2, gets reduced by the symmetry breaking term, defined by the Hamiltonian H(β) = (1/2m)(p_1^2 + p_2^2) − α/r − β r^(−1/2) cos((φ−γ)/2). For this H(β) I define two symmetry loop algebras L_i(β), i = 1, 2, by choosing the 'basic generators' differently. These L_i(β) can be mapped isomorphically onto subalgebras of H_2, of codimension two or three, revealing the reduction of symmetry. Both factor algebras L_i(β)/I_i(E,β), relative to the corresponding energy-dependent ideals I_i(E,β), are isomorphic to so(3) and so(2,1) for E < 0 and E > 0, respectively, just as for the pure Kepler case. However, they yield two different nonstandard contractions as E → 0, namely to the Heisenberg-Weyl algebra h_3 = w_1 or to an Abelian Lie algebra, instead of the Euclidean algebra e(2) for the pure Kepler case. The above-noted example suggests a general procedure for defining generalized contractions, and also illustrates the 'deformation contraction hysteresis', where a contraction which involves two contraction parameters can yield different contracted algebras if the limits are carried out in different order.
10. Chiral pair of Fermi arcs, anomaly cancellation, and spin or valley Hall effects in Weyl metals with broken inversion symmetry
Science.gov (United States)
Jang, Iksu; Kim, Ki-Seok
2018-04-01
Anomaly cancellation has been shown to occur in broken time-reversal symmetry Weyl metals, which explains the existence of a Fermi arc. We extend this result in the case of broken inversion symmetry Weyl metals. Constructing a minimal model that takes a double pair of Weyl points, we demonstrate the anomaly cancellation explicitly. This demonstration explains why a chiral pair of Fermi arcs appear in broken inversion symmetry Weyl metals. In particular, we find that this pair of Fermi arcs gives rise to either "quantized" spin Hall or valley Hall effects, which corresponds to the "quantized" version of the charge Hall effect in broken time-reversal symmetry Weyl metals.
11. Broken Weyl symmetry. [Gauge model, coupling, Higgs field]
Energy Technology Data Exchange (ETDEWEB)
Domokos, G.
1976-05-01
It is argued that conformal symmetry can be properly understood in the framework of field theories in curved space. In such theories, invariance is required under general coordinate transformations and conformal rescalings. A gauge model coupled to a Higgs field is examined. In the tree approximation, the vacuum solution exhibits two Higgs phenomena; both the phase (Goldstone boson) and the coordinate dependent part of the radial component of the scalar field can be removed by a Higgs-Kibble transformation. The resulting vacuum solution corresponds to a space of constant curvature and constant vacuum expectation value of the scalar field.
12. Interface properties of superlattices with artificially broken symmetry
International Nuclear Information System (INIS)
Lottermoser, Th.; Yamada, H.; Matsuno, J.; Arima, T.; Kawasaki, M.; Tokura, Y.
2007-01-01
We have used superlattices made of thin layers of transition metal oxides to design so-called multiferroics, i.e. materials possessing simultaneously an electric polarization and a magnetic ordering. The polarization originates from the asymmetric stacking order accompanied by charge transfer effects, which also influence the magnetic properties of the interfaces. Owing to the breaking of space- and time-reversal symmetry by the multiple ordering mechanisms, magnetic second harmonic generation proves to be an ideal method to investigate the electric and magnetic properties of the superlattices.
13. On hierarchy in asymptotic reconstruction of spontaneously broken isotopic symmetry
International Nuclear Information System (INIS)
Ermolaev, B.I.
1978-01-01
The isotopic features of the effective current-current Lagrangian L_eff of the electromagnetic-weak interaction between elementary particles are treated at large momentum transfers using the Weinberg-Salam model. Transition to other models may be made by analogy. It is shown that when the collision energies of elementary particles exceed 90 GeV one may expect a hierarchy in the asymptotic reconstruction of the isotopic symmetry. Such a hierarchy could be observed, in particular, in experiments on elastic leptonic collisions at high energies.
14. Broken symmetry in a two-qubit quantum control landscape
Science.gov (United States)
Bukov, Marin; Day, Alexandre G. R.; Weinberg, Phillip; Polkovnikov, Anatoli; Mehta, Pankaj; Sels, Dries
2018-05-01
We analyze the physics of optimal protocols to prepare a target state with high fidelity in a symmetrically coupled two-qubit system. By varying the protocol duration, we find a discontinuous phase transition, which is characterized by a spontaneous breaking of a Z2 symmetry in the functional form of the optimal protocol, and occurs below the quantum speed limit. We study in detail this phase and demonstrate that even though high-fidelity protocols are degenerate with respect to their fidelity, they lead to final states of different entanglement entropy shared between the qubits. Consequently, while globally both optimal protocols are equally far away from the target state, one is locally closer than the other. An approximate variational mean-field theory which captures the physics of the different phases is developed.
15. Broken flavor symmetries in high energy particle phenomenology
International Nuclear Information System (INIS)
Antaramian, A.
1995-01-01
Over the past couple of decades, the Standard Model of high energy particle physics has clearly established itself as an invaluable tool in the analysis of high energy particle phenomena. However, from a field theorist's point of view, there are many dissatisfying aspects to the model. One of these is the large number of free parameters in the theory arising from the Yukawa couplings of the Higgs doublet. In this thesis, we examine various issues relating to the Yukawa coupling structure of high energy particle field theories. We begin by examining extensions to the Standard Model of particle physics which contain additional scalar fields. By appealing to the flavor structure observed in the fermion mass and Kobayashi-Maskawa matrices, we propose a reasonable phenomenological parameterization of the new Yukawa couplings based on the concept of approximate flavor symmetries. It is shown that such a parameterization eliminates the need for discrete symmetries which limit the allowed couplings of the new scalars. New scalar particles which can mediate exotic flavor changing reactions can have masses as low as the weak scale. Next, we turn to the issue of neutrino mass matrices, where we examine a particular texture which leads to matter independent neutrino oscillation results for solar neutrinos. We then examine the basis for extremely strict limits placed on flavor changing interactions which also break lepton- and/or baryon-number. These limits are derived from cosmological considerations. Finally, we embark on an extended analysis of proton decay in supersymmetric SO(10) grand unified theories. In such theories, the dominant decay diagrams involve the Yukawa couplings of a heavy triplet superfield. We argue that past calculations of proton decay which were based on the minimal supersymmetric SU(5) model require reexamination because the Yukawa couplings of that theory are known to be wrong.
16. Broken symmetries at high temperatures and the problem of baryon excess of the universe
CERN Document Server
Mohapatra, Rabindra N
1979-01-01
We discuss a class of gauge theories where spontaneously broken symmetries, instead of being restored, persist as the temperature is increased. Applying these ideas to the specific case of soft CP-violation in grand unified theories, we discuss a mechanism to generate the baryon to entropy ratio of the universe.
17. The energy-momentum spectrum in local field theories with broken Lorentz-symmetry
International Nuclear Information System (INIS)
Borchers, H.J.; Buchholz, D.
1984-05-01
Assuming locality of the observables and positivity of the energy it is shown that the joint spectrum of the energy-momentum operators has a Lorentz-invariant lower boundary in all superselection sectors. This result is of interest if the Lorentz-symmetry is (spontaneously) broken, such as in the charged sectors of quantum electrodynamics. (orig.)
18. Symmetry protected topological charge in symmetry broken phase: Spin-Chern, spin-valley-Chern and mirror-Chern numbers
International Nuclear Information System (INIS)
Ezawa, Motohiko
2014-01-01
The Chern number is a genuine topological number. On the other hand, a symmetry protected topological (SPT) charge is a topological number only when a symmetry exists. We propose a formula for the SPT charge as a derivative of the Chern number in terms of the Green function in such a way that it is valid and related to the associated Hall current even when the symmetry is broken. We estimate the amount of deviation from the quantized value as a function of the strength of the broken symmetry. We present two examples. First, we consider Dirac electrons with the spin–orbit coupling on a honeycomb lattice, where the SPT charges are given by the spin-Chern, valley-Chern and spin-valley-Chern numbers. Though the spin-Chern charge is not quantized in the presence of the Rashba coupling, the deviation is estimated to be 10^(-7) in the case of silicene, a silicon cousin of graphene. Second, we analyze the effect of the mirror-symmetry breaking on the mirror-Chern number in a thin film of a topological crystalline insulator.
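For readers unfamiliar with how a Chern number is actually evaluated, here is a standard lattice computation via the Fukui-Hatsugai-Suzuki link method, applied to the generic two-band Qi-Wu-Zhang model; this is an illustrative toy model, not the silicene or thin-film Hamiltonian discussed in the paper:

```python
import numpy as np

# Two-band Qi-Wu-Zhang model:
#   h(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lower_band(kx, ky, m):
    h = np.sin(kx)*sx + np.sin(ky)*sy + (m + np.cos(kx) + np.cos(ky))*sz
    return np.linalg.eigh(h)[1][:, 0]        # lower-band eigenvector

def chern(m, n=40):
    """Chern number from gauge-invariant plaquette fluxes on an n x n grid."""
    ks = np.linspace(0, 2*np.pi, n, endpoint=False)
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    F = 0.0
    for i in range(n):
        for j in range(n):
            link = (np.vdot(u[i, j], u[(i+1) % n, j])
                    * np.vdot(u[(i+1) % n, j], u[(i+1) % n, (j+1) % n])
                    * np.vdot(u[(i+1) % n, (j+1) % n], u[i, (j+1) % n])
                    * np.vdot(u[i, (j+1) % n], u[i, j]))
            F += np.angle(link)
    return round(F / (2*np.pi))

print("|C| in the topological phase (m = -1):", abs(chern(-1.0)))
print("C in the trivial phase (m = -3):", chern(-3.0))
```

Each plaquette product of overlaps is gauge invariant, so the arbitrary phase of the eigenvectors returned by `eigh` drops out; the summed flux divided by 2π is integer-quantized, which is the "genuine topological number" the abstract contrasts with SPT charges.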
19. Localized surface plasmon resonance properties of symmetry-broken Au-ITO-Ag multilayered nanoshells
Science.gov (United States)
Lv, Jingwei; Mu, Haiwei; Lu, Xili; Liu, Qiang; Liu, Chao; Sun, Tao; Chu, Paul K.
2018-06-01
The plasmonic properties of symmetry-broken Au-ITO-Ag multilayered nanoshells produced by shell cutting are studied by the finite element method. The influence of the polarization of incident light and geometrical parameters on the plasmon resonances of the multilayered nanoshells is investigated. Polarization-dependent multiple plasmon resonances appear in the multilayered nanoshells due to symmetry breaking. In nanostructures with a broken symmetry, the localized surface plasmon resonance modes are enhanced, resulting in higher order resonances. According to the plasmon hybridization theory, these resonance modes and the greater spectral tunability derive from the interactions of an admixture of both primitive and multipolar modes between the inner Au core and outer Ag shell. By changing the radius of the Au core, the extinction resonance modes of the multilayered nanoshells can be easily tuned to the near-infrared region. To elucidate the symmetry-broken effects of multilayered nanoshells, we link the geometrical asymmetry to the asymmetrical distributions of surface charges and demonstrate dipolar and higher order plasmon modes with large associated field enhancements at the edge of the Ag rim. The spectral tunability of the multiple resonance modes from the visible to the near-infrared is investigated, and the unique properties are attractive for applications ranging from angularly selective filtering to biosensing.
20. Remarks on broken chiral SU(5) x SU(5) symmetry and B mesons
International Nuclear Information System (INIS)
Kim, D.Y.; Sinha, S.N.
1985-01-01
In a recent paper, Hatzis has estimated the masses and weak decay constants of b-flavored pseudoscalar mesons in a broken chiral SU(5) x SU(5) symmetry method. The estimated weak decay constant of the B meson, f_B > f_K (f_B/f_K approximately equal to 1.4), may be compared with that evaluated by Mathur et al. with the quantum chromodynamics (QCD) sum-rule model. We re-examined the problem by applying the broken chiral SU(5) x SU(5) symmetry approach using a set of mass formulae. With this method we estimate the symmetry-breaking parameters and decay constants of pseudoscalar mesons. We found a consistent result for the decay constants: f_K ≲ f_D ≲ f_B. The explicit numerical values of these constants, however, are lower than those of the QCD sum rule. This may be due to the limited validity of the broken chiral symmetry approach for heavy mesons.
1. Broken Symmetry
CERN Multimedia
CERN. Geneva
2011-01-01
The discovery of subatomic structures and of the concomitant weak and strong short-range forces raised the question of how to cope with short-range forces in relativistic quantum field theory. The Fermi theory of weak interactions, formulated in terms of point-like current-current interaction, was well-defined in lowest order perturbation theory and accounted for existing experimental data. However, it was inconsistent in higher orders because of uncontrollable divergent quant...
2. Broken symmetries at high temperatures and the problem of baryon excess of the universe
International Nuclear Information System (INIS)
Mohapatra, R.N.; Senjanovic, G.
1979-06-01
A class of gauge theories where spontaneously broken symmetries, instead of being restored, persist as the temperature is increased is discussed. A renormalization group analysis of this phenomenon suggests that there may be more than one phase transition in these models, with at least one symmetric phase. Applying these ideas to the specific case of soft CP-violation in grand unified theories, a mechanism to generate the baryon to entropy ratio of the universe is discussed. 34 references
3. A topological approach unveils system invariances and broken symmetries in the brain.
Science.gov (United States)
Tozzi, Arturo; Peters, James F
2016-05-01
Symmetries are widespread invariances underscoring countless systems, including the brain. A symmetry break occurs when the symmetry is present at one level of observation but is hidden at another level. In such a general framework, a concept from algebraic topology, namely, the Borsuk-Ulam theorem (BUT), comes into play and sheds new light on the general mechanisms of nervous symmetries. The BUT tells us that we can find, on an n-dimensional sphere, a pair of opposite points that have the same encoding on an (n − 1)-sphere. This mapping makes it possible to describe both antipodal points with a single real-valued vector on a lower-dimensional sphere. Here we argue that this topological approach is useful for the evaluation of hidden nervous symmetries. This means that symmetries can be found when evaluating the brain in a proper dimension, although they disappear (are hidden or broken) when we evaluate the same brain only one dimension lower. In conclusion, we provide a topological methodology for the evaluation of the most general features of brain activity, i.e., the symmetries, cast in a physical/biological fashion that has the potential to be operationalized. © 2016 Wiley Periodicals, Inc.
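The one-dimensional case of the Borsuk-Ulam theorem invoked in this abstract can be illustrated numerically: any continuous function on the circle takes equal values at some pair of antipodal points. The test function below is an arbitrary choice, unrelated to the paper's neural data:

```python
import math

# Since g(t) = f(t) - f(t + pi) obeys g(0) = -g(pi), the intermediate
# value theorem guarantees a root of g in [0, pi], i.e. an antipodal
# pair with f(t) = f(t + pi). We locate it by bisection.
def f(t):
    return math.sin(t) + 0.5 * math.cos(3 * t)

def g(t):
    return f(t) - f(t + math.pi)

a, b = 0.0, math.pi
for _ in range(80):                 # bisection on [0, pi]
    mid = 0.5 * (a + b)
    if g(a) * g(mid) <= 0.0:
        b = mid
    else:
        a = mid
root = 0.5 * (a + b)
print("antipodal pair at t =", round(root, 6),
      "f(t) =", round(f(root), 9), "f(t + pi) =", round(f(root + math.pi), 9))
```

This is the n = 1 statement; the abstract's point is the general one, that antipodal points on an n-sphere share an encoding one dimension lower.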
4. On the effect of scalar partons at short distances in unified theories with spontaneously broken colour symmetry
International Nuclear Information System (INIS)
Craigie, N.S.; Salam, Abdus
1979-05-01
The effect of scalar partons arising in QCD if the colour symmetry is spontaneously broken is discussed. The authors use a previous result, which states that such scalars can be incorporated into the theory without disturbing asymptotic freedom. (author)
5. Symmetry broken and restored coupled-cluster theory: I. Rotational symmetry and angular momentum
International Nuclear Information System (INIS)
Duguet, T
2015-01-01
We extend coupled-cluster (CC) theory performed on top of a Slater determinant breaking rotational symmetry to allow for the exact restoration of the angular momentum at any truncation order. The main objective relates to the description of near-degenerate finite quantum systems with an open-shell character. As such, the newly developed many-body formalism offers a wealth of potential applications and further extensions dedicated to the ab initio description of, e.g., doubly open-shell atomic nuclei and molecule dissociation. The formalism, which encompasses both single-reference CC theory and projected Hartree–Fock theory as particular cases, permits the computation of the usual sets of connected diagrams while consistently incorporating static correlations through the highly non-perturbative restoration of rotational symmetry. Interestingly, the yrast spectroscopy of the system, i.e. the lowest energy associated with each angular momentum, is accessed within a single calculation. A key difficulty presently overcome relates to the necessity of handling generalized energy and norm kernels for which naturally terminating CC expansions could eventually be obtained. The present work focuses on SU(2) but can be extended to any (locally) compact Lie group and to discrete groups, such as most point groups. In particular, the formalism will soon be generalized to the U(1) symmetry associated with particle number conservation. This is relevant to Bogoliubov CC theory that was recently applied to singly open-shell nuclei. (paper)
6. Spontaneously broken continuous symmetries in hyperbolic (or open) de Sitter spacetime
International Nuclear Information System (INIS)
Ratra, B.
1994-01-01
The functional Schroedinger approach is used to study scalar field theory in hyperbolic (or open) de Sitter spacetime. While on intermediate length scales (small compared to the spatial curvature length scale) the massless minimally coupled scalar field two-point correlation function does have a term that varies logarithmically with scale, as in flat and closed de Sitter spacetime, the spatial curvature tames the infrared behavior of this correlation function at larger scales in the open model. As a result, and contrary to what happens in flat and closed de Sitter spacetime, spontaneously broken continuous symmetries are not restored in open de Sitter spacetime (with more than one spatial dimension).
7. Broken symmetry of Lie groups of transformation generating general relativistic theories of gravitation
International Nuclear Information System (INIS)
Halpern, L.
1981-01-01
Invariant varieties of suitable semisimple groups of transformations can serve as models of the space-time of the universe. The metric is expressible in terms of the basis vectors of the group. The symmetry of the group is broken by introducing a gauge formalism in the space of the basis vectors with the adjoint group as gauge group. The gauge potentials are expressible in terms of the basis vectors for the case of the De Sitter group. The resulting gauge theory is equivalent to De Sitter covariant general relativity. Group covariant generalizations of gravitational theory are discussed. (Auth.)
8. Experimental study of broadband unidirectional splitting in photonic crystal gratings with broken structural symmetry
Science.gov (United States)
Colak, Evrim; Serebryannikov, Andriy E.; Ozgur Cakmak, A.; Ozbay, Ekmel
2013-04-01
It is experimentally demonstrated that the combination of diode and splitter functions can be realized in one broadband reciprocal device. The suggested performance is based on the dielectric photonic crystal grating whose structural symmetry is broken owing to non-deep corrugations placed at one of the two interfaces. The study has been performed at a normally incident beam-type illumination obtained from a microwave horn antenna. The two unidirectionally transmitted, deflected beams can show large magnitude and high contrast, while the angular distance between their maxima is 90° and larger. The dual-band unidirectional splitting is possible when using TM and TE polarizations.
9. How to fix a broken symmetry: quantum dynamics of symmetry restoration in a ferromagnetic Bose-Einstein condensate
International Nuclear Information System (INIS)
Damski, Bogdan; Zurek, Wojciech H
2008-01-01
We discuss the dynamics of a quantum phase transition in a spin-1 Bose-Einstein condensate when it is driven from the magnetized broken-symmetry phase to the unmagnetized 'symmetric' polar phase. We determine where the condensate goes out of equilibrium as it approaches the critical point, and compute the condensate magnetization at the critical point. This is done within a quantum Kibble-Zurek scheme traditionally employed in the context of symmetry-breaking quantum phase transitions. Then we study the influence of the non-equilibrium dynamics near a critical point on the condensate magnetization. In particular, when the quench stops at the critical point, nonlinear oscillations of magnetization occur. They are characterized by a period and an amplitude that are inversely proportional. If we keep driving the condensate far away from the critical point through the unmagnetized 'symmetric' polar phase, the amplitude of magnetization oscillations slowly decreases, reaching a nonzero asymptotic value. That process is described by an equation that can be mapped onto the classical mechanical problem of a particle moving under the influence of harmonic and 'anti-friction' forces whose interplay leads to surprisingly simple fixed-amplitude oscillations. We obtain several scaling results relating the condensate magnetization to the quench rate, and numerically verify all analytical predictions.
10. Conductance fluctuations in disordered superconductors with broken time-reversal symmetry near two dimensions
International Nuclear Information System (INIS)
Ryu, S.; Furusaki, A.; Ludwig, A.W.W.; Mudry, C.
2007-01-01
We extend the analysis of the conductance fluctuations in disordered metals by Altshuler, Kravtsov, and Lerner (AKL) to disordered superconductors with broken time-reversal symmetry in d=(2+ε) dimensions (symmetry classes C and D of Altland and Zirnbauer). Using a perturbative renormalization group analysis of the corresponding non-linear sigma model (NLσM) we compute the anomalous scaling dimensions of the dominant scalar operators with 2s gradients to one-loop order. We show that, in analogy with the result of AKL for ordinary, metallic systems (Wigner-Dyson classes), an infinite number of high-gradient operators would become relevant (in the renormalization group sense) near two dimensions if contributions beyond one-loop order are ignored. We explore the possibility to compare, in symmetry class D, the ε=(2-d) expansion in d<2 with exact results in one dimension. The method we use to perform the one-loop renormalization analysis is valid for general symmetric spaces of Kaehler type, and suggests that this is a generic property of the perturbative treatment of NLσMs defined on Riemannian symmetric target spaces
11. Topological Creation of Acoustic Pseudospin Multipoles in a Flow-Free Symmetry-Broken Metamaterial Lattice
Science.gov (United States)
Zhang, Zhiwang; Wei, Qi; Cheng, Ying; Zhang, Ting; Wu, Dajian; Liu, Xiaojun
2017-02-01
The discovery of topological acoustics has revolutionized fundamental concepts of sound propagation, giving rise to strikingly unconventional acoustic edge modes immune to scattering. Because of the spinless nature of sound, the "spinlike" degree of freedom crucial to topological states in acoustic systems is commonly realized with circulating background flow or preset coupled resonator ring waveguides, which drastically increases the engineering complexity. Here we realize the acoustic pseudospin multipolar states in a simple flow-free symmetry-broken metamaterial lattice, where the clockwise (anticlockwise) sound propagation within each metamolecule emulates pseudospin down (pseudospin up). We demonstrate that tuning the strength of intermolecular coupling by simply contracting or expanding the metamolecule can induce the band inversion effect between the pseudospin dipole and quadrupole, which leads to a topological phase transition. Topologically protected edge states and reconfigurable topological one-way transmission for sound are further demonstrated. These results provide diverse routes to construct novel acoustic topological insulators with versatile applications.
12. New Boundary-Driven Twist States in Systems with Broken Spatial Inversion Symmetry
Science.gov (United States)
Hals, Kjetil M. D.; Everschor-Sitte, Karin
2017-09-01
A full description of a magnetic sample includes a correct treatment of the boundary conditions (BCs). This is in particular important in thin film systems, where even bulk properties might be modified by the properties of the boundary of the sample. We study generic ferromagnets with broken spatial inversion symmetry and derive the general micromagnetic BCs of a system with Dzyaloshinskii-Moriya interaction (DMI). We demonstrate that the BCs require the full tensorial structure of the third-rank DMI tensor and not just the antisymmetric part, which is usually taken into account. Specifically, we study systems with C_∞v symmetry and explore the consequences of the DMI. Interestingly, we find that the DMI already in the simplest case of a ferromagnetic thin film leads to a purely boundary-driven magnetic twist state at the edges of the sample. The twist state represents a new type of DMI-induced spin structure, which is completely independent of the internal DMI field. We estimate the size of the texture-induced magnetoresistance effect being in the range of that of domain walls.
13. Fibre multi-wave mixing combs reveal the broken symmetry of Fermi-Pasta-Ulam recurrence
Science.gov (United States)
Mussot, Arnaud; Naveau, Corentin; Conforti, Matteo; Kudlinski, Alexandre; Copie, Francois; Szriftgiser, Pascal; Trillo, Stefano
2018-05-01
In optical fibres, weak modulations can grow at the expense of a strong pump to form a triangular comb of sideband pairs, until the process is reversed. Repeated cycles of such conversion and back-conversion constitute a manifestation of the universal nonlinear phenomenon known as Fermi-Pasta-Ulam recurrence. However, it remains a major challenge to observe the coexistence of different types of recurrences owing to the spontaneous symmetry-breaking nature of such a phenomenon. Here, we implement a novel non-destructive technique that allows the evolution in amplitude and phase of frequency modes to be reconstructed via post-processing of the fibre backscattered light. We clearly observe how control of the input modulation seed results in different recursive behaviours emerging from the phase-space structure dictated by the spontaneously broken symmetry. The proposed technique is an important tool to characterize other mixing processes and new regimes of rogue-wave formation and wave turbulence in fibre optics.
14. Chiral Lagrangian with broken scale: Testing the restoration of symmetries in astrophysics and in the laboratory
International Nuclear Information System (INIS)
Bonanno, Luca; Drago, Alessandro
2009-01-01
We study matter at high density and temperature using a chiral Lagrangian in which the breaking of scale invariance is regulated by the value of a scalar field, called dilaton [E. K. Heide, S. Rudaz, and P. J. Ellis, Nucl. Phys. A571, 713 (1994); G. W. Carter, P. J. Ellis, and S. Rudaz, Nucl. Phys. A603, 367 (1996); G. W. Carter, P. J. Ellis, and S. Rudaz, Nucl. Phys. A618, 317 (1997); G. W. Carter and P. J. Ellis, Nucl. Phys. A628, 325 (1998)]. We provide a phase diagram describing the restoration of chiral and scale symmetries. We show that chiral symmetry is restored at large temperatures, but at low temperatures it remains broken at all densities. We also show that scale invariance is more easily restored at low rather than large baryon densities. The masses of vector mesons scale with the value of the dilaton and their values initially slightly decrease with the density, but then they increase again for densities larger than ∼3ρ_0. The pion mass increases continuously with the density, and at ρ_0 and T=0 its value is ∼30 MeV larger than in the vacuum. We show that the model is compatible with the bounds stemming from astrophysics, as, e.g., the one associated with the maximum mass of a neutron star. The most striking feature of the model is a very significant softening at large densities, which manifests also as a strong reduction of the adiabatic index. Although the softening has probably no consequence for supernova explosion via the direct mechanism, it could modify the signal in gravitational waves associated with the merging of two neutron stars.
15. Study of spontaneously broken conformal symmetry in curved space-times
International Nuclear Information System (INIS)
Janson, M.M.
1977-05-01
Spontaneous breakdown of Weyl invariance (local scale invariance) in a conformally invariant extension of a gauge model for weak and electromagnetic interactions is considered. The existence of an asymmetric vacuum for the Higgs field, φ, is seen to depend on the space-time structure via the Gursey-Penrose term, ∼φ†φR, in the action. (R denotes the scalar curvature.) The effects of a prescribed space-time structure on spontaneously broken Weyl invariance are investigated. In a cosmological space-time, it is found that initially, in the primordial fireball, the symmetry must hold exactly. Spontaneous symmetry breaking (SSB) develops as the universe expands and cools. Consequences of this model include a dependence of G_F, the effective weak-interaction coupling strength, on ''cosmic time''. It is seen to decrease monotonically; in the present epoch (Ġ_F/G_F)_today ≲ 10^-10 yr^-1. The effects of the Schwarzschild geometry on SSB are explored. In the interior of a neutron star the Higgs vacuum expectation value, and consequently G_F, is found to have a radial dependence. The magnitude of this variation does not warrant revision of present models of neutron star structures. Another perspective on the problem considers a theory of gravitation (conformal relativity) to be incorporated in the conformally invariant gauge model of weak and electromagnetic interactions. If SSB develops, the vacuum gravitational field equations are the Einstein field equations with a cosmological constant. The stability of the asymmetric vacuum solution is investigated to ascertain whether SSB can occur.
16. Generation of parasitic axial flow by drift wave turbulence with broken symmetry: Theory and experiment
Science.gov (United States)
Hong, R.; Li, J. C.; Hajjar, R.; Chakraborty Thakur, S.; Diamond, P. H.; Tynan, G. R.
2018-05-01
Detailed measurements of intrinsic axial flow generation parallel to the magnetic field in the controlled shear decorrelation experiment linear plasma device with no axial momentum input are presented and compared to theory. The results show a causal link from the density gradient to drift-wave turbulence with broken spectral symmetry and development of the axial mean parallel flow. As the density gradient steepens, the axial and azimuthal Reynolds stresses increase and radially sheared azimuthal and axial mean flows develop. A turbulent axial momentum balance analysis shows that the axial Reynolds stress drives the radially sheared axial mean flow. The turbulent drive (Reynolds power) for the azimuthal flow is an order of magnitude greater than that for axial flow, suggesting that the turbulence fluctuation levels are set by azimuthal flow shear regulation. The direct energy exchange between axial and azimuthal mean flows is shown to be insignificant. Therefore, the axial flow is parasitic to the turbulence-zonal flow system and is driven primarily by the axial turbulent stress generated by that system. The non-diffusive, residual part of the axial Reynolds stress is found to be proportional to the density gradient and is formed due to dynamical asymmetry in the drift-wave turbulence.
17. A solution to the rho-π puzzle: Spontaneously broken symmetries of the quark model
International Nuclear Information System (INIS)
Caldi, D.G.; Pagels, H.
1976-01-01
This article proposes a solution to the long-standing ρ-π puzzle: how can the ρ and π be members of a quark-model U(6) 36 while the π is a Nambu-Goldstone boson satisfying partial conservation of the axial-vector current (PCAC)? Our solution to the puzzle requires a revision of conventional concepts regarding the vector mesons ρ, ω, K*, and φ. Just as the π is a Goldstone state, a collective excitation of the Nambu-Jona-Lasinio type, transforming as a member of the (3, 3̄) + (3̄, 3) representation of the chiral SU(3) x SU(3) group, so also the ρ transforms like (3, 3̄) + (3̄, 3) and is also a collective state, a ''dormant'' Goldstone boson that is a true Goldstone boson in the static chiral U(6) x U(6) limit. The static chiral U(6) x U(6) is to be spontaneously broken to static U(6) in the vacuum. Relativistic effects provide for U(6) breaking and a massive ρ. This viewpoint has many consequences. Vector-meson dominance is a consequence of spontaneously broken chiral symmetry: the mechanism that couples the axial-vector current to the π couples the vector current to the ρ. The transition rate is calculated as γ_ρ^-1 = f_π/m_ρ, in rough agreement with experiment. This picture requires soft ρ's to decouple. The chiral partner of the ρ is not the A_1 but the B(1235). The experimental absence of the A_1 is no longer a theoretical embarrassment in this scheme. As the analog of PCAC for the pion, we establish a tensor-field identity for the ρ meson in which the ρ is interpreted as a dormant Goldstone state. The decays δ → η + π, B → ω + π, ε → 2π are estimated and found to be in agreement with the observed rates. A static U(6) x U(6) generalization of the Σ model is presented with the π, ρ, σ, B in the (6, 6̄) + (6̄, 6) representation. The ρ emerges as a dormant Goldstone boson in this model.
18. Symanzik–Becchi–Rouet–Stora lessons on renormalizable models with broken symmetry: The case of Lorentz violation
Energy Technology Data Exchange (ETDEWEB)
Del Cima, Oswaldo M.; Franco, Daniel H.T.; Piguet, Olivier, E-mail: opiguet@pq.cnpq.br
2016-11-15
In this paper, we revisit the issue, intensively studied in recent years, of the generation of terms by radiative corrections in models with broken Lorentz symmetry. The algebraic perturbative method of handling the problem of renormalization of theories with Lorentz symmetry breaking is used. We hope to make clear Symanzik's aphorism: “Whether you like it or not, you have to include in the lagrangian all counter terms consistent with locality and power-counting, unless otherwise constrained by Ward identities.”
19. The renormalization group of relativistic quantum field theory as a set of generalized, spontaneously broken, symmetry transformations
International Nuclear Information System (INIS)
Maris, Th.A.J.
1976-01-01
The renormalization group theory has a natural place in a general framework of symmetries in quantum field theories. Seen in this way, a 'renormalization group' is a one-parametric subset of the direct product of dilatation and renormalization groups. This subset of spontaneously broken symmetry transformations connects the inequivalent solutions generated by a parameter-dependent regularization procedure, as occurs in renormalized perturbation theory. By considering the global, rather than the infinitesimal, transformations, an expression for general vertices is directly obtained, which is the formal solution of exact renormalization group equations.
20. Non-local ground-state functional for quantum spin chains with translational broken symmetry
Energy Technology Data Exchange (ETDEWEB)
Libero, Valter L.; Penteado, Poliana H.; Veiga, Rodrigo S. [Universidade de Sao Paulo (IFSC/USP), Sao Carlos, SP (Brazil). Inst. de Fisica
2011-07-01
Thanks to the development and use of new materials with special doping, the study of Heisenberg spin chains with broken translational symmetry, induced for instance by finite-size effects, bond defects or by an impurity spin in the chain, has become relevant. Exact numerical results demand huge computational efforts, due to the size of the Hilbert space involved and the lack of symmetry to exploit. Density functional theory (DFT) has been considered a simple alternative for obtaining ground-state properties of such systems. Usually, DFT starts with a uniform system to build the correlation energy and afterwards implements a local approximation to construct local functionals. Based on our proof of the Hohenberg-Kohn theorem for Heisenberg models, and in order to describe more realistic models, we have recently developed a non-local exchange functional for the ground-state energy of quantum spin chains. An alternating-bond chain is used to obtain the correlation energy, and a local unit-cell approximation (LUCA) is defined in the context of DFT. The alternating chain is a good starting point for constructing functionals since it is intrinsically non-homogeneous; therefore, instead of the usual local approximation (like LDA for electronic systems) we need to introduce an approximation based upon a unit-cell concept, which renders a non-local functional in the bond exchange interaction. The agreement with exact numerical data (obtained only for small chains, although the functional can be applied to chains of arbitrary size) is significantly better than in our previous local formulation, even for chains with several ferromagnetic or antiferromagnetic bond defects. These results encourage us to extend the concept of LUCA to chains with alternating spin magnitudes. We have also constructed a non-local functional based on an alternating-spin chain, instead of a local alternating-bond, using spin-wave theory. Because of its non-local nature, this functional is expected to
2. Quasi-Unit-Cell Model for an Al-Ni-Co Ideal Quasicrystal based on Clusters with Broken Tenfold Symmetry
International Nuclear Information System (INIS)
Abe, Eiji; Saitoh, Koh; Takakura, H.; Tsai, A. P.; Steinhardt, P. J.; Jeong, H.-C.
2000-01-01
We present new evidence supporting the quasi-unit-cell description of the Al72Ni20Co8 decagonal quasicrystal, which shows that the solid is composed of repeating, overlapping decagonal cluster columns with broken tenfold symmetry. We propose an atomic model which gives a significantly improved fit to electron microscopy experiments compared to a previous proposal by us and to alternative proposals with tenfold-symmetric clusters. (c) 2000 The American Physical Society
3. Multistabilities and symmetry-broken one-color and two-color states in closely coupled single-mode lasers.
Science.gov (United States)
Clerkin, Eoin; O'Brien, Stephen; Amann, Andreas
2014-03-01
We theoretically investigate the dynamics of two mutually coupled, identical single-mode semiconductor lasers. For small separation and large coupling between the lasers, symmetry-broken one-color states are shown to be stable. In this case the light outputs of the lasers have significantly different intensities, while at the same time the lasers are locked to a single common frequency. For intermediate coupling we observe stable symmetry-broken two-color states, where both lasers lase simultaneously at two optical frequencies which are separated by up to 150 GHz. Using a five-dimensional model, we identify the bifurcation structure which is responsible for the appearance of symmetric and symmetry-broken one-color and two-color states. Several of these states give rise to multistabilities and therefore allow for the design of all-optical memory elements on the basis of two coupled single-mode lasers. The switching performance of selected designs of optical memory elements is studied numerically.
4. Model-Independent Analysis of Tri-bimaximal Mixing: A Softly-Broken Hidden or an Accidental Symmetry?
Energy Technology Data Exchange (ETDEWEB)
Albright, Carl H.; /Northern Illinois U. /Fermilab; Rodejohann, Werner; /Heidelberg, Max Planck Inst.
2008-04-01
To address the issue of whether tri-bimaximal mixing (TBM) is a softly-broken hidden or an accidental symmetry, we adopt a model-independent analysis in which we perturb a neutrino mass matrix leading to TBM in the most general way but leave the three texture zeros of the diagonal charged lepton mass matrix unperturbed. We compare predictions for the perturbed neutrino TBM parameters with those obtained from typical SO(10) grand unified theories with a variety of flavor symmetries. Whereas SO(10) GUTs almost always predict a normal mass hierarchy for the light neutrinos, TBM has a priori no preference for neutrino masses. We find, in particular for the latter, that the value of |U_e3| is very sensitive to the neutrino mass scale and ordering. Observation of |U_e3|^2 > 0.001 to 0.01 within the next few years would be incompatible with softly-broken TBM and a normal mass hierarchy and would suggest that the apparent TBM symmetry is an accidental symmetry instead. No such conclusions can be drawn for the inverted and quasi-degenerate hierarchy spectra.
5. Group theoretical classification of broken symmetry states of the two-fold degenerate Hubbard model on a triangular lattice
International Nuclear Information System (INIS)
Masago, Akira; Suzuki, Naoshi
2001-01-01
By a group theoretical procedure we derive the possible spontaneously broken-symmetry states for the two-fold degenerate Hubbard model on a two-dimensional triangular lattice. For ordering wave vectors corresponding to the points Γ and K in the first BZ we find 22 states which include 16 collinear and six non-collinear states. The collinear states include the usual SDW and CDW states which appear also in the single-band Hubbard model. The non-collinear states include exotic ordering states of orbitals and spins as well as the triangular arrangement of spins
6. Master formula approach to broken chiral U(3)xU(3) symmetry
Energy Technology Data Exchange (ETDEWEB)
Hiroyuki Kamano
2010-04-01
The master formula approach to chiral symmetry breaking proposed by Yamagishi and Zahed is extended to the U_R(3)xU_L(3) group, in which effects of the U_A(1) anomaly and the flavor symmetry breaking m_u \
7. Correlation and disorder-enhanced nematic spin response in superconductors with weakly broken rotational symmetry
DEFF Research Database (Denmark)
Andersen, Brian Møller; Graser, S.; Hirschfeld, P. J.
2012-01-01
Recent experimental and theoretical studies have highlighted the possible role of an electronic nematic liquid in underdoped cuprate superconductors. We calculate, within a model of a d-wave superconductor with Hubbard correlations, the spin susceptibility in the case of a small explicitly broken...
8. Spontaneously broken symmetry of vacuum in external gravitational fields of isotropic cosmological models
International Nuclear Information System (INIS)
Veryaskin, A.V.; Lapchinskij, V.G.; Nekrasov, V.I.; Rubakov, V.A.
1981-01-01
Behaviour of vacuum symmetry in the model of a self-interacting scalar field in the open and closed isotropic cosmological spaces is investigated. Considered are the cases with the mass squared of the scalar field m^2 > 0, m^2 = 0 and m^2 < 0. For m^2 < 0 at exponentially large scale factors, the study of the problem of the behaviour of the symmetry requires going beyond the limits of perturbation theory. The final behaviour of the vacuum symmetry in the open model at small radii depends on the combined effect of all the external factors.
9. Lattice QCD with light quark masses: Does chiral symmetry get broken spontaneously
International Nuclear Information System (INIS)
Barbour, I.M.; Schierholz, G.; Teper, M.; Gilchrist, J.P.; Schneider, H.
1983-03-01
We present a first direct calculation of the properties of QCD for the small quark masses of phenomenological interest without extrapolations. We describe methods specially adapted to invert the fermion matrix at small quark masses. We use these methods to calculate directly on presently used lattice sizes with different boundary conditions. As is to be expected for a finite system, we do not observe spontaneous chiral symmetry breaking. By comparing the results obtained on lattices of different size we see, however, indications that are consistent with eventual spontaneous chiral symmetry breaking in the infinite volume limit. Our calculations underline the importance of using antiperiodic boundary conditions for fermions. (orig.)
10. Broken SU(5) x SU(5) chiral symmetry and the classification of B mesons
International Nuclear Information System (INIS)
Hatzis, M.
1984-01-01
We consider broken SU(5) x SU(5) chiral symmetry and assume that the vacuum is SU(5)-symmetric. Using the observed mass spectrum of pseudoscalar mesons, and setting the b̄u mass in the range 5.2 ± 0.06 GeV, we predict the masses of the b̄s, b̄c, and η_b states as well as the axial-current couplings f_i/f_π. SU(5) x SU(5) is found to be consistent with SU(4) x SU(4) breaking. The problem of η - η' - η_c - η_b mixing is also discussed.
11. Broken symmetries and directed collective energy transport in spatially extended systems
DEFF Research Database (Denmark)
Flach, S.; Zolotaryuk, Yaroslav; Miroshnichenko, A. E.
2002-01-01
We study the appearance of directed energy current in homogeneous spatially extended systems coupled to a heat bath in the presence of an external ac field E(t). The systems are described by nonlinear field equations. By making use of a symmetry analysis, we predict the right choice of E(t) and ...
12. Influence of broken flavor and C and P symmetry on the quark propagator
Energy Technology Data Exchange (ETDEWEB)
Maas, Axel; Mian, Walid Ahmed [University of Graz, Institute of Physics, NAWI Graz, Graz (Austria)
2017-02-15
Embedding QCD into the standard model breaks various symmetries of QCD explicitly, especially C and P. While these effects are usually perturbatively small, they can be amplified in extreme environments like merging neutron stars or by the interplay with new physics. To correctly treat these cases requires fully backcoupled calculations. To pave the way for later investigations of hadronic physics, we study the QCD quark propagator coupled to an explicit breaking. This substantially increases the tensor structure even for this simplest correlation function. To cope with the symmetry structure, and covering all possible quark masses, from the top quark mass to the chiral limit, we employ Dyson-Schwinger equations. While at weak breaking the qualitative effects have similar trends as in perturbation theory, even moderately strong breakings lead to qualitatively different effects, non-linearly amplified by the strong interactions. (orig.)
13. Broken symmetries at the origin of matter, at the origin of life and at the origin of culture
International Nuclear Information System (INIS)
1998-01-01
In earliest cosmic history the universe started with matter and not with antimatter. Shortly after the beginning the electroweak interaction - prominent in nuclear β decay - acted as a left-hander. Much later, in prebiotic evolution, optically left-handed amino acids determined the unique signature of the terrestrial organic life that followed. Aeons later again, homo sapiens appears as predominantly right-handed and creates cultures with many broken symmetries. Along these pathways of history it was essential that choices were made - left or right, matter or antimatter - but on several occasions it seemed less relevant which choice was made. We think that biochirality occurred by global chance, perhaps by local necessity, but without causal links to the PCT theorem. In other cases - e.g. the standardization to right-handed screws - the choice will have been made by causal necessity. (author)
14. Broken symmetry phase transition in solid p-H₂, o-D₂ and HD: crystal field effects
Science.gov (United States)
Freiman, Yu. A.; Hemley, R. J.; Jezowski, A.; Tretyak, S. M.
1999-04-01
We report the effect of the crystal field (CF) on the broken symmetry phase transition (BSP) in solid parahydrogen, orthodeuterium, and hydrogen deuteride. The CF was calculated taking into account a distortion from the ideal HCP structure. We find that, in addition to the molecular field generated by the coupling terms in the intermolecular potential, the Hamiltonian of the system contains a crystal-field term, originating from single-molecular terms in the intermolecular potential. Ignoring the CF is the main cause of the systematic underestimation of the transition pressure, characteristic of published theories of the BSP transition. The distortion of the lattice that gives rise to the negative CF in response to the applied pressure is in accord with the general Le Chatelier-Braun principle.
15. Broken symmetries at the origin of matter, at the origin of life and at the origin of culture
Energy Technology Data Exchange (ETDEWEB)
Klinken, J. van [Kernfysisch Versneller Instituut, University of Groningen, Groningen (Netherlands)
1998-01-01
In earliest cosmic history the universe started with matter and not with antimatter. Shortly after the beginning the electroweak interaction - prominent in nuclear β decay - acted as a left-hander. Much later, in prebiotic evolution, optically left-handed amino acids determined the unique signature of the terrestrial organic life that followed. Aeons later again, homo sapiens appears as predominantly right-handed and creates cultures with many broken symmetries. Along these pathways of history it was essential that choices were made - left or right, matter or antimatter - but on several occasions it seemed less relevant which choice was made. We think that biochirality occurred by global chance, perhaps by local necessity, but without causal links to the PCT theorem. In other cases - e.g. the standardization to right-handed screws - the choice will have been made by causal necessity. (author) 14 refs, 8 figs, 1 tab
16. Transport theory for energetic alpha particles and tolerable magnitude of error fields in tokamaks with broken symmetry
International Nuclear Information System (INIS)
Shaing, K.C.; Hsu, C.T.
2014-01-01
A transport theory for energetic fusion-born alpha particles in tokamaks with broken symmetry has been developed. The theory is a generalization of the theory of neoclassical toroidal plasma viscosity for thermal particles in tokamaks. It is shown that the radial energy transport rate can be comparable to the slowing-down rate for energetic alpha particles when the ratio of the typical magnitude of the perturbed magnetic field strength to that of the equilibrium magnetic field strength is of the order of 10⁻⁴ or larger. This imposes a constraint on the magnitude of the error fields in thermonuclear fusion reactors. The implications for stellarators as potential fusion reactors are also discussed. (paper)
17. Broken symmetry within crystallographic super-spaces: structural and dynamical aspects
International Nuclear Information System (INIS)
Mariette, Celine
2013-01-01
Aperiodic crystals have the property of possessing long-range order without translational symmetry. These crystals are described within the formalism of super-space crystallography. In this manuscript, we focus on the symmetry breaking which takes place in such crystallographic super-space groups, considering the prototype family of n-alkane/urea. Studies performed by X-ray diffraction using synchrotron sources reveal multiple structural solutions implying, or not, changes of the dimension of the super-space. Once the order parameter and the symmetry breaking have been characterized, we present the critical pre-transitional phenomena associated with phase transitions of group/subgroup type. Coherent neutron scattering and inelastic X-ray scattering allow a dynamical analysis of different kinds of excitations in these materials (phonons, phasons). The inclusion compounds with short guest molecules (alkanes C_nH_{2n+2}, n varying from 7 to 13) show at room temperature unidimensional 'liquid-like' phases. The dynamical disorder along the incommensurate direction of these materials generates new structural solutions at low temperature (inter-modulated monoclinic composite, commensurate lock-in). (author)
18. One-way propagation of bulk states and robust edge states in photonic crystals with broken inversion and time-reversal symmetries
Science.gov (United States)
Lu, Jin-Cheng; Chen, Xiao-Dong; Deng, Wei-Min; Chen, Min; Dong, Jian-Wen
2018-07-01
The valley is a flexible degree of freedom for light manipulation in photonic systems. In this work, we introduce the valley concept in magnetic photonic crystals with broken inversion symmetry. One-way propagation of bulk states is demonstrated by exploiting the pseudo-gap where bulk states only exist at one single valley. In addition, the transition between Hall and valley-Hall nontrivial topological phases is also studied in terms of the competition between the broken inversion and time-reversal symmetries. At the photonic boundary between two topologically distinct photonic crystals, we illustrate the one-way propagation of edge states and demonstrate their robustness against defects.
19. Charge transport of graphene ferromagnetic-insulator-superconductor junction with pairing state of broken time reversal symmetry
Directory of Open Access Journals (Sweden)
Yaser Hajati
2015-04-01
We investigate charge transport through a graphene-based ferromagnetic-insulator-superconductor junction with broken time-reversal symmetry (BTRS) of the d_{x²−y²} + is and d_{x²−y²} + id_{xy} superconductor using the extended Blonder-Tinkham-Klapwijk formalism. Our analysis has revealed several characteristics of this junction, providing a useful probe of the role of the order-parameter symmetry in superconductivity. We find that the presence of the BTRS (X state) in the superconductor region has a strong effect on the tunneling conductance curves, leading to a decrease in the height of the zero-bias conductance peak (ZBCP). In particular, we show that the magnitude of the superconducting proximity effect depends to a great extent on X, and by increasing X the zero-bias charge conductance oscillations with respect to the rotation angle β are suppressed. In addition, we find that at the maximum rotation angle β = π/4, introducing BTRS in the FIS junction causes oscillatory behavior of the zero-bias charge conductance with the barrier strength (χG), with a period of π, and as X approaches 1 the amplitude of the charge conductance oscillations increases. This behavior is drastically different from that of similar graphene junctions without BTRS. Finally, we suggest an experimental setup for verifying our predicted effects.
20. Marginal deformations of 3d supersymmetric U(N) model and broken higher spin symmetry
Energy Technology Data Exchange (ETDEWEB)
Hikida, Yasuaki [Center for Gravitational Physics, Yukawa Institute for Theoretical Physics, Kyoto University,Kyoto 606-8502 (Japan); Wada, Taiki [Department of Physical Sciences, College of Science and Engineering, Ritsumeikan University,Shiga 525-8577 (Japan)
2017-03-08
We examine the marginal deformations of double-trace type in the 3d supersymmetric U(N) model with N complex free bosons and fermions. We compute the anomalous dimensions of higher spin currents to 1/N order but to all orders in the deformation parameters, mainly by applying conformal perturbation theory. The 3d field theory is supposed to be dual to 4d supersymmetric Vasiliev theory, and the marginal deformations are argued to correspond to modifying boundary conditions for bulk scalars and fermions. Thus the modification should break higher spin gauge symmetry and generate the masses of higher spin fields. We provide support for the dual interpretation by relating bulk computations in terms of Witten diagrams to boundary ones in conformal perturbation theory.
1. Marginal deformations of 3d supersymmetric U(N) model and broken higher spin symmetry
International Nuclear Information System (INIS)
2017-01-01
We examine the marginal deformations of double-trace type in the 3d supersymmetric U(N) model with N complex free bosons and fermions. We compute the anomalous dimensions of higher spin currents to 1/N order but to all orders in the deformation parameters, mainly by applying conformal perturbation theory. The 3d field theory is supposed to be dual to 4d supersymmetric Vasiliev theory, and the marginal deformations are argued to correspond to modifying boundary conditions for bulk scalars and fermions. Thus the modification should break higher spin gauge symmetry and generate the masses of higher spin fields. We provide support for the dual interpretation by relating bulk computations in terms of Witten diagrams to boundary ones in conformal perturbation theory.
2. Coherent non-linear optical response in SU(2) symmetry broken single and bilayer graphene
International Nuclear Information System (INIS)
Kumar, Vipin; Enamullah; Kumar, Upendra; Setlur, Girish S.
2014-01-01
Anomalous Rabi oscillations in single and bilayer graphene, in the absence of time-reversal symmetry, are described. The main findings of this work are that intra-layer sublattice space asymmetry has a remarkable effect on the anomalous Rabi frequency in single and bilayer graphene, namely that it is offset by the asymmetry parameter. However, the conventional Rabi frequency is nearly independent of the asymmetry parameter. Inter-layer asymmetry in bilayer graphene has an even more significant effect on the anomalous Rabi frequency. When inter-layer asymmetry is taken into account, the anomalous Rabi frequency versus the external field goes through a minimum. The induced current in the frequency domain in these systems shows a finite threshold behavior even for vanishingly small applied fields. These offset oscillations are attributable to the asymmetry parameter in these systems, and are observable only for weak applied fields. For stronger applied fields these phenomena tend towards those without asymmetry.
3. REVIEWS OF TOPICAL PROBLEMS: Broken symmetry and magnetoacoustic effects in ferro- and antiferromagnetics
Science.gov (United States)
Turov, Evgenii A.; Shavrov, Vladimir G.
1983-07-01
This review of some aspects of the magnetoacoustics of ferro- and antiferromagnetic materials has been written in connection with the 25th anniversary of the rise of this field of physics of magnetic phenomena. Primary attention is paid to relatively new problems that have not been reflected in the existing monographs and reviews. The topic is a group of linear magnetoacoustic effects that manifest spontaneous symmetry breaking caused by magnetic ordering in a system of two coupled fields: the magnetization field M(r) and the deformation field u_ij(r). To some extent these effects are analogous to the Higgs effect in the theory of elementary particles (the Higgs mechanism of the origin of the mass of a particle) or the Meissner effect in the theory of superconductivity. A direct analog of the stated effects is the so-called magnetoelastic gap in the magnon spectrum, while an analog of an accompanying effect is the softening of the quasiacoustic modes interacting with it (up to the vanishing of the corresponding dynamic elastic moduli). However, a characteristic feature of such effects in crystalline (anisotropic) magnetic materials is that they are manifested mainly near points of magnetic (spin-reorientation) phase transitions. This review treats the coupled magnetoelastic waves in ferro- and antiferromagnetic materials of different types that show phase transitions with respect to temperature, magnetic field, or pressure.
4. (Pseudo-)Goldstone boson interaction in D=2+1 systems with a spontaneously broken internal rotation symmetry
Directory of Open Access Journals (Sweden)
Christoph P. Hofmann
2016-03-01
The low-temperature properties of systems characterized by a spontaneously broken internal rotation symmetry, O(N) → O(N−1), are governed by Goldstone bosons and can be derived systematically within effective Lagrangian field theory. In the present study we consider systems living in two spatial dimensions, and evaluate their partition function at low temperatures and weak external fields up to three-loop order. Although our results are valid for any such system, here we use magnetic terminology, i.e., we refer to quantum spin systems. We discuss the sign of the (pseudo-)Goldstone boson interaction in the pressure, staggered magnetization, and susceptibility as a function of an external staggered field for general N. As it turns out, the d=2+1 quantum XY model (N=2) and the d=2+1 Heisenberg antiferromagnet (N=3) are rather special, as they represent the only cases where the spin-wave interaction in the pressure is repulsive in the whole parameter regime where the effective expansion applies. Remarkably, the d=2+1 XY model is the only system where the interaction contribution in the staggered magnetization (susceptibility) tends to positive (negative) values at low temperatures and weak external field.
5. Importance of Broken Gauge Symmetry in Addressing Three, Key, Unanswered Questions Posed by Low Energy Nuclear Reactions (LENRs)
Science.gov (United States)
Chubb, Scott
2003-03-01
Three, Key, Unanswered Questions posed by LENRs are: 1. How do we explain the lack of high energy particles (HEPs)? 2. Can we understand and prioritize the ways coupling can occur between nuclear and atomic length scales? 3. What are the roles of Surface-Like (SL), as opposed to Bulk-Like (BL), processes in triggering nuclear phenomena? One important source of confusion associated with each of these questions is the common perception that the quantum mechanical phases of different particles are not correlated with each other. When the momenta p of interacting particles are large, and reactions occur rapidly (between HEPs, for example), this is a valid assumption. But when the relative difference in p becomes vanishingly small, between one charge and many others, as a result of implicit electromagnetic coupling, each charge can share a common phase relative to the others, modulo 2nπ, where n is an integer, even when outside forces are introduced. The associated forms of broken gauge symmetry distinguish BL from SL phenomena at room temperature, also explain super- and normal-conductivity in solids, and can be used to address the Three, Key, Unanswered Questions posed by LENRs.
6. Time-dependent Gross-Pitaevskii equation for composite bosons as the strong-coupling limit of the fermionic broken-symmetry random-phase approximation
International Nuclear Information System (INIS)
Strinati, G.C.; Pieri, P.
2004-01-01
The linear response to a space- and time-dependent external disturbance of a system of dilute condensed composite bosons at zero temperature, as obtained from the linearized version of the time-dependent Gross-Pitaevskii equation, is shown to result also from the strong-coupling limit of the time-dependent BCS (or broken-symmetry random-phase) approximation for the constituent fermions subject to the same external disturbance. In this way, it is possible to connect excited-state properties of the bosonic and fermionic systems by placing the Gross-Pitaevskii equation in perspective with the corresponding fermionic approximations.
7. Effects of Broken Symmetry in Tokamaks: Global Braking of Toroidal Rotation and Self-consistent Determination of Neoclassical Magnetic Islands Velocity
International Nuclear Information System (INIS)
Lazzaro, Enzo
2009-01-01
Established results of neoclassical kinetic theory are used in a fluid model to show that in low collisionality regimes (ν and 1/ν) the propagation velocity of Neoclassical Tearing Mode (NTM) magnetic islands of sufficient width is determined self-consistently by the Neoclassical Toroidal Viscosity (NTV) appearing because of broken symmetry. The NTV effect on bulk plasma rotation may also explain recent observations on momentum transport. At the same time this affects the role of the neoclassical ion polarization current in neoclassical tearing mode (NTM) stability.
8. One-loop corrections to the perturbative unitarity bounds in the CP-conserving two-Higgs doublet model with a softly broken ℤ₂ symmetry
Energy Technology Data Exchange (ETDEWEB)
Grinstein, Benjamín [Department of Physics, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093 (United States); Murphy, Christopher W. [Scuola Normale Superiore, Piazza dei Cavalieri 7, Pisa 56126 (Italy); Uttayarat, Patipan [Department of Physics, Srinakharinwirot University, Wattana, Bangkok 10110 (Thailand)
2016-06-13
We compute all of the one-loop corrections that are enhanced, O(λ_i λ_j/16π²), in the limit s ≫ |λ_i| v² ≫ M_W², s ≫ m_12², to all the 2→2 longitudinal vector boson and Higgs boson scattering amplitudes in the CP-conserving two-Higgs doublet model with a softly broken ℤ₂ symmetry. In the two simplified scenarios we study, the typical bound we find is |λ_i(s)| ≲ 4.
9. Broken space-time symmetries and mechanisms of rectification of ac fields by nonlinear (non)adiabatic response
DEFF Research Database (Denmark)
Denisov, S.; Flach, S.; Ovchinnikov, A. A.
2002-01-01
We consider low-dimensional dynamical systems exposed to a heat bath and to additional ac fields. The presence of these ac fields may lead to a breaking of certain spatial or temporal symmetries, which in turn cause nonzero averages of relevant observables. Nonlinear (non)adiabatic response is employed to explain the effect. We consider the case of a particle in a periodic potential as an example and discuss the relevant symmetry breakings and the mechanisms of rectification of the current in such a system.
10. Inversion of single-particle levels in nuclear Hartree-Fock and Brueckner-HF calculations with broken symmetry
International Nuclear Information System (INIS)
Becker, R.L.; Svenne, J.P.
1975-12-01
Energy levels of states connected by a symmetry of the Hamiltonian normally should be degenerate. In self-consistent field theories, when only one of a pair of single-particle levels connected by a symmetry of the full Hamiltonian is occupied, the degeneracy is split and the unoccupied level often lies below the occupied one. Inversions of neutron-proton (charge) and time-reversal doublets in odd nuclei, charge doublets in even nuclei with a neutron excess, and spin-orbit doublets in spherical configurations with spin-unsaturated shells are examined. The origin of the level inversion is investigated, and the following explanation offered. Unoccupied single-particle levels, from a calculation in an A-particle system, should be interpreted as levels of the (A + 1)-particle system. When the symmetry-related level, occupied in the A-particle system, is also calculated in the (A + 1)-particle system it is degenerate with or lies lower than the other. That is, when both levels are calculated in the (A + 1)-particle system, they are not inverted. It is demonstrated that the usual prescription to occupy the lowest-lying orbitals should be modified to refer to the single-particle energies calculated in the (A + 1)- or the (A - 1)-particle system. This observation is shown to provide a justification for avoiding an oscillation of occupancy between symmetry-related partners in successive iterations leading to a self-consistency. It is pointed out that two degenerate determinants arise from occupying one or the other partner of an initially degenerate pair of levels and then iterating to self-consistency. The existence of the degenerate determinants indicates the need for introducing correlations, either by mixing the two configurations or by allowing additional symmetry-breaking (resulting in a more highly deformed non-degenerate configuration). 2 figures, 3 tables, 43 references
11. Functional approach for pairing in finite systems: How to define restoration of broken symmetries in Energy Density Functional theory?
International Nuclear Information System (INIS)
Hupin, G; Lacroix, D; Bender, M
2011-01-01
The Multi-Reference Energy Density Functional (MR-EDF) approach (also called configuration mixing or Generator Coordinate Method), which is commonly used to treat pairing in finite nuclei and project onto particle number, is re-analyzed. It is shown that, under certain conditions, the MR-EDF energy can be interpreted as a functional of the one-body density matrix of the projected state with good particle number. Based on this observation, we propose a new approach, called Symmetry-Conserving EDF (SC-EDF), where the breaking and restoration of symmetry are accounted for simultaneously. We show that such an approach is free from the pathologies recently observed in MR-EDF and can be used with large flexibility in the density dependence of the functional.
12. Are trinuclear superhalogens promising candidates for building blocks of novel magnetic materials? A theoretical prospect from combined broken-symmetry density functional theory and ab initio study.
Science.gov (United States)
Yu, Yang; Li, Chen; Yin, Bing; Li, Jian-Li; Huang, Yuan-He; Wen, Zhen-Yi; Jiang, Zhen-Yi
2013-08-07
The structures, relative stabilities, vertical electron detachment energies, and magnetic properties of a series of trinuclear clusters are explored via combined broken-symmetry density functional theory and ab initio study. Several exchange-correlation functionals are utilized to investigate the effects of different halogen elements and central atoms on the properties of the clusters. These clusters are shown to possess stronger superhalogen properties than previously reported dinuclear superhalogens. The calculated exchange coupling constants indicate the antiferromagnetic coupling between the transition metal ions. Spin density analysis demonstrates the importance of spin delocalization in determining the strengths of various couplings. Spin frustration is shown to occur in some of the trinuclear superhalogens. The coexistence of strong superhalogen properties and spin frustration implies the possibility of trinuclear superhalogens working as the building block of new materials of novel magnetic properties.
13. Broken symmetry in the mean field theory of the Ising spin glass: replica way and no replica way
International Nuclear Information System (INIS)
De Dominicis, C.
1983-06-01
We review the type of symmetry breaking involved in the solution discovered by Parisi and in the static derivation of the solution first introduced via dynamics by Sompolinsky. We then turn to a formulation of the problem due to Thouless, Anderson and Palmer (TAP) that puts forward a set of equations for the magnetization. A probability law for the magnetization is then built. We consider two cases: (i) a canonical distribution, which is shown to give identical results to the Hamiltonian formulation under a weak and physical assumption, and (ii) a white distribution characterized by two matrices and a response. We show what symmetry breaking is necessary to recover the Sompolinsky free energy. In section III we supplement replica indices in the Hamiltonian approach by ''time'' indices and show in particular that the analytic continuation involved in Sompolinsky's equilibrium derivation tries to mimic a translational symmetry breaking in ''time'' that incorporates Sompolinsky's ansatz of a long time-scale sequence. In section IV we apply the same treatment to the white-average approach and show that replicas can be discarded altogether and replaced by ''time''. Finally, we briefly discuss the attribution of distinct answers for the standard spin glass order parameter depending on the physical situation: equilibrium or non-equilibrium, associated with canonical or white (non-canonical) initial conditions and density matrices.
14. En route to Background Independence: Broken split-symmetry, and how to restore it with bi-metric average actions
International Nuclear Information System (INIS)
Becker, D.; Reuter, M.
2014-01-01
The most momentous requirement a quantum theory of gravity must satisfy is Background Independence, necessitating in particular an ab initio derivation of the arena all non-gravitational physics takes place in, namely spacetime. Using the background field technique, this requirement translates into the condition of an unbroken split-symmetry connecting the (quantized) metric fluctuations to the (classical) background metric. If the regularization scheme used violates split-symmetry during the quantization process it is mandatory to restore it in the end at the level of observable physics. In this paper we present a detailed investigation of split-symmetry breaking and restoration within the Effective Average Action (EAA) approach to Quantum Einstein Gravity (QEG) with a special emphasis on the Asymptotic Safety conjecture. In particular we demonstrate for the first time in a non-trivial setting that the two key requirements of Background Independence and Asymptotic Safety can be satisfied simultaneously. Carefully disentangling fluctuation and background fields, we employ a ‘bi-metric’ ansatz for the EAA and project the flow generated by its functional renormalization group equation on a truncated theory space spanned by two separate Einstein–Hilbert actions for the dynamical and the background metric, respectively. A new powerful method is used to derive the corresponding renormalization group (RG) equations for the Newton and cosmological constants, both in the dynamical and the background sector. We classify and analyze their solutions in detail, determine their fixed point structure, and identify an attractor mechanism which turns out to be instrumental in the split-symmetry restoration. We show that there exists a subset of RG trajectories which are both asymptotically safe and split-symmetry restoring: In the ultraviolet they emanate from a non-Gaussian fixed point, and in the infrared they lose all symmetry-violating contributions inflicted on them by the
15. Renormalization of the scalar field theory with spontaneously broken discrete symmetry without shifting the field vacuum expectation value
International Nuclear Information System (INIS)
Solin, J.
1988-01-01
The one-loop renormalization of the λφ⁴ theory with a spontaneous breaking of its discrete (reflection) symmetry is analyzed. It is explicitly shown that it is not necessary to forcefully eliminate the linear counterterm in the shifted field (usually accomplished by shifting the vacuum expectation value of the field) in order for the renormalized Lagrangian to remain formally invariant under the original discrete symmetry. It is further shown, using the normal-ordering procedure, that the renormalization carried out in the customary form completely wipes out the tadpole diagram contributions from the original Lagrangian. As a consequence, the same renormalized Lagrangian can also be obtained from the original bare Lagrangian which, however, has been normal-ordered and as such cannot give rise to the linear counterterm in the shifted field, since now the tadpole diagrams are absent altogether. These analyses should support the view that the vacuum expectation value of the field is of a group-theoretical origin rather than a field-theoretical origin, and as such should not change independently of the shifted field in the course of renormalization.
16. Effect of broken axial symmetry on the electric dipole strength and the collective enhancement of level densities in heavy nuclei
Science.gov (United States)
Grosse, E.; Junghans, A. R.; Wilson, J. N.
2017-11-01
The basic parameters for calculations of radiative neutron capture, photon strength functions and nuclear level densities near the neutron separation energy are determined based on experimental data without an ad hoc assumption of axial symmetry—at variance with previous analyses. Surprisingly few global fit parameters are needed in addition to information on nuclear deformation, taken from Hartree-Fock-Bogolyubov calculations with the Gogny force, and the generator coordinate method assures properly defined angular momentum. For a large number of nuclei the GDR shapes and the photon strength are described by the sum of three Lorentzians, extrapolated to low energies and normalised in accordance with the dipole sum rule. Level densities are influenced strongly by the significant collective enhancement based on the breaking of shape symmetry. The replacement of axial symmetry by the less stringent requirement of invariance against rotation by 180° leads to a novel prediction for radiative neutron capture. It compares well with recent compilations of average radiative widths and Maxwellian average cross sections for neutron capture by even target nuclei. An extension to higher spins promises a reliable prediction for various compound nuclear reactions also outside the valley of stability. Such predictions are of high importance for future nuclear energy systems and waste transmutation as well as for the understanding of the cosmic synthesis of heavy elements.
17. Raman Signatures of Broken Inversion Symmetry and In-Plane Anisotropy in Type-II Weyl Semimetal Candidate TaIrTe₄.
Science.gov (United States)
Liu, Yinan; Gu, Qiangqiang; Peng, Yu; Qi, Shaomian; Zhang, Na; Zhang, Yinong; Ma, Xiumei; Zhu, Rui; Tong, Lianming; Feng, Ji; Liu, Zheng; Chen, Jian-Hao
2018-05-07
The layered ternary compound TaIrTe4 is an important candidate to host the recently predicted type-II Weyl fermions. However, a direct and definitive proof of the absence of inversion symmetry in this material, a prerequisite for the existence of Weyl fermions, has so far remained elusive. Herein, an unambiguous identification of the broken inversion symmetry in TaIrTe4 is established using angle-resolved polarized Raman spectroscopy. Combined with high-resolution transmission electron microscopy, an efficient and nondestructive recipe to determine the exact crystallographic orientation of TaIrTe4 crystals is demonstrated. This technique could be extended to the fast identification and characterization of other type-II Weyl fermion candidates. A surprisingly strong in-plane electrical anisotropy in TaIrTe4 thin flakes is also revealed, up to 200% at 10 K, which is the strongest known electrical anisotropy for materials with comparable carrier density, including such good metals as copper and silver. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
18. Model for particle masses, flavor mixing, and CP violation, based on spontaneously broken discrete chiral symmetry as the origin of families
International Nuclear Information System (INIS)
1999-01-01
We construct extensions of the standard model based on the hypothesis that Higgs bosons also exhibit a family structure and that the flavor weak eigenstates in the three families are distinguished by a discrete Z₆ chiral symmetry that is spontaneously broken by the Higgs sector. We study in detail, at the tree level, models with three Higgs doublets and with six Higgs doublets comprising two weakly coupled sets of three. In a leading approximation of S₃ cyclic permutation symmetry the three-Higgs-doublet model gives a "democratic" mass matrix of rank 1, while the six-Higgs-doublet model gives either a rank-1 mass matrix or, in the case when it spontaneously violates CP, a rank-2 mass matrix corresponding to nonzero second-family masses. In both models, the CKM matrix is exactly unity in the leading approximation. Allowing small explicit violations of cyclic permutation symmetry generates small first-family masses in the six-Higgs-doublet model, and first- and second-family masses in the three-Higgs-doublet model, and gives a nontrivial CKM matrix in which the mixings of the first- and second-family quarks are naturally larger than mixings involving the third family. Complete numerical fits are given for both models, flavor-changing neutral current constraints are discussed in detail, and the issues of unification of couplings and neutrino masses are addressed. On a technical level, our analysis uses the theory of circulant and retrocirculant matrices, the relevant parts of which are reviewed. copyright 1998 The American Physical Society
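The "democratic" rank-1 structure mentioned above is easy to verify numerically. The following is a generic illustration (not code from the paper): the S₃-symmetric all-ones matrix has a single nonzero eigenvalue, so at leading order only the third family acquires mass.

```python
# S3-symmetric "democratic" mass matrix: every entry equal to m/3.
m = 3.0  # overall mass scale (arbitrary units, chosen for illustration)
M = [[m / 3.0] * 3 for _ in range(3)]

def matvec(A, v):
    """Multiply matrix A by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# (1,1,1) is an eigenvector with eigenvalue m (the heavy third family):
print(matvec(M, [1, 1, 1]))   # → [3.0, 3.0, 3.0]
# Any difference vector is annihilated (two massless families, so rank 1):
print(matvec(M, [1, -1, 0]))  # → [0.0, 0.0, 0.0]
```
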
19. Sixty years of broken symmetries in quantum physics (from the Bogoliubov theory of superfluidity to the Standard Model)
International Nuclear Information System (INIS)
Shirkov, Dmitrii V
2009-01-01
This is a retrospective historical review of the ideas that led to the concept of the spontaneous symmetry breaking (SSB), the issue that has been implemented in quantum field theory in the form of the Higgs mechanism. The key stages covered include: the Bogoliubov microscopic theory of superfluidity (1946); the Bardeen-Cooper-Schrieffer-Bogoliubov microscopic theory of superconductivity (1957); superconductivity as superfluidity of Cooper pairs (Bogoliubov, 1958); the extension of the SSB concept to simple quantum field models (early 1960s); triumph of the Higgs model in electroweak theory (early 1980s). The role and status of the Higgs mechanism in the current Standard Model are discussed. (oral issue of the journal 'uspekhi fizicheskikh nauk')
20. Spin-flip dynamics of the Curie-Weiss model: Loss of Gibbsianness with possibly broken symmetry.
CERN Document Server
Külske, C
2005-01-01
We study the conditional probabilities of the Curie-Weiss Ising model in vanishing external field under a symmetric independent stochastic spin-flip dynamics and discuss their set of bad configurations (points of discontinuity). We exhibit a complete analysis of the transition between Gibbsian and non-Gibbsian behavior as a function of time, extending the results for the corresponding lattice model, where only partial answers can be obtained. For initial inverse temperature $\beta \leq 1$, we prove that the time-evolved measure is always Gibbsian. For $\beta > \frac{3}{2}$, we observe the new phenomenon of symmetry-breaking of bad configurations: the time-evolved measure loses its Gibbsian character at a sharp transition time, and bad configurations with non-zero spin-average appear. These bad configurations merge into a neutral configuration at a later transition time, while the measure stays non-Gibbs. In our proof we give a detailed analysis of the phase diagram of a Curie-Weiss random field Ising model with possi...
1. Quantum master equation method based on the broken-symmetry time-dependent density functional theory: application to dynamic polarizability of open-shell molecular systems.
Science.gov (United States)
Kishi, Ryohei; Nakano, Masayoshi
2011-04-21
A novel method for the calculation of the dynamic polarizability (α) of open-shell molecular systems is developed based on the quantum master equation combined with the broken-symmetry (BS) time-dependent density functional theory within the Tamm-Dancoff approximation, referred to as the BS-DFTQME method. We investigate the dynamic α density distribution obtained from BS-DFTQME calculations in order to analyze the spatial contributions of electrons to the field-induced polarization and clarify the contributions of the frontier orbital pair to α and its density. To demonstrate the performance of this method, we examine the real part of the dynamic α of singlet 1,3-dipole systems having a variety of diradical characters (y). The frequency dispersion of α, in particular in the resonant region, is shown to depend strongly on the exchange-correlation functional as well as on the diradical character. Under sufficiently off-resonant conditions, the dynamic α is found to decrease with increasing y and/or the fraction of Hartree-Fock exchange in the exchange-correlation functional, which enhances the spin polarization, due to the decrease in the delocalization effects of π-diradical electrons in the frontier orbital pair. The BS-DFTQME method with the BHandHLYP exchange-correlation functional also turns out to semiquantitatively reproduce the α spectra calculated by a strongly correlated ab initio molecular orbital method, i.e., the spin-unrestricted coupled-cluster singles and doubles.
2. Broken bone
Science.gov (United States)
... following steps to reduce your risk of a broken bone: Wear protective ... pads. Create a safe home for young children. Place a gate at stairways ...
3. Broken Arm
Science.gov (United States)
... of falling — including football, soccer, gymnastics, skiing and skateboarding — also increases the risk of a broken arm. ... for high-risk activities, such as in-line skating, snowboarding, rugby and football. Don't smoke. Smoking ...
4. Some symmetries in nuclei
International Nuclear Information System (INIS)
Henley, E.M.
1981-09-01
Internal and space-time symmetries are discussed in this group of lectures. The first of the lectures deals with an internal symmetry, or rather two related symmetries called charge independence and charge symmetry. The next two discuss space-time symmetries which also hold approximately, but are broken only by the weak forces; that is, these symmetries hold for both the hadronic and electromagnetic forces
5. Spontaneously broken mass
International Nuclear Information System (INIS)
Endlich, Solomon; Nicolis, Alberto; Penco, Riccardo
2015-01-01
The Galilei group involves mass as a central charge. We show that the associated superselection rule is incompatible with the observed phenomenology of superfluid helium 4: this is recovered only under the assumption that mass is spontaneously broken. This remark is somewhat immaterial for the real world, where the correct space-time symmetries are encoded by the Poincaré group, which has no central charge. Yet it provides an explicit example of how superselection rules can be experimentally tested. We elaborate on what conditions must be met for our ideas to be generalizable to the relativistic case of the integer/half-integer angular momentum superselection rule.
6. High efficiency all-optical plasmonic diode based on a nonlinear side-coupled waveguide-cavity structure with broken symmetry
Science.gov (United States)
Liang, Hong-Qin; Liu, Bin; Hu, Jin-Feng; He, Xing-Dao
2018-05-01
An all-optical plasmonic diode, comprising a metal-insulator-metal waveguide coupled with a stub cavity, is proposed based on a nonlinear Fano structure. The key technique used is to break structural spatial symmetry by a simple reflector layer in the waveguide. The spatial asymmetry of the structure gives rise to the nonreciprocity of coupling efficiencies between the Fano cavity and waveguides on both sides of the reflector layer, leading to a nonreciprocal nonlinear response. Transmission properties and dynamic responses are numerically simulated and investigated by the nonlinear finite-difference time-domain method. In the proposed structure, high-efficiency nonreciprocal transmission can be achieved with a low power threshold and an ultrafast response time (subpicosecond level). A high maximum transmittance of 89.3% and an ultra-high transmission contrast ratio of 99.6% can also be obtained. The device can be flexibly adjusted for working wavebands by altering the stub cavity length.
7. Time-dependent broken-symmetry density functional theory simulation of the optical response of entangled paramagnetic defects: Color centers in lithium fluoride
Science.gov (United States)
Janesko, Benjamin G.
2018-02-01
Parameter-free atomistic simulations of entangled solid-state paramagnetic defects may aid in the rational design of devices for quantum information science. This work applies time-dependent density functional theory (TDDFT) embedded-cluster simulations to a prototype entangled-defect system, namely two adjacent singlet-coupled F color centers in lithium fluoride. TDDFT calculations accurately reproduce the experimental visible absorption of both isolated and coupled F centers. The most accurate results are obtained by combining spin symmetry breaking to simulate strong correlation, a large fraction of exact (Hartree-Fock-like) exchange to minimize the defect electrons' self-interaction error, and a standard semilocal approximation for dynamical correlations between the defect electrons and the surrounding ionic lattice. These results motivate application of two-reference correlated ab initio approximations to the M-center, and application of TDDFT in parameter-free simulations of more complex entangled paramagnetic defect architectures.
8. Symmetries of Chimera States
Science.gov (United States)
Kemeth, Felix P.; Haugland, Sindre W.; Krischer, Katharina
2018-05-01
Symmetry-broken states arise naturally in oscillatory networks. In this Letter, we investigate chaotic attractors in an ensemble of four mean-coupled Stuart-Landau oscillators with two oscillators being synchronized. We report that these states with partially broken symmetry, so-called chimera states, have different setwise symmetries in the incoherent oscillators, and in particular, some are and some are not invariant under a permutation symmetry on average. This allows for a classification of different chimera states in small networks. We conclude our report with a discussion of related states in spatially extended systems, which seem to inherit the symmetry properties of their counterparts in small networks.
9. Spontaneously broken abelian gauge invariant supersymmetric model
International Nuclear Information System (INIS)
Mainland, G.B.; Tanaka, K.
A model is presented that is invariant under an Abelian gauge transformation and a modified supersymmetry transformation. This model is broken spontaneously, and the interplay between symmetry breaking, Goldstone particles, and mass breaking is studied. In the present model, spontaneously breaking the Abelian symmetry of the vacuum restores the invariance of the vacuum under a modified supersymmetry transformation. (U.S.)
10. Enhanced interfacial Dzyaloshinskii-Moriya interaction and isolated skyrmions in the inversion-symmetry-broken Ru/Co/W/Ru films
Science.gov (United States)
Samardak, Alexander; Kolesnikov, Alexander; Stebliy, Maksim; Chebotkevich, Ludmila; Sadovnikov, Alexandr; Nikitov, Sergei; Talapatra, Abhishek; Mohanty, Jyoti; Ognev, Alexey
2018-05-01
An enhancement of the spin-orbit effects arising at an interface between a ferromagnet (FM) and a heavy metal (HM) is possible through strong breaking of the structural inversion symmetry in layered films. Here, we show that the introduction of an ultrathin W interlayer between Co and Ru in Ru/Co/Ru films makes it possible to preserve perpendicular magnetic anisotropy (PMA) and simultaneously induce a large interfacial Dzyaloshinskii-Moriya interaction (iDMI). The study of spin-wave propagation in the Damon-Eshbach geometry by Brillouin light scattering spectroscopy reveals a drastic increase in the iDMI value with increasing W thickness (tW). The maximum iDMI of -3.1 erg/cm² is observed for tW = 0.24 nm, which is 10 times larger than for the quasi-symmetrical Ru/Co/Ru films. We demonstrate the evidence of the spontaneous field-driven nucleation of isolated skyrmions supported by micromagnetic simulations. Magnetic force microscopy measurements reveal the existence of sub-100-nm skyrmions in zero magnetic field. The ability to simultaneously control the strength of PMA and iDMI in quasi-symmetrical HM/FM/HM trilayer systems through interface-engineered inversion asymmetry at the nanoscale excites new fundamental and practical interest in ultrathin ferromagnets, which are a potential host for stable magnetic skyrmions.
11. Current induced magnetic flux response in frustrated three-band superconductors as a bulk probe of broken time reversal symmetry (BTRS) ground states
Energy Technology Data Exchange (ETDEWEB)
Yerin, Yuriy; Omelyanchouk, Alexander [Verkin Inst. for Low Temperature Physics and Engineering, 61103 Kharkiv (Ukraine); Drechsler, Stefan-Ludwig; Brink, Jeroen van den; Efremov, Dmitriy [Inst. for Theoretical Solid State Physics at the Leibniz Inst. for Solid State and Materials Research, IFW-Dresden, D-01171 Dresden (Germany)]
2016-07-01
Within the Ginzburg-Landau formalism we provide a classification of all possible ground states (GS) of a three-band superconductor (3BSC), where either frustrated states with BTRS or a single non-BTRS GS with unconventional/conventional s-wave symmetry, respectively, exist. The necessary condition for a BTRS GS in general cannot be reduced to a "−" sign of the product of all interband couplings (IBC), which is valid only in the case of 3 equivalent bands with equal repulsive IBC. That case corresponds to a maximal IBC frustration. We show that with increasing diversity of the parameter space this frustration is reduced and the regions of possible BTRS GS start to shrink. We track possible evolutions of a BTRS GS of a 3BSC-based doubly-connected system in an external magnetic field. Depending on its parameters, a magnetic flux can induce various current density leaps, connected with adiabatic or non-adiabatic transitions from BTRS to non-BTRS states and vice versa. The current-induced magnetic flux response of samples with a doubly-connected geometry, e.g. a thin tube, provides a suitable experimental tool for the detection of BTRS GS.
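For orientation, the Ginzburg-Landau free energy of a three-band superconductor with interband Josephson couplings can be sketched in its generic textbook form (this sketch is not taken from the record above):

```latex
% Three order parameters psi_i = |psi_i| e^{i\theta_i}, one per band:
F = \sum_{i=1}^{3}\Big( \alpha_i |\psi_i|^2 + \tfrac{\beta_i}{2}\,|\psi_i|^4 \Big)
  + \sum_{i<j} \gamma_{ij}\,\big(\psi_i^{*}\psi_j + \psi_j^{*}\psi_i\big) .
% Each coupling term contributes 2\gamma_{ij}|\psi_i||\psi_j|\cos(\theta_i-\theta_j);
% when the three couplings cannot all be minimized simultaneously, the phases
% are frustrated and complex (time-reversal-symmetry-breaking) ground states
% can appear. The sign criterion discussed in the abstract involves the
% product \gamma_{12}\gamma_{23}\gamma_{31}.
```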
12. Gauge symmetry breaking
International Nuclear Information System (INIS)
Weinberg, S.
1976-01-01
The problem of how gauge symmetries of the weak interactions get broken is discussed: some reasons why such a hierarchy of gauge symmetry breaking is needed, the reason gauge hierarchies do not seem to arise in theories of a given and related type, and the implications of theories with dynamical symmetry breaking, which can exhibit a gauge hierarchy.
13. Symmetry-broken states in a system of interacting bosons on a two-leg ladder with a uniform Abelian gauge field
Science.gov (United States)
Greschner, S.; Piraud, M.; Heidrich-Meisner, F.; McCulloch, I. P.; Schollwöck, U.; Vekua, T.
2016-12-01
We study the quantum phases of bosons with repulsive contact interactions on a two-leg ladder in the presence of a uniform Abelian gauge field. The model realizes many interesting states, including Meissner phases, vortex fluids, vortex lattices, charge density waves, and the biased-ladder phase. Our work focuses on the subset of these states that breaks a discrete symmetry. We use density matrix renormalization group simulations to demonstrate the existence of three vortex-lattice states at different vortex densities and we characterize the phase transitions from these phases into neighboring states. Furthermore, we provide an intuitive explanation of the chiral-current reversal effect that is tied to some of these vortex lattices. We also study a charge-density-wave state that exists at 1/4 particle filling at large interaction strengths and flux values close to half a flux quantum. By changing the system parameters, this state can transition into a completely gapped vortex-lattice Mott-insulating state. We elucidate the stability of these phases against nearest-neighbor interactions on the rungs of the ladder relevant for experimental realizations with a synthetic lattice dimension. A charge-density-wave state at 1/3 particle filling can be stabilized for flux values close to half a flux quantum and for very strong on-site interactions in the presence of strong repulsion on the rungs. Finally, we analytically describe the emergence of these phases in the low-density regime, and, in particular, we obtain the boundaries of the biased-ladder phase, i.e., the phase that features a density imbalance between the legs. We make contact with recent quantum-gas experiments that realized related models and discuss signatures of these quantum states in experimentally accessible observables.
14. Broken Bones (For Parents)
Science.gov (United States)
... bone fragments in place. When Will a Broken Bone Heal? Fractures heal at different rates, depending upon ...
15. Some General Thoughts about Broken Symmetry.
Science.gov (United States)
1981-01-21
Higgs boson excitations, the long-range elastic-like forces (such as Suhl-Nakamura interactions in magnets) but most important of all the property I call...parameter is also a constant of the motion, which has important consequences for the nature of the relevant Goldstone bosons. It is too bad that in
16. Broken Bones (For Kids)
Science.gov (United States)
... sticking through the skin. What Happens When a Bone Breaks? It hurts to break a bone! It's ...
17. Sequential flavor symmetry breaking
International Nuclear Information System (INIS)
Feldmann, Thorsten; Jung, Martin; Mannel, Thomas
2009-01-01
The gauge sector of the standard model exhibits a flavor symmetry that allows for independent unitary transformations of the fermion multiplets. In the standard model the flavor symmetry is broken by the Yukawa couplings to the Higgs boson, and the resulting fermion masses and mixing angles show a pronounced hierarchy. In this work we connect the observed hierarchy to a sequence of intermediate effective theories, where the flavor symmetries are broken in a stepwise fashion by vacuum expectation values of suitably constructed spurion fields. We identify the possible scenarios in the quark sector and discuss some implications of this approach.
18. Sequential flavor symmetry breaking
Science.gov (United States)
Feldmann, Thorsten; Jung, Martin; Mannel, Thomas
2009-08-01
The gauge sector of the standard model exhibits a flavor symmetry that allows for independent unitary transformations of the fermion multiplets. In the standard model the flavor symmetry is broken by the Yukawa couplings to the Higgs boson, and the resulting fermion masses and mixing angles show a pronounced hierarchy. In this work we connect the observed hierarchy to a sequence of intermediate effective theories, where the flavor symmetries are broken in a stepwise fashion by vacuum expectation values of suitably constructed spurion fields. We identify the possible scenarios in the quark sector and discuss some implications of this approach.
19. Parastatistics and gauge symmetries
International Nuclear Information System (INIS)
Govorkov, A.B.
1982-01-01
A possible formulation of gauge symmetries in the Green parafield theory is analysed, and the SO(3) gauge symmetry is shown to have a distinct status. The Greenberg paraquark hypothesis turns out not to be equivalent to the hypothesis of quark colour SU(3)_c symmetry. Specific features of the gauge SO(3) symmetry are discussed, and a possible scheme where it is an exact subgroup of the broken SU(3)_c symmetry is proposed. The direct formulation of the gauge principle for the parafield represented by quaternions is also discussed.
20. Broken Scale Invariance and Anomalous Dimensions
Science.gov (United States)
Wilson, K. G.
1970-05-01
Mack and Kastrup have proposed that broken scale invariance is a symmetry of strong interactions. There is evidence from the Thirring model and perturbation theory that the dimensions of fields defined by scale transformations will be changed by the interaction from their canonical values. We review these ideas and their consequences for strong interactions.
1. Symmetries in nuclei
International Nuclear Information System (INIS)
Arima, A.
2003-01-01
(1) There are symmetries in nature, and the concept of symmetry has long been used in art and architecture. Symmetry is highly valued in European culture. In China, symmetry is broken in paintings but valued in architecture. In Japan, however, symmetry has been broken everywhere. The serious and interesting question is why these differences happen. (2) In this lecture, I reviewed from the very beginning the importance of rotational symmetry in quantum mechanics. I am sorry to be too fundamental for specialists of nuclear physics. But for people who do not use these theories, I think that you could understand the mathematical aspects of quantum mechanics and the relation between angular momentum and rotational symmetry. (3) To the specialists of nuclear physics, I talked about my ideas as follows: the dynamical treatment of collective motions in nuclei by the IBM, especially the meaning of the degeneracy observed in the rotational bands on top of the γ and β vibrations, and the origin of pseudo-spin symmetry. Namely, if there is a symmetry, a degeneracy occurs. Conversely, if there is a degeneracy, there must be a symmetry. I discussed some details of the observed evidence, and this correspondence is my strong belief in physics. (author)
2. Generalized global symmetries
International Nuclear Information System (INIS)
Gaiotto, Davide; Kapustin, Anton; Seiberg, Nathan; Willett, Brian
2015-01-01
A q-form global symmetry is a global symmetry for which the charged operators are of space-time dimension q; e.g. Wilson lines, surface defects, etc., and the charged excitations have q spatial dimensions; e.g. strings, membranes, etc. Many of the properties of ordinary global symmetries (q=0) apply here. They lead to Ward identities and hence to selection rules on amplitudes. Such global symmetries can be coupled to classical background fields and they can be gauged by summing over these classical fields. These generalized global symmetries can be spontaneously broken (either completely or to a subgroup). They can also have ’t Hooft anomalies, which prevent us from gauging them, but lead to ’t Hooft anomaly matching conditions. Such anomalies can also lead to anomaly inflow on various defects and exotic Symmetry Protected Topological phases. Our analysis of these symmetries gives a new unified perspective of many known phenomena and uncovers new results.
3. Broken or dislocated jaw
Science.gov (United States)
... broken or dislocated jaw requires prompt medical attention. Emergency symptoms include difficulty breathing or heavy bleeding. ... safety equipment, such as a helmet when playing football, or using ... can prevent or minimize some injuries to the face or jaw.
4. "Natural" left-right symmetry
International Nuclear Information System (INIS)
Mohapatra, R.N.; Pati, J.C.
1975-01-01
It is remarked that left-right symmetry of the starting gauge interactions is retained as a "natural" symmetry if it is broken in no way except possibly by mass terms in the Lagrangian. The implications of this result for the unification of coupling constants and for parity nonconservation at low and high energies are stressed.
5. Neutrino masses and family symmetry
International Nuclear Information System (INIS)
Grinstein, B.; Preskill, J.; Wise, M.B.
1985-01-01
Neutrino masses in the 100 eV-1 MeV range are permitted if there is a spontaneously broken global family symmetry that allows the heavy neutrinos to decay by Goldstone boson emission with a cosmologically acceptable lifetime. The family symmetry may be either abelian or nonabelian; we present models illustrating both possibilities. If the family symmetry is nonabelian, then the decay τ → μ + Goldstone boson or τ → e + Goldstone boson may have an observable rate. (orig.)
6. Broken superfluid in dense quark matter
Energy Technology Data Exchange (ETDEWEB)
Parganlija, Denis; Schmitt, Andreas [Institut fuer Theoretische Physik, Technische Universitaet Wien, 1040 Vienna (Austria); Alford, Mark [Department of Physics, Washington University St Louis, MO, 63130 (United States)
2014-07-01
Quark matter at high densities is a superfluid. Properties of the superfluid become highly non-trivial if the effects of the strange-quark mass and the weak interactions are considered. These properties are relevant for a microscopic description of compact stars. We discuss the effect of a (small) explicitly symmetry-breaking term on the properties of a zero-temperature superfluid in a relativistic φ⁴ theory. If the U(1) symmetry is exact, chemical potential and superflow can be equivalently introduced either via (1) a background gauge field or (2) a topologically nontrivial mode. However, in the case of the explicitly broken symmetry, we demonstrate that scenarios (1) and (2) lead to quantitatively different results for the mass of the pseudo-Goldstone mode and the critical velocity for superfluidity.
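The two introductions of the chemical potential mentioned above can be sketched for a generic relativistic U(1) φ⁴ model (a standard textbook form, not taken from the record):

```latex
% U(1)-symmetric complex scalar:
\mathcal{L} = |\partial_\mu\varphi|^2 - m^2|\varphi|^2 - \lambda|\varphi|^4 .
% (1) chemical potential as a background temporal gauge field:
\partial_\mu \to \partial_\mu - i\mu\,\delta_{\mu 0} ,
% (2) equivalently, as a topologically nontrivial (time-winding) phase of the condensate:
\varphi(x) = \rho\, e^{i\mu t} .
% With an explicit symmetry-breaking term, e.g. \epsilon\,(\varphi^2 + \varphi^{*2}),
% the U(1) phase is no longer a flat direction and the two prescriptions cease
% to be equivalent, as the abstract states.
```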
7. Global U(1)_Y ⊗ BRST symmetry and the LSS theorem: Ward-Takahashi identities governing Green's functions, on-shell T-matrix elements, and the effective potential in the scalar sector of the spontaneously broken extended Abelian Higgs model
Science.gov (United States)
Lynn, Bryan W.; Starkman, Glenn D.
2017-09-01
The weak-scale U(1)_Y Abelian Higgs model (AHM) is the simplest spontaneous symmetry breaking (SSB) gauge theory: a scalar φ = (1/√2)(H + iπ) ≡ (1/√2) H̃ e^{iπ̃/⟨H⟩} and a vector A_μ. The extended AHM (E-AHM) adds certain heavy (M_Φ², M_ψ² ~ M_Heavy² ≫ ⟨H⟩² ~ m_Weak²) spin S = 0 scalars Φ and S = 1/2 fermions ψ. In Lorenz gauge, ∂^μ A_μ = 0, the SSB AHM (and E-AHM) has a global U(1)_Y conserved physical current, but no conserved charge. As shown by T. W. B. Kibble, the Goldstone theorem applies, so π̃ is a massless derivatively coupled Nambu-Goldstone boson (NGB). Proof of all-loop-orders renormalizability and unitarity for the SSB case is tricky because the Becchi-Rouet-Stora-Tyutin (BRST)-invariant Lagrangian is not U(1)_Y symmetric. Nevertheless, Slavnov-Taylor identities guarantee that on-shell T-matrix elements of physical states A_μ, φ, Φ, ψ (but not ghosts ω, η̄) are independent of anomaly-free local U(1)_Y gauge transformations. We observe here that they are therefore also independent of the usual anomaly-free U(1)_Y global/rigid transformations. It follows that the associated global current, which is classically conserved only up to gauge-fixing terms, is exactly conserved for amplitudes of physical states in the AHM and E-AHM. We identify corresponding "undeformed" [i.e. with full global U(1)_Y symmetry] Ward-Takahashi identities (WTI). The proof of renormalizability and unitarity, which relies on BRST invariance, is undisturbed. In Lorenz gauge, two towers of "1-soft-pion" SSB global WTI govern the φ-sector, and represent a new global U(1)_Y ⊗ BRST symmetry not of the Lagrangian but of the physics. The first gives relations among off-shell Green's functions, yielding powerful constraints on the all-loop-orders φ-sector SSB E-AHM low-energy effective Lagrangian and an additional global shift symmetry for the NGB: π̃ → π̃ + ⟨H⟩θ. A second tower, governing on-shell T-matrix elements, replaces the old Adler
8. Fractures (Broken Bones): First Aid
Science.gov (United States)
Fractures (broken bones): First aid. By Mayo Clinic Staff. A fracture is a ... 10, 2018. Original article: http://www.mayoclinic.org/first-aid/first-aid-fractures/basics/ART-20056641 . Mayo Clinic ...
9. Mass splittings within composite Goldstone supermultiplets from broken supersymmetry
International Nuclear Information System (INIS)
Clark, T.E.; Love, S.T.
1985-01-01
The supersymmetric (SUSY) Dashen formulas are modified to include effects of softly broken supersymmetry and are used to compute the mass splittings and differences in decay constants among the various components of a Goldstone supermultiplet. The general results are applied to chiral-symmetry breaking in two-flavor SUSY QCD
10. Topics in broken supersymmetry
International Nuclear Information System (INIS)
Lee, I.H.
1984-01-01
Studies on two topics in the framework of broken supersymmetry are presented. Chapter I is a brief introduction in which the motivation and the background of this work are discussed. In Chapter II, the author studies the decay K⁺ → π⁺γγ in models with spontaneous supersymmetry breaking and finds that it is generally suppressed relative to the decay K⁺ → π⁺ν̄ν of the conventional model, except possibly for a class of models where the scalar quark masses are generated by radiative corrections from a much larger supersymmetry-breaking scale. For a small range of scalar quark and photino mass parameters, the cascade decay process K⁺ → π⁺π⁰ → π⁺γγ will become dominant over the ν̄ν mode. The author also comments on the possibility of probing the neutrino mass through the K⁺ → π⁺π⁰ → π⁺ν̄ν cascade decay. Chapter III is concerned with the implications of explicit lepton-number-violating soft operators in a general low-energy effective theory with softly broken supersymmetry
11. Spontaneously broken realization of supersymmetry in supergravity
International Nuclear Information System (INIS)
Ferrara, S.; Trieste Univ.
1979-01-01
It is shown that if supersymmetry is relevant for the physical world it must be broken either spontaneously or explicitly. Renormalizability and simplicity favour a spontaneous realization of the symmetry breaking. When supersymmetry is spontaneously broken, the spinorial analogue of the Goldstone phenomenon occurs: massless particles arise in the spectrum of the theory which carry the same quantum numbers as the broken generators Q^i; they are N spin-1/2 Goldstone fermions (goldstinos). These particles may be eaten by spin-3/2 gauge particles (gravitinos) when supersymmetry is gauged. It is shown that both the Higgs effect and the super-Higgs effect have taken place: 8 of the spin-1/2 particles have been eaten by the spin-3/2 particles, and 24 of the 70 scalars have been eaten by 24 of the 28 vector particles to provide them with mass. The conclusion is that the number of mass relations is, in general, equal to r-1, where r is the rank of the algebra which generates the spectrum
12. Symmetry and symmetry breaking
International Nuclear Information System (INIS)
Balian, R.; Lambert, D.; Brack, A.; Lachieze-Rey, M.; Emery, E.; Cohen-Tannoudji, G.; Sacquin, Y.
1999-01-01
The symmetry concept is a powerful tool for our understanding of the world. It allows a reduction of the volume of information needed to apprehend a subject thoroughly. Moreover, this concept does not belong to a particular field; it is involved in the exact sciences but also in artistic matters. Living beings are characterized by a particular asymmetry: chiral asymmetry. Although this asymmetry is visible in whole organisms, it seems to originate from certain molecules that life always produces in one chirality. The weak interaction also exhibits chiral asymmetry. The mass of particles comes from the breaking of a fundamental symmetry, and the void could be defined as the medium showing as many symmetries as possible. The texts put together in this book show to a great extent how symmetry goes far beyond purely geometrical considerations. Different aspects of symmetry ideas are considered in the following fields: the states of matter, mathematics, biology, the laws of Nature, quantum physics, the universe, and the art of music. (A.C.)
13. Effective lagrangian description on discrete gauge symmetries
International Nuclear Information System (INIS)
Banks, T.
1989-01-01
We exhibit a simple low-energy lagrangian which describes a system with a discrete remnant of a spontaneously broken continuous gauge symmetry. The lagrangian gives a simple description of the effects ascribed to such systems by Krauss and Wilczek: black holes carry discrete hair and interact with cosmic strings, and wormholes cannot lead to violation of discrete gauge symmetries. (orig.)
14. Pauli-Guersey symmetry in gauge theories
International Nuclear Information System (INIS)
Stern, J.
1983-05-01
Gauge theories with massless or massive fermions in a self-contragredient representation exhibit global symmetries of Pauli-Guersey type. Some of them are broken spontaneously, leading to difermion Goldstone bosons. An example of a boson version of the Pauli-Guersey symmetry is provided by the Weinberg-Salam model in the limit θ_W → 0
15. Dihedral flavor symmetries
Energy Technology Data Exchange (ETDEWEB)
Blum, Alexander Simon
2009-06-10
This thesis deals with the possibility of describing the flavor sector of the Standard Model of Particle Physics (with neutrino masses), that is the fermion masses and mixing matrices, with a discrete, non-abelian flavor symmetry. In particular, mass independent textures are considered, where one or several of the mixing angles are determined by group theory alone and are independent of the fermion masses. To this end a large class of discrete symmetries, the dihedral groups, is systematically analyzed. Mass independent textures originating from such symmetries are described and it is shown that such structures arise naturally from the minimization of scalar potentials, where the scalars are gauge singlet flavons transforming non-trivially only under the flavor group. Two models are constructed from this input, one describing leptons, based on the group D{sub 4}, the other describing quarks and employing the symmetry D{sub 14}. In the latter model it is the quark mixing matrix element V{sub ud} - basically the Cabibbo angle - which is at leading order predicted from group theory. Finally, discrete flavor groups are discussed as subgroups of a continuous gauge symmetry and it is shown that this implies that the original gauge symmetry is broken by fairly large representations. (orig.)
16. Dihedral flavor symmetries
International Nuclear Information System (INIS)
Blum, Alexander Simon
2009-01-01
This thesis deals with the possibility of describing the flavor sector of the Standard Model of Particle Physics (with neutrino masses), that is the fermion masses and mixing matrices, with a discrete, non-abelian flavor symmetry. In particular, mass independent textures are considered, where one or several of the mixing angles are determined by group theory alone and are independent of the fermion masses. To this end a large class of discrete symmetries, the dihedral groups, is systematically analyzed. Mass independent textures originating from such symmetries are described and it is shown that such structures arise naturally from the minimization of scalar potentials, where the scalars are gauge singlet flavons transforming non-trivially only under the flavor group. Two models are constructed from this input, one describing leptons, based on the group D_4, the other describing quarks and employing the symmetry D_14. In the latter model it is the quark mixing matrix element V_ud - basically the Cabibbo angle - which is at leading order predicted from group theory. Finally, discrete flavor groups are discussed as subgroups of a continuous gauge symmetry and it is shown that this implies that the original gauge symmetry is broken by fairly large representations. (orig.)
17. Gauge origin of discrete flavor symmetries in heterotic orbifolds
Directory of Open Access Journals (Sweden)
Florian Beye
2014-09-01
We show that non-Abelian discrete symmetries in orbifold string models have a gauge origin. This can be understood when looking at the vicinity of a symmetry enhanced point in moduli space. At such an enhanced point, orbifold fixed points are characterized by an enhanced gauge symmetry. This gauge symmetry can be broken to a discrete subgroup by a nontrivial vacuum expectation value of the Kähler modulus T. Using this mechanism it is shown that the Δ(54) non-Abelian discrete symmetry group originates from an SU(3) gauge symmetry, whereas the D4 symmetry group is obtained from an SU(2) gauge symmetry.
18. Axions from chiral family symmetry
International Nuclear Information System (INIS)
Chang, D.; Pal, P.B.; Maryland Univ., College Park; Senjanovic, G.
1985-01-01
We investigate the possibility that family symmetry, G_F, is a spontaneously broken chiral global symmetry. We classify the interesting cases in which family symmetry can result in an automatic Peccei-Quinn symmetry U(1)_PQ and thus provide a solution to the strong CP problem. The result disfavors having two or four families. For more than four families, U(1)_PQ is in general automatic. In the case of three families, a unique Higgs sector allows U(1)_PQ in the simplest case of G_F=[SU(3)]^3. Cosmological considerations also put strong constraints on the number of families. For G_F=[SU(N)]^3 cosmology singles out the three-family (N=3) case as a unique solution if there are three light neutrinos. Possible implications of the decoupling theorem as applied to family symmetry breaking are also discussed. (orig.)
19. Large leptonic Dirac CP phase from broken democracy with random perturbations
Science.gov (United States)
Ge, Shao-Feng; Kusenko, Alexander; Yanagida, Tsutomu T.
2018-06-01
A large value of the leptonic Dirac CP phase can arise from broken democracy, where the mass matrices are democratic up to small random perturbations. Such perturbations are a natural consequence of broken residual S3 symmetries that dictate the democratic mass matrices at leading order. With random perturbations, the leptonic Dirac CP phase has a higher probability of attaining a value around ±π/2. Compared with the anarchy model, broken democracy benefits from residual S3 symmetries and can produce much better, realistic predictions for the mass hierarchy, mixing angles, and Dirac CP phase in both quark and lepton sectors. Our approach provides a general framework for a class of models in which a residual symmetry determines the general features at leading order, and where, in the absence of other fundamental principles, the symmetry breaking appears in the form of random perturbations.
20. Hausdorff dimensions for sets with broken scaling symmetry
International Nuclear Information System (INIS)
Umberger, D.K.; Mayer-Kress, G.; Jen, E.
1985-01-01
Based on Hausdorff's original approach to fractional dimensions, we study systems which are not sufficiently characterized by their ''fractal'' or scaling dimension. We construct informative examples of such sets and relate them to sets observed in the context of dynamical systems. 18 refs., 5 figs
1. The geometry of lie algebras and broken SO(6) symmetries
International Nuclear Information System (INIS)
Lawrence, T.R.
2001-10-01
Non-linear realisations of the groups SU(2), SO(1,4) and SO(2,4) are analysed, described by the coset spaces SU(2)/U(1), SO(1,4)/SO(1,3) and SO(2,4)/SO(1,3) x SO(1,1). The Lie algebras of certain special unitary and special orthogonal groups are studied and their projection operators are determined in order to facilitate the above analyses, in particular that of SO(2,4)/SO(1,3) x SO(1,1). The analysis consists of determining the transformation properties of the Goldstone bosons, constructing the most general possible Lagrangian for the realisations and finding the metric of the coset space. (author)
2. Hadrons and broken symmetries with WASA-at-COSY
Physics of Hadrons and QCD Volume 75 Issue 2 August 2010 pp 225-234 ... is an internal experiment at the cooler synchrotron (COSY) in Jülich, Germany. ... Higher orders in chiral perturbation theory are probed with the → 0 decay.
3. Hadrons and broken symmetries with WASA-at-COSY
Abstract. The WASA Detector Facility is an internal experiment at the cooler syn- .... This is an improvement of more than two orders of magnitude in the event ..... demonstrates the quality of the data for events coming from the reaction 1 GeV.
4. Quantum restoration of broken symmetry in one-dimensional loop ...
Though quantum theory and classical theory are completely different from ... authors) that the classical property of a system and the classical limit of the ..... pactly supported function (for example, the alternate deposition of thin layers of GaAs.
5. Comment on bag models with spontaneously broken color symmetry
International Nuclear Information System (INIS)
Jandel, M.
1985-01-01
A recently suggested field-theoretic bag model, where gluons are confined via a Higgs mechanism, is discussed. It is found that the proposed model creates gluon boundary conditions that break global SU_c(3) invariance. A modified scheme that removes this anomaly is suggested. However, some severe generic problems remain. Examples are the lack of a suppression mechanism for states with open color and the large surface energy of the bag states
6. Broken toe - self-care
Science.gov (United States)
Fractured toe - self-care; Broken bone - toe - self-care; Fracture - toe - self-care; Fracture phalanx - toe ... often treated without surgery and can be taken care of at home. Severe injuries include: Breaks that ...
7. What is Broken Heart Syndrome
Science.gov (United States)
... pumping action and blood flow, go to the Health Topics How the Heart Works article.) Researchers are trying to identify the precise way in which the stress hormones affect the heart. Broken heart syndrome may result from ...
8. Job loss and broken partnerships
DEFF Research Database (Denmark)
Kriegbaum, Margit; Christensen, Ulla; Lund, Rikke
2008-01-01
The aim of this study was to investigate the effects of the accumulated number of job losses and broken partnerships (defined as the end of cohabitation) on the risk of fatal and nonfatal events of ischemic heart disease (IHD).
9. Deep inelastic scattering in spontaneously broken gauge models
International Nuclear Information System (INIS)
Goloskokov, S.V.; Mikhov, S.G.; Morozov, P.T.; Stamenov, D.B.
1975-01-01
Deep inelastic lepton hadron scattering in the simplest spontaneously broken symmetry (the Kibble model) is analyzed. A hypothesis that the invariant coupling constant of the quartic self-interaction for large spacelike momenta tends to a finite asymptotic value without spoiling the asymptotic freedom for the invariant coupling constant of the Yang-Mills field is used. It is shown that Bjorken scaling for the moments of the structure functions of the deep inelastic lepton hadron scattering is violated by powers of logarithms
10. Leptogenesis and residual CP symmetry
International Nuclear Information System (INIS)
Chen, Peng; Ding, Gui-Jun; King, Stephen F.
2016-01-01
We discuss flavour dependent leptogenesis in the framework of lepton flavour models based on discrete flavour and CP symmetries applied to the type-I seesaw model. Working in the flavour basis, we analyse the case of two general residual CP symmetries in the neutrino sector, which corresponds to all possible semi-direct models based on a preserved Z_2 in the neutrino sector, together with a CP symmetry, which constrains the PMNS matrix up to a single free parameter which may be fixed by the reactor angle. We systematically study and classify this case for all possible residual CP symmetries, and show that the R-matrix is tightly constrained up to a single free parameter, with only certain forms being consistent with successful leptogenesis, leading to possible connections between leptogenesis and PMNS parameters. The formalism is completely general in the sense that the two residual CP symmetries could result from any high energy discrete flavour theory which respects any CP symmetry. As a simple example, we apply the formalism to a high energy S_4 flavour symmetry with a generalized CP symmetry, broken to two residual CP symmetries in the neutrino sector, recovering familiar results for PMNS predictions, together with new results for flavour dependent leptogenesis.
11. Inertial Symmetry Breaking
Energy Technology Data Exchange (ETDEWEB)
Hill, Christopher T.
2018-03-19
We review and expand upon recent work demonstrating that Weyl invariant theories can be broken "inertially," which does not depend upon a potential. This can be understood in a general way by the "current algebra" of these theories, independently of specific Lagrangians. Maintaining the exact Weyl invariance in a renormalized quantum theory can be accomplished by renormalization conditions that refer back to the VEVs of fields in the action. We illustrate the computation of a Weyl invariant Coleman-Weinberg potential that breaks a U(1) symmetry together with scale invariance.
12. Broken Homes: Impact on Adolescents.
Science.gov (United States)
Koziey, Paul W.; Davies, Leigh
1982-01-01
Tends to support assertion that children from homes broken by separation, divorce, or death are less well-adjusted in terms of California Personality Inventory scales of self-control, socialization, femininity, and good impression, than children from intact homes. Age and sex were not found to be linked to the degree of maladjustment. (AH)
13. Unbroken versus broken mirror world: a tale of two vacua
International Nuclear Information System (INIS)
Foot, R.; Lew, H.; Volkas, R.R.
2000-01-01
If the Lagrangian of nature respects parity invariance then there are two distinct possibilities: either parity is unbroken by the vacuum or it is spontaneously broken. We examine the two simplest phenomenologically consistent gauge models which have unbroken and spontaneously broken parity symmetries, respectively. These two models have a Lagrangian of the same form, but a different parameter range is chosen in the Higgs potential. They both predict the existence of dark matter and can explain the MACHO events. However, the models predict quite different neutrino physics. Although both have light mirror (effectively sterile) neutrinos, the ordinary-mirror neutrino mixing angles are unobservably tiny in the broken parity case. The minimal broken parity model therefore cannot simultaneously explain the solar, atmospheric and LSND data. By contrast, the unbroken parity version can explain all of the neutrino anomalies. Furthermore, we argue that the unbroken case provides the most natural explanation of the neutrino physics anomalies (irrespective of whether evidence from the LSND experiment is included) because of its characteristic maximal mixing prediction. (author)
14. On the origin of neutrino flavour symmetry
International Nuclear Information System (INIS)
King, Stephen F.; Luhn, Christoph
2009-01-01
We study classes of models which are based on some discrete family symmetry which is completely broken such that the observed neutrino flavour symmetry emerges indirectly as an accidental symmetry. For such 'indirect' models we discuss the D-term flavon vacuum alignments which are required for such an accidental flavour symmetry consistent with tri-bimaximal lepton mixing to emerge. We identify large classes of suitable discrete family symmetries, namely the Δ(3n^2) and Δ(6n^2) groups, together with other examples such as Z_7 x Z_3. In such indirect models the implementation of the type I see-saw mechanism is straightforward using constrained sequential dominance. However the accidental neutrino flavour symmetry may be easily violated, for example leading to a large reactor angle, while maintaining accurately the tri-bimaximal solar and atmospheric predictions.
15. Broken supersymmetries and shifted superpropagators
International Nuclear Information System (INIS)
Helayel-Neto, J.A.; Rabelo de Carvalho, F.A.B.; Smith, A.W.
1985-06-01
Superfield Feynman rules are derived for a general case where global supersymmetry is spontaneously broken by F-terms. The complete superspace dependence of the superpropagators is factored out and they are employed to discuss the corrections to the effective action and the non-renormalization theorems. Their coupling to external gauge superfields is also contemplated and finite matter contributions to the gaugino mass and the Fayet-Iliopoulos term are considered. (author)
16. Pole Inflation - Shift Symmetry and Universal Corrections
NARCIS (Netherlands)
Broy, Benedict J.; Galante, Mario; Roest, Diederik; Westphal, Alexander
2015-01-01
An appealing explanation for the Planck data is provided by inflationary models with a singular non-canonical kinetic term: a Laurent expansion of the kinetic function translates into a potential with a nearly shift-symmetric plateau in canonical fields. The shift symmetry can be broken at large
17. Gauge symmetry breaking in gauge theories -- in search of clarification
NARCIS (Netherlands)
Friederich, Simon
2013-01-01
The paper investigates the spontaneous breaking of gauge symmetries in gauge theories from a philosophical angle, taking into account the fact that the notion of a spontaneously broken local gauge symmetry, though widely employed in textbook expositions of the Higgs mechanism, is not supported by
18. Can the family group be a global symmetry
International Nuclear Information System (INIS)
Reiss, D.B.
1982-01-01
We consider the possibility that the family group may be a spontaneously broken continuous global symmetry. In the context of grand unification, the couplings of the associated Goldstone bosons to fermions can be sufficiently suppressed so as to satisfy the phenomenological bounds. For a maximal family symmetry this requires a large number of Higgs fields. (orig.)
19. Symmetry breaking patterns for inflation
Science.gov (United States)
Klein, Remko; Roest, Diederik; Stefanyszyn, David
2018-06-01
We study inflationary models where the kinetic sector of the theory has a non-linearly realised symmetry which is broken by the inflationary potential. We distinguish between kinetic symmetries which non-linearly realise an internal or space-time group, and which yield a flat or curved scalar manifold. This classification leads to well-known inflationary models such as monomial inflation and α-attractors, as well as a new model based on fixed couplings between a dilaton and many axions which non-linearly realises higher-dimensional conformal symmetries. In this model, inflation can be realised along the dilatonic direction, leading to a tensor-to-scalar ratio r ≈ 0.01 and a spectral index n_s ≈ 0.975. We refer to the new model as ambient inflation since inflation proceeds along an isometry of an anti-de Sitter ambient space-time, which fully determines the kinetic sector.
20. Symmetry witnesses
Science.gov (United States)
Aniello, Paolo; Chruściński, Dariusz
2017-07-01
A symmetry witness is a suitable subset of the space of selfadjoint trace class operators that allows one to determine whether a linear map is a symmetry transformation, in the sense of Wigner. More precisely, such a set is invariant with respect to an injective densely defined linear operator in the Banach space of selfadjoint trace class operators (if and) only if this operator is a symmetry transformation. According to a linear version of Wigner’s theorem, the set of pure states—the rank-one projections—is a symmetry witness. We show that an analogous result holds for the set of projections with a fixed rank (with some mild constraint on this rank, in the finite-dimensional case). It turns out that this result provides a complete classification of the sets of projections with a fixed rank that are symmetry witnesses. These particular symmetry witnesses are projectable; i.e. reasoning in terms of quantum states, the sets of ‘uniform’ density operators of corresponding fixed rank are symmetry witnesses too.
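The invariance property described in this abstract can be checked numerically in the easy direction: a unitary conjugation (a symmetry transformation in Wigner's sense) maps the set of rank-one projections, i.e. pure states, into itself. A minimal sketch assuming NumPy; the helper names are illustrative and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # QR factorization of a complex Gaussian matrix yields a unitary Q.
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

def pure_state_projection(n):
    # Rank-one projection |psi><psi| onto a random normalized vector.
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

n = 4
U = random_unitary(n)
P = pure_state_projection(n)
Q = U @ P @ U.conj().T  # symmetry transformation X -> U X U†

# The image is again a rank-one projection: Hermitian, idempotent, trace one,
# so the set of pure states is carried into itself by the transformation.
assert np.allclose(Q, Q.conj().T)
assert np.allclose(Q @ Q, Q)
assert np.isclose(np.trace(Q).real, 1.0)
print("unitary conjugation preserves the set of pure states")
```

The nontrivial content of the paper is the converse (the witness criterion): that invariance of such a set singles out symmetry transformations among linear maps.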
1. Hairs of discrete symmetries and gravity
Energy Technology Data Exchange (ETDEWEB)
Choi, Kang Sin [Scranton Honors Program, Ewha Womans University, Seodaemun-Gu, Seoul 03760 (Korea, Republic of); Center for Fields, Gravity and Strings, CTPU, Institute for Basic Sciences, Yuseong-Gu, Daejeon 34047 (Korea, Republic of); Kim, Jihn E., E-mail: jihnekim@gmail.com [Department of Physics, Kyung Hee University, 26 Gyungheedaero, Dongdaemun-Gu, Seoul 02447 (Korea, Republic of); Center for Axion and Precision Physics Research (IBS), 291 Daehakro, Yuseong-Gu, Daejeon 34141 (Korea, Republic of); Kyae, Bumseok [Department of Physics, Pusan National University, 2 Busandaehakro-63-Gil, Geumjeong-Gu, Busan 46241 (Korea, Republic of); Nam, Soonkeon [Department of Physics, Kyung Hee University, 26 Gyungheedaero, Dongdaemun-Gu, Seoul 02447 (Korea, Republic of)
2017-06-10
Gauge symmetries are known to be respected by gravity because gauge charges carry flux lines, but global charges do not carry flux lines and are not conserved by gravitational interaction. For discrete symmetries, they are spontaneously broken in the Universe, forming domain walls. Since the realization of discrete symmetries in the Universe must involve the vacuum expectation values of Higgs fields, a string-like configuration (hair) at the intersection of domain walls in the Higgs vacua can be realized. Therefore, we argue that discrete charges are also respected by gravity.
2. Hairs of discrete symmetries and gravity
Directory of Open Access Journals (Sweden)
Kang Sin Choi
2017-06-01
Gauge symmetries are known to be respected by gravity because gauge charges carry flux lines, but global charges do not carry flux lines and are not conserved by gravitational interaction. For discrete symmetries, they are spontaneously broken in the Universe, forming domain walls. Since the realization of discrete symmetries in the Universe must involve the vacuum expectation values of Higgs fields, a string-like configuration (hair) at the intersection of domain walls in the Higgs vacua can be realized. Therefore, we argue that discrete charges are also respected by gravity.
3. Holography without translational symmetry
CERN Document Server
Vegh, David
2013-01-01
We propose massive gravity as a holographic framework for describing a class of strongly interacting quantum field theories with broken translational symmetry. Bulk gravitons are assumed to have a Lorentz-breaking mass term as a substitute for spatial inhomogeneities. This breaks momentum-conservation in the boundary field theory. At finite chemical potential, the gravity duals are charged black holes in asymptotically anti-de Sitter spacetime. The conductivity in these systems generally exhibits a Drude peak that approaches a delta function in the massless gravity limit. Furthermore, the optical conductivity shows an emergent scaling law: $|\\sigma(\\omega)| \\approx {A \\over \\omega^{\\alpha}} + B$. This result is consistent with that found earlier by Horowitz, Santos, and Tong who introduced an explicit inhomogeneous lattice into the system.
4. Mirror symmetry
CERN Document Server
Voisin, Claire
1999-01-01
This is the English translation of Professor Voisin's book reflecting the discovery of the mirror symmetry phenomenon. The first chapter is devoted to the geometry of Calabi-Yau manifolds, and the second describes, as motivation, the ideas from quantum field theory that led to the discovery of mirror symmetry. The other chapters deal with more specialized aspects of the subject: the work of Candelas, de la Ossa, Greene, and Parkes, based on the fact that under the mirror symmetry hypothesis, the variation of Hodge structure of a Calabi-Yau threefold determines the Gromov-Witten invariants of its mirror; Batyrev's construction, which exhibits the mirror symmetry phenomenon between hypersurfaces of toric Fano varieties, after a combinatorial classification of the latter; the mathematical construction of the Gromov-Witten potential, and the proof of its crucial property (that it satisfies the WDVV equation), which makes it possible to construct a flat connection underlying a variation of Hodge structure in the ...
5. Necessity of intermediate mass scales in grand unified theories with spontaneously broken CP invariance
International Nuclear Information System (INIS)
Senjanovic, G.
1982-07-01
It is demonstrated that the spontaneous breakdown of CP invariance in grand unified theories requires the presence of intermediate mass scales. The simplest realization is provided by weakly broken left-right symmetry in the context of the SU(2)_L x SU(2)_R x U(1)_(B-L) model embedded in grand unified theories. (author)
6. A search for symmetries in the genetic code
International Nuclear Information System (INIS)
Hornos, J.E.M.; Hornos, Y.M.M.
1991-01-01
A search for symmetries based on the classification theorem of Cartan for the compact simple Lie algebras is performed to verify to what extent the genetic code is a manifestation of some underlying symmetry. An exact continuous symmetry group cannot be found to reproduce the present, universal code. However a unique approximate symmetry group is compatible with codon assignment for the fundamental amino acids and the termination codon. In order to obtain the actual genetic code, the symmetry must be slightly broken. (author). 27 refs, 3 figs, 6 tabs
7. Dynamical Symmetry Breaking of Maximally Generalized Yang-Mills Model and Its Restoration at Finite Temperatures
International Nuclear Information System (INIS)
Wang Dianfu
2008-01-01
In terms of the Nambu-Jona-Lasinio mechanism, dynamical breaking of gauge symmetry for the maximally generalized Yang-Mills model is investigated. The gauge symmetry behavior at finite temperature is also investigated and it is shown that the gauge symmetry broken dynamically at zero temperature can be restored at finite temperatures
8. BOOK REVIEW: Symmetry Breaking
Science.gov (United States)
Ryder, L. H.
2005-11-01
One of the most fruitful and enduring advances in theoretical physics during the last half century has been the development of the role played by symmetries. One needs only to consider SU(3) and the classification of elementary particles, the Yang-Mills enlargement of Maxwell's electrodynamics to the symmetry group SU(2), and indeed the tremendous activity surrounding the discovery of parity violation in the weak interactions in the late 1950s. This last example is one of a broken symmetry, though the symmetry in question is a discrete one. It was clear to Gell-Mann, who first clarified the role of SU(3) in particle physics, that this symmetry was not exact. If it had been, it would have been much easier to discover; for example, the proton, neutron, Σ, Λ and Ξ particles would all have had the same mass. For many years the SU(3) symmetry breaking was assigned a mathematical form, but the importance of this formulation fell away when the quark model began to be taken seriously; the reason the SU(3) symmetry was not exact was simply that the (three, in those days) quarks had different masses. At the same time, and in a different context, symmetry breaking of a different type was being investigated. This went by the name of `spontaneous symmetry breaking' and its characteristic was that the ground state of a given system was not invariant under the symmetry transformation, though the interactions (the Hamiltonian, in effect) were. A classic example is ferromagnetism. In a ferromagnet the atomic spins are aligned in one direction only—this is the ground state of the system. It is clearly not invariant under a rotation, for that would change the ground state into a (similar but) different one, with the spins aligned in a different direction; this is the phenomenon of a degenerate vacuum. The contribution of the spin interaction, s_1·s_2, to the Hamiltonian, however, is actually invariant under rotations. As Coleman remarked, a little man living in a ferromagnet would
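The ferromagnet example in this review can be made concrete with a few lines of linear algebra: the Heisenberg coupling s1·s2 is invariant under a global rotation, while any particular aligned ground state is not. A toy sketch assuming NumPy (not from the book under review):

```python
import numpy as np

rng = np.random.default_rng(1)

def rotation_z(theta):
    # Rotation by angle theta about the z axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

s1 = rng.normal(size=3)
s2 = rng.normal(size=3)
R = rotation_z(0.7)

# The spin-spin coupling s1·s2 is invariant under the global rotation...
assert np.isclose(s1 @ s2, (R @ s1) @ (R @ s2))

# ...but a particular aligned ground state is not: the rotation carries it
# into a different, degenerate ground state (spins aligned another way).
ground = np.array([1.0, 0.0, 0.0])  # all spins along +x
assert not np.allclose(R @ ground, ground)
print("invariant interaction, non-invariant ground state")
```

This is exactly the degenerate-vacuum picture of the review: the Hamiltonian respects the symmetry, the ground state picks one direction.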
9. Lie-algebra approach to symmetry breaking
International Nuclear Information System (INIS)
Anderson, J.T.
1981-01-01
A formal Lie-algebra approach to symmetry breaking is studied in an attempt to reduce the arbitrariness of Lagrangian (Hamiltonian) models which include several free parameters and/or ad hoc symmetry groups. From Lie algebra it is shown that the unbroken Lagrangian vacuum symmetry can be identified from a linear function of integers which are Cartan matrix elements. In broken symmetry if the breaking operators form an algebra then the breaking symmetry (or symmetries) can be identified from linear functions of integers characteristic of the breaking symmetries. The results are applied to the Dirac Hamiltonian of a sum of flavored fermions and colored bosons in the absence of dynamical symmetry breaking. In the partially reduced quadratic Hamiltonian the breaking-operator functions are shown to consist of terms of order g^2, g, and g^0 in the color coupling constants and identified with strong (boson-boson), medium strong (boson-fermion), and fine-structure (fermion-fermion) interactions. The breaking operators include a boson helicity operator in addition to the familiar fermion helicity and ''spin-orbit'' terms. Within the broken vacuum defined by the conventional formalism, the field divergence yields a gauge which is a linear function of Cartan matrix integers and which specifies the vacuum symmetry. We find that the vacuum symmetry is chiral SU(3) x SU(3) and the axial-vector-current divergence gives a PCAC-like function of the Cartan matrix integers which reduces to PCAC for SU(2) x SU(2) breaking. For the mass spectra of the nonets J^P = 0^-, 1/2^+, 1^- the integer runs through the sequence 3,0,-1,-2, which indicates that the breaking subgroups are the simple Lie groups. Exact axial-vector-current conservation indicates a breaking sum rule which generates octet enhancement. Finally, the second-order breaking terms are obtained from the second-order spin tensor sum of the completely reduced quartic Hamiltonian
10. Big break for charge symmetry
CERN Document Server
Miller, G A
2003-01-01
Two new experiments have detected charge-symmetry breaking, the mechanism responsible for protons and neutrons having different masses. Symmetry is a crucial concept in the theories that describe the subatomic world because it has an intimate connection with the laws of conservation. The theory of the strong interaction between quarks - quantum chromodynamics - is approximately invariant under what is called charge symmetry. In other words, if we swap an up quark for a down quark, then the strong interaction will look almost the same. This symmetry is related to the concept of isospin, and is not the same as charge conjugation (in which a particle is replaced by its antiparticle). Charge symmetry is broken by the competition between two different effects. The first is the small difference in mass between up and down quarks, which is about 200 times less than the mass of the proton. The second is their different electric charges. The up quark has a charge of +2/3 in units of the proton charge, while ...
11. [Dehydration due to "mouth broken"].
Science.gov (United States)
Meijler, D P M; van Mossevelde, P W J; van Beek, R H T
2012-09-01
Two children were admitted to a medical centre due to dehydration after an oral injury and the extraction of a tooth. One child complained of "mouth broken". Dehydration is the most common water-electrolyte imbalance in children. Babies and young children are prone to dehydration due to their relatively large body surface area, the high percentage of extracellular fluid, and the limited ability of the kidneys to conserve water. After the removal of a tooth, after an oral trauma, or in case of oral discomfort, a child is at greater risk of dehydration from reduced fluid and food intake due to oral pain and/or discomfort and anxiety about drinking. In those cases, extra attention needs to be devoted to the intake of fluids.
12. Translational Symmetry and Microscopic Constraints on Symmetry-Enriched Topological Phases: A View from the Surface
Directory of Open Access Journals (Sweden)
Meng Cheng
2016-12-01
Full Text Available The Lieb-Schultz-Mattis theorem and its higher-dimensional generalizations by Oshikawa and Hastings require that translationally invariant 2D spin systems with a half-integer spin per unit cell must either have a continuum of low energy excitations, spontaneously break some symmetries, or exhibit topological order with anyonic excitations. We establish a connection between these constraints and a remarkably similar set of constraints at the surface of a 3D interacting topological insulator. This, combined with recent work on symmetry-enriched topological phases with on-site unitary symmetries, enables us to develop a framework for understanding the structure of symmetry-enriched topological phases with both translational and on-site unitary symmetries, including the effective theory of symmetry defects. This framework places stringent constraints on the possible types of symmetry fractionalization that can occur in 2D systems whose unit cell contains fractional spin, fractional charge, or a projective representation of the symmetry group. As a concrete application, we determine when a topological phase must possess a “spinon” excitation, even in cases when spin rotational invariance is broken down to a discrete subgroup by the crystal structure. We also describe the phenomena of “anyonic spin-orbit coupling,” which may arise from the interplay of translational and on-site symmetries. These include the possibility of on-site symmetry defect branch lines carrying topological charge per unit length and lattice dislocations inducing degeneracies protected by on-site symmetry.
13. Emergent Electroweak Symmetry Breaking with Composite W, Z Bosons
CERN Document Server
Cui, Yanou; Wells, James D
2009-01-01
We present a model of electroweak symmetry breaking in a warped extra dimension where electroweak symmetry is broken at the UV (or Planck) scale. An underlying conformal symmetry is broken at the IR (or TeV) scale, generating masses for the electroweak gauge bosons without invoking a Higgs mechanism. By the AdS/CFT correspondence the W, Z bosons are identified as composite states of a strongly-coupled gauge theory, suggesting that electroweak symmetry breaking is an emergent phenomenon at the IR scale. The model satisfies electroweak precision tests with reasonable fits to the S and T parameters. In particular the T parameter is sufficiently suppressed since the model naturally admits a custodial SU(2) symmetry. The composite nature of the W, Z bosons provides a novel possibility of unitarizing WW scattering via form factor suppression. Constraints from LEP and the Tevatron as well as discovery opportunities at the LHC are discussed for these composite electroweak gauge bosons.
14. Structural symmetry and protein function.
Science.gov (United States)
Goodsell, D S; Olson, A J
2000-01-01
The majority of soluble and membrane-bound proteins in modern cells are symmetrical oligomeric complexes with two or more subunits. The evolutionary selection of symmetrical oligomeric complexes is driven by functional, genetic, and physicochemical needs. Large proteins are selected for specific morphological functions, such as formation of rings, containers, and filaments, and for cooperative functions, such as allosteric regulation and multivalent binding. Large proteins are also more stable against denaturation and have a reduced surface area exposed to solvent when compared with many individual, smaller proteins. Large proteins are constructed as oligomers for reasons of error control in synthesis, coding efficiency, and regulation of assembly. Symmetrical oligomers are favored because of stability and finite control of assembly. Several functions limit symmetry, such as interaction with DNA or membranes, and directional motion. Symmetry is broken or modified in many forms: quasisymmetry, in which identical subunits adopt similar but different conformations; pleomorphism, in which identical subunits form different complexes; pseudosymmetry, in which different molecules form approximately symmetrical complexes; and symmetry mismatch, in which oligomers of different symmetries interact along their respective symmetry axes. Asymmetry is also observed at several levels. Nearly all complexes show local asymmetry at the level of side chain conformation. Several complexes have reciprocating mechanisms in which the complex is asymmetric, but, over time, all subunits cycle through the same set of conformations. Global asymmetry is only rarely observed. Evolution of oligomeric complexes may favor the formation of dimers over complexes with higher cyclic symmetry, through a mechanism of prepositioned pairs of interacting residues. However, examples have been found for all of the crystallographic point groups, demonstrating that functional need can drive the evolution of
15. Flavor universal dynamical electroweak symmetry breaking
International Nuclear Information System (INIS)
Burdman, G.; Evans, N.
1999-01-01
The top condensate seesaw mechanism of Dobrescu and Hill allows electroweak symmetry to be broken while deferring the problem of flavor to an electroweak singlet, massive sector. We provide an extended version of the singlet sector that naturally accommodates realistic masses for all the standard model fermions, which play an equal role in breaking electroweak symmetry. The models result in a relatively light composite Higgs sector with masses typically in the range of (400 - 700) GeV. In more complete models the dynamics will presumably be driven by a broken gauged family or flavor symmetry group. As an example of the higher scale dynamics a fully dynamical model of the quark sector with a GIM mechanism is presented, based on an earlier top condensation model of King using broken family gauge symmetry interactions (that model was itself based on a technicolor model of Georgi). The crucial extra ingredient is a reinterpretation of the condensates that form when several gauge groups become strong close to the same scale. A related technicolor model of Randall which naturally includes the leptons too may also be adapted to this scenario. We discuss the low energy constraints on the massive gauge bosons and scalars of these models as well as their phenomenology at the TeV scale. copyright 1999 The American Physical Society
16. Softly broken N=2 QCD
CERN Document Server
Alvarez-Gaumé, Luis; Distler, Jacques; Kounnas, Costas; Mariño, Marcos
1996-01-01
We analyze the possible soft breaking of N=2 supersymmetric Yang-Mills theory, with and without matter flavour, preserving the analyticity properties of the Seiberg-Witten solution. For a small supersymmetry breaking parameter with respect to the dynamical scale of the theory we obtain an exact expression for the effective potential. We describe in detail the onset of the confinement transition and some of the patterns of chiral symmetry breaking. If we extrapolate the results to the limit where supersymmetry decouples, we obtain hints indicating that perhaps a description of the QCD vacuum will require the use of Lagrangians containing simultaneously mutually non-local degrees of freedom (monopoles and dyons).
17. Symmetry, Symmetry Breaking and Topology
Directory of Open Access Journals (Sweden)
Siddhartha Sen
2010-07-01
Full Text Available The ground state of a system with symmetry can be described by a group G. This symmetry group G can be discrete or continuous. Thus for a crystal G is a finite group while for the vacuum state of a grand unified theory G is a continuous Lie group. The ground state symmetry described by G can change spontaneously from G to one of its subgroups H as the external parameters of the system are modified. Such a macroscopic change of the ground state symmetry of a system from G to H corresponds to a “phase transition”. Such phase transitions have been extensively studied within a framework due to Landau. A vast range of systems can be described using Landau’s approach, however there are also systems where the framework does not work. Recently there has been growing interest in looking at such non-Landau types of phase transitions. For instance there are several “quantum phase transitions” that are not of the Landau type. In this short review we first describe a refined version of Landau’s approach in which topological ideas are used together with group theory. The combined use of group theory and topological arguments allows us to determine selection rules which forbid transitions from G to certain of its subgroups. We end by making a few brief remarks about non-Landau types of phase transition.
18. Self Derogation and Childhood Broken Home
Science.gov (United States)
Kaplan, Howard B; Pokorny, Alex D.
1971-01-01
The data from this study makes clear that it is not the fact of broken homes per se that is related to self derogation but rather the particular characteristics of the broken home situation. Prediction of self derogation is also contingent upon such subject characteristics as race, sex and social class. (Author/CG)
19. "Broken Expectations" from a Global Business Perspective
NARCIS (Netherlands)
Koca, A.; Karapanos, E.; Brombacher, A.C.
2009-01-01
Especially in the past few years, there has been an increase in the rejection rate of interactive consumer electronics products in the field, not due to broken hardware or software, but due to ‘broken expectations’ of users. However, operational methods to capture triggering contextual reasons are
20. arXiv Global $SU(2)_L \\otimes$BRST symmetry and its LSS theorem: Ward-Takahashi identities governing Green's functions, on-shell T-Matrix elements, and $V_{eff}$, in the scalar-sector of certain spontaneously broken non-Abelian gauge theories
CERN Document Server
Güngör, Özenç; Starkman, Glenn D.; Stora, Raymond
This work is dedicated to the memory of Raymond Stora (1930-2015). $SU(2)_L$ is the simplest spontaneous symmetry breaking (SSB) non-Abelian gauge theory: a complex scalar doublet $\phi=\frac{1}{\sqrt{2}}\begin{bmatrix}H+i\pi_3\\ -\pi_2+i\pi_1\end{bmatrix}\equiv\frac{1}{\sqrt{2}}\tilde{H}e^{2i\tilde{t}\cdot\tilde{\vec{\pi}}/\tilde{H}}\begin{bmatrix}1\\ 0\end{bmatrix}$ and a vector $\vec{W}^\mu$. In Landau gauge, $\vec{W}^\mu$ is transverse, $\vec{\tilde{\pi}}$ are massless derivatively coupled Nambu-Goldstone bosons (NGB). A global shift symmetry enforces $m^{2}_{\tilde{\pi}}=0$. We observe that on-shell T-matrix elements of physical states $\vec{W}^\mu$,$\phi$ are independent of global $SU(2)_{L}$ transformations, and the associated global current is exactly conserved for amplitudes of physical states. We identify two towers of "1-soft-pion" global Ward-Takahashi Identities (WTI), which govern the $\phi$-sector, and represent a new global symmetry, $SU(2)_L\otimes$BRST, a symmetry not of the Lagrangian but of the physical...
1. Universe symmetries
International Nuclear Information System (INIS)
Souriau, J.M.
1984-01-01
The uniformity of the sky can be observed by studying the distribution of sufficiently distant objects. The description of the sky's isotropy uses space rotations. Elements of group theory make it possible to give a meaning, at once precise and general, to the word 'symmetry'. Universe models are reviewed, which must have both of the following qualities: conformity with the known laws of physics, and rigorous symmetry under one of the permitted groups. Each of the models predicts that the evolution of the universe obeys an evolution equation. Expansion and big-bang theory are recalled. Is the universe an open or a closed space? The universe is also electrically neutral. This leads to a working hypothesis: the existing matter is not a given datum of the universe but appeared by evolution from nothing. The problem of matter and antimatter is then raised, together with its place in the universe [fr]
2. Dynamically broken gauge model without fundamental scalar fields
International Nuclear Information System (INIS)
Snyderman, N.J.; Guralnik, G.S.
1976-01-01
It is shown that the structure that must be generated by dynamical symmetry breaking solutions to gauge theories can be explicitly implemented with a 4-fermion interaction. This structure arises in order to obtain consistency with the constraints imposed by a Goldstone commutator proportional to ⟨ψ̄ψ⟩. One demonstrates these ideas within the context of axial electrodynamics, dynamically breaking chiral symmetry. As a pre-requisite it is shown how the Nambu-Jona-Lasinio model becomes renormalizable with respect to a systematic approximation scheme that respects the Goldstone commutator of dynamically broken chiral symmetry to each order of approximation. (This approximation scheme is equivalent to a 1/N expansion, where N is set to unity at the end of the calculations.) This solution generates new interactions not explicitly present in the original Lagrangian and does not have a 4-fermion contact interaction. The renormalized Green's functions are shown to correspond to those of the sigma-model, summed as though the fermions had N components, and for which lambda_0 = 2g_0^2. This correspondence is exact except for the possibility that the renormalized coupling of the Nambu-Jona-Lasinio model may be a determined number
3. Dynamically broken gauge model without fundamental scalar fields
Energy Technology Data Exchange (ETDEWEB)
Snyderman, N. J.; Guralnik, G. S.
1976-01-01
It is shown that the structure that must be generated by dynamical symmetry breaking solutions to gauge theories can be explicitly implemented with a 4-fermion interaction. This structure arises in order to obtain consistency with the constraints imposed by a Goldstone commutator proportional to ⟨ψ̄ψ⟩. One demonstrates these ideas within the context of axial electrodynamics, dynamically breaking chiral symmetry. As a pre-requisite it is shown how the Nambu-Jona-Lasinio model becomes renormalizable with respect to a systematic approximation scheme that respects the Goldstone commutator of dynamically broken chiral symmetry to each order of approximation. (This approximation scheme is equivalent to a 1/N expansion, where N is set to unity at the end of the calculations.) This solution generates new interactions not explicitly present in the original Lagrangian and does not have a 4-fermion contact interaction. The renormalized Green's functions are shown to correspond to those of the sigma-model, summed as though the fermions had N components, and for which lambda_0 = 2g_0^2. This correspondence is exact except for the possibility that the renormalized coupling of the Nambu-Jona-Lasinio model may be a determined number.
4. Symmetry realization via a dynamical inverse Higgs mechanism
Science.gov (United States)
Rothstein, Ira Z.; Shrivastava, Prashant
2018-05-01
The Ward identities associated with spontaneously broken symmetries can be saturated by Goldstone bosons. However, when space-time symmetries are broken, the number of Goldstone bosons necessary to non-linearly realize the symmetry can be less than the number of broken generators. The loss of Goldstones may be due to a redundancy or the generation of a gap. In either case the associated Goldstone may be removed from the spectrum. This phenomenon is called an Inverse Higgs Mechanism (IHM), and its appearance has a well defined mathematical condition. However, there are cases when a Goldstone boson associated with a broken generator does not appear in the low energy theory despite the lack of the existence of an associated IHM. In this paper we will show that in such cases the relevant broken symmetry can be realized, without the aid of an associated Goldstone, if there exists a proper set of operator constraints, which we call a Dynamical Inverse Higgs Mechanism (DIHM). We consider the spontaneous breaking of boosts, rotations and conformal transformations in the context of Fermi liquids, finding three possible paths to symmetry realization: pure Goldstones, no Goldstones and DIHM, or some mixture thereof. We show that in the two dimensional degenerate electron system the DIHM route is the only consistent way to realize spontaneously broken boosts and dilatations, while in three dimensions these symmetries could just as well be realized via the inclusion of non-derivatively coupled Goldstone bosons. We present the action, including the leading order non-linearities, for the rotational Goldstone (angulon), and discuss the constraint associated with the possible DIHM that would need to be imposed to remove it from the spectrum. Finally we discuss the conditions under which Goldstone bosons are non-derivatively coupled, a necessary condition for the existence of a Dynamical Inverse Higgs Constraint (DIHC), generalizing the results of Vishwanath and Watanabe.
5. Gapless Symmetry-Protected Topological Order
Directory of Open Access Journals (Sweden)
Thomas Scaffidi
2017-11-01
Full Text Available We introduce exactly solvable gapless quantum systems in d dimensions that support symmetry-protected topological (SPT) edge modes. Our construction leads to long-range entangled critical points or phases that can be interpreted as critical condensates of domain walls “decorated” with dimension (d-1) SPT systems. Using a combination of field theory and exact lattice results, we argue that such gapless SPT systems have symmetry-protected topological edge modes that can be either gapless or symmetry broken, leading to unusual surface critical properties. Despite the absence of a bulk gap, these edge modes are robust against arbitrary symmetry-preserving local perturbations near the edges. In two dimensions, we construct wave functions that can also be interpreted as unusual quantum critical points with diffusive scaling in the bulk but ballistic edge dynamics.
6. Is CP a gauge symmetry?
International Nuclear Information System (INIS)
Choi, K.; Kaplan, D.B.; Nelson, A.E.
1993-01-01
Conventional solutions to the strong CP problem all require the existence of global symmetries. However, quantum gravity may destroy global symmetries, making it hard to understand why the electric dipole moment of the neutron (EDMN) is so small. We suggest here that CP is actually a discrete gauge symmetry, and is therefore not violated by quantum gravity. We show that four-dimensional CP can arise as a discrete gauge symmetry in theories with dimensional compactification, if the original number of Minkowski dimensions equals 8k+1, 8k+2 or 8k+3, and if there are certain restrictions on the gauge group; these conditions are met by superstrings. CP may then be broken spontaneously below 10^9 GeV, explaining the observed CP violation in the kaon system without inducing a large EDMN. We discuss the phenomenology of such models, as well as the peculiar properties of cosmic 'CP strings' which could be produced at the compactification scale. Such strings have the curious property that a particle carried around the string is turned into its CP conjugate. A single CP string renders four-dimensional space-time nonorientable. (orig.)
7. Discrete symmetries and their stringy origin
International Nuclear Information System (INIS)
Mayorga Pena, Damian Kaloni
2014-05-01
Discrete symmetries have proven to be very useful in controlling the phenomenology of theories beyond the standard model. In this work we explore how these symmetries emerge from string compactifications. Our approach is twofold: On the one hand, we consider the heterotic string on orbifold backgrounds. In this case the discrete symmetries can be derived from the orbifold conformal field theory, and it can be shown that they are in close relation with the orbifold geometry. We devote special attention to R-symmetries, which arise from discrete remnants of the Lorentz group in compact space. Further we discuss the physical implications of these symmetries both in the heterotic mini-landscape and in newly constructed models based on the Z_2 x Z_4 orbifold. In both cases we observe that the discrete symmetries favor particular locations in the orbifold where the particles of the standard model should live. On the other hand we consider a class of F-theory models exhibiting an SU(5) gauge group, times additional U(1) symmetries. In this case, the smooth compactification background does not permit us to track the discrete symmetries as transparently as in orbifold models. Hence, we follow a different approach and search for discrete subgroups emerging after the U(1)s are broken. We observe that in this approach it is possible to obtain the standard Z_2 matter parity of the MSSM.
8. Cosmoparticle physics of family symmetry breaking
International Nuclear Information System (INIS)
Khlopov, M.Yu.
1993-07-01
The foundations of both particle theory and cosmology are hidden at the super-energy scale and cannot be tested by direct laboratory means. Cosmoparticle physics is developed to probe these foundations through the proper combination of their indirect effects, thus providing definite conclusions on their reliability. Cosmological and astrophysical tests turn out to be complementary to laboratory searches for rare processes induced by new physics, as can be seen in the case of the gauge theory of broken symmetry of quark and lepton families, which ascribes the observed hierarchy of masses and the mixing between quark and lepton families to the hierarchy of the horizontal symmetry breaking. 36 refs
9. Off-shell Ward identities and gauge symmetries in string theory
International Nuclear Information System (INIS)
Porrati, M.
1989-01-01
I describe a new method of obtaining gauge-symmetry transformation laws for the effective lagrangian of an arbitrary string theory. The method applies to exact as well as spontaneously broken gauge symmetries. The transformation laws, exact to all orders in α', are determined inductively in the number of fields by the corresponding off-shell Ward identities. The case of broken supersymmetry is examined in some detail. (orig.)
10. A reciprocal of Coleman's theorem and the quantum statistics of systems with spontaneous symmetry breaking
International Nuclear Information System (INIS)
Chaichian, M.; Montonen, C.; Perez Rojas, H.
1991-01-01
The completely different conservation properties of charges associated to unbroken and broken symmetries are discussed. The impossibility of establishing a conservation law for nondegenerate Hilbert space representations in the broken case leads to a reciprocal of Coleman's theorem. The quantum statistical implication is that these charges cannot be introduced as conserved operators in the density matrix. (orig.)
11. Extensions of automorphisms and gauge symmetries
International Nuclear Information System (INIS)
Buchholz, D.; Doplicher, S.; Longo, R.; Roberts, J.E.
1993-01-01
We characterize the automorphisms of a C*-algebra A which extend to automorphisms of the crossed product B of A by a compact group dual. The case where the inclusion A ⊆ B is equipped with a group of automorphisms commuting with the dual action is also treated. These results are applied to the analysis of broken gauge symmetries in Quantum Field Theory to draw conclusions on the structure of the degenerate vacua on the field algebra. (orig.)
12. Scale gauge symmetry and the standard model
International Nuclear Information System (INIS)
Sola, J.
1990-01-01
This paper speculates on a version of the standard model of the electroweak and strong interactions coupled to gravity and equipped with a spontaneously broken, anomalous, conformal gauge symmetry. The scalar sector is virtually absent in the minimal model but in the general case it shows up in the form of a nonlinear harmonic map Lagrangian. A Euclidean approach to the cosmological constant problem is also addressed in this framework
13. Is space-time symmetry a suitable generalization of parity-time symmetry?
International Nuclear Information System (INIS)
Amore, Paolo; Fernández, Francisco M.; Garcia, Javier
2014-01-01
We discuss space-time symmetric Hamiltonian operators of the form H = H_0 + igH', where H_0 is Hermitian and g real. H_0 is invariant under the unitary operations of a point group G while H' is invariant under transformation by elements of a subgroup G' of G. If G exhibits irreducible representations of dimension greater than unity, then it is possible that H has complex eigenvalues for sufficiently small nonzero values of g. In the particular case that H is parity-time symmetric, it appears to exhibit real eigenvalues for all sufficiently small values of g. Point-group symmetry and perturbation theory enable one to predict whether H may exhibit real or complex eigenvalues for g > 0. We illustrate the main theoretical results and conclusions of this paper by means of two- and three-dimensional Hamiltonians exhibiting a variety of different point-group symmetries. - Highlights: • Space-time symmetry is a generalization of PT symmetry. • The eigenvalues of a space-time Hamiltonian are either real or appear as pairs of complex conjugate numbers. • In some cases all the eigenvalues are real for some values of a potential-strength parameter g. • At some value of g space-time symmetry is broken and complex eigenvalues appear. • Some multidimensional oscillators exhibit broken space-time symmetry for all values of g.
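The dichotomy stated in the highlights above (eigenvalues either real or in complex-conjugate pairs, with symmetry breaking at some value of g) can be checked numerically on a standard 2x2 PT-symmetric toy model. The matrix below and its breaking point at g = 1 are our illustrative choice, not taken from the record:

```python
import numpy as np

def pt_hamiltonian(g):
    # H = H0 + i*g*H' with H0 = sigma_x (Hermitian) and H' = sigma_z.
    # This is a common PT-symmetric toy model; eigenvalues are +/- sqrt(1 - g^2).
    H0 = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
    Hp = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
    return H0 + 1j * g * Hp

for g in (0.5, 2.0):
    ev = np.linalg.eigvals(pt_hamiltonian(g))
    print(f"g = {g}: eigenvalues = {np.round(ev, 6)}")
```

For g < 1 the spectrum is the real pair +/- sqrt(1 - g^2); for g > 1 it becomes the conjugate pair +/- i*sqrt(g^2 - 1), i.e. the broken-symmetry phase the abstract describes.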
14. Domain walls and the CP anomaly in softly broken supersymmetric QCD
Science.gov (United States)
Draper, Patrick
2018-04-01
In ordinary QCD with light, degenerate, fundamental flavors, CP symmetry is spontaneously broken at θ = π, and domain wall solutions connecting the vacua can be constructed in chiral perturbation theory. In some cases the breaking of CP saturates a 't Hooft anomaly, and anomaly inflow requires nontrivial massless excitations on the domain walls. Analogously, CP can be spontaneously broken in supersymmetric QCD (SQCD) with light flavors and small soft breaking parameters. We study CP breaking and domain walls in softly broken SQCD with N_f < N flavors, computed at leading order in the soft breaking parameters, producing a phase diagram for the stable wall trajectory. We also comment on domain walls in the similar case of QCD with an adjoint and fundamental flavors, and on the impact of adding an axion in this theory.
15. Inflation via Gravitino Condensation in Dynamically Broken Supergravity
CERN Document Server
Alexandre, Jean; Mavromatos, Nick E
2015-01-01
Gravitino-condensate-induced inflation via the super-Higgs effect is a UV-motivated scenario for both inflating the early universe and breaking local supersymmetry dynamically, entirely independent of any coupling to external matter. As an added benefit, this also removes the (as of yet unobserved) massless Goldstino associated to global supersymmetry breaking from the particle spectrum. In this review we detail the pertinent properties and outline previously hidden details of the various steps required in this context in order to make contact with current inflationary phenomenology. The class of models of SUGRA we use to exemplify our approach are minimal four-dimensional N=1 supergravity and conformal extensions thereof (with broken conformal symmetry). Therein, the gravitino condensate itself can play the role of the inflaton, however the requirement of slow-roll necessitates unnaturally large values of the wave-function renormalisation. Nevertheless, there is an alternative scenario that may provide Staro...
16. Supersymmetry and intermediate symmetry breaking in SO(10) superunification
International Nuclear Information System (INIS)
Asatryan, H.M.; Ioannisyan, A.N.
1985-01-01
A scheme of simultaneous breakdown of the intermediate symmetry SO(10) → SU(3)_c x U(1) x SU(2)_L x SU(2)_R and of supersymmetry by means of a single scale parameter is suggested. This intermediate symmetry, which is physically preferable, has, owing to the broken supersymmetry, a minimum lying lower than that of SU(4) x SU(2)_L x SU(2)_R. The intermediate symmetry is broken by the vacuum expectation values of the Higgs superfields. Owing to the quantum corrections the potential minimum turns out to correspond to breakdown of the intermediate symmetry down to the standard group SU(3)_c x SU(2)_L x U(1)_Y. The value of the Weinberg angle is less than that in the supersymmetric SU(5) model and agrees with experiment
17. Neutrino Masses with Inverse Hierarchy from Broken $L_{e}-L_{\\mu}-L_{\\tau}$: a Reappraisal
CERN Document Server
Altarelli, Guido; Altarelli, Guido; Franceschini, Roberto
2006-01-01
We discuss a class of models of neutrino masses and mixings with inverse hierarchy based on a broken $U(1)_F$ flavour symmetry with charge $L_e-L_\mu-L_\tau$. The symmetry breaking sector receives separate contributions from flavon vev breaking terms and from soft mass breaking in the right handed Majorana sector. The model is able to reproduce in a natural way all observed features of the charged lepton mass spectrum and of neutrino masses and mixings (even with arbitrarily small $\theta_{13}$), with the exception of a moderate fine tuning which is needed to accommodate the observed small value of $r = \Delta m^2_{sol} / \Delta m^2_{atm}$.
18. [Aggressive behavior of students from broken homes
Directory of Open Access Journals (Sweden)
Randi Pratama
2016-12-01
Full Text Available This research is motivated by the aggressive behavior shown by students, especially students who come from broken homes. The purpose of this study is to describe the aggressive behavior of students from broken homes in terms of attacking people physically, attacking people verbally, and damaging or destroying the property of others. The results of this research show that, in general, students' aggressive behavior is at an average level. The implication for guidance and counseling is that the findings can serve as the basis for programs to prevent and cope with aggressive behavior among students, especially those from broken homes. Cooperation with homeroom teachers, subject teachers and other school personnel will also help identify students who exhibit aggressive behavior, especially students from broken homes, so that services can be provided immediately.
19. Open-string models with broken supersymmetry
International Nuclear Information System (INIS)
Sagnotti, A.
2002-01-01
I review the salient features of three classes of open-string models with broken supersymmetry. These suffice to exhibit, in relatively simple settings, the two phenomena of 'brane supersymmetry' and 'brane supersymmetry breaking'. In the first class of models, to lowest order supersymmetry is broken both in the closed and in the open sectors. In the second class of models, to lowest order supersymmetry is broken in the closed sector, but is exact in the open sector, at least for the low-lying modes, and often for entire towers of string excitations. Finally, in the third class of models, to lowest order supersymmetry is exact in the closed (bulk) sector, but is broken in the open sector. Brane supersymmetry breaking provides a natural solution to some old difficulties met in the construction of open-string vacua. (author)
20. Open-string models with broken supersymmetry
International Nuclear Information System (INIS)
Sagnotti, Augusto
2000-01-01
We review the salient features of three classes of open-string models with broken supersymmetry. These suffice to exhibit, in relatively simple settings, the two phenomena of 'brane supersymmetry' and 'brane supersymmetry breaking'. In the first class of models, to lowest order supersymmetry is broken both in the closed and in the open sectors. In the second class of models, to lowest order supersymmetry is broken in the closed sector, but is exact in the open sector, at least for the low-lying modes, and often for entire towers of string excitations. Finally, in the third class of models, to lowest order supersymmetry is exact in the closed (bulk) sector, but is broken in the open sector. Brane supersymmetry breaking provides a natural solution to some old difficulties met in the construction of open-string vacua
1. Subconjunctival Hemorrhage (Broken Blood Vessel in Eye)
Science.gov (United States)
Subconjunctival hemorrhage (broken blood vessel in eye) Overview A subconjunctival hemorrhage (sub-kun-JUNK-tih-vul HEM-uh-ruj) ... may not even realize you have a subconjunctival hemorrhage until you look in the mirror and notice ...
2. Phenomenology of muon number violation in spontaneously broken gauge theories
International Nuclear Information System (INIS)
Shanker, O.U.
1980-01-01
The phenomenology of muon number violation in gauge theories of weak and electromagnetic interactions is studied. In the first chapter a brief introduction to the concept of muon number and to spontaneously broken gauge theories is given. A review of the phenomenology and experimental situation regarding different muon number violating processes is made in the second chapter. A detailed phenomenological study of the μe conversion process μ⁻ + (A,Z) → e⁻ + (A,Z) is given in the third chapter. In the fourth chapter some specific gauge theories incorporating spontaneously broken horizontal gauge symmetries between different fermion generations are discussed with special reference to muon number violation in the theories. The μe conversion process seems to be a good process to search for muon number violation if it occurs. The K_L–K_S mass difference is likely to constrain muon number violating rates to lie far below present experimental limits unless strangeness-changing neutral currents changing strangeness by two units are suppressed.
3. Quark diquark symmetry breaking
International Nuclear Information System (INIS)
Souza, M.M. de
1980-01-01
Assuming the baryons are made of quark-diquark pairs, the wave functions for the 126 allowed ground states are written. The quark creation and annihilation operators are generalized to describe the quark-diquark structure in terms of a parameter σ. Assuming that all quark-quark interactions are mediated by gluons transforming like an octet of vector mesons, the effective Hamiltonian and the baryon masses as constraint equations for the elements of the mass matrix are written. The symmetry is SU(6)_quark × SU(21)_diquark, broken by quark-quark interactions respectively invariant under U(6), U(2)_spin, U(3), and also interactions transforming like the eighth and the third components of SU(3). In the limit of no quark-diquark structure (σ = 0), the ground state masses are fitted to within 1% of the experimental data, except for the Δ(1232), where the error is almost 2%. Expanding the decuplet mass equations in terms of σ and keeping terms only up to the second order, this error is reduced to 67%. (Author) [pt
4. Introduction to symmetry-breaking phenomena in physics
CERN Multimedia
CERN. Geneva. Audiovisual Unit
2001-01-01
The notion of broken symmetries slowly started to emerge in the 19th century. The early studies of Pasteur on the parity asymmetry of life and the studies of Curie on piezoelectricity and on the symmetries of effects versus the symmetries of causes (which clearly excluded spontaneous symmetry breaking) are important historical landmarks. However, the possibility of spontaneous symmetry breaking within the usual principles of statistical mechanics waited for the work of Peierls and Onsager. The whole theory of phase transitions and critical phenomena, as well as the construction of field-theoretic models as long-distance limits of yet unknown physics, relies nowadays on the concept of criticality associated with spontaneous symmetry breaking. The phenomena of Goldstone bosons and of Meissner-Higgs effects are central to the theory of condensed matter as well as to particle physics. In cosmology as well, the various inflationary scenarios begin similarly with this same concept. The three lectures will provide a simple ...
5. Scale-chiral symmetry, ω meson, and dense baryonic matter
Science.gov (United States)
Ma, Yong-Liang; Rho, Mannque
2018-05-01
It is shown that explicitly broken scale symmetry is essential for dense skyrmion matter in hidden local symmetry theory. Consistency with the vector manifestation fixed point for the hidden local symmetry of the lowest-lying vector mesons and the dilaton limit fixed point for scale symmetry in dense matter is found to require that the anomalous dimension |γ_{G²}| of the gluon field strength tensor squared (G²), which represents the quantum trace anomaly, should satisfy 1.0 ≲ |γ_{G²}| ≲ 3.5. The magnitude of |γ_{G²}| estimated here will be useful for studying hadron and nuclear physics based on the scale-chiral effective theory. More significantly, that the dilaton limit fixed point can be arrived at with γ_{G²} ≠ 0 at some high density signals that scale symmetry can arise in dense medium as an "emergent" symmetry.
6. Chiral symmetry breaking and confinement - solutions of relativistic wave equations
International Nuclear Information System (INIS)
Murugesan, P.
1983-01-01
In this thesis, an attempt is made to explore the question whether confinement automatically leads to chiral symmetry breaking. While it should be accepted that chiral symmetry breaking manifests in nature in the absence of scalar partners of pseudoscalar mesons, it does not necessarily follow that confinement should lead to chiral symmetry breaking. If chiral-conserving forces give rise to the observed spectrum of hadrons, then the conjecture that confinement is responsible for chiral symmetry breaking is not valid. The method employed to answer the question whether confinement leads to chiral symmetry breaking or not is to solve relativistic wave equations by introducing chiral-conserving as well as chiral-breaking confining potentials and compare the results with experimental observations. It is concluded that even though chiral symmetry is broken in nature, confinement of quarks need not be the cause of it.
7. Dark matter and global symmetries
Directory of Open Access Journals (Sweden)
Yann Mambrini
2016-09-01
Full Text Available General considerations in general relativity and quantum mechanics are known to potentially rule out continuous global symmetries in the context of any consistent theory of quantum gravity. Assuming the validity of such considerations, we derive stringent bounds from gamma-ray, X-ray, cosmic-ray, neutrino, and CMB data on models that invoke global symmetries to stabilize the dark matter particle. We compute up-to-date, robust model-independent limits on the dark matter lifetime for a variety of Planck-scale suppressed dimension-five effective operators. We then specialize our analysis and apply our bounds to specific models including the Two-Higgs-Doublet, Left–Right, Singlet Fermionic, Zee–Babu, 3-3-1 and Radiative See-Saw models. Assuming that (i) global symmetries are broken at the Planck scale, that (ii) the non-renormalizable operators mediating dark matter decay have O(1) couplings, that (iii) the dark matter is a singlet field, and that (iv) the dark matter density distribution is well described by an NFW profile, we are able to rule out fermionic, vector, and scalar dark matter candidates across a broad mass range (keV–TeV), including the WIMP regime.
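The kind of estimate behind such bounds can be sketched by dimensional analysis. Assuming, as the abstract does, a dimension-five operator suppressed by one power of the Planck mass with an O(1) coupling, the naive decay width scales as Γ ~ m³/M_Pl². The numbers below are illustrative, not the paper's limits:

```python
# Naive width for dark matter decaying through a dimension-five,
# Planck-suppressed operator with an O(1) coupling: Gamma ~ m^3 / M_Pl^2.
M_PL = 1.22e19        # Planck mass, GeV
HBAR = 6.58e-25       # hbar in GeV * s
T_UNIVERSE = 4.35e17  # approximate age of the universe, s

def lifetime_seconds(m_dm_gev):
    """Lifetime tau = hbar / Gamma with Gamma ~ m^3 / M_Pl^2."""
    gamma = m_dm_gev**3 / M_PL**2   # width in GeV
    return HBAR / gamma

# keV-, GeV- and TeV-scale candidates. Indirect-detection data typically
# demand lifetimes many orders of magnitude beyond the age of the
# universe, which is how even long-lived light candidates get constrained.
for m in (1e-6, 1.0, 1e3):
    tau = lifetime_seconds(m)
    print(f"m = {m:g} GeV -> tau ~ {tau:.1e} s "
          f"({tau / T_UNIVERSE:.1e} x age of universe)")
```

A GeV-scale candidate decays on a timescale far shorter than the age of the universe under these assumptions, which is why such operators are so constraining.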
8. Softly Broken Lepton Numbers: an Approach to Maximal Neutrino Mixing
International Nuclear Information System (INIS)
Grimus, W.; Lavoura, L.
2001-01-01
We discuss models where the U(1) symmetries of lepton numbers are responsible for maximal neutrino mixing. We pay particular attention to an extension of the Standard Model (SM) with three right-handed neutrino singlets in which we require that the three lepton numbers L_e, L_μ, and L_τ be separately conserved in the Yukawa couplings, but assume that they are softly broken by the Majorana mass matrix M_R of the neutrino singlets. In this framework, where lepton-number breaking occurs at a scale much higher than the electroweak scale, deviations from family lepton number conservation are calculable, i.e., finite, and lepton mixing stems exclusively from M_R. We show that in this framework either maximal atmospheric neutrino mixing or maximal solar neutrino mixing or both can be imposed by invoking symmetries. In this way those maximal mixings are stable against radiative corrections. The model which achieves maximal (or nearly maximal) solar neutrino mixing assumes that there are two different scales in M_R and that the lepton number L̄ = L_e − L_μ − L_τ is conserved in between them. We work out the difference between this model and the conventional scenario where (approximate) L̄ invariance is imposed directly on the mass matrix of the light neutrinos. (author)
9. Spontaneous breakdown of PT symmetry in the complex Coulomb ...
PT symmetry is spontaneously broken, however, for complex values of the form L = −1/2 + iλ. In this case the potential remains PT-symmetric, while the two independent solutions are transformed to each other by the PT operation and, at the same time, the two series of discrete energy eigenvalues turn into each ...
10. Nonperturbative calculation of symmetry breaking in quantum field theory
OpenAIRE
Bender, Carl M.; Milton, Kimball A.
1996-01-01
A new version of the delta expansion is presented, which, unlike the conventional delta expansion, can be used to do nonperturbative calculations in a self-interacting scalar quantum field theory having broken symmetry. We calculate the expectation value of the scalar field to first order in delta, where delta is a measure of the degree of nonlinearity in the interaction term.
11. Topological symmetry breakdown in cholesterics, nematics, and 3He
International Nuclear Information System (INIS)
Balachandran, A.P.; Lizzi, F.; Rodgers, V.G.J.
1984-01-01
Cholesterics, uniaxial and biaxial nematics, and the dipole-free A phase of superfluid ³He are characterized by order parameters which are left invariant by suitable ''symmetry'' groups H. We show that in the presence of defects, the full group H may not be implementable on the states because of topological obstructions. Thus H is topologically broken in the presence of suitable defects.
12. CP and other gauge symmetries in string theory
International Nuclear Information System (INIS)
Dine, M.; Leigh, R.G.; MacIntire, D.A.
1992-01-01
We argue that CP is a gauge symmetry in string theory. As a consequence, CP cannot be explicitly broken either perturbatively or nonperturbatively; there can be no nonperturbative CP-violating parameters. String theory is thus an example of a theory where all θ angles arise due to spontaneous CP violation, and are in principle calculable
13. Symmetries in eleven dimensional supergravity compactified on a parallelized seven sphere
CERN Document Server
Englert, F; Spindel, P
1983-01-01
We analyse, in eleven-dimensional supergravity compactified on S⁷, the spontaneous symmetry breaking induced by a spontaneous parallelization of the sphere. The eight supersymmetries are broken at a common scale and the SO(8) gauge group is reduced to Spin(7). Such a large residual symmetry has a simple geometrical significance revealed through use of octonions; this is explained in elementary terms.
14. Bosonization, dual transformation and non-local hidden symmetry in two dimensions
International Nuclear Information System (INIS)
Hata, Hiroyuki
1985-01-01
The non-local hidden symmetry is investigated in the bosonized non-abelian Thirring model and the dual representation of the chiral model. In these representations the first non-local symmetry is spontaneously broken in naive perturbation theory. (orig.)
15. Large lepton mixings from continuous symmetries
International Nuclear Information System (INIS)
Everett, Lisa; Ramond, Pierre
2007-01-01
Within the broad context of quark-lepton unification, we investigate the implications of broken continuous family symmetries which result from requiring that in the limit of exact symmetry, the Dirac mass matrices yield hierarchical masses for the quarks and charged leptons, but lead to degenerate light neutrino masses as a consequence of the seesaw mechanism, without requiring hierarchical right-handed neutrino mass terms. Quark mixing is then naturally small and proportional to the size of the perturbation, but lepton mixing is large as a result of degenerate perturbation theory, shifted from maximal mixing by the size of the perturbation. Within this approach, we study an illustrative two-family prototype model with an SO(2) family symmetry, and discuss extensions to three-family models
16. The symmetry of large N=4 holography
International Nuclear Information System (INIS)
Gaberdiel, Matthias R.; Peng, Cheng
2014-01-01
For the proposed duality relating a family of N=4 superconformal coset models to a certain supersymmetric higher spin theory on AdS_3, the asymptotic symmetry algebra of the bulk description is determined. It is shown that, depending on the choice of the boundary charges, one may obtain either the linear or the non-linear superconformal algebra on the boundary. We compare the non-linear version of the asymptotic symmetry algebra with the non-linear coset algebra and find non-trivial agreement in the ’t Hooft limit, thus giving strong support for the proposed duality. As a by-product of our analysis we also show that the W_∞ symmetry of the coset theory is broken under the exactly marginal perturbation that preserves the N=4 superconformal algebra
17. Recurrence and symmetry of time series: Application to transition detection
International Nuclear Information System (INIS)
Girault, Jean-Marc
2015-01-01
Highlights:
•A new theoretical framework based on the symmetry concept is proposed.
•Four types of symmetry present in any time series were analyzed.
•New descriptors make possible the analysis of regime changes in logistic systems.
•Chaos–chaos, chaos–periodic, symmetry-breaking and symmetry-increasing bifurcations can be detected.
Abstract: The study of transitions in low-dimensional, nonlinear dynamical systems is a complex problem for which there is not yet a simple, global numerical method able to detect chaos–chaos, chaos–periodic bifurcations and symmetry-breaking, symmetry-increasing bifurcations. We present here for the first time a general framework focusing on the symmetry concept of time series that at the same time reveals new kinds of recurrence. We propose several numerical tools based on the symmetry concept allowing both the qualification and quantification of different kinds of possible symmetry. By using several examples based on periodic symmetrical time series and on logistic and cubic maps, we show that it is possible with simple numerical tools to detect a large number of bifurcations of chaos–chaos, chaos–periodic, broken-symmetry and increased-symmetry types.
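The paper's own descriptors are not reproduced here; as an illustrative stand-in, a standard time-reversal asymmetry statistic already separates a reversible periodic series from the chaotic logistic map (the statistic, map parameter and seed are this sketch's assumptions, not the authors' tools):

```python
import math

def trev(series):
    """Time-reversal asymmetry statistic
    E[(x_{t+1} - x_t)^3] / E[(x_{t+1} - x_t)^2]^(3/2);
    it vanishes for series that are statistically reversible."""
    d = [b - a for a, b in zip(series, series[1:])]
    m2 = sum(v * v for v in d) / len(d)
    m3 = sum(v ** 3 for v in d) / len(d)
    return m3 / m2 ** 1.5

# A reversible, periodic series: exactly 100 periods of a sinusoid.
sine = [math.sin(2 * math.pi * t / 50) for t in range(5001)]

# The logistic map x_{n+1} = a x_n (1 - x_n) in its chaotic regime.
x, a, logistic = 0.4, 4.0, []
for _ in range(5001):
    x = a * x * (1 - x)
    logistic.append(x)

print(f"sinusoid : {abs(trev(sine)):.4f}")      # ~ 0
print(f"logistic : {abs(trev(logistic)):.4f}")  # clearly nonzero
```

A descriptor of this kind, tracked as a map parameter varies, is the sort of quantity that can flag the symmetry-breaking and symmetry-increasing transitions the abstract describes.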
18. The geometric role of symmetry breaking in gravity
International Nuclear Information System (INIS)
Wise, Derek K
2012-01-01
In gravity, breaking symmetry from a group G to a group H plays the role of describing geometry in relation to the geometry of the homogeneous space G/H. The deep reason for this is Cartan's 'method of equivalence,' giving, in particular, an exact correspondence between metrics and Cartan connections. I argue that broken symmetry is thus implicit in any gravity theory, for purely geometric reasons. As an application, I explain how this kind of thinking gives a new approach to Hamiltonian gravity in which an observer field spontaneously breaks Lorentz symmetry and gives a Cartan connection on space.
19. Spontaneous symmetry breaking in 4-dimensional heterotic string
International Nuclear Information System (INIS)
Maharana, J.
1989-07-01
The evolution of a 4-dimensional heterotic string is considered in the background of its massless excitations such as graviton, antisymmetric tensor, gauge fields and scalar bosons. The compactified bosonic coordinates are fermionized. The world-sheet supersymmetry requirement enforces Thirring-like four fermion coupling to the background scalar fields. The non-abelian gauge symmetry is exhibited through the Ward identities of the S-matrix elements. The spontaneous symmetry breaking mechanism is exhibited through the broken Ward identities. An effective 4-dimensional action is constructed and the consequence of spontaneous symmetry breaking is envisaged for the effective action. 19 refs
20. Dynamics of symmetry breaking during quantum real-time evolution in a minimal model system.
Science.gov (United States)
Heyl, Markus; Vojta, Matthias
2014-10-31
One necessary criterion for the thermalization of a nonequilibrium quantum many-particle system is ergodicity. It is, however, not sufficient in cases where the asymptotic long-time state lies in a symmetry-broken phase but the initial state of nonequilibrium time evolution is fully symmetric with respect to this symmetry. In equilibrium, one particular symmetry-broken state is chosen as a result of an infinitesimal symmetry-breaking perturbation. From a dynamical point of view the question is: Can such an infinitesimal perturbation be sufficient for the system to establish a nonvanishing order during quantum real-time evolution? We study this question analytically for a minimal model system that can be associated with symmetry breaking, the ferromagnetic Kondo model. We show that after a quantum quench from a completely symmetric state the system is able to break its symmetry dynamically and discuss how these features can be observed experimentally.
1. Symmetry breaking in gauge glasses
International Nuclear Information System (INIS)
Hansen, K.
1988-09-01
In order to explain why nature selects the gauge groups of the Standard Model, Brene and Nielsen have proposed a way to break gauge symmetry which does not rely on the existence of a Higgs field. The observed gauge groups will in this scheme appear as the only surviving ones when this mechanism is applied to a random selection of gauge groups. The essential assumption is a discrete space-time with random couplings. Some working assumptions were made for computational reasons, of which the most important is that quantum fluctuations were neglected. This work presents an example which, under the same conditions, shows that a much wider class of groups than predicted by Brene and Nielsen will be broken. In particular no possible Standard Model Group survives unbroken. Numerical calculations support the analytical result. (orig.)
2. Chiral symmetry and chiral-symmetry breaking
International Nuclear Information System (INIS)
Peskin, M.E.
1982-12-01
These lectures concern the dynamics of fermions in strong interaction with gauge fields. Systems of fermions coupled by gauge forces have a very rich structure of global symmetries, which are called chiral symmetries. These lectures will focus on the realization of chiral symmetries and the causes and consequences of their spontaneous breaking. A brief introduction to the basic formalism and concepts of chiral symmetry breaking is given, then some explicit calculations of chiral symmetry breaking in gauge theories are given, treating first parity-invariant and then chiral models. These calculations are meant to be illustrative rather than accurate; they make use of unjustified mathematical approximations which serve to make the physics more clear. Some formal constraints on chiral symmetry breaking are discussed which illuminate and extend the results of our more explicit analysis. Finally, a brief review of the phenomenological theory of chiral symmetry breaking is presented, and some applications of this theory to problems in weak-interaction physics are discussed
3. Vacuum solutions of a gravity model with vector-induced spontaneous Lorentz symmetry breaking
International Nuclear Information System (INIS)
Bertolami, O.; Paramos, J.
2005-01-01
We study the vacuum solutions of a gravity model where Lorentz symmetry is spontaneously broken once a vector field acquires a vacuum expectation value. Results are presented for the purely radial Lorentz symmetry breaking (LSB), radial/temporal LSB and axial/temporal LSB. The purely radial LSB result corresponds to new black hole solutions. When possible, parametrized post-Newtonian parameters are computed and observational boundaries used to constrain the Lorentz symmetry breaking scale
4. e⁺e⁻ modes and U(1) spontaneous chiral symmetry breaking
International Nuclear Information System (INIS)
Steininger, K.
1992-01-01
In this paper, motivated by evidence for a chiral phase transition in strong coupling lattice QED, the authors calculate the two-particle spectrum of the broken QED phase. This is done in the framework of a Nambu and Jona-Lasinio model with U(1) symmetry including chiral symmetry and symmetry breaking properties of QED. The second-order chiral phase transition behavior in our model and in lattice QED are in excellent agreement. The authors then present a detailed analysis of the spectra of the e⁺e⁻ modes in the broken phase. The authors examine whether these modes have any possible relationship to the narrow e⁺e⁻ resonances found in soft heavy ion collisions at GSI. The authors' answer is negative.
5. Spontaneously broken version of N=4 supersymmetry
International Nuclear Information System (INIS)
Terent'ev, M.V.
1989-01-01
The special scenario of reduction from the space of D=10 dimensions is used to construct a theory which describes the interaction of supergravity with only one multiplet of matter in the framework of spontaneously broken N=4 supersymmetry. 6 refs.; 1 fig.
6. Of Slot Machines and Broken Test Tubes
Resonance – Journal of Science Education, Volume 19, Issue 5, May 2014, pp. 395–405. General Article by S Mahadevan. Permanent link: https://www.ias.ac.in/article/fulltext/reso/019/05/0395-0405
7. Symmetries and nuclei
International Nuclear Information System (INIS)
Henley, E.M.
1987-01-01
Nuclei are very useful for testing symmetries, and for studies of symmetry breaking. This thesis is illustrated for two improper space-time transformations, parity and time-reversal and for one internal symmetry: charge symmetry and independence. Recent progress and present interest is reviewed. 23 refs., 8 figs., 2 tabs
8. Symmetries and symmetry breaking beyond the electroweak theory
International Nuclear Information System (INIS)
Grojean, Ch.
1999-01-01
The Glashow-Salam-Weinberg theory describing electroweak interactions is one of the best successes of quantum field theory; it has passed all the experimental tests of particle physics with high accuracy. However, this theory suffers from some deficiencies in the sense that some parameters, especially those involved in the generation of the mass of the elementary particles, are fixed to unnatural values. Moreover, gravitation, whose quantization cannot be achieved in ordinary quantum field theory, is not taken into account. The aim of this PhD dissertation is to study some theories beyond the Standard Model inspired by superstring theories. My endeavour has been to develop theoretical aspects of an effective dynamical description of one of the solitonic states of the strongly coupled strings. An important part of my results is also devoted to a more phenomenological analysis of the low-energy effects of the symmetries that ensure the coherence of the theories at high energy: these symmetries could explain the fermion mass hierarchy and could be directly observable in collider experiments. It is also shown how the geometrical properties of compactified spaces characterize the vacuum of string theory in a non-perturbative regime; such a vacuum can be used to construct a unified theory of gauge and gravitational interactions with a supersymmetry softly broken at a TeV scale. (author)
9. Contact symmetries and Hamiltonian thermodynamics
International Nuclear Information System (INIS)
Bravetti, A.; Lopez-Monsalvo, C.S.; Nettel, F.
2015-01-01
It has been shown that contact geometry is the proper framework underlying classical thermodynamics and that thermodynamic fluctuations are captured by an additional metric structure related to Fisher’s Information Matrix. In this work we analyse several unaddressed aspects about the application of contact and metric geometry to thermodynamics. We consider here the Thermodynamic Phase Space and start by investigating the role of gauge transformations and Legendre symmetries for metric contact manifolds and their significance in thermodynamics. Then we present a novel mathematical characterization of first order phase transitions as equilibrium processes on the Thermodynamic Phase Space for which the Legendre symmetry is broken. Moreover, we use contact Hamiltonian dynamics to represent thermodynamic processes in a way that resembles the classical Hamiltonian formulation of conservative mechanics and we show that the relevant Hamiltonian coincides with the irreversible entropy production along thermodynamic processes. Therefore, we use such property to give a geometric definition of thermodynamically admissible fluctuations according to the Second Law of thermodynamics. Finally, we show that the length of a curve describing a thermodynamic process measures its entropy production
10. Symmetry breaking and scalar bosons
International Nuclear Information System (INIS)
Gildener, E.; Weinberg, S.
1976-01-01
There are reasons to suspect that the spontaneous breakdown of the gauge symmetries of the observed weak and electromagnetic interactions may be produced by the vacuum expectation values of massless weakly coupled elementary scalar fields. A method is described for finding the broken-symmetry solutions of such theories even when they contain arbitrary numbers of scalar fields with unconstrained couplings. In any such theory, there should exist a number of heavy Higgs bosons, with masses comparable to the intermediate vector bosons, plus one light Higgs boson, or ''scalon'', with mass of order αG_F^{−1/2}. The mass and couplings of the scalon are calculable in terms of other masses, even without knowing all the details of the theory. For an SU(2)⊗U(1) model with arbitrary numbers of scalar isodoublets, the scalon mass is greater than 5.26 GeV; a likely value is 7–10 GeV. The production and decay of the scalon are briefly considered. Some comments are offered on the relation between the mass scales associated with the weak and strong interactions
11. Patterns of symmetry breaking in chiral QCD
Science.gov (United States)
Bolognesi, Stefano; Konishi, Kenichi; Shifman, Mikhail
2018-05-01
We consider SU(N) Yang-Mills theory with massless chiral fermions in a complex representation of the gauge group. The main emphasis is on the so-called hybrid ψχη model. The possible patterns of realization of the continuous chiral flavor symmetry are discussed. We argue that the chiral symmetry is broken in conjunction with a dynamical Higgsing of the gauge group (complete or partial) by bifermion condensates. As a result a color-flavor locked symmetry is preserved. The 't Hooft anomaly matching proceeds via saturation of triangles by massless composite fermions or, in a mixed mode, i.e. also by the "weakly" coupled fermions associated with dynamical Abelianization, supplemented by a number of Nambu-Goldstone mesons. Gauge-singlet condensates are of the multifermion type and, though it cannot be excluded, the chiral symmetry realization via such gauge invariant condensates is more contrived (requires a number of four-fermion condensates simultaneously and, even so, problems remain) and less plausible. We conclude that in the model at hand, chiral flavor symmetry implies dynamical Higgsing by bifermion condensates.
12. The symmetry of man.
Science.gov (United States)
Ermolenko, Alexander E; Perepada, Elena A
2007-01-01
The paper contains a description of basic regularities in the manifestation of symmetry of human structural organization and its ontogenetic and phylogenetic development. A concept of macrobiocrystalloid with inherent complex symmetry is proposed for the description of the human organism in its integrity. The symmetry can be characterized as two-plane radial (quadrilateral), where the planar symmetry is predominant while the layout of organs of radial symmetry is subordinated to it. Of the two planes of symmetry (sagittal and horizontal), the sagittal plane is predominant. The symmetry of the chromosome, of the embryo at the early stages of cell cleavage, as well as of some organs and systems in their phylogenetic development is described. A hypothesis is postulated that the two-plane symmetry is formed by two mechanisms: a) the impact of morphogenetic fields of the whole crystalloid organism during embryogenesis and, b) genetic mechanisms of the development of chromosomes having two-plane symmetry.
13. Simple Technique for Removing Broken Pedicular Screws
Directory of Open Access Journals (Sweden)
A Agrawal
2014-03-01
Full Text Available The procedure for removing a broken pedicle screw should ideally be technically easy and minimally invasive, as any damage to the pedicle during removal of the broken screw may weaken the pedicle, thus compromising the success of re-instrumentation. We describe the case of a 32-year-old man who had undergone surgery for a traumatic third lumbar vertebral body fracture three years prior to the current admission and had developed the complication of pedicle screw breakage within the vertebral body. The patient underwent re-exploration and removal of the distal screws. Through a paravertebral incision and muscle separation, the screws and rods were exposed and the implants were removed.
14. Need for spontaneous breakdown of chiral symmetry
International Nuclear Information System (INIS)
Salomone, A.; Schechter, J.; Tudron, T.
1981-01-01
The question of whether the chiral symmetry of the theory of strong interactions (with massless quarks) is required to be spontaneously broken is examined in the framework of a previously discussed effective Lagrangian for quantum chromodynamics. The assumption that physical masses of the theory be finite leads in a very direct way to the necessity of spontaneous breakdown. This result holds for all N_F ≥ 2, where N_F is the number of different flavors of light quarks. The atypical cases N_F = 1, 2 are discussed separately
15. New particles and breaking the colour symmetry
International Nuclear Information System (INIS)
Krolikowski, W.
1975-01-01
In the framework of one-gluon-exchange static forces mediated by a colour octet or nonet of vector gluons, we discuss quark binding in coloured-meson states and its connection with breaking the colour symmetry. A possible identification of ψ(3.1), ψ(3.7) and the broad bump at 4.1 GeV with some coloured bound states of quarks and antiquarks is pointed out. This identification implies the existence of a second bump in the region of 5 GeV. The general conclusion of the paper is that the colour interpretation of the new particles may be true only if the colour symmetry is badly broken (provided the considered forces are relevant). (author)
16. Lepton flavor violation and seesaw symmetries
Energy Technology Data Exchange (ETDEWEB)
Aristizabal Sierra, D., E-mail: daristizabal@ulg.ac.be [Universite de Liege, IFPA, Department AGO (Belgium)
2013-03-15
When the standard model is extended with right-handed neutrinos the symmetries of the resulting Lagrangian are enlarged with a new global U(1)_R Abelian factor. In the context of minimal seesaw models we analyze the implications of a slightly broken U(1)_R symmetry on charged lepton flavor violating decays. We find, depending on the R-charge assignments, models where charged lepton flavor violating rates can be within measurable ranges. In particular, we show that in the resulting models, due to the structure of the light neutrino mass matrix, muon flavor violating decays are entirely determined by neutrino data (up to a normalization factor) and can be sizable in a wide right-handed neutrino mass range.
17. Broken Homes: Stable Risk, Changing Reasons, Changing Forms.
Science.gov (United States)
Sweetser, Dorrian Apple
1985-01-01
Cohort membership and two measures of social disadvantage were used as explanatory variables in analysis of the risk of growing up in a broken home and of the living arrangements of children with broken homes. The risk of a broken home by age 16 proved to be stable across cohorts and greater for those from disadvantaged homes. (Author/BL)
18. Origin of family symmetries
International Nuclear Information System (INIS)
Nilles, Hans Peter
2012-04-01
Discrete (family) symmetries might play an important role in models of elementary particle physics. We discuss the origin of such symmetries in the framework of consistent ultraviolet completions of the standard model in field and string theory. The symmetries can arise due to special geometrical properties of extra compact dimensions and the localization of fields in this geometrical landscape. We also comment on anomaly constraints for discrete symmetries.
19. Origin of family symmetries
Energy Technology Data Exchange (ETDEWEB)
Nilles, Hans Peter [Bonn Univ. (Germany). Bethe Center for Theoretical Physics; Bonn Univ. (Germany). Physikalisches Inst.; Ratz, Michael [Technische Univ. Muenchen, Garching (Germany). Physik-Department; Vaudrevange, Patrick K.S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2012-04-15
Discrete (family) symmetries might play an important role in models of elementary particle physics. We discuss the origin of such symmetries in the framework of consistent ultraviolet completions of the standard model in field and string theory. The symmetries can arise due to special geometrical properties of extra compact dimensions and the localization of fields in this geometrical landscape. We also comment on anomaly constraints for discrete symmetries.
20. Strength Assessment of Broken Rock Postgrouting Reinforcement Based on Initial Broken Rock Quality and Grouting Quality
Directory of Open Access Journals (Sweden)
Hongfa Xu
2017-01-01
Full Text Available Estimating postgrouting rock mass strength growth is important for engineering design. In this paper, using self-developed indoor pressure-grouting devices, 19 groups of test cubic blocks were made by grouting different water-cement ratios into broken rock of three particle sizes. The shear strength parameters of each group under different conditions were tested. This paper then presents a quantitative calculation method for predicting the strength growth of grouted broken rock. Relational equations were developed to investigate the relationship between the growth rates of uniaxial compressive strength (UCS), absolute value of uniaxial tensile strength (AUTS), internal friction angle, and cohesion for post- to pregrouting broken rock based on the Mohr-Coulomb strength criterion. From previous test data, the empirical equation between the growth rate of UCS and the ratio of the initial rock mass UCS to the grout concretion UCS has been determined. Equations for the growth rates of the internal friction coefficient and UCS of grouted broken rock in terms of rock mass rating (RMR) and its increment have been established. The calculated results are consistent with the experimental results. These observations are important for the engineering design of grouting reinforcement for broken rock mass.
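The Mohr-Coulomb criterion underlying the abstract above relates shear strength to normal stress through cohesion and the internal friction angle. A minimal sketch of comparing pre- and postgrouting strength under that criterion; all parameter values are hypothetical illustrations, not the paper's data:

```python
import math

def shear_strength(sigma_n, cohesion, friction_angle_deg):
    """Mohr-Coulomb shear strength: tau = c + sigma_n * tan(phi)."""
    return cohesion + sigma_n * math.tan(math.radians(friction_angle_deg))

# Hypothetical parameter sets (MPa, degrees) for broken rock before and
# after grouting; illustrative values only.
sigma_n = 2.0  # normal stress on the shear plane, MPa
tau_pre = shear_strength(sigma_n, cohesion=0.1, friction_angle_deg=30.0)
tau_post = shear_strength(sigma_n, cohesion=0.8, friction_angle_deg=38.0)
growth_rate = (tau_post - tau_pre) / tau_pre  # fractional strength growth
```

Grouting raises both cohesion and the friction angle, so the growth rate is positive for any positive normal stress in this sketch.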
1. Fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models
Energy Technology Data Exchange (ETDEWEB)
Foda, O.E. (Purdue Univ., Lafayette, IN (USA). Dept. of Physics)
1983-04-28
We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes.
2. The fine-tuning problem in renormalized perturbation theory: Spontaneously-broken gauge models
International Nuclear Information System (INIS)
Foda, O.E.
1983-01-01
We study the stability of tree-level gauge hierarchies at higher orders in renormalized perturbation theory, in a model with spontaneously-broken gauge symmetries. We confirm previous results indicating that if the model is renormalized using BPHZ, then the tree-level hierarchy is not upset by the radiative corrections. Consequently, no fine-tuning of the initial parameters is required to maintain it, in contrast to the result obtained using Dimensional Renormalization. This verifies the conclusion that the need for fine-tuning, when it arises, is an artifact of the application of a certain class of renormalization schemes. (orig.)
3. Symmetry, asymmetry and dissymmetry
International Nuclear Information System (INIS)
Wackenheim, A.; Zollner, G.
1987-01-01
The authors discuss the concept of symmetry and defect of symmetry in radiological imaging and recall the definition of asymmetry (congenital or constitutional) and dissymmetry (acquired). They then describe a rule designed for the cognitive method of automatic evaluation of shape recognition data and propose the use of reversal symmetry [fr
4. Symmetry and electromagnetism
International Nuclear Information System (INIS)
Fuentes Cobas, L.E.; Font Hernandez, R.
1993-01-01
An analytical treatment of electrostatic and magnetostatic field symmetry, as a function of charge and current distribution symmetry, is proposed. The Neumann Principle, related to the cause-effect symmetry relation, is presented and applied to the characterization of simple configurations. (Author) 5 refs
5. Weak C* Hopf Symmetry
OpenAIRE
Rehren, K. -H.
1996-01-01
Weak C* Hopf algebras can act as global symmetries in low-dimensional quantum field theories, when braid group statistics prevents group symmetries. Possibilities to construct field algebras with weak C* Hopf symmetry from a given theory of local observables are discussed.
6. Spin-rotation symmetry breaking and triplet superconducting state in doped topological insulator CuxBi2Se3
Science.gov (United States)
Zheng, Guo-Qing
Spontaneous symmetry breaking is an important concept for understanding physics ranging from the elementary particles to states of matter. For example, the superconducting state breaks global gauge symmetry, and unconventional superconductors can break additional symmetries. In particular, spin rotational symmetry is expected to be broken in spin-triplet superconductors. However, experimental evidence for such symmetry breaking has not been obtained so far in any candidate compounds. We report 77Se nuclear magnetic resonance measurements which showed that spin rotation symmetry is spontaneously broken in the hexagonal plane of the electron-doped topological insulator Cu0.3Bi2Se3 below the superconducting transition temperature Tc =3.4 K. Our results not only establish spin-triplet (odd parity) superconductivity in this compound, but also serve to lay a foundation for the research of topological superconductivity (Ref.). We will also report the doping mechanism and superconductivity in Sn1-xInxTe.
7. No-go for tree-level R-symmetry breaking
Energy Technology Data Exchange (ETDEWEB)
Liu, Feihu [University of Electronic Science and Technology of China, School of Physical Electronics, Chengdu (China); Liu, Muyang [Sichuan University, Center for Theoretical Physics, College of Physical Science and Technology, Chengdu (China); Sun, Zheng [Sichuan University, Center for Theoretical Physics, College of Physical Science and Technology, Chengdu (China); Chinese Academy of Sciences, CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China)
2017-11-15
We show that in gauge mediation models with tree-level R-symmetry breaking where supersymmetry and R-symmetries are broken by different fields, the gaugino mass either vanishes at one loop or finds a contribution from loop-level R-symmetry breaking. Thus tree-level R-symmetry breaking for phenomenology is either no-go or redundant in the simplest type of models. Including explicit messenger mass terms in the superpotential with a particular R-charge arrangement is helpful to bypass the no-go theorem, and the resulting gaugino mass is suppressed by the messenger mass scale. (orig.)
8. Performance improvements of symmetry-breaking reflector structures in nonimaging devices
Science.gov (United States)
Winston, Roland
2004-01-13
A structure and method for providing a broken symmetry reflector structure for a solar concentrator device. The component of the optical direction vector along the symmetry axis is conserved for all rays propagated through a translationally symmetric optical device. This quantity, referred to as the translational skew invariant, is conserved in rotationally symmetric optical systems. Performance limits for translationally symmetric nonimaging optical devices are derived from the distributions of the translational skew invariant for the optical source and for the target to which flux is to be transferred. A numerically optimized non-tracking solar concentrator utilizing symmetry-breaking reflector structures can overcome the performance limits associated with translational symmetry.
9. Symmetry in running.
Science.gov (United States)
Raibert, M H
1986-03-14
Symmetry plays a key role in simplifying the control of legged robots and in giving them the ability to run and balance. The symmetries studied describe motion of the body and legs in terms of even and odd functions of time. A legged system running with these symmetries travels with a fixed forward speed and a stable upright posture. The symmetries used for controlling legged robots may help in elucidating the legged behavior of animals. Measurements of running in the cat and human show that the feet and body sometimes move as predicted by the even and odd symmetry functions.
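The even and odd symmetry functions in Raibert's abstract decompose any signal f(t) about a reference instant into f_e(t) = (f(t) + f(-t))/2 and f_o(t) = (f(t) - f(-t))/2. A minimal sketch with toy gait signals; the specific function forms are illustrative assumptions, not measured data:

```python
import math

def even_part(f, t):
    """Even component of f about t = 0."""
    return 0.5 * (f(t) + f(-t))

def odd_part(f, t):
    """Odd component of f about t = 0."""
    return 0.5 * (f(t) - f(-t))

# Hypothetical steady-running signals about mid-stance (t = 0):
height = lambda t: 1.0 - 0.1 * math.cos(2.0 * t)  # body height: even
pitch = lambda t: 0.05 * math.sin(2.0 * t)        # body pitch: odd

t = 0.3
assert abs(odd_part(height, t)) < 1e-12   # purely even signal
assert abs(even_part(pitch, t)) < 1e-12   # purely odd signal
```

A running gait satisfying these symmetries repeats each stride as a mirrored copy of itself, which is what yields fixed forward speed and an upright posture in the abstract's sense.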
10. Leptonic Dirac CP violation predictions from residual discrete symmetries
Directory of Open Access Journals (Sweden)
I. Girardi
2016-01-01
Full Text Available Assuming that the observed pattern of 3-neutrino mixing is related to the existence of a (lepton) flavour symmetry, corresponding to a non-Abelian discrete symmetry group Gf, and that Gf is broken to specific residual symmetries Ge and Gν of the charged lepton and neutrino mass terms, we derive sum rules for the cosine of the Dirac phase δ of the neutrino mixing matrix U. The residual symmetries considered are: (i) Ge = Z2 and Gν = Zn, n > 2, or Zn × Zm, n, m ≥ 2; (ii) Ge = Zn, n > 2, or Zn × Zm, n, m ≥ 2, and Gν = Z2; (iii) Ge = Z2 and Gν = Z2; (iv) Ge fully broken and Gν = Zn, n > 2, or Zn × Zm, n, m ≥ 2; and (v) Ge = Zn, n > 2, or Zn × Zm, n, m ≥ 2, and Gν fully broken. For given Ge and Gν, the sum rules for cos δ thus derived are exact, within the approach employed, and are valid, in particular, for any Gf containing Ge and Gν as subgroups. We identify the cases when the value of cos δ cannot be determined, or cannot be uniquely determined, without making additional assumptions on unconstrained parameters. In a large class of cases considered the value of cos δ can be unambiguously predicted once the flavour symmetry Gf is fixed. We present predictions for cos δ in these cases for the flavour symmetry groups Gf = S4, A4, T′ and A5, requiring that the measured values of the 3-neutrino mixing parameters sin²θ12, sin²θ13 and sin²θ23, taking into account their respective 3σ uncertainties, are successfully reproduced.
11. Investigation of spontaneously broken gauge theories
International Nuclear Information System (INIS)
Nagy, T.
1978-01-01
Spontaneously broken gauge theories (SBGT) with effects treated perturbatively are investigated. The general structure of SBGT is exhibited and a gauge-invariant renormalization program for practical calculations is set up. The proofs of renormalizability of Lee and Zinn-Justin are extended to the problems of SBGT. A general semisimple compact gauge group is used. Arbitrary fermion and scalar multiplets are considered. The structure of the Lagrangian is discussed. The problem of quantization is described, and the definition of the generating functionals of the Green functions, and of the Green functions themselves, is given.
12. Structure of phenomenological Lagrangians for broken supersymmetry
International Nuclear Information System (INIS)
Uematsu, T.; Zachos, C.K.
1982-01-01
We consider the explicit connection between linear representations of supersymmetry and the non-linear realizations associated with the generic effective Lagrangians of the Volkov-Akulov type. We specify and illustrate a systematic approach for deriving the appropriate phenomenological Lagrangian by transforming a pedagogical linear model, in which supersymmetry is broken at the tree level, into its corresponding non-linear Lagrangian, in close analogy to the linear sigma model of pion dynamics. We discuss the significance and some properties of such phenomenological Lagrangians. (orig.)
13. Neutrino mass and mixing with discrete symmetry
International Nuclear Information System (INIS)
King, Stephen F; Luhn, Christoph
2013-01-01
This is a review paper about neutrino mass and mixing and flavour model building strategies based on discrete family symmetry. After a pedagogical introduction and overview of the whole of neutrino physics, we focus on the PMNS mixing matrix and the latest global fits following the Daya Bay and RENO experiments which measure the reactor angle. We then describe the simple bimaximal, tri-bimaximal and golden ratio patterns of lepton mixing and the deviations required for a non-zero reactor angle, with solar or atmospheric mixing sum rules resulting from charged lepton corrections or residual trimaximal mixing. The different types of see-saw mechanism are then reviewed as well as the sequential dominance mechanism. We then give a mini-review of finite group theory, which may be used as a discrete family symmetry broken by flavons either completely, or with different subgroups preserved in the neutrino and charged lepton sectors. These two approaches are then reviewed in detail in separate chapters including mechanisms for flavon vacuum alignment and different model building strategies that have been proposed to generate the reactor angle. We then briefly review grand unified theories (GUTs) and how they may be combined with discrete family symmetry to describe all quark and lepton masses and mixing. Finally, we discuss three model examples which combine an SU(5) GUT with the discrete family symmetries A4, S4 and Δ(96). (review article)
14. Electroweak symmetry breaking: Higgs/whatever
International Nuclear Information System (INIS)
Chanowitz, M.S.
1990-01-01
In these two lectures the author discusses electroweak symmetry breaking from a general perspective, stressing properties that are model independent and follow just from the assumption that the electroweak interactions are described by a spontaneously broken gauge theory. This means he assumes the Higgs mechanism though not necessarily the existence of Higgs bosons. The first lecture presents the general framework of a spontaneously broken gauge theory: (1) the Higgs mechanism sui generis, with or without Higgs boson(s) and (2) the implications of symmetry and unitarity for the mass scale and interaction strength of the new physics that the Higgs mechanism requires. In addition he reviews a softer theoretical argument based on the naturalness problem which leads to a prejudice against Higgs bosons unless they are supersymmetric. This is a prejudice, not a theorem, and it could be overturned in the future by a clever new idea. In the second lecture he illustrates the general framework by reviewing some specific models: (1) the Weinberg-Salam model of the Higgs sector; (2) the minimal supersymmetric extension of the Weinberg-Salam model; and (3) technicolor as an example of the Higgs mechanism without Higgs bosons. He concludes the second lecture with a discussion of strong WW scattering that must occur if the symmetry-breaking scale Λ_SB lies above 1 TeV. In particular he describes some of the experimental signals and backgrounds at the SSC. 57 refs., 12 figs
15. Invariant renormalization method for nonlinear realizations of dynamical symmetries
International Nuclear Information System (INIS)
Kazakov, D.I.; Pervushin, V.N.; Pushkin, S.V.
1977-01-01
The structure of ultraviolet divergences is investigated for field-theoretical models with nonlinear realization of an arbitrary semisimple Lie group, with spontaneously broken symmetry of the vacuum. An invariant formulation of the background field method of renormalization is proposed which yields manifestly invariant counterterms off the mass shell. A simple algorithm for the construction of counterterms is developed. It is based on invariants of the group of dynamical symmetry in terms of the Cartan forms. The results of one-loop and two-loop calculations are reported.
16. Applications of flavor symmetry to the phenomenology of elementary particles
International Nuclear Information System (INIS)
Kaeding, T.A.
1995-05-01
Some applications of flavor symmetry are examined. Approximate flavor symmetries and their consequences in the MSSM (Minimal Supersymmetric Standard Model) are considered, and found to give natural values for the possible B- and L-violating couplings that are empirically acceptable, except for the case of proton decay. The coupling constants of SU(3) are calculated and used to parameterize the decays of the D mesons in broken flavor SU(3). The resulting couplings are used to estimate the long-distance contributions to D-meson mixing
17. Strong evidence for spontaneous chiral symmetry breaking in (quenched) QCD
International Nuclear Information System (INIS)
Barbour, I.M.; Gibbs, P.; Schierholz, G.; Teper, M.; Gilchrist, J.P.; Schneider, H.
1983-09-01
We calculate the chiral condensate for all quark masses using Kogut-Susskind fermions in lattice-regularized quenched QCD. The large-volume behaviour of the chiral condensate at small quark masses demonstrates that the explicit U(1) chiral symmetry is spontaneously broken. We perform the calculation for β = 5.1 to 5.9 and find very good continuum renormalization group behaviour. We infer that the spontaneous breaking we observe belongs to continuum QCD. This constitutes the first unambiguous demonstration of spontaneous chiral symmetry breaking in continuum quenched QCD. (orig.)
18. Discrete quark-lepton symmetry need not pose a cosmological domain wall problem
International Nuclear Information System (INIS)
Lew, H.; Volkas, R.R.
1992-01-01
Quarks and leptons may be related to each other through a spontaneously broken discrete symmetry. Models with acceptable and interesting collider phenomenology have been constructed which incorporate this idea. However, the standard Hot Big Bang model of cosmology is generally considered to eschew spontaneously broken discrete symmetries because they often lead to the formation of unacceptably massive domain walls. It is pointed out that there are a number of plausible quark-lepton symmetric models in nature which do not produce cosmologically troublesome domain walls. 30 refs
19. Symmetry and symmetry breaking in quantum mechanics
International Nuclear Information System (INIS)
Chomaz, Philippe
1998-01-01
In the world of the infinitely small, the world of atoms, nuclei and particles, quantum mechanics enforces its laws. The discovery of Quanta, this unbelievable castration of the Possible into grains of matter and radiation, into discrete energy levels, compels us to think the Single in order to comprehend the Universal. Quantum Numbers, magic Numbers and Numbers sign the wave. Matter is vibration. To describe the music of the world one needs keys, measures, notes, rules and a partition: one needs quantum mechanics. Particles do not reduce to material points, as the scholars of past centuries thought; they must be conceived as extended throughout space, in the accomplishment of shapes of volumes. When Einstein asked himself whether God plays dice, there was no doubt among his contemporaries that if He exists He is a geometer. In a Nature reduced to Geometry, the symmetries assume their role in the service of Harmony. The symmetries allow ordering the energy levels so as to make them understandable. They impose their geometrical rules on the matter waves, giving them properties which sometimes astonish us. Hidden symmetries, internal symmetries and newly conceived symmetries have had to be adopted following the observation of some order in this world of Quanta. In turn, the symmetries provide new observables which open new spaces of observation.
20. Absorption of solar radiation in broken clouds
Energy Technology Data Exchange (ETDEWEB)
Zuev, V.E.; Titov, G.A.; Zhuravleva, T.B. [Institute of Atmospheric Optics, Tomsk (Russian Federation)
1996-04-01
It is recognized now that the plane-parallel model unsatisfactorily describes the transfer of radiation through broken clouds and that, consequently, the radiation codes of general circulation models (GCMs) must be refined. However, before any refinement in a GCM code is made, it is necessary to investigate the dependence of radiative characteristics on the effects caused by the random geometry of cloud fields. Such studies for mean fluxes of downwelling and upwelling solar radiation in the visible and near-infrared (IR) spectral range were performed by Zuev et al. In this work, we investigate the mean spectral and integrated absorption of solar radiation by broken clouds (in what follows, the term "mean" will be implied but not used, for convenience). To evaluate the potential effect of stochastic geometry, we will compare the absorption by cumulus (0.5 ≤ γ ≤ 2) to that by equivalent stratus (γ ≪ 1) clouds; here γ = H/D, H is the cloud layer thickness and D the characteristic horizontal cloud size. The equivalent stratus clouds differ from cumulus only in the aspect ratio γ, all the other parameters coinciding.
1. Mass generation and chiral symmetry breaking by pseudoparticles
International Nuclear Information System (INIS)
Hietarinta, J.; Palmer, W.F.; Pinsky, S.S.
1978-01-01
Massless QCD is studied with regard to mass generation and chiral SU(N_f) symmetry breaking from pseudoparticle effects. While mass is generated when there is only one massless quark, and chiral U(1) is always broken, no rigorous indication of the breaking of chiral SU(N_f) or of mass generation is seen when there is more than one massless quark in the original theory.
2. Heavy quark condensates from dynamically broken flavour symmetry
International Nuclear Information System (INIS)
Elliott, T.; King, S.F.
1992-01-01
We study the dynamics of top quark condensation induced by gauge interactions resulting from a broken flavour symmetry. The gap equation in dressed ladder approximation is solved numerically to obtain directly the top quark mass. The new high-energy dynamics somewhat reduces the prediction of m_t, but the usual problems of m_t being too large and of fine tuning remain. In order to solve these problems we extend our discussion to include fourth-generation quark condensates. (orig.)
3. Vacuum polarization and dynamical chiral symmetry breaking in quantum electrodynamics
International Nuclear Information System (INIS)
Gusynin, V.P.
1989-01-01
The Schwinger-Dyson equation in the ladder approximation is considered for the fermion mass function taking into account the vacuum polarization effects. It is shown that even in the 'zero-charge' situation there exists, at rather large coupling constant (α > α_c > 0), a solution with spontaneously broken chiral symmetry. The existence of the local limit in the model concerned is discussed. 30 refs.; 1 fig
4. Dynamical symmetry breaking with hypercolour and high colour representations
International Nuclear Information System (INIS)
Zoupanos, G.
1985-01-01
A model is presented in which the electroweak gauge group is spontaneously broken according to a dynamical scenario based on the existence of high colour representations. An unattractive feature of this scenario was the necessity to introduce elementary Higgs fields in order to obtain the spontaneous symmetry breaking of part of the theory. In the present model, this breaking can also be understood dynamically with the introduction of hypercolour interactions. (author)
5. Symmetries in nature
International Nuclear Information System (INIS)
Mainzer, K.
1988-01-01
Symmetry, dissymmetry, chirality, etc. are well-known topics in chemistry. But they are not found only on the molecular level of matter. Atoms and elementary particles in physics are also characterized by particular symmetry groups. Even living organisms and populations on the macroscopic level have functional properties of symmetry. The whole physical, chemical, and biological evolution seems to be regulated by the emergence of new symmetries and the breaking down of old ones. One is reminded of Heisenberg's famous statement: 'Die letzte Wurzel der Erscheinungen ist also nicht die Materie, sondern das mathematische Gesetz, die Symmetrie, die mathematische Form' ('The ultimate root of phenomena is thus not matter, but the mathematical law, the symmetry, the mathematical form'; Wandlungen in den Grundlagen der Naturwissenschaften, 1959). Historically, the belief in the symmetry and simplicity of nature has a long philosophical tradition, from the Pythagoreans, Plato and the Greek astronomers to Kepler and modern scientists. Today, 'symmetries in nature' is a common topic of mathematics, physics, chemistry, and biology. Many Nobel prizes have been awarded in honour of inquiries concerning symmetries in nature. The fascination of symmetries is motivated not only by science, but by art and religion too. Therefore 'symmetries in nature' is an interdisciplinary topic which may help to overcome C.P. Snow's 'Two Cultures' of natural sciences and humanities. (author) 17 refs., 21 figs
6. Symmetries in nature
Energy Technology Data Exchange (ETDEWEB)
Mainzer, K
1988-05-01
Symmetry, dissymmetry, chirality, etc. are well-known topics in chemistry. But they are not found only on the molecular level of matter. Atoms and elementary particles in physics are also characterized by particular symmetry groups. Even living organisms and populations on the macroscopic level have functional properties of symmetry. The whole physical, chemical, and biological evolution seems to be regulated by the emergence of new symmetries and the breaking down of old ones. One is reminded of Heisenberg's famous statement: 'Die letzte Wurzel der Erscheinungen ist also nicht die Materie, sondern das mathematische Gesetz, die Symmetrie, die mathematische Form' ('The ultimate root of phenomena is thus not matter, but the mathematical law, the symmetry, the mathematical form'; Wandlungen in den Grundlagen der Naturwissenschaften, 1959). Historically, the belief in the symmetry and simplicity of nature has a long philosophical tradition, from the Pythagoreans, Plato and the Greek astronomers to Kepler and modern scientists. Today, 'symmetries in nature' is a common topic of mathematics, physics, chemistry, and biology. Many Nobel prizes have been awarded in honour of inquiries concerning symmetries in nature. The fascination of symmetries is motivated not only by science, but by art and religion too. Therefore 'symmetries in nature' is an interdisciplinary topic which may help to overcome C.P. Snow's 'Two Cultures' of natural sciences and humanities. (author) 17 refs., 21 figs.
7. Big break for charge symmetry
Energy Technology Data Exchange (ETDEWEB)
Miller, G.A. [Department of Physics, University of Washington, Seattle (United States); Kolck, U. van [Department of Physics, University of Arizona, Tucson (United States)
2003-06-01
Two new experiments have detected charge-symmetry breaking, the mechanism responsible for protons and neutrons having different masses. Symmetry is a crucial concept in the theories that describe the subatomic world because it has an intimate connection with the laws of conservation. The theory of the strong interaction between quarks - quantum chromodynamics - is approximately invariant under what is called charge symmetry. In other words, if we swap an up quark for a down quark, then the strong interaction will look almost the same. This symmetry is related to the concept of 'isospin', and is not the same as charge conjugation (in which a particle is replaced by its antiparticle). Charge symmetry is broken by the competition between two different effects. The first is the small difference in mass between up and down quarks, which is about 200 times less than the mass of the proton. The second is their different electric charges. The up quark has a charge of +2/3 in units of the proton charge, while the down quark has a negative charge of -1/3. If charge symmetry were exact, the proton and the neutron would have the same mass and they would both be electrically neutral. This is because the proton is made of two up quarks and a down quark, while the neutron comprises two downs and an up. Replacing up quarks with down quarks, and vice versa, therefore transforms a proton into a neutron. Charge-symmetry breaking causes the neutron to be about 0.1% heavier than the proton because the down quark is slightly heavier than the up quark. Physicists had already elucidated certain aspects of charge-symmetry breaking, but our spirits were raised greatly when we heard of the recent work of Allena Opper of Ohio University in the US and co-workers at the TRIUMF laboratory in British Columbia, Canada. Her team has been trying to observe a small charge-symmetry-breaking effect for several years, using neutron beams at the TRIUMF accelerator. The researchers studied the
8. CP nonconservation in dynamically broken gauge theories
International Nuclear Information System (INIS)
Lane, K.
1981-01-01
The recent proposal of Eichten, Lane, and Preskill for CP nonconservation in electroweak gauge theories with dynamical symmetry breaking is reviewed. Through the alignment of the vacuum with the explicit chiral symmetry breaking Hamiltonian, these theories provide a natural way to understand the dynamical origin of CP nonconservation. Special attention is paid to the problem of strong CP violation. Even though all vacuum angles are zero, this problem is not automatically avoided. In the absence of strong CP violation, the neutron electric dipole moment is expected to be 10^-24 to 10^-26 e·cm. A new class of models is proposed in which both strong CP violation and large |ΔS| = 2 effects may be avoided. In these models, |ΔC| = 2 processes such as D^0-D̄^0 mixing may be large enough to observe.
9. Parent of origin, mosaicism, and recurrence risk: probabilistic modeling explains the broken symmetry of transmission genetics.
Science.gov (United States)
Campbell, Ian M; Stewart, Jonathan R; James, Regis A; Lupski, James R; Stankiewicz, Paweł; Olofsson, Peter; Shaw, Chad A
2014-10-02
Most new mutations are observed to arise in fathers, and increasing paternal age positively correlates with the risk of new variants. Interestingly, new mutations in X-linked recessive disease show elevated familial recurrence rates. In male offspring, these mutations must be inherited from mothers. We previously developed a simulation model to consider parental mosaicism as a source of transmitted mutations. In this paper, we extend and formalize the model to provide analytical results and flexible formulas. The results implicate parent of origin and parental mosaicism as central variables in recurrence risk. Consistent with empirical data, our model predicts that more transmitted mutations arise in fathers and that this tendency increases as fathers age. Notably, the lack of expansion later in the male germline determines relatively lower variance in the proportion of mutants, which decreases with paternal age. Subsequently, observation of a transmitted mutation has less impact on the expected risk for future offspring. Conversely, for the female germline, which arrests after clonal expansion in early development, variance in the mutant proportion is higher, and observation of a transmitted mutation dramatically increases the expected risk of recurrence in another pregnancy. Parental somatic mosaicism considerably elevates risk for both parents. These findings have important implications for genetic counseling and for understanding patterns of recurrence in transmission genetics. We provide a convenient online tool and source code implementing our analytical results. These tools permit varying the underlying parameters that influence recurrence risk and could be useful for analyzing risk in diverse family structures. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
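The abstract's central claim - that observing one transmitted mutation raises recurrence risk far more when the variance in the parental mosaic fraction is high - can be illustrated with a toy Bayesian beta-binomial update. The Beta prior parameters below are invented for illustration and are not the authors' fitted model:

```python
# Toy beta-binomial update: place a Beta(a, b) prior on the fraction of
# mutant gametes in a parent's germline. After observing one transmitted
# mutation, the posterior mean becomes (a + 1) / (a + b + 1).
def posterior_mean_after_one_mutant(a, b):
    return (a + 1) / (a + b + 1)

# Both priors have mean 0.01, but very different variances
# (parameters are hypothetical, chosen only to show the effect).
low_var = (20.0, 1980.0)   # tight prior: late-expanding, "male-like" germline
high_var = (0.2, 19.8)     # dispersed prior: early-arrested, "female-like" germline

for label, (a, b) in [("low variance", low_var), ("high variance", high_var)]:
    prior = a / (a + b)
    post = posterior_mean_after_one_mutant(a, b)
    print(f"{label}: prior risk {prior:.3f} -> posterior risk {post:.3f}")
```

With the tight prior the observation barely moves the risk, while with the dispersed prior the same single observation raises the expected recurrence risk severalfold, mirroring the qualitative asymmetry the paper reports.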
10. Experimental verification of acoustic pseudospin multipoles in a symmetry-broken snowflakelike topological insulator
Science.gov (United States)
Zhang, Zhiwang; Tian, Ye; Cheng, Ying; Liu, Xiaojun; Christensen, Johan
2017-12-01
Topologically protected wave engineering in artificially structured media resides at the frontier of ongoing metamaterials research, which is inspired by quantum mechanics. Acoustic analogs of electronic topological insulators have recently led to a wealth of new opportunities in manipulating sound propagation by means of robust edge mode excitations through analogies drawn to exotic quantum states. A variety of artificial acoustic systems hosting topological edge states have been proposed analogous to the quantum Hall effect, topological insulators, and Floquet topological insulators in electronic systems. However, those systems were characterized by a fixed geometry and a very narrow frequency response, which severely hinders the exploration and design of useful applications. Here we establish acoustic multipolar pseudospin states as an engineering degree of freedom in time-reversal invariant flow-free phononic crystals and develop reconfigurable topological insulators through rotation of their meta-atoms and reshaping of the metamolecules. Specifically, we show how rotation forms man-made snowflakelike molecules, whose topological phase mimics pseudospin-down (pseudospin-up) dipolar and quadrupolar states, which are responsible for a plethora of robust edge confined properties and topological controlled refraction disobeying Snell's law.
11. Noise-induced drift in systems with broken symmetry and classical routes to superconductivity
International Nuclear Information System (INIS)
Shapiro, V.E.
1994-01-01
We discuss concepts and mechanisms of particle motion in a variety of conditions of asymmetry towards spatial inversion that suggest an idea for the possibility of persistent currents within classical statistical considerations. We expose misapplications of Gibbs statistics and the Langevin approach and show that the idea does not contradict general principles. It gains support from the classical mechanism of capillary wave instability and keeps within the detailed balance and fluctuation-dissipation theorems. (author). 7 refs., 2 figs
12. Investigation of broken symmetry of Sb/Cu(111) surface alloys by VT-STM
CSIR Research Space (South Africa)
Ndlovu, GF
2011-07-01
Full Text Available This work presents an in situ Variable Temperature Scanning Tunneling Microscopy (VT-STM) study of the Sb/Cu(111) system at various temperatures. The experimental data support a structural model in which Sb atoms displace up to 1...
13. Thickness dependence of the interfacial Dzyaloshinskii-Moriya interaction in inversion symmetry broken systems
NARCIS (Netherlands)
Cho, J.; Kim, N.H.; Lee, S.; Kim, J.S.; Lavrijsen, R.; Solignac, A.M.P.; Yin, Y.; Han, D.; Hoof, N.J.J.; Swagten, H.J.M.; Koopmans, B.; You, C.-H.
In magnetic multilayer systems, a large spin-orbit coupling at the interface between heavy metals and ferromagnets can lead to intriguing phenomena such as the perpendicular magnetic anisotropy, the spin Hall effect, the Rashba effect, and especially the interfacial Dzyaloshinskii–Moriya (IDM)
14. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays.
Science.gov (United States)
Kim, Sung-Cheol; Wunsch, Benjamin H; Hu, Huan; Smith, Joshua T; Austin, Robert H; Stolovitzky, Gustavo
2017-06-27
Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input.
15. Structural versus electronic distortions of symmetry-broken IrTe2
OpenAIRE
Kim, Hyo Sung; Kim, Tae-Hwan; Yang, Junjie; Cheong, Sang-Wook; Yeom, Han Woong
2014-01-01
We investigate atomic and electronic structures of the intriguing low temperature phase of IrTe2 using high-resolution scanning tunneling microscopy and spectroscopy. We confirm various stripe superstructures such as ×3, ×5, and ×8. The strong vertical and lateral distortions of the lattice for the stripe structures are observed in agreement with recent calculations. The spatial modulations of electronic density of states are clearly identified as separated from the struc...
16. Broken-symmetry ground states in ν=2 bilayer quantum Hall systems
Czech Academy of Sciences Publication Activity Database
MacDonald, A. H.; Rajaraman, R. U.; Jungwirth, Tomáš
1999-01-01
Roč. 60, č. 12 (1999), s. 8817-8826 ISSN 0163-1829 R&D Projects: GA ČR GA202/98/0085; GA MŠk ME 104 Institutional research plan: CEZ:AV0Z1010914 Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 3.008, year: 1999
17. Control the polarization state of light with symmetry-broken metallic metastructures
International Nuclear Information System (INIS)
Xiong, Xiang; Jiang, Shang-Chi; Hu, Yuan-Sheng; Hu, Yu-Hui; Wang, Zheng-Han; Peng, Ru-Wen; Wang, Mu
2015-01-01
Controlling the polarization state, the transmission direction, the amplitude and the phase of light in a very limited space is essential for the development of on-chip photonics. Over the past decades, numerous sub-wavelength metallic microstructures have been proposed and fabricated to fulfill these demands. In this article, we review our efforts in achieving negative refractive index, controlling the polarization state, and tuning the amplitude of light with two-dimensional (2D) and three-dimensional (3D) microstructures. We designed an assembly of stacked metallic U-shaped resonators that allows achieving negative refraction for pure magnetic and electric responses respectively at the same frequency by selecting the polarization of incident light. Based on this, we tune the permittivity and permeability of the structure, and achieve negative refractive index. Further, by controlling the excitation and radiation of surface electric currents on a number of 2D and 3D asymmetric metallic metastructures, we are able to control the polarization state of light. It is also demonstrated that with a stereostructured metal film, the whole metal surface can be used to construct either polarization-sensitive or polarization-insensitive perfect absorbers, with the advantages of efficient heat dissipation and electric conductivity. Our practice shows that metamaterials, including metasurfaces, indeed help to master light at the nanoscale, and are promising in the development of a new generation of photonics
18. Odd-parity magnetoresistance in pyrochlore iridate thin films with broken time-reversal symmetry
Science.gov (United States)
Fujita, T. C.; Kozuka, Y.; Uchida, M.; Tsukazaki, A.; Arima, T.; Kawasaki, M.
2015-01-01
A new class of materials termed topological insulators have been intensively investigated due to their unique Dirac surface state carrying dissipationless edge spin currents. Recently, it has been theoretically proposed that the three dimensional analogue of this type of band structure, the Weyl Semimetal phase, is materialized in pyrochlore oxides with strong spin-orbit coupling, accompanied by all-in-all-out spin ordering. Here, we report on the fabrication and magnetotransport of Eu2Ir2O7 single crystalline thin films. We reveal that one of the two degenerate all-in-all-out domain structures, which are connected by time-reversal operation, can be selectively formed by the polarity of the cooling magnetic field. Once formed, the domain is robust against an oppositely polarised magnetic field, as evidenced by an unusual odd field dependent term in the magnetoresistance and an anomalous term in the Hall resistance. Our findings pave the way for exploring the predicted novel quantum transport phenomenon at the surfaces/interfaces or magnetic domain walls of pyrochlore iridates. PMID:25959576
19. Hierarchy stability for spontaneously broken theories
Energy Technology Data Exchange (ETDEWEB)
Galvan, J B; Perez-Mercader, J; Sanchez, F J
1987-04-16
By using Weisberger's method for the integration of heavy degrees of freedom in multiscale theories, we show that tree level hierarchies are not destabilized by quantum corrections in a two-scale, two scalar field theory model where the heavy sector undergoes spontaneous symmetry breaking. We see explicitly the role played by the one-loop heavy log corrections to the effective parameters in maintaining the original tree level hierarchy and in keeping the theory free of hierarchy problems.
20. Hierarchy stability for spontaneously broken theories
International Nuclear Information System (INIS)
Galvan, J.B.; Perez-Mercader, J.; Sanchez, F.J.
1987-01-01
By using Weisberger's method for the integration of heavy degrees of freedom in multiscale theories, we show that tree level hierarchies are not destabilized by quantum corrections in a two-scale, two scalar field theory model where the heavy sector undergoes spontaneous symmetry breaking. We see explicitly the role played by the one-loop heavy log corrections to the effective parameters in maintaining the original tree level hierarchy and in keeping the theory free of hierarchy problems. (orig.)
1. Spontaneous Symmetry Breaking and Nambu–Goldstone Bosons in Quantum Many-Body Systems
Directory of Open Access Journals (Sweden)
Tomáš Brauner
2010-04-01
Full Text Available Spontaneous symmetry breaking is a general principle that constitutes the underlying concept of a vast number of physical phenomena ranging from ferromagnetism and superconductivity in condensed matter physics to the Higgs mechanism in the standard model of elementary particles. I focus on manifestations of spontaneously broken symmetries in systems that are not Lorentz invariant, which include both nonrelativistic systems as well as relativistic systems at nonzero density, providing a self-contained review of the properties of spontaneously broken symmetries specific to such theories. Topics covered include: (i) introduction to the mathematics of spontaneous symmetry breaking and the Goldstone theorem; (ii) minimization of Higgs-type potentials for higher-dimensional representations; (iii) counting rules for Nambu–Goldstone bosons and their dispersion relations; (iv) construction of effective Lagrangians. Specific examples in both relativistic and nonrelativistic physics are worked out in detail.
2. From physical symmetries to emergent gauge symmetries
International Nuclear Information System (INIS)
Barceló, Carlos; Carballo-Rubio, Raúl; Di Filippo, Francesco; Garay, Luis J.
2016-01-01
Gauge symmetries indicate redundancies in the description of the relevant degrees of freedom of a given field theory and restrict the nature of observable quantities. One of the problems faced by emergent theories of relativistic fields is to understand how gauge symmetries can show up in systems that contain no trace of these symmetries at a more fundamental level. In this paper we start a systematic study aimed to establish a satisfactory mathematical and physical picture of this issue, dealing first with abelian field theories. We discuss how the trivialization, due to the decoupling and lack of excitation of some degrees of freedom, of the Noether currents associated with physical symmetries leads to emergent gauge symmetries in specific situations. An example of a relativistic field theory of a vector field is worked out in detail in order to make explicit how this mechanism works and to clarify the physics behind it. The interplay of these ideas with well-known results of importance to the emergent gravity program, such as the Weinberg-Witten theorem, are discussed.
3. The Symmetry of Multiferroics
OpenAIRE
Harris, A. Brooks
2006-01-01
This paper represents a detailed instruction manual for constructing the Landau expansion for magnetoelectric coupling in incommensurate ferroelectric magnets. The first step is to describe the magnetic ordering in terms of symmetry adapted coordinates which serve as complex valued magnetic order parameters whose transformation properties are displayed. In so doing we use the previously proposed technique to exploit inversion symmetry, since this symmetry had been universally overlooked. Havi...
4. Effective action of softly broken supersymmetric theories
International Nuclear Information System (INIS)
2006-12-01
We study the renormalization of (softly) broken supersymmetric theories at the one loop level in detail. We perform this analysis in a superspace approach in which the supersymmetry breaking interactions are parameterized using spurion insertions. We comment on the uniqueness of this parameterization. We compute the one loop renormalization of such theories by calculating superspace vacuum graphs with multiple spurion insertions. To perform this computation efficiently we develop algebraic properties of spurion operators, which naturally arise because the spurions are often surrounded by superspace projection operators. Our results are general apart from the restrictions that higher super covariant derivative terms and some finite effects due to non-commutativity of superfield dependent mass matrices are ignored. One of the soft potentials induces renormalization of the Kaehler potential. (author)
5. Spontaneously broken extended supersymmetry: Full superfield formulation
International Nuclear Information System (INIS)
Kandelakis, E.S.
1984-01-01
The superfield description, given by Samuel and Wess, of the non-linear Akulov-Volkov realization of (broken) supersymmetry, is generalized for the interesting cases of N=2 and 4 extended supersymmetry. The generalization, in terms of the full-superfield formulation, is straightforward. For the proof we first define the corresponding Θ-algebras; we then present explicitly many of the calculations. The schematic explanation makes the generalization manifest. We perform, for N=2, the coupling of the A-V field to standard matter, in the way introduced by S-W, and schematically we make manifest the generalization for every N. The importance of our results consists in a complete, calculable description of the A-V fields (goldstinos) and of their interactions, easily applied to the tasks of today's phenomenology. (orig.)
6. Approximate and renormgroup symmetries
Energy Technology Data Exchange (ETDEWEB)
Ibragimov, Nail H. [Blekinge Institute of Technology, Karlskrona (Sweden). Dept. of Mathematics Science; Kovalev, Vladimir F. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Mathematical Modeling
2009-07-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
7. Approximate and renormgroup symmetries
International Nuclear Information System (INIS)
Ibragimov, Nail H.; Kovalev, Vladimir F.
2009-01-01
''Approximate and Renormgroup Symmetries'' deals with approximate transformation groups, symmetries of integro-differential equations and renormgroup symmetries. It includes a concise and self-contained introduction to basic concepts and methods of Lie group analysis, and provides an easy-to-follow introduction to the theory of approximate transformation groups and symmetries of integro-differential equations. The book is designed for specialists in nonlinear physics - mathematicians and non-mathematicians - interested in methods of applied group analysis for investigating nonlinear problems in physical science and engineering. (orig.)
8. Modified broken rice starch as fat substitute in sausages
Directory of Open Access Journals (Sweden)
Valéria Maria Limberger
2011-09-01
Full Text Available The demand for low-fat beef products has led the food industry to use fat substitutes such as modified starch. About 14% of broken rice is generated during rice processing; this by-product nevertheless contains high levels of starch, making it a good raw material for fat substitution. This study evaluated the applicability of chemically and physically modified broken rice starch as a fat substitute in sausages. Extruded and phosphorylated broken rice was used in low-fat sausage formulations. All low-fat sausages presented about a 55% reduction in fat content and around a 28% reduction in total caloric value. Fat replacement with phosphorylated and extruded broken rice starch increased the texture acceptability of low-fat sausages when compared to low-fat sausages with unmodified broken rice. The results suggest that modified broken rice can be used as a fat substitute in sausage formulations, yielding lower-caloric products with acceptable sensory characteristics.
9. Holographic theories of electroweak symmetry breaking without a Higgs Boson
International Nuclear Information System (INIS)
Burdman, Gustavo; Nomura, Yasunori
2003-01-01
Recently, realistic theories of electroweak symmetry breaking have been constructed in which the electroweak symmetry is broken by boundary conditions imposed at a boundary of higher dimensional spacetime. These theories have equivalent 4D dual descriptions, in which the electroweak symmetry is dynamically broken by non-trivial infrared dynamics of some gauge interaction, whose gauge coupling g̃ and size N satisfy g̃²N ≳ 16π². Such theories allow one to calculate electroweak radiative corrections, including the oblique parameters S, T and U, as long as g̃²N/16π² and N are sufficiently larger than unity. We study how the duality between the 4D and 5D theories manifests itself in the computation of various physical quantities. In particular, we calculate the electroweak oblique parameters in a warped 5D theory where the electroweak symmetry is broken by boundary conditions at the infrared brane. We show that the value of S obtained in the minimal theory exceeds the experimental bound if the theory is in a weakly coupled regime. This requires either an extension of the minimal model or departure from weak coupling. A particularly interesting scenario is obtained if the gauge couplings in the 5D theory take the largest possible values--the value suggested by naive dimensional analysis. We argue that such a theory can provide a potentially consistent picture for dynamical electroweak symmetry breaking: corrections to the electroweak observables are sufficiently small while realistic fermion masses are obtained without conflicting with bounds from flavor violation. The theory contains only the standard model quarks, leptons and gauge bosons below ≅2 TeV, except for a possible light scalar associated with the radius of the extra dimension. At ≅2 TeV increasingly broad string resonances appear. An analysis of top-quark phenomenology and flavor violation is also presented, which is applicable to both the weakly-coupled and strongly
10. Logotherapy Counseling to Improve Acceptance of Broken Home Child
OpenAIRE
Erlangga, Erwin
2017-01-01
This study aims to increase the acceptance of children of a broken home so that life has meaning. Subjects are 100 children in Demak whose families are experiencing divorce. Research themes include: individual counseling, logotherapy techniques, acceptance, and the child of a broken home. Data obtained from interviews, observation, and a psychological scale showed that the 100 children of a broken home have low acceptance; individual counseling with logotherapy techniques were c...
11. Anomaly-free gauged R-symmetry in local supersymmetry
International Nuclear Information System (INIS)
Chamseddine, A.H.; Dreiner, H.
1996-01-01
We discuss local R-symmetry as a potentially powerful new model building tool. We first review and clarify that a U(1) R-symmetry can only be gauged in local and not in global supersymmetry. We determine the anomaly-cancellation conditions for the gauged R-symmetry. For the standard superpotential these equations have no solution, independently of how many Standard Model singlets are added to the model. There is also no solution when we increase the number of families and the number of pairs of Higgs doublets. When the Green-Schwarz mechanism is employed to cancel the anomalies, solutions only exist for a large number of singlets. We find many anomaly-free family-independent models with an extra SU(3)_c octet chiral superfield. We consider in detail the conditions for an anomaly-free family-dependent U(1)_R and find solutions with one, two, three and four extra singlets. Only with three and four extra singlets do we naturally obtain sfermion masses of the order of the weak scale. For these solutions we consider the spontaneous breaking of supersymmetry and the R-symmetry in the context of local supersymmetry. In general the U(1)_R gauge group is broken at or close to the Planck scale. We consider the effects of the R-symmetry on baryon- and lepton-number violation in supersymmetry. There is no logical connection between a conserved R-symmetry and a conserved R-parity. For conserved R-symmetry we have models for all possibilities of conserved or broken R-parity. Most models predict dominant effects which could be observed at HERA. (orig.)
12. Logotherapy Counseling to Improve Acceptance of Broken Home Child
Directory of Open Access Journals (Sweden)
Erwin Erlangga
2017-08-01
Full Text Available This study aims to increase the acceptance of children of a broken home so that life has meaning. Subjects are 100 children in Demak whose families are experiencing divorce. Research themes include: individual counseling, logotherapy techniques, acceptance, and the child of a broken home. Data obtained from interviews, observation, and a psychological scale showed that the 100 children of a broken home have low acceptance, and that individual counseling with logotherapy techniques was considered appropriate to increase their acceptance. Factors that affect the acceptance of a child of a broken home are self-blame, anger, and no longer having a purpose in life. In addition, the environment also has a significant effect on the acceptance of children of a broken home: the community labels families experiencing divorce as failed families, so the children are increasingly stressed by this stigma. Based on the field test results, the level of acceptance of the child of a broken home increases after the individual is given counseling services with logotherapy techniques, as indicated by a change in the level of acceptance between before treatment (initial evaluation) and after (final evaluation) of 130 points. The statistical t-test also showed 0.010 < 0.05. It was concluded that individual counseling with the logotherapy technique is effective in increasing the acceptance of children of a broken home
13. Model for predicting the frequency of broken rails
Directory of Open Access Journals (Sweden)
S. Vesković
2012-04-01
Full Text Available Broken rails can cause train delays and train cancellations and, unfortunately, they are a common cause of accidents. This affects the planning of resources, budget and organization of railway track maintenance. Planning of railway track maintenance cannot be done without an estimate of the number of rails that will be replaced due to broken rail incidents. Many factors influence rail breakage; the most common are rail age, annual gross tonnage, degree of curve and temperature at the time of breakage. The fuzzy logic model uses the acquired data as input variables to predict the frequency of broken rails for certain rail types on given track sections.
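As a rough illustration of the kind of fuzzy-logic inference the abstract describes, the sketch below combines two of the named inputs (rail age and annual gross tonnage) with triangular membership functions and a single min-rule. The membership breakpoints and the rule are invented for this sketch, not taken from the paper:

```python
# Minimal fuzzy-inference sketch with triangular membership functions.
def tri(x, a, b, c):
    """Triangular membership: 0 at a and c, rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def breakage_risk(age_years, tonnage_mgt):
    # Membership degrees (breakpoints are illustrative only).
    age_old = tri(age_years, 15, 40, 60)
    tonnage_heavy = tri(tonnage_mgt, 10, 30, 50)
    # Rule: IF age is old AND tonnage is heavy THEN risk is high
    # (AND realized as min, as in Mamdani-style inference).
    return min(age_old, tonnage_heavy)

print(breakage_risk(40, 30))   # both memberships peak -> risk 1.0
print(breakage_risk(20, 15))   # partial membership -> lower risk
```

A full model of the paper's kind would add the remaining inputs (curvature, temperature), more rules, and a defuzzification step mapping the rule activations to a predicted breakage frequency.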
14. Summary: Symmetries and spin
International Nuclear Information System (INIS)
Haxton, W.C.
1988-01-01
I discuss a number of the themes of the Symmetries and Spin session of the 8th International Symposium on High Energy Spin Physics: parity nonconservation, CP/T nonconservation, and tests of charge symmetry and charge independence. 28 refs., 1 fig
15. Symmetry Festival 2016
CERN Document Server
2016-01-01
The Symmetry Festival is a science and art program series, the most important periodic event of its kind, bringing together scientists, artists, educators and practitioners interested in symmetry (its roots, what lies behind it, its applications, etc.), or in the consequences of its absence.
16. Quantum symmetry for pedestrians
International Nuclear Information System (INIS)
Mack, G.; Schomerus, V.
1992-03-01
Symmetries more general than groups are possible in quantum theory. Quantum symmetries in the narrow sense are compatible with braid statistics. They are theoretically consistent much as supersymmetry is, and they could lead to degenerate multiplets of excitations with fractional spin in thin films. (orig.)
17. Wigner's Symmetry Representation Theorem
At the Heart of Quantum Field Theory! Aritra Kr. ... principle of symmetry was not held as something very fundamental ... principle of local symmetry: the laws of physics are invariant un- .... Next, we would show that different coefficients of a state ...
18. Charged fluids with symmetries
It is possible to introduce many types of symmetries on the manifold which restrict the ... metric tensor field and generate constants of the motion along null geodesics .... In this analysis we have studied the role of symmetries for charged perfect ...
19. Symmetry and Interculturality
Science.gov (United States)
Marchis, Iuliana
2009-01-01
Symmetry is one of the fundamental concepts in Geometry. It is a Mathematical concept, which can be very well connected with Art and Ethnography. The aim of the article is to show how to link the geometrical concept symmetry with interculturality. For this mosaics from different countries are used.
20. Chiral symmetry restoration and quasi-elastic electron-nucleus scattering
International Nuclear Information System (INIS)
Henley, E.M.; Krein, G.
1989-01-01
Chiral symmetry is known to be an important concept in hadronic interactions. It holds in QCD, but is known to be broken at low energies. It is therefore useful to study chiral symmetry and its breaking together with its consequences in nuclear physics. It is the latter phenomena we consider here. It is difficult to study nonperturbative QCD at low energies and models are needed. The Nambu-Jona-Lasinio (NJL) model fits this category; it incorporates chiral symmetry and its breaking, and allows one to study its effects in nucleons and nuclei. In particular, the constituent quark mass varies with density (ρ) and temperature (T). At high ρ and T chiral symmetry is restored. It is the ρ dependence which yields important effects in electron scattering due to partial restoration of chiral symmetry in nuclei. We begin with the NJL model with a small chiral symmetry breaking
1. Symmetry aspects in emergent quantum mechanics
Science.gov (United States)
Elze, Hans-Thomas
2009-06-01
We discuss an explicit realization of the dissipative dynamics anticipated in the proof of 't Hooft's existence theorem, which states that 'For any quantum system there exists at least one deterministic model that reproduces all its dynamics after prequantization'. - There is an energy-parity symmetry hidden in the Liouville equation, which mimics the Kaplan-Sundrum protective symmetry for the cosmological constant. This symmetry may be broken by the coarse-graining inherent in physics at scales much larger than the Planck length. We correspondingly modify classical ensemble theory by incorporating dissipative fluctuations (information loss) - which are caused by discrete spacetime continually 'measuring' matter. In this way, aspects of quantum mechanics, such as the von Neumann equation, including a Lindblad term, arise dynamically and expectations of observables agree with the Born rule. However, the resulting quantum coherence is accompanied by an intrinsic decoherence and continuous localization mechanism. Our proposal leads towards a theory that is linear and local at the quantum mechanical level, but the relation to the underlying classical degrees of freedom is nonlocal.
2. Financial Symmetry and Moods in the Market
Science.gov (United States)
Savona, Roberto; Soumare, Maxence; Andersen, Jørgen Vitting
2015-01-01
This paper studies how certain speculative transitions in financial markets can be ascribed to a symmetry break that happens in the collective decision making. Investors are assumed to be boundedly rational, using a limited set of information including past price history and expectations on future dividends. Investment strategies are dynamically changed based on realized returns within a game theoretical scheme with Nash equilibria. In such a setting, markets behave as complex systems whose payoffs reflect an intrinsic financial symmetry that guarantees equilibrium in price dynamics (fundamentalist state) until the symmetry is broken, leading to bubble or anti-bubble scenarios (speculative state). We model such a two-phase transition in a micro-to-macro scheme through a Ginzburg-Landau-based power expansion leading to a market temperature parameter which modulates the state transitions in the market. Via simulations we prove that transitions in the market price dynamics can be phenomenologically explained by the number of traders, the number of strategies and the amount of information used by agents, all included in our market temperature parameter. PMID:25856392
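The two-phase picture invoked here can be sketched with the standard Ginzburg-Landau quartic free energy F(m) = a(T - Tc) m² + b m⁴: above the critical "market temperature" Tc the only minimum is m = 0 (symmetric, fundamentalist state), while below Tc a nonzero order parameter appears (broken symmetry, speculative state). The coefficients below are arbitrary illustrations, not the paper's calibration:

```python
import math

def order_parameter(T, Tc=1.0, a=1.0, b=1.0):
    """Minimizer of F(m) = a*(T - Tc)*m**2 + b*m**4.
    For T >= Tc the minimum is m = 0 (symmetric phase); for T < Tc it is
    m* = sqrt(a*(Tc - T) / (2*b)) (symmetry-broken branch)."""
    if T >= Tc:
        return 0.0
    return math.sqrt(a * (Tc - T) / (2 * b))

# The order parameter switches on continuously as T drops below Tc.
for T in (1.5, 1.0, 0.5):
    print(f"T = {T}: m* = {order_parameter(T):.4f}")
```

In the paper's analogy, m would play the role of the deviation of price dynamics from the fundamentalist equilibrium, and T the market temperature built from the number of traders, strategies and information.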
3. Conformal Symmetry as a Template for QCD
Energy Technology Data Exchange (ETDEWEB)
Brodsky, S
2004-08-04
Conformal symmetry is broken in physical QCD; nevertheless, one can use conformal symmetry as a template, systematically correcting for its nonzero {beta} function as well as higher-twist effects. For example, commensurate scale relations which relate QCD observables to each other, such as the generalized Crewther relation, have no renormalization scale or scheme ambiguity and retain a convergent perturbative structure which reflects the underlying conformal symmetry of the classical theory. The ''conformal correspondence principle'' also dictates the form of the expansion basis for hadronic distribution amplitudes. The AdS/CFT correspondence connecting superstring theory to superconformal gauge theory has important implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for hard exclusive processes as well as determining essential aspects of hadronic light-front wavefunctions. Theoretical and phenomenological evidence is now accumulating that QCD couplings based on physical observables such as {tau} decay become constant at small virtuality; i.e., effective charges develop an infrared fixed point in contradiction to the usual assumption of singular growth in the infrared. The near-constant behavior of effective couplings also suggests that QCD can be approximated as a conformal theory even at relatively small momentum transfer. The importance of using an analytic effective charge such as the pinch scheme for unifying the electroweak and strong couplings and forces is also emphasized.
6. Symmetry and symmetry breaking in modern physics
International Nuclear Information System (INIS)
Barone, M; Theophilou, A K
2008-01-01
In modern physics, the theory of symmetry, i.e. group theory, is a basic tool for understanding and formulating the fundamental principles of Physics, like Relativity, Quantum Mechanics and Particle Physics. In this work we focus on the relation between Mathematics, Physics and objective reality
7. SO(8) fermion dynamical symmetry and strongly correlated quantum Hall states in monolayer graphene
Science.gov (United States)
Wu, Lian-Ao; Murphy, Matthew; Guidry, Mike
2017-03-01
A formalism is presented for treating strongly correlated graphene quantum Hall states in terms of an SO(8) fermion dynamical symmetry that includes pairing as well as particle-hole generators. The graphene SO(8) algebra is isomorphic to an SO(8) algebra that has found broad application in nuclear physics, albeit with physically very different generators, and exhibits a strong formal similarity to SU(4) symmetries that have been proposed to describe high-temperature superconductors. The well-known SU(4) symmetry of quantum Hall ferromagnetism for single-layer graphene is recovered as one subgroup of SO(8), but the dynamical symmetry structure associated with the full set of SO(8) subgroup chains extends quantum Hall ferromagnetism and allows analytical many-body solutions for a rich set of collective states exhibiting spontaneously broken symmetry that may be important for the low-energy physics of graphene in strong magnetic fields. The SO(8) symmetry permits a natural definition of generalized coherent states that correspond to symmetry-constrained Hartree-Fock-Bogoliubov solutions, or equivalently a microscopically derived Ginzburg-Landau formalism, exhibiting the interplay between competing spontaneously broken symmetries in determining the ground state.
8. Wess-Zumino model as linear σ-model of spontaneously broken conformal and OSp(1,4)-supersymmetries
International Nuclear Information System (INIS)
Ivanov, E.A.
1979-01-01
The massless Wess-Zumino model is shown to exhibit the spontaneous breaking of global conformal and orthosymplectic supersymmetries on account of the Fubini-type classical solutions to the equations of motion. The group structure of the spontaneously broken phase is studied and its particle spectrum is analyzed. The little group of the ground state is found to be the graded subgroup OSp(1,4) of the conformal supergroup. The symmetry with respect to another OSp(1,4) subgroup is broken to O(2,3) symmetry, with the emergence of a massive Goldstone fermion. The superfield Weyl transformation is defined and with its help the model action is rewritten in terms of the superspace OSp(1,4)/O(1,3), a spinorial extension of anti de Sitter space. In such a representation the spontaneously broken phase admits the standard σ-model interpretation. We also construct the OSp(1,4)-analog of the massive Wess-Zumino model and examine its vacuum structure. An effect of the spontaneous breaking of P- and CP-parities with the strength related to the anti de Sitter radius is found
9. Nonabelian family symmetry and the origin of fermion masses and mixing angles
International Nuclear Information System (INIS)
Soldate, M.; Reno, M.H.; Hill, C.T.
1986-01-01
The origin of fermion masses and mixing angles is studied in a class of gauged family-symmetry models broken by elementary Higgs scalars at ≅10³ TeV. It is found that large hierarchies among fermion masses can be produced more naturally in a model with four generations rather than three. (orig.)
10. On the SL(2,R) symmetry in Yang-Mills theories in the Landau, Curci-Ferrari and maximal abelian gauge
International Nuclear Information System (INIS)
Dudal, David; Verschelde, Henri; Rodino Lemes, Vitor Emanuel; Sarandy, Marcelo S.; Sorella, Silvio Paolo; Picariello, Marco
2002-01-01
The existence of a SL(2;R) symmetry is discussed in SU(N) Yang-Mills in the maximal abelian gauge. This symmetry, also present in the Landau and Curci-Ferrari gauge, ensures the absence of tachyons in the maximal abelian gauge. In all these gauges, SL(2;R) turns out to be dynamically broken by ghost condensates. (author)
11. Effects of rotational symmetry breaking in polymer-coated nanopores
Science.gov (United States)
Osmanović, D.; Kerr-Winter, M.; Eccleston, R. C.; Hoogenboom, B. W.; Ford, I. J.
2015-01-01
The statistical theory of polymers tethered around the inner surface of a cylindrical channel has traditionally employed the assumption that the equilibrium density of the polymers is independent of the azimuthal coordinate. However, simulations have shown that this rotational symmetry can be broken when there are attractive interactions between the polymers. We investigate the phases that emerge in these circumstances, and we quantify the effect of the symmetry assumption on the phase behavior of the system. In the absence of this assumption, one can observe large differences in the equilibrium densities between the rotationally symmetric case and the non-rotationally symmetric case. A simple analytical model is developed that illustrates the driving thermodynamic forces responsible for this symmetry breaking. Our results have implications for the current understanding of the behavior of polymers in cylindrical nanopores.
13. Spontaneous symmetry breaking in the $S_3$-symmetric scalar sector
CERN Document Server
Emmanuel-Costa, D.; Osland, P.; Rebelo, M.N.
2016-02-23
We present a detailed study of the vacua of the $S_3$-symmetric three-Higgs-doublet potential, specifying the region of parameters where these minimisation solutions occur. We work with a CP conserving scalar potential and analyse the possible real and complex vacua with emphasis on the cases in which the CP symmetry can be spontaneously broken. Results are presented both in the reducible-representation framework of Derman, and in the irreducible-representation framework. Mappings between these are given. Some of these implementations can in principle accommodate dark matter and for that purpose it is important to identify the residual symmetries of the potential after spontaneous symmetry breakdown. We are also concerned with constraints from vacuum stability.
14. Planck driven by vision, broken by war
CERN Document Server
Brown, Brandon R
2015-01-01
Planck's Law, an equation used by physicists to determine the radiation leaking from any object in the universe, was described by Albert Einstein as "the basis of all twentieth-century physics." Max Planck is credited with being the father of quantum theory, and his work laid the foundation for our modern understanding of matter and energetic processes. But Planck's story is not well known, especially in the United States. A German physicist working during the first half of the twentieth century, his library, personal journals, notebooks, and letters were all destroyed with his home in World War II. What remains, other than his contributions to science, are handwritten letters in German shorthand, and tributes from other scientists of the time, including his close friend Albert Einstein. In Planck: Driven by Vision, Broken by War, Brandon R. Brown interweaves the voices and writings of Planck, his family, and his contemporaries-with many passages appearing in English for the first time-to create a portrait of...
15. Imprints of supersymmetry in the Lorentz-symmetry breaking of Gauge Theories
Energy Technology Data Exchange (ETDEWEB)
Belich, H [Universidade Federal do Espirito Santo (UFES), Vitoria, ES (Brazil); Dias, G S; Leal, F J.L. [Instituto Federal de Educacao, Ciencia e Tecnologia do Espirito Santo (IFES), Vitoria, ES (Brazil); Durand, L G; Helayel-Neto, Jose Abdalla; Spalenza, W [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil); Grupo de Fisica Teorica Jose Leite Lopes (GFT-JLL), Petropolis, RJ (Brazil)
2011-07-01
Full text: The breaking of Lorentz symmetry that may take place at very high energies opens up a venue for the discussion of the interplay between the violations of supersymmetry and relativistic symmetry. Recently, there have appeared in the literature models which propose a residual (non-relativistic) supersymmetry after Lorentz symmetry has been broken in a Horava gravity scenario. We here propose an N=1-supersymmetric Abelian gauge model which realises the breaking of Lorentz invariance by means of a CPT-even term. Our attempt assumes the point of view that supersymmetry and Lorentz symmetry are broken down at the same scale. If this is the case, the fermionic sector of the supermultiplets that accomplish the breaking of the symmetries into consideration may give rise to condensates that play an important role in the photon and photino dispersion relations. Contemporarily, they may also point to a more fundamental origin for the (bosonic) tensors usually associated to the backgrounds that parametrize Lorentz-symmetry breaking. We also highlight that, by studying the violation of Lorentz symmetry in connection with supersymmetry, we find out that the Myers-Pospelov Electrodynamics, proposed on the basis of an analysis of the set of dimension-five operators, naturally appears in the bosonic sector of our model. Also, as a result of the interconnection between the supersymmetry and Lorentz-symmetry breakings, the photino-photino and photon-photino mixings that correspond to the supersymmetric completion of the Myers-Pospelov purely photonic terms come out. Finally, we present some comments on the possible modifications the supersymmetric fermions may introduce in the dispersion relations for particles at (high) energies close to the scale where supersymmetry and Lorentz symmetry are broken. (author)
17. Hidden gauge symmetry
International Nuclear Information System (INIS)
O'Raifeartaigh, L.
1979-01-01
This review describes the principles of hidden gauge symmetry and of its application to the fundamental interactions. The emphasis is on the structure of the theory rather than on the technical details and, in order to emphasise the structure, gauge symmetry and hidden symmetry are first treated as independent phenomena before being combined into a single (hidden gauge symmetric) theory. The main application of the theory is to the weak and electromagnetic interactions of the elementary particles, and although models are used for comparison with experiment and for illustration, emphasis is placed on those features of the application which are model-independent. (author)
18. Physics from symmetry
CERN Document Server
Schwichtenberg, Jakob
2015-01-01
This is a textbook that derives the fundamental theories of physics from symmetry. It starts by introducing, in a completely self-contained way, all mathematical tools needed to use symmetry ideas in physics. Thereafter, these tools are put into action and, by using symmetry constraints, the fundamental equations of Quantum Mechanics, Quantum Field Theory, Electromagnetism, and Classical Mechanics are derived. As a result, the reader is able to understand the basic assumptions behind, and the connections between, the modern theories of physics. The book concludes with first applications of the previously derived equations.
19. Traces of Lorentz symmetry breaking in a hydrogen atom at ground state
Science.gov (United States)
Borges, L. H. C.; Barone, F. A.
2016-02-01
Some traces of a specific Lorentz symmetry breaking scenario in the ground state of the hydrogen atom are investigated. We use standard Rayleigh-Schrödinger perturbation theory in order to obtain the corrections to the ground state energy and the wave function. It is shown that an induced four-pole moment arises, due to the Lorentz symmetry breaking. The model considered is the one studied in Borges et al. (Eur Phys J C 74:2937, 2014), where the Lorentz symmetry is broken in the electromagnetic sector.
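The first-order Rayleigh-Schrödinger correction this abstract relies on, E₁ = ⟨ψ₀|V|ψ₀⟩, is easy to verify numerically for the hydrogen 1s state. The radial perturbation V = λr below is a toy stand-in chosen for this sketch, not the Lorentz-violating background of the cited model:

```python
import numpy as np

# Hydrogen 1s ground state in atomic units: psi_0(r) = exp(-r)/sqrt(pi).
r = np.linspace(1e-6, 40.0, 100_000)
psi0_sq = np.exp(-2.0 * r) / np.pi

lam = 0.01       # toy coupling standing in for the symmetry-breaking strength
V = lam * r      # illustrative spherically symmetric perturbation (assumption)

# First-order Rayleigh-Schrodinger shift E1 = <psi0|V|psi0>, integrated over
# all space with the spherical volume element 4*pi*r**2 dr (trapezoid rule).
integrand = psi0_sq * V * 4.0 * np.pi * r**2
E1 = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)))

# For V = lam*r the exact answer is lam*<r> = 1.5*lam in atomic units.
assert abs(E1 - 1.5 * lam) < 1e-6
```

The same machinery extends to the second-order energy and first-order wave-function corrections, which is where state-mixing effects such as induced moments show up.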
2. Generalized symmetry algebras
International Nuclear Information System (INIS)
Dragon, N.
1979-01-01
The possible use of trilinear algebras as symmetry algebras for para-Fermi fields is investigated. The shortcomings of the examples are argued to be a general feature of such generalized algebras. (author)
3. Gauge symmetry from decoupling
Directory of Open Access Journals (Sweden)
C. Wetterich
2017-02-01
Gauge symmetries emerge from a redundant description of the effective action for light degrees of freedom after the decoupling of heavy modes. This redundant description avoids the use of explicit constraints in configuration space. For non-linear constraints the gauge symmetries are non-linear. In a quantum field theory setting the gauge symmetries are local and can describe Yang–Mills theories or quantum gravity. We formulate gauge invariant fields that correspond to the non-linear light degrees of freedom. In the context of functional renormalization gauge symmetries can emerge if the flow generates or preserves large mass-like terms for the heavy degrees of freedom. They correspond to a particular form of gauge fixing terms in quantum field theories.
4. Segmentation Using Symmetry Deviation
DEFF Research Database (Denmark)
Hollensen, Christian; Højgaard, L.; Specht, L.
2011-01-01
of the CT-scans into a single atlas. Afterwards the standard deviation of anatomical symmetry for the 20 normal patients was evaluated using non-rigid registration and registered onto the atlas to create an atlas for normal anatomical symmetry deviation. The same non-rigid registration was used on the 10 hypopharyngeal cancer patients to find anatomical symmetry and evaluate it against the standard deviation of the normal patients to locate pathologic volumes. Combining the information with an absolute PET threshold of 3 Standard uptake value (SUV) a volume was automatically delineated. The overlap of automated ... The standard deviation of the anatomical symmetry, seen in figure for one patient along CT and PET, was extracted for normal patients and compared with the deviation from cancer patients, giving a new way of determining cancer pathology location. Using the novel method an overlap concordance index ...
5. Statistical symmetries in physics
International Nuclear Information System (INIS)
1994-01-01
Every law of physics is invariant under some group of transformations and is therefore the expression of some type of symmetry. Symmetries are classified as geometrical, dynamical or statistical. At the most fundamental level, statistical symmetries are expressed in the field theories of the elementary particles. This paper traces some of the developments from the discovery of Bose statistics, one of the two fundamental symmetries of physics. A series of generalizations of Bose statistics is described. A supersymmetric generalization accommodates fermions as well as bosons, and further generalizations, including parastatistics, modular statistics and graded statistics, accommodate particles with properties such as 'colour'. A factorization of elements of gl(n_b, n_f) can be used to define truncated boson operators. A general construction is given for q-deformed boson operators, and explicit constructions of the same type are given for various 'deformed' algebras. A summary is given of some of the applications and potential applications. 39 refs., 2 figs
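The "q-deformed boson operators" mentioned in this abstract admit a compact concrete realization. The sketch below uses the Arik-Coon deformation, a a† − q a† a = 1 with q-number [n]_q = (1 − qⁿ)/(1 − q), which is one standard choice assumed here for illustration (the review covers several constructions), truncated to a finite matrix:

```python
import numpy as np

def q_annihilation(q, dim):
    """Truncated Arik-Coon q-boson annihilation operator:
    a|n> = sqrt([n]_q)|n-1>, with q-number [n]_q = (1 - q**n)/(1 - q)."""
    n = np.arange(1, dim)
    return np.diag(np.sqrt((1.0 - q ** n) / (1.0 - q)), k=1)

q, dim = 0.7, 12
a = q_annihilation(q, dim)
ad = a.T                       # real entries, so the adjoint is the transpose

# Defining relation a a† - q a† a = 1: it holds on every level of the
# truncated space except the highest one, where the cutoff intervenes
# (the key identity is [n+1]_q - q*[n]_q = 1).
R = a @ ad - q * ad @ a
assert np.allclose(np.diag(R)[:-1], 1.0)
assert np.allclose(R - np.diag(np.diag(R)), 0.0)   # R is purely diagonal
```

In the limit q → 1 the q-number [n]_q reduces to n and the ordinary boson algebra is recovered.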
6. Wigner's Symmetry Representation Theorem
Wigner's Symmetry Representation Theorem: At the Heart of Quantum Field Theory! Aritra Kr Mukhopadhyay. Resonance – Journal of Science Education, General Article, Volume 19, Issue 10, October 2014, pp. 900-916.
7. Dynamical symmetries for fermions
International Nuclear Information System (INIS)
Guidry, M.
1989-01-01
An introduction is given to the Fermion Dynamical Symmetry Model (FDSM). The analytical symmetry limits of the model are then applied to the calculation of physical quantities such as ground-state masses and B(E2) values in heavy nuclei. These comparisons with data provide strong support for a new principle of collective motion, the Dynamical Pauli Effect, and suggest that dynamical symmetries which properly account for the Pauli principle are much more persistent in nuclear structure than the corresponding boson symmetries. Finally, we present an assessment of criticisms which have been voiced concerning the FDSM, and a discussion of new phenomena and ''exotic spectroscopy'' which may be suggested by the model. 14 refs., 8 figs., 4 tabs
8. Flavour from accidental symmetries
International Nuclear Information System (INIS)
Ferretti, Luca; King, Stephen F.; Romanino, Andrea
2006-01-01
We consider a new approach to fermion masses and mixings in which no special 'horizontal' dynamics is invoked to account for the hierarchical pattern of charged fermion masses and for the peculiar features of neutrino masses. The hierarchy follows from the vertical, family-independent structure of the model, in particular from the breaking pattern of the Pati-Salam group. The lightness of the first two fermion families can be related to two family symmetries emerging in this context as accidental symmetries
9. Broken ergodicity in two-dimensional homogeneous magnetohydrodynamic turbulence
International Nuclear Information System (INIS)
Shebalin, John V.
2010-01-01
Two-dimensional (2D) homogeneous magnetohydrodynamic (MHD) turbulence has many of the same qualitative features as three-dimensional (3D) homogeneous MHD turbulence. These features include several ideal (i.e., nondissipative) invariants along with the phenomenon of broken ergodicity (defined as nonergodic behavior over a very long time). Broken ergodicity appears when certain modes act like random variables with mean values that are large compared to their standard deviations, indicating a coherent structure or dynamo. Recently, the origin of broken ergodicity in 3D MHD turbulence that is manifest in the lowest wavenumbers was found. Here, we study the origin of broken ergodicity in 2D MHD turbulence. It will be seen that broken ergodicity in ideal 2D MHD turbulence can be manifest in the lowest wavenumbers of a finite numerical model for certain initial conditions or in the highest wavenumbers for another set of initial conditions. The origins of broken ergodicity in an ideal 2D homogeneous MHD turbulence are found through an eigenanalysis of the covariance matrices of the probability density function and by an examination of the associated entropy functional. When the values of ideal invariants are kept fixed and grid size increases, it will be shown that the energy in a few large modes remains constant, while the energy in any other mode is inversely proportional to grid size. Also, as grid size increases, we find that broken ergodicity becomes manifest at more and more wavenumbers.
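The operational signature of broken ergodicity described above, modes whose mean is large compared to their standard deviation, can be sketched with two toy time series; the amplitudes and noise levels below are arbitrary illustrative choices, not values from the MHD simulations:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000                                     # number of time samples

# Toy mode amplitudes: one coherent ('dynamo-like'), one ordinary turbulent.
coherent = 5.0 + 0.1 * rng.standard_normal(T)   # large mean, small fluctuations
turbulent = rng.standard_normal(T)              # zero-mean fluctuations

def coherence_ratio(x):
    """|time average| / standard deviation; >> 1 signals a coherent structure
    (nonergodic behavior), while ~ 0 is what ergodic mixing predicts."""
    return abs(x.mean()) / x.std()

assert coherence_ratio(coherent) > 10.0    # nonergodic, structure-carrying mode
assert coherence_ratio(turbulent) < 0.1    # ergodic-looking mode
```

In the ideal-MHD setting of the abstract, the analogous diagnostic is applied per Fourier mode, with the coherent behavior concentrated at the lowest (or, for some initial conditions, highest) wavenumbers.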
10. Parity Symmetry and Parity Breaking in the Quantum Rabi Model with Addition of Ising Interaction
International Nuclear Information System (INIS)
Wang Qiong; He Zhi; Yao Chun-Mei
2015-01-01
We explore the possibility to generate new parity symmetry in the quantum Rabi model after a bias is introduced. In contrast to a mathematical treatment in a previous publication [J. Phys. A 46 (2013) 265302], we consider a physically realistic method by involving an additional spin into the quantum Rabi model to couple with the original spin by an Ising interaction, and then the parity symmetry is broken as well as the scaling behavior of the ground state by introducing a bias. The rule can be found that the parity symmetry is broken by introducing a bias and then restored by adding new degrees of freedom. Experimental feasibility of realizing the models under discussion is investigated. (paper)
11. On the membrane paradigm and spontaneous breaking of horizon BMS symmetries
International Nuclear Information System (INIS)
Eling, Christopher; Oz, Yaron
2016-01-01
We consider a BMS-type symmetry action on isolated horizons in asymptotically flat spacetimes. From the viewpoint of the non-relativistic field theory on a horizon membrane, supertranslations shift the field theory spatial momentum. The latter is related by a Ward identity to the particle number symmetry current and is spontaneously broken. The corresponding Goldstone boson shifts the horizon angular momentum and can be detected quantum mechanically. Similarly, area preserving superrotations are spontaneously broken on the horizon membrane and we identify the corresponding gapless modes. In asymptotically AdS spacetimes we study the BMS-type symmetry action on the horizon in a holographic superfluid dual. We identify the horizon supertranslation Goldstone boson as the holographic superfluid Goldstone mode.
12. Evaluation of the Legibility of Broken Lines for Partial Sight
OpenAIRE
小林, 秀之
2000-01-01
The present study was designed to investigate the legibility of broken lines for persons with partial sight. The subjects were 10 persons with simulated partial sight and 4 persons with partial sight; the simulation was obtained using filters and convex lenses. Thirty kinds of broken lines were evaluated with an original test in which the subjects read the directions of the broken lines, distinguishing them from solid lines. The thickness of the lines varied from 0.1 mm to 0.7 mm in 4 steps. The results...
13. Explicitly broken supersymmetry with exactly massless moduli
Energy Technology Data Exchange (ETDEWEB)
Dong, Xi [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,Stanford, CA 94305 (United States); Freedman, Daniel Z. [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,Stanford, CA 94305 (United States); Center for Theoretical Physics and Department of Mathematics,Massachusetts Institute of Technology,Cambridge, MA 02139 (United States); Zhao, Yue [Stanford Institute for Theoretical Physics, Department of Physics, Stanford University,Stanford, CA 94305 (United States)
2016-06-16
The AdS/CFT correspondence is applied to an analogue of the little hierarchy problem in three-dimensional supersymmetric theories. The bulk is governed by a supergravity theory in which a U(1) × U(1) R-symmetry is gauged by Chern-Simons fields. The bulk theory is deformed by a boundary term quadratic in the gauge fields. It breaks SUSY completely and sources an exactly marginal operator in the dual CFT. SUSY breaking is communicated by gauge interactions to bulk scalar fields and their spinor superpartners. The bulk-to-boundary propagator of the Chern-Simons fields is a total derivative with respect to the bulk coordinates. Integration by parts and the Ward identity permit evaluation of SUSY breaking effects to all orders in the strength of the deformation. The R-charges of scalars and spinors differ so large SUSY breaking mass shifts are generated. Masses of R-neutral particles such as scalar moduli are not shifted to any order in the deformation strength, despite the fact that they may couple to R-charged fields running in loops. We also obtain a universal deformation formula for correlation functions under an exactly marginal deformation by a product of holomorphic and anti-holomorphic U(1) currents.
14. Quadratic contributions of softly broken supersymmetry in the light of loop regularization
Energy Technology Data Exchange (ETDEWEB)
Bai, Dong [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China); Wu, Yue-Liang [Chinese Academy of Sciences, Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Beijing (China); International Centre for Theoretical Physics Asia-Pacific (ICTP-AP), Beijing (China); University of Chinese Academy of Sciences, School of Physical Sciences, Beijing (China)
2017-09-15
Loop regularization (LORE) is a novel regularization scheme in modern quantum field theories. It makes no change to the spacetime structure and respects both gauge symmetries and supersymmetry. As a result, LORE should be useful in calculating loop corrections in supersymmetry phenomenology. To further demonstrate its power, in this article we revisit in the light of LORE the old issue of the absence of quadratic contributions (quadratic divergences) in softly broken supersymmetric field theories. It is shown explicitly by Feynman diagrammatic calculations that, up to two loops, the Wess-Zumino model with soft supersymmetry breaking terms (WZ' model), one of the simplest models with explicit supersymmetry breaking, is free of quadratic contributions. All the quadratic contributions cancel each other exactly, which is consistent with results dictated by the supergraph techniques. (orig.)
15. Nonrestoration of spontaneously broken P, CP and PQ at high temperature
International Nuclear Information System (INIS)
Dvali, G.; Melfo, A.; Senjanovic, G.
1996-01-01
The possibility of P and CP violation at high temperature in models where these symmetries are spontaneously broken is investigated. It is found that in minimal models that include singlet fields, high T nonrestoration is possible for a wide range of parameters of the theory, in particular in models of CP violation with a CP-odd Higgs field. The same holds true for the invisible axion version of the Peccei-Quinn mechanism. This can provide both a way out for the domain wall problem in these theories and the CP violation required for baryogenesis. In the case of spontaneous P violation it turns out that high T nonrestoration requires going beyond the minimal model. The results are shown to hold true when next-to-leading order effects are considered. (author). 33 refs, 3 figs
16. Large Top-Quark Mass and Nonlinear Representation of Flavor Symmetry
International Nuclear Information System (INIS)
Feldmann, Thorsten; Mannel, Thomas
2008-01-01
We consider an effective theory (ET) approach to flavor-violating processes beyond the standard model, where the breaking of flavor symmetry is described by spurion fields whose low-energy vacuum expectation values are identified with the standard model Yukawa couplings. Insisting on canonical mass dimensions for the spurion fields, the large top-quark Yukawa coupling also implies a large expectation value for the associated spurion, which breaks part of the flavor symmetry already at the UV scale Λ of the ET. Below that scale, flavor symmetry in the ET is represented in a nonlinear way by introducing Goldstone modes for the partly broken flavor symmetry and spurion fields transforming under the residual symmetry. As a result, the dominance of certain flavor structures in rare quark decays can be understood in terms of the 1/Λ expansion in the ET.
17. Spontaneous symmetry breaking in vortex systems with two repulsive lengthscales.
Science.gov (United States)
Curran, P J; Desoky, W M; Milosević, M V; Chaves, A; Laloë, J-B; Moodera, J S; Bending, S J
2015-10-23
Scanning Hall probe microscopy (SHPM) has been used to study vortex structures in thin epitaxial films of the superconductor MgB2. Unusual vortex patterns observed in MgB2 single crystals have previously been attributed to a competition between short-range repulsive and long-range attractive vortex-vortex interactions in this two band superconductor; the type 1.5 superconductivity scenario. Our films have much higher levels of disorder than bulk single crystals and therefore both superconducting condensates are expected to be pushed deep into the type 2 regime with purely repulsive vortex interactions. We observe broken symmetry vortex patterns at low fields in all samples after field-cooling from above Tc. These are consistent with those seen in systems with competing repulsions on disparate length scales, and remarkably similar structures are reproduced in dirty two band Ginzburg-Landau calculations, where the simulation parameters have been defined by experimental observations. This suggests that in our dirty MgB2 films, the symmetry of the vortex structures is broken by the presence of vortex repulsions with two different lengthscales, originating from the two distinct superconducting condensates. This represents an entirely new mechanism for spontaneous symmetry breaking in systems of superconducting vortices, with important implications for pinning phenomena and high current density applications.
18. Frozen and broken color: a matrix Schroedinger equation in the semiclassical limit
International Nuclear Information System (INIS)
Orbach, H.S.
1981-01-01
We consider the case of frozen color, i.e., where global color symmetry remains exact, but where colored states have a mass large compared to color-singlet mesons. Using semiclassical WKB formalism, we construct the spectrum of bound states. In order to determine the charge of the constituents, we then consider deep-inelastic scattering of an external probe (e.g., a lepton) from our one-dimensional meson. We calculate explicitly the structure function, W, in the WKB limit and show how Lipkin's mechanism is manifested, as well as how scaling behavior arises. We derive the WKB formalism as a special case of a method of obtaining WKB-type solutions for generalized Schroedinger equations for which the Hamiltonian is an arbitrary matrix function of any number of pairs of canonical operators. We generalize these considerations to the case of broken color symmetry - but where the breaking is not so strong as to allow low-lying states to have a large amount of mixing with the colored states. In this case, the degeneracy of excited colored states can be broken. We find that local excitation of color guarantees global excitation of color; i.e., if at a given energy colored semiclassical states can be constructed with size comparable to that of the ground state wave function, colored states of that energy will also exist in the spectrum of the full theory and will be observed. However, global existence of color does not guarantee the excitation of colored states via deep-inelastic processes.
19. A Note on a Broken-Cycle Theorem for Hypergraphs
Directory of Open Access Journals (Sweden)
Trinks Martin
2014-08-01
Whitney’s Broken-Cycle Theorem states the chromatic polynomial of a graph as a sum over special edge subsets. We give a definition of cycles in hypergraphs that preserves the statement of the theorem in the hypergraph setting.
20. Effects of Initial Symmetry on the Global Symmetry of One-Dimensional Legal Cellular Automata
Directory of Open Access Journals (Sweden)
Ikuko Tanaka
2015-09-01
To examine the development of pattern formation from the viewpoint of symmetry, we applied a two-dimensional discrete Walsh analysis to a one-dimensional cellular automata model under two types of regular initial conditions. The amount of symmetropy of cellular automata (CA) models under regular and random initial conditions corresponds to three Wolfram’s classes of CAs, identified as Classes II, III, and IV. Regular initial conditions occur in two groups. One group that makes a broken, regular pattern formation has four types of symmetry, whereas the other group that makes a higher hierarchy pattern formation has only two types. Additionally, both final pattern formations show an increased amount of symmetropy as time passes. Moreover, the final pattern formations are affected by iterations of base rules of CA models of chaos dynamical systems. The growth design formations limit possibilities: the ratio of developing final pattern formations under a regular initial condition decreases in the order of Classes III, II, and IV. This might be related to the difference in degree in reference to surrounding conditions. These findings suggest that calculations of symmetries of the structures of one-dimensional cellular automata models are useful for revealing rules of pattern generation for animal bodies.
1. Symmetry non-restoration at high temperature and supersymmetry
CERN Document Server
Dvali, Gia
1996-01-01
We analyse the high temperature behaviour of softly broken supersymmetric theories, taking into account the role played by effective non-renormalizable terms generated by the decoupling of superheavy degrees of freedom or the Planck scale physics. It turns out that discrete or continuous symmetries, spontaneously broken at intermediate scales, may never be restored, at least up to temperatures of the cutoff scale. There are a few interesting differences from the usual non-restoration case in non-supersymmetric theories, where one needs at least two Higgs fields and non-restoration takes place only for a range of parameters. We show that with non-renormalizable interactions taken into account the non-restoration can occur for any nonzero range of parameters even for a single Higgs field. We show that such theories in general solve the cosmological domain wall problem, since the thermal production of the dangerous domain walls is enormously suppressed.
2. Broken homes, parental psychiatric illness, and female delinquency.
Science.gov (United States)
Offord, David R; Abrams, Nola; Allen, Nancy; Poushinsky, Mary
1979-04-01
Fifty-nine families with delinquent daughters were compared with 59 families, matched on socioeconomic class, with daughters of the same age who were not delinquent. The frequency of broken homes was found to be the strongest distinguishing factor between probands and controls. Parental disabilities appeared to play a part in the incidence of delinquency among girls, particularly when the disabilities result in a broken home.
3. Characterization and Preparation of Broken Rice Proteins Modified by Proteases
Directory of Open Access Journals (Sweden)
Lixia Hou
2010-01-01
Broken rice is an underutilized by-product of milling. Proteins prepared from broken rice by treatments with alkaline protease and papain have been characterized with regard to nutritional and functional properties. The protein content and the protein recovery were 56.45 and 75.45 % for alkaline protease treatment, and 65.45 and 46.32 % for papain treatment, respectively. Protease treatment increased the lysine and valine content, leading to a more balanced amino acid profile. Broken rice proteins had high emulsifying capacity, 58.3–71.6 % at neutral pH, and adequate water holding capacity, ranging from 1.96 to 2.93 g/g of proteins. At pH=7.0, the broken rice protein had the highest water holding capacity and the best interfacial activities (emulsifying capacity, emulsifying stability, foaming capacity and foaming stability), which may be the result of the higher solubility at pH=7.0. The interfacial activities increased with the increase in the mass fraction of broken rice proteins. The proteins prepared by the papain treatment had higher water holding capacity (p>0.05) and emulsifying capacity (p<0.05) than alkaline protease treatment at the same pH or mass fraction. To test the fortification of food products with broken rice proteins, pork sausages containing the proteins were prepared. Higher yield of the sausages was obtained with the increased content of broken rice proteins, in the range of 2.0–9.0 %. The results indicate that broken rice proteins have potential to be used as the protein fortification ingredient for food products.
4. Symmetry of priapulids (Priapulida). 2. Symmetry of larvae.
Science.gov (United States)
Adrianov, A V; Malakhov, V V
2001-02-01
5. Offline detection of broken rotor bars in AC induction motors
Science.gov (United States)
Powers, Craig Stephen
The detection of the broken rotor bar defect in medium- and large-sized AC induction machines is currently one of the most difficult tasks for the motor condition and monitoring industry. If a broken rotor bar defect goes undetected, it can cause a catastrophic failure of an expensive machine. If a broken rotor bar defect is falsely determined, it wastes time and money to physically tear down and inspect the machine only to find an incorrect diagnosis. Previous work in 2009 at Baker/SKF-USA in collaboration with the Korea University has developed a prototype instrument that has been highly successful in correctly detecting the broken rotor bar defect in ACIMs where other methods have failed. Dr. Sang Bin and his students at the Korea University have been using this prototype instrument to help the industry save money in the successful detection of the BRB defect. A review of the current state of motor conditioning and monitoring technology for detecting the broken rotor bar defect in ACIMs shows improved detection of this fault is still relevant. An analysis of previous work in the creation of this prototype instrument leads into the refactoring of the software and hardware into something more deployable, cost effective and commercially viable.
6. Rigidity and symmetry
CERN Document Server
Weiss, Asia; Whiteley, Walter
2014-01-01
This book contains recent contributions to the fields of rigidity and symmetry with two primary focuses: to present the mathematically rigorous treatment of rigidity of structures, and to explore the interaction of geometry, algebra, and combinatorics. Overall, the book shows how researchers from diverse backgrounds explore connections among the various discrete structures with symmetry as the unifying theme. Contributions present recent trends and advances in discrete geometry, particularly in the theory of polytopes. The rapid development of abstract polytope theory has resulted in a rich theory featuring an attractive interplay of methods and tools from discrete geometry, group theory, classical geometry, hyperbolic geometry and topology. The volume will also be a valuable source as an introduction to the ideas of both combinatorial and geometric rigidity theory and its applications, incorporating the surprising impact of symmetry. It will appeal to students at both the advanced undergraduate and gradu...
7. Physics from symmetry
CERN Document Server
Schwichtenberg, Jakob
2018-01-01
This is a textbook that derives the fundamental theories of physics from symmetry. It starts by introducing, in a completely self-contained way, all mathematical tools needed to use symmetry ideas in physics. Thereafter, these tools are put into action and by using symmetry constraints, the fundamental equations of Quantum Mechanics, Quantum Field Theory, Electromagnetism, and Classical Mechanics are derived. As a result, the reader is able to understand the basic assumptions behind, and the connections between the modern theories of physics. The book concludes with first applications of the previously derived equations. Thanks to the input of readers from around the world, this second edition has been purged of typographical errors and also contains several revised sections with improved explanations.
8. Chiral symmetry breaking and the pion quark structure
International Nuclear Information System (INIS)
Bernard, V.
1986-01-01
The mechanism of dynamical breaking of chiral symmetry in hadronic matter is first studied in the framework of the Nambu and Jona-Lasinio model on one hand and its generalisation to finite hadron size on the other hand. The analysis uses a variational procedure modelled after the BCS superconductor. Our study indicates, for example, a great sensitivity of various quantities characterizing the breaking of symmetry to the shape of the interaction. Also the mechanism of breaking of chiral symmetry is essentially related to the mechanism of confinement. When a symmetry is spontaneously broken, there exists a Goldstone particle of zero mass. This is true in our model. This particle, the pion, is obtained as the solution of a Bethe-Salpeter equation for a quark-antiquark bound state. This enables us to establish a connection between the pion as a Goldstone boson related to spontaneous symmetry breaking and the quark-antiquark structure of the pion. The finite mass of the physical pion is obtained with non-zero current quark mass. Various properties of this particle are then studied in the RPA formalism. One important point of our model is the highly collective character of the pion. 85 refs [fr]
9. Probing Fundamental Symmetries: Questioning the Very Basics of Conservation Laws
Science.gov (United States)
Mohanmurthy, Prajwal
2017-09-01
Is the Lorentz-CPT symmetry, a core component of the standard model, valid? To what extent are the CP and T symmetries broken in the strong sector? What are we doing about the existing strong-CP problem? Do neutrons oscillate (like neutral kaons) or break (Baryon - Lepton) number conservation? In this presentation, we will go over some of the experiments probing fundamental symmetries trying to answer the above questions. I will, very briefly, introduce the CompEx & nEx experiments probing the Lorentz symmetry in the electromagnetic (EM) sector, the nEDM experiment probing CP and T symmetries in the strong sector, the NStar experiment searching for neutron oscillations, and the MASS & BDX experiments searching for axion-like particles & dark matter. We will then briefly touch upon the highlights of these experiments and focus on the path we are taking towards answering those questions while also connecting the dots [experiments] with CEU. PM would like to acknowledge support from SERI SNSF Grant 2015.0594.
10. Constraints on GUTS with Coleman-Weinberg symmetry breaking
International Nuclear Information System (INIS)
Sher, M.A.
1981-01-01
A popular assumption introduced by Coleman and Weinberg is that the elementary Higgs scalars of a gauge theory are massless at the tree level; the symmetry breakdown is then entirely due to quantum radiative corrections. In grand unified theories (GUTS), this assumption becomes particularly attractive. Many GUTS have intermediate mass scales [scales of symmetry breaking between baryon number generation and SU(2) x U(1) breaking], and it is attractive to apply the Coleman-Weinberg assumption to all stages of symmetry breaking after baryon number generation. In this paper, it is shown that most such GUTS are phenomenologically unacceptable. The reason is that as the universe cools, at each scale of symmetry breaking there will be a phase transition; if the symmetry is broken a la Coleman-Weinberg, this transition is strongly first order and thus generates entropy, decreasing the previously generated baryon number to entropy ratio by a large, and perhaps unacceptable amount. The entropy generated in a general intermediate mass scale transition is calculated, and the severe constraints that any Coleman-Weinberg-type GUT with intermediate mass scales must satisfy (in order to avoid excessive entropy generation) are found. Turning to specific models, it is shown that all intermediate mass scale transitions associated with SO(10) do not satisfy these constraints; the Coleman-Weinberg form of these transitions is inconsistent with cosmological observations and is thus phenomenologically unacceptable. (orig.)
11. Symmetry, structure, and spacetime
CERN Document Server
Rickles, Dean
2007-01-01
In this book Rickles considers several interpretative difficulties raised by gauge-type symmetries (those that correspond to no change in physical state). The ubiquity of such symmetries in modern physics renders them an urgent topic in philosophy of physics. Rickles focuses on spacetime physics, and in particular classical and quantum general relativity. Here the problems posed are at their most pathological, involving the apparent disappearance of spacetime! Rickles argues that both traditional ontological positions should be replaced by a structuralist account according to which relational
12. Symmetry and inflation
International Nuclear Information System (INIS)
Chimento, Luis P.
2002-01-01
We find the group of symmetry transformations under which the Einstein equations for the spatially flat Friedmann-Robertson-Walker universe are form invariant. They relate the energy density and the pressure of the fluid to the expansion rate. We show that inflation can be obtained from nonaccelerated scenarios by a symmetry transformation. We derive the transformation rule for the spectrum and spectral index of the curvature perturbations. Finally, the group is extended to investigate inflation in the anisotropic Bianchi type-I spacetime and the brane-world cosmology
13. Renormalization of the Nambu-Jona Lasinio model and spontaneously broken Abelian Gauge model without fundamental scalar fields
International Nuclear Information System (INIS)
Snyderman, N.J.
1976-01-01
The Schwinger-Dyson equation for the Nambu-Jona-Lasinio model is solved systematically subject to the constraint of spontaneously broken chiral symmetry. The solution to this equation generates interactions not explicitly present in the original Lagrangian, and the original 4-fermion interaction is not present in the solution. The theory creates bound states with respect to which a perturbation theory consistent with the chiral symmetry is set up. The analysis suggests that this theory is renormalizable in the sense that all divergences can be grouped into a few arbitrary parameters. The renormalized propagators of this model are shown to be identical to those of a new solution to the sigma model in which the bare 4-field coupling λ₀ is chosen to be twice the π-fermion coupling g₀. Also considered is a spontaneously broken abelian gauge model without fundamental scalar fields, obtained by coupling an axial vector gauge field to the Nambu-Jona-Lasinio model. It is shown how the Goldstone consequence of spontaneous symmetry breaking is avoided in the radiation gauge, and the Guralnik, Hagen, and Kibble theorem is verified: under these conditions the global charge conservation is lost even though there is still local current conservation. This is contrasted with the Lorentz gauge situation. This also demonstrates the way the various noncovariant components of the massive gauge field combine in a gauge invariant scattering amplitude to propagate covariantly as a massive spin-1 particle, and this is compared with the Lorentz gauge calculation. Finally, a new model of interacting massless fermions is introduced, based on the models of Nambu and Jona-Lasinio, and of Bjorken, which spontaneously breaks both chiral symmetry and Lorentz invariance. The content of this model is the same as that of the gauge model without fundamental scalar fields, but without fundamental gauge fields as well.
14. Spatial and Spin Symmetry Breaking in Semidefinite-Programming-Based Hartree-Fock Theory.
Science.gov (United States)
Nascimento, Daniel R; DePrince, A Eugene
2018-05-08
The Hartree-Fock problem was recently recast as a semidefinite optimization over the space of rank-constrained two-body reduced-density matrices (RDMs) [Phys. Rev. A 2014, 89, 010502(R)]. This formulation of the problem transfers the nonconvexity of the Hartree-Fock energy functional to the rank constraint on the two-body RDM. We consider an equivalent optimization over the space of positive semidefinite one-electron RDMs (1-RDMs) that retains the nonconvexity of the Hartree-Fock energy expression. The optimized 1-RDM satisfies ensemble N-representability conditions, and ensemble spin-state conditions may be imposed as well. The spin-state conditions place additional linear and nonlinear constraints on the 1-RDM. We apply this RDM-based approach to several molecular systems and explore its spatial (point group) and spin (Ŝ² and Ŝ₃) symmetry breaking properties. When imposing Ŝ² and Ŝ₃ symmetry but relaxing point group symmetry, the procedure often locates spatial-symmetry-broken solutions that are difficult to identify using standard algorithms. For example, the RDM-based approach yields a smooth, spatial-symmetry-broken potential energy curve for the well-known BeH₂ insertion pathway. We also demonstrate numerically that, upon relaxation of Ŝ² and Ŝ₃ symmetry constraints, the RDM-based approach is equivalent to real-valued generalized Hartree-Fock theory.
15. Combining symmetry collective states with coupled-cluster theory: Lessons from the Agassi model Hamiltonian
Science.gov (United States)
Hermes, Matthew R.; Dukelsky, Jorge; Scuseria, Gustavo E.
2017-06-01
The failure of single-reference coupled-cluster theory for strongly correlated many-body systems is flagged at the mean-field level by the spontaneous breaking of one or more physical symmetries of the Hamiltonian. Restoring the symmetry of the mean-field determinant by projection reveals that coupled-cluster theory fails because it factorizes high-order excitation amplitudes incorrectly. However, symmetry-projected mean-field wave functions do not account sufficiently for dynamic (or weak) correlation. Here we pursue a merger of symmetry projection and coupled-cluster theory, following previous work along these lines that utilized the simple Lipkin model system as a test bed [J. Chem. Phys. 146, 054110 (2017), 10.1063/1.4974989]. We generalize the concept of a symmetry-projected mean-field wave function to the concept of a symmetry-projected state, in which the factorization of high-order excitation amplitudes in terms of low-order ones is guided by symmetry projection and is not exponential, and combine them with coupled-cluster theory in order to model the ground state of the Agassi Hamiltonian. This model has two separate channels of correlation and two separate physical symmetries which are broken under strong correlation. We show how the combination of symmetry collective states and coupled-cluster theory is effective in obtaining correlation energies and order parameters of the Agassi model throughout its phase diagram.
16. Spontaneous Time Symmetry Breaking in System with Mixed Strategy Nash Equilibrium: Evidences in Experimental Economics Data
Science.gov (United States)
Wang, Zhijian; Xu, Bin; Zhejiang Collaboration
2011-03-01
In social science, laboratory experiments with human subjects' interaction are a standard test-bed for studying social processes at the micro level. Usually, as in physics, the processes near equilibrium are treated as stochastic processes with time-reversal symmetry (TRS). To the best of our knowledge, near equilibrium, the breaking of time symmetry, as well as the existence of robust time anti-symmetry processes, has not been reported clearly in experimental economics until now. By employing the Markov transition method to analyze data from human-subject 2×2 games with wide parameters and mixed Nash equilibria, we study the time symmetry of the social interaction process near Nash equilibrium. We find that the time symmetry is broken, and there exist robust time anti-symmetry processes. We also report the weight of the time anti-symmetry processes in the total processes of each of the games. Evidence from laboratory marketing experiments, at the same time, is provided for one-dimensional cases. In these cases, time anti-symmetry cycles can also be captured. The proportion of time anti-symmetry processes is small, but the cycles are distinguishable.
17. Fermion mass hierarchies and flavor mixing from T' symmetry
International Nuclear Information System (INIS)
Ding Guijun
2008-01-01
We construct a supersymmetric model based on T′ × Z₃ × Z₉ flavor symmetry. At the leading order, the charged lepton mass matrix is not diagonal, T′ is broken completely, and the hierarchy in the charged lepton masses is generated naturally. Nearly tribimaximal mixing is predicted, and subleading effects induce corrections of order λ², where λ is the Cabibbo angle. Both the up quark and down quark mass matrix textures of the well-known U(2) flavor theory are produced at the leading order; realistic hierarchies in quark masses and Cabibbo-Kobayashi-Maskawa matrix elements are obtained. The vacuum alignment and subleading corrections are discussed in detail.
18. Symmetry breaking in clogging for oppositely driven particles
Science.gov (United States)
Glanz, Tobias; Wittkowski, Raphael; Löwen, Hartmut
2016-11-01
The clogging behavior of a symmetric binary mixture of colloidal particles that are driven in opposite directions through constrictions is explored by Brownian dynamics simulations and theory. A dynamical state with a spontaneously broken symmetry occurs where one species is flowing and the other is blocked for a long time, which can be tailored by the size of the constrictions. Moreover, we find self-organized oscillations in clogging and unclogging of the two species. Apart from statistical physics, our results are of relevance for fields like biology, chemistry, and crowd management, where ions, microparticles, pedestrians, or other particles are driven in opposite directions through constrictions.
19. Neutral meson tests of time-reversal symmetry invariance
OpenAIRE
Bevan, Adrian; Inguglia, Gianluca; Zoccali, Michele
2013-01-01
The laws of quantum physics can be studied under the mathematical operation T that inverts the direction of time. Strong and electromagnetic forces are known to be invariant under temporal inversion, however the weak force is not. The BaBar experiment recently exploited the quantum-correlated production of pairs of B0 mesons to show that T is a broken symmetry. Here we show that it is possible to perform a wide range of tests of quark flavour changing processes under T in order to validate th...
20. Introduction to Chiral Symmetry
Energy Technology Data Exchange (ETDEWEB)
Koch, Volker [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-05-09
These lectures are an attempt at a pedagogical introduction to the elementary concepts of chiral symmetry in nuclear physics. We will also discuss some effective chiral models such as the linear and nonlinear sigma model as well as the essential ideas of chiral perturbation theory. We will present some applications to the physics of ultrarelativistic heavy ion collisions.
1. Classical mirror symmetry
CERN Document Server
Jinzenji, Masao
2018-01-01
This book furnishes a brief introduction to classical mirror symmetry, a term that denotes the process of computing Gromov–Witten invariants of a Calabi–Yau threefold by using the Picard–Fuchs differential equation of period integrals of its mirror Calabi–Yau threefold. The book concentrates on the best-known example, the quintic hypersurface in 4-dimensional projective space, and its mirror manifold. First, there is a brief review of the process of discovery of mirror symmetry and the striking result proposed in the celebrated paper by Candelas and his collaborators. Next, some elementary results of complex manifolds and Chern classes needed for study of mirror symmetry are explained. Then the topological sigma models, the A-model and the B-model, are introduced. The classical mirror symmetry hypothesis is explained as the equivalence between the correlation function of the A-model of a quintic hyper-surface and that of the B-model of its mirror manifold. On the B-model side, the process of construct...
2. Approximate symmetries of Hamiltonians
Science.gov (United States)
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
3. Molecular symmetry and spectroscopy
CERN Document Server
Bunker, Philip; Jensen, Per
2006-01-01
The first edition, by P.R. Bunker, published in 1979, remains the sole textbook that explains the use of the molecular symmetry group in understanding high resolution molecular spectra. Since 1979 there has been considerable progress in the field and a second edition is required; the original author has been joined in its writing by Per Jensen. The material of the first edition has been reorganized and much has been added. The molecular symmetry group is now introduced early on, and the explanation of how to determine nuclear spin statistical weights has been consolidated in one chapter, after groups, symmetry groups, character tables and the Hamiltonian have been introduced. A description of the symmetry in the three-dimensional rotation group K(spatial), irreducible spherical tensor operators, and vector coupling coefficients is now included. The chapters on energy levels and selection rules contain a great deal of material that was not in the first edition (much of it was undiscovered in 1979), concerning ...
4. Introduction to chiral symmetry
International Nuclear Information System (INIS)
Koch, V.
1996-01-01
These lectures are an attempt at a pedagogical introduction to the elementary concepts of chiral symmetry in nuclear physics. Effective chiral models such as the linear and nonlinear sigma model will be discussed, as well as the essential ideas of chiral perturbation theory. Some applications to the physics of ultrarelativistic heavy ion collisions will be presented.
5. The politics of symmetry
NARCIS (Netherlands)
Pels, D.L.
While symmetry and impartiality have become ruling principles in S&TS, defining its core ideal of a 'value-free relativism', their philosophical anchorage has attracted much less discussion than the issue of how far their jurisdiction can be extended or generalized. This paper seeks to argue that
6. Symmetries in fundamental physics
CERN Document Server
Sundermeyer, Kurt
2014-01-01
Over the course of the last century it has become clear that both elementary particle physics and relativity theories are based on the notion of symmetries. These symmetries become manifest in that the "laws of nature" are invariant under spacetime transformations and/or gauge transformations. The consequences of these symmetries were analyzed as early as in 1918 by Emmy Noether on the level of action functionals. Her work did not receive due recognition for nearly half a century, but can today be understood as a recurring theme in classical mechanics, electrodynamics and special relativity, Yang-Mills type quantum field theories, and in general relativity. As a matter of fact, as shown in this monograph, many aspects of physics can be derived solely from symmetry considerations. This substantiates the statement of E.P.Wigner "... if we knew all the laws of nature, or the ultimate Law of nature, the invariance properties of these laws would not furnish us new information." Thanks to Wigner we now also underst...
7. Symmetries in fundamental physics
CERN Document Server
Sundermeyer, Kurt
2014-01-01
Over the course of the last century it has become clear that both elementary particle physics and relativity theories are based on the notion of symmetries. These symmetries become manifest in that the "laws of nature" are invariant under spacetime transformations and/or gauge transformations. The consequences of these symmetries were analyzed as early as in 1918 by Emmy Noether on the level of action functionals. Her work did not receive due recognition for nearly half a century, but can today be understood as a recurring theme in classical mechanics, electrodynamics and special relativity, Yang-Mills type quantum field theories, and in general relativity. As a matter of fact, as shown in this monograph, many aspects of physics can be derived solely from symmetry considerations. This substantiates the statement of E.P. Wigner "... if we knew all the laws of nature, or the ultimate Law of nature, the invariance properties of these laws would not furnish us new information." Thanks to Wigner we now also unders...
8. Groups and Symmetry
Resonance – Journal of Science Education, Volume 4, Issue 10. Groups and Symmetry: A Guide to Discovering Mathematics. Geetha Venkataraman. Book Review, October 1999, pp. 91-92.
9. Aspects of W∞ symmetry
International Nuclear Information System (INIS)
Sezgin, E.
1991-08-01
We review the structure of W∞ algebras, their super and topological extensions, and their contractions down to (super) w∞. Emphasis is put on the field-theoretic realizations of these algebras. We also review the structure of w∞ and W∞ gravities and comment on various applications of W∞ symmetry. (author). 42 refs
10. Non-Noetherian symmetries
International Nuclear Information System (INIS)
Hojman, Sergio A.
1996-01-01
The purpose of these lectures is to present some of the ways in which non-Noetherian symmetries are used in contemporary mathematical physics. These include, among others, obtaining conservation laws for dynamical systems, solving non-linear problems, getting alternative Lagrangians for systems of differential equations and constructing symplectic structures and Hamiltonians for dynamical systems starting from scratch
11. Detection symmetry and asymmetry
NARCIS (Netherlands)
du Buf, J.M.H.
1991-01-01
Experiments were performed on the detection symmetry and asymmetry of incremental and decremental disks, as a function of both disk diameter and duration. It was found that, for a background luminance of 300cd.m-2, thresholds of dynamic (briefly presented) foveal disks are symmetrical for all
12. From symmetries to dynamics
International Nuclear Information System (INIS)
Stern, J.
2000-01-01
The problem of a uniform description of symmetries, their dynamic disturbing and the structure of the vacuum is discussed. The role which problems of this kind played in searching for and understanding the Standard Model of elementary particles from the 1960s till now is also highlighted. (Z.J.)
13. Fields, symmetries, and quarks
International Nuclear Information System (INIS)
Mosel, U.
1989-01-01
'Fields, symmetries, and quarks' covers elements of quantum field theory, symmetries, gauge field theories and phenomenological descriptions of hadrons, with special emphasis on topics relevant to nuclear physics. It is aimed at nuclear physicists in general and at scientists who need a working knowledge of field theory, symmetry principles of elementary particles and their interactions, and the quark structure of hadrons. The book starts out with an elementary introduction to classical field theory and its quantization. As gauge field theories require a working knowledge of global symmetries in field theories, this topic is then discussed in detail. The following part is concerned with the general structure of gauge field theories and contains a thorough discussion of the still less widely known features of non-Abelian gauge field theories. Quantum Chromodynamics (QCD), which is important for the understanding of hadronic matter, is discussed in the next section together with the quark compositions of hadrons. The last two chapters give a detailed discussion of phenomenological bag models. The MIT bag is discussed so that all theoretical calculations can be followed step by step. Since in all other bag models the calculational methods and steps are essentially identical, this chapter should enable the reader to actually perform such calculations unaided. A last chapter finally discusses the topological bag models which have become quite popular over the last few years. (orig.)
14. Symmetry of priapulids (Priapulida). 1. Symmetry of adults.
Science.gov (United States)
Adrianov, A V; Malakhov, V V
2001-02-01
15. Reconstruction of the spontaneously broken gauge theory in non-commutative geometry
International Nuclear Information System (INIS)
Okumura, Y.; Morita, K.
1996-01-01
The scheme previously proposed by the present authors is modified to incorporate the strong interaction by affording the direct product internal symmetry. The authors do not need to prepare an extra discrete space for the colour gauge group responsible for the strong interaction to reconstruct the standard model and the left-right symmetric gauge model (LRSM). The approach based on non-commutative geometry offers many attractive points, such as the unified picture of the gauge and Higgs fields as the generalized connection on the discrete space M⁴ × Z_N. The standard model needs an N=2 discrete space for reconstruction in this formalism. LRSM is still alive as a model with the intermediate symmetry of the spontaneously broken SO(10) grand unified theory (GUT). An N=3 discrete space is needed for the reconstruction of LRSM to include two Higgs bosons φ and ξ, transformed as usual as (2, 2*, 0) and (1, 3, -2) under SU(2)_L × SU(2)_R × U(1)_Y, respectively. ξ is responsible for making ν_R a Majorana fermion and so explains the seesaw mechanism. Up and down quarks have different masses through the vacuum expectation value of φ.
16. Spontaneous symmetry breaking of the BRST symmetry in presence of the Gribov horizon: Renormalizability
International Nuclear Information System (INIS)
Capri, Marcio; Justo, Igor; Guimaraes, Marcelo; Sorella, Silvio; Dudal, David; Palhares, Leticia
2013-01-01
Full text: In recent years much attention has been devoted to the study of the issue of the Gribov copies and of their relevance for confinement in Yang-Mills theories. The existence of the Gribov copies is a general feature of the gauge-fixing quantization procedure, being related to the impossibility of finding a local gauge condition which picks up only one gauge configuration for each gauge orbit. As it has been shown by Gribov and Zwanziger, a partial solution of the Gribov problem in the Landau gauge can be achieved by restricting the domain of integration in the functional Euclidean integral to the first Gribov horizon. Among the various open aspects of the Gribov-Zwanziger framework, the issue of the BRST symmetry is a source of continuous investigations. In a recent work, we have been able to obtain an equivalent formulation of the Gribov-Zwanziger action which displays an exact BRST symmetry which turns out to be spontaneously broken by the restriction of the domain of integration to the Gribov horizon. In particular, the BRST operator retains the important property of being nilpotent. Moreover, it has also been shown that the Goldstone mode associated to the spontaneous breaking of the BRST symmetry is completely decoupled. The aim of the present work is to fill a gap not addressed in the previous work, namely, the renormalizability to all orders of the spontaneous symmetry breaking formulation of the Gribov-Zwanziger theory. As we shall see, the action obtained enjoys a large set of Ward identities which enables one to prove that it is, in fact, multiplicatively renormalizable to all orders. (author)
17. Proton-neutron correlations in a broken-pair model
International Nuclear Information System (INIS)
Akkermans, J.N.L.
1981-01-01
In this thesis nuclear-structure calculations are reported which were performed with the broken-pair model. The model developed here is an extension of existing broken-pair models in that it includes both proton and neutron valence pairs. The relevant formalisms are presented. In contrast to the number-non-conserving model, a proton-neutron broken-pair model is well suited to study the correlations which are produced by the proton-neutron interaction. It is shown that the proton-neutron force has large matrix elements which mix the proton with neutron broken-pair configurations. This occurs especially for J^π = 2⁺ and 3⁻ pairs. This property of the proton-neutron force is used to improve the spectra of single-closed-shell nuclei, where particle-hole excitations of the closed shell are a special case of broken-pair configurations. Using Kr and Te isotopes it is demonstrated that the proton-neutron force gives rise to correlated pair structures, which remain remarkably constant with varying nucleon numbers. (Auth.)
18. Broken-Rotor-Bar Diagnosis for Induction Motors
International Nuclear Information System (INIS)
Wang Jinjiang; Gao, Robert X; Yan Ruqiang
2011-01-01
Broken rotor bar is one of the commonly encountered induction motor faults that may cause serious damage to the motor if not detected in time. Past efforts on broken rotor bar diagnosis have focused on current signature analysis using spectral analysis and wavelet transforms. These methods require accurate slip estimation to localize the fault-related frequency. This paper presents a new approach to broken rotor bar diagnosis without slip estimation, based on the ensemble empirical mode decomposition (EEMD) and the Hilbert transform. Specifically, the Hilbert transform first extracts the envelope of the motor current signal, which contains the broken-rotor fault-related frequency information. Subsequently, the envelope signal is adaptively decomposed into a number of intrinsic mode functions (IMFs) by the EEMD algorithm. Two criteria based on energy and correlation analyses have been investigated to automate the IMF selection. Numerical and experimental studies have confirmed that the proposed approach is effective in diagnosing broken rotor bar faults for improved induction motor condition monitoring and damage assessment.
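The envelope-extraction stage of the approach summarized above can be sketched with a Hilbert transform. This is a minimal illustration only: the signal model, sampling rate, modulation depth and 4 Hz fault frequency below are assumptions for demonstration, not values from the paper, and the subsequent EEMD decomposition stage is omitted.

```python
import numpy as np

def envelope_via_hilbert(x: np.ndarray) -> np.ndarray:
    """Magnitude of the analytic signal, built with an FFT
    (equivalent to scipy.signal.hilbert followed by abs)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# Illustrative (assumed) signal model: a broken rotor bar
# amplitude-modulates the 50 Hz supply current at a low fault frequency.
fs = 1000.0                         # sampling rate in Hz, assumed
t = np.arange(0, 1.0, 1.0 / fs)
f_fault = 4.0                       # hypothetical fault frequency
current = (1.0 + 0.3 * np.cos(2 * np.pi * f_fault * t)) * np.cos(2 * np.pi * 50.0 * t)

env = envelope_via_hilbert(current)
# The fault frequency now appears directly in the envelope spectrum,
# with no slip estimation needed to locate sidebands of the supply line.
spectrum = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
print(freqs[np.argmax(spectrum)])   # → 4.0
```

Because the demodulation happens in the time domain, the dominant envelope component sits at the fault frequency itself rather than at slip-dependent sidebands of the 50 Hz line, which is the point the abstract makes about avoiding slip estimation.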
19. Spontaneous symmetry breaking in curved space-time
International Nuclear Information System (INIS)
Toms, D.J.
1982-01-01
An approach dealing with some of the complications which arise when studying spontaneous symmetry breaking beyond the tree-graph level in situations where the effective potential may not be used is discussed. These situations include quantum field theory on general curved backgrounds or in flat space-times with non-trivial topologies. Examples discussed are a twisted scalar field in S¹ × R³ and instabilities in an expanding universe. From these it is seen that the topology and curvature of a space-time may affect the stability of the vacuum state. There can be critical length scales or times beyond which symmetries may be broken or restored in certain cases. These features are not present in Minkowski space-time and so would not show up in the usual types of early universe calculations. (U.K.)
20. Symmetry breaking in the double-well hermitian matrix models
International Nuclear Information System (INIS)
Brower, R.C.; Deo, N.; Jain, S.; Tan, C.I.
1993-01-01
We study symmetry breaking in Z₂ symmetric large N matrix models. In the planar approximation for both the symmetric double-well φ⁴ model and the symmetric Penner model, we find there is an infinite family of broken symmetry solutions characterized by different sets of recursion coefficients Rₙ and Sₙ that all lead to identical free energies and eigenvalue densities. These solutions can be parameterized by an arbitrary angle θ(x), for each value of x = n/N < 1. In the double scaling limit, this class reduces to a smaller family of solutions with distinct free energies already at the torus level. For the double-well φ⁴ theory the double scaling string equations are parameterized by a conserved angular momentum parameter in the range 0 ≤ l < ∞ and a single arbitrary U(1) phase angle. (orig.)
1. Symmetry breaking in the double-well hermitian matrix models
CERN Document Server
Brower, R C; Jain, S; Tan, C I; Brower, Richard C.; Deo, Nevidita; Jain, Sanjay; Tan, Chung-I
1993-01-01
We study symmetry breaking in $Z_2$ symmetric large $N$ matrix models. In the planar approximation for both the symmetric double-well $\phi^4$ model and the symmetric Penner model, we find there is an infinite family of broken symmetry solutions characterized by different sets of recursion coefficients $R_n$ and $S_n$ that all lead to identical free energies and eigenvalue densities. These solutions can be parameterized by an arbitrary angle $\theta(x)$, for each value of $x = n/N < 1$. In the double scaling limit, this class reduces to a smaller family of solutions with distinct free energies already at the torus level. For the double-well $\phi^4$ theory the double scaling string equations are parameterized by a conserved angular momentum parameter in the range $0 \le l < \infty$ and a single arbitrary $U(1)$ phase angle.
2. Radiative breaking scenario for the GUT gauge symmetry
International Nuclear Information System (INIS)
Fukuyama, T.; Kikuchi, T.
2006-01-01
The origin of the grand unified theory (GUT) scale from the top-down perspective is explored. The GUT gauge symmetry is broken by the renormalization group effects, which is an extension of the radiative electroweak symmetry breaking scenario to the GUT models. That is, in the same way as the origin of the electroweak scale, the GUT scale is generated from the Planck scale through the radiative corrections to the soft supersymmetry breaking mass parameters. This mechanism is applied to a perturbative SO(10) GUT model, recently proposed by us. In the SO(10) model, the relation between the GUT scale and the Planck scale can naturally be realized by using order-one coupling constants. (orig.)
3. Symmetries in physics and harmonics
International Nuclear Information System (INIS)
Kolk, D.
2006-01-01
In this book the symmetries of elementary particles are described in relation to the rules of harmonics in music. The selection rules are described in connection with harmonic intervals. Symmetry breaking is also considered in this framework. (HSI)
4. Spontaneous symmetry breaking of (1+1)-dimensional φ4 theory in light-front field theory
International Nuclear Information System (INIS)
Bender, C.M.; Pinsky, S.; van de Sande, B.
1993-01-01
We study spontaneous symmetry breaking in (1+1)-dimensional φ⁴ theory using the light-front formulation of field theory. Since the physical vacuum is always the same as the perturbative vacuum in light-front field theory, the fields must develop a vacuum expectation value through the zero-mode components of the field. We solve the nonlinear operator equation for the zero mode in the one-mode approximation. We find that spontaneous symmetry breaking occurs at λ_critical = 4π(3+√3)μ², which is consistent with the value λ_critical = 54.27μ² obtained in the equal-time theory. We calculate the vacuum expectation value as a function of the coupling constant in the broken phase both numerically and analytically using the δ expansion. We find two equivalent broken phases. Finally we show that the energy levels of the system have the expected behavior for the broken phase.
5. Unified Symmetry of Hamilton Systems
International Nuclear Information System (INIS)
Xu Xuejun; Qin Maochang; Mei Fengxiang
2005-01-01
The definition and the criterion of a unified symmetry for a Hamilton system are presented. The sufficient condition under which the Noether symmetry is a unified symmetry for the system is given. A new conserved quantity, as well as the Noether conserved quantity and the Hojman conserved quantity, deduced from the unified symmetry, is obtained. An example is finally given to illustrate the application of the results.
6. Quantum symmetries in particle interactions
International Nuclear Information System (INIS)
Shirkov, D.V.
1983-01-01
The concept of a quantum symmetry is introduced as a symmetry in the formulation of which quantum representations and specific quantum notions are used essentially. Three quantum symmetry principles are discussed: the principle of renormalizability (possibly super-renormalizability), the principle of local gauge symmetry, and the principle of supersymmetry. It is shown that these principles play a deterministic role in the development of quantum field theory. Historically their use has led to ever stronger restrictions on the interaction mechanism of quantum fields
7. Symmetry and topology in evolution
International Nuclear Information System (INIS)
Lukacs, B.; Berczi, S.; Molnar, I.; Paal, G.
1991-10-01
This volume contains papers of an interdisciplinary symposium on evolution. The aim of this symposium, held in Budapest, Hungary, 28-29 May 1991, was to clarify the role of symmetry and topology at different levels of the evolutionary processes. 21 papers were presented; their topics included evolution of the Universe, symmetry of elementary particles, asymmetry of the Earth, symmetry and asymmetry of biomolecules, symmetry and topology of living objects, human asymmetry, etc. (R.P.)
8. A Longitudinal Study of the Link Between Broken Homes and Criminality.
Science.gov (United States)
McCord, Joan
Possible explanatory theories of the relationship between broken homes and crime include the following: (1) broken homes lead to crimes if there are "catalytic agents"; (2) broken homes lead to crime if these homes fail to provide certain conditions which promote socialization; and (3) broken homes and crime have a common source, but not…
9. Charge independence and charge symmetry
Energy Technology Data Exchange (ETDEWEB)
Miller, G A [Washington Univ., Seattle, WA (United States). Dept. of Physics; van Oers, W T.H. [Manitoba Univ., Winnipeg, MB (Canada). Dept. of Physics; [TRIUMF, Vancouver, BC (Canada)
1994-09-01
Charge independence and charge symmetry are approximate symmetries of nature, violated by the perturbing effects of the mass difference between up and down quarks and by electromagnetic interactions. The observations of the symmetry breaking effects in nuclear and particle physics and the implications of those effects are reviewed. (author). 145 refs., 3 tabs., 11 figs.
10. Charge independence and charge symmetry
International Nuclear Information System (INIS)
Miller, G.A.
1994-09-01
Charge independence and charge symmetry are approximate symmetries of nature, violated by the perturbing effects of the mass difference between up and down quarks and by electromagnetic interactions. The observations of the symmetry breaking effects in nuclear and particle physics and the implications of those effects are reviewed. (author). 145 refs., 3 tabs., 11 figs
11. Symmetry energy in nuclear surface
International Nuclear Information System (INIS)
Danielewicz, P.; Lee, Jenny
2009-01-01
Interplay between the dependence of symmetry energy on density and the variation of nucleonic densities across the nuclear surface is discussed. That interplay gives rise to the mass dependence of the symmetry coefficient in an energy formula. Charge symmetry of the nuclear interactions allows one to introduce isoscalar and isovector densities that are approximately independent of the magnitude of neutron-proton asymmetry. (author)
12. Emergence of Symmetries from Entanglement
CERN Multimedia
CERN. Geneva
2016-01-01
Maximal entanglement appears to be a key ingredient for the emergence of symmetries. We first illustrate this phenomenon using two examples: the emergence of conformal symmetry in condensed matter systems and the relation of tensor networks to holography. We further present a Principle of Maximal Entanglement that seems to dictate to a large extent the structure of gauge symmetry.
13. Group analysis and renormgroup symmetries
International Nuclear Information System (INIS)
Kovalev, V.F.; Pustovalov, V.V.; Shirkov, D.V.
1996-01-01
An original regular approach to constructing special type symmetries for boundary-value problems, namely renormgroup symmetries, is presented. Different methods of calculating these symmetries based on modern group analysis are described. An application of the approach to boundary value problems is demonstrated with the help of a simple mathematical model. 35 refs
14. Management of broken instrument by file bypass technique
Directory of Open Access Journals (Sweden)
Sultana Parveen
2017-02-01
Different devices and techniques have been developed to retrieve fractured instruments during endodontic procedures. This case report describes the management of an instrument that was accidentally broken during cleaning and shaping of the root canal of a right 2nd molar tooth. A #25 stainless steel K-file was separated in the mesiobuccal canal of the treated tooth. At first, a radiograph was taken to confirm the level of separation of the instrument. The instrument was found to be separated at the apical third of the mesial canal, and the file bypass technique was then performed. A calcium hydroxide dressing was given for 7 days, followed by obturation with a gutta-percha cone and zinc oxide eugenol sealer in the lateral condensation technique. It can be concluded that the bypass technique can be considered a simple and effective technique for the management of a broken instrument in the root canal.
15. Dark discrete gauge symmetries
International Nuclear Information System (INIS)
Batell, Brian
2011-01-01
We investigate scenarios in which dark matter is stabilized by an Abelian Z_N discrete gauge symmetry. Models are surveyed according to symmetries and matter content. Multicomponent dark matter arises when N is not prime and Z_N contains one or more subgroups. The dark sector interacts with the visible sector through the renormalizable kinetic mixing and Higgs portal operators, and we highlight the basic phenomenology in these scenarios. In particular, multiple species of dark matter can lead to an unconventional nuclear recoil spectrum in direct detection experiments, while the presence of new light states in the dark sector can dramatically affect the decays of the Higgs at the Tevatron and LHC, thus providing a window into the gauge origin of the stability of dark matter.
16. Symmetries and microscopic physics
International Nuclear Information System (INIS)
Blaizot, J.P.
1997-01-01
This book is based on a course of lectures devoted to the applications of group theory to quantum physics. The purpose is to give students a precise idea of general principles involving the concept of symmetry and to present practical methods used to calculate physical properties derived from symmetries. The first chapter is an introduction to the main results of group theory; two chapters highlight principles and methods concerning geometrical transformations in the space of states, state degeneracy, and perturbation theory. The last four chapters investigate the applications of these methods to atomic physics, nuclear structure and elementary particles. A chapter is devoted to the hydrogen atom and another to isospin. Numerous exercises and problems, some with their corrections, are proposed. (A.C.)
17. Asymmetry, Symmetry and Beauty
Directory of Open Access Journals (Sweden)
Abbe R. Kopra
2010-07-01
Asymmetry and symmetry coexist in natural and human processes. The vital role of symmetry in art has been well demonstrated. This article highlights the complementary role of asymmetry. Further, we show that the interaction of asymmetric action (recursion) and symmetric opposition (sinusoidal waves) is instrumental in generating creative features: relatively low entropy, temporal complexity, novelty (less recurrence in the data than in randomized copies) and complex frequency composition. These features define Bios, a pattern found in musical compositions and in poetry, except for recurrence instead of novelty. Bios is a common pattern in many natural and human processes (quantum processes, the expansion of the universe, gravitational waves, cosmic microwave background radiation, DNA, physiological processes, animal and human populations, and economic time series). The reduction in entropy is significant, as it reveals creativity and contradicts the standard claim of unavoidable decay towards disorder. Artistic creations capture fundamental features of the world.
18. Strong Electroweak Symmetry Breaking
CERN Document Server
Grinstein, Benjamin
2011-01-01
Models of spontaneous breaking of electroweak symmetry by a strong interaction do not have a fine-tuning/hierarchy problem. They are conceptually elegant and use the only mechanism of spontaneous breaking of a gauge symmetry that is known to occur in nature. The simplest model, minimal technicolor with extended technicolor interactions, is appealing because one can calculate by scaling up from QCD. But it is ruled out on many counts: inappropriately low quark and lepton masses (or excessive FCNC), bad electroweak data fits, light scalar and vector states, etc. However, nature may not choose the minimal model and then we are stuck: except possibly through lattice simulations, we are unable to compute and test the models. In the LHC era it therefore makes sense to abandon specific models (of strong EW breaking) and concentrate on generic features that may indicate discovery. The Technicolor Straw Man is not a model but a parametrized search strategy inspired by a remarkable generic feature of walking technicolor,...
19. Symmetry rules. How science and nature are founded on symmetry
Energy Technology Data Exchange (ETDEWEB)
Rosen, J.
2008-07-01
When we use science to describe and understand the world around us, we are in essence grasping nature through symmetry. In fact, modern theoretical physics suggests that symmetry is a, if not the, foundational principle of nature. Emphasizing the concepts, this book leads the reader coherently and comprehensively into the fertile field of symmetry and its applications. Among the most important applications considered are the fundamental forces of nature and the Universe. It is shown that the Universe cannot possess exact symmetry, which is a principle of fundamental significance. Curie's principle - which states that the symmetry of the effect is at least that of the cause - features prominently. An introduction to group theory, the mathematical language of symmetry, is included. This book will convince all interested readers of the importance of symmetry in science. Furthermore, it will serve as valuable background reading for all students in the physical sciences. (orig.)
20. Symmetry rules How science and nature are founded on symmetry
CERN Document Server
Rosen, Joe
2008-01-01
When we use science to describe and understand the world around us, we are in essence grasping nature through symmetry. In fact, modern theoretical physics suggests that symmetry is a, if not the, foundational principle of nature. Emphasizing the concepts, this book leads the reader coherently and comprehensively into the fertile field of symmetry and its applications. Among the most important applications considered are the fundamental forces of nature and the Universe. It is shown that the Universe cannot possess exact symmetry, which is a principle of fundamental significance. Curie's principle - which states that the symmetry of the effect is at least that of the cause - features prominently. An introduction to group theory, the mathematical language of symmetry, is included. This book will convince all interested readers of the importance of symmetry in science. Furthermore, it will serve as valuable background reading for all students in the physical sciences.
1. Symmetry and quantum mechanics
CERN Document Server
Corry, Scott
2016-01-01
This book offers an introduction to quantum mechanics for professionals, students, and others in the field of mathematics who have a minimal background in physics with an understanding of linear algebra and group theory. It covers such topics as Lie groups, algebras and their representations, and analysis (Hilbert space, distributions, the spectral theorem, and the Stone-von Neumann theorem). The book emphasizes the role of symmetry and is useful to physicists as it provides a mathematical introduction to the topic.
Science.gov (United States)
Jorgensen, Jamie
2001-04-01
This talk will discuss "Project Petrov," which is designed to investigate gravitational fields with symmetry. Project Petrov represents a collaboration involving physicists and mathematicians, as well as graduate and undergraduate math and physics students. An overview of Project Petrov will be given, with an emphasis on students' contributions, including software to classify and generate Lie algebras, to classify isometry groups, and to compute the isometry group of a given metric.
3. Symmetry breaking and chaos
International Nuclear Information System (INIS)
Bunakov, V.E.; Ivanov, I.B.
1999-01-01
Connections between the symmetries of Hamiltonian systems in classical and quantum mechanics, on one hand, and their regularity or chaoticity, on the other hand, are considered. The quantum-chaoticity criterion that was proposed previously and which was borrowed from the theory of compound-nucleus resonances is used to analyze the quantum diamagnetic Kepler problem - that is, the motion of a spinless charged particle in a Coulomb and a uniform magnetic field
4. Symmetry and statistics
International Nuclear Information System (INIS)
French, J.B.
1974-01-01
The concepts of statistical behavior and symmetry are presented from the point of view of many body spectroscopy. Remarks are made on methods for the evaluation of moments, particularly widths, for the purpose of giving a feeling for the types of mathematical structures encountered. Applications involving ground state energies, spectra, and level densities are discussed. The extent to which Hamiltonian eigenstates belong to irreducible representations is mentioned. (4 figures, 1 table) (U.S.)
5. Symmetry in music
Energy Technology Data Exchange (ETDEWEB)
Herrero, O F, E-mail: o.f.herrero@hotmail.co [Conservatorio Superior de Musica ' Eduardo Martinez Torner' Corrada del Obispo s/n 33003 - Oviedo - Asturias (Spain)
2010-06-01
Music and Physics are very close because of the symmetry that appears in music. A periodic wave is what music really is, and there is a field of Physics devoted to the study of waves. The different musical scales are the basis of all kinds of music. This article tries to show how these musical scales are built, how consonance is the basis of many of them and how symmetric they are.
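The equal-tempered scale the article refers to can be generated in a few lines; a minimal sketch (my own illustration, assuming 12-tone equal temperament with A4 = 440 Hz):

```python
# 12-tone equal temperament: every semitone scales frequency by 2**(1/12),
# so the scale is symmetric under transposition and exactly doubles per octave.
A4 = 440.0
scale = [A4 * 2.0 ** (n / 12.0) for n in range(13)]  # A4 up to A5

assert abs(scale[12] - 2.0 * A4) < 1e-9  # octave symmetry: frequency doubles
ratios = [scale[n + 1] / scale[n] for n in range(12)]
assert all(abs(r - 2.0 ** (1.0 / 12.0)) < 1e-12 for r in ratios)  # uniform step
```

The uniform semitone ratio is exactly the translational symmetry of the scale: transposing by any number of semitones maps the scale onto itself.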
6. Lie symmetries and superintegrability
International Nuclear Information System (INIS)
Nucci, M C; Post, S
2012-01-01
We show that a known superintegrable system in two-dimensional real Euclidean space (Post and Winternitz 2011 J. Phys. A: Math. Theor. 44 162001) can be transformed into a linear third-order equation: consequently we construct many autonomous integrals—polynomials up to order 18—for the same system. The reduction method and the connection between Lie symmetries and Jacobi last multiplier are used.
7. Symmetry in music
International Nuclear Information System (INIS)
Herrero, O F
2010-01-01
Music and Physics are very close because of the symmetry that appears in music. A periodic wave is what music really is, and there is a field of Physics devoted to the study of waves. The different musical scales are the basis of all kinds of music. This article tries to show how these musical scales are built, how consonance is the basis of many of them and how symmetric they are.
8. Broken Stone Marker Construction (碎石桩施工)
Institute of Scientific and Technical Information of China (English)
高瑞娥
2009-01-01
With the acceleration of expressway construction in China, stone columns (broken-stone piles) have been introduced into expressway subgrade design and construction to treat soft soil foundations. Drawing on the use of stone columns for soft-ground treatment, this paper briefly discusses the stone-column construction process and inspection methods.
9. Dynamical symmetry restoration for a higher-derivative four-fermion model in an external electromagnetic field
International Nuclear Information System (INIS)
Elizalde, E.; Gavrilov, S.P.; Shil'nov, Yu.I.
2000-01-01
A four-fermion model with additional higher-derivative terms is investigated in an external electromagnetic field. The effective potential in the leading order of large-N expansion is calculated in external constant magnetic and electric fields. It is shown that, in contrast to the former results concerning the universal character of 'magnetic catalysis' in dynamical symmetry breaking, in the present higher-derivative model the magnetic field restores chiral symmetry broken initially on the tree level. Numerical results describing a second-order phase transition that accompanies the symmetry restoration at the quantum level are presented. (author)
10. General conditions for the PT symmetry of supersymmetric partner potentials
International Nuclear Information System (INIS)
Levai, G.
2004-01-01
Complete text of publication follows. A common feature of symmetries of quantum systems is that they restrict the form of the Hamiltonian, and consequently they also influence the structure of the energy spectrum. This is also the case with two symmetry concepts that are typically applied in non-relativistic quantum mechanics: supersymmetric quantum mechanics (SUSYQM) and PT symmetry. SUSYQM connects one-dimensional potentials pairwise via the relation V^(±)(x) = W^2(x) ± dW/dx + ε, where ε is the factorization energy, V^(-)(x) and V^(+)(x) are the SUSY partner potentials, while W(x) is the superpotential. In the simplest case, when supersymmetry is unbroken, W(x) is defined in terms of the ground-state wavefunction of V^(-)(x) as W(x) = -(d/dx) ln ψ_0^(-)(x), and the factorization energy is chosen as ε = E_0^(-). Under these conditions the SUSY partner potentials possess the same energy levels, except that E_0^(-) is missing from the spectrum of V^(+)(x), and the degenerate levels are connected by the SUSY ladder operators A = d/dx + W(x) and A† = -d/dx + W(x). The PT symmetry of a Hamiltonian prescribes its invariance under simultaneous space and time inversion, which boils down to the condition V(x) = V*(-x) in the case of one-dimensional potentials. The unusual feature of this new symmetry concept is that PT-symmetric potentials are complex in general; nevertheless, they possess real energy eigenvalues, unless PT symmetry is spontaneously broken, in which case the energy spectrum consists of complex-conjugate energy pairs. The interplay of these two symmetry concepts has been analyzed in a number of works, and it has been found that when V^(-)(x) has unbroken PT symmetry, then the same applies to V^(+)(x), while the spontaneous breakdown of the PT symmetry of V^(-)(x) implies the manifest breakdown of the PT symmetry of V^(+)(x). The factorization energy ε was found to be real in the former case, and imaginary in the latter one. The examples
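The SUSYQM partner-spectrum relation summarized in this abstract is easy to verify numerically; the following sketch (my own illustration, not from the publication) uses the harmonic-oscillator superpotential W(x) = x, for which the partner spectra coincide except for the missing ground level of V^(+):

```python
import numpy as np

# SUSYQM check for superpotential W(x) = x (units hbar = 2m = 1):
# V_minus = W^2 - W' = x^2 - 1 with spectrum E_n = 2n, and
# V_plus  = W^2 + W' = x^2 + 1 with spectrum E_n = 2n + 2, so the partner
# spectra agree except for the missing E_0^(-) = 0 level.
N, L = 1200, 8.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

def spectrum(V):
    """Eigenvalues of -d^2/dx^2 + V by finite differences (Dirichlet boundaries)."""
    H = (np.diag(2.0 / h**2 + V)
         + np.diag(-np.ones(N - 1) / h**2, 1)
         + np.diag(-np.ones(N - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)

Em = spectrum(x**2 - 1.0)
Ep = spectrum(x**2 + 1.0)

assert np.allclose(Em[:4], [0.0, 2.0, 4.0, 6.0], atol=5e-3)  # E_n^(-) = 2n
assert np.allclose(Em[1:4], Ep[:3], atol=1e-3)  # degenerate partner levels
assert abs(Em[0]) < 1e-3  # unbroken SUSY: E_0^(-) = 0, absent from V_plus
```

The same check works for any superpotential with unbroken supersymmetry, since the ladder operators A and A† map the excited states of one partner onto the other.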
11. Symmetry methods for option pricing
Science.gov (United States)
Davison, A. H.; Mamba, S.
2017-06-01
We obtain a solution of the Black-Scholes equation with a non-smooth boundary condition using symmetry methods. The Black-Scholes equation along with its boundary condition are first transformed into the one dimensional heat equation and an initial condition respectively. We then find an appropriate general symmetry generator of the heat equation using symmetries and the fundamental solution of the heat equation. The symmetry generator is chosen such that the boundary condition is left invariant; the symmetry can be used to solve the heat equation and hence the Black-Scholes equation.
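The fundamental-solution route sketched in this abstract should reproduce the classical closed-form price; as a cross-check, here is a minimal Black-Scholes sketch (standard textbook formulas, not code from the paper):

```python
import math

# Closed-form Black-Scholes price of a European call (no dividends),
# i.e. the solution the heat-kernel/fundamental-solution approach recovers.
def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_put(S, K, r, sigma, T):
    # Put-call parity: C - P = S - K * exp(-r T)
    return bs_call(S, K, r, sigma, T) - S + K * math.exp(-r * T)

c = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
assert abs(c - 10.4506) < 1e-3  # classic textbook value for these parameters
assert bs_put(100.0, 100.0, 0.05, 0.2, 1.0) > 0.0
```

Any symmetry-generated solution of the transformed heat equation must agree with this closed form once mapped back to the original Black-Scholes variables.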
12. Discrete gauge symmetries in discrete MSSM-like orientifolds
International Nuclear Information System (INIS)
Ibáñez, L.E.; Schellekens, A.N.; Uranga, A.M.
2012-01-01
Motivated by the necessity of discrete Z_N symmetries in the MSSM to ensure baryon stability, we study the origin of discrete gauge symmetries from open string sector U(1)'s in orientifolds based on rational conformal field theory. By means of an explicit construction, we find an integral basis for the couplings of axions and U(1) factors for all simple current MIPFs and orientifolds of all 168 Gepner models, a total of 32 990 distinct cases. We discuss how the presence of discrete symmetries surviving as a subgroup of broken U(1)'s can be derived using this basis. We apply this procedure to models with MSSM chiral spectrum, concretely to all known U(3)×U(2)×U(1)×U(1) and U(3)×Sp(2)×U(1)×U(1) configurations with chiral bi-fundamentals, but no chiral tensors, as well as some SU(5) GUT models. We find examples of models with Z_2 (R-parity) and Z_3 symmetries that forbid certain B and/or L violating MSSM couplings. Their presence is however relatively rare, at the level of a few percent of all cases.
13. Is the standard model saved asymptotically by conformal symmetry?
Science.gov (United States)
Gorsky, A.; Mironov, A.; Morozov, A.; Tomaras, T. N.
2015-03-01
It is pointed out that the top-quark and Higgs masses and the Higgs VEV with great accuracy satisfy the relations 4m_H^2 = 2m_T^2 = v^2, which are very special and reminiscent of analogous ones at Argyres-Douglas points with enhanced conformal symmetry. Furthermore, the RG evolution of the corresponding Higgs self-interaction and Yukawa couplings λ(0) = 1/8 and y(0) = 1 leads to the free-field stable point in the pure scalar sector at the Planck scale, also suggesting enhanced conformal symmetry. Thus, it is conceivable that the Standard Model is the low-energy limit of a distinct special theory with (super?) conformal symmetry at the Planck scale. In the context of such a "scenario," one may further speculate that the Higgs particle is the Goldstone boson of (partly) spontaneously broken conformal symmetry. This would simultaneously resolve the hierarchy and Landau pole problems in the scalar sector and would provide a nearly flat potential with two almost degenerate minima at the electroweak and Planck scales.
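How tightly the quoted relations hold can be seen with round measured values (m_H ≈ 125 GeV, m_T ≈ 173 GeV, v ≈ 246 GeV; approximate values assumed here, not quoted from the paper):

```python
# Numerical check of 4 m_H^2 = 2 m_T^2 = v^2 with round measured values (GeV).
m_H, m_T, v = 125.0, 173.0, 246.0  # approximate, assumed

r1 = 4.0 * m_H**2 / v**2  # (2 m_H / v)^2      ~ (250 / 246)^2
r2 = 2.0 * m_T**2 / v**2  # (sqrt(2) m_T / v)^2 ~ (244.7 / 246)^2

assert abs(r1 - 1.0) < 0.05  # first relation holds to a few percent
assert abs(r2 - 1.0) < 0.05  # second relation holds to about one percent
```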
14. Tracing symmetries and their breakdown through phases of heterotic (2,2) compactifications
Energy Technology Data Exchange (ETDEWEB)
Blaszczyk, Michael [Johannes-Gutenberg-Universität,Staudingerweg 7, 55099 Mainz (Germany); Oehlmann, Paul-Konstantin [Bethe Center for Theoretical Physics, Physikalisches Institut der Universität Bonn,Nussallee 12, 53115 Bonn (Germany)
2016-04-12
We are considering the class of heterotic N=(2,2) Landau-Ginzburg orbifolds with 9 fields corresponding to A_1^9 Gepner models. We classify all of its Abelian discrete quotients and obtain 152 inequivalent models closed under mirror symmetry with N=1,2 and 4 supersymmetry in 4D. We compute the full massless matter spectrum at the Fermat locus and find a universal relation satisfied by all models. In addition we give prescriptions of how to compute all quantum numbers of the 4D states including their discrete R-symmetries. Using mirror symmetry of rigid geometries we describe orbifold and smooth Calabi-Yau phases as deformations away from the Landau-Ginzburg Fermat locus in two explicit examples. We match the non-Fermat deformations to the 4D Higgs mechanism and study the conservation of R-symmetries. The first example is a ℤ_3 orbifold on an E_6 lattice where the R-symmetry is preserved. Due to a permutation symmetry of blow-up and torus Kähler parameters the R-symmetry stays conserved also in the smooth Calabi-Yau phase. In the second example the R-symmetry gets broken once we deform to the geometric ℤ_3×ℤ_{3,free} orbifold regime.
15. Tracing symmetries and their breakdown through phases of heterotic (2,2) compactifications
International Nuclear Information System (INIS)
Blaszczyk, Michael; Oehlmann, Paul-Konstantin
2016-01-01
We are considering the class of heterotic N=(2,2) Landau-Ginzburg orbifolds with 9 fields corresponding to A_1^9 Gepner models. We classify all of its Abelian discrete quotients and obtain 152 inequivalent models closed under mirror symmetry with N=1,2 and 4 supersymmetry in 4D. We compute the full massless matter spectrum at the Fermat locus and find a universal relation satisfied by all models. In addition we give prescriptions of how to compute all quantum numbers of the 4D states including their discrete R-symmetries. Using mirror symmetry of rigid geometries we describe orbifold and smooth Calabi-Yau phases as deformations away from the Landau-Ginzburg Fermat locus in two explicit examples. We match the non-Fermat deformations to the 4D Higgs mechanism and study the conservation of R-symmetries. The first example is a ℤ_3 orbifold on an E_6 lattice where the R-symmetry is preserved. Due to a permutation symmetry of blow-up and torus Kähler parameters the R-symmetry stays conserved also in the smooth Calabi-Yau phase. In the second example the R-symmetry gets broken once we deform to the geometric ℤ_3×ℤ_{3,free} orbifold regime.
16. Tracing symmetries and their breakdown through phases of heterotic (2,2) compactifications
Science.gov (United States)
Blaszczyk, Michael; Oehlmann, Paul-Konstantin
2016-04-01
We are considering the class of heterotic N=(2,2) Landau-Ginzburg orbifolds with 9 fields corresponding to A_1^9 Gepner models. We classify all of its Abelian discrete quotients and obtain 152 inequivalent models closed under mirror symmetry with N=1, 2 and 4 supersymmetry in 4D. We compute the full massless matter spectrum at the Fermat locus and find a universal relation satisfied by all models. In addition we give prescriptions of how to compute all quantum numbers of the 4D states including their discrete R-symmetries. Using mirror symmetry of rigid geometries we describe orbifold and smooth Calabi-Yau phases as deformations away from the Landau-Ginzburg Fermat locus in two explicit examples. We match the non-Fermat deformations to the 4D Higgs mechanism and study the conservation of R-symmetries. The first example is a Z_3 orbifold on an E_6 lattice where the R-symmetry is preserved. Due to a permutation symmetry of blow-up and torus Kähler parameters the R-symmetry stays conserved also in the smooth Calabi-Yau phase. In the second example the R-symmetry gets broken once we deform to the geometric Z_3 × Z_{3,free} orbifold regime.
17. Symmetry-breaking solutions of the Hubbard model
International Nuclear Information System (INIS)
Kuzemsky, A.L.; )
1998-10-01
The problem of finding the ferromagnetic and antiferromagnetic ''broken symmetry'' solutions of the correlated lattice fermion models beyond the mean-field approximation has been investigated. The calculation of the quasiparticle excitation spectrum with damping for the single- and multi-orbital Hubbard model has been performed in the framework of the equation-of-motion method for two-time temperature Green's Functions within a non-perturbative approach. A unified scheme for the construction of Generalised Mean Fields (elastic scattering corrections) and self-energy (inelastic scattering) in terms of Dyson equation has been generalised in order to include the presence of the ''source fields''. The damping of quasiparticles, which reflects the interaction of the single-particle and collective degrees of freedom has been calculated. The ''broken symmetry'' dynamical solutions of the Hubbard model, which correspond to various types of itinerant antiferromagnetism have been discussed. This approach complements previous studies and clarifies the nature of the concepts of itinerant antiferromagnetism and ''spin-aligning field'' of correlated lattice fermions. (author)
18. Electroweak symmetry breaking studies at the pp colliders of the 1990's and beyond
International Nuclear Information System (INIS)
Chanowitz, M.S.
1989-01-01
Within the conventional framework of a spontaneously broken gauge theory, general principles establish that the electroweak symmetry is broken by a new force that may be weak with associated new quanta below 1 TeV or strong with quanta above 1 TeV. The SSC parameters, √s = 40 TeV and L = 10^33 cm^-2 s^-1, define a minimal facility with assured capability to observe the signals of symmetry breaking by a strong force above 1 TeV. Foreseeable luminosity upgrades would not be able to compensate a much lower collider energy for these physics signals. If the strong WW scattering signal were seen at the SSC in the 1990's it would provide a clear imperative for a collider with the physics reach of the ELOISATRON to begin detailed studies of the new force and quanta early in the next century. 35 refs., 7 figs., 4 tabs
19. “Electroweak symmetry breaking: to Higgs or not to Higgs” (3/3)
CERN Multimedia
CERN. Geneva
2009-01-01
How do elementary particles acquire their mass? What is making the photon different from the Z boson? In a word: How is electroweak symmetry broken? This is one of the pressing questions in particle physics that the LHC will answer soon. The aim of these lectures is, after briefly introducing SM physics and the conventional Higgs mechanism, to give a survey of recent attempts to go beyond a simple elementary Higgs. In particular, I will describe composite models (where the Higgs boson emerges from a strongly-interacting sector) and Higgsless models. Distinctive signatures at the LHC are expected and will reveal the true nature of the electroweak symmetry sector.
20. “Electroweak symmetry breaking: to Higgs or not to Higgs” (2/3)
CERN Multimedia
CERN. Geneva
2009-01-01
How do elementary particles acquire their mass? What is making the photon different from the Z boson? In a word: How is electroweak symmetry broken? This is one of the pressing questions in particle physics that the LHC will answer soon. The aim of these lectures is, after briefly introducing SM physics and the conventional Higgs mechanism, to give a survey of recent attempts to go beyond a simple elementary Higgs. In particular, I will describe composite models (where the Higgs boson emerges from a strongly-interacting sector) and Higgsless models. Distinctive signatures at the LHC are expected and will reveal the true nature of the electroweak symmetry sector.
1. “Electroweak symmetry breaking: to Higgs or not to Higgs” (1/3)
CERN Multimedia
CERN. Geneva
2009-01-01
How do elementary particles acquire their mass? What is making the photon different from the Z boson? In a word: How is electroweak symmetry broken? This is one of the pressing questions in particle physics that the LHC will answer soon. The aim of these lectures is, after briefly introducing SM physics and the conventional Higgs mechanism, to give a survey of recent attempts to go beyond a simple elementary Higgs. In particular, I will describe composite models (where the Higgs boson emerges from a strongly-interacting sector) and Higgsless models. Distinctive signatures at the LHC are expected and will reveal the true nature of the electroweak symmetry sector.
2. Chiral dynamics and heavy quark symmetry in a solvable toy field-theoretic model
International Nuclear Information System (INIS)
Bardeen, W.A.; Hill, C.T.
1994-01-01
We study a solvable QCD-like toy theory, a generalization of the Nambu--Jona-Lasinio model, which implements chiral symmetries of light quarks and heavy quark symmetry. The chiral symmetric and chiral broken phases can be dynamically tuned. This implies a parity-doubled heavy-light meson system, corresponding to a (0^-, 1^-) multiplet and a (0^+, 1^+) heavy spin multiplet. Consequently the mass difference of the two multiplets is given by a Goldberger-Treiman relation and g_A is found to be small. The Isgur-Wise function ξ(w), the decay constant f_B, and other observables are studied
3. Factorizable S-matrix and symmetry operator with toroidal rapidity values
International Nuclear Information System (INIS)
Hu Zhanning; Hou Boyu
1992-01-01
The factorizable S-matrix is constructed, together with a symmetry operator that commutes with the S-matrix and has a new form of 'co-product', whose elements depend on the parameters defining the toroidal rapidity surface. By defining a new operator that commutes with the symmetry operator, the Yang-Baxter equation can be obtained. Finally, the relation between the broken Z_N-symmetric model and the chiral Potts model is expressed explicitly in the self-dual genus-zero limit
4. Implications of horizontal symmetries on baryon number violation in supersymmetric models
International Nuclear Information System (INIS)
Ben-Hamo, V.; Nir, Y.
1994-08-01
The smallness of the quark and lepton parameters and the hierarchy between them could be the result of selection rules due to a horizontal symmetry broken by a small parameter. The same selection rules apply to baryon number violating terms. Consequently, the problem of baryon number violation in supersymmetry may be solved naturally, without invoking any specially designed extra symmetry. This mechanism is efficient enough even for low-scale flavor physics. Proton decay is likely to be dominated by the modes K^+ ν̄_i or K^0 μ^+ (e^+), and may proceed at observable rates. (authors). 15 refs
5. Symmetry-adapted HAM/3 method and its application to some symmetric molecules
Directory of Open Access Journals (Sweden)
Narita Susumu
2004-01-01
The semiempirical HAM/3 method developed by Lindholm and coworkers about two decades ago has been known to have a deficiency that splits energies for the degenerate energy states. We have recently proposed a group-theoretical approach to remedy the internally broken symmetry of the HAM/3 Hamiltonians. In this paper, we present some results of its application to various small molecules with symmetry T_d, C_3v, and D_3h. The proposed scheme gives correct degeneracy for these molecules.
6. Symmetry-adapted HAM/3 method and its application to some symmetric molecules
OpenAIRE
Narita, Susumu; Shibuya, Tai-ichi; Fujiwara, Fred Y.; Takahata, Yuji
2004-01-01
The semiempirical HAM/3 method developed by Lindholm and coworkers about two decades ago has been known to have a deficiency that splits energies for the degenerate energy states. We have recently proposed a group-theoretical approach to remedy the internally broken symmetry of the HAM/3 Hamiltonians. In this paper, we present some results of its application to various small molecules with symmetry T_d, C_3v, and D_3h. The proposed scheme gives correct degeneracy for these molecules. The method...
7. Children and Broken Homes: Sources for the Teacher.
Science.gov (United States)
Bentley, Eloise
The depreciating attitude toward family life in our society has intensified in the past few years. It is not unusual to find substantial numbers of children in a first grade classroom who live in broken homes. Divorce is the answer for more young couples than ever before, and as a result the children involved must face growing up with a parent…
8. Enticing arsonists with broken windows and social disorder
Science.gov (United States)
Douglas S. Thomas; David T. Butry; Jeffrey P. Prestemon
2011-01-01
In criminology, it is well understood that indicators of urban decay, such as abandoned buildings littered with broken windows, provide criminals with signals identifying neighborhoods with lower crime detection and apprehension rates than better maintained neighborhoods. Whether it is the resident population's sense of apathy, lack of civic pride, or fear of...
9. Review of "Spend Smart: Fix Our Broken School Funding System"
Science.gov (United States)
Baker, Bruce
2011-01-01
ConnCAN's Spend Smart: "Fix Our Broken School Funding System" was released concurrently with a bill introduced in the Connecticut legislature, based on the principles outlined in the report. However, the report is of negligible value to the policy debate over Connecticut school finance because it provides little or no support for any of…
10. INFLUENCE OF BROKEN ROTOR BARS LOCATION IN THE ...
African Journals Online (AJOL)
2013-06-30
Jun 30, 2013 ... single-phase induction motor by general method coupling field and circuit equations. IEEE Trans. Magnetics 31(3): 1908-1911. [6] Zouzou S. E., Khelif S., Halem N., Sahraoui M, 2011. Analysis of induction motor with broken rotor bars using circuit-field coupled method. International conference on electric.
11. Broken Heart Syndrome – An intra operative complication
Directory of Open Access Journals (Sweden)
Zara Wani
2018-03-01
We report a case of Broken Heart Syndrome in a 56-year-old postmenopausal woman, suffered while she was undergoing a simple biopsy procedure for a vocal cord polyp, which led to a physical, mental and financial burden both for the patient and for the doctors. A team of cardiologists made the diagnosis of this case based on clinical and echocardiographic findings.
12. Quantum diffusion in two-dimensional random systems with particle–hole symmetry
International Nuclear Information System (INIS)
Ziegler, K
2012-01-01
We study the scattering dynamics of an n-component spinor wavefunction in a random environment on a two-dimensional lattice. If the particle–hole symmetry of the Hamiltonian is spontaneously broken the dynamics of the quantum particles becomes diffusive on large scales. The latter is described by a non-interacting Grassmann field, indicating a special kind of asymptotic freedom on large scales in d = 2. (paper)
13. Time-reversal symmetry breaking in quantum billiards
Energy Technology Data Exchange (ETDEWEB)
Schaefer, Florian
2009-01-26
The present doctoral thesis describes experimentally measured properties of the resonance spectra of flat microwave billiards with partially broken time-reversal invariance induced by an embedded magnetized ferrite. A vector network analyzer determines the complex scattering matrix elements. The data is interpreted in terms of the scattering formalism developed in nuclear physics. At low excitation frequencies the scattering matrix displays isolated resonances. At these the effect of the ferrite on isolated resonances (singlets) and pairs of nearly degenerate resonances (doublets) is investigated. The hallmark of time-reversal symmetry breaking is the violation of reciprocity, i.e. of the symmetry of the scattering matrix. One finds that reciprocity holds in singlets; it is violated in doublets. This is modeled by an effective Hamiltonian of the resonator. A comparison of the model to the data yields time-reversal symmetry breaking matrix elements in the order of the level spacing. Their dependence on the magnetization of the ferrite is understood in terms of its magnetic properties. At higher excitation frequencies the resonances overlap and the scattering matrix elements fluctuate irregularly (Ericson fluctuations). They are analyzed in terms of correlation functions. The data are compared to three models based on random matrix theory. The model by Verbaarschot, Weidenmueller and Zirnbauer describes time-reversal invariant scattering processes. The one by Fyodorov, Savin and Sommers achieves the same for systems with complete time-reversal symmetry breaking. An extended model has been developed that accounts for partial breaking of time-reversal invariance. This extended model is in general agreement with the data, while the applicability of the other two models is limited. The cross-correlation function between forward and backward reactions determines the time-reversal symmetry breaking matrix elements of the Hamiltonian to up to 0.3 mean level spacings. Finally
14. Time-reversal symmetry breaking in quantum billiards
International Nuclear Information System (INIS)
Schaefer, Florian
2009-01-01
The present doctoral thesis describes experimentally measured properties of the resonance spectra of flat microwave billiards with partially broken time-reversal invariance induced by an embedded magnetized ferrite. A vector network analyzer determines the complex scattering matrix elements. The data is interpreted in terms of the scattering formalism developed in nuclear physics. At low excitation frequencies the scattering matrix displays isolated resonances. At these the effect of the ferrite on isolated resonances (singlets) and pairs of nearly degenerate resonances (doublets) is investigated. The hallmark of time-reversal symmetry breaking is the violation of reciprocity, i.e. of the symmetry of the scattering matrix. One finds that reciprocity holds in singlets; it is violated in doublets. This is modeled by an effective Hamiltonian of the resonator. A comparison of the model to the data yields time-reversal symmetry breaking matrix elements in the order of the level spacing. Their dependence on the magnetization of the ferrite is understood in terms of its magnetic properties. At higher excitation frequencies the resonances overlap and the scattering matrix elements fluctuate irregularly (Ericson fluctuations). They are analyzed in terms of correlation functions. The data are compared to three models based on random matrix theory. The model by Verbaarschot, Weidenmueller and Zirnbauer describes time-reversal invariant scattering processes. The one by Fyodorov, Savin and Sommers achieves the same for systems with complete time-reversal symmetry breaking. An extended model has been developed that accounts for partial breaking of time-reversal invariance. This extended model is in general agreement with the data, while the applicability of the other two models is limited. The cross-correlation function between forward and backward reactions determines the time-reversal symmetry breaking matrix elements of the Hamiltonian to up to 0.3 mean level spacings. Finally
15. Mirror symmetry II
CERN Document Server
Greene, Brian R
1997-01-01
Mirror symmetry has undergone dramatic progress during the last five years. Tremendous insight has been gained on a number of key issues. This volume surveys these results. Some of the contributions in this work have appeared elsewhere, while others were written specifically for this collection. The areas covered are organized into 4 sections, and each presents papers by both physicists and mathematicians. This volume collects the most important developments that have taken place in mathematical physics since 1991. It is an essential reference tool for both mathematics and physics libraries and for students of physics and mathematics.
DEFF Research Database (Denmark)
Spaten, Ole Michael
2016-01-01
Research publications concerning managers who coach their own employees are barely visible despite the widespread use of this practice in enterprises (McCarthy & Milner, 2013; Gregory & Levy, 2011; Crabb, 2011). This article focuses on leadership, power and moments of symmetry in the coaching relationship ... regarding managers coaching their employees, and it is asked: what contributes to coaching of high quality when one reflects on the power aspect as being immanent? Fourteen middle managers coached five of their employees, and all members of each party wrote down cues and experiences immediately after each...
17. Groups and symmetry
CERN Document Server
Farmer, David W
1995-01-01
In most mathematics textbooks, the most exciting part of mathematics-the process of invention and discovery-is completely hidden from the reader. The aim of Groups and Symmetry is to change all that. By means of a series of carefully selected tasks, this book leads readers to discover some real mathematics. There are no formulas to memorize; no procedures to follow. The book is a guide: Its job is to start you in the right direction and to bring you back if you stray too far. Discovery is left to you. Suitable for a one-semester course at the beginning undergraduate level, there are no prerequ
18. Geometry and symmetry
CERN Document Server
Yale, Paul B
2012-01-01
This book is an introduction to the geometry of Euclidean, affine, and projective spaces with special emphasis on the important groups of symmetries of these spaces. The two major objectives of the text are to introduce the main ideas of affine and projective spaces and to develop facility in handling transformations and groups of transformations. Since there are many good texts on affine and projective planes, the author has concentrated on the n-dimensional cases. Designed to be used in advanced undergraduate mathematics or physics courses, the book focuses on "practical geometry," emphasi
19. Applications of chiral symmetry
International Nuclear Information System (INIS)
Pisarski, R.D.
1995-03-01
The author discusses several topics in the applications of chiral symmetry at nonzero temperature. First, where does the rho go? The answer: up. The restoration of chiral symmetry at a temperature T_χ implies that the ρ and a₁ vector mesons are degenerate in mass. In a gauged linear sigma model the ρ mass increases with temperature, m_ρ(T_χ) > m_ρ(0). The author conjectures that at T_χ the thermal ρ-a₁ peak is relatively high, at about ∼1 GeV, with a width approximately that at zero temperature (up to standard kinematic factors). The ω meson also increases in mass, nearly degenerate with the ρ, but its width grows dramatically with temperature, increasing to at least ∼100 MeV by T_χ. The author also stresses how utterly remarkable the principle of vector meson dominance is, when viewed from the modern perspective of the renormalization group. Secondly, he discusses the possible appearance of disoriented chiral condensates from "quenched" heavy ion collisions. It appears difficult to obtain large domains of disoriented chiral condensates in the standard two flavor model. This leads to the last topic, which is the phase diagram for QCD with three flavors, and its proximity to the chiral critical point. QCD may be very near this chiral critical point, and one might thereby generate large domains of disoriented chiral condensates.
20. Bootstrap Dynamical Symmetry Breaking
Directory of Open Access Journals (Sweden)
Wei-Shu Hou
2013-01-01
Despite the emergence of a 125 GeV Higgs-like particle at the LHC, we explore the possibility of dynamical electroweak symmetry breaking by strong Yukawa coupling of very heavy new chiral quarks Q. Taking the 125 GeV object to be a dilaton with suppressed couplings, we note that the Goldstone bosons G exist as longitudinal modes V_L of the weak bosons and would couple to Q with Yukawa coupling λ_Q. With m_Q ≳ 700 GeV from LHC, the strong λ_Q ≳ 4 could lead to deeply bound QQ̄ states. We postulate that the leading "collapsed state," the color-singlet (heavy isotriplet) pseudoscalar QQ̄ meson π₁, is G itself, and a gap equation without Higgs is constructed. Dynamical symmetry breaking is effected via strong λ_Q, generating m_Q while self-consistently justifying treating G as massless in the loop, hence "bootstrap." Solving such a gap equation, we find that m_Q should be several TeV, or λ_Q ≳ 4π, and would become much heavier if there is a light Higgs boson. For such heavy chiral quarks, we find analogy with the π-N system, by which we conjecture the possible annihilation phenomena of QQ̄ → nV_L with high multiplicity, the search of which might be aided by Yukawa-bound QQ̄ resonances.
1. Symmetry in Complex Networks
Directory of Open Access Journals (Sweden)
Angel Garrido
2011-01-01
In this paper, we analyze a few interrelated concepts about graphs, such as their degree, entropy, or their symmetry/asymmetry levels. These concepts prove useful in the study of different types of Systems, and particularly, in the analysis of Complex Networks. A System can be defined as any set of components functioning together as a whole. A systemic point of view allows us to isolate a part of the world, and so, we can focus on those aspects that interact more closely than others. Network Science analyzes the interconnections among diverse networks from different domains: physics, engineering, biology, semantics, and so on. Current developments in the quantitative analysis of Complex Networks, based on graph theory, have been rapidly translated to studies of brain network organization. The brain's systems have complex network features—such as the small-world topology, highly connected hubs and modularity. These networks are not random. The topology of many different networks shows striking similarities, such as the scale-free structure, with the degree distribution following a Power Law. How can very different systems have the same underlying topological features? Modeling and characterizing these networks, looking for their governing laws, are the current lines of research. So, we will dedicate this Special Issue paper to show measures of symmetry in Complex Networks, and highlight their close relation with measures of information and entropy.
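The degree, entropy and symmetry measures surveyed in this abstract are easy to make concrete. The sketch below is not from the paper; the graphs and the particular measure are illustrative. It computes the Shannon entropy of a graph's degree distribution, which is zero for a degree-regular (highly symmetric) graph and grows with heterogeneity:

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of a graph's degree distribution,
    one simple heterogeneity measure of the kind surveyed above."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg)
    # dist[k] = number of nodes having degree k
    dist = Counter(deg.values())
    return 0.0 - sum((c / n) * math.log2(c / n) for c in dist.values())

# A 4-cycle is degree-regular (every node has degree 2): entropy 0.
print(degree_entropy([(0, 1), (1, 2), (2, 3), (3, 0)]))  # 0.0
# A star on 4 nodes mixes degree 3 and degree 1: entropy ≈ 0.811 bits.
print(degree_entropy([(0, 1), (0, 2), (0, 3)]))
```

A scale-free network, whose degree distribution follows a power law, scores much higher on this measure than a regular lattice of the same size.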
2. On natural hierarchy in dynamically broken gauge models
International Nuclear Information System (INIS)
Frere, J.M.
1980-01-01
A model based on dynamical symmetry breaking provides a naturally large 'mass hierarchy'. Few fermions are needed at intermediate energies, and asymptotic freedom of usual interactions is therefore not imperiled. (orig.)
3. In search of symmetry lost
CERN Multimedia
Wilczek, Frank
2004-01-01
Powerful symmetry principles have guided physicists in their quest for nature's fundamental laws. The successful gauge theory of electroweak interactions postulates a more extensive symmetry for its equations than are manifest in the world. The discrepancy is ascribed to a pervasive symmetry-breaking field, which fills all space uniformly, rendering the Universe a sort of exotic superconductor. So far, the evidence for these bold ideas is indirect. But soon the theory will undergo a critical test depending on whether the quanta of this symmetry-breaking field, the so-called Higgs particles, are produced at the Large Hadron Collider (due to begin operation in 2007).
4. Symmetry of crystals and molecules
CERN Document Server
2014-01-01
This book successfully combines a thorough treatment of molecular and crystalline symmetry with a simple and informal writing style. By means of familiar examples the author helps to provide the reader with those conceptual tools necessary for the development of a clear understanding of what are often regarded as 'difficult' topics. Christopher Hammond, University of Leeds This book should tell you everything you need to know about crystal and molecular symmetry. Ladd adopts an integrated approach so that the relationships between crystal symmetry, molecular symmetry and features of chemical interest are maintained and reinforced. The theoretical aspects of bonding and symmetry are also well represented, as are symmetry-dependent physical properties and the applications of group theory. The comprehensive coverage will make this book a valuable resource for a broad range of readers.
5. Trieste lectures on mirror symmetry
Energy Technology Data Exchange (ETDEWEB)
Hori, K [Department of Physics and Department of Mathematics, University of Toronto, Toronto, Ontario (Canada)
2003-08-15
These are pedagogical lectures on mirror symmetry given at the Spring School in ICTP, Trieste, March 2002. The focus is placed on worldsheet descriptions of the physics related to mirror symmetry. We start with the introduction to general aspects of (2,2) supersymmetric field theories in 1 + 1 dimensions. We next move on to the study and applications of linear sigma model. Finally, we provide a proof of mirror symmetry in a class of models. (author)
6. Quantum symmetry in quantum theory
International Nuclear Information System (INIS)
Schomerus, V.
1993-02-01
Symmetry concepts have always been of great importance for physical problems like explicit calculations, classification or model building. More recently, new 'quantum symmetries' ((quasi) quantum groups) attracted much interest in quantum theory. It is shown that all these quantum symmetries permit a conventional formulation as symmetry in quantum mechanics. Symmetry transformations can act on the Hilbert space H of physical states such that the ground state is invariant and field operators transform covariantly. Models show that one must allow for 'truncation' in the tensor product of representations of a quantum symmetry. This means that the dimension of the tensor product of two representations of dimension σ₁ and σ₂ may be strictly smaller than σ₁σ₂. Consistency of the transformation law of field operators with local braid relations leads us to expect that (weak) quasi quantum groups are the most general symmetries in local quantum theory. The elements of the R-matrix which appears in these local braid relations turn out to be operators on H in general. It will be explained in detail how examples of field algebras with weak quasi quantum group symmetry can be obtained. Given a set of observable fields with a finite number of superselection sectors, a quantum symmetry together with a complete set of covariant field operators which obey local braid relations are constructed. A covariant transformation law for adjoint fields is not automatic but will follow when the existence of an appropriate antipode is assumed. At the example of the chiral critical Ising model, non-uniqueness of the quantum symmetry will be demonstrated. Generalized quantum symmetries yield examples of gauge symmetries in non-commutative geometry. Quasi-quantum planes are introduced as the simplest examples of quasi-associative differential geometry. (Weak) quasi quantum groups can act on them by generalized derivations much as quantum groups do in non-commutative (differential-) geometry.
7. Spontaneous symmetry breaking and response functions
International Nuclear Information System (INIS)
Beraudo, A.; De Pace, A.; Martini, M.; Molinari, A.
2005-01-01
We study the quantum phase transition occurring in an infinite homogeneous system of spin 1/2 fermions in a non-relativistic context. As an example we consider neutrons interacting through a simple spin-spin Heisenberg force. The two critical values of the coupling strength (signaling the onset in the system of a finite magnetization and of the total magnetization, respectively) are found, and their dependence upon the range of the interaction is explored. The spin response function of the system in the region where the spin-rotational symmetry is spontaneously broken is also studied. For a ferromagnetic interaction, the spin response along the direction of the spontaneous magnetization occurs in the particle-hole continuum and displays, for not too large momentum transfers, two distinct peaks. The response along the direction orthogonal to the spontaneous magnetization displays instead, beyond a softened and depleted particle-hole continuum, a collective mode to be identified with a Goldstone boson of type II. Notably, the random phase approximation on a Hartree-Fock basis accounts for it, in particular for its quadratic (close to the origin) dispersion relation. It is shown that the Goldstone boson contributes ∼25% to the saturation of the energy-weighted sum rule when the system becomes fully magnetized (that is, at the upper critical value of the interaction strength) and continues to grow as the interaction strength increases.
8. An introduction to Yangian symmetries
International Nuclear Information System (INIS)
Bernard, D.
1992-01-01
Some aspects of the quantum Yangians as symmetry algebras of two-dimensional quantum field theories are reviewed. They include two main issues: the first is the classical Heisenberg model, covering non-Abelian symmetries, generators of the symmetries and the semi-classical Yangians, an alternative presentation of the semi-classical Yangians, digression on Poisson-Lie groups. The second is the quantum Heisenberg chain, covering non-Abelian symmetries and the quantum Yangians, the transfer matrix and an alternative presentation of the Yangians, digression on the double Yangians. (K.A.) 15 refs
9. Killing symmetries in neutron transport
International Nuclear Information System (INIS)
Lukacs, B.; Racz, A.
1992-10-01
Although inside the reactor zone there is no exact continuous spatial symmetry, in certain configurations neutron flux distribution is close to a symmetrical one. In such cases the symmetrical solution could provide a good starting point to determine the non-symmetrical power distribution. All possible symmetries are determined in the 3-dimensional Euclidean space, and the form of the transport equation is discussed in such a coordinate system which is adapted to the particular symmetry. Possible spontaneous symmetry breakings are pointed out. (author) 6 refs
10. The conservation of orbital symmetry
CERN Document Server
Woodward, R B
2013-01-01
The Conservation of Orbital Symmetry examines the principle of conservation of orbital symmetry and its use. The central content of the principle was that reactions occur readily when there is congruence between orbital symmetry characteristics of reactants and products, and only with difficulty when that congruence does not obtain-or to put it more succinctly, orbital symmetry is conserved in concerted reaction. This principle is expected to endure, whatever the language in which it may be couched, or whatever greater precision may be developed in its application and extension. The book ope
11. Spontaneous symmetry breaking in PT-symmetric systems with nonlinear damping
International Nuclear Information System (INIS)
Karthiga, S.; Chandrasekar, V.K.; Senthilvelan, M.; Lakshmanan, M.
2016-01-01
In this talk, we discuss the remarkable role of position-dependent damping in determining the parametric regions of symmetry breaking in nonlinear PT-symmetric systems. We illustrate the nature of PT-symmetry preservation and breaking with reference to a remarkable integrable scalar nonlinear system. In the two-dimensional cases of such position-dependent damped systems, we unveil the existence of a class of novel bi-PT-symmetric systems which have two-fold PT symmetries. We discuss the dynamics of these systems and show how symmetry breaking occurs, that is, whether the symmetry breaking of the two PT symmetries occurs in pairs or one by one. The addition of linear damping in these nonlinearly damped systems induces competition between the two types of damping. This competition results in a PT phase transition in which the PT symmetry is broken for lower loss/gain strength and is restored by increasing the loss/gain strength. We also show that by properly designing the form of the position-dependent damping, we can tailor the PT-symmetric regions of the system. (author)
12. Particle-Hole Symmetry Breaking in the Pseudogap State of Bi2201
Energy Technology Data Exchange (ETDEWEB)
Hashimoto, M.; He, R.-H.; Tanaka, K.; Testaud, J.P.; Meevasana, W.; Moore, R.G.; Lu, D.H.; Yao, H.; Yoshida, Y.; Eisaki, H.; Devereaux, T.P.; Hussain, Z.; Shen, Z.-X. (SIMES, Stanford; Geballe Lab., Stanford U.; LBNL ALS; Osaka U.; AIST, Tsukuba)
2011-08-19
In conventional superconductors, a gap exists in the energy absorption spectrum only below the transition temperature (T_c), corresponding to the energy price to pay for breaking a Cooper pair of electrons. In high-T_c cuprate superconductors above T_c, an energy gap called the pseudogap exists, and is controversially attributed either to pre-formed superconducting pairs, which would exhibit particle-hole symmetry, or to competing phases which would typically break it. Scanning tunnelling microscopy (STM) studies suggest that the pseudogap stems from lattice translational symmetry breaking and is associated with a different characteristic spectrum for adding or removing electrons (particle-hole asymmetry). However, no signature of either spatial or energy symmetry breaking of the pseudogap has previously been observed by angle-resolved photoemission spectroscopy (ARPES). Here we report ARPES data from Bi2201 which reveals both particle-hole symmetry breaking and dramatic spectral broadening indicative of spatial symmetry breaking without long range order, upon crossing through T* into the pseudogap state. This symmetry breaking is found in the dominant region of the momentum space for the pseudogap, around the so-called anti-node near the Brillouin zone boundary. Our finding supports the STM conclusion that the pseudogap state is a broken-symmetry state that is distinct from homogeneous superconductivity.
13. Inversion symmetry breaking induced triply degenerate points in orderly arranged PtSeTe family materials
Science.gov (United States)
Xiao, R. C.; Cheung, C. H.; Gong, P. L.; Lu, W. J.; Si, J. G.; Sun, Y. P.
2018-06-01
k paths with the appropriate symmetry allow one to find triply degenerate points (TDPs) in band structures. The paths that host the type-II Dirac points in PtSe2 family materials also have this spatial symmetry. However, due to Kramers degeneracy (the systems have both inversion symmetry and time-reversal symmetry), the crossing points in them are Dirac ones. In this work, based on symmetry analysis and first-principles calculations, we predict that PtSe2 family materials should undergo topological transitions if the inversion symmetry is broken, i.e. the Dirac fermions in PtSe2 family materials split into TDPs in PtSeTe family materials (PtSSe, PtSeTe, and PdSeTe) with orderly arranged S/Se (Se/Te). This differs from the case in high-energy physics, where breaking inversion symmetry I leads to the splitting of a Dirac fermion into Weyl fermions. We also address a possible method to achieve the orderly arrangement in PtSeTe family materials in experiments. Our study provides a real example in which Dirac points transform into TDPs, and is helpful for investigating the topological transition between Dirac fermions and TDP fermions.
14. Duality transformation of a spontaneously broken gauge theory
International Nuclear Information System (INIS)
Mizrachi, L.
1981-04-01
Duality transformation for a spontaneously broken gauge theory is constructed in the CDS gauge (x_μ A_μ^a = 0). The dual theory is expressed in terms of dual potentials which satisfy the same gauge condition, but with g → 1/g. Generally the theory is not self-dual, but in the weak coupling region (small g), self-duality is found for the subgroup which is not spontaneously broken, or in regions where monopoles and vortices are concentrated (in agreement with 't Hooft's ideas that monopoles and vortices in the Georgi-Glashow model make it self-dual). In the strong coupling regime a systematic strong coupling expansion can be written. For this region the dual theory is generally not local gauge invariant, but it is invariant under global gauge transformations. (author)
15. Theoretical studies of radiative properties of broken clouds
International Nuclear Information System (INIS)
Titov, G.A.
1994-01-01
One of the three goals of the Atmospheric Radiation Measurement (ARM) Program is to improve the quality of radiation models under clear sky, homogeneous cloud, and broken cloud conditions. This report is concerned with the development of the theory of radiation transfer in broken clouds. Our approach is based on a stochastic description of the interaction between the radiation and a cloud field with stochastic geometry. In the following, we discuss (1) the mean radiation fluxes in the near-IR spectral range 2.7 to 3.2 μm; (2) the influence of the random geometry of individual cumulus clouds on the mean fluxes of visible solar radiation; and (3) the equations of the mean radiance in statistically inhomogeneous cloud fields.
16. QCD-instantons and conformal space-time inversion symmetry
International Nuclear Information System (INIS)
Klammer, D.
2008-04-01
In this paper, we explore the appealing possibility that the strong suppression of large-size QCD instantons - as evident from lattice data - is due to a surviving conformal space-time inversion symmetry. This symmetry is both suggested by the striking invariance of high-quality lattice data for the instanton size distribution under inversion of the instanton size, ρ → ⟨ρ⟩²/ρ, and by the known validity of space-time inversion symmetry in the classical instanton sector. We project the instanton calculus onto the four-dimensional surface of a five-dimensional sphere via conformal stereographic mapping, before investigating conformal inversion. This projection to a compact, curved geometry serves both to avoid the occurrence of divergences and to introduce the average instanton size ⟨ρ⟩ from the lattice data as a new length scale. The average instanton size is identified with the radius b of this 5d sphere and acts as the conformal inversion radius. For b = ⟨ρ⟩, our corresponding results are almost perfectly symmetric under space-time inversion and in good qualitative agreement with the lattice data. For ρ/b → 0 we recover the familiar results of instanton perturbation theory in flat 4d space. Moreover, we illustrate that a (weakly broken) conformal inversion symmetry would have significant consequences for QCD beyond instantons. As a further successful test of inversion symmetry, we present striking implications for another instanton-dominated lattice observable, the chirality-flip ratio in the QCD vacuum. (orig.)
17. Quantum mechanics symmetries
CERN Document Server
Greiner, Walter
1989-01-01
"Quantum Dynamics" is a major survey of quantum theory based on Walter Greiner's long-running and highly successful courses at the University of Frankfurt. The key to understanding in quantum theory is to reinforce lecture attendance and textual study by working through plenty of representative and detailed examples. Firm belief in this principle led Greiner to develop his unique course and to transform it into a remarkable and comprehensive text. The text features a large number of examples and exercises involving many of the most advanced topics in quantum theory. These examples give practical and precise demonstrations of how to use the often subtle mathematics behind quantum theory. The text is divided into five volumes: Quantum Mechanics I - An Introduction, Quantum Mechanics II - Symmetries, Relativistic Quantum Mechanics, Quantum Electrodynamics, Gauge Theory of Weak Interactions. These five volumes take the reader from the fundamental postulates of quantum mechanics up to the latest research in partic...
18. Symmetries of cluster configurations
International Nuclear Information System (INIS)
Kramer, P.
1975-01-01
A deeper understanding of clustering phenomena in nuclei must encompass at least two interrelated aspects of the subject: (A) Given a system of A nucleons with two-body interactions, what are the relevant and persistent modes of clustering involved. What is the nature of the correlated nucleon groups which form the clusters, and what is their mutual interaction. (B) Given the cluster modes and their interaction, what systematic patterns of nuclear structure and reactions emerge from it. Are there, for example, families of states which share the same ''cluster parents''. Which cluster modes are compatible or exclude each other. What quantum numbers could characterize cluster configurations. There is no doubt that we can learn a good deal from the experimentalists who have discovered many of the features relevant to aspect (B). Symmetries specific to cluster configurations which can throw some light on both aspects of clustering are discussed
19. Broken Esophageal Stent Successfully Treated by Interventional Radiology Technique
International Nuclear Information System (INIS)
Zelenak, Kamil; Mistuna, Dusan; Lucan, Jaroslav; Polacek, Hubert
2010-01-01
Esophageal stent fractures occur quite rarely. A 61-year-old male patient was previously treated for rupture of benign stenosis, occurring after dilatation, by implanting an esophageal stent. However, a year after implantation, the patient suffered from dysphagia caused by the broken esophageal stent. He was treated with the interventional radiology technique, whereby a second implantation of the esophageal stent was carried out quite successfully.
20. Octonionic gauge theory from spontaneously broken SO(8)
International Nuclear Information System (INIS)
Lassig, C.C.; Joshi, G.C.
1995-01-01
An attempt is made to construct a gauge theory based on a bimodular representation of the octonion algebra, the non-associativity of which is manifested as a non-closure of the bimodule algebra. It is found that this fact leads to gauge non-invariance of the theory. However, the bimodule algebra can be embedded in SO(8), the gauge theory of which can be broken down to give a massless SO(7) theory together with a massive octonionic gauge theory. 7 refs
1. Mass Formulae for Broken Supersymmetry in Curved Space-Time
CERN Document Server
Ferrara, Sergio
2016-01-01
We derive the mass formulae for ${\cal N}=1$, $D=4$ matter-coupled Supergravity for broken (and unbroken) Supersymmetry in curved space-time. These formulae are applicable to de Sitter configurations, as is the case for inflation. For unbroken Supersymmetry in anti-de Sitter (AdS) space one gets the mass relations modified by the AdS curvature. We compute the mass relations both for the potential and its derivative non-vanishing.
2. Determining reactor fuel elements broken by Cerenkov counting
International Nuclear Information System (INIS)
Guo Juhao; Dong Shiyuan; Feng Yuying
1996-01-01
The basis and method of determining broken fuel elements in a reactor by Cerenkov counting, measured with a liquid scintillation spectrometer, are introduced. The radioactive characteristics of the nuclides generating Cerenkov radiation in the primary water of a 200 MW nuclear district heating reactor are analyzed. The activity of the activation products in the primary water and of the fission products in the fuel elements is calculated. The feasibility of the Cerenkov counting measurement was analyzed. This method is simple and quick.
3. Mental Suffering in Protracted Political Conflict: Feeling Broken or Destroyed.
Science.gov (United States)
Barber, Brian K; McNeely, Clea A; El Sarraj, Eyad; Daher, Mahmoud; Giacaman, Rita; Arafat, Cairo; Barnes, William; Abu Mallouh, Mohammed
2016-01-01
This mixed-methods exploratory study identified and then developed and validated a quantitative measure of a new construct of mental suffering in the occupied Palestinian territory: feeling broken or destroyed. Group interviews were conducted in 2011 with 68 Palestinians, most aged 30-40, in the West Bank, East Jerusalem, and the Gaza Strip to discern local definitions of functioning. Interview participants articulated a type of suffering not captured in existing mental health instruments used in regions of political conflict. In contrast to the specific difficulties measured by depression and PTSD (sleep, appetite, energy, flashbacks, avoidance, etc.), participants elaborated a more existential form of mental suffering: feeling that one's spirit, morale and/or future was broken or destroyed, and emotional and psychological exhaustion. Participants articulated these feelings when describing the rigors of the political and economic contexts in which they live. We wrote survey items to capture these sentiments and administered these items, along with standard survey measures of mental health, to a representative sample of 1,778 32-43 year olds in the occupied Palestinian territory. The same survey questions also were administered to a representative subsample (n = 508) six months earlier, providing repeated measures of the construct. Across samples and time, the feeling broken or destroyed scale: 1) comprised a separate factor in exploratory factor analyses, 2) had high inter-item consistency, 3) was reported by both genders and in all regions, 4) showed discriminant validity via moderate correlations with measures of feelings of depression and trauma-related stress, and 5) was more commonly experienced than either feelings of depression or trauma-related stress. Feeling broken or destroyed can be reliably measured and distinguished from conventional measures of mental health. Such locally grounded and contextualized measures should be identified and included in
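The "high inter-item consistency" reported for the scale is conventionally quantified by Cronbach's alpha. A minimal sketch of that computation, with invented item scores (the study's actual data are not reproduced here):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns,
    where items[i][j] is respondent j's score on item i."""
    k = len(items)           # number of items
    n = len(items[0])        # number of respondents

    def var(xs):
        # population variance; the n/(n-1) correction cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_var = sum(var(col) for col in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Hypothetical 3-item, 5-respondent example (not the study's data).
scores = [[3, 4, 2, 5, 4],
          [3, 5, 2, 4, 4],
          [2, 4, 3, 5, 3]]
print(round(cronbach_alpha(scores), 3))  # 0.871
```

Values above roughly 0.7-0.8 are usually read as acceptable internal consistency for a scale of this kind.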
4. Why most flavor-dependence predictions for nonleptonic charm decays are wrong: flavor symmetry and final-state interactions in nonleptonic decays of charmed hadrons
International Nuclear Information System (INIS)
Lipkin, H.J.
1980-09-01
Nonleptonic weak decays of strange hadrons are complicated by the interplay of weak and strong interactions. Models based either on symmetry properties or on the selection of certain types of diagrams are both open to criticism. The symmetries used are all broken in strong interactions, and the selection of some diagrams and neglect of others is never seriously justified. Furthermore, the number of related decays of strange hadrons is small, so that experimental data are insufficient for significant tests of phenomenological models with a few free parameters. The discovery of charmed particles with many open channels for nonleptonic decays has provided a new impetus for a theoretical understanding of these processes. The GIM current provides a well defined weak hamiltonian, which can justifiably be used to first order. The QCD approach to strong interactions gives flavor-independent couplings and flavor symmetry broken only by quark masses. In a model with n generations of quarks and 2n flavors, a flavor symmetry group SU(2n) can be defined which is broken only by H_weak and the quark masses. Here again, the same two approaches by symmetry and dynamics have been used. But both types of treatment tend to consider only the symmetry properties or dominant diagrams of the weak interaction, including some subtle effects, while overlooking rather obvious effects of strong interactions.
5. Perilaku Komunikasi antara Guru dengan Siswa Broken Home [Communication Behavior between Teachers and Broken-Home Students]
Directory of Open Access Journals (Sweden)
Emilsyah Nur
2017-12-01
This qualitative study used several informants as sources to address questions about public opinion on interpersonal communication in dealing with broken-home parents. The results show that the communication behavior of broken-home students at school is not yet fully effective. This is caused by the low intensity of communication between parents and children, so that children are reluctant to open up to their parents about their academic achievement. The lack of support, empathy and positive attitude shown by parents toward their children also affects the interpersonal relationship between parents and children, leading children to be more open with friends or relatives than with their own parents. Equality between parents and children is still lacking. Such communication behavior strongly influences children's behavior at school. Factors hindering the communication behavior of broken-home students include parents who struggle to divide their time between work and giving attention to children at home, so that communication with the children does not run smoothly; the indifferent attitude shown by parents, which makes children distance themselves and remain closed toward their parents; and the students' lack of openness toward the teacher.
6. Broken instrument retrieval with indirect ultrasonics in a primary molar.
Science.gov (United States)
Pk, Musale; Sc, Kataria; As, Soni
2016-02-01
The separation of a file during pulpectomy is a rare incident in primary teeth due to inherently wider and relatively straighter root canals. A broken instrument hinders the clinician from optimal preparation and obturation of the root canal system, invariably leading to failure; in such teeth, extraction followed by suitable space maintenance is usually considered the treatment of choice. This case report demonstrates successful nonsurgical retrieval of a separated H file fragment in tooth 84. A 7-year-old girl was referred to the Department of Paedodontics and Preventive Dentistry for endodontic management of primary tooth 84 with a dento-alveolar abscess. Her medical history was noncontributory. After diagnosing a broken H file in the mesio-lingual canal, the tooth was endodontically treated in two appointments. At the first session, the broken file was successfully retrieved using low-intensity ultrasonic vibrations applied through a DG 16 endodontic explorer, viewed under an operating microscope. After abscess resolution, Vitapex root canal obturation with preformed metal crown cementation was completed at a second session. The patient was recalled at 3-, 6-, 12- and 15-month intervals and found to be clinically asymptomatic, with complete furcal healing radiographically. Integration of microscopes and ultrasonics in paediatric dental practice has made it possible to save such teeth with a successful outcome. The favourable location of the separated file, the relatively straighter root canal system and patient cooperation resulted in successful nonsurgical management in this case.
7. Symmetry chains and adaptation coefficients
International Nuclear Information System (INIS)
Fritzer, H.P.; Gruber, B.
1985-01-01
Given a symmetry chain of physical significance it becomes necessary to obtain states which transform properly with respect to the symmetries of the chain. In this article we describe a method which permits us to calculate symmetry-adapted quantum states with relative ease. The coefficients for the symmetry-adapted linear combinations are obtained, in numerical form, in terms of the original states of the system and can thus be represented in the form of numerical tables. In addition, one also obtains automatically the matrix elements for the operators of the symmetry groups which are involved, and thus for any physical operator which can be expressed either as an element of the algebra or of the enveloping algebra. The method is well suited for computers once the physically relevant symmetry chain, or chains, have been defined. While the method to be described is generally applicable to any physical system for which semisimple Lie algebras play a role we choose here a familiar example in order to illustrate the method and to illuminate its simplicity. We choose the nuclear shell model for the case of two nucleons with orbital angular momentum l = 1. While the states of the entire shell transform like the smallest spin representation of SO(25) we restrict our attention to its subgroup SU(6) × SU(2)_T. We determine the symmetry chains which lead to total angular momentum SU(2)_J and obtain the symmetry-adapted states for these chains.
8. Collective states and crossing symmetry
International Nuclear Information System (INIS)
Heiss, W.D.
1977-01-01
Collective states are usually described in simple terms but with the use of effective interactions which are supposed to contain more or less complicated contributions. The significance of crossing symmetry is discussed in this connection. Formal problems encountered in the attempts to implement crossing symmetry are pointed out
9. Singlets of fermionic gauge symmetries
NARCIS (Netherlands)
Bergshoeff, E.A.; Kallosh, R.E.; Rahmanov, M.A.
1989-01-01
We investigate under which conditions singlets of fermionic gauge symmetries which are "square roots of gravity" can exist. Their existence is non-trivial because there are no fields neutral in gravity. We tabulate several examples of singlets of global and local supersymmetry and κ-symmetry and
10. Symmetry guide to ferroaxial transitions
Czech Academy of Sciences Publication Activity Database
Hlinka, Jiří; Přívratská, J.; Ondrejkovič, Petr; Janovec, Václav
2016-01-01
Vol. 116, No. 17 (2016), 1-6, article no. 177602. ISSN 0031-9007. R&D Projects: GA ČR GA15-04121S. Institutional support: RVO:68378271. Keywords: symmetry * symmetry breaking * ferroaxial transitions * property tensors * Aizu species. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 8.462, year: 2016
11. Spontaneous SUSY breaking without R symmetry in supergravity
Science.gov (United States)
Maekawa, Nobuhiro; Omura, Yuji; Shigekami, Yoshihiro; Yoshida, Manabu
2018-03-01
We discuss spontaneous supersymmetry (SUSY) breaking in a model with an anomalous U(1)_A symmetry. In this model, the size of each term in the superpotential is controlled by the U(1)_A charge assignment, and SUSY is spontaneously broken via the Fayet-Iliopoulos term of U(1)_A at the metastable vacuum. In the global SUSY analysis, the gaugino masses become much smaller than the sfermion masses, because an approximate R symmetry appears at the SUSY breaking vacuum. In this paper, we show that gaugino masses can be as large as the gravitino mass, taking the supergravity effect into consideration. This is because the R symmetry is not imposed, so that the constant term in the superpotential, which is irrelevant to the global SUSY analysis, contributes largely to the soft SUSY breaking terms in supergravity. As the mediation mechanism, we introduce the contributions of the field not charged under U(1)_A and the moduli field to cancel the anomaly of U(1)_A. We comment on the application of our SUSY breaking scenario to the grand unified theory.
12. Composite symmetry-protected topological order and effective models
Science.gov (United States)
Nietner, A.; Krumnow, C.; Bergholtz, E. J.; Eisert, J.
2017-12-01
Strongly correlated quantum many-body systems in low dimensions exhibit a wealth of phenomena, ranging from features of geometric frustration to signatures of symmetry-protected topological order. In suitable descriptions of such systems, it can be helpful to resort to effective models, which focus on the essential degrees of freedom of the given model. In this work, we analyze how to determine the validity of an effective model by demanding it to be in the same phase as the original model. We focus our study on one-dimensional spin-1/2 systems and explain how nontrivial symmetry-protected topologically ordered (SPT) phases of an effective spin-1 model can arise depending on the couplings in the original Hamiltonian. In this analysis, tensor network methods feature in two ways: on the one hand, we make use of recent techniques for the classification of SPT phases using matrix product states in order to identify the phases in the effective model with those in the underlying physical system, employing Künneth's theorem for cohomology. As an intuitive paradigmatic model we exemplify the developed methodology by investigating the bilayered Δ chain. For strong ferromagnetic interlayer couplings, we find the system to transition into exactly the same phase as an effective spin-1 model. However, for weak but finite coupling strength, we identify a symmetry broken phase differing from this effective spin-1 description. On the other hand, we underpin our argument with a numerical analysis making use of matrix product states.
13. Gravitino and scalar τ-lepton decays in supersymmetric models with broken R-parity
Energy Technology Data Exchange (ETDEWEB)
Hajer, Jan
2010-06-15
Mildly broken R-parity is known to provide a solution to the cosmological gravitino problem in supergravity extensions of the Standard Model. In this work we consider new effects occurring in the R-parity breaking Minimal Supersymmetric Standard Model including right-handed neutrino superfields. We calculate the most general vacuum expectation values of neutral scalar fields including left- and right-handed scalar neutrinos. Additionally, we derive the corresponding mass mixing matrices of the scalar sector. We recalculate the neutrino mass generation mechanisms due to right-handed neutrinos as well as due to R-parity breaking. Furthermore, we obtain a so far unknown formula for the neutrino masses for the case where both mechanisms are effective. We then constrain the couplings to bilinear R-parity violating couplings in order to accommodate R-parity breaking to experimental results. In order to constrain the family structure with a U(1)_Q flavor symmetry we furthermore embed the particle content into an SU(5) Grand Unified Theory. In this model we calculate the signal of decaying gravitino dark matter as well as the dominant decay channel of a likely NLSP, the scalar τ-lepton. Comparing the gravitino signal with results of the Fermi Large Area Telescope enables us to find a lower bound on the decay length of scalar τ-leptons in collider experiments. (orig.)
15. Irreversible thermodynamics of open chemical networks. I. Emergent cycles and broken conservation laws
International Nuclear Information System (INIS)
Polettini, Matteo; Esposito, Massimiliano
2014-01-01
In this paper and Paper II, we outline a general framework for the thermodynamic description of open chemical reaction networks, with special regard to metabolic networks regulating cellular physiology and biochemical functions. We first introduce closed networks "in a box", whose thermodynamics is subjected to strict physical constraints: the mass-action law, elementarity of processes, and detailed balance. We further digress on the role of solvents and on the seemingly unacknowledged property of network independence of free energy landscapes. We then open the system by assuming that the concentrations of certain substrate species (the chemostats) are fixed, whether because promptly regulated by the environment via contact with reservoirs, or because nearly constant in a time window. As a result, the system is driven out of equilibrium. A rich algebraic and topological structure ensues in the network of internal species: Emergent irreversible cycles are associated with nonvanishing affinities, whose symmetries are dictated by the breakage of conservation laws. These central results are summarized in the relation a + b = s_Y between the number of fundamental affinities a, the number of broken conservation laws b and the number of chemostats s_Y. We decompose the steady state entropy production rate in terms of fundamental fluxes and affinities in the spirit of Schnakenberg's theory of network thermodynamics, paving the way for the forthcoming treatment of the linear regime, of efficiency and tight coupling, of free energy transduction, and of thermodynamic constraints for network reconstruction.
17. Polar Kerr effect studies of time reversal symmetry breaking states in heavy fermion superconductors
Energy Technology Data Exchange (ETDEWEB)
Schemm, E.R., E-mail: eschemm@alumni.stanford.edu [Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305 (United States); Levenson-Falk, E.M. [Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305 (United States); Department of Physics, Stanford University, Stanford, CA 94305 (United States); Kapitulnik, A. [Geballe Laboratory for Advanced Materials, Stanford University, Stanford, CA 94305 (United States); Department of Physics, Stanford University, Stanford, CA 94305 (United States); Department of Applied Physics, Stanford University, Stanford, CA 94305 (United States); Stanford Institute of Energy and Materials Science, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)
2017-04-15
Highlights: • Polar Kerr effect (PKE) probes broken time-reversal symmetry (TRS) in superconductors. • Absence of PKE below Tc in CeCoIn5 is consistent with d_{x^2-y^2} order parameter symmetry. • PKE in the B phase of the multiphase superconductor UPt3 agrees with an E2u model. • Data on URu2Si2 show broken TRS and additional structure in the superconducting state. - Abstract: The connection between chiral superconductivity and topological order has emerged as an active direction in research as more instances of both have been identified in condensed matter systems. With the notable exception of ³He-B, all of the known or suspected chiral – that is to say time-reversal symmetry-breaking (TRSB) – superfluids arise in heavy fermion superconductors, although the vast majority of heavy fermion superconductors preserve time-reversal symmetry. Here we review recent experimental efforts to identify TRSB states in heavy fermion systems via measurement of the polar Kerr effect, which is a direct consequence of TRSB.
18. Fifty years of symmetry operations
International Nuclear Information System (INIS)
Wigner, E.P.
1978-01-01
The author begins by discussing the application of symmetry principles in classical physics, which began 150 years ago. He then offers a few remarks on the essence of these principles and their role in the structure of physics; events, laws of nature, and invariance principles - kinematic and then dynamic - are treated. After this general discussion of the various types of symmetries, he considers the fundamental differences in their application in classical and quantum physics; the symmetry principles have greater effectiveness in quantum theory. After a few critical remarks of a general nature on the invariance principles, the author reviews the application of symmetry principles in various areas of quantum mechanics: atomic spectra, molecular physics, solid state physics, nuclear physics, and particle physics. He notes that the roles of the different symmetries recognized to be approximate provide the most interesting conclusions.
19. Symmetry inheritance of scalar fields
International Nuclear Information System (INIS)
Ivica Smolić
2015-01-01
Matter fields do not necessarily have to share the symmetries with the spacetime they live in. When this happens, we speak of the symmetry inheritance of fields. In this paper we classify the obstructions of symmetry inheritance by the scalar fields, both real and complex, and look more closely at the special cases of stationary and axially symmetric spacetimes. Since the symmetry noninheritance is present in the scalar fields of boson stars and may enable the existence of the black hole scalar hair, our results narrow the possible classes of such solutions. Finally, we define and analyse the symmetry noninheritance contributions to the Komar mass and angular momentum of the black hole scalar hair. (paper)
20. Shape analysis with subspace symmetries
KAUST Repository
Berner, Alexander
2011-04-01
We address the problem of partial symmetry detection, i.e., the identification of building blocks a complex shape is composed of. Previous techniques identify parts that relate to each other by simple rigid mappings, similarity transforms, or, more recently, intrinsic isometries. Our approach generalizes the notion of partial symmetries to more general deformations. We introduce subspace symmetries whereby we characterize similarity by requiring the set of symmetric parts to form a low dimensional shape space. We present an algorithm to discover subspace symmetries based on detecting linearly correlated correspondences among graphs of invariant features. We evaluate our technique on various data sets. We show that for models with pronounced surface features, subspace symmetries can be found fully automatically. For complicated cases, a small amount of user input is used to resolve ambiguities. Our technique computes dense correspondences that can subsequently be used in various applications, such as model repair and denoising. © 2010 The Author(s). 
http://manpages.ubuntu.com/manpages/artful/en/man1/pic.1.html | Provided by: groff-base_1.22.3-9_amd64
#### NAME
pic - compile pictures for troff or TeX
#### SYNOPSIS
pic [ -nvCSU ] [ filename ... ]
pic -t [ -cvzCSU ] [ filename ... ]
#### DESCRIPTION
This manual page describes the GNU version of pic, which is part of the groff document
formatting system. pic compiles descriptions of pictures embedded within troff or TeX
input files into commands that are understood by TeX or troff. Each picture starts with a
line beginning with .PS and ends with a line beginning with .PE. Anything outside of .PS
and .PE is passed through without change.
It is the user's responsibility to provide appropriate definitions of the PS and PE
macros. When the macro package being used does not supply such definitions (for example,
old versions of -ms), appropriate definitions can be obtained with -mpic: These will
center each picture.
#### OPTIONS
Options that do not take arguments may be grouped behind a single -. The special option
-- can be used to mark the end of the options. A filename of - refers to the standard
input.
-C Recognize .PS and .PE even when followed by a character other than space or
newline.
-S Safer mode; do not execute sh commands. This can be useful when operating on
untrustworthy input (enabled by default).
-U Unsafe mode; revert the default option -S.
-n Don't use the groff extensions to the troff drawing commands. You should use this
if you are using a postprocessor that doesn't support these extensions. The
extensions are described in groff_out(5). The -n option also causes pic not to use
zero-length lines to draw dots in troff mode.
-t TeX mode.
-c Be more compatible with tpic. Implies -t. Lines beginning with \ are not passed
through transparently. Lines beginning with . are passed through with the initial
. changed to \. A line beginning with .ps is given special treatment: it takes an
optional integer argument specifying the line thickness (pen size) in milliinches;
a missing argument restores the previous line thickness; the default line thickness
is 8 milliinches. The line thickness thus specified takes effect only when a non-
negative line thickness has not been specified by use of the thickness attribute or
by setting the linethick variable.
-v Print the version number.
-z In TeX mode draw dots using zero-length lines.
The following options supported by other versions of pic are ignored:
-D Draw all lines using the \D escape sequence. pic always does this.
-T dev Generate output for the troff device dev. This is unnecessary because the troff
output generated by pic is device-independent.
#### USAGE
This section describes only the differences between GNU pic and the original version of
pic. Many of these differences also apply to newer versions of Unix pic. A complete
documentation is available in the file
/usr/share/doc/groff-base/pic.ms.gz
TeX mode
TeX mode is enabled by the -t option. In TeX mode, pic will define a vbox called \graph
for each picture. Use the figname command to change the name of the vbox. You must
yourself print that vbox using, for example, the command
\centerline{\box\graph}
Actually, since the vbox has a height of zero (it is defined with \vtop) this will produce
slightly more vertical space above the picture than below it;
\centerline{\raise 1em\box\graph}
would avoid this.
To make the vbox have a positive height and a depth of zero (as used e.g. by LaTeX's
graphics.sty), define the following macro in your document:
\def\gpicbox#1{%
\vbox{\unvbox\csname #1\endcsname\kern 0pt}}
Now you can simply say \gpicbox{graph} instead of \box\graph.
You must use a TeX driver that supports the tpic specials, version 2.
Lines beginning with \ are passed through transparently; a % is added to the end of the
line to avoid unwanted spaces. You can safely use this feature to change fonts or to
change the value of \baselineskip. Anything else may well produce undesirable results;
use at your own risk. Lines beginning with a period are not given any special treatment.
Commands
for variable = expr1 to expr2 [by [*]expr3] do X body X
Set variable to expr1. While the value of variable is less than or equal to expr2,
do body and increment variable by expr3; if by is not given, increment variable by
1. If expr3 is prefixed by * then variable will instead be multiplied by expr3.
The value of expr3 can be negative for the additive case; variable is then tested
whether it is greater than or equal to expr2. For the multiplicative case, expr3
must be greater than zero. If the constraints aren't met, the loop isn't executed.
X can be any character not occurring in body.
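As a minimal sketch (not taken from this manual), the loop form above can place a row of evenly spaced shapes; braces delimit the loop body here:

```pic
.PS
# draw four circles one unit apart
for i = 0 to 3 do {
    circle at (i, 0)
}
.PE
```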
if expr then X if-true X [else Y if-false Y]
Evaluate expr; if it is non-zero then do if-true, otherwise do if-false. X can be
any character not occurring in if-true. Y can be any character not occurring in
if-false.
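A short illustrative sketch of the conditional, again using braces as the delimiters:

```pic
.PS
x = 2
# pick a shape depending on the value of x
if x > 1 then {
    box "big"
} else {
    circle "small"
}
.PE
```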
print arg...
Concatenate the arguments and print as a line on stderr. Each arg must be an
expression, a position, or text. This is useful for debugging.
command arg...
Concatenate the arguments and pass them through as a line to troff or TeX. Each
arg must be an expression, a position, or text. This has a similar effect to a
line beginning with . or \, but allows the values of variables to be passed
through. For example,
.PS
x = 14
command ".ds string x is " x "."
.PE
\*[string]
prints
x is 14.
sh X command X
Pass command to a shell. X can be any character not occurring in command.
copy "filename"
Include filename at this point in the file.
copy ["filename"] thru X body X [until "word"]
copy ["filename"] thru macro [until "word"]
This construct does body once for each line of filename; the line is split into
blank-delimited words, and occurrences of $i in body, for i between 1 and 9, are replaced by the i-th word of the line. If filename is not given, lines are taken from the current input up to .PE. If an until clause is specified, lines will be read only until a line the first word of which is word; that line will then be discarded. X can be any character not occurring in body. For example, .PS copy thru % circle at ($1,\$2) % until "END"
1 2
3 4
5 6
END
box
.PE
is equivalent to
.PS
circle at (1,2)
circle at (3,4)
circle at (5,6)
box
.PE
The commands to be performed for each line can also be taken from a macro defined
earlier by giving the name of the macro as the argument to thru.
reset
reset variable1[,] variable2 ...
Reset pre-defined variables variable1, variable2 ... to their default values. If
no arguments are given, reset all pre-defined variables to their default values.
Note that assigning a value to scale also causes all pre-defined variables that
control dimensions to be reset to their default values times the new value of
scale.
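The interaction between scale and reset can be sketched as follows (the values are illustrative only):

```pic
.PS
scale = 2.54    # work in centimetres; dimension variables are rescaled
box             # drawn with the rescaled defaults
reset           # all pre-defined variables back to their default values
box
.PE
```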
plot expr ["text"]
This is a text object which is constructed by using text as a format string for
sprintf with an argument of expr. If text is omitted a format string of "%g" is
used. Attributes can be specified in the same way as for a normal text object. Be
very careful that you specify an appropriate format string; pic does only very
limited checking of the string. This is deprecated in favour of sprintf.
variable := expr
This is similar to = except variable must already be defined, and expr will be
assigned to variable without creating a variable local to the current block. (By
contrast, = defines the variable in the current block if it is not already defined
there, and then changes the value in the current block only.) For example, the
following:
.PS
x = 3
y = 3
[
x := 5
y = 5
]
print x " " y
.PE
prints
5 3
Arguments of the form
X anything X
are also allowed to be of the form
{ anything }
In this case anything can contain balanced occurrences of { and }. Strings may contain X
or imbalanced occurrences of { and }.
Expressions
The syntax for expressions has been significantly extended:
x ^ y (exponentiation)
sin(x)
cos(x)
atan2(y, x)
log(x) (base 10)
exp(x) (base 10, i.e. 10^x)
sqrt(x)
int(x)
rand() (return a random number between 0 and 1)
rand(x) (return a random number between 1 and x; deprecated)
srand(x) (set the random number seed)
max(e1, e2)
min(e1, e2)
!e
e1 && e2
e1 || e2
e1 == e2
e1 != e2
e1 >= e2
e1 > e2
e1 <= e2
e1 < e2
"str1" == "str2"
"str1" != "str2"
String comparison expressions must be parenthesised in some contexts to avoid ambiguity.
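The following sketch (with illustrative values) exercises a few of the forms listed above, including a parenthesised string comparison:

```pic
.PS
srand(7)                # fix the seed so the picture is reproducible
r = max(0.2, rand())    # random radius, but never below 0.2
circle rad r
if ("yes" == "yes") && (1 <= 2) then { box } else { ellipse }
.PE
```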
Other Changes
A bare expression, expr, is acceptable as an attribute; it is equivalent to dir expr,
where dir is the current direction. For example
line 2i
means draw a line 2 inches long in the current direction. The ‘i’ (or ‘I’) character is
ignored; to use another measurement unit, set the scale variable to an appropriate value.
The maximum width and height of the picture are taken from the variables maxpswid and
maxpsht. Initially these have values 8.5 and 11.
Scientific notation is allowed for numbers. For example
x = 5e-2
Text attributes can be compounded. For example,
"foo" above ljust
is valid.
There is no limit to the depth to which blocks can be examined. For example,
[A: [B: [C: box ]]] with .A.B.C.sw at 1,2
circle at last [].A.B.C
is acceptable.
Arcs now have compass points determined by the circle of which the arc is a part.
Circles, ellipses, and arcs can be dotted or dashed. In TeX mode splines can be dotted or
dashed also.
Boxes can have rounded corners. The rad attribute specifies the radius of the quarter-
circles at each corner. If no rad or diam attribute is given, a radius of boxrad is used.
Initially, boxrad has a value of 0. A box with rounded corners can be dotted or dashed.
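A brief sketch of rounded corners, both via the boxrad default and via an explicit rad attribute:

```pic
.PS
boxrad = 0.1                     # default corner radius for later boxes
box "rounded"                    # uses boxrad implicitly
box rad 0.05 dashed "smaller"    # explicit radius, dashed outline
.PE
```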
Boxes can have slanted sides. This effectively changes the shape of a box from a
rectangle to an arbitrary parallelogram. The xslanted and yslanted attributes specify the
x and y offset of the box's upper right corner from its default position.
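As an illustrative sketch, the slant attributes turn a box into a parallelogram:

```pic
.PS
# offset the upper right corner by 0.2 in x and 0.1 in y
box xslanted 0.2 yslanted 0.1 "slanted"
.PE
```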
The .PS line can have a second argument specifying a maximum height for the picture. If
the width of zero is specified the width will be ignored in computing the scaling factor
for the picture. Note that GNU pic will always scale a picture by the same amount
vertically as well as horizontally. This is different from the DWB 2.0 pic which may
scale a picture by a different amount vertically than horizontally if a height is
specified.
Each text object has an invisible box associated with it. The compass points of a text
object are determined by this box. The implicit motion associated with the object is also
determined by this box. The dimensions of this box are taken from the width and height
attributes; if the width attribute is not supplied then the width will be taken to be
textwid; if the height attribute is not supplied then the height will be taken to be the
number of text strings associated with the object times textht. Initially textwid and
textht have a value of 0.
In (almost all) places where a quoted text string can be used, an expression of the form
sprintf("format", arg,...)
can also be used; this will produce the arguments formatted according to format, which
should be a string as described in printf(3) appropriate for the number of arguments
supplied.
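For instance, a sprintf expression can label an object with a computed value (a minimal sketch):

```pic
.PS
x = 3.14159
# a formatted value is usable wherever a quoted text string is accepted
box sprintf("x = %.2f", x)
.PE
```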
The thickness of the lines used to draw objects is controlled by the linethick variable.
This gives the thickness of lines in points. A negative value means use the default
thickness: in TeX output mode, this means use a thickness of 8 milliinches; in TeX output
mode with the -c option, this means use the line thickness specified by .ps lines; in
troff output mode, this means use a thickness proportional to the pointsize. A zero value
means draw the thinnest possible line supported by the output device. Initially it has a
value of -1. There is also a thick[ness] attribute. For example,
circle thickness 1.5
would draw a circle using a line with a thickness of 1.5 points. The thickness of lines
is not affected by the value of the scale variable, nor by the width or height given in
the .PS line.
Boxes (including boxes with rounded corners or slanted sides), circles and ellipses can be
filled by giving them an attribute of fill[ed]. This takes an optional argument of an
expression with a value between 0 and 1; 0 will fill it with white, 1 with black, values
in between with a proportionally gray shade. A value greater than 1 can also be used:
this means fill with the shade of gray that is currently being used for text and lines.
Normally this will be black, but output devices may provide a mechanism for changing this.
Without an argument, the value of the variable fillval will be used. Initially this
has a value of 0.5. The invisible attribute does not affect the filling of objects. Any
text associated with a filled object will be added after the object has been filled, so
that the text will not be obscured by the filling.
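For example (the shade values are arbitrary):

```pic
.PS
box fill 0 "white"; move
box fill 0.25 "light"; move
box filled 0.75 "dark"
.PE
```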
Three additional modifiers are available to specify colored objects: outline[d] sets the
color of the outline, shaded sets the fill color, and colo[u]r[ed] sets both. All three
keywords expect a suffix specifying the color.
Currently, color support isn't available in TeX mode. Predefined color names for groff
are in the device macro files, for example ps.tmac; additional colors can be defined with
the .defcolor request (see the manual page of troff(1) for more details).
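A possible use of the three modifiers, assuming the color names used here are defined for the output device (they are examples, not guaranteed names):

```pic
.PS
circle shaded "green" outlined "black"
move
box colored "red" "stop"
.PE
```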
To change the name of the vbox in TeX mode, set the pseudo-variable figname (which is
actually a specially parsed command) within a picture. Example:
.PS
figname = foobar;
...
.PE
The picture is then available in the box \foobar.
pic assumes that at the beginning of a picture both glyph and fill color are set to the
default value.
Arrow heads will be drawn as solid triangles if the variable arrowhead is non-zero and
either TeX mode is enabled or the -n option has not been given. Initially arrowhead has a
value of 1. Note that solid arrow heads are always filled with the current outline color.
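A sketch contrasting the two settings (behavior as described above; exact rendering depends on the output device):

```pic
.PS
arrowhead = 0   # heads drawn as open lines
arrow "open" ""
move
arrowhead = 1   # the default: solid triangles
arrow "solid" ""
.PE
```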
The troff output of pic is device-independent. The -T option is therefore redundant. All
numbers are taken to be in inches; numbers are never interpreted to be in troff machine
units.
Objects can have an aligned attribute. This will only work if the postprocessor is grops,
or gropdf. Any text associated with an object having the aligned attribute will be
rotated about the center of the object so that it is aligned in the direction from the
start point to the end point of the object. Note that this attribute will have no effect
for objects whose start and end points are coincident.
In places where nth is allowed, 'expr'th is also allowed. Note that 'th is a single token:
no space is allowed between the ' and the th. For example,
for i = 1 to 4 do {
    line from 'i'th box.nw to 'i+1'th box.se
}
#### CONVERSION
To obtain a stand-alone picture from a pic file, enclose your pic code with .PS and .PE
requests; roff configuration commands may be added at the beginning of the file, but no
roff text.
It is necessary to feed this file into groff without adding any page information, so you
must check which .PS and .PE requests are actually called. For example, the mm macro
package adds a page number, which is very annoying. At the moment, calling standard groff
without any macro package works. Alternatively, you can define your own requests, e.g. to
do nothing:
.de PS
..
.de PE
..
groff itself does not provide direct conversion into other graphics file formats. But
there are lots of possibilities if you first transform your picture into PostScript®
format using the groff option -Tps. Since this ps-file lacks BoundingBox information it
is not very useful by itself, but it may be fed into other conversion programs, usually
named ps2other or pstoother or the like. Moreover, the PostScript interpreter ghostscript
(gs) has built-in graphics conversion devices that are called with the option
gs -sDEVICE=<devname>
Call
gs --help
for a list of the available devices.
An alternative may be to use the -Tpdf option to convert your picture directly into PDF
format. The MediaBox of the file produced can be controlled by passing a -P-p papersize
to groff.
As the Encapsulated PostScript format (EPS) is getting more and more important, and the
conversion wasn't regarded as trivial in the past, you might be interested to know that
there is a conversion tool named ps2eps which does the job properly. It is much better
than the tool ps2epsi packaged with gs.
For bitmapped graphics formats, you should use pstopnm; the resulting (intermediate) PNM
file can then be converted to virtually any graphics format using the tools of the netpbm
package.
#### FILES
/usr/share/groff/1.22.3/tmac/pic.tmac Example definitions of the PS and PE macros.
#### SEE ALSO
troff(1), groff_out(5), tex(1), gs(1), ps2eps(1), pstopnm(1), ps2epsi(1), pnm(5)
Eric S. Raymond, Making Pictures With GNU PIC.
/usr/share/doc/groff-base/pic.ps (this file, together with its source file, pic.ms, is
part of the groff documentation)
Tpic: Pic for TeX
Brian W. Kernighan, PIC — A Graphics Language for Typesetting (User Manual). AT&T Bell
Laboratories, Computing Science Technical Report No. 116
<http://cm.bell-labs.com/cm/cs/cstr/116.ps.gz> (revised May, 1991).
ps2eps is available from CTAN mirrors, e.g.
<ftp://ftp.dante.de/tex-archive/support/ps2eps/>
W. Richard Stevens, Turning PIC Into HTML
<http://www.kohala.com/start/troff/pic2html.html>
W. Richard Stevens, Examples of picMacros
<http://www.kohala.com/start/troff/pic.examples.ps>
#### BUGS
Input characters that are invalid for groff (i.e., those with ASCII code 0, or 013 octal,
or between 015 and 037 octal, or between 0200 and 0237 octal) are rejected even in TeX
mode.
The interpretation of fillval is incompatible with the pic in 10th edition Unix, which
interprets 0 as black and 1 as white.
#### COPYING
Copyright © 1989-2014 Free Software Foundation, Inc.
Permission is granted to make and distribute verbatim copies of this manual provided the
copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy and distribute modified versions of this manual under the
conditions for verbatim copying, provided that the entire resulting derived work is
distributed under the terms of a permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual into another
language, under the above conditions for modified versions, except that this permission
notice may be included in translations approved by the Free Software Foundation instead of
in the original English.
https://pirsa.org/21110003
## Abstract
Quantum spin liquids (QSL) are enigmatic phases of matter characterized by the absence of symmetry breaking and the presence of fractionalized quasiparticles. While theories for QSLs are now in abundance, tracking them down in real materials has turned out to be remarkably tricky. I will focus on two sets of studies on QSLs in three dimensional pyrochlore systems, which have proven to be particularly promising. In the first work, we analyze the newly discovered spin-1 pyrochlore compound NaCaNi2F7 whose properties we find to be described by a nearly idealized Heisenberg Hamiltonian [1]. We study its dynamical structure factor using molecular dynamics simulations, stochastic dynamical theory, and linear spin wave theory, all of which reproduce remarkably well the momentum dependence of the experimental inelastic neutron scattering intensity as well as its energy dependence (with the exception of the lowest energies) [2]. We apply many of the lessons learnt to Ce2Zr2O7 which has recently been shown to exhibit strong signatures of QSL behavior in neutron scattering experiments. Its magnetic properties emerge from interacting cerium ions, whose ground state doublet (with J = 5/2, m_J = ±3/2) arises from strong spin orbit coupling and crystal field effects. With the help of finite temperature Lanczos calculations, we determine the low energy effective spin-1/2 Hamiltonian parameters, with which we reproduce all the prominent features of the dynamical spin structure factor. These parameters suggest the realization of a U(1) π-flux QSL phase [3] and allow us to make predictions for responses in an applied magnetic field that highlight the important role played by octupoles in the disappearance of spectral weight.
*Supported by FSU and NHMFL, funded by NSF/DMR-1644779 and the State of Florida, and NSF DMR-2046570
[1] K. W. Plumb, H. J. Changlani, A. Scheie, S. Zhang, J. W. Krizan, J. A. Rodriguez-Rivera, Yiming Qiu, B. Winn, R. J. Cava & C. L. Broholm, Nature Physics 15, 54–59 (2019)
[2] S. Zhang, H. J. Changlani, K. W. Plumb, O. Tchernyshyov, and R. Moessner, Phys. Rev. Lett. 122, 167203 (2019)
[3] A. Bhardwaj, S. Zhang, H. Yan, R. Moessner, A. H. Nevidomskyy, H. J. Changlani, arXiv:2108.01096 (2021), under review.
## Details
Talk Number PIRSA:21110003
Speaker Profile Hitesh Changlani
Collection Condensed Matter
http://www.tek-tips.com/viewthread.cfm?qid=1006228
# The free lunch is over
(OP)
The article says that processor power has topped out, and that for any future performance gains, programmers must learn how to write concurrent software to take advantage of multi-core CPUs and hyperthreading.
http://www.gotw.ca/publications/concurrency-ddj.htm
Chip H.
____________________________________________________________________
### RE: The free lunch is over
There was an article a week ago about HP Research developing technology that replaces transistors and in doing so creates almost limitless power. I don't recall where I read the article but it was interesting. It also mentioned, however, that it would be years before it would be used as a primary method (if ever, I would add.)
And to think this R&D occurred on Carly's watch and they turn around and oust her as CEO. How fair is that?
### RE: The free lunch is over
#### Quote:
The article says that processor power has topped out, and that for any future performance gains, programmers must learn how to write concurrent software to take advantage of multi-core CPUs and hyperthreading.
Or maybe cleaner code that isn't so power-hungry. Then again with JVMs and "Frameworks" to contend with...
### RE: The free lunch is over
(OP)
#### Quote:
Or maybe cleaner code that isn't so power-hungry.
I agree, but with the current drive to get code written in the cheapest place in the world, the result can only be bad code. Saw a good one at work the other day:
SELECT a, b, c
FROM tbl_x
WHERE a = a
I can only imagine that template-driven code will become the norm, and we all know how efficient that stuff is. :-(
The only thing that might replace a careful design done by someone who knows the system inside & out, would be a design done by genetic algorithm, and then there's a serious trust issue -- how do you know that the GA's code isn't working because of some unintended side-effect?
Chip H.
____________________________________________________________________
### RE: The free lunch is over
Reminds me of those bloated, finned boats we drove in the 1950s and '60s. The engineering placed no emphasis on clean and efficient energy use (or for that matter safety). It took a series of political embarassments before anything was done at all (Nader, the Saudis exerting the muscles car-culture had given them, etc.).
Anyone with any foresight can see a crisis looming in computing too. Wasteful use of a cheap resource available in bulk from outside sources is a recipe for power plays that change the rules of the game.
### RE: The free lunch is over
#### Quote:
Wasteful use of a cheap resource available in bulk
I think that such waste has been happening since the 286 was released. Every generation of PC hardware (the 286, the 386, the 486, etc..) has seen important increases in computing power, available memory and storage.
And what have we done with it ? Apart from recompiling the same old code from time to time, what have we really done with it ? Clippy ? Streaming video ? Oohh, shiny !
#### Quote:
get code to be written in the cheapest place in the world, it can only result in bad code being written
If I'm not mistaken, buffer overflows are our daily nemesis, and have been since the beginning of the Internet. These buffer overflows were not implemented by low-pay Indian hacks, they were designed by highly-paid Western programmers. Many of them with degrees in Engineering or Science. Fat good that did.
I don't know anything about education in India (or any other third-world country for that matter), and I am not aware that any Indian-based company has produced any code worthy of recognition yet, but given all the trouble we have with pooly-designed mail clients and OSes now, I fail to see how low-pay programmers can do any worse.
Never mind, it's Monday and I'm probably grumpy.
Pascal.
### RE: The free lunch is over
I think "bad" in the sense being discussed here was meant as wasteful rather than outright buggy. The more I think about it though, the real waste is probably in the rapid replacement of hardware just to get the latest and greatest.
Most desktop systems sit running but idle for a lot of hours each day. So maybe the waste is really in not using those resources to any good purpose. In that sense maybe using otherwise bloated development environments that resulted in bloated applications makes some use of the hardware at least. If there is any gain by doing so (development productivity?) then it may be all to the good, since those overkill machines are just sitting around anyway.
I have to wonder if a typical business desktop really ever needs the resources of anything greater than around 256MB and 500MHz. Machines of that power scale ought to be darned cheap and about the size of a really thick paperback book if built using today's technology, with substantially reduced power consumption compared with older ones. Most of the size would be the hard drive and CD/DVD writer assuming a small external power supply.
At the worst I can't imagine why they'd need to be larger than a case for a CD duplicator (one reader/one writer cases).
### RE: The free lunch is over
Is code inefficiency really escalating in proportion to advance in PC performance? I think one major the reason why the "computer experience" hasn't netted users a huge speed increase is because that there are simply more apps being run concurrently today than before. There are so many apps being run in the background, more multitasking, etc. As new software ideas come out, more programs come out that are attractive to users.
### RE: The free lunch is over
I agree with dilettante in the sense that not using system resources is not always efficient. Sure, if you can get an app to need only 10MB of memory instead of 30MB that sounds great. But is it really? We programmer types often fail to acknowledge the real world implications of our drive to make code smaller and faster. Sometimes bigger and slower can make a lot of business sense.
For example let's examine creating a business process that needs to run nightly to update data.
Let's say we can spend 2 weeks building it and optimizing it to run super efficiently and have it take 15 minutes to run on a $5000 box. Or we can spend 1 week building it with a RAD tool that takes 1 hour to run on the same $5000 box.
What did we gain by spending the extra week? A fourfold improvement in efficiency! Sounds great! But if the business needs are such that we have a window of 3 hours to complete the task each night we really have gained nothing at all! We pat ourselves on the back for being efficient while the developer with the bloated code is twice as productive and ultimately more use to the business world.
Sure, this does not apply to all cases. Sometimes it is worth the extra time and effort to maximize efficiency. It is imperative to be able to do so. It is also imperative from an economic standpoint to know when to just take the shortcut and let the app be technically inefficient.
I would definitely argue that there are certain classes of applications where efficiency is extremely important. This would be any operating system or program meant to run continuously in the background. With these one should always take the smallest possible footprint to leave the resources available for the apps that actually do the productive work.
### RE: The free lunch is over
(OP)
#### Quote:
I think "bad" in the sense being discussed here was meant as wasteful rather than outright buggy.
Correct. I meant it in the sense of lots of cut-n-paste code, where the programmer doesn't have a sense of what it does, but just finds something that sortof works, and futzes with it until it works adequately.
And, this isn't limited to outsourced code. There are plenty of western programmers who do it too. But my experience has been that it primarily comes from overseas programmers.
Chip H.
____________________________________________________________________
### RE: The free lunch is over
#### Quote:
What did we gain by spending the extra week?
Let me see : 2 weeks build time for 15 minutes run time, or one week build time for one hour run time.
What is gained by spending another week optimizing ?
More time for the backups that run after/before the process.
More flexibility for the administrator, secure in the knowledge that an important business process can be rescheduled without severely impacting the nightly run schedule.
The code taking one hour to complete will be judged obsolete sooner because of time constraints, whereas the 15-minute code will be able to justify its usefulness longer, since it will allow for more tasks to run in the same night.
The price of the box is irrelevant, there has to be a box anyway. The cost of the coder(s) is relevant, but that justs pushes back the date at which the code can be deemed to provide value for money (however the company decides to calculate that date).
I agree that quick & dirty is sometimes an acceptable way of doing things. Unfortunately, quick & dirty is what gave us buffer overflows in the first place. And cut&paste code is what has kept them alive.
My opinion is that it always pays to carefully plan an application, and design it as best as is possible - even if it is not destined to support a critical business process.
Pascal.
### RE: The free lunch is over
When you say you have seen "nothing from India..." you are closer than you realise. The concept of zero was invented by Indian mathematicians while the Scots and English were running around covered in blue paint hiding from the Roman invaders. India has a richer and longer mathematical heritage than most other civilisations.
It is always easy to find faults in foreigners and ignore the worse ones at home.
But hey, what can I say? I'm from New Zealand, where high technology is the electric fence.
Editor and Publisher of Crystal Clear
www.chelseatech.co.nz/pubs.htm
### RE: The free lunch is over
(OP)
#### Quote:
I'm from New Zealand, where high technology is the electric fence.
You're being modest -- Weta Digital has what must be the largest compute cluster in the southern hemisphere.
You're right -- we wouldn't have gotten very far without the concept of zero. It's one of those things that everyone just knew, but didn't apply it to counting until someone in India thought of it.
#### Quote:
It is always easy to find faults in foreigners and ignore the worse ones at home.
I've seen lots of bad code written by US programmers, too. I'm not denying it.
What I suspect is happening is that the Indian software industry is about where we were in 1998 -- they're hiring everyone who can spell "object" and can fog a mirror. As a consequence, the quality of code is rather low.
Chip H.
____________________________________________________________________
### RE: The free lunch is over
Coming from a thoroughly western education, I remember being told at university not to re-write code that I could already take from somewhere else (in those days, Fortran and I'd have to type it in again). In fact this was the very thinking that later grew into black-box object-ism, and thence to rapid application development tools.
How can we criticise people for cut-'n'-pasting, and not bothering to understand what the black boxes really do, if we told them to do it that way in the first place?
### RE: The free lunch is over
Every new computer has been an improvement, from my viewpoint. I can do the same things faster and also a few extra bits.
------------------------------
An old man who lives in the UK
### RE: The free lunch is over
Computers are not the issue. It is the software that allows them to do something, or to crash faster. The new dual core CPUs will now allow multi-threaded applications to crash two threads, independantly.
BocaBurger
<===========================||////////////////|0
The pen is mightier than the sword, but the sword hurts more!
### RE: The free lunch is over
Several years ago there was a thread about increasing cpu power for PC's, and I feel the same now as I did then--it's a big yawn.
Sure, for a server, you want the power. But my guess is that 95% of business desktops are overpowered. You just don't need this kind of power to run Word, Excel, IE, and most of the commercial business apps out there. Memory--yes--most desktops might benefit from more memory due to the bloat and inefficiencies mentioned. But more power wouldn't be noticed by most business users.
Gamers, yeah. Graphics, yeah. But not business desktops.
Me, I want internet bandwidth and memory.
--Jim
### RE: The free lunch is over
jsteph, I agree entirely. But look at it another way: Virtually universally, the more senior a manager is, the larger his chair. I've never seen any evidence that managers develop larger or more delicate bottoms as they get promoted. In the same way, they always have bigger desks, even though it's their secretary who probably needs the bigger desk.
There are not a lot of senior managers who will tolerate having an older, slower cpu than their staff. I'm sure that's one reason why desktops get so vastly powerful.
### RE: The free lunch is over
Let's go back a few years: one of my last non-PC boxes was an Atari Falcon, 16MB with a 1GB drive (trust me, that was huge, and the 16MB cost me close on £200) and an accelerator card (32MHz I think).
On this I ran Steinberg's Cubase Audio, a word processor and a decent imaging program.
Many a time we put it up against my friend's state-of-the-art P90 (clocked to 110MHz) with 128MB and a 2GB drive running 95.
In nearly every case the Atari trounced the PC in rendering, opening, saving and converting various files. As for audio, well, the Atari was in a league of its own.
Now bearing in mind the processing and memory differences, why did the Atari win? In my opinion it's simple: as the Atari had so little processor power and memory (a standard Falcon was 4MB and 16MHz), the programmers had no choice but to write good quality code.
It was the same with games. Poor graphics and poor sound meant one thing: to survive you needed gameplay. Now if you have a poor game, you stick in lots of pretty sound and graphics and hope no one notices.
I'm sure the writers of the game MDK stated they developed on low-spec, poor-quality machines. If the game slowed down, crashed or generally was of poor quality, they went back and rewrote the code until it worked. Now that was quality programming.
Rant over.....
Stu..
Only the truly stupid believe they know everything.
Stu.. 2004
### RE: The free lunch is over
Talking about quality programming, anyone ever play Chuck Yeager's Air Combat?
It was a flight sim from 1991. It played very well at the time, and guess what? IT STILL DOES!
That's right, a game from 1991 that ran on a 386 at 40MHz now runs on a machine that is 20000+ times faster, and everything still works just like it should.
The guys that coded that game were so good that they managed to make their game react to totally unforeseen hardware modifications, and do so gracefully. In 1991, nobody even dreamed of multi-gigahertz processors, or DDR memory, and yet they managed to make their circa-1991 code operate flawlessly on circa-2005 hardware.
Is that quality programming, or what ?
Pascal.
### RE: The free lunch is over
Darn, you had a 40 MHz 386? Mine was only 16 MHz. DX or SX?
A programmer named Pascal, how did that happen?
BocaBurger
<===========================||////////////////|0
The pen is mightier than the sword, but the sword hurts more!
### RE: The free lunch is over
DX obviously, an AMD version (one that didn't totally break the floating point unit). It was fun.
As for the name, ask my parents
Pascal.
### RE: The free lunch is over
You know, I can almost guess why. If you had EGA or VGA graphics, you drew on the real screen; therefore to avoid drawing catastrophes (noise, messy images) you had to coordinate with the raster beam, which meant the program speed was determined largely by the screen refresh rate - which is governed by what the human eye will put up with - which is still the same!
(Of course they probably still did a bit of speed-checking. If your machine is so slow that it takes more than a screen refresh to do all the calculations, then the program will suddenly jump in speed as processor speed improves. And I'm certainly not denying that it was quality programming. It takes a bit of skill to work round a raster beam and get good performance out of a minimal EGA/VGA)
### RE: The free lunch is over
Nano-tech might improve processor power.
And hopefully some genius will invent a new computer model different from the Turing machine (all computers are Turing machines).
### RE: The free lunch is over
(OP)
FYI -
AMD is thinking of selling the dual-core Opterons at a discount, making them comparable in price with the single-core processors. They'll likely go up after the introductory period.
Chip H.
____________________________________________________________________
### RE: The free lunch is over
What this means, of course, is that operating systems that only support single CPU systems (eg XP Home) will have to be rewritten or replaced with an SMP aware OS to take advantage of the dual core operation.
John
### RE: The free lunch is over
I didn't see a mention of that in the article but it stands to reason. I would not be surprised to see some XP Home SMP Edition arrive on the scene though, but surely a Longhorn Home would have this ability from the start. Worst comes to worst, you just use XP Pro.
Relatively few people bother with a full motherboard/CPU upgrade anymore, and would buy a new machine. Since the existing machine with XP Home probably has an OEM license (not transferable to a new machine) there is no impact there.
Those who DO major upgrades to use such a dual-core processor chip would probably be faced with an upgrade to Pro. Wouldn't you think most people inclined to do this are already running Pro though?
So I guess I agree with you but I don't see a major impact. White box OEMs like those in Microsoft's System Builder program would just use another OEM license to keep the cost relatively low.
### RE: The free lunch is over
In fact, re reading the article above, its not even that simple. From that article:
#### Quote (article):
In effect you get two Prescott P4 CPUs each with Hyper Threading (HT) in a single package, which can help boost PC performance when running suitable multithreaded applications - to Windows, the 840 appears as four virtual CPUs.
XP Pro and 2K Pro only support 2 CPUs. For more than that you need a server OS.
With current systems, this means Windows 2000 Server or Windows 2003 Server on the desktop PC just to take full advantage of the new CPUs.
This will of course need to change in the future to accomodate the new generation of CPUs.
The alternative is to disable Hyperthreading features to avoid the need for expensive server operating system licenses.
John
### RE: The free lunch is over
jrbarnett:
#### Quote:
What this means, of course, is that operating systems that only support single CPU systems (eg XP Home) will have to be rewritten or replaced with an SMP aware OS to take advantage of the dual core operation.
Unless some clever programmer somewhere comes up with code that sits on top of the OS and parses instructions to the different CPUs.
### RE: The free lunch is over
(OP)
I think people running XP Home will be out of luck. AFAIK, it can't even take advantage of hyperthreading.
From what I've read, Microsoft is trying to decide what to do with their licensing. They make a lot of money when companies have to step up to Windows Advanced Server because they need to go to 4 CPUs, and I can't see them giving that up. Don't forget that the companies also need to buy SQL Server Advanced edition to run on that 4-cpu box. The regular SQL Server will only recognize 2 cpus.
Chip H.
____________________________________________________________________
https://blog.plover.com/prog/git-q.html

# The Universe of Discourse
Fri, 02 Nov 2018
One of my favorite programs is a super simple Git utility called git-vee that I just love and use fifty times a day. It displays a very simple graph that shows where two branches diverged. For example, my push of master was refused because it was not a fast-forward. So I used git-vee to investigate, and saw:
* a41d493 (HEAD -> master) new article: Migraine
* 2825a71 message headers are now beyond parody
| * fa2ae34 (origin/master) message headers are now beyond parody
|/
The current head (master) and its upstream (origin/master) are displayed by default. Here the nearest common ancestor is 142c68a, and I can see the two commits after that on master that are different from the commit on origin/master. The command is called git-vee because the graph is (usually) V-shaped, and I want to find out where the point of the V is and what is on its two arms.
From this V, it appears that what happened was: I pushed fa2ae34, then amended it to produce 2825a71, but I have not yet force-pushed the amendment. Okay! I should simply do the force-push now…
Except wait, what if that's not what happened? What if what happened was, 2825a71 was the original commit, and I pushed it, then fetched it on a different machine, amended it to produce fa2ae34, and force-pushed that? If so, then force-pushing 2825a71 now would overwrite the amendments. How can I tell what I should do?
Formerly I would have used diff and studied the differences, but now I have an easier way to find the answer. I run:
git q HEAD^ origin/master
and it produces the dates on which each commit was created:
2825a71 Fri Nov 2 02:30:06 2018 +0000
fa2ae34 Fri Nov 2 02:25:29 2018 +0000
Aha, it was as I originally thought: 2825a71 is five minutes newer. The force-push is the right thing to do this time.
Although the commit date is the default output, the git-q command can produce any of the information known to git-log, using the usual escape sequences. For example, git q %s ... produces subject lines:
% git q %s HEAD origin/master 142c68a
a41d493 new article: Migraine
fa2ae34 message headers are now beyond parody
and git q '%an <%ae>' tells you who made the commits:
a41d493 Mark Jason Dominus (陶敏修) <mjd@plover.com>
fa2ae34 Mark Jason Dominus (陶敏修) <mjd@plover.com>
142c68a Mark Jason Dominus (陶敏修) <mjd@plover.com>
The program is in my personal git-util repository but it's totally simple and should be easy to customize the way you want:
```python
#!/usr/bin/python3

from sys import argv, stderr
import subprocess

if len(argv) < 3: usage()

if argv[1].startswith('%'):
    item = argv[1]
    ids = argv[2:]
else:
    item = '%cd'
    ids = argv[1:]

for id in ids:
    subprocess.run([ "git", "--no-pager",
                     "log", "-1", "--format=%h " + item, id])
```
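The branching on argv[1] is the whole command-line interface; pulled out into a standalone function (a hypothetical parse_args helper, not part of the original script), it is easy to test on its own:

```python
def parse_args(args):
    """Split git-q style arguments into a log format item plus commit ids.
    A leading '%...' argument selects the format; otherwise the commit
    date ('%cd') is the default, as in the script above."""
    if args and args[0].startswith('%'):
        return args[0], args[1:]
    return '%cd', args

# The two call shapes used in the article:
print(parse_args(['%s', 'HEAD', 'origin/master']))  # explicit format
print(parse_args(['HEAD^', 'origin/master']))       # default format %cd
```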
http://stats.stackexchange.com/questions/30886/variance-matrix-with-equal-diagonal-entries-in-proc-mixed

# Variance matrix with equal diagonal entries in PROC MIXED
Is it possible to specify a variance matrix for the random effects in PROC MIXED with the only restriction that the diagonal entries are equal? I took a look in the help and did not find one, but this seems strange because such a variance structure is rather natural in some applications.
(EDIT) Consider for instance such a dataset:
> dat
Subject Dose y
1 1 A 10
2 1 A 11
3 1 A 12
4 1 B 30
5 1 B 31
6 1 B 32
7 1 C 100
8 1 C 101
9 1 C 102
10 2 A 11
11 2 A 14
12 2 A 13
13 2 B 33
14 2 B 37
15 2 B 36
16 2 C 105
17 2 C 110
18 2 C 109
19 3 A 9
20 3 A 11
21 3 A 12
22 3 B 30
23 3 B 35
24 3 B 32
25 3 C 115
26 3 C 101
27 3 C 102
and the following model:
PROC MIXED DATA=dat ;
CLASS SUBJECT DOSE ;
MODEL y = DOSE ;
RANDOM DOSE / subject=SUBJECT type=MYMATRIX ;
RUN; QUIT;
I want a matrix "MYMATRIX" with the same variance for each level of the DOSE factor, but not a compound symmetry matrix, because the correlations between the means of the levels are different.
(EDIT2) The mathematical meaning of this model is the following one. Denoting by $i$ the index for the dose level and by $j$ the index for the subject, one has $$(y_{ijk} | \mu_{ij}) \sim_{\text{iid}} {\cal N}(\mu_{ij}, \sigma^2_w), \quad k=1, \ldots, 3 \quad \text{ for all } i,j$$ and $$\begin{pmatrix} \mu_{1j} \\ \mu_{2j} \\ \mu_{3j} \end{pmatrix} \sim_{\text{iid}} {\cal N}_3 \left( \begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3 \end{pmatrix}, G \right), \quad j=1, \ldots, 3$$ The diagonal entries of the $G$ matrix are the between variances for each level of the dose. I want $G$ to be of the form $$G=\begin{pmatrix} \sigma^2_b & \sigma_{12} & \sigma_{13} \\ \sigma_{12} & \sigma^2_b & \sigma_{23} \\ \sigma_{13} & \sigma_{23} & \sigma^2_b \end{pmatrix}$$
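To make the target structure concrete, here is a small numpy sketch (with illustrative covariance values, not estimates from the data above) that builds a $G$ of the requested form and checks it is a valid covariance matrix with a common diagonal:

```python
import numpy as np

# Illustrative covariance parameters (not fitted to the data above)
sigma2_b = 4.0                     # common between-subject variance
s12, s13, s23 = 1.0, 0.5, 2.0      # three distinct covariances

G = np.array([[sigma2_b, s12,      s13],
              [s12,      sigma2_b, s23],
              [s13,      s23,      sigma2_b]])

assert np.allclose(G, G.T)                        # symmetric
assert np.allclose(np.diag(G), sigma2_b)          # equal diagonal entries
# A valid covariance matrix must also be positive semi-definite:
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)
```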
Do you mean you have several random effects and that you assume all of them to have the same variance? – ocram Jun 21 '12 at 15:49
@ocram I have just added an example – Stéphane Laurent Jun 21 '12 at 20:43
What's the motivation for doing this? If you constrain the diagonal of the covariance matrix to be the same, the off diagonal is no longer a correlation/covariance anymore. – AdamO Jun 21 '12 at 21:01
You're wrong AdamO. For example the identity matrix is such a matrix. The motivation is that there is a physical interpretation of the equality of the between variances. – Stéphane Laurent Jun 22 '12 at 4:40
If I do understand your model, you have a random dose effect for each subject:
$$\gamma = \left( \begin{array}{c} \gamma_{1A} \\ \gamma_{1B} \\ \gamma_{1C} \\ \gamma_{2A} \\ \gamma_{2B} \\ \gamma_{2C} \\ \gamma_{3A} \\ \gamma_{3B} \\ \gamma_{3C} \\ \end{array} \right),$$ and $G = \textrm{Var}(\gamma)$ has a block-diagonal structure with identical blocks over subjects: $$G = \left( \begin{array}{ccc} \star & \star & \star & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & 0 & 0 & 0 & 0 & 0 & 0 \\ \star & \star & \star & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \star & \star & \star & 0 & 0 & 0 \\ 0 & 0 & 0 & \star & \star & \star & 0 & 0 & 0 \\ 0 & 0 & 0 & \star & \star & \star & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & \star & \star & \star \\ 0 & 0 & 0 & 0 & 0 & 0 & \star & \star & \star \\ 0 & 0 & 0 & 0 & 0 & 0 & \star & \star & \star \end{array} \right).$$
Now, the type option specifies the covariance structure of each block. If you specify type=vc (default) then each block will have the form $$\left( \begin{array}{ccc} \star & \star & \star \\ \star & \star & \star \\ \star & \star & \star \end{array} \right) = \left( \begin{array}{ccc} \sigma_{_{G}}^{2} & 0 & 0 \\ 0 & \sigma_{_{G}}^{2} & 0 \\ 0 & 0 & \sigma_{_{G}}^{2} \end{array} \right),$$ where $\sigma_{_{G}}^{2}$ is the single parameter of $G$ to be estimated. If you specify type=toep then $$\left( \begin{array}{ccc} \star & \star & \star \\ \star & \star & \star \\ \star & \star & \star \end{array} \right) = \sigma_{_{G}}^{2} \left( \begin{array}{ccc} 1 & \rho_1 & \rho_2 \\ \rho_1 & 1 & \rho_1 \\ \rho_2 & \rho_1 & 1 \end{array} \right),$$ and now there are three parameters in $G$ to be estimated.
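For comparison, the type=toep block can be written out directly; a small numpy sketch with made-up parameter values, which makes the built-in $\sigma_{12}=\sigma_{23}$ restriction of the Toeplitz structure explicit:

```python
import numpy as np

sigma2_g, rho1, rho2 = 2.0, 0.6, 0.3   # made-up Toeplitz parameters
c = [1.0, rho1, rho2]
# Constant-diagonal (Toeplitz) block: entry (i, j) depends only on |i - j|
block = sigma2_g * np.array([[c[abs(i - j)] for j in range(3)]
                             for i in range(3)])

assert np.allclose(np.diag(block), sigma2_g)   # equal diagonal, as wanted
# ...but the (1,2) and (2,3) covariances are forced equal, which is
# exactly the restriction the question hopes to relax:
assert np.isclose(block[0, 1], block[1, 2])
```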
If this does not fit your requirement, then you might want to have a look at the group option: all observations having the same level of the group effect have the same covariance parameters.
Thank you. In regards to your PS, you're wrong. I will edit my post to make it clearer. The Toeplitz matrix is not appropriate because of $\sigma_{12}=\sigma_{23}$. – Stéphane Laurent Jun 22 '12 at 7:00
Indeed, the Toeplitz does not fully fit what you want. But is sigma12=sigma23 totally unreasonable? Otherwise, I am afraid you have to go to 'type=un' which is a little bit more general than your G matrix... – ocram Jun 22 '12 at 7:31
Another motivation for the equality of the diagonal variances is that I assume $y = \log x$ and then taking the exponential of some confidence bounds for $\mu_1-\mu_2$ gives a confidence interval about the ratio of the means of the $x$ response. This is not true without assuming equality of variances. – Stéphane Laurent Jun 22 '12 at 7:35
Indeed it is not totally unreasonable to assume $\sigma_{12}=\sigma_{23}$. This is what I will do because this is the best available option. But my post is about a way to relax this assumption. – Stéphane Laurent Jun 22 '12 at 9:23
https://datascience.stackexchange.com/questions/89876/subtraction-of-positive-and-negative-frequencies-in-sentiment-analysis

# Subtraction of Positive and Negative Frequencies in Sentiment Analysis
In the Positive Negative Sentiment Analysis, Would it make sense mathematically to instead of keeping a score of the positive frequencies and negative frequencies of a word, calculate the difference between them? That way each word would have a positivity 'heat' in which a very high value would indicate a very positive word and vice-versa. How this approach would change the model performance?
Let's take an example: consider two words A & B. A's positive/negative values are +1/0 and B's are +0.5/-0.5. Their differences would appear equivalent (both equal 1), when in fact they carry quite different sentiments.
If you wanted to compute a single metric, you could do something like "polarity" i.e. take the squared sum of the values which would give you a positive value showing how "polar" the word is. In the case above, the word B would have a lower score because it's positive/negative values are less extreme.
When designing metrics, test a few scenarios to see if it matches your needs.
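A minimal Python sketch of the two metrics for the example words above (signed difference vs. a squared-sum "polarity"):

```python
def difference(pos, neg):
    # Signed "heat": positive score minus negative score
    return pos - neg

def polarity(pos, neg):
    # Squared sum: how extreme the word's scores are, sign ignored
    return pos ** 2 + neg ** 2

word_a = (1.0, 0.0)    # strongly positive
word_b = (0.5, -0.5)   # mixed

# The difference metric cannot tell A and B apart...
assert difference(*word_a) == difference(*word_b) == 1.0
# ...but polarity ranks A as more extreme than B.
assert polarity(*word_a) > polarity(*word_b)
```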
https://scicomp.stackexchange.com/users/447/geoffrey-irving?tab=tags | Geoffrey Irving
60 Tags
22 floating-point × 3 8 algorithms × 4 5 nonlinear-equations 2 graph-theory × 2 20 simulation × 2 8 roots × 3 5 integral-equations 2 matlab 20 molecular-dynamics × 2 8 random-sampling × 2 5 linear-programming 2 complex-analysis 19 optimization × 5 7 python 4 stability × 4 1 matrix × 2 16 reference-request × 2 7 numerics 4 statistics × 2 1 probability 15 computational-geometry × 7 6 interval-arithmetic × 3 4 data-structures 1 numpy 11 accuracy × 2 6 partitioning × 2 4 c++ 0 mpi × 4 11 hpc 6 randomized-algorithms × 2 3 linear-algebra × 5 0 krylov-method × 2 11 parallel-computing 6 data-management 3 poisson × 3 0 complexity × 2 10 polynomials × 6 5 quadrature × 2 3 computer-arithmetic × 2 0 eigenvalues × 2 10 convex-optimization × 3 5 pde 3 unstructured-mesh × 2 0 tensor 10 extrapolation × 2 5 iterative-method 3 mixed-integer-programming 0 tensor-decomposition 10 convergence × 2 5 special-functions 3 mesh 0 error-estimation | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8475313782691956, "perplexity": 25256.38336130346}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": 
"s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00281.warc.gz"} |
https://share.cocalc.com/share/5d54f9d642cd3ef1affd88397ab0db616c17e5e0/www/tables/charpoly_s2g1.html?viewer=embed
# Characteristic polynomials of $T_p$ on $S_2(\Gamma_1(N))$.
This is a table of characteristic polynomials of Hecke operators $T_p$ on the space of weight 2 cusp forms for $\Gamma_1(N)$.
N<37, p<97: charpoly_s2g1.gp
http://mathhelpforum.com/business-math/158572-computing-present-value-series-cash-flows.html

# Math Help - Computing the present value of a series of cash flows
1. ## Computing the present value of a series of cash flows
Given loan interest rate = 8%
bank saving interest rate = 5%
Given the following yearly cash flows with zero initial capital:
$-1,000, $900, $800, $-1,200, $700

My teacher gives the answer for the future value of these cash flows at the beginning of the fifth year:

(((-1000*1.08 + 900)*1.08 + 800)*1.05 - 1200)*1.08 + 700 = 90.7504

Now, he asked me to compute the present value of $90.7504.
I really don't know how to do this, since different loan and saving interest rates are given.
He also said that it is incorrect to simply discount $90.7504 by (1+5%)^4.

Can anyone help?

2. Frankly, I'm not sure I agree with your teacher when he said you can't discount the accumulated value at 5% to get the present value. It's obvious that a single investment of $90.7504 \times 1.05^{-4}$ will accumulate to 90.7504 at the start of the 5th year. However, presumably he knows what he is talking about. Perhaps he wants you to discount the cashflows back at a varying rate depending on which interest rate was being used to accumulate the cashflows in that year?

3. Perhaps this'll help:

Code:
Year      Flow   Interest     Balance
   1  -1000.00        .00    -1000.00
   2    900.00     -80.00     -180.00
   3    800.00     -14.40      605.60
   4  -1200.00      30.28     -564.12
   5    700.00     -45.13       90.7504...

4. I thought he was asking for a PV of the 90.75, rather than how to get the 90.75?

5. Yes, he is; I gave him that so he could "see" the flows... I think he's smart enough to calculate the PV himself.

6. Sorry, but my teacher wants me to find an appropriate discount rate to discount the $90.7504 back to the present value. So, I think the problem would be how to find that appropriate discount rate.
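The balance column can be reproduced in a few lines; a Python sketch, assuming the interest convention implied by the table (8% charged while the running balance is negative, 5% earned while it is positive):

```python
flows = [-1000, 900, 800, -1200, 700]

balance = 0.0
for year, flow in enumerate(flows, start=1):
    # Borrow at 8% while the balance is negative, save at 5% when positive
    rate = 0.08 if balance < 0 else 0.05
    interest = balance * rate
    balance += interest + flow
    print(year, flow, round(interest, 2), round(balance, 4))

# Accumulated value at the beginning of the fifth year, as in the thread
assert abs(balance - 90.7504) < 1e-6
```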
https://www.arxiv-vanity.com/papers/quant-ph/0006064/
# Separability and distillability in composite quantum systems – a primer –
M. Lewenstein, D. Bruß, J.I. Cirac, B. Kraus, M. Kuś, J. Samsonowicz, A. Sanpera and R. Tarrach Institut für Theoretische Physik, Universität Hannover, D-30167 Hannover, Germany
Institut ür Theoretische Physik, Universität Innsbruck, A–6020 Innsbruck, Austria
Centrum Fizyki Teoretycznej, Polska Akademia Nauk, 02-668 Warsaw, Poland
Department of Mathematics, Warsaw Technical University,00-661 Warsaw, Poland
Departament d’Estructura i Constituients de la Materia, Universitat Barcelona, Spain
###### Abstract
Quantum mechanics is already 100 years old, but remains alive and full of challenging open problems. On one hand, the problems encountered at the frontiers of modern theoretical physics like Quantum Gravity, String Theories, etc. concern Quantum Theory, and are at the same time related to open problems of modern mathematics. But even within non-relativistic quantum mechanics itself there are fundamental unresolved problems that can be formulated in elementary terms. These problems are also related to challenging open questions of modern mathematics; linear algebra and functional analysis in particular. Two of these problems will be discussed in this article: a) the separability problem, i.e. the question when the state of a composite quantum system does not contain any quantum correlations or entanglement and b) the distillability problem, i.e. the question when the state of a composite quantum system can be transformed to an entangled pure state using local operations (local refers here to component subsystems of a given system).
Although many results concerning the above mentioned problems have been obtained (in particular in the last few years in the framework of Quantum Information Theory), both problems remain until now essentially open. We will present a primer on the current state of knowledge concerning these problems, and discuss the relation of these problems to one of the most challenging questions of linear algebra: the classification and characterization of positive operator maps.
## I Introduction
Quantum Mechanics celebrates in this year its first century of life. In October 1900, Max Planck presented to the “Deutsche Physikalische Gesellschaft” his seminal papers: “Über eine Verbesserung der Wienschen Spektralgleichung” and “Zur Theorie des Gesetzes der Energieverteilung im Normalspektrum”[1].
While there are no doubts about the success of Quantum Mechanics in explaining - beautifully - many of the problems concerning a variety of physical topics, it is worth stressing that Quantum Theory is by no means passé. It is nowadays still an open theory with several challenges that engage both Physics and Mathematics. Just a few years ago, it was a common belief that the "really big" unsolved problems of theoretical physics pertained only to the domain of Quantum Gravity, that is, the conjunction of Quantum Field Theory and General Relativity. Quantum Gravity and String Theory are closely related to unsolved challenges of modern mathematics, in particular concerning algebraic topology and algebraic geometry. However, to the surprise of many, the emerging field of Quantum Information Theory[2] has shown that even in "simple" non-relativistic Quantum Theory there still exist fundamental open problems. The characterization of entanglement, and more specifically, the characterization of separability and distillability of quantum states, are among these. Again, they are directly linked to unsolved challenges of mathematics concerning linear algebra and geometry, functional analysis and, in particular, the theory of C*-algebras[3].
In this paper we present some of these new open problems and report on the recent progress concerning them. The paper does not intend to be a review article on the subject of quantum entanglement, but rather an introduction -- a primer -- on the subject. (This paper was presented by M. Lewenstein at the conference "Quantum Optics", Kühtai, January 2000.) It contains nevertheless some new results: we present here two novel separability checks, and some new results concerning distillability of density matrices that possess a non-positive partial transpose. The presentation of these new results and their proofs requires the introduction of some technical formalism; therefore, they have been included as Appendices.
The paper is organized as follows: First, in Section II we explain what the entanglement problem means. In Section III we discuss the problem of separability, that is how to define and discriminate those states that contain only classical correlations and no quantum correlations. In Section IV we focus on the problem of distillability, that is, the possibility– by using local operations and classical communication only– to “distill” from a given ensemble of copies of a given mixed state a maximally entangled pure state. In that section we also present the current state-of-the-art. Finally, our summary remarks are contained in Section V.
## II The entanglement problem
In order to explain what the entanglement problem means we first have to specify that in the following we will consider composite quantum systems[4]. Physical states of such systems are in general mixed and can be represented by density matrices, i.e. hermitian, positive definite linear operators of trace one, acting in the Hilbert space $\mathcal{H}$, which is a tensor product of Hilbert spaces corresponding to the subsystems of the considered system.
Given a quantum state $\rho$, an apparently innocent question such as does this state contain quantum correlations? will in general be very hard (if not impossible!) to answer. First of all, what does it mean that a quantum state does or does not contain quantum correlations? The answer seems to be straightforward: a system contains quantum correlations if the observables of the different subsystems are correlated, and their correlations cannot be reproduced classically by any means. That implies that some form of non-locality[4] is necessary in order to account for such correlations. For pure states – described by projectors onto a single vector of the Hilbert space of the composite system – it is relatively easy to check if the correlations that they contain are classical, or not. For instance, it is enough to check if some kind of Bell inequality[5] is violated to assert that the state contains quantum correlations. In fact, there are many different "entanglement" criteria, and all of them reveal equivalent forms of the non-local character of entangled pure states. For example, the demonstration that no local hidden variable (LHV) model can account for the correlations between the observables in each subsystem is an equivalent definition of non-locality[6].
We know nowadays that these equivalences may fade away when one deals with mixed states. Contrary to a pure state, a mixed state can be prepared in many different ways. The fact that we cannot trace back how it was prepared prevents us from extracting all the information contained in the state. As a consequence, we lack (nowadays) general "entanglement" criteria that allow us to check if the correlations present in the system are genuinely quantum, or not. Despite the fact that many entanglement measures have been introduced, we do not know a "canonical" way of quantifying entanglement[7]. Furthermore, different manifestations of non-locality are known to be not equivalent. For instance, Werner[6] introduced a family of mixed states that do not violate Bell-type inequalities (they admit a local hidden variable model), but nevertheless are non-local. The question whether there exists a violation of Bell inequalities in the so-called strong sense (where the observables take "unphysical" values) for all PPT entangled states (which we define below) remains open[8].
Therefore, the entanglement problem can be outlined as: What does it mean that a given (mixed) state contains or does not contain quantum correlations?
## III The separability problem
An essential step forward to understand what entanglement means is to discriminate first the states that contain classical correlations only (or no correlations at all). These states are termed separable states, and their mathematical characterization has been formulated by Werner[6]. We shall restrict ourselves here to the most simple composite systems: bipartite systems (with two subsystems traditionally denoted as Alice and Bob) of finite, but otherwise arbitrary dimensions. The states of bipartite systems are described by positive definite hermitian density matrices (with normalized trace) $\rho$, i.e. $\rho = \rho^{\dagger}$, $\rho \ge 0$, and $\mathrm{Tr}\,\rho = 1$. The density matrices act on the Hilbert space of the composite system $\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B$. Without losing generality we will assume that $\dim \mathcal{H}_A = M$ and $\dim \mathcal{H}_B = N$.
The simplest examples of separable states are just product states, i.e. $\rho = \rho^{A} \otimes \rho^{B}$ ($\rho^{A}$ acts on $\mathcal{H}_A$, and $\rho^{B}$ acts on $\mathcal{H}_B$). These states contain no correlations whatsoever. A straightforward extension of product states are the states that contain only classical correlations. Werner[6] provided us with the following operational definition of separability:
###### Definition 1
A given state $\rho$ is separable if and only if

$$\rho = \sum_{i=1}^{k} p_i\, \rho_i^{A} \otimes \rho_i^{B}\,, \qquad (1)$$

where the $\rho_i^{A}$ and $\rho_i^{B}$ are density matrices acting on $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively, $p_i \ge 0$, and $\sum_{i=1}^{k} p_i = 1$.
The above expression means that $\rho$ can be written as a convex combination of product states. Equation (1) has a clear physical meaning: the state $\rho$ can be prepared by Alice and Bob by means of local operations (unitary operations, measurements, etc.) and classical communication (LOCC). If $\rho$ is separable the system does not contain quantum correlations. In spite of the definition, the characterization of such states is a rather arduous task. This is so, among other reasons, because in general, even for a given generic separable matrix $\rho$, we do not have an algorithm to decompose it according to Eq. (1). Thus, the separability problem, perhaps even more basic and fundamental than the entanglement problem, can be formulated as: Given a composite quantum state described by $\rho$, is it separable or not?
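As an illustration of Eq. (1), a short numpy sketch (with illustrative weights and local states, not taken from the paper) builds a two-qubit separable state and checks that it is a valid density matrix:

```python
import numpy as np

def projector(v):
    """Rank-one density matrix |v><v| for a (normalized) vector v."""
    v = np.asarray(v, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# Two product terms with weights p_1 = p_2 = 1/2 (illustrative choice)
rho_a1, rho_b1 = projector([1, 0]), projector([1, 0])
rho_a2, rho_b2 = projector([1, 1]), projector([0, 1])

rho = 0.5 * np.kron(rho_a1, rho_b1) + 0.5 * np.kron(rho_a2, rho_b2)

assert np.allclose(rho, rho.conj().T)             # hermitian
assert np.isclose(np.trace(rho).real, 1.0)        # unit trace
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)  # positive semi-definite
```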
Before proceeding further we introduce here the definitions that we shall use throughout the paper. Given a density matrix , we denote by , , and the kernel, the range and the rank of the matrix defined as:
Kernel
Range }.
###### Definition 4
Rank
Let us introduce also the operation of “partial transposition” that will be used throughout the paper and is defined as:
###### Definition 5
The partial transpose $\rho^{T_A}$ of $\rho$ means the transpose only with respect to one of the subsystems. If we express $\rho$ in Alice's and Bob's orthonormal product basis:
$$\rho = \sum_{i,j}^{M} \sum_{k,l}^{N} \langle i,k|\rho|j,l\rangle\, |i,k\rangle\langle j,l| = \sum_{i,j}^{M} \sum_{k,l}^{N} \langle i,k|\rho|j,l\rangle\, |i\rangle_A\langle j| \otimes |k\rangle_B\langle l|\,, \qquad (2)$$
then the partial transposition with respect to Alice is expressed as:
$$\rho^{T_A} = \sum_{i,j}^{M} \sum_{k,l}^{N} \langle i,k|\rho|j,l\rangle\, |j\rangle_A\langle i| \otimes |k\rangle_B\langle l|\,. \qquad (3)$$
Note that $\rho^{T_A}$ is basis-dependent, but its spectrum is not. The partial transpose $\rho^{T_A}$ may be positive, but does not have to be! As $\rho^{T_B} = (\rho^{T_A})^{T}$, and as transposition of the whole matrix always preserves positivity, positivity of $\rho^{T_A}$ implies positivity of $\rho^{T_B}$ and vice versa.
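Equation (3) amounts to a simple index swap, which is a one-liner in numpy; the sketch below (an illustration, not from the paper) applies it to the maximally entangled two-qubit state $|\Phi^+\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$, whose partial transpose is well known to have a negative eigenvalue:

```python
import numpy as np

def partial_transpose_A(rho, dim_a, dim_b):
    """Transpose rho with respect to Alice's subsystem, per Eq. (3):
    the row/column indices i, j of subsystem A are swapped."""
    r = rho.reshape(dim_a, dim_b, dim_a, dim_b)   # indices (i, k, j, l)
    return r.transpose(2, 1, 0, 3).reshape(dim_a * dim_b, dim_a * dim_b)

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho = np.outer(phi_plus, phi_plus)

eigs = np.linalg.eigvalsh(partial_transpose_A(rho, 2, 2))
# Spectrum is {1/2, 1/2, 1/2, -1/2}: rho is NPT, hence entangled
assert np.isclose(min(eigs), -0.5)
```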
A major step in the characterization of the separable states was taken by Peres[9] and the Horodecki family[10]. Peres provided a user-friendly and very powerful necessary condition for separability. Later on, the Horodeckis demonstrated that this condition is also sufficient for composite Hilbert spaces of dimensions $2 \times 2$ and $2 \times 3$. Their results are contained in the following two theorems:
###### Theorem 1
If ρ is separable then ρ^{T_A} ≥ 0.
A matrix ρ that satisfies the above condition is termed “PPT”, for positive partial transpose. Notice that being a PPT state is a necessary condition for separability.
###### Theorem 2
If ρ^{T_A} ≥ 0 in spaces of dimensions 2×2 or 2×3, then ρ is separable.
In general, there exist PPT states (i.e. states with ρ^{T_A} ≥ 0) which are not separable in spaces of higher dimensions [11]. The PPT entangled states have been termed “bound entangled states” to distinguish them from the “free entangled states”. The latter name is associated with the distillability property, which we will discuss in the later sections of this paper. “Bound entangled states” are entangled; however, no matter how many copies of them we have, these states cannot be “distilled” via local operations and classical communication to the form of a pure entangled state[12]. We thus encounter new problems such as: How can one distinguish a separable state from a PPT state? Are all non-PPT states (NPPT states) “free entangled”, i.e. distillable? But before trying to answer these questions (which will bring us directly to the problem of distillability), we will first present a physical explanation of what separability means, and discuss shortly the recent progress concerning the quest for separability criteria.
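A numerical sketch of Theorems 1 and 2 (the family below is my own example, not from the text): in a 2×2 system PPT is equivalent to separability, and for the mixture p|ψ⁻⟩⟨ψ⁻| + (1−p)·1l/4 the smallest eigenvalue of ρ^{T_A} changes sign exactly at p = 1/3.

```python
import numpy as np

def partial_transpose_A(rho, da, db):
    # Transpose Alice's indices only (spectrum test is basis-independent).
    return rho.reshape(da, db, da, db).transpose(2, 1, 0, 3).reshape(da*db, da*db)

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
proj = np.outer(singlet, singlet)

def min_pt_eig(p):
    rho = p * proj + (1 - p) * np.eye(4) / 4   # Werner-type mixture
    return np.linalg.eigvalsh(partial_transpose_A(rho, 2, 2)).min()

# Smallest eigenvalue of rho^TA is (1 - 3p)/4: PPT (hence separable,
# since this is a 2x2 system) iff p <= 1/3.
results = {p: min_pt_eig(p) for p in (0.2, 1/3, 0.8)}
```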
### III.1 Physical interpretation of separability
Let us now interpret the condition of positive partial transposition from a physical point of view. We start by considering symmetry transformations in the Hilbert space of each subsystem. Wigner’s theorem[13] tells us that every symmetry transformation is necessarily implemented by a unitary or an anti-unitary matrix. The tensor product of a unitary and an anti-unitary transformation results in a transformation which is neither unitary nor anti-unitary in the composite space, and whose action on a general ket of the composite system cannot, furthermore, be properly defined. However, its action on a product ket is, apart from a phase ambiguity, well defined. Thus, the action of a combined transformation of this type on projectors corresponding to pure product states is well defined without any ambiguity. As a separable state ρ_s can always be rewritten as a statistical mixture of product vectors (see Def. 1), it is clear that under the combined transformation ρ_s transforms into:
ρ_s → ρ′_s = ∑_{i=1}^{k} p_i (|e′_i⟩⟨e′_i| ⊗ |f′_i⟩⟨f′_i|),   (4)
where the primed vectors are the images of |e_i⟩ and |f_i⟩ under the local transformations. Therefore, ρ′_s also describes a physical state, i.e. it is a positive definite Hermitian matrix (with normalized trace). This is what characterizes separable states: any local symmetry transformation, which obviously transforms local (in this context local refers to each of the subsystems) physical states into local physical states, also transforms the composite global state into another physical state.
There exists only one independent anti-unitary symmetry[13], and its physical meaning is well known: time reversal. Any other anti-unitary transformation can be expressed in terms of time reversal (as the product of a unitary matrix and time reversal). Thus separability of composite systems implies the lack of correlation between the time arrows of their subsystems. In other words: given a separable composite state, reversing time in one of its subsystems leads again to a physical state [14, 15].
### III.2 Quest for separability criteria and checks
In recent years there has been a growing effort in the search for necessary and sufficient separability criteria and checks. Several necessary conditions for separability are known: Werner has derived a condition based on the analysis of local hidden variable (LHV) models and the mean value of the so-called flipping operator [6], the Horodeckis have proposed a necessary criterion based on the so-called α-entropy inequalities[16], etc. Quite recently, a necessary and sufficient condition for separability was discovered by the Horodecki family in terms of positive maps. A map is called positive if it maps positive operators into positive operators. The condition found by the Horodeckis states that ρ is separable iff the tensor product of any positive map acting on subsystem A and the identity acting on subsystem B maps ρ into a non-negative operator. This condition, however, involves the characterization of the set of all positive maps, which is per se a major task. Later on, the reduction criterion of separability was introduced [17, 18]:
###### Criterion 1
If ρ is separable then ρ_A ⊗ 1l − ρ ≥ 0 (and similarly 1l ⊗ ρ_B − ρ ≥ 0), i.e. the corresponding map must be positive.
Violation of this criterion is sufficient for entanglement to be free. Following the reduction criterion, a simple and still quite powerful sufficient condition for distillability was provided in Ref. [19], where P. Horodecki et al. showed that if the rank of at least one of the reduced density matrices ρ_A and ρ_B exceeds the rank of ρ, then ρ is distillable, ergo non-separable and NPPT. In particular, it was concluded in Ref. [19] that there is no bound entanglement of rank 2.
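As a hedged illustration of the reduction criterion (my own two-qubit example, not from the paper): for the singlet state, ρ_A ⊗ 1l − ρ acquires a negative eigenvalue, so the criterion is violated and the entanglement is free.

```python
import numpy as np

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(singlet, singlet)

# Reduced state rho_A = Tr_B rho (here the maximally mixed qubit).
rho_A = np.einsum('abcb->ac', rho.reshape(2, 2, 2, 2))

# Reduction criterion: rho_A (x) 1l - rho >= 0 for every separable state.
op = np.kron(rho_A, np.eye(2)) - rho
min_eig = np.linalg.eigvalsh(op).min()   # negative => criterion violated
```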
Sufficient conditions for separability are also known. In Ref. [20] it was proven that any state close enough to the completely random state is separable. Ref. [20] also gave the first quantitative bounds for the radius of the ball surrounding the completely random state that does not contain any entangled state. Much better bounds were found in subsequent works[21], where it was proven that a full rank mixed state is separable provided that its smallest eigenvalue is greater than or equal to a certain bound.
In Ref. [11], in which the first explicit examples of entangled states with PPT property were provided, another necessary criterion of separability was formulated. According to this criterion:
###### Criterion 2
If the state ρ acting on a finite dimensional Hilbert space is separable, then there must exist a set of product vectors {|e_i, f_i⟩} that spans the range of ρ, such that the set of partially complex conjugated product vectors {|e_i, f_i*⟩} spans the range of ρ^{T_A}.
The analysis of the range of the density matrices, initiated by P. Horodecki, turned out to be very fruitful, leading, in particular, to the algorithm of optimal decomposition of mixed states into the separable and inseparable part [22, 23], and to systematic methods of constructing examples of PPT entangled states with no product vectors in their range, using either so-called unextendible product bases (UPB’s) [24, 25], or the method described in [26].
In Appendices A and B we present two novel separability criteria and checks. One provides a necessary condition for separability, or rather a sufficient condition for entanglement; it detects the non-separability of the UPB states. The other criterion, or rather separability check, detects all separable states that are convex combinations of two product states.
### III.3 Recent progress in the separability problem
Despite many efforts and seminal results obtained in recent years, the problem of separability remains essentially open. Recently, considerable progress in the study of PPT entangled states has been made [27, 28]. The results obtained allow us to hope to develop a systematic way of constructing optimal criteria for separability in arbitrary Hilbert spaces[29].
Our method employs the idea of “subtracting projectors on product vectors” [22, 23]: if there exists a product vector |e, f⟩ in the range of ρ such that |e, f*⟩ lies in the range of ρ^{T_A}, the projector onto this vector (multiplied by some Λ > 0) can be subtracted from ρ in such a way that the remainder is positive definite and PPT. Our results can be divided into three groups.
First, we have studied and found separability criteria for density matrices of sufficiently low rank. Constructive algorithms to decompose a separable matrix ρ optimally (with the smallest possible number of terms) according to Eq. (1) - i.e. in product states - have also been provided for low rank matrices. For the general case of composite systems in M×N dimensions, our findings are essentially contained in the following two theorems:
###### Theorem 3
If ρ is PPT, r(ρ) ≤ N, and ρ cannot be embedded in a lower dimensional space, then ρ is separable.

In particular, when ρ has rank N there typically exist exactly N product vectors |e_i, f_i⟩ in the range of ρ such that |e_i, f_i*⟩ belongs to the range of ρ^{T_A}; ρ is then a convex combination of projections onto these vectors.
###### Theorem 4
If r(ρ) + r(ρ^{T_A}) ≤ 2MN − M − N + 2, then typically there exists a finite number of product vectors |e, f⟩ such that |e, f⟩ ∈ R(ρ) and |e, f*⟩ ∈ R(ρ^{T_A}).
These product vectors are the only possible candidates to appear in the decomposition of Eq. (1). Finding them requires solving a system of polynomial equations. After these equations are solved, one can check whether ρ has the decomposition (1). The problem is infinitely easier than the original one, since we now know all possible projectors that can be used, and we know that their number is finite. In fact, checking in such a situation whether ρ is separable or not can be done in a finite number of computational steps!
Second, we have studied the structure and generic form of low rank PPT entangled matrices. Studying low rank PPT entangled matrices has a twofold purpose. On the one hand, the complexity of the problem is reduced, and therefore it is possible to find separability criteria. But perhaps the most important is the fact that, given a density matrix ρ, one can always decompose it as:
ρ = Λ ρ_sep + (1 − Λ) δρ,   (5)

ρ^{T_A} = Λ ρ_sep^{T_A} + (1 − Λ) δρ^{T_A},   (6)
where ρ_sep is a separable state (a convex combination of projectors onto product states) and Λ is maximal. All the information concerning entanglement is then contained in the remainder δρ, which has low rank and can be termed a “pure” PPT entangled state, or “edge” PPT entangled state. This state has the property that no projection onto a product state can be subtracted from it while keeping the rest positive definite and PPT. Formally, there exist no product vectors |e, f⟩ in the range of δρ such that |e, f*⟩ lies in the range of δρ^{T_A}. The “edge” states violate in the extremal sense the criterion of Ref. [11]. The problem of separability now reduces to the problem of separability of the “edge” states, and to the question whether a given mixture of an “edge” state and a separable state is separable or not.
In other words, any matrix ρ can be decomposed into a separable part (that contains product vectors in its range) and a remainder, which is the “edge” state. δρ and δρ^{T_A} are low rank matrices that contain all the information related to entanglement. Obviously, knowing the structure of those matrices is therefore of capital importance.
Finally, let us mention a different approach to the entanglement problem, based on so-called entanglement witnesses. An entanglement witness is an observable that reveals the entanglement of an entangled density matrix ρ. B. Terhal[8, 25] introduced entanglement witnesses through the following theorem:
###### Theorem 5
If ρ is entangled then there exists an entanglement witness E such that

Tr(E ρ_sep) ≥ 0 for all separable matrices ρ_sep,   (7)

Tr(E ρ) < 0.   (8)
Entanglement witnesses represent – in some sense – a kind of Bell inequality which is violated by the entangled state ρ. Each entanglement witness on an M×N space defines a positive map that transforms positive operators on an M (N)-dimensional Hilbert space into positive operators on an N (M)-dimensional space[31]. The maps corresponding to entanglement witnesses are positive, but not completely positive, and in particular their extension to composite spaces allows one to “detect” the entanglement of ρ. The maps corresponding to entanglement witnesses for PPT states are, moreover, non-decomposable: they cannot be represented as a combination of completely positive maps and partial transposition.
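To make Theorem 5 concrete, here is a standard textbook witness (my own example, not one of the paper’s edge-state witnesses): W = (|Φ⁺⟩⟨Φ⁺|)^{T_A}, which equals half the swap operator. Tr(Wσ) ≥ 0 for every separable σ (because σ^{T_A} ≥ 0), while Tr(Wρ) = −1/2 for the singlet.

```python
import numpy as np

def partial_transpose_A(rho, da, db):
    return rho.reshape(da, db, da, db).transpose(2, 1, 0, 3).reshape(da*db, da*db)

phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
W = partial_transpose_A(np.outer(phi_plus, phi_plus), 2, 2)   # = SWAP / 2

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
w_singlet = np.trace(W @ np.outer(singlet, singlet)).real     # detection: < 0

# On a sample of product states the witness stays non-negative,
# since <a,b|SWAP|a,b> = |<a|b>|^2 >= 0.
rng = np.random.default_rng(0)
vals = []
for _ in range(200):
    a = rng.normal(size=2) + 1j * rng.normal(size=2); a /= np.linalg.norm(a)
    b = rng.normal(size=2) + 1j * rng.normal(size=2); b /= np.linalg.norm(b)
    prod = np.kron(a, b)
    vals.append(np.vdot(prod, W @ prod).real)
```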
For every “edge” state it is possible to construct an entanglement witness[29] as:

E = P_{K(ρ)} + (P_{K(ρ^{T_A})})^{T_A} − ϵ 1l,   (9)

where P_{K(ρ)} and P_{K(ρ^{T_A})} are the projections onto the kernel of ρ and the kernel of ρ^{T_A}, respectively, and ϵ = min_{|e,f⟩} ⟨e,f| P_{K(ρ)} + (P_{K(ρ^{T_A})})^{T_A} |e,f⟩, where the minimum is taken over all possible product vectors.
We have not only been able to find entanglement witnesses, and the corresponding non–decomposable positive maps for arbitrary “pure” or “edge” PPT states, but also to optimize them in a certain sense [29]. Optimized entanglement witnesses detect significantly more entangled PPT states than the non–optimized ones.
We very much hope that these studies will allow us to characterize the extremal points in the convex set of PPT entangled matrices, and then the extremal points in the convex set of positive maps [30]. If this program is realized, the separability problem will be solved. So far, however, only the first steps have been taken and the problem remains open and challenging.
## IV The Distillability Problem
Having said that, we shall now attack the related problem of the distillability of mixed quantum states. For many applications in quantum information processing [32] and communication one needs a maximally entangled state, that is, a state which in an M×M-dimensional space can be brought by a local change of basis to the form
|Ψ_max⟩ = (1/√M) ∑_{i=1}^{M} |i, i⟩,   (10)
which is shared between two parties.
Although in principle one can create pure maximally entangled states, in realistic situations any pure state will evolve into a mixed state due to its interaction with the environment. A standard example concerns the situation when two entangled particles (photons, atoms, …) representing the two subsystems are sent from a source to the two involved parties, Alice and Bob, through noisy channels. In order to overcome the noise created during the transmission, the idea of distillation and purification, i.e. enhancement of the given non-maximal mixed entanglement by local operations and classical communication (LOCC), was proposed by Bennett et al. [33], Deutsch et al. [34] and Gisin [35]. Again, for Hilbert spaces of composite systems of dimension lower than or equal to 6, any mixed entangled state can always be distilled to its pure form. Since for such systems entanglement is equivalent to the NPPT property, we conclude that for 2×2 and 2×3 systems all NPPT states are distillable[36]. However, it was shown by the Horodecki family [12] that in higher dimensions there exist states (namely PPT entangled states), termed bound entangled states, which cannot be distilled, as opposed to free entangled states[37]. The distillability problem can be formulated as: Given a density matrix ρ, is it distillable or not?
Let us now define the distillability property, first on an intuitive, and then on a more formal basis.
###### Definition 6
ρ is distillable if, by performing LOCC on some number of copies of ρ, Alice and Bob can distill a state arbitrarily close to |Ψ_max⟩, i.e.

ρ ⊗ … ⊗ ρ ⟶ |Ψ_max⟩⟨Ψ_max|.   (11)
The above definition is not very precise – it requires specifying what LOCC can do with copies of ρ, and it does not give any practical advice about how to answer the question of distillability. Fortunately, we can use the theorem of Ref. [12], which states that, instead of studying the whole set of possible LOCC, in order to determine the distillability of a given density matrix it is sufficient to study projections onto 2×2-dimensional subspaces of the Hilbert space in which ρ^{⊗K} acts. The theorem is very useful since it reduces the problem of distillability to a very precisely stated mathematical question; in fact, from now on we will use it as a definition of distillability.
###### Theorem 6
ρ is distillable iff there exists a number of copies K and a projector P_{2×2} onto a 2×2-dimensional space spanned by:

|e_i⟩ ∈ H_A ⊗ … ⊗ H_A (K times), i = 1, 2,   (12)

|f_i⟩ ∈ H_B ⊗ … ⊗ H_B (K times), i = 1, 2,   (13)

such that the projection

σ = P_{2×2} ρ^{⊗K} P_{2×2}   (14)

is NPPT (i.e. σ is distillable).
An alternative way of formulating the above theorem is the following: ρ is distillable iff there exists a state |ψ⟩ from a 2×2-dimensional subspace,

|ψ⟩ = a|e*_1⟩|f_1⟩ + b|e*_2⟩|f_2⟩,   (15)

such that ⟨ψ|(ρ^{T_A})^{⊗K}|ψ⟩ < 0 for some K.
The idea of the proof of the above theorem is the following: if ρ is distillable it means that one can produce a maximally entangled state, and it is then easy to project (using local projections) that state onto a pure state in a 2×2-dimensional subspace. On the other hand, if there is a |ψ⟩ as in equation (15) such that ⟨ψ|(ρ^{T_A})^{⊗K}|ψ⟩ < 0, then one can first project ρ^{⊗K} onto the 2×2 subspace to which |ψ⟩ belongs. This is a subspace in which the projected matrix is NPPT, ergo it is distillable. We can then distill several maximally entangled states in this subspace, rotate them unitarily and locally, and combine them into a maximally entangled state in the whole space.
Let us now ask what the criterion of partial transposition – which, as we have seen, plays an important role in the separability problem – tells us about the distillability problem.
###### Theorem 7
If ρ^{T_A} ≥ 0 (ρ is PPT) then ρ is not distillable [12].
###### Theorem 8
If ρ^{T_A} is not positive (ρ is NPPT) in dimensions 2×2 or 2×3, then ρ is distillable[36].

The latter holds also for 2×N systems, see [38].
### IV.1 Recent progress in the distillability problem
At the end of the last section we saw that a density matrix with a positive partial transpose cannot be distilled, and that for low dimensions the converse is true. In this section we want to discuss the conjecture that in higher dimensions there are states with a non-positive partial transpose which are nevertheless non-distillable [38, 39]. In other words, non-positivity of the partial transpose seems to be a necessary, but not a sufficient, condition for distillability.
The states which are believed to be non-distillable[38] belong to a one-parameter family, which lives in dimension N×N:

ρ(α) = (1/m(α)) (P_S + α P_A) with α ≥ 0,   (16)

where P_S and P_A denote the projectors onto the symmetric and antisymmetric subspace, respectively, and m(α) is a normalization. This family is generic in the sense that every density matrix in N×N can be locally depolarised to a state from our family. In this sense these states are nothing but Werner states[6] defined for N×N systems.
The partial transpose of ρ(α) is given by

ρ^{T_A}(α) = (1/n(β)) (1l − βP),   (17)

where n(β) is the normalization, P is the projector onto the maximally entangled state |Ψ_max⟩, and the relation between α and β is:

β = N(α−1)/(α+1) with −N ≤ β ≤ N.   (18)
Note that for β ≤ 1 the matrix 1l − βP is positive, i.e. ρ(α) is PPT and thus not distillable. For β > 1 one finds that 1l − βP is not positive, i.e. ρ(α) is NPPT, and the question of distillability is open.
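The family (16)-(18) is easy to realize numerically. The sketch below (my own construction, assuming the standard swap-operator form P_{S/A} = (1l ± V)/2) confirms that ρ^{T_A}(α) loses positivity exactly where β = N(α−1)/(α+1) crosses 1, i.e. at α = 2 for N = 3.

```python
import numpy as np

N = 3
# Swap operator V|i,j> = |j,i> on C^N (x) C^N.
V = np.zeros((N*N, N*N))
for i in range(N):
    for j in range(N):
        V[j*N + i, i*N + j] = 1.0
PS, PA = (np.eye(N*N) + V) / 2, (np.eye(N*N) - V) / 2   # sym./antisym. projectors

def pt_min_eig(alpha):
    rho = (PS + alpha * PA) / np.trace(PS + alpha * PA)   # Eq. (16), normalized
    pt = rho.reshape(N, N, N, N).transpose(2, 1, 0, 3).reshape(N*N, N*N)
    return np.linalg.eigvalsh(pt).min()

# beta = 3(alpha-1)/(alpha+1) crosses 1 at alpha = 2:
eigs = {a: pt_min_eig(a) for a in (1.5, 2.0, 2.5)}   # +, ~0, - respectively
```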
The nice thing about the considered family of states is the following. If we show that:

(i) ρ(α) is distillable for all β > 1, then all ρ with NPPT are distillable, because every ρ can be reduced to the “canonical” form (16);

(ii) if there exists β > 1 such that ρ(α) is NPPT and is not distillable, then not all ρ with NPPT are distillable; in other words, there exist undistillable ρ’s with NPPT. Either alternative (if proven) would be an extremely important result. At the moment it seems that the second alternative is true [38, 39], but strictly speaking the problem is open. We shall see below that ρ(α) is distillable for β > 3/2, so that in fact the interesting region of the parameter β lies between 1 and 3/2.
Before presenting some partial results on the way to the complete proof, which is as yet unknown, let us introduce the concept of K-distillability, by which we define distillability with respect to K copies.
###### Definition 7
ρ is K-distillable iff there exists |ψ⟩ as given in Eq. (15) such that

⟨ψ|(ρ^{T_A})^{⊗K}|ψ⟩ < 0.   (19)
The basic theorem that we have proven so far determines the region of the parameter β for which the matrix of Eq. (16) is not K-distillable. Our results hold for arbitrary N×N systems, but here we specify them to the case of two qutrit systems (N = 3).
###### Theorem 9
Let N = 3, i.e. let ρ(α) act in a 3×3 space; then:

• ρ(α) is not 1-distillable for β ≤ 3/2;

• ρ(α) is not 2-distillable for β ≤ 5/4;

• ρ(α) is not K-distillable for β ≤ β_K, where the best bound for β_K obtained so far is:

β_K ∼ 1 + 1/(3^{K/3} K^{1/3}).   (20)
As we see, for every K there exists a region of β (see Fig. 1) in which ρ(α) is not distillable. Unfortunately, this region shrinks to a point as K goes to infinity. If we have an arbitrary number of copies of ρ(α), we cannot say whether we will be able to distill it or not. The proof of the above theorem is technical, but essentially simple. Below we sketch the proof concerning 1- and 2-distillability.
### IV.2 1-Distillability
Let Q be the projector complementary to P, where P is the projector onto the maximally entangled state. For one copy of the matrix we have that, for any |ψ⟩ given by Eq. (15):

⟨ψ|Q − (β−1)P|ψ⟩ = ⟨ψ|1l − βP|ψ⟩ ≥ 1 − (2/3)β ≥ 0   (21)

for β ≤ 3/2. The last inequality follows from the fact that the squared projection of an arbitrary vector living in a 2×2 subspace onto a maximally entangled vector in 3×3 space must not be greater than 2/3. The above considerations also imply that for β > 3/2 the matrix ρ(α) is 1-distillable, because there exists a vector |ψ⟩ in a 2×2 subspace for which ⟨ψ|1l − βP|ψ⟩ < 0. This proves that ρ(α) is distillable for β > 3/2.
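The key geometric fact used here — a vector of Schmidt rank 2 in 3×3 has squared overlap at most 2/3 with the maximally entangled state — can be checked numerically (my own sketch, not from the paper):

```python
import numpy as np

N = 3
phi = np.eye(N).reshape(-1) / np.sqrt(N)    # |Phi> = sum_i |i,i> / sqrt(3)

rng = np.random.default_rng(1)

def random_schmidt_rank2():
    """Random |psi> = a|e1,f1> + b|e2,f2> with orthonormal e's and f's."""
    e, _ = np.linalg.qr(rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2)))
    f, _ = np.linalg.qr(rng.normal(size=(N, 2)) + 1j * rng.normal(size=(N, 2)))
    a = rng.random(); b = np.sqrt(1 - a**2)
    return a * np.kron(e[:, 0], f[:, 0]) + b * np.kron(e[:, 1], f[:, 1])

overlaps = [abs(np.vdot(phi, random_schmidt_rank2()))**2 for _ in range(500)]

# The bound 2/3 is saturated by (|00> + |11>)/sqrt(2):
psi_star = (np.kron(np.eye(N)[0], np.eye(N)[0])
            + np.kron(np.eye(N)[1], np.eye(N)[1])) / np.sqrt(2)
saturating = abs(np.vdot(phi, psi_star))**2
```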
### IV.3 2-Distillability
To show that 2 copies of ρ(α) cannot be distilled for some interval of β, we first observe that Q is separable, i.e.

Q = ∑_{i=1}^{R} p_i |e_i, f_i⟩⟨e_i, f_i|.   (22)
Denoting by Q₁ the operator Q acting on the first copy, we get

⟨ψ|Q₁|ψ⟩ = ∑_{i=1}^{R} p_i ⟨ψ|e_i, f_i⟩⟨e_i, f_i|ψ⟩,   (23)

where ⟨e_i, f_i|ψ⟩ is a vector in the second-copy space with 2 Schmidt coefficients, i.e. of the form (15). A similar result holds for the second copy. Then, using the results for 1-distillability, we obtain
⟨ψ|Q ⊗ (Q − ½P)|ψ⟩ ≥ 0,   (24)

⟨ψ|(Q − ½P) ⊗ Q|ψ⟩ ≥ 0.   (25)

Now, by adding the above results and dividing by two we obtain

⟨ψ|(Q − ¼P) ⊗ (Q − ¼P)|ψ⟩ ≥ ⟨ψ|(1/16) P⊗P|ψ⟩ ≥ 0,   (26)
so that we see that ρ(α) is not 2-distillable for β ≤ 5/4.
### IV.4 Distillability in general
In Ref. [38] we have performed extensive numerical studies and looked for the minimum of ⟨ψ|(ρ^{T_A})^{⊗K}|ψ⟩ over all possible |ψ⟩ of the form (15). The numerical results indicate clearly that:

• ρ(α) is not 2-distillable for β ≤ 3/2;

• ρ(α) is not 3-distillable for β ≤ 3/2.
It is a challenging and open problem to understand these results. So far we have only achieved some progress in the problem of 2-distillability. We have proven that the states which provide the global minimum, equal to zero, of the one-copy expression ⟨ψ|ρ^{T_A}|ψ⟩ at the critical value of β also provide a local minimum, equal to zero, of the corresponding two-copy expression. The proof of this fact is presented in Appendix C. There exists a well-based suspicion that ρ(α) is not distillable in the entire region 1 < β ≤ 3/2.
## V Conclusions
There is only one conclusion of this paper: Quantum Theory is an open and challenging area of physics. It still offers fundamental and fascinating problems that can be formulated at an elementary level, and yet are related to the challenges of modern mathematics. Particular examples are the separability and distillability of composite quantum systems.
This paper has been supported by SFB 407 and Schwerpunkt “Quanteninformationsverarbeitung” of Deutsche Forschungsgemeinschaft, by the ESF PESC Programme on Quantum Information, and by the IST Programme EQUIP.
## Appendix A Sufficient criterion for inseparability
Our sufficient criterion of inseparability (i.e. a necessary criterion for separability) is based on the following Lemma. Let us assume the product vectors to be normalized.
###### Lemma 1
If max_{|e,f⟩} ⟨e, f|ρ|e, f⟩ is smaller than the largest of three quantities defined in the proof below, then ρ is inseparable.

Let us prove that this statement is true in the case where the maximum is attained by the first of the three values. All the other cases can be proved in the same way, which implies that we can take the maximum of those three values.

Proof: We define a witness E from the corresponding quantity. Assuming the premise of the lemma, we have Tr(Eρ) < 0. It remains to prove that Tr(Eρ_sep) ≥ 0 for all separable ρ_sep. Writing ρ_sep as a convex combination of product projectors, we observe that each term contributes non-negatively.
Unfortunately this criterion does not work for the Horodecki PPT states of Ref. [11]. It does, however, detect the entanglement of the PPT states constructed from UPB’s[24]. In such a case ρ is proportional to a projector onto a space that does not contain any product vectors, and our criterion detects its entanglement.
## Appendix B Separability check for binary mixtures of product states
If a separable matrix ρ is a mixture of two product states (here we call such a matrix a binary mixture), then it is relatively easy to check separability. Assume ρ of the following form:

ρ = ∑_{i=1}^{K} p_i ρ_i^A ⊗ ρ_i^B,   (27)

where all p_i > 0 and ∑_i p_i = 1. Let μ and ν be density matrices acting in Alice’s and Bob’s spaces, respectively. Let us define the matrix function
M(μ, ν) = ∑_{i=1}^{K} p_i (ρ_i^A − μ) ⊗ (ρ_i^B − ν).   (28)
Interestingly, M(μ, ν) can be calculated without explicit use of the representation (27):

M(μ, ν) = ρ − μ⊗ρ_B − ρ_A⊗ν + μ⊗ν,   (29)

where ρ_A = Tr_B ρ and ρ_B = Tr_A ρ are the reduced density matrices.
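Equation (29) can be verified numerically. In the sketch below (a hypothetical two-qubit binary mixture of my own choosing), M computed from Eq. (29) vanishes at the two product-state solutions predicted for binary mixtures.

```python
import numpy as np

def dm(v):
    v = np.asarray(v, dtype=complex); v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

# A binary mixture of two qubit product states (arbitrary example).
rA1, rB1 = dm([1, 0]), dm([1, 1])
rA2, rB2 = dm([0, 1]), dm([1, -1])
p = 0.3
rho = p * np.kron(rA1, rB1) + (1 - p) * np.kron(rA2, rB2)

# Reduced density matrices, needed on the right-hand side of Eq. (29).
r4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', r4)   # Tr_B
rho_B = np.einsum('abad->bd', r4)   # Tr_A

def M(mu, nu):                      # Eq. (29)
    return rho - np.kron(mu, rho_B) - np.kron(rho_A, nu) + np.kron(mu, nu)

# For a binary mixture, M vanishes at the "crossed" pairs of local states:
sol1 = np.allclose(M(rA1, rB2), 0)
sol2 = np.allclose(M(rA2, rB1), 0)
```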
For K = 2 we have an obvious lemma:

###### Lemma 2

If ρ is a mixture of two product states (i.e. K = 2 in Eq. (27)) then the equation M(μ, ν) = 0 has at least two solutions, (μ₁, ν₁) = (ρ₁^A, ρ₂^B) and (μ₂, ν₂) = (ρ₂^A, ρ₁^B).
The “opposite” implication is also true.
###### Lemma 3
If the equation M(μ, ν) = 0 has solutions μ and ν that are legitimate density matrices, then ρ is either a separable binary mixture, or a non-separable binary pseudomixture.
The proof follows directly from Eq. (29): ρ can be written as

ρ = μ ⊗ (ρ_B − (1−p)ν) + (ρ_A − pμ) ⊗ ν,   (30)

with some p. We immediately see that ρ is separable if there exists p such that ρ_B − (1−p)ν ≥ 0 and ρ_A − pμ ≥ 0. This implies that the ranges of ν and μ must be included in the ranges of ρ_B and ρ_A, respectively; at the same time p must fulfill the corresponding positivity conditions, which can be expressed in terms of the operator norm (for details see [27]).
Checking whether the equation M(μ, ν) = 0 has solutions is very easy. We can use a product basis in the operator space. Such a basis can be chosen to be orthonormal and Hermitian with respect to the trace scalar product. The equation projected onto the (i, j)-th element of the basis reads:

ρ_{ij} − μ_i ρ_{Bj} − ρ_{Ai} ν_j + μ_i ν_j = 0,   (31)

where ρ_{ij}, μ_i, ν_j, ρ_{Ai}, ρ_{Bj} etc. denote the corresponding expansion coefficients. We thus have one such equation for each pair (i, j), for the real coefficients μ_i and ν_j. The equations have a very simple structure and therefore it is easy to check: a) if they have a solution; b) if the resulting μ and ν are positive definite; c) if there exists p such that both terms on the RHS of Eq. (30) are positive definite.
The above formulated separability check can be easily generalized to systems of more parties and to separable mixtures of a larger number of product states.
## Appendix C Finding the local minimum for projecting onto the 2-dimensional subspace
In this appendix we will find the states that lead to a local minimum for the projection onto a two-dimensional subspace as in equation (19), in the case of 2 copies (in dimension 3×3), for the critical value of the parameter.
We will proceed as follows: our problem will be formulated in terms of a function f(λ, ψ) that has to be minimized with respect to ψ. We will find a family of states for which f is shown to reach a local minimum, for a range of the parameter λ.
We introduce a parameter λ and will study the following function:

f(λ, ψ) = ⟨ψ| … − 3(λ·1l⊗P + (1−λ)·P⊗1l) |ψ⟩ = ⟨ψ|ρ₂(λ)|ψ⟩,   (32)

where the last line defines ρ₂(λ), and P is the projector onto a maximally entangled state. Here λ is a given fixed parameter with 0 ≤ λ ≤ 1. The case λ = 1/2 corresponds to ρ₂ being the partial transpose of two copies of ρ. In the notation of Section IV this corresponds to a particular value of β. We are looking for the minimum of f with respect to ψ. This state lives in a 2×2-dimensional subspace and can be written in the Schmidt decomposition (cf. [4])
|ψ⟩=a|e1⟩A|f1⟩B+b|e2⟩A|f2⟩B , (33)
where the states are normalised, a, b ≥ 0 and a² + b² = 1. For clarity we kept the indices A and B.
Let us rewrite the terms in equation (32), sorting them not pairwise, but with respect to Alice and Bob:
1l⊗1l = ∑_{i,j,r,s=1}^{3} |ir⟩_A|js⟩_B ⟨ir|_A⟨js|_B,

P⊗P = (1/9) ∑_{i,j,r,s=1}^{3} |ir⟩_A|ir⟩_B ⟨js|_A⟨js|_B,

P⊗1l = (1/3) ∑_{i,j,r,s=1}^{3} |ir⟩_A|is⟩_B ⟨jr|_A⟨js|_B,

1l⊗P = (1/3) ∑_{i,j,r,s=1}^{3} |ir⟩_A|jr⟩_B ⟨is|_A⟨js|_B.   (34)
This notation fixes the basis in which we will also write |ψ⟩. Indices i, j are used for the first pair and r, s for the second pair.
The minimum of f is found by requiring the two conditions

• a) ⟨ψ|1l⊗P|ψ⟩ = 0,

• b) ⟨ψ|P⊗1l|ψ⟩ maximal.

According to equations (34) we reach a) only if the entries of either the first or the second pair are orthogonal to each other. We can reach b) if the entries in the first bits of Alice and Bob are identical in both terms of the Schmidt decomposition of |ψ⟩, and if their second bits are in a product state. The coefficients in equation (33) are easily found to be a = b = 1/√2 for maximisation of b).
We can fulfil both conditions a) and b) with the family of states that minimizes f, denoted by |ψ*⟩:

|ψ*⟩ = (1/√2)(|ir⟩_A|is⟩_B + e^{iφ}|jr⟩_A|js⟩_B),   (35)

with i ≠ j. Therefore at λ = 0 we have found the global minimum of f to be

min_ψ f(λ=0) = 0.   (36)
Using the explicit structure of the states |ψ*⟩ we find that

⟨ψ*|1l⊗1l|ψ*⟩ = 1,
⟨ψ*|1l⊗P|ψ*⟩ = 0,
⟨ψ*|P⊗1l|ψ*⟩ = 2/3.   (37)

When varying the parameter λ we therefore find

⟨ψ*|ρ₂(0 ≤ λ ≤ 1/2)|ψ*⟩ = 0.   (38)
Similarly, for 1/2 ≤ λ ≤ 1 the same line of argument holds when interchanging the roles of the first and second bits, leading to a different minimizing family. At λ = 1/2, the point which is symmetric with respect to interchanging the two pairs, both families lead to the expectation value zero.

In the following we will show that |ψ*⟩ corresponds to a local minimum for 0 < λ ≤ 1/2.
First, we show that the states given in equation (35) form a compact set, by describing how to move through the whole family in infinitesimal steps: Looking at the first pair, we can either make the following change:
|i⟩→xi|i⟩+xk|k⟩ with ⟨k|i⟩=0=⟨k|j⟩ , (39)
or we can move to
|j⟩→xj|j⟩+xl|l⟩ with ⟨l|j⟩=0=⟨l|i⟩ , (40)
or we can change both |i⟩ and |j⟩, keeping ⟨i|j⟩ = 0. Regarding the second pair, we can change
|r⟩→xr|r⟩+xp|p⟩ with ⟨p|r⟩=0=⟨p|s⟩ , (41)
or we can move to
|s⟩→xs|s⟩+xt|t⟩ with ⟨t|r⟩=0=⟨t|s⟩ . (42)
In this way we can move within the family in infinitesimal steps, and there are no isolated points.
Let us now move outside of our family by an infinitesimal amount. We will write down the most general path leading away from the family and then show that first order terms of the expectation value vanish, i.e. we have an extremum, and that the functional determinant of second order terms is positive, i.e. we have a local minimum.
The most general infinitesimal step away from our family is given by
|ψ⋆+δ⟩=1√2(√1+δ0|ir⟩A|(i+δ1k)(s+δ2r)⟩B+ √1−δ0eiφ|(j+δ3l)(r+δ4t)⟩A|(j+δ5m)(s+δ6r)⟩B) with ⟨k|i⟩=0=⟨l|j⟩, ⟨l|i⟩=0=⟨t|r⟩, ⟨m|j⟩=0=⟨i+δ1k|j+δ5m⟩ , (43)
so that the Schmidt terms are still orthogonal, and for each of the seven non-zero δ’s we leave the family. Note that we can always keep one state constant by using bilateral rotations.
We now expand the expectation value
⟨ϱ₂⟩ = (1/M²) ⟨ψ*+δ|ϱ₂(0 < λ ≤ 1/2)|ψ*+δ⟩   (44)

in powers of δ, and find that all terms linear in δ indeed vanish.
The second order terms can be written down explicitly. The diagonal ones are:
O(δ₀²): ½ δ₀² (1−λ) > 0,
O(δ₁²): ½ δ₁² (1+|1−2λ|) > 0,
O(δ₂²): ½ δ₂² (1+|1−2λ|+¼−1) > 0,
O(δ₃²): ½ δ₃² (1+|1−2λ|) > 0,
O(δ₄²): ½ δ₄² (1+|1−2λ|+¼δ_{ts}−(λδ_{ts}+1−λ)) > 0,
O(δ₅²): ½ δ₅² (1+|1−2λ|) > 0,
O(δ₆²): ½ δ₆² (1+|1−2λ|+¼−1) > 0.   (45)

Here δ_{ts} is the Kronecker symbol.
Nearly all off-diagonal terms of second order vanish; the only non-zero one is:

O(δ₂δ₆): ½ δ₂δ₆ (λ−¾).   (46)

This term is negative for λ < 3/4; the corresponding 2×2 determinant, however, is positive for 0 ≤ λ ≤ 1/2. Thus we have found the second derivative to be positive, and therefore our family corresponds to a local minimum for 0 < λ ≤ 1/2. In particular this is the case for λ = 1/2, i.e. for the partial transpose of two copies in the notation of Section IV. It is easy to see that this must also be the case for all smaller values of λ.
For a complete proof that two copies are not distillable, however, it remains to be shown that this minimum is a global minimum for 0 < λ ≤ 1/2.
## References
Want to hear about new tools we're making? Sign up to our mailing list for occasional updates. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9532608985900879, "perplexity": 512.3290153557223}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738905.62/warc/CC-MAIN-20200812141756-20200812171756-00533.warc.gz"} |
https://kar.kent.ac.uk/29979/ | Skip to main content
# Symbolic solution of simple BVPs on the operator level
Rosenkranz, Markus (2003) Symbolic solution of simple BVPs on the operator level. Communications in Computer Algebra, 37 (3). pp. 84-87. ISSN 1932-2240. (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided) (KAR id:29979)
Official URL: http://www.sigsam.org/cca/issues/issue145.html
Item Type: Article
Keywords: Boundary value problems; Green's operators; symbolic analysis; computer algebra
Subjects: Q Science > QA Mathematics (inc Computing science) > QA150 Algebra; QA372 Ordinary differential equations; QA76 Software, computer programming
Divisions: Division of Computing, Engineering and Mathematical Sciences > School of Mathematics, Statistics and Actuarial Science
Depositor: Markus Rosenkranz
Deposited: 27 Jul 2012 18:05 UTC; Last modified: 16 Nov 2021 10:07 UTC
URI: https://kar.kent.ac.uk/id/eprint/29979 (the current URI for this page, for reference purposes)
• Depositors only (login required): | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9205792546272278, "perplexity": 8286.562184491397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358673.74/warc/CC-MAIN-20211128224316-20211129014316-00372.warc.gz"} |
https://www.physicsforums.com/threads/fourier-transforms-is-inflicting-pain.616514/ | # Fourier Transforms is Inflicting Pain!
1. Jun 25, 2012
### Nano-Passion
So I'm doing an internship this summer and one of the things I have to be acquainted with is Fourier Transforms.
My adviser gave me a simple example using sin(x). He said that the Fourier Transform converts a time-domain signal into a frequency-domain one. Essentially, he added, a peak in the Fourier Transform marks the frequency that best matches the function sin(x), while points closest to the x-axis are the worst matches.
But since I started this assignment earlier today, I've noticed that Mathematica does not plot the Fourier Transform of functions such as sin(x) that have a constant frequency and periodicity for all x. Mathematica does plot things such as sin(x)/x, where the frequency isn't constant throughout the x-axis. Upon doing some research on FT, I suspect this is because the Fourier Transform works for things that vary, not for functions that have constant periodicity. But I don't know any better, so I'm asking for your input.
It would also be helpful if someone can clarify the whole concept of Fourier Transforms and how it relates to statistics and data analysis.
2. Jun 25, 2012
### Muphrid
It's probably not plotting the Fourier transform of $\sin x$ because only two frequencies contribute (1 and -1), and everywhere else the transform is zero.
You can imagine the Fourier transform as decomposing a signal into a sum of many sine and cosine functions. The Fourier amplitude $F(\omega)$ tells you what the amplitude of that frequency sine or cosine wave contributes to the signal.
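That decomposition picture is easy to see numerically. A minimal sketch in Python/NumPy (rather than Mathematica), using a made-up two-tone test signal; the sampling rate and tone frequencies below are arbitrary choices:

```python
import numpy as np

# Two-tone test signal: the DFT should "decompose" it back into its parts.
fs = 100.0                       # sampling rate, Hz (arbitrary choice)
t = np.arange(0, 10, 1 / fs)     # 10 s of samples -> 0.1 Hz bin spacing
x = 2.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
amps = 2.0 * np.abs(spec) / len(x)   # rescale bin magnitudes to sine amplitudes

# The two largest bins land at 3 Hz (amplitude ~2.0) and 7 Hz (amplitude ~0.5).
top = np.sort(freqs[np.argsort(amps)[-2:]])
print(top)
```

Because both tones fall exactly on DFT bins here, there is essentially no spectral leakage and the recovered amplitudes match the coefficients in the signal.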
3. Jun 25, 2012
### Nano-Passion
I had trouble running other Fourier Transforms as well, but I'm not sure why.
I've attached a file with some of the Fourier Transforms that I attempted to plot. It would be greatly appreciated if you can take a look at it!
4. Jun 26, 2012
### utkarsh1
sin(x)/x is called sinc(x). It's different from sine. Sinc(x) is one of the most important functions in communication systems and filters, because the theoretical impulse response of the ideal filter is a sinc.
5. Jun 26, 2012
### Nano-Passion
Hm interesting. So does sin(x) lend itself to Fourier transform methods, or am I doing something wrong?
6. Jun 26, 2012
### Mute
How are you trying to determine the fourier transforms in Mathematica?
sin(x) does indeed have a fourier transform, but it is a special kind of fourier transform. For example, if you try to calculate the fourier transform in mathematica via the command
Code (Text):
Integrate[Exp[-I*2*Pi*k*x]*Sin[x],{x,-Infinity,Infinity}]
you will not get a result; Mathematica will tell you the integral does not converge. And in the ordinary sense it indeed does not: both Sin(x) and Exp(ikx) oscillate as x tends to infinity, but neither decays, so the integral has no limit. However, if you try to transform via
Code (Text):
FourierTransform[Sin[x],x,k]
I believe you should get a result that involves the Dirac Delta function. Did you try this way?
Now, if you tried the same two commands with Sin[x]/x, both should give you the same result, I think. Why is this? It's because Sin[x]/x decays as x -> Infinity, so the integral does converge in the usual sense.
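The convergence claim for Sin[x]/x can be checked numerically. A rough sketch in Python/NumPy (the grid spacing, truncation window, and probe frequencies are arbitrary choices): truncating the transform integral to a large finite window already settles near pi for |w| < 1 and near zero for |w| > 1, the classic rectangular transform of the sinc.

```python
import numpy as np

# Truncate the Fourier integral of sin(x)/x to a large window and probe a few
# frequencies. Since sin(x)/x is even, the imaginary (sine) part of the
# transform vanishes, so the cosine integral alone gives F(w).
dx = 0.05
x = np.arange(-500.0, 500.0, dx)
f = np.sinc(x / np.pi)     # NumPy's sinc is sin(pi*t)/(pi*t); this is sin(x)/x

w = np.array([0.2, 0.5, 2.0, 5.0])
F = np.array([(f * np.cos(wi * x)).sum() * dx for wi in w])

print(F)   # roughly [pi, pi, 0, 0]: flat at pi for |w| < 1, zero for |w| > 1
```

Trying the same rectangle-rule sum with sin(x) instead never settles as the window grows, which is the numerical face of the non-convergence Mathematica complains about.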
7. Jun 27, 2012
### Chaitu662
My internship is also on Fourier Transforms.
A part of the work is to find the frequency component of the signal which has the maximum amplitude (power density). Right now, I'm performing an fft in Matlab and searching for the maximum peak.
The fft is done to do just this and nothing else. An fft of about 1024 samples would consume a lot of time and would drain battery power.
Is there any simpler way to do this?
Thanks,
Chaitanya.
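If the dominant frequency is known to lie in a small set of candidate bins, one standard trick is the Goertzel algorithm, which computes the power in a single DFT bin in O(N) time and O(1) memory, so the full 1024-point FFT can be skipped. (If the peak could be anywhere, a full FFT is still the usual answer.) A hedged Python sketch, checked against NumPy's FFT; the test signal is made up:

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of DFT bin k of x, in O(len(x)) time, no full FFT."""
    n = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s1 = s2 = 0.0
    for sample in x:
        # Standard Goertzel recurrence: s[i] = x[i] + coeff*s[i-1] - s[i-2]
        s1, s2 = sample + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# Sanity check against the corresponding FFT bin on a noisy tone.
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
sig = np.sin(2 * np.pi * 37 * t / n) + 0.1 * rng.standard_normal(n)

p_fft = np.abs(np.fft.fft(sig)[37]) ** 2
p_goe = goertzel_power(sig, 37)
print(p_goe, p_fft)   # the two agree to floating-point accuracy
```

Since Goertzel costs O(N) per bin versus O(N log N) for the whole FFT, it only pays off when the number of bins inspected is smaller than about log2(N), roughly 10 bins for N = 1024.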
Similar Discussions: Fourier Transforms is Inflicting Pain! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9398106932640076, "perplexity": 598.2245299444303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00266.warc.gz"} |
https://en.dlyang.me/page5/ | # LanternD's Castle
## STT 861 Theory of Prob and STT I Lecture Note - 7
2017-10-18
Survival function and its example; transformation of random variables, scale and rate parameters and their examples; joint probability density and its example; independent continuous random variables.
## Find the Expectation of a Symmetric Probability Density Function
2017-10-18
If the PDF is symmetric about c, show that E(X)=c. This is a homework problem for course STT802-002 Theory of Probabilities and Statistics I at MSU.
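For reference, the symmetry argument behind that problem fits in two lines (assuming the mean exists, i.e. E|X| is finite):

```latex
\[
\mathbb{E}(X) - c \;=\; \int_{-\infty}^{\infty} (x - c)\,f(x)\,dx
              \;=\; \int_{-\infty}^{\infty} u\,f(c + u)\,du ,
\]
where the substitution $u \mapsto -u$ together with the symmetry
$f(c - u) = f(c + u)$ turns the last integral into its own negative;
hence it is zero and $\mathbb{E}(X) = c$.
```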
## STT 861 Theory of Prob and STT I Lecture Note - 6
2017-10-11
Hypergeometric distribution; Poisson Law; Brownian motion; continuous random variables, exponential distribution, cumulative distribution function, uniform distribution; expectation and variance of continuous random variables.
## STT 861 Theory of Prob and STT I Lecture Note - 5
2017-10-04
Sample mean and sample variance, biased and unbiased estimation; covariance, Hypergeometric distribution and its example; correlation coefficients; discrete distribution, Poisson distribution, Poisson approximation for the Binomial distribution.
## STT 861 Theory of Prob and STT I Lecture Note - 4
2017-09-27
Expectation and its theorems, Geometric distribution, Negative Binomial distribution and their examples; theorem of linearity, PMF of a pair of random variables, Markov's inequality; variance and its examples, uniform distribution.
## STT 861 Theory of Prob and STT I Lecture Note - 3
2017-09-20
Random variable, independent random variable and their examples; Bernoulli distribution, Binomial distribution.
## STT 861 Theory of Prob and STT I Lecture Note - 2
2017-09-13
Some of the basic probability and statistics concepts: joint probabilities, combinatorics; conditional probabilities and independence and their examples; Bayes' rule.
## STT 861 Theory of Prob and STT I Lecture Note - Overview
2017-09-06
These are the notes I took during the lectures. This post serves as the overview of this series.
## RFIC - Microstrip Transmission Line Design
2016-10-04
The microstrip transmission line is one of the basic types of transmission line in RF integrated circuits. Here is a simple design example. This is a homework for course ECE810 RF Integrated Circuits at MSU.
## Adaptive Control - Least-Squares Algorithm in Parameters Estimation
2016-09-08
Parameter estimation is one of the keystones of adaptive control; the main idea of parameter estimation is to construct a parametric model and then use optimization methods to minimize the error between the true parameter and the estimate. The least-squares algorithm is one of the most common optimization methods.
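As an illustration of the idea, here is a minimal recursive least-squares (RLS) sketch in Python/NumPy; the model, noise level, and number of samples are made up for the example:

```python
import numpy as np

# Estimate theta in y = phi . theta + noise with recursive least squares.
rng = np.random.default_rng(1)
theta_true = np.array([2.0, -0.5])      # made-up "true" parameters

theta = np.zeros(2)                     # running estimate
P = 1e3 * np.eye(2)                     # estimate covariance (large = vague prior)

for _ in range(500):
    phi = rng.standard_normal(2)        # regressor for this measurement
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    # Standard RLS update (forgetting factor 1): gain, estimate, covariance.
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)

print(theta)   # close to theta_true after 500 noisy measurements
```

Each update costs a handful of small matrix-vector products, which is what makes the recursive form attractive for online adaptive control compared with re-solving the batch least-squares problem at every step.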
## Try MathJax Here
2016-09-06
MathJax is a commonly used web-based rendering tool for math formulas and expressions. Here is a test of it.
## Yet Another New Start
2015-05-01
Blog migration again. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9168398380279541, "perplexity": 3149.356930753362}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571198.57/warc/CC-MAIN-20220810161541-20220810191541-00240.warc.gz"} |
https://www.mathwit.com/math/m2/mod/forum/view.php?id=83&o=5 | ## K12 math forum
Status Discussion Last post Replies Actions | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.825061023235321, "perplexity": 13176.22773302862}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00021.warc.gz"} |
https://worldwidescience.org/topicpages/v/volcano+observatory+hvo.html | #### Sample records for volcano observatory hvo
1. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure
Science.gov (United States)
Antolik, L.; Shiro, B.; Friberg, P. A.
2016-12-01
The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS) system. HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthen its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements on HVO's seismic processing infrastructure, including: 1) Virtualization of AQMS physical servers; 2) Migration of server operating systems from Solaris to Linux; 3) Consolidation of AQMS real-time and post-processing services to a single server; 4) Upgrading database from Oracle 10 to Oracle 12; and 5) Upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state of health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.
2. Hawaiian Volcano Observatory seismic data, January to March 2009
Science.gov (United States)
Nakata, Jennifer S.; Okubo, Paul G.
2010-01-01
This U.S. Geological Survey (USGS), Hawaiian Volcano Observatory (HVO) summary presents seismic data gathered during January–March 2009. The seismic summary offers earthquake hypocenters without interpretation as a source of preliminary data and is complete in that most data for events of M≥1.5 are included. All latitude and longitude references in this report are stated in Old Hawaiian Datum.
3. The story of the Hawaiian Volcano Observatory -- A remarkable first 100 years of tracking eruptions and earthquakes
Science.gov (United States)
Babb, Janet L.; Kauahikaua, James P.; Tilling, Robert I.
2011-01-01
The year 2012 marks the centennial of the Hawaiian Volcano Observatory (HVO). With the support and cooperation of visionaries, financiers, scientists, and other individuals and organizations, HVO has successfully achieved 100 years of continuous monitoring of Hawaiian volcanoes. As we celebrate this milestone anniversary, we express our sincere mahalo—thanks—to the people who have contributed to and participated in HVO’s mission during this past century. First and foremost, we owe a debt of gratitude to the late Thomas A. Jaggar, Jr., the geologist whose vision and efforts led to the founding of HVO. We also acknowledge the pioneering contributions of the late Frank A. Perret, who began the continuous monitoring of Kīlauea in 1911, setting the stage for Jaggar, who took over the work in 1912. Initial support for HVO was provided by the Massachusetts Institute of Technology (MIT) and the Carnegie Geophysical Laboratory, which financed the initial cache of volcano monitoring instruments and Perret’s work in 1911. The Hawaiian Volcano Research Association, a group of Honolulu businessmen organized by Lorrin A. Thurston, also provided essential funding for HVO’s daily operations starting in mid-1912 and continuing for several decades. Since HVO’s beginning, the University of Hawaiʻi (UH), called the College of Hawaii until 1920, has been an advocate of HVO’s scientific studies. We have benefited from collaborations with UH scientists at both the Hilo and Mänoa campuses and look forward to future cooperative efforts to better understand how Hawaiian volcanoes work. The U.S. Geological Survey (USGS) has operated HVO continuously since 1947. Before then, HVO was under the administration of various Federal agencies—the U.S. Weather Bureau, at the time part of the Department of Agriculture, from 1919 to 1924; the USGS, which first managed HVO from 1924 to 1935; and the National Park Service from 1935 to 1947. For 76 of its first 100 years, HVO has been
4. Seismic instrumentation plan for the Hawaiian Volcano Observatory
Science.gov (United States)
Thelen, Weston A.
2014-01-01
The seismic network operated by the U.S. Geological Survey’s Hawaiian Volcano Observatory (HVO) is the main source of authoritative data for reporting earthquakes in the State of Hawaii, including those that occur on the State’s six active volcanoes (Kīlauea, Mauna Loa, Hualālai, Mauna Kea, Haleakalā, Lō‘ihi). Of these volcanoes, Kīlauea and Mauna Loa are considered “very high threat” in a report on the rationale for a National Volcanic Early Warning System (NVEWS) (Ewert and others, 2005). This seismic instrumentation plan assesses the current state of HVO’s seismic network with respect to the State’s active volcanoes and calculates the number of stations that are needed to upgrade the current network to provide a seismic early warning capability for forecasting volcanic activity. Further, the report provides proposed priorities for upgrading the seismic network and a cost assessment for both the installation costs and maintenance costs of the improved network that are required to fully realize the potential of the early warning system.
5. The Hawaiian Volcano Observatory: a natural laboratory for studying basaltic volcanism: Chapter 1 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Tilling, Robert I.; Kauahikaua, James P.; Brantley, Steven R.; Neal, Christina A.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
In the beginning of the 20th century, geologist Thomas A. Jaggar, Jr., argued that, to fully understand volcanic and associated hazards, the expeditionary mode of studying eruptions only after they occurred was inadequate. Instead, he fervently advocated the use of permanent observatories to record and measure volcanic phenomena—at and below the surface—before, during, and after eruptions to obtain the basic scientific information needed to protect people and property from volcanic hazards. With the crucial early help of American volcanologist Frank Alvord Perret and the Hawaiian business community, the Hawaiian Volcano Observatory (HVO) was established in 1912, and Jaggar’s vision became reality. From its inception, HVO’s mission has centered on several goals: (1) measuring and documenting the seismic, eruptive, and geodetic processes of active Hawaiian volcanoes (principally Kīlauea and Mauna Loa); (2) geological mapping and dating of deposits to reconstruct volcanic histories, understand island evolution, and determine eruptive frequencies and volcanic hazards; (3) systematically collecting eruptive products, including gases, for laboratory analysis; and (4) widely disseminating observatory-acquired data and analysis, reports, and hazard warnings to the global scientific community, emergency-management authorities, news media, and the public. The long-term focus on these goals by HVO scientists, in collaboration with investigators from many other organizations, continues to fulfill Jaggar’s career-long vision of reducing risks from volcanic and earthquake hazards across the globe.
6. A Versatile Time-Lapse Camera System Developed by the Hawaiian Volcano Observatory for Use at Kilauea Volcano, Hawaii
Science.gov (United States)
Orr, Tim R.; Hoblitt, Richard P.
2008-01-01
Volcanoes can be difficult to study up close. Because it may be days, weeks, or even years between important events, direct observation is often impractical. In addition, volcanoes are often inaccessible due to their remote location and (or) harsh environmental conditions. An eruption adds another level of complexity to what already may be a difficult and dangerous situation. For these reasons, scientists at the U.S. Geological Survey (USGS) Hawaiian Volcano Observatory (HVO) have, for years, built camera systems to act as surrogate eyes. With the recent advances in digital-camera technology, these eyes are rapidly improving. One type of photographic monitoring involves the use of near-real-time network-enabled cameras installed at permanent sites (Hoblitt and others, in press). Time-lapse camera-systems, on the other hand, provide an inexpensive, easily transportable monitoring option that offers more versatility in site location. While time-lapse systems lack near-real-time capability, they provide higher image resolution and can be rapidly deployed in areas where the use of sophisticated telemetry required by the networked cameras systems is not practical. This report describes the latest generation (as of 2008) time-lapse camera system used by HVO for photograph acquisition in remote and hazardous sites on Kilauea Volcano.
7. One hundred volatile years of volcanic gas studies at the Hawaiian Volcano Observatory: Chapter 7 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Sutton, A.J.; Elias, Tamar; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
The first volcanic gas studies in Hawai‘i, beginning in 1912, established that volatile emissions from Kīlauea Volcano contained mostly water vapor, in addition to carbon dioxide and sulfur dioxide. This straightforward discovery overturned a popular volatile theory of the day and, in the same action, helped affirm Thomas A. Jaggar, Jr.’s, vision of the Hawaiian Volcano Observatory (HVO) as a preeminent place to study volcanic processes. Decades later, the environmental movement produced a watershed of quantitative analytical tools that, after being tested at Kīlauea, became part of the regular monitoring effort at HVO. The resulting volatile emission and fumarole chemistry datasets are some of the most extensive on the planet. These data indicate that magma from the mantle enters the shallow magmatic system of Kīlauea sufficiently oversaturated in CO2 to produce turbulent flow. Passive degassing at Kīlauea’s summit that occurred from 1983 through 2007 yielded CO2-depleted, but SO2- and H2O-rich, rift eruptive gases. Beginning with the 2008 summit eruption, magma reaching the East Rift Zone eruption site became depleted of much of its volatile content at the summit eruptive vent before transport to Pu‘u ‘Ō‘ō. The volatile emissions of Hawaiian volcanoes are halogen-poor, relative to those of other basaltic systems. Information gained regarding intrinsic gas solubilities at Kīlauea and Mauna Loa, as well as the pressure-controlled nature of gas release, have provided useful tools for tracking eruptive activity. Regular CO2-emission-rate measurements at Kīlauea’s summit, together with surface-deformation and other data, detected an increase in deep magma supply more than a year before a corresponding surge in effusive activity. Correspondingly, HVO routinely uses SO2 emissions to study shallow eruptive processes and effusion rates. HVO gas studies and Kīlauea’s long-running East Rift Zone eruption also demonstrate that volatile emissions can
8. The origin of the Hawaiian Volcano Observatory
International Nuclear Information System (INIS)
Dvorak, John
2011-01-01
I first stepped through the doorway of the Hawaiian Volcano Observatory in 1976, and I was impressed by what I saw: A dozen people working out of a stone-and-metal building perched at the edge of a high cliff with a spectacular view of a vast volcanic plain. Their primary purpose was to monitor the island's two active volcanoes, Kilauea and Mauna Loa. I joined them, working for six weeks as a volunteer and then, years later, as a staff scientist. That gave me several chances to ask how the observatory had started.
9. The origin of the Hawaiian Volcano Observatory
Energy Technology Data Exchange (ETDEWEB)
Dvorak, John [University of Hawaii's Institute for Astronomy (United States)]
2011-05-15
I first stepped through the doorway of the Hawaiian Volcano Observatory in 1976, and I was impressed by what I saw: A dozen people working out of a stone-and-metal building perched at the edge of a high cliff with a spectacular view of a vast volcanic plain. Their primary purpose was to monitor the island's two active volcanoes, Kilauea and Mauna Loa. I joined them, working for six weeks as a volunteer and then, years later, as a staff scientist. That gave me several chances to ask how the observatory had started.
10. Decision Analysis Tools for Volcano Observatories
Science.gov (United States)
Hincks, T. H.; Aspinall, W.; Woo, G.
2005-12-01
Staff at volcano observatories are predominantly engaged in scientific activities related to volcano monitoring and instrumentation, data acquisition and analysis. Accordingly, the academic education and professional training of observatory staff tend to focus on these scientific functions. From time to time, however, staff may be called upon to provide decision support to government officials responsible for civil protection. Recognizing that Earth scientists may have limited technical familiarity with formal decision analysis methods, specialist software tools that assist decision support in a crisis should be welcome. A review is given of two software tools that have been under development recently. The first is for probabilistic risk assessment of human and economic loss from volcanic eruptions, and is of practical use in short and medium-term risk-informed planning of exclusion zones, post-disaster response, etc. A multiple branch event-tree architecture for the software, together with a formalism for ascribing probabilities to branches, have been developed within the context of the European Community EXPLORIS project. The second software tool utilizes the principles of the Bayesian Belief Network (BBN) for evidence-based assessment of volcanic state and probabilistic threat evaluation. This is of practical application in short-term volcano hazard forecasting and real-time crisis management, including the difficult challenge of deciding when an eruption is over. An open-source BBN library is the software foundation for this tool, which is capable of combining synoptically different strands of observational data from diverse monitoring sources. A conceptual vision is presented of the practical deployment of these decision analysis tools in a future volcano observatory environment. Summary retrospective analyses are given of previous volcanic crises to illustrate the hazard and risk insights gained from use of these tools.
11. One hundred years of volcano monitoring in Hawaii
Science.gov (United States)
Kauahikaua, Jim; Poland, Mike
2012-01-01
In 2012 the Hawaiian Volcano Observatory (HVO), the oldest of five volcano observatories in the United States, is commemorating the 100th anniversary of its founding. HVO's location, on the rim of Kilauea volcano (Figure 1)—one of the most active volcanoes on Earth—has provided an unprecedented opportunity over the past century to study processes associated with active volcanism and develop methods for hazards assessment and mitigation. The scientifically and societally important results that have come from 100 years of HVO's existence are the realization of one man's vision of the best way to protect humanity from natural disasters. That vision was a response to an unusually destructive decade that began the twentieth century, a decade that saw almost 200,000 people killed by the effects of earthquakes and volcanic eruptions.
12. Linking space observations to volcano observatories in Latin America: Results from the CEOS DRM Volcano Pilot
Science.gov (United States)
Delgado, F.; Pritchard, M. E.; Biggs, J.; Arnold, D. W. D.; Poland, M. P.; Ebmeier, S. K.; Wauthier, C.; Wnuk, K.; Parker, A. L.; Amelug, F.; Sansosti, E.; Mothes, P. A.; Macedo, O.; Lara, L.; Zoffoli, S.; Aguilar, V.
2015-12-01
Within Latin America, about 315 volcanoes have been active in the Holocene, but according to the United Nations Global Assessment of Risk 2015 report (GAR15), 202 of these volcanoes have no seismic, deformation, or gas monitoring. Following the 2012 Santorini Report on satellite Earth Observation and Geohazards, the Committee on Earth Observation Satellites (CEOS) has developed a 3-year pilot project to demonstrate how satellite observations can be used to monitor large numbers of volcanoes cost-effectively, particularly in areas with scarce instrumentation and/or difficult access. The pilot aims to improve disaster risk management (DRM) by working directly with the volcano observatories that are governmentally responsible for volcano monitoring, and the project is possible thanks to data provided at no cost by international space agencies (ESA, CSA, ASI, DLR, JAXA, NASA, CNES). Here we highlight several examples of how satellite observations have been used by volcano observatories during the last 18 months to monitor volcanoes and respond to crises -- for example, the 2013-2014 unrest episode at Cerro Negro/Chiles (Ecuador-Colombia border); the 2015 eruptions of Villarrica and Calbuco volcanoes, Chile; the 2013-present unrest and eruptions at Sabancaya and Ubinas volcanoes, Peru; the 2015 unrest at Guallatiri volcano, Chile; and the 2012-present rapid uplift at Cordon Caulle, Chile. Our primary tool is measurements of ground deformation made by Interferometric Synthetic Aperture Radar (InSAR), but thermal and outgassing data have been used in a few cases. InSAR data have helped to determine the alert level at these volcanoes, served as an independent check on ground sensors, guided the deployment of ground instruments, and aided situational awareness. We will describe several lessons learned about the types of data products and information that are most needed by the volcano observatories in different countries.
13. Protocols for geologic hazards response by the Yellowstone Volcano Observatory
Science.gov (United States)
,
2010-01-01
The Yellowstone Plateau hosts an active volcanic system, with subterranean magma (molten rock), boiling, pressurized waters, and a variety of active faults with significant earthquake hazards. Within the next few decades, light-to-moderate earthquakes and steam explosions are certain to occur. Volcanic eruptions are less likely, but are ultimately inevitable in this active volcanic region. This document summarizes protocols, policies, and tools to be used by the Yellowstone Volcano Observatory (YVO) during earthquakes, hydrothermal explosions, or any geologic activity that could lead to a volcanic eruption.
14. HVO applications and practical experiences; HVO Anwendungen und Praxiserfahrungen
Energy Technology Data Exchange (ETDEWEB)
Doerr, Sebastian [Neste Oil Corporation, Espoo (Finland); Honkanen, Markku; Mikkonen, Seppo
2012-07-01
Since 2007, Neste Oil Finland has been producing renewable diesel based on vegetable oils. The key process step is hydrotreatment; the products are called HVO (Hydrotreated Vegetable Oils). Process and main product performance are introduced. The main focus is on applications and experience from several field tests. The presentation gives an overview and a short outlook on future applications and challenges. (orig.)
15. 2014 volcanic activity in Alaska: Summary of events and response of the Alaska Volcano Observatory
Science.gov (United States)
Cameron, Cheryl E.; Dixon, James P.; Neal, Christina A.; Waythomas, Christopher F.; Schaefer, Janet R.; McGimsey, Robert G.
2017-09-07
The Alaska Volcano Observatory (AVO) responded to eruptions, possible eruptions, volcanic unrest or suspected unrest, and seismic events at 18 volcanic centers in Alaska during 2014. The most notable volcanic activity consisted of intermittent ash eruptions from long-active Cleveland and Shishaldin Volcanoes in the Aleutian Islands, and two eruptive episodes at Pavlof Volcano on the Alaska Peninsula. Semisopochnoi and Akutan volcanoes had seismic swarms, both likely the result of magmatic intrusion. The AVO also installed seismometers and infrasound instruments at Mount Cleveland during 2014.
16. 2015 Volcanic activity in Alaska—Summary of events and response of the Alaska Volcano Observatory
Science.gov (United States)
Dixon, James P.; Cameron, Cheryl E.; Iezzi, Alexandra M.; Wallace, Kristi
2017-09-28
The Alaska Volcano Observatory (AVO) responded to eruptions, volcanic unrest or suspected unrest, and seismic events at 14 volcanic centers in Alaska during 2015. The most notable volcanic activity consisted of continuing intermittent ash eruptions from Cleveland and Shishaldin volcanoes in the Aleutian Islands. Two eruptive episodes, at Veniaminof and Pavlof, on the Alaska Peninsula ended in 2015. During 2015, AVO re-established the seismograph network at Aniakchak, installed six new broadband seismometers throughout the Aleutian Islands, and added a Multiple component Gas Analyzer System (MultiGAS) station on Augustine.
17. Space volcano observatory (SVO): a metric resolution system on-board a micro/mini-satellite
Science.gov (United States)
Briole, P.; Cerutti-Maori, G.; Kasser, M.
2017-11-01
1500 volcanoes on the Earth are potentially active; one third of them have been active during this century, and about 70 are presently erupting. At the beginning of the third millennium, 10% of the world population will be living in areas directly threatened by volcanoes, without considering the effects of eruptions on climate or air traffic, for example. The understanding of volcanic eruptions, a major challenge in geoscience, demands continuous monitoring of active volcanoes. The only way to provide global, continuous, real-time and all-weather information on volcanoes is to set up a Space Volcano Observatory closely connected to the ground observatories. Spaceborne observations are mandatory and complement ground-based observations, as well as airborne ones that can be carried out only over a limited set of volcanoes. The SVO goal is to monitor both the deformations and the changes in thermal radiance at optical wavelengths from high-temperature surfaces of active volcanic zones. To that end, we propose to map at high resolution (1 to 1.5 m pixel size) the topography (stereoscopic observation) and the thermal anomalies (pixel-integrated temperatures above 450°C) of active volcanic areas over scenes of 6 x 6 km to 12 x 12 km, large enough for monitoring most of the target features. A revisit time of 1 to 3 days will provide monitoring useful for hazard mitigation. The paper will present the concept of the optical payload, compatible with a micro/mini satellite (mass in the range 100-400 kg); budgets will be given for the use of the Proteus platform in the minisatellite approach and for the CNES microsatellite platform family. This kind of design could also be used for other applications, such as high-resolution imagery of a limited zone for military purposes, GIS, evolution cadaster…
18. Petrologic insights into basaltic volcanism at historically active Hawaiian volcanoes: Chapter 6 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Helz, Rosalind L.; Clague, David A.; Sisson, Thomas W.; Thornber, Carl R.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
Study of the petrology of Hawaiian volcanoes, in particular the historically active volcanoes on the Island of Hawai‘i, has long been of worldwide scientific interest. When Dr. Thomas A. Jaggar, Jr., established the Hawaiian Volcano Observatory (HVO) in 1912, detailed observations on basaltic activity at Kīlauea and Mauna Loa volcanoes increased dramatically. The period from 1912 to 1958 saw a gradual increase in the collection and analysis of samples from the historical eruptions of Kīlauea and Mauna Loa and development of the concepts needed to evaluate them. In a classic 1955 paper, Howard Powers introduced the concepts of magnesia variation diagrams, to display basaltic compositions, and olivine-control lines, to distinguish between possibly comagmatic and clearly distinct basaltic lineages. In particular, he and others recognized that Kīlauea and Mauna Loa basalts must have different sources.
19. Mauna Loa--history, hazards and risk of living with the world's largest volcano
Science.gov (United States)
Trusdell, Frank A.
2012-01-01
Mauna Loa, on the Island of Hawaiʻi, is the world’s largest volcano. People residing on its flanks face many hazards that come with living on or near an active volcano, including lava flows, explosive eruptions, volcanic smog, damaging earthquakes, and local tsunami (giant sea waves). The County of Hawaiʻi (Island of Hawaiʻi) is the fastest-growing county in the State of Hawaii. Its expanding population and increasing development mean that risk from volcano hazards will continue to grow. U.S. Geological Survey (USGS) scientists at the Hawaiian Volcano Observatory (HVO) closely monitor and study Mauna Loa Volcano to enable timely warning of hazardous activity and help protect lives and property.
20. Volcano-Monitoring Instrumentation in the United States, 2008
Science.gov (United States)
Guffanti, Marianne; Diefenbach, Angela K.; Ewert, John W.; Ramsey, David W.; Cervelli, Peter F.; Schilling, Steven P.
2010-01-01
The United States is one of the most volcanically active countries in the world. According to the global volcanism database of the Smithsonian Institution, the United States (including its Commonwealth of the Northern Mariana Islands) is home to about 170 volcanoes that are in an eruptive phase, have erupted in historical time, or have not erupted recently but are young enough (eruptions within the past 10,000 years) to be capable of reawakening. From 1980 through 2008, 30 of these volcanoes erupted, several repeatedly. Volcano monitoring in the United States is carried out by the U.S. Geological Survey (USGS) Volcano Hazards Program, which operates a system of five volcano observatories-Alaska Volcano Observatory (AVO), Cascades Volcano Observatory (CVO), Hawaiian Volcano Observatory (HVO), Long Valley Observatory (LVO), and Yellowstone Volcano Observatory (YVO). The observatories issue public alerts about conditions and hazards at U.S. volcanoes in support of the USGS mandate under P.L. 93-288 (Stafford Act) to provide timely warnings of potential volcanic disasters to the affected populace and civil authorities. To make efficient use of the Nation's scientific resources, the volcano observatories operate in partnership with universities and other governmental agencies through various formal agreements. The Consortium of U.S. Volcano Observatories (CUSVO) was established in 2001 to promote scientific cooperation among the Federal, academic, and State agencies involved in observatory operations. Other groups also contribute to volcano monitoring by sponsoring long-term installation of geophysical instruments at some volcanoes for specific research projects. This report describes a database of information about permanently installed ground-based instruments used by the U.S. volcano observatories to monitor volcanic activity (unrest and eruptions). The purposes of this Volcano-Monitoring Instrumentation Database (VMID) are to (1) document the Nation's existing
1. Implementation of Simple and Functional Web Applications at the Alaska Volcano Observatory Remote Sensing Group
Science.gov (United States)
Skoog, R. A.
2007-12-01
Web pages are ubiquitous and accessible, but when compared to stand-alone applications they are limited in capability. The Alaska Volcano Observatory (AVO) Remote Sensing Group has implemented web pages and supporting server software that provide relatively advanced features to any user able to meet basic requirements. Anyone in the world with access to a modern web browser (such as Mozilla Firefox 1.5 or Internet Explorer 6) and a reasonable internet connection can fully use the tools, with no software installation or configuration. This allows faculty, staff and students at AVO to perform many aspects of volcano monitoring from home or the road as easily as from the office. Additionally, AVO collaborators such as the National Weather Service and the Anchorage Volcanic Ash Advisory Center are able to use these web tools to quickly assess volcanic events. Capabilities of this web software include the ability to (1) obtain accurate measured remote sensing data values from a semi-quantitative compressed image of a large area, (2) view any data from a wide time range of data swaths, (3) view many different satellite remote sensing spectral bands and combinations and adjust color range thresholds, and (4) export to KML files viewable in virtual globes such as Google Earth. The technologies behind this implementation are primarily Javascript, PHP, and MySQL, which are free to use and well documented, in addition to Terascan, a commercial software package used to extract data from level-0 data files. These technologies will be presented in conjunction with the techniques used to combine them into the final product used by AVO and its collaborators for operational volcanic monitoring.
2. A century of studying effusive eruptions in Hawai'i: Chapter 9 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Cashman, Katherine V.; Mangan, Margaret T.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
The Hawaiian Volcano Observatory (HVO) was established as a natural laboratory to study volcanic processes. Since the most frequent form of volcanic activity in Hawai‘i is effusive, a major contribution of the past century of research at HVO has been to describe and quantify lava flow emplacement processes. Lava flow research has taken many forms; first and foremost it has been a collection of basic observational data on active lava flows from both Mauna Loa and Kīlauea volcanoes that have occurred over the past 100 years. Both the types and quantities of observational data have changed with changing technology; thus, another important contribution of HVO to lava flow studies has been the application of new observational techniques. Also important has been a long-term effort to measure the physical properties (temperature, viscosity, crystallinity, and so on) of flowing lava. Field measurements of these properties have both motivated laboratory experiments and presaged the results of those experiments, particularly with respect to understanding the rheology of complex fluids. Finally, studies of the dynamics of lava flow emplacement have combined detailed field measurements with theoretical models to build a framework for the interpretation of lava flows in numerous other terrestrial, submarine, and planetary environments. Here, we attempt to review all these aspects of lava flow studies and place them into a coherent framework that we hope will motivate future research.
3. An automated SO2 camera system for continuous, real-time monitoring of gas emissions from Kīlauea Volcano's summit Overlook Crater
Science.gov (United States)
Kern, Christoph; Sutton, Jeff; Elias, Tamar; Lee, Robert Lopaka; Kamibayashi, Kevan P.; Antolik, Loren; Werner, Cynthia A.
2015-01-01
SO2 camera systems allow rapid two-dimensional imaging of sulfur dioxide (SO2) emitted from volcanic vents. Here, we describe the development of an SO2 camera system specifically designed for semi-permanent field installation and continuous use. The integration of innovative but largely “off-the-shelf” components allowed us to assemble a robust and highly customizable instrument capable of continuous, long-term deployment at Kīlauea Volcano's summit Overlook Crater. Recorded imagery is telemetered to the USGS Hawaiian Volcano Observatory (HVO) where a novel automatic retrieval algorithm derives SO2 column densities and emission rates in real-time. Imagery and corresponding emission rates displayed in the HVO operations center and on the internal observatory website provide HVO staff with useful information for assessing the volcano's current activity. The ever-growing archive of continuous imagery and high-resolution emission rates in combination with continuous data from other monitoring techniques provides insight into shallow volcanic processes occurring at the Overlook Crater. An exemplary dataset from September 2013 is discussed in which a variation in the efficiency of shallow circulation and convection, the processes that transport volatile-rich magma to the surface of the summit lava lake, appears to have caused two distinctly different phases of lake activity and degassing. This first successful deployment of an SO2 camera for continuous, real-time volcano monitoring shows how this versatile technique might soon be adapted and applied to monitor SO2 degassing at other volcanoes around the world.
4. Volcanoes
Science.gov (United States)
... rock, steam, poisonous gases, and ash reach the Earth's surface when a volcano erupts. An eruption can also cause earthquakes, mudflows and flash floods, rock falls and landslides, acid rain, fires, and even tsunamis. Volcanic gas ...
5. Operational tracking of lava lake surface motion at Kīlauea Volcano, Hawai‘i
Science.gov (United States)
Patrick, Matthew R.; Orr, Tim R.
2018-03-08
Surface motion is an important component of lava lake behavior, but previous studies of lake motion have been focused on short time intervals. In this study, we implement the first continuous, real-time operational routine for tracking lava lake surface motion, applying the technique to the persistent lava lake in Halema‘uma‘u Crater at the summit of Kīlauea Volcano, Hawai‘i. We measure lake motion by using images from a fixed thermal camera positioned on the crater rim, transmitting images to the Hawaiian Volcano Observatory (HVO) in real time. We use an existing optical flow toolbox in Matlab to calculate motion vectors, and we track the position of lava upwelling in the lake, as well as the intensity of spattering on the lake surface. Over the past 2 years, real-time tracking of lava lake surface motion at Halema‘uma‘u has been an important part of monitoring the lake’s activity, serving as another valuable tool in the volcano monitoring suite at HVO.
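The abstract describes measuring lake surface motion with an optical flow toolbox in Matlab; that toolbox is not reproduced here. As a minimal illustration of the underlying idea, the sketch below estimates the displacement of a patch between two successive frames by exhaustive block matching, one of the simplest motion-estimation techniques. All names and parameters are illustrative, not HVO's implementation.

```python
import numpy as np

def patch_displacement(frame0, frame1, center, half=8, search=5):
    """Estimate the motion of a square patch between two frames by
    exhaustive block matching: try every integer shift within +/-`search`
    pixels and return the (drow, dcol) shift whose candidate patch in
    `frame1` minimizes the sum of squared differences against the
    template taken from `frame0` around `center` (row, col)."""
    r, c = center
    tpl = frame0[r - half:r + half, c - half:c + half].astype(float)
    best_ssd, best_shift = None, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = frame1[r + dr - half:r + dr + half,
                          c + dc - half:c + dc + half].astype(float)
            ssd = np.sum((cand - tpl) ** 2)
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_shift = ssd, (dr, dc)
    return best_shift
```

Dense optical flow (as in the Matlab toolbox the abstract mentions) generalizes this by producing a motion vector for every pixel rather than one patch.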
6. Application of Earthquake Subspace Detectors at Kilauea and Mauna Loa Volcanoes, Hawaii
Science.gov (United States)
Okubo, P.; Benz, H.; Yeck, W.
2016-12-01
Recent studies have demonstrated the capabilities of earthquake subspace detectors for detailed cataloging and tracking of seismicity in a number of regions and settings. We are exploring the application of subspace detectors at the United States Geological Survey's Hawaiian Volcano Observatory (HVO) to analyze seismicity at Kilauea and Mauna Loa volcanoes. Elevated levels of microseismicity and occasional swarms of earthquakes associated with active volcanism here present cataloging challenges due to the sheer numbers of earthquakes and an intrinsically low signal-to-noise environment featuring oceanic microseism and volcanic tremor in the ambient seismic background. With high-quality continuous recording of seismic data at HVO, we apply subspace detectors (Harris and Dodge, 2011, Bull. Seismol. Soc. Am., doi: 10.1785/0120100103) during intervals of noteworthy seismicity. Waveform templates are drawn from Magnitude 2 and larger earthquakes within clusters of earthquakes cataloged in the HVO seismic database. At Kilauea, we focus on seismic swarms in the summit caldera region where, despite continuing eruptions from vents in the summit region and in the east rift zone, geodetic measurements reflect a relatively inflated volcanic state. We also focus on seismicity beneath and adjacent to Mauna Loa's summit caldera that appears to be associated with geodetic expressions of gradual volcanic inflation, and where precursory seismicity clustered prior to both Mauna Loa's most recent eruptions in 1975 and 1984. We recover several times more earthquakes with the subspace detectors - down to roughly 2 magnitude units below the templates, based on relative amplitudes - compared to the numbers of cataloged earthquakes. The increased numbers of detected earthquakes in these clusters, and the ability to associate and locate them, allow us to infer details of the spatial and temporal distributions and possible variations in stresses within these key regions of the volcanoes.
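A subspace detector (Harris and Dodge, 2011) projects sliding windows of continuous data onto a basis spanned by several template waveforms; the simplest special case is a single-template matched filter using normalized cross-correlation. The toy sketch below illustrates that special case; it is not HVO's implementation, and the threshold and signal names are illustrative.

```python
import numpy as np

def matched_filter_detect(signal, template, threshold=0.8):
    """Slide `template` along `signal` and return the sample indices where
    the normalized cross-correlation (both demeaned, unit-normalized)
    exceeds `threshold`. Subspace detection generalizes this to projection
    onto a multi-template basis."""
    n = len(template)
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    hits = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n] - signal[i:i + n].mean()
        norm = np.linalg.norm(w)
        if norm == 0:          # skip flat (dead) windows
            continue
        if np.dot(w, t) / norm > threshold:
            hits.append(i)
    return hits
```

In practice detectors of this kind run on continuous multi-channel data with FFT-based correlation for speed; the loop above is only for clarity.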
7. Use of new and old technologies and methods by the Alaska Volcano Observatory during the 2006 eruption of Augustine Volcano, Alaska
Science.gov (United States)
Murray, T. L.; Nye, C. J.; Eichelberger, J. C.
2006-12-01
8. Natural hazards and risk reduction in Hawai'i: Chapter 10 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Kauahikaua, James P.; Tilling, Robert I.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
Significant progress has been made over the past century in understanding, characterizing, and communicating the societal risks posed by volcanic, earthquake, and tsunami hazards in Hawai‘i. The work of the Hawaiian Volcano Observatory (HVO), with a century-long commitment to serving the public with credible hazards information, contributed substantially to this global progress. Thomas A. Jaggar, Jr., HVO’s founder, advocated that a scientific approach to understanding these hazards would result in strategies to mitigate their damaging effects. The resultant hazard-reduction methods range from prediction of eruptions and tsunamis, thereby providing early warnings for timely evacuation (if needed), to diversion of lava flows away from high-value infrastructure, such as hospitals. In addition to long-term volcano monitoring and multifaceted studies to better understand eruptive and seismic phenomena, HVO has continually and effectively communicated—through its publications, Web site, and public education/outreach programs—hazards information to emergency-management authorities, news media, and the public.
9. HVO, hydrotreated vegetable oil. A premium renewable biofuel for diesel engines
Energy Technology Data Exchange (ETDEWEB)
Mikkonen, Seppo [Neste Oil, Porvoo (Finland); Honkanen, Markku; Kuronen, Markku [Neste Oil, Espoo (Finland)
2013-06-01
HVO is renewable paraffinic diesel fuel produced from vegetable oils or animal fats by hydrotreating and isomerization. Composition is similar to GTL. HVO is not ''biodiesel'' which is a definition reserved for FAME. HVO can be used in diesel fuel without any ''blending wall'' as well as in addition to the FAME in EN 590. As a blending component HVO enhances fuel properties thanks to its high cetane, zero aromatics and reasonable distillation range. HVO can be used for upgrading gas oils to meet diesel fuel standard and for producing premium diesel fuels. HVO is comparable to fossil diesel regarding fuel logistics, stability, water separation and microbiological growth. The use of HVO as such or in blends reduces NO{sub x} and particulate emissions. Risks for fuel system deposits and engine oil deterioration are low. Combustion is practically ash-free meaning low risk for exhaust aftertreatment life-time. Winter grade fuels down to -40 C cloud point can be produced by HVO process from many kinds of feedstocks. HVO is fully accepted by directives and fuel standards. (orig.)
10. Characteristics of Hawaiian volcanoes
Science.gov (United States)
Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
Founded in 1912 at the edge of the caldera of Kīlauea Volcano, HVO was the vision of Thomas A. Jaggar, Jr., a geologist from the Massachusetts Institute of Technology, whose studies of natural disasters around the world had convinced him that systematic, continuous observations of seismic and volcanic activity were needed to better understand—and potentially predict—earthquakes and volcanic eruptions. Jaggar summarized the aim of HVO by stating that “the work should be humanitarian” and have the goals of developing “prediction and methods of protecting life and property on the basis of sound scientific achievement.” These goals align well with those of the USGS, whose mission is to serve the Nation by providing reliable scientific information to describe and understand the Earth; minimize loss of life and property from natural disasters; manage natural resources; and enhance and protect our quality of life.
11. Volcanoes: Nature's Caldrons Challenge Geochemists.
Science.gov (United States)
Zurer, Pamela S.
1984-01-01
Reviews various topics and research studies on the geology of volcanoes. Areas examined include volcanoes and weather, plate margins, origins of magma, magma evolution, United States Geological Survey (USGS) volcano hazards program, USGS volcano observatories, volcanic gases, potassium-argon dating activities, and volcano monitoring strategies.…
12. Particle and NO{sub x} Emissions from a HVO-Fueled Diesel Engine
Energy Technology Data Exchange (ETDEWEB)
Happonen, M.
2012-10-15
Concerns about oil price, the strengthening climate change and traffic-related health effects are all reasons which have promoted the research of renewable fuels. One renewable fuel candidate is diesel consisting of hydrotreated vegetable oils (HVO). The fuel is essentially paraffinic, has a high cetane number (>80) and contains practically no oxygen, aromatics or sulphur. Furthermore, HVO fuel can be produced from various feedstocks including palm, soybean and rapeseed oils as well as animal fats. HVO has also been observed to reduce all regulated engine exhaust emissions compared to conventional diesel fuel. In this thesis, the effect of HVO fuel on engine exhaust emissions has been studied further. The thesis is roughly divided into two parts. The first part explores the emission reductions associated with the fuel and studies techniques which could be applied to achieve further emission reductions. One of the studied techniques was adjusting engine settings to better suit HVO fuel. The settings chosen for adjustment were injection pressure, injection timing, the amount of EGR and the timing of inlet valve closing (with constant inlet air mass flow, i.e. Miller timing). The engine adjustments were also successfully targeted to reduce either NO{sub x} or particulate emissions or both. The other applied emission reduction technique was the addition of an oxygenate to HVO fuel. The chosen oxygenate was di-n-pentyl ether (DNPE), and the tested fuel blend included 20 wt-% DNPE and 80 wt-% HVO. Thus, the oxygen content of the resulting blend was 2 wt-%. Reductions of over 25 % were observed in particulate emissions with the blend compared to pure HVO, while NO{sub x} emissions changed by less than 5 %. In the second part of this thesis, the effect of the studied fuels on selected surface properties of exhaust particles was studied using tandem differential mobility analyzer (TDMA) techniques and transmission electron microscopy (TEM). The studied surface properties were oxidizability and
13. A space-borne, multi-parameter, Virtual Volcano Observatory for the real-time, anywhere-anytime support to decision-making during eruptive crises
Science.gov (United States)
Ferrucci, F.; Tampellini, M.; Loughlin, S. C.; Tait, S.; Theys, N.; Valks, P.; Hirn, B.
2013-12-01
The EVOSS consortium of academic, industrial and institutional partners in Europe and Africa has created a satellite-based volcano observatory, designed to support crisis management within the Global Monitoring for Environment and Security (GMES) framework of the European Commission. Data from 8 different payloads orbiting on 14 satellite platforms (SEVIRI on-board MSG-1, -2 and -3, MODIS on-board Terra and Aqua, GOME-2 and IASI on-board MetOp-A, OMI on-board Aura, Cosmo-SkyMED/1, /2, /3 and /4, JAMI on-board MTSAT-1 and -2, and, until April 8, 2012, SCIAMACHY on-board ENVISAT), acquired at 5 different down-link stations, are disseminated to and automatically processed at 6 locations in 4 countries. The results are sent, in four separate geographic data streams (high-temperature thermal anomalies, volcanic sulfur dioxide daily fluxes, volcanic ash and ground deformation), to a central facility called VVO, the 'Virtual Volcano Observatory'. This system has operated 24H/24, 7D/7 since September 2011 on all volcanoes in Europe, Africa, the Lesser Antilles, and the oceans around them, and during this interval has detected, measured and monitored all subaerial eruptions that occurred in this region (44 of 45 certified, with an overall detection and processing efficiency of ~97%). EVOSS-borne real-time information is delivered to a group of 14 qualified end users bearing the direct or indirect responsibility of monitoring and managing volcano emergencies, and of advising governments in Comoros, DR Congo, Djibouti, Ethiopia, Montserrat, Uganda, Tanzania, France and Iceland. We present the full set of eruptions detected and monitored - from 2004 to present - by the multispectral SEVIRI payloads on-board the geostationary platforms of the MSG constellation, used for developing and fine-tuning the EVOSS system along with its real-time, pre- and post-processing automated algorithms. The set includes 91% of subaerial eruptions occurred at 15 volcanoes (Piton de la Fournaise, Karthala, Jebel al
14. The Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) Concept and Research
Science.gov (United States)
Baxley, B.; Williams, D.; Consiglio, M.; Adams, C.; Abbott, T.
2005-01-01
The ability to conduct concurrent, multiple aircraft operations in poor weather at virtually any airport offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase capacity at the 3400 non-radar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during low visibility or ceilings. The concept's key feature is that pilots maintain their own separation from other aircraft using air-to-air datalink and on-board software within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility and low ceilings around an airport without Air Traffic Control (ATC) services. While pilots self-separate within the SCA, an Airport Management Module (AMM) located at the airport assigns arriving pilots their sequence based on aircraft performance, position, winds, missed approach requirements, and ATC intent. The HVO design uses distributed decision-making and safe procedures, attempts to minimize pilot and controller workload, and integrates with today's ATC environment. The HVO procedures have pilots make their own flight path decisions when flying in Instrument Meteorological Conditions (IMC) while meeting these requirements. This paper summarizes the HVO concept and procedures, presents a summary of the research conducted and results, and outlines areas where future HVO research is required. More information about SATS HVO can be found at http://ntrs.nasa.gov.
15. Influences of HVO and FAME on the combustion and emissions of modern passenger car diesel engines
International Nuclear Information System (INIS)
Stengel, Benjamin; Sadlowski, Thomas; Wichmann, Volker; Harndorf, Horst
2014-01-01
In the framework of this study, engine tests were performed with FAME (fatty acid methyl ester) and HVO (hydrotreated vegetable oil) as straight fuels using a EURO-VI passenger car diesel engine. Standard diesel fuel (EN 590) was used as reference. To analyze the impacts of the biofuels on the combustion process, the heat release rates were calculated from in-cylinder pressure measurements using a single-zone model. Furthermore, emissions were measured and ECU data was recorded. Results from engine tests showed that both HVO and FAME positively affect the combustion through a decreased ignition delay due to their higher cetane numbers. Raw exhaust emissions of soot were clearly reduced with HVO, while CO and THC emissions showed minor reductions. During FAME operation, ECU control settings were shifted due to its lower heating value. FAME showed reductions of soot by 60 %, which is caused by the fuel's oxygen content, while NO x emissions were slightly increased. However, a fuel-adapted ECU calibration could optimize, e.g., the injection timing and EGR to further reduce emissions. Tailpipe emissions were not affected by HVO and FAME, as the exhaust aftertreatment systems worked similarly efficiently for all three fuels.
16. MATLAB tools for improved characterization and quantification of volcanic incandescence in Webcam imagery; applications at Kilauea Volcano, Hawai'i
Science.gov (United States)
Patrick, Matthew R.; Kauahikaua, James P.; Antolik, Loren
2010-01-01
Webcams are now standard tools for volcano monitoring and are used at observatories in Alaska, the Cascades, Kamchatka, Hawai'i, Italy, and Japan, among other locations. Webcam images allow invaluable documentation of activity and provide a powerful comparative tool for interpreting other monitoring datastreams, such as seismicity and deformation. Automated image processing can improve the time efficiency and rigor of Webcam image interpretation, and potentially extract more information on eruptive activity. For instance, Lovick and others (2008) provided a suite of processing tools that performed such tasks as noise reduction, eliminating uninteresting images from an image collection, and detecting incandescence, with an application to dome activity at Mount St. Helens during 2007. In this paper, we present two very simple automated approaches for improved characterization and quantification of volcanic incandescence in Webcam images at Kilauea Volcano, Hawaii. The techniques are implemented in MATLAB (version 2009b, Copyright: The Mathworks, Inc.) to take advantage of the ease of matrix operations. Incandescence is a useful indicator of the location and extent of active lava flows and also a potentially powerful proxy for activity levels at open vents. We apply our techniques to a period covering both summit and east rift zone activity at Kilauea during 2008-2009 and compare the results to complementary datasets (seismicity, tilt) to demonstrate their integrative potential. A great strength of this study is the demonstrated success of these tools in an operational setting at the Hawaiian Volcano Observatory (HVO) over the course of more than a year. Although applied only to Webcam images here, the techniques could be applied to any type of sequential images, such as time-lapse photography. We expect that these tools are applicable to many other volcano monitoring scenarios, and the two MATLAB scripts, as they are implemented at HVO, are included in the appendixes
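The two MATLAB scripts referenced in the abstract are in the report's appendixes and are not reproduced here. As a rough, hypothetical illustration of the general idea of quantifying incandescence, the Python sketch below reduces a grayscale frame to a single scalar by thresholding bright pixels; the threshold value and function name are assumptions, not HVO's method.

```python
import numpy as np

def incandescence_fraction(image, threshold=200):
    """Return the fraction of pixels at or above `threshold` brightness,
    a crude scalar proxy for the areal extent of glowing (incandescent)
    lava in an 8-bit grayscale night-time webcam frame. Tracking this
    value over a sequence of frames yields a simple activity time series."""
    image = np.asarray(image)
    return float((image >= threshold).mean())
```

An operational version would also need to handle daylight frames, fog, and camera noise, which is where most of the real-world effort lies.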
17. Magma transport and storage at Kilauea volcano, Hawaii I: 1790-1952
Science.gov (United States)
Wright, T. L.; Klein, F.
2011-12-01
We trace the evolution of Kilauea from the time of the first oral records of an explosive eruption in 1790 to the long eruption in Halemaumau crater in 1952. The establishment of modern seismic and geodetic networks in the early 1960s showed that eruptions and intrusions were fed from two magma sources beneath the summit at depths of 2-6 and ~1 km respectively (sources 1 and 2), and that seaward spreading of the south flank took place on a decollement at 10-12 km depth at the base of the Kilauea edifice. A third diffuse, pressure-transmitting magma system (source 3) between the shallow East rift zone and the decollement was also identified. We test the null hypothesis that the volcano has behaved similarly throughout its lifetime, and conclude that the null hypothesis is not met for the period preceding the 1952 summit eruption because of changes in magma supply rate and differences in ground deformation patterns. The western missionaries arriving at Kilauea in 1823 were confronted with a caldera-wide lava lake. Filling rates determined by visual observation correspond to magma supply rates that averaged more than 0.3 km3/yr prior to 1840 and declined to 1894, when lava disappeared altogether at Halemaumau crater. The Hawaiian Volcano Observatory (HVO) was established by Thomas A. Jaggar in 1912 adjacent to the Volcano House Hotel on the rim of Kilauea. Instrumental observation at HVO began using a seismometer that doubled as a tiltmeter. A 1912-1924 magma supply rate of 0.024 km3/yr agreed with the rate of filling of Kilauea caldera from 1840-1894. 1924 was a critical year. An intrusion that moved down Kilauea's East rift zone beginning in February culminated beneath the lower East rift zone in April. In May, explosive eruptions accompanied a dramatic draining of Halemaumau. Triangulation results between 1912 and 1921 showed uplift extending far beyond Kilauea caldera and an equally large regional subsidence occurred between 1921 and 1927. HVO tilt narrows the
18. Preliminary Validation of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) Concept
Science.gov (United States)
Williams, Daniel; Consiglio, Maria; Murdoch, Jennifer; Adams, Catherine
2004-01-01
This document provides a preliminary validation of the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept for normal conditions. Initial results reveal that the concept provides reduced air traffic delays when compared to current operations without increasing pilot workload. Characteristic of the SATS HVO concept is the establishment of a newly defined area of flight operations called a Self-Controlled Area (SCA), which would be activated by air traffic control (ATC) around designated non-towered, non-radar airports. During periods of poor visibility, SATS pilots would take responsibility for separation assurance between their aircraft and other similarly equipped aircraft in the SCA. Using onboard equipment and simple instrument flight procedures, they would then be better able to approach and land at the airport or depart from it. This concept would also require a new, ground-based automation system, typically located at the airport, that would provide appropriate sequencing information to the arriving aircraft. Further validation of the SATS HVO concept is required and is the subject of ongoing research and subsequent publications.
19. The Small Aircraft Transportation System (SATS), Higher Volume Operations (HVO) Off-Nominal Operations
Science.gov (United States)
Baxley, B.; Williams, D.; Consiglio, M.; Conway, S.; Adams, C.; Abbott, T.
2005-01-01
The ability to conduct concurrent, multiple aircraft operations in poor weather, at virtually any airport, offers an important opportunity for a significant increase in the rate of flight operations, a major improvement in passenger convenience, and the potential to foster growth of charter operations at small airports. The Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept is designed to increase traffic flow at any of the 3400 non-radar, non-towered airports in the United States where operations are currently restricted to one-in/one-out procedural separation during Instrument Meteorological Conditions (IMC). The concept's key feature is that pilots maintain their own separation from other aircraft using procedures, aircraft flight data sent via air-to-air datalink, cockpit displays, and on-board software. This is done within the Self-Controlled Area (SCA), an area of flight operations established during poor visibility or low ceilings around an airport without Air Traffic Control (ATC) services. The research described in this paper expands the HVO concept to include most off-nominal situations that could be expected to occur in a future SATS environment. The situations were categorized into routine off-nominal operations, procedural deviations, equipment malfunctions, and aircraft emergencies. The combination of normal and off-nominal HVO procedures provides evidence for an operational concept that is safe, requires little ground infrastructure, and enables concurrent flight operations in poor weather.
20. Influences of HVO and FAME on the combustion and emissions of modern passenger car diesel engines; Einfluesse von HVO und FAME auf die Verbrennung und Emissionen moderner PKW-Dieselmotoren
Energy Technology Data Exchange (ETDEWEB)
Stengel, Benjamin [Rostock Univ. (Germany). Lehrstuhl fuer Kolbenmaschinen und Verbrennungsmotoren; Sadlowski, Thomas; Wichmann, Volker; Harndorf, Horst
2014-08-01
In the framework of this study, engine tests were performed with FAME (fatty acid methyl ester) and HVO (hydrotreated vegetable oil) as straight fuels using a EURO-VI passenger car diesel engine. Standard diesel fuel (EN 590) was used as reference. To analyze the impacts of the biofuels on the combustion process, the heat release rates were calculated from in-cylinder pressure measurements using a single-zone model. Furthermore, emissions were measured and ECU data were recorded. Results from the engine tests showed that both HVO and FAME positively affect the combustion through a decreased ignition delay due to their higher cetane numbers. Raw exhaust emissions of soot were clearly reduced with HVO, while CO and THC emissions showed minor reductions. During FAME operation, ECU control settings were shifted due to the fuel's lower heating value. FAME reduced soot by 60 %, which is caused by the fuel's oxygen content, while NOx emissions were slightly increased. However, a fuel-adapted ECU calibration could optimize, e.g., the injection timing and EGR to further reduce emissions. Tailpipe emissions were not affected by HVO and FAME, as the exhaust aftertreatment systems worked similarly efficiently for all three fuels.
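The single-zone heat-release analysis mentioned in the abstract above is a standard combustion-diagnostics technique. A minimal sketch of the generic textbook formulation (constant ratio of specific heats assumed; not necessarily the authors' exact model):

```python
# Apparent heat-release rate from measured in-cylinder pressure, single-zone
# first-law formulation:
#   dQ/dtheta = gamma/(gamma-1) * p * dV/dtheta + 1/(gamma-1) * V * dp/dtheta
# gamma is held constant here (an assumption real analyses refine).
import numpy as np

def heat_release_rate(theta_deg, p_pa, v_m3, gamma=1.35):
    """Return dQ/dtheta (J per degree crank angle) for sampled p and V."""
    dv = np.gradient(v_m3, theta_deg)   # dV/dtheta
    dp = np.gradient(p_pa, theta_deg)   # dp/dtheta
    return gamma / (gamma - 1.0) * p_pa * dv + 1.0 / (gamma - 1.0) * v_m3 * dp

# Constant-volume check: the formula reduces to V/(gamma-1) * dp/dtheta
theta = np.linspace(0.0, 10.0, 11)   # crank angle, deg (synthetic)
p = 1.0e5 + 1.0e4 * theta            # linearly rising pressure, Pa
v = np.full_like(theta, 1.0e-3)      # fixed volume, m^3
q = heat_release_rate(theta, p, v)
print(round(float(q[5]), 2))  # 28.57
```

Real analyses add further terms (wall heat transfer, crevice flows) and a crank-angle-resolved volume function from engine geometry; the sketch shows only the core pressure-trace calculation.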
1. Lava lake activity at the summit of Kīlauea Volcano in 2016
Science.gov (United States)
Patrick, Matthew R.; Orr, Tim R.; Swanson, Donald A.; Elias, Tamar; Shiro, Brian
2018-04-10
gas emissions created volcanic air pollution (vog) that affected large areas of the Island of Hawai‘i. The summit eruption has been a major attraction for visitors in Hawai‘i Volcanoes National Park. During 2016, the rising lake levels allowed the lake and its spattering to be more consistently visible from public viewing areas, enhancing the visitor experience. The U.S. Geological Survey’s Hawaiian Volcano Observatory (HVO) closely monitors the summit eruption and keeps emergency managers and the public informed of activity.
2. An overview of the Icelandic Volcano Observatory response to the on-going rifting event at Bárðarbunga (Iceland) and the SO2 emergency associated with the gas-rich eruption in Holuhraun
Science.gov (United States)
Barsotti, Sara; Jonsdottir, Kristin; Roberts, Matthew J.; Pfeffer, Melissa A.; Ófeigsson, Benedikt G.; Vögfjord, Kristin; Stefánsdóttir, Gerður; Jónasdóttir, Elin B.
2015-04-01
On 16 August 2014, Bárðarbunga volcano entered a new phase of unrest. Elevated seismicity, with up to thousands of earthquakes detected per day, and significant deformation were observed around the Bárðarbunga caldera. A dike intrusion was monitored for almost two weeks until a small, short-lived effusive eruption began on 29 August in Holuhraun. Two days later a second, more intense, tremendously gas-rich eruption started that is still (as of writing) ongoing. The Icelandic Volcano Observatory (IVO), within the Icelandic Meteorological Office (IMO), monitors all the volcanoes in Iceland. Responsibilities include evaluating their related hazards, issuing warnings to the public and Civil Protection, and providing information regarding risks to aviation, including a weekly summary of volcanic activity provided to the Volcanic Ash Advisory Center in London. IVO has monitored the Bárðarbunga unrest phase since its beginning with the support of international colleagues and, in collaboration with the University of Iceland and the Environment Agency of Iceland, provides scientific support and interpretation of the ongoing phenomena to the local Civil Protection. The Aviation Color Code, for preventing hazards to aviation due to ash-cloud encounters, has been widely used and changed as soon as new observations and geophysical data from the monitoring network have suggested a potential evolution in the volcanic crisis. Since the onset of the eruption, IVO has been monitoring the gas emission using different and complementary instruments aimed at analyzing the plume composition as well as estimating the gaseous fluxes. SO2 rates have been measured with both real-time scanning DOASes and occasional mobile DOAS traverses, near the eruption site and in the far field. During the first month-and-a-half of the eruption, an average flux equal to 400 kg/s was registered, with peaks exceeding 1,000 kg/s. Along with these measurements the dispersal model CALPUFF has
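The DOAS-based flux estimates mentioned above follow a standard recipe: a traverse under the plume yields SO2 column densities, and integrating them across the plume and multiplying by the plume (wind) speed gives a mass flux. A hedged sketch with made-up numbers (the column amounts, segment length, and wind speed are assumptions, not measurements from this eruption):

```python
# Sketch of an SO2 flux estimate from a mobile-DOAS traverse:
#   flux = wind speed * sum(column density * path segment across the plume)
# Assumes the traverse is perpendicular to the plume; real retrievals also
# correct for traverse geometry and plume height.

AVOGADRO = 6.022e23
M_SO2 = 0.064  # kg/mol

def so2_flux_kg_s(columns_molec_cm2, segment_m, wind_m_s):
    """Integrate SO2 columns along a traverse and scale by plume speed."""
    total = 0.0
    for scd in columns_molec_cm2:
        kg_m2 = scd * 1e4 / AVOGADRO * M_SO2   # molecules/cm^2 -> kg/m^2
        total += kg_m2 * segment_m             # kg per metre of plume travel
    return total * wind_m_s

# Hypothetical traverse: 50 samples of 1e18 molec/cm^2, 100 m apart, 10 m/s wind
flux = so2_flux_kg_s([1e18] * 50, 100.0, 10.0)
print(round(flux, 1))  # 53.1 kg/s for these made-up numbers
```

The hundreds-of-kg/s fluxes reported for Holuhraun correspond to much denser, broader plumes than this toy traverse; the sketch only shows the unit conversion and integration.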
3. Volcano warning systems: Chapter 67
Science.gov (United States)
Gregg, Chris E.; Houghton, Bruce F.; Ewert, John W.
2015-01-01
Messages conveying volcano alert levels, such as Watches and Warnings, are designed to provide people with risk information before, during, and after eruptions. Information is communicated to people from volcano observatories and emergency management agencies and from informal sources and social and environmental cues. Any individual or agency can be both a message sender and a recipient, and multiple messages received from multiple sources are the norm in a volcanic crisis. Significant challenges to developing effective warning systems for volcanic hazards stem from the great diversity in unrest, eruption, and post-eruption processes and the rapidly advancing digital technologies that people use to seek real-time risk information. Challenges also involve the need to invest resources before unrest to help people develop shared mental models of important risk factors. Two populations of people are the target of volcano notifications: ground- and aviation-based populations, and volcano warning systems must address both distinctly different populations.
4. Aligning petrology with geophysics: the Father's Day intrusion and eruption, Kīlauea Volcano, Hawaii
Science.gov (United States)
Salem, L. C.; Edmonds, M.; Maclennan, J.; Houghton, B. F.; Poland, M. P.
2016-12-01
The Father's Day 2007 eruption at Kīlauea Volcano, Hawaii, is an unprecedented opportunity to align geochemical techniques with the exceptionally detailed volcano monitoring data collected by the Hawaiian Volcano Observatory (HVO). Increased CO2 emissions were measured during a period of inflation at the summit of Kīlauea in 2003-2007, suggesting that the rate of magma supply to the summit had increased [Poland et al., 2012]. The June 2007 Father's Day eruption in the East Rift Zone (ERZ) occurred at the peak of the summit inflation. It offers the potential to sample magmas that have ascended on short timescales prior to 2007 from the lower crust, and perhaps mantle, with limited fractionation in the summit reservoir. The bulk rock composition of the erupted lavas is certainly consistent with this idea, with >8.5 wt% MgO compared to a typical 7.0-7.5 wt% for contemporaneous PuuOo ERZ lavas. However, our analysis of the major and trace element chemistry of olivine-hosted melt inclusions shows that the melts are in fact relatively evolved, with Mg# eruptions, e.g. Kīlauea Iki. The magma evidently entrained a crystal cargo of more primitive olivines, compositionally typical of summit eruption magma (with 81-84 mol% Fo). The melt inclusion chemistry shows homogenized and narrowly distributed trace element ratios, medium/low CO2 abundances and high concentrations of sulfur (unlike typical ERZ magmas). However, the chemistry is unlike melts that have partially bypassed the summit reservoir, e.g. those erupted at Kīlauea Iki, Mauna Ulu. We suggest that the Father's Day magma had been resident in the magma reservoir prior to the 2003-2007 inflation, and was evacuated from the reservoir into the ERZ in response to the increased rate of intrusion of magma from depth. Dissolved volatile contents along profiles in embayments ("open" melt inclusions) were measured and compared to diffusion models to predict timescales of magma decompression prior to eruption. These are
5. Characteristics of Offshore Hawaiʻi Island Seismicity and Velocity Structure, including Loʻihi Submarine Volcano
Science.gov (United States)
Merz, D. K.; Caplan-Auerbach, J.; Thurber, C. H.
2013-12-01
The Island of Hawaiʻi is home to the most active volcanoes in the Hawaiian Islands. The island's isolated nature, combined with the lack of permanent offshore seismometers, creates difficulties in recording small magnitude earthquakes with accuracy. This background offshore seismicity is crucial in understanding the structure of the lithosphere around the island chain, the stresses on the lithosphere generated by the weight of the islands, and how the volcanoes interact with each other offshore. This study uses the data collected from a 9-month deployment of a temporary ocean bottom seismometer (OBS) network fully surrounding Loʻihi volcano. This allowed us to widen the aperture of earthquake detection around the Big Island, lower the magnitude detection threshold, and better constrain the hypocentral depths of offshore seismicity that occurs between the OBS network and the Hawaiian Volcano Observatory's land-based network. Although this study occurred during a time of volcanic quiescence for Loʻihi, it establishes a basis for background seismicity of the volcano. More than 480 earthquakes were located using the OBS network, incorporating data from the HVO network where possible. Here we present relocated hypocenters determined with the double-difference earthquake location algorithm HypoDD (Waldhauser & Ellsworth, 2000), as well as tomographic images for a 30 km square area around the summit of Loʻihi. Offshore seismicity during this study is punctuated by events locating in the mantle fault zone 30-50 km deep. These events reflect rupture on preexisting faults in the lower lithosphere caused by stresses induced by volcano loading and flexure of the Pacific Plate (Wolfe et al., 2004; Pritchard et al., 2007). Tomography was performed using the double-difference seismic tomography method TomoDD (Zhang & Thurber, 2003) and showed overall velocities to be slower than
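The double-difference method cited in the abstract above works on differences of travel times between nearby event pairs recorded at a common station, which cancels much of the shared path and velocity-model error. A toy illustration under a deliberately simplified constant-velocity, straight-ray assumption (this is the idea behind HypoDD, not its actual implementation):

```python
# Toy double-difference residual: for two nearby events observed at the same
# station, the datum is the difference of travel times, observed minus
# calculated. Straight rays in a constant-velocity medium; illustrative only.
import math

V = 5.0  # assumed constant P-wave speed, km/s (a placeholder value)

def travel_time(event_xyz, station_xyz, v=V):
    """Straight-ray travel time between an event and a station (km, s)."""
    return math.dist(event_xyz, station_xyz) / v

def double_difference(ev_a, ev_b, station, obs_tt_a, obs_tt_b):
    """dr = (t_a - t_b)_observed - (t_a - t_b)_calculated at one station."""
    calc = travel_time(ev_a, station) - travel_time(ev_b, station)
    obs = obs_tt_a - obs_tt_b
    return obs - calc

# If the trial locations are correct, the residual vanishes:
a, b, sta = (0.0, 0.0, 30.0), (1.0, 0.0, 32.0), (20.0, 5.0, 0.0)
r = double_difference(a, b, sta, travel_time(a, sta), travel_time(b, sta))
print(abs(r) < 1e-12)  # True
```

HypoDD then inverts many such residuals, over many event pairs and stations, for relative location adjustments; the sketch only shows the datum itself.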
6. What Are Volcano Hazards?
Science.gov (United States)
... Sheet 002-97 Revised March 2008 What Are Volcano Hazards? Volcanoes give rise to numerous geologic and ... as far as 15 miles from the volcano. Volcano Landslides A landslide or debris avalanche is a ...
7. Probing magma reservoirs to improve volcano forecasts
Science.gov (United States)
Lowenstern, Jacob B.; Sisson, Thomas W.; Hurwitz, Shaul
2017-01-01
When it comes to forecasting eruptions, volcano observatories rely mostly on real-time signals from earthquakes, ground deformation, and gas discharge, combined with probabilistic assessments based on past behavior [Sparks and Cashman, 2017]. There is comparatively less reliance on geophysical and petrological understanding of subsurface magma reservoirs.
8. Reducing volcanic risk; are we winning some battles but losing the war?
Science.gov (United States)
Tilling, R.I.
1991-01-01
Historically, significant advances in volcanology have been catalyzed by volcanic disasters or crises, reflecting the simple fact that volcanoes seem to receive serious scientific and public attention only when they cause, or threaten to cause, trouble. For example, three deadly eruptions in 1902 (Mont Pelée, Santa María, and Soufrière, St. Vincent) spurred the movement to establish permanent volcano observatories there. Profoundly impressed by the devastation caused by Mont Pelée, Thomas A. Jaggar, Jr. founded the Hawaiian Volcano Observatory (HVO) in 1912. Since then, studies conducted at HVO and new observatories have been pivotal in transforming the nascent science of volcanology into the multidisciplinary science that it is today.
9. Instrumentation Recommendations for Volcano Monitoring at U.S. Volcanoes Under the National Volcano Early Warning System
Science.gov (United States)
Moran, Seth C.; Freymueller, Jeff T.; LaHusen, Richard G.; McGee, Kenneth A.; Poland, Michael P.; Power, John A.; Schmidt, David A.; Schneider, David J.; Stephens, George; Werner, Cynthia A.; White, Randall A.
2008-01-01
As magma moves toward the surface, it interacts with anything in its path: hydrothermal systems, cooling magma bodies from previous eruptions, and (or) the surrounding 'country rock'. Magma also undergoes significant changes in its physical properties as pressure and temperature conditions change along its path. These interactions and changes lead to a range of geophysical and geochemical phenomena. The goal of volcano monitoring is to detect and correctly interpret such phenomena in order to provide early and accurate warnings of impending eruptions. Given the well-documented hazards posed by volcanoes to both ground-based populations (for example, Blong, 1984; Scott, 1989) and aviation (for example, Neal and others, 1997; Miller and Casadevall, 2000), volcano monitoring is critical for public safety and hazard mitigation. Only with adequate monitoring systems in place can volcano observatories provide accurate and timely forecasts and alerts of possible eruptive activity. At most U.S. volcanoes, observatories traditionally have employed a two-component approach to volcano monitoring: (1) install instrumentation sufficient to detect unrest at volcanic systems likely to erupt in the not-too-distant future; and (2) once unrest is detected, install any instrumentation needed for eruption prediction and monitoring. This reactive approach is problematic, however, for two reasons. 1. At many volcanoes, rapid installation of new ground-based instruments is difficult or impossible. Factors that complicate rapid response include (a) eruptions that are preceded by short (hours to days) precursory sequences of geophysical and (or) geochemical activity, as occurred at Mount Redoubt (Alaska) in 1989 (24 hours), Anatahan (Mariana Islands) in 2003 (6 hours), and Mount St. Helens (Washington) in 1980 and 2004 (7 and 8 days, respectively); (b) inclement weather conditions, which may prohibit installation of new equipment for days, weeks, or even months, particularly at
10. Iridium emissions from Hawaiian volcanoes
International Nuclear Information System (INIS)
Finnegan, D.L.; Zoller, W.H.; Miller, T.M.
1988-01-01
Particle and gas samples were collected at Mauna Loa volcano during and after its eruption in March and April 1984, and at Kilauea volcano in 1983, 1984, and 1985 during various phases of its ongoing activity. In the last two Kilauea sampling missions, samples were collected during eruptive activity. The samples were collected using a filterpack system consisting of a Teflon particle filter followed by a series of 4 base-treated Whatman filters. The samples were analyzed by INAA for over 40 elements. As previously reported in the literature, Ir was first detected on particle filters at the Mauna Loa Observatory and later from non-erupting high temperature vents at Kilauea. Since that time, Ir has been found in samples collected at Kilauea and Mauna Loa during fountaining activity as well as after eruptive activity. Enrichment factors for Ir in the volcanic fumes range from 10,000 to 100,000 relative to BHVO. Charcoal-impregnated filters following a particle filter were collected to see if a significant amount of the Ir was in the gas phase during sample collection. Iridium was found on charcoal filters collected close to the vent; no Ir was found on charcoal filters collected farther from it. This indicates that all of the Ir is in particulate form very soon after its release. Ratios of Ir to F and Cl were calculated for the samples from Mauna Loa and Kilauea collected during fountaining activity. The implications for the KT Ir anomaly are still unclear, though, as Ir was not found at volcanoes other than those in Hawaii. Further investigations are needed at other volcanoes to ascertain if basaltic volcanoes other than hot spots have Ir enrichments in their fumes
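The enrichment factors quoted in the abstract above are ratios of ratios: an element-to-reference ratio in the fume sample normalized by the same ratio in the BHVO basalt standard. A sketch with hypothetical concentrations (the reference element and all numbers below are assumptions; a refractory element such as Al is a typical choice):

```python
# Sketch of the enrichment-factor calculation implied by the abstract:
#   EF = (X / ref)_sample / (X / ref)_BHVO
# Units cancel as long as sample and standard use the same units.

def enrichment_factor(x_sample, ref_sample, x_bhvo, ref_bhvo):
    """EF of element X in a sample relative to the BHVO standard."""
    return (x_sample / ref_sample) / (x_bhvo / ref_bhvo)

# Hypothetical concentrations chosen to land in the quoted 1e4-1e5 range:
ef = enrichment_factor(x_sample=1e-3, ref_sample=1.0,
                       x_bhvo=1e-8, ref_bhvo=1.0)
print(int(round(ef)))  # 100000
```

An EF of order 1e4-1e5, as reported for Ir, means the fume is enriched in the element by four to five orders of magnitude over what the bulk basalt composition would predict.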
11. Iridium emissions from Hawaiian volcanoes
Science.gov (United States)
Finnegan, D. L.; Zoller, W. H.; Miller, T. M.
1988-01-01
Particle and gas samples were collected at Mauna Loa volcano during and after its eruption in March and April 1984, and at Kilauea volcano in 1983, 1984, and 1985 during various phases of its ongoing activity. In the last two Kilauea sampling missions, samples were collected during eruptive activity. The samples were collected using a filterpack system consisting of a Teflon particle filter followed by a series of 4 base-treated Whatman filters. The samples were analyzed by INAA for over 40 elements. As previously reported in the literature, Ir was first detected on particle filters at the Mauna Loa Observatory and later from non-erupting high temperature vents at Kilauea. Since that time, Ir has been found in samples collected at Kilauea and Mauna Loa during fountaining activity as well as after eruptive activity. Enrichment factors for Ir in the volcanic fumes range from 10,000 to 100,000 relative to BHVO. Charcoal-impregnated filters following a particle filter were collected to see if a significant amount of the Ir was in the gas phase during sample collection. Iridium was found on charcoal filters collected close to the vent; no Ir was found on charcoal filters collected farther from it. This indicates that all of the Ir is in particulate form very soon after its release. Ratios of Ir to F and Cl were calculated for the samples from Mauna Loa and Kilauea collected during fountaining activity. The implications for the KT Ir anomaly are still unclear, though, as Ir was not found at volcanoes other than those in Hawaii. Further investigations are needed at other volcanoes to ascertain if basaltic volcanoes other than hot spots have Ir enrichments in their fumes.
12. The 2014 eruptions of Pavlof Volcano, Alaska
Science.gov (United States)
Waythomas, Christopher F.; Haney, Matthew M.; Wallace, Kristi; Cameron, Cheryl E.; Schneider, David J.
2017-12-22
across the region results in a relatively large number of airborne observations of eruptive activity. During the 2014 Pavlof eruptions, the Alaska Volcano Observatory received observations and photographs from pilots and local observers, which aided evaluation of the eruptive activity and the areas affected by eruptive products. This report outlines the chronology of events associated with the 2014 eruptive activity at Pavlof Volcano, provides documentation of the style and character of the eruptive episodes, and reports briefly on the eruptive products and impacts. The principal observations are described and portrayed on maps and photographs, and the 2014 eruptive activity is compared to historical eruptions.
13. Advances in volcano monitoring and risk reduction in Latin America
Science.gov (United States)
McCausland, W. A.; White, R. A.; Lockhart, A. B.; Marso, J. N.; Assistance Program, V. D.; Volcano Observatories, L. A.
2014-12-01
We describe results of cooperative work that advanced volcanic monitoring and risk reduction. The USGS-USAID Volcano Disaster Assistance Program (VDAP) was initiated in 1986 after disastrous lahars during the 1985 eruption of Nevado del Ruiz dramatized the need to advance international capabilities in volcanic monitoring, eruption forecasting and hazard communication. For the past 28 years, VDAP has worked with our partners to improve observatories, strengthen monitoring networks, and train observatory personnel. We highlight a few of the many accomplishments by Latin American volcano observatories. Advances in monitoring, assessment and communication, and lessons learned from the lahars of the 1985 Nevado del Ruiz eruption and the 1994 Paez earthquake enabled the Servicio Geológico Colombiano to issue timely, life-saving warnings for 3 large syn-eruptive lahars at Nevado del Huila in 2007 and 2008. In Chile, the 2008 eruption of Chaitén prompted SERNAGEOMIN to complete a national volcanic vulnerability assessment that led to a major increase in volcano monitoring. Throughout Latin America improved seismic networks now telemeter data to observatories where the decades-long background rates and types of seismicity have been characterized at over 50 volcanoes. Standardization of the Earthworm data acquisition system has enabled data sharing across international boundaries, of paramount importance during both regional tectonic earthquakes and during volcanic crises when vulnerabilities cross international borders. Sharing of seismic forecasting methods led to the formation of the international organization of Latin American Volcano Seismologists (LAVAS). LAVAS courses and other VDAP training sessions have led to international sharing of methods to forecast eruptions through recognition of precursors and to reduce vulnerabilities from all volcano hazards (flows, falls, surges, gas) through hazard assessment, mapping and modeling. Satellite remote sensing data
14. Geophysical monitoring of the Purace volcano, Colombia
Directory of Open Access Journals (Sweden)
M. Arcila
1996-06-01
Located in the extreme northwestern part of the Los Coconucos volcanic chain in the Central Cordillera, the Purace is one of Colombia's most active volcanoes. Recent geological studies indicate an eruptive history of mainly explosive behavior, which was marked most recently by a minor ash eruption in 1977. Techniques used to forecast the renewal of activity of volcanoes after a long period of quiescence include the monitoring of seismicity and ground deformation near the volcano. As a first approach toward the monitoring of the Purace volcano, the Southwest Seismological Observatory (OSSO), located in the city of Cali, set up one seismic station in 1986. Beginning in June 1991, the seismic signals have also been transmitted to the Colombian Geological Survey (INGEOMINAS) at the Volcanological and Seismological Observatory (OVS-UOP), located in the city of Popayan. Two more seismic stations were installed early in 1994, forming a minimum seismic network, and a geodetic monitoring program for ground deformation studies was established and conducted by INGEOMINAS.
15. Magma supply, storage, and transport at shield-stage Hawaiian volcanoes: Chapter 5 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Poland, Michael P.; Miklius, Asta; Montgomery-Brown, Emily K.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
The characteristics of magma supply, storage, and transport are among the most critical parameters governing volcanic activity, yet they remain largely unconstrained because all three processes are hidden beneath the surface. Hawaiian volcanoes, particularly Kīlauea and Mauna Loa, offer excellent prospects for studying subsurface magmatic processes, owing to their accessibility and frequent eruptive and intrusive activity. In addition, the Hawaiian Volcano Observatory, founded in 1912, maintains long records of geological, geophysical, and geochemical data. As a result, Hawaiian volcanoes have served as both a model for basaltic volcanism in general and a starting point for many studies of volcanic processes.
16. Global Volcano Locations Database
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — NGDC maintains a database of over 1,500 volcano locations obtained from the Smithsonian Institution Global Volcanism Program, Volcanoes of the World publication. The...
17. A Scientific Excursion: Volcanoes.
Science.gov (United States)
Olds, Henry, Jr.
1983-01-01
Reviews an educationally valuable and reasonably well-designed simulation of volcanic activity in an imaginary land. VOLCANOES creates an excellent context for learning information about volcanoes and for developing skills and practicing methods needed to study behavior of volcanoes. (Author/JN)
18. Volcano seismology
Science.gov (United States)
Chouet, B.
2003-01-01
A fundamental goal of volcano seismology is to understand active magmatic systems, to characterize the configuration of such systems, and to determine the extent and evolution of source regions of magmatic energy. Such understanding is critical to our assessment of eruptive behavior and its hazardous impacts. With the emergence of portable broadband seismic instrumentation, availability of digital networks with wide dynamic range, and development of new powerful analysis techniques, rapid progress is being made toward a synthesis of high-quality seismic data to develop a coherent model of eruption mechanics. Examples of recent advances are: (1) high-resolution tomography to image subsurface volcanic structures at scales of a few hundred meters; (2) use of small-aperture seismic antennas to map the spatio-temporal properties of long-period (LP) seismicity; (3) moment tensor inversions of very-long-period (VLP) data to derive the source geometry and mass-transport budget of magmatic fluids; (4) spectral analyses of LP events to determine the acoustic properties of magmatic and associated hydrothermal fluids; and (5) experimental modeling of the source dynamics of volcanic tremor. These promising advances provide new insights into the mechanical properties of volcanic fluids and subvolcanic mass-transport dynamics. As new seismic methods refine our understanding of seismic sources, and geochemical methods better constrain mass balance and magma behavior, we face new challenges in elucidating the physico-chemical processes that cause volcanic unrest and its seismic and gas-discharge manifestations. Much work remains to be done toward a synthesis of seismological, geochemical, and petrological observations into an integrated model of volcanic behavior. Future important goals must include: (1) interpreting the key types of magma movement, degassing and boiling events that produce characteristic seismic phenomena; (2) characterizing multiphase fluids in subvolcanic
19. Punctuated Evolution of Volcanology: An Observatory Perspective
Science.gov (United States)
Burton, W. C.; Eichelberger, J. C.
2010-12-01
20. Global Volcano Model
Science.gov (United States)
Sparks, R. S. J.; Loughlin, S. C.; Cottrell, E.; Valentine, G.; Newhall, C.; Jolly, G.; Papale, P.; Takarada, S.; Crosweller, S.; Nayembil, M.; Arora, B.; Lowndes, J.; Connor, C.; Eichelberger, J.; Nadim, F.; Smolka, A.; Michel, G.; Muir-Wood, R.; Horwell, C.
2012-04-01
Over 600 million people live close enough to active volcanoes to be affected when they erupt. Volcanic eruptions cause loss of life, significant economic losses and severe disruption to people's lives, as highlighted by the recent eruption of Mount Merapi in Indonesia. The eruption of Eyjafjallajökull, Iceland in 2010 illustrated the potential of even small eruptions to have major impact on the modern world through disruption of complex critical infrastructure and business. The effects in the developing world on economic growth and development can be severe. There is evidence that large eruptions can cause a change in the Earth's climate for several years afterwards. Aside from meteor impact and possibly an extreme solar event, very large magnitude explosive volcanic eruptions may be the only natural hazard that could cause a global catastrophe. GVM is a growing international collaboration that aims to create a sustainable, accessible information platform on volcanic hazard and risk. We are designing and developing an integrated database system of volcanic hazards, vulnerability and exposure with internationally agreed metadata standards. GVM will establish methodologies for analysis of the data (e.g. vulnerability indices) to inform risk assessment, develop complementary hazards models and create relevant hazards and risk assessment tools. GVM will develop the capability to anticipate future volcanism and its consequences. NERC is funding the start-up of this initiative for three years from November 2011. GVM builds directly on the VOGRIPA project started as part of the GRIP (Global Risk Identification Programme) in 2004 under the auspices of the World Bank and UN. Major international initiatives and partners such as the Smithsonian Institution - Global Volcanism Program, State University of New York at Buffalo - VHub, Earth Observatory of Singapore - WOVOdat and many others underpin GVM.
1. Volcanoes: observations and impact
Science.gov (United States)
Thurber, Clifford; Prejean, Stephanie G.
2012-01-01
Volcanoes are critical geologic hazards that challenge our ability to make long-term forecasts of their eruptive behaviors. They also have direct and indirect impacts on human lives and society. As is the case with many geologic phenomena, the time scales over which volcanoes evolve greatly exceed that of a human lifetime. On the other hand, the time scale over which a volcano can move from inactivity to eruption can be rather short: months, weeks, days, and even hours. Thus, scientific study and monitoring of volcanoes is essential to mitigate risk. There are thousands of volcanoes on Earth, and it is impractical to study and implement ground-based monitoring at them all. Fortunately, there are other effective means for volcano monitoring, including increasing capabilities for satellite-based technologies.
2. Private Observatories in South Africa
Science.gov (United States)
Rijsdijk, C.
2016-12-01
Descriptions of private observatories in South Africa, written by their owners. Positions, equipment descriptions and observing programmes are given. Included are: Klein Karoo Observatory (B. Monard), Cederberg Observatory (various), Centurion Planetary and Lunar Observatory (C. Foster), Le Marischel Observatory (L. Ferreira), Sterkastaaing Observatory (M. Streicher), Henley on Klip (B. Fraser), Archer Observatory (B. Dumas), Overbeek Observatory (A. Overbeek), Overberg Observatory (A. van Staden), St Cyprian's School Observatory, Fisherhaven Small Telescope Observatory (J. Retief), COSPAR 0433 (G. Roberts), COSPAR 0434 (I. Roberts), Weltevreden Karoo Observatory (D. Bullis), Winobs (M. Shafer)
3. European Southern Observatory
CERN Multimedia
CERN PhotoLab
1970-01-01
Professor A. Blaauw, Director general of the European Southern Observatory, with George Hampton on his right, signs the Agreement covering collaboration with CERN in the construction of the large telescope to be installed at the ESO Observatory in Chile.
4. Visions of Volcanoes
Directory of Open Access Journals (Sweden)
David M. Pyle
2017-12-01
Full Text Available The long nineteenth century marked an important transition in the understanding of the nature of combustion and fire, and of volcanoes and the interior of the earth. It was also a period when dramatic eruptions of Vesuvius lit up the night skies of Naples, providing ample opportunities for travellers, natural philosophers, and early geologists to get up close to the glowing lavas of an active volcano. This article explores written and visual representations of volcanoes and volcanic activity during the period, with the particular perspective of writers from the non-volcanic regions of northern Europe. I explore how the language of ‘fire’ was used in both first-hand and fictionalized accounts of people's interactions with volcanoes and experiences of volcanic phenomena, and see how the routine or implicit linkage of ‘fire’ with ‘combustion’ as an explanation for the deep forces at play within and beneath volcanoes slowly changed as the formal scientific study of volcanoes developed. I show how Vesuvius was used as a ‘model’ volcano in science and literature and how, later, following devastating eruptions in Indonesia and the Caribbean, volcanoes took on a new dimension as contemporary agents of death and destruction.
5. Assessing individual and organizational response to volcanic crisis and unrest at Kīlauea and Mauna Loa volcanoes, Hawai'i
Science.gov (United States)
Reeves, Ashleigh; Gregg, Chris; Lindell, Michael; Prater, Carla; Joyner, Timothy; Eggert, Sarah
2017-04-01
This study describes response to and preparedness for eruption and unrest at Kīlauea and Mauna Loa volcanoes, respectively. The on-going 1983-present eruption of Kīlauea's East Rift Zone (ERZ) has generated a series of lava flow crises, the latest occurring in 2014 and 2015 when lava from a new vent flowed northeast and into the perimeter of developed areas in the lower Puna District, some 20km distant. It took ca. 2 months for the June 27 lava flow to advance a distance to which scientists reported it might be a concern to people downslope, but this prompted widespread formal and informal responses and culminated in improvements to infrastructure, voluntary evacuations of residents and businesses and closure of schools. Unlike Kīlauea, which has had frequent crises since the mid-20th century, the last eruption of nearby Mauna Loa occurred in 1984 and the last eruption and crisis on its Southwest Rift Zone (SWZ) was in 1950, so residents there are less familiar with eruptions than in Puna. In September 2015, the US Geological Survey, Hawaiian Volcano Observatory upgraded Mauna Loa's Alert Level from Normal to Advisory due to increases in unrest above known background levels. A crisis on Mauna Loa's SWZ would likely be much different than the recent 2014-15 crisis at Kīlauea as steep topography downslope of the SWZ and typical high discharge rates mean lava flows move fast, posing increased risk to areas downslope. Typically, volcanic eruptions have significant economic consequences out of proportion with their magnitudes. Furthermore, uncertainties regarding the physical and organizational communication of risk information amplify these economic losses. One significant impediment to risk communication is limited knowledge about the most effective ways to verbally, numerically and graphically communicate scientific uncertainty. This was a challenge in the recent lava flow crisis on Kīlauea. The public's demand for near-real time information updates, including
6. Mauna Kea volcano's ongoing 18-year swarm
Science.gov (United States)
Wech, A.; Thelen, W. A.
2017-12-01
Mauna Kea is a large postshield-stage volcano that forms the highest peak on Hawaii Island. The 4,205-meter high volcano erupted most recently between 6,000 and 4,500 years ago and exhibits relatively low rates of seismicity, which are mostly tectonic in origin resulting from lithospheric flexure under the weight of the volcano. Here we identify deep repeating earthquakes occurring beneath the summit of Mauna Kea. These earthquakes, which are not part of the Hawaiian Volcano Observatory's regional network catalog, were initially detected through a systematic search for coherent seismicity using envelope cross-correlation, and subsequent analysis revealed the presence of a long-term, ongoing swarm. The events have energy concentrated at 2-7 Hz, and can be seen in filtered waveforms dating back to the earliest continuous data from a single station archived at IRIS from November 1999. We use a single-station (3 component) match-filter analysis to create a catalog of the repeating earthquakes for the past 18 years. Using two templates created through phase-weighted stacking of thousands of sta/lta-triggers, we find hundreds of thousands of M1.3-1.6 earthquakes repeating every 7-12 minutes throughout this entire time period, with many smaller events occurring in between. The earthquakes occur at 28-31 km depth directly beneath the summit within a conspicuous gap in seismicity surrounding the flanks of the volcano. Magnitudes and periodicity are remarkably stable long-term, but do exhibit slight variability and occasionally display higher variability on shorter time scales. Network geometry precludes obtaining a reliable focal mechanism, but we interpret the frequency content and hypocenters to infer a volcanic source distinct from the regional tectonic seismicity responding to the load of the island. In this model, the earthquakes may result from the slow, persistent degassing of a relic magma chamber at depth.
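The single-station match-filter analysis described in this record amounts to sliding a stacked template along the continuous waveform and flagging windows of high normalized cross-correlation. The sketch below illustrates the core idea only; the function name and threshold are hypothetical, and production workflows (e.g. those built on multi-channel matched-filter packages) add band-pass filtering, multi-station stacking, and magnitude estimation.

```python
import numpy as np

def matched_filter_detect(data, template, threshold=0.8):
    """Slide a waveform template along continuous data and return the sample
    indices where the normalized cross-correlation exceeds the threshold,
    together with the full correlation trace. Illustrative single-station
    version; a perfect match yields a correlation of 1.0."""
    n = len(template)
    # Pre-normalize the template so each window needs only its own statistics.
    t = (template - template.mean()) / (template.std() * n)
    cc = np.empty(len(data) - n + 1)
    for i in range(len(cc)):
        w = data[i:i + n]
        s = w.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, w - w.mean()) / s
    return np.where(cc >= threshold)[0], cc
```

Repeating earthquakes like those described beneath Mauna Kea's summit would appear as regularly spaced peaks in the correlation trace, one every 7-12 minutes over the 18-year record.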
7. Chronology and References of Volcanic Eruptions and Selected Unrest in the United States, 1980-2008
Science.gov (United States)
Diefenbach, Angela K.; Guffanti, Marianne; Ewert, John W.
2009-01-01
The United States ranks as one of the top countries in the world in the number of young, active volcanoes within its borders. The United States, including the Commonwealth of the Northern Mariana Islands, is home to approximately 170 geologically active volcanoes; these also exhibit episodes of anomalous activity, unrest, that do not culminate in eruptions. Monitoring volcanic activity in the United States is the responsibility of the U.S. Geological Survey (USGS) Volcano Hazards Program (VHP) and is accomplished with academic, Federal, and State partners. The VHP supports five Volcano Observatories - the Alaska Volcano Observatory (AVO), Cascades Volcano Observatory (CVO), Yellowstone Volcano Observatory (YVO), Long Valley Observatory (LVO), and Hawaiian Volcano Observatory (HVO). With the exception of HVO, which was established in 1912, the U.S. Volcano Observatories have been established in the past 27 years in response to specific volcanic eruptions or sustained levels of unrest. As understanding of volcanic activity and hazards has grown over the years, so have the extent and types of monitoring networks and techniques available to detect early signs of anomalous volcanic behavior. This increased capability is providing us with a more accurate gauge of volcanic activity in the United States. The purpose of this report is to (1) document the range of volcanic activity that U.S. Volcano Observatories have dealt with, beginning with the 1980 eruption of Mount St. Helens, (2) describe some overall characteristics of the activity, and (3) serve as a quick reference to pertinent published literature on the eruptions and unrest documented in this report.
8. Assigning a volcano alert level: negotiating uncertainty, risk, and complexity in decision-making processes
OpenAIRE
Carina J Fearnley
2013-01-01
A volcano alert level system (VALS) is used to communicate warning information from scientists to civil authorities managing volcanic hazards. This paper provides the first evaluation of how the decision-making process behind the assignation of an alert level, using forecasts of volcanic behaviour, operates in practice. Using interviews conducted from 2007 to 2009 at five USGS-managed (US Geological Survey) volcano observatories (Alaska, Cascades, Hawaii, Long Valley, and Yellowstone), two k...
9. Cook Inlet and Kenai Peninsula, Alaska ESI: VOLCANOS (Volcano Points)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains the locations of volcanos in Cook Inlet and Kenai Peninsula, Alaska. Vector points in the data set represent the location of the volcanos....
10. Volcanoes: Coming Up from Under.
Science.gov (United States)
Science and Children, 1980
1980-01-01
Provides specific information about the eruption of Mt. St. Helens in March 1980. Also discusses how volcanoes are formed and how they are monitored. Words associated with volcanoes are listed and defined. (CS)
11. Organizational changes at Earthquakes & Volcanoes
Science.gov (United States)
Gordon, David W.
1992-01-01
Primary responsibility for the preparation of Earthquakes & Volcanoes within the Geological Survey has shifted from the Office of Scientific Publications to the Office of Earthquakes, Volcanoes, and Engineering (OEVE). As a consequence of this reorganization, Henry Spall has stepped down as Science Editor for Earthquakes & Volcanoes (E&V).
12. Hawaii's volcanoes revealed
Science.gov (United States)
Eakins, Barry W.; Robinson, Joel E.; Kanamatsu, Toshiya; Naka, Jiro; Smith, John R.; Takahashi, Eiichi; Clague, David A.
2003-01-01
Hawaiian volcanoes typically evolve in four stages as volcanism waxes and wanes: (1) early alkalic, when volcanism originates on the deep sea floor; (2) shield, when roughly 95 percent of a volcano's volume is emplaced; (3) post-shield alkalic, when small-volume eruptions build scattered cones that thinly cap the shield-stage lavas; and (4) rejuvenated, when lavas of distinct chemistry erupt following a lengthy period of erosion and volcanic quiescence. During the early alkalic and shield stages, two or more elongate rift zones may develop as flanks of the volcano separate. Mantle-derived magma rises through a vertical conduit and is temporarily stored in a shallow summit reservoir from which magma may erupt within the summit region or be injected laterally into the rift zones. The ongoing activity at Kilauea's Puʻu ʻŌʻō cone that began in January 1983 is one such rift-zone eruption. The rift zones commonly extend deep underwater, producing submarine eruptions of bulbous pillow lava. Once a volcano has grown above sea level, subaerial eruptions produce lava flows of jagged, clinkery ʻaʻā or smooth, ropy pahoehoe. If the flows reach the ocean they are rapidly quenched by seawater and shatter, producing a steep blanket of unstable volcanic sediment that mantles the upper submarine slopes. Above sea level then, the volcanoes develop the classic shield profile of gentle lava-flow slopes, whereas below sea level slopes are substantially steeper. While the volcanoes grow rapidly during the shield stage, they may also collapse catastrophically, generating giant landslides and tsunami, or fail more gradually, forming slumps. Deformation and seismicity along Kilauea's south flank indicate that slumping is occurring there today. Loading of the underlying Pacific Plate by the growing volcanic edifices causes subsidence, forming deep basins at the base of the volcanoes. Once volcanism wanes and lava flows no longer reach the ocean, the volcano continues to submerge, while
13. TENCompetence Competence Observatory
NARCIS (Netherlands)
Vervenne, Luk
2010-01-01
Vervenne, L. (2007) TENCompetence Competence Observatory. Sources available http://tencompetence.cvs.sourceforge.net/viewvc/tencompetence/wp8/org.tencompetence.co/. Available under the three clause BSD license, copyright TENCompetence Foundation.
14. Long Baseline Observatory (LBO)
Data.gov (United States)
Federal Laboratory Consortium — The Long Baseline Observatory (LBO) comprises ten radio telescopes spanning 5,351 miles. It's the world's largest, sharpest, dedicated telescope array. With an eye...
15. The Pierre Auger Observatory
International Nuclear Information System (INIS)
Hojvat, C.
1997-03-01
The Pierre Auger Observatory is an international collaboration for the detailed study of the highest energy cosmic rays. It will operate at two similar sites, one in the northern hemisphere and one in the southern hemisphere. The Observatory is designed to collect a statistically significant data set of events with energies greater than 10¹⁹ eV and with equal exposures for the northern and southern skies.
16. Observatories and Telescopes of Modern Times
Science.gov (United States)
Leverington, David
2016-11-01
Preface; Part I. Optical Observatories: 1. Palomar Mountain Observatory; 2. The United States Optical Observatory; 3. From the Next Generation Telescope to Gemini and SOAR; 4. Competing primary mirror designs; 5. Active optics, adaptive optics and other technical innovations; 6. European Northern Observatory and Calar Alto; 7. European Southern Observatory; 8. Mauna Kea Observatory; 9. Australian optical observatories; 10. Mount Hopkins' Whipple Observatory and the MMT; 11. Apache Point Observatory; 12. Carnegie Southern Observatory (Las Campanas); 13. Mount Graham International Optical Observatory; 14. Modern optical interferometers; 15. Solar observatories; Part II. Radio Observatories: 16. Australian radio observatories; 17. Cambridge Mullard Radio Observatory; 18. Jodrell Bank; 19. Early radio observatories away from the Australian-British axis; 20. The American National Radio Astronomy Observatory; 21. Owens Valley and Mauna Kea; 22. Further North and Central American observatories; 23. Further European and Asian radio observatories; 24. ALMA and the South Pole; Name index; Optical observatory and telescope index; Radio observatory and telescope index; General index.
17. The added value of time-variable microgravimetry to the understanding of how volcanoes work
Science.gov (United States)
Carbone, Daniele; Poland, Michael; Greco, Filippo; Diament, Michel
2017-01-01
During the past few decades, time-variable volcano gravimetry has shown great potential for imaging subsurface processes at active volcanoes (including some processes that might otherwise remain “hidden”), especially when combined with other methods (e.g., ground deformation, seismicity, and gas emissions). By supplying information on changes in the distribution of bulk mass over time, gravimetry can provide information regarding processes such as magma accumulation in void space, gas segregation at shallow depths, and mechanisms driving volcanic uplift and subsidence. Despite its potential, time-variable volcano gravimetry is an underexploited method, not widely adopted by volcano researchers or observatories. The cost of instrumentation and the difficulty in using it under harsh environmental conditions is a significant impediment to the exploitation of gravimetry at many volcanoes. In addition, retrieving useful information from gravity changes in noisy volcanic environments is a major challenge. While these difficulties are not trivial, neither are they insurmountable; indeed, creative efforts in a variety of volcanic settings highlight the value of time-variable gravimetry for understanding hazards as well as revealing fundamental insights into how volcanoes work. Building on previous work, we provide a comprehensive review of time-variable volcano gravimetry, including discussions of instrumentation, modeling and analysis techniques, and case studies that emphasize what can be learned from campaign, continuous, and hybrid gravity observations. We are hopeful that this exploration of time-variable volcano gravimetry will excite more scientists about the potential of the method, spurring further application, development, and innovation.
18. The Powell Volcano Remote Sensing Working Group Overview
Science.gov (United States)
Reath, K.; Pritchard, M. E.; Poland, M. P.; Wessels, R. L.; Biggs, J.; Carn, S. A.; Griswold, J. P.; Ogburn, S. E.; Wright, R.; Lundgren, P.; Andrews, B. J.; Wauthier, C.; Lopez, T.; Vaughan, R. G.; Rumpf, M. E.; Webley, P. W.; Loughlin, S.; Meyer, F. J.; Pavolonis, M. J.
2017-12-01
Hazards from volcanic eruptions pose risks to the lives and livelihood of local populations, with potential global impacts to businesses, agriculture, and air travel. The 2015 Global Assessment of Risk report notes that 800 million people are estimated to live within 100 km of 1400 subaerial volcanoes identified as having eruption potential. However, only 55% of these volcanoes have any type of ground-based monitoring. The only methods currently available to monitor these unmonitored volcanoes are space-based systems that provide a global view. However, with the explosion of data techniques and sensors currently available, taking full advantage of these resources can be challenging. The USGS Powell Center Volcano Remote Sensing Working Group is working with many partners to optimize satellite resources for global detection of volcanic unrest and assessment of potential eruption hazards. In this presentation we will describe our efforts to: 1) work with space agencies to target acquisitions from the international constellation of satellites to collect the right types of data at volcanoes with forecasting potential; 2) collaborate with the scientific community to develop databases of remotely acquired observations of volcanic thermal, degassing, and deformation signals to facilitate change detection and assess how these changes are (or are not) related to eruption; and 3) improve usage of satellite observations by end users at volcano observatories that report to their respective governments. Currently, the group has developed time series plots for 48 Latin American volcanoes that incorporate variations in thermal, degassing, and deformation readings over time. These are compared against eruption timing and ground-based data provided by the Smithsonian Institution Global Volcanism Program. Distinct patterns in unrest and eruption are observed at different volcanoes, illustrating the difficulty in developing generalizations, but highlighting the power of remote sensing
19. Anatomy of a volcano
NARCIS (Netherlands)
Hooper, A.; Wassink, J.
2011-01-01
The Icelandic volcano Eyjafjallajökull caused major disruption in European airspace last year. According to his co-author, Freysteinn Sigmundsson, the reconstruction published in Nature six months later by aerospace engineering researcher, Dr Andy Hooper, opens up a new direction in volcanology. “We
20. Spying on volcanoes
Science.gov (United States)
Watson, Matthew
2017-07-01
Active volcanoes can be incredibly dangerous, especially to those who live nearby, but how do you get close enough to observe one in action? Matthew Watson explains how artificial drones are providing volcanologists with insights that could one day save human lives
1. Geology of kilauea volcano
Science.gov (United States)
Moore, R.B.; Trusdell, F.A.
1993-01-01
This paper summarizes studies of the structure, stratigraphy, petrology, drill holes, eruption frequency, and volcanic and seismic hazards of Kilauea volcano. All the volcano is discussed, but the focus is on its lower east rift zone (LERZ) because active exploration for geothermal energy is concentrated in that area. Kilauea probably has several separate hydrothermal-convection systems that develop in response to the dynamic behavior of the volcano and the influx of abundant meteoric water. Important features of some of these hydrothermal-convection systems are known through studies of surface geology and drill holes. Observations of eruptions during the past two centuries, detailed geologic mapping, radiocarbon dating, and paleomagnetic secular-variation studies indicate that Kilauea has erupted frequently from its summit and two radial rift zones during Quaternary time. Petrologic studies have established that Kilauea erupts only tholeiitic basalt. Extensive ash deposits at Kilauea's summit and on its LERZ record locally violent, but temporary, disruptions of local hydrothermal-convection systems during the interaction of water or steam with magma. Recent drill holes on the LERZ provide data on the temperatures of the hydrothermal-convection systems, intensity of dike intrusion, porosity and permeability, and an increasing amount of hydrothermal alteration with depth. The prehistoric and historic record of volcanic and seismic activity indicates that magma will continue to be supplied to deep and shallow reservoirs beneath Kilauea's summit and rift zones and that the volcano will be affected by eruptions and earthquakes for many thousands of years. © 1993.
2. Strategies for the implementation of a European Volcano Observations Research Infrastructure
Science.gov (United States)
Puglisi, Giuseppe
2015-04-01
Active volcanic areas in Europe constitute a direct threat to millions of people on both the continent and adjacent islands. Furthermore, eruptions of "European" volcanoes in overseas territories, such as in the West Indies and in the Indian and Pacific oceans, can have much broader impacts outside Europe. Volcano Observatories (VO), which undertake volcano monitoring under governmental mandate, and Volcanological Research Institutions (VRI; such as university departments, laboratories, etc.) manage networks on European volcanoes consisting of thousands of stations or sites where volcanological parameters are either continuously or periodically measured. These sites are equipped with instruments for geophysical (seismic, geodetic, gravimetric, electromagnetic), geochemical (volcanic plumes, fumaroles, groundwater, rivers, soils) and environmental observations (e.g. meteorological and air quality parameters), including prototype deployment. VOs and VRIs also operate laboratories for sample analysis (rocks, gases, isotopes, etc.), near-real time analysis of space-borne data (SAR, thermal imagery, SO2 and ash), as well as high-performance computing centres, all providing high-quality information on the current status of European volcanoes and the geodynamic background of the surrounding areas. This large and high-quality deployment of monitoring systems, focused on a specific geophysical target (volcanoes), together with the wide range of volcanological phenomena at European volcanoes (which cover all the known volcano types), represents a unique opportunity to fundamentally improve the knowledge base of volcano behaviour. The existing arrangement of national infrastructures (i.e. VO and VRI) appears to be too fragmented to be considered as a unique distributed infrastructure. Therefore, the main effort planned in the framework of the EPOS-PP proposal is focused on the creation of services aimed at providing an improved and more efficient access to the volcanological facilities
3. 2004 Deformation of Okmok Volcano, Alaska, USA
Science.gov (United States)
Fournier, T. J.; Freymueller, J. T.
2004-12-01
Okmok Volcano is a basaltic shield volcano with a 10km diameter caldera located on Umnak Island in the Aleutian Arc, Alaska. Okmok has had frequent effusive eruptions, the latest in 1997. In 2002 the Alaska Volcano Observatory installed a seismic network and three continuous GPS stations. Two stations are located in the caldera and one is located at the base of the volcano at Fort Glenn. Because of instrumentation problems the GPS network was not fully operational until August 2003. A fourth GPS site, located on the south flank of the volcano, came online in September 2004. The three continuous GPS instruments captured a rapid inflation event at Okmok Volcano spanning 6 months from March to August 2004. The instruments give a wonderful time-series of the episode but poor spatial coverage. Modeling the deformation is accomplished by supplementing the continuous data with campaign surveys conducted in the summers of 2002, 2003 and 2004. Displacements between the 2002 and 2003 campaigns show a large inflation event between those time periods. The continuous and campaign data suggest that deformation at Okmok is characterized by short-lived rapid inflation interspersed with periods of moderate inflation. Velocities during the 2004 event reached a maximum of 31cm/yr in the vertical direction and 15cm/yr eastward at the station OKCD, compared with the pre-inflation velocities of 4cm/yr in the vertical and 2.5cm/yr southeastward. Using a Mogi point source model both prior to and during the inflation gives a source location in the center of the caldera and a depth of about 3km. The source strength rate is three times larger during the inflation event than the period preceding it. Based on the full time series of campaign and continuous GPS data, it appears that the variation in inflation rate results from changes in the magma supply rate and not from changes in the depth of the source.
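The Mogi point-source model invoked in this record has a simple closed-form surface solution, which is what makes fitting GPS velocities for source depth and strength tractable. A minimal sketch follows; the function name and the volume-change (dV) parameterization are assumptions for illustration (the abstract speaks of source strength, for which volume change is one common proxy), with a default Poisson ratio of 0.25.

```python
import math

def mogi_displacement(r, depth, dV, nu=0.25):
    """Surface displacement (vertical uplift, radial outward) predicted by a
    Mogi point source: a small spherical pressure source of volume change dV
    at the given depth in an elastic half-space. Displacements are in metres
    when r and depth are in metres and dV is in cubic metres."""
    R3 = (r ** 2 + depth ** 2) ** 1.5          # cube of source-station distance
    c = (1.0 - nu) * dV / math.pi              # elastic scaling factor
    return c * depth / R3, c * r / R3
```

For example, dV = 1e6 m³ at 3 km depth predicts a few centimetres of uplift directly above the source, with displacement falling off and rotating toward the horizontal with distance; inverting such a pattern against campaign and continuous GPS data is how a centred source at about 3 km depth is obtained.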
4. Geology of Kilauea volcano
Energy Technology Data Exchange (ETDEWEB)
Moore, R.B. (Geological Survey, Denver, CO (United States). Federal Center); Trusdell, F.A. (Geological Survey, Hawaii National Park, HI (United States). Hawaiian Volcano Observatory)
1993-08-01
This paper summarizes studies of the structure, stratigraphy, petrology, drill holes, eruption frequency, and volcanic and seismic hazards of Kilauea volcano. All the volcano is discussed, but the focus is on its lower east rift zone (LERZ) because active exploration for geothermal energy is concentrated in that area. Kilauea probably has several separate hydrothermal-convection systems that develop in response to the dynamic behavior of the volcano and the influx of abundant meteoric water. Important features of some of these hydrothermal-convection systems are known through studies of surface geology and drill holes. Observations of eruptions during the past two centuries, detailed geologic mapping, radiocarbon dating, and paleomagnetic secular-variation studies indicate that Kilauea has erupted frequently from its summit and two radial rift zones during Quaternary time. Petrologic studies have established that Kilauea erupts only tholeiitic basalt. Extensive ash deposits at Kilauea's summit and on its LERZ record locally violent, but temporary, disruptions of local hydrothermal-convection systems during the interaction of water or steam with magma. Recent drill holes on the LERZ provide data on the temperatures of the hydrothermal-convection systems, intensity of dike intrusion, porosity and permeability, and an increasing amount of hydrothermal alteration with depth. The prehistoric and historic record of volcanic and seismic activity indicates that magma will continue to be supplied to deep and shallow reservoirs beneath Kilauea's summit and rift zones and that the volcano will be affected by eruptions and earthquakes for many thousands of years. 71 refs., 2 figs.
5. Catalogue of Icelandic Volcanoes
Science.gov (United States)
Ilyinskaya, Evgenia; Larsen, Gudrún; Gudmundsson, Magnús T.; Vogfjörd, Kristin; Jonsson, Trausti; Oddsson, Björn; Reynisson, Vidir; Pagneux, Emmanuel; Barsotti, Sara; Karlsdóttir, Sigrún; Bergsveinsson, Sölvi; Oddsdóttir, Thorarna
2017-04-01
The Catalogue of Icelandic Volcanoes (CIV) is a newly developed open-access web resource (http://icelandicvolcanoes.is) intended to serve as an official source of information about volcanoes in Iceland for the public and decision makers. CIV contains text and graphic information on all 32 active volcanic systems in Iceland, as well as real-time data from monitoring systems in a format that enables non-specialists to understand the volcanic activity status. The CIV data portal contains scientific data on all eruptions since Eyjafjallajökull 2010 and is an unprecedented endeavour in making volcanological data open and easy to access. CIV forms a part of an integrated volcanic risk assessment project in Iceland GOSVÁ (commenced in 2012), as well as being part of the European Union funded effort FUTUREVOLC (2012-2016) on establishing an Icelandic volcano supersite. The supersite concept implies integration of space and ground based observations for improved monitoring and evaluation of volcanic hazards, and open data policy. This work is a collaboration of the Icelandic Meteorological Office, the Institute of Earth Sciences at the University of Iceland, and the Civil Protection Department of the National Commissioner of the Iceland Police, with contributions from a large number of specialists in Iceland and elsewhere.
6. Collaborative Monitoring and Hazard Mitigation at Fuego Volcano, Guatemala
Science.gov (United States)
Lyons, J. J.; Bluth, G. J.; Rose, W. I.; Patrick, M.; Johnson, J. B.; Stix, J.
2007-05-01
A portable, digital sensor network has been installed to closely monitor changing activity at Fuego volcano, taking advantage of an international collaborative effort among Guatemalan, U.S. and Canadian universities, and the Peace Corps. The goal of this effort is to improve the understanding of shallow internal processes, and consequently to more effectively mitigate volcanic hazards. Fuego volcano has had more than 60 historical eruptions, and its nearly continuous activity makes it an ideal laboratory to study volcanic processes. Close monitoring is needed to identify base-line activity, and to rapidly identify and disseminate changes in the activity which might threaten nearby communities. The sensor network is comprised of a miniature DOAS ultraviolet spectrometer fitted with a system for automated plume scans, a digital video camera, and two seismo-acoustic stations and portable dataloggers. These sensors are on loan from scientists who visited Fuego during short field seasons and donated use of their sensors to a resident Peace Corps Masters International student from Michigan Technological University for extended data collection. The sensor network is based around the local volcano observatory maintained by Instituto Nacional de Sismología, Vulcanología, Meteorología e Hidrología (INSIVUMEH). INSIVUMEH provides local support and historical knowledge of Fuego activity as well as a secure location for storage of scientific equipment, data processing, and charging of the batteries that power the sensors. The complete sensor network came online in mid-February 2007, and here we present preliminary results from concurrent gas, seismic, and acoustic monitoring of activity at Fuego volcano.
7. US Naval Observatory Hourly Observations
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — Hourly observations journal from the National Observatory in Washington DC. The observatory is the first station in the United States to produce hourly observations...
8. Galactic Super-volcano in Action
Science.gov (United States)
2010-08-01
A galactic "super-volcano" in the massive galaxy M87 is erupting and blasting gas outwards, as witnessed by NASA's Chandra X-ray Observatory and NSF's Very Large Array. The cosmic volcano is being driven by a giant black hole in the galaxy's center and preventing hundreds of millions of new stars from forming. Astronomers studying this black hole and its effects have been struck by the remarkable similarities between it and a volcano in Iceland that made headlines earlier this year. At a distance of about 50 million light years, M87 is relatively close to Earth and lies at the center of the Virgo cluster, which contains thousands of galaxies. M87's location, coupled with long observations over Chandra's lifetime, has made it an excellent subject for investigations of how a massive black hole impacts its environment. "Our results show in great detail that supermassive black holes have a surprisingly good control over the evolution of the galaxies in which they live," said Norbert Werner of the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and the SLAC National Accelerator Laboratory, who led one of two papers describing the study. "And it doesn't stop there. The black hole's reach extends ever farther into the entire cluster, similar to how one small volcano can affect practically an entire hemisphere on Earth." The cluster surrounding M87 is filled with hot gas glowing in X-ray light, which is detected by Chandra. As this gas cools, it can fall toward the galaxy's center where it should continue to cool even faster and form new stars. However, radio observations with the Very Large Array suggest that in M87 jets of very energetic particles produced by the black hole interrupt this process. These jets lift up the relatively cool gas near the center of the galaxy and produce shock waves in the galaxy's atmosphere because of their supersonic speed. The scientists involved in this research have found the interaction of this cosmic
9. ESO's Two Observatories Merge
Science.gov (United States)
2005-02-01
On February 1, 2005, the European Southern Observatory (ESO) merged its two observatories, La Silla and Paranal, into one. This move will help Europe's prime organisation for astronomy to better manage its many and diverse projects by deploying available resources more efficiently where and when they are needed. The merged observatory will be known as the La Silla Paranal Observatory. Catherine Cesarsky, ESO's Director General, comments on the new development: "The merging, which was planned during the past year with the deep involvement of all the staff, has created unified maintenance and engineering (including software, mechanics, electronics and optics) departments across the two sites, further increasing the already very high efficiency of our telescopes. It is my great pleasure to commend the excellent work of Jorge Melnick, former director of the La Silla Observatory, and of Roberto Gilmozzi, the director of Paranal." ESO's headquarters are located in Garching, in the vicinity of Munich (Bavaria, Germany), and this intergovernmental organisation has established itself as a world-leader in astronomy. Created in 1962, ESO is now supported by eleven member states (Belgium, Denmark, Finland, France, Germany, Italy, The Netherlands, Portugal, Sweden, Switzerland, and the United Kingdom). It operates major telescopes on two remote sites, all located in Chile: La Silla, about 600 km north of Santiago and at an altitude of 2400m; Paranal, a 2600m high mountain in the Atacama Desert 120 km south of the coastal city of Antofagasta. Most recently, ESO has started the construction of an observatory at Chajnantor, a 5000m high site, also in the Atacama Desert. La Silla, north of the town of La Serena, has been the bastion of the organization's facilities since 1964. It is the site of two of the most productive 4-m class telescopes in the world, the New Technology Telescope (NTT) - the first major telescope equipped with active optics - and the 3.6-m, which hosts HARPS
10. Ruiz Volcano: Preliminary report
Science.gov (United States)
Ruiz Volcano, Colombia (4.88°N, 75.32°W). All times are local (= GMT -5 hours). An explosive eruption on November 13, 1985, melted ice and snow in the summit area, generating lahars that flowed tens of kilometers down flank river valleys, killing more than 20,000 people. This is history's fourth largest single-eruption death toll, behind only Tambora in 1815 (92,000), Krakatau in 1883 (36,000), and Mount Pelée in May 1902 (28,000). The following briefly summarizes the very preliminary and inevitably conflicting information that had been received by press time.
11. Expanding the HAWC Observatory
Energy Technology Data Exchange (ETDEWEB)
Mori, Johanna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-08-17
The High Altitude Water Cherenkov Gamma-Ray Observatory is expanding its current array of 300 water tanks to include 350 outrigger tanks to increase sensitivity to gamma rays above 10 TeV. This involves creating and testing hardware with which to build the new tanks, including photomultiplier tubes, high voltage supply units, and flash analog to digital converters. My responsibilities this summer included preparing, testing and calibrating that equipment.
12. South African Astronomical Observatory
International Nuclear Information System (INIS)
1987-01-01
Work at the South African Astronomical Observatory (SAAO) in recent years, by both staff and visitors, has made major contributions to the fields of astrophysics and astronomy. During 1986 the SAAO has been involved in studies of the following: galaxies; celestial x-ray sources; magellanic clouds; pulsating variables; galactic structure; binary star phenomena; nebulae and interstellar matter; stellar astrophysics; open clusters; globular clusters, and solar systems
13. Satellite monitoring of remote volcanoes improves study efforts in Alaska
Science.gov (United States)
Dean, K.; Servilla, M.; Roach, A.; Foster, B.; Engle, K.
Satellite monitoring of remote volcanoes is greatly benefitting the Alaska Volcano Observatory (AVO), and last year's eruption of the Okmok Volcano in the Aleutian Islands is a good case in point. The facility was able to issue and refine warnings of the eruption and related activity quickly, something that could not have been done using conventional seismic surveillance techniques, since seismometers have not been installed at these locations.AVO monitors about 100 active volcanoes in the North Pacific (NOPAC) region, but only a handful are observed by costly and logistically complex conventional means. The region is remote and vast, about 5000 × 2500 km, extending from Alaska west to the Kamchatka Peninsula in Russia (Figure 1). Warnings are transmitted to local communities and airlines that might be endangered by eruptions. More than 70,000 passenger and cargo flights fly over the region annually, and airborne volcanic ash is a threat to them. Many remote eruptions have been detected shortly after the initial magmatic activity using satellite data, and eruption clouds have been tracked across air traffic routes. Within minutes after eruptions are detected, information is relayed to government agencies, private companies, and the general public using telephone, fax, and e-mail. Monitoring of volcanoes using satellite image data involves direct reception, real-time monitoring, and data analysis. Two satellite data receiving stations, located at the Geophysical Institute, University of Alaska Fairbanks (UAF), are capable of receiving data from the advanced very high resolution radiometer (AVHRR) on National Oceanic and Atmospheric Administration (NOAA) polar orbiting satellites and from synthetic aperture radar (SAR) equipped satellites.
14. Using Bayesian Belief Networks To Assess Volcano State from Multiple Monitoring Timeseries And Other Evidence
Science.gov (United States)
Odbert, Henry; Aspinall, Willy
2013-04-01
When volcanoes exhibit unrest or become eruptively active, science-based decision support invariably is sought by civil authorities. Evidence available to scientists about a volcano's internal state is usually indirect, secondary or very nebulous. Advancement of volcano monitoring technology in recent decades has increased the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Monitoring timeseries may be interpreted in real time by observatory staff and are often later subjected to further analytic scrutiny by the research community at large. With increasing variety and resolution of data, interpreting these multiple strands of parallel, partial evidence has become increasingly complex. In practice, interpretation of many timeseries involves familiarity with the idiosyncrasies of the volcano, the monitoring techniques, the configuration of the recording instrumentation, observations from other datasets, and so on. Assimilation of this knowledge is necessary in order to select and apply the appropriate statistical techniques required to extract the required information. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple observations, model results and interpretations - and associated uncertainties - in a methodical manner. The formulation is usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic timeseries, the certainty with which inferences may be drawn, and how they can be updated dynamically. Such approaches provide a route to developing analytical interface(s) between volcano monitoring analyses and probabilistic hazard analysis. We discuss the use of BBNs in hazard
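As a hedged illustration of the sequential Bayesian updating a BBN performs when combining strands of monitoring evidence, the sketch below applies Bayes' rule for two independent observations. All probabilities and evidence types are hypothetical placeholders, not values from this study.

```python
# Minimal sketch of sequential Bayesian updating of P(unrest),
# illustrating the BBN idea. All numbers are hypothetical.

def posterior_unrest(prior, likelihoods):
    """Update P(unrest) given a list of evidence likelihood pairs.

    Each entry is (P(evidence | unrest), P(evidence | quiet)).
    Evidence streams are assumed conditionally independent."""
    p = prior
    for p_e_unrest, p_e_quiet in likelihoods:
        num = p_e_unrest * p
        den = num + p_e_quiet * (1.0 - p)
        p = num / den  # Bayes' rule
    return p

# Hypothetical case: elevated seismicity and increased gas flux observed.
evidence = [(0.8, 0.1),   # seismicity: common under unrest, rare when quiet
            (0.6, 0.2)]   # gas flux: moderately diagnostic
print(round(posterior_unrest(0.05, evidence), 3))
```

Even with a low 5% prior, two moderately diagnostic observations raise the posterior above 50%, which is the kind of auditable, evidence-weighted inference the abstract describes.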
15. Eruptive viscosity and volcano morphology
International Nuclear Information System (INIS)
Posin, S.B.; Greeley, R.
1988-01-01
Terrestrial central volcanoes formed predominantly from lava flows were classified as shields, stratovolcanoes, and domes. Shield volcanoes tend to be large in areal extent, have convex slopes, and are characterized by their resemblance to inverted Hellenic war shields. Stratovolcanoes have concave slopes, whereas domes are smaller and have gentle convex slopes near the vent that increase near the perimeter. In addition to these differences in morphology, several other variations were observed. The most important is composition: shield volcanoes tend to be basaltic, stratovolcanoes tend to be andesitic, and domes tend to be dacitic. However, important exceptions include Fuji, Pico, Mayon, Izalco, and Fuego, which have stratovolcano morphologies but are composed of basaltic lavas. Similarly, Ribkwo is a Kenyan shield volcano composed of trachyte, and Suswa and Kilombe are shields composed of phonolite. These exceptions indicate that eruptive conditions, rather than composition, may be the primary factors that determine volcano morphology. The objective of this study is to determine the relationships, if any, between eruptive conditions (viscosity, erupted volume, and effusion rate) and effusive volcano morphology. Moreover, it is the goal of this study to incorporate these relationships into a model to predict the eruptive conditions of extraterrestrial (Martian) volcanoes based on their morphology.
16. Astronomical publications of Melbourne Observatory
Science.gov (United States)
Andropoulos, Jenny Ioanna
2014-05-01
During the second half of the 19th century and the first half of the 20th century, four well-equipped government observatories were maintained in Australia - in Melbourne, Sydney, Adelaide and Perth. These institutions conducted astronomical observations, often in the course of providing a local time service, and they also collected and collated meteorological data. As well, some of these observatories were involved at times in geodetic surveying, geomagnetic recording, gravity measurements, seismology, tide recording and physical standards, so the term "observatory" was being used in a rather broad sense! Despite the international renown that once applied to Williamstown and Melbourne Observatories, relatively little has been written by modern-day scholars about astronomical activities at these observatories. This research is intended to rectify this situation to some extent by gathering, cataloguing and analysing the published astronomical output of the two Observatories to see what contributions they made to science and society. It also compares their contributions with those of Sydney, Adelaide and Perth Observatories. Overall, Williamstown and Melbourne Observatories produced a prodigious amount of material on astronomy in scientific and technical journals, in reports and in newspapers. The other observatories more or less did likewise, so no observatory of those studied markedly outperformed the others in the long term, especially when account is taken of their relative resourcing in staff and equipment.
17. Soufriere Hills Volcano
Science.gov (United States)
2002-01-01
In this ASTER image of Soufriere Hills Volcano on Montserrat in the Caribbean, continued eruptive activity is evident by the extensive smoke and ash plume streaming towards the west-southwest. Significant eruptive activity began in 1995, forcing the authorities to evacuate more than 7,000 of the island's original population of 11,000. The primary risk now is to the northern part of the island and to the airport. Small rockfalls and pyroclastic flows (ash, rock and hot gases) are common at this time due to continued growth of the dome at the volcano's summit.This image was acquired on October 29, 2002 by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet.ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products.The broad spectral coverage and high spectral resolution of ASTER will provide scientists in numerous disciplines with critical information for surface mapping, and monitoring of dynamic conditions and temporal change. Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance.Dr. Anne Kahle at NASA's Jet Propulsion Laboratory, Pasadena, California, is the U.S. Science team leader; Bjorn Eng of JPL is the project manager. The Terra mission is part of NASA
18. Sudbury neutrino observatory
International Nuclear Information System (INIS)
Ewan, G.T.; Mak, H.B.; Robertson, B.C.
1985-07-01
This report discusses the proposal to construct a unique neutrino observatory. The observatory would contain a Cerenkov detector which would be located 2070 m below the earth's surface in an INCO mine at Creighton near Sudbury and would contain 1000 tons of D2O, which is an excellent target material. Neutrinos carry detailed information in their spectra on the reactions taking place deep in the stellar interior and also provide information on supernova explosions. In addition to their role as astrophysical probes, a knowledge of the properties of neutrinos is crucial to theories of grand unification. There are three main objectives of the laboratory. The prime objective will be to study 8B electron neutrinos from the sun by a direct counting method that will measure their energy and direction. The second major objective will be to establish if electron neutrinos change into other neutrino species in transit from the sun to the earth. Finally it is hoped to be able to observe a supernova with the proposed detector. The features of the Sudbury Neutrino Observatory which make it unique are its high sensitivity to electron neutrinos and its ability to detect all other types of neutrinos of energy greater than 2.2 MeV. In section II of this proposal the major physics objectives are discussed in greater detail. A conceptual design for the detector, and measurements and calculations which establish the feasibility of the neutrino experiments, are presented in section III. Section IV is comprised of a discussion on the possible location of the laboratory and Section V contains a brief indication of the main areas to be studied in Phase II of the design study
19. Sudbury neutrino observatory
International Nuclear Information System (INIS)
Ewan, G.T.; Evans, H.C.; Lee, H.W.
1986-10-01
This report is a supplement to a report (SNO-85-3 (Sudbury Neutrino Observatory)) which contained the results of a feasibility study on the construction of a deep underground neutrino observatory based on a 1000 ton heavy water Cerenkov detector. Neutrinos carry detailed information in their spectra on the reactions taking place deep in the stellar interior and also provide information on supernova explosions. In addition to their role as astrophysical probes, a knowledge of the properties of neutrinos is crucial to theories of grand unification. The Sudbury Neutrino Observatory is unique in its high sensitivity to electron neutrinos and its ability to detect all other types of neutrinos of energy greater than 2.2 MeV. The results of the July 1985 study indicated that the project is technically feasible in that the proposed detector can measure the direction and energy of electron neutrinos above 7 MeV and the scientific programs will make significant contributions to physics and astrophysics. This present report contains new information obtained since the 1985 feasibility study. The enhanced conversion of neutrinos in the sun and the new physics that could be learned using the heavy water detector are discussed in the physics section. The other sections will discuss progress in the areas of practical importance in achieving the physics objectives, such as new techniques to measure, monitor and remove low levels of radioactivity in detector components, ideas on calibration of the detector and so forth. The section entitled Administration contains a membership list of the working groups within the SNO collaboration
20. The Observatory Health Report
Directory of Open Access Journals (Sweden)
Laura Murianni
2008-06-01
Background: The number of indicators aiming to provide a clear picture of healthcare needs and the quality and efficiency of healthcare systems and services has proliferated in recent years. The activity of the National Observatory on Health Status in the Italian Regions is multidisciplinary, involving around 280 public health care experts, clinicians, demographers, epidemiologists, mathematicians, statisticians and economists who, with their different competencies and scientific interests, aim to improve the collective health of individuals and their conditions through the use of “core indicators”. The main outcome of the National Observatory on Health Status in the Italian Regions is the “Osservasalute Report – a report on health status and the quality of healthcare assistance in the Italian Regions”.
Results: The results of Observatory Report show it is necessary:
• to improve the monitoring of primary health care services (where chronic diseases could be managed through the implementation of clinical pathways);
• to improve in certain areas of hospital care such as caesarean deliveries, as well as the average length of stay in the pre-intervention phase, etc.;
• to try to be more focused on the patients/citizens in our health care services;
• to practice more geographically targeted interventions to reduce the North-South divide as well as reduce gender inequity.
Conclusions: The health status of Italian people is good, with positive results and outcomes, but in the meantime some further efforts should be made, especially in the South, which still has to improve the quality and the organization of health care services. There are huge differences in accuracy and therefore usefulness of the reported data, both between diseases and between
1. Patterns in thermal emissions from the volcanoes of the Aleutian Islands
Science.gov (United States)
Blackett, M.; Webley, P. W.; Dehn, J.
2012-12-01
Using AVHRR data 1993-2011 and the Alaska Volcano Observatory's Okmok II Algorithm, the thermal emissions from all volcanoes in the Aleutian Islands were converted from temperature to power emission and examined for periodicity. The emissions were also summed to quantify the total energy released throughout the period. It was found that in the period April 1997 - January 2004 (37% of the period) the power emission from the volcanoes of the island arc declined sharply to constitute just 5.7% of the total power output for the period (138,311 MW), and this was attributable to just three volcanoes: Veniaminof (1.0%), Cleveland (1.5%) and Shishaldin (3.2%). This period of apparent reduced activity contrasts with the periods both before and after and is unrelated to the number of sensors in orbit at the time. What is also evident from the data set is that in terms of overall power emission over this period, the majority of emitted energy is largely attributable to those volcanoes which erupt with regularity (again, Veniaminof [29.7%], Cleveland [17%] and Shishaldin [11.4%]), as opposed to the relatively few, large scale events (i.e. Redoubt [5.4%], Okmok [8.3%], Augustine [9.7%]; Pavlof [13.9%] being an exception). Sum power emission from volcanoes in the Aleutian Islands (1993-2011)
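The internal consistency of the percentages quoted in this abstract can be checked with a short calculation; all figures below are taken directly from the abstract, and the implied megawatt value is derived, not stated in the source.

```python
# Cross-check of the power-emission percentages quoted in the abstract.
total_mw = 138_311  # total power output for the 1993-2011 period, in MW

# Contributions during the April 1997 - January 2004 lull, in percent:
lull = {"Veniaminof": 1.0, "Cleveland": 1.5, "Shishaldin": 3.2}
lull_pct = sum(lull.values())

print(round(lull_pct, 1))                # matches the quoted 5.7%
print(round(total_mw * lull_pct / 100))  # implied lull-period power, in MW
```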
2. Ash and Steam, Soufriere Hills Volcano, Montserrat
Science.gov (United States)
2002-01-01
International Space Station crew members are regularly alerted to dynamic events on the Earth's surface. On request from scientists on the ground, the ISS crew observed and recorded activity from the summit of Soufriere Hills on March 20, 2002. These two images provide a context view of the island (bottom) and a detailed view of the summit plume (top). When the images were taken, the eastern side of the summit region experienced continued lava growth, and reports posted on the Smithsonian Institution's Weekly Volcanic Activity Report indicate that 'large (50-70 m high), fast-growing, spines developed on the dome's summit. These spines periodically collapsed, producing pyroclastic flows down the volcano's east flank that sometimes reached the Tar River fan. Small ash clouds produced from these events reached roughly 1 km above the volcano and drifted westward over Plymouth and Richmond Hill. Ash predominately fell into the sea. Sulfur dioxide emission rates remained high. Theodolite measurements of the dome taken on March 20 yielded a dome height of 1,039 m.' Other photographs by astronauts of Montserrat have been posted on the Earth Observatory: digital photograph number ISS002-E-9309, taken on July 9, 2001; and a recolored and reprojected version of the same image. Digital photograph numbers ISS004-E-8972 and 8973 were taken 20 March, 2002 from Space Station Alpha and were provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
3. Alaska - Russian Far East connection in volcano research and monitoring
Science.gov (United States)
Izbekov, P. E.; Eichelberger, J. C.; Gordeev, E.; Neal, C. A.; Chebrov, V. N.; Girina, O. A.; Demyanchuk, Y. V.; Rybin, A. V.
2012-12-01
The Kurile-Kamchatka-Alaska portion of the Pacific Rim of Fire spans for nearly 5400 km. It includes more than 80 active volcanoes and averages 4-6 eruptions per year. Resulting ash clouds travel for hundreds to thousands of kilometers defying political borders. To mitigate volcano hazard to aviation and local communities, the Alaska Volcano Observatory (AVO) and the Institute of Volcanology and Seismology (IVS), in partnership with the Kamchatkan Branch of the Geophysical Survey of the Russian Academy of Sciences (KBGS), have established a collaborative program with three integrated components: (1) volcano monitoring with rapid information exchange, (2) cooperation in research projects at active volcanoes, and (3) volcanological field schools for students and young scientists. Cooperation in volcano monitoring includes dissemination of daily information on the state of volcanic activity in neighboring regions, satellite and visual data exchange, as well as sharing expertise and technologies between AVO and the Kamchatkan Volcanic Eruption Response Team (KVERT) and Sakhalin Volcanic Eruption Response Team (SVERT). Collaboration in scientific research is best illustrated by involvement of AVO, IVS, and KBGS faculty and graduate students in mutual international studies. One of the most recent examples is the NSF-funded Partnerships for International Research and Education (PIRE)-Kamchatka project focusing on multi-disciplinary study of Bezymianny volcano in Kamchatka. This international project is one of many that have been initiated as a direct result of a bi-annual series of meetings known as Japan-Kamchatka-Alaska Subduction Processes (JKASP) workshops that we organize together with colleagues from Hokkaido University, Japan. The most recent JKASP meeting was held in August 2011 in Petropavlovsk-Kamchatsky and brought together more than 130 scientists and students from Russia, Japan, and the United States. The key educational component of our collaborative program
4. Volcanoes, Third Edition
Science.gov (United States)
Nye, Christopher J.
It takes confidence to title a smallish book merely “Volcanoes” because of the implication that the myriad facets of volcanism—chemistry, physics, geology, meteorology, hazard mitigation, and more—have been identified and addressed to some nontrivial level of detail. Robert and Barbara Decker have visited these different facets seamlessly in Volcanoes, Third Edition. The seamlessness comes from a broad overarching, interdisciplinary, professional understanding of volcanism combined with an exceptionally smooth translation of scientific jargon into plain language. The result is a book which will be informative to a very broad audience, from reasonably educated nongeologists (my mother loves it) to geology undergraduates through professional volcanologists. I bet that even the most senior professional volcanologists will learn at least a few things from this book and will find at least a few provocative discussions of subjects they know.
5. Sudbury neutrino observatory proposal
International Nuclear Information System (INIS)
Ewan, G.T.; Evans, H.C.; Lee, H.W.
1987-10-01
This report is a proposal by the Sudbury Neutrino Observatory (SNO) collaboration to develop a world class laboratory for neutrino astrophysics. This observatory would contain a large volume heavy water detector which would have the potential to measure both the electron-neutrino flux from the sun and the total solar neutrino flux independent of neutrino type. It will therefore be possible to test models of solar energy generation and, independently, to search for neutrino oscillations with a sensitivity many orders of magnitude greater than that of terrestrial experiments. It will also be possible to search for spectral distortion produced by neutrino oscillations in the dense matter of the sun. Finally the proposed detector would be sensitive to neutrinos from a stellar collapse and would detect neutrinos of all types, thus providing detailed information on the masses of muon- and tau-neutrinos. The neutrino detector would contain 1000 tons of D2O and would be located more than 2000 m below ground in the Creighton mine near Sudbury. The operation and performance of the proposed detector are described and the laboratory design is presented. Construction schedules and responsibilities and the planned program of technical studies by the SNO collaboration are outlined. Finally, the total capital cost is estimated to be $35M Canadian and the annual operating cost, after construction, would be $1.8M Canadian, including the insurance costs of the heavy water
6. Muon imaging of volcanoes with Cherenkov telescopes
Science.gov (United States)
Carbone, Daniele; Catalano, Osvaldo; Cusumano, Giancarlo; Del Santo, Melania; La Parola, Valentina; La Rosa, Giovanni; Maccarone, Maria Concetta; Mineo, Teresa; Pareschi, Giovanni; Sottile, Giuseppe; Zuccarello, Luciano
2017-04-01
The quantitative understanding of the inner structure of a volcano is a key feature to model the processes leading to paroxysmal activity and, hence, to mitigate volcanic hazards. To pursue this aim, different geophysical techniques are utilized that are sensitive to different properties of the rocks (elastic, electrical, density). In most cases, these techniques do not allow one to achieve the spatial resolution needed to characterize the shallowest part of the plumbing system and may require dense measurements in active zones, implying a high level of risk. Volcano imaging through cosmic-ray muons is a promising technique that allows one to overcome the above shortcomings. Muons constantly bombard the Earth's surface and can travel through large thicknesses of rock, with an energy loss depending on the amount of crossed matter. By measuring the absorption of muons through a solid body, one can deduce the density distribution inside the target. To date, muon imaging of volcanic structures has been mainly achieved with scintillation detectors. They are sensitive to noise sourced from (i) the accidental coincidence of vertical EM shower particles, (ii) the fake tracks initiated from horizontal high-energy electrons and low-energy muons (not crossing the target) and (iii) the flux of upward going muons. A possible alternative to scintillation detectors is given by Cherenkov telescopes. They exploit the Cherenkov light emitted when charged particles (like muons) travel through a dielectric medium with a velocity higher than the speed of light in that medium. Cherenkov detectors are not significantly affected by the above noise sources. Furthermore, contrary to scintillator-based detectors, Cherenkov telescopes permit a measurement of the energy spectrum of the incident muon flux at the installation site, an issue that is indeed relevant for deducing the density distribution inside the target. In 2014, a prototype Cherenkov telescope was installed at the Astrophysical Observatory of Serra
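The density-from-absorption idea described above can be sketched numerically: muon attenuation along a ray yields the opacity (areal density) of the traversed rock, and dividing by the path length through the edifice gives the mean density along that ray. The numbers below are hypothetical, not measurements from the prototype telescope.

```python
# Mean rock density along a muon ray: rho = opacity / path_length.
# In practice the opacity (g/cm^2) is inferred by comparing the muon
# flux measured behind the edifice with the open-sky flux; the values
# here are hypothetical placeholders.

def mean_density(opacity_g_cm2, path_length_m):
    """Return the mean density (g/cm^3) along a ray of given length."""
    path_cm = path_length_m * 100.0
    return opacity_g_cm2 / path_cm

# Hypothetical ray: 5.0e4 g/cm^2 of crossed matter over a 250 m chord.
print(mean_density(5.0e4, 250.0))  # g/cm^3
```

Repeating this for many ray directions is what turns a muon telescope's attenuation map into a density image of the edifice.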
7. Volcanoes in Eruption - Set 1
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The word volcano is used to refer to the opening from which molten rock and gas issue from Earth's interior onto the surface, and also to the cone, hill, or mountain...
8. Volcanoes in Eruption - Set 2
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The word volcano is used to refer to the opening from which molten rock and gas issue from Earth's interior onto the surface, and also to the cone, hill, or mountain...
9. Continuous monitoring of Hawaiian volcanoes with thermal cameras
Science.gov (United States)
Patrick, Matthew R.; Orr, Tim R.; Antolik, Loren; Lee, Robert Lopaka; Kamibayashi, Kevan P.
2014-01-01
Continuously operating thermal cameras are becoming more common around the world for volcano monitoring, and offer distinct advantages over conventional visual webcams for observing volcanic activity. Thermal cameras can sometimes “see” through volcanic fume that obscures views to visual webcams and the naked eye, and often provide a much clearer view of the extent of high temperature areas and activity levels. We describe a thermal camera network recently installed by the Hawaiian Volcano Observatory to monitor Kīlauea’s summit and east rift zone eruptions (at Halema‘uma‘u and Pu‘u ‘Ō‘ō craters, respectively) and to keep watch on Mauna Loa’s summit caldera. The cameras are long-wave, temperature-calibrated models protected in custom enclosures, and often positioned on crater rims close to active vents. Images are transmitted back to the observatory in real-time, and numerous Matlab scripts manage the data and provide automated analyses and alarms. The cameras have greatly improved HVO’s observations of surface eruptive activity, which includes highly dynamic lava lake activity at Halema‘uma‘u, major disruptions to Pu‘u ‘Ō‘ō crater and several fissure eruptions.
10. GLACIERS OF THE KORYAK VOLCANO
Directory of Open Access Journals (Sweden)
T. M. Manevich
2012-01-01
The paper presents the main glaciological characteristics of present-day glaciers located on the Koryaksky volcano. The results of fieldwork (2008–2009) and high-resolution satellite image analysis allowed us to refine and complete information on the modern glacial complex of Koryaksky volcano. There are now seven glaciers with a total area of 8.36 km2. Three of them are advancing, two are in a stationary state and one is degrading. Moreover, the paper describes the new crater glacier.
11. Radon emanometry in active volcanoes
Energy Technology Data Exchange (ETDEWEB)
Seidel, J.L.; Monnin, M. (CNRS, IN2P3, BP45/F63170 Aubiere (France)); Cejudo, J. (Instituto Nacional de Investigaciones Nucleares, Mexico City)
1984-01-01
Radon emission from active volcanoes has been continuously measured since 1981 at monitoring stations in Mexico and in Costa Rica. Counting of etched alpha tracks on cellulose nitrate LR-115 detectors gives varying results at the several stations. Radon emanation at Chichon, where an explosive eruption occurred in 1982, dropped sharply. Radon detection at the active volcano in Colima shows a pattern of very low emission. At the Costa Rica stations located at Poas, Arenal and Irazu, the radon emanation shows regularity.
12. Geophysical data collection using an interactive personal computer system. Part 1. ; Experimental monitoring of Suwanosejima volcano
Energy Technology Data Exchange (ETDEWEB)
Iguchi, M. (Kyoto University, Kyoto (Japan). Disaster Prevention Research Institute)
1991-10-15
In this article, a computer-communication system was developed to collect geophysical data from remote volcanoes via a public telephone network. The system is composed of a host personal computer at an observatory and several personal computers as terminals at remote stations. Each terminal acquires geophysical data, such as seismic, infrasonic, and ground deformation data. These data are stored in the terminals temporarily, and transmitted to the host computer on command from the host computer. Experimental monitoring was conducted between Sakurajima Volcanological Observatory and several stations in the Satsunan Islands and southern Kyushu. The seismic and eruptive activities of Suwanosejima volcano were monitored by this system. Consequently, earthquakes and air-shocks accompanying the explosive activity were observed. B-type earthquakes occurred prior to the relatively prolonged eruptive activity. Intermittent occurrences of volcanic tremors were also clearly recognized from changes in the mean amplitudes of seismic waves. 7 refs., 10 figs., 2 tabs.
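The store-and-forward scheme described in this abstract, where terminals buffer readings locally and release them only when the host issues a command, can be sketched as follows. The class and method names are hypothetical; the real system worked over a telephone network, which is abstracted away here.

```python
# Illustrative sketch (names hypothetical) of the store-and-forward
# polling pattern: each remote terminal buffers readings locally and
# releases them only when the host issues a transmit command.
class Terminal:
    def __init__(self, station):
        self.station = station
        self.buffer = []                 # temporary local storage

    def acquire(self, kind, value):
        self.buffer.append((kind, value))

    def transmit(self):
        """Send buffered data to the host and clear local storage."""
        data, self.buffer = self.buffer, []
        return data

class Host:
    def __init__(self, terminals):
        self.terminals = terminals
        self.archive = {}                # per-station archive at the observatory

    def poll_all(self):
        for t in self.terminals:
            self.archive.setdefault(t.station, []).extend(t.transmit())

suwanose = Terminal("Suwanosejima")
suwanose.acquire("seismic", 0.8)
suwanose.acquire("infrasonic", 1.2)
host = Host([suwanose])
host.poll_all()
print(host.archive["Suwanosejima"])   # buffered readings, now archived
print(suwanose.buffer)                # emptied after transmission
```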
13. Vertical Motions of Oceanic Volcanoes
Science.gov (United States)
Clague, D. A.; Moore, J. G.
2006-12-01
Oceanic volcanoes offer abundant evidence of changes in their elevations through time. Their large-scale motions begin with a period of rapid subsidence lasting hundreds of thousands of years caused by isostatic compensation of the added mass of the volcano on the ocean lithosphere. The response begins within thousands of years and lasts as long as the active volcano keeps adding mass on the ocean floor. Downward flexure caused by volcanic loading creates troughs around the growing volcanoes that eventually fill with sediment. Seismic surveys show that the overall depression of the old ocean floor beneath Hawaiian volcanoes such as Mauna Loa is about 10 km. This gross subsidence means that the drowned shorelines record only a small part of the total subsidence the islands experienced. In Hawaii, this history is recorded by long-term tide-gauge data, by the depths in drill holes of subaerial lava flows and soil horizons, and by former shorelines presently located below sea level. Offshore Hawaii, a series of at least 7 drowned reefs and terraces records subsidence of about 1325 m during the last half million years. Older sequences of drowned reefs and terraces define the early rapid phase of subsidence of Maui, Molokai, Lanai, Oahu, Kauai, and Niihau. Volcanic islands, such as Maui, tip down toward the next younger volcano as it begins rapid growth and subsidence. Such tipping results in drowned reefs on Haleakala as deep as 2400 m where they are tipped towards Hawaii. Flat-topped volcanoes on submarine rift zones also record this tipping towards the next younger volcano. This early rapid subsidence phase is followed by a period of slow subsidence lasting for millions of years caused by thermal contraction of the aging ocean lithosphere beneath the volcano. The well-known evolution along the Hawaiian chain from high to low volcanic island, to coral island, and to guyot is due to this process. This history of rapid and then slow subsidence is interrupted by a period of minor uplift
14. Chiliques volcano, Chile
Science.gov (United States)
2002-01-01
A January 6, 2002 ASTER nighttime thermal infrared image of Chiliques volcano in Chile shows a hot spot in the summit crater and several others along the upper flanks of the edifice, indicating new volcanic activity. Examination of an earlier nighttime thermal infrared image from May 24, 2000 showed no thermal anomaly. Chiliques volcano was previously thought to be dormant. Rising to an elevation of 5778 m, Chiliques is a simple stratovolcano with a 500-m-diameter circular summit crater. This mountain is one of the most important high altitude ceremonial centers of the Incas. It is rarely visited due to its difficult accessibility. Climbing to the summit along Inca trails, numerous ruins are encountered; at the summit there are a series of constructions used for rituals. There is a beautiful lagoon in the crater that is almost always frozen. The daytime image was acquired on November 19, 2000 and was created by displaying ASTER bands 1, 2 and 3 in blue, green and red. The nighttime image was acquired January 6, 2002, and is a color-coded display of a single thermal infrared band. The hottest areas are white, and colder areas are darker shades of red. Both images cover an area of 7.5 x 7.5 km, and are centered at 23.6 degrees south latitude, 67.6 degrees west longitude. These images were acquired by the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on NASA's Terra satellite. With its 14 spectral bands from the visible to the thermal infrared wavelength region, and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER will image Earth for the next 6 years to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U
15. Perennial Environment Observatory
International Nuclear Information System (INIS)
Plas, Frederic
2014-07-01
The Perennial Environment Observatory [Observatoire Perenne de l'Environnement - OPE] is a unique approach and infrastructure developed and implemented by ANDRA, the French National Radioactive Waste Management Agency, as part of its overall project of deep geological disposal for radioactive waste. Its current mission is to assess the initial state of the rural (forest, pasture, open-field and aquatic) environment, prior to repository construction. This will be followed in 2017 (pending construction authorizations) and for a period exceeding a century, by monitoring of any impact the repository may have on the environment. In addition to serving its own industrial purpose of environmental monitoring, ANDRA also opens the OPE approach, infrastructure and acquired knowledge (database...) to the scientific community to support further research on long term evolution of the environment subjected to natural and anthropogenic stresses, and to contribute to a better understanding of the interaction between the various compartments of the environment
16. Sudbury Neutrino Observatory
International Nuclear Information System (INIS)
Beier, E.W.
1992-03-01
This document is a technical progress report on work performed at the University of Pennsylvania during the current year on the Sudbury Neutrino Observatory project. The motivation for the experiment is the measurement of neutrinos emitted by the sun. The Sudbury Neutrino Observatory (SNO) is a second generation dedicated solar neutrino experiment which will extend the results of our work with the Kamiokande II detector by measuring three reactions of neutrinos rather than the single reaction measured by the Kamiokande experiment. The collaborative project includes physicists from Canada, the United Kingdom, and the United States. Full funding for the construction of this facility was obtained in January 1990, and its construction is estimated to take five years. The motivation for the SNO experiment is to study the fundamental properties of neutrinos, in particular the mass and mixing parameters, which remain undetermined after decades of experiments in neutrino physics utilizing accelerators and reactors as sources of neutrinos. To continue the study of neutrino properties it is necessary to use the sun as a neutrino source. The long distance to the sun makes the search for neutrino mass sensitive to much smaller mass than can be studied with terrestrial sources. Furthermore, the matter density in the sun is sufficiently large to enhance the effects of small mixing between electron neutrinos and mu or tau neutrinos. This experiment, when combined with the results of the radiochemical 37 Cl and 71 Ga experiments and the Kamiokande II experiment, should extend our knowledge of these fundamental particles, and as a byproduct, improve our understanding of energy generation in the sun
17. The upgrade of the HAWC observatory
Energy Technology Data Exchange (ETDEWEB)
Schoorlemmer, Harm [Max-Plank-Institut fuer Kernphysik, Heidelberg (Germany); Collaboration: HAWC-Collaboration
2016-07-01
The High Altitude Water Cherenkov (HAWC) high-energy gamma-ray observatory has recently been completed near the Sierra Negra volcano in central Mexico. HAWC consists of 300 Water Cherenkov Detectors, each containing 200 tons of purified water, that cover a total surface area of 20,000 m². HAWC observes gamma rays in the 0.1-100 TeV range and has a sensitivity to TeV-scale gamma-ray sources an order of magnitude better than previous air-shower arrays. The HAWC trigger for the highest energy gamma rays reaches an effective area of 10⁵ m², but many of these showers are poorly reconstructed because the shower core falls outside the array. An upgrade that increases the present fraction of well reconstructed showers above 10 TeV by a factor of 3-4 can be achieved with a sparse outrigger array of small water Cherenkov detectors that pinpoint the core position and thereby improve the angular resolution of the reconstructed showers. Such an outrigger array would consist of on the order of 300 small water Cherenkov detectors of 2.5 m³ placed over an area four times larger than HAWC. The Max-Planck-Institut fuer Kernphysik in Heidelberg has just joined the collaboration and will provide the FADC electronics for the readout of the outrigger tanks. Detailed simulations are being performed to optimize the performance of the upgrade.
18. Global Volcano Mortality Risks and Distribution
Data.gov (United States)
National Aeronautics and Space Administration — Global Volcano Mortality Risks and Distribution is a 2.5 minute grid representing global volcano mortality risks. The data set was constructed using historical...
19. Health observatories in iran.
Science.gov (United States)
Rashidian, A; Damari, B; Larijani, B; Vosoogh Moghadda, A; Alikhani, S; Shadpour, K; Khosravi, A
2013-01-01
The Islamic Republic of Iran, in her 20-year vision, is by the year 2025 a developed country with the first economic, scientific and technological status in the region, with revolutionary and Islamic identity, inspiring the Islamic world, as well as effective and constructive interaction in international relations. Enjoying health, welfare, food security, social security, equal opportunities, fair income distribution and a strong family structure; being free from poverty, corruption and discrimination; and benefiting from a desirable living environment are also among the characteristics of Iranian society in that year. Strategic leadership towards the perceived vision in each setting requires complete and timely information. According to the constitution of the National Institute for Health Researches, the law of the Fifth Development Plan of the country and the characteristics of health policy making, the necessity of designing a Health Observatory System (HOS) was felt. Some principles for designing such a system were formulated by taking the following steps: reviewing experience in other countries, keeping the local history of the HOS in mind, consulting superior documents, analysing the current production and management of health information, and taking into account the possibilities of running a HOS. Based on these principles, the protocol of the HOS was outlined in three stages of opinion polls of informed experts responsible for the production and management of information, using questionnaires and Focus Group Discussions. The protocol includes executive regulations, the list of health indicators, vocabulary and a calendar for periodic studies of the community health situation.
20. The Sudbury Neutrino Observatory
International Nuclear Information System (INIS)
Norman, E.B.; Chan, Y.D.; Garcia, A.; Lesko, K.T.; Smith, A.R.; Stokstad, R.G.; Zlimen, I.; Evans, H.C.; Ewan, G.T.; Hallin, A.; Lee, H.W.; Leslie, J.R.; MacArthur, J.D.; Mak, H.B.; McDonald, A.B.; McLatchie, W.; Robertson, B.C.; Skensved, P.; Sur, B.; Jagam, P.; Law, J.; Ollerhead, R.W.; Simpson, J.J.; Wang, J.X.; Tanner, N.W.; Jelley, N.A.; Barton, J.C.; Doucas, G.; Hooper, E.W.; Knox, A.B.; Moorhead, M.E.; Omori, M.; Trent, P.T.; Wark, D.L.
1992-11-01
Two experiments now in progress have reported measurements of the flux of high energy neutrinos from the Sun. Since about 1970, Davis and his co-workers have been using a 37 Cl-based detector to measure the 7 Be and 8 B solar neutrino flux and have found it to be at least a factor of three lower than that predicted by the Standard Solar Model (SSM). The Kamiokande collaboration has been taking data since 1986 using a large light-water Cerenkov detector and has confirmed that the flux is about two times lower than predicted. Recent results from the SAGE and GALLEX gallium-based detectors show that there is also a deficit of the low energy pp solar neutrinos. These discrepancies between experiment and theory could arise because of inadequacies in the theoretical models of solar energy generation or because of previously unobserved properties of neutrinos. The Sudbury Neutrino Observatory (SNO) will provide the information necessary to decide which of these solutions to the ''solar neutrino problem'' is correct
1. Relative chronology of Martian volcanoes
International Nuclear Information System (INIS)
Landheim, R.; Barlow, N.G.
1991-01-01
Impact cratering is one of the major geological processes that has affected the Martian surface throughout the planet's history. The frequency of craters within particular size ranges provides information about the formation ages and obliterative episodes of Martian geologic units. The Barlow chronology was extended by measuring small craters on the volcanoes and a number of standard terrain units. Inclusions of smaller craters in units previously analyzed by Barlow allowed for a more direct comparison between the size-frequency distribution data for volcanoes and established chronology. During this study, 11,486 craters were mapped and identified in the 1.5 to 8 km diameter range in selected regions of Mars. The results are summarized in this three page report and give a more precise estimate of the relative chronology of the Martian volcanoes. Also, the results of this study lend further support to the increasing evidence that volcanism has been a dominant geologic force throughout Martian history
2. Systematic radon survey over active volcanoes
Energy Technology Data Exchange (ETDEWEB)
Seidel, J.L.; Monnin, M.; Garcia Vindas, J.R. [Centre National de la Recherche Cientifique, Montpellier (France). Lab. GBE; Ricard, L.P.; Staudacher, T. [Observatoire Volcanologique Du Pitou de la Fournaise, La Plaine des Cafres (France)
1999-08-01
Data obtained since 1993 on Costa Rica volcanoes are presented and radon anomalies recorded before the eruption of the Irazu volcano (December 8, 1994) are discussed. The Piton de la Fournaise volcano has been inactive since mid-1992. The influence of external parameters on radon behaviour is studied and the types of perturbation induced on short-term measurements are identified.
3. Multiphase modelling of mud volcanoes
Science.gov (United States)
Colucci, Simone; de'Michieli Vitturi, Mattia; Clarke, Amanda B.
2015-04-01
Mud volcanism is a worldwide phenomenon, classically considered as the surface expression of piercement structures rooted in deep-seated over-pressured sediments in compressional tectonic settings. The release of fluids at mud volcanoes during repeated explosive episodes has been documented at numerous sites and the outflows resemble the eruption of basaltic magma. Like magma, the material erupted from a mud volcano becomes more fluid and degasses while rising and decompressing. The release of gases from mud volcanism is estimated to be a significant contributor both to fluid flux from the lithosphere to the hydrosphere, and to the atmospheric budget of some greenhouse gases, particularly methane. For these reasons, we simulated the fluid dynamics of mud volcanoes using a newly-developed compressible multiphase and multidimensional transient solver in the OpenFOAM framework, taking into account the multicomponent nature (CH4, CO2, H2O) of the fluid mixture, the gas exsolution during the ascent and the associated changes in the constitutive properties of the phases. The numerical model has been tested with conditions representative of LUSI, a mud volcano that has been erupting since May 2006 in the densely populated Sidoarjo regency (East Java, Indonesia), forcing the evacuation of 40,000 people and destroying industry, farmland, and over 10,000 homes. The activity of the LUSI mud volcano has been well documented (Vanderkluysen et al., 2014) and here we present a comparison of observed gas fluxes and mud extrusion rates with the outcomes of numerical simulations. Vanderkluysen, L.; Burton, M. R.; Clarke, A. B.; Hartnett, H. E. & Smekens, J.-F. Composition and flux of explosive gas release at LUSI mud volcano (East Java, Indonesia) Geochem. Geophys. Geosyst., Wiley-Blackwell, 2014, 15, 2932-2946
4. Sea floor magnetic observatory
Science.gov (United States)
Korepanov, V.; Prystai, A.; Vallianatos, F.; Makris, J.
2003-04-01
Electromagnetic precursors of seismic hazards are widely accepted as strong evidence of an approaching earthquake or volcanic eruption. Monitoring of these precursors is of greatest interest in densely populated areas, where strong industrial background noise makes them difficult to extract. One promising way to improve the signal-to-noise ratio is to install observation points in shelf zones near likely earthquake locations, which is feasible in most seismically active areas of Europe, e.g. in Greece and Italy. A serious restriction is the cost of underwater instrumentation. Realizing such experiments has required either pooling the efforts of several countries (e.g., GEOSTAR) or the funds of large companies (e.g., the SIO magnetotelluric instrument). Progress in electronic components, as well as the appearance of inexpensive watertight glass spheres, has made it possible to decrease drastically the price of recently developed sea-floor magnetic stations. The autonomous vector magnetometer LEMI-301 for sea-bed application is described in the report. It is based on a three-component flux-gate sensor. A non-magnetic housing and the minimal magnetism of the electronic components enable the instrument to be implemented as a monoblock construction in which the electronic unit is placed close to the sensor. An automatic circuit provides convenient compensation of the initial field offset and readings of the full value (6 digits) of the measured field. Timing by an internal clock provides highly accurate synchronization of data. Internal flash memory assures long-term autonomous data storage. The system also has a two-axis tilt measurement system. The methodological questions of magnetometer operation on the sea bed were studied in order to avoid two types of errors appearing in such experiments. The first is the influence of sea waving, and the second is magnetometer orientation at its random positioning on
5. Digital Data for Volcano Hazards in the Mount Jefferson Region, Oregon
Science.gov (United States)
Schilling, S.P.; Doelger, S.; Walder, J.S.; Gardner, C.A.; Conrey, R.M.; Fisher, B.J.
2008-01-01
Mount Jefferson has erupted repeatedly for hundreds of thousands of years, with its last eruptive episode during the last major glaciation which culminated about 15,000 years ago. Geologic evidence shows that Mount Jefferson is capable of large explosive eruptions. The largest such eruption occurred between 35,000 and 100,000 years ago. If Mount Jefferson erupts again, areas close to the eruptive vent will be severely affected, and even areas tens of kilometers (tens of miles) downstream along river valleys or hundreds of kilometers (hundreds of miles) downwind may be at risk. Numerous small volcanoes occupy the area between Mount Jefferson and Mount Hood to the north, and between Mount Jefferson and the Three Sisters region to the south. These small volcanoes tend not to pose the far-reaching hazards associated with Mount Jefferson, but are nonetheless locally important. A concern at Mount Jefferson, but not at the smaller volcanoes, is the possibility that small-to-moderate sized landslides could occur even during periods of no volcanic activity. Such landslides may transform as they move into lahars (watery flows of rock, mud, and debris) that can inundate areas far downstream. The geographic information system (GIS) volcano hazard data layer used to produce the Mount Jefferson volcano hazard map in USGS Open-File Report 99-24 (Walder and others, 1999) is included in this data set. Both proximal and distal hazard zones were delineated by scientists at the Cascades Volcano Observatory and depict various volcano hazard areas around the mountain.
6. Alaska volcanoes guidebook for teachers
Science.gov (United States)
2011-01-01
7. The Sudbury neutrino observatory
International Nuclear Information System (INIS)
McLatchie, W.; Earle, E.D.
1987-08-01
This report initially discusses the Homestake Mine experiment, South Dakota, U.S.A., which has been detecting neutrinos in 38 x 10^4 litre vats of cleaning fluid containing chlorine since the 1960s. The interaction between neutrinos and chlorine produces argon, so the number of neutrinos over time can be calculated. However, the number of neutrinos which have been detected represents only one third to one quarter of the expected number, i.e. 11 per month rather than 48. It is postulated that the electron-neutrinos originating in the solar core could change into muon- or tau-neutrinos during passage through the high electron densities of the sun. The 'low' results at Homestake could thus be explained by the fact that the experiment is only sensitive to electron-neutrinos. The construction of a heavy water detector is therefore proposed, as it would be able to determine the energy of the neutrinos, their time of arrival at the detector and their direction. It is proposed to build the detector at Creighton mine near Sudbury at a depth of 6800 feet below ground level, thus shielding the detector from cosmic rays which would completely obscure the neutrino signals from the detector. The report then discusses the facility itself, the budget estimate and the social and economic impact on the surrounding area. At the time of publication the proposal for the Sudbury Neutrino Observatory was due to be submitted for peer review by Oct. 1, 1987 and then to various granting bodies charged with the funding of scientific research in Canada, the U.S.A. and Britain
8. An astronomical observatory for Peru
Science.gov (United States)
del Mar, Juan Quintanilla; Sicardy, Bruno; Giraldo, Víctor Ayma; Callo, Víctor Raúl Aguilar
2011-06-01
Peru and France are to conclude an agreement to provide Peru with an astronomical observatory equipped with a 60-cm diameter telescope. The principal aims of this project are to establish and develop research and teaching in astronomy. Since 2004, a team of researchers from Paris Observatory has been working with the University of Cusco (UNSAAC) on the educational, technical and financial aspects of implementing this venture. During an international astronomy conference in Cusco in July 2009, the foundation stone of the future Peruvian Observatory was laid at the top of Pachatusan Mountain. UNSAAC, represented by its Rector, together with the town of Oropesa and the Cusco regional authority, undertook to make the sum of 300,000€ available to the project. An agreement between Paris Observatory and UNSAAC now enables Peruvian students to study astronomy through online teaching.
9. Astronomical databases of Nikolaev Observatory
Science.gov (United States)
Protsyuk, Y.; Mazhaev, A.
2008-07-01
Several astronomical databases have been created at Nikolaev Observatory in recent years. The databases are built using the MySQL search engine and PHP scripts. They are available on the NAO web-site http://www.mao.nikolaev.ua.
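The MySQL-plus-scripts pattern described in this record can be sketched minimally. The code below uses Python's built-in sqlite3 as a stand-in for MySQL, and the table name, columns and sample rows are all hypothetical; the point is the parameterized search query a PHP front end would issue against such a database.

```python
import sqlite3

# Minimal sketch of the search pattern described above, using SQLite as
# a stand-in for MySQL; table, columns and rows are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (star TEXT, ra REAL, dec REAL)")
conn.executemany(
    "INSERT INTO observations VALUES (?, ?, ?)",
    [("HD 1", 0.5, -10.2), ("HD 2", 180.3, 45.0)],
)
# A web front end would issue a parameterized query much like this one.
rows = conn.execute(
    "SELECT star FROM observations WHERE ra BETWEEN ? AND ?", (0, 90)
).fetchall()
print(rows)   # stars in the requested right-ascension range
```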
10. Geomagnetic Observatory Database February 2004
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA National Centers for Environmental Information (formerly National Geophysical Data Center) maintains an active database of worldwide geomagnetic observatory...
11. The South African astronomical observatory
International Nuclear Information System (INIS)
Feast, M.
1985-01-01
A few examples of the activities of the South African Astronomical Observatory are discussed. This includes the studying of stellar evolution, dust around stars, the determination of distances to galaxies and collaboration with space experiments
12. The South African Astronomical Observatory
International Nuclear Information System (INIS)
1988-01-01
The geographical position, climate and equipment at the South African Astronomical Observatory (SAAO), together with the enthusiasm and efforts of SAAO scientific and technical staff and of visiting scientists, have enabled the Observatory to make a major contribution to the fields of astrophysics and cosmology. During 1987 the SAAO has been involved in studies of the following: supernovae; galaxies, including Seyfert galaxies; celestial x-ray sources; magellanic clouds; pulsating variables; galatic structure; binary star phenomena; nebulae; interstellar matter and stellar astrophysics
13. Laboratory volcano geodesy
Science.gov (United States)
Færøvik Johannessen, Rikke; Galland, Olivier; Mair, Karen
2014-05-01
intrusion can be excavated and photographed from several angles to compute its 3D shape with the same photogrammetry method. Then, the surface deformation pattern can be directly compared with the shape of underlying intrusion. This quantitative dataset is essential to quantitatively test and validate classical volcano geodetic models.
14. Standardisation of the USGS Volcano Alert Level System (VALS): analysis and ramifications
Science.gov (United States)
Fearnley, C. J.; McGuire, W. J.; Davies, G.; Twigg, J.
2012-11-01
The standardisation of volcano early warning systems (VEWS) and volcano alert level systems (VALS) is becoming increasingly common at both the national and international level, most notably following UN endorsement of the development of globally comprehensive early warning systems. Yet, the impact on its effectiveness, of standardising an early warning system (EWS), in particular for volcanic hazards, remains largely unknown and little studied. This paper examines this and related issues through evaluation of the emergence and implementation, in 2006, of a standardised United States Geological Survey (USGS) VALS. Under this upper-management directive, all locally developed alert level systems or practices at individual volcano observatories were replaced with a common standard. Research conducted at five USGS-managed volcano observatories in Alaska, Cascades, Hawaii, Long Valley and Yellowstone explores the benefits and limitations this standardisation has brought to each observatory. The study concludes (1) that the process of standardisation was predominantly triggered and shaped by social, political, and economic factors, rather than in response to scientific needs specific to each volcanic region; and (2) that standardisation is difficult to implement for three main reasons: first, the diversity and uncertain nature of volcanic hazards at different temporal and spatial scales require specific VEWS to be developed to address this and to accommodate associated stakeholder needs. Second, the plural social contexts within which each VALS is embedded present challenges in relation to its applicability and responsiveness to local knowledge and context. Third, the contingencies of local institutional dynamics may hamper the ability of a standardised VALS to effectively communicate a warning. Notwithstanding these caveats, the concept of VALS standardisation clearly has continuing support. As a consequence, rather than advocating further commonality of a standardised
15. A framework for cross-observatory volcanological database management
Science.gov (United States)
Aliotta, Marco Antonio; Amore, Mauro; Cannavò, Flavio; Cassisi, Carmelo; D'Agostino, Marcello; Dolce, Mario; Mastrolia, Andrea; Mangiagli, Salvatore; Messina, Giuseppe; Montalto, Placido; Fabio Pisciotta, Antonino; Prestifilippo, Michele; Rossi, Massimo; Scarpato, Giovanni; Torrisi, Orazio
2017-04-01
In recent years, it has been clearly shown that the multiparametric approach is the winning strategy for investigating the complex dynamics of volcanic systems. This involves the use of different sensor networks, each one dedicated to the acquisition of particular data useful for research and monitoring. The increasing interest devoted to the study of volcanological phenomena led to the constitution of different research organizations or observatories, some covering the same volcanoes, which acquire large amounts of data from sensor networks for multiparametric monitoring. At INGV we developed a framework, hereinafter called TSDSystem (Time Series Database System), which allows acquisition of data streams from several geophysical and geochemical permanent sensor networks (also represented by different data sources such as ASCII, ODBC, URL etc.), located in the main volcanic areas of Southern Italy, and relates them within a relational database management system. Furthermore, spatial data related to different datasets are managed using a GIS module for sharing and visualization purposes. The standardization provides the ability to perform operations, such as query and visualization, on many measures, synchronizing them using a common space and time scale. In order to share data between INGV observatories, and also with Civil Protection, whose activity concerns the same volcanic districts, we designed a "Master View" system that, starting from the implementation of a number of instances of the TSDSystem framework (one for each observatory), makes possible the joint interrogation of data, both temporal and spatial, on instances located at different observatories, through the use of web services technology (RESTful, SOAP). Similarly, it provides metadata for equipment using standard schemas (such as FDSN StationXML). The "Master View" is also responsible for managing the data policy through a "who owns what" system, which allows you to associate viewing/download of
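The synchronization onto a common time scale that this abstract describes can be illustrated with a small sketch. This is not TSDSystem code; the function name and the sample streams are hypothetical, and linear interpolation is just one reasonable choice for placing heterogeneous streams on a shared grid.

```python
import numpy as np

# Illustrative sketch (not TSDSystem code): place two sensor streams on
# a common time scale by linear interpolation onto a shared grid.
def to_common_scale(t_grid, t_obs, values):
    return np.interp(t_grid, t_obs, values)

grid = np.arange(0.0, 10.0, 2.0)            # shared time axis, seconds
seismic = to_common_scale(grid, [0, 5, 10], [1.0, 3.0, 1.0])
geochem = to_common_scale(grid, [0, 10], [10.0, 20.0])
# Both series now share `grid`, so joint queries and plots line up
# sample by sample, as in the standardized joint interrogation above.
print(np.column_stack([grid, seismic, geochem]))
```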
16. What Happened to Our Volcano?
Science.gov (United States)
Mangiante, Elaine Silva
2006-01-01
In this article, the author presents an investigative approach to "understanding Earth changes." The author states that students were familiar with earthquakes and volcanoes in other regions of the world but never considered how the land beneath their feet had experienced changes over time. Here, their geology unit helped them understand…
17. The Carl Sagan solar and stellar observatories as remote observatories
Science.gov (United States)
Saucedo-Morales, J.; Loera-Gonzalez, P.
In this work we summarize recent efforts made by the University of Sonora, with the goal of expanding the capability for remote operation of the Carl Sagan Solar and Stellar Observatories, as well as the first steps that have been taken in order to achieve autonomous robotic operation in the near future. The solar observatory was established in 2007 on the university campus by our late colleague A. Sánchez-Ibarra. It consists of four solar telescopes mounted on a single equatorial mount. On the other hand, the stellar observatory, which saw first light on 16 February 2010, is located 21 km from Hermosillo, Sonora at the site of the School of Agriculture of the University of Sonora. Both observatories can now be remotely controlled, and to some extent are able to operate autonomously. In this paper we discuss how this has been accomplished in terms of the software used as well as the instruments under control. We also briefly discuss the main scientific and educational objectives, the future plans to improve the control software and to construct an autonomous observatory on a mountain site, as well as the opportunities for collaborations.
18. The Observatory as Laboratory: Spectral Analysis at Mount Wilson Observatory
Science.gov (United States)
Brashear, Ronald
2018-01-01
This paper will discuss the seminal changes in astronomical research practices made at the Mount Wilson Observatory in the early twentieth century by George Ellery Hale and his staff. Hale’s desire to set the agenda for solar and stellar astronomical research is often described in terms of his new telescopes, primarily the solar tower observatories and the 60- and 100-inch telescopes on Mount Wilson. This paper will focus more on the ancillary but no less critical parts of Hale’s research mission: the establishment of associated “physical” laboratories as part of the observatory complex where observational spectral data could be quickly compared with spectra obtained using specialized laboratory equipment. Hale built a spectroscopic laboratory on the mountain and a more elaborate physical laboratory in Pasadena and staffed it with highly trained physicists, not classically trained astronomers. The success of Hale’s vision for an astronomical observatory quickly made the Carnegie Institution’s Mount Wilson Observatory one of the most important astrophysical research centers in the world.
19. Morphometry of terrestrial shield volcanoes
Science.gov (United States)
Grosse, Pablo; Kervyn, Matthieu
2018-03-01
Shield volcanoes are described as low-angle edifices built primarily by the accumulation of successive lava flows. This generic view of shield volcano morphology is based on a limited number of monogenetic shields from Iceland and Mexico, and a small set of large oceanic islands (Hawaii, Galápagos). Here, the morphometry of 158 monogenetic and polygenetic shield volcanoes is analyzed quantitatively from 90-meter resolution SRTM DEMs using the MORVOLC algorithm. An additional set of 24 lava-dominated 'shield-like' volcanoes, considered so far as stratovolcanoes, are documented for comparison. Results show that there is a large variation in shield size (volumes from 0.1 to > 1000 km3), profile shape (height/basal width (H/WB) ratios mostly from 0.01 to 0.1), flank slope gradients (average slopes mostly from 1° to 15°), elongation and summit truncation. Although there is no clear-cut morphometric difference between shield volcanoes and stratovolcanoes, an approximate threshold can be drawn at 12° average slope and 0.10 H/WB ratio. Principal component analysis of the obtained database enables the identification of four key morphometric descriptors: size, steepness, plan shape and truncation. Hierarchical cluster analysis of these descriptors results in 12 end-member shield types, with intermediate cases defining a continuum of morphologies. The shield types can be linked in terms of growth stages and shape evolution, related to (1) magma composition and rheology, effusion rate and lava/pyroclast ratio, which will condition edifice steepness; (2) spatial distribution of vents, in turn related to the magmatic feeding system and the tectonic framework, which will control edifice plan shape; and (3) caldera formation, which will condition edifice truncation.
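The principal component step described in this abstract can be sketched on a toy table of morphometric descriptors. The four rows below (volume, average slope, H/WB ratio) are invented values, not the study's 158-volcano database, and the eigendecomposition route is just one standard way to compute PCA.

```python
import numpy as np

# Illustrative PCA on a tiny synthetic descriptor table: columns are
# volume (km3), average slope (deg), and H/W ratio. Values are made up.
X = np.array([
    [0.5,   3.0, 0.020],
    [10.0,  8.0, 0.050],
    [300.0, 14.0, 0.090],
    [1.0,   2.0, 0.015],
])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each descriptor
cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # PCA via eigendecomposition
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()   # variance fraction per component
print(explained)  # size/steepness/ratio are correlated, so PC1 dominates
```

With real data one would retain the leading components and feed the scores into hierarchical clustering, as the study does.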
20. Body Wave and Ambient Noise Tomography of Makushin Volcano, Alaska
Science.gov (United States)
Lanza, F.; Thurber, C. H.; Syracuse, E. M.; Ghosh, A.; LI, B.; Power, J. A.
2017-12-01
Located in the eastern portion of the Alaska-Aleutian subduction zone, Makushin Volcano is among the most active volcanoes in the United States and has been classified as high threat based on eruptive history and proximity to the City of Unalaska and international air routes. In 2015, five individual seismic stations and three mini seismic arrays of 15 stations each were deployed on Unalaska island to supplement the Alaska Volcano Observatory (AVO) permanent seismic network. This temporary array was operational for one year. Taking advantage of the increased azimuthal coverage and the array's increased earthquake detection capability, we developed body-wave Vp and Vp/Vs seismic images of the velocity structure beneath the volcano. Body-wave tomography results show a complex structure with the upper 5 km of the crust dominated by both positive and negative Vp anomalies. The shallow high-Vp features possibly delineate remnant magma pathways or conduits. Low-Vp regions are found east of the caldera at approximately 6-9 km depth. This is in agreement with previous tomographic work and geodetic models, obtained using InSAR data, which had identified this region as a possible long-term source of magma. We also observe a high Vp/Vs feature extending between 7 and 12 km depth below the caldera, possibly indicating partial melting, although the resolution is diminished at these depths. The distributed stations allow us to further complement body-wave tomography with ambient noise imaging and to obtain higher-quality Vs images. Our data processing includes single-station data preparation and station-pair cross-correlation steps (Bensen et al., 2007), and the use of the phase-weighted stacking method (Schimmel and Gallart, 2007) to improve the signal-to-noise ratio of the cross-correlations. We will show surface-wave dispersion curves, group velocity maps, and ultimately a 3D Vs image. By performing both body wave and ambient noise tomography, we provide a high
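The station-pair cross-correlation step mentioned in this abstract can be illustrated with a toy example: two stations record the same random wavefield, one with a propagation delay, and the peak of their cross-correlation recovers that delay. This is entirely illustrative and not the authors' processing chain (which follows Bensen et al., 2007, with phase-weighted stacking).

```python
import numpy as np

rng = np.random.default_rng(42)
N, delay = 2048, 37                    # samples; true inter-station lag
wavefield = rng.standard_normal(N)
sta_a = wavefield                      # station A: the raw wavefield
sta_b = np.concatenate([np.zeros(delay), wavefield[:-delay]])  # delayed copy

xcorr = np.correlate(sta_b, sta_a, mode="full")
recovered = int(np.argmax(xcorr)) - (N - 1)   # lag of the correlation peak
print(recovered)                               # equals the true delay, 37
```

In real ambient-noise tomography, many days of correlations are stacked, and the emergent inter-station Green's function yields surface-wave dispersion rather than a single lag.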
1. Catalog of earthquake hypocenters at Redoubt Volcano and Mt. Spurr, Alaska: October 12, 1989 - December 31, 1990
Science.gov (United States)
Power, John A.; March, Gail D.; Lahr, John C.; Jolly, Arthur D.; Cruse, Gina R.
1993-01-01
The Alaska Volcano Observatory (AVO), a cooperative program of the U.S. Geological Survey, the Geophysical Institute of the University of Alaska, Fairbanks, and the Alaska Division of Geological and Geophysical Surveys, began a program of seismic monitoring at potentially active volcanoes in the Cook Inlet region in 1988. Seismic monitoring of this area was previously accomplished by two independent seismic networks operated by the U.S. Geological Survey (Northern Cook Inlet) and the Geophysical Institute (Southern Cook Inlet). In 1989 the AVO seismic program consisted of three small-aperture networks of six, five, and six stations on Mt. Spurr, Redoubt Volcano, and Augustine Volcano respectively. Thirty-five other stations were operated in the Cook Inlet region as part of the AVO program. During 1990 six additional stations were added to the Redoubt network in response to eruptive activity, and three stations were installed at Iliamna Volcano. The principal objectives of the AVO program have been the seismic surveillance of the Cook Inlet volcanoes and the investigation of seismic processes associated with active volcanism.
2. Taurus Hill Observatory Scientific Observations for Pulkova Observatory during the 2016-2017 Season
Science.gov (United States)
Hentunen, V.-P.; Haukka, H.; Heikkinen, E.; Salmi, T.; Juutilainen, J.
2017-09-01
Taurus Hill Observatory (THO), observatory code A95, is an amateur observatory located in Varkaus, Finland. The observatory is maintained by the local astronomical association Warkauden Kassiopeia. The THO research team has observed and measured various stellar objects and phenomena. The observatory has mainly focused on exoplanet light curve measurements, gamma-ray burst observations, and supernova discovery and monitoring. We also run long-term monitoring projects.
3. Observatory response to a volcanic crisis: the Campi Flegrei simulation exercise
Science.gov (United States)
Papale, Paolo; De Natale, Giuseppe
2015-04-01
In February 2014 a simulation exercise was conducted at Campi Flegrei, Italy, in order to test the scientific response capabilities and the effectiveness of communication with Civil Protection authorities. The simulation was organized in the frame of the EU-VUELCO project, and involved the participation of the Osservatorio Vesuviano of INGV (INGV-OV), supported by other INGV scientists with specific relevant competencies, and the Italian Civil Protection, which was assisted by an expert team formed by selected experts from Italian academia and by VUELCO scientists from several EU and Latin American countries. The simulation included a previously appointed group of four volcanologists covering a range of expertise in volcano seismology, geodesy and geochemistry, with experience both on the Campi Flegrei system and on other volcanic systems and crises in the world. The duty of this 'volcano team' was to produce consistent sets of signals, which were sent to INGV-OV at the beginning of each simulation phase. In turn, the observatory's task was to i) immediately communicate the relevant observations to the Civil Protection; ii) analyze the synthetic signals and observations and extract a consistent picture and interpretation, including the analysis and quantification of uncertainties; and iii) organize all the information produced into a bulletin, which was sent to the Civil Protection at the end of each simulation phase and which contained, according to established national agreements, a) the information available, and b) its interpretation, including forecasts on the possible medium- to short-term evolution. The test included four simulation phases and was blind, as only the volcano team knew the evolution and the final outcome; the volcano team was located at the INGV buildings in Rome, far from INGV-OV in Naples and from the Civil Protection Dept., also in Rome, with no contact with either of them for the entire duration of the simulation. In this
4. GEOSCOPE Observatory Recent Developments
Science.gov (United States)
Leroy, N.; Pardo, C.; Bonaime, S.; Stutzmann, E.; Maggi, A.
2010-12-01
The GEOSCOPE observatory consists of a global seismic network and a data center. The 31 GEOSCOPE stations are installed in 19 countries, across all continents and on islands throughout the oceans. They are equipped with three-component very broadband seismometers (STS1 or STS2) and 24- or 26-bit digitizers, as required by the Federation of Digital Seismograph Networks (FDSN). In most stations, a pressure gauge and a thermometer are also installed. Currently, 23 stations send data in real or near real time to the GEOSCOPE Data Center and tsunami warning centers. In 2009, two stations (SSB and PPTF) were equipped with warpless base plates. Analysis of one year of data shows that the new installation decreases long-period noise (20 s to 1000 s) by 10 dB on horizontal components. SSB is now rated in the top ten long-period stations for horizontal components according to the LDEO criteria. In 2010, stations COYC, PEL and RER were upgraded with Q330HR digitizers, Metrozet electronics and warpless base plates. They were calibrated with the calibration table CT-EW1 and the software jSeisCal and Calex-EW. Aluminum jars are now installed instead of glass bells. A vacuum of 100 mbar is applied in the jars, which improves thermal insulation of the seismometers and reduces moisture and long-term corrosion in the sensor. A new station, RODM, has just been installed on Rodrigues Island in Mauritius with the standard GEOSCOPE STS2 setup: an STS2 seismometer on a granite base plate covered by a cooking pot and thermal insulation, connected to a Q330HR digitizer, active lightning protection, a Seiscomp PC and a real-time internet connection. Continuous data from all stations are collected in real time or with a delay by the GEOSCOPE Data Center in Paris, where they are validated, archived and made available to the international scientific community. Data are freely available to users through different interfaces according to data type (see http://geoscope.ipgp.fr) - Continuous data in real time coming
5. Griffith Observatory: Hollywood's Celestial Theater
Science.gov (United States)
Margolis, Emily A.; Dr. Stuart W. Leslie
2018-01-01
The Griffith Observatory, perched atop the Hollywood Hills, is perhaps the most recognizable observatory in the world. Since opening in 1935, this Los Angeles icon has brought millions of visitors closer to the heavens. Through an analysis of planning documentation, internal newsletters, media coverage, programming and exhibition design, I demonstrate how the Observatory’s Southern California location shaped its form and function. The astronomical community at nearby Mt. Wilson Observatory and Caltech informed the selection of instrumentation and programming, especially for presentations with the Observatory’s Zeiss Planetarium, the second installed in the United States. Meanwhile the Observatory staff called upon some of Hollywood’s best artists, model makers, and scriptwriters to translate the latest astronomical discoveries into spectacular audiovisual experiences, which were enhanced with Space Age technological displays on loan from Southern California’s aerospace companies. The influences of these three communities (professional astronomy, entertainment, and aerospace) persist today and continue to make Griffith Observatory one of the premier sites of public astronomy in the country.
6. Volcano monitoring using the Global Positioning System: Filtering strategies
Science.gov (United States)
Larson, K.M.; Cervelli, Peter; Lisowski, M.; Miklius, Asta; Segall, P.; Owen, S.
2001-01-01
Permanent Global Positioning System (GPS) networks are routinely used for producing improved orbits and monitoring secular tectonic deformation. For these applications, data are transferred to an analysis center each day and routinely processed in 24-hour segments. To use GPS for monitoring volcanic events, which may last only a few hours, real-time or near real-time data processing and subdaily position estimates are valuable. Strategies have been researched for obtaining station coordinates every 15 min using a Kalman filter; these strategies have been tested on data collected by a GPS network on Kilauea Volcano. Data from this network are tracked continuously, recorded every 30 s, and telemetered hourly to the Hawaiian Volcano Observatory. A white noise model is heavily impacted by data outages and poor satellite geometry, but a properly constrained random walk model fits the data well. Using a borehole tiltmeter at Kilauea's summit as ground-truth, solutions using different random walk constraints were compared. This study indicates that signals on the order of 5 mm/h are resolvable using a random walk standard deviation of 0.45 cm/√h. Values lower than this suppress small signals, and values greater than this have significantly higher noise at periods of 1-6 hours. Copyright 2001 by the American Geophysical Union.
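The random-walk filtering strategy this abstract describes can be sketched with a minimal one-dimensional Kalman filter: the process noise `q` plays the role of the random-walk constraint, and the trade-off the authors report (small `q` suppresses real signals, large `q` admits more noise) is visible even in this toy. All noise values and observations below are invented, not the paper's.

```python
# Minimal 1-D random-walk Kalman filter for a station coordinate.
# q: random-walk process variance per step; r: observation variance.

def kalman_random_walk(obs, q, r, x0=0.0, p0=1.0):
    """Filter noisy position observations; returns the estimate sequence."""
    x, p = x0, p0
    est = []
    for z in obs:
        p += q                  # predict: random walk inflates uncertainty
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the innovation
        p *= (1 - k)
        est.append(x)
    return est

# A 5 mm step appearing mid-series is tracked when q is large enough,
# and smoothed away when q is very small.
obs = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
print(kalman_random_walk(obs, q=1.0, r=0.5))
print(kalman_random_walk(obs, q=0.001, r=0.5))
```

The operational filter in the study estimates full 3-D coordinates from carrier-phase data, but the constraint-tuning behaviour is the same in principle.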
7. Visits to La Plata Observatory
Science.gov (United States)
Feinstein, A.
1985-03-01
La Plata Observatory will welcome visitors to ESO-La Silla who are willing to make a stop at Buenos Aires on their trip to Chile or on their way back. There is a nice guesthouse at the Observatory that can be used, for a couple of days or so, by astronomers interested in visiting the Observatory and delivering talks on their research work to their Argentine colleagues. No payments can, however, be made at present. La Plata is 60 km from Buenos Aires. In the same area lie the Instituto de Astronomía y Física del Espacio (IAFE), in Buenos Aires proper, and the Instituto Argentino de Radioastronomía (IAR), about 40 km from Buenos Aires on the way to La Plata. Those interested should contact: Sr Decano Prof. Cesar A. Mondinalli, or Dr Alejandro Feinstein, Observatorio Astronómico, Paseo del Bosque, 1900 La Plata, Argentina. Telex: 31216 CESLA AR.
8. Results from the Autonomous Triggering of in situ Sensors on Kilauea Volcano, HI, from Eruption Detection by Spacecraft
Science.gov (United States)
Doubleday, J.; Behar, A.; Davies, A.; Mora-Vargas, A.; Tran, D.; Abtahi, A.; Pieri, D. C.; Boudreau, K.; Cecava, J.
2008-12-01
Response time in acquiring sensor data in volcanic emergencies can be greatly improved through use of autonomous systems. For instance, ground-based observations and data processing applications of the JPL Volcano Sensor Web have promptly triggered spacecraft observations [e.g., 1]. The reverse command and information flow path can also be useful, using autonomous analysis of spacecraft data to trigger in situ sensors. In this demonstration project, SO2 sensors were incorporated into expendable "Volcano Monitor" capsules and placed downwind of the Pu'u 'O'o vent of Kilauea volcano, Hawai'i. In nominal (low) power conservation mode, data from these sensors were collected and transmitted every hour to the Volcano Sensor Web through the Iridium Satellite Network. When SO2 readings exceeded a predetermined threshold, the modem within the Volcano Monitor sent an alert to the Sensor Web, and triggered a request for prompt Earth Observing-1 (EO-1) spacecraft data acquisition. The Volcano Monitors were also triggered by the Sensor Web in response to an eruption detection by the MODIS instrument on Terra. During these pre- defined "critical events" the Sensor Web ordered the SO2 sensors within the Volcano Monitor to increase their sampling frequency to every 5 minutes (high power "burst mode"). Autonomous control of the sensors' sampling frequency enabled the Sensor Web to monitor and respond to rapidly evolving conditions, and allowed rapid compilation and dissemination of these data to the scientific community. Reference: [1] Davies et al., (2006) Eos, 87, (1), 1 and 5. This work was performed at the Jet Propulsion Laboratory-California Institute of Technology, under contract to NASA. Support was provided by the NASA AIST program, the Idaho Space Grant Consortium, and the New Mexico Space Grant Program. We also especially thank the personnel of the USGS Hawaiian Volcano Observatory for their invaluable scientific guidance and logistical assistance.
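The mode-switch logic described here (hourly sampling in nominal power-saving mode, 5-minute "burst mode" when SO2 exceeds a threshold or an external critical-event trigger arrives) can be sketched in a few lines. Function names, the threshold units, and the parameter names are illustrative assumptions, not the Volcano Monitor's actual firmware interface.

```python
# Hypothetical sketch of the Volcano Monitor sampling-mode decision.
NOMINAL_PERIOD_S = 3600   # nominal mode: one reading per hour
BURST_PERIOD_S = 300      # burst mode: one reading every 5 minutes

def sampling_period(so2_ppm, threshold_ppm, external_trigger=False):
    """Return the sampling period given the SO2 reading and any
    Sensor Web critical-event trigger (e.g. a MODIS eruption detection)."""
    if external_trigger or so2_ppm > threshold_ppm:
        return BURST_PERIOD_S
    return NOMINAL_PERIOD_S

print(sampling_period(0.3, threshold_ppm=1.0))           # nominal: 3600
print(sampling_period(2.5, threshold_ppm=1.0))           # SO2 alert: 300
print(sampling_period(0.3, 1.0, external_trigger=True))  # spacecraft trigger: 300
```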
9. Aleutian Islands Coastal Resources Inventory and Environmental Sensitivity Maps: VOLCANOS (Volcano Points)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains point locations of active volcanoes as compiled by Motyka et al., 1993. Eighty-nine volcanoes with eruptive phases in the Quaternary are...
10. Astronomical Research Using Virtual Observatories
Directory of Open Access Journals (Sweden)
M Tanaka
2010-01-01
The Virtual Observatory (VO) for Astronomy is a framework that empowers astronomical research by providing standard methods to find, access, and utilize astronomical data archives distributed around the world. VO projects around the world have been strenuously developing VO software tools and/or portal systems. Interoperability among VO projects has been achieved with the VO standard protocols defined by the International Virtual Observatory Alliance (IVOA). As a result, VO technologies are now used in obtaining astronomical research results from a huge amount of data. We describe typical examples of astronomical research enabled by the astronomical VO, and describe how the VO technologies are used in the research.
11. The South African Astronomical Observatory
International Nuclear Information System (INIS)
1989-01-01
The research work discussed in this report covers a wide range, from work on the nearest stars to studies of the distant quasars, and the astronomers who have carried out this work come from universities and observatories spread around the world as well as from South African universities and from the South African Astronomical Observatory (SAAO) staff itself. A characteristic of much of this work has been its collaborative character. SAAO studies in 1989 included: supernova 1987A; galaxies; ground-based observations of celestial X-ray sources; the Magellanic Clouds; pulsating variables; galactic structure; binary star phenomena; the provision of photometric standards; nebulous matter; stellar astrophysics; and astrometry.
12. Automated tracking of lava lake level using thermal images at Kīlauea Volcano, Hawai’i
Science.gov (United States)
Patrick, Matthew R.; Swanson, Don; Orr, Tim R.
2016-01-01
Tracking the level of the lava lake in Halema‘uma‘u Crater, at the summit of Kīlauea Volcano, Hawai’i, is an essential part of monitoring the ongoing eruption and forecasting potentially hazardous changes in activity. We describe a simple automated image processing routine that analyzes continuously-acquired thermal images of the lava lake and measures lava level. The method uses three image segmentation approaches, based on edge detection, short-term change analysis, and composite temperature thresholding, to identify and track the lake margin in the images. These relative measurements from the images are periodically calibrated with laser rangefinder measurements to produce real-time estimates of lake elevation. Continuous, automated tracking of the lava level has been an important tool used by the U.S. Geological Survey’s Hawaiian Volcano Observatory since 2012 in real-time operational monitoring of the volcano and its hazard potential.
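The temperature-thresholding segmentation step this abstract mentions can be illustrated on a synthetic thermal image: threshold the hot lake pixels and take the topmost lake row as a stand-in for relative lava level. The threshold, image values, and function name below are invented; the actual HVO routine also combines edge detection, short-term change analysis, and rangefinder calibration.

```python
import numpy as np

def lake_top_row(thermal, threshold):
    """Return the topmost image row containing lake (above-threshold) pixels."""
    mask = thermal > threshold        # hot pixels are assumed to be lava
    rows = np.any(mask, axis=1)       # which rows contain any lake pixel
    return int(np.argmax(rows))       # index of the first such row

img = np.full((8, 8), 20.0)           # cool crater wall (arbitrary units)
img[5:, 2:6] = 600.0                  # hot lake occupying the lower part
print(lake_top_row(img, threshold=100.0))  # rising lava lowers this index
```

Calibrating such relative pixel measurements against occasional laser rangefinder shots, as the abstract describes, converts them into absolute lake elevations.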
13. Integrating SAR with Optical and Thermal Remote Sensing for Operational Near Real-Time Volcano Monitoring
Science.gov (United States)
Meyer, F. J.; Webley, P.; Dehn, J.; Arko, S. A.; McAlpin, D. B.
2013-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing techniques have become established in operational forecasting, monitoring, and managing of volcanic hazards. Monitoring organizations, like the Alaska Volcano Observatory (AVO), now rely heavily on remote sensing data from a variety of optical and thermal sensors to provide time-critical hazard information. Despite the high utilization of these remote sensing data to detect and monitor volcanic eruptions, the presence of clouds and a dependence on solar illumination often limit their impact on decision making processes. Synthetic Aperture Radar (SAR) systems are widely believed to be superior to optical sensors in operational monitoring situations, due to the weather and illumination independence of their observations and the sensitivity of SAR to surface changes and deformation. Despite these benefits, the contributions of SAR to operational volcano monitoring have been limited in the past due to (1) high SAR data costs, (2) traditionally long data processing times, and (3) the low temporal sampling frequencies inherent to most SAR systems. In this study, we present improved data access, data processing, and data integration techniques that mitigate some of the above-mentioned limitations and allow, for the first time, a meaningful integration of SAR into operational volcano monitoring systems. We will introduce a new database interface that was developed in cooperation with the Alaska Satellite Facility (ASF) and allows for rapid and seamless access to all of ASF's SAR data holdings. We will also present processing techniques that improve the temporal frequency with which hazard-related products can be produced. These techniques take advantage of modern signal processing technology as well as new radiometric normalization schemes, both enabling the combination of
14. Flank tectonics of Martian volcanoes
International Nuclear Information System (INIS)
Thomas, P.J.; Squyres, S.W.; Carr, M.H.
1990-01-01
On the flanks of Olympus Mons is a series of terraces, concentrically distributed around the caldera. Their morphology and location suggest that they could be thrust faults caused by compressional failure of the cone. In an attempt to understand the mechanism of faulting and the possible influences of the interior structure of Olympus Mons, the authors have constructed a numerical model for elastic stresses within a Martian volcano. In the absence of internal pressurization, the middle slopes of the cone are subjected to compressional stress, appropriate to the formation of thrust faults. These stresses for Olympus Mons are ∼250 MPa. If a vacant magma chamber is contained within the cone, the region of maximum compressional stress is extended toward the base of the cone. If the magma chamber is pressurized, extensional stresses occur at the summit and on the upper slopes of the cone. For a filled but unpressurized magma chamber, the observed positions of the faults agree well with the calculated region of high compressional stress. Three other volcanoes on Mars, Ascraeus Mons, Arsia Mons, and Pavonis Mons, possess similar terraces. Extending the analysis to other Martian volcanoes, they find that only these three and Olympus Mons have flank stresses that exceed the compressional failure strength of basalt, lending support to the view that the terraces on all four are thrust faults
15. Improvements in geomagnetic observatory data quality
DEFF Research Database (Denmark)
Reda, Jan; Fouassier, Danielle; Isac, Anca
2011-01-01
…between observatories and the establishment of observatory networks has harmonized standards and practices across the world, improving the quality of the data product available to the user. Nonetheless, operating a high-quality geomagnetic observatory is non-trivial. This article gives a record of the current state of observatory instrumentation and methods, citing some of the general problems in the complex operation of geomagnetic observatories. It further gives an overview of recent improvements in observatory data quality based on presentations during the 11th IAGA Assembly at Sopron and INTERMAGNET…
16. Deep Space Climate Observatory (DSCOVR)
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The Deep Space Climate ObserVatoRy (DSCOVR) satellite is a NOAA operated asset at the first Lagrange (L1) point. The primary space weather instrument is the PlasMag...
17. Virtual Investigations of an Active Deep Sea Volcano
Science.gov (United States)
Sautter, L.; Taylor, M. M.; Fundis, A.; Kelley, D. S.; Elend, M.
2013-12-01
Axial Seamount, located on the Juan de Fuca spreading ridge 300 miles off the Oregon coast, is an active volcano whose summit caldera lies 1500 m beneath the sea surface. Ongoing construction of the Regional Scale Nodes (RSN) cabled observatory by the University of Washington (funded by the NSF Ocean Observatories Initiative) has allowed for exploration of recent lava flows and active hydrothermal vents using HD video mounted on the ROVs, ROPOS and JASON II. College level oceanography/marine geology online laboratory exercises referred to as Online Concept Modules (OCMs) have been created using video and video frame-captured mosaics to promote skill development for characterizing and quantifying deep sea environments. Students proceed at their own pace through a sequence of short movies with which they (a) gain background knowledge, (b) learn skills to identify and classify features or biota within a targeted environment, (c) practice these skills, and (d) use their knowledge and skills to make interpretations regarding the environment. Part (d) serves as the necessary assessment component of the laboratory exercise. Two Axial Seamount-focused OCMs will be presented: 1) Lava Flow Characterization: Identifying a Suitable Cable Route, and 2) Assessing Hydrothermal Vent Communities: Comparisons Among Multiple Sulfide Chimneys.
18. Seafloor Observatory Science: a Review
Directory of Open Access Journals (Sweden)
L. Beranzoli
2006-06-01
The ocean exerts a pervasive influence on Earth's environment. It is therefore important that we learn how this system operates (NRC, 1998b; 1999). For example, the ocean is an important regulator of climate change (e.g., IPCC, 1995). Understanding the link between natural and anthropogenic climate change and ocean circulation is essential for predicting the magnitude and impact of future changes in Earth's climate. Understanding the ocean, and the complex physical, biological, chemical, and geological systems operating within it, should be an important goal for the opening decades of the 21st century. Another fundamental reason for increasing our understanding of ocean systems is that the global economy is highly dependent on the ocean (e.g., for tourism, fisheries, hydrocarbons, and mineral resources) (Summerhayes, 1996). The establishment of a global network of seafloor observatories will help to provide the means to accomplish this goal. These observatories will have power and communication capabilities and will provide support for spatially distributed sensing systems and mobile platforms. Sensors and instruments will potentially collect data from above the air-sea interface to below the seafloor. Seafloor observatories will also be a powerful complement to satellite measurement systems by providing the ability to collect vertically distributed measurements within the water column for use with the spatial measurements acquired by satellites, while also providing the capability to calibrate remotely sensed satellite measurements (NRC, 2000). Ocean observatory science has already had major successes. For example, the TAO array has enabled the detection, understanding and prediction of El Niño events (e.g., Fujimoto et al., 2003). This paper is a world-wide review of the new emerging seafloor observatory science, and describes both the scientific motivations for seafloor observatories and the technical solutions applied to their architecture. A
19. Norwegian Ocean Observatory Network (NOON)
Science.gov (United States)
Ferré, Bénédicte; Mienert, Jürgen; Winther, Svein; Hageberg, Anne; Rune Godoe, Olav; Partners, Noon
2010-05-01
The Norwegian Ocean Observatory Network (NOON) is led by the University of Tromsø and collaborates with the Universities of Oslo and Bergen, UniResearch, the Institute of Marine Research, Christian Michelsen Research and SINTEF. It is supported by the Research Council of Norway and oil and gas (O&G) industries like Statoil to develop science, technology and new educational programs. Main topics relate to ocean climate and environment as well as marine resources offshore Norway, from the northern North Atlantic to the Arctic Ocean. NOON's vision is to bring Norway to the international forefront in using cable-based ocean observatory technology for marine science and management, by establishing an infrastructure that enables real-time and long-term monitoring of processes and interactions between hydrosphere, geosphere and biosphere. This activity is in concert with the EU-funded European Strategy Forum on Research Infrastructures (ESFRI) roadmap and the European Multidisciplinary Seafloor Observation (EMSO) project to attract international leading research developments. NOON envisions developing towards a European Research Infrastructure Consortium (ERIC). Besides, the research community in Norway already possesses a considerable marine infrastructure that can expand towards an international focus for real-time multidisciplinary observations in times of rapid climate change. The presently established cable-based fjord observatory, followed by the establishment of a cable-based ocean observatory network towards the Arctic from an O&G installation, will provide invaluable knowledge and experience necessary to make a successful larger cable-based observatory network at the Norwegian and Arctic margin (figure 1). Access to large quantities of real-time observations from the deep sea, including high-definition video, could be used to provide the public and future recruits to science a fascinating insight into an almost unexplored part of the Earth beyond the Arctic Circle
20. Space astrophysical observatory 'Orion-2'
International Nuclear Information System (INIS)
Gurzadyan, G.A.; Jarakyan, A.L.; Krmoyan, M.N.; Kashin, A.L.; Loretsyan, G.M.; Ohanesyan, J.B.
1976-01-01
Ultraviolet spectrograms of a large number of faint stars, down to magnitude 13, were obtained in the wavelengths 2000-5000 Å by means of the space observatory 'Orion-2' installed in the spaceship 'Soyuz-13' with two spacemen on board. The paper deals with a description of the operation modes of this observatory, the designs and basic schemes of the scientific and auxiliary devices, and the method of combining the work of the flight engineer with the automation system of the observatory itself. It also addresses the combination of the particular parts of the 'Orion-2' observatory on board the spaceship and the measures taken to provide for its normal functioning during the space flight. A detailed description is given of the optical, electrical and mechanical schemes of the devices: a meniscus telescope with an objective prism, stellar diffraction spectrographs, single-coordinate and two-coordinate stellar and solar transducers, a control panel, control systems, etc. The paper also provides the functional scheme of astronavigation and six-wheel stabilization, the design of mounting (assembling) the stabilized platform carrying the telescopes, and the drives used in it. Problems relating to the observation program in orbit, the ballistic provision of initial data, and control of the operation of the observatory are also dealt with. In addition, the paper carries information on the photomaterials used and the methods of their energy calibration, standardization and the like. Matters of pre-start tests of the apparatus, the preparation of the spacemen for conducting astronomical observations with the given devices, etc. are likewise discussed. The paper ends with a brief survey of the results obtained and the elaboration of the observed material. (Auth.)
1. The Magnetic Observatory Buildings at the Royal Observatory, Cape
Science.gov (United States)
Glass, I. S.
2015-10-01
During the 1830s there arose a strong international movement, promoted by Carl Friedrich Gauss and Alexander von Humboldt, to characterise the earth's magnetic field. By 1839 the Royal Society in London, driven by Edward Sabine, had organised a "Magnetic Crusade" - the establishment of a series of magnetic and meteorological observatories around the British Empire, including New Zealand, Australia, St Helena and the Cape. This article outlines the history of the latter installation, its buildings and what became of them.
2. K-Ar ages of the Hiruzen volcano group and the Daisen volcano
International Nuclear Information System (INIS)
Tsukui, Masashi; Nishido, Hirotsugu; Nagao, Keisuke.
1985-01-01
Seventeen volcanic rocks of the Hiruzen volcano group and the Daisen volcano, in southwest Japan, were dated by the K-Ar method to clarify the age of volcanic activity in this region and the evolution of these composite volcanoes. The eruption ages of the Hiruzen volcano group were revealed to be about 0.9 Ma to 0.5 Ma, and those of the Daisen volcano to be about 1 Ma to very recent. These results are consistent with the geological and paleomagnetic data of previous workers. Effusion of lavas in the area was especially vigorous at 0.5±0.1 Ma. It was generally considered that the Hiruzen volcano group had erupted during the latest Pliocene to early Quaternary and that it is older than the Daisen volcano, mainly from their topographic features. However, their overlapping eruption ages and the petrographical similarities of the lavas of the Hiruzen volcano group and the Daisen volcano suggest that they may be included in the Daisen volcano in a broad sense. The aphyric andesite, whose eruption age had been correlated with the Wakurayama andesite (6.34±0.19 Ma) in Matsue city and which was thought to be the basement of the Daisen volcano, was dated at 0.46±0.04 Ma. This indicates that petrographically similar aphyric andesite erupted sporadically at different times and places in the San'in district. (author)
3. The MicroObservatory Net
Science.gov (United States)
1994-12-01
A group of scientists, engineers and educators based at the Harvard-Smithsonian Center for Astrophysics (CfA) has developed a prototype of a small, inexpensive and fully integrated automated astronomical telescope and image processing system. The project team is now building five second generation instruments. The MicroObservatory has been designed to be used for classroom instruction by teachers as well as for original scientific research projects by students. Probably in no other area of frontier science is it possible for a broad spectrum of students (not just the gifted) to have access to state-of-the-art technologies that would allow for original research. The MicroObservatory combines the imaging power of a cooled CCD, with a self contained and weatherized reflecting optical telescope and mount. A microcomputer points the telescope and processes the captured images. The MicroObservatory has also been designed to be used as a valuable new capture and display device for real time astronomical imaging in planetariums and science museums. When the new instruments are completed in the next few months, they will be tried with high school students and teachers, as well as with museum groups. We are now planning to make the MicroObservatories available to students, teachers and other individual users over the Internet. We plan to allow the telescope to be controlled in real time or in batch mode, from a Macintosh or PC compatible computer. In the real-time mode, we hope to give individual access to all of the telescope control functions without the need for an "on-site" operator. Users would sign up for a specific period of time. In the batch mode, users would submit jobs for the telescope. After the MicroObservatory completed a specific job, the images would be e-mailed back to the user. At present, we are interested in gaining answers to the following questions: (1) What are the best approaches to scheduling real-time observations? (2) What criteria should be used
4. The Volcano Disaster Assistance Program—Helping to save lives worldwide for more than 30 years
Science.gov (United States)
Lowenstern, Jacob B.; Ramsey, David W.
2017-10-20
What do you do when a sleeping volcano roars back to life? For more than three decades, countries around the world have called upon the U.S. Geological Survey's (USGS) Volcano Disaster Assistance Program (VDAP) to contribute expertise and equipment in times of crisis. Co-funded by the USGS and the U.S. Agency for International Development's Office of U.S. Foreign Disaster Assistance (USAID/OFDA), VDAP has evolved and grown over the years, adding newly developed monitoring technologies, training and exchange programs, and eruption forecasting methodologies to greatly expand global capabilities that mitigate the impacts of volcanic hazards. These advances, in turn, strengthen the ability of the United States to respond to its own volcanic events. VDAP was formed in 1986 in response to the devastating volcanic mudflow triggered by an eruption of Nevado del Ruiz volcano in Colombia. The mudflow destroyed the city of Armero on the night of November 13, 1985, killing more than 25,000 people in the city and surrounding areas. Sadly, the tragedy was avoidable. Better education of the local population and clear communication between scientists and public officials could have allowed warnings to be received, understood, and acted upon prior to the disaster. VDAP strives to ensure that such a tragedy will never happen again. The program's mission is to assist foreign partners, at their request, in volcano monitoring and empower them to take the lead in mitigating hazards at their country's threatening volcanoes. Since 1986, team members have responded to over 70 major volcanic crises at more than 50 volcanoes and have strengthened response capacity in 12 countries. The VDAP team consists of approximately 20 geologists, geophysicists, and engineers, who are based out of the USGS Cascades Volcano Observatory in Vancouver, Washington. In 2016, VDAP was a finalist for the Samuel J. Heyman Service to America Medal for its work in improving volcano readiness and warning.
5. Buckets of ash track tephra flux from Halema'uma'u Crater, Hawai'i
Science.gov (United States)
Swanson, Don; Wooten, Kelly M.; Orr, Tim R.
2009-01-01
The 2008–2009 eruption at Kīlauea Volcano's summit made news because of its eight small discrete explosive eruptions and the noxious volcanic smog (vog) created from outgassing sulfur dioxide. Less appreciated is the ongoing, weak, but continuous output of tephra, primarily ash, from the new open vent in Halema'uma'u Crater. This tephra holds clues to the processes causing the eruption and forming the new crater-in-a-crater, and its flux is important to hazard evaluations. The setting of the vent, easily accessible from the Hawaiian Volcano Observatory (HVO), is unusually favorable for near-daily tracking of tephra mass flux during this small prolonged basaltic eruption. Recognizing this, scientists from HVO are collecting ash and documenting how ejection masses, components, and chemical compositions vary through time.
6. Geoflicks Reviewed--Films about Hawaiian Volcanoes.
Science.gov (United States)
Bykerk-Kauffman, Ann
1994-01-01
Reviews 11 films on volcanic eruptions in the United States. Films are given a one- to five-star rating and the film's year, length, source and price are listed. Top films include "Inside Hawaiian Volcanoes" and "Kilauea: Close up of an Active Volcano." (AIM)
7. Orographic Flow over an Active Volcano
Science.gov (United States)
Poulidis, Alexandros-Panagiotis; Renfrew, Ian; Matthews, Adrian
2014-05-01
Orographic flows over and around an isolated volcano are studied through a series of numerical model experiments. The volcano top has a heated surface, so can be thought of as "active" but not erupting. A series of simulations with different atmospheric conditions and using both idealised and realistic configurations of the Weather Research and Forecast (WRF) model have been carried out. The study is based on the Soufriere Hills volcano, located on the island of Montserrat in the Caribbean. This is a dome-building volcano, leading to a sharp increase in the surface skin temperature at the top of the volcano - up to tens of degrees higher than ambient values. The majority of the simulations use an idealised topography, in order for the results to have general applicability to similar-sized volcanoes located in the tropics. The model is initialised with idealised atmospheric soundings, representative of qualitatively different atmospheric conditions from the rainy season in the tropics. The simulations reveal significant changes to the orographic flow response, depending upon the size of the temperature anomaly and the atmospheric conditions. The flow regime and characteristic features such as gravity waves, orographic clouds and orographic rainfall patterns can all be qualitatively changed by the surface heating anomaly. Orographic rainfall over the volcano can be significantly enhanced with increased temperature anomaly. The implications for the eruptive behaviour of the volcano and resulting secondary volcanic hazards will also be discussed.
8. Inventory of gas flux measurements from volcanoes of the global Network for Observation of Volcanic and Atmospheric Change (NOVAC)
Science.gov (United States)
Galle, B.; Arellano, S.; Norman, P.; Conde, V.
2012-04-01
NOVAC, the Network for Observation of Volcanic and Atmospheric Change, was initiated in 2005 as a 5-year project financed by the European Union. Its main purpose is to create a global network for the monitoring and research of volcanic atmospheric plumes and related geophysical phenomena using state-of-the-art spectroscopic remote sensing technology. Up to 2012, 64 instruments have been installed at 24 volcanoes in 13 countries of Latin America, Italy, the Democratic Republic of Congo, Réunion, Iceland, and the Philippines, and efforts are under way to expand the network to other active volcanic zones. NOVAC has been a pioneering initiative in the community of volcanologists and embraces the objectives of the World Organization of Volcano Observatories (WOVO) and the Global Earth Observation System of Systems (GEOSS). In this contribution, we present the results of the measurements of SO2 gas fluxes carried out within NOVAC, which for some volcanoes represent a record of more than 7 years of continuous monitoring. The network comprises some of the most strongly degassing volcanoes in the world, covering a broad range of tectonic settings, levels of unrest, and potential risk. We show a global perspective of the output of volcanic gas from the covered regions, specific trends of degassing for a few selected volcanoes, and the significance of the database for further studies in volcanology and other geosciences.
9. Boscovich and the Brera Observatory .
Science.gov (United States)
Antonello, E.
In the mid-18th century both theoretical and practical astronomy were cultivated in Milan by the Barnabites and Jesuits. In 1763 Boscovich was appointed to the chair of mathematics of the University of Pavia in the Duchy of Milan, and the following year he designed an observatory for the Jesuit Collegium of Brera in Milan. The Specola was built in 1765 and quickly became one of the main European observatories. We discuss the relation between Boscovich and Brera in the framework of a short biography. An account is given of the initial research activity in the Specola, of Boscovich's departure from Milan in 1773, and of his return just before his death.
10. Compton Gamma-Ray Observatory
Science.gov (United States)
1991-01-01
This photograph shows the Compton Gamma-Ray Observatory (GRO) being deployed by the Remote Manipulator System (RMS) arm aboard the Space Shuttle Atlantis during the STS-37 mission in April 1991. The GRO reentered Earth's atmosphere and ended its successful mission in June 2000. For nearly 9 years, the GRO Burst and Transient Source Experiment (BATSE), designed and built by the Marshall Space Flight Center (MSFC), kept an unblinking watch on the universe to alert scientists to the invisible, mysterious gamma-ray bursts that had puzzled them for decades. By studying gamma-rays from objects like black holes, pulsars, quasars, neutron stars, and other exotic objects, scientists could discover clues to the birth, evolution, and death of stars, galaxies, and the universe. The gamma-ray instrument was one of four major science instruments aboard the Compton. It consisted of eight detectors, or modules, located at each corner of the rectangular satellite to simultaneously scan the entire universe for bursts of gamma-rays ranging in duration from fractions of a second to minutes. In January 1999, the instrument, via the Internet, cued a computer-controlled telescope at Los Alamos National Laboratory in Los Alamos, New Mexico, within 20 seconds of registering a burst. With this capability, the gamma-ray experiment came to serve as a gamma-ray burst alert for the Hubble Space Telescope, the Chandra X-Ray Observatory, and major ground-based observatories around the world. Thirty-seven universities, observatories, and NASA centers in 19 states, and 11 more institutions in Europe and Russia, participated in the BATSE science program.
11. Satellite Observations of Volcanic Clouds from the Eruption of Redoubt Volcano, Alaska, 2009
Science.gov (United States)
Dean, K. G.; Ekstrand, A. L.; Webley, P.; Dehn, J.
2009-12-01
Redoubt Volcano began erupting on 23 March 2009 (UTC), and the eruption consisted of 19 events over a 14-day period. The volcano is located on the Alaska Peninsula, 175 km southwest of Anchorage, Alaska. The previous eruption, in 1989/1990, seriously disrupted air traffic in the region, including the near-catastrophic engine failure of a passenger airliner. Plumes and ash clouds from the recent eruption were observed in a variety of satellite data (AVHRR, MODIS and GOES). The eruption produced volcanic clouds up to 19 km high, which are among the highest detected in recent times in the North Pacific region. The ash clouds primarily drifted north and east of the volcano, had a weak ash signal in the split-window data, and resulted in light ash falls in the Cook Inlet basin and northward into Alaska's Interior. Volcanic cloud heights were measured using ground-based radar as well as plume temperature and wind shear methods, but each of the techniques resulted in significant variations in the estimates. Even though radar showed the greatest heights, satellite data and wind shears suggest that the largest concentrations of ash may have been at lower altitudes in some cases. Sulfur dioxide clouds were also observed in satellite data (OMI, AIRS and CALIPSO); they primarily drifted to the east and were detected at several locations across North America, thousands of kilometers from the volcano. Here, we show time series data collected by the Alaska Volcano Observatory, illustrating the different eruptive events and the ash clouds that developed over the subsequent days.
12. Exploring Geology on the World-Wide Web--Volcanoes and Volcanism.
Science.gov (United States)
Schimmrich, Steven Henry; Gore, Pamela J. W.
1996-01-01
Focuses on sites on the World Wide Web that offer information about volcanoes. Web sites are classified into areas of Global Volcano Information, Volcanoes in Hawaii, Volcanoes in Alaska, Volcanoes in the Cascades, European and Icelandic Volcanoes, Extraterrestrial Volcanism, Volcanic Ash and Weather, and Volcano Resource Directories. Suggestions…
13. Developing geophysical monitoring at Mayon volcano, a collaborative project EOS-PHIVOLCS
Science.gov (United States)
Hidayat, D.; Laguerta, E.; Baloloy, A.; Valerio, R.; Marcial, S. S.
2011-12-01
Mayon is an openly degassing volcano, producing mostly small, frequent eruptions, most recently in Aug-Sept 2006 and Dec 2009. Mayon's status is level 1, with low seismicity dominated mostly by local and regional tectonic earthquakes and continuous emission of SO2 from its crater. A research collaboration between the Earth Observatory of Singapore-NTU and the Philippine Institute of Volcanology and Seismology (PHIVOLCS) was initiated in 2010 with the aim of developing a multi-disciplinary monitoring system around Mayon that includes geophysical monitoring, gas geochemical monitoring, and petrologic studies. Currently there are 4 broadband seismographs, 3 short-period instruments, and 4 tiltmeters. These instruments will be telemetered to the Lignon Hill Volcano Observatory through radio and 3G broadband internet. We also make use of our self-made low-cost datalogger, which has been operating since Jan 2011, performing continuous data acquisition at a sampling rate of 20 minutes/sample and transmitting through the GSM network. The first target of this monitoring system is to obtain continuous multi-parameter data transmitted in real time to the observatory from the different instruments. Tectonically, Mayon is located in the Oas Graben, a northwest-trending structural depression. A previous study using InSAR data showed evidence of left-lateral oblique-slip movement of the fault north of Mayon. Understanding on which structures active deformation is occurring, and how the deformation signal is currently partitioned between tectonic and volcanic origins, is key to characterizing magma movement in times of unrest. Preliminary analysis of the tangential components of the tiltmeters (particularly the stations 5 and 7.5 NE from the volcano) shows gradual inflation over a period of a few months. The tangential components of the tiltmeters are roughly perpendicular to the fault north of Mayon. This may suggest downward tilting of the graben on the northern side of Mayon. Another possibility is that
14. Geophysical Exploration on the Structure of Volcanoes: Two Case Histories
Energy Technology Data Exchange (ETDEWEB)
Furumoto, A. S.
1974-01-01
Geophysical methods of exploration were used to determine the internal structure of Koolau Volcano in Hawaii and of Rabaul Volcano in New Guinea. By use of gravity and seismic data the central vent or plug of Koolau Volcano was outlined. Magnetic data seem to indicate that the central plug is still above the Curie Point. If so, the amount of heat energy available is tremendous. As for Rabaul Volcano, it is located in a region characterized by numerous block faulting. The volcano is only a part of a large block that has subsided. Possible geothermal areas exist near the volcano but better potential areas may exist away from the volcano.
15. A scalable database model for multiparametric time series: a volcano observatory case study
Science.gov (United States)
Montalto, Placido; Aliotta, Marco; Cassisi, Carmelo; Prestifilippo, Michele; Cannata, Andrea
2014-05-01
The variables collected by a sensor network constitute a heterogeneous data source that needs to be properly organized in order to be used in research and geophysical monitoring. The term time series refers to a set of observations of a given phenomenon acquired sequentially in time; when the time intervals are equally spaced, one speaks of the period or sampling frequency. Our work describes in detail a possible methodology for the storage and management of time series using a specific data structure. We designed a framework, hereinafter called TSDSystem (Time Series Database System), to acquire time series from different data sources and standardize them within a relational database. This standardization provides the ability to perform operations, such as querying and visualization, on many measures, synchronizing them on a common time scale. The proposed architecture follows a multiple-layer paradigm (Loaders layer, Database layer and Business Logic layer). Each layer is specialized in particular operations for the reorganization and archiving of data from different sources such as ASCII, Excel, ODBC (Open DataBase Connectivity), and files accessible from the Internet (web pages, XML). In particular, the Loaders layer performs a check of the working status of each running piece of software through a heartbeat system, in order to automate the discovery of acquisition issues and other warning conditions. Although our system has to manage huge amounts of data, performance is guaranteed by a smart table-partitioning strategy that keeps the percentage of data stored in each database table balanced. TSDSystem also contains modules for the visualization of the acquired data, which provide the possibility to query different time series over a specified time range, or to follow the real-time signal acquisition, according to a per-user data access policy.
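The heartbeat check described in the TSDSystem abstract can be sketched in a few lines: each loader periodically records a timestamp, and a monitor flags any loader whose last beat is older than a timeout. This is a minimal illustrative sketch only; the class and loader names are invented, not taken from the actual system.

```python
import time

HEARTBEAT_TIMEOUT = 60.0  # seconds without a beat before a loader is flagged

class HeartbeatMonitor:
    """Hypothetical monitor for loader processes (names are illustrative)."""

    def __init__(self, timeout=HEARTBEAT_TIMEOUT):
        self.timeout = timeout
        self.last_beat = {}  # loader name -> time of its last heartbeat

    def beat(self, loader, now=None):
        # Called by a loader to signal it is alive.
        self.last_beat[loader] = time.time() if now is None else now

    def stalled(self, now=None):
        # Return the loaders whose last beat exceeds the timeout.
        now = time.time() if now is None else now
        return [name for name, t in self.last_beat.items()
                if now - t > self.timeout]

monitor = HeartbeatMonitor(timeout=60.0)
monitor.beat("seismic_loader", now=0.0)
monitor.beat("gps_loader", now=50.0)
print(monitor.stalled(now=100.0))  # → ['seismic_loader']
```

In a real deployment the beats would come from separate processes (e.g. via a database table or message queue) rather than in-process calls, but the detection logic is the same.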
16. Unzipping of the volcano arc, Japan
Science.gov (United States)
Stern, R.J.; Smoot, N.C.; Rubin, M.
1984-01-01
A working hypothesis for the recent evolution of the southern Volcano Arc, Japan, is presented which calls upon a northward-progressing sundering of the arc in response to a northward-propagating back-arc basin extensional regime. This model appears to explain several localized and recent changes in the tectonic and magmatic evolution of the Volcano Arc. Most important among these changes is the unusual composition of Iwo Jima volcanic rocks. This contrasts with the normal arc tholeiites typical of the rest of the Izu-Volcano-Mariana and other primitive arcs in having alkaline tendencies, high concentrations of light REE and other incompatible elements, and relatively high silica contents. In spite of such fractionated characteristics, these lavas appear to be very early manifestations of a new volcanic and tectonic cycle in the southern Volcano Arc. These alkaline characteristics and indications of strong regional uplift are consistent with the recent development of an early stage of inter-arc basin rifting in the southern Volcano Arc. New bathymetric data are presented in support of this model which indicate: (1) structural elements of the Mariana Trough extend north to the southern Volcano Arc; (2) both the Mariana Trough and frontal arc shoal rapidly northwards as the Volcano Arc is approached; (3) rugged bathymetry associated with the rifted Mariana Trough is replaced just south of Iwo Jima by the development of a huge dome (50-75 km diameter) centered around Iwo Jima. Such uplifted domes are the immediate precursors of rifts in other environments, and it appears that a similar situation may now exist in the southern Volcano Arc. The present distribution of unrifted Volcano Arc to the north and rifted Mariana Arc to the south is interpreted not as a stable tectonic configuration but as representing a tectonic "snapshot" of an arc in the process of being rifted to form a back-arc basin. © 1984.
17. Volcanoes
Science.gov (United States)
18. Common processes at unique volcanoes – a volcanological conundrum
OpenAIRE
Katharine eCashman; Juliet eBiggs
2014-01-01
An emerging challenge in modern volcanology is the apparent contradiction between the perception that every volcano is unique, and classification systems based on commonalities among volcano morphology and eruptive style. On the one hand, detailed studies of individual volcanoes show that a single volcano often exhibits similar patterns of behavior over multiple eruptive episodes; this observation has led to the idea that each volcano has its own distinctive pattern of behavior (or “personali...
19. Observatory Sponsoring Astronomical Image Contest
Science.gov (United States)
2005-05-01
Forget the headphones you saw in the Warner Brothers thriller Contact, as well as the guttural throbs emanating from loudspeakers at the Very Large Array in that 1997 movie. In real life, radio telescopes aren't used for "listening" to anything - just like visible-light telescopes, they are used primarily to make images of astronomical objects. Now, the National Radio Astronomy Observatory (NRAO) wants to encourage astronomers to use radio-telescope data to make truly compelling images, and is offering cash prizes to winners of a new image contest. [Image: Radio Galaxy Fornax A - a radio-optical composite image of giant elliptical galaxy NGC 1316, showing the galaxy (center), a smaller companion galaxy being cannibalized by NGC 1316, and the resulting "lobes" (orange) of radio emission caused by jets of particles spewed from the core of the giant galaxy. Credit: Fomalont et al., NRAO/AUI/NSF] "Astronomy is a very visual science, and our radio telescopes are capable of producing excellent images. We're sponsoring this contest to encourage astronomers to make the extra effort to turn good images into truly spectacular ones," said NRAO Director Fred K.Y. Lo. The contest, offering a grand prize of $1,000, was announced at the American Astronomical Society's meeting in Minneapolis, Minnesota. The image contest is part of a broader NRAO effort to make radio astronomical data and images easily accessible and widely available to scientists, students, teachers, the general public, news media and science-education professionals. That effort includes an expanded image gallery on the observatory's Web site. "We're not only adding new radio-astronomy images to our online gallery, but we're also improving the organization and accessibility of the images," said Mark Adams, head of education and public outreach (EPO) at NRAO.
"Our long-term goal is to make the NRAO Image Gallery an international resource for radio astronomy imagery 20. Internet-accessible, near-real-time volcano monitoring data for geoscience education: the Volcanoes Exploration Project—Puu Oo Science.gov (United States) Poland, M. P.; Teasdale, R.; Kraft, K. 2010-12-01 Internet-accessible real- and near-real-time Earth science datasets are an important resource for geoscience education, but relatively few comprehensive datasets are available, and background information to aid interpretation is often lacking. In response to this need, the U.S. Geological Survey’s (USGS) Hawaiian Volcano Observatory, in collaboration with the National Aeronautics and Space Administration and the University of Hawai‘i, Mānoa, established the Volcanoes Exploration Project: Pu‘u ‘O‘o (VEPP). The VEPP Web site provides access, in near-real time, to geodetic, seismic, and geologic data from the Pu‘u ‘O‘o eruptive vent on Kilauea Volcano, Hawai‘i. On the VEPP Web site, a time series query tool provides a means of interacting with continuous geophysical data. In addition, results from episodic kinematic GPS campaigns and lava flow field maps are posted as data are collected, and archived Webcam images from Pu‘u ‘O‘o crater are available as a tool for examining visual changes in volcanic activity over time. A variety of background information on volcano surveillance and the history of the 1983-present Pu‘u ‘O‘o-Kupaianaha eruption puts the available monitoring data in context. The primary goal of the VEPP Web site is to take advantage of high visibility monitoring data that are seldom suitably well-organized to constitute an established educational resource. In doing so, the VEPP project provides a geoscience education resource that demonstrates the dynamic nature of volcanoes and promotes excitement about the process of scientific discovery through hands-on learning. 
To support use of the VEPP Web site, a week-long workshop was held at Kilauea Volcano in July 2010, which included 25 participants from the United States and Canada. The participants represented a diverse cross-section of higher learning, from community colleges to research universities, and included faculty who teach both large introductory non-major classes 1. The high energy astronomy observatories Science.gov (United States) Neighbors, A. K.; Doolittle, R. F.; Halpers, R. E. 1977-01-01 The forthcoming NASA project of orbiting High Energy Astronomy Observatories (HEAO's) designed to probe the universe by tracing celestial radiations and particles is outlined. Solutions to engineering problems concerning HEAO's which are integrated, yet built to function independently are discussed, including the onboard digital processor, mirror assembly and the thermal shield. The principle of maximal efficiency with minimal cost and the potential capability of the project to provide explanations to black holes, pulsars and gamma-ray bursts are also stressed. The first satellite is scheduled for launch in April 1977. 2. The Hartebeeshoek Radio Astronomy Observatory International Nuclear Information System (INIS) Nicolson, G.D. 1986-01-01 This article briefly discusses the questions, problems and study fields of the modern astronomer. Radioastronomy has made important contributions to the study of the evolution of stars and has given much information on the birth of stars while at the other extreme, studies of neutron stars and the radio emission from the remnants of supernova explosions have given further insight into the death of individual stars. Radio astronomical studies have learned astronomers much about the structure of the Milky way and some twenty years ago, in a search for new radio galaxies, quasars were discovered. Radioastronomy research in South Africa is carried out at the Hartebeesthoek Radio Astronomy Observatory 3. 
The ultimate air shower observatory International Nuclear Information System (INIS) Jones, L.W. 1981-01-01 The possibility of constructing an international air shower observatory in the Himalayas is explored. A site at about 6500 m elevation (450 g/cm 2 ) would provide more definitive measurements of composition and early interaction properties of primaries above 10 16 eV than can be achieved with existing arrays. By supplementing a surface array with a Fly's Eye and muon detectors, information on the highest energy cosmic rays may be gained which is not possible in any other way. Potential sites, technical aspects, and logistical problems are explored 4. BART: The Czech Autonomous Observatory Czech Academy of Sciences Publication Activity Database Nekola, Martin; Hudec, René; Jelínek, M.; Kubánek, P.; Štrobl, Jan; Polášek, Cyril 2010-01-01 Roč. 2010, Spec. Is. (2010), 103986/1-103986/5 ISSN 1687-7969. [Workshop on Robotic Autonomous Observatories. Málaga, 18.05.2009-21.05.2009] R&D Projects: GA ČR GA205/08/1207 Grant - others:ESA(XE) ESA-PECS project No. 98023; Spanish Ministry of Education and Science(ES) AP2003-1407 Institutional research plan: CEZ:AV0Z10030501 Keywords : robotic telescope * BART * gamma ray bursts Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics http://www.hindawi.com/journals/aa/2010/103986.html 5. Combining Volcano Monitoring Timeseries Analyses with Bayesian Belief Networks to Update Hazard Forecast Estimates Science.gov (United States) Odbert, Henry; Hincks, Thea; Aspinall, Willy 2015-04-01 Volcanic hazard assessments must combine information about the physical processes of hazardous phenomena with observations that indicate the current state of a volcano. Incorporating both these lines of evidence can inform our belief about the likelihood (probability) and consequences (impact) of possible hazardous scenarios, forming a basis for formal quantitative hazard assessment. 
However, such evidence is often uncertain, indirect or incomplete. Approaches to volcano monitoring have advanced substantially in recent decades, increasing the variety and resolution of multi-parameter timeseries data recorded at volcanoes. Interpreting these multiple strands of parallel, partial evidence thus becomes increasingly complex. In practice, interpreting many timeseries requires an individual to be familiar with the idiosyncrasies of the volcano, monitoring techniques, configuration of recording instruments, observations from other datasets, and so on. In making such interpretations, an individual must consider how different volcanic processes may manifest as measureable observations, and then infer from the available data what can or cannot be deduced about those processes. We examine how parts of this process may be synthesised algorithmically using Bayesian inference. Bayesian Belief Networks (BBNs) use probability theory to treat and evaluate uncertainties in a rational and auditable scientific manner, but only to the extent warranted by the strength of the available evidence. The concept is a suitable framework for marshalling multiple strands of evidence (e.g. observations, model results and interpretations) and their associated uncertainties in a methodical manner. BBNs are usually implemented in graphical form and could be developed as a tool for near real-time, ongoing use in a volcano observatory, for example. We explore the application of BBNs in analysing volcanic data from the long-lived eruption at Soufriere Hills Volcano, Montserrat. We show how our method 6. Hydrothermal systems and volcano geochemistry Science.gov (United States) Fournier, R.O. 2007-01-01 The upward intrusion of magma from deeper to shallower levels beneath volcanoes obviously plays an important role in their surface deformation. This chapter will examine less obvious roles that hydrothermal processes might play in volcanic deformation. 
Emphasis will be placed on the effect that the transition from brittle to plastic behavior of rocks is likely to have on magma degassing and hydrothermal processes, and on the likely chemical variations in brine and gas compositions that occur as a result of movement of aqueous-rich fluids from plastic into brittle rock at different depths. To a great extent, the model of hydrothermal processes in sub-volcanic systems that is presented here is inferential, based in part on information obtained from deep drilling for geothermal resources, and in part on the study of ore deposits that are thought to have formed in volcanic and shallow plutonic environments. 7. Daily variation characteristics at polar geomagnetic observatories Science.gov (United States) Lepidi, S.; Cafarella, L.; Pietrolungo, M.; Di Mauro, D. 2011-08-01 This paper is based on the statistical analysis of the diurnal variation as observed at six polar geomagnetic observatories, three in the Northern and three in the Southern hemisphere. Data are for 2006, a year of low geomagnetic activity. We compared the Italian observatory Mario Zucchelli Station (TNB; corrected geomagnetic latitude: 80.0°S), the French-Italian observatory Dome C (DMC; 88.9°S), the French observatory Dumont D'Urville (DRV; 80.4°S) and the three Canadian observatories, Resolute Bay (RES; 83.0°N), Cambridge Bay (CBB; 77.0°N) and Alert (ALE; 87.2°N). The aim of this work was to highlight analogies and differences in daily variation as observed at the different observatories during a year of low geomagnetic activity, also considering Interplanetary Magnetic Field conditions and geomagnetic indices.
It will be based on a European-scale network of multidisciplinary seafloor observatories from the Arctic to the Black Sea with the aim of long-term real-time monitoring of processes related to geosphere/biosphere/hydrosphere interactions. EMSO will enhance our understanding of processes, providing long time series data for phenomena at the different scales which constitute the new frontier for the study of the Earth's interior, deep-sea biology and chemistry, and ocean processes. The development of an underwater network is based on past EU projects and is supported by several EU initiatives, such as the on-going ESONET-NoE, aimed at strengthening the ocean observatories' scientific and technological community. The EMSO development relies on the synergy between the scientific community and industry to improve European competitiveness with respect to countries such as USA, Canada and Japan. Within the FP7 Programme launched in 2006, a call for Preparatory Phase (PP) was issued in order to support the foundation of the legal and organisational entity in charge of building up and managing the infrastructure, and coordinating the financial effort among the countries. The EMSO-PP project, coordinated by the Italian INGV with participation by 11 institutions from as many European countries, started in April 2008 and will last four years. 9. Lahar hazards at Mombacho Volcano, Nicaragua Science.gov (United States) Vallance, J.W.; Schilling, S.P.; Devoli, G. 2001-01-01 Mombacho volcano, at 1,350 meters, is situated on the shores of Lake Nicaragua and about 12 kilometers south of Granada, a city of about 90,000 inhabitants. Many more people live a few kilometers southeast of Granada in 'las Isletas de Granada' and the nearby 'Peninsula de Aseses'. These areas are formed of deposits of a large debris avalanche (a fast moving avalanche of rock and debris) from Mombacho.
Several smaller towns, with populations in the range of 5,000 to 12,000 inhabitants, are to the northwest and the southwest of Mombacho volcano. Though the volcano has apparently not been active in historical time, or about the last 500 years, it has the potential to produce landslides and debris flows (watery flows of mud, rock, and debris -- also known as lahars when they occur on a volcano) that could inundate these nearby populated areas. -- Vallance et al., 2001 10. Analysis of volcano rocks by Moessbauer spectroscopy International Nuclear Information System (INIS) Sitek, J.; Dekan, J. 2012-01-01 In this work we have analysed the basalt rock from Mount Batur volcano situated on the Island of Bali in Indonesia. We compared our results with the composition of basalt rocks from some other places on the Earth. (authors) 11. Moessbauer Spectroscopy study of Quimsachata Volcano materials International Nuclear Information System (INIS) Dominguez, A.G.B. 1988-01-01 Volcanic lava from the Quimsachata Volcano in Peru has been studied. Moessbauer Spectroscopy, X-ray diffraction, and electronic and optical microscopy allowed the identification of different mineralogical phases. (A.C.AS.) [pt 12. Lahar hazards at Agua volcano, Guatemala Science.gov (United States) Schilling, S.P.; Vallance, J.W.; Matías, O.; Howell, M.M. 2001-01-01 At 3760 m, Agua volcano towers more than 3500 m above the Pacific coastal plain to the south and 2000 m above the Guatemalan highlands to the north. The volcano is within 5 to 10 kilometers (km) of Antigua, Guatemala and several other large towns situated on its northern apron. These towns have a combined population of nearly 100,000. It is within about 20 km of Escuintla (population, ca. 100,000) to the south.
Though the volcano has not been active in historical time, or about the last 500 years, it has the potential to produce debris flows (watery flows of mud, rock, and debris—also known as lahars when they occur on a volcano) that could inundate these nearby populated areas. 13. Worldwide R&D of Virtual Observatory Science.gov (United States) Cui, C. Z.; Zhao, Y. H. 2008-07-01 Virtual Observatory (VO) is a data intensive online astronomical research and education environment, taking advantage of advanced information technologies to achieve seamless and uniform access to astronomical information. The concept of VO was introduced in the late 1990s to meet the challenges brought by the data avalanche in astronomy. In the paper, the current status of the International Virtual Observatory Alliance and technical highlights from worldwide VO projects are reviewed, and a brief introduction to the Chinese Virtual Observatory is given. 14. A search for the volcanomagnetic signal at Deception volcano (South Shetland I., Antarctica) Directory of Open Access Journals (Sweden) J. M. Ibáñez 1997-06-01 After the increase in seismic activity detected during the 1991-1992 summer survey at Deception Island, the continuous measurement of total magnetic intensity was included among the different techniques used to monitor this active volcano. The Polish geomagnetic observatory Arctowski, located on King George Island, served as a reference station, and changes in the differences between the daily mean values at both stations were interpreted as indicators of volcanomagnetic effects at Deception. A magnetic station in continuous recording mode was also installed during the 1993-1994 and 1994-1995 surveys. During the latter, a second magnetometer was deployed on Deception Island, and a third one in the vicinity of the Spanish Antarctic Station on Livingston Island (at a distance of 35 km), which was used as a reference station.
The results from the first survey suggest that a small magma injection, responsible for the seismic re-activation, could produce a volcanomagnetic effect, detected as a slight change in the difference between Deception and Arctowski. On the other hand, a long term variation starting at that moment seems to indicate a thermomagnetic effect. However, the short recording period and the use of only two stations do not allow the sources to be modelled. The future deployment of a magnetic array during the austral summer surveys, throughout the volcano, and of a permanent geomagnetic observatory at Livingston I. is aimed at further observations of magnetic transients of volcanic origin at Deception Island. 15. Byurakan Astrophysical Observatory as Cultural Centre Science.gov (United States) Mickaelian, A. M.; Farmanyan, S. V. 2017-07-01 NAS RA V. Ambartsumian Byurakan Astrophysical Observatory is presented as a cultural centre for Armenia and the Armenian nation in general. Besides being a scientific and educational centre, the Observatory is famous for its unique architectural ensemble and rich botanical garden and world of birds, as well as being one of the most frequently visited sights of Armenia. In recent years, the Observatory has also taken the initiative of coordinating Cultural Astronomy in Armenia and in this field unites astronomers, historians, archaeologists, ethnographers, culturologists, literary critics, linguists, art historians and other experts. Keywords: Byurakan Astrophysical Observatory, architecture, botanic garden, tourism, Cultural Astronomy. 16. Eruption of a deep-sea mud volcano triggers rapid sediment movement Science.gov (United States) Feseker, Tomas; Boetius, Antje; Wenzhöfer, Frank; Blandin, Jerome; Olu, Karine; Yoerger, Dana R.; Camilli, Richard; German, Christopher R.; de Beer, Dirk 2014-01-01 Submarine mud volcanoes are important sources of methane to the water column.
However, the temporal variability of their mud and methane emissions is unknown. Methane emissions were previously proposed to result from a dynamic equilibrium between upward migration and consumption at the seabed by methane-consuming microbes. Here we show non-steady-state situations of vigorous mud movement that are revealed through variations in fluid flow, seabed temperature and seafloor bathymetry. Time series data for pressure, temperature, pH and seafloor photography were collected over 431 days using a benthic observatory at the active Håkon Mosby Mud Volcano. We documented 25 pulses of hot subsurface fluids, accompanied by eruptions that changed the landscape of the mud volcano. Four major events triggered rapid sediment uplift of more than a metre in height, substantial lateral flow of muds at average velocities of 0.4 m per day, and significant emissions of methane and CO2 from the seafloor. PMID:25384354 17. Volcano alert level systems: managing the challenges of effective volcanic crisis communication Science.gov (United States) Fearnley, C. J.; Beaven, S. 2018-05-01 Over the last four decades, volcano observatories have adopted a number of different communication strategies for the dissemination of information on changes in volcanic behaviour and potential hazards to a wide range of user groups. These commonly include a standardised volcano alert level system (VALS), used in conjunction with other uni-valent communication techniques (such as information statements, reports and maps) and multi-directional techniques (such as meetings and telephone calls). 
This research, based on interviews and observation conducted 2007-2009 at the five US Geological Survey (USGS) volcano observatories, and including some of the key users of the VALS, argues for the importance of understanding how communicating volcanic hazard information takes place as an everyday social practice, focusing on the challenges of working across the boundaries between the scientific and decision-making communities. It is now widely accepted that the effective use, value and deployment of information across science-policy interfaces of this kind depend on three criteria: the scientific credibility of the information, its relevance to the needs of stakeholders and the legitimacy of both the information and the processes that produced it. Translation and two-way communication are required to ensure that all involved understand what information is credible and relevant. Findings indicate that whilst VALS play a role in raising awareness of an unfolding situation, supplementary communication techniques are crucial in facilitating understanding of that situation and of the uncertainties inherent in its scientific assessment, as well as in facilitating specific responses. In consequence, 'best practice' recommendations eschew further standardisation, and focus on the in situ cultivation of dialogue between scientists and stakeholders as a means of ensuring that information, and the processes through which it is produced, are perceived to be legitimate by all 18. Real-time source deformation modeling through GNSS permanent stations at Merapi volcano (Indonesia) Science.gov (United States) Beauducel, F.; Nurnaning, A.; Iguchi, M.; Fahmi, A. A.; Nandaka, M. A.; Sumarti, S.; Subandriyo, S.; Metaxian, J. P. 2014-12-01 Mt. Merapi (Java, Indonesia) is one of the most active and dangerous volcanoes in the world.
A first GPS repetition network was set up and periodically measured from 1993 onwards, allowing detection of a deep magma reservoir, quantification of the magma flux in the conduit, and identification of shallow discontinuities around the former crater (Beauducel and Cornet, 1999; Beauducel et al., 2000, 2006). After the 2010 centennial eruption, when this network was almost completely destroyed, Indonesian and Japanese teams installed a new continuous GPS network for monitoring purposes (Iguchi et al., 2011), consisting of 3 stations located on the volcano flanks, plus a reference station at the Yogyakarta Observatory (BPPTKG). In the framework of the DOMERAPI project (2013-2016) we have completed this network with 5 additional stations, which are located in the summit area and the volcano's surroundings. The new stations are 1-Hz sampling, GNSS (GPS + GLONASS) receivers, with near real-time data streaming to the Observatory. An automatic processing chain has been developed and included in the WEBOBS system (Beauducel et al., 2010), based on the GIPSY software, computing precise daily moving solutions every hour, and, for different time scales (2 months, 1 and 5 years), time series and velocity vectors. A real-time source modeling estimation has also been implemented. It uses the depth-varying point source solution (Mogi, 1958; Williams and Wadge, 1998) in a systematic inverse-problem model exploration that displays location, volume variation and a 3-D probability map. The operational system should be able to better detect and estimate the location and volume variations of possible magma sources, and to follow magma transfer towards the surface. This should help monitoring and contribute to decision making during future unrest or eruption. 19.
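The Merapi GNSS record above models deformation with the Mogi (1958) point-source solution. The standard elastic half-space expression for the vertical surface displacement of such a source can be sketched as follows; the numerical values are hypothetical illustrations, not Merapi results, and Poisson's ratio is taken as 0.25.

```python
import math

def mogi_uz(r_m, depth_m, dvol_m3, nu=0.25):
    """Vertical surface displacement (m) above a Mogi point pressure source.

    r_m: horizontal distance from the source axis (m)
    depth_m: source depth (m); dvol_m3: source volume change (m^3)
    u_z = (1 - nu) * dV / pi * d / (r^2 + d^2)^(3/2)
    """
    return (1.0 - nu) * dvol_m3 / math.pi * depth_m / (r_m**2 + depth_m**2) ** 1.5

# Hypothetical numbers: 10^6 m^3 of inflation at 5 km depth gives roughly
# 1 cm of uplift directly above the source, decaying with distance.
uplift_center = mogi_uz(0.0, 5000.0, 1.0e6)
uplift_10km = mogi_uz(10000.0, 5000.0, 1.0e6)
```

Inverting this forward model over a grid of candidate depths and volume changes is the essence of the systematic model exploration the abstract describes.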
National Astronomical Observatory of Japan CERN Document Server Haubold, Hans J; UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan 2010-01-01 This book represents Volume II of the Proceedings of the UN/ESA/NASA Workshop on the International Heliophysical Year 2007 and Basic Space Science, hosted by the National Astronomical Observatory of Japan, Tokyo, 18 - 22 June, 2007. It covers two programme topics explored in this and past workshops of this nature: (i) non-extensive statistical mechanics as applicable to astrophysics, addressing q-distribution, fractional reaction and diffusion, and the reaction coefficient, as well as the Mittag-Leffler function and (ii) the TRIPOD concept, developed for astronomical telescope facilities. The companion publication, Volume I of the proceedings of this workshop, is a special issue in the journal Earth, Moon, and Planets, Volume 104, Numbers 1-4, April 2009. 20. Autonomous Infrastructure for Observatory Operations Science.gov (United States) Seaman, R. This is an era of rapid change from ancient human-mediated modes of astronomical practice to a vision of ever larger time domain surveys, ever bigger "big data", to increasing numbers of robotic telescopes and astronomical automation on every mountaintop. Over the past decades, facets of a new autonomous astronomical toolkit have been prototyped and deployed in support of numerous space missions. Remote and queue observing modes have gained significant market share on the ground. Archives and data-mining are becoming ubiquitous; astroinformatic techniques and virtual observatory standards and protocols are areas of active development. Astronomers and engineers, planetary and solar scientists, and researchers from communities as diverse as particle physics and exobiology are collaborating on a vast range of "multi-messenger" science. What then is missing? 1. 
TUM Critical Zone Observatory, Germany Science.gov (United States) Völkel, Jörg; Eden, Marie 2014-05-01 Founded in 2011, the TUM Critical Zone Observatory, run by the Technische Universität München and partners abroad, is the first CZO within Germany. TUM CZO is both a scientific and an education project. It is a watershed-based observatory, but moves beyond this focus alone. In fact, two mountainous areas are integrated: (1) the Ammer Catchment area as an alpine and pre-alpine research area in the northern limestone Alps and forelands south of Munich; (2) the Otter Creek Catchment in the Bavarian Forest with a crystalline setting (granite, gneiss) as a mid-mountainous area near Regensburg, partly including the mountainous Bavarian Forest National Park. The Ammer Catchment is a high energy system as well as a sensitive climate system with past glacial elements. The lithology shows mostly carbonates from Tertiary and Mesozoic times (e.g. Flysch). Source-to-sink processes are characteristic of the Ammer Catchment down to the last glacial Ammer Lake as the regional erosion and deposition base. The consideration of distal depositional environments and the integration of upstream and downstream landscape effects are characteristic of the Ammer Catchment as well. Long term datasets exist in many regards. The Otter Creek catchment area is developed in a granitic environment, rich in saprolites. As a mid-mountainous catchment, its energy system is at a lower stage. Hence, the two are ideal for comparison. Both TUM CZO catchments capture the depositional environment, and both catchment areas include historical impacts and rapid land use change. Crosscutting themes across both sites are inbuilt. Questions of the ability to capture gradients along climosequences, chronosequences and anthroposequences are essential. 2.
Volcanoes of México: An Interactive CD-ROM From the Smithsonian's Global Volcanism Program Science.gov (United States) Siebert, L.; Kimberly, P.; Calvin, C.; Luhr, J. F.; Kysar, G. 2002-12-01 The Smithsonian Institution's Global Volcanism Program is nearing completion of an interactive CD-ROM, the Volcanoes of México. This CD is the second in a series sponsored by the U.S. Department of Energy Office of Geothermal Technologies to collate Smithsonian data on Quaternary volcanism as a resource for the geothermal community. It also has utility for those concerned with volcanic hazard and risk mitigation, as well as serving as an educational tool for those interested in Mexican volcanism. We acknowledge the significant contributions of many Mexican volcanologists to the eruption reports, data, and images contained in this CD, in particular those contributions of the Centro Nacional de Prevencion de Desastres (CENAPRED), the Colima Volcano Observatory of the University of Colima, and the Universidad Nacional Autónoma de México (UNAM). The Volcanoes of México CD has a format similar to that of an earlier Smithsonian CD, the Volcanoes of Indonesia, but also shows Pleistocene volcanic centers and additional data on geothermal sites. A clickable map of México shows both Holocene and Pleistocene volcanic centers and provides access to individual pages on 67 volcanoes ranging from Cerro Prieto in Baja California to Tacaná on the Guatemalan border. These include geographic and geologic data on individual volcanoes (as well as a brief paragraph summarizing the geologic history) along with tabular eruption chronologies, eruptive characteristics, and eruptive volumes, when known. Volcano data are accessible from both geographical and alphabetical searches. A major component of the CD is more than 400 digitized images illustrating the morphology of volcanic centers and eruption processes and deposits, providing a dramatic visual primer to the country's volcanoes.
Images of specific eruptions can be directly linked to from the eruption chronology tables. The Volcanoes of México CD includes monthly reports and associated figures and tables cataloging volcanic activity in M 3. Overview of gas flux measurements from volcanoes of the global Network for Observation of Volcanic and Atmospheric Change (NOVAC) Science.gov (United States) Galle, Bo; Arellano, Santiago; Conde, Vladimir 2015-04-01 NOVAC, the Network for Observation of Volcanic and Atmospheric Change, was initiated in 2005 as a five-year project financed by the European Union. Its main purpose is to create a global network for the study of volcanic atmospheric plumes and related geophysical phenomena by using state-of-the-art spectroscopic remote sensing technology. Up to 2014, 67 instruments have been installed at 25 volcanoes in 13 countries of Latin America, Italy, Democratic Republic of Congo, Reunion, Iceland, and Philippines, and efforts are being made to expand the network to other active volcanic zones. NOVAC has been a pioneer initiative in the community of volcanologists and embraces the objectives of the World Organization of Volcano Observatories (WOVO) and the Global Earth Observation System of Systems (GEOSS). In this contribution, we present the results of the measurements of SO2 gas fluxes carried out within NOVAC, which for some volcanoes represent a record of more than 8 years of semi-continuous monitoring. The network comprises some of the most strongly degassing volcanoes in the world, covering a broad range of tectonic settings, levels of unrest, and potential risk. Examples of correlations with seismicity and other geophysical phenomena, environmental impact studies and comparisons with previous global estimates will be discussed, as well as the significance of the database for further studies in volcanology and other geosciences. 4.
The Merapi Interactive Project: Offering a Fancy Cross-Disciplinary Scientific Understanding of Merapi Volcano to a Wide Audience. Science.gov (United States) Morin, J.; Kerlow, I. 2015-12-01 The Merapi volcano is of great interest to a wide audience as it is one of the most dangerous volcanoes worldwide and a beautiful touristic spot. The scientific literature available on that volcano, both in Earth and Social sciences, is rich but mostly inaccessible to the public because of the scientific jargon and the restricted database access. Merapi Interactive aims at developing clear information and attractive content about Merapi for a wide audience. The project is being produced by the Art and Media Group at the Earth Observatory of Singapore, and it takes the shape of an e-book. It offers a consistent, comprehensive, and jargon-filtered synthesis of the main volcanic-risk related topics about Merapi: volcanic mechanisms, eruptive history, associated hazards and risks, the way inhabitants and scientists deal with them, and what daily life at Merapi looks like. The project provides a background to better understand volcanoes, and it points out some interactions between scientists and society. We propose two levels of interpretation: one that is understandable by 10-year-old kids and above, and an expert level with deeper presentations of specific topics. Thus, the Merapi Interactive project intends to provide an engaging and comprehensive interactive book that should interest kids and adults, as well as Earth Sciences undergraduates and academics. Merapi Interactive is scheduled for delivery in mid-2016. 5. Observatory data and the Swarm mission DEFF Research Database (Denmark) Macmillan, S.; Olsen, Nils 2013-01-01 The ESA Swarm mission to identify and measure very accurately the different magnetic signals that arise in the Earth's core, mantle, crust, oceans, ionosphere and magnetosphere, which together form the magnetic field around the Earth, has increased interest in magnetic data collected on the surface of the Earth at observatories. The scientific use of Swarm data and Swarm-derived products is greatly enhanced by combination with observatory data and indices. As part of the Swarm Level-2 data activities, plans are in place to distribute such ground-based data along with the Swarm data as auxiliary data products. We describe here the preparation of the data set of ground observatory hourly mean values, including procedures to check and select observatory data spanning the modern magnetic survey satellite era. We discuss other possible combined uses of satellite and observatory data, in particular those... 6. Sensibility analysis of VORIS lava-flow simulations: application to Nyamulagira volcano, Democratic Republic of Congo Science.gov (United States) Syavulisembo, A. M.; Havenith, H.-B.; Smets, B.; d'Oreye, N.; Marti, J. 2015-03-01 Assessment and management of volcanic risk are important scientific, economic, and political issues, especially in densely populated areas threatened by volcanoes. The Virunga area in the Democratic Republic of Congo, with over 1 million inhabitants, has to cope permanently with the threat posed by the active Nyamulagira and Nyiragongo volcanoes. During the past century, Nyamulagira erupted at intervals of 1-4 years - mostly in the form of lava flows - at least 30 times. Its summit and flank eruptions lasted for periods of a few days up to more than two years, and produced lava flows sometimes reaching distances of over 20 km from the volcano, thereby affecting very large areas and having a serious impact on the region of Virunga.
In order to identify a useful tool for lava flow hazard assessment at the Goma Volcano Observatory (GVO), we tested VORIS 2.0.1 (Felpeto et al., 2007), a freely available software package (http://www.gvb-csic.es) based on a probabilistic model that considers topography as the main parameter controlling lava flow propagation. We tested different Digital Elevation Models (DEM) - SRTM1, SRTM3, and ASTER GDEM - to analyze the sensibility of the input parameters of VORIS 2.0.1 in simulation of recent historical lava flows for which the pre-eruption topography is known. The results obtained show that VORIS 2.0.1 is a quick, easy-to-use tool for simulating lava-flow eruptions and replicates to a high degree of accuracy the eruptions tested. In practice, these results will be used by GVO to calibrate the VORIS model for lava flow path forecasting during new eruptions, hence contributing to better volcanic crisis management. 7. Three-axial Fiber Bragg Grating Strain Sensor for Volcano Monitoring Science.gov (United States) Giacomelli, Umberto; Beverini, Nicolò; Carbone, Daniele; Carelli, Giorgio; Francesconi, Francesco; Gambino, Salvatore; Maccioni, Enrico; Morganti, Mauro; Orazi, Massimo; Peluso, Rosario; Sorrentino, Fiodor 2017-04-01 Fiber optic and FBG sensors have become widespread in recent years as cost-effective monitoring and diagnostic devices in civil engineering. However, in spite of their potential impact, these instruments have found very limited application in geophysics. In order to study earthquakes and volcanoes, the measurement of crustal deformation is of crucial importance. Stress and strain behaviour is among the best indicators of changes in the activity of volcanoes. Deep bore-hole dilatometers and strainmeters have been employed for volcano monitoring. These instruments are very sensitive and reliable, but are not cost-effective and their installation requires a large effort.
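The VORIS record above describes a probabilistic lava-flow model in which topography is the main control on propagation. A toy Monte Carlo sketch in the same spirit follows: random walkers step downhill with probability proportional to the height drop, and per-cell visit counts approximate an inundation map. This illustrates the general idea only; it is not the actual VORIS algorithm.

```python
import random

def simulate_lava_paths(dem, start, n_paths=500, max_steps=200, seed=42):
    """Monte Carlo walkers descending a DEM (list of lists of heights).

    Each walker moves to a strictly lower 8-neighbour with probability
    proportional to the height drop; per-cell visit counts approximate
    an inundation probability map.
    """
    rng = random.Random(seed)
    rows, cols = len(dem), len(dem[0])
    hits = [[0] * cols for _ in range(rows)]
    for _ in range(n_paths):
        r, c = start
        for _ in range(max_steps):
            hits[r][c] += 1
            # Collect strictly downhill neighbours with their height drops.
            candidates = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                        drop = dem[r][c] - dem[nr][nc]
                        if drop > 0:
                            candidates.append((drop, nr, nc))
            if not candidates:
                break  # local minimum: the walker stops here
            # Weighted choice: steeper drops are proportionally more likely.
            x = rng.uniform(0, sum(d for d, _, _ in candidates))
            for drop, nr, nc in candidates:
                x -= drop
                if x <= 0:
                    r, c = nr, nc
                    break
    return hits

# Tiny tilted-plane DEM: heights decrease towards the last row,
# so every walker ends up there.
dem = [[10 - i for _ in range(5)] for i in range(5)]
hits = simulate_lava_paths(dem, start=(0, 2))
```

On a real DEM the visit counts, normalised by the number of paths, play the role of the per-cell lava invasion probability that such models output.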
Fiber optic based devices offer low cost, small size, wide frequency band, easier deployment and even the possibility of creating a local network with several sensors linked in an array. We present the realization, installation and first results of a shallow-borehole (8.5 meters depth) three-axial Fiber Bragg Grating (FBG) strain sensor prototype. This sensor has been developed in the framework of the MED-SUV project and installed on Etna volcano, in the facilities of the Serra La Nave astrophysical observatory. The installation site is about 7 km South-West of the summit craters, at an elevation of about 1740 m. The main goal of our work is the realization of a three-axial device having a high resolution and accuracy in static and dynamic strain measurements, with special attention to the trade-off among resolution, cost and power consumption. The sensor structure and its read-out system are innovative and offer practical advantages in comparison with traditional strain meters. Here we present data collected during the first five months of operation. In particular, the very clear signals recorded during the Central Italy seismic event of October 30th demonstrate the performance of our device. 8. Integrating SAR and derived products into operational volcano monitoring and decision support systems Science.gov (United States) Meyer, F. J.; McAlpin, D. B.; Gong, W.; Ajadi, O.; Arko, S.; Webley, P. W.; Dehn, J. 2015-02-01 Remote sensing plays a critical role in operational volcano monitoring due to the often remote locations of volcanic systems and the large spatial extent of potential eruption pre-cursor signals. Despite the all-weather capabilities of radar remote sensing and its high performance in monitoring of change, the contribution of radar data to operational monitoring activities has been limited in the past.
This is largely due to: (1) the high costs associated with radar data; (2) traditionally slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radars. With this paper, we present new data processing and data integration techniques that mitigate some of these limitations and allow for a meaningful integration of radar data into operational volcano monitoring decision support systems. Specifically, we present fast data access procedures as well as new approaches to multi-track processing that improve near real-time data access and temporal sampling of volcanic systems with SAR data. We introduce phase-based (coherent) and amplitude-based (incoherent) change detection procedures that are able to extract dense time series of hazard information from these data. For a demonstration, we present an integration of our processing system with an operational volcano monitoring system that was developed for use by the Alaska Volcano Observatory (AVO). Through an application to a historic eruption, we show that the integration of SAR into systems such as AVO can significantly improve the ability of operational systems to detect eruptive precursors. Therefore, the developed technology is expected to improve operational hazard detection, alerting, and management capabilities. 9. International lunar observatory / power station: from Hawaii to the Moon Science.gov (United States) Durst, S. -like lava flow geology adds to Mauna Kea / Moon similarities. Operating amidst the extinct volcano's fine grain lava and dust particles offers experience for major challenges posed by silicon-edged, powdery, deep and abundant lunar regolith. Power stations for lunar observatories, both robotic and low cost at first, are an immediate enabling necessity and will serve as a commercial-industrial driver for a wide range of lunar base technologies. 
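The SAR record above distinguishes phase-based (coherent) and amplitude-based (incoherent) change detection. A minimal sketch of the incoherent case uses the classic log-ratio operator on two co-registered amplitude images; the threshold here is a hypothetical placeholder, whereas operational systems estimate it from the data statistics.

```python
import math

def log_ratio_change(before, after, threshold=0.5):
    """Flag pixels whose amplitude log-ratio exceeds a threshold.

    before/after: 2-D lists of co-registered SAR amplitudes (linear scale).
    Returns a 2-D boolean change map. A small epsilon guards against
    zero-valued pixels.
    """
    eps = 1e-6
    return [
        [abs(math.log((y + eps) / (x + eps))) > threshold for x, y in zip(ra, rb)]
        for ra, rb in zip(before, after)
    ]

# A single brightened pixel (e.g. a fresh deposit) is flagged as change.
before = [[1.0, 1.0], [1.0, 1.0]]
after = [[1.0, 3.0], [1.0, 1.0]]
change = log_ratio_change(before, after)
```

The ratio form (rather than a simple difference) matters for SAR because speckle noise is multiplicative, so equal relative changes are treated alike in bright and dark areas.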
Both microwave rectenna-transmitters and radio-optical telescopes, maybe 1-meter diameter, can be designed using the same, new ultra-lightweight materials. Five of the world's six major spacefaring powers (America, Russia, Japan, China and India) are located around Hawaii in the Pacific / Asia area. With Europe, which has many resources in the Pacific hemisphere including Arianespace offices in Tokyo and Singapore, they have 55-60% of the global population. New international business partnerships such as Sea Launch in the mid-Pacific, and national ventures like China's Hainan spaceport, Japan's Kiribati shuttle landing site, Australia and Indonesia's emerging launch sites, and Russia's Ekranoplane sea launcher / lander - all combine with still more and advancing technologies to provide the central Pacific a globally representative, state-of-the-art and profitable access to space in this new century. The astronomer / engineers tasked with operation of the lunar observatory / power station will be the first to voyage from Hawaii to the Moon, before this decade is out. Their scientific and technical training at the world's leading astronomical complex on the lunar-like landscape of Mauna Kea may be enhanced with the learning and transmission of local cultures. Following the astronomer / engineers, tourism and travel in the commercially and technologically dynamic Pacific hemisphere will open the new ocean of space to public access in the 21st century like they opened the old ocean of sea and air to Hawaii in the 20th - with Hawaii 10. Effects of Volcanoes on the Natural Environment Science.gov (United States) Mouginis-Mark, Peter J. 2005-01-01 The primary focus of this project has been on the development of techniques to study the thermal and gas output of volcanoes, and to explore our options for the collection of vegetation and soil data to enable us to assess the impact of this volcanic activity on the environment.
We originally selected several volcanoes that have persistent gas emissions and/or magma production. The investigation took an integrated look at the environmental effects of a volcano. Through their persistent activity, basaltic volcanoes such as Kilauea (Hawaii) and Masaya (Nicaragua) contribute significant amounts of sulfur dioxide and other gases to the lower atmosphere. Although primarily local rather than regional in its impact, the continuous nature of these eruptions means that they can have a major impact on the troposphere for years to decades. Since mid-1986, Kilauea has emitted about 2,000 tonnes of sulfur dioxide per day, while between 1995 and 2000 Masaya emitted about 1,000 to 1,500 tonnes per day (Duffell et al., 2001; Delmelle et al., 2002; Sutton and Elias, 2002). These emissions have a significant effect on the local environment. The volcanic smog ("vog") that is produced affects the health of local residents, impacts the local ecology via acid rain deposition and the generation of acidic soils, and is a concern to local air traffic due to reduced visibility. Much of the work conducted under this NASA project was focused on the development of field validation techniques for volcano degassing and thermal output that could then be correlated with satellite observations. In this way, we strove to develop methods by which not only our study volcanoes but also volcanoes worldwide could be routinely monitored for their effects on the environment (Wright and Flynn, 2004; Wright et al., 2004). The selected volcanoes were: Kilauea (Hawaii; 19.425 N, 155.292 W); Masaya (Nicaragua; 11.984 N, 86.161 W); and Poás (Costa Rica; 10.200 N, 84.233 W). 11. Volcanoes in the Classroom--an Explosive Learning Experience. Science.gov (United States) Thompson, Susan A.; Thompson, Keith S. 1996-01-01 Presents a unit on volcanoes for third- and fourth-grade students.
Includes demonstrations; video presentations; building a volcano model; and inviting a scientist, preferably a vulcanologist, to share his or her expertise with students. (JRH) 12. Volcanostratigraphic Approach for Evaluation of Geothermal Potential in Galunggung Volcano Science.gov (United States) Ramadhan, Q. S.; Sianipar, J. Y.; Pratopo, A. K. 2016-09-01 The geothermal systems in Indonesia are primarily associated with volcanoes. There are over 100 volcanoes located on Sumatra, Java, and in the eastern part of Indonesia. Volcanostratigraphy is one of the methods used in the early stage of exploration of volcanic geothermal systems to identify the characteristics of the volcano. The stratigraphy of Galunggung Volcano is identified based on the 1:100,000-scale topographic map of the Tasikmalaya sheet, a 1:50,000-scale topographic map, and a geological map. A schematic flowchart for the evaluation of geothermal exploration is used to interpret and evaluate geothermal potential in volcanic regions. A volcanostratigraphy study has been done on Galunggung Volcano and Talaga Bodas Volcano, West Java, Indonesia. Based on the interpretation of the topographic maps and analysis of the dimensions, rock composition, age, and stress regime, we conclude that both Galunggung Volcano and Talaga Bodas Volcano have a geothermal resource potential that deserves further investigation. 13. Solar Imagery - Photosphere - Sunspot Drawings - McMath-Hulbert Observatory Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The McMath-Hulbert Observatory is a decommissioned solar observatory in Lake Angelus, Michigan, USA. It was established in 1929 as a private observatory by father... 14.
EMSO: European Multidisciplinary Seafloor Observatory Science.gov (United States) Favali, P.; Partnership, Emso 2009-04-01 EMSO, a Research Infrastructure listed within the ESFRI (European Strategy Forum on Research Infrastructures) Roadmap, is the European-scale network of multidisciplinary seafloor observatories from the Arctic to the Black Sea with the scientific objective of long-term real-time monitoring of processes related to geosphere/biosphere/hydrosphere interactions. EMSO will enhance our understanding of processes through long time series appropriate to the scale of the phenomena, constituting the new frontier of studying the Earth's interior, deep-sea biology and chemistry, and ocean processes. EMSO will also respond to the need expressed in the frame of GMES (Global Monitoring for Environment and Security) to develop a marine segment integrated into the in situ and satellite global monitoring system. The EMSO development relies upon the synergy between the scientific community and industry to improve European competitiveness with respect to countries such as the USA and Canada (NEPTUNE, VENUS, and MARS projects), Taiwan (MACHO project), and Japan (DONET project). In Europe the development of an underwater network has been based on previous EU-funded projects since the early 1990s, and is presently supported by EU initiatives. The EMSO infrastructure will constitute the extension to the sea of the land-based networks. Examples of data recorded by seafloor observatories will be presented. EMSO is presently in its Preparatory Phase (PP), funded under the EC FP7 Capacities Programme. The project started in April 2008 and will last 4 years with the participation of 12 Institutions representing 12 countries. EMSO's potential will also be significantly increased through interaction with other Research Infrastructures devoted to Earth Science. 2. IFREMER-Institut Français de Recherche pour l'exploitation de la mer (France, ref.
Roland Person); KDM-Konsortium Deutsche Meeresforschung e.V. (Germany, ref. Christoph Waldmann); IMI-Irish Marine Institute (Ireland, ref. Michael Gillooly); UTM-CSIC-Unidad de 15. EMSO: European Multidisciplinary Seafloor Observatory Science.gov (United States) Favali, Paolo 2010-05-01 EMSO, a Research Infrastructure listed within the ESFRI (European Strategy Forum on Research Infrastructures) Roadmap (Report 2006, http://cordis.europa.eu/esfri/roadmap.htm), is the European-scale network of multidisciplinary seafloor observatories from the Arctic to the Black Sea with the scientific objective of long-term real-time monitoring of processes related to geosphere/biosphere/hydrosphere interactions. EMSO will enhance our understanding of processes through long time series appropriate to the scale of the phenomena, constituting the new frontier of studying the Earth's interior, deep-sea biology and chemistry, and ocean processes. The development of an underwater network is based on previous EU-funded projects since the early 1990s and is being supported by several EU initiatives, such as the on-going ESONET-NoE, coordinated by IFREMER (2007-2011, http://www.esonet-emso.org/esonet-noe/), which aims at gathering together the research community of the ocean observatories. In 2006 the FP7 Capacities Programme launched a call for Preparatory Phase (PP) projects, which will provide the support to create the legal and organisational entities in charge of managing the infrastructures and coordinating the financial effort among the countries. Under this call the EMSO-PP project was approved in 2007 with the coordination of INGV and the participation of 11 other Institutions from 11 countries. The project started in April 2008 and will last 4 years. EMSO is a key infrastructure both for Ocean Sciences and for Solid Earth Sciences. In this respect it will profitably enhance and complement the capabilities of other European research infrastructures such as EPOS, ERICON-Aurora Borealis, and SIOS.
The perspective of the synergy among EMSO and other ESFRI Research Infrastructures will be outlined. EMSO Partners: IFREMER-Institut Français de Recherche pour l'exploitation de la mer (France, ref. Roland Person); KDM-Konsortium Deutsche Meeresforschung e.V. (Germany, ref. Christoph 16. Long Period Earthquakes Beneath California's Young and Restless Volcanoes Science.gov (United States) Pitt, A. M.; Dawson, P. B.; Shelly, D. R.; Hill, D. P.; Mangan, M. 2013-12-01 The newly established USGS California Volcano Observatory has the broad responsibility of monitoring and assessing hazards at California's potentially threatening volcanoes, most notably Mount Shasta, Medicine Lake, Clear Lake Volcanic Field, and Lassen Volcanic Center in northern California; and Long Valley Caldera, Mammoth Mountain, and Mono-Inyo Craters in east-central California. Volcanic eruptions occur in California about as frequently as the largest San Andreas Fault Zone earthquakes-more than ten eruptions have occurred in the last 1,000 years, most recently at Lassen Peak (1666 C.E. and 1914-1917 C.E.) and Mono-Inyo Craters (c. 1700 C.E.). The Long Valley region (Long Valley caldera and Mammoth Mountain) underwent several episodes of heightened unrest over the last three decades, including intense swarms of volcano-tectonic (VT) earthquakes, rapid caldera uplift, and hazardous CO2 emissions. Both Medicine Lake and Lassen are subsiding at appreciable rates, and along with Clear Lake, Long Valley Caldera, and Mammoth Mountain, sporadically experience long period (LP) earthquakes related to migration of magmatic or hydrothermal fluids. Worldwide, the last two decades have shown the importance of tracking LP earthquakes beneath young volcanic systems, as they often provide indication of impending unrest or eruption. Herein we document the occurrence of LP earthquakes at several of California's young volcanoes, updating a previous study published in Pitt et al., 2002, SRL. 
All events were detected and located using data from stations within the Northern California Seismic Network (NCSN). Event detection was spatially and temporally uneven across the NCSN in the 1980s and 1990s, but additional stations, adoption of the Earthworm processing system, and heightened vigilance by seismologists have improved the catalog over the last decade. LP earthquakes are now relatively well-recorded under Lassen (~150 events since 2000), Clear Lake (~60 events), Mammoth Mountain 17. Volcano Trial Case on GEP: Systematically processing EO data OpenAIRE Baumann, Andreas Bruno Graziano 2017-01-01 Volcanoes can be found all over the world, on land and below the water surface. Even nowadays not all volcanoes are known. About 600 erupted in geologically recent times and about 50-70 volcanoes are currently active. Volcanoes can cause earthquakes; throw out blasts and tephras; release (toxic) gases; lava can flow relatively slowly down the slopes; mass movements like debris avalanches and landslides can cause tsunamis; and fast, hot pyroclastic surges, flows, and lahars can travel fast down ... 18. The Malaysian Robotic Solar Observatory (P29) Science.gov (United States) Othman, M.; Asillam, M. F.; Ismail, M. K. H. 2006-11-01 Robotic observatories with small telescopes can make significant contributions to astronomical observation. They provide an encouraging environment for astronomers to focus on data analysis and research while at the same time reducing the time and cost of observation. The observatory will house the primary 50 cm robotic telescope in the main dome, which will be used for photometry, spectroscopy, and astrometry observation activities. The secondary telescope is a robotic multi-apochromatic refractor (maximum diameter: 15 cm) which will be housed in the smaller dome. This telescope set will be used for solar observation mainly in three different wavelengths simultaneously: the Continuum, H-Alpha and Calcium K-line.
The observatory is also equipped with an automated weather station, a cloud-and-rain sensor, and an all-sky camera to monitor climatic conditions, sense clouds before rain, and provide a real-time view of the sky above the observatory. In conjunction with the Langkawi All-Sky Camera, the observatory website will also display images from the Malaysia - Antarctica All-Sky Camera used to monitor the sky at Scott Base, Antarctica. Both all-sky images can be displayed simultaneously to show the difference between the equatorial and Antarctic skies. This paper will describe the Malaysian Robotic Observatory, including the systems available and the method of access by other astronomers. We will also suggest possible collaboration with other observatories in this region. 19. Stability and behavior of the outer array of small water Cherenkov detectors, outriggers, in the HAWC observatory OpenAIRE Capistrán, T.; Torres, I.; Moreno, E.; collaboration, for the HAWC 2017-01-01 The High-Altitude Water Cherenkov (HAWC) Observatory is used for detecting TeV gamma rays. HAWC is operating at 4,100 meters above sea level on the slope of the Sierra Negra Volcano in the State of Puebla, Mexico, and consists of an array of 300 water Cherenkov detectors (WCDs) covering an area of 22,000 m². Each WCD is equipped with four photomultiplier tubes (PMTs) to detect Cherenkov emission in the water from secondary particles of extensive air showers (EAS) that are produced in the in... 20. Volcano Geodesy: Recent developments and future challenges Science.gov (United States) Fernandez, Jose F.; Pepe, Antonio; Poland, Michael; Sigmundsson, Freysteinn 2017-01-01 Ascent of magma through Earth's crust is normally associated with, among other effects, ground deformation and gravity changes. Geodesy is thus a valuable tool for monitoring and hazards assessment during volcanic unrest, and it provides valuable data for exploring the geometry and volume of magma plumbing systems.
Recent decades have seen an explosion in the quality and quantity of volcano geodetic data. New datasets (some made possible by regional and global scientific initiatives), as well as new analysis methods and modeling practices, have resulted in important changes to our understanding of the geodetic characteristics of active volcanism and magmatic processes, from the scale of individual eruptive vents to global compilations of volcano deformation. Here, we describe some of the recent developments in volcano geodesy, both in terms of data and interpretive tools, and discuss the role of international initiatives in meeting future challenges for the field. 1. Soil radon response around an active volcano International Nuclear Information System (INIS) Segovia, N.; Valdes, C.; Pena, P.; Mena, M.; Tamez, E. 2001-01-01 Soil radon behavior related to the volcanic eruptive period 1997-1999 of Popocatepetl volcano has been studied as a function of the volcanic activity. Since the volcano is located 60 km from Mexico City, the risk associated with an explosive eruptive phase is high and an intense surveillance program has been implemented. Previous studies in this particular volcano showed soil radon pulses preceding the initial phase of the eruption. The radon survey was performed with LR-115 track detectors at a shallow depth, and the effect of soil moisture during the rainy season has been observed in the detectors' response. In the present state of volcanic activity, the soil radon behavior has shown more stability than in previous eruptive stages. 2. Predicting the Timing and Location of the next Hawaiian Volcano Science.gov (United States) Russo, Joseph; Mattox, Stephen; Kildau, Nicole 2010-01-01 The wealth of geologic data on Hawaiian volcanoes makes them ideal for study by middle school students.
In this paper the authors use existing data on the age and location of Hawaiian volcanoes to predict the location of the next Hawaiian volcano and when it will begin to grow on the floor of the Pacific Ocean. An inquiry-based lesson is also… 3. Interdisciplinary studies of eruption at Chaiten Volcano, Chile Science.gov (United States) John S. Pallister; Jon J. Major; Thomas C. Pierson; Richard P. Hoblitt; Jacob B. Lowenstern; John C. Eichelberger; Lara. Luis; Hugo Moreno; Jorge Munoz; Jonathan M. Castro; Andres Iroume; Andrea Andreoli; Julia Jones; Fred Swanson; Charlie Crisafulli 2010-01-01 There was keen interest within the volcanology community when the first large eruption of high-silica rhyolite since that of Alaska's Novarupta volcano in 1912 began on 1 May 2008 at Chaiten volcano, southern Chile, a 3-kilometer-diameter caldera volcano with a prehistoric record of rhyolite eruptions. Vigorous explosions occurred through 8 May 2008, after which... 4. How Do Volcanoes Affect Human Life? Integrated Unit. Science.gov (United States) Dayton, Rebecca; Edwards, Carrie; Sisler, Michelle This packet contains a unit on teaching about volcanoes. The following question is addressed: How do volcanoes affect human life? The unit covers approximately three weeks of instruction and strives to present volcanoes in an holistic form. The five subject areas of art, language arts, mathematics, science, and social studies are integrated into… 5. Living with Volcanoes: Year Eleven Teaching Resource Unit. Science.gov (United States) Le Heron, Kiri; Andrews, Jill; Hooks, Stacey; Larnder, Michele; Le Heron, Richard 2000-01-01 Presents a unit on volcanoes and experiences with volcanoes that helps students develop geography skills. Focuses on four volcanoes: (1) Rangitoto Island; (2) Lake Pupuke; (3) Mount Smart; and (4) One Tree Hill. Includes an answer sheet and resources to use with the unit. (CMK) 6. 
Robotic Software for the Thacher Observatory Science.gov (United States) Lawrence, George; Luebbers, Julien; Eastman, Jason D.; Johnson, John A.; Swift, Jonathan 2018-06-01 The Thacher Observatory—a research and educational facility located in Ojai, CA—uses a 0.7-meter telescope to conduct photometric research on a variety of targets including eclipsing binaries, exoplanet transits, and supernovae. Currently, observations are automated using commercial software. In order to expand the flexibility for specialized scientific observations and to increase the educational value of the facility on campus, we are adapting and implementing the custom observatory control software and queue scheduling developed for the Miniature Exoplanet Radial Velocity Array (MINERVA) to the Thacher Observatory. We present the design and implementation of this new software as well as its demonstrated functionality on the Thacher Observatory. 7. Volcano-tectonic earthquakes: A new tool for estimating intrusive volumes and forecasting eruptions Science.gov (United States) White, Randall; McCausland, Wendy 2016-01-01 , the intruded magma volume can be quickly and easily estimated with few short-period seismic stations. Notable cases in which distal VT events preceded eruptions at long-dormant volcanoes include: Nevado del Ruiz (1984-1985), Pinatubo (1991), Unzen (1989-1995), Soufriere Hills (1995), Shishaldin (1989-1999), Tacaná (1985-1986), Pacaya (1980-1984), Rabaul (1994), and Cotopaxi (2001). Additional cases are recognized at frequently active volcanoes including Popocatepetl (2001-2003) and Mauna Loa (1984). We present four case studies (Pinatubo, Soufriere Hills, Unzen, and Tacaná) in which we demonstrate the above-mentioned VT characteristics prior to eruptions.
Using regional data recorded by NEIC, we recognized in near-real time that a huge distal VT swarm was occurring, deduced that a proportionately huge magmatic intrusion was taking place beneath the long-dormant Sulu Range, New Britain Island, Papua New Guinea, that it was likely to lead to eruptive activity, and warned Rabaul Volcano Observatory days before a phreatic eruption occurred. This confirms the value of this technique for eruption forecasting. We also present a counter-example where we deduced that a VT swarm at Volcán Cosigüina, Nicaragua, indicated a small intrusion, insufficient to reach the surface and erupt. Finally, we discuss limitations of the method and propose a mechanism by which this distal VT seismicity is triggered by magmatic intrusion. 8. Volcanoes muon imaging using Cherenkov telescopes International Nuclear Information System (INIS) Catalano, O.; Del Santo, M.; Mineo, T.; Cusumano, G.; Maccarone, M.C.; Pareschi, G. 2016-01-01 A detailed understanding of a volcano's inner structure is one of the key points for volcanic hazards evaluation. To this aim, in the last decade, geophysical radiography techniques using cosmic muon particles have been proposed. By measuring the differential attenuation of the muon flux as a function of the amount of rock crossed along different directions, it is possible to determine the density distribution of the interior of a volcano. Up to now, a number of experiments have been based on the detection of muon tracks crossing hodoscopes, made up of scintillators or nuclear emulsion planes. Using telescopes based on the atmospheric Cherenkov imaging technique, we propose a new approach to study the interior of volcanoes by detecting the Cherenkov light produced by relativistic cosmic-ray muons that survive after crossing the volcano. The Cherenkov light produced along the muon path is imaged as a typical annular pattern containing all the essential information to reconstruct particle direction and energy.
Our new approach offers the advantage of a negligible background and an improved spatial resolution. To test the feasibility of our new method, we have carried out simulations with a toy model based on the geometrical parameters of ASTRI SST-2M, i.e. the imaging atmospheric Cherenkov telescope currently under installation on the Etna volcano. Comparing the results of our simulations with previous experiments based on particle detectors, we gain at least a factor of 10 in sensitivity. The result of this study shows that we resolve an empty cylinder with a radius of about 100 m located inside a volcano in less than 4 days, which implies a limit on the magma velocity of 5 m/h. 9. Volcanoes muon imaging using Cherenkov telescopes Energy Technology Data Exchange (ETDEWEB) Catalano, O. [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Del Santo, M., E-mail: melania@ifc.inaf.it [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Mineo, T.; Cusumano, G.; Maccarone, M.C. [INAF, Istituto di Astrofisica Spaziale e Fisica cosmica di Palermo, via U. La Malfa 153, I-90146 Palermo (Italy); Pareschi, G. [INAF Osservatorio Astronomico di Brera, Via E. Bianchi 46, I-23807, Merate (Italy) 2016-01-21 A detailed understanding of a volcano's inner structure is one of the key points for volcanic hazards evaluation. To this aim, in the last decade, geophysical radiography techniques using cosmic muon particles have been proposed. By measuring the differential attenuation of the muon flux as a function of the amount of rock crossed along different directions, it is possible to determine the density distribution of the interior of a volcano. Up to now, a number of experiments have been based on the detection of muon tracks crossing hodoscopes, made up of scintillators or nuclear emulsion planes.
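The radiographic inversion these abstracts describe (density from differential muon attenuation along many ray directions) can be sketched with a deliberately simplified toy calculation. The exponential attenuation law and the attenuation-length value below are illustrative assumptions only, not the flux model used by the authors; real analyses use tabulated muon energy spectra and range-energy relations.

```python
# Toy muon radiography: invert a measured flux ratio I/I0 (flux through the
# edifice over open-sky flux) into a mean rock density along one ray.
# ASSUMPTION: a simple exponential attenuation law I = I0 * exp(-X / LAM),
# where X is the column density (kg/m^2) and LAM is a made-up scale.
import math

LAM = 4.0e5  # illustrative mass-attenuation scale, kg/m^2 (assumed value)

def column_density(flux_ratio):
    """Column density X (kg/m^2) from the toy law: X = -LAM * ln(I/I0)."""
    return -LAM * math.log(flux_ratio)

def mean_density(flux_ratio, path_length_m):
    """Mean density (kg/m^3) along a ray of known geometric length."""
    return column_density(flux_ratio) / path_length_m

# A low-density region (e.g. a partially empty conduit) attenuates less,
# so rays crossing it arrive with a higher flux ratio:
rho_solid = mean_density(1.0e-3, 1000.0)  # ray through solid rock
rho_void = mean_density(5.0e-3, 1000.0)   # ray crossing a low-density region
print(f"{rho_solid:.0f} vs {rho_void:.0f} kg/m^3")
```

Repeating this inversion for many rays across the instrument's field of view, with the geometric path length of each ray taken from a terrain model, is in essence how the density image of the interior is assembled.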
Using telescopes based on the atmospheric Cherenkov imaging technique, we propose a new approach to study the interior of volcanoes by detecting the Cherenkov light produced by relativistic cosmic-ray muons that survive after crossing the volcano. The Cherenkov light produced along the muon path is imaged as a typical annular pattern containing all the essential information to reconstruct particle direction and energy. Our new approach offers the advantage of a negligible background and an improved spatial resolution. To test the feasibility of our new method, we have carried out simulations with a toy model based on the geometrical parameters of ASTRI SST-2M, i.e. the imaging atmospheric Cherenkov telescope currently under installation on the Etna volcano. Comparing the results of our simulations with previous experiments based on particle detectors, we gain at least a factor of 10 in sensitivity. The result of this study shows that we resolve an empty cylinder with a radius of about 100 m located inside a volcano in less than 4 days, which implies a limit on the magma velocity of 5 m/h. 10. Volcano geodesy in the Cascade arc, USA Science.gov (United States) Poland, Michael; Lisowski, Michael; Dzurisin, Daniel; Kramer, Rebecca; McLay, Megan; Pauk, Benjamin 2017-01-01 Experience during historical time throughout the Cascade arc and the lack of deep-seated deformation prior to the two most recent eruptions of Mount St. Helens might lead one to infer that Cascade volcanoes are generally quiescent and, specifically, show no signs of geodetic change until they are about to erupt. Several decades of geodetic data, however, tell a different story. Ground- and space-based deformation studies have identified surface displacements at five of the 13 major Cascade arc volcanoes that lie in the USA (Mount Baker, Mount St. Helens, South Sister, Medicine Lake, and Lassen volcanic center).
No deformation has been detected at five volcanoes (Mount Rainier, Mount Hood, Newberry Volcano, Crater Lake, and Mount Shasta), and there are not sufficient data at the remaining three (Glacier Peak, Mount Adams, and Mount Jefferson) for a rigorous assessment. In addition, gravity change has been measured at two of the three locations where surveys have been repeated (Mount St. Helens and Mount Baker show changes, while South Sister does not). Broad deformation patterns associated with heavily forested and ice-clad Cascade volcanoes are generally characterized by low displacement rates, in the range of millimeters to a few centimeters per year, and are overprinted by larger tectonic motions of several centimeters per year. Continuous GPS is therefore the best means of tracking temporal changes in deformation of Cascade volcanoes and also for characterizing tectonic signals so that they may be distinguished from volcanic sources. Better spatial resolution of volcano deformation can be obtained through the use of campaign GPS, semipermanent GPS, and interferometric synthetic aperture radar observations, which leverage the accumulation of displacements over time to improve signal to noise. Deformation source mechanisms in the Cascades are diverse and include magma accumulation and withdrawal, post-emplacement cooling of recent volcanic deposits, magmatic 11. Volcano geodesy in the Cascade arc, USA Science.gov (United States) Poland, Michael P.; Lisowski, Michael; Dzurisin, Daniel; Kramer, Rebecca; McLay, Megan; Pauk, Ben 2017-08-01 Experience during historical time throughout the Cascade arc and the lack of deep-seated deformation prior to the two most recent eruptions of Mount St. Helens might lead one to infer that Cascade volcanoes are generally quiescent and, specifically, show no signs of geodetic change until they are about to erupt. Several decades of geodetic data, however, tell a different story. 
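The point above about continuous observations accumulating displacements over time to improve signal to noise can be illustrated with a minimal synthetic example: a secular deformation rate recovered from a noisy displacement time series by linear least squares. The rate, noise level, and series length below are invented for illustration and are not real Cascade data.

```python
# Fit a secular deformation rate (mm/yr) to a synthetic GPS displacement
# time series by ordinary least squares.  All numbers are made up for
# illustration; real GPS processing also models seasonal terms, offsets,
# and colored noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 1 / 52)             # 10 years of weekly solutions
true_rate = 8.0                               # assumed uplift rate, mm/yr
noise = rng.normal(0.0, 3.0, t.size)          # 3 mm white noise per epoch
displacement = true_rate * t + noise          # vertical displacement, mm

rate, offset = np.polyfit(t, displacement, 1)  # slope first, then intercept
print(f"estimated rate: {rate:.2f} mm/yr")
```

For white noise the rate uncertainty falls roughly as T**-1.5 with series length T (more epochs and a longer lever arm), which is why multi-year accumulation of displacements lets millimeter-per-year volcanic signals emerge from centimeter-level scatter and larger tectonic motions.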
Ground- and space-based deformation studies have identified surface displacements at five of the 13 major Cascade arc volcanoes that lie in the USA (Mount Baker, Mount St. Helens, South Sister, Medicine Lake, and Lassen volcanic center). No deformation has been detected at five volcanoes (Mount Rainier, Mount Hood, Newberry Volcano, Crater Lake, and Mount Shasta), and there are not sufficient data at the remaining three (Glacier Peak, Mount Adams, and Mount Jefferson) for a rigorous assessment. In addition, gravity change has been measured at two of the three locations where surveys have been repeated (Mount St. Helens and Mount Baker show changes, while South Sister does not). Broad deformation patterns associated with heavily forested and ice-clad Cascade volcanoes are generally characterized by low displacement rates, in the range of millimeters to a few centimeters per year, and are overprinted by larger tectonic motions of several centimeters per year. Continuous GPS is therefore the best means of tracking temporal changes in deformation of Cascade volcanoes and also for characterizing tectonic signals so that they may be distinguished from volcanic sources. Better spatial resolution of volcano deformation can be obtained through the use of campaign GPS, semipermanent GPS, and interferometric synthetic aperture radar observations, which leverage the accumulation of displacements over time to improve signal to noise. Deformation source mechanisms in the Cascades are diverse and include magma accumulation and withdrawal, post-emplacement cooling of recent volcanic deposits, magmatic 12. Volcanology and volcano sedimentology of Sahand region International Nuclear Information System (INIS) Moine Vaziri, H.; Amine Sobhani, E. 1977-01-01 There were no volcanoes in Iran during the Precambrian and Mesozoic eras, but volcanic rocks with green series and dacites are seen in most places of Iran from the subsequent eras.
In a recent survey of Sahand mountain in NW Iran, the volcanography, rock types, and ages of the layers were determined. Precambrian sedimentary deposits are also seen in the same area. All volcanic periods in this area were studied; their extrusive rocks, their petrography, and the results of their analytical chemistry were discussed. Finally, the volcano sedimentology of Sahand mountain was described. 13. Interoperability of Heliophysics Virtual Observatories Science.gov (United States) Thieman, J.; Roberts, A.; King, T.; King, J.; Harvey, C. 2008-01-01 If you'd like to find interrelated heliophysics (also known as space and solar physics) data for a research project that spans, for example, magnetic field data and charged particle data from multiple satellites located near a given place and at approximately the same time, how easy is this to do? There are probably hundreds of data sets scattered in archives around the world that might be relevant. Is there an optimal way to search these archives and find what you want? There are a number of virtual observatories (VOs) now in existence that maintain knowledge of the data available in subdisciplines of heliophysics. The data may be widely scattered among various data centers, but the VOs have knowledge of what is available and how to get to it. The problem is that research projects might require data from a number of subdisciplines. Is there a way to search multiple VOs at once and obtain what is needed quickly? To do this requires a common way of describing the data such that a search using a common term will find all data that relate to the common term. This common language is contained within a data model developed for all of heliophysics and known as the SPASE (Space Physics Archive Search and Extract) Data Model. NASA has funded the main part of the development of SPASE but other groups have put resources into it as well. How well is this working?
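The common-vocabulary idea behind SPASE-style searches can be illustrated with a toy registry: when every virtual observatory describes its holdings with the same terms, one query matches records across all of them. The field names and records below are invented for illustration and do not reflect the actual SPASE schema.

```python
# Toy illustration of a shared data model for virtual-observatory search:
# several "VO" registries describe holdings with a common vocabulary, so a
# single query term finds matching data sets in all of them at once.
# Registry names, field names, and records are invented, not real SPASE.

REGISTRIES = {
    "VO-MagneticFields": [
        {"id": "sat-A/mag", "measurement": "MagneticField", "start": 2001, "stop": 2010},
    ],
    "VO-Particles": [
        {"id": "sat-B/ions", "measurement": "ChargedParticles", "start": 2003, "stop": 2012},
        {"id": "sat-C/mag", "measurement": "MagneticField", "start": 1999, "stop": 2005},
    ],
}

def search(measurement, year):
    """Return (registry, dataset id) pairs matching the shared term."""
    hits = []
    for registry, records in REGISTRIES.items():
        for rec in records:
            if rec["measurement"] == measurement and rec["start"] <= year <= rec["stop"]:
                hits.append((registry, rec["id"]))
    return hits

print(search("MagneticField", 2004))
```

A single call such as `search("MagneticField", 2004)` returns holdings from both registries, which is the interoperability payoff the abstract describes: without the common term, the same project would need one bespoke query per archive.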
We will review the use of SPASE and how well the goal of locating and retrieving data within the heliophysics community is being achieved. Can the VOs truly be made interoperable despite being developed by so many diverse groups? 14. The Arecibo Observatory Space Academy Science.gov (United States) Rodriguez-Ford, Linda A.; Fernanda Zambrano Marin, Luisa; Aponte Hernandez, Betzaida; Soto, Sujeily; Rivera-Valentin, Edgard G. 2016-10-01 The Arecibo Observatory Space Academy (AOSA) is an intense fifteen-week pre-college research program for qualified high school students residing in Puerto Rico, which includes ten days for hands-on, on-site research activities. Our mission is to prepare students for their professional careers by allowing them to receive an independent and collaborative research experience on topics related to the multidisciplinary field of space science. Our objectives are to (1) supplement the students' STEM education via inquiry-based learning and indirect teaching methods, (2) immerse students in an ESL environment, further developing their verbal and written presentation skills, and (3) foster in every student an interest in the STEM fields by harnessing their natural curiosity and knowledge in order to further develop their critical thinking and investigation skills. Students interested in participating in the program go through an application, interview, and trial period before being offered admission. They are welcomed as candidates during the first weeks, and later become cadets as they design, propose, and conduct research projects in fields such as Physics, Astronomy, Geology, Chemistry, and Engineering. Each individual is evaluated for program compatibility based on peer interaction, preparation, participation, and contribution to class, group dynamics, attitude, challenges, and inquiry. This helps to ensure that specialized attention can be given to students who demonstrate a dedication and desire to learn.
Deciding how to proceed in the face of setbacks and unexpected problems is central to the learning experience. At the end of the semester, students present their research to the program mentors, peers, and scientific staff. This year, AOSA students also focused on science communication and were trained by NASA's FameLab. Students additionally presented their research at this year's International Space Development Conference (ISDC), which was held in 15. Pro-Amateur Observatories as a Significant Resource for Professional Astronomers - Taurus Hill Observatory Science.gov (United States) Haukka, H.; Hentunen, V.-P.; Nissinen, M.; Salmi, T.; Aartolahti, H.; Juutilainen, J.; Vilokki, H. 2013-09-01 Taurus Hill Observatory (THO), observatory code A95, is an amateur observatory located in Varkaus, Finland. The observatory is maintained by the local astronomical association of Warkauden Kassiopeia [8]. THO research team has observed and measured various stellar objects and phenomena. Observatory has mainly focuse d on asteroid [1] and exoplanet light curve measurements, observing the gamma rays burst, supernova discoveries and monitoring [2]. We also do long term monitoring projects [3]. THO research team has presented its research work on previous EPSC meetings ([4], [5],[6], [7]) and got very supportive reactions from the European planetary science community. The results and publications that pro-amateur based observatories, like THO, have contributed, clearly demonstrates that pro-amateurs area significant resource for the professional astronomers now and even more in the future. 16. Magdalena Ridge Observatory Interferometer: Status Update National Research Council Canada - National Science Library Creech-Eakman, M. J; Bakker, E. J; Buscher, D. F; Coleman, T. A; Haniff, C. A; Jurgenson, C. A; Klinglesmith, III, D. A; Parameswariah, C. B; Romero, V. D; Shtromberg, A. V; Young, J. 
S 2006-01-01 The Magdalena Ridge Observatory Interferometer (MROI) is a ten element optical and near-infrared imaging interferometer being built in the Magdalena mountains west of Socorro, NM at an altitude of 3230 m... 17. Ten years of the Spanish Virtual Observatory Science.gov (United States) Solano, E. 2015-05-01 The main objective of the Virtual Observatory (VO) is to guarantee an easy and efficient access and analysis of the information hosted in astronomical archives. The Spanish Virtual Observatory (SVO) is a project that was born in 2004 with the goal of promoting and coordinating the VO-related activities at national level. SVO is also the national contact point for the international VO initiatives, in particular the International Virtual Observatory Alliance (IVOA) and the Euro-VO project. The project, led by Centro de Astrobiología (INTA-CSIC), is structured around four major topics: a) VO compliance of astronomical archives, b) VO-science, c) VO- and data mining-tools, and d) Education and outreach. In this paper I will describe the most important results obtained by the Spanish Virtual Observatory in its first ten years of life as well as the future lines of work. 18. The Astrophysical Multimessenger Observatory Network (AMON) Science.gov (United States) Smith. M. W. E.; Fox, D. B.; Cowen, D. F.; Meszaros, P.; Tesic, G.; Fixelle, J.; Bartos, I.; Sommers, P.; Ashtekar, Abhay; Babu, G. Jogesh; 2013-01-01 We summarize the science opportunity, design elements, current and projected partner observatories, and anticipated science returns of the Astrophysical Multimessenger Observatory Network (AMON). AMON will link multiple current and future high-energy, multimessenger, and follow-up observatories together into a single network, enabling near real-time coincidence searches for multimessenger astrophysical transients and their electromagnetic counterparts. 
Candidate and high-confidence multimessenger transient events will be identified, characterized, and distributed as AMON alerts within the network and to interested external observers, leading to follow-up observations across the electromagnetic spectrum. In this way, AMON aims to evoke the discovery of multimessenger transients from within observatory subthreshold data streams and facilitate the exploitation of these transients for purposes of astronomy and fundamental physics. As a central hub of global multimessenger science, AMON will also enable cross-collaboration analyses of archival datasets in search of rare or exotic astrophysical phenomena. 19. CERN Multimedia Bradley, M 2003-01-01 Canberra bushfires have gutted the Mount Stromlo Observatory causing the flames destroyed five telescopes, the workshop, eight staff homes and the main dome, causing more than$20 million in damage (1 page).
20. In Brief: Deep-sea observatory
Science.gov (United States)
Showstack, Randy
2008-11-01
The first deep-sea ocean observatory offshore of the continental United States has begun operating in the waters off central California. The remotely operated Monterey Accelerated Research System (MARS) will allow scientists to monitor the deep sea continuously. Among the first devices to be hooked up to the observatory are instruments to monitor earthquakes, videotape deep-sea animals, and study the effects of acidification on seafloor animals. "Some day we may look back at the first packets of data streaming in from the MARS observatory as the equivalent of those first words spoken by Alexander Graham Bell: 'Watson, come here, I need you!'" commented Marcia McNutt, president and CEO of the Monterey Bay Aquarium Research Institute, which coordinated construction of the observatory. For more information, see http://www.mbari.org/news/news_releases/2008/mars-live/mars-live.html.
1. The Farid and Moussa Raphael Observatory
International Nuclear Information System (INIS)
Hajjar, R
2017-01-01
The Farid and Moussa Raphael Observatory (FMRO) at Notre Dame University Louaize (NDU) is a teaching, research, and outreach facility located at the main campus of the university. It is located very close to the Lebanese coast, in an urbanized area. It features a 60-cm Planewave CDK telescope and instruments that allow for photometric and spectroscopic studies. The observatory currently has one thinned, back-illuminated CCD camera, used as the main imager along with Johnson-Cousins and Sloan photometric filters. It also features two spectrographs, one of which is a fiber-fed echelle spectrograph; these are used with a dedicated CCD. The observatory has served for student projects and summer schools for advanced undergraduate and graduate students. It is also made available for use by the regional and international community. The control system is currently being configured for remote observations. A number of long-term research projects are also being launched at the observatory. (paper)
2. Recent Inflation of Kilauea Volcano
Science.gov (United States)
Miklius, A.; Poland, M.; Desmarais, E.; Sutton, A.; Orr, T.; Okubo, P.
2006-12-01
Over the last three years, geodetic monitoring networks and satellite radar interferometry have recorded substantial inflation of Kilauea's magma system, while the Puu Oo eruption on the east rift zone has continued unabated. Combined with the approximate doubling of carbon dioxide emission rates at the summit during this period, these observations indicate that the magma supply rate to the volcano has increased. Since late 2003, the summit area has risen over 20 cm, and a 2.5 km-long GPS baseline across the summit area has extended almost half a meter. The center of inflation has been variable, with maximum uplift shifting from an area near the center of the caldera to the southeastern part of the caldera in 2004-2005. In 2006, the locus of inflation shifted again, to the location of the long-term magma reservoir in the southern part of the caldera - the same area that had subsided more than 1.5 meters during the last 23 years of the ongoing eruption. In addition, the southwest rift zone reversed its long-term trend of subsidence and began uplifting in early 2006. The east rift zone has shown slightly accelerated rates of extension, but with a year-long hiatus following the January 2005 south flank aseismic slip event. Inflation rates have varied greatly. Accelerated rates of extension and uplift in early 2005 and 2006 were also associated with increased seismicity. Seismicity occurred not only at inflation centers, but was also triggered on the normal faulting area northwest of the caldera and the strike-slip faulting area in the upper east rift zone. In early 2006, at about the time that we started recording uplift on the southwest rift zone, the rate of earthquakes extending from the summit into the southwest rift zone at least quadrupled. The most recent previous episode of inflation at Kilauea, in 2002, may have resulted from reduced lava-transport capacity, as it was associated with decreased outflow at the eruption site. In contrast, eruption volumes
3. Early German Plans for a Southern Observatory
Science.gov (United States)
Wolfschmidt, Gudrun
As early as the 18th and 19th centuries, French and English observers were active in South Africa. Around the beginning of the 20th century the Heidelberg astronomer Max Wolf (1863-1932) proposed a southern observatory. In 1907 Hermann Carl Vogel (1841-1907), director of the Astrophysical Observatory Potsdam, suggested a southern station in Spain. His ideas for building an observatory in Windhuk for photographing the sky and measuring the solar constant were taken over by the Göttingen astronomers. In 1910 Karl Schwarzschild (1873-1916), after having visited the observatories in America, pointed out the usefulness of an observatory in South West Africa, where it would have better weather than in Germany and also give access to the southern sky. Seeing tests were begun in 1910 by Potsdam astronomers, but WW I stopped the plans. In 1928 Erwin Finlay-Freundlich (1885-1964), inspired by the Hamburg astronomer Walter Baade (1893-1960), worked out a detailed plan for a southern observatory with a reflecting telescope, spectrographs and an astrograph with an objective prism. Paul Guthnick (1879-1947), director of the Berlin observatory, in cooperation with APO Potsdam and Hamburg, made a site survey to Africa in 1929 and found the conditions in Windhuk to be ideal. Observations were started in the 1930s by Berlin and Breslau astronomers, but were stopped by WW II. In the 1950s, astronomers from Hamburg and The Netherlands renewed the discussion in the framework of European cooperation, and this led to the founding of ESO in 1963, as is well described by Blaauw (1991). Blaauw, Adriaan: ESO's Early History. The European Southern Observatory from Concept to Reality. Garching bei München: ESO 1991.
4. The Pierre Auger Cosmic Ray Observatory
Czech Academy of Sciences Publication Activity Database
Aab, A.; Abreu, P.; Aglietta, M.; Boháčová, Martina; Chudoba, Jiří; Ebr, Jan; Grygar, Jiří; Mandát, Dušan; Nečesal, Petr; Palatka, Miroslav; Pech, Miroslav; Prouza, Michael; Řídký, Jan; Schovánek, Petr; Trávníček, Petr; Vícha, Jakub
2015-01-01
Roč. 798, Oct (2015), s. 172-213 ISSN 0168-9002 R&D Projects: GA MŠk(CZ) LG13007; GA MŠk(CZ) 7AMB14AR005; GA ČR(CZ) GA14-17501S Institutional support: RVO:68378271 Keywords : Pierre Auger Observatory * high energy cosmic rays * hybrid observatory * water Cherenkov detectors * air fluorescence detectors Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 1.200, year: 2015
5. A Green Robotic Observatory for Astronomy Education
Science.gov (United States)
Reddy, Vishnu; Archer, K.
2008-09-01
With the development of robotic telescopes and stable remote-observing software, it is currently possible for a small institution to have an affordable astronomical facility for astronomy education. However, a faculty member has to deal with light pollution (if the observatory is located on campus), its nightly operations and regular maintenance, apart from daytime teaching and research responsibilities. While building an observatory at a remote location is a solution, the costs of constructing and operating such a facility, not to mention the environmental impact, are beyond the reach of most institutions. In an effort to resolve these issues we have developed a robotic remote observatory that can be operated via the internet from anywhere in the world, has a zero operating carbon footprint, and has minimal impact on the local environment. The prototype observatory is a clam-shell design that houses an 8-inch telescope with an SBIG ST-10 CCD detector. The brain of the observatory is a low-draw 12-volt harsh-duty computer that runs the dome, telescope, CCD camera, focuser, and weather monitoring. All equipment runs off a 12-volt AGM-style battery that has a low lead content and is hence more environmentally friendly to dispose of. The total of 12-14 amp-hours is generated from a set of solar panels that are large enough to maintain a full battery charge for several cloudy days. This completely eliminates the need for a local power grid for operations. Internet access is accomplished via a high-speed cell-phone broadband connection or satellite link, eliminating the need for a phone network. An independent observatory monitoring system interfaces with the observatory computer during operation. The observatory converts to a trailer for transportation to the site, where it is converted into a semi-permanent building without wheels and towing equipment. This ensures minimal disturbance to the local environment.
6. Early German plans for southern observatories
Science.gov (United States)
Wolfschmidt, G.
2002-07-01
As early as the 18th and 19th centuries, French and English observers were active in South Africa. Around the beginning of the 20th century, Heidelberg and Potsdam astronomers proposed a southern observatory. Then Göttingen astronomers suggested building an observatory in Windhoek for photographing the sky and measuring the solar constant. In 1910 Karl Schwarzschild (1873-1916), after a visit to observatories in the United States, pointed out the usefulness of an observatory in South West Africa, in a climate superior to that in Germany, giving German astronomers access to the southern sky. Seeing tests were begun in 1910 by Potsdam astronomers, but WW I stopped the plans. In 1928 Erwin Finlay-Freundlich (1885-1964), inspired by the Hamburg astronomer Walter Baade (1893-1960), worked out a detailed plan for a southern observatory with a reflecting telescope, spectrographs and an astrograph with an objective prism. Paul Guthnick (1879-1947), director of the Berlin observatory, in cooperation with APO Potsdam and Hamburg, made a site survey to Africa in 1929 and found the conditions in Windhoek to be ideal. Observations were started in the 1930s by Berlin and Breslau astronomers, but were stopped by WW II. In the 1950s, astronomers from Hamburg and The Netherlands renewed the discussion in the framework of European cooperation, and this led to the founding of ESO in 1963.
7. Observatories of Sawai Jai Singh II
Science.gov (United States)
Johnson-Roehr, Susan N.
Sawai Jai Singh II, Maharaja of Amber and Jaipur, constructed five observatories in the second quarter of the eighteenth century in the north Indian cities of Shahjahanabad (Delhi), Jaipur, Ujjain, Mathura, and Varanasi. Believing the accuracy of his naked-eye observations would improve with larger, more stable instruments, Jai Singh reengineered common brass instruments using stone construction methods. His applied ingenuity led to the invention of several outsize masonry instruments, the majority of which were used to determine the coordinates of celestial objects with reference to the local horizon. During Jai Singh's lifetime, the observatories were used to make observations in order to update existing ephemerides such as the Zīj-i Ulugh Begī. Jai Singh established communications with European astronomers through a number of Jesuits living and working in India. In addition to dispatching ambassadorial parties to Portugal, he invited French and Bavarian Jesuits to visit and make use of the observatories in Shahjahanabad and Jaipur. The observatories were abandoned after Jai Singh's death in 1743 CE. The Mathura observatory was disassembled completely before 1857. The instruments at the remaining observatories were restored extensively during the nineteenth and twentieth centuries.
8. Growth and degradation of Hawaiian volcanoes: Chapter 3 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Clague, David A.; Sherrod, David R.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
The 19 known shield volcanoes of the main Hawaiian Islands—15 now emergent, 3 submerged, and 1 newly born and still submarine—lie at the southeast end of a long-lived hot spot chain. As the Pacific Plate of the Earth’s lithosphere moves slowly northwestward over the Hawaiian hot spot, volcanoes are successively born above it, evolve as they drift away from it, and eventually die and subside beneath the ocean surface.
9. The Russian-Ukrainian Observatories Network for the European Astronomical Observatory Route Project
Science.gov (United States)
Andrievsky, S. M.; Bondar, N. I.; Karetnikov, V. G.; Kazantseva, L. V.; Nefedyev, Y. A.; Pinigin, G. I.; Pozhalova, Zh. A.; Rostopchina-Shakhovskay, A. N.; Stepanov, A. V.; Tolbin, S. V.
2011-09-01
In 2004, the UNESCO World Heritage Centre announced a new initiative, "Astronomy & World Heritage", aimed at identifying and preserving objects related to astronomy and its history that have global value as historical and cultural properties. A strategy for the thematic programme "Initiative" was defined, together with general criteria for selecting ancient astronomical objects and observatories: in particular, properties that are situated or have significance in relation to celestial objects or astronomical events; representations of the sky and/or celestial bodies and astronomical events; observatories and instruments; and properties closely connected with the history of astronomy. In 2005-2006, in accordance with the programme "Initiative", information about outstanding properties connected with astronomy was collected. In Ukraine this work was organized by an astronomical expert group at the Nikolaev Astronomical Observatory. In 2007, the Nikolaev observatory was included in the Tentative List of UNESCO under #5116. Later, in 2008, a network of four astronomical observatories of Ukraine, in Kiev, Crimea, Nikolaev and Odessa, considering their high authenticity and integrity, was included in the Tentative List of UNESCO under #5267, "Astronomical Observatories of Ukraine". In 2008-2009, a new project, "Thematic Study", was opened as a successor of "Initiative". It includes all fields of astronomical heritage from early prehistory to space astronomy (14 themes in total). We present the Ukrainian-Russian observatories network for the "European Astronomical Observatory Route" project. From Russia two observatories are presented, Kazan Observatory and Pulkovo Observatory, in the theme "Astronomy from the Renaissance to the mid-twentieth century". The description of the astronomical observatories of Ukraine is given in accordance with the project "Thematic Study"; the theme "Astronomy from the Renaissance to the mid-twentieth century" covers the astronomical observatories in Kiev, Nikolaev and Odessa; the
10. Carbonate assimilation at Merapi volcano, Java Indonesia
DEFF Research Database (Denmark)
Chadwick, J. P.; Troll, V. R.; Ginibre, C.
2007-01-01
Recent basaltic andesite lavas from Merapi volcano contain abundant, complexly zoned plagioclase phenocrysts, analysed here for their petrographic textures, major element composition and Sr isotope composition. Anorthite (An) content in individual crystals can vary by as much as 55 mol% (An40-95...
11. Biological Studies on a Live Volcano.
Science.gov (United States)
Zipko, Stephen J.
1992-01-01
Describes scientific research on an Earthwatch expedition to study Arenal, one of the world's most active volcanoes, in north central Costa Rica. The purpose of the two-week project was to monitor and understand the past and ongoing development of a small, geologically young, highly active stratovolcano in a tropical, high-rainfall environment.…
12. Of volcanoes, saints, trash, and frogs
DEFF Research Database (Denmark)
Andersen, Astrid Oberborbeck
, at the same time as political elections and economic hardship. During one year of ethnographic fieldwork volcanoes, saints, trash and frogs were among the nonhuman entities referred to in conversations and engaged with when responding to the changes that trouble the world and everyday life of Arequipans...
13. Muons reveal the interior of volcanoes
CERN Multimedia
Francesco Poppi
2010-01-01
The MU-RAY project has the very challenging aim of providing a “muon X-ray” of the Vesuvius volcano (Italy) using a detector that records the muons hitting it after traversing the rock structures of the volcano. This technique was used for the first time in 1971 by the Nobel Prize-winner Louis Alvarez, who was searching for unknown burial chambers in the Chephren pyramid. The location of the muon detector on the slopes of the Vesuvius volcano. Like X-ray scans of the human body, muon radiography allows researchers to obtain an image of the internal structures of the upper levels of volcanoes. Although such an image cannot help to predict ‘when’ an eruption might occur, it can, if combined with other observations, help to foresee ‘how’ it could develop and serves as a powerful tool for the study of geological structures. Muons come from the interaction of cosmic rays with the Earth's atmosphere. They are able to traverse layers of ro...
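The attenuation principle behind muon radiography can be sketched numerically. The following is a minimal illustration, not the MU-RAY analysis: it assumes a standard continuous-slowing-down model of muon energy loss in rock, dE/dX = a + b·E, with illustrative textbook constants and an assumed power-law surface spectrum.

```python
import math

# Hedged sketch of muon radiography: thicker (or denser) rock raises the
# minimum surface energy a muon needs to get through, so fewer muons arrive.
# A cavity or low-density region therefore shows up as an excess of counts.
# The constants below are illustrative assumptions, not MU-RAY parameters.
A = 0.2      # GeV per metre water equivalent (ionisation term, assumed)
B = 4.0e-4   # per metre water equivalent (radiative term, assumed)
GAMMA = 2.7  # differential spectral index of the surface muon flux (assumed)

def min_energy(thickness_mwe: float) -> float:
    """Minimum surface energy (GeV) to cross `thickness_mwe` of rock,
    from integrating dE/dX = A + B*E down to zero remaining energy."""
    return (A / B) * math.expm1(B * thickness_mwe)

def relative_flux(thickness_mwe: float, e0: float = 1.0) -> float:
    """Surviving fraction of the flux above e0 GeV, for an integral
    spectrum N(>E) proportional to E^-(GAMMA - 1)."""
    e_min = max(min_energy(thickness_mwe), e0)
    return (e_min / e0) ** -(GAMMA - 1.0)

for x in (100.0, 500.0, 1000.0):
    print(f"{x:6.0f} m.w.e. -> E_min = {min_energy(x):7.1f} GeV, "
          f"relative flux = {relative_flux(x):.2e}")
```

Comparing the measured count rate along each line of sight with the prediction for the surveyed topography is what turns this attenuation curve into a density image of the edifice.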
14. False Color Image of Volcano Sapas Mons
Science.gov (United States)
1991-01-01
This false-color image shows the volcano Sapas Mons, which is located in the broad equatorial rise called Atla Regio (8 degrees north latitude and 188 degrees east longitude). The area shown is approximately 650 kilometers (404 miles) on a side. Sapas Mons measures about 400 kilometers (248 miles) across and 1.5 kilometers (0.9 mile) high. Its flanks show numerous overlapping lava flows. The dark flows on the lower right are thought to be smoother than the brighter ones near the central part of the volcano. Many of the flows appear to have been erupted along the flanks of the volcano rather than from the summit. This type of flank eruption is common on large volcanoes on Earth, such as the Hawaiian volcanoes. The summit area has two flat-topped mesas, whose smooth tops give a relatively dark appearance in the radar image. Also seen near the summit are groups of pits, some as large as one kilometer (0.6 mile) across. These are thought to have formed when underground chambers of magma were drained through other subsurface tubes, leading to a collapse at the surface. A 20-kilometer-diameter (12-mile-diameter) impact crater northeast of the volcano is partially buried by the lava flows. Little was known about Atla Regio prior to Magellan. The new data, acquired in February 1991, show the region to be composed of at least five large volcanoes such as Sapas Mons, which are commonly linked by complex systems of fractures or rift zones. If comparable to similar features on Earth, Atla Regio probably formed when large volumes of molten rock upwelled from areas within the interior of Venus known as 'hot spots.' Magellan is a NASA spacecraft mission to map the surface of Venus with imaging radar. The basic scientific instrument is a synthetic aperture radar, or SAR, which can look through the thick clouds perpetually shielding the surface of Venus. Magellan is in orbit around Venus, which completes one turn around its axis in 243 Earth days. That period of time, one Venus day
15. Hazard maps of Colima volcano, Mexico
Science.gov (United States)
Suarez-Plascencia, C.; Nunez-Cornu, F. J.; Escudero Ayala, C. R.
2011-12-01
Colima volcano, also known as Volcan de Fuego (19° 30.696 N, 103° 37.026 W), is located on the border between the states of Jalisco and Colima and is the most active volcano in Mexico. It began its current eruptive process in February 1991; on February 10, 1999, the biggest explosion since 1913 occurred at the summit dome. The activity during the 2001-2005 period was the most intense, but did not exceed VEI 3. The activity resulted in the formation of domes and their destruction after explosive events. The explosions generated eruptive columns reaching altitudes between 4,500 and 9,000 m a.s.l., as well as pyroclastic flows reaching distances of up to 3.5 km from the crater. During the explosive events, ash emissions were generated in all directions, reaching distances of up to 100 km and slightly affecting nearby villages such as Tuxpan, Tonila, Zapotlán, Cuauhtemoc, Comala, Zapotitlan de Vadillo and Toliman. During 2005 this volcano had intense effusive-explosive activity, similar to that which took place during the period 1890-1900. The intense pre-Plinian eruption of January 20, 1913, generated few economic losses in the lower parts of the volcano due to the low population density and limited socio-economic activity at the time. This work updates the volcanic hazard maps published in 2001: using SPOT satellite imagery and Google Earth, we identify changes in land use on the slopes of the volcano and the expansion of the agricultural frontier on the east and southeast sides of the Colima volcano. The population inhabiting the area is approximately 517,000 people, growing at an annual rate of 4.77%; the region has also shown increased vulnerability owing to the development of economic activities, supported by the construction of highways, natural gas pipelines and electrical infrastructure connecting the Port of Manzanillo to the city of Guadalajara. The updated hazard maps are: a) exclusion areas and moderate hazard for explosive events
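The exposure figure above rests on simple compound growth. A minimal sketch of that arithmetic, using the two numbers given in the abstract (517,000 people, 4.77 % per year); the 10-year horizon is an arbitrary illustration, not a figure from the study:

```python
# Hedged sketch: compound population growth as used in hazard-exposure
# estimates. Inputs are taken from the abstract; the horizon is assumed.
P0 = 517_000   # current population near the volcano (from the abstract)
RATE = 0.0477  # annual growth rate (from the abstract)

def projected_population(years: float, p0: int = P0, r: float = RATE) -> float:
    """Population after `years` of compound growth at rate `r`."""
    return p0 * (1.0 + r) ** years

# At 4.77 %/yr the exposed population roughly grows by ~60% in a decade.
print(round(projected_population(10)))
```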
16. Geochemical studies on island arc volcanoes
International Nuclear Information System (INIS)
Notsu, Kenji
1998-01-01
This paper summarizes advances in three topics of geochemical studies on island arc volcanoes, which I and my colleagues have been investigating. The first is strontium isotope studies of arc volcanic rocks, mainly from the Japanese island arcs. We have shown that the precise spatial distribution of the 87Sr/86Sr ratio reflects the nature of the subduction structure and slab-mantle interaction. Based on the 87Sr/86Sr ratio of volcanic rocks in the northern Kanto district, where two plates subduct concurrently in different directions, the existence of an aseismic portion of the Philippine Sea plate ahead of the seismic one was suggested. The second is geochemical monitoring of active arc volcanoes. The 3He/4He ratio of volcanic volatiles was shown to be a good indicator for monitoring the behavior of magma: ascent and drain-back of magma result in increases and decreases in the ratio, respectively. In the case of the 1986 eruptions of Izu-Oshima volcano, the ratio began to increase two months after the big eruptions, reached its maximum, and then decreased. Such a delayed response is explained in terms of the travel time of magmatic helium from the vent area to the observation site along the underground steam flow. The third is remote observation of the volcanic gas chemistry of arc volcanoes using infrared absorption spectroscopy. During the Unzen eruptions starting in 1990, absorption features of SO2 and HCl in the volcanic gas were detected from an observation station at 1.3 km distance. This was the first ground-based remote detection of HCl in volcanic gas. In recent work at Aso volcano, we could identify 5 species (CO, COS, CO2, SO2 and HCl) simultaneously in the volcanic plume spectra. (author)
17. The SARVIEWS Project: Automated SAR Processing in Support of Operational Near Real-time Volcano Monitoring
Science.gov (United States)
Meyer, F. J.; Webley, P. W.; Dehn, J.; Arko, S. A.; McAlpin, D. B.; Gong, W.
2016-12-01
Volcanic eruptions are among the most significant hazards to human society, capable of triggering natural disasters on regional to global scales. In the last decade, remote sensing has become established in operational volcano monitoring. Centers like the Alaska Volcano Observatory rely heavily on remote sensing data from optical and thermal sensors to provide time-critical hazard information. Despite this high use of remote sensing data, the presence of clouds and a dependence on solar illumination often limit their impact on decision making. Synthetic Aperture Radar (SAR) systems are widely considered superior to optical sensors in operational monitoring situations, due to their weather and illumination independence. Still, the contribution of SAR to operational volcano monitoring has been limited in the past due to high data costs, long processing times, and low temporal sampling rates of most SAR systems. In this study, we introduce the automatic SAR processing system SARVIEWS, whose advanced data analysis and data integration techniques allow, for the first time, a meaningful integration of SAR into operational monitoring systems. We will introduce the SARVIEWS database interface that allows for automatic, rapid, and seamless access to the data holdings of the Alaska Satellite Facility. We will also present a set of processing techniques designed to automatically generate a set of SAR-based hazard products (e.g. change detection maps, interferograms, geocoded images). The techniques take advantage of modern signal processing and radiometric normalization schemes, enabling the combination of data from different geometries. Finally, we will show how SAR-based hazard information is integrated in existing multi-sensor decision support tools to enable joint hazard analysis with data from optical and thermal sensors. We will showcase the SAR processing system using a set of recent natural disasters (both earthquakes and volcanic eruptions) to demonstrate its
18. The Fram Strait integrated ocean observatory
Science.gov (United States)
Fahrbach, E.; Beszczynska-Möller, A.; Rettig, S.; Rohardt, G.; Sagen, H.; Sandven, S.; Hansen, E.
2012-04-01
A long-term oceanographic moored array has been operated since 1997 to measure the water-column properties and oceanic advective fluxes through Fram Strait. While the mooring line along 78°50'N is devoted to monitoring variability of the physical environment, the AWI Hausgarten observatory, located north of it, focuses on ecosystem properties and benthic biology. Under the EU DAMOCLES and ACOBAR projects, the oceanographic observatory has been extended into an innovative integrated observing system combining the deep-ocean moorings, a multipurpose acoustic system and a network of gliders. The main aim of this system is long-term environmental monitoring in Fram Strait, combining satellite data, acoustic tomography, oceanographic measurements at moorings and glider sections with high-resolution ice-ocean circulation models through data assimilation. Looking ahead, a cable connection between the Hausgarten observatory and a land base on Svalbard is planned as the implementation of the ESONET Arctic node. To take advantage of the planned cabled node, different technologies for underwater data transmission were reviewed and partially tested under the ESONET DM AOEM. The main focus was to design and evaluate available technical solutions for collecting data from the different components of the Fram Strait ocean observing system, and to integrate the available data streams for optimal delivery to the future cabled node. The main components of the Fram Strait integrated observing system will be presented and the current status of available technologies for underwater data transfer will be reviewed. In the long term, an initiative of Helmholtz observatories foresees the interdisciplinary Earth Observing System FRAM, which combines observatories such as the long-term deep-sea ecological observatory HAUSGARTEN, the oceanographic Fram Strait integrated observing system, and the Svalbard coastal stations maintained by the Norwegian ARCTOS network. A vision
19. 195-Year History of Mykolayiv Observatory: Events and People
Directory of Open Access Journals (Sweden)
Shulga, O.V.
2017-01-01
Full Text Available The basic stages of the history of the Mykolaiv Astronomical Observatory are shown. The main results of the Observatory activities are presented by the catalogs of star positions, major and minor planets in the Solar system, space objects in the Earth orbit. The information on the qualitative and quantitative structure of the Observatory, cooperation with the observatories of Ukraine and foreign countries as well as major projects carried out in the Observatory is provided.
20. Space Radar Image of Colombian Volcano
Science.gov (United States)
1999-01-01
This is a radar image of a little-known volcano in northern Colombia. The image was acquired on orbit 80 of space shuttle Endeavour on April 14, 1994, by the Spaceborne Imaging Radar C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The volcano near the center of the image is located at 5.6 degrees north latitude, 75.0 degrees west longitude, about 100 kilometers (65 miles) southeast of Medellin, Colombia. The conspicuous dark spot is a lake at the bottom of an approximately 3-kilometer-wide (1.9-mile) volcanic collapse depression or caldera. A cone-shaped peak on the bottom left (northeast rim) of the caldera appears to have been the source for a flow of material into the caldera. This is the northernmost known volcano in South America and, because of its youthful appearance, should be considered dormant rather than extinct. The volcano's existence confirms a fracture zone proposed in 1985 as the northern boundary of volcanism in the Andes. The SIR-C/X-SAR image reveals another, older caldera further south in Colombia, along another proposed fracture zone. Although relatively conspicuous, these volcanoes have escaped widespread recognition because of frequent cloud cover that hinders remote sensing imaging at visible wavelengths. Four separate volcanoes in the Northern Andes nations of Colombia and Ecuador have been active during the last 10 years, killing more than 25,000 people, including scientists who were monitoring the volcanic activity. Detection and monitoring of volcanoes from space provides a safe way to investigate volcanism. The recognition of previously unknown volcanoes is important for hazard evaluations because a number of major eruptions this century have occurred at mountains that were not previously recognized as volcanoes. Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of
1. ESA innovation rescues Ultraviolet Observatory
Science.gov (United States)
1995-10-01
experience to have the opportunity to do an in-depth review of operational procedures established in 1978 and be given the chance to streamline these through the application of the tools available to engineers and scientists in 1995." The innovative arrangements were designed and developed at the ESA IUE Observatory, which is located in Spain at ESA's Villafranca Satellite Tracking Station in Villanueva de la Canada near Madrid. As a result, ESA is now performing all of IUE's science observations (16 hours per day) from the Villafranca station. All the processing of the observations transmitted by the satellite and the subsequent rapid data distribution to research scientists world-wide is now done from Villafranca. NASA does maintain its role in the programme in the areas of operational spacecraft maintenance support, satellite communications and data re-processing for IUE's Final Archive. Thus the IUE Project could be extended and the final IUE observing program can now be implemented. In particular, this will involve critical studies on comets (e.g., on Comet Hale-Bopp), on stellar wind structures, on the enigmatic mini-quasars (which are thought to power the nuclei of Active Galaxies), as well as performing pre-studies which will optimize the utilization of the Hubble Space Telescope. Prof. R.M. Bonnet, Director of the ESA Science Programme, comments: "I am quite pleased that we have been able to secure the extension of our support for the scientists in Europe and the world to this highly effective mission. Also the scientists can be proud of the utilization of IUE, with more than 3000 learned publications and 200 doctoral dissertations based on data from IUE. Through this they demonstrate in turn to be very appreciative of our efforts in the Science Programme".
2. Augustine Volcano, Cook Inlet, Alaska (January 12, 2006)
Science.gov (United States)
2006-01-01
Since last spring, the U.S. Geological Survey's Alaska Volcano Observatory (AVO) has detected increasing volcanic unrest at Augustine Volcano in Cook Inlet, Alaska, near Anchorage. Based on all available monitoring data, AVO regards an eruption similar to those of 1976 and 1986 as the most probable outcome. During January, activity has been episodic, characterized by emission of steam and ash plumes rising to altitudes in excess of 9,000 m (30,000 ft) and posing hazards to aircraft in the vicinity. An ASTER image was acquired at 12:42 AST on January 12, 2006, during an eruptive phase of Augustine. The perspective rendition shows the eruption plume derived from the ASTER image data. ASTER's stereo viewing capability was used to calculate the 3-dimensional topography of the eruption cloud as it was blown to the south by prevailing winds. From a maximum height of 3060 m (9950 ft), the plume cooled and its top descended to 1900 m (6175 ft). The perspective view shows the ASTER data draped over the plume-top topography, combined with a base image acquired in 2000 by the Landsat satellite, which is itself draped over ground elevation data from the Shuttle Radar Topography Mission. The topographic relief has been increased 1.5 times for this illustration. Comparison of the ASTER plume topography data with ash dispersal models and weather radar data will allow the National Weather Service to validate and improve such models. These models are used to forecast volcanic ash plume trajectories and provide hazard alerts and warnings to aircraft in the Alaska region. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA's Terra satellite. The instrument was built by Japan's Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products. The broad spectral coverage and high spectral resolution of ASTER provide scientists in numerous disciplines with
3. The EarthScope Plate Boundary Observatory: Bringing Low Latency Data From Unimak Island, Alaska
Science.gov (United States)
Feaux, K.; Mencin, D.; Jackson, M.; Gallaher, W.; Pauk, B.; Smith, S.
2008-05-01
The Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project, will complete the installation of a fourteen-station GPS network on Unimak Island, Alaska, in August 2008. The primary data communications goal of the project is to design and implement a robust data communications network capable of downloading 15-sec daily GPS files and streaming 1 Hz GPS data, via Ustream, from Unimak Island to three data relay points in the Aleutian chain. As part of the permitting agreement with the landowner, PBO will co-locate the GPS stations with existing USGS seismic stations. The technical challenges involved in optimizing the data communications network for both the GPS data and the seismic data will be presented. From Unimak Island, there will be three separate data telemetry paths: 1) west through a radio repeater on Akutan volcano to a VSAT in Akutan village, 2) east through a radio repeater to a T1 connection in Cold Bay, AK, 3) south through a radio repeater to a VSAT at an existing PBO GPS station in King Cove, AK. The difficulties involved in the project include complex network geometries with multiple radio repeaters, long-distance RF transmission over water, hardware bandwidth limitations, power limitations, space limitations, as well as working in bear country on an incredibly remote and active volcano.
4. An international network of magnetic observatories
Science.gov (United States)
Love, Jeffrey J.; Chulliat, A.
2013-01-01
Since its formation in the late 1980s, the International Real-Time Magnetic Observatory Network (INTERMAGNET), a voluntary consortium of geophysical institutes from around the world, has promoted the operation of magnetic observatories according to modern standards [e.g., Rasson, 2007]. INTERMAGNET institutes have cooperatively developed infrastructure for data exchange and management as well as methods for data processing and checking. INTERMAGNET institutes have also helped to expand global geomagnetic monitoring capacity, most notably by assisting magnetic observatory institutes in economically developing countries by working directly with local geophysicists. Today the INTERMAGNET consortium encompasses 57 institutes from 40 countries supporting 120 observatories (see Figures 1a and 1b). INTERMAGNET data record a wide variety of time-series signals related to a host of different physical processes in the Earth's interior and in the Earth's surrounding space environment [e.g., Love, 2008]. Observatory data have always had a diverse user community, and to meet evolving demand, INTERMAGNET has recently coordinated the introduction of several new data services.
5. The University of Montana's Blue Mountain Observatory
Science.gov (United States)
Friend, D. B.
2004-12-01
The University of Montana's Department of Physics and Astronomy runs the state of Montana's only professional astronomical observatory. The Observatory, located on nearby Blue Mountain, houses a 16 inch Boller and Chivens Cassegrain reflector (purchased in 1970), in an Ash dome. The Observatory sits just below the summit ridge, at an elevation of approximately 6300 feet. Our instrumentation includes an Op-Tec SSP-5A photoelectric photometer and an SBIG ST-9E CCD camera. We have the only undergraduate astronomy major in the state (technically a physics major with an astronomy option), so our Observatory is an important component of our students' education. Students have recently carried out observing projects on the photometry of variable stars and color photometry of open clusters and OB associations. In my poster I will show some of the data collected by students in their observing projects. The Observatory is also used for public open houses during the summer months, and these have become very popular: at times we have had 300 visitors in a single night.
6. Common processes at unique volcanoes – a volcanological conundrum
Directory of Open Access Journals (Sweden)
Katharine eCashman
2014-11-01
An emerging challenge in modern volcanology is the apparent contradiction between the perception that every volcano is unique and classification systems based on commonalities among volcano morphology and eruptive style. On the one hand, detailed studies of individual volcanoes show that a single volcano often exhibits similar patterns of behaviour over multiple eruptive episodes; this observation has led to the idea that each volcano has its own distinctive pattern of behaviour (or 'personality'). In contrast, volcano classification schemes define eruption styles referenced to type volcanoes (e.g. Plinian, Strombolian, Vulcanian); this approach implicitly assumes that common processes underpin volcanic activity and can be used to predict the nature, extent and ensuing hazards of individual volcanoes. Actual volcanic eruptions, however, often include multiple styles, and type volcanoes may experience atypical eruptions (e.g., violent explosive eruptions of Kilauea, Hawaii). The volcanological community is thus left with a fundamental conundrum that pits the uniqueness of individual volcanic systems against the generalization of common processes. Addressing this conundrum represents a major challenge for volcano research.
7. Continuous monitoring of volcanoes with borehole strainmeters
Science.gov (United States)
Linde, Alan T.; Sacks, Selwyn
Monitoring of volcanoes using various physical techniques has the potential to provide important information about the shape, size and location of the underlying magma bodies. Volcanoes erupt when the pressure in a magma chamber some kilometers below the surface overcomes the strength of the intervening rock, resulting in detectable deformations of the surrounding crust. Seismic activity may accompany and precede eruptions and, from the patterns of earthquake locations, inferences may be made about the location of magma and its movement. Ground deformation near volcanoes provides more direct evidence on these, but continuous monitoring of such deformation is necessary for all the important aspects of an eruption to be recorded. Sacks-Evertson borehole strainmeters have recorded strain changes associated with eruptions of Hekla, Iceland and Izu-Oshima, Japan. Those data have made possible well-constrained models of the geometry of the magma reservoirs and of the changes in their geometry during the eruption. The Hekla eruption produced clear changes in strain at the nearest instrument (15 km from the volcano) starting about 30 minutes before the surface breakout. The borehole instrument on Oshima showed an unequivocal increase in the amplitude of the solid earth tides beginning some years before the eruption. Deformational changes, detected by a borehole strainmeter and a very long baseline tiltmeter, and corresponding to the remote triggered seismicity at Long Valley, California in the several days immediately following the Landers earthquake are indicative of pressure changes in the magma body under Long Valley, raising the question of whether such transients are of more general importance in the eruption process. We extrapolate the experience with borehole strainmeters to estimate what could be learned from an installation of a small network of such instruments on Mauna Loa. Since the process of conduit formation from the magma sources in Mauna Loa and other
8. Volcanoes of the Wrangell Mountains and Cook Inlet region, Alaska: selected photographs
Science.gov (United States)
Neal, Christina A.; McGimsey, Robert G.; Diggles, Michael F.
2001-01-01
Alaska is home to more than 40 active volcanoes, many of which have erupted violently and repeatedly in the last 200 years. This CD-ROM contains 97 digitized color 35-mm images which represent a small fraction of thousands of photographs taken by Alaska Volcano Observatory scientists, other researchers, and private citizens. The photographs were selected to portray Alaska's volcanoes, to document recent eruptive activity, and to illustrate the range of volcanic phenomena observed in Alaska. These images are for use by the interested public, multimedia producers, desktop publishers, and the high-end printing industry. The digital images are stored in the 'images' folder and can be read across Macintosh, Windows, DOS, OS/2, SGI, and UNIX platforms with applications that can read JPG (JPEG - Joint Photographic Experts Group format) or PCD (Kodak's PhotoCD (YCC) format) files. Throughout this publication, the image numbers match among the file names, figure captions, thumbnail labels, and other references. Also included on this CD-ROM are Windows and Macintosh viewers and engines for keyword searches (Adobe Acrobat Reader with Search). At the time of this publication, Kodak's policy on the distribution of color-management files is still unresolved, and so none is included on this CD-ROM. However, using the Universal Ektachrome or Universal Kodachrome transforms found in your software will provide excellent color. In addition to PhotoCD (PCD) files, this CD-ROM contains large (14.2'x19.5') and small (4'x6') screen-resolution (72 dots per inch; dpi) images in JPEG format. These undergo downsizing and compression relative to the PhotoCD images.
9. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Science.gov (United States)
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
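The data-management idea described above (accurately timestamped captures that are either transmitted back to the observatory in webcam mode or stored locally in time-lapse mode) can be sketched in plain Python. This is a minimal illustration, not the observatory's actual scripts; the function names, station label, and file-naming scheme are assumptions invented for the example:

```python
from datetime import datetime, timezone

def image_filename(station: str, when: datetime) -> str:
    """Build a timestamped filename from a (GPS-disciplined) capture time."""
    return f"{station}_{when.strftime('%Y%m%d_%H%M%S')}Z.jpg"

def handle_capture(station: str, when: datetime, mode: str) -> dict:
    """Decide what to do with a newly captured frame.

    mode 'webcam'    -> transmit the image to the observatory in real time
    mode 'timelapse' -> store the image locally for retrieval on field visits
    """
    name = image_filename(station, when)
    if mode == "webcam":
        return {"file": name, "action": "transmit"}
    if mode == "timelapse":
        return {"file": name, "action": "store"}
    raise ValueError(f"unknown mode: {mode}")

t = datetime(2015, 3, 1, 12, 30, 5, tzinfo=timezone.utc)
print(handle_capture("KIcam", t, "timelapse"))
# -> {'file': 'KIcam_20150301_123005Z.jpg', 'action': 'store'}
```

On the real system, the timestamp would come from the GPS module rather than the computer clock, which is what makes the internal timing independent of any network connection.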
10. Optimized autonomous space in-situ sensor web for volcano monitoring
Science.gov (United States)
Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.
2010-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ network was deployed into the craters and around the flanks of Mount St. Helens in July 2009 and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.
11. The Pu'u 'O'o-Kupaianaha Eruption of Kilauea Volcano, Hawaii: The First 20 Years
Science.gov (United States)
Heliker, Christina C.; Swanson, Donald A.; Takahashi, Taeko Jane
2003-01-01
The Pu'u 'O'o-Kupaianaha eruption started on January 3, 1983. The ensuing 20-year period of nearly continuous eruption is the longest at Kilauea Volcano since the famous lava-lake activity of the 19th century. No rift-zone eruption in more than 600 years even comes close to matching the duration and volume of activity of these past two decades. Fortunately, such a landmark event came during a period of remarkable technological advancements in volcano monitoring. When the eruption began, the Global Positioning System (GPS) and the Geographic Information System (GIS) were but glimmers on the horizon, broadband seismology was in its infancy, and the correlation spectrometer (COSPEC), used to measure SO2 flux, was still very young. Now, all of these techniques are employed on a daily basis to track the ongoing eruption and construct models about its behavior. The 12 chapters in this volume, written by present or past Hawaiian Volcano Observatory staff members and close collaborators, celebrate the growth of understanding that has resulted from research during the past 20 years of Kilauea's eruption. The chapters range widely in emphasis, subject matter, and scope, but all present new concepts or important modifications of previous ideas - in some cases, ideas long held and cherished.
12. Darwin's triggering mechanism of volcano eruptions
Science.gov (United States)
Galiev, Shamil
2010-05-01
Charles Darwin wrote that ‘… the elevation of many hundred square miles of territory near Concepcion is part of the same phenomenon, with that splashing up, if I may so call it, of volcanic matter through the orifices in the Cordillera at the moment of the shock;…' and ‘…a power, I may remark, which acts in paroxysmal upheavals like that of Concepcion, and in great volcanic eruptions,…'. Darwin reports that ‘…several of the great chimneys in the Cordillera of central Chile commenced a fresh period of activity ….' In particular, Darwin reported on four simultaneous large eruptions from the following volcanoes: Robinson Crusoe, Minchinmavida, Cerro Yanteles and Peteroa (we cite Darwin's sentences following his The Voyage of the Beagle and researchspace.auckland.ac.nz/handle/2292/4474). Let us consider these eruptions taking into account the volcano shape and the conduit. Three of the volcanoes (Minchinmavida (2404 m), Cerro Yanteles (2050 m), and Peteroa (3603 m)) are stratovolcanoes and are formed of symmetrical cones with steep sides. Robinson Crusoe (922 m) is a shield volcano and is formed of a cone with gently sloping sides. They are not very active. We may surmise that their vents had a sealing plug (vent fill) in 1835. All these volcanoes are conical. These common features are important for Darwin's triggering model, which is discussed below. The vent fill material usually has a high level of porosity and a very low tensile strength and can easily be fragmented by tension waves. The action of a severe earthquake on the volcano base may be compared with a nuclear blast explosion at the base. It is known that after an underground nuclear explosion, vertical motion and surface fractures at the tops of mountains were observed. The same relates to the propagation of waves in conical elements. After the explosive loading of the base, the tip may break and fly off at high velocity. An analogous phenomenon may be generated as a result of a
13. Multinational History of Strasbourg Astronomical Observatory
CERN Document Server
Heck, André
2005-01-01
Strasbourg Astronomical Observatory is quite an interesting place for historians: several changes of nationality between France and Germany, high-profile scientists having been based there, big projects born or installed in its walls, and so on. Most of the documents circulating on the history of the Observatory and on related matters have however been so far poorly referenced, if at all. This made necessary the compilation of a volume such as this one, offering fully-documented historical facts and references on the first decades of the Observatory history, authored by both French and German specialists. The experts contributing to this book have done their best to write in a way understandable to readers not necessarily hyperspecialized in astronomy nor in the details of European history. After an introductory chapter by the Editor, contributions by Wolfschmidt and by Duerbeck respectively deal extensively with the German periods and review people and instrumentation, while another paper by Duerbeck is more...
14. Chicago's Dearborn Observatory: a study in survival
Science.gov (United States)
Bartky, Ian R.
2000-12-01
The Dearborn Observatory, located on the Old University of Chicago campus from 1863 until 1888, was America's most promising astronomical facility when it was founded. Established by the Chicago Astronomical Society and directed by one of the country's most gifted astronomers, it boasted the largest telescope in the world and virtually unlimited operating funds. The Great Chicago Fire of 1871 destroyed its funding and demolished its research programme. Only via the sale of time signals and the heroic efforts of two amateur astronomers did the Dearborn Observatory survive.
15. Geoelectric monitoring at the Boulder magnetic observatory
Directory of Open Access Journals (Sweden)
C. C. Blum
2017-11-01
Despite its importance to a range of applied and fundamental studies, and obvious parallels to a robust network of magnetic-field observatories, long-term geoelectric field monitoring is rarely performed. The installation of a new geoelectric monitoring system at the Boulder magnetic observatory of the US Geological Survey is summarized. Data from the system are expected, among other things, to be used for testing and validating algorithms for mapping North American geoelectric fields. An example time series of recorded electric and magnetic fields during a modest magnetic storm is presented. Based on our experience, we additionally present operational aspects of a successful geoelectric field monitoring system.
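As a rough illustration of what a geoelectric time series represents: each horizontal field component is estimated from the voltage difference measured across a pair of buried electrodes, divided by their separation. The sketch below shows only that unit conversion; the function name and the numbers are invented for the example and are not Boulder observatory values:

```python
def geoelectric_field_mV_per_km(voltage_mV: float, separation_m: float) -> float:
    """Estimate one horizontal geoelectric-field component in mV/km
    from the voltage difference (mV) across an electrode dipole of
    length `separation_m` metres."""
    separation_km = separation_m / 1000.0
    return voltage_mV / separation_km

# A 12 mV difference across a 100 m dipole corresponds to 120 mV/km.
print(geoelectric_field_mV_per_km(12.0, 100.0))  # -> 120.0
```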
16. Operation of the Pierre Auger Observatory
International Nuclear Information System (INIS)
Rodriguez Martino, Julio
2011-01-01
While the work to make data acquisition fully automatic continues, both the Fluorescence Detectors and the Surface Detectors of the Pierre Auger Observatory need some kind of attention from the local staff. In the first case, the telescopes are operated and monitored during the moonless periods. The ground array only needs monitoring, but the larger number of stations implies more variables to consider. AugerAccess (a high-speed internet connection) will give the possibility of operating and monitoring the observatory from any place in the world. This raises questions about secure access, better control software and alarms. Solutions are already being tested and improved.
17. SPASE and the Heliophysics Virtual Observatories
Directory of Open Access Journals (Sweden)
J R Thieman
2010-02-01
The Space Physics Archive Search and Extract (SPASE) project has developed an information model for interoperable access and retrieval of data within the Heliophysics (also known as space and solar physics) science community. The diversity of science data archives within this community has led to the establishment of many virtual observatories to coordinate the data pathways within Heliophysics subdisciplines, such as magnetospheres, waves, radiation belts, etc. The SPASE information model provides a semantic layer and common language for data descriptions so that searches might be made across the whole of the heliophysics data environment, especially through the virtual observatories.
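The value of a common description language can be shown with a toy sketch: if every archive exposes records using the same standardized field names, a single query runs across all of them. The field names, record values, and observatory labels below are simplified illustrations, not the actual SPASE schema:

```python
# Toy metadata records mimicking a shared vocabulary across virtual observatories.
archives = {
    "VWO": [{"ResourceID": "spase://Example/Waves1", "MeasurementType": "Waves"}],
    "VMO": [{"ResourceID": "spase://Example/Mag1", "MeasurementType": "MagneticField"},
            {"ResourceID": "spase://Example/Mag2", "MeasurementType": "MagneticField"}],
}

def search(measurement_type: str) -> list:
    """One query searches every archive, because all records share field names."""
    return [rec["ResourceID"]
            for records in archives.values()
            for rec in records
            if rec["MeasurementType"] == measurement_type]

print(search("MagneticField"))
# -> ['spase://Example/Mag1', 'spase://Example/Mag2']
```

Without the semantic layer, each archive would need its own query logic; with it, the loop above is the whole search.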
18. Public relations for a national observatory
Science.gov (United States)
Finley, David G.
The National Radio Astronomy Observatory (NRAO) is a government-funded organization providing state-of-the art observational facilities to the astronomical community on a peer-reviewed basis. In this role, the NRAO must address three principal constituencies with its public-relations efforts. These are: the astronomical community; the funding and legislative bodies of the Federal Government; and the general public. To serve each of these constituencies, the Observatory has developed a set of public-relations initiatives supported by public-relations and outreach professionals as well as by management and scientific staff members. The techniques applied and the results achieved in each of these areas are described.
19. Silicic magma generation at Askja volcano, Iceland
Science.gov (United States)
2009-04-01
The rate of magma differentiation is an important parameter for hazard assessment at active volcanoes. However, estimates of these rates depend on a proper understanding of the underlying magmatic processes and magma generation. Differences in isotope ratios of O, Th and B between silicic and contemporaneous basaltic magmas have been used to emphasize their origin by partial melting of hydrothermally altered metabasaltic crust in the rift zones, favoured by a strong geothermal gradient. An alternative model for the origin of silicic magmas in Iceland has been proposed based on U-series results. Young mantle-derived mafic protolith is thought to be metasomatized and partially melted to form the silicic end-member. However, this model underestimates the compositional variations of the hydrothermally altered basaltic crust. New data on U-Th disequilibria and O-isotopes in basalts and dacites from Askja volcano reveal a strong correlation between (230Th/232Th) and delta 18O. The 1875 AD dacite has the lowest Th- and O-isotope ratios (0.94 and -0.24 per mille, respectively), whereas tephra of evolved basaltic composition, erupted 2 months earlier, has significantly higher values (1.03 and 2.8 per mille, respectively). The highest values are observed in the most recent basalts (erupted in 1920 and 1961) inside the Askja caldera complex and out on the associated fissure swarm (Sveinagja basalt). This correlation also holds for older magma, such as early Holocene dacites, whose eruption may have been provoked by rapid glacier thinning. Silicic magmas at Askja volcano thus bear geochemical signatures that are best explained by partial melting of extensively hydrothermally altered crust, and the silicic magma source has remained constant during the Holocene at least. Once these silicic magmas are formed, they appear to erupt rapidly rather than mixing and mingling with the incoming basalt heat source, which explains the lack of icelandites and the bi-modal volcanism at Askja
20. Geothermal Exploration of Newberry Volcano, Oregon
Energy Technology Data Exchange (ETDEWEB)
Waibel, Albert F. [Columbia Geoscience, Pasco, WA (United States); Frone, Zachary S. [Southern Methodist Univ., Dallas, TX (United States); Blackwell, David D. [Southern Methodist Univ., Dallas, TX (United States)
2014-12-01
Davenport Newberry (Davenport) has completed 8 years of exploration for geothermal energy on Newberry Volcano in central Oregon. Two deep exploration test wells were drilled by Davenport on the west flank of the volcano, one intersected a hydrothermal system; the other intersected isolated fractures with no hydrothermal interconnection. Both holes have bottom-hole temperatures near or above 315°C (600°F). Subsequent to deep test drilling an expanded exploration and evaluation program was initiated. These efforts have included reprocessing existing data, executing multiple geological, geophysical, geochemical programs, deep exploration test well drilling and shallow well drilling. The efforts over the last three years have been made possible through a DOE Innovative Exploration Technology (IET) Grant 109, designed to facilitate innovative geothermal exploration techniques. The combined results of the last 8 years have led to a better understanding of the history and complexity of Newberry Volcano and improved the design and interpretation of geophysical exploration techniques with regard to blind geothermal resources in volcanic terrain.
1. Electrical structure of Newberry Volcano, Oregon
Science.gov (United States)
Fitterman, D.V.; Stanley, W.D.; Bisdorf, R.J.
1988-01-01
From the interpretation of magnetotelluric, transient electromagnetic, and Schlumberger resistivity soundings, the electrical structure of Newberry Volcano in central Oregon is found to consist of four units. From the surface downward, the geoelectrical units are (1) very resistive, young, unaltered volcanic rock, (2) a conductive layer of older volcanic material composed of altered tuffs, (3) a thick resistive layer thought to be in part intrusive rocks, and (4) a lower-crustal conductor. This model is similar to the regional geoelectrical structure found throughout the Cascade Range. Inside the caldera, the conductive second layer corresponds to the steep temperature gradient and alteration minerals observed in the USGS Newberry 2 test-hole. Drill hole information on the south and north flanks of the volcano (test holes GEO N-1 and GEO N-3, respectively) indicates that outside the caldera the conductor is due to alteration minerals (primarily smectite) and not high-temperature pore fluids. On the flanks of Newberry the conductor is generally deeper than inside the caldera, and it deepens with distance from the summit. A notable exception to this pattern is seen just west of the caldera rim, where the conductive zone is shallower than at other flank locations. The volcano sits atop a rise in the resistive layer, interpreted to be due to intrusive rocks. -from Authors
2. Monitoring active volcanoes: The geochemical approach
Directory of Open Access Journals (Sweden)
Takeshi Ohba
2011-06-01
The geochemical surveillance of an active volcano aims to recognize possible signals that are related to changes in volcanic activity. Indeed, as a consequence of the magma rising inside the volcanic "plumbing system" and/or the refilling with new batches of magma, the dissolved volatiles in the magma are progressively released as a function of their relative solubilities. When approaching the surface, these fluids that are discharged during magma degassing can interact with shallow aquifers and/or can be released along the main volcano-tectonic structures. Under these conditions, the main degassing sites represent strategic locations to be monitored.
The main purpose of this special volume is to collect papers that cover a wide range of topics in volcanic fluid geochemistry, which include geochemical characterization and geochemical monitoring of active volcanoes using different techniques and at different sites. Moreover, part of this volume has been dedicated to the new geochemistry tools.
3. Nanoscale volcanoes: accretion of matter at ion-sculpted nanopores.
Science.gov (United States)
Mitsui, Toshiyuki; Stein, Derek; Kim, Young-Rok; Hoogerheide, David; Golovchenko, J A
2006-01-27
We demonstrate the formation of nanoscale volcano-like structures induced by ion-beam irradiation of nanoscale pores in freestanding silicon nitride membranes. Accreted matter is delivered to the volcanoes from micrometer distances along the surface. Volcano formation accompanies nanopore shrinking and depends on geometrical factors and the presence of a conducting layer on the membrane's back surface. We argue that surface electric fields play an important role in accounting for the experimental observations.
4. Efficient inversion of volcano deformation based on finite element models : An application to Kilauea volcano, Hawaii
Science.gov (United States)
Charco, María; González, Pablo J.; Galán del Sastre, Pedro
2017-04-01
The Kilauea volcano (Hawaii, USA) is one of the most active volcanoes world-wide and therefore one of the best monitored volcanoes around the world. Its complex system provides a unique opportunity to investigate the dynamics of magma transport and supply. Geodetic techniques, such as Interferometric Synthetic Aperture Radar (InSAR), are being extensively used to monitor ground deformation at volcanic areas. The quantitative interpretation of such surface ground deformation measurements requires both physical modelling to simulate the observed signals and inversion approaches to estimate the magmatic source parameters. Here, we use synthetic aperture radar data from the Sentinel-1 radar interferometry satellite mission to image volcano deformation sources during the inflation along Kilauea's Southwest Rift Zone in April-May 2015. We propose a Finite Element Model (FEM) for the calculation of Green functions in a mechanically heterogeneous domain. The key aspect of the methodology lies in applying the reciprocity relationship of the Green functions between the station and the source for efficient numerical inversions. The search for the best-fitting magmatic (point) source(s) is generally conducted over an array of 3-D locations extending below a predefined volume region. Our approach, however, allows us to reduce the total number of Green functions to the number of observation points by using the above-mentioned reciprocity relationship. This new methodology is able to accurately represent magmatic processes using physical models capable of simulating volcano deformation in domains with non-uniform material properties, which eventually will lead to a better description of the status of the volcano.
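Once the Green functions are tabulated (under reciprocity, one FEM run per station rather than one per candidate source), recovering source strengths reduces to a linear least-squares problem. A toy sketch with synthetic numbers, where the matrix `G`, the station count, and the candidate-source grid are all illustrative placeholders rather than FEM output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 20, 5                       # stations, candidate point sources

# G[i, j]: displacement at station i for a unit-strength source j. In the
# method above these columns come from FEM solutions; here they are random
# placeholders standing in for the tabulated Green functions.
G = rng.normal(size=(n_obs, n_src))
m_true = np.zeros(n_src)
m_true[2] = 3.0                            # one active source
d = G @ m_true + 0.01 * rng.normal(size=n_obs)   # noisy "observations"

# Least-squares estimate of the source strengths
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The practical payoff of reciprocity is that `G` requires only `n_obs` numerical solutions instead of one per trial source location, which can be a reduction of orders of magnitude.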
5. Geologic map of Medicine Lake volcano, northern California
Science.gov (United States)
Donnelly-Nolan, Julie M.
2011-01-01
Medicine Lake volcano forms a broad, seemingly nondescript highland, as viewed from any angle on the ground. Seen from an airplane, however, treeless lava flows are scattered across the surface of this potentially active volcanic edifice. Lavas of Medicine Lake volcano, which range in composition from basalt through rhyolite, cover more than 2,000 km2 east of the main axis of the Cascade Range in northern California. Across the Cascade Range axis to the west-southwest is Mount Shasta, its towering volcanic neighbor, whose stratocone shape contrasts with the broad shield shape of Medicine Lake volcano. Hidden in the center of Medicine Lake volcano is a 7 km by 12 km summit caldera in which nestles its namesake, Medicine Lake. The flanks of Medicine Lake volcano, which are dotted with cinder cones, slope gently upward to the caldera rim, which reaches an elevation of nearly 8,000 ft (2,440 m). The maximum extent of lavas from this half-million-year-old volcano is about 80 km north-south by 45 km east-west. In postglacial time, 17 eruptions have added approximately 7.5 km3 to its total estimated volume of 600 km3, and it is considered to be the largest by volume among volcanoes of the Cascades arc. The volcano has erupted nine times in the past 5,200 years, a rate more frequent than has been documented at all other Cascades arc volcanoes except Mount St. Helens.
6. The critical role of volcano monitoring in risk reduction
Directory of Open Access Journals (Sweden)
R. I. Tilling
2008-01-01
Data from volcano-monitoring studies constitute the only scientifically valid basis for short-term forecasts of a future eruption, or of possible changes during an ongoing eruption. Thus, in any effective hazards-mitigation program, a basic strategy in reducing volcano risk is the initiation or augmentation of volcano monitoring at historically active volcanoes and also at geologically young, but presently dormant, volcanoes with potential for reactivation. Beginning with the 1980s, substantial progress in volcano-monitoring techniques and networks – ground-based as well as space-based – has been achieved. Although some geochemical monitoring techniques (e.g., remote measurement of volcanic gas emissions) are being increasingly applied and show considerable promise, seismic and geodetic methods to date remain the techniques of choice and are the most widely used. Availability of comprehensive volcano-monitoring data was a decisive factor in the successful scientific and governmental responses to the reawakening of Mount St. Helens (Washington, USA) in 1980 and, more recently, to the powerful explosive eruptions at Mount Pinatubo (Luzon, Philippines) in 1991. However, even with the ever-improving state-of-the-art in volcano monitoring and predictive capability, the Mount St. Helens and Pinatubo case histories unfortunately still represent the exceptions, rather than the rule, in successfully forecasting the most likely outcome of volcano unrest.
7. Edifice growth, deformation and rift zone development in basaltic setting: Insights from Piton de la Fournaise shield volcano (Réunion Island)
Science.gov (United States)
Michon, Laurent; Cayol, Valérie; Letourneur, Ludovic; Peltier, Aline; Villeneuve, Nicolas; Staudacher, Thomas
2009-07-01
The overall morphology of basaltic volcanoes mainly depends on their eruptive activity (effusive vs. explosive), the geometry of the rift zones and the characteristics of both endogenous and exogenous growth processes. The origin of the steep geometry of the central cone of Piton de la Fournaise volcano, which is unusual for a basaltic effusive volcano, and its deformation are examined with a combination of a detailed morphological analysis, field observations, GPS data from the Piton de la Fournaise Volcano Observatory and numerical models. The new caldera walls formed during the April 2007 summit collapse reveal that the steep cone is composed of a pyroclastic core, inherited from an earlier explosive phase, overlain by a pile of thin lava flows. This suggests that exogenous processes played a major role in the building of the steep central cone. Magma injections into the cone, which mainly occur along the N25-30 and N120 rift zones, lead to an asymmetric outward inflation concentrated in the cone's eastern half. This endogenous growth progressively tilts the southeast and east flanks of the cone, and induces the development of a dense network of flank fractures. Finally, it is proposed that intrusions along the N120 rift zone are encouraged by stresses induced by magma injections along the N25-30 rift zone.
8. Data assimilation strategies for volcano geodesy
Science.gov (United States)
Zhan, Yan; Gregg, Patricia M.
2017-09-01
Ground deformation observed using near-real time geodetic methods, such as InSAR and GPS, can provide critical information about the evolution of a magma chamber prior to volcanic eruption. Rapid advancement in numerical modeling capabilities has resulted in a number of finite element models targeted at better understanding the connection between surface uplift associated with magma chamber pressurization and the potential for volcanic eruption. Robust model-data fusion techniques are necessary to take full advantage of the numerical models and the volcano monitoring observations currently available. In this study, we develop a 3D data assimilation framework using the Ensemble Kalman Filter (EnKF) approach in order to combine geodetic observations of surface deformation with geodynamic models to investigate volcanic unrest. The EnKF sequential assimilation method utilizes disparate data sets as they become available to update geodynamic models of magma reservoir evolution. While the EnKF has been widely applied in hydrologic and climate modeling, the adaptation for volcano monitoring is in its initial stages. As such, our investigation focuses on conducting a series of sensitivity tests to optimize the EnKF for volcano applications and on developing specific strategies for assimilation of geodetic data. Our numerical experiments illustrate that the EnKF is able to adapt well to the spatial limitations posed by GPS data and the temporal limitations of InSAR, and that specific strategies can be adopted to enhance EnKF performance to improve model forecasts. Specifically, our numerical experiments indicate that: (1) incorporating additional iterations of the EnKF analysis step is more efficient than increasing the number of ensemble members; (2) the accuracy of the EnKF results are not affected by initial parameter assumptions; (3) GPS observations near the center of uplift improve the quality of model forecasts; (4) occasionally shifting continuous GPS stations to
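The analysis step at the heart of the EnKF can be illustrated with a minimal scalar example. Everything below is an assumption for illustration: a single hypothetical reservoir-pressure state observed directly (identity observation operator), synthetic numbers, and a perturbed-observation update, which is one common EnKF variant; it is far simpler than the 3D framework described above.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                   # ensemble size
ensemble = rng.normal(5.0, 2.0, size=N)   # prior (forecast) ensemble of the state
obs, obs_std = 8.0, 0.5                   # geodetic observation and its error

P = ensemble.var(ddof=1)                  # ensemble estimate of forecast variance
K = P / (P + obs_std**2)                  # Kalman gain (identity observation operator)

# Perturbed-observation EnKF analysis step: each member is nudged toward
# its own noisy copy of the observation.
perturbed = obs + obs_std * rng.normal(size=N)
analysis = ensemble + K * (perturbed - ensemble)
```

The analysis ensemble mean moves toward the observation and its spread shrinks, which is exactly the model-data fusion behavior the framework above exploits sequentially as new InSAR or GPS data arrive.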
9. India-Based Neutrino Observatory (INO)
India-Based Neutrino Observatory (INO) · Atmospheric neutrinos – India connection · INO Collaboration · INO Project components · ICAL: The physics goals · INO site: Bodi West Hills · Underground Laboratory Layout · Status of activities at INO Site · INO-ICAL Detector · ICAL factsheet.
10. Asteroids Observed from GMARS and Santana Observatories
Science.gov (United States)
Stephens, Robert D.
2009-01-01
Lightcurve period and amplitude results from Santana and GMARS Observatories are reported for 2008 June to September: 1472 Muonio, 8.706 ± 0.002 h and 0.50 mag; 2845 Franklinken, 114 ± 1 h and 0.8 mag; and 4533 Orth (> 24 hours).
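Rotation periods like those reported above are typically extracted by folding the photometry at trial periods and minimizing a dispersion statistic. A minimal string-length-style sketch on synthetic data (the sinusoidal lightcurve, noise level, and search grid are illustrative assumptions, not the actual asteroid photometry):

```python
import numpy as np

rng = np.random.default_rng(2)
true_period = 8.7                               # hours (synthetic)
t = np.sort(rng.uniform(0, 100, 200))           # observation times [h]
mag = 0.25 * np.sin(2 * np.pi * t / true_period) + 0.01 * rng.normal(size=200)

def dispersion(period):
    """Sum of squared magnitude jumps between phase-adjacent points;
    small when the folded lightcurve is smooth."""
    phase = (t / period) % 1.0
    order = np.argsort(phase)
    return np.sum(np.diff(mag[order]) ** 2)

trial = np.arange(5.0, 12.0, 0.001)             # trial periods [h]
best = trial[np.argmin([dispersion(p) for p in trial])]
```

In practice a Lomb-Scargle periodogram or Fourier fit is used on the real, unevenly sampled data, but the fold-and-minimize idea is the same.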
11. Reengineering observatory operations for the time domain
Science.gov (United States)
Seaman, Robert L.; Vestrand, W. T.; Hessman, Frederic V.
2014-07-01
Observatories are complex scientific and technical institutions serving diverse users and purposes. Their telescopes, instruments, software, and human resources engage in interwoven workflows over a broad range of timescales. These workflows have been tuned to be responsive to concepts of observatory operations that were applicable when various assets were commissioned, years or decades in the past. The astronomical community is entering an era of rapid change increasingly characterized by large time domain surveys, robotic telescopes and automated infrastructures, and - most significantly - by operating modes and scientific consortia that span our individual facilities, joining them into complex network entities. Observatories must adapt, and numerous initiatives are in progress that focus on redesigning individual components of the astronomical toolkit. New instrumentation is both more capable and more complex than ever, and even simple instruments may have powerful observation scripting capabilities. Remote and queue observing modes are now widespread. Data archives are becoming ubiquitous. Virtual observatory standards and protocols, and the astroinformatics data-mining techniques layered on them, are areas of active development. Indeed, new large-aperture ground-based telescopes may be as expensive as space missions and have similarly formal project management processes and large data management requirements. This piecewise approach is not enough. Whatever the challenges of funding or politics facing the national and international astronomical communities, it will be more efficient - scientifically as well as in the usual figures of merit of cost, schedule, performance, and risks - to explicitly address the systems engineering of the astronomical community as a whole.
12. Education and public engagement in observatory operations
Science.gov (United States)
Gabor, Pavel; Mayo, Louis; Zaritsky, Dennis
2016-07-01
Education and public engagement (EPE) is an essential part of astronomy's mission. New technologies, remote observing and robotic facilities are opening new possibilities for EPE. A number of projects (e.g., Telescopes In Education, MicroObservatory, Goldstone Apple Valley Radio Telescope and UNC's Skynet) have developed new infrastructure, a number of observatories (e.g., University of Arizona's "full-engagement initiative" towards its astronomy majors, Vatican Observatory's collaboration with high-schools) have dedicated their resources to practical instruction and EPE. Some of the facilities are purpose built, others are legacy telescopes upgraded for remote or automated observing. Networking among institutions is most beneficial for EPE, and its implementation ranges from informal agreements between colleagues to advanced software packages with web interfaces. The deliverables range from reduced data to time and hands-on instruction while operating a telescope. EPE represents a set of tasks and challenges which is distinct from research applications of the new astronomical facilities and operation modes. In this paper we examine the experience with several EPE projects, and some lessons and challenges for observatory operation.
13. MMS Observatory TV Results Contamination Summary
Science.gov (United States)
Rosecrans, Glenn; Brieda, Lubos; Errigo, Therese
2014-01-01
The Magnetospheric Multiscale (MMS) mission is a constellation of 4 observatories designed to investigate the fundamental plasma physics of reconnection in the Earth's magnetosphere. The various instrument suites measure electric and magnetic fields, energetic particles, and plasma composition. Each spacecraft has undergone extensive environmental testing to prepare it for its minimum 2-year mission. In this paper, we report on the extensive thermal vacuum testing campaign. The testing was performed at the Naval Research Laboratory utilizing the "Big Blue" vacuum chamber. A total of ten thermal vacuum tests were performed, including two chamber certifications, three dry runs, and five tests of the individual MMS observatories. During the test, the observatories were enclosed in a thermal enclosure known as the "hamster cage". The enclosure allowed for detailed thermal control of the various observatory zones but at the same time imposed additional contamination and system performance requirements. The environment inside the enclosure and the vacuum chamber was actively monitored by several QCMs, an RGA, and up to 18 ion gauges. Each spacecraft underwent a bakeout phase, which was followed by 4 thermal cycles. Unique aspects of the TV campaign included slow pump-downs with partial repressurizations, thruster firings, helium identification, and monitoring pressure spikes with ion gauges. Selected data from these TV tests are presented along with lessons learned.
14. Reverberation Mapping Results from MDM Observatory
DEFF Research Database (Denmark)
Denney, Kelly D.; Peterson, B. M.; Pogge, R. W.
2009-01-01
We present results from a multi-month reverberation mapping campaign undertaken primarily at MDM Observatory with supporting observations from around the world. We measure broad line region (BLR) radii and black hole masses for six objects. A velocity-resolved analysis of the H_beta response show...
15. Robotic Autonomous Observatories: A Historical Perspective
Directory of Open Access Journals (Sweden)
2010-01-01
This paper presents a historical introduction to the field of Robotic Astronomy, from the point of view of a scientist working in this field for more than a decade. The author discusses the basic definitions, the differing telescope control operating systems, observatory managers, as well as a few current scientific applications.
16. Geomagnetic secular variation at the African observatories
International Nuclear Information System (INIS)
Haile, T.
2002-10-01
Geomagnetic data from ten observatories in the African continent with time series data lengths of more than three decades have been analysed. All-day annual mean values of the D, H and Z components were used to study secular variations in the African region. The residuals in D, H and Z components obtained after removing polynomial fits have been examined in relation to the sunspot cycle. The occurrence of the 1969-1970 worldwide geomagnetic impulse at each observatory is studied. It is found that the secular variation in the field can be represented for most of the observatories with polynomials of second or third degree. Departures from these trends are observed over the Southern African region where strong local magnetic anomalies have been observed. The residuals in the geomagnetic field components have been shown to exhibit parallelism with the periods corresponding to the double solar cycle for some of the stations. A clear latitudinal distribution in the geomagnetic component that exhibits the 1969-70 jerk is shown. The jerk appears in the plots of the first differences in H for the southernmost observatories of Hermanus, Hartebeesthoek, and Tsumeb, while the Z plots show the jerk for near-equatorial and equatorial stations of Antananarivo, Luanda Belas, Bangui and Addis Ababa. There is some indication of this jerk in the first difference plots of D for the northern stations of M'Bour and Tamanrasset. The plots of D rather strongly suggest the presence of a jerk around 1980 at most of the stations. (author)
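The procedure described, low-degree polynomial fits to annual means followed by inspection of residuals and first differences, can be sketched in a few lines. The synthetic H-component series below, with an artificial slope change around 1979 standing in for a jerk, is an illustrative assumption, not observatory data:

```python
import numpy as np

years = np.arange(1960, 2000)
# Synthetic annual means [nT]: linear trend plus a slope break after 1979
H = 15000.0 + 20.0 * (years - 1960) \
    + np.where(years > 1979, -15.0 * (years - 1979), 0.0)

# Fit a low-degree secular-trend polynomial (centered years for conditioning)
yc = years - years.mean()
coeffs = np.polyfit(yc, H, deg=3)
residuals = H - np.polyval(coeffs, yc)

# First differences approximate the secular variation dH/dt; a geomagnetic
# jerk shows up as a break in their slope (here: 20 nT/yr before, 5 after).
sv = np.diff(H)
```

On real data the break is identified visually or by fitting two line segments to `sv`; the polynomial residuals are what get compared against the sunspot cycle.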
17. Astronomical Virtual Observatories Through International Collaboration
Directory of Open Access Journals (Sweden)
Masatoshi Ohishi
2010-03-01
Astronomical Virtual Observatories (VOs) are an emerging research environment for astronomy, and 16 countries and a region have funded the development of their VOs based on international standard protocols for interoperability. The 16 funded VO projects have established the International Virtual Observatory Alliance (http://www.ivoa.net/) to develop standard interoperable interfaces such as registry (metadata), data access, query languages, output format (VOTable), data model, application interface, and so on. The IVOA members have constructed their VO environments through the IVOA interfaces. The National Astronomical Observatory of Japan (NAOJ) started its VO project (Japanese Virtual Observatory - JVO) in 2002 and developed its VO system. We have succeeded in interoperating the latest JVO system with other VOs in the USA and Europe since December 2004. Data observed by the Subaru telescope, satellite data taken by JAXA/ISAS, etc., are connected to the JVO system. Successful interoperation of the JVO system with other VOs means that astronomers around the world will be able to utilize top-level data obtained by these telescopes from anywhere in the world at any time. The system design of the JVO system, experiences during our development, including problems with current standard protocols defined in the IVOA, and proposals to resolve these problems in the near future are described.
18. Lights go out at city observatory
CERN Multimedia
Armstrong, R
2003-01-01
Edinburgh's Royal Observatory is to close its doors to the public due to dwindling visitor numbers. The visitor centre will remain open to the general public for planned lectures and night-time observing sessions, but will cease to be open on a daily basis from next month.
19. Radioecological Observatories - Breeding Grounds for Innovative Research
Energy Technology Data Exchange (ETDEWEB)
Steiner, Martin; Urso, Laura; Wichterey, Karin; Willrodt, Christine [Bundesamt fuer Strahlenschutz - BfS, Willy-Brandt-Strasse 5, 38226 Salzgitter (Germany); Beresford, Nicholas A.; Howard, Brenda [NERC Centre for Ecology and Hydrology - CEH, Lancaster Environment Centre, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); Bradshaw, Clare; Stark, Karolina [Stockholms Universitet - SU, Universitetsvaegen 10, SE-10691 Stockholm (Sweden); Dowdall, Mark; Liland, Astrid [Norwegian Radiation Protection Authority - NRPA, P.O. Box 55, NO-1332 Oesteraas (Norway); Eyrolle-Boyer, Frederique; Guillevic, Jerome; Hinton, Thomas [Institut de Radioprotection et de Surete Nucleaire - IRSN, 31, Avenue de la Division Leclerc, 92260 Fontenay-aux-Roses (France); Gashchak, Sergey [Chornobyl Center for Nuclear Safety, Radioactive Waste and Radioecology - Chornobyl Center, 77th Gvardiiska Dyviiya str.7/1, 07100 Slavutych (Ukraine); Hutri, Kaisa-Leena; Ikaeheimonen, Tarja; Muikku, Maarit; Outola, Iisa [Radiation and Nuclear Safety Authority - STUK, P.O. Box 14, 00881 Helsinki (Finland); Michalik, Boguslaw [Glowny Instytut Gornictwa - GIG, Plac Gwarkow 1, 40-166 Katowice (Poland); Mora, Juan Carlos; Real, Almudena; Robles, Beatriz [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas - CIEMAT, Avenida complutense, 40, 28040 Madrid (Spain); Oughton, Deborah; Salbu, Brit [Norwegian University of Life Sciences - NMBU, P.O. Box 5003, NO-1432 Aas (Norway); Sweeck, Lieve [Studiecentrum voor Kernenergie/Centre d'Etude de l'Energie Nucleaire (SCK.CEN), Avenue Herrmann-Debroux 40, BE-1160 Brussels (Belgium); Yoschenko, Vasyl [National University of Life and Environmental Sciences of Ukraine (NUBiP of Ukraine), Herojiv Obrony st., 15, Kyiv-03041 (Ukraine)
2014-07-01
20. Development of Armenian-Georgian Virtual Observatory
Science.gov (United States)
Mickaelian, Areg; Kochiashvili, Nino; Astsatryan, Hrach; Harutyunian, Haik; Magakyan, Tigran; Chargeishvili, Ketevan; Natsvlishvili, Rezo; Kukhianidze, Vasil; Ramishvili, Giorgi; Sargsyan, Lusine; Sinamyan, Parandzem; Kochiashvili, Ia; Mikayelyan, Gor
2009-10-01
The Armenian-Georgian Virtual Observatory (ArGVO) project is the first initiative in the world to create a regional VO infrastructure based on national VO projects and a regional Grid. The Byurakan and Abastumani Astrophysical Observatories have been scientific partners since 1946, after the establishment of the Byurakan observatory. The Armenian VO project (ArVO) has been under development since 2005 and is a part of the International Virtual Observatory Alliance (IVOA). It is based on the Digitized First Byurakan Survey (DFBS, the digitized version of the famous Markarian survey) and other Armenian archival data. Similarly, the Georgian VO will be created to serve as a research environment to utilize the digitized Georgian plate archives. Therefore, one of the main goals for the creation of the regional VO is the digitization of the large number of plates preserved in the plate stacks of these two observatories. The total number of plates is more than 100,000 units. Observational programs of high importance have been selected and some 3000 plates will be digitized during the next two years; the priority is defined by the usefulness of the material for future science projects, such as searches for new objects, optical identifications of radio, IR, and X-ray sources, studies of variability and proper motions, etc. Once the digitized material is available in VO standards, a VO database will become active through the regional Grid infrastructure. This partnership is being carried out in the framework of the ISTC project A-1606 "Development of Armenian-Georgian Grid Infrastructure and Applications in the Fields of High Energy Physics, Astrophysics and Quantum Physics".
1. Volcano Monitoring in Ecuador: Three Decades of Continuous Progress of the Instituto Geofisico - Escuela Politecnica Nacional
Science.gov (United States)
Ruiz, M. C.; Yepes, H. A.; Hall, M. L.; Mothes, P. A.; Ramon, P.; Hidalgo, S.; Andrade, D.; Vallejo Vargas, S.; Steele, A. L.; Anzieta, J. C.; Ortiz, H. D.; Palacios, P.; Alvarado, A. P.; Enriquez, W.; Vasconez, F.; Vaca, M.; Arrais, S.; Viracucha, G.; Bernard, B.
2014-12-01
In 1988, the Instituto Geofisico (IG) began a permanent surveillance of Ecuadorian volcanoes, and due to activity on Guagua Pichincha, SP seismic stations and EDM control lines were then installed. Later, with the UNDRO and OAS projects, telemetered seismic monitoring was expanded to Tungurahua, Cotopaxi, Cuicocha, Chimborazo, Antisana, Cayambe, Cerro Negro, and Quilotoa volcanoes. In 1992 an agreement with the Instituto Ecuatoriano de Electrificacion strengthened the monitoring of Tungurahua and Cotopaxi volcanoes with real-time SP seismic networks and EDM lines. Thus, background activity levels became established, which proved helpful at the onset of the 1999 eruptive activity at Tungurahua and Guagua Pichincha. These eruptions had a notable impact on Baños and Quito. Unrest at Cotopaxi volcano was detected in 2001-2002, but waned. In 2002 Reventador began its eruptive period, which continues to the present and is closely monitored by the IG. In 2006 permanent BB seismic stations and infrasound sensors were installed at Tungurahua and Cotopaxi under a cooperative program supported by JICA, which allowed us to follow Tungurahua's climactic eruptions of 2006 and subsequent eruptions up to the present. Programs supported by the Ecuadorian Secretaria Nacional de Ciencia y Tecnologia and the Secretaria Nacional de Planificacion resulted in further expansion of the IG's monitoring infrastructure. Thermal and video imagery, SO2 emission monitoring, geochemical analyses, continuous GPS and tiltmeters, and micro-barometric surveillance have been incorporated. Sangay, Soche, Ninahuilca, Pululahua, and Fernandina, Cerro Azul, Sierra Negra, and Alcedo in the Galapagos Islands are now monitored in real-time. During this time, international cooperation with universities (Blaise Pascal & Nice-France, U. North Carolina, New Mexico Tech, Uppsala-Sweden, Nagoya, etc.), and research centers (USGS & UNAVCO-USA, IRD-France, NIED-Japan, SGC-Colombia, VAAC, MIROVA) has introduced
2. Operations of and Future Plans for the Pierre Auger Observatory
Energy Technology Data Exchange (ETDEWEB)
Abraham, J.; Abreu, P.; Aglietta, M.; Aguirre, C.; Ahn, E.J.; Allard, D.; Allekotte, I.; Allen, J.; Alvarez-Muniz, J.; Ambrosio, M.; Anchordoqui, L.
2009-06-01
These are presentations to be presented at the 31st International Cosmic Ray Conference, in Lodz, Poland during July 2009. It consists of the following presentations: (1) Performance and operation of the Surface Detectors of the Pierre Auger Observatory; (2) Extension of the Pierre Auger Observatory using high-elevation fluorescence telescopes (HEAT); (3) AMIGA - Auger Muons and Infill for the Ground Array of the Pierre Auger Observatory; (4) Radio detection of Cosmic Rays at the southern Auger Observatory; (5) Hardware Developments for the AMIGA enhancement at the Pierre Auger Observatory; (6) A simulation of the fluorescence detectors of the Pierre Auger Observatory using GEANT 4; (7) Education and Public Outreach at the Pierre Auger Observatory; (8) BATATA: A device to characterize the punch-through observed in underground muon detectors and to operate as a prototype for AMIGA; and (9) Progress with the Northern Part of the Pierre Auger Observatory.
3. The Paris Observatory is 350 years old
Science.gov (United States)
Lequeux, James
2017-01-01
The Paris Observatory is the oldest astronomical observatory that has worked without interruption from its foundation to the present day. The building designed by Claude Perrault is still in existence with few modifications, but of course other buildings have been added all along the centuries for housing new instruments and laboratories. In particular, a large dome was built on the terrace in 1847, with a 38-cm diameter telescope completed in 1857: both are still visible. The main initial purpose of the Observatory was to determine longitudes. This was achieved by Jean-Dominique Cassini using the eclipses of the satellites of Jupiter: a much better map of France was then produced using this method, which unfortunately does not work at sea. Incidentally, the observation of these eclipses led to the discovery in 1676 of the finite velocity of light by Cassini and Rømer. Cassini also discovered the differential rotation of Jupiter and four satellites of Saturn. Then, geodesy was to be the main activity of the Observatory for more than a century, culminating in the famous Cassini map of France completed around 1790. During the first half of the 19th century, under François Arago, the Observatory was at the centre of French physics, which then developed very rapidly. Arago initiated astrophysics in 1810 by showing that the Sun and stars are made of incandescent gas. In 1854, the new director, Urbain Le Verrier, put emphasis on astrometry and celestial mechanics, discovering in particular the anomalous advance of the perihelion of Mercury, which was later to be a proof of General Relativity. In 1858, Leon Foucault built the first modern reflecting telescopes with their silvered glass mirror. Le Verrier also created modern meteorology, including some primitive forecasts. The following period was not so bright, due to the enormous project of the Carte du Ciel, which took much of the forces of the Observatory for half a century with little scientific return. In
4. Brazil to Join the European Southern Observatory
Science.gov (United States)
2010-12-01
The Federative Republic of Brazil yesterday signed the formal accession agreement paving the way for it to become a Member State of the European Southern Observatory (ESO). Following government ratification, Brazil will become the fifteenth Member State and the first from outside Europe. On 29 December 2010, at a ceremony in Brasilia, the Brazilian Minister of Science and Technology, Sergio Machado Rezende, and the ESO Director General, Tim de Zeeuw, signed the formal accession agreement aiming to make Brazil a Member State of the European Southern Observatory. Brazil will become the fifteenth Member State and the first from outside Europe. Since the agreement means accession to an international convention, the agreement must now be submitted to the Brazilian Parliament for ratification [1]. The signing of the agreement followed the unanimous approval by the ESO Council during an extraordinary meeting on 21 December 2010. "Joining ESO will give new impetus to the development of science, technology and innovation in Brazil as part of the considerable efforts our government is making to keep the country advancing in these strategic areas," says Rezende. The European Southern Observatory has a long history of successful involvement with South America, ever since Chile was selected as the best site for its observatories in 1963. Until now, however, no non-European country has joined ESO as a Member State. "The membership of Brazil will give the vibrant Brazilian astronomical community full access to the most productive observatory in the world and open up opportunities for Brazilian high-tech industry to contribute to the European Extremely Large Telescope project. It will also bring new resources and skills to the organisation at the right time for them to make a major contribution to this exciting project," adds ESO Director General, Tim de Zeeuw. The European Extremely Large Telescope (E-ELT) telescope design phase was recently completed and a major review was
5. Capability of the HAWC Gamma-Ray Observatory for the Indirect Detection of Ultrahigh-Energy Neutrinos
Directory of Open Access Journals (Sweden)
Hermes León Vargas
2017-01-01
The detection of ultrahigh-energy neutrinos, with energies in the PeV range or above, is a topic of great interest in modern astroparticle physics. The importance comes from the fact that these neutrinos point back to the most energetic particle accelerators in the Universe and provide information about their underlying acceleration mechanisms. Atmospheric neutrinos are a background for these challenging measurements, but their rate is expected to be negligible above ≈1 PeV. In this work we describe the feasibility of studying ultrahigh-energy neutrinos based on the Earth-skimming technique, by detecting the charged leptons produced in neutrino-nucleon interactions in a high-mass target. We propose to detect the charged leptons, or their decay products, with the High Altitude Water Cherenkov (HAWC) observatory and to use the Pico de Orizaba volcano, the highest mountain in Mexico, as a large-mass target for the neutrino interactions. In this work we develop an estimate of the detection rate using a geometrical model to calculate the effective area of the observatory. Our results show that it may be feasible to perform measurements of the ultrahigh-energy neutrino flux of cosmic origin during the expected lifetime of the HAWC observatory.
6. Preliminary Volcano-Hazard Assessment for Gareloi Volcano, Gareloi Island, Alaska
Science.gov (United States)
Coombs, Michelle L.; McGimsey, Robert G.; Browne, Brandon L.
2008-01-01
Gareloi Volcano (178.794 degrees W and 51.790 degrees N) is located on Gareloi Island in the Delarof Islands group of the Aleutian Islands, about 2,000 kilometers west-southwest of Anchorage and about 150 kilometers west of Adak, the westernmost community in Alaska. This small (about 8x10 kilometer) volcano has been one of the most active in the Aleutians since its discovery by the Bering expedition in the 1740s, though because of its remote location, observations have been scant and many smaller eruptions may have gone unrecorded. Eruptions of Gareloi commonly produce ash clouds and lava flows. Scars on the flanks of the volcano and debris-avalanche deposits on the adjacent seafloor indicate that the volcano has produced large landslides in the past, possibly causing tsunamis. Such events are infrequent, occurring at most every few thousand years. The primary hazard from Gareloi is airborne clouds of ash that could affect aircraft. In this report, we summarize and describe the major volcanic hazards associated with Gareloi.
7. Volcano art at Hawaii Volcanoes National Park—A science perspective
Science.gov (United States)
2018-03-26
Long before landscape photography became common, artists sketched and painted scenes of faraway places for the masses. Throughout the 19th century, scientific expeditions to Hawaiʻi routinely employed artists to depict images for the people back home who had funded the exploration and for those with an interest in the newly discovered lands. In Hawaiʻi, artists portrayed the broad variety of people, plant and animal life, and landscapes, but a feature of singular interest was the volcanoes. Painters of early Hawaiian volcano landscapes created art that formed a cohesive body of work known as the “Volcano School” (Forbes, 1992). Jules Tavernier, Charles Furneaux, and D. Howard Hitchcock were probably the best known artists of this school, and their paintings can be found in galleries around the world. Their dramatic paintings were recognized as fine art but were also strong advertisements for tourists to visit Hawaiʻi. Many of these masterpieces are preserved in the Museum and Archive Collection of Hawaiʻi Volcanoes National Park, and in this report we have taken the opportunity to match the artwork with the approximate date and volcanological context of the scene.
8. Evolution of deep crustal magma structures beneath Mount Baekdu volcano (MBV), an intraplate volcano in northeast Asia
Science.gov (United States)
Rhie, J.; Kim, S.; Tkalcic, H.; Baag, S. Y.
2017-12-01
Heterogeneous features of magmatic structures beneath intraplate volcanoes are attributed to interactions between the ascending magma and lithospheric structures. Here, we investigate the evolution of crustal magmatic structures beneath Mount Baekdu volcano (MBV), which is one of the largest continental intraplate volcanoes in northeast Asia. Our seismic imaging shows a deeper Moho depth (~40 km) and relatively higher shear wave velocities (>3.8 km/s) at middle-to-lower crustal depths beneath the volcano. In addition, the pattern at the bottom of our model shows that the lithosphere beneath the MBV is shallower than in the surrounding region. We interpret the observations as a compositional double layering of mafic underplating and an overlying cooled felsic structure due to fractional crystallization of magma of asthenospheric origin. To achieve enhanced vertical and horizontal model coverage, we apply two approaches in this work: (1) a grid-search-based phase velocity measurement using real-coherency of ambient noise data and (2) a transdimensional Bayesian joint inversion using multiple ambient noise dispersion data.
9. Understanding cyclic seismicity and ground deformation patterns at volcanoes: Intriguing lessons from Tungurahua volcano, Ecuador
Science.gov (United States)
Neuberg, Jürgen W.; Collinson, Amy S. D.; Mothes, Patricia A.; Ruiz, Mario C.; Aguaiza, Santiago
2018-01-01
Cyclic seismicity and ground deformation patterns are observed on many volcanoes worldwide where seismic swarms and the tilt of the volcanic flanks provide sensitive tools to assess the state of volcanic activity. Ground deformation at active volcanoes is often interpreted as pressure changes in a magmatic reservoir, and tilt is simply translated accordingly into inflation and deflation of such a reservoir. Tilt data recorded by an instrument in the summit area of Tungurahua volcano in Ecuador, however, show an intriguing and unexpected behaviour on several occasions: prior to a Vulcanian explosion when a pressurisation of the system would be expected, the tilt signal declines significantly, hence indicating depressurisation. At the same time, seismicity increases drastically. Envisaging that such a pattern could carry the potential to forecast Vulcanian explosions on Tungurahua, we use numerical modelling and reproduce the observed tilt patterns in both space and time. We demonstrate that the tilt signal can be more easily explained as caused by shear stress due to viscous flow resistance, rather than by pressurisation of the magmatic plumbing system. In general, our numerical models prove that if magma shear viscosity and ascent rate are high enough, the resulting shear stress is sufficient to generate a tilt signal as observed on Tungurahua. Furthermore, we address the interdependence of tilt and seismicity through shear stress partitioning and suggest that a joint interpretation of tilt and seismicity can shed new light on the eruption potential of silicic volcanoes.
10. CALIPSO Borehole Instrumentation Project at Soufriere Hills Volcano, Montserrat, BWI: Data Acquisition, Telemetry, Integration, and Archival Systems
Science.gov (United States)
Mattioli, G. S.; Linde, A. T.; Sacks, I. S.; Malin, P. E.; Shalev, E.; Elsworth, D.; Hidayat, D.; Voight, B.; Young, S. R.; Dunkley, P. N.; Herd, R.; Norton, G.
2003-12-01
The CALIPSO Project (Caribbean Andesite Lava Island-volcano Precision Seismo-geodetic Observatory) has greatly enhanced the monitoring and scientific infrastructure at the Soufriere Hills Volcano, Montserrat, with the recent installation of an integrated array of borehole and surface geophysical instrumentation at four sites. Each site was designed to be sufficiently hardened to withstand extreme meteorological events (e.g. hurricanes) and to require only minimal routine maintenance over an expected observatory lifespan of >30 y. The sensor package at each site includes: a single-component, very broad band, Sacks-Evertson strainmeter; a three-component seismometer (response up to 1 kHz); a Pinnacle Technologies series 5000 tiltmeter; and a surface Ashtech u-Z CGPS station with choke ring antenna, SCIGN mount and radome. This instrument package is similar to that envisioned by the Plate Boundary Observatory for deployment on EarthScope target volcanoes in western North America, and thus the CALIPSO Project may be considered a prototype PBO installation with real field testing on a very active and dangerous volcano. Borehole sites were installed in series and data acquisition began immediately after the sensors were grouted into position at 200 m depth; the first was completed at Trants (5.8 km from the dome) in December 2002, followed by Air Studios (5.2 km), Geralds (9.4 km), and Olveston (7.0 km) in March 2003. Analog data from the strainmeter (50 Hz sync) and seismometer (200 Hz) were initially digitized and locally archived using RefTek 72A-07 data acquisition systems (DAS) on loan from the PASSCAL instrument pool. Data were downloaded manually to a laptop approximately every month from initial installation until August 2003, when new systems were installed. Approximately 0.2 Tb of raw data in SEGY format have already been acquired and are currently archived at UARK for analysis by the CALIPSO science team. The July 12th dome collapse and vulcanian explosion events were recorded at 3 of the 4
11. Volcano monitoring with an infrared camera: first insights from Villarrica Volcano
Science.gov (United States)
Rosas Sotomayor, Florencia; Amigo Ramos, Alvaro; Velasquez Vargas, Gabriela; Medina, Roxana; Thomas, Helen; Prata, Fred; Geoffroy, Carolina
2015-04-01
This contribution focuses on the first trials of almost 24/7 monitoring of Villarrica volcano with an infrared camera. Results must be compared with other SO2 remote sensing instruments, such as DOAS and the UV camera, for the daytime measurements. Infrared remote sensing of volcanic emissions is a fast and safe method to obtain gas abundances in volcanic plumes, in particular when access to the vent is difficult, during volcanic crises, and at night. In recent years, a ground-based infrared camera (Nicair) has been developed by Nicarnica Aviation, which quantifies SO2 and ash in volcanic plumes based on the infrared radiance at specific wavelengths through the application of filters. Three Nicair1 (first model) cameras have been acquired by the Geological Survey of Chile in order to study degassing of active volcanoes. Several trials with the instruments have been performed on northern Chilean volcanoes and have shown that the ranges of retrieved SO2 concentrations and fluxes are as expected. Measurements were also performed at Villarrica volcano, and a location to install a "fixed" camera, 8 km from the crater, was found: a coffee house with electrical power, a wifi network, polite and committed owners, and a full view of the volcano summit. The first measurements are being made and processed in order to obtain a full day and week of SO2 emissions, analyze data transfer and storage, improve remote control of the instrument and notebook in case of breakdown, add web-cam/GoPro support, and reach the goal of the project: to implement a fixed station to monitor and study Villarrica volcano with a Nicair1, integrating and comparing these results with other remote sensing instruments. This work also aims to strengthen bonds with the community by developing teaching material and giving talks to communicate volcanic hazards and other geoscience topics to the people who live "just around the corner" from one of the most active volcanoes
12. Imaging magma plumbing beneath Askja volcano, Iceland
Science.gov (United States)
Greenfield, Tim; White, Robert S.
2015-04-01
Volcanoes during repose periods are not commonly monitored by dense instrumentation networks and so activity during periods of unrest is difficult to put in context. We have operated a dense seismic network of 3-component, broadband instruments around Askja, a large central volcano in the Northern Volcanic Zone, Iceland, since 2006. Askja last erupted in 1961, with a relatively small basaltic lava flow. Since 1975 the central caldera has been subsiding and there has been no indication of volcanic activity. Despite this, Askja has been one of the more seismically active volcanoes in Iceland. The majority of these events are due to an extensive geothermal area within the caldera and tectonically induced earthquakes to the northeast which are not related to the magma plumbing system. More intriguing are the less numerous deeper earthquakes at 12-24 km depth, situated in three distinct areas within the volcanic system. These earthquakes often show a frequency content which is lower than the shallower activity, but they still show strong P and S wave arrivals indicative of brittle failure, despite their location being well below the brittle-ductile boundary, which, in Askja, is ~7 km b.s.l. These earthquakes indicate the presence of melt moving or degassing at depth while the volcano is not inflating, as only high strain rates or increased pore fluid pressures would cause brittle fracture in what is normally an aseismic region in the ductile zone. The lower frequency content must be the result of a slower source time function as earthquakes which are both high frequency and low frequency come from the same cluster, thereby discounting a highly attenuating lower crust. To image the plumbing system beneath Askja, local and regional earthquakes have been used as sources to solve for the velocity structure beneath the volcano. Travel-time tables were created using a finite difference technique and the residuals were used to solve simultaneously for both the earthquake locations
13. Mineralogical and geochemical study of mud volcanoes in north ...
African Journals Online (AJOL)
The Gulf of Cadiz has been one of the most interesting areas for studying mud volcanoes and structures related to cold fluid seeps since their discovery in 1999. In this study, we present results from gravity cores collected from the Ginsburg and Meknes mud volcanoes and from a circular structure located in the Gulf of Cadiz (North Atlantic ...
14. Fuego Volcano eruption (Guatemala, 1974): evidence of a tertiary fragmentation?
International Nuclear Information System (INIS)
Brenes-Andre, Jose
2014-01-01
Values for mode and dispersion calculated with the SFT (Sequential Fragmentation/Transport) model were analyzed for the Fuego Volcano eruption (Guatemala, 1974). The analysis showed that the ideas initially proposed for Irazu can be applied to Fuego Volcano, and experimental evidence was found corroborating the existence of tertiary fragmentations. (author)
15. 36 CFR 7.25 - Hawaii Volcanoes National Park.
Science.gov (United States)
2010-07-01
36 Parks, Forests, and Public Property; NATIONAL PARK SERVICE, DEPARTMENT OF THE INTERIOR; SPECIAL REGULATIONS, AREAS OF THE NATIONAL PARK SYSTEM; § 7.25 Hawaii Volcanoes National Park. (a...
16. Using Google Earth to Study the Basic Characteristics of Volcanoes
Science.gov (United States)
Schipper, Stacia; Mattox, Stephen
2010-01-01
Landforms, natural hazards, and the change in the Earth over time are common material in state and national standards. Volcanoes exemplify these standards and readily capture the interest and imagination of students. With a minimum of training, students can recognize erupted materials and types of volcanoes; in turn, students can relate these…
17. Volcano ecology: Disturbance characteristics and assembly of biological communities
Science.gov (United States)
Volcanic eruptions are powerful expressions of Earth’s geophysical forces which have shaped and influenced ecological systems since the earliest days of life. The study of the interactions of volcanoes and ecosystems, termed volcano ecology, focuses on the ecological responses of organisms and biolo...
18. Copahue volcano and its regional magmatic setting
Science.gov (United States)
Varekamp, J C; Zareski, J E; Camfield, L M; Todd, Erin
2016-01-01
Copahue volcano (Province of Neuquen, Argentina) has produced lavas and strombolian deposits over several hundred thousand years, building a rounded volcano with a 3 km elevation. The products are mainly basaltic andesites, with the 2000–2012 eruptive products the most mafic. The geochemistry of Copahue products is compared with those of the main Andes arc (Llaima, Callaqui, Tolhuaca), the older Caviahue volcano directly east of Copahue, and the back arc volcanics of the Loncopue graben. The Caviahue rocks resemble the main Andes arc suite, whereas the Copahue rocks are characterized by lower Fe and Ti contents and higher incompatible element concentrations. The rocks have negative Nb-Ta anomalies, modest enrichments in radiogenic Sr and Pb isotope ratios and slightly depleted Nd isotope ratios. The combined trace element and isotopic data indicate that Copahue magmas formed in a relatively dry mantle environment, with melting of a subducted sediment residue. The back arc basalts show a wide variation in isotopic composition, have similar water contents as the Copahue magmas and show evidence for a subducted sedimentary component in their source regions. The low 206Pb/204Pb of some backarc lava flows suggests the presence of a second endmember with an EM1 flavor in its source. The overall magma genesis is explained within the context of a subducted slab with sediment that gradually loses water, then water-mobile elements, and then switches to sediment melt extracts deeper down in the subduction zone. With the change in element extraction mechanism with depth comes a depletion and fractionation of the subducted complex that is reflected in the isotope and trace element signatures of the products from the main arc to Copahue to the back arc basalts.
19. Isotopic evolution of Mauna Loa volcano
International Nuclear Information System (INIS)
Kurz, M.D.; Kammer, D.P.
1991-01-01
In an effort to understand the temporal helium isotopic variations in Mauna Loa volcano, we have measured helium, strontium and lead isotopes in a suite of Mauna Loa lavas that span most of the subaerial eruptive history of the volcano. The lavas range in age from historical flows to Ninole Basalt, which is thought to be several hundred thousand years old. Most of the samples younger than 30 ka (Kau Basalt) are radiocarbon-dated flows, while the samples older than 30 ka are stratigraphically controlled (Kahuku and Ninole Basalt). The data reveal a striking change in the geochemistry of the lavas approximately 10 ka before present. The lavas older than 10 ka are characterized by high 3He/4He (≅16-20 times atmospheric), higher 206Pb/204Pb (≅18.2), and lower 87Sr/86Sr (≅0.70365) ratios than the younger Kau samples (having He, Pb and Sr ratios of approximately 8.5 x atmospheric, 18.1 and 0.70390, respectively). The historical lavas are distinct in having intermediate Sr and Pb isotopic compositions, with 3He/4He ratios similar to the other young Kau basalt (≅8.5 x atmospheric). The isotopic variations are on a shorter time scale (100 to 10,000 years) than has previously been observed for Hawaiian volcanoes, and demonstrate the importance of geochronology and stratigraphy to geochemical studies. The data show consistency between all three isotope systems, which suggests that the variations are not related to magma chamber degassing processes and that helium is not decoupled from the other isotopes. However, the complex temporal evolution suggests that three distinct mantle sources are required to explain the isotopic data. Most of the Mauna Loa isotopic variations could be explained by mixing between a plume-type source, similar to Loihi, and an asthenospheric source with a helium isotopic composition close to MORB and elevated Sr isotopic values. (orig./WL)
20. Monte Carlo Volcano Seismic Moment Tensors
Science.gov (United States)
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single-force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well-resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
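The Monte Carlo exploration of model space described in this abstract can be illustrated with a toy inversion. This is a hedged sketch, not the authors' method: the forward model is a made-up 1/distance amplitude decay, the station layout and source are hypothetical, and only a centroid location plus a scalar moment are randomized (rather than full moment-tensor eigenvectors). The idea it demonstrates is the same: randomly sample model space, and let the ensemble of low-misfit samples show which parameters are robustly resolved.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical station geometry (x, y in km) and a "true" point source.
stations = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0], [2.0, 5.0]])
true_src = np.array([2.0, 2.0])
true_m0 = 3.0  # scalar moment (arbitrary units)

def forward(src, m0):
    """Toy forward model: amplitude ~ m0 / distance (geometrical spreading only)."""
    r = np.linalg.norm(stations - src, axis=1)
    return m0 / np.maximum(r, 0.1)

obs = forward(true_src, true_m0)

# Monte Carlo sampling of model space: random centroid + random scalar moment.
n_samples = 20000
srcs = rng.uniform(0.0, 5.0, size=(n_samples, 2))
m0s = rng.uniform(0.5, 6.0, size=n_samples)
misfit = np.array([np.sum((forward(s, m) - obs) ** 2) for s, m in zip(srcs, m0s)])

# Keep the ensemble of acceptable models; its spread shows what is resolved.
keep = misfit <= np.percentile(misfit, 0.5)
best = srcs[np.argmin(misfit)]
spread = srcs[keep].std(axis=0)
print("best centroid:", best.round(2), "ensemble spread:", spread.round(2))
```

The best-fitting sample lands near the true centroid, while the standard deviation of the accepted ensemble plays the role of the "robustly resolved elements of model space" the abstract refers to: a small spread in a parameter means the data constrain it, a large spread means they do not.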
1. Volcano morphometry and volume scaling on Venus
Science.gov (United States)
Garvin, J. B.; Williams, R. S., Jr.
1994-01-01
A broad variety of volcanic edifices have been observed on Venus. They range in size from the limits of resolution of the Magellan SAR (i.e., hundreds of meters) to landforms over 500 km in basal diameter. One of the key questions pertaining to volcanism on Venus concerns the volume eruption rate (VER), which is linked to crustal productivity over time. While less than 3 percent of the surface area of Venus is manifested as discrete edifices larger than 50 km in diameter, a substantial component of the total crustal volume of the planet over the past 0.5 Ga is related to isolated volcanoes, which are certainly more easily studied than the relatively diffusely defined plains volcanic flow units. Thus, we have focused our efforts on constraining the volume productivity of major volcanic edifices larger than 100 km in basal diameter. Our approach takes advantage of the topographic data returned by Magellan, as well as our database of morphometric statistics for the 20 best-known lava shields of Iceland, plus Mauna Loa of Hawaii. As part of this investigation, we have quantified the detailed morphometry of nearly 50 intermediate- to large-scale edifices, with particular attention to their shape systematics. We found that a set of venusian edifices that includes Maat, Sapas, Tepev, Sif, Gula, a feature at 46 deg S, 215 deg E, as well as the shield-like structure at 10 deg N, 275 deg E, is broadly representative of the approx. 400 volcanic landforms larger than 50 km. The cross-sectional shapes of these 7 representative edifices range from flattened cones (i.e., Sif), similar to classic terrestrial lava shields such as Mauna Loa and Skjaldbreidur, to rather dome-like structures, which include Maat and Sapas. The majority of the larger volcanoes surveyed as part of our study displayed paraboloidal cross-sectional topographies, in sharp contrast with the cone-like appearance of most simple terrestrial lava shields. In order to more fully explore the
2. The deep structure of Axial Volcano
Science.gov (United States)
West, Michael Edwin
The subsurface structure of Axial Volcano, near the intersection of the Juan de Fuca Ridge and the Cobb-Eickelberg seamount chain in the northeast Pacific, is imaged from an active source seismic experiment. At a depth of 2.25 to 3.5 km beneath Axial lies an 8 km x 12 km region of very low seismic velocities that can only be explained by the presence of magma. In the center of this magma storage chamber at 2-3.5 km below sea floor, the crust is at least 10-20% melt. At depths of 4-5 km there is evidence of additional low concentrations of magma (a few percent) over a larger area. In total, 5-11 km3 of magma are stored in the mid-crust beneath Axial. This is more melt than has been positively identified under any basaltic volcano on Earth. It is also far more than the 0.1-0.2 km3 emplaced during the 1998 eruption. The implied residence time in the magma reservoir of a few hundred to a few thousand years agrees with geochemical trends which suggest prolonged storage and mixing of magmas. The large volume of melt bolsters previous observations that Axial provides much of the material to create crust along its 50 km rift zones. A high velocity ring-shaped feature sits above the magma chamber just outside the caldera walls. This feature is believed to be the result of repeated dike injections from the magma body to the surface during the construction of the volcanic edifice. A rapid change in crustal thickness from 8 to 11 km within 15 km of the caldera implies focused delivery of melt from the mantle. The high flux of magma suggests that melting occurs deeper in the mantle than along the nearby ridge. Melt supply to the volcano is not connected to any plumbing system associated with the adjacent segments of the Juan de Fuca Ridge. This suggests that, despite Axial's proximity to the ridge, the Cobb hot spot currently drives the supply of melt to the volcano.
3. Cataloging tremor at Kilauea Volcano, Hawaii
Science.gov (United States)
Thelen, W. A.; Wech, A.
2013-12-01
Tremor is a ubiquitous seismic feature on Kilauea volcano, emanating from at least three distinct sources. At depth, intermittent tremor and earthquakes thought to be associated with the underlying plumbing system of Kilauea (Aki and Koyanagi, 1981) occur approximately 40 km below and 40 km SW of the summit. At the summit of the volcano, nearly continuous tremor is recorded close to a persistently degassing lava lake, which has been present since 2008. Much of this tremor is correlated with spattering at the lake surface, but tremor also occurs in the absence of spattering, and was observed at the summit of the volcano prior to the appearance of the lava lake, predominantly in association with inflation/deflation events. The third known source of tremor is in the area of Puu Oo, a vent that has been active since 1983. The exact source location and depth are poorly constrained for each of these sources. Consistently tracking the occurrence and location of tremor in these areas through time will improve our understanding of the plumbing geometry beneath Kilauea volcano and help identify precursory patterns in tremor leading to changes in eruptive activity. The continuous and emergent nature of tremor precludes the use of traditional earthquake techniques for automatic detection and location of seismicity. We implement the method of Wech and Creager (2008) to both detect and localize tremor seismicity in the three regions described above. The technique uses an envelope cross-correlation method in 5-minute windows that maximizes tremor signal coherency among seismic stations. The catalog is currently being built in near-realtime, with plans to extend the analysis to the past as time and continuous data availability permit. This automated detection and localization method has relatively poor depth constraints due to the construction of the envelope function. Nevertheless, the epicenters distinguish activity among the different source regions and serve as
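The envelope cross-correlation idea used in this abstract (after Wech and Creager, 2008) can be sketched in a few lines. This is a simplified illustration, not the authors' code: the envelope is a rectified, moving-average-smoothed trace rather than a Hilbert-transform envelope, the two-station record is synthetic, and the window and lag limits are hypothetical. Because tremor is emergent and lacks impulsive phases, the envelopes, not the raw waveforms, are correlated; the lag that maximizes their coherency gives the inter-station delay used for localization.

```python
import numpy as np

def envelope(trace, win=50):
    """Simplified envelope: rectified trace smoothed with a moving average
    (a stand-in for the Hilbert-transform envelope used in practice)."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(trace), kernel, mode="same")

def best_lag(env_a, env_b, max_lag=100):
    """Lag (in samples) of station B relative to station A that maximizes
    the normalized cross-correlation of the two envelopes."""
    best, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = env_a[:len(env_a) - lag], env_b[lag:]
        else:
            a, b = env_a[-lag:], env_b[:len(env_b) + lag]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best, best_corr = lag, corr
    return best, best_corr

# Synthetic test: the same emergent burst arrives 30 samples later at station B.
rng = np.random.default_rng(0)
n = 2000
burst = np.zeros(n)
burst[800:1200] = rng.normal(0, 1, 400) * np.hanning(400)
sta_a = burst + 0.1 * rng.normal(0, 1, n)
sta_b = np.roll(burst, 30) + 0.1 * rng.normal(0, 1, n)
lag, corr = best_lag(envelope(sta_a), envelope(sta_b))
print("lag:", lag, "coherency:", round(corr, 2))
```

Repeating this over many station pairs in sliding 5-minute windows, and grid-searching for the source position whose predicted delays best match the measured lags, is the essence of the detection-and-localization scheme; the smooth envelope is also why depth is poorly constrained, as the abstract notes.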
4. Geology of El Chichon volcano, Chiapas, Mexico
Science.gov (United States)
Duffield, Wendell A.; Tilling, Robert I.; Canul, Rene
1984-03-01
The (pre-1982) 850-m-high andesitic stratovolcano El Chichón, active during Pleistocene and Holocene time, is located in rugged, densely forested terrain in north-central Chiapas, México. The nearest neighboring Holocene volcanoes are 275 km and 200 km to the southeast and northwest, respectively. El Chichón is built on Tertiary siltstone and sandstone, underlain by Cretaceous dolomitic limestone; a 4-km-deep bore hole near the east base of the volcano penetrated this limestone and continued 770 m into a sequence of Jurassic or Cretaceous evaporitic anhydrite and halite. The basement rocks are folded into generally northwest-trending anticlines and synclines. El Chichón is built over a small dome-like structure superposed on a syncline, and this structure may reflect cumulative deformation related to growth of a crustal magma reservoir beneath the volcano. The cone of El Chichón consists almost entirely of pyroclastic rocks. The pre-1982 cone is marked by a 1200-m-diameter (explosion?) crater on the southwest flank and a 1600-m-diameter crater apparently of similar origin at the summit; a lava dome partly fills each crater. The timing of cone and dome growth is poorly known. Field evidence indicates that the flank dome is older than the summit dome, and K-Ar ages from samples high on the cone suggest that the flank dome is older than about 276,000 years. At least three pyroclastic eruptions have occurred during the past 1250 radiocarbon years. Nearly all of the pyroclastic and dome rocks are moderately to highly porphyritic andesite, with plagioclase, hornblende and clinopyroxene the most common phenocrysts. Geologists who mapped El Chichón in 1980 and 1981 warned that the volcano posed a substantial hazard to the surrounding region. This warning was proven to be prophetic by violent eruptions that occurred in March and April of 1982. These eruptions blasted away nearly all of the summit dome, blanketed the surrounding region with tephra, and sent pyroclastic
5. Degassing Processes at Persistently Active Explosive Volcanoes
Science.gov (United States)
Smekens, Jean-Francois
Among volcanic gases, sulfur dioxide (SO2) is by far the most commonly measured. More than a monitoring proxy for volcanic degassing, SO2 has the potential to alter climate patterns. Persistently active explosive volcanoes are characterized by short explosive bursts, which often occur at periodic intervals numerous times per day, spanning years to decades. SO2 emissions at those volcanoes are poorly constrained, in large part because the current satellite monitoring techniques are unable to detect or quantify plumes of low concentration in the troposphere. Eruption plumes also often show high concentrations of ash and/or aerosols, which further inhibit the detection methods. In this work I focus on quantifying volcanic gas emissions at persistently active explosive volcanoes and their variations over short timescales (minutes to hours), in order to document their contribution to the natural SO2 flux as well as investigate the physical processes that control their behavior. In order to make these measurements, I first develop and assemble a UV ground-based instrument, and validate it against an independently measured source of SO2 at a coal-burning power plant in Arizona. I establish a measurement protocol, demonstrate that the instrument measures SO2 fluxes reliably, and then apply it at Semeru (Indonesia), a volcano that has been producing cycles of repeated explosions with periods of minutes to hours for the past several decades. Semeru produces an average of 21-71 tons of SO2 per day, amounting to a yearly output of 8-26 kt. Using the Semeru data, along with a 1-D transient numerical model of magma ascent, I test the validity of a model in which a viscous plug at the top of the conduit produces cycles of eruption and gas release. I find that this can be a valid hypothesis to explain the observed patterns of degassing at Semeru. Periodic behavior in such a system occurs for a very narrow range of conditions, for which the mass balance between magma flux and open-system gas escape repeatedly
6. Mud Volcanoes as Exploration Targets on Mars
Science.gov (United States)
Allen, Carlton C.; Oehler, Dorothy Z.
2010-01-01
Tens of thousands of high-albedo mounds occur across the southern part of the Acidalia impact basin on Mars. These structures have geologic, physical, mineralogic, and morphologic characteristics consistent with an origin from a sedimentary process similar to terrestrial mud volcanism. The potential for mud volcanism in the Northern Plains of Mars has been recognized for some time, with candidate mud volcanoes reported from Utopia, Isidis, northern Borealis, Scandia, and the Chryse-Acidalia region. We have proposed that the profusion of mounds in Acidalia is a consequence of this basin's unique geologic setting as the depocenter for the fine fraction of sediments delivered by the outflow channels from the highlands.
7. Mud Volcanoes of Trinidad as Astrobiological Analogs for Martian Environments
Directory of Open Access Journals (Sweden)
2014-10-01
Eleven onshore mud volcanoes in the southern region of Trinidad have been studied as analog habitats for possible microbial life on Mars. The profiles of the 11 mud volcanoes are presented in terms of their physical, chemical, mineralogical, and soil properties. The mud volcanoes sampled all emitted methane gas consistently at 3% volume. The average pH for the mud volcanic soil was 7.98. The average Cation Exchange Capacity (CEC) was found to be 2.16 kg/mol, and the average Percentage Water Content was 34.5%. Samples from three of the volcanoes, (i) Digity, (ii) Piparo, and (iii) Devil's Woodyard, were used to culture bacterial colonies under anaerobic conditions indicating possible presence of methanogenic microorganisms. The Trinidad mud volcanoes can serve as analogs for the Martian environment due to similar geological features found extensively on Mars in Acidalia Planitia and the Arabia Terra region.
8. Measurements of radon and chemical elements: Popocatepetl volcano
International Nuclear Information System (INIS)
Pena, P.; Segovia, N.; Lopez, B.; Reyes, A.V.; Armienta, M.A.; Valdes, C.; Mena, M.; Seidel, J.L.; Monnin, M.
2002-01-01
The Popocatepetl volcano is a high-risk volcano located 60 km from Mexico City. Radon measurements in soil were carried out at two fixed stations on the northern slope of the volcano. In addition, the radon content, major chemical elements and trace elements in water samples from three springs were studied. Soil radon was determined with solid-state nuclear track detectors. Radon in groundwater was evaluated by the liquid scintillation method and corroborated with an AlphaGuard instrument. The major chemical elements were determined with conventional chemical methods and the trace elements were measured using ICP-MS equipment. The soil radon levels were low, indicating moderate diffusion of the gas across the slope of the volcano. Radon in groundwater showed few changes in relation to the activity of the volcano. The major and trace chemical elements showed stable behavior during the sampling period. (Author)
9. Mud Volcanoes of Trinidad as Astrobiological Analogs for Martian Environments
Science.gov (United States)
Hosein, Riad; Haque, Shirin; Beckles, Denise M.
2014-01-01
Eleven onshore mud volcanoes in the southern region of Trinidad have been studied as analog habitats for possible microbial life on Mars. The profiles of the 11 mud volcanoes are presented in terms of their physical, chemical, mineralogical, and soil properties. The mud volcanoes sampled all emitted methane gas consistently at 3% volume. The average pH for the mud volcanic soil was 7.98. The average Cation Exchange Capacity (CEC) was found to be 2.16 kg/mol, and the average Percentage Water Content was 34.5%. Samples from three of the volcanoes, (i) Digity; (ii) Piparo and (iii) Devil’s Woodyard were used to culture bacterial colonies under anaerobic conditions indicating possible presence of methanogenic microorganisms. The Trinidad mud volcanoes can serve as analogs for the Martian environment due to similar geological features found extensively on Mars in Acidalia Planitia and the Arabia Terra region. PMID:25370529
10. Tsunamis generated by eruptions from Mount St. Augustine Volcano, Alaska.
Science.gov (United States)
Kienle, J; Kowalik, Z; Murty, T S
1987-06-12
During an eruption of the Alaskan volcano Mount St. Augustine in the spring of 1986, there was concern about the possibility that a tsunami might be generated by the collapse of a portion of the volcano into the shallow water of Cook Inlet. A similar edifice collapse of the volcano and ensuing sea wave occurred during an eruption in 1883. Other sea waves resulting in great loss of life and property have been generated by the eruption of coastal volcanoes around the world. Although Mount St. Augustine remained intact during this eruptive cycle, a possible recurrence of the 1883 events spurred a numerical simulation of the 1883 sea wave. This simulation, which yielded a forecast of potential wave heights and travel times, was based on a method that could be applied generally to other coastal volcanoes.
11. Mud volcanoes of Trinidad as astrobiological analogs for Martian environments.
Science.gov (United States)
Hosein, Riad; Haque, Shirin; Beckles, Denise M
2014-10-13
Eleven onshore mud volcanoes in the southern region of Trinidad have been studied as analog habitats for possible microbial life on Mars. The profiles of the 11 mud volcanoes are presented in terms of their physical, chemical, mineralogical, and soil properties. The mud volcanoes sampled all emitted methane gas consistently at 3% volume. The average pH for the mud volcanic soil was 7.98. The average Cation Exchange Capacity (CEC) was found to be 2.16 kg/mol, and the average Percentage Water Content was 34.5%. Samples from three of the volcanoes, (i) Digity; (ii) Piparo and (iii) Devil's Woodyard were used to culture bacterial colonies under anaerobic conditions indicating possible presence of methanogenic microorganisms. The Trinidad mud volcanoes can serve as analogs for the Martian environment due to similar geological features found extensively on Mars in Acidalia Planitia and the Arabia Terra region.
12. Establishment, test and evaluation of a prototype volcano surveillance system
Science.gov (United States)
Ward, P. L.; Eaton, J. P.; Endo, E.; Harlow, D.; Marquez, D.; Allen, R.
1973-01-01
A volcano-surveillance system utilizing 23 multilevel earthquake counters and 6 biaxial borehole tiltmeters is being installed and tested on 15 volcanoes in 4 States and 4 foreign countries. The purpose of this system is to give early warning when apparently dormant volcanoes are becoming active. The data are relayed through the ERTS-Data Collection System to Menlo Park for analysis. Installation was completed in 1972 on the volcanoes St. Augustine and Iliamna in Alaska, Kilauea in Hawaii, Baker, Rainier and St. Helens in Washington, Lassen in California, and at a site near Reykjavik, Iceland. Installation continues and should be completed in April 1973 on the volcanoes Santiaguito, Fuego, Agua and Pacaya in Guatemala, Izalco in El Salvador and San Cristobal, Telica and Cerro Negro in Nicaragua.
13. Data standards for the international virtual observatory
Directory of Open Access Journals (Sweden)
R J Hanisch
2006-11-01
A primary goal of the International Virtual Observatory Alliance, which brings together Virtual Observatory Projects from 16 national and international development projects, is to develop, evaluate, test, and agree upon standards for astronomical data formatting, data discovery, and data delivery. In the three years that the IVOA has been in existence, substantial progress has been made on standards for tabular data, imaging data, spectroscopic data, and large-scale databases and on managing the metadata that describe data collections and data access services. In this paper, I describe how the IVOA operates and give my views as to why such a broadly based international collaboration has been able to make such rapid progress.
14. Beyond the Observatory: Reflections on the Centennial
Science.gov (United States)
Devorkin, D. H.
1999-05-01
One of the many unexpected side-benefits of acting as editor of the AAS centennial volume was the chance to take a fresh look at some of the personalities who helped to shape the American Astronomical Society. A common characteristic of these people was their energy, compassion and drive to go "Beyond the Observatory," to borrow a phrase from Harlow Shapley. But what did going 'beyond the observatory' mean to Shapley, or to the others who shaped and maintained the Society in its first one hundred years of life? Just as the discipline of astronomy has changed in profound ways in the past century, so has the American Astronomical Society changed, along with the people who have been its leaders and its sustainers and the culture that has fostered it. The Centennial meeting of the Society offers a chance to reflect on the people who have given American astronomy its sense of community identity.
15. The STELLA Robotic Observatory on Tenerife
Directory of Open Access Journals (Sweden)
Klaus G. Strassmeier
2010-01-01
The Astrophysical Institute Potsdam (AIP) and the Instituto de Astrofísica de Canarias (IAC) inaugurated the robotic telescopes STELLA-I and STELLA-II (STELLar Activity) on Tenerife on May 18, 2006. The observatory is located on the Izaña ridge at an elevation of 2400 m near the German Vacuum Tower Telescope. STELLA consists of two 1.2 m alt-az telescopes. One telescope fiber-feeds a bench-mounted high-resolution echelle spectrograph while the other feeds a wide-field imaging photometer. Both scopes work autonomously by means of artificial intelligence. Not only are the telescopes automated, but the entire observatory operates like a robot and does not require any human presence on site.
16. High Energy Astronomy Observatory (HEAO)-2
Science.gov (United States)
1982-01-01
This artist's concept depicts the High Energy Astronomy Observatory (HEAO)-2 in orbit. The HEAO-2, the first imaging and largest x-ray telescope built to date, was capable of producing actual photographs of x-ray objects. Shortly after launch, the HEAO-2 was nicknamed the Einstein Observatory by its scientific experimenters in honor of the centennial of the birth of Albert Einstein, whose concepts of relativity and gravitation have influenced much of modern astrophysics, particularly x-ray astronomy. The HEAO-2, designed and developed by TRW, Inc. under the project management of the Marshall Space Flight Center, was launched aboard an Atlas/Centaur launch vehicle on November 13, 1978. The HEAO-2 was originally identified as HEAO-B but the designation was changed once the spacecraft achieved orbit.
17. Observatory Magnetometer In-Situ Calibration
Directory of Open Access Journals (Sweden)
A Marusenkov
2011-07-01
An experimental validation of the in-situ calibration procedure, which allows estimating parameters of observatory magnetometers (scale factors, sensor misalignment) without interrupting their operation, is presented. In order to control the validity of the procedure, the records provided by two magnetometers calibrated independently in a coil system have been processed. The in-situ estimations of the parameters are in very good agreement with the values provided by the coil system calibration.
18. From AISR to the Virtual Observatory
Science.gov (United States)
Szalay, Alexander S.
2014-01-01
The talk will provide a retrospective on important results enabled by the NASA AISR program. The program had a unique approach to funding research at the intersection of astrophysics, applied computer science and statistics. It had an interdisciplinary angle, encouraged high risk, high return projects. Without this program the Virtual Observatory would have never been started. During its existence the program has funded some of the most innovative applied computer science projects in astrophysics.
19. Utilizing Internet Technologies in Observatory Control Systems
Science.gov (United States)
Cording, Dean
2002-12-01
The 'Internet boom' of the past few years has spurred the development of a number of technologies to provide services such as secure communications, reliable messaging, information publishing and application distribution for commercial applications. Over the same period, a new generation of computer languages has also developed to provide object-oriented design and development, improved reliability, and cross-platform compatibility. Whilst the business models of the 'dot.com' era proved to be largely unviable, the technologies that they were based upon have survived and have matured to the point where they can now be utilized to build secure, robust and complete observatory control systems. This paper will describe how Electro Optic Systems has utilized these technologies in the development of its third-generation Robotic Observatory Control System (ROCS). ROCS provides an extremely flexible configuration capability within a control system structure to provide truly autonomous robotic observatory operation, including observation scheduling. ROCS was built using Internet technologies such as Java, Java Message Service (JMS), Lightweight Directory Access Protocol (LDAP), Secure Sockets Layer (SSL), eXtensible Markup Language (XML), Hypertext Transfer Protocol (HTTP) and Java Web Start. ROCS was designed to be capable of controlling all aspects of an observatory and to be reconfigurable to handle changing equipment configurations or user requirements without the need for an expert computer programmer. ROCS consists of many small components, each designed to perform a specific task, with the configuration of the system specified using a simple meta language. The use of small components facilitates testing and makes it possible to prove that the system is correct.
20. The architecture of LAMOST observatory control system
International Nuclear Information System (INIS)
Wang Jian; Jin Ge; Yu Xiaoqi; Wan Changsheng; Hao Likai; Li Xihua
2005-01-01
The design of the architecture is one of the most important parts in the development of the Observatory Control System (OCS) for LAMOST. Given the complexity of LAMOST, the long development time, and the long life-cycle of the OCS, and drawing on many kinds of architectural patterns, the architecture of the OCS was established as a component-based layered system using patterns such as MVC and proxy. (authors)
1. Technology Development for a Neutrino Astrophysical Observatory
International Nuclear Information System (INIS)
Chaloupka, V.; Cole, T.; Crawford, H.J.; He, Y.D.; Jackson, S.; Kleinfelder, S.; Lai, K.W.; Learned, J.; Ling, J.; Liu, D.; Lowder, D.; Moorhead, M.; Morookian, J.M.; Nygren, D.R.; Price, P.B.; Richards, A.; Shapiro, G.; Shen, B.; Smoot, George F.; Stokstad, R.G.; VanDalen, G.; Wilkes, J.; Wright, F.; Young, K.
1996-01-01
We propose a set of technology developments relevant to the design of an optimized Cerenkov detector for the study of neutrino interactions of astrophysical interest. Emphasis is placed on signal processing innovations that significantly enhance the quality of primary data. These technical advances, combined with field experience from a follow-on test deployment, are intended to provide a basis for the engineering design of a kilometer-scale Neutrino Astrophysical Observatory.
2. Translating Volcano Hazards Research in the Cascades Into Community Preparedness
Science.gov (United States)
Ewert, J. W.; Driedger, C. L.
2015-12-01
Research by the science community into volcanic histories and physical processes at Cascade volcanoes in the states of Washington, Oregon, and California has been ongoing for over a century. Eruptions in the 20th century at Lassen Peak and Mount St. Helens demonstrated the active nature of Cascade volcanoes; the 1980 eruption of Mount St. Helens was a defining moment in modern volcanology. The first modern volcano hazards assessments were produced by the USGS for some Cascade volcanoes in the 1960s. A rich scientific literature exists, much of which addresses hazards at these active volcanoes. That said, community awareness, planning, and preparation for eruptions generally do not occur as a result of hazard analyses published in scientific papers, but through direct communication with scientists. Relative to other natural hazards, volcanic eruptions (or large earthquakes, or tsunamis) are outside common experience, and the public and many public officials are often surprised to learn of the impacts volcanic eruptions could have on their communities. In the 1980s, the USGS recognized that effective hazard communication and preparedness is a multi-faceted, long-term undertaking and began working with federal, state, and local stakeholders to build awareness and foster community action about volcano hazards. Activities included forming volcano-specific workgroups to develop coordination plans for volcano emergencies; a concerted public outreach campaign; curriculum development and teacher training; technical training for emergency managers and first responders; and development of hazard information that is accessible to non-specialists. Outcomes include broader ownership of volcano hazards as evidenced by bi-national exchanges of emergency managers, community planners, and first responders; development by stakeholders of websites focused on volcano hazards mitigation; and execution of table-top and functional exercises, including evacuation drills by local communities.
3. A robotic observatory in the city
Science.gov (United States)
Ruch, Gerald T.; Johnston, Martin E.
2012-05-01
The University of St. Thomas (UST) Observatory is an educational facility integrated into UST's undergraduate curriculum as well as the curriculum of several local schools. Three characteristics combine to make the observatory unique. First, the telescope is tied directly to the support structure of a four-story parking ramp instead of an isolated pier. Second, the facility can be operated remotely over an Internet connection and is capable of performing observations without a human operator. Third, the facility is located on campus in the heart of a metropolitan area where light pollution is severe. Our tests indicate that, despite the lack of an isolated pier, vibrations from the ramp do not degrade the image quality at the telescope. The remote capability facilitates long and frequent observing sessions and allows others to use the facility without traveling to UST. Even with the high background due to city lights, the sensitivity and photometric accuracy of the system are sufficient to fulfill our pedagogical goals and to perform a variety of scientific investigations. In this paper, we outline our educational mission, provide a detailed description of the observatory, and discuss its performance characteristics.
4. LAGO: The Latin American giant observatory
Science.gov (United States)
Sidelnik, Iván; Asorey, Hernán; LAGO Collaboration
2017-12-01
The Latin American Giant Observatory (LAGO) is an extended cosmic ray observatory composed of a network of water-Cherenkov detectors (WCD) spanning over different sites located at significantly different altitudes (from sea level up to more than 5000 m a.s.l.) and latitudes across Latin America, covering a wide range of geomagnetic rigidity cut-offs and atmospheric absorption/reaction levels. The LAGO WCD is simple and robust, and incorporates several integrated devices to allow time synchronization, autonomous operation, on board data analysis, as well as remote control and automated data transfer. This detection network is designed to make detailed measurements of the temporal evolution of the radiation flux coming from outer space at ground level. LAGO is mainly oriented to perform basic research in three areas: high energy phenomena, space weather and atmospheric radiation at ground level. It is an observatory designed, built and operated by the LAGO Collaboration, a non-centralized collaborative union of more than 30 institutions from ten countries. In this paper we describe the scientific and academic goals of the LAGO project - illustrating its present status with some recent results - and outline its future perspectives.
5. The Lowell Observatory Predoctoral Fellowship Program
Science.gov (United States)
Prato, Lisa A.; Shkolnik, E.
2014-01-01
Lowell Observatory is pleased to solicit applications for our Predoctoral Fellowship Program. Now beginning its seventh year, this program is designed to provide unique research opportunities to graduate students in good standing, currently enrolled at Ph.D. granting institutions. Lowell staff research spans a wide range of topics, from astronomical instrumentation, to icy bodies in our solar system, exoplanet science, stellar populations, star formation, and dwarf galaxies. The Observatory's new 4.3 meter Discovery Channel Telescope has successfully begun science operations and we anticipate the commissioning of several new instruments in 2014, making this a particularly exciting time to do research at Lowell. Student research is expected to lead to a thesis dissertation appropriate for graduation at the doctoral level at the student's home institution. The Observatory provides competitive compensation and full benefits to student scholars. For more information, see http://www2.lowell.edu/rsch/predoc.php and links therein. Applications for Fall 2014 are due by May 1, 2014.
6. Recent results from the Compton Observatory
Energy Technology Data Exchange (ETDEWEB)
Michelson, P.F.; Hansen, W.W. [Stanford Univ., CA (United States)
1994-12-01
The Compton Observatory is an orbiting astronomical observatory for gamma-ray astronomy that covers the energy range from about 30 keV to 30 GeV. The Energetic Gamma Ray Experiment Telescope (EGRET), one of four instruments on-board, is capable of detecting and imaging gamma radiation from cosmic sources in the energy range from approximately 20 MeV to 30 GeV. After about one month of tests and calibration following the April 1991 launch, a 15-month all sky survey was begun. This survey is now complete and the Compton Observatory is well into Phase II of its observing program which includes guest investigator observations. Among the highlights from the all-sky survey discussed in this presentation are the following: detection of five pulsars with emission above 100 MeV; detection of more than 24 active galaxies, the most distant at redshift greater than two; detection of many high latitude, unidentified gamma-ray sources, some showing significant time variability; detection of at least two high energy gamma-ray bursts, with emission in one case extending to at least 1 GeV. EGRET has also detected gamma-ray emission from solar flares up to energies of at least 2 GeV and has observed gamma-rays from the Large Magellanic Cloud.
7. The Brazilian indigenous planetary-observatory
Science.gov (United States)
Afonso, G. B.
2003-08-01
We have performed observations of the sky together with Indians from all Brazilian regions, which made it possible to localize many indigenous constellations. Some of these constellations are the same as those of other South American Indians and of Australian aborigines. The scientific community does not have much of this information, which may be lost in one or two generations. In this work, we present a planetary-observatory that we have built in the Park of Science Newton Freire-Maia of Paraná State, in order to popularize the astronomical knowledge of the Brazilian Indians. The planetarium consists, essentially, of a sphere six meters in diameter and a projection cylinder of indigenous constellations. In this planetarium we can identify many of the constellations that we have obtained from the Brazilian Indians; for instance, the four seasonal constellations: the Tapir (spring), the Old Man (summer), the Deer (autumn) and the Rhea (winter). The observatory consists of a two-meter wooden staff posted vertically on the horizontal ground, similar to a gnomon, together with stones aligned with the cardinal points and the solstice directions. A stone circle ten meters in diameter surrounds the staff and the aligned stones. During the day we observe the Sun's apparent motion and at night the indigenous constellations. Due to the great community interest in our work, we are designing an itinerant indigenous planetary-observatory to be used in other cities, mainly by indigenous and primary school teachers.
8. Reale Osservatorio Vesuviano: the First Volcanological Observatory in the World
Science.gov (United States)
Avvisati, Gala; de Vita, Sandro; Di Vito, Mauro Antonio; Marotta, Enrica; Sangianantoni, Agata; Peluso, Rosario; Pasquale Ricciardi, Giovanni; Tulino, Sabrina; Uzzo, Tullia; Ghilardi, Massimo; De Natale, Giuseppe
2015-04-01
The Reale Osservatorio Vesuviano (ROV), historic home of the Istituto Nazionale di Geofisica e Vulcanologia (INGV), is the oldest volcanological observatory in the world. It was founded in 1841 by the Bourbon king of Naples. The building is located on the western slope of Mount Vesuvius, one of the most famous and dangerous volcanoes in the world. Since its foundation, the ROV has always attracted researchers, visitors and students from many countries. The ROV site is an elegant neo-classical building which at present hosts permanent exhibitions of part of its inheritance of valuable mineral, scientific instrument and art collections. A radical change is now under way, starting with the structural reinforcement of the building, renewal and upgrading of services, and the redefinition of exhibition itineraries so as to make visits still more enjoyable and informative. This will include the integration of outdoor footpaths and theme-based routes designed for users of differing levels of expertise. This major transformation also involves a study and a number of operations aimed at the possibility of developing self-financed activities. To this end an analysis of tourist movements in Campania was conducted, in part so as to attract to the ROV a larger and more varied group of visitors. In an area that - despite its unique characteristics - is currently significantly degraded and underused, the creation of such a powerful tourist and cultural attraction would serve as a focus for the development of additional activities and services that would greatly enhance it and stimulate growth. These activities would, of course, be compatible with a territory that has a high risk of volcanic hazards - indeed, such growth would constitute an important component in mitigating this risk in the area. The example given illustrates how the restoration and enhancement of a piece of our historic, scientific and cultural heritage could be the driving force behind the economic revival of an
9. Simple, Affordable and Sustainable Borehole Observatories for Complex Monitoring Objectives
Science.gov (United States)
Kopf, A.; Hammerschmidt, S.; Davis, E.; Saffer, D.; Wheat, G.; LaBonte, A.; Meldrum, R.; Heesemann, M.; Villinger, H.; Freudenthal, T.; Ratmeyer, V.; Renken, J.; Bergenthal, M.; Wefer, G.
2012-04-01
Around 20 years ago, the scientific community started to use borehole observatories, so-called CORKs or Circulation Obviation Retrofit Kits, which are installed inside submarine boreholes, and which allow the re-establishment and monitoring of in situ conditions. From the first CORKs, which allowed only rudimentary fluid pressure and temperature measurements, the instruments evolved into multi-functional and multi-level subseafloor laboratories, including, for example, long-term fluid sampling devices, in situ microbiological experiments or strainmeters. Nonetheless, most boreholes are still left uninstrumented, which is a major loss for the scientific community. Installation of CORKs usually requires a drillship and subsequent ROV assignments for data download and instrument maintenance, which is a major logistic and financial effort. Moreover, the increasing complexity of the CORK systems increased not only the expenses but led also to longer installation times and a higher sensitivity of the instruments to environmental constraints. Here, we present three types of Mini-CORKs, which evolved back to more simple systems yet provide a wide range of possible in situ measurements. As a regional example the Nankai Trough is chosen, where repeated subduction thrust earthquakes with M8+ occurred. The area has been investigated by several drilling campaigns of the DSDP, ODP and IODP, where boreholes were already instrumented by different CORKs. Unfortunately, some of the more complex systems showed incomplete functionality, and moreover, the increased ship time forced IODP to rely on third-party funds for the observatories. Consequently, the need for more affordable CORKs arose, which may be satisfied by the systems presented here. The first type, the so-called SmartPlug, provides two pressure transducers and four temperature sensors, and monitors a hydrostatic reference section and an isolated zone of interest. It was already installed at the Nankai Trough accretionary
10. Volcano-ice interactions on Mars
International Nuclear Information System (INIS)
Allen, C.C.
1979-01-01
Central volcanic eruptions beneath terrestrial glaciers have built steep-sided, flat-topped mountains composed of pillow lava, glassy tuff, capping flows, and cones of basalt. Subglacial fissure eruptions produced ridges of similar composition. In some places the products from a number of subglacial vents have combined to form widespread deposits. The morphologies of these subglacial volcanoes are distinctive enough to allow their recognition at the resolutions characteristic of Viking orbiter imagery. Analogs to terrestrial subglacial volcanoes have been identified on the northern plains and near the south polar cap of Mars. The polar feature provides probable evidence of volcanic eruptions beneath polar ice. A mixed unit of rock and ice is postulated to have overlain portions of the northern plains, with eruptions into this ground ice having produced mountains and ridges analogous to those in Iceland. Subsequent breakdown of this unit due to ice melting revealed the volcanic features. Estimated heights of these landforms indicate that the ice-rich unit once ranged from approximately 100 to 1200 m thick.
11. TMT approach to observatory software development process
Science.gov (United States)
Buur, Hanne; Subramaniam, Annapurni; Gillies, Kim; Dumas, Christophe; Bhatia, Ravinder
2016-07-01
The purpose of the Observatory Software System (OSW) is to integrate all software and hardware components of the Thirty Meter Telescope (TMT) to enable observations and data capture; thus it is a complex software system that is defined by four principal software subsystems: Common Software (CSW), Executive Software (ESW), Data Management System (DMS) and Science Operations Support System (SOSS), all of which have interdependencies with the observatory control systems and data acquisition systems. Therefore, the software development process and plan must consider dependencies to other subsystems, manage architecture, interfaces and design, manage software scope and complexity, and standardize and optimize use of resources and tools. Additionally, the TMT Observatory Software will largely be developed in India through TMT's workshare relationship with the India TMT Coordination Centre (ITCC) and use of Indian software industry vendors, which adds complexity and challenges to the software development process, communication and coordination of activities and priorities as well as measuring performance and managing quality and risk. The software project management challenge for the TMT OSW is thus a multi-faceted technical, managerial, communications and interpersonal relations challenge. The approach TMT is using to manage this multifaceted challenge is a combination of establishing an effective geographically distributed software team (Integrated Product Team) with strong project management and technical leadership provided by the TMT Project Office (PO) and the ITCC partner to manage plans, process, performance, risk and quality, and to facilitate effective communications; establishing an effective cross-functional software management team composed of stakeholders, OSW leadership and ITCC leadership to manage dependencies and software release plans, technical complexities and change to approved interfaces, architecture, design and tool set, and to facilitate
12. Three-dimensional stochastic adjustment of volcano geodetic network in Arenal volcano, Costa Rica
Science.gov (United States)
Muller, C.; van der Laat, R.; Cattin, P.-H.; Del Potro, R.
2009-04-01
Volcano geodetic networks are a key instrument for understanding magmatic processes and, thus, for forecasting potentially hazardous activity. These networks are extensively used on volcanoes worldwide and generally comprise a number of different traditional and modern geodetic surveying techniques such as levelling, distance measurements, triangulation and GNSS. In most cases, however, data from the different methodologies are surveyed, adjusted and analysed independently. Experience shows that the problem with this procedure is the mismatch between the excellent correlation of position values within a single technique and the low cross-correlation of such values between different techniques, or when the same network is resurveyed shortly afterwards with the same technique. Moreover, maintaining a separate independent network for each geodetic surveying technique strongly increases the logistics, and thus the cost, of each measurement campaign. It is therefore important to develop geodetic networks that combine the different surveying techniques, and to adjust the geodetic data together in order to better quantify the uncertainties associated with the measured displacements. To overcome this lack of inter-methodology data integration, the Geomatic Institute of the University of Applied Sciences of Western Switzerland (HEIG-VD) has developed a methodology based on TRINET+, a 3D stochastic adjustment software package for redundant geodetic networks. The methodology consists of using each geodetic measurement technique for its strengths relative to the other methodologies; combining the measurements in a single network also allows more cost-effective surveying. The geodetic data are then adjusted and analysed in the same reference frame. The adjustment methodology is based on the least-squares method and links the data with the geometry. TRINET+ also allows a priori simulations of the network to be run, hence testing the quality and resolution to be expected for a determined network even
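The weighted least-squares adjustment that packages of this kind perform can be sketched in a few lines. The following is a minimal illustration, not the actual TRINET+ code; the toy levelling loop, observation values and standard deviations are invented for the example.

```python
import numpy as np

def adjust_network(A, l, sigma):
    """Weighted least-squares adjustment of a redundant geodetic network.

    A     -- design matrix (n observations x u unknown coordinates)
    l     -- misclosure vector (observed minus computed), length n
    sigma -- a priori standard deviation of each observation, length n
    Returns the estimated parameters and their covariance matrix.
    """
    P = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)  # weight matrix
    N = A.T @ P @ A                                         # normal matrix
    x = np.linalg.solve(N, A.T @ P @ l)                     # parameter estimates
    v = A @ x - l                                           # residuals
    dof = A.shape[0] - A.shape[1]                           # redundancy
    s0_sq = (v @ P @ v) / dof                               # variance of unit weight
    return x, s0_sq * np.linalg.inv(N)                      # estimates, covariance

# Toy levelling loop: heights of P2 and P3 relative to a fixed P1.
# Observations: h(P2)-h(P1), h(P3)-h(P2), h(P1)-h(P3); the loop misclosure
# (1.00 + 0.52 - 1.50 = 0.02 m) is distributed by the adjustment.
A = np.array([[ 1.0,  0.0],
              [-1.0,  1.0],
              [ 0.0, -1.0]])
l = np.array([1.00, 0.52, -1.50])            # measured height differences (m)
x, Cxx = adjust_network(A, l, sigma=[0.01, 0.01, 0.01])
```

Because all three observations carry the same weight here, the 2 cm misclosure is spread evenly across the three height differences; with realistic per-technique sigmas, better-constrained observations absorb less of it.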
13. From Chaitén to the Chilean volcano monitoring network Jorge Munoz, Hugo Moreno, Servicio Nacional de Geología y Minería, Chile, jmunoz@sernageomin.cl
Science.gov (United States)
Muñoz, J.; Moreno, H.
2010-12-01
Chaitén volcano in the southern Andes started a plinian to subplinian rhyolitic eruption in May 2008, following a long period of quiescence. A new dome complex grew at high rates during 2008-2009 inside a 2 km caldera-like structure. Pyroclastic flows, lahars, block-and-ash flows and ash-fall deposits have affected the surrounding populations, ground, vegetation, ocean and rivers; lahars buried the now-evacuated town of Chaitén. The geological, volcanological and seismic knowledge produced during the eruption, together with the assessment of likely evolutionary scenarios, was properly transferred to, and consequently taken into account by, the authorities in charge of the emergency during complex decisions. As a result, there were no fatalities or major injuries during this rhyolitic eruption. Largely as a consequence of the Chaitén eruption, but also owing to the valuable technical advice provided by geologists and volcanologists from SERNAGEOMIN on crisis management, evacuation, hazard evolution, volcanic alerts and the selection of sites for relocating the town of Chaitén, funding for the National Volcano Monitoring Network (RNVV) was approved during 2008 and integrated as a Bicentenary initiative. Over a span of five years, the RNVV needs to build professional capacity and working teams, improve the current volcano observatory at Temuco and establish three new observatories in the cities of Coihaique, Talca and Antofagasta, in order to implement volcano monitoring networks at the 43 hazardous volcanoes along the Chilean Andes. The monitoring network currently comprises seismic stations on 10 volcanoes or volcanic groups (San Pedro-San Pablo in the Central Volcanic Andes and Llaima, Villarrica, Mocho-Choshuenco, Carrán-Los Venados, Cordón Caulle, Osorno, Calbuco, Chaitén and Melimoyu in the southern volcanic Andes), in addition to gas-measurement and video-camera stations on Llaima, Villarrica and Chaitén volcanoes. In addition, the geologic and
14. Optical satellite data volcano monitoring: a multi-sensor rapid response system
Science.gov (United States)
Duda, Kenneth A.; Ramsey, Michael; Wessels, Rick L.; Dehn, Jonathan
2009-01-01
In this chapter, the use of satellite remote sensing to monitor active geological processes is described. Specifically, threats posed by volcanic eruptions are briefly outlined, and essential monitoring requirements are discussed. As an application example, a collaborative, multi-agency operational volcano monitoring system in the north Pacific is highlighted with a focus on the 2007 eruption of Kliuchevskoi volcano, Russia. The data from this system have been used since 2004 to detect the onset of volcanic activity, support the emergency response to large eruptions, and assess the volcanic products produced following the eruption. The overall utility of such integrative assessments is also summarized. The work described in this chapter was originally funded through two National Aeronautics and Space Administration (NASA) Earth System Science research grants that focused on the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument. A skilled team of volcanologists, geologists, satellite tasking experts, satellite ground system experts, system engineers and software developers collaborated to accomplish the objectives. The first project, Automation of the ASTER Emergency Data Acquisition Protocol for Scientific Analysis, Disaster Monitoring, and Preparedness, established the original collaborative research and monitoring program between the University of Pittsburgh (UP), the Alaska Volcano Observatory (AVO), the NASA Land Processes Distributed Active Archive Center (LP DAAC) at the U.S. Geological Survey (USGS) Earth Resources Observation and Science (EROS) Center, and affiliates on the ASTER Science Team at the Jet Propulsion Laboratory (JPL) as well as associates at the Earth Remote Sensing Data Analysis Center (ERSDAC) in Japan. This grant, completed in 2008, also allowed for detailed volcanic analyses and data validation during three separate summer field campaigns to Kamchatka Russia. The second project, Expansion and synergistic use
15. The Virtual Solar Observatory and the Heliophysics Meta-Virtual Observatory
Science.gov (United States)
Gurman, Joseph B.
2007-01-01
The Virtual Solar Observatory (VSO) is now able to search for solar data ranging from the radio to gamma rays, obtained from space-based and ground-based observatories, from 26 sources at 12 data providers, and from 1915 to the present. The solar physics community can use a Web interface or an Application Programming Interface (API) that allows VSO searches to be integrated into other software, including other Web services. Over the next few years, this integration will be especially visible as the NASA Heliophysics division sponsors the development of a heliophysics-wide virtual observatory (VO), based on existing VOs in heliospheric, magnetospheric, and ionospheric physics as well as the VSO. We examine some of the challenges and potential of such a "meta-VO."
Science.gov (United States)
Major, J.J.; Schilling, S.P.; Sofield, D.J.; Escobar, C.D.; Pullinger, C.R.
2001-01-01
17. Chemical compositions of lavas from Myoko volcano group
International Nuclear Information System (INIS)
Hasenaka, Toshiaki; Yoshida, Takeyoshi; Hayatsu, Kenji.
1995-01-01
In volcanic rocks produced in island arcs and continental margin arcs, the phenomenon of magma mixing is observed quite commonly. These phenomena have also been studied in Japan, and a periodically refilled magma chamber model has been proposed. In this report, the results of photon activation analysis are presented for volcanic rock samples from Myoko volcano, for which a magma chamber model with periodic resupply of basalt magma had been proposed and whose eruption ages and stratigraphy are clearly known, and the model is examined together with published data from X-ray fluorescence analysis and other methods. The history of activity and the magma extrusion rate of the Myoko volcano group are described. The modal compositions of the volcanic rock samples from Myoko and Kurohime volcanoes, for which photon activation analysis was carried out, are shown and discussed. The results of chemical analyses of 39 volcanic rock samples from Myoko, Kurohime and Iizuna volcanoes are presented. The primary magma of the Myoko volcano group, the depth of crystallization differentiation and the water content of magma at Myoko and Kurohime volcanoes, the estimation of felsic and mafic end-members of the R-type andesite in the Myoko volcano group, and the change of magma composition over time are described. (K.I.)
19. 2011 Oregon Department of Geology and Mineral Industries (DOGAMI) Lidar: Cascade Volcano Observatory (CVO) Newberry Study Area
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The Oregon Department of Geology & Mineral Industries (DOGAMI) contracted with Watershed Sciences, Inc. to collect high resolution topographic LiDAR data for...
20. SAR interferometry applications on active volcanoes. State of the art and perspectives for volcano monitoring
Energy Technology Data Exchange (ETDEWEB)
Puglisi, G.; Coltelli, M. [Istituto Nazionale di Geofisica e Vulcanologia, Catania (Italy)
2001-02-01
In this paper, the application of Synthetic Aperture Radar Interferometry (InSAR) to volcanology is analysed. Since InSAR is no longer a real novelty among Earth Observation techniques, the paper begins by reviewing the state of the art of research in this field. The discussion favours the point of view of volcanologists, because the first applications were often badly aimed: the initial performance of InSAR in volcanology was consequently overrated with respect to the real capabilities of the technique. This led to the discovery of some unexpected limitations of InSAR for volcano monitoring but, at the same time, spurred scientists to overcome these drawbacks. The results achieved recently allow SAR to be applied to volcanology more effectively; the paper presents a possible operational work plan aimed at introducing InSAR into volcano monitoring systems.
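The quantitative link InSAR exploits is simple: because of the two-way radar path, one fringe of differential phase corresponds to half a wavelength of line-of-sight motion. A minimal conversion, assuming a C-band wavelength for illustration (sign conventions vary between processors):

```python
import numpy as np

def los_displacement(phase_rad, wavelength_m=0.0566):
    """Convert unwrapped differential phase to line-of-sight displacement.

    phase_rad    -- unwrapped interferometric phase (radians)
    wavelength_m -- radar wavelength; 5.66 cm is typical of C-band sensors
    The two-way path means one full fringe (2*pi) maps to half a wavelength.
    """
    return phase_rad * wavelength_m / (4.0 * np.pi)

# one fringe of phase -> 0.0283 m (half the 5.66 cm wavelength) of LOS motion
fringe = los_displacement(2.0 * np.pi)
```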
1. Geologic Map of the Summit Region of Kilauea Volcano, Hawaii
Science.gov (United States)
Neal, Christina A.; Lockwood, John P.
2003-01-01
This report consists of a large map sheet and a pamphlet. The map shows the geology, some photographs, description of map units, and correlation of map units. The pamphlet gives the full text about the geologic map. The area covered by this map includes parts of four U.S. Geological Survey 7.5' topographic quadrangles (Kilauea Crater, Volcano, Kau Desert, and Makaopuhi). It encompasses the summit, upper rift zones, and Koae Fault System of Kilauea Volcano and a part of the adjacent, southeast flank of Mauna Loa Volcano. The map is dominated by products of eruptions from Kilauea Volcano, the southernmost of the five volcanoes on the Island of Hawaii and one of the world's most active volcanoes. At its summit (1,243 m) is Kilauea Crater, a 3 km-by-5 km collapse caldera that formed, possibly over several centuries, between about 200 and 500 years ago. Radiating away from the summit caldera are two linear zones of intrusion and eruption, the east and the southwest rift zones. Repeated subaerial eruptions from the summit and rift zones have built a gently sloping, elongate shield volcano covering approximately 1,500 km2. Much of the volcano lies under water; the east rift zone extends 110 km from the summit to a depth of more than 5,000 m below sea level; whereas the southwest rift zone has a more limited submarine continuation. South of the summit caldera, mostly north-facing normal faults and open fractures of the Koae Fault System extend between the two rift zones. The Koae Fault System is interpreted as a tear-away structure that accommodates southward movement of Kilauea's flank in response to distension of the volcano perpendicular to the rift zones.
2. Mount Meager Volcano, Canada: a Case Study for Landslides on Glaciated Volcanoes
Science.gov (United States)
Roberti, G. L.; Ward, B. C.; van Wyk de Vries, B.; Falorni, G.; Perotti, L.; Clague, J. J.
2015-12-01
3. 3D electrical conductivity tomography of volcanoes
Science.gov (United States)
Soueid Ahmed, A.; Revil, A.; Byrdina, S.; Coperey, A.; Gailler, L.; Grobbe, N.; Viveiros, F.; Silva, C.; Jougnot, D.; Ghorbani, A.; Hogg, C.; Kiyan, D.; Rath, V.; Heap, M. J.; Grandis, H.; Humaida, H.
2018-05-01
Electrical conductivity tomography is a well-established galvanometric method for imaging the subsurface electrical conductivity distribution. We characterize the conductivity distribution of a set of volcanic structures that differ in activity and morphology. For that purpose, we developed a large-scale inversion code named ECT-3D aimed at handling complex topographical effects like those encountered in volcanic areas. In addition, ECT-3D offers the possibility of using as input data the two components of the electrical field recorded at independent stations. Without prior information, a Gauss-Newton method with roughness constraints is used to solve the inverse problem. The roughening operator used to impose the constraints is computed on unstructured tetrahedral elements to map complex geometries. We first benchmark ECT-3D on two synthetic tests. A first test using the topography of Mt. St Helens volcano (Washington, USA) demonstrates that we can successfully reconstruct the electrical conductivity field of an edifice marked by a strong topography and strong variations in the resistivity distribution. A second test demonstrates the versatility of the code in using the two components of the electrical field recorded at independent stations along the ground surface. Then, we apply our code to real data sets recorded at (i) a thermally active area of Yellowstone caldera (Wyoming, USA), (ii) a monogenetic dome on Furnas volcano (the Azores, Portugal), and (iii) the upper portion of the caldera of Kīlauea (Hawai'i, USA). The tomographies reveal some of the major structures of these volcanoes as well as identify alteration associated with high surface conductivities. We also review the petrophysics underlying the interpretation of the electrical conductivity of fresh and altered volcanic rocks and molten rocks to show that electrical conductivity tomography cannot be used as a stand-alone technique due to the non-uniqueness in
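The inversion machinery described here, Gauss-Newton iterations with a roughness penalty, can be sketched for a generic problem. This is an illustrative skeleton, not ECT-3D itself; the toy two-cell forward model and the regularization weight are invented for the demonstration.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, R, lam, n_iter=10):
    """Gauss-Newton inversion with a roughness (smoothness) penalty.

    forward  -- g(m), predicted data for model m
    jacobian -- J(m), sensitivity matrix dg/dm
    d_obs    -- observed data
    m0       -- starting model
    R        -- roughening operator (here a first-difference matrix)
    lam      -- trade-off between data fit and model roughness
    Minimizes ||d_obs - g(m)||^2 + lam * ||R m||^2.
    """
    m = m0.copy()
    for _ in range(n_iter):
        J = jacobian(m)
        r = d_obs - forward(m)
        lhs = J.T @ J + lam * (R.T @ R)       # regularized normal matrix
        rhs = J.T @ r - lam * (R.T @ R) @ m   # includes gradient of the penalty
        m = m + np.linalg.solve(lhs, rhs)     # model update
    return m

# Toy linear problem: 3 observations of a 2-cell conductivity model.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
m_true = np.array([1.0, 2.0])
d = G @ m_true
R = np.array([[1.0, -1.0]])                   # penalize cell-to-cell jumps
m_est = gauss_newton(lambda m: G @ m, lambda m: G, d, np.zeros(2), R, lam=1e-6)
```

With a nearly zero `lam` the data are fit exactly; increasing it drives adjacent cells toward the same value, which is the role the roughness constraint plays on the tetrahedral mesh.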
4. Large-N in Volcano Settings: Volcanosri
Science.gov (United States)
Lees, J. M.; Song, W.; Xing, G.; Vick, S.; Phillips, D.
2014-12-01
We seek a paradigm shift in the approach we take to volcano monitoring, in which high fidelity is traded for large numbers of sensors to increase coverage and resolution. Accessibility, danger and the risk of equipment loss require that we develop systems that are independent and inexpensive. Furthermore, rather than simply recording data to disk for later analysis, we desire a system that works autonomously, capitalizing on wireless technology and in-field network analysis. To this end we are currently producing a low-cost seismic array which will incorporate, at the most basic level, seismological tools for first-cut analysis of a volcano in crisis mode. At the advanced end we expect to perform tomographic inversions within the network in near real time. Geophone (4 Hz) sensors connected to a low-cost recording system will be installed on an active volcano, where event triggering, earthquake location and velocity analysis will take place independently of human interaction. Stations are designed to be inexpensive and possibly disposable. In one of the first implementations, the seismic nodes consist of an Arduino Due processor board with an attached Seismic Shield. The Arduino Due processor board contains an Atmel SAM3X8E ARM Cortex-M3 CPU. This 32-bit, 84 MHz processor can filter and perform coarse seismic event detection on a 1600-sample signal in fewer than 200 milliseconds. The Seismic Shield contains a GPS module, a 900 MHz high-power mesh-network radio, an SD card, a seismic amplifier, and a 24-bit ADC. External sensors can be attached either to this 24-bit ADC or to the internal multichannel 12-bit ADC on the Arduino Due processor board, allowing the node to support multiple sensors. By utilizing a high-speed 32-bit processor, complex signal-processing tasks can be performed simultaneously on multiple sensors. Using a 10 W solar panel, a second system being developed can run autonomously and collect data on 3 channels at 100 Hz for 6 months
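The abstract does not name the detection algorithm, but a common choice for coarse on-node event detection (an assumption here) is the short-term-average/long-term-average (STA/LTA) trigger, which is cheap enough for a Cortex-M3-class CPU. A sketch on a synthetic 1600-sample trace:

```python
import numpy as np

def sta_lta_trigger(x, n_sta=40, n_lta=400, threshold=3.0):
    """Classic STA/LTA event detector on a short seismic trace.

    x         -- signal samples (e.g. a 1600-sample geophone window)
    n_sta     -- short-term average window, in samples (ahead of i)
    n_lta     -- long-term average window, in samples (behind i)
    threshold -- trigger when STA/LTA exceeds this ratio
    Returns the first sample index at which the trigger fires, or None.
    """
    e = np.asarray(x, dtype=float) ** 2               # signal energy
    csum = np.concatenate(([0.0], np.cumsum(e)))      # prefix sums of energy
    for i in range(n_lta, len(x) - n_sta):
        lta = (csum[i] - csum[i - n_lta]) / n_lta     # background level
        sta = (csum[i + n_sta] - csum[i]) / n_sta     # incoming energy
        if lta > 0 and sta / lta >= threshold:
            return i
    return None

# Synthetic test: unit-variance noise with a strong burst starting at sample 1000.
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 1600)
trace[1000:1100] += rng.normal(0.0, 10.0, 100)
onset = sta_lta_trigger(trace)
```

The prefix-sum formulation keeps the per-sample cost to a few arithmetic operations, which is what makes sub-200 ms processing of a window plausible on a small embedded CPU.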
5. Volcanology Curricula Development Aided by Online Educational Resource
Science.gov (United States)
Poland, Michael P.; van der Hoeven Kraft, Katrien J.; Teasdale, Rachel
2011-03-01
Using On-Line Volcano Monitoring Data in College and University Courses: The Volcano Exploration Project: Puu Ōō (VEPP); Hawaii Volcanoes National Park, Hawaii, 26-30 July 2010; Volcanic activity is an excellent hook for engaging college and university students in geoscience classes. An increasing number of Internet-accessible real-time and near-real time volcano monitoring data are now available and constitute an important resource for geoscience education; however, relatively few data sets are comprehensive, and many lack background information to aid in interpretation. In response to the need for organized, accessible, and well-documented volcano education resources, the U.S. Geological Survey's Hawaiian Volcano Observatory (HVO), in collaboration with NASA and the University of Hawaii at Manoa, established the Volcanoes Exploration Project: Puu Ōō (VEPP). The VEPP Web site (http://vepp.wr.usgs.gov) is an educational resource that provides access, in near real time, to geodetic, seismic, and geologic data from the active Puu Ōō eruptive vent on Kilauea volcano, Hawaii, along with background and context information. A strength of the VEPP site is the common theme of the Puu Ōō eruption, which allows the site to be revisited multiple times to demonstrate different principles and integrate many aspects of volcanology.
6. Dynamic triggering of volcano drumbeat-like seismicity at the Tatun volcano group in Taiwan
Science.gov (United States)
Lin, Cheng-Horng
2017-07-01
Periodic seismicity during eruptions has been observed at several volcanoes, such as Mount St. Helens and Soufrière Hills, and movement of magma is often considered one of the most important factors in its generation. Yet without any magma movement, drumbeat-like (or heartbeat-like) periodic seismicity was detected twice beneath one of the strongest fumarole sites (Dayoukeng) in the Tatun volcano group in northern Taiwan in 2015. Each of the two drumbeat-like sequences started after a felt earthquake in Taiwan and then persisted for 1-2 d, with repetition intervals of ∼18 min between adjacent events. These observations suggest that both drumbeat-like (heartbeat-like) sequences were triggered by the dynamic waves generated by the two felt earthquakes. Thus, rather than invoking magma, a simplified pumping system within a degassing conduit is proposed to explain the generation of the drumbeat-like seismicity: collapsed rocks within the conduit act as a piston, repeatedly lifted by gas ascending from a deeper reservoir and dropped when that gas later escapes. These phenomena show that the degassing process is still very strong in the Tatun volcano group, even though it has been dormant for several thousand years.
7. GAIA virtual observatory - development and practices
Science.gov (United States)
Syrjäsuo, Mikko; Marple, Steve
2010-05-01
The Global Auroral Imaging Access, or GAIA, is a virtual observatory providing quick access to summary data from satellite and ground-based instruments that remotely sense auroral precipitation (http://gaia-vxo.org). This web-based service facilitates locating data relevant to particular events by simultaneously displaying summary images from various data sets around the world. At the moment, there are GAIA server nodes in Canada, Finland, Norway and the UK. The development is an international effort, and the software and metadata are freely available. The GAIA system is based on a relational database queried by a dedicated software suite, which also creates the graphical end-user interface where one is needed. Most commonly, the virtual observatory is used interactively through a web browser: the user provides the date and the type of data of interest. As the summary data from multiple instruments are displayed simultaneously, the user can conveniently explore the recorded data. The virtual observatory provides essentially instant access to the images originating from all major auroral instrument networks, including THEMIS, NORSTAR, GLORIA and MIRACLE; scientific, educational and outreach use is limited by creativity rather than by access. The first version of GAIA was developed at the University of Calgary (Alberta, Canada) in 2004-2005. This proof of concept included mainly THEMIS and MIRACLE data, comprising millions of summary plots and thumbnail images. However, it was soon realised that a complete redesign was necessary to increase flexibility. In the presentation, we will discuss the early history and motivation of GAIA as well as how the development continued towards the current version. The emphasis will be on practical problems and their solutions. Relevant design choices will also be highlighted.
8. Protection of Hawaii's Observatories from Light Pollution
Science.gov (United States)
Wainscoat, Richard J.
2018-01-01
Maunakea Observatory, located on the island of Hawaii, is among the world's darkest sites for astronomy. Strong efforts to preserve the dark night sky over the last forty years have proven successful: artificial light presently adds only approximately 2% to the natural night-sky brightness. The techniques being used to protect Maunakea from light pollution will be described, along with the challenges now being faced. Haleakala Observatory, located on the island of Maui, is also an excellent observing site, among the best in the United States. Lighting restrictions in Maui County are much weaker, and consequently the night sky above Haleakala is less well protected. Haleakala is also closer to Honolulu and the island of Oahu (population approximately 1 million), and the glow from Oahu makes the northwestern sky brighter. Much of the lighting across the United States, including Hawaii, is presently being converted to LED lighting. This provides an opportunity to replace existing poorly shielded lights with properly shielded LED fixtures, but careful spectral management is essential: it is critically important to use only LED lighting that is deficient in blue and green light. LED lighting is also easy to dim, and dimming lights later at night, when there is no need for brighter lighting, is an important tool for reducing light pollution. Techniques used to protect astronomical observatories from light pollution are similar to those that must be used to protect animals affected by light at night, such as endangered birds and turtles. These same techniques are compatible with recent human-health-related lighting recommendations from the American Medical Association.
9. Determination and uncertainty of moment tensors for microearthquakes at Okmok Volcano, Alaska
Science.gov (United States)
Pesicek, J.D.; Sileny, J.; Prejean, S.G.; Thurber, C.H.
2012-01-01
Efforts to determine general moment tensors (MTs) for microearthquakes in volcanic areas are often hampered by small seismic networks, which can lead to poorly constrained hypocentres and inadequate modelling of seismic velocity heterogeneity. In addition, noisy seismic signals can make it difficult to identify phase arrivals correctly for small-magnitude events. However, small volcanic earthquakes can have source mechanisms that deviate from brittle double-couple shear failure due to magmatic and/or hydrothermal processes. Thus, determining reliable MTs in such conditions is a challenging but potentially rewarding pursuit. We pursued such a goal at Okmok Volcano, Alaska, which erupted recently, in 1997 and 2008. The Alaska Volcano Observatory operates a seismic network of 12 stations at Okmok and routinely catalogues recorded seismicity. Using these data, we have determined general MTs for seven microearthquakes recorded between 2004 and 2007 by inverting peak amplitude measurements of P and S phases. We computed Green's functions using precisely relocated hypocentres and a 3-D velocity model. We thoroughly assessed the quality of the solutions by computing formal uncertainty estimates, conducting a variety of synthetic and sensitivity tests, and comparing the MTs to solutions obtained using alternative methods. The results show that MTs are sensitive to station distribution and to errors in the data, velocity model and hypocentral parameters. Although each of the seven MTs contains a significant non-shear component, we judge several of the solutions to be unreliable. However, several reliable MTs are obtained for a group of previously identified repeating events, and are interpreted as compensated linear-vector dipole events.
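The amplitude-inversion step reduces to linear least squares: each signed peak amplitude is a linear combination of the six independent moment-tensor components through Green's-function excitation coefficients. A schematic version, with a random synthetic 12-station, two-phase geometry standing in for the real Okmok network and Green's functions:

```python
import numpy as np

def invert_mt(G, d):
    """Least-squares general moment tensor from signed peak amplitudes.

    G -- (n x 6) excitation coefficients from Green's functions,
         one row per P or S amplitude measurement
    d -- n observed peak amplitudes
    Returns m = [Mxx, Myy, Mzz, Mxy, Mxz, Myz].
    """
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

def isotropic_part(m):
    """Volumetric component; nonzero for non-double-couple sources."""
    return (m[0] + m[1] + m[2]) / 3.0

# Synthetic check: 12 stations x 2 phases = 24 amplitudes, known source
# with both a shear and a volumetric part.
rng = np.random.default_rng(1)
G = rng.normal(size=(24, 6))
m_true = np.array([1.0, 1.0, 1.0, 0.0, 0.5, 0.0])
d = G @ m_true
m_est = invert_mt(G, d)
```

In the noise-free, well-conditioned case the source is recovered exactly; the sensitivity to station distribution and model errors reported above enters through errors in `G` and `d`, which this sketch deliberately omits.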
10. Citizen Observatories: A Standards Based Architecture
Science.gov (United States)
Simonis, Ingo
2015-04-01
A number of large-scale research projects are currently under way exploring the various components of citizen observatories, e.g. CITI-SENSE (http://www.citi-sense.eu), Citclops (http://citclops.eu), COBWEB (http://cobwebproject.eu), OMNISCIENTIS (http://www.omniscientis.eu), and WeSenseIt (http://www.wesenseit.eu). Common to all projects is the motivation to develop a platform enabling effective participation by citizens in environmental projects, while considering important aspects such as security, privacy, long-term storage and availability, accessibility of raw and processed data, and its proper integration into catalogues and international exchange and collaboration systems such as GEOSS or INSPIRE. This paper describes the software architecture implemented for setting up crowdsourcing campaigns using standardized components, interfaces, security features, and distribution capabilities. It illustrates the Citizen Observatory Toolkit, a software suite for defining crowdsourcing campaigns, inviting registered and unregistered participants to take part in them, and analyzing, processing, and visualizing raw and quality-enhanced crowdsourcing data and derived products. The Citizen Observatory Toolkit is not a single software product. Instead, it is a framework of components built using internationally adopted standards wherever possible (e.g. OGC standards from Sensor Web Enablement, GeoPackage, and Web Mapping and Processing Services, as well as security and metadata/cataloguing standards), defining profiles of those standards where necessary (e.g. an SWE O&M profile, a SensorML profile), and implementing design decisions motivated by maximizing the interoperability and reusability of all components. The toolkit contains tools to set up, manage and maintain crowdsourcing campaigns, allows building on-demand apps optimized for a specific sampling focus, and supports offline and online sampling modes using modern cell phones with
11. Pulsating stars and the Virtual Observatory
Science.gov (United States)
Suárez, Juan Carlos
2017-09-01
The Virtual Observatory is one of the most widely used internet-based protocols in astronomy. It has become almost second nature to find, manage, compare, visualize and download observations from very different archives of astronomical observations with no effort. The VO technology beyond that is now becoming a reality for asteroseismology, not only for observations but also for theoretical models. Here I give a brief description of the most important VO tools related to asteroseismology, as well as a rough outline of current development in this field.
12. Recent Results from the Pierre Auger observatory
International Nuclear Information System (INIS)
Kampert, Karl-Heinz
2010-01-01
The Pierre Auger Observatory is a hybrid air shower experiment which uses multiple detection techniques to investigate the origin, spectrum, and composition of ultrahigh energy cosmic rays. We present recent results on these topics and discuss their implications for understanding the origin of the most energetic particles in nature as well as for physics beyond the Standard Model, such as violation of Lorentz invariance and 'top-down' models of cosmic ray production. Future plans, including enhancements under way at the southern site in Argentina, will be presented. (author)
14. The Virtual Solar Observatory: Progress and Diversions
Science.gov (United States)
Gurman, Joseph B.; Bogart, R. S.; Amezcua, A.; Hill, Frank; Oien, Niles; Davey, Alisdair R.; Hourcle, Joseph; Mansky, E.; Spencer, Jennifer L.
2017-08-01
The Virtual Solar Observatory (VSO) is a well-known and useful method for identifying and accessing solar physics data online. We review current "behind the scenes" work on the VSO, including the addition of new data providers and the restoration of access to data sets to which service was temporarily interrupted. We also report on the effect on software development efforts when government IT “security” initiatives impinge on finite resources. As always, we invite SPD members to identify data sets, services, and interfaces they would like to see implemented in the VSO.
15. Deep magma transport at Kilauea volcano, Hawaii
Science.gov (United States)
Wright, T.L.; Klein, F.W.
2006-01-01
The shallow part of Kilauea's magma system is conceptually well-understood. Long-period and short-period (brittle-failure) earthquake swarms outline a near-vertical magma transport path beneath Kilauea's summit to 20 km depth. A gravity high centered above the magma transport path demonstrates that Kilauea's shallow magma system, established early in the volcano's history, has remained fixed in place. Low seismicity at 4-7 km outlines a storage region from which magma is supplied for eruptions and intrusions. Brittle-failure earthquake swarms shallower than 5 km beneath the rift zones accompany dike emplacement. Sparse earthquakes extend to a decollement at 10-12 km along which the south flank of Kilauea is sliding seaward. This zone below 5 km can sustain aseismic magma transport, consistent with recent tomographic studies. Long-period earthquake clusters deeper than 40 km occur parallel to and offshore of Kilauea's south coast, defining the deepest seismic response to magma transport from the Hawaiian hot spot. A path connecting the shallow and deep long-period earthquakes is defined by mainshock-aftershock locations of brittle-failure earthquakes unique to Kilauea whose hypocenters are deeper than 25 km with magnitudes from 4.4 to 5.2. Separation of deep and shallow long-period clusters occurs as the shallow plumbing moves with the volcanic edifice, while the deep plumbing is centered over the hotspot. Recent GPS data agree with the volcano-propagation vector from Kauai to Maui, suggesting that Pacific plate motion, azimuth 293.5° and rate of 7.4 cm/yr, has been constant over Kilauea's lifetime. However, volcano propagation on the island of Hawaii, azimuth 325°, rate 13 cm/yr, requires southwesterly migration of the locus of melting within the broad hotspot. Deep, long-period earthquakes lie west of the extrapolated position of Kilauea backward in time along a plate-motion vector, requiring southwesterly migration of Kilauea's magma source. Assumed ages of 0
16. Space Radar Image of Kilauea Volcano, Hawaii
Science.gov (United States)
1994-01-01
This is a deformation map of the south flank of Kilauea volcano on the big island of Hawaii, centered at 19.5 degrees north latitude and 155.25 degrees west longitude. The map was created by combining interferometric radar data -- that is, data acquired on different passes of the space shuttle which are then overlaid to obtain elevation information -- acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar during its first flight in April 1994 and its second flight in October 1994. The area shown is approximately 40 kilometers by 80 kilometers (25 miles by 50 miles). North is toward the upper left of the image. The colors indicate the displacement of the surface in the direction that the radar instrument was pointed (toward the right of the image) in the six months between images. The analysis of ground movement is preliminary, but appears consistent with the motions detected by the Global Positioning System ground receivers that have been used over the past five years. The south flank of the Kilauea volcano is among the most rapidly deforming terrains on Earth. Several regions show motions over the six-month time period. Most obvious is at the base of Hilina Pali, where 10 centimeters (4 inches) or more of crustal deformation can be seen in a concentrated area near the coastline. On a more localized scale, the currently active Pu'u O'o summit also shows about 10 centimeters (4 inches) of change near the vent area. Finally, there are indications of additional movement along the upper southwest rift zone, just below the Kilauea caldera in the image. Deformation of the south flank is believed to be the result of movements along faults deep beneath the surface of the volcano, as well as injections of magma, or molten rock, into the volcano's 'plumbing' system. Detection of ground motions from space has proven to be a unique capability of imaging radar technology. Scientists hope to use deformation data acquired by SIR-C/X-SAR and future imaging
17. Element fluxes from Copahue Volcano, Argentina
Science.gov (United States)
Varekamp, J. C.
2003-12-01
Copahue volcano in Argentina has an active volcano-magmatic hydrothermal system that emits fluids with pH=0.3 that feed a river system. River flux measurements and analytical data provide element flux data from 1997 to 2003, including the eruptive period of July to December 2000. The fluids have up to 6.5 percent sulfate, 1 percent Cl and ppm levels of B, As, Cu, Zn and Pb. The hydrothermal system acts as a perfect scrubber for magmatic gases during periods of passive degassing, although the dissolved magmatic gases are modified through water-rock interaction and mineral precipitation. The magmatic SO2 disproportionates into sulfate and liquid elemental sulfur at about 300 °C; the sulfate is discharged with the fluids, whereas the liquid sulfur is temporarily retained in the reservoir but ejected during phreatic and hydrothermal eruptions. The intrusion and chemical attack of new magma in the hydrothermal reservoir in early 2000 were indicated by strongly increased Mg concentrations and Mg fluxes, and higher Mg/Cl and Mg/K values. The hydrothermal discharge has acidified a large glacial lake (0.5 km3) to pH=2 and the lake effluents acidify the exiting river. Even more than 100 km downstream, the effects of acid pulses from the lake are evident from red-coated boulders and fish die-offs. The river-bound sulfate fluxes from the system range from 70 to 200 kilotonnes/year. The equivalent SO2 output of the whole volcanic system ranges from 150 to 500 tonnes/day, which includes the fraction of native sulfur that formed inside the mountain but does not include the release of SO2 into the atmosphere during the eruptions. Trace element fluxes of the river will be scaled up and compared with global element fluxes from meteoric river waters (subterranean volcanic weathering versus watershed weathering).
18. Geomechanical rock properties of a basaltic volcano
Directory of Open Access Journals (Sweden)
Lauren N Schaefer
2015-06-01
Full Text Available In volcanic regions, reliable estimates of mechanical properties for specific volcanic events such as cyclic inflation-deflation cycles by magmatic intrusions, thermal stressing, and high temperatures are crucial for building accurate models of volcanic phenomena. This study focuses on the challenge of characterizing volcanic materials for the numerical analyses of such events. To do this, we evaluated the physical (porosity, permeability) and mechanical (strength) properties of basaltic rocks at Pacaya Volcano (Guatemala) through a variety of laboratory experiments, including: room temperature, high temperature (935 °C), and cyclically-loaded uniaxial compressive strength tests on as-collected and thermally-treated rock samples. Knowledge of the material response to such varied stressing conditions is necessary to analyze potential hazards at Pacaya, whose persistent activity has led to 13 evacuations of towns near the volcano since 1987. The rocks show a non-linear relationship between permeability and porosity, which relates to the importance of the crack network connecting the vesicles in these rocks. Here we show that strength not only decreases with porosity and permeability, but also with prolonged stressing (i.e., at lower strain rates) and upon cooling. Complementary tests applying cyclic episodes of thermal or load stressing showed no systematic weakening of the material on the scale of our experiments. Most importantly, we show the extremely heterogeneous nature of volcanic edifices that arises from differences in porosity and permeability of the local lithologies, the limited lateral extent of lava flows, and the scars of previous collapse events. Input of these process-specific rock behaviors into slope stability and deformation models can change the resultant hazard analysis. We anticipate that an increased parameterization of rock properties will improve mitigation power.
19. Antarctic volcanoes: A remote but significant hazard
Science.gov (United States)
Geyer, Adelina; Martí, Alex; Folch, Arnau; Giralt, Santiago
2017-04-01
Ash emitted during explosive volcanic eruptions can be dispersed over massive areas of the globe, posing a threat to both human health and infrastructures such as air traffic. Some of the eruptions that occurred during this decade (e.g. 14/04/2010 - Eyjafjallajökull, Iceland; 24/05/2011 - Grímsvötn, Iceland; 05/06/2011 - Puyehue-Cordón Caulle, Chile) have strongly affected air traffic in different areas of the world, leading to economic losses of billions of euros. Of the tens of volcanoes located in Antarctica, at least nine are known to be active and five of them have reported volcanic activity in historical times. However, until now, no attention has been paid to the possible social, economic and environmental consequences of an eruption at high southern latitudes, perhaps because its impacts are assumed to be minor or local, and mainly restricted to the practically uninhabited Antarctic continent. We show here, as a case study and using climate models, how volcanic ash emitted during a regular eruption of one of the most active volcanoes in Antarctica, Deception Island (South Shetland Islands), could reach the African continent as well as Australia and South America. The volcanic cloud could strongly affect air traffic not only in the region and at high southern latitudes, but also the flights connecting Africa, South America and Oceania. The results obtained are crucial to understanding the patterns of volcanic ash distribution at high southern latitudes, with obvious implications for tephrostratigraphical and chronological studies that provide valuable isochrones with which to synchronize palaeoclimate records. This research was partially funded by the MINECO grants VOLCLIMA (CGL2015-72629-EXP) and POSVOLDEC (CTM2016-79617-P) (AEI/FEDER, UE), the Ramón y Cajal research program (RYC-2012-11024) and the NEMOH European project (REA grant agreement n° 289976).
20. The Lowell Observatory Predoctoral Scholar Program
Science.gov (United States)
Prato, Lisa; Nofi, Larissa
2018-01-01
Lowell Observatory is pleased to solicit applications for our Predoctoral Scholar Fellowship Program. Now beginning its tenth year, this program is designed to provide unique research opportunities to graduate students in good standing, currently enrolled at Ph.D. granting institutions. Lowell staff research spans a wide range of topics, from astronomical instrumentation, to icy bodies in our solar system, exoplanet science, stellar populations, star formation, and dwarf galaxies. Strong collaborations, the new Ph.D. program at Northern Arizona University, and cooperative links across the greater Flagstaff astronomical community create a powerful multi-institutional locus in northern Arizona. Lowell Observatory's new 4.3 meter Discovery Channel Telescope is operating at full science capacity and boasts some of the most cutting-edge and exciting capabilities available in optical/infrared astronomy. Student research is expected to lead to a thesis dissertation appropriate for graduation at the doctoral level at the student's home institution. For more information, see http://www2.lowell.edu/rsch/predoc.php and links therein. Applications for Fall 2018 are due by May 1, 2018; alternate application dates will be considered on an individual basis.
1. SPASE, Metadata, and the Heliophysics Virtual Observatories
Science.gov (United States)
Thieman, James; King, Todd; Roberts, Aaron
2010-01-01
To provide data search and access capability in the field of Heliophysics (the study of the Sun and its effects on the Solar System, especially the Earth) a number of Virtual Observatories (VO) have been established both via direct funding from the U.S. National Aeronautics and Space Administration (NASA) and through other funding agencies in the U.S. and worldwide. At least 15 systems can be labeled as Virtual Observatories in the Heliophysics community, 9 of them funded by NASA. The problem is that different metadata and data search approaches are used by these VO's and a search for data relevant to a particular research question can involve consulting with multiple VO's - needing to learn a different approach for finding and acquiring data for each. The Space Physics Archive Search and Extract (SPASE) project is intended to provide a common data model for Heliophysics data and therefore a common set of metadata for searches of the VO's. The SPASE Data Model has been developed through the common efforts of the Heliophysics Data and Model Consortium (HDMC) representatives over a number of years. We currently have released Version 2.1 of the Data Model. The advantages and disadvantages of the Data Model will be discussed along with the plans for the future. Recent changes requested by new members of the SPASE community indicate some of the directions for further development.
2. Fine Guidance Sensing for Coronagraphic Observatories
Science.gov (United States)
Brugarolas, Paul; Alexander, James W.; Trauger, John T.; Moody, Dwight C.
2011-01-01
Three options have been developed for Fine Guidance Sensing (FGS) for coronagraphic observatories using a Fine Guidance Camera within a coronagraphic instrument. Coronagraphic observatories require very fine precision pointing in order to image faint objects at very small distances from a target star. The Fine Guidance Camera measures the direction to the target star. The first option, referred to as Spot, was to collect all of the light reflected from a coronagraph occulter onto a focal plane, producing an Airy-type point spread function (PSF). This would allow almost all of the starlight from the central star to be used for centroiding. The second approach, referred to as Punctured Disk, collects the light that bypasses a central obscuration, producing a PSF with a punctured central disk. The final approach, referred to as Lyot, collects light after passing through the occulter at the Lyot stop. The study includes generation of representative images for each option by the science team, followed by an engineering evaluation of a centroiding or a photometric algorithm for each option. After the alignment of the coronagraph to the fine guidance system, a "nulling" point on the FGS focal point is determined by calibration. This alignment is implemented by a fine alignment mechanism that is part of the fine guidance camera selection mirror. If the star images meet the modeling assumptions, and the star "centroid" can be driven to that nulling point, the contrast for the coronagraph will be maximized.
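The centroiding step common to all three options can be illustrated with a minimal intensity-weighted ("center of mass") estimator. This is a generic sketch, not the flight algorithm; the Gaussian spot below is an assumed stand-in for the Airy-type PSF:

```python
import numpy as np

# Synthetic 64x64 focal-plane image: a Gaussian spot at an assumed,
# deliberately off-grid "star" position (true_cx, true_cy).
y, x = np.mgrid[0:64, 0:64]
true_cx, true_cy = 30.3, 33.7
sigma = 3.0
img = np.exp(-(((x - true_cx) ** 2 + (y - true_cy) ** 2) / (2 * sigma**2)))

# Intensity-weighted centroid: each pixel coordinate weighted by its flux.
total = img.sum()
cx = (img * x).sum() / total
cy = (img * y).sum() / total
print(round(cx, 2), round(cy, 2))  # → 30.3 33.7
```

In a guidance loop, the difference between this centroid and the calibrated "nulling" point on the focal plane would drive the fine-pointing correction.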
3. Developing a Virtual Network of Research Observatories
Science.gov (United States)
Hooper, R. P.; Kirschtl, D.
2008-12-01
The hydrologic community has been discussing the concept of a network of observatories for the advancement of hydrologic science in areas of scaling processes, in testing generality of hypotheses, and in examining non-linear couplings between hydrologic, biotic, and human systems. The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) is exploring the formation of a virtual network of observatories, formed from existing field studies without regard to funding source. Such a network would encourage sharing of data, metadata, field methods, and data analysis techniques to enable multidisciplinary synthesis, meta-analysis, and scientific collaboration in hydrologic and environmental science and engineering. The virtual network would strive to provide both the data and the environmental context of the data through advanced cyberinfrastructure support. The foundation for this virtual network is Water Data Services that enable the publication of time-series data collected at fixed points using a services-oriented architecture. These publication services, developed in the CUAHSI Hydrologic Information Systems project, permit the discovery of data from both academic and government sources through a single portal. Additional services under consideration are publication of geospatial data sets, immersive environments based upon site digital elevation models, and a common web portal to member sites populated with structured data about the site (such as land use history and geologic setting) to permit understanding the environmental context of the data being shared.
4. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring
Science.gov (United States)
Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.
2008-01-01
In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)), is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will be focused on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004. The system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for sensor-web topology to support efficient data communication and command control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real-time; and 4) remote network management and reprogramming tools. The space and in-situ control components of the system will be
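The priority-based bandwidth allocation described in development item 3) can be sketched generically: a node queues packets with mission-driven priorities and drains only as many as the currently available bandwidth allows. The class and priority values below are illustrative assumptions, not the OASIS implementation:

```python
import heapq

class SensorNode:
    """Toy sensor node: mission-critical packets go out first."""

    def __init__(self):
        self._q = []    # min-heap; lower number = higher priority
        self._seq = 0   # tie-breaker preserving arrival order

    def enqueue(self, packet, priority):
        heapq.heappush(self._q, (priority, self._seq, packet))
        self._seq += 1

    def drain(self, bandwidth_packets):
        """Send the highest-priority packets that fit this cycle."""
        sent = []
        while self._q and len(sent) < bandwidth_packets:
            sent.append(heapq.heappop(self._q)[2])
        return sent

node = SensorNode()
node.enqueue("housekeeping", priority=5)
node.enqueue("eruption-trigger", priority=0)  # assumed mission-critical
node.enqueue("waveform-chunk", priority=2)
print(node.drain(2))  # → ['eruption-trigger', 'waveform-chunk']
```

Under bandwidth pressure (here, room for only two packets), low-priority telemetry simply waits for a later cycle, which is the behavior the abstract attributes to the smart allocation algorithms.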
5. Synergistic Use of Satellite Volcano Detection and Science: A Fifteen Year Perspective of ASTER on Terra
Science.gov (United States)
Ramsey, M. S.
2014-12-01
The success of early-mission observations of active volcanic processes using the ASTER instrument on Terra gave rise to a funded NASA program designed both to increase the number of ASTER observations following an eruption and to validate the satellite data. The urgent request protocol (URP) system for ASTER grew out of this initial study and has operated in conjunction with, and with the support of, the Alaska Volcano Observatory, the University of Alaska Fairbanks, the University of Hawaii, the USGS Land Processes DAAC, and the ASTER science team. The University of Pittsburgh oversees this rapid response/sensor-web system, which until 2011 had focused solely on the active volcanoes in the North Pacific region. Since that time, it has been expanded to operate globally with AVHRR and MODIS, and now ASTER VNIR/TIR data are being acquired at numerous erupting volcanoes around the world. This program relies on the increased temporal resolution of AVHRR/MODIS midwave infrared data to trigger the next available ASTER observation, which results in ASTER data as frequently as every 2-5 days. For many targets, the URP has increased the observational frequency over active eruptions by as much as 50%. The data have been used for operational response to new eruptions and for longer-term scientific studies, such as capturing detailed changes in lava domes/flows, pyroclastic flows and lahars. These data have also been used to infer the emplacement of new lava lobes, detect endogenous dome growth, and interpret hazardous dome collapse events. The emitted TIR radiance from lava surfaces has also been used effectively to model composition, texture and degassing. Now, this long-term archive of volcanic image data is being mined to provide statistics on the expectations of future high-repeat TIR data such as that proposed for the NASA HyspIRI mission. In summary, this operational/scientific program utilizing the unique properties of ASTER and the Terra mission has shown the potential for
6. The Active Lava Flows of Kilauea Volcano, Hawaii
'lahar' is from Indonesia, a country with some of the most active and destructive volcanoes .... tourist-dependent businesses such as airlines, rental car companies, and hotels. ... excellent viewing conditions and photo opportunities. The heat.
7. Vegetation damage and recovery after Chiginagak Volcano Crater drainage event
Data.gov (United States)
Department of the Interior — From August 20 — 23, 2006, I revisited Chiginigak volcano to document vegetation recovery after the crater drainage event that severely damaged vegetation in May of...
8. Penguin Bank: A Loa-Trend Hawaiian Volcano
Science.gov (United States)
Xu, G.; Blichert-Toft, J.; Clague, D. A.; Cousens, B.; Frey, F. A.; Moore, J. G.
2007-12-01
Hawaiian volcanoes along the Hawaiian Ridge, from Molokai Island in the northwest to the Big Island in the southeast, define two parallel trends of volcanoes known as the Loa and Kea spatial trends. In general, lavas erupted along these two trends have distinctive geochemical characteristics that have been used to define the spatial distribution of geochemical heterogeneities in the Hawaiian plume (e.g., Abouchami et al., 2005). These geochemical differences are well established for the volcanoes forming the Big Island. The longevity of the Loa-Kea geochemical differences can be assessed by studying East and West Molokai volcanoes and Penguin Bank, which form a volcanic ridge perpendicular to the Loa and Kea spatial trends. Previously we showed that East Molokai volcano (~1.5 Ma) is exclusively Kea-like and that West Molokai volcano (~1.8 Ma) includes lavas that are both Loa- and Kea-like (Xu et al., 2005 and 2007). The submarine Penguin Bank (~2.2 Ma), probably an independent volcano constructed west of West Molokai volcano, should be dominantly Loa-like if the systematic Loa and Kea geochemical differences were present at ~2.2 Ma. We have studied 20 samples from Penguin Bank, including both submarine and subaerially-erupted lavas recovered by dive and dredging. All lavas are tholeiitic basalt representing shield-stage lavas. Trace element ratios, such as Sr/Nb and Zr/Nb, and isotopic ratios of Sr and Nd are clearly Loa-like. On an ɛNd-ɛHf plot, Penguin Bank lavas fall within the field defined by Mauna Loa lavas. Pb isotopic data lie near the Loa-Kea boundary line defined by Abouchami et al. (2005). In conclusion, we find that from NE to SW, i.e., perpendicular to the Loa and Kea spatial trends, there is a shift from Kea-like East Molokai lavas to Loa-like Penguin Bank lavas, with the intermediate West Molokai volcano having lavas with both Loa- and Kea-like geochemical features. Therefore, the Loa and Kea geochemical dichotomy exhibited by Big Island volcanoes
9. Remote measurement of high preeruptive water vapor emissions at Sabancaya volcano by passive differential optical absorption spectroscopy
Science.gov (United States)
Kern, Christoph; Masias, Pablo; Apaza, Fredy; Reath, Kevin; Platt, Ulrich
2017-01-01
Water (H2O) is by far the most abundant volcanic volatile species and plays a predominant role in driving volcanic eruptions. However, numerous difficulties associated with making accurate measurements of water vapor in volcanic plumes have limited their use as a diagnostic tool. Here we present the first detection of water vapor in a volcanic plume using passive visible-light differential optical absorption spectroscopy (DOAS). Ultraviolet and visible-light DOAS measurements were made on 21 May 2016 at Sabancaya Volcano, Peru. We find that Sabancaya's plume contained an exceptionally high relative water vapor abundance 6 months prior to its November 2016 eruption. Our measurements yielded average sulfur dioxide (SO2) emission rates of 800–900 t/d, H2O emission rates of around 250,000 t/d, and an H2O/SO2 molecular ratio of 1000 which is about an order of magnitude larger than typically found in high-temperature volcanic gases. We attribute the high water vapor emissions to a boiling-off of Sabancaya's hydrothermal system caused by intrusion of magma to shallow depths. This hypothesis is supported by a significant increase in the thermal output of the volcanic edifice detected in infrared satellite imagery leading up to and after our measurements. Though the measurement conditions encountered at Sabancaya were very favorable for our experiment, we show that visible-light DOAS systems could be used to measure water vapor emissions at numerous other high-elevation volcanoes. Such measurements would provide observatories with additional information particularly useful for forecasting eruptions at volcanoes harboring significant hydrothermal systems.
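As a consistency check, the quoted mass emission rates convert to the reported molar ratio once molar masses are applied. A sketch; the 850 t/d value is my assumed midpoint of the 800-900 t/d SO2 range, not a number from the measurements:

```python
# Standard molar masses (g/mol).
M_SO2 = 64.066
M_H2O = 18.015

so2_t_per_day = 850.0      # assumed midpoint of the 800-900 t/d range
h2o_t_per_day = 250_000.0  # quoted H2O emission rate

# Mass units cancel in the ratio, so moles/day ∝ mass flux / molar mass.
molar_ratio = (h2o_t_per_day / M_H2O) / (so2_t_per_day / M_SO2)
print(round(molar_ratio))  # → 1046, i.e. the order-1000 H2O/SO2 ratio reported
```

The factor-of-ten excess over typical high-temperature volcanic gas ratios is what the authors attribute to boiling of the hydrothermal system.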
10. The dynamics of Hawaiian-style eruptions: a century of study: Chapter 8 in Characteristics of Hawaiian volcanoes
Science.gov (United States)
Mangan, Margaret T.; Cashman, Katharine V.; Swanson, Donald A.; Poland, Michael P.; Takahashi, T. Jane; Landowski, Claire M.
2014-01-01
This chapter, prepared in celebration of the Hawaiian Volcano Observatoryʼs centennial, provides a historical lens through which to view modern paradigms of Hawaiian-style eruption dynamics. The models presented here draw heavily from observations, monitoring, and experiments conducted on Kīlauea Volcano, which, as the site of frequent and accessible eruptions, has attracted scientists from around the globe. Long-lived eruptions in particular—Halema‘uma‘u 1907–24, Kīlauea Iki 1959, Mauna Ulu 1969–74, Pu‘u ‘Ō‘ō-Kupaianaha 1983–present, and Halema‘uma‘u 2008–present—have offered incomparable opportunities to conceptualize and constrain theoretical models with multidisciplinary data and to field-test model results. The central theme in our retrospective is the interplay of magmatic gas and near-liquidus basaltic melt. A century of study has shown that gas exsolution facilitates basaltic dike propagation; volatile solubility and vesiculation kinetics influence magma-rise rates and fragmentation depths; bubble interactions and gas-melt decoupling modulate magma rheology, eruption intensity, and plume dynamics; and pyroclast outgassing controls characteristics of eruption deposits. Looking to the future, we anticipate research leading to a better understanding of how eruptive activity is influenced by volatiles, including the physics of mixed CO2-H2O degassing, gas segregation in nonuniform conduits, and vaporization of external H2O during magma ascent.
11. Two hundred years of magma transport and storage at Kīlauea Volcano, Hawai'i, 1790-2008
Science.gov (United States)
Wright, Thomas L.; Klein, Fred W.
2014-01-01
This publication summarizes the evolution of the internal plumbing of Kīlauea Volcano on the Island of Hawaiʻi from the first documented eruption in 1790 to the explosive eruption of March 2008 in Halemaʻumaʻu Crater. For the period before the founding of the Hawaiian Volcano Observatory in 1912, we rely on written observations of eruptive activity, earthquake swarms, and periodic draining of magma from the lava lake present in Kīlauea Caldera. After 1912 the written observations are supplemented by continuous measurement of tilting of the ground at Kīlauea’s summit and by a continuous instrumental record of earthquakes, both measurements made during 1912–56 by a single pendulum seismometer housed on the northeast edge of Kīlauea’s summit. Interpretations become more robust following the installation of seismic and deformation networks in the 1960s. A major advance in the 1990s was the ability to continuously record and telemeter ground deformation to allow its precise correlation with seismic activity before and after eruptions, intrusions, and large earthquakes.
12. Volcano-hydrothermal energy research at white Island, New Zealand
International Nuclear Information System (INIS)
Allis, R.G.
1994-01-01
This paper presents the White Island (New Zealand) volcano-hydrothermal research project by the N.Z. DSIR and the Geological Survey of Japan, which is investigating the coupling between magmatic and geothermal systems. The first phase of this investigation is a geophysical survey of the crater floor of the andesite volcano, White Island during 1991/1992, to be followed by drilling from the crater floor into the hydrothermal system. (TEC). 4 figs., 8 refs
13. Geochemical signatures of tephras from Quaternary Antarctic Peninsula volcanoes
OpenAIRE
Kraus,Stefan; Kurbatov,Andrei; Yates,Martin
2013-01-01
In the northern Antarctic Peninsula area, at least 12 Late Pleistocene-Holocene volcanic centers could be potential sources of tephra layers in the region. We present unique geochemical fingerprints for ten of these volcanoes using major, trace, rare earth element, and isotope data from 95 samples of tephra and other eruption products. The volcanoes have predominantly basaltic and basaltic andesitic compositions. The Nb/Y ratio proves useful to distinguish between volcanic centers located on ...
14. Estimates of elastic plate thicknesses beneath large volcanos on Venus
Science.gov (United States)
Mcgovern, Patrick J.; Solomon, Sean C.
1992-01-01
Magellan radar imaging and topography data are now available for a number of volcanos on Venus greater than 100 km in radius. These data can be examined to reveal evidence of the flexural response of the lithosphere to the volcanic load. On Earth, flexure beneath large hotspot volcanos results in an annular topographic moat that is partially to completely filled in by sedimentation and mass wasting from the volcano's flanks. On Venus, erosion and sediment deposition are considered to be negligible at the resolution of Magellan images. Thus, it may be possible to observe evidence of flexure by the ponding of recent volcanic flows in the moat. We also might expect to find topographic signals from unfilled moats surrounding large volcanos on Venus, although these signals may be partially obscured by regional topography. Also, in the absence of sedimentation, tectonic evidence of deformation around large volcanos should be evident except where buried by very young flows. We use analytic solutions in axisymmetric geometry for deflections and stresses resulting from loading of a plate overlying an inviscid fluid. Solutions for a set of disk loads are superimposed to obtain a solution for a conical volcano. The deflection of the lithosphere produces an annular depression or moat, the extent of which can be estimated by measuring the distance from the volcano's edge to the first zero crossing or to the peak of the flexural arch. Magellan altimetry data records (ARCDRs) from data cycle 1 are processed using the GMT mapping and graphics software to produce topographic contour maps of the volcanos. We then take topographic profiles that cut across the annular and ponded flows seen on the radar images. By comparing the locations of these flows to the predicted moat locations from a range of models, we estimate the elastic plate thickness that best fits the observations, together with the uncertainty in that estimate.
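The radial scale linking moat and arch position to elastic plate thickness is set by the flexural parameter of a thin elastic plate over an inviscid fluid. A rough sketch; the material constants below are illustrative assumptions for Venus, not values from the abstract:

```python
E = 6.0e10      # Young's modulus, Pa (assumed)
nu = 0.25       # Poisson's ratio (assumed)
g = 8.87        # Venus surface gravity, m/s^2
rho_m = 3300.0  # density of the fluid substrate, kg/m^3 (assumed)

def flexural_parameter(h):
    """Flexural parameter alpha (m) for elastic plate thickness h (m)."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity, N m
    return (4.0 * D / (rho_m * g)) ** 0.25

# Thicker plates push the moat and flexural arch farther from the
# volcano's edge, which is the signal matched against topographic profiles.
for h_km in (10, 20, 40):
    print(h_km, "km ->", round(flexural_parameter(h_km * 1e3) / 1e3, 1), "km")
```

Because alpha grows as h^(3/4), the measured distance to the first zero crossing or arch peak constrains h, with the uncertainty in that distance propagating into the thickness estimate.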
15. Science Potential of a Deep Ocean Antineutrino Observatory
International Nuclear Information System (INIS)
Dye, S.T.
2007-01-01
This paper presents the science potential of a deep ocean antineutrino observatory being developed in Hawaii. The observatory design allows for relocation from one site to another. Positioning the observatory some 60 km distant from a nuclear reactor complex enables precision measurement of neutrino mixing parameters, leading to a determination of the neutrino mass hierarchy and θ13. At a mid-Pacific location the observatory measures the flux and ratio of uranium and thorium decay neutrinos from Earth's mantle and performs a sensitive search for a hypothetical natural fission reactor in Earth's core. A subsequent deployment at another mid-ocean location would test lateral heterogeneity of uranium and thorium in Earth's mantle
Directory of Open Access Journals (Sweden)
Yasuhiro Minamoto
2013-06-01
The Japan Meteorological Agency (JMA) operates four geomagnetic observatories in Japan. Kakioka Magnetic Observatory (KMO), commissioned in 1913, is the oldest; the hourly records at KMO cover almost 100 years. KMO is JMA's headquarters for geomagnetic and geoelectric observations. Almost all data are available to researchers free of charge at the KMO website. KMO and two other observatories have been certified as INTERMAGNET observatories, and quasi-real-time geomagnetic data from them are available at the INTERMAGNET website.
17. Recent Seismicity in the Ceboruco Volcano, Western Mexico
Science.gov (United States)
Nunez, D.; Chávez-Méndez, M. I.; Nuñez-Cornu, F. J.; Sandoval, J. M.; Rodriguez-Ayala, N. A.; Trejo-Gomez, E.
2017-12-01
The Ceboruco volcano is the largest (2280 m a.s.l.) of several volcanoes along the Tepic-Zacoalco rift zone in Nayarit state (Mexico). During the last 1000 years, this volcano had effusive-explosive episodes with eight eruptions, an average of one eruption every 125 years. Since the last eruption occurred in 1870, 147 years ago, the likelihood of a new eruption is high, and the hazard is considerable owing to nearby population centers and the important roads and lifelines that traverse the volcano's slopes. This hazard underscores the importance of monitoring the seismicity associated with the Ceboruco volcano, whose ongoing activity is evidenced by fumaroles and earthquakes. Between 2003 and 2008, this region was monitored by just one Lennartz Marslite seismograph featuring a Lennartz Le3D sensor (1 Hz) [Rodríguez Uribe et al. (2013)], from which it was observed that seismicity rates and stresses appeared to be increasing, indicating higher levels of activity within the volcano. Until July 2017, a semi-permanent network with three Taurus (Nanometrics) and one Q330 Quanterra (Kinemetrics) digitizers with Lennartz 3Dlite sensors of 1 Hz natural frequency was recording in the area. In this study, we present the most recent seismicity obtained by the semi-permanent network and a temporary network of 21 Obsidian 4X and 8X (Kinemetrics) recorders covering an area of 16 km x 16 km with one station every 2.5-3 km, recording from November 2016 to July 2017.
18. The diversity of mud volcanoes in the landscape of Azerbaijan
Science.gov (United States)
Rashidov, Tofig
2014-05-01
As a natural phenomenon, the mud volcanoes of Azerbaijan have been known since ancient times; historical records describing them date back to the fifth century. More detailed study of this phenomenon began in the second half of the 19th century. The term "mud volcano" (or "mud hill") was introduced by academician H.W. Abich (1863), who defined the phenomenon more precisely; none of the earlier definitions gave so clear and comprehensive an explanation of it. In comparison with magmatic volcanoes, mud volcanoes are globally restricted in distribution; they occur mainly within the Alpine-Himalayan, Pacific and Central Asian mobile belts, in more than 30 countries (Colombia, Trinidad, Italy, Romania, Ukraine, Georgia, Azerbaijan, Turkmenistan, Iran, Pakistan, Indonesia, Burma, Malaysia, etc.). The zones of mud volcano development also correspond to zones of marine accretionary prisms, for example the South Caspian depression, Barbados, Cascadia (North America), Costa Rica, Panama and the Japan trench; onshore examples include Indonesia, Japan, Trinidad and Taiwan. Mud volcanism in non-accretionary settings includes the Black Sea, the Alboran Sea, the Gulf of Mexico (Louisiana coast) and the Salton Sea. New investigations continue to reveal mud volcanoes in places not previously considered traditional areas of mud volcano development (e.g. the West Nile delta). Azerbaijan is the classic region of mud volcano development: of the more than 800 mud volcanoes worldwide, about 400 are located onshore and within the South Caspian basin, which includes the territory of eastern Azerbaijan (the Shemakha-Gobustan and Lower Kura regions and the Absheron peninsula), the adjacent waters of the South Caspian (the Baku and Absheron archipelagoes) and southwestern Turkmenistan, and which represents an area of great downwarping with a thick (over 25 km) sedimentary series. Generally, in the modern relief the mud volcanoes represent more or less large uplifts
19. Artificial intelligence for the CTA Observatory scheduler
Science.gov (United States)
Colomé, Josep; Colomer, Pau; Campreciós, Jordi; Coiffard, Thierry; de Oña, Emma; Pedaletti, Giovanna; Torres, Diego F.; Garcia-Piquer, Alvaro
2014-08-01
The Cherenkov Telescope Array (CTA) project will be the next-generation ground-based very high energy gamma-ray instrument. The success of the precursor projects (i.e., HESS, MAGIC, VERITAS) motivated the construction of this large infrastructure, which has been included in the roadmap of the ESFRI projects since 2008. CTA is planned to start the construction phase in 2015 and will consist of two arrays of Cherenkov telescopes operated as a proposal-driven open observatory, with sites foreseen in the southern and northern hemispheres. The CTA observatory will handle several observation modes and will have to operate tens of telescopes with highly efficient and reliable control. Thus, the CTA planning tool is a key element in the control layer for the optimization of observatory time. The main purpose of the scheduler for CTA is the allocation of multiple tasks to one single array or to multiple sub-arrays of telescopes, while maximizing the scientific return of the facility and minimizing the operational costs. The scheduler considers long- and short-term varying conditions to optimize the prioritization of tasks. A short-term scheduler provides the system with the capability to adapt, in almost real time, the selected task to the varying execution constraints (i.e., targets of opportunity, health or status of the system components, environment conditions). The scheduling procedure ensures that long-term planning decisions are correctly transferred to the short-term prioritization process for a suitable selection of the next task to execute on the array. In this contribution we present the constraints to CTA task scheduling that helped classify it as a Flexible Job-Shop Problem case and find its optimal solution based on Artificial Intelligence techniques. We describe the scheduler prototype that uses a Guarded Discrete Stochastic Neural Network (GDSN), for an easy representation of the possible long- and short-term planning solutions, and Constraint
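The short-term re-prioritization idea — filter tasks by their current execution constraints, then fall back on long-term planning weights — can be caricatured in a few lines. This is a deliberately naive greedy sketch for illustration only; the task names, fields and selection rule are invented here and bear no relation to the actual GDSN/constraint-based prototype the abstract describes.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    science_priority: float   # weight inherited from long-term planning
    min_elevation: float      # degrees; a visibility constraint
    needs_dark: bool = False  # e.g. cannot run in bright moonlight

def next_task(tasks, elevations, is_dark, healthy_array=True):
    """Filter by current constraints (visibility, darkness, system health),
    then pick the highest long-term priority among the feasible tasks."""
    if not healthy_array:
        return None
    feasible = [t for t in tasks
                if elevations.get(t.name, 0.0) >= t.min_elevation
                and (is_dark or not t.needs_dark)]
    return max(feasible, key=lambda t: t.science_priority, default=None)

tasks = [
    Task("ToO follow-up", 10.0, 30.0, needs_dark=True),
    Task("AGN monitoring", 6.0, 40.0),
    Task("Survey field", 3.0, 25.0),
]
elev = {"ToO follow-up": 20.0, "AGN monitoring": 55.0, "Survey field": 50.0}
print(next_task(tasks, elev, is_dark=True).name)  # prints "AGN monitoring"
```

A real short-term scheduler would re-run such a selection whenever a target of opportunity arrives or a sub-array changes state, which is the adaptive behaviour the abstract emphasizes.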
20. Felsic maar-diatreme volcanoes: a review
Science.gov (United States)
Ross, Pierre-Simon; Carrasco Núñez, Gerardo; Hayman, Patrick
2017-02-01
Felsic maar-diatreme volcanoes host major ore deposits but have been largely ignored in the volcanology literature, especially the diatreme portion of the system. Here, we use two Mexican tuff rings as analogs for the maar ejecta ring, new observations from one diatreme, and the economic geology literature on four other mineralized felsic maar-diatremes to produce an integrated picture of this type of volcano. The ejecta rings are up to more than 50 m thick and extend laterally up to ~1.5 km from the crater edge. In the two Mexican examples, the lower part of the ejecta ring is dominated by pyroclastic surge deposits with abundant lithic clasts (up to 80% at Hoya de Estrada). These deposits display low-angle cross-bedding, dune bedforms, undulating beds, channels, bomb sags and accretionary lapilli, and are interpreted as phreatomagmatic. Rhyolitic juvenile clasts at Tepexitl have only 0-25% vesicles in this portion of the ring. The upper parts of the ejecta ring sequences in the Mexican examples have a different character: lithic clasts can be less abundant, the grain size is typically coarser, and the juvenile clasts can differ in character (with some more vesicular fragments); fragmentation was probably shallower at this stage. The post-eruptive maar crater infill is known at Wau and consists of reworked pyroclastic deposits as well as lacustrine and other sediments. Underneath are bedded upper diatreme deposits, interpreted as pyroclastic surge and fall deposits. The upper diatreme and post-eruptive crater deposits have dips larger than 30° at Wau, with approximately centroclinal attitudes. At still lower structural levels, the diatreme pyroclastic infill is largely unbedded; Montana Tunnels and Kelian are good examples of this. At Cerro de Pasco, the pyroclastic infill seems bedded despite about 500 m of post-eruptive erosion relative to the pre-eruptive surface. The contact between the country rocks and the diatreme is sometimes characterized by country rock
1. The Steward Observatory asteroid relational database
Science.gov (United States)
Sykes, Mark V.; Alvarezdelcastillo, Elizabeth M.
1991-01-01
The Steward Observatory Asteroid Relational Database (SOARD) was created as a flexible tool for undertaking studies of asteroid populations and sub-populations, probing the biases intrinsic to asteroid databases, ascertaining the completeness of data pertaining to specific problems, aiding in the development of observational programs, and developing pedagogical materials. To date, SOARD has compiled an extensive list of data available on asteroids and made it accessible through a single menu-driven database program. Users may obtain tailored lists of asteroid properties for any subset of asteroids, or output files suitable for plotting spectral data on individual asteroids. The program has online help as well as user and programmer documentation manuals. SOARD has already provided data to fulfill requests by members of the astronomical community, and it continues to grow as data are added to the database and new features are added to the program.
2. In situ vector calibration of magnetic observatories
Directory of Open Access Journals (Sweden)
A. Gonsette
2017-09-01
The goal of magnetic observatories is to measure and provide vector magnetic field data in a geodetic coordinate system. For that purpose, instrument set-up and calibration are crucial; in particular, the scale factor and orientation of a vector magnetometer may affect the magnetic field measurement. Here, we highlight the baseline concept and demonstrate that it is essential for data quality control, showing how baselines can reveal a possible calibration error. We also provide a calibration method based on high-frequency absolute measurements. This method determines a transformation matrix for correcting variometer data suffering from scale factor and orientation errors. We finally present a practical case in which recovered data have been successfully compared to those from a reference magnetometer.
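The core of such a calibration — estimating a transformation matrix (and baseline) that maps variometer readings onto absolute vector measurements — reduces to a linear least-squares fit. The sketch below uses synthetic numbers invented for illustration and assumes the simplified model B ≈ M v + b0 fitted from simultaneous variometer and absolute samples; it is not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic truth: small mis-orientations plus per-axis scale errors,
# and a constant baseline offset (all values invented for illustration)
def rot(axis, t):
    c, s = np.cos(t), np.sin(t)
    R = np.eye(3)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

M_true = (rot(0, np.deg2rad(0.5)) @ rot(1, np.deg2rad(-0.3))
          @ rot(2, np.deg2rad(0.2)) @ np.diag([1.002, 0.997, 1.001]))
b0_true = np.array([21000.0, -300.0, 43000.0])     # baseline (nT)

# Simultaneous samples: variometer variations v and absolute field B
v = rng.normal(0.0, 80.0, size=(500, 3))           # nT, high-frequency content
B = v @ M_true.T + b0_true + rng.normal(0.0, 0.1, size=(500, 3))

# Least-squares fit of B ~ v @ M.T + b0 via an augmented design matrix
A = np.hstack([v, np.ones((len(v), 1))])
X, *_ = np.linalg.lstsq(A, B, rcond=None)
M_est, b0_est = X[:3].T, X[3]
```

With M_est in hand, variometer data suffering from scale-factor and orientation errors can be corrected as B_corr = v @ M_est.T + b0_est, which is the role the transformation matrix plays in the abstract.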
3. The sunspot databases of the Debrecen Observatory
Science.gov (United States)
Baranyi, Tünde; Gyori, Lajos; Ludmány, András
2015-08-01
We present the sunspot databases and online tools available at the Debrecen Heliophysical Observatory: the Debrecen Photoheliographic Data (DPD, 1974-), the SOHO/MDI-Debrecen Data (SDD, 1996-2010), the SDO/HMI-Debrecen Data (HMIDD, 2010-), and the revised version of the Greenwich Photoheliographic Results (GPR, 1874-1976), presented together with the Hungarian Historical Solar Drawings (HHSD, 1872-1919). These are the most detailed and reliable documentations of sunspot activity in the relevant time intervals. They are very useful for studying sunspot group evolution on time scales from hours to weeks. Time-dependent differences between the available long-term sunspot databases are investigated and cross-calibration factors are determined between them. This work has received funding from the European Community's Seventh Framework Programme (FP7/2012-2015) under grant agreement No. 284461 (eHEROES).
4. MMS Observatory Thermal Vacuum Results Contamination Summary
Science.gov (United States)
Rosecrans, Glenn P.; Errigo, Therese; Brieda, Lubos
2014-01-01
The MMS mission is a constellation of 4 observatories designed to investigate the fundamental plasma physics of reconnection in the Earth's magnetosphere. Each spacecraft has undergone extensive environmental testing to prepare it for its minimum two-year mission. The various instrument suites measure electric and magnetic fields, energetic particles, and plasma composition. Thermal vacuum (TV) testing was conducted at the Naval Research Laboratory (NRL) in their Big Blue vacuum chamber. The individual spacecraft were tested while enclosed in a cryopanel enclosure called a Hamster cage. Specific contamination control validations were actively monitored by several QCMs, a facility RGA, and at times, by 16 ion gauges. Each spacecraft underwent a bakeout phase, followed by 4 thermal cycles. Unique aspects of the TV environment included slow pump-downs with represses, thruster firings, helium identification, and monitoring pressure spikes with ion gauges. Various data from these TV tests will be shown along with lessons learned.
5. Meteorological observatory for Antarctic data collection
International Nuclear Information System (INIS)
Grigioni, P.; De Silvestri, L.
1996-01-01
In recent years, a great number of automatic weather stations (AWSs) were installed in Antarctica, with the aim of closely examining the weather and climate of this region and improving the coverage of measuring points on the Antarctic surface. In 1987 the Italian Antarctic Project started to set up a meteorological network in an area not completely covered by other countries. Some of the activities performed by the meteorological observatory, concerning technical functions such as maintenance of the AWSs and the execution of radio soundings, or relating to scientific purposes such as validation and elaboration of collected data, are described. Finally, some climatological considerations on the thermal behaviour of the Antarctic troposphere, such as the 'coreless winter', and on the wind field, including katabatic flows in North Victoria Land, are presented
6. Virtual Observatory: From Concept to Implementation
Science.gov (United States)
Djorgovski, S. G.; Williams, R.
2005-12-01
We review the origins of the Virtual Observatory (VO) concept, and the current status of the efforts in this field. VO is the response of the astronomical community to the challenges posed by the modern massive and complex data sets. It is a framework in which information technology is harnessed to organize, maintain, and explore the rich information content of the exponentially growing data sets, and to enable a qualitatively new science to be done with them. VO will become a complete, open, distributed, web-based framework for astronomy of the early 21st century. A number of significant efforts worldwide are now striving to convert this vision into reality. The technological and methodological challenges posed by the information-rich astronomy are also common to many other fields. We see a fundamental change in the way all science is done, driven by the information technology revolution.
7. SOFIA: The Next Generation Airborne Observatory
Science.gov (United States)
Dunham, Edward; Witteborn, Fred C. (Technical Monitor)
1995-01-01
SOFIA, the Stratospheric Observatory For Infrared Astronomy, will carry a 2.5 meter telescope into the stratosphere on 160 flights of 7.5 hours each per year. At stratospheric altitudes SOFIA will operate above 99% of the water vapor in the Earth's atmosphere, allowing observation of wide regions of the infrared spectrum that are totally obscured from even the best ground-based sites. Its mobility and long range will allow worldwide observation of ephemeral events such as occultations and eclipses. SOFIA will be developed jointly by NASA and DARA, the German space agency. It has been included in the President's budget request to Congress for a development start in FY96 (this October!) and enjoys strong support in Germany. This talk will cover SOFIA's scientific goals, technical characteristics, science operating plan, and political status.
8. Supernova observations at McDonald Observatory
International Nuclear Information System (INIS)
Wheeler, J.C.
1984-01-01
The programs to obtain high quality spectra and photometry of supernovae at McDonald Observatory are reviewed. Spectra of recent Type I supernovae in NGC 3227, NGC 3625, and NGC 4419 are compared with those of SN 1981B in NGC 4536 to quantitatively illustrate both the homogeneity of Type I spectra at similar epochs and the differences in detail which will serve as a probe of the physical processes in the explosions. Spectra of the recent supernova in NGC 0991 give for the first time quantitative confirmation of a spectrally homogeneous but distinct subclass of Type I supernovae, which appears to be less luminous and to have lower excitation at maximum light than classical Type I supernovae
9. Communication Between Volcanoes: a Possible Path
Science.gov (United States)
Linde, A. T.; Sacks, I. S.
2002-12-01
The Japan Meteorological Agency installed and operates a network of Sacks-Evertson type borehole strainmeters in south-east Honshu. One of these instruments is on Izu-Oshima, a volcanic island at the northern end of the Izu-Bonin arc. That strainmeter recorded large strain changes associated with the 1986 eruption of Miharayama on the island and, over the period from 1980 to the 1986 eruption, the amplitude of the solid earth tides changed by almost a factor of two. Miyake-jima, about 75 km south of Izu-Oshima, erupted in October 1983. No deformation monitoring was available on Miyake but several changes occurred in the strain record at Izu-Oshima. There was a clear decrease in amplitude of the long-term strain rate. Short period (~hour) events recorded by the strainmeter became much more frequent about 6 months before the Miyake eruption and ceased following the eruption. At the time of the Miyake eruption, the rate of increase of the tidal amplitude also decreased. While all of these changes were observed on a single instrument, they are very different types of change. From a number of independent checks, we can be sure that the strainmeter did not experience any change in performance at that time. Thus it recorded a change in deformation behavior in three very different frequency bands: over very long term, at tidal periods (~day) and at very short periods (~hour). It appears that the distant eruption in 1983 had an effect on the magmatic system under Izu-Oshima. It is likely that these changes were enhanced to the observed level because Izu-Oshima was itself close to eruption failure. More recent tomographic and seismic attenuation work in the Tohoku (northern Honshu) area has shown the existence of a low velocity, high attenuation horizontally elongated structure under the volcanic front. This zone, likely to contain partial melt, is horizontally continuous along the front. If such a structure exists in the similar tectonic setting for these volcanoes, it
10. Volatile Element Fluxes at Copahue Volcano, Argentina
Science.gov (United States)
Varekamp, J. C.
2002-05-01
Copahue volcano has a crater lake and acid hot springs that discharge into the Rio Agrio river system. These fluids are very concentrated (up to 6% sulfate) and rich in rock-forming elements (up to 2000 ppm Mg), and small spheres of native sulfur float in the crater lake. The stable isotope composition of the waters (δ18O = -2.1 to +3.6 per mille; δD = -49 to -26 per mille) indicates that the hot spring waters at their most concentrated are about 70% volcanic brine and 30% glacial meltwater. The crater lake waters have similar mixing proportions but added isotope effects from intense evaporation. Further dilution of the waters in the Rio Agrio gives values closer to local meteoric waters (δ18O = -11 per mille; δD = -77 per mille), whereas evaporation in closed ponds led to very heavy water (up to δ18O = +12 per mille). The δ34S value of dissolved sulfate is +14.2 per mille, whereas the native sulfur has values of -8.2 to -10.5 per mille. The heavy sulfate probably formed when SO2 disproportionated into bisulfate and native sulfur at about 300°C. We measured the sulfate fluxes in the Rio Agrio, which ranged from 20-40 kilotons S/year; the whole system was releasing sulfur at an equivalent rate of about 250-650 tons SO2/day. From the river sulfur flux values and the stoichiometry of the disproportionation reaction we calculated the rate of liquid sulfur storage inside the volcano (6000 m3/year). During the eruptions of 1995/2000, large amounts of that stored liquid sulfur were ejected as pyroclastic sulfur. The calculated rate of rock dissolution (from rock-forming element fluxes in the Rio Agrio) suggests that the void space generated by rock dissolution is largely filled by native sulfur and silica. The S/Cl ratio in the hydrothermal fluids is about 2, whereas glass inclusions have S/Cl = 0.2, indicating strong preferential degassing of sulfur.
11. Magma Dynamics in Dome-Building Volcanoes
Science.gov (United States)
Kendrick, J. E.; Lavallée, Y.; Hornby, A. J.; Schaefer, L. N.; Oommen, T.; Di Toro, G.; Hirose, T.
2014-12-01
The frequent and, as yet, unpredictable transition from effusive to explosive volcanic behaviour is common to active composite volcanoes, yet our understanding of the processes which control this evolution is poor. The rheology of magma, dictated by its composition, porosity and crystal content, is integral to eruption behaviour, and during ascent magma behaves in an increasingly rock-like manner. This behaviour, on short timescales in the upper conduit, provides exceptionally dynamic conditions that favour strain localisation and failure. Seismicity released by this process can be mimicked by damage accumulation that releases acoustic signals on the laboratory scale, showing that the failure of magma is intrinsically strain-rate dependent. This character aids the development of shear zones in the conduit, which commonly fracture seismogenically, producing fault surfaces that control the last hundreds of meters of ascent by frictional slip. High-velocity rotary shear (HVR) experiments demonstrate that at ambient temperatures, gouge behaves according to Byerlee's rule at low slip velocities. At rock-rock interfaces, mechanical work induces comminution of asperities and heating which, if sufficient, may induce melting and the formation of pseudotachylyte. The viscosity of the melt so generated controls the subsequent lubrication or resistance to slip along the fault plane, owing to non-Newtonian suspension rheology. The bulk composition, mineralogy and glass content of the magma all influence frictional behaviour, which supersedes buoyancy as the controlling factor in magma ascent. In the conduit of dome-building volcanoes, the fracture and slip processes are further complicated: slip-rate along the conduit margin fluctuates. The shear-thinning frictional melt yields a tendency for extremely unstable slip owing to its pivotal position with regard to the glass transition. This thermo-kinetic transition bestows the viscoelastic melt with the ability to either flow or
12. Volcano surveillance by ACR silver fox
Science.gov (United States)
Patterson, M.C.L.; Mulligair, A.; Douglas, J.; Robinson, J.; Pallister, J.S.
2005-01-01
Recent growth in the business of unmanned air vehicles (UAVs) both in the US and abroad has improved their overall capability, resulting in a reduction in cost, greater reliability and adoption into areas where they had previously not been considered. Uses in coastal and border patrol, forestry and agriculture have recently been evaluated in an effort to expand the observed area and reduce surveillance and reconnaissance costs for information gathering. The scientific community has both contributed to and benefited greatly from this development. A larger suite of lightweight, miniaturized sensors now exists for a range of applications, which in turn has led to an increase in the gathering of information from these autonomous vehicles. In October 2004 the first eruption of Mount St Helens since 1986 caused tremendous interest among people worldwide. Volcanologists at the U.S. Geological Survey rapidly ramped up the level of monitoring using a variety of ground-based sensors deployed in the crater and on the flanks of the volcano using manned helicopters. In order to develop additional unmanned sensing methods that can be used in potentially hazardous and low visibility conditions, a UAV experiment was conducted during the ongoing eruption early in November. The Silver Fox UAV was flown over and inside the crater to perform routine observation and data gathering, thereby demonstrating a technology that could reduce physical risk to scientists and other field operatives. It was demonstrated that UAVs can be flown autonomously at an active volcano and can deliver real-time data to a remote location. Although still relatively limited in extent, these initial flights provided information on volcanic activity and thermal conditions within the crater and at the new (2004) lava dome. The flights demonstrated that readily available visual and infrared video sensors mounted in a small and relatively low-cost aerial platform can provide useful data on volcanic phenomena. This was
13. The Solar Connections Observatory for Planetary Environments
Science.gov (United States)
Oliversen, Ronald J.; Harris, Walter M.; Oegerle, William R. (Technical Monitor)
2002-01-01
The NASA Sun-Earth Connection theme roadmap calls for comparative study of how the planets, comets, and local interstellar medium (LISM) interact with the Sun and respond to solar variability. Through such a study we advance our understanding of basic physical plasma and gas dynamic processes, thus increasing our predictive capabilities for the terrestrial, planetary, and interplanetary environments where future remote and human exploration will occur. Because the other planets have lacked study initiatives comparable to the terrestrial ITM, LWS, and EOS programs, our understanding of the upper atmospheres and near-space environments of these worlds is far less detailed than our knowledge of the Earth. To close this gap we propose a mission to study all of the solar interacting bodies in our planetary system out to the heliopause with a single remote sensing space observatory, the Solar Connections Observatory for Planetary Environments (SCOPE). SCOPE consists of a binocular EUV/FUV telescope operating from a remote, drift-away orbit that provides sub-arcsecond imaging and broadband medium-resolution spectro-imaging over the 55-290 nm bandpass, and high-resolution (R > 10^5) H Ly-α emission line profile measurements of small-scale planetary and wide-field diffuse solar system structures. A key to the SCOPE approach is to include Earth as a primary science target. From its remote vantage point SCOPE will be able to observe auroral emission to and beyond the rotational pole. The other planets and comets will be monitored in long-duration campaigns centered when possible on solar opposition, when interleaved terrestrial-planet observations can be used to directly compare the response of both worlds to the same solar wind stream and UV radiation field. Using a combination of observations and MHD models, SCOPE will isolate the different controlling parameters in each planet system and gain insight into the underlying physical processes that define the
14. Towards a new Mercator Observatory Control System
Science.gov (United States)
Pessemier, W.; Raskin, G.; Prins, S.; Saey, P.; Merges, F.; Padilla, J. P.; Van Winckel, H.; Waelkens, C.
2010-07-01
A new control system is currently being developed for the 1.2-meter Mercator Telescope at the Roque de Los Muchachos Observatory (La Palma, Spain). Formerly based on transputers, the new Mercator Observatory Control System (MOCS) consists of a small network of Linux computers complemented by a central industrial controller and an industrial real-time data communication network. Python is chosen as the high-level language to develop flexible yet powerful supervisory control and data acquisition (SCADA) software for the Linux computers. Specialized applications such as detector control, auto-guiding and middleware management are also integrated in the same Python software package. The industrial controller, on the other hand, is connected to the majority of the field devices and is targeted to run various control loops, some of which are real-time critical. Independently of the Linux distributed control system (DCS), this controller makes sure that high-priority tasks such as the telescope motion, mirror support and hydrostatic bearing control are carried out in a reliable and safe way. A comparison is made between different controller technologies, including a LabVIEW embedded system, a PROFINET Programmable Logic Controller (PLC) and motion controller, and an EtherCAT embedded PC (soft-PLC). As the latter is chosen as the primary platform for the lower-level control, a substantial part of the software is being ported to the IEC 61131-3 standard programming languages. Additionally, obsolete hardware is gradually being replaced by standard industrial alternatives with fast EtherCAT communication. The use of Python as a scripting language allows a smooth migration to the final MOCS: finished parts of the new control system can readily be commissioned to replace the corresponding transputer units of the old control system with minimal downtime. In this contribution, we give an overview of the system design, implementation details and the current status of the project.
15. Electricity and gas market Observatory - 2. Quarter of 2011
International Nuclear Information System (INIS)
2011-06-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in metropolitan France. The Observatory is updated every three months and its data are available on the CRE website (www.cre.fr)
16. Electricity and gas market Observatory - 4. Quarter of 2010
International Nuclear Information System (INIS)
2010-12-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in metropolitan France. The Observatory is updated every three months and its data are available on the CRE website (www.cre.fr)
17. Electricity and gas market Observatory - 3. Quarter of 2012
International Nuclear Information System (INIS)
2012-09-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in metropolitan France. The Observatory is updated every three months and its data are available on the CRE website (www.cre.fr)
18. University Observatory, Ludwig-Maximilians-Universität
Science.gov (United States)
Murdin, P.
2000-11-01
The University Observatory of Ludwig-Maximilians-Universität was founded in 1816. Astronomers who worked or graduated at the Munich Observatory include Fraunhofer, Soldner, Lamont, Seeliger and Karl Schwarzschild. At present, four professors and ten staff astronomers work here. Funding comes from the Bavarian Government, the German Science Foundation, and other German and European research programmes.
19. Electricity and gas market Observatory - 1. Quarter of 2012
International Nuclear Information System (INIS)
2012-03-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
20. Electricity and gas market Observatory - 4. Quarter of 2011
International Nuclear Information System (INIS)
2011-12-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
1. Electricity and gas market Observatory - 3. Quarter of 2011
International Nuclear Information System (INIS)
2011-09-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
2. Electricity and gas market Observatory - 4. Quarter of 2012
International Nuclear Information System (INIS)
2012-12-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
3. Electricity and gas market Observatory - 2. Quarter of 2012
International Nuclear Information System (INIS)
2012-06-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
4. Electricity and gas market Observatory - 1. Quarter of 2011
International Nuclear Information System (INIS)
2011-03-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
5. Science requirements and the design of cabled ocean observatories
Directory of Open Access Journals (Sweden)
2006-06-01
Full Text Available The ocean sciences are beginning a new phase in which scientists will enter the ocean environment and adaptively observe the Earth-Ocean system through remote control of sensors and sensor platforms. This new ocean science paradigm will be implemented using innovative facilities called ocean observatories which provide unprecedented levels of power and communication to access and manipulate real-time sensor networks deployed within many different environments in the ocean basins. Most of the principal design drivers for ocean observatories differ from those for commercial submarine telecommunications systems. First, ocean observatories require data to be input and output at one or more seafloor nodes rather than at a few land terminuses. Second, ocean observatories must distribute a lot of power to the seafloor at variable and fluctuating rates. Third, the seafloor infrastructure for an ocean observatory inherently requires that the wet plant be expandable and reconfigurable. Finally, because the wet communications and power infrastructure is comparatively complex, ocean observatory infrastructure must be designed for low life cycle cost rather than zero maintenance. The origin of these differences may be understood by taking a systems engineering approach to ocean observatory design through examining the requirements derived from science and then going through the process of iterative refinement to yield conceptual and physical designs. This is illustrated using the NEPTUNE regional cabled observatory power and data communications sub-systems.
6. Electricity and gas market Observatory - 1. Quarter of 2013
International Nuclear Information System (INIS)
2013-03-01
The purpose of the Observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. Since 2013, it also covers the wholesale CO2 market. The Observatory is updated every three months and the data are available on the CRE website (www.cre.fr).
7. Significant breakthroughs in monitoring networks of the volcanological and seismological French observatories
Science.gov (United States)
lemarchand, A.; Francois, B.; Bouin, M.; Brenguier, F.; Clouard, V.; Di Muro, A.; Ferrazzini, V.; Shapiro, N.; Staudacher, T.; Kowalski, P.; Agrinier, P.
2013-12-01
8. Volcano-tectonic interactions at Sabancaya and other Peruvian volcanoes revealed by InSAR and seismicity
Science.gov (United States)
Jay, J.; Pritchard, M. E.; Aron, F.; Delgado, F.; Macedo, O.; Aguilar, V.
2013-12-01
An InSAR survey of all 13 Holocene volcanoes in the Andean Central Volcanic Zone of Peru reveals previously undocumented surface deformation that is occasionally accompanied by seismic activity. Our survey utilizes SAR data spanning from 1992 to the present from the ERS-1, ERS-2, and Envisat satellites, as well as selected data from the TerraSAR-X satellite. We find that the recent unrest at Sabancaya volcano (heightened seismicity since 22 February 2013 and increased fumarolic output) has been accompanied by surface deformation. We also find two distinct deformation episodes near Sabancaya that are likely associated with an earthquake swarm in February 2013 and a M6 normal fault earthquake that occurred on 17 July 2013. Preliminary modeling suggests that faulting from the observed seismic moment can account for nearly all of the observed deformation and thus we have not yet found clear evidence for recent magma intrusion. We also document an earlier episode of deformation that occurred between December 2002 and September 2003 which may be associated with a M5.3 earthquake that occurred on 13 December 2002 on the Solarpampa fault, a large EW-striking normal fault located about 25 km northwest of Sabancaya volcano. All of the deformation episodes between 2002 and 2013 are spatially distinct from the inflation seen near Sabancaya from 1992 to 1997. In addition to the activity at Sabancaya, we also observe deformation near Coropuna volcano, in the Andagua Valley, and in the region between Ticsani and Tutupaca volcanoes. InSAR images reveal surface deformation that is possibly related to an earthquake swarm near Coropuna and Sabancaya volcanoes in December 2001. We also find persistent deformation in the scoria cone and lava field along the Andagua Valley, located 40 km east of Coropuna. An earthquake swarm near Ticsani volcano in 2005 produced surface deformation centered northwest of the volcano and was accompanied by a north-south elongated subsidence signal to the
9. SUBMARINE VOLCANO CHARACTERISTICS IN SABANG WATERS
Directory of Open Access Journals (Sweden)
Hananto Kurnio
2017-07-01
Full Text Available The aim of the study is to understand the characteristics of a volcano in a marine environment, as Weh Island, where Sabang City is located, still displays its volcanic cone morphology in both satellite imagery and bathymetric maps. The methods used were marine geology, marine geophysics and oceanography. Results show that shallow volcanism (sea depths of less than 50 m) takes place as fumaroles, solfataras, hot ground, hot springs, hot mud pools and alteration in the vicinity of seafloor and coastal-area vents. Seismic records also show acoustic turbidity in the water column caused by gas bubbling produced by seafloor fumaroles. Geochemical analyses show that seafloor samples near both active and inactive fumarole vents are enriched in rare earth elements (REE). This is interpreted to mean that the fumaroles carry REE in their gases and deposit them on the surrounding seafloor. The coexistence of the active Sumatra fault and current volcanism produces hydrothermal mineralization in the fault zone, as observed at Serui and Pria Laot in the middle of Weh Island, both of which are controlled by normal faults and a graben.
10. Energy budget of the volcano Stromboli, Italy
Science.gov (United States)
Mcgetchin, T. R.; Chouet, B. A.
1979-01-01
The results of the analyses of movies of eruptions at Stromboli, Italy, and other available data are used to discuss the question of its energy partitioning among various energy transport mechanisms. Energy is transported to the surface from active volcanoes in at least eight modes, viz. conduction (and convection) of the heat through the surface, radiative heat transfer from the vent, acoustical radiation in blast and jet noise, seismic radiation, thermal energy of ejected particles, kinetic energy of ejected particles, thermal energy of ejected gas, and kinetic energy of ejected gas. Estimated values of energy flux from Stromboli by these eight mechanisms are tabulated. The energy budget of Stromboli in its normal mode of activity appears to be dominated by heat conduction (and convection) through the ground surface. Heat carried by eruption gases is the most important of the other energy transfer modes. Radiated heat from the open vent and heat carried by ejected lava particles also contribute to the total flux, while seismic energy accounts for about 0.5% of the total. All other modes are trivial by comparison.
11. An EarthScope Plate Boundary Observatory Progress Report
Science.gov (United States)
Jackson, M.; Anderson, G.; Blume, F.; Walls, C.; Coyle, B.; Feaux, K.; Friesen, B.; Phillips, D.; Hafner, K.; Johnson, W.; Mencin, D.; Pauk, B.; Dittmann, T.
2007-12-01
UNAVCO is building and operating the Plate Boundary Observatory (PBO), part of the NSF-funded EarthScope project to understand the structure, dynamics, and evolution of the North American continent. When complete in October 2008, the 875 GPS, 103 strain and seismic, and 28 tiltmeter stations will comprise the largest integrated geodetic and seismic network in the United States and the second largest in the world. Data from the PBO network will facilitate research into plate boundary deformation with unprecedented scope and detail. As of 1 September 2007, UNAVCO had completed 680 PBO GPS stations and had upgraded 89% of the planned PBO Nucleus stations. Highlights of the past year's work include the expansion of the Alaska subnetwork to 95 continuously-operating stations, including coverage of Akutan and Augustine volcanoes and reconnaissance for future installations on Unimak Island; the installation of nine new stations on Mt. St. Helens; and the arrival of 33 permits for station installations on BLM land in Nevada. The Augustine network provided critical data on magmatic and volcanic processes associated with the 2005-2006 volcanic crisis, and has expanded to a total of 11 stations. Please visit http://pboweb.unavco.org/?pageid=3 for further information on PBO GPS network construction activities. As of September 2007, 41 PBO borehole stations had been installed and three laser strainmeter stations were operating, with a total of 60 borehole stations and 4 laser strainmeters expected by October 2007. In response to direction from the EarthScope community, UNAVCO installed a dense network of six stations along the San Jacinto Fault near Anza, California; installed three of four planned borehole strainmeter stations on Mt. St. Helens; and has densified coverage of the Parkfield area. Please visit http://pboweb.unavco.org/?pageid=8 for more information on PBO strainmeter network construction progress. The combined PBO/Nucleus GPS network provides 350 GB of raw standard
12. Geomorphological classification of post-caldera volcanoes in the Buyan-Bratan caldera, North Bali, Indonesia
Science.gov (United States)
Okuno, Mitsuru; Harijoko, Agung; Wayan Warmada, I.; Watanabe, Koichiro; Nakamura, Toshio; Taguchi, Sachihiro; Kobayashi, Tetsuo
2017-12-01
The landforms of the post-caldera volcanoes (Lesung, Tapak, Sengayang, Pohen, and Adeng) in the Buyan-Bratan caldera on the island of Bali, Indonesia, can be classified by topographic interpretation. The Tapak volcano has three craters, aligned from north to south. Lava effused from the central crater has flowed downward to the northwest, separating the Tamblingan and Buyan Lakes. This lava also covers the tip of the lava flow from the Lesung volcano. Therefore, it is a product of the latest post-caldera volcano eruption. The Lesung volcano also has two craters, with a gully developing on the pyroclastic cone from the northern slope to the western slope. Lava from the south crater has flowed down the western flank, beyond the caldera rim. Lava distributed on the eastern side from the south also surrounds the Sengayang volcano. The Adeng volcano is surrounded by debris avalanche deposits from the Pohen volcano. Based on these topographic relationships, Sengayang volcano appears to be the oldest of the post-caldera volcanoes, followed by the Adeng, Pohen, Lesung, and Tapak volcanoes. Coarse-grained scoria falls around this area are intercalated with two foreign tephras: the Samalas tephra (1257 A.D.) from Lombok Island and the Penelokan tephra (ca. 5.5 kBP) from the Batur caldera. The source of these scoria falls is estimated to be either the Tapak or Lesung volcano, implying that at least two volcanoes have erupted during the Holocene period.
13. Global Positioning System (GPS) survey of Augustine Volcano, Alaska, August 3-8, 2000: data processing, geodetic coordinates and comparison with prior geodetic surveys
Science.gov (United States)
Pauk, Benjamin A.; Power, John A.; Lisowski, Mike; Dzurisin, Daniel; Iwatsubo, Eugene Y.; Melbourne, Tim
2001-01-01
Between August 3 and 8, 2000, the Alaska Volcano Observatory completed a Global Positioning System (GPS) survey at Augustine Volcano, Alaska. Augustine is a frequently active calc-alkaline volcano located in the lower portion of Cook Inlet (fig. 1), with reported eruptions in 1812, 1882, 1909?, 1935, 1964, 1976, and 1986 (Miller et al., 1998). Geodetic measurements using electronic and optical surveying techniques (EDM and theodolite) were begun at Augustine Volcano in 1986. In 1988 and 1989, an island-wide trilateration network comprising 19 benchmarks was completed and measured in its entirety (Power and Iwatsubo, 1998). Partial GPS surveys of the Augustine Island geodetic network were completed in 1992 and 1995; however, neither of these surveys included all marks on the island. Additional GPS measurements of benchmarks A5 and A15 (fig. 2) were made during the summers of 1992, 1993, 1994, and 1996. The goals of the 2000 GPS survey were to: 1) re-measure all existing benchmarks on Augustine Island using a homogeneous set of GPS equipment operated in a consistent manner, 2) add measurements at benchmarks on the western shore of Cook Inlet at distances of 15 to 25 km, 3) add measurements at an existing benchmark (BURR) on Augustine Island that was not previously surveyed, and 4) add additional marks in areas of the island thought to be actively deforming. The entire survey resulted in collection of GPS data at a total of 24 sites (fig. 1 and 2). In this report we describe the methods of GPS data collection and processing used at Augustine during the 2000 survey. We use these data to calculate coordinates and elevations for all 24 sites surveyed. Data from the 2000 survey are then compared to electronic and optical measurements made in 1988 and 1989. This report also contains a general description of all marks surveyed in 2000 and photographs of all new marks established during the 2000 survey (Appendix A).
14. An Updated Earthquake Relocation Catalog for the Island of Hawai'i from 2009 to 2016
Science.gov (United States)
Lin, G.; Okubo, P.; Shearer, P. M.; Matoza, R. S.
2017-12-01
We present an updated catalog of Hawaiian seismicity, systematically relocated from a starting catalog compiled by the Hawaiian Volcano Observatory (HVO). This is a continuation of our collaboration that began with relocating Hawaiian seismicity from 1992 through April 2009 and subsequently added 1986 through 1991, all initially processed with HVO's Caltech-USGS Seismic Processing systems. Our current efforts are initially focused on extending waveform cross-correlation analyses to significantly greater numbers of candidate event pairs of earthquakes recorded since 2009, after HVO migrated to its ANSS Quake Management Software (AQMS) systems. In its roughly 8 years of AQMS processing, HVO has cataloged over 170,000 events. Particular challenges with this more recent dataset relate to field network upgrades that introduced numerous broadband sensors to replace short-period instruments and significantly increased numbers of event triggers. A relatively low percentage of interactively-reviewed events compared to the pre-2009 catalogs also presents a significant challenge to our analysis. We start by ray tracing through a previously developed three-dimensional (3-D) seismic velocity model to relocate all the earthquakes with phase arrivals. We then use these 3-D relocated events, with improved absolute locations, as reference events to perform similar-event cluster analysis and differential-time relative relocation to all the available events in the data set. The resulting catalog of relocated, well-constrained hypocenters is an extension of our previous studies. Combined with earlier products of our systematic catalog relocations, the increased numbers of relocated earthquakes from more than 30 years of seismic monitoring offer enhanced opportunities for study and interpretation of seismic and volcanic processes spanning the entire 1986-2016 interval.
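The waveform cross-correlation analysis mentioned above rests on a simple operation: for a pair of similar events, the lag that maximizes the cross-correlation of their seismograms gives a differential arrival time that is more precise than the catalog picks. A toy NumPy sketch with synthetic traces (an illustration of the principle, not HVO's actual processing code):

```python
import numpy as np


def cc_lag(trace_a, trace_b):
    """Return the sample lag of trace_b relative to trace_a that
    maximizes their cross-correlation (positive = b is delayed)."""
    cc = np.correlate(trace_b, trace_a, mode="full")
    return int(np.argmax(cc)) - (len(trace_a) - 1)


# Synthetic "similar event pair": the same wavelet, one delayed 7 samples.
n = 256
t = np.arange(n)
wavelet = np.exp(-0.5 * ((t - 60) / 5.0) ** 2) * np.sin(0.8 * t)
shifted = np.roll(wavelet, 7)

print(cc_lag(wavelet, shifted))  # 7
```

In a relocation workflow, such differential times for many event pairs feed a relative-location inversion, which collapses diffuse catalog locations into sharp structures.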
15. Volcano related atmospheric toxicants in Hilo and Hawaii Volcanoes National Park: implications for human health.
Science.gov (United States)
Michaud, Jon-Pierre; Krupitsky, Dmitry; Grove, John S; Anderson, Bruce S
2005-08-01
Volcanic fog (vog) from Kilauea volcano on the island of Hawaii includes a variety of chemical species including sulfur compounds and traces of metals such as mercury. The metal species seen tended to be in the nanograms per cubic meter range, whereas oxides of sulfur (SO2 and SO3) and sulfate aerosols were in the range of micrograms per cubic meter and rarely even as high as a few milligrams per cubic meter of air (nominally ppb to ppm). These sulfur species are being investigated for associations with both acute and chronic changes in human health status. The sulfate aerosols tend to be less than 1 microm in diameter and tend to dominate the mass of this submicron size mode. The sulfur chemistry is dynamic, changing composition from predominantly sulfur dioxide and trioxide gases near the volcano, to predominantly sulfate aerosols on the west side of the island. Time, concentration and composition characteristics of submicron aerosols and sulfur dioxide are described with respect to the related on-going health studies and public health management concerns. Exposures to sulfur dioxide and particulate matter equal to or less than 1 microm in size were almost always below the national ambient air quality standards (NAAQS). These standards do not, however, consider the acidic nature and submicron size of the aerosol, nor the possibility of the aerosol and the sulfur dioxide interacting in their toxicity. Time series plots, histograms and descriptive statistics of hourly averages give the reader a sense of some of the exposures observed.
16. Tephra compositions from Late Quaternary volcanoes around the Antarctic Peninsula
Science.gov (United States)
Kraus, S.
2009-12-01
Crustal extension and rifting processes opened the Bransfield Strait between the South Shetland Islands and the Antarctic Peninsula during the last 4 Ma. Similar processes on the Peninsula's eastern side are responsible for volcanism along Larsen Rift. There are at least 11 volcanic centers with known or suspected Late Pleistocene / Holocene explosive activity (Fig. 1). Fieldwork was carried out on the islands Deception, Penguin, Bridgeman and Paulet, moreover at Melville Peak (King George Is.) and Rezen Peak (Livingston Is.). Of special importance is the second ever reported visit and sampling at Sail Rock, and the work on never before visited outcrops on the northern slopes and at the summit of Cape Purvis volcano (Fig. 1). The new bulk tephra ICP-MS geochemical data provide a reliable framework to distinguish the individual volcanic centers from each other. According to their Mg-number, Melville Peak and Penguin Island represent the most primitive magma source. Nb/Y ratios higher than 0.67 in combination with elevated Th/Yb and Ta/Yb ratios and strongly enriched LREE seem to be diagnostic to distinguish the volcanoes located along the Larsen Rift from those associated with Bransfield Rift. Sr/Y ratios discriminate between the individual Larsen Rift volcanoes, Paulet Island showing considerably higher values than Cape Purvis volcano. Along Bransfield Rift, Bridgeman Island and Melville Peak have notably lower Nb/Y and much higher Th/Nb than Deception Island, Penguin Island and Sail Rock. The latter displays almost double the Th/Yb ratio as compared to Deception Island, and also much higher LREE enrichment but extraordinarily low Ba/Th, discriminating it from Penguin Island. Such extremely low Ba/Th ratios are also typical for Melville Peak, but for none of the other volcanoes. Penguin Island has almost double the Ba/Th and Sr/Y ratios higher than any other investigated volcano. Whereas the volcanoes located in the northern part of Bransfield Strait have Zr
17. The 2012-2014 eruptive cycle of Copahue Volcano, Southern Andes. Magmatic-Hydrothermal system interaction and manifestations.
Science.gov (United States)
Morales, Sergio; Alarcón, Alex; Basualto, Daniel; Bengoa, Cintia; Bertín, Daniel; Cardona, Carlos; Córdova, Maria; Franco, Luis; Gil, Fernando; Hernandez, Erasmo; Lara, Luis; Lazo, Jonathan; Mardones, Cristian; Medina, Roxana; Peña, Paola; Quijada, Jonathan; San Martín, Juan; Valderrama, Oscar
2015-04-01
Copahue Volcano (COPV), in the Southern Andes of Chile, is an andesitic-basaltic stratovolcano located on the western margin of the Caviahue Caldera. The COPV has a NE-trending fissure with 9 aligned vents, El Agrio being the main currently active vent, ca. 400 m in diameter. The COPV sits within an extensive hydrothermal system which has modulated its recent 2012-2014 eruptive activity, with small phreatic to phreatomagmatic eruptions, isolated weak strombolian episodes, and the formation of crater lakes inside the main crater. Since 2012, the Southern Andes Volcano Observatory (OVDAS) has carried out real-time monitoring with broadband seismic stations, GPS, infrasound sensors and webcams. In this work, we report the pre-, syn- and post-eruptive seismic activity of the last two main eruptions (Dec 2012 and Oct 2014), each with different seismic precursors and surface activity; the second notably showed episodes of seismic quiescence preceding explosive activity, an indicator of interaction between the magmatic and hydrothermal systems. The first episode, in late 2012, was characterized by a low-frequency (0.3-0.4 Hz and 1.0-1.5 Hz) continuous tremor which increased gradually from background noise amplitude to reduced displacement (DR) values close to 50 cm2 at the peak of the eruption, reaching an eruptive column of ~1.5 km height. After a few months of low-energy seismicity, a sequence of low-frequency, repetitive and low-energy seismic events arose, with a rate of occurrence of up to 300 events/hour. VLP earthquakes were also added to the record, probably associated with magma intrusion into a deep magma chamber during all stages of the eruptive process, together with VT seismicity recorded during the same period and located throughout the Caviahue Caldera area. Both kinds of seismic pattern were again recorded in October 2014, this time as the precursor of the new eruptive cycle as well as the
18. Seismic observations of Redoubt Volcano, Alaska - 1989-2010 and a conceptual model of the Redoubt magmatic system
Science.gov (United States)
Power, John A.; Stihler, Scott D.; Chouet, Bernard A.; Haney, Matthew M.; Ketner, D.M.
2013-01-01
2009 eruptions the Redoubt magmatic system is envisioned to consist of a shallow system of cracks extending 1 to 2 km below the crater floor, a magma storage or source region at roughly 3 to 9 km depth, and a diffuse magma source region at 25 to 38 km depth. Close tracking of seismic activity allowed the Alaska Volcano Observatory to successfully issue warnings prior to many of the hazardous explosive events that occurred in 2009.
19. Deep long-period earthquakes beneath Washington and Oregon volcanoes
Science.gov (United States)
Nichols, M.L.; Malone, S.D.; Moran, S.C.; Thelen, W.A.; Vidale, J.E.
2011-01-01
Deep long-period (DLP) earthquakes are an enigmatic type of seismicity occurring near or beneath volcanoes. They are commonly associated with the presence of magma, and found in some cases to correlate with eruptive activity. To more thoroughly understand and characterize DLP occurrence near volcanoes in Washington and Oregon, we systematically searched the Pacific Northwest Seismic Network (PNSN) triggered earthquake catalog for DLPs occurring between 1980 (when PNSN began collecting digital data) and October 2009. Through our analysis we identified 60 DLPs beneath six Cascade volcanic centers. No DLPs were associated with volcanic activity, including the 1980-1986 and 2004-2008 eruptions at Mount St. Helens. More than half of the events occurred near Mount Baker, where the background flux of magmatic gases is greatest among Washington and Oregon volcanoes. The six volcanoes with DLPs (counts in parentheses) are Mount Baker (31), Glacier Peak (9), Mount Rainier (9), Mount St. Helens (9), Three Sisters (1), and Crater Lake (1). No DLPs were identified beneath Mount Adams, Mount Hood, Mount Jefferson, or Newberry Volcano, although (except at Hood) that may be due in part to poorer network coverage. In cases where the DLPs do not occur directly beneath the volcanic edifice, the locations coincide with large structural faults that extend into the deep crust. Our observations suggest the occurrence of DLPs in these areas could represent fluid and/or magma transport along pre-existing tectonic structures in the middle crust. © 2010 Elsevier B.V.
20. Designing Observatories for the Hydrologic Sciences
Science.gov (United States)
Hooper, R. P.
2004-05-01
The need for longer-term, multi-scale, coherent, and multi-disciplinary data to test hypotheses in hydrologic science has been recognized by numerous prestigious review panels over the past decade (e.g. NRC's Basic Research Opportunities in Earth Science). Designing such observatories has proven to be a challenge not only on scientific, but also technological, economic and even sociologic levels. The Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) has undertaken a "paper" prototype design of a hydrologic observatory (HO) for the Neuse River Basin, NC and plans to solicit proposals and award grants to develop implementation plans for approximately 10 basins (which may be defined by topographic or groundwater divides) during the summer of 2004. These observatories are envisioned to be community resources with data available to all scientists, with support facilities to permit their use by both local and remote investigators. This paper presents the broad design concepts which were developed from a national team of scientists for the Neuse River Basin Prototype. There are three fundamental characteristics of a watershed or river basin that are critical for answering the major scientific questions proposed by the NRC to advance hydrologic, biogeochemical and ecological sciences: (1) the store and flux of water, sediment, nutrients and contaminants across interfaces at multiple scales must be identified; (2) the residence time of these constituents, and (3) their flowpaths and response spectra to forcing must be estimated. "Stores" consist of subsurface, land surface and atmospheric volumes partitioned over the watershed. The HO will require "core measurements" which will serve the communities of hydrologic science for long range research questions. The core measurements will also provide context for shorter-term or hypothesis-driven research investigations. The HO will support "mobile measurement facilities" designed to support teams
1. GROSS- GAMMA RAY OBSERVATORY ATTITUDE DYNAMICS SIMULATOR
Science.gov (United States)
Garrick, J.
1994-01-01
The Gamma Ray Observatory (GRO) spacecraft will constitute a major advance in gamma ray astronomy by offering the first opportunity for comprehensive observations in the range of 0.1 to 30,000 megaelectronvolts (MeV). The Gamma Ray Observatory Attitude Dynamics Simulator, GROSS, is designed to simulate this mission. The GRO Dynamics Simulator consists of three separate programs: the Standalone Profile Program; the Simulator Program, which contains the Simulation Control Input/Output (SCIO) Subsystem, the Truth Model (TM) Subsystem, and the Onboard Computer (OBC) Subsystem; and the Postprocessor Program. The Standalone Profile Program models the environment of the spacecraft and generates a profile data set for use by the simulator. This data set contains items such as individual external torques; GRO spacecraft, Tracking and Data Relay Satellite (TDRS), and solar and lunar ephemerides; and star data. The Standalone Profile Program is run before a simulation. The SCIO subsystem is the executive driver for the simulator. It accepts user input, initializes parameters, controls simulation, and generates output data files and simulation status display. The TM subsystem models the spacecraft dynamics, sensors, and actuators. It accepts ephemerides, star data, and environmental torques from the Standalone Profile Program. With these and actuator commands from the OBC subsystem, the TM subsystem propagates the current state of the spacecraft and generates sensor data for use by the OBC and SCIO subsystems. The OBC subsystem uses sensor data from the TM subsystem, a Kalman filter (for attitude determination), and control laws to compute actuator commands to the TM subsystem. The OBC subsystem also provides output data to the SCIO subsystem for output to the analysts. The Postprocessor Program is run after simulation is completed. It generates printer and CRT plots and tabular reports of the simulated data at the direction of the user. GROSS is written in FORTRAN 77 and
2. The Rapid Ice Sheet Change Observatory (RISCO)
Science.gov (United States)
Morin, P.; Howat, I. M.; Ahn, Y.; Porter, C.; McFadden, E. M.
2010-12-01
The recent expansion of observational capacity from space has revealed dramatic, rapid changes in the Earth’s ice cover. These discoveries have fundamentally altered how scientists view ice-sheet change. Instead of just slow changes in snow accumulation and melting over centuries or millennia, important changes can occur in sudden events lasting only months, weeks, or even a single day. Our understanding of these short time- and space-scale processes, which hold important implications for future global sea level rise, has been impeded by the low temporal and spatial resolution, delayed sensor tasking, incomplete coverage, inaccessibility and/or high cost of data available to investigators. New cross-agency partnerships and data access policies provide the opportunity to dramatically improve the resolution of ice sheet observations by an order of magnitude, from timescales of months and distances of 10’s of meters, to days and meters or less. Advances in image processing technology also enable application of currently under-utilized datasets. The infrastructure for systematically gathering, processing, analyzing and distributing these data does not currently exist. Here we present the development of a multi-institutional, multi-platform observatory for rapid ice change with the ultimate objective of helping to elucidate the relevant timescales and processes of ice sheet dynamics and response to climate change. The Rapid Ice Sheet Observatory (RISCO) gathers observations of short time- and space-scale Cryosphere events and makes them easily accessible to investigators, media and general public. As opposed to existing data centers, which are structured to archive and distribute diverse types of raw data to end users with the specialized software and skills to analyze them, RISCO focuses on three types of geo-referenced raster (image) data products in a format immediately viewable with commonly available software. These three products are (1) sequences of images
3. The FUTUREVOLC Supersite's e-Infrastructure - A multidisciplinary data hub and data service for Icelandic Volcanoes
Science.gov (United States)
Vogfjörd, Kristín S.; Sigmundsson, Freysteinn; Sverrisson, Sverrir Th.; Sigurdsson, Sigurdur F.; Ófeigsson, Benedikt G.; Arnarsson, Ólafur S.; Kristinsson, Ingvar; Ilyinskaya, Evgenia; Oddsdóttir, Thorarna Ýr; Bergsveinsson, Sölvi Th.; Hjartansson, Kristján R.
2014-05-01
4. Tremor Source Location at Okmok Volcano
Science.gov (United States)
Reyes, C. G.; McNutt, S. R.
2007-12-01
Initial results using an amplitude-based tremor location program have located several active tremor episodes under Cone A, a vent within Okmok volcano's 10 km caldera. Okmok is an andesite volcano occupying the north-eastern half of Umnak Island, in the Aleutian islands. Okmok is defined by a ~2000 y.b.p. caldera that contains multiple cinder cones. Cone A, the youngest of these, extruded lava in 1997 covering the caldera floor. Since April 2003, continuous seismic data have been recorded from eight vertical short-period stations (L4-C's) installed at distances from Cone A ranging from 2 km to 31 km. In 2004 four additional 3-component broadband stations were added, co-located with continuous GPS stations. InSAR and GPS measurements of post-eruption deformation show that Okmok experienced several periods of rapid inflation (Mann and Freymueller, 2002), from the center of the 10 km diameter caldera. While there are few locatable VT earthquakes, there has been nearly continuous low-level tremor with stronger amplitude bursts occurring at variable rates and durations. The character of occurrence remained relatively constant over the course of days to weeks until the signal ceased in mid 2005. Within any day, tremor behavior remains fairly consistent, with bursts closely resembling each other, suggesting a single main process or source location. The tremor is composed of irregular waves with a broad range of frequencies, though most energy resides between ~2 Hz and 6 Hz. Attempts to locate the tremor using traditional arrival time methods fail because the signal is emergent, with envelopes too ragged to correlate on time scales that hold much hope for a location. Instead, focus was shifted to the amplitude ratios at various stations. Candidates for the tremor source include the center of inflation and Cone A, 3 km to the south-west. For all dates on record, data were band-pass filtered between 1 and 5 Hz, then evaluated in 20.48 second windows (N=2048, sampling rate
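The amplitude-ratio approach described in this abstract can be illustrated with a toy grid search: assume tremor amplitude decays as A = A0/r^n, normalize out the unknown source strength A0, and pick the grid node whose predicted ratios best match the observations. A minimal sketch; the station coordinates, decay exponent, and search grid below are hypothetical, and a real application would also need site corrections and attenuation.

```python
import math

def locate_by_amplitude_ratio(stations, amps, grid, n=1.0):
    """Grid-search a tremor source location from station amplitude ratios.

    Assumes amplitude decays geometrically as A = A0 / r**n (n=1 for
    body-wave spreading); site terms and anelastic attenuation are ignored.
    stations: list of (x, y) in km; amps: observed RMS amplitudes;
    grid: candidate (x, y) source points.
    """
    best, best_misfit = None, float("inf")
    for gx, gy in grid:
        dists = [math.hypot(sx - gx, sy - gy) for sx, sy in stations]
        if min(dists) < 0.1:          # avoid the singularity at a station
            continue
        pred = [1.0 / d ** n for d in dists]
        # Compare *ratios* by normalizing both vectors, so A0 cancels out
        norm_obs = math.sqrt(sum(a * a for a in amps))
        norm_pre = math.sqrt(sum(p * p for p in pred))
        misfit = sum((a / norm_obs - p / norm_pre) ** 2
                     for a, p in zip(amps, pred))
        if misfit < best_misfit:
            best, best_misfit = (gx, gy), misfit
    return best

# Synthetic check: four stations, a source at (3, 2), pure 1/r decay
stations = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
src = (3.0, 2.0)
amps = [1.0 / math.hypot(sx - src[0], sy - src[1]) for sx, sy in stations]
grid = [(x * 0.5, y * 0.5) for x in range(21) for y in range(21)]
print(locate_by_amplitude_ratio(stations, amps, grid))  # → (3.0, 2.0)
```

The normalization step is what makes the method insensitive to the unknown source strength, which is why it works on emergent signals that defeat arrival-time picking.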
5. Basaltic cannibalism at Thrihnukagigur volcano, Iceland
Science.gov (United States)
Hudak, M. R.; Feineman, M. D.; La Femina, P. C.; Geirsson, H.
2014-12-01
Magmatic assimilation of felsic continental crust is a well-documented, relatively common phenomenon. The extent to which basaltic crust is assimilated by magmas, on the other hand, is not well known. Basaltic cannibalism, or the wholesale incorporation of basaltic crustal material into a basaltic magma, is thought to be uncommon because basalt requires more energy than higher silica rocks to melt. Basaltic materials that are unconsolidated, poorly crystalline, or palagonitized may be more easily ingested than fully crystallized massive basalt, thus allowing basaltic cannibalism to occur. Thrihnukagigur volcano, SW Iceland, offers a unique exposure of a buried cinder cone within its evacuated conduit, 100 m below the main vent. The unconsolidated tephra is cross-cut by a NNE-trending dike, which runs across the ceiling of this cave to a vent that produced lava and tephra during the ~4 ka fissure eruption. Preliminary petrographic and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analyses indicate that there are two populations of plagioclase present in the system - Population One is stubby (aspect ratio 2.1), subhedral to euhedral, and has much higher Ba/Sr ratios. Population One crystals are observed in the cinder cone, dike, and surface lavas, whereas Population Two crystals are observed only in the dike and surface lavas. This suggests that a magma crystallizing a single elongate population of plagioclase intruded the cinder cone and rapidly assimilated the tephra, incorporating the stubbier population of phenocrysts. This conceptual model for basaltic cannibalism is supported by field observations of large-scale erosion upward into the tephra, which is coated by magma flow-back indicating that magma was involved in the thermal etching. While the unique exposure at Thrihnukagigur makes it an exceptional place to investigate basaltic cannibalism, we suggest that it is not limited to this volcanic system. Rather it is a process that likely
6. Electricity and gas market observatory. 2. Quarter 2007
International Nuclear Information System (INIS)
2007-01-01
The purpose of the observatory is to provide the general public with indicators for monitoring market deregulation. It covers both the wholesale and retail electricity and gas markets in Metropolitan France. This observatory is updated every three months and data are available on the CRE web site (www.cre.fr). The present observatory is dedicated only to customers eligible before 1 July 2007, i.e. non-residential customers. Statistics related to residential customers will be published in the next observatory (1 December 2007). Content: A - The electricity market: The retail electricity market (Introduction, Non-residential customer segments and their respective weights, Status at July 1, 2007, Dynamic analysis: 2. Quarter 2007); The wholesale electricity market (Introduction, Wholesale market activity in France, Prices on the French wholesale market and European comparison, Import and export volumes, Concentration of the French electricity market, Striking fact of the second quarter 2007); B - The gas market: The retail gas market (Introduction, The non-residential customer segments and their respective weights, Status at July 1, 2007); The wholesale gas market (Gas pricing and gas markets in Europe, The wholesale market in France); C - Appendices: Electricity and gas market observatories combined glossary, Specific electricity market observatory glossary, Specific gas market observatory glossary
7. Saint Petersburg magnetic observatory: from Voeikovo subdivision to INTERMAGNET certification
Science.gov (United States)
Sidorov, Roman; Soloviev, Anatoly; Krasnoperov, Roman; Kudin, Dmitry; Grudnev, Andrei; Kopytenko, Yury; Kotikov, Andrei; Sergushin, Pavel
2017-11-01
Since June 2012, the Saint Petersburg magnetic observatory has been developed and maintained by two institutions of the Russian Academy of Sciences (RAS) - the Geophysical Center of RAS (GC RAS) and the Saint Petersburg branch of the Pushkov Institute of Terrestrial Magnetism, Ionosphere and Radio Wave Propagation of RAS (IZMIRAN SPb). On 29 April 2016 the application of the Saint Petersburg observatory (IAGA code SPG) for introduction into the INTERMAGNET network was accepted after approval by the experts of the first definitive dataset over 2015, produced by the GC RAS, and on 9 June 2016 the SPG observatory was officially certified. One of the oldest series of magnetic observations, originating in 1834, was resumed in the 21st century, meeting the highest quality standards and all modern technical requirements. In this paper a brief historical and scientific background of the SPG observatory foundation and development is given, the stages of its renovation and upgrade in the 21st century are described, and information on its current state is provided. The first results of the observatory functioning are discussed and geomagnetic variations registered at the SPG observatory are assessed and compared with geomagnetic data from the INTERMAGNET observatories located in the same region.
9. TWO EXOPLANETS DISCOVERED AT KECK OBSERVATORY
International Nuclear Information System (INIS)
Valenti, Jeff A.; Fischer, Debra; Giguere, Matt; Isaacson, Howard; Marcy, Geoffrey W.; Howard, Andrew W.; Johnson, John A.; Henry, Gregory W.; Wright, Jason T.
2009-01-01
We present two exoplanets detected at Keck Observatory. HD 179079 is a G5 subgiant that hosts a hot Neptune planet with M sin i = 27.5 M⊕ in a 14.48-day, low-eccentricity orbit. The stellar reflex velocity induced by this planet has a semiamplitude of K = 6.6 m s⁻¹. HD 73534 is a G5 subgiant with a Jupiter-like planet of M sin i = 1.1 M_Jup and K = 16 m s⁻¹ in a nearly circular 4.85 yr orbit. Both stars are chromospherically inactive and metal-rich. We discuss a known, classical bias in measuring eccentricities for orbits with velocity semiamplitudes, K, comparable to the radial velocity uncertainties. For exoplanets with periods longer than 10 days, the observed exoplanet eccentricity distribution is nearly flat for large-amplitude systems (K > 80 m s⁻¹), but rises linearly toward low eccentricity for lower-amplitude systems (K < 20 m s⁻¹).
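The quoted semiamplitudes follow from the standard Keplerian relation K = (2πG/P)^(1/3) · M_p sin i / (M_*^(2/3) √(1−e²)), valid for M_p ≪ M_*. A minimal sketch, assuming an illustrative stellar mass of 1.15 M_sun for HD 179079 (the abstract does not give one):

```python
import math

# Physical constants (SI)
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_EARTH = 5.972e24     # kg
DAY = 86400.0          # s

def rv_semiamplitude(m_planet_kg, period_s, m_star_kg, e=0.0, sin_i=1.0):
    """Stellar reflex semi-amplitude K (m/s) for a Keplerian orbit,
    in the approximation m_planet << m_star."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet_kg * sin_i
            / (m_star_kg ** (2 / 3) * math.sqrt(1 - e ** 2)))

# Planet parameters from the abstract; the stellar mass (~1.15 M_sun)
# is an illustrative assumption, not a value stated in the text.
K = rv_semiamplitude(27.5 * M_EARTH, 14.48 * DAY, 1.15 * M_SUN)
print(f"K ≈ {K:.1f} m/s")   # → K ≈ 6.6 m/s
```

With that assumed stellar mass the relation reproduces the reported 6.6 m s⁻¹, which is the point of the sketch: K is set jointly by planet mass, period, and stellar mass.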
10. The CARIBIC flying observatory and its applications
International Nuclear Information System (INIS)
Brenninkmeijer, C.
2012-01-01
The troposphere can be considered as a complex chemical reactor reaching from the boundary layer up to the tropopause region, in which a multitude of reactions takes place, driven by sunlight and supplied with precursors emitted by vegetation, wildfires, and obviously human activities on Earth, such as burning oil products. Research aircraft (say, modified business jets) are far too expensive for a global view of this extensive atmospheric system, which changes from day to night, season to season, year to year, and will keep changing. CARIBIC (www.caribic.de) is a logical answer: it is a flying observatory, a 1.5 ton freight container packed with over 15 instruments, for exploring the atmosphere on a regular basis using cargo space in a Lufthansa Airbus A340-600 on intercontinental flights. Using results obtained by CARIBIC concerning, among other things, volcanic eruptions, the monsoon and its accompanying methane emissions, and long-range transport of pollution, we show how some of the questions atmospheric research grapples with are being addressed without having a fleet of business jets. (author)
11. Distributed Computing for the Pierre Auger Observatory
International Nuclear Information System (INIS)
Chudoba, J.
2015-01-01
The Pierre Auger Observatory operates the largest system of detectors for ultra-high-energy cosmic-ray measurements. Comparison of theoretical interaction models with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization auger. For many years, VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available cloud resources. The Dirac File Catalog replaced the LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we give a comparison of the old and new production systems and report on the experience of migrating to the new system. (paper)
12. Table mountain observatory support to other programs
International Nuclear Information System (INIS)
Harris, A.W.
1988-01-01
The Table Mountain Observatory (TMO) facilities include well equipped 24 inch and 16 inch telescopes with a 40 inch telescope (owned by Pomona College) due for completion during FY 89. This proposal is to provide operational support (equipment maintenance, setup, and observing assistance) at TMO to other programs. The program currently most heavily supported by this grant is the asteroid photometry program directed by A. W. Harris. During 1987, about 20 asteroids were observed, including a near-earth asteroid, 1951 Midas. The photometric observations are used to derive rotation periods, estimate shapes and pole orientations, and to define the phase relations of asteroids. The E class asteroid 64 Angelina was observed, and showed the same opposition spike observed for 44 Nysa last year. Comet observations are made with the narrow band camera system of David Rees, University College London. Observational support and training was provided to students and faculty from Claremont Colleges for variable star observing programs. Researchers propose to continue the asteroid program, with emphasis on measuring phase relations of low and high albedo asteroids at very low phase angles, and supporting collaborative studies of asteroid shapes
13. Neutrino observations from the Sudbury Neutrino Observatory
Energy Technology Data Exchange (ETDEWEB)
Ahmad, Q.R.; Allen, R.C.; Andersen, T.C.; Anglin, J.D.; Barton,J.C.; Beier, E.W.; Bercovitch, M.; Bigu, J.; Biller, S.D.; Black, R.A.; Blevis, I.; Boardman, R.J.; Boger, J.; Bonvin, E.; Boulay, M.G.; Bowler,M.G.; Bowles, T.J.; Brice, S.J.; Browne, M.C.; Bullard, T.V.; Buhler, G.; Cameron, J.; Chan, Y.D.; Chen, H.H.; Chen, M.; Chen, X.; Cleveland, B.T.; Clifford, E.T.H.; Cowan, J.H.M.; Cowen, D.F.; Cox, G.A.; Dai, X.; Dalnoki-Veress, F.; Davidson, W.F.; Doe, P.J.; Doucas, G.; Dragowsky,M.R.; Duba, C.A.; Duncan, F.A.; Dunford, M.; Dunmore, J.A.; Earle, E.D.; Elliott, S.R.; Evans, H.C.; Ewan, G.T.; Farine, J.; Fergani, H.; Ferraris, A.P.; Ford, R.J.; Formaggio, J.A.; Fowler, M.M.; Frame, K.; Frank, E.D.; Frati, W.; Gagnon, N.; Germani, J.V.; Gil, S.; Graham, K.; Grant, D.R.; Hahn, R.L.; Hallin, A.L.; Hallman, E.D.; Hamer, A.S.; Hamian, A.A.; Handler, W.B.; Haq, R.U.; Hargrove, C.K.; Harvey, P.J.; Hazama, R.; Heeger, K.M.; Heintzelman, W.J.; Heise, J.; Helmer, R.L.; Hepburn, J.D.; Heron, H.; Hewett, J.; Hime, A.; Hykawy, J.G.; Isaac,M.C.P.; Jagam, P.; Jelley, N.A.; Jillings, C.; Jonkmans, G.; Kazkaz, K.; Keener, P.T.; Klein, J.R.; Knox, A.B.; Komar, R.J.; Kouzes, R.; Kutter,T.; Kyba, C.C.M.; Law, J.; Lawson, I.T.; Lay, M.; Lee, H.W.; Lesko, K.T.; Leslie, J.R.; Levine, I.; Locke, W.; Luoma, S.; Lyon, J.; Majerus, S.; Mak, H.B.; Maneira, J.; Manor, J.; Marino, A.D.; McCauley, N.; McDonald,D.S.; McDonald, A.B.; McFarlane, K.; McGregor, G.; Meijer, R.; Mifflin,C.; Miller, G.G.; Milton, G.; Moffat, B.A.; Moorhead, M.; Nally, C.W.; Neubauer, M.S.; Newcomer, F.M.; Ng, H.S.; Noble, A.J.; Norman, E.B.; Novikov, V.M.; O' Neill, M.; Okada, C.E.; Ollerhead, R.W.; Omori, M.; Orrell, J.L.; Oser, S.M.; Poon, A.W.P.; Radcliffe, T.J.; Roberge, A.; Robertson, B.C.; Robertson, R.G.H.; Rosendahl, S.S.E.; Rowley, J.K.; Rusu, V.L.; Saettler, E.; Schaffer, K.K.; Schwendener,M.H.; Schulke, A.; Seifert, H.; Shatkay, M.; Simpson, J.J.; Sims, C.J.; et al.
2001-09-24
The Sudbury Neutrino Observatory (SNO) is a water imaging Cherenkov detector. Its use of 1000 metric tons of D₂O as target allows the SNO detector to make a solar-model-independent test of the neutrino oscillation hypothesis by simultaneously measuring the solar ν_e flux and the total flux of all active neutrino species. Solar neutrinos from the decay of ⁸B have been detected at SNO by the charged-current (CC) interaction on the deuteron and by the elastic scattering (ES) of electrons. While the CC reaction is sensitive exclusively to ν_e, the ES reaction also has a small sensitivity to ν_μ and ν_τ. In this paper, recent solar neutrino results from the SNO experiment are presented. It is demonstrated that the solar flux from ⁸B decay as measured from the ES reaction rate under the no-oscillation assumption is consistent with the high-precision ES measurement by the Super-Kamiokande experiment. The ν_e flux deduced from the CC reaction rate in SNO differs from the Super-Kamiokande ES results by 3.3σ. This is evidence for an active neutrino component, in addition to ν_e, in the solar neutrino flux. These results also allow the first experimental determination of the total active ⁸B neutrino flux from the Sun, which is found to be in good agreement with solar model predictions.
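The quoted 3.3σ difference is a standard two-measurement comparison: the flux difference divided by the quadrature sum of the uncertainties. A minimal sketch with illustrative numbers only; the actual measured fluxes and their error budgets are given in the paper:

```python
import math

def flux_difference_sigma(phi_a, err_a, phi_b, err_b):
    """Significance (in standard deviations) of the difference between two
    independent measurements, with uncertainties added in quadrature."""
    return abs(phi_a - phi_b) / math.hypot(err_a, err_b)

# Illustrative values only (units of 10^6 cm^-2 s^-1), chosen to show the
# arithmetic of a ~3σ discrepancy; see the SNO paper for the real numbers.
sigma = flux_difference_sigma(1.75, 0.145, 2.32, 0.095)
print(f"{sigma:.1f} sigma")   # → 3.3 sigma
```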
14. Recent results from the Pierre Auger Observatory
International Nuclear Information System (INIS)
Gouffon, Philippe
2010-01-01
The Pierre Auger Observatory has been designed to observe cosmic rays with energies above 10¹⁸ eV. The southern site, located in Malargue, Argentina, is now fully operational (since mid 2008) and has been collecting data continuously while being deployed. The northern site, which will give full sky coverage, is under development in Lamar, Colorado, USA. The PAO uses two complementary techniques to measure the arrival direction and the energy of the cosmic rays. In the southern site, its 1600 water-Cherenkov tanks, spread over 3000 km², sample the extended air shower front when it hits the ground, measuring time and energy deposited, while the 4 fluorescence detector stations, each with 6 telescopes, collect the UV light emitted by the shower core, registering the time, intensity and angle of reception. Though the Pierre Auger collaboration will be taking data for the next two decades, several results have already been published based on data collected until 2009 and will be discussed briefly: the energy spectrum and its implications for the GZK cutoff controversy, limits on photon and neutrino fluxes, anisotropy, point sources and mass composition. (author)
18. Design of smart sensing components for volcano monitoring
Science.gov (United States)
Xu, M.; Song, W.-Z.; Huang, R.; Peng, Y.; Shirazi, B.; LaHusen, R.; Kiely, A.; Peterson, N.; Ma, A.; Anusuya-Rangappa, L.; Miceli, M.; McBride, D.
2009-01-01
In a volcano monitoring application, various geophysical and geochemical sensors generate continuous high-fidelity data, and there is a compelling need for real-time raw data for volcano eruption prediction research. This requires the network to support network-synchronized sampling, online configurable sensing and situation awareness, which pose significant challenges for sensing component design. Ideally, resource usage is driven by the environment and node situation, and data quality is optimized under resource constraints. In this paper, we present our smart sensing component design, including hybrid time synchronization, configurable sensing, and situation awareness. Both design details and evaluation results are presented to show their efficiency. Although the presented design is for a volcano monitoring application, its design philosophy and framework can also apply to other similar applications and platforms. © 2009 Elsevier B.V.
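The "online configurable sensing and situation awareness" idea can be sketched generically: a node monitors recent signal energy and raises its sampling rate when an event seems to be underway. This is a toy policy, not the component design described in the paper; the class name, rates, and threshold are all hypothetical.

```python
class AdaptiveSampler:
    """Toy situation-aware sampling policy: switch to a burst rate when the
    RMS energy of recent samples exceeds a threshold, and fall back to the
    base rate when the signal is quiet."""

    def __init__(self, base_hz=10, burst_hz=100, threshold=5.0):
        self.base_hz = base_hz
        self.burst_hz = burst_hz
        self.threshold = threshold
        self.rate_hz = base_hz

    def update(self, window):
        """Choose the next sampling rate from a window of recent samples."""
        rms = (sum(x * x for x in window) / len(window)) ** 0.5
        self.rate_hz = self.burst_hz if rms > self.threshold else self.base_hz
        return self.rate_hz

s = AdaptiveSampler()
print(s.update([0.1, -0.2, 0.3]))    # quiet background → 10
print(s.update([8.0, -9.0, 7.5]))    # event-like energy → 100
```

A real design, as the abstract notes, also has to keep the per-node clocks synchronized so that rate changes across the network line up in time.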
19. Postshield stage transitional volcanism on Mahukona Volcano, Hawaii
Science.gov (United States)
Clague, D.A.; Calvert, A.T.
2009-01-01
Age spectra from ⁴⁰Ar/³⁹Ar incremental heating experiments yield ages of 298 ± 25 ka and 310 ± 31 ka for transitional composition lavas from two cones on submarine Mahukona Volcano, Hawaii. These ages are younger than the inferred end of the tholeiitic shield stage and indicate that the volcano had entered the postshield alkalic stage before going extinct. Previously reported elevated helium isotopic ratios of lavas from one of these cones were incorrectly interpreted to indicate eruption during a preshield alkalic stage. Consequently, high helium isotopic ratios are a poor indicator of eruptive stage, as they occur in preshield, shield, and postshield stage lavas. Loihi Seamount and Kilauea are the only known Hawaiian volcanoes where the volume of preshield alkalic stage lavas can be estimated. © Springer-Verlag 2008.
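If one assumed the two cones were coeval, the two ⁴⁰Ar/³⁹Ar ages could be combined with a standard inverse-variance weighted mean. This is a generic statistical sketch, not a claim about the paper's method:

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# The two ages quoted in the abstract (ka)
mean, err = weighted_mean([298.0, 310.0], [25.0, 31.0])
print(f"{mean:.0f} ± {err:.0f} ka")   # → 303 ± 19 ka
```

Note that the weighting pulls the combined age toward the more precise measurement, and the combined uncertainty is smaller than either input.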
20. Sutter Buttes-the lone volcano in California's Great Valley
Science.gov (United States)
Hausback, Brian P.; Muffler, L.J. Patrick; Clynne, Michael A.
2011-01-01
The volcanic spires of the Sutter Buttes tower 2,000 feet above the farms and fields of California's Great Valley, just 50 miles north-northwest of Sacramento and 11 miles northwest of Yuba City. The only volcano within the valley, the Buttes consist of a central core of volcanic domes surrounded by a large apron of fragmental volcanic debris. Eruptions at the Sutter Buttes occurred in early Pleistocene time, 1.6 to 1.4 million years ago. The Sutter Buttes are not part of the Cascade Range of volcanoes to the north, but instead are related to the volcanoes in the Coast Ranges to the west in the vicinity of Clear Lake, Napa Valley, and Sonoma Valley.
1. Digital Geologic Map Database of Medicine Lake Volcano, Northern California
Science.gov (United States)
Ramsey, D. W.; Donnelly-Nolan, J. M.; Felger, T. J.
2010-12-01
Medicine Lake volcano, located in the southern Cascades ~55 km east-northeast of Mount Shasta, is a large rear-arc, shield-shaped volcano with an eruptive history spanning nearly 500 k.y. Geologic mapping of Medicine Lake volcano has been digitally compiled as a spatial database in ArcGIS. Within the database, coverage feature classes have been created representing geologic lines (contacts, faults, lava tubes, etc.), geologic unit polygons, and volcanic vent location points. The database can be queried to determine the spatial distributions of different rock types, geologic units, and other geologic and geomorphic features. These data, in turn, can be used to better understand the evolution, growth, and potential hazards of this large, rear-arc Cascades volcano. Queries of the database reveal that the total area covered by lavas of Medicine Lake volcano, which range in composition from basalt through rhyolite, is about 2,200 km², encompassing all or parts of 27 U.S. Geological Survey 1:24,000-scale topographic quadrangles. The maximum extent of these lavas is about 80 km north-south by 45 km east-west. Occupying the center of Medicine Lake volcano is a 7 km by 12 km summit caldera in which nestles its namesake, Medicine Lake. The flanks of the volcano, which are dotted with cinder cones, slope gently upward to the caldera rim, which reaches an elevation of nearly 2,440 m. Approximately 250 geologic units have been mapped, only half a dozen of which are thin surficial units such as alluvium. These volcanic units mostly represent eruptive events, each commonly including a vent (dome, cinder cone, spatter cone, etc.) and its associated lava flow. Some cinder cones have not been matched to lava flows, as the corresponding flows are probably buried, and some flows cannot be correlated with vents. The largest individual units on the map are all basaltic in composition, including the late Pleistocene basalt of Yellowjacket Butte (296 km² exposed), the largest unit on the
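The kind of area-by-rock-type query described here can be sketched without GIS software using the shoelace formula over unit polygons. The units and coordinates below are hypothetical toy data, not values from the database:

```python
def polygon_area_km2(coords):
    """Planar polygon area via the shoelace formula (vertex coords in km)."""
    n = len(coords)
    s = sum(coords[i][0] * coords[(i + 1) % n][1]
            - coords[(i + 1) % n][0] * coords[i][1] for i in range(n))
    return abs(s) / 2.0

def area_by_composition(units):
    """Total mapped area per rock type - the kind of summary a query of the
    geologic-unit polygon feature class would return."""
    totals = {}
    for comp, polygon in units:
        totals[comp] = totals.get(comp, 0.0) + polygon_area_km2(polygon)
    return totals

# Hypothetical toy units; the real map holds ~250 geologic units
units = [
    ("basalt",   [(0, 0), (4, 0), (4, 3), (0, 3)]),         # 12 km^2
    ("basalt",   [(10, 10), (12, 10), (12, 12), (10, 12)]), # 4 km^2
    ("rhyolite", [(5, 5), (8, 5), (8, 7), (5, 7)]),         # 6 km^2
]
print(area_by_composition(units))   # → {'basalt': 16.0, 'rhyolite': 6.0}
```

A real query against the ArcGIS database would group on a composition attribute and sum a precomputed area field, but the aggregation logic is the same.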
2. Earth Girl Volcano: An Interactive Game for Disaster Preparedness
Science.gov (United States)
Kerlow, Isaac
2017-04-01
Earth Girl Volcano is an interactive casual strategy game for disaster preparedness. The project is designed for mainstream audiences, particularly for children, as an engaging and fun way to learn about volcano hazards. Earth Girl is a friendly character that kids can easily connect with and she helps players understand how to best minimize volcanic risk. Our previous award-winning game, Earth Girl Tsunami, has seen success on social media, and is available as a free app for both Android and iOS tablets and large phones in eight languages: Indonesian, Thai, Tamil, Japanese, Chinese, Spanish, French and English. This is the first public viewing of the Earth Girl Volcano new game prototype.
3. Multiple Active Volcanoes in the Northeast Lau Basin
Science.gov (United States)
Baker, E. T.; Resing, J. A.; Lupton, J. E.; Walker, S. L.; Embley, R. W.; Rubin, K. H.; Buck, N.; de Ronde, C. E.; Arculus, R. J.
2010-12-01
The northeast Lau Basin occupies a complex geological area between the Tofua arc front, the E-W trending Tonga Trench, and the Northeast Lau Spreading Center. These boundaries create multiple zones of extension and thus provide abundant opportunities for magma to invade the crust. The 25-km-long chain of “Mata” volcanoes lies near the center of this area, separated from both the arc front and the spreading ridge. In 2008 we discovered hydrothermal venting on the largest and most southerly of these volcanoes, W and E Mata. In 2010 we visited the 7 smaller volcanoes that form a 15-km-long arcuate sweep to the north from W and E Mata (the “North Matas”). We also revisited W and E Mata. Over each volcano we conducted CTD tows to map plumes and collect water samples. Based on the CTD results, camera tows searched for seafloor sources on three volcanoes. The N Mata volcanoes, extending from Mata Taha (1) in the south to Mata Fitu (7) in the north, lie within a prominent gap in the shallow bathymetry along the southern border of the Tonga trench. Northward from E Mata the Mata volcanoes degrade from large symmetrical cones to smaller and blocky volcanic edifices. Summit depths range from 1165 m (W Mata) to 2670 m (Mata Nima (5)). The most active volcano in the chain is the erupting W Mata, with an intense plume that extended 250 m above the summit. Hydrothermal temperature anomalies (Δθ, corrected for hydrographic masking effects) reached ~1.7°C, with light-scattering values as high as 2-5 ΔNTU. The 2010 surveys now show that 6 of the 7 N Mata volcanoes are also hydrothermally active. Along the N Matas, Δθ and ΔNTU signals ranged from robust to weak, but distinct oxidation-reduction potential (aka Eh) anomalies confirmed active venting in each case. The most concentrated plumes were found near Mata Ua (2) and Mata Fitu (7), with Δθ and ΔNTU maxima of 0.1-0.17°C and 0.3, respectively. Despite the variability in plume strength, however, ΔNTU/Δθ ratios
4. Determining the stress field in active volcanoes using focal mechanisms
Directory of Open Access Journals (Sweden)
Bruno Massa
2016-11-01
Stress inversion of seismological datasets has become an essential tool for retrieving the stress field of active tectonic and volcanic areas. In volcanic areas in particular, it can place constraints on volcano-tectonics and, more generally, contribute to a better understanding of volcano dynamics. During the last decades, a wide range of stress inversion techniques has been proposed, some of them specifically conceived to manage seismological datasets. A modern technique of stress inversion, the BRTM, has been applied to seismological datasets available for three different regions of active volcanism: Mt. Somma-Vesuvius (197 Fault Plane Solutions, FPSs), Campi Flegrei (217 FPSs) and Long Valley Caldera (38,000 FPSs). The key role of stress inversion techniques in the analysis of volcano dynamics is critically discussed. Particular emphasis is devoted to the performance of the BRTM applied to volcanic areas.
5. The Importance of Marine Observatories and of RAIA in Particular
Directory of Open Access Journals (Sweden)
Luísa Bastos
2016-08-01
Coastal and Oceanic Observatories are important tools to provide information on ocean state, phenomena and processes. They meet the need for a better understanding of coastal and ocean dynamics, revealing regional characteristics and vulnerabilities. These observatories are extremely useful to guide human actions in response to natural events and potential climate change impacts, anticipating the occurrence of extreme weather and oceanic events and helping to minimize consequent personal and material damages and costs. International organizations and local governments have shown an increasing interest in operational oceanography and coastal, marine and oceanic observations, which has resulted in substantial investments in these areas. A variety of physical, chemical and biological data have been collected to better understand the specific characteristics of each ocean area and its importance in the global context. The general public’s interest in marine issues and observatories has also been raised, mainly in relation to vulnerability, sustainability and climate change issues. Data and products obtained by an observatory are hence useful to a broad range of stakeholders, from national and local authorities to the population in general. An introduction to Ocean Observatories, including their national and regional importance, and a brief analysis of the societal interest in these observatories and related issues are presented. The potential of a Coastal and Ocean Observatory is then demonstrated using the RAIA observatory as an example. This modern and comprehensive observatory is dedicated to improving operational oceanography, technology and marine science for the North Western Iberian coast, and to providing services to a large range of stakeholders.
6. Inside the volcano: The how and why of Thrihnukagigur volcano, Iceland
Science.gov (United States)
LaFemina, Peter; Hudak, Michael; Feineman, Maureen; Geirsson, Halldor; Normandeau, Jim; Furman, Tanya
2015-04-01
The Thrihnukagigur volcano, located in the Brennisteinsfjöll fissure swarm on the Reykjanes Peninsula, Iceland, offers a unique exposure of the upper magmatic plumbing system of a monogenetic volcano. The volcano formed during a dike-fed strombolian eruption ~3500 BP with flow-back leaving an evacuated conduit, elongated parallel to the regional maximum horizontal stress. At least two vents were formed above the dike, as well as several small hornitos south-southwest of the main vent. In addition to the evacuated conduit, a cave exists 120 m below the vent. The cave exposes stacked lava flows and a buried cinder cone. The unconsolidated tephra of the cone is cross-cut by a NNE-trending dike, which runs across the ceiling of this cave to the vent that produced lava and tephra during the ~3500 BP fissure eruption. We present geochemical, petrologic and geologic observations, including a high-resolution three-dimensional scan of the system that indicate the dike intersected, eroded and assimilated unconsolidated tephra from the buried cinder cone, thus excavating a region along the dike, allowing for future slumping and cave formation. Two petrographically distinct populations of plagioclase phenocrysts are present in the system: a population of smaller (maximum length 1 mm) acicular phenocrysts and a population of larger (maximum length 10 mm) tabular phenocrysts that is commonly broken and displays disequilibrium sieve textures. The acicular plagioclase crystals are present in the dike and lavas while the tabular crystals are in these units and the buried tephra. An intrusion that appears not to have interacted with the tephra has only acicular plagioclase. This suggests that a magma crystallizing a single acicular population of plagioclase intruded the cinder cone and rapidly assimilated the tephra, incorporating the tabular population of phenocrysts from the cone. Petrographic thin-sections of lavas sampled near the vent show undigested fragments of tephra from
7. Measuring Gases Using Drones at Turrialba Volcano, Costa Rica
Science.gov (United States)
Stix, J.; Alan, A., Jr.; Corrales, E.; D'Arcy, F.; de Moor, M. J.; Diaz, J. A.
2016-12-01
We are currently developing a series of drones and associated instrumentation to study Turrialba volcano in Costa Rica. This volcano has shown increasing activity during the last 20 years, and the volcano is currently in a state of heightened unrest as exemplified by recent explosive activity in May-August 2016. The eruptive activity has made the summit area inaccessible to normal gas monitoring activities, prompting development of new techniques to measure gas compositions. We have been using two drones, a DJI Spreading Wings S1000 octocopter and a Turbo Ace Matrix-i quadcopter, to airlift a series of instruments to measure volcanic gases in the plume of the volcano. These instruments comprise optical and electrochemical sensors to measure CO2, SO2, and H2S concentrations which are considered the most significant species to help forecast explosive eruptions and determine the relative proportions of magmatic and hydrothermal components in the volcanic gas. Additionally, cameras and sensors to measure air temperature, relative humidity, atmospheric pressure, and GPS location are included in the package to provide meteorological and geo-referenced information to complement the concentration data and provide a better picture of the volcano from a remote location. The integrated payloads weigh 1-2 kg, which can typically be flown by the drones in 10-20 minutes at altitudes of 2000-4000 meters. Preliminary tests at Turrialba in May 2016 have been very encouraging, and we are in the process of refining both the drones and the instrumentation packages for future flights. Our broader goals are to map gases in detail with the drones in order to make flux measurements of each species, and to apply this approach at other volcanoes.
8. Investigation of the Dashigil mud volcano (Azerbaijan) using beryllium-10
Energy Technology Data Exchange (ETDEWEB)
Kim, K.J., E-mail: kjkim@kigam.re.kr [Korea Geological Research Division, Korea Institute of Geoscience and Mineral Resources, Daejeon 305-350 (Korea, Republic of); Baskaran, M.; Jweda, J. [Department of Geology, Wayne State University, Detroit, MI 48202 (United States); Feyzullayev, A.A.; Aliyev, C. [Geology Institute of the Azerbaijan National Academy of Sciences (ANAS), Baku, AZ 1143 (Azerbaijan); Matsuzaki, H. [MALT, University of Tokyo, Tokyo (Japan); Jull, A.J.T. [NSF Arizona AMS Lab, University of Arizona, AZ 85721 (United States)
2013-01-15
We collected and analyzed five sediment samples from three mud volcano (MV) vents and six suspended and bottom sediment samples from the adjoining river near the Dashgil mud volcano in Azerbaijan for ¹⁰Be. These three MVs are among the 190 onshore and >150 offshore MVs in this region, which corresponds to the western flank of the South Caspian depression. These MVs overlie faulted and petroleum-bearing anticlines. The ¹⁰Be concentrations and ¹⁰Be/⁹Be ratios are comparable to the values reported for mud volcanoes on Trinidad Island. It appears that the stable Be concentrations in Azerbaijan rivers are not perturbed by anthropogenic effects and are comparable to those of the much older sediments (mud volcano samples). The ¹⁰Be and ⁹Be concentrations in our river sediments are compared to the global data set and show that the ¹⁰Be values found for the Kura River are among the lowest of any river for which data exist. We attribute this low ¹⁰Be concentration to the nature of surface minerals, which are affected by the residual hydrocarbon compounds that occur commonly in the study area in particular and in Azerbaijan at large. The concentrations of ⁴⁰K and U-Th-series radionuclides (²³⁴Th, ²¹⁰Pb, ²²⁶Ra, and ²²⁸Ra) indicate overall homogeneity of the mud volcano samples from the three different sites. Based on the ¹⁰Be concentrations of the mud volcano samples, the age of the mud sediments could be at least as old as 4 Myr.
9. A generic model for the shallow velocity structure of volcanoes
Science.gov (United States)
Lesage, Philippe; Heap, Michael J.; Kushnir, Alexandra
2018-05-01
Knowledge of the structure of volcanoes and of the physical properties of volcanic rocks is of paramount importance to the understanding of volcanic processes and the interpretation of monitoring observations. However, the determination of these structures by geophysical methods suffers from limitations, including a lack of resolution and poor precision. Laboratory experiments provide complementary information on the physical properties of volcanic materials and their behavior as a function of several parameters, including pressure and temperature. Nevertheless, combined studies and comparisons of field-based geophysical and laboratory-based physical approaches remain scant in the literature. Here, we present a meta-analysis which compares 44 seismic velocity models of the shallow structure of eleven volcanoes, laboratory velocity measurements on about one hundred rock samples from five volcanoes, and seismic well-logs from deep boreholes at two volcanoes. The comparison of these measurements confirms the strong variability of P- and S-wave velocities, which reflects the diversity of volcanic materials. The values obtained from laboratory experiments are systematically larger than those provided by seismic models. This discrepancy mainly results from scaling problems due to the difference between the sampled volumes. The averages of the seismic models are characterized by very low velocities at the surface and a strong velocity increase at shallow depth. By adjusting analytical functions to these averages, we define a generic model that can describe the variations in P- and S-wave velocities in the first 500 m of andesitic and basaltic volcanoes. This model can be used for volcanoes where no structural information is available. The model can also account for site time corrections in hypocenter determination, as well as for site and path effects that are commonly observed in volcanic structures.
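The fitting step described in this abstract, adjusting an analytical function to averaged velocity profiles, can be sketched as follows. This is a hypothetical illustration only: the shifted power law, its coefficients, and the synthetic profile below are assumptions, not the parameterization published by Lesage et al.

```python
import numpy as np

def fit_powerlaw_velocity(z, v):
    """Fit v(z) = a * (z + 1)^b by least squares in log-log space.

    A shifted power law is one plausible analytical form for very low
    surface velocities that rise steeply over the first few hundred
    metres; the paper's actual functional form may differ.
    """
    log_z = np.log(z + 1.0)  # +1 m shift keeps log finite at the surface
    log_v = np.log(v)
    b, log_a = np.polyfit(log_z, log_v, 1)
    return np.exp(log_a), b

# Synthetic "averaged model": illustrative values, not measured data
depths = np.linspace(0.0, 500.0, 26)   # depth below surface, m
v_avg = 0.4 * (depths + 1.0) ** 0.35   # P-wave velocity, km/s

a, b = fit_powerlaw_velocity(depths, v_avg)
print(round(a, 3), round(b, 3))        # recovers the 0.4 and 0.35 used above
```

Because the synthetic profile is an exact power law, the log-log fit recovers its parameters; with real averaged profiles the residuals would indicate how well the chosen analytical form describes the shallow structure.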
10. Astronomical Observatory of Belgrade from 1924 to 1955
Science.gov (United States)
2014-12-01
The history of the Astronomical Observatory in Belgrade, as presented here, became a field of interest to the author of the present monograph in early 2002. Then, together with Luka C. Popovic, during the conference "Development of Astronomy among Serbs II" held in early April of that year, he prepared a paper entitled "Astronomska opservatorija tokom Drugog Svetskog rata" (The Astronomical Observatory in the Second World War). This paper was based on archival material concerning the Astronomical Observatory, which, given the author's position, has been the subject of his professional work.
11. The First Astronomical Observatory in Cluj-Napoca
Science.gov (United States)
Szenkovits, Ferenc
2008-09-01
One of the most important cities of Romania is Cluj-Napoca (Kolozsvár, Klausenburg). It is a traditional center of education, with many universities and high schools. Since the second half of the 18th century, the University of Cluj has had its own Astronomical Observatory, serving teaching activities and scientific research. The famous astronomer Maximilian Hell was one of the Jesuits who laid the foundations of this Astronomical Observatory. Our purpose is to offer a short history of the beginnings of this Astronomical Observatory.
12. Astronomy and astrophysics communication in the UCM Observatory
Science.gov (United States)
Crespo-Chacón, I.; de Castro, E.; Díaz, C.; Gallego, J.; Gálvez, M. C.; Hernán-Obispo, M.; López-Santiago, J.; Montes, D.; Pascual, S.; Verdet, A.; Villar, V.; Zamorano, J.
We present a summary of the most recent science communication activities that have taken place at the Observatorio de la Universidad Complutense de Madrid (UCM Observatory) on the occasion of the Third Science Week of the Comunidad Autónoma de Madrid (3-16 November 2003), including guided tours of the observatory facilities, solar observations, and several talks. Moreover, the current telescopes, instruments and tools of the UCM Observatory have allowed us to organize other communication activities, such as the live observation, together with its internet broadcast, of total lunar eclipses and of other exceptional astronomical events such as the Venus transit that took place on 8 June 2004.
13. The 2000 AD eruption of Copahue Volcano, Southern Andes
OpenAIRE
Naranjo, José Antonio; Polanco, Edmundo
2004-01-01
Although all historic eruptions of the Copahue volcano (37°45'S-71°10.2'W, 3,001 m a.s.l.) have been of low magnitude, the largest (VEI=2) and longest eruptive cycle occurred from July to October 2000. Phreatic phases characterized the main events, as a former acid crater lake was blown up. Low-altitude columns were deflected by low-altitude winds in variable directions, though slightly predominantly to the NNE. The presence of the El Agrio caldera depression to the east of Copahue volcano may have ...
14. The recent seismicity of Teide volcano, Tenerife (Canary Islands, Spain)
Science.gov (United States)
D'Auria, L.; Albert, G. W.; Calvert, M. M.; Gray, A.; Vidic, C.; Barrancos, J.; Padilla, G.; García-Hernández, R.; Perez, N. M.
2017-12-01
Tenerife is an active volcanic island which experienced several eruptions of moderate intensity in historical times, and a few explosive eruptions in the Holocene. The increasing population density and the large number of tourists are constantly raising the volcanic risk of the island. On 02/10/2016 a remarkable swarm of long-period events was recorded and was interpreted as the effect of a transient massive fluid discharge episode occurring within the deep hydrothermal system of Teide volcano. Indeed, since Oct. 2016 the hydrothermal system of the volcano has undergone a progressive pressurization, testified by the marked variation of different geochemical parameters. The most striking observation is the increase in the diffuse CO2 emission from the summit crater of Teide volcano, which started from a background value of about 20 tons/day and reached a peak of 175 tons/day in Feb. 2017. The pressurization process has been accompanied by an increase in the volcano-tectonic seismicity of Teide volcano, recorded by the Red Sísmica Canaria, managed by the Instituto Volcanológico de Canarias (INVOLCAN). The network began its full operativity in Nov. 2016 and currently consists of 15 broadband seismic stations. Since Nov. 2016 the network has detected more than 100 small-magnitude earthquakes, located beneath Teide volcano at depths usually ranging between 5 and 15 km. On January 6th 2017 a M=2.5 earthquake was recorded in the area, one of the strongest recorded in decades. Most of the events show typical features of the microseismicity of hydrothermal systems: high spatial and temporal clustering, and similar waveforms of individual events, which often overlap. We present the spatial and temporal distribution of the seismicity of Teide volcano since Nov. 2016, comparing it also with the past seismicity of the volcano. Furthermore, we analyze the statistical properties of the numerous swarms recorded until now with the aid of a template
15. Magma paths at Piton de la Fournaise Volcano
OpenAIRE
Michon, Laurent; Ferrazzini, Valérie; Di Muro, Andrea
2016-01-01
Several patterns of magma paths have been proposed since the 1980s for Piton de la Fournaise volcano. Given the significant differences, which are presented here, we propose a reappraisal of the magma intrusion paths using a 17-year-long database of volcano-tectonic seismic events and a detailed mapping of the scoria cones. At the edifice scale, the magma propagates along two N120 trending rift zones. They are wide, linear, spotted by small to large scoria cones and r...
16. Geophysical Observations Supporting Research of Magmatic Processes at Icelandic Volcanoes
Science.gov (United States)
Vogfjörd, Kristín. S.; Hjaltadóttir, Sigurlaug; Roberts, Matthew J.
2010-05-01
Magmatic processes at volcanoes on the boundary between the European and North American plates in Iceland are observed with in-situ multidisciplinary geophysical networks owned by different national, European or American universities and research institutions, but through collaboration mostly operated by the Icelandic Meteorological Office. The terrestrial observations are augmented by space-based interferometric synthetic aperture radar (InSAR) images of the volcanoes and their surrounding surface. Together this infrastructure can monitor magma movements in several volcanoes from the base of the crust up to the surface. The national seismic network is sensitive enough to detect small-scale seismicity deep in the crust under some of the volcanoes. High-resolution mapping of this seismicity and its temporal progression has been used to delineate the track of the magma as it migrates upwards in the crust, either to form an intrusion at shallow levels or to reach the surface in an eruption. Broadband recording has also enabled capturing low-frequency signals emanating from magmatic movements. In two volcanoes, Eyjafjallajökull and Katla, just east of the South Iceland Seismic Zone (SISZ), seismicity just above the crust-mantle boundary has revealed magma intruding into the crust from the mantle below. As the magma moves to shallower levels, the deformation of the Earth's surface is captured by geodetic systems, such as continuous GPS networks, InSAR images of the surface and, even more sensitive to the deformation, strain meters placed in boreholes around 200 m below the Earth's surface. Analysis of these signals can reveal the size and shape of the magma body as well as its temporal evolution. At nearby Hekla volcano, flanking the SISZ to the north, where only 50% of events are of M>1 compared to 86% of earthquakes in Eyjafjallajökull, the sensitivity of the seismic network is insufficient to detect the smallest seismicity and so the volcano appears less
17. Monitoring Volcanoes by Use of Air-Dropped Sensor Packages
Science.gov (United States)
Kedar, Sharon; Rivellini, Tommaso; Webb, Frank; Blaes, Brent; Bracho, Caroline; Lockhart, Andrew; McGee, Ken
2003-01-01
Sensor packages that would be dropped from airplanes have been proposed for pre-eruption monitoring of physical conditions on the flanks of awakening volcanoes. The purpose of such monitoring is to gather data that could contribute to understanding and prediction of the evolution of volcanic systems. Each sensor package, denoted a volcano monitoring system (VMS), would include a housing with a parachute attached at its upper end and a crushable foam impact absorber at its lower end (see figure). The housing would contain survivable low-power instrumentation that would include a Global Positioning System (GPS) receiver, an inclinometer, a seismometer, a barometer, a thermometer, and CO2 and SO2 analyzers. The housing would also contain battery power, control, data-logging, and telecommunication subsystems. The proposal for the development of the VMS calls for the use of commercially available sensor, power, and telecommunication equipment, so that efforts could be focused on integrating all of the equipment into a system that could survive impact and operate thereafter for 30 days, transmitting data on the pre-eruptive state of a target volcano to a monitoring center. In a typical scenario, VMSs would be dropped at strategically chosen locations on the flanks of a volcano once the volcano had been identified as posing a hazard from any of a variety of observations that could include eyewitness reports, scientific observations from positions on the ground, synthetic-aperture-radar scans from aircraft, and/or remote sensing from aboard spacecraft. Once dropped, the VMSs would be operated as a network of in situ sensors that would transmit data to a local monitoring center. This network would provide observations as part of an integrated volcano-hazard assessment strategy that would involve both remote sensing and timely observations from the in situ sensors. A similar strategy that involves the use of portable sensors (but not dropping of sensors from aircraft) is
18. Pyroclastic sulphur eruption at Poas Volcano, Costa Rica
Energy Technology Data Exchange (ETDEWEB)
Francis, P.W.; Thorpe, R.S.; Brown, G.C.; Glasscock, J.
1980-01-01
The recent Voyager missions to Jupiter have highlighted the role of sulphur in volcanic processes on Io. Although fumarolic sulphur and SO2 gas are almost universal at terrestrial active volcanoes, and rare instances of sulphur lava flows have been reported, sulphur in a pyroclastic form has only been described from Poas Volcano, Costa Rica. Here we amplify the original descriptions by Bennett and Raccichini and describe a recent eruption of pyroclastic sulphur scoria and ejected blocks that are characterised by miniature sulphur stalactites and stalagmites.
19. Cyclic Activity of Mud Volcanoes: Evidences from Trinidad (SE Caribbean)
Science.gov (United States)
Deville, E.
2007-12-01
Fluid and solid transfer in mud volcanoes shows different phases of activity, including catastrophic events followed by periods of relative quiescence characterized by moderate activity. This is notably shown by historical data from onshore Trinidad. Several authors have evoked a possible link between the eruption frequencies of some mud volcanoes and seismic activity, but in Trinidad there is no direct correlation between mud eruptions and seisms. It appears that each eruptive mud volcano has its own period of catastrophic activity, and this period is highly variable from one volcano to another. The frequency of activity of mud volcanoes seems essentially controlled by the local pressure regime within the sedimentary pile. At most, a seism can, in some cases, trigger an eruption that is already close to its term. The dynamics of expulsion of the mud volcanoes during the quiescence phases has been studied notably from temperature measurements within the mud conduits. The mud temperature is controlled concurrently by either the gas flux (endothermic gas depressurizing induces a cooling effect) or the mud flux (mud is a vector for convective heat transfer). Complex temperature distributions were observed in large conduits and pools. Indeed, especially in the bigger pools, the temperature distribution characterizes convective cells with an upward displacement of mud above the deep outlet, and ring-shaped rolls associated with the burial of the mud on the flanks of the pools. In simple, tube-shaped, narrow conduits, the temperature is more regular, but we observed different types of profiles, with either downward-increasing or downward-decreasing temperatures. If the upward flow of mud were regular, we would expect increasing temperatures and a progressively decreasing gradient with depth within the conduits. However, the variable profiles measured from one place to another, as well as time-variable temperatures measured within the conduits and especially at the base of the
20. The contribution of the Volcano Observations Work Package to the implementation of the European Plate Observing System
Science.gov (United States)
Puglisi, Giuseppe
2016-04-01
The overall aim of the implementation phase of the European Plate Observing System (EPOS) is to make the integrated platform operational in order to guarantee seamless access to the data provided by the European Solid Earth communities. The Volcano Observations Work Package (WP11) contributes to this objective by implementing a Thematic Core Service (TCS) which is planned to give access to the data and services provided by the European Volcano Observatories (VO) and some Volcanological Research Institutions (VRI; such as university departments, laboratories, etc.). Both types are considered national research infrastructures (RI) which the TCS will integrate. Currently, monitoring networks on European volcanoes consist of thousands of stations or sites where volcanological parameters are continuously or periodically measured. These sites are equipped with instruments for geophysical (seismic, geodetic, gravimetric, electromagnetic), geochemical (volcanic plumes, fumaroles, groundwater, rivers, soils) and environmental observations (e.g. meteorological and air quality parameters), as well as various prototypal monitoring systems (e.g. Doppler radars, ground-based SAR). Across Europe several laboratories provide sample characterization (rocks, gases, isotopes, etc.), quasi-continuous analysis of space-borne data (SAR, thermal imagery, SO2 and ash), as well as high-performance computing facilities. All these RIs provide high-quality information (observations) on the current status of European volcanoes and the geodynamic background of the surrounding areas. The implementation of the Volcano Observations TCS will address technical as well as managerial issues, considering the currently heterogeneous state of the art of the volcanological research infrastructures in Europe. Indeed, the current arrangement of individual VO and VRI is too fragmented to be considered a unique distributed infrastructure. Therefore, the main effort in the framework of the EPOS
1. The "Volcano Observations" Thematic Core Service of the European Plate Observing System (EPOS): status of the implementation.
Science.gov (United States)
Puglisi, Giuseppe
2017-04-01
The European volcanological community contributes to the implementation of the European Plate Observing System (EPOS) by making operational an integrated platform that guarantees seamless access to the data provided by the European Solid Earth communities. To achieve this objective, the Volcano Observations Work Package (WP11) will implement a Thematic Core Service (TCS) which is planned to give access to the data and services provided by the European Volcano Observatories (VO) and some Volcanological Research Institutions (VRI; such as university departments, laboratories, etc.); both types are considered national research infrastructures (RI) over which to build the TCS. Currently, the networks on European volcanoes consist of thousands of stations or sites where volcanological parameters are continuously or periodically measured. These sites are equipped with instruments for geophysical (seismic, geodetic, gravimetric, electromagnetic), geochemical (volcanic plumes, fumaroles, groundwater, rivers, soils) and environmental observations (e.g. meteorological and air quality parameters), as well as various prototypal monitoring systems (e.g. Doppler radars, ground-based SAR). Laboratories for sample analysis (rocks, gases, isotopes, etc.) and for almost continuous analysis of space-borne data (SAR, thermal imagery, SO2 and ash), as well as high-performance computing centres, also operate in Europe. All these RIs provide high-quality information (observations) on the current status of European volcanoes and the geodynamic background of the surrounding areas. The implementation of the Volcano Observations TCS is addressing technical and management issues, considering the currently heterogeneous state of the art of the volcanological research infrastructures in Europe. Indeed, the framework of the VO and VRI is now too fragmented to be considered a unique distributed infrastructure; thus the main effort planned in the frame of EPOS-IP is focused on creating services aimed at
2. Monitoring quiescent volcanoes by diffuse He degassing: case study Teide volcano
Science.gov (United States)
Pérez, Nemesio M.; Melián, Gladys; Asensio-Ramos, María; Padrón, Eleazar; Hernández, Pedro A.; Barrancos, José; Padilla, Germán; Rodríguez, Fátima; Calvo, David; Alonso, Mar
2016-04-01
Tenerife (2,034 km2), the largest of the Canary Islands, is the only island that has developed a central volcanic complex (Teide-Pico Viejo stratovolcanoes), characterized by the eruption of differentiated magmas. This central volcanic complex has been built at the intersection of the three major volcanic rift-zones of Tenerife, where most of the historical volcanic activity has taken place. The existence of a volcanic-hydrothermal system beneath Teide volcano is suggested by the occurrence of a weak fumarolic system, steamy ground and high rates of diffuse CO2 degassing all around the summit cone of Teide (Pérez et al., 2013). Diffuse emission studies of non-reactive and/or highly mobile gases such as helium have recently provided promising results to detect changes in the magmatic gas component at the surface related to volcanic unrest episodes (Padrón et al., 2013). The geochemical properties of He minimize the interaction of this noble gas on its movement toward the Earth's surface, and its isotopic composition is not affected by subsequent chemical reactions. It is highly mobile, chemically inert, physically stable, non-biogenic, sparingly soluble in water under ambient conditions, almost non-adsorbable, and highly diffusive with a diffusion coefficient ~10 times that of CO2. As part of the geochemical monitoring program for the volcanic surveillance of Teide volcano, yearly surveys of diffuse He emission through the surface of the summit cone of Teide volcano have been performed since 2006. The soil He emission rate was measured yearly at ~130 sampling sites selected in the surface environment of the summit cone of Teide volcano (Tenerife, Canary Islands), covering an area of ~0.5 km2, assuming that He emission is governed by convection and diffusion. The distribution of the sampling sites was carefully chosen to homogeneously cover the target area, allowing the computation of the total He emission by sequential Gaussian simulation (sGs). Nine surveys have been
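The last step in the survey described above, computing a total He emission from ~130 point measurements over ~0.5 km2, can be illustrated with a deliberately simplified stand-in for sequential Gaussian simulation: interpolate the point fluxes onto a grid by inverse-distance weighting and integrate over the area. All coordinates and flux values below are synthetic; true sGs would instead draw many equally probable flux maps from the data's variogram and report their spread as an uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical survey: ~130 sites over a ~700 m x 700 m (~0.5 km^2) area
sites = rng.uniform(0.0, 700.0, size=(130, 2))       # site coordinates, m
flux = rng.lognormal(mean=0.0, sigma=1.0, size=130)  # He flux, mg m^-2 d^-1

def idw_total_emission(sites, flux, extent=700.0, cell=10.0, power=2.0):
    """Integrate point fluxes over a grid via inverse-distance weighting.

    A simplified stand-in for sequential Gaussian simulation (sGs):
    it yields a single interpolated map rather than an ensemble of
    realizations, so it gives no uncertainty estimate.
    """
    centers = np.arange(cell / 2.0, extent, cell)
    total = 0.0
    for gx in centers:
        for gy in centers:
            d2 = (sites[:, 0] - gx) ** 2 + (sites[:, 1] - gy) ** 2
            w = 1.0 / np.maximum(d2, 1e-6) ** (power / 2.0)
            total += (w @ flux) / w.sum() * cell * cell  # mg d^-1 per cell
    return total

total_mg_per_day = idw_total_emission(sites, flux)
print(total_mg_per_day > 0.0)
```

With homogeneous site coverage, as in the survey design, the interpolated total approaches mean flux times area; the sGs ensemble used in the paper additionally quantifies how sensitive that total is to the unsampled space between sites.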
3. Waiting for a catastrophe from the eruption of Vesuvius or Phlegraean Fields volcanoes from the lack of autoregulation of the territories at risk
Science.gov (United States)
Dobran, F.
2017-12-01
Vesuvius and the Phlegraean Fields volcanoes in the Bay of Naples produce large-scale eruptions with recurrence periods that range from centuries to several millennia for the former and tens of thousands of years for the latter. The city of Naples, with one million inhabitants, is situated between these volcanoes and is surrounded by another two million people. The eruptions of Vesuvius have during the past 2000 years destroyed many local communities, and Naples is built on the Phlegraean Fields eruption deposits of 15,000 years ago. The Vesuvius Observatory monitors these volcanoes for seismicity, ground deformation, and gas emissions, and was an independent entity until 15 years ago, when it passed under the control of the central government in Rome. The Observatory lost its ability to work directly with local authorities to make rapid decisions in case of volcanic emergencies, and the centralized decision-making process risks producing catastrophic consequences much worse than those from Katrina. As in the Katrina situation, the central authority's risk management strategy is flawed because it is politicized and lacks the knowledge of the territory at risk needed for taking timely decisions. In the Neapolitan area there are many actors with different interests, and without an effective collaboration between volunteers, businesses, and social, cultural and professional groups there is an excessive likelihood that an emergency decision will end in tragedy. The evacuation plans for the Neapolitan volcanoes call for relocating more than two million people, and the key issues are who will give the evacuation order, on what basis, and when, because waiting too long can produce a catastrophe and reacting too early can drain the national treasury and cause significant social and political consequences. The way to avoid this dilemma is to replace the massive evacuation or deportation plans of geologists with a risk reduction strategy that produces an autoregulation of the territory that is resilient
4. Multi-parametric investigation of the volcano-hydrothermal system at Tatun Volcano Group, Northern Taiwan
Science.gov (United States)
Rontogianni, S.; Konstantinou, K. I.; Lin, C.-H.
2012-07-01
The Tatun Volcano Group (TVG) is located in northern Taiwan near the capital Taipei. In this study we selected and analyzed almost four years (2004-2007) of its seismic activity. The seismic network established around TVG initially consisted of eight three-component seismic stations with this number increasing to twelve by 2007. Local seismicity mainly involved high frequency (HF) earthquakes occurring as isolated events or as part of spasmodic bursts. Mixed and low frequency (LF) events were observed during the same period but more rarely. During the analysis we estimated duration magnitudes for the HF earthquakes and used a probabilistic non-linear method to accurately locate all these events. The complex frequencies of LF events were also analyzed with the Sompi method indicating fluid compositions consistent with a misty or dusty gas. We juxtaposed these results with geochemical/temperature anomalies extracted from fumarole gas and rainfall levels covering a similar period. This comparison is interpreted in the context of a model proposed earlier for the volcano-hydrothermal system of TVG where fluids and magmatic gases ascend from a magma body that lies at around 7-8 km depth. Most HF earthquakes occur as a response to stresses induced by fluid circulation within a dense network of cracks pervading the upper crust at TVG. The largest (ML ~ 3.1) HF event that occurred on 24 April 2006 at a depth of 5-6 km had source characteristics compatible with that of a tensile crack. It was followed by an enrichment in magmatic components of the fumarole gases as well as a fumarole temperature increase, and provides evidence for ascending fluids from a magma body into the shallow hydrothermal system. This detailed analysis and previous physical volcanology observations at TVG suggest that the region is volcanically active and that measures to mitigate potential hazards have to be considered by the local authorities.
6. ACTIVITY AND Vp/Vs RATIO OF VOLCANO-TECTONIC SEISMIC SWARM ZONES AT NEVADO DEL RUIZ VOLCANO, COLOMBIA
Directory of Open Access Journals (Sweden)
Londoño B. John Makario
2010-06-01
Full Text Available An analysis of the seismic activity for volcano-tectonic earthquake (VT) swarm zones at Nevado del Ruiz Volcano (NRV) was carried out for the interval 1985-2002, which is the most seismically active period at NRV until now (2010). The swarm-like seismicity of NRV was frequently concentrated in very well defined clusters around the volcano. The seismic swarm zone located at the active crater was the most active during the entire time. The seismic swarm zone located to the west of the volcano suggested some relationship with the volcanic crises: it was active before and after the two eruptions that occurred in November 1985 and September 1989. It is believed that this seismic activity may be used as a monitoring tool of volcanic activity. For each seismic swarm zone the Vp/Vs ratio was also calculated by grouping earthquakes and stations. It was found that each seismic swarm zone had a distinct Vp/Vs ratio with respect to the others, except for the crater and west swarm zones, which had the same value. The average Vp/Vs ratios for the seismic swarm zones located at the active crater and to the west of the volcano are about 6-7% lower than that for the north swarm zone, and about 3% lower than that for the south swarm zone. We suggest that the reduction of the Vp/Vs ratio is due to degassing phenomena inside the central and western earthquake swarm zones, or to the presence of microcracks inside the volcano. This supposition is in agreement with other studies of geophysics, geochemistry and drilling surveys carried out at NRV.
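The abstract does not spell out how the Vp/Vs ratios were computed from the grouped earthquakes and stations; one standard approach is a Wadati-type fit, where the slope of (ts − tp) against tp gives Vp/Vs − 1. A minimal sketch of that estimate (function name and arrival times are hypothetical, not from the paper):

```python
def vp_vs_wadati(tp, ts):
    """Least-squares slope through the origin of (ts - tp) vs tp;
    Vp/Vs = 1 + slope (classic Wadati-diagram estimate)."""
    num = sum(p * (s - p) for p, s in zip(tp, ts))
    den = sum(p * p for p in tp)
    return 1.0 + num / den

# Synthetic arrivals generated with Vp/Vs = 1.68 (illustrative values only).
tp = [1.0, 2.0, 3.5, 5.0]            # P travel times (s)
ts = [1.68 * p for p in tp]          # S travel times (s)
ratio = vp_vs_wadati(tp, ts)         # recovers ~1.68
```

With real data the scatter of the fit also gives a rough uncertainty on the ratio, which is what makes the few-percent differences between swarm zones meaningful.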
7. Virtual Observatories, Data Mining, and Astroinformatics
Science.gov (United States)
Borne, Kirk
The historical, current, and future trends in knowledge discovery from data in astronomy are presented here. The story begins with a brief history of data gathering and data organization. A description of the development of new information science technologies for astronomical discovery is then presented. Among these are e-Science and the virtual observatory, with its data discovery, access, display, and integration protocols; astroinformatics and data mining for exploratory data analysis, information extraction, and knowledge discovery from distributed data collections; new sky surveys' databases, including rich multivariate observational parameter sets for large numbers of objects; and the emerging discipline of data-oriented astronomical research, called astroinformatics. Astroinformatics is described as the fourth paradigm of astronomical research, following the three traditional research methodologies: observation, theory, and computation/modeling. Astroinformatics research areas include machine learning, data mining, visualization, statistics, semantic science, and scientific data management. Each of these areas is now an active research discipline, with significant science-enabling applications in astronomy. Research challenges and sample research scenarios are presented in these areas, in addition to sample algorithms for data-oriented research. These information science technologies enable scientific knowledge discovery from the increasingly large and complex data collections in astronomy. The education and training of the modern astronomy student must consequently include skill development in these areas, whose practitioners have traditionally been limited to applied mathematicians, computer scientists, and statisticians. Modern astronomical researchers must cross these traditional discipline boundaries, thereby borrowing the best of breed methodologies from multiple disciplines. In the era of large sky surveys and numerous large telescopes, the potential
8. Cyberinfrastructure for the NSF Ocean Observatories Initiative
Science.gov (United States)
Orcutt, J. A.; Vernon, F. L.; Arrott, M.; Chave, A.; Krueger, I.; Schofield, O.; Glenn, S.; Peach, C.; Nayak, A.
2007-12-01
The Internet today is vastly different than the Internet that we knew even five years ago and the changes that will be evident five years from now, when the NSF Ocean Observatories Initiative (OOI) prototype has been installed, are nearly unpredictable. Much of this progress is based on the exponential growth in capabilities of consumer electronics and information technology; the reality of this exponential behavior is rarely appreciated. For example, the number of transistors on a square cm of silicon will continue to double every 18 months, the density of disk storage will double every year, and network bandwidth will double every eight months. Today's desktop 2TB RAID will be 64TB and the 10Gbps Regional Scale Network fiber optical connection will be running at 1.8Tbps. The same exponential behavior characterizes the future of genome sequencing. The first two sequences of composites of individuals' genes cost tens of millions of dollars in 2001. Dr. Craig Venter just published a more accurate complete human genome (his own) at a cost on the order of $100,000. The J. Craig Venter Institute has provided support for the X Prize for Genomics offering $10M to the first successful sequencing of a human genome for $1,000. It's anticipated that the prize will be won within five years. Major advances in technology that are broadly viewed as disruptive or revolutionary rather than evolutionary will often depend upon the exploitation of exponential expansions in capability. Applications of these ideas to the OOI will be discussed. Specifically, the agile ability to scale cyberinfrastructure commensurate with the exponential growth of sensors, networks and computational capability and demand will be described.
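The doubling-time arithmetic behind the abstract's projections is easy to reproduce. A small sketch over the five-year horizon the abstract uses (the doubling periods are the abstract's; the function name is mine):

```python
def project(value, doubling_months, horizon_months):
    """Exponential growth projection: value * 2^(horizon / doubling_period)."""
    return value * 2 ** (horizon_months / doubling_months)

horizon = 60  # five years, as in the abstract

disk_tb = project(2, 12, horizon)    # 2 TB RAID doubling yearly -> 64 TB
net_gbps = project(10, 8, horizon)   # 10 Gbps link doubling every 8 months -> ~1810 Gbps (~1.8 Tbps)
```

Both results match the figures quoted in the abstract, which is the point of the "exponential behavior is rarely appreciated" remark: five years of steady doubling is a factor of 32 to 180, not a marginal improvement.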
9. The Science and Design of the AGIS Observatory
Science.gov (United States)
Schroedter, Martin
2010-02-01
The AGIS observatory is a next-generation array of imaging atmospheric Cherenkov telescopes (IACTs) for gamma-ray astronomy between 100 GeV and 100 TeV. The AGIS observatory is the next logical step in high energy gamma-ray astronomy, offering improved angular resolution and sensitivity compared to FERMI, and overlapping the high energy end of FERMI's sensitivity band. The baseline AGIS observatory will employ an array of 36 Schwarzschild-Couder IACTs in combination with a highly pixelated (0.05° diameter) camera. The instrument is designed to provide millicrab sensitivity over a wide (8° diameter) field of view, allowing both deep studies of faint point sources as well as efficient mapping of the Galactic plane and extended sources. I will describe science drivers behind the AGIS observatory and the design and status of the project.
10. Science Potential of a Deep Ocean Antineutrino Observatory
Energy Technology Data Exchange (ETDEWEB)
Dye, S.T. [Department of Physics and Astronomy, University of Hawaii, 2505 Correa Road, Honolulu, Hawaii, 96822 (United States); College of Natural Sciences, Hawaii Pacific University, 45-045 Kamehameha Highway, Kaneohe, Hawaii 96744 (United States)
2007-06-15
This paper presents the science potential of a deep ocean antineutrino observatory being developed in Hawaii. The observatory design allows for relocation from one site to another. Positioning the observatory some 60 km distant from a nuclear reactor complex enables precision measurement of neutrino mixing parameters, leading to a determination of neutrino mass hierarchy and θ₁₃. At a mid-Pacific location the observatory measures the flux and ratio of uranium and thorium decay neutrinos from earth's mantle and performs a sensitive search for a hypothetical natural fission reactor in earth's core. A subsequent deployment at another mid-ocean location would test lateral heterogeneity of uranium and thorium in earth's mantle.
11. ALOHA Cabled Observatory (ACO): Acoustic Doppler Current Profiler (ADCP): Velocity
Data.gov (United States)
National Oceanic and Atmospheric Administration, Department of Commerce — The University of Hawaii's ALOHA ("A Long-term Oligotrophic Habitat Assessment") Cabled Observatory (ACO) is located 100 km north of the island of Oahu, Hawaii (22...
12. Integration of space geodesy: a US National Geodetic Observatory
Science.gov (United States)
Yunck, Thomas P.; Neilan, Ruth
2003-01-01
In the interest of improving the performance and efficiency of space geodesy a diverse group in the U.S., in collaboration with IGGOS, has begun to establish a unified National Geodetic Observatory (NGO).
13. 150th Anniversary of the Astronomical Observatory Library of Sciences
Science.gov (United States)
Solntseva, T.
The scientific library of the Astronomical Observatory of Kyiv Taras Shevchenko University is one of the oldest of its type in Ukraine. Our Astronomical Observatory and its scientific library will celebrate the 150th anniversary of their foundation. The core of our library was formed by 900 duplicate volumes from Olbers' private library. These were acquired by the Russian Academy of Sciences for Pulkovo Observatory in 1841 but, by Struve's order, were transferred to Kyiv Saint Volodymyr University. These books are of great value: there are works published during the lifetimes of Copernicus, Kepler, Galilei, Newton and Descartes. Our library contains more than 100,000 units of storage - monographs, periodical astronomical editions from the first issues (Astronomische Nachrichten, Astronomical Journal, Monthly Notices, etc.), editions of the majority of the astronomical observatories and institutions of the world, and unique astronomical atlases and maps
14. How Mount Stromlo Observatory shed its imperial beginnings
Science.gov (United States)
Bhathal, Ragbir
2014-12-01
In the 90 years since its foundation in 1924, Mount Stromlo Observatory in Australia has changed from an outpost of empire to an international research institution. Ragbir Bhathal examines how the British influence waxed and waned.
15. Experience in CCD Photometry at the Tartu Observatory
Directory of Open Access Journals (Sweden)
Tuvikene T.
2003-12-01
Full Text Available We give an overview of the CCD instrumentation and data reduction techniques used at the Tartu Observatory. The first results from photometric observations of the peculiar variable V838 Mon are presented.
16. A Regional Observatory for Producers' Climate Change Adaptation ...
International Development Research Centre (IDRC) Digital Library (Canada)
2016-04-22
Apr 22, 2016 ... A Regional Observatory for Producers' Climate Change Adaptation in Thies, Senegal ... The Adaptation Insights series is a joint publication of the International Development Research Centre and the Centre for ... Innovation.
17. Grain investigation by the help of satellite observatories
International Nuclear Information System (INIS)
Friedemann, C.
1988-01-01
Interstellar grains are investigated with the help of satellite observatories, taking into account extraterrestrial ultraviolet observations, infrared astronomy with the help of orbiting cooled telescopes, obs | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5516095161437988, "perplexity": 7099.520886964199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039744649.77/warc/CC-MAIN-20181118201101-20181118223101-00427.warc.gz"}
http://math.stackexchange.com/questions/40076/how-can-i-solve-this-non-linear-differential-equation/40081 | # How can I solve this non-linear differential equation?
I'm trying to solve the equation $$y' = 1 - y^2$$ Here is my attempt: $$y' = 1 - y^2$$ Divide by (1-y^2) $$\frac{y'}{1-y^2} = 1$$ Integrate both sides: $$\frac{1}{2}\log|\frac{y+1}{y-1}|=t+c$$ Rearrange $$y = \frac{ke^{2t}+1}{ke^{2t}-1}$$ I'd have thought that solution was right, but we have to figure out a specific solution with y(0) = 0. But this isn't possible with the above equation.
Doesn't $y(0)=0$ imply $k=-1$? – lhf May 19 '11 at 13:41
While I was writing this, I rewrote $e^{2c}$ = $k$. Am I allowed to set k to -1? – Hannesh May 19 '11 at 13:44
+1 for showing your work. No, working in the reals, you cannot have $k=-1$ when it came from $e^{2c}$. Good for you to keep track of that-it is easy to miss. – Ross Millikan May 19 '11 at 13:56
@Ross, @Luboš: I don't think complex numbers are the issue here. When seeking real-valued solutions, one can indeed stay completely within the real realm, if one handles the absolute value signs correctly. (Cont.) – Hans Lundmark May 19 '11 at 16:40
(Cont.) From the integrated expression it follows that $\left| \frac{y+1}{y-1} \right| = \exp 2(t+c)$, hence $\frac{y+1}{y-1} = \pm e^{2c} e^{2t}$. Now let $k = \pm e^{2c}$; then $k$ can be anything except zero. By letting $k$ run through the nonzero real numbers, you get all the real-valued solutions $y(t)$, except the constant ones $y(t)=1$ and $y(t)=-1$ which should have been noted separately before dividing by $1-y^2$. – Hans Lundmark May 19 '11 at 16:40
## 3 Answers
Since you want a solution near $y=0$, you should use $1-y$ in the denominator (as it will be positive) and can remove the absolute value signs. This changes some signs in your answer, giving $$y = \frac{ke^{2t}-1}{ke^{2t}+1}$$ and $k=1$ gives $y(0)=0$
I thought this wasn't legal since k came from $e^{2c}$ (finding c would involve the log of a negative number) however I realized that there is no reason why c can't be complex. – Hannesh May 19 '11 at 13:54
Dear @Hannesh, it's not only legal but mandatory to allow all integration constants throughout the calculation being arbitrary complex numbers. Solving equations - algebraic or differential - in the reals isn't simpler than in complex numbers. Quite on the contrary, it's more complicated because you must solve it using all possible complex values of the parameters, and at the very end, you must do an extra job of filtering out the solutions that are not real. See the exchanges right under your question. – Luboš Motl May 19 '11 at 14:06
See Hans's comment above. There's no need for complex valued integration constants here as long as you don't ignore the absolute value. – cch May 19 '11 at 19:58
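As a numerical sanity check (not part of the original thread), one can confirm that the corrected family $y = \frac{ke^{2t}-1}{ke^{2t}+1}$ satisfies $y' = 1 - y^2$ and that $k=1$ passes through $y(0)=0$; the derivative is approximated here by central differences:

```python
import math

def y(t, k):
    """Corrected solution family y = (k e^{2t} - 1) / (k e^{2t} + 1)."""
    u = k * math.exp(2 * t)
    return (u - 1) / (u + 1)

def residual(t, k, h=1e-6):
    """Central-difference check of y' - (1 - y^2); should be ~0."""
    dy = (y(t + h, k) - y(t - h, k)) / (2 * h)
    return dy - (1 - y(t, k) ** 2)

assert abs(y(0, 1)) < 1e-12          # k = 1 gives the solution through the origin
for k in (0.5, 1.0, 2.0):
    for t in (-1.0, 0.0, 1.0):
        assert abs(residual(t, k)) < 1e-6
```

Algebraically the same check is immediate: with $u = ke^{2t}$, $y' = 4u/(u+1)^2 = 1 - y^2$.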
Reducing what you have a little more, we get that it is equal to Tanh[x - k].

Tanh[-k] == 0 // Setting x to zero
Therefore k = 0, leaving Tanh[x] as your function.
Exactly, this is the right compact form of the solution. tanh is sinh/cosh so its derivative is $(\cosh^2 t - \sinh^2 t)/\cosh^2 t = 1/\cosh^2 t$ which is equal to $1-\tanh^2 t$, indeed. – Luboš Motl May 19 '11 at 14:04
I wrote it down and solved it in a slightly different way. The first thing you should notice is that $y = 1$ and $y = -1$ are the two constant solutions, which allows you then to divide $y'$ by $1-y^2$, since you want to study it for $y(0) \in (-1,1)$, knowing that any solution starting in $(-1,1)$ stays there (or converges to 1).
Then yes, with some algebra you manage to get
$\left(\log \frac{1+y}{1-y}\right)' = 2$
if I haven't screwed up with the signs; now integrating it from 0 to $t$ you get:
$\log \frac{1+y(t)}{1-y(t)} - \log \frac{1+y(0)}{1-y(0)} = 2t$
without the absolute value since everything in the argument of the logs is non-negative. By imposing $y(0) = 0$ the second term in the left vanishes and you're left with an easy expression that if inverted gives the following:
$y(t) = \frac{e^{2t} - 1}{e^{2t} + 1}$
which is simply
$y(t) = \tanh (t)$
and of course double checking $y(0) = 0$.
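As a quick numerical check (not part of the original answer), the closed form, the tanh identity, and the differential equation can all be verified directly:

```python
import math

def y(t):
    """Closed-form solution y(t) = (e^{2t} - 1) / (e^{2t} + 1)."""
    e2t = math.exp(2 * t)
    return (e2t - 1) / (e2t + 1)

# y(0) = 0, y coincides with tanh, and y satisfies y' = 1 - y^2.
assert y(0) == 0
for t in (-2.0, -0.5, 0.3, 1.7):
    assert abs(y(t) - math.tanh(t)) < 1e-12
    h = 1e-6
    dy = (y(t + h) - y(t - h)) / (2 * h)   # central-difference derivative
    assert abs(dy - (1 - y(t) ** 2)) < 1e-6
```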
add comment | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8897531032562256, "perplexity": 420.44553094151286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00443-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://careyoukeep.com/supplements-for-healthy-brain-function-best-nootropics-for-focus.html | He used to get his edge from Adderall, but after moving from New Jersey to San Francisco, he says, he couldn’t find a doctor who would write him a prescription. Driven to the Internet, he discovered a world of cognition-enhancing drugs known as nootropics — some prescription, some over-the-counter, others available on a worldwide gray market of private sellers — said to improve memory, attention, creativity and motivation.
A key ingredient of Noehr’s chemical “stack” is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).
Two variants of the Towers of London task were used by Elliott et al. (1997) to study the effects of MPH on planning. The object of this task is for subjects to move game pieces from one position to another while adhering to rules that constrain the ways in which they can move the pieces, thus requiring subjects to plan their moves several steps ahead. Neither version of the task revealed overall effects of the drug, but one version showed impairment for the group that received the drug first, and the other version showed enhancement for the group that received the placebo first.
With all these studies pointing to the nootropic benefits of some essential oils, it can then logically be concluded that some essential oils can be considered “smart drugs.” However, since essential oils have so much variety and only a small fraction of this wide range has been studied, it cannot be definitively concluded that absolutely all essential oils have brain-boosting benefits. The connection between the two is strong, however.
28,61,36,25,61,57,39,56,23,37,24,50,54,32,50,33,16,42,41,40,34,33,31,65,23,36,29,51,46,31,45,52,30, 50,29,36,57,60,34,48,32,41,48,34,51,40,53,73,56,53,53,57,46,50,35,50,60,62,30,60,48,46,52,60,60,48, 47,34,50,51,45,54,70,48,61,43,53,60,44,57,50,50,52,37,55,40,53,48,50,52,44,50,50,38,43,66,40,24,67, 60,71,54,51,60,41,58,20,28,42,53,59,42,31,60,42,58,36,48,53,46,25,53,57,60,35,46,32,26,68,45,20,51, 56,48,25,62,50,54,47,42,55,39,60,44,32,50,34,60,47,70,68,38,47,48,70,51,42,41,35,36,39,23,50,46,44,56,50,39
Core body temperature, local pH and internal pressure are important indicators of patient well-being. While a thermometer can give an accurate reading during regular checkups, the monitoring of professionals in high-intensity situations requires a more accurate inner body temperature sensor. An ingestible chemical sensor can record acidity and pH levels along the gastrointestinal tract to screen for ulcers or tumors. Sensors also can be built into medications to track compliance.
From its online reputation and product presentation to our own product run, Synagen IQ smacks of mediocre performance. A complete list of ingredients could have been convincing and decent, but the lack of information paired with the potential for side effects are enough for beginners to old-timers in nootropic use to shy away and opt for more trusted and reputable brands. There is plenty that needs to be done to uplift the brand and improve its overall ranking in the widely competitive industry.
Many laboratory tasks have been developed to study working memory, each of which taxes to varying degrees aspects such as the overall capacity of working memory, its persistence over time, and its resistance to interference either from task-irrelevant stimuli or among the items to be retained in working memory (i.e., cross-talk). Tasks also vary in the types of information to be retained in working memory, for example, verbal or spatial information. The question of which of these task differences correspond to differences between distinct working memory systems and which correspond to different ways of using a single underlying system is a matter of debate (e.g., D’Esposito, Postle, & Rypma, 2000; Owen, 2000). For the present purpose, we ignore this question and simply ask, Do MPH and d-AMP affect performance in the wide array of tasks that have been taken to operationalize working memory? If the literature does not yield a unanimous answer to this question, then what factors might be critical in determining whether stimulant effects are manifest?
Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it’s hard to beat $20, it works, it’s not that messy after practice, and it’s not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you’re going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top tray - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn’t lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)

One idea I’ve been musing about is the connections between IQ, Conscientiousness, and testosterone. IQ and Conscientiousness do not correlate to a remarkable degree - even though one would expect IQ to at least somewhat enable a long-term perspective, self-discipline, metacognition, etc! There are indications in studies of gifted youth that they have lower testosterone levels. The studies I’ve read on testosterone indicate no improvements to raw ability. So, could there be a self-sabotaging aspect to human intelligence whereby greater intelligence depends on lack of testosterone, but this same lack also holds back Conscientiousness (despite one’s expectation that intelligence would produce greater self-discipline and planning), undermining the utility of greater intelligence? Could cases of high IQ types who suddenly stop slacking and accomplish great things sometimes be due to changes in testosterone? Studies on the correlations between IQ, testosterone, Conscientiousness, and various measures of accomplishment are confusing and don’t always support this theory, but it’s an idea to keep in mind.
You may have come across this age-old adage, “Work smarter, not harder.” So, why not extend the same philosophy in other aspects of your life? Are you in a situation wherein no matter how much you exercise, eat healthy, and sleep well, you still struggle to focus and motivate yourself? If yes, you need a smart solution minus the adverse health effects. Try ‘Smart Drugs,’ that could help you out of your situation by enhancing your thought process, boosting your memory, and making you more creative and productive.

Some critics argue that Modafinil is an expression of that, a symptom of a new 24/7 work routine. But what if the opposite is true? Let’s say you could perform a task in significantly less time than usual. You could then use the rest of your time differently, spending it with family, volunteering, or taking part in a leisure activity. And imagine that a drug helped you focus on clearing your desk and inbox before leaving work. Wouldn’t that help you relax once you get home?

Fitzgerald 2012 and the general absence of successful experiments suggests not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of 20% × 1/dozens of being iodine! I may be unduly optimistic if I give this as much as 10%.

Sure, those with a mental illness may very well need a little more monitoring to make sure they take their medications, but will those suffering from a condition with hallmark symptoms of paranoia and anxiety be helped by consuming a technology that quite literally puts a tracking device inside their body?
For patients hearing voices telling them that they're being watched, a monitoring device may be a hard pill to swallow.

If you want to try a nootropic in supplement form, check the label to weed out products you may be allergic to and vet the company as best you can by scouring its website and research basis, and talking to other customers, Kerl recommends. "Find one that isn't just giving you some temporary mental boost or some quick fix – that’s not what a nootropic is intended to do," Cyr says.

Regardless of your goal, there is a supplement that can help you along the way. Below, we’ve put together the definitive smart drugs list for peak mental performance. There are three major groups of smart pills and cognitive enhancers. We will cover each one in detail in our list of smart drugs. They are natural and herbal nootropics, prescription ADHD medications, and racetams and synthetic nootropics.

MarketInsightsReports provides syndicated market research reports to industries, organizations or even individuals with an aim of helping them in their decision making process. These reports include in-depth market research studies i.e. market share analysis, industry analysis, information on products, countries, market size, trends, business research details and much more. MarketInsightsReports provides Global and regional market intelligence coverage, a 360-degree market view which includes statistical forecasts, competitive landscape, detailed segmentation, key trends, and strategic recommendations.

Taurine (Examine.com) was another gamble on my part, based mostly on its inclusion in energy drinks. I didn’t do as much research as I should have: it came as a shock to me when I read in Wikipedia that taurine has been shown to prevent oxidative stress induced by exercise and was an antioxidant - oxidative stress is a key part of how exercise creates health benefits and antioxidants inhibit those benefits.
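The back-of-the-envelope iodine estimate a few paragraphs back (a 20% benefit rate times 1/dozens) is simple to make explicit; a sketch assuming "dozens" means two dozen nutrients per package:

```python
benefit_rate = 2 / 10      # 2 of the 10 iodine studies showed any benefit
nutrients = 24             # "dozens" of nutrients per package (assumed two dozen)

# If the responsible substance were picked at random from the package:
p_iodine = benefit_rate * (1 / nutrients)   # well under the 10% ceiling given above
```

At roughly 0.8%, the mechanical estimate is an order of magnitude below the 10% the author calls "unduly optimistic," which is consistent with the skeptical tone of the passage.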
Smart Pill is formulated with herbs, amino acids, vitamins and co-factors to provide nourishment for the brain, which may enhance memory, cognitive function, and clarity, in a natural base containing a potent standardized extract of 24% flavonoid glycosides. Fast-acting, super-potent formula. A unique formulation containing a blend of essential nutrients, herbs and co-factors.

As it happens, these are areas I am distinctly lacking in. When I first began reading about testosterone I had no particular reason to think it might be an issue for me, but it increasingly sounded plausible; an aunt independently suggested I might be deficient, a biological uncle turned out to be severely deficient with levels around 90 ng/dl (where the normal range for 20-49yo males is 249-839), and finally my blood test in August 2013 revealed that my actual level was 305 ng/dl; inasmuch as I was 25 and not 49, this is a tad low.

Took pill 12:11 PM. I am not certain. While I do get some things accomplished (a fair amount of work on the Silk Road article and its submission to places), I also have some difficulty reading through a fiction book (Sum) and I seem kind of twitchy and constantly shifting windows. I am weakly inclined to think this is Adderall (say, 60%). It's not my normal feeling. Next morning - it was Adderall.

I had tried 8 randomized days like the Adderall experiment to see whether I was one of the people whom modafinil energizes during the day. (The other way to use it is to skip sleep, which is my preferred use.) I rarely use it during the day since my initial uses did not impress me subjectively. The experiment was not my best - while it was double-blind randomized, the measurements were subjective, and not a good measure of mental functioning like dual n-back (DNB) scores, which I could statistically compare from day to day or against my many previous days of dual n-back scores.
Between my high expectation of finding the null result, the poor experiment quality, and the minimal effect it had (eliminating an already rare use), the value of this information was very small. While these two compounds may not be as exciting as a super pill that instantly unlocks the full potential of your brain, they currently have the most science to back them up. And, as Patel explains, they’re both relatively safe for healthy individuals of most ages. Patel explains that a combination of caffeine and L-theanine is the most basic supplement stack (or combined dose) because the L-theanine can help blunt the anxiety and “shakiness” that can come with ingesting too much caffeine. In general, I feel a little bit less alert, but still close to normal. By 6PM, I have a mild headache, but I try out 30 rounds of gbrainy (haven’t played it in months) and am surprised to find that I reach an all-time high; no idea whether this is due to DNB or not, since Gbrainy is very heavily crystallized (half the challenge disappears as you learn how the problems work), but it does indicate I’m not deluding myself about mental ability. (To give a figure: my last score well before I did any DNB was 64, and I was doing well that day; on modafinil, I had a 77.) I figure the headache might be food related, eat, and by 7:30 the headache is pretty much gone and I’m fine up to midnight. The surveys just reviewed indicate that many healthy, normal students use prescription stimulants to enhance their cognitive performance, based in part on the belief that stimulants enhance cognitive abilities such as attention and memorization. Of course, it is possible that these users are mistaken. One possibility is that the perceived cognitive benefits are placebo effects. Another is that the drugs alter students’ perceptions of the amount or quality of work accomplished, rather than affecting the work itself (Hurst, Weidner, & Radlow, 1967). 
A third possibility is that stimulants enhance energy, wakefulness, or motivation, which improves the quality and quantity of work that students can produce with a given, unchanged, level of cognitive ability. To determine whether these drugs enhance cognition in normal individuals, their effects on cognitive task performance must be assessed in relation to placebo in a masked study design.

But there would also be significant downsides. Amphetamines are structurally similar to crystal meth - a potent, highly addictive recreational drug which has ruined countless lives and can be fatal. Both Adderall and Ritalin are known to be addictive, and there are already numerous reports of workers who struggled to give them up. There are also side effects, such as nervousness, anxiety, insomnia, stomach pains, and even hair loss, among others.

In 2011, as part of the Silk Road research, I ordered 10x100mg Modalert (5btc) from a seller. I also asked him about his sourcing, since if it was bad, it'd be valuable to me to know whether it was sourced from one of the vendors listed in my table. He replied, more or less: I get them from a large Far Eastern pharmaceuticals wholesaler. I think they're probably the supplier for a number of the online pharmacies. 100mg seems likely to be too low, so I treated this shipment as 5 doses.

Similarly, Mehta et al 2000 noted that the positive effects of methylphenidate (40 mg) on spatial working memory performance were greatest in those volunteers with lower baseline working memory capacity.
In a study of the effects of ginkgo biloba in healthy young adults, Stough et al 2001 found improved performance in the Trail-Making Test A only in the half with the lower verbal IQ. Use of prescription stimulants by normal healthy individuals to enhance cognition is said to be on the rise. Who is using these medications for cognitive enhancement, and how prevalent is this practice? Do prescription stimulants in fact enhance cognition for normal healthy people? We review the epidemiological and cognitive neuroscience literatures in search of answers to these questions. Epidemiological issues addressed include the prevalence of nonmedical stimulant use, user demographics, methods by which users obtain prescription stimulants, and motivations for use. Cognitive neuroscience issues addressed include the effects of prescription stimulants on learning and executive function, as well as the task and individual variables associated with these effects. Little is known about the prevalence of prescription stimulant use for cognitive enhancement outside of student populations. Among college students, estimates of use vary widely but, taken together, suggest that the practice is commonplace. The cognitive effects of stimulants on normal healthy people cannot yet be characterized definitively, despite the volume of research that has been carried out on these issues. Published evidence suggests that declarative memory can be improved by stimulants, with some evidence consistent with enhanced consolidation of memories. Effects on the executive functions of working memory and cognitive control are less reliable but have been found for at least some individuals on some tasks. In closing, we enumerate the many outstanding questions that remain to be addressed by future research and also identify obstacles facing this research. 
But notice that most of the cost imbalance is coming from the estimate of the benefit of IQ - if it quadrupled to a defensible $8000, that would be close to the experiment cost! So in a way, what this VoI calculation tells us is that what is most valuable right now is not that iodine might possibly increase IQ, but getting a better grip on how much any IQ intervention is worth.
Using prescription ADHD medications, racetams, and other synthetic nootropics can boost brain power. Yes, they can work. Even so, we advise against using them long-term since the research on their safety is still new. Use them at your own risk. For the majority of users, stick with all natural brain supplements for best results. What is your favorite smart pill for increasing focus and mental energy? Tell us about your favorite cognitive enhancer in the comments below.
Deficiencies in B vitamins can cause memory problems, mood disorders, and cognitive impairment. B vitamins will not make you smarter on their own. Still, they support a wide array of cognitive functions. Most of the B complex assists in some fashion with brain activity. Vitamin B12 (Methylcobalamin) is the most critical B vitamin for mental health.
When I worked on the Bulletproof Diet book, I wanted to verify that the effects I was getting from Bulletproof Coffee were not coming from modafinil, so I stopped using it and measured my cognitive performance while I was off of it. What I found was that on Bulletproof Coffee and the Bulletproof Diet, my mental performance was almost identical to my performance on modafinil. I still travel with modafinil, and I'll take it on occasion, but while living a Bulletproof lifestyle I rarely feel the need.
https://indico.cern.ch/event/839985/contributions/3986054/

# LXX International conference "NUCLEUS – 2020. Nuclear physics and elementary particle physics. Nuclear physics technologies"
Oct 11 – 17, 2020
Online
Europe/Moscow timezone
## Precision beta-spectrum measurement of RaE with semiconductor spectrometers
Oct 14, 2020, 6:10 PM
1h
Online
Poster report Section 5. Neutrino physics and astrophysics.
Ilia Drachnev
### Description
Precise knowledge of forbidden transition beta-spectra plays a significant role in both nuclear and particle physics.
In this work we present a precision measurement of the beta-spectrum shape of $^{210}$Bi (historically RaE) performed with spectrometers based on semiconductor Si(Li) detectors. This first-forbidden non-unique transition has a transition form-factor that deviates strongly from unity, and knowledge of its spectrum plays an important role in low-background physics in the presence of $^{210}$Pb background. The studies were performed with spectrometers in target-detector and 4-$\pi$ geometries. The measured transition form-factor could be approximated as $H(W) = 1 + (-0.433 \pm 0.002) W + (0.0510 \pm 0.0004) W^2$ for both the target-detector and the 4-$\pi$ spectrometer, in good agreement between the two experiments as well as with the previous studies. The form-factor parameter precision has been substantially increased with respect to the previous experimental results. This work was supported by the Russian Foundation for Basic Research (project nos. 16-29-13014 and 19-02-00097).
### Primary authors
Prof. Alexander Derbin (Petersburg Nuclear Physics Institute NRC KI), Mrs Irina Lomskaya (PNPI NRC KI), Dr Valentina Muratova (PNPI NRC KI), Mr Evgenii Unzhakov (Petersburg Nuclear Physics Institute), Dr Dmitry Semenov (PNPI NRC KI), Mrs Nelly Pilipenko (PNPI NRC KI), Dr Igor Alexeev (V.G. Khlopin Radium Institute)
https://arxiv-export-lb.library.cornell.edu/abs/2005.10743

math.ST
Title: Tensor Clustering with Planted Structures: Statistical Optimality and Computational Limits
Abstract: This paper studies the statistical and computational limits of high-order clustering with planted structures. We focus on two clustering models, constant high-order clustering (CHC) and rank-one higher-order clustering (ROHC), and study the methods and theory for testing whether a cluster exists (detection) and identifying the support of cluster (recovery).
Specifically, we identify the sharp boundaries of signal-to-noise ratio for which CHC and ROHC detection/recovery are statistically possible. We also develop the tight computational thresholds: when the signal-to-noise ratio is below these thresholds, we prove that polynomial-time algorithms cannot solve these problems under the computational hardness conjectures of hypergraphic planted clique (HPC) detection and hypergraphic planted dense subgraph (HPDS) recovery. We also propose polynomial-time tensor algorithms that achieve reliable detection and recovery when the signal-to-noise ratio is above these thresholds. Both sparsity and tensor structures yield the computational barriers in high-order tensor clustering. The interplay between them results in significant differences between high-order tensor clustering and matrix clustering in literature in aspects of statistical and computational phase transition diagrams, algorithmic approaches, hardness conjecture, and proof techniques. To our best knowledge, we are the first to give a thorough characterization of the statistical and computational trade-off for such a double computational-barrier problem. Finally, we provide evidence for the computational hardness conjectures of HPC detection (via low-degree polynomial and Metropolis methods) and HPDS recovery (via low-degree polynomial method).
Comments: Done a few clarifications and added low-degree polynomial based evidence for HPDS recovery conjecture 2
Subjects: Statistics Theory (math.ST); Computational Complexity (cs.CC); Machine Learning (cs.LG); Methodology (stat.ME); Machine Learning (stat.ML)
Cite as: arXiv:2005.10743 [math.ST] (or arXiv:2005.10743v3 [math.ST] for this version)
Submission history
From: Yuetian Luo [view email]
[v1] Thu, 21 May 2020 15:53:44 GMT (464kb,D)
[v2] Sat, 29 Aug 2020 01:57:19 GMT (354kb,D)
[v3] Mon, 2 Aug 2021 14:10:08 GMT (1876kb,D)
https://selfstudypoint.in/kinetic-theory-of-gases/

# Kinetic Theory of Gases
Assumptions or postulates of the kinetic-molecular theory of gases are given below. These postulates are related to atoms and molecules which cannot be seen, hence it is said to provide a microscopic model of gases.
1. Gases consist of large number of identical particles (atoms or molecules) that are so small and so far apart on the average that the actual volume of the molecules is negligible in comparison to the empty space between them.
2. There is no force of attraction between the particles of a gas at ordinary temperature and pressure.
3. Particles of a gas are always in constant and random motion.
4. Particles of a gas move in all possible directions in straight lines. During their random motion, they collide with each other and with the walls of the container. Pressure is exerted by the gas as a result of collision of the particles with the walls of the container.
5. Collisions of gas molecules are perfectly elastic. This means that total energy of molecules before and after the collision remains same.
6. At any particular time, different particles in the gas have different speeds and hence different kinetic energies.
It is possible to show that though the individual speeds are changing, the distribution of speeds remains constant at a particular temperature.
If a molecule has variable speed, then it must have a variable kinetic energy. Under these circumstances, we can talk only about average kinetic energy. In kinetic theory it is assumed that the average kinetic energy of the gas molecules is directly proportional to the absolute temperature.
The important mathematical results from this theory are:
K.E. of n moles of gas = 3/2 nRT, i.e. K.E. per mole = 3/2 RT

K.E. per molecule = 3/2 kT

where R = 8.314 J mol⁻¹ K⁻¹ and k = R/N_A = 1.38 × 10⁻²³ J/K

From the above postulates, the kinetic gas equation derived is

pV = 1/3 mnU²

where m is the mass of one molecule, n is the number of molecules and U is the root-mean-square speed.
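The energy relations and the kinetic gas equation can be checked numerically. A short sketch (nitrogen at 300 K is just an illustrative choice of gas and temperature):

```python
import math

R = 8.314        # J mol^-1 K^-1, molar gas constant
N_A = 6.022e23   # mol^-1, Avogadro's number
k = R / N_A      # J K^-1, Boltzmann constant

def ke_per_mole(T):
    """Average translational kinetic energy of one mole of ideal gas, (3/2)RT."""
    return 1.5 * R * T

def ke_per_molecule(T):
    """Average translational kinetic energy of one molecule, (3/2)kT."""
    return 1.5 * k * T

def u_rms(T, M):
    """Root-mean-square speed: combining pV = (1/3)mnU^2 with pV = nRT
    gives U = sqrt(3RT/M), with molar mass M in kg/mol."""
    return math.sqrt(3 * R * T / M)

# Nitrogen (N2, M = 0.028 kg/mol) at 300 K
print(ke_per_molecule(300))   # ≈ 6.21e-21 J
print(u_rms(300, 0.028))      # ≈ 517 m/s
```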
## Molecular Distribution of Speeds (Maxwell-Boltzmann Distribution)
The Maxwell Boltzmann Distribution is a plot of fraction of molecules in the gas sample vs. the speed of the gas molecules. The distribution is shown below followed by the salient features of the graph.
The graph shows that:
• The fraction of molecules having very low or very high speeds is very less.
• Most of the molecules have a speed somewhere in the middle, this is called the most probable speed. (μMP)
• The area under the curve between any two speeds gives the fraction of molecules in that speed range.
• The total area under the graph corresponds to the whole sample of molecules and is constant.
• There are two more molecular speeds defined for a sample: the average speed, μAVG = √(8RT/πM), and the root-mean-square speed, μRMS = √(3RT/M); the most probable speed is μMP = √(2RT/M).

IMP: Always remember to take the molar mass M in kg/mol in the above relations.

It's useful to remember the ratio μMP : μAVG : μRMS = 1 : 1.128 : 1.224 for a given gas at a given temperature.
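The three characteristic speeds and their ratio can be verified directly from the relations above (oxygen at 300 K is an arbitrary example; the ratio is independent of the gas and temperature):

```python
import math

R = 8.314  # J mol^-1 K^-1

def speeds(T, M):
    """Most probable, average, and rms speeds (m/s) for molar mass M in kg/mol."""
    v_mp  = math.sqrt(2 * R * T / M)            # most probable speed
    v_avg = math.sqrt(8 * R * T / (math.pi * M))  # average speed
    v_rms = math.sqrt(3 * R * T / M)            # root-mean-square speed
    return v_mp, v_avg, v_rms

v_mp, v_avg, v_rms = speeds(300, 0.032)  # O2 at 300 K
print(round(v_avg / v_mp, 3), round(v_rms / v_mp, 3))  # 1.128 1.225
```

(The rms ratio is √(3/2) ≈ 1.2247, which the text rounds down to 1.224.)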
https://gea.esac.esa.int/archive/documentation/GDR2/Data_processing/chap_cu3ast/sec_cu3ast_quality/ssec_cu3ast_quality_properties.html

# 3.5.2 Properties of the astrometric data
Author(s): Lennart Lindegren
The astrometric results in Gaia DR2 were not produced in a single large least-squares process, but were the end result of a long series of solutions using different versions of the input data and testing different calibration models and solution strategies. A complete astrometric solution consists of two parts, known as the primary solution and the secondary solution.
In the primary solution, which involves only a small fraction of the sources known as primary sources, the attitude and calibration parameters (and optionally the global parameters) are adjusted simultaneously with the astrometric parameters of the primary sources using an iterative algorithm. The reference frame is also adjusted using a subset of the primary sources identified as quasars.
In the secondary solutions the five astrometric parameters of every star are adjusted using fixed attitude, calibration, and global parameters from the preceding primary solution. The restriction on the number of primary sources comes mainly from practical considerations, as the primary solution is computationally and numerically demanding due to the large systems of equations that need to be solved. By contrast, the secondary solutions can be made one source at a time essentially by solving a system with only five unknowns (or six if pseudo-colour is also estimated). For consistency, the astrometric parameters of the primary sources are re-computed in the secondary solutions.
1. The primary solution consists of about 16 million primary sources for which all five astrometric parameters (position at the reference epoch J2015.5, parallaxes, and proper motion components) are provided, along with their standard uncertainties, correlation coefficients, and other statistics. The primary sources were selected based on the results of preliminary runs. The criteria for the selection were: (i) sources must have $G$, $G_{\rm BP}$, and $G_{\rm RP}$ magnitudes from the photometric processing; (ii) there should be a roughly equal number of sources with observations in each of the three window classes; (iii) for each window class there should be a roughly homogeneous coverage of the whole sky, and a good distribution in magnitude and colour; (iv) within the constraints set by the previous criteria, sources with high astrometric weight (bright, with small excess noise and a good number of observations) were preferentially selected. To this were added some 490 000 probable quasars for the reference frame alignment.
2. The secondary solution was generated using the final attitude, calibration, and global parameters from the primary solution, including a re-computation of the pseudo-colours for all sources using the final chromaticity calibration. Sources failing to meet the acceptance criteria for a five-parameter solution obtained a 2-parameter fall-back solution at this stage. This resulted in 1335 million sources with a five-parameter solution and 400 million with a fall-back solution, i.e. without parallax and proper motion. About 18 million sources were subsequently removed as duplicates, i.e. where the observations of the same physical source had been split between two or more different source identifiers. Duplicates were identified by positional coincidence, using a maximum separation of 0.4 arcsec.
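The 0.4 arcsec positional-coincidence criterion for duplicates can be sketched as a simple angular-separation test. This is an illustrative implementation, not the actual Gaia pipeline code, and the coordinates in the example are hypothetical:

```python
import math

def angular_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation (arcsec) between two sky positions given in degrees,
    using the haversine formula (well-behaved at small separations)."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra, ddec = ra2 - ra1, dec2 - dec1
    a = math.sin(ddec / 2) ** 2 + math.cos(dec1) * math.cos(dec2) * math.sin(dra / 2) ** 2
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600

def is_duplicate(ra1, dec1, ra2, dec2, max_sep=0.4):
    """Positional coincidence within 0.4 arcsec, as used for Gaia DR2 duplicates."""
    return angular_sep_arcsec(ra1, dec1, ra2, dec2) < max_sep

print(is_duplicate(266.405, -28.936, 266.4050001, -28.936))  # True
```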
Gaia DR2 finally gives five-parameter solutions for 1332 million sources, with formal uncertainties ranging from about 0.02 mas to 2 mas in parallax and twice that in annual proper motion. For the 385 million sources with fall-back solutions the positional uncertainty at J2015.5 is about 1 to 4 mas. Basic statistics are given in Table 3.5 for sources with full five-parameter solutions and in Table 3.6 for sources with 2-parameter fall-back solutions.
http://www.assignmenthelp.net/programming/MetaPost_Programming_Language

MetaPost Programming Language Help For Students
Introduction to MetaPost
MetaPost is a programming language much like Knuth's METAFONT except that it outputs vector graphics, either PostScript programs or SVG graphics, instead of bitmaps. Borrowed from METAFONT are the basic tools for creating and manipulating pictures. MetaPost is particularly well-suited to generating figures for technical documents where some aspects of a picture may be controlled by mathematical or geometrical constraints that are best expressed symbolically. In this manual, we'll assume a stand-alone command-line executable of the MetaPost compiler is used, which is usually called mpost. The syntax and program name itself are system-dependent; sometimes it is named mp. The executable is actually a small wrapper program around mplib, a library containing the MetaPost compiler.
MetaPost Example:
input macros;  % local macro file assumed by this example
verbatimtex
\documentclass[12pt]{article}
\usepackage[T1]{fontenc}
\begin{document}
etex
beginfig(1)
  pair A, B, C;                      % declare three points
  A:=(0,0); B:=(1cm,0); C:=(0,1cm);  % a right angle with 1 cm legs
  draw A--B--C;                      % draw the two line segments
endfig;
end
Running MetaPost

You can run MetaPost from the command line with "mpost example.mp", which compiles example.mp and writes each figure to a numbered output file such as example.1. On a Windows platform you can use, e.g., MiKTeX and the WinEdt shell.
https://arxiv.org/a/chen_j_5.html

# Joe P. Chen's articles on arXiv
[1]
Title: Fractal AC circuits and propagating waves on fractals
Comments: v2: 9 pages, 4 figures. Updated with recent developments
Subjects: Mathematical Physics (math-ph); Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Optics (physics.optics)
[2]
Title: Laplacian growth & sandpiles on the Sierpinski gasket: limit shape universality and exact solutions
Comments: 51 pages, 1 periodic table, beaucoup de dessins. v2: Small typos & formatting issues fixed. For the Python code used to simulate various cellular automata models on the Sierpinski gasket, see this https URL
Subjects: Mathematical Physics (math-ph); Statistical Mechanics (cond-mat.stat-mech); Combinatorics (math.CO); Probability (math.PR); Cellular Automata and Lattice Gases (nlin.CG)
[3]
Title: Regularized Laplacian determinants of self-similar fractals
Comments: 16 pages, 5 figures
Journal-ref: Lett Math Phys (2018) 108: 1563
Subjects: Spectral Theory (math.SP)
[4]
Title: From non-symmetric particle systems to non-linear PDEs on fractals
Comments: v2: 10 pages, 1 figure. To appear in the proceedings for the 2016 conference "Stochastic Partial Differential Equations & Related Fields" in honor of Michael R\"ockner's 60th birthday, Bielefeld
Subjects: Mathematical Physics (math-ph); Statistical Mechanics (cond-mat.stat-mech); Analysis of PDEs (math.AP); Probability (math.PR)
[5]
Title: Local ergodicity in the exclusion process on an infinite weighted graph
Authors: Joe P. Chen
Comments: v2: 36 pages, 5 figures. Minor typos corrected
Subjects: Probability (math.PR); Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph)
[6]
Title: Internal DLA on Sierpinski gasket graphs
Comments: v3: 24 pages, 2 figures. Proof of Lemma 3.5 + small typos corrected
Subjects: Probability (math.PR); Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph); Metric Geometry (math.MG)
[7]
Title: The moving particle lemma for the exclusion process on a weighted graph
Authors: Joe P. Chen
Comments: v4: 10 pages, 1 figure. Small typos corrected
Journal-ref: Electron. Commun. Probab. 22 (2017), no. 47, 1-13
Subjects: Probability (math.PR); Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph)
[8]
Title: Stabilization by Noise of a $\mathbb{C}^2$-Valued Coupled System
Comments: v4: 17 pages, 5 figures. Small typos corrected, refs added, section 1 revised. To appear in Stochastics and Dynamics
Journal-ref: Stochastics and Dynamics, Vol. 17, No. 6 (2017) 1750046
Subjects: Probability (math.PR); Mathematical Physics (math-ph); Dynamical Systems (math.DS)
[9]
Title: Power dissipation in fractal AC circuits
Comments: v2: 16 pages, 8 figures. See also the recent preprint arXiv:1701.08039
Journal-ref: J. Phys. A: Math. Theor. 50, 325205 (2017)
Subjects: Mathematical Physics (math-ph)
[10]
Title: Wave equation on one-dimensional fractals with spectral decimation and the complex dynamics of polynomials
Journal-ref: J. Fourier Anal. Appl. 23 (2017) 994-1027
Subjects: Mathematical Physics (math-ph); Functional Analysis (math.FA); Numerical Analysis (math.NA); Probability (math.PR); Quantum Physics (quant-ph)
[11]
Title: Singularly continuous spectrum of a self-similar Laplacian on the half-line
Comments: v3: 12 pages, 2 figures; to appear in the Journal of Mathematical Physics in May or June 2016
Journal-ref: J. Math. Phys. 57, 052104 (2016)
Subjects: Mathematical Physics (math-ph); Dynamical Systems (math.DS); Spectral Theory (math.SP)
[12]
Title: Spectral dimension and Bohr's formula for Schrödinger operators on unbounded fractal spaces
Comments: v2: 28 pages, 6 figures. referee comments included, typos corrected
Journal-ref: J. Phys. A: Math. Theor. 48 395203 (2015)
Subjects: Mathematical Physics (math-ph); Functional Analysis (math.FA); Metric Geometry (math.MG); Probability (math.PR); Spectral Theory (math.SP)
[13]
Title: Entropic repulsion of Gaussian free field on high-dimensional Sierpinski carpet graphs
Comments: v2: 35 pages, 3 figures. Accepted for publication in SPA
Journal-ref: Stochastic Processes and their Applications 125 (2015), pp. 4632-4673
Subjects: Probability (math.PR); Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph)
[14]
Title: Heat kernels on 2d Liouville quantum gravity: a numerical study
Subjects: Mathematical Physics (math-ph); High Energy Physics - Lattice (hep-lat); Probability (math.PR)
[15]
Title: Periodic billiard orbits of self-similar Sierpinski carpets
Journal-ref: J. Math. Anal. Appl. 416 (2014) 969-994
Subjects: Dynamical Systems (math.DS)
[16]
Title: Statistical mechanics of Bose gas in Sierpinski carpets
Authors: Joe P. Chen
Comments: v2: 37 pages, 5 figures, 1 table. Minor edits & added references. Submitted to Comm. Math. Phys
Subjects: Mathematical Physics (math-ph); Quantum Gases (cond-mat.quant-gas); Statistical Mechanics (cond-mat.stat-mech)
[17]
Title: Quantum Theory of Cavity-Assisted Sideband Cooling of Mechanical Motion
Comments: 5 pages, 2 figures
Journal-ref: Phys. Rev. Lett. 99, 093902 (2007)
Subjects: Mesoscale and Nanoscale Physics (cond-mat.mes-hall)
The arXiv author identifier for Joe P. Chen is http://arxiv.org/a/chen_j_5.
# Topological and magnetic phases of interacting electrons in the pyrochlore iridates
William Witczak-Krempa and Yong Baek Kim
Department of Physics, The University of Toronto, Toronto, Ontario M5S 1A7, Canada
School of Physics, Korea Institute for Advanced Study, Seoul 130-722, Korea
###### Abstract
We construct a model for interacting electrons with strong spin orbit coupling in the pyrochlore iridates. We establish the importance of the direct hopping process between the Ir atoms and use the relative strength of the direct and indirect hopping as a generic tuning parameter to study the correlation effects across the iridates family. We predict novel quantum phase transitions between conventional and/or topologically non-trivial phases. At weak coupling, we find topological insulator and metallic phases. As one increases the interaction strength, various magnetic orders emerge. The novel topological Weyl semi-metal phase is found to be realized in these different orders, one of them being the all-in/all-out pattern. Our findings establish the possible magnetic ground states for the iridates and suggest the generic presence of the Weyl semi-metal phase in correlated magnetic insulators on the pyrochlore lattice. We discuss the implications for existing and future experiments.
## I Introduction
Topological insulators (TIs) [RMP_TI; ti_rev_zhang; moore-hasan] have provided theorists and experimentalists alike with a new family of topologically non-trivial systems. In these materials, a sufficiently strong spin orbit coupling (SOC) leads to a peculiar band structure that cannot be adiabatically deformed to that of a flat band insulator without closing the bulk gap. This leads to robust boundary states that display momentum-spin locking. The materials in which these gapless helical surface states have been observed are weakly interacting semiconductors, for which the above theory was constructed. An inviting question, therefore, relates to the kinds of quantum ground states that would arise in the presence of interactions in these systems, or to interaction-driven TIs. For instance, several studies examined various kinds of fractionalized TIs [pesin; will; levin; lehur; maciejko; swingle; qi].
In this context, transition metal oxides with 5d transition metal elements may be ideal systems to search for TIs and new topological phases in the presence of interactions. In these systems, the strength of the interaction and that of the SOC are comparable, providing a playground for the interplay between the two effects. In particular, the pyrochlore iridates, $A_2$Ir$_2$O$_7$, have been suggested to host various topologically non-trivial states [pesin; wan; balents-kiss; will; fiete-rev; fiete-trig]. Here, $A$ is a Lanthanide or Yttrium, whose size affects the effective bandwidth of the 5d electrons of Ir via the Ir-O-Ir bond angle, thereby tuning the effective strength of the interaction. Experiments on these compounds reveal metal-insulator transitions upon variation of temperature or chemical [maeno] and external [fazel] pressure, as well as indications of magnetism [taira; musr].
In this work, we present a Hubbard-type model for the interacting electrons in the pyrochlore iridates and determine the ground state phase diagram using mean field and strong coupling methods. We find that it is important to include both the indirect hopping of Ir electrons through the oxygens and the direct hopping between Ir sites. This is because the 5d orbitals of Ir are spatially extended and the nature of the ground state is sensitive to the relative strength of these hopping amplitudes. In the weakly interacting limit, both TIs and (semi-)metallic states are realized depending on the relative strength of the different hopping amplitudes. This is in contrast to a previous work [pesin] where only the indirect hopping process was considered and only the TI phase was obtained in the large SOC limit (with the ideal cubic crystal field). The interactions between electrons lead to two different magnetically ordered ground states in different parameter regions. In particular, for intermediate interactions, the topological semi-metal [abrikosov; nielsen; volovik-book; wan; balents-kiss] (TSM) state with Weyl-like fermions appears in both kinds of AF phases. Our results suggest that the TSM state and the related Mott insulating state can have different magnetic ordering patterns depending on the choice of the $A$-site ion or upon application of hydrostatic pressure, leading to the possibility of novel quantum phase transitions in the iridates.
## II Model and approach
In the atomic limit, the oxygen octahedra surrounding the Ir ions create large cubic crystal fields that split the 5d orbitals into $t_{2g}$ and $e_g$ multiplets. The five 5d electrons of Ir occupy the $t_{2g}$ levels, leaving the high energy $e_g$ levels empty. The angular momentum operator projected into the $t_{2g}$ levels is effectively that of $\ell = 1$ with an extra negative sign. The on-site SOC leads to a further splitting into an effective pseudospin $j = 1/2$ doublet and a $j = 3/2$ quadruplet, the former lying higher in energy [SrIrO-prl; SrIrO-science]. For sufficiently large SOC, the half-filled $j = 1/2$ doublets form a low energy manifold as the fully occupied $j = 3/2$ levels are sufficiently far from the Fermi level.
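The splitting described above can be checked with a short numerical sketch (my own illustration, not part of the paper): diagonalizing $H_{SO} = -\lambda\,\mathbf{L}\cdot\mathbf{S}$ for an $\ell = 1$ triplet coupled to a spin-1/2, with the extra minus sign from the $t_{2g}$ projection, puts the $j = 3/2$ quadruplet below the $j = 1/2$ doublet.

```python
import numpy as np

# SOC within the t2g manifold: with the projected angular momentum acting as
# l_eff = 1 with an extra minus sign, H_SO = -lambda L.S splits the six states
# into a j = 3/2 quadruplet (lower) and a j = 1/2 doublet (higher).
# We verify the eigenvalues -lambda/2 (4-fold) and +lambda (2-fold).

lam = 1.0

# l = 1 angular momentum matrices in the basis (m = 1, 0, -1)
Lz = np.diag([1.0, 0.0, -1.0])
Lp = np.zeros((3, 3)); Lp[0, 1] = Lp[1, 2] = np.sqrt(2.0)   # raising operator
Lx, Ly = (Lp + Lp.T) / 2.0, (Lp - Lp.T) / (2.0j)

# spin-1/2 matrices
Sx = np.array([[0, 1], [1, 0]]) / 2.0
Sy = np.array([[0, -1j], [1j, 0]]) / 2.0
Sz = np.array([[1, 0], [0, -1]]) / 2.0

LdotS = sum(np.kron(L, S) for L, S in [(Lx, Sx), (Ly, Sy), (Lz, Sz)])
evals = np.round(np.linalg.eigvalsh(-lam * LdotS), 10)   # H_SO = -lambda L.S
```

For $\lambda = 1$ the spectrum is $-1/2$ (4-fold) and $+1$ (2-fold), i.e. a splitting of $3\lambda/2$ with the $j = 1/2$ doublet on top, as stated in the text.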
In going to a tight-binding description, we need to take into account the different orientations of the local oxygen octahedra at each of the 4 sites in the unit cell (see Fig. 1). Previous studies [pesin; bj-trig; fiete-trig] considered nearest neighbour Ir-Ir hopping mediated by the oxygens. In this work, we also include the direct hopping between the Ir atoms, which is expected to be significant due to the large spatial extent of Iridium's 5d orbitals. We consider only the $\sigma$- and $\pi$-overlaps between the $d$ orbitals, neglecting the usually smaller $\delta$-overlap. This leaves us with two direct hopping parameters: $t_\sigma$ and $t_\pi$. The resulting kinetic Hamiltonian reads
$$H_0 = \sum_{\langle Ri, R'i' \rangle, \alpha\alpha'} \left( T^{ii'}_{o,\,\alpha\alpha'} + T^{ii'}_{d,\,\alpha\alpha'} \right) d^{\dagger}_{Ri\alpha}\, d_{R'i'\alpha'}, \qquad (1)$$
where $R$ denotes the sites of the underlying Bravais FCC lattice of the pyrochlore lattice of Ir atoms, while $i$ labels the sites within the unit cell. The operator $d_{Ri\alpha}$ annihilates an electron in the pseudospin state $\alpha$ at site $Ri$. The two sets of matrices $T_o$ and $T_d$ correspond to the oxygen-mediated [pesin] and direct hopping, respectively.
We include interactions via an on-site Hubbard repulsion between Iridium's 5d electrons:
$$H = H_0 + H_U, \qquad (2)$$
$$H_U = U \sum_{Ri} n_{Ri\uparrow}\, n_{Ri\downarrow}, \qquad (3)$$
where $n_{Ri\alpha} = d^{\dagger}_{Ri\alpha} d_{Ri\alpha}$ is the density of electrons occupying the pseudospin state $\alpha$ at site $Ri$. As we are interested in the magnetic phases expected at finite $U$, we perform a Hartree-Fock mean-field decoupling of $H_U$ in the magnetic channel, where $\mathbf{S}_{Ri}$ is the pseudospin operator, whose expectation value will be determined self-consistently. We consider magnetic configurations preserving the unit cell, so that the four sublattice moments $\langle \mathbf{S}_i \rangle$, $i = 1, \dots, 4$, are the order parameters under consideration. These are directly proportional to the local magnetic moment carried by the 5d electrons. This follows from the fact that the projections of the spin and orbital angular momentum operators onto the $j = 1/2$ manifold within the $t_{2g}$ subspace are proportional to the pseudospin operator. This allows us to treat $\langle \mathbf{S}_i \rangle$ as the spontaneous local magnetic moment of the electrons.
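The self-consistency structure of a Hartree-Fock treatment like the one above can be illustrated on a toy model. The sketch below is not the paper's pseudospin-channel calculation: it assumes a 4-site Hubbard ring at half filling, an arbitrary $U = 2$, and a simple density-channel decoupling $n_\uparrow n_\downarrow \to n_\uparrow \langle n_\downarrow \rangle + \langle n_\uparrow \rangle n_\downarrow - \langle n_\uparrow \rangle \langle n_\downarrow \rangle$, purely to show the fixed-point iteration of the mean fields.

```python
import numpy as np

# Toy Hartree-Fock loop: 4-site Hubbard ring, half filling, density-channel
# decoupling.  Each spin species moves in a mean field set by the opposite
# spin's site densities, which are iterated to self-consistency.

L, t, U = 4, 1.0, 2.0
n_elec = {"up": 2, "dn": 2}                      # half filling

H0 = np.zeros((L, L))                            # nearest-neighbour hopping
for i in range(L):
    H0[i, (i + 1) % L] = H0[(i + 1) % L, i] = -t

def occupations(H, n_occ):
    """Ground-state site densities from the lowest n_occ orbitals."""
    _, V = np.linalg.eigh(H)
    return (np.abs(V[:, :n_occ]) ** 2).sum(axis=1)

# initial guess with a small staggered (antiferromagnetic) bias
stagger = np.array([1.0, -1.0, 1.0, -1.0])
n = {"up": 0.5 + 0.05 * stagger, "dn": 0.5 - 0.05 * stagger}

for _ in range(300):                             # damped fixed-point iteration
    new = {s: occupations(H0 + U * np.diag(n[sb]), n_elec[s])
           for s, sb in (("up", "dn"), ("dn", "up"))}
    if max(np.abs(new[s] - n[s]).max() for s in n) < 1e-10:
        n = new
        break
    n = {s: 0.5 * (n[s] + new[s]) for s in n}

total = sum(n[s].sum() for s in n)               # conserved electron number
```

The same loop structure carries over to the paper's problem, with the 4-site ring replaced by the pyrochlore Bloch Hamiltonian and the densities by the four sublattice pseudospin moments.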
## III Phase diagram
### III.1 Metal and topological insulator at U=0
We first examine the model at $U = 0$. Fig. 1 shows the resulting phase diagram in terms of $t_\sigma$ and $t_\pi$ (we set the oxygen-mediated hopping amplitude to unity throughout). Notice that both insulating and metallic phases exist. By virtue of the inversion symmetry of the crystal, we use the Fu-Kane formula [z2_fu-kane] for the $\mathbb{Z}_2$ invariants in terms of the parity eigenvalues of the occupied states at the time reversal invariant momenta (TRIMs) to determine the topological class of each insulating phase. We find that both insulating phases are TIs with non-trivial $\mathbb{Z}_2$ indices. The TI phase adiabatically connected to $t_\sigma = t_\pi = 0$ corresponds to the large spin orbit limit of Ref. [pesin] and is robust to the inclusion of weak direct hopping. As one tunes the direct hoppings, a metallic phase eventually appears by means of a gap closing at the $\Gamma$ point. In the metal, the degeneracies at $\Gamma$ become 2-4-2 compared to 4-2-2 in the TI (with time-reversal and inversion symmetries all bands are doubly degenerate). A similar situation occurs in Refs. [bj-trig; fiete-trig], where a trigonal distortion of the oxygen octahedra drives the transition, not direct hopping as is the case here. The metallic phase is strictly speaking a semi-metal characterized by a point Fermi surface. Finite pockets can be generated by including very weak NNN hopping, as we have explicitly verified. Although we do not consider trigonal distortions here, the direct hoppings alone can lead to qualitatively similar effects, e.g. the metallic phase resulting from the change in degeneracies at the $\Gamma$ point.
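The Fu-Kane parity bookkeeping used above is easy to mechanize: with inversion symmetry, $(-1)^{\nu_0}$ is the product over the 8 TRIMs of $\delta_i$, the product of parity eigenvalues of the occupied Kramers pairs. The sketch below is generic; the parity data are invented for illustration and are not the paper's band structure.

```python
import numpy as np

# Fu-Kane strong Z2 index from parity eigenvalues at the 8 TRIMs.
# parities_at_trims: list of 8 lists of +-1, one entry per occupied
# Kramers pair at that TRIM.

def z2_strong_index(parities_at_trims):
    assert len(parities_at_trims) == 8
    deltas = [int(np.prod(p)) for p in parities_at_trims]   # delta_i at each TRIM
    return (1 - int(np.prod(deltas))) // 2                  # 0 if +1, 1 if -1

# toy example: a single "band inversion" at one TRIM flips one parity
trivial  = [[+1, -1]] * 8
inverted = [[+1, -1]] * 7 + [[-1, -1]]
```

Here `trivial` gives index 0 and `inverted` gives index 1: flipping a single parity eigenvalue at one TRIM changes the sign of the overall product, which is exactly the band-inversion mechanism behind the TI phases in the text.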
### III.2 Magnetic and topological phases at U>0
We now turn to the $U > 0$ case. For convenience, we restrict our attention to a one-dimensional cut in the $(t_\sigma, t_\pi)$ space, as shown in Fig. 1. This is physically motivated since we expect $t_\sigma$ and $t_\pi$ to have opposite signs, with the $\sigma$-overlap being the strongest. Moreover, the cut is representative as it intersects all the phases. In obtaining the finite $U$ diagram, we performed an unconstrained analysis sampling over the space of all possible magnetic configurations preserving the unit cell.
The resulting ground-state phase diagram appears in Fig. 2. First, we note that the TI is more resilient to the magnetic instability than the metal, as expected due to the presence of the bulk gap in the former. Second, the magnetic phase transition resulting from increasing $U$ in the metal (TI) is second (first) order. Also, the magnetic order emerging from the TIs differs from the one found upon increasing $U$ in the metal. In the latter case, we find an all-in/out configuration, while in the former the ground state is 3-fold degenerate (modulo the trivial time-reversal degeneracy $\langle \mathbf{S}_i \rangle \to -\langle \mathbf{S}_i \rangle$): all 3 states result from the all-in/out state by performing rotations on the moments in the unit cell. These rotations occur within either one of the planes bisecting the 3 triangles meeting at each corner of the tetrahedron. The order emergent in both TI states is the same. In section IV, we discuss how the different magnetic orders and the position of the transitions are actually connected to the corresponding ordering in the spin model obtained at large $U$: as $t_\sigma$ is tuned, the induced Dzyaloshinskii-Moriya interaction alternates between the only two symmetry allowed possibilities on the pyrochlore lattice, leading to different ordering.
### III.3 Topological Semi-metal
By examining the spectra of the ordered phases, we discover that the so-called topological semi-metal (TSM) is realized in a range of $t_\sigma$ and for a finite window of $U$. (We have included very weak NNN hopping to obtain the TSM in the all-in/out phase, for without it the cones are tilted such that there are lines at the Fermi level.) This semi-metallic phase has a Fermi "surface" composed of points, each with a linearly dispersive spectrum of Weyl or two-component fermions, and may be considered as a 3D version of the Dirac points of graphene. The Hamiltonian near one such Weyl point takes the form
$$H = \mathbf{v}_0 \cdot \mathbf{q} + \sum_{i=1}^{3} (\mathbf{v}_i \cdot \mathbf{q})\, \sigma_i, \qquad (4)$$
where $\mathbf{q} = \mathbf{k} - \mathbf{k}_0$ is the deviation from the Weyl point. The Pauli matrices $\sigma_i$ represent the two bands involved in the touching, not the (pseudo)spin. One can assign a chiral "charge" to these fermions via the triple product of the 3 velocities: $c = \operatorname{sgn}\left[\mathbf{v}_1 \cdot (\mathbf{v}_2 \times \mathbf{v}_3)\right]$. The massless nature of the two-component Weyl fermions is robust against local perturbations, which is not the case in 2D. As explained in Ref. [wan], the only way to introduce a gap is to make two Weyl fermions with opposite chirality meet at some point in the BZ. For this reason they are topological objects (see also the discussion below regarding the surface states). Further details relating to the TSM can be found in Refs. [abrikosov; nielsen; volovik-book; wan; balents-kiss; ran; burkov].
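The chirality assignment via the triple product can be sketched in a few lines. The velocity vectors below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

# Chirality of a Weyl node H = v0.q + sum_i (v_i.q) sigma_i, from the sign
# of the triple product v1 . (v2 x v3).

def chirality(v1, v2, v3):
    return int(np.sign(np.dot(v1, np.cross(v2, v3))))

# isotropic right-handed cone, e.g. H = q . sigma
c_plus = chirality([1, 0, 0], [0, 1, 0], [0, 0, 1])
# swapping two velocity axes reverses the handedness
c_minus = chirality([0, 1, 0], [1, 0, 0], [0, 0, 1])
```

Inversion maps a Weyl point to its partner with the opposite sign of this triple product, which is why the 8 points in the text come in 4 inversion-related pairs of opposite chirality.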
The TSM appears for both AF orders. In both cases we find a total of 8 Weyl points coming necessarily in 4 inversion-symmetry related pairs. The location and migration of these Weyl points depends on the magnetic order. Let us first examine the TSM phase present in the all-in/out state. In this case, the 8 Weyl points are born out of the quadratic touching at the $\Gamma$ point as the local moments spontaneously and continuously acquire a finite value with increasing $U$. Each pair of Weyl points lies on one of the four high symmetry lines joining $\Gamma$ to the four $L$ points, as can be seen in Fig. 3. For this reason we only get 8 touchings, in contrast to Ref. [wan], where 24 Weyl points are obtained. In their case they live off the high symmetry lines so that each point is tripled by the 3-fold rotational symmetries about these lines. Weyl points of opposite chirality annihilate at the 4 $L$ points as $U$ is increased. As they annihilate and create a gap, the parities of the highest occupied states at these TRIMs change sign.
Let us now consider the TSM arising from the TI, where we again have 8 Weyl points. The major difference is that they do not occur along high symmetry lines, as can be seen in Fig. 3. We do not get 24 Weyl points because the magnetic order breaks the 3-fold rotational symmetries, which are preserved by the all-in/out state. We have explicitly located the Weyl points by looking at both the spectrum and the density of states, which shows the characteristic $E^2$ scaling. The Weyl points do not annihilate at TRIMs, in contrast to the non-collinear TSM. As a result there is no parity flip associated with the termination of the TSM phase when, upon increasing $U$, the system becomes insulating.
Surface states: The non-trivial band topology of the TSM (each Weyl point is a monopole of the U(1) Berry connection) leads to chiral surface states on certain surfaces, in analogy with the TI. In contrast with the latter, the surface states of the former do not form closed Fermi surfaces, but rather open Fermi arcs. As argued in Ref. [wan], the Fermi arcs join the projections of bulk Weyl points of opposite chirality. As bulk Weyl points forming a pair are made to move towards each other by increasing $U$, the corresponding Fermi arc shrinks, collapses to a point and disappears.
In the latter TSM, which we use to illustrate the Fermi arcs, there are no surface states along surfaces perpendicular to the (100), (010) or (001) directions. For these surfaces, the projection process onto the 2D BZ maps 3D Weyl points of opposite chirality onto the same 2D momentum. This leads to the absence of gapless surface states emanating from the 2D momentum in question. For a surface perpendicular to the (110) direction, however, the projection is injective and Fermi arcs exist, as we illustrate in Fig. 4.
## IV Strong coupling expansion
In this section, we discuss the large $U$ limit of our Hubbard Hamiltonian Eq. (2). We show how the effective spin-1/2 model obtained in that limit sheds light on the orders found in the mean field calculation as well as on the location of the phase transitions. In taking the limit where $U$ is much larger than all hopping amplitudes, we can use second order perturbation theory to obtain the low energy spin Hamiltonian:
$$H' = \sum_{ij} \left[ J\, \mathbf{S}_i \cdot \mathbf{S}_j + \mathbf{D}_{ij} \cdot (\mathbf{S}_i \times \mathbf{S}_j) + S^a_i\, \Gamma^{ab}_{ij}\, S^b_j \right] \qquad (5)$$
where the terms are, in order: the AF Heisenberg coupling, the Dzyaloshinskii-Moriya (DM) interaction and the anisotropic exchange. These correspond to the trace, antisymmetric and symmetric-traceless parts of the spin-spin interaction matrix, respectively. Let us focus on the bond between sites 1 and 2 (see Fig. 5), as the spin interactions for all other bonds can be determined using the crystal symmetries. We express the hopping Hamiltonian between these two sites as
$$H_t = -c^{\dagger}_{1\alpha} h_{\alpha\beta}\, c_{2\beta} - c^{\dagger}_{2\alpha} h^{\dagger}_{\alpha\beta}\, c_{1\beta} \qquad (6)$$
where $h$ is a 2 by 2 complex matrix. Time-reversal symmetry restricts the matrix elements as follows:
$$h = t\, \sigma^0 + i\, \mathbf{v} \cdot \boldsymbol{\sigma} \qquad (7)$$
where $t$ and $\mathbf{v}$ are real, and $\sigma^0$ is the identity matrix. We note that in order to derive the spin Hamiltonian, Eq. (5), we want to use the same quantization axes for both sites, i.e. we want the spin operators to be defined in the same coordinate system.
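The time-reversal constraint behind Eq. (7) can be verified numerically: for real $t$ and $\mathbf{v}$, the matrix $h = t\sigma^0 + i\mathbf{v}\cdot\boldsymbol{\sigma}$ satisfies $\sigma_y h^{*} \sigma_y = h$, while a generic deformation does not. The numerical values below are arbitrary.

```python
import numpy as np

# Check sigma_y h* sigma_y = h for h = t*sigma_0 + i v.sigma with t, v real.

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def tr_partner(h):
    """Time-reversed hopping matrix (sigma_y is its own inverse)."""
    return sy @ h.conj() @ sy

t, v = 0.7, np.array([0.2, -0.5, 0.3])
h = t * s0 + 1j * (v[0] * sx + v[1] * sy + v[2] * sz)

ok = np.allclose(tr_partner(h), h)
# adding e.g. a real v.sigma term breaks the allowed form
bad = np.allclose(tr_partner(h + 0.1 * sz), h + 0.1 * sz)
```

This is why a time-reversal-invariant nearest-neighbour hopping is fully specified by the four real numbers $t$ and $\mathbf{v}$ used in the superexchange formulas that follow.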
Given the hopping matrix in the form Eq. (7), it can be shown quite simply that the Heisenberg, DM and anisotropic terms read
$$\frac{U}{4}\, J = t^2 - \frac{v^2}{3} \qquad (8)$$
$$\frac{U}{4}\, \mathbf{D} = 2\, t\, \mathbf{v} \qquad (9)$$
$$\frac{U}{4}\, \Gamma^{ab} = 2\left( v^a v^b - \delta^{ab}\, \frac{v^2}{3} \right) \qquad (10)$$
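Equations (8)-(10) are exactly the trace / antisymmetric / symmetric-traceless decomposition of the spin-spin interaction matrix mentioned above, which can be checked numerically. The values of $t$, $\mathbf{v}$ and $U$ below are arbitrary illustrations:

```python
import numpy as np

# Assemble M_ab (so that H_12 = S1_a M_ab S2_b) from the Heisenberg, DM and
# anisotropic pieces of Eqs. (8)-(10), then check that the trace /
# antisymmetric / symmetric-traceless decomposition recovers each piece.

U, t = 4.0, 0.5
v = np.array([0.0, 0.3, -0.3])        # v along (0, 1, -1) for this bond

pref = 4.0 / U
J     = pref * (t**2 - (v @ v) / 3.0)                               # Eq. (8)
D     = pref * 2.0 * t * v                                          # Eq. (9)
Gamma = pref * 2.0 * (np.outer(v, v) - np.eye(3) * (v @ v) / 3.0)   # Eq. (10)

# Levi-Civita tensor: D.(S1 x S2) = eps_{cab} D_c S1_a S2_b
eps = np.zeros((3, 3, 3))
for c, a, b in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[c, a, b], eps[c, b, a] = 1.0, -1.0
A = np.einsum('cab,c->ab', eps, D)    # antisymmetric (DM) part

M = J * np.eye(3) + A + Gamma

# recover the three parts from M
J_back = np.trace(M) / 3.0
D_back = 0.5 * np.einsum('cab,ab->c', eps, M)
G_back = 0.5 * (M + M.T) - J_back * np.eye(3)
```

The round trip (build $M$, decompose it) returns the original $J$, $\mathbf{D}$ and $\Gamma$, and $\Gamma$ comes out traceless by construction, matching the identifications stated after Eq. (5).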
If we turn to our microscopic hopping Hamiltonian, for the bond between sites 1 and 2 we get
$$t = a + b\, t_\sigma \qquad (11)$$
$$\mathbf{v} = v_y\, (0, 1, -1) \quad \text{with} \quad v_y = a' + b'\, t_\sigma \qquad (12)$$
where, as above, the oxygen-mediated hopping has been set to unity and $t_\pi$ has been fixed along the cut. The coefficients are positive rational numbers:
$$a = 130/243 \approx 0.53, \qquad b = 785/2916 \approx 0.27 \qquad (13)$$
$$a' = 28/243 \approx 0.12, \qquad b' = 125/729 \approx 0.17 \qquad (14)$$
We note that the $\mathbf{v}$ vector, hence $\mathbf{D}$, is parallel to the opposite bond; see Fig. 5. This is a generic property of the pyrochlore lattice: as a consequence of crystal symmetry, a DM vector for any given bond must be parallel to its opposite bond (in the sense that the 4 sites form a tetrahedron) [mc]. Moreover, if we know the DM vector for a single bond, crystal symmetries determine the DM vectors for all other bonds in the lattice. Hence, there are only two possible sets of DM vectors, called "direct" and "indirect". They are determined by the sign of the DM vector for the bond between sites 1 and 2. (We could have picked another bond as the representative of the whole set.) The indirect (direct) type is defined as having the $\mathbf{D}$ vector for the bond between sites 1 and 2 point along $+(0,1,-1)$ ($-(0,1,-1)$). See Fig. 5 for the configuration of vectors corresponding to the indirect DM interaction.
The nearest neighbour Heisenberg model together with a DM term on the pyrochlore lattice was studied by classical Monte Carlo and mean field methods [mc]. First, the Monte Carlo study predicted a $\mathbf{q} = 0$ ordering, justifying the Ansatz used in the main text. Second, it was found that different magnetic orders arise depending on whether the DM interaction is of direct or indirect type. For the direct type, the configuration was found to be unique (up to time-reversal): the all-in/out order mentioned above. For the indirect type, on the other hand, a continuous manifold of degenerate orders was found, containing both coplanar and non-coplanar configurations.
For the bond between sites 1 and 2, we can extract from our microscopic Hamiltonian the value of the $\mathbf{D}$ vector:
$$\frac{U}{4}\, \mathbf{D} = 2\, t\, v_y\, (0, 1, -1). \qquad (15)$$
Hence, if $t\, v_y > 0$ we have an indirect exchange, otherwise it is direct. It is easy to see that the $\mathbf{D}$ vector changes direction when $v_y = 0$ and when $t = 0$, which correspond to $t_\sigma = -a'/b' \approx -0.67$ and $t_\sigma = -a/b \approx -1.99$, respectively. For $t_\sigma$ between these values, the DM interaction is of direct type, otherwise it is indirect. The behaviour of the DM interaction as a function of the direct hopping is shown in Fig 5. We note that the first value ($-0.67$) is almost equal to the value at which the ground state goes from an insulator to a (semi)metal. The magnetic orders we find for $t_\sigma > -0.67$ belong to the continuous manifold corresponding to the indirect DM term, while the order is all-in/out when $t_\sigma < -0.67$, not too negative. Hence, the magnetic orders we get from our mean field calculation match those obtained in the strong coupling limit. The types of magnetic orders at intermediate $U$ are found to be related to the type of DM interaction obtained in the large $U$ spin model.
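The sign-change values quoted above follow from Eqs. (11)-(14); a few lines of exact rational arithmetic reproduce them:

```python
from fractions import Fraction

# Zeros of t = a + b*t_sigma and v_y = a' + b'*t_sigma, i.e. the two values
# of t_sigma where the DM vector D ~ t*v_y*(0,1,-1) changes sign.

a, b   = Fraction(130, 243), Fraction(785, 2916)   # Eq. (13)
ap, bp = Fraction(28, 243), Fraction(125, 729)     # Eq. (14)

vy_zero = float(-ap / bp)   # v_y vanishes here
t_zero  = float(-a / b)     # t vanishes here
```

This gives $t_\sigma \approx -0.672$ (from $v_y = 0$) and $t_\sigma \approx -1.987$ (from $t = 0$), the endpoints of the direct-DM window discussed in the text.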
We further note that the DM interaction becomes of indirect type again for $t_\sigma < -1.99$, which is sufficiently close to the second transition in the ground state, from the (semi)metal to the TI. For $t_\sigma$ below this point, we get again the magnetic orders expected for an indirect DM interaction, again consistent with the large $U$ limit. The bigger discrepancy between the point at which the $\mathbf{D}$ vector changes sign and the value of $t_\sigma$ at which we observe a different ordering is probably due to the fact that the anisotropic exchange increases in importance as $|t_\sigma|$ is increased, while it is smaller than the DM interaction near the first transition in the vicinity of $t_\sigma \approx -0.67$. Hence, in that regime, we do not expect as good an agreement with a spin model neglecting the anisotropic exchange.
## V Discussion
We have constructed a minimal (but sufficiently realistic) model to describe novel quantum ground states that may arise in the pyrochlore iridates. While not appreciated in previous works, it is shown that the inclusion of both indirect and direct hopping processes of the 5d electrons of Ir is important in describing different magnetically ordered states in the presence of interactions and their parent non-interacting ground states. A portion of our phase diagram is broadly consistent with a recent ab initio calculation [wan], where upon increasing $U$, one encounters a metal, a topological semi-metal in the all-in/out magnetic configuration and finally a magnetic insulator. Since different choices of $A$-site ions in $A_2$Ir$_2$O$_7$ lead to changes in both hopping amplitudes, our results suggest that different magnetic and topological ground states, such as a topological insulator, the all-in/out and related AF states and various kinds of topological semi-metals, may arise in a variety of pyrochlore iridates. High pressure experiments on these compounds may reveal the intimate connection between the magnetic order in the stronger correlation regime and the TI/metal in the weak correlation limit, as theoretically explored in this work. For instance, recent transport measurements under high pressure [fazel] on Eu$_2$Ir$_2$O$_7$ indicate a continuous transition from an insulating ground state to a metallic one, mimicking chemical pressure [maeno]. This could be connected to our continuous TSM-metal transition. Also, as the existence of the TSM depends crucially on the magnetic order, it would be desirable to examine the effect of the magnetic fluctuations near the (semi-)metal-TSM transition on thermodynamic and transport properties.
## Acknowledgements
We are grateful to S. Bhattacharjee, G. Chen, A. Go, S. R. Julian, Y. J. Kim, D. E. MacLaughlin, S. Nakatsuji, D. Podolsky, J. Rau, T. Senthil, F. Tafti, H. Takagi, A. Vishwanath and B. J. Yang for useful discussions. WWK acknowledges the hospitality of the Korea Institute for Advanced Study and MIT where parts of the research were done. This work was supported by NSERC, FQRNT, the CRC program, and CIFAR.
## lgbasallote asked (4 years ago): An OpenStudy user's speed is 40 medals per day and his acceleration is 2 medals/day^2. If it takes 4,500 medals to reach the Legend rank, how many days will it take for him to reach the rank?
1. anonymous: lol in first
2. anonymous: wrong group
3. lgbasallote: isnt it math?
4. anonymous: What is his/her initial medalage? does he/she start from 0?
5. anonymous: physics
6. lgbasallote: yup 0
7. lgbasallote: but isnt this an ordinary math problem too?
8. lgbasallote: i mean the basic concepts still come in play
9. anonymous: its both, physics and math. Arguably physics is nothing without math though :)
10. lgbasallote: lol can we just teach me how to solve this? :p
11. anonymous: well first of all, his acceleration cant be 2 medals/day. a = v / t; since you postulated that v = 40 per day, a = 40 per day, because a = delta V / delta t
12. lgbasallote: but my book says 50 days o.O
13. anonymous: I would consider it an initial velocity. Its like saying a car is going at 40 ft/s, and accelerating at 2 ft/s^2.
14. lgbasallote: yeah that's what i think too
15. anonymous: To solve the problem use the formula:$d = x_0+v_0t+\frac{1}{2}at^2$. Set d to 4500, initial position to 0, initial velocity to 40, and acceleration to 2. Solve for t.
16. lgbasallote: so 4500 = 40t + t^2?
17. anonymous: yep
18. lgbasallote: hmm yep i think that gives 50. thanks
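The thread's answer can be checked directly; below is a minimal sketch of the suggested kinematics formula with the stated numbers ($x_0 = 0$, $v_0 = 40$, $a = 2$, $d = 4500$):

```python
import math

# Solve d = v0*t + (1/2)*a*t^2 for the positive root,
# i.e. 4500 = 40 t + t^2, via the quadratic formula.

v0, a, d = 40.0, 2.0, 4500.0
t = (-v0 + math.sqrt(v0**2 + 2 * a * d)) / a
```

The discriminant is $40^2 + 2 \cdot 2 \cdot 4500 = 19600 = 140^2$, so $t = (140 - 40)/2 = 50$ days, matching the book's answer quoted in the thread.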
https://gmatclub.com/forum/calling-all-cbs-2011-applicants-94306-140.html | Check GMAT Club Decision Tracker for the Latest School Decision Releases https://gmatclub.com/AppTrack
GMAT Club
It is currently 30 Mar 2017, 17:55
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Calling All CBS 2011 Applicants!!
Re: Calling All CBS 2011 Applicants!!

lonewolf (Manager, Tokyo, Japan), 21 Jun 2010, 18:09:
brainhurt wrote:
The directions in the document provided by CBS to recommenders state:
This seems pretty specific, so I'd imagine it's best to follow the directions and just answer the questions.
Hi brainhurt. Thanks a lot for the info.
I think I missed that part the first time around. I will go back and read it thoroughly.
ROLL CALL PG UPDATED
brainhurt (Current Student, New York), 22 Jun 2010, 04:16:
lonewolf wrote:
Hi brainhurt. Thanks a lot for the info.
I think I miss that part the first time around. I will go back and read it thoroughly.
ROLL CALL PG UPDATED
No problem. That's from the document that gets sent directly to your recommenders, so you might not be able to find that info (one of my recommenders sent me the document before he submitted it).
_________________
Intern
Joined: 02 Jan 2010
Posts: 19
Followers: 0
Kudos [?]: 0 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
22 Jun 2010, 09:51
can someone confirm that this is the accurate list of recommendation questions?
1. What is your relationship to, and how long have you known the applicant? Is this person still employed by your organization? If not, when did he/she depart?
2. Please provide a short list of adjectives describing the applicant's strengths.
3. Please compare the applicant's performance to that of his/her peers. Does the applicant have the potential to become a senior manager?
4. How effective are the applicant's interpersonal skills in working with peers, supervisors and subordinates?
5. How does the applicant accept constructive criticism?
6. Comment on your observations of the applicant's ethical behavior.
7. What do you think motivates the candidate's application to Columbia Business School?
8. In what ways could the applicant improve professionally? If you could change one thing about the applicant, what would it be?
9. Are there any other matters which you feel we should know about the applicant?
Thanks
Manager
Joined: 01 Jun 2010
Posts: 107
Concentration: Technology, Entrepreneurship
GMAT 1: 740 Q49 V41
WE: Information Technology (Mutual Funds and Brokerage)
Followers: 2
Kudos [?]: 22 [0], given: 8
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
22 Jun 2010, 10:18
Hi venkatag,
Yes these are the questions. You could also check the CBS website. Its right there!
Intern
Joined: 02 Jan 2010
Posts: 19
Followers: 0
Kudos [?]: 0 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
22 Jun 2010, 11:20
thanks randomwalk!
Current Student
Affiliations: CBS '13
Joined: 08 May 2010
Posts: 291
Followers: 5
Kudos [?]: 33 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
22 Jun 2010, 11:21
anyone got an invite to interview yet????
Intern
Joined: 02 Feb 2010
Posts: 44
Followers: 0
Kudos [?]: 2 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 07:10
anyone got an invite to interview yet????
With the exception of Hazay, I think that three weeks is quite early to hear back.
If I had to bet I would say it will take 5 weeks to see the first interview invites, given 1 june as under review date I think it will take another 2 weeks to start rolling out invitations (ie the week starting on July 5th)...
Finger crossed
_________________
Frangar non flectar.
Current Student
Affiliations: CBS '13
Joined: 08 May 2010
Posts: 291
Followers: 5
Kudos [?]: 33 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 07:24
Marescotti wrote:
anyone got an invite to interview yet????
With the exception of Hazay, I think that three weeks is quite early to hear back.
If I had to bet I would say it will take 5 weeks to see the first interview invites, given 1 june as under review date I think it will take another 2 weeks to start rolling out invitations (ie the week starting on July 5th)...
Finger crossed
Why is Hazay an exception? just curious. I already feel the waiting has crept me out!
Intern
Joined: 02 Feb 2010
Posts: 44
Followers: 0
Kudos [?]: 2 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 07:26
I was just kidding, Hazay said that he will have a very early feedback given that his profile is a no-brainer...
I know waiting game is quite hard, try not to think about it too much...
_________________
Frangar non flectar.
Current Student
Joined: 12 Jun 2009
Posts: 1846
Location: United States (NC)
Concentration: Strategy, Finance
Schools: UNC (Kenan-Flagler) - Class of 2013
GMAT 1: 720 Q49 V39
WE: Programming (Computer Software)
Followers: 26
Kudos [?]: 252 [0], given: 52
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 07:31
ughhh i am so pissed. My rec. source is taking forever to start the questions! i got everything except his rec - he is still in "notified" status yet he told me he started already. I am kinda sick of walking to his office again to remind him... been almost 3 weeks now...
_________________
Current Student
Affiliations: CBS '13
Joined: 08 May 2010
Posts: 291
Followers: 5
Kudos [?]: 33 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 07:46
Marescotti wrote:
I was just kidding, Hazay said that he will have a very early feedback given that his profile is a no-brainer...
I know waiting game is quite hard, try not to think about it too much...
ha, hopefully the whole process can start moving towards the end of the month.
Manager
Joined: 17 Mar 2010
Posts: 88
Followers: 1
Kudos [?]: 12 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 08:13
Marescotti wrote:
If I had to bet I would say it will take 5 weeks to see the first interview invites, given 1 june as under review date I think it will take another 2 weeks to start rolling out invitations (ie the week starting on July 5th)...
Week starting on July 5th ? Sounds good to me
The first week after submission seemed long but once you start to deal with other applications (Wharton published terrific new questions and Harvard made a simple but helping twist to their usual essays), you dont think about the waiting game too much.
Manager
Joined: 01 Mar 2010
Posts: 175
Followers: 1
Kudos [?]: 19 [0], given: 0
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 08:49
Maybe they say they start reviewing them upon receipt, but they just print out and wait until August? Adcom are human too - they gotta take some break.
Seeing guys who've already submitted, I guess agony actually starts after submission.
I wonder how advantageous ED is. Looking at last year's thread, the percentage was pretty good, but the profiles of accepted ones are high GMAT high GPA - no funky exception. I've seen unusual profiles in Booth/Kellogg/Wharton but not so in CBS.
Manager
Joined: 06 Jan 2010
Posts: 72
Followers: 1
Kudos [?]: 3 [0], given: 3
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 09:05
Can someone help me on the below:
- How advantageous ED is? Will applying in ED for Sep'11 benifit the applicant?
- I understand that the Deadline for Jan'11 intake is Oct 6th - But considering that its a rolling admission, is it late to apply by June end.
I will be able to build a good application by June end/early July, but am not able to decide between ED Vs Regular. Need your inputs.
Current Student
Affiliations: CBS '13
Joined: 08 May 2010
Posts: 291
Followers: 5
Kudos [?]: 33 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 10:29
chaitu69 wrote:
Can someone help me on the below:
- How advantageous ED is? Will applying in ED for Sep'11 benifit the applicant?
- I understand that the Deadline for Jan'11 intake is Oct 6th - But considering that its a rolling admission, is it late to apply by June end.
I will be able to build a good application by June end/early July, but am not able to decide between ED Vs Regular. Need your inputs.
I think ED gives you some advantage. I just don't think anyone can quantify it. It boils down to the question whether you really want to go to CBS.
Manager
Joined: 01 Mar 2010
Posts: 175
Followers: 1
Kudos [?]: 19 [0], given: 0
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 11:27
Yes, it should, but since ED is binding, your committment may play bigger than your app's perfection.
Is anybody active in BW forum? Somebody must've received something.
Current Student
Affiliations: CBS '13
Joined: 08 May 2010
Posts: 291
Followers: 5
Kudos [?]: 33 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
23 Jun 2010, 12:28
chostein wrote:
Yes, it should, but since ED is binding, your committment may play bigger than your app's perfection.
Is anybody active in BW forum? Somebody must've received something.
you mean an interview invite?
Intern
Joined: 02 Feb 2010
Posts: 44
Followers: 0
Kudos [?]: 2 [0], given: 1
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
24 Jun 2010, 03:13
chaitu69 wrote:
Can someone help me on the below:
- How advantageous ED is? Will applying in ED for Sep'11 benifit the applicant?
- I understand that the Deadline for Jan'11 intake is Oct 6th - But considering that its a rolling admission, is it late to apply by June end.
I will be able to build a good application by June end/early July, but am not able to decide between ED Vs Regular. Need your inputs.
I agree you cannot quantify ED advantage, but it is surely way easier to apply succesfully for ED rather than RD, I think that the commitment required and non refundable deposit is quite a barrier for lots of people.
As per deadlines, I think that there are two aspects to be considered.
First of all, movement on J-term is not comparable to ED/RD. J-Term cuts out lots of people and it is actually designed mainly for those who will get back to their former employment. So applying by the end of June should not put you in a doomed position, which is what happened pretty much this year for all the people who applied after Jan 6. I think that the real point with J-term is whether you already have an employment offer after b-school or not.
The other aspect is that they are not giving any more opening dates indications, which tended to concentrate all the submissions in one day. In the last RD they said that they would have started reviewing applications upon Jan 6, so almost everybody applied on Jan 6 (and so you knew exactly that your deadline was Jan 5... ). Given that the indication is gone you don't have concentrations so the sooner you apply the better, but the dispersion of application will be higher.
Bottom line, it all comes down to what you really are going to do after b-school. If it doesn't make a difference for you to start on Sept rather than Jan and you're a career switcher I would surely pick ED, if you will get back to your former empolyment I think you're still on time to apply for J-term.
_________________
Frangar non flectar.
Manager
Joined: 01 Mar 2010
Posts: 175
Followers: 1
Kudos [?]: 19 [0], given: 0
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
24 Jun 2010, 09:04
Will the deposit upon ED acceptance be applied against tuition later?
I know NYC is expensive, but I hope they don't eat my $6G @.@
lawgonebusiness- Yeah, interview invite. BW forum seems so..... chaotic in its layout, fonts, etc. so I don't go there often. But in terms of activity, it seems as active as GC.
Current Student
Joined: 06 Oct 2009
Posts: 594
Followers: 5
Kudos [?]: 58 [0], given: 295
Re: Calling All CBS 2011 Applicants!! [#permalink]
### Show Tags
24 Jun 2010, 09:08
chostein wrote:
Will the deposit upon ED acceptance be applied against tuition later? I know NYC is expensive, but I hope they don't eat my $6G @.@
lawgonebusiness- Yeah, interview invite. BW forum seems so..... chaotic in its layout, fonts, etc. so I don't go there often. But in terms of activity, it seems as active as GC.
Hey lawgonebusiness, can you plz provide a link to the forums? I rarely venture to BW and I agree....its format is so weird.
Display posts from previous: Sort by | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31756192445755005, "perplexity": 8931.216525377611}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218205046.28/warc/CC-MAIN-20170322213005-00142-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/258783/f-mathbb-r2-rightarrow-mathbb-r-by-fx-y-1-if-xy-0-and-2-otherwi | # $f:\mathbb R^{2}\rightarrow \mathbb R$ by $f(x,y)=1,$ if $xy=0$ and $=2,$otherwise.
I came across the problem which is as follows:
Define $f:\mathbb R^{2}\rightarrow \mathbb R$ by \begin{align}f(x,y)&=\begin{cases}1 & \text{if } xy=0\\ 2 & \text{otherwise}\end{cases} \end{align} Now, if $S=\lbrace(x,y):f\mbox{ is continuous at }(x,y)\rbrace$, then which of the following is correct?
(a) $S$ is open,
(b) $S$ is connected,
(c) $S=\emptyset$,
(d) $S$ is closed.
My attempts: Here, we see that the points of discontinuity lie on the $x$- and $y$-axes, where at least one of $x$ and $y$ is zero. So I conclude that the points of $S$ (where $f$ is supposed to be continuous) form the plane minus the coordinate axes. So I think the set $S$ is connected. Am I going in the right direction? I need a bit of explanation here. Please help. Thank you in advance for your time.
Can you find some points where $f$ is continuous? – lhf Dec 14 '12 at 17:14
We should find out where our function is continuous. Take a point $(a,b)$ not on one of the two axes. Then there is an open neighbourhood of $(a,b)$ that avoids the two axes, so there is an open neighbourhood of $(a,b)$ such that for any $(s,t)$ in that neighbourhood, we have $f(s,t)=f(a,b)=2$. Thus $f$ is continuous at $(a,b)$.
Now let $(a,b)$ be on one of the axes. Then any open neighbourhood of $(a,b)$ contains a point not on an axis, that is, a point at which $f$ is equal to $2$. So taking $\epsilon=1/2$, we find that there is no positive $\delta$ such that $|f(a,b)-f(s,t)|\lt \epsilon$ for every point at distance $\lt \delta$ from $(a,b)$. So $f$ is not continuous at any point on one of the axes.
So we know that $S$ consists of all points not on one of the axes: the answers should now come easily.
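A quick numerical sanity check of the two cases above (a small sketch I'm adding for illustration; the sample points are arbitrary):

```python
def f(x, y):
    return 1 if x * y == 0 else 2

# Case 1: at an axis point such as (3, 0) we have f(3, 0) = 1, yet points
# arbitrarily close by (with both coordinates nonzero) take the value 2,
# so f is not continuous there.
print(f(3, 0), f(3, 1e-9))              # 1 2
# Case 2: at an off-axis point such as (1, 2), a small enough neighbourhood
# avoids both axes, so f is locally the constant 2, hence continuous there.
print(f(1, 2), f(1 + 1e-9, 2 - 1e-9))   # 2 2
```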
Hint: $f$ is discontinuous precisely on the $x$-axis and the $y$-axis.
I have got your point and edited my attempts accordingly..Thanks a lot for the hint. But i am still confused about the final conclusion. Option $(c)$ can be eliminated. I am confused between $(a)$ and $(b)$. – learner Dec 14 '12 at 17:51
@learner: $S$ is surely open because it is the union of four open sets, namely the four (open) quadrants, but $S$ is not connected because it is not path connected - take two points in different quadrants; they cannot be joined by any path lying in $S$. – pritam Dec 14 '12 at 18:01
thanks a lot for the clarification.I have got it. It is crystal clear now. +1 from me.. – learner Dec 14 '12 at 18:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9390345811843872, "perplexity": 123.60726589022735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931010938.65/warc/CC-MAIN-20141125155650-00026-ip-10-235-23-156.ec2.internal.warc.gz"} |
http://mathhelpforum.com/calculus/155723-rational-function.html | # Math Help - Rational Function
1. ## Rational Function
$y=\frac{-x^2+ax}{x-2b}$, find the range of values of $b$ for which the graph of $y=f(x)$ has two turning points.
I am able to solve it by finding dy/dx and making sure that dy/dx has two distinct real roots.
My question is this,
Is there any other way to solve this without using differentiation?
Please enlighten me...thank you!
2. I don't believe so. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7732587456703186, "perplexity": 449.36360644613893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099036.81/warc/CC-MAIN-20150627031819-00204-ip-10-179-60-89.ec2.internal.warc.gz"} |
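For what it's worth, there does seem to be a calculus-free route (a sketch of my own, not from this thread, so check it before relying on it): set $y=k$ and ask when the resulting quadratic $x^2+(k-a)x-2bk=0$ has a real root; its discriminant $(k-a)^2+8bk=k^2+(8b-2a)k+a^2$ is itself a quadratic in $k$, and two turning points correspond to this quadratic having two distinct roots (a gap in the range of $y$). Since $(8b-2a)^2-4a^2=4(16b^2-8ab)$, this gives exactly the same condition $b(2b-a)>0$ as the $dy/dx$ route. A quick check in Python:

```python
# Two turning points of y = (-x**2 + a*x)/(x - 2*b), two ways:
#  - via dy/dx: its numerator is -x**2 + 4*b*x - 2*a*b (quotient rule),
#    which has two distinct real roots iff 16*b**2 - 8*a*b > 0;
#  - via the range: y = k is attainable iff x**2 + (k - a)*x - 2*b*k = 0 has
#    a real root, i.e. k**2 + (8*b - 2*a)*k + a**2 >= 0; two turning points
#    iff this quadratic in k has two distinct roots.
def via_derivative(a, b):
    return 16*b*b - 8*a*b > 0

def via_range(a, b):
    return (8*b - 2*a)**2 - 4*a*a > 0

grid = [t / 4 for t in range(-20, 21)]
assert all(via_derivative(a, b) == via_range(a, b) for a in grid for b in grid)
# e.g. with a = 4 the condition b*(2*b - a) > 0 means b < 0 or b > 2:
print([b for b in (-1, 1, 3) if via_derivative(4, b)])  # [-1, 3]
```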
https://www.kullabs.com/classes/subjects/units/lessons/practice-tests/805 | ### Practice Session for each note on this lesson
Arithmetic Mean Take Practice Test
Median Take Practice Test | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9662107825279236, "perplexity": 6411.858768511522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257939.82/warc/CC-MAIN-20190525084658-20190525110658-00037.warc.gz"} |
http://physics.aps.org/synopsis-for/10.1103/PhysRevB.79.195122 | # Synopsis: How to make CuO sit up straight
$\text{CuO}$ in thin-film form could be a prototype material for exploring magnetism that is similar to what is found in high-temperature superconductors.
The parent compounds of cuprate high-temperature superconductors are typically antiferromagnets where the magnetic interaction between the spins on the copper sites is unusually large ($\sim 100\,\text{meV}$ or $>1000\,\text{K}$). Since they may play a role in the superconducting mechanism, researchers have explored similarly large magnetic interactions in other copper-oxide compounds.
Moving from left to right on the periodic table, $\text{CuO}$ is the last member of the transition metal rock-salt series that includes $\text{MnO}$, $\text{FeO}$, $\text{CoO}$, and $\text{NiO}$. Except for $\text{CuO}$, each of these oxides has a cubic structure, like salt, where the transition metal ion is surrounded by six oxygen ions. From $\text{MnO}$ to $\text{NiO}$, the antiferromagnetic (Néel) transition temperature, which scales with the magnetic interaction between the spins on the transition metal sites, increases from $100$ to $500\,\text{K}$. Following this trend, $\text{CuO}$ should have a Néel temperature as high as $900\,\text{K}$, but in bulk form, $\text{CuO}$ has a low-symmetry, distorted rock-salt structure and a transition temperature of only $200\,\text{K}$.
Wolter Siemons and colleagues at the University of Twente in The Netherlands and collaborators at Stanford University in the US report in Physical Review B that they have succeeded in using pulsed laser deposition to grow thin films of $\text{CuO}$ with a structure that is an elongated (tetragonal) version of its rock-salt cousins.
While Siemons et al. have determined the structure with extensive crystallography, magnetic measurements will be necessary to determine if the magnetic interactions in this tetragonal form of $\text{CuO}$ compare with those of the high-temperature superconducting oxides. – Jessica Thomas
Photoelectron spectroscopy reveals the details of the interaction between electronic and vibrational excitations in a molecular material. Read More » | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 21, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5574698448181152, "perplexity": 1638.215929065609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276964.14/warc/CC-MAIN-20160524002116-00177-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.groundai.com/project/the-inverse-shapley-value-problem/1 | # The Inverse Shapley Value Problem
(An extended abstract of this work appeared in the Proceedings of the 39th International Colloquium on Automata, Languages and Programming (ICALP 2012).)
Anindya De
University of California, Berkeley
anindya@cs.berkeley.edu. Research supported by NSF award CCF-0915929, CCF-1017403 and CCF-1118083.
Ilias Diakonikolas
University of Edinburgh
ilias.d@ed.ac.uk. This work was done while the author was at UC Berkeley supported by a Simons Postdoctoral Fellowship.
Rocco A. Servedio
Columbia University
rocco@cs.columbia.edu. Supported by NSF grants CNS-0716245, CCF-0915929, and CCF-1115703.
###### Abstract
For a weighted voting scheme $f$ used by $n$ voters to choose between two candidates, the Shapley-Shubik indices (or Shapley values) of $f$ provide a measure of how much control each voter can exert over the overall outcome of the vote. Shapley-Shubik indices were introduced by Lloyd Shapley and Martin Shubik in 1954 [SS54] and are widely studied in social choice theory as a measure of the “influence” of voters. The Inverse Shapley Value Problem is the problem of designing a weighted voting scheme which (approximately) achieves a desired input vector of values for the Shapley-Shubik indices. Despite much interest in this problem no provably correct and efficient algorithm was known prior to our work.
We give the first efficient algorithm with provable performance guarantees for the Inverse Shapley Value Problem. For any constant $\epsilon > 0$ our algorithm runs in fixed $\mathrm{poly}(n)$ time (the degree of the polynomial is independent of $\epsilon$) and has the following performance guarantee: given as input a vector of desired Shapley values, if any “reasonable” weighted voting scheme (roughly, one in which the threshold is not too skewed) approximately matches the desired vector of values to within some small error, then our algorithm explicitly outputs a weighted voting scheme that achieves this vector of Shapley values to within error $\epsilon$. If there is a “reasonable” voting scheme in which all voting weights are integers of magnitude at most $W$ that approximately achieves the desired Shapley values, then our algorithm runs in $\mathrm{poly}(n, W, 1/\epsilon)$ time and outputs a weighted voting scheme that achieves the target vector of Shapley values to within error $\epsilon$.
## 1 Introduction
In this paper we consider the common scenario in which each of $n$ voters must cast a binary vote for or against some proposal. What is the best way to design such a voting scheme? Throughout the paper we consider only weighted voting schemes, in which the proposal passes if a weighted sum of yes-votes exceeds a predetermined threshold. Weighted voting schemes are predominant in voting theory and have been extensively studied for many years, see [EGGW07, ZFBE08] and references therein. In computer science language, we are dealing with linear threshold functions (henceforth abbreviated as LTFs) over $n$ Boolean variables.
If it is desired that each of the voters should have the same “amount of power” over the outcome, then a simple majority vote is the obvious solution. However, in many scenarios it may be the case that we would like to assign different levels of voting power to the voters – perhaps they are shareholders who own different amounts of stock in a corporation, or representatives of differently sized populations. In such a setting it is much less obvious how to design the right voting scheme; indeed, it is far from obvious how to correctly quantify the notion of the “amount of power” that a voter has under a given fixed voting scheme. As a simple example, consider an election with three voters who have voting weights 49, 49 and 2, in which a total of 51 votes are required for the proposition to pass. While the disparity between voting weights may at first suggest that the two voters with 49 votes each have most of the “power,” any coalition of two voters is sufficient to pass the proposition and any single voter is insufficient, so the voting power of all three voters is in fact equal.
Many different power indices (methods of measuring the voting power of individuals under a given voting scheme) have been proposed over the course of decades. These include the Banzhaf index [Ban65], the Deegan-Packel index [DP78], the Holler index [Hol82], and others (see the extensive survey of de Keijzer [dK08]). Perhaps the best known, and certainly the oldest, of these indices is the Shapley-Shubik index [SS54], which is also known as the index of Shapley values (we shall henceforth refer to it as such). Informally, the Shapley value of a voter $i$ among the $n$ voters is the fraction of all $n!$ orderings of the voters in which she “casts the pivotal vote” (see Definition 1 in Section 2 for a precise definition, and [Rot88] for much more on Shapley values). We shall work with the Shapley values throughout this paper.
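To make the definition concrete, here is a minimal sketch (mine, not from the paper; the `shapley_shubik` helper name and the small examples are illustrative) that computes the indices exactly for small $n$ by enumerating all $n!$ orderings:

```python
from itertools import permutations
from math import factorial
from fractions import Fraction

def shapley_shubik(weights, threshold):
    """Exact Shapley-Shubik indices: for each voter, the fraction of all
    n! orderings in which that voter's weight first pushes the running
    total to the threshold (i.e., the voter casts the pivotal vote)."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for voter in order:
            total += weights[voter]
            if total >= threshold:
                pivots[voter] += 1
                break
    return [Fraction(p, factorial(n)) for p in pivots]

# The 49/49/2 election with quota 51 from above: any two voters suffice
# and no single voter does, so all three indices come out equal.
print(shapley_shubik([49, 49, 2], 51))  # [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]
```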
Given a particular weighted voting scheme (i.e., an $n$-variable linear threshold function), standard sampling-based approaches can be used to efficiently obtain highly accurate estimates of the Shapley values (see also the works of [Lee03, BMR10]). However, the inverse problem is much more challenging: given a vector of desired values for the Shapley values, how can one design a weighted voting scheme that (approximately) achieves these Shapley values? This problem, which we refer to as the Inverse Shapley Value Problem, is quite natural and has received considerable attention; various heuristics and exponential-time algorithms have been proposed [APL07, FWJ08, dKKZ10, Kur11], but prior to our work no provably correct and efficient algorithms were known.
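The forward direction mentioned above — estimating the Shapley values of a given scheme by sampling — can be sketched as follows (again a minimal illustration, not the estimators of [Lee03, BMR10]; the sample count and election are arbitrary):

```python
import random

def estimate_shapley(weights, threshold, samples=20000, seed=0):
    """Monte Carlo estimate: each voter's Shapley value is approximated by
    the empirical frequency with which that voter is pivotal in a
    uniformly random ordering of the voters."""
    rng = random.Random(seed)
    n = len(weights)
    pivots = [0] * n
    order = list(range(n))
    for _ in range(samples):
        rng.shuffle(order)
        total = 0
        for voter in order:
            total += weights[voter]
            if total >= threshold:
                pivots[voter] += 1
                break
    return [p / samples for p in pivots]

est = estimate_shapley([49, 49, 2], 51)
print(est)  # each entry within sampling error of 1/3
```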
Our Results. We give the first efficient algorithm with provable performance guarantees for the Inverse Shapley Value Problem. Our results apply to “reasonable” voting schemes; roughly, we say that a weighted voting scheme is “reasonable” if fixing a tiny fraction of the voting weight does not already determine the outcome, i.e., if the threshold of the linear threshold function is not too extreme. (See Definition 2 in Section 2 for a precise definition.) This seems to be a plausible property for natural voting schemes. Roughly speaking, we show that if there is any reasonable weighted voting scheme that approximately achieves the desired input vector of Shapley values, then our algorithm finds such a weighted voting scheme. Our algorithm runs in fixed polynomial time in $n$, the number of voters, for any constant error parameter $\epsilon > 0$. In a bit more detail, our first main theorem, stated informally, is as follows (see Section 6 for Theorem 26 which gives a precise theorem statement):
Main Theorem (arbitrary weights, informal statement). There is a $\mathrm{poly}(n)$-time algorithm with the following properties: The algorithm is given any constant accuracy parameter $\epsilon > 0$ and any vector of real values $a_1, \dots, a_n$. The algorithm has the following performance guarantee: if there is any monotone increasing reasonable LTF $f$ whose Shapley values are very close to the given values $a_1, \dots, a_n$, then with very high probability the algorithm outputs a weight vector and threshold $v$ such that the linear threshold function $f_v$ has Shapley values $\epsilon$-close to those of $f$.
We emphasize that the exponent of the polynomial running time is a fixed constant that is independent of $\epsilon$.
Our second main theorem gives an even stronger guarantee if there is a weighted voting scheme with small integer weights (at most some bound $W$) whose Shapley values are close to the desired values. For this problem we give an algorithm with an even stronger accuracy guarantee. An informal statement of this result is (see Section 6 for Theorem 27 which gives a precise theorem statement):
Main Theorem (bounded weights, informal statement). There is a poly-time algorithm with the following properties: The algorithm is given a weight bound $W$ and any vector of desired Shapley values. The algorithm has the following performance guarantee: if there is any monotone increasing reasonable LTF whose Shapley values are very close to the given values and whose weights are each integers of magnitude at most $W$, then with very high probability the algorithm outputs a weight vector and threshold such that the corresponding linear threshold function has Shapley values close to those of the target LTF.
Discussion and Our Approach. At a high level, the Inverse Shapley Value Problem that we consider is similar to the “Chow Parameters Problem” that has been the subject of several recent papers [Gol06, OS08, DDFS12]. The Chow parameters are another name for the Banzhaf indices; the Chow Parameters Problem is to output a linear threshold function which approximately matches a given input vector of Chow parameters. (To align with the terminology of the current paper, the “Chow Parameters Problem” might perhaps better be described as the “Inverse Banzhaf Problem.”)
Let us briefly describe the approaches in [OS08] and [DDFS12] at a high level for the purpose of establishing a clear comparison with this paper. Each of the papers [OS08, DDFS12] combines structural results on linear threshold functions with an algorithmic component. The structural results in [OS08] deal with anti-concentration of affine forms $w \cdot x - \theta$ where $x$ is uniformly distributed over the Boolean hypercube, while the algorithmic ingredient of [OS08] is a rather straightforward brute-force search. In contrast, the key structural results of [DDFS12] are geometric statements about how hyperplanes interact with the Boolean hypercube, which are combined with linear-algebraic (rather than anti-concentration) arguments. The algorithmic ingredient of [DDFS12] is more sophisticated, employing a boosting-based approach inspired by the work of [TTV08, Imp95].
Our approach combines aspects of both the [OS08] and [DDFS12] approaches. Very roughly speaking, we establish new structural results which show that linear threshold functions have good anti-concentration (similar to [OS08]), and use a boosting-based approach derived from [TTV08] as the algorithmic component (similar to [DDFS12]). However, this high-level description glosses over many “Shapley-specific” issues and complications that do not arise in these earlier works; below we describe two of the main challenges that arise, and sketch how we meet them in this paper.
First challenge: establishing anti-concentration with respect to non-standard distributions. The Chow parameters (i.e., Banzhaf indices) have a natural definition in terms of the uniform distribution over the Boolean hypercube $\{-1,1\}^n$. Being able to use the uniform distribution with its many nice properties (such as complete independence among all coordinates) is very useful in proving the required anti-concentration results that are at the heart of [OS08]. In contrast, it is not a priori clear what is (or even whether there exists) the “right” distribution over $\{-1,1\}^n$ corresponding to the Shapley values. In this paper we derive such a distribution $\mu$ over $\{-1,1\}^n$, but it is much less well-behaved than the uniform distribution (it is supported on a proper subset of $\{-1,1\}^n$, and it is not even pairwise independent). Nevertheless, we are able to establish anti-concentration results for affine forms corresponding to linear threshold functions under the distribution $\mu$ as required for our results. This is done by showing that any reasonable linear threshold function can be expressed with “nice” weights (see Theorem 3 of Section 2), and establishing anti-concentration for any “nice” weight vector by carefully combining anti-concentration bounds for $p$-biased distributions across a continuous family of different choices of $p$ (see Section 4 for details).
Second challenge: using anti-concentration to solve the Inverse Shapley problem. The main algorithmic ingredient that we use is a procedure from [TTV08]. Given a vector of values (correlations between the unknown linear threshold function $f$ and the individual input variables), it efficiently constructs a bounded function $g$ which closely matches these correlations, i.e., $\mathbf{E}[g(x)\,x_i] \approx \mathbf{E}[f(x)\,x_i]$ for all $i \in [n]$. Such a procedure is very useful for the Chow parameters problem, because the Chow parameters correspond precisely to the values $\mathbf{E}[f(x)\,x_i]$, i.e., the degree-1 Fourier coefficients of $f$ with respect to the uniform distribution. (This correspondence is at the heart of Chow’s original proof [Cho61] showing that the exact values of the Chow parameters suffice to information-theoretically specify any linear threshold function; anti-concentration is used in [OS08] to extend Chow’s original arguments about degree-1 Fourier coefficients to the setting of approximate reconstruction.)
For the inverse Shapley problem, there is no obvious correspondence between the correlations of individual input variables and the Shapley values. Moreover, without a notion of “degree-1 Fourier coefficients” for the Shapley setting, it is not clear why anti-concentration statements with respect to $\mu$ should be useful for approximate reconstruction. We deal with both these issues by developing a notion of the degree-1 Fourier coefficients of $f$ with respect to the distribution $\mu$ and relating these coefficients to the Shapley values. (We note that Owen [Owe72] has given a characterization of the Shapley values as a weighted average of $p$-biased influences (see also [KS06]); however, this is not as useful for us as our characterization in terms of “$\mu$-distribution” Fourier coefficients, because we need to ultimately relate the Shapley values to anti-concentration with respect to $\mu$.) (We actually require two related notions: one is the “coordinate correlation coefficient” $f^{*}(i)$, which is necessary for the algorithmic [TTV08] ingredient, and one is the “Fourier coefficient” $\hat{f}(i)$, which is necessary for Lemma 15, see below.) We define both notions and establish the necessary relations between them in Section 3.
Armed with the notion of the degree-1 Fourier coefficients under the distribution $\mu$, we prove a key result (Lemma 15) saying that if the LTF $f$ is anti-concentrated under $\mu$, then any bounded function $g$ which closely matches the degree-1 Fourier coefficients of $f$ must be close to $f$ in distance with respect to $\mu$. (This is why anti-concentration with respect to $\mu$ is useful for us.) From this point, exploiting properties of the [TTV08] algorithm, we can pass from $g$ to an LTF whose Shapley values closely match those of $f$.
Organization. Useful preliminaries are given in Section 2, including the crucial fact (Theorem 3) that all “reasonable” linear threshold functions have weight representations with “nice” weights. In Section 3 we define the distribution and the notions of Fourier coefficients and “coordinate correlation coefficients,” and the relations between them, that we will need. At the end of that section we prove a crucial lemma, Lemma 15, which says that anti-concentration of affine forms and closeness in Fourier coefficients together suffice to establish closeness in distance. Section 4 proves that “nice” affine forms have the required anti-concentration, and Section 5 describes the algorithmic tool from [TTV08] that lets us establish closeness of coordinate correlation coefficients. Section 6 puts the pieces together to prove our main theorems. Finally, in Section 7 we conclude the paper and present a few open problems.
## 2 Preliminaries
Notation and terminology. For $n \in \mathbb{N}$, we denote $[n] \stackrel{\text{def}}{=} \{1,\dots,n\}$. For $w \in \mathbb{R}^n$ and $x \in \{-1,1\}^n$, we denote $w \cdot x \stackrel{\text{def}}{=} \sum_{i=1}^{n} w_i x_i$.
Given a vector $a = (a(1),\dots,a(n))$ we write $\|a\|$ to denote $\sqrt{\sum_{i=1}^{n} a(i)^2}$. A linear threshold function, or LTF, is a function $f : \{-1,1\}^n \to \{-1,1\}$ which is such that $f(x) = \mathrm{sign}(w \cdot x - \theta)$ for some $w \in \mathbb{R}^n$ and $\theta \in \mathbb{R}$.
Our arguments will also use a variant of linear threshold functions which we call linear bounded functions (LBFs). The projection function $P_1 : \mathbb{R} \to [-1,1]$ is defined by $P_1(t) = t$ for $|t| \le 1$ and $P_1(t) = \mathrm{sign}(t)$ otherwise. An LBF is a function of the form $g(x) = P_1(w \cdot x - \theta)$.
Shapley values. Here and throughout the paper we write $S_n$ to denote the symmetric group of all permutations over $[n]$. Given a permutation $\pi \in S_n$ and an index $i \in [n]$, we write $x(\pi,i)$ to denote the string in $\{-1,1\}^n$ that has a 1 in coordinate $j$ if and only if $\pi(j) < \pi(i)$, and we write $x^{+}(\pi,i)$ to denote the string obtained from $x(\pi,i)$ by flipping coordinate $i$ from $-1$ to $1$. With this notation in place we can define the generalized Shapley indices of a Boolean function as follows:
###### Definition 1.
(Generalized Shapley values) Given $f : \{-1,1\}^n \to \mathbb{R}$, the $i$-th generalized Shapley value of $f$ is the value
$$\tilde{f}(i) \;\stackrel{\text{def}}{=}\; \mathbf{E}_{\pi \sim_R S_n}\big[f(x^{+}(\pi,i)) - f(x(\pi,i))\big] \qquad (1)$$
(where “$\pi \sim_R S_n$” means that $\pi$ is selected uniformly at random from $S_n$).
A function $f$ is said to be monotone increasing if for all $i \in [n]$, whenever two input strings $x, y$ differ precisely in coordinate $i$ and have $x_i = -1$, $y_i = 1$, it is the case that $f(x) \le f(y)$. It is easy to check that for monotone functions our definition of generalized Shapley values agrees with the usual notion of Shapley values (which are typically defined only for monotone functions) up to a multiplicative factor of 2; in the rest of the paper we omit “generalized” and refer to these values simply as the Shapley values of $f$.
We will use the following notion of the “distance” between the vectors of Shapley values of two functions $f, g : \{-1,1\}^n \to \mathbb{R}$:
$$d_{\mathrm{Shapley}}(f,g) \;\stackrel{\text{def}}{=}\; \sqrt{\sum_{i=1}^{n} \big(\tilde{f}(i) - \tilde{g}(i)\big)^2},$$
i.e., the Shapley distance is simply the Euclidean distance between the two $n$-dimensional vectors of Shapley values. Given a vector $a = (a(1),\dots,a(n))$ of desired Shapley values, we will also use $d_{\mathrm{Shapley}}(a,f)$ to denote the analogous distance between $a$ and the vector of Shapley values of $f$.
The linear threshold functions that we consider. Our algorithmic results hold for linear threshold functions which are not too “extreme” (in the sense of having a very skewed threshold). We will use the following definition:
###### Definition 2.
($\eta$-reasonable LTF) Let $f(x) = \mathrm{sign}(w \cdot x - \theta)$ be an LTF. For $\eta > 0$ we say that $f$ is $\eta$-reasonable if $|\theta| \le (1-\eta)\sum_{i=1}^{n}|w_i|$.
All our results will deal with $\eta$-reasonable LTFs; throughout the paper $\eta$ should be thought of as a small fixed absolute constant. LTFs that are not $\eta$-reasonable do not seem to correspond to very interesting voting schemes since typically they will be very close to constant functions. (For example, if an LTF has a threshold so extreme that it fails to be $\eta$-reasonable even for small $\eta$, then it agrees with a constant function on all but a small fraction of inputs in $\{-1,1\}^n$.)
Turning from the threshold to the weights, some of the proofs in our paper will require us to work with LTFs that have “nice” weights in a certain technical sense. Prior work [Ser07, OS11] has shown that for any LTF, there is a weight vector realizing that LTF that has essentially the properties we need; however, since the exact technical condition that we require is not guaranteed by any of the previous works, we give a full proof that any LTF has a representation of the desired form. The following theorem is proved in Appendix A:
###### Theorem 3.
Let $f$ be an $\eta$-reasonable LTF. There exists a representation of $f$ as $f(x) = \mathrm{sign}(v \cdot x - \theta)$ satisfying, after reordering coordinates so that condition (i) below holds: (i) the weights are sorted by magnitude, $|v_1| \ge |v_2| \ge \dots \ge |v_n|$; together with (ii) and (iii), certain technical “niceness” bounds on the magnitudes of the weights $v_i$ and the threshold $\theta$.
Tools from probability. We will use the following standard tail bound:
###### Theorem 4.
(Chernoff Bounds) Let $X$ be a random variable taking values in $[-a, a]$ and let $X_1, \dots, X_t$ be i.i.d. samples drawn from $X$. Let $\bar{X} = \frac{1}{t}\sum_{j=1}^{t} X_j$. Then for any $\gamma > 0$, we have
$$\Pr\big[\,\big|\bar{X} - \mathbf{E}[X]\big| \ge \gamma\,\big] \le 2\exp\!\big(-\gamma^2 t/(2a^2)\big).$$
We will also use the Littlewood-Offord inequality for $p$-biased distributions over $\{-1,1\}^n$. One way to prove this is by using the LYM inequality (which can be found e.g. as Theorem 8.6 of [Juk01]); for an explicit reference and proof of the following statement see e.g. [AGKW09].
###### Theorem 5.
Fix $p \in (0,1)$ and let $\mu_p$ denote the $p$-biased distribution over $\{-1,1\}^n$ (under which each coordinate is set to $1$ independently with probability $p$). Fix $w \in \mathbb{R}^n$ and define $S = w_1 x_1 + \dots + w_n x_n$. If at least $k$ of the weights satisfy $|w_i| \ge 1$, then for all $r \in \mathbb{R}$ we have $\Pr_{x \sim \mu_p}\big[S \in (r, r+1]\big] = O\big(1/\sqrt{k\,p(1-p)}\big)$.
Basic Facts about function spaces. We will use the following basic facts:
###### Fact 6.
The functions $x_0 \equiv 1, x_1, \dots, x_n$ are linearly independent and form a basis for the subspace of linear functions from $\{-1,1\}^n$ to $\mathbb{R}$.
###### Fact 7.
Fix any $n$ and let $\mathcal{D}$ be a probability distribution over $\{-1,1\}^n$. Suppose that $\{L_0, L_1, \dots, L_n\}$ is an orthonormal set of functions under $\mathcal{D}$, i.e., $\mathbf{E}_{x \sim \mathcal{D}}[L_i(x) L_j(x)]$ equals 1 if $i = j$ and 0 otherwise. Then for the coefficients $\hat{f}(i) \stackrel{\text{def}}{=} \mathbf{E}_{x \sim \mathcal{D}}[f(x) L_i(x)]$ we have $\sum_{i=0}^{n} \hat{f}(i)^2 \le \mathbf{E}_{x \sim \mathcal{D}}[f(x)^2]$ (Bessel’s inequality). As a corollary, if $f$ maps into $[-1,1]$ then we have $\sum_{i=0}^{n} \hat{f}(i)^2 \le 1$.
## 3 Analytic Reformulation of Shapley values
The definition of Shapley values given in Definition 1 is somewhat cumbersome to work with. In this section we derive alternate characterizations of Shapley values in terms of “Fourier coefficients” and “coordinate correlation coefficients” and establish various technical results relating Shapley values and these coefficients; these technical results will be crucially used in the proof of our main theorems.
There is a particular distribution $\mu$ that plays a central role in our reformulations. We start by defining this distribution and introducing some relevant notation, and then give our results.
The distribution $\mu$. Let us define $Q(n,k) \stackrel{\text{def}}{=} \frac{1}{k} + \frac{1}{n-k}$ for $0 < k < n$, and $\Lambda(n) \stackrel{\text{def}}{=} \sum_{k=1}^{n-1} Q(n,k)$; clearly we have $\Lambda(n) = 2\sum_{k=1}^{n-1} \frac{1}{k}$, and more precisely we have $\Lambda(n) = \Theta(\log n)$.
For $x \in \{-1,1\}^n$ we write $\mathrm{wt}(x)$ to denote the number of $1$’s in $x$. We define the set $B_n$ to be $\{x \in \{-1,1\}^n : 0 < \mathrm{wt}(x) < n\}$, i.e., $B_n = \{-1,1\}^n \setminus \{\mathbf{1}, -\mathbf{1}\}$.
The distribution $\mu$ is supported on $B_n$ and is defined as follows: to make a draw from $\mu$, first sample $k \in \{1, \dots, n-1\}$ with probability $Q(n,k)/\Lambda(n)$, then choose $x$ uniformly at random from the $k$-th “weight level” of $B_n$, i.e., from $\{x \in \{-1,1\}^n : \mathrm{wt}(x) = k\}$.
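For small $n$ the distribution $\mu$ can be tabulated exactly, which is handy for checking its basic properties (normalization, and the symmetry $\mathbf{E}_{\mu}[x_i] = 0$). The snippet below is our own illustration, not code from the paper.

```python
from itertools import product
from math import comb

def Q(n, k):
    """Level weight Q(n,k) = 1/k + 1/(n-k)."""
    return 1 / k + 1 / (n - k)

def mu_pmf(n):
    """Exact pmf of mu on B_n: choose weight level k with probability
    Q(n,k)/Lambda(n), then a uniform string of Hamming weight k."""
    Lam = sum(Q(n, k) for k in range(1, n))
    pmf = {}
    for x in product((-1, 1), repeat=n):
        k = x.count(1)
        if 0 < k < n:
            pmf[x] = Q(n, k) / (Lam * comb(n, k))
    return pmf
```

Since $Q(n,k) = Q(n,n-k)$, the pmf is invariant under global negation of $x$, which is why every coordinate has mean zero.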
Useful notation. For $i \in \{0, 1, \dots, n\}$ we define the “coordinate correlation coefficients” of a function $f$ (with respect to $\mu$) as:
$$f^{*}(i) \;\stackrel{\text{def}}{=}\; \mathbf{E}_{x \sim \mu}[f(x) \cdot x_i] \qquad (2)$$
(here and throughout the paper $x_0$ denotes the constant 1).
Later in this section we will define an orthonormal set of linear functions $\{L_0, L_1, \dots, L_n\}$. We define the “Fourier coefficients” of $f$ (with respect to $\mu$) as:
$$\hat{f}(i) \;\stackrel{\text{def}}{=}\; \mathbf{E}_{x \sim \mu}[f(x) \cdot L_i(x)]. \qquad (3)$$
An alternative expression for the Shapley values. We start by expressing the Shapley values in terms of the coordinate correlation coefficients:
###### Lemma 8.
Given $f : \{-1,1\}^n \to \mathbb{R}$, for each $i \in [n]$ we have
$$\tilde{f}(i) = \frac{f(\mathbf{1}) - f(-\mathbf{1})}{n} + \frac{\Lambda(n)}{2} \cdot \Big( f^{*}(i) - \frac{1}{n} \sum_{j=1}^{n} f^{*}(j) \Big),$$
or equivalently,
$$f^{*}(i) = \frac{2}{\Lambda(n)} \cdot \Big( \tilde{f}(i) - \frac{f(\mathbf{1}) - f(-\mathbf{1})}{n} \Big) + \frac{1}{n} \sum_{j=1}^{n} f^{*}(j).$$
###### Proof.
Recall that $\tilde{f}(i)$ can be expressed as follows:
$$\tilde{f}(i) = \mathbf{E}_{\pi \sim_R S_n}\big[f(x^{+}(\pi,i)) - f(x(\pi,i))\big]. \qquad (4)$$
Since the $i$-th coordinate of $x^{+}(\pi,i)$ is $1$ and the $i$-th coordinate of $x(\pi,i)$ is $-1$, we see that $\tilde{f}(i)$ is a weighted sum of the values $f(x) \cdot x_i$ over $x \in \{-1,1\}^n$. We now compute the weight associated with any such $x$.
• Let $x$ be a string that has $k$ coordinates that are 1 and has $x_i = 1$. Then the total number of permutations $\pi$ such that $x^{+}(\pi,i) = x$ is $(k-1)!\,(n-k)!$. Consequently the weight associated with $f(x) \cdot x_i$ for such an $x$ is $\frac{(k-1)!\,(n-k)!}{n!}$.
• Now let $x$ be a string that has $k$ coordinates that are 1 and has $x_i = -1$. Then the total number of permutations $\pi$ such that $x(\pi,i) = x$ is $k!\,(n-k-1)!$. Consequently the weight associated with $f(x) \cdot x_i$ for such an $x$ is $\frac{k!\,(n-k-1)!}{n!}$.
Thus we may rewrite Equation (4) as
$$\tilde{f}(i) = \sum_{x \in \{-1,1\}^n:\, x_i = 1} \frac{(\mathrm{wt}(x)-1)!\,(n-\mathrm{wt}(x))!}{n!}\, f(x) \cdot x_i \;+\; \sum_{x \in \{-1,1\}^n:\, x_i = -1} \frac{\mathrm{wt}(x)!\,(n-\mathrm{wt}(x)-1)!}{n!}\, f(x) \cdot x_i.$$
Let us now define $\nu(f) \stackrel{\text{def}}{=} \frac{f(\mathbf{1}) - f(-\mathbf{1})}{n}$. Using the fact that the two points $\pm\mathbf{1}$ contribute exactly $\nu(f)$ to the above sums, it is easy to see that one gets
$$\begin{aligned} 2\tilde{f}(i) &= 2\nu(f) + 2\sum_{x \in B_n} f(x) \cdot \frac{(\mathrm{wt}(x)-1)!\,(n-\mathrm{wt}(x)-1)!}{n!} \cdot \Big( \big(\tfrac{n}{2} - \mathrm{wt}(x)\big) + \tfrac{n x_i}{2} \Big) \\ &= 2\nu(f) + \sum_{x \in B_n} \Big( f(x) \cdot \frac{(\mathrm{wt}(x)-1)!\,(n-\mathrm{wt}(x)-1)!}{(n-1)!} \cdot x_i \;+\; f(x) \cdot \frac{(\mathrm{wt}(x)-1)!\,(n-\mathrm{wt}(x)-1)!}{n!} \cdot \big(n - 2\,\mathrm{wt}(x)\big) \Big) \\ &= 2\nu(f) + \sum_{x \in B_n} \Big( f(x) \cdot \frac{n}{\mathrm{wt}(x)\,(n-\mathrm{wt}(x))\,\binom{n}{\mathrm{wt}(x)}} \cdot x_i \;+\; f(x) \cdot \frac{n - 2\,\mathrm{wt}(x)}{\mathrm{wt}(x)\,(n-\mathrm{wt}(x))\,\binom{n}{\mathrm{wt}(x)}} \Big). \end{aligned} \qquad (5)$$
We next observe that $\frac{n}{k\,(n-k)} = \frac{1}{k} + \frac{1}{n-k} = Q(n,k)$. Next, let us define $P(n,k)$ (for $0 < k < n$) as follows:
$$P(n,k) \;\stackrel{\text{def}}{=}\; \frac{Q(n,k)}{\binom{n}{k}} = \frac{\frac{1}{k} + \frac{1}{n-k}}{\binom{n}{k}}.$$
So we may rewrite Equation (5) in terms of $P$ as
$$2\tilde{f}(i) = 2\nu(f) + \sum_{x \in B_n} P(n, \mathrm{wt}(x))\, f(x) \cdot \Big( x_i + \frac{n - 2\,\mathrm{wt}(x)}{n} \Big) = 2\nu(f) + \sum_{x \in B_n} P(n, \mathrm{wt}(x))\, f(x) \cdot \Big( x_i - \frac{1}{n} \sum_{j=1}^{n} x_j \Big).$$
We have
$$\sum_{x \in B_n} P(n, \mathrm{wt}(x)) = \sum_{k=1}^{n-1} \sum_{x:\, \mathrm{wt}(x) = k} P(n,k) = \sum_{k=1}^{n-1} \binom{n}{k} \cdot P(n,k) = \sum_{k=1}^{n-1} Q(n,k) = \Lambda(n),$$
so that $\mu(x) = P(n, \mathrm{wt}(x))/\Lambda(n)$ for every $x \in B_n$, and consequently we get
$$2\tilde{f}(i) = 2\nu(f) + \Lambda(n) \cdot \Big( f^{*}(i) - \frac{1}{n} \sum_{j=1}^{n} f^{*}(j) \Big),$$
finishing the proof. ∎
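Lemma 8 can be sanity-checked by brute force for small $n$: compute the generalized Shapley values directly from Definition 1 by enumerating all $n!$ permutations, and compare against the right-hand side evaluated exactly under $\mu$. The sketch below is our own illustration; the test LTFs are arbitrary.

```python
from itertools import permutations, product
from math import comb

def shapley_exact(f, n):
    """tilde f(i) from Definition 1, by enumerating all permutations."""
    vals = [0.0] * n
    perms = list(permutations(range(n)))
    for pi in perms:
        pos = {v: p for p, v in enumerate(pi)}   # pos[j] = position of voter j
        for i in range(n):
            # x(pi, i): coordinate j is 1 iff j comes before i in the ordering
            x = tuple(1 if pos[j] < pos[i] else -1 for j in range(n))
            xplus = x[:i] + (1,) + x[i + 1:]      # flip coordinate i to +1
            vals[i] += f(xplus) - f(x)
    return [v / len(perms) for v in vals]

def lemma8_rhs(f, n):
    """nu(f) + (Lambda(n)/2) * (f*(i) - avg_j f*(j)), exactly under mu."""
    Q = lambda k: 1 / k + 1 / (n - k)
    Lam = sum(Q(k) for k in range(1, n))
    mu = {x: Q(x.count(1)) / (Lam * comb(n, x.count(1)))
          for x in product((-1, 1), repeat=n) if 0 < x.count(1) < n}
    fstar = [sum(p * f(x) * x[i] for x, p in mu.items()) for i in range(n)]
    nu = (f((1,) * n) - f((-1,) * n)) / n
    avg = sum(fstar) / n
    return [nu + (Lam / 2) * (fs - avg) for fs in fstar]
```

Both sides agree (up to float rounding) for any function on the cube, since both are the same linear functional of $f$.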
Construction of a Fourier basis for the distribution $\mu$. The functions $x_0 \equiv 1, x_1, \dots, x_n$ are linearly independent, and consequently by Fact 6 they form a basis for the subspace of linear functions from $\{-1,1\}^n$ to $\mathbb{R}$. By Gram-Schmidt orthogonalization, we can obtain an orthonormal basis for this subspace, i.e., a set of linear functions $\{L_0, L_1, \dots, L_n\}$ such that $\mathbf{E}_{x \sim \mu}[L_i(x)^2] = 1$ for all $i$ and $\mathbf{E}_{x \sim \mu}[L_i(x) L_j(x)] = 0$ for all $i \ne j$.
We now give explicit expressions for these basis functions. We start by defining $L_0$ as the constant function $1$. Next, by symmetry, we can express each $L_i$ (for $1 \le i \le n$) as
$$L_i(x) = \alpha\,(x_1 + \dots + x_n) + \beta\, x_i.$$
Using the orthonormality properties it is straightforward to solve for $\alpha$ and $\beta$. The following lemma gives their values:
###### Lemma 9.
For the choices
$$\alpha \;\stackrel{\text{def}}{=}\; \frac{1}{n} \Big( \sqrt{\frac{\Lambda(n)}{n\Lambda(n) - 4(n-1)}} - \frac{\sqrt{\Lambda(n)}}{2} \Big), \qquad \beta \;\stackrel{\text{def}}{=}\; \frac{\sqrt{\Lambda(n)}}{2},$$
the set $\{L_0, L_1, \dots, L_n\}$ is an orthonormal set of linear functions under the distribution $\mu$.
We note for later reference that $\beta = \Theta(\sqrt{\log n})$ and $|\alpha| = O(\sqrt{\log n}/n)$.
We start with the following proposition, which gives an explicit expression for $\mathbf{E}_{x \sim \mu}[x_i x_j]$ when $i \ne j$; we will use it in the proof of Lemma 9.
###### Proposition 10.
For all $i \ne j$ we have $\mathbf{E}_{x \sim \mu}[x_i x_j] = 1 - \frac{4}{\Lambda(n)}$.
###### Proof.
For brevity let us write $A_k$ for the $k$-th “slice” of the hypercube, i.e., $A_k = \{x \in \{-1,1\}^n : \mathrm{wt}(x) = k\}$. Since $\mu$ is supported on $B_n = A_1 \cup \dots \cup A_{n-1}$, we have
$$\mathbf{E}_{x \sim \mu}[x_i x_j] = \sum_{0 < k < n} \Pr_{x \sim \mu}[x \in A_k] \cdot \mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k].$$
If $k = 1$ or $k = n-1$, it is clear that
$$\mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k] = \Big(1 - \frac{2}{n}\Big) - \frac{2}{n} = 1 - \frac{4}{n},$$
and when $2 \le k \le n-2$, we have
$$\mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k] = \frac{1}{\binom{n}{k}} \cdot \Big( 2\binom{n-2}{k-2} + 2\binom{n-2}{k} - \binom{n}{k} \Big).$$
Recall that $|A_k| = \binom{n}{k}$ and $\mu(x) = P(n, \mathrm{wt}(x))/\Lambda(n)$ for $x \in B_n$. This means that we have
$$\Pr_{x \sim \mu}[x \in A_k] = Q(n,k)/\Lambda(n).$$
Thus we may write $\mathbf{E}_{x \sim \mu}[x_i x_j]$ as
$$\mathbf{E}_{x \sim \mu}[x_i x_j] = \sum_{2 \le k \le n-2} \frac{Q(n,k)}{\Lambda(n)} \cdot \mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k] \;+\; \sum_{k \in \{1, n-1\}} \frac{Q(n,k)}{\Lambda(n)} \cdot \mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k].$$
For the latter sum, we have
$$\sum_{k \in \{1, n-1\}} \frac{Q(n,k)}{\Lambda(n)} \cdot \mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k] = \frac{1}{\Lambda(n)} \Big(1 - \frac{4}{n}\Big) \cdot \frac{2n}{n-1}.$$
For the former, we can write
$$\begin{aligned} \sum_{k=2}^{n-2} \frac{Q(n,k)}{\Lambda(n)} \cdot \mathbf{E}_{x \sim \mu}[x_i x_j \mid x \in A_k] &= \sum_{k=2}^{n-2} \frac{1}{\Lambda(n)} \cdot \frac{(k-1)!\,(n-k-1)!}{(n-1)!} \cdot \Big( 2\binom{n-2}{k-2} + 2\binom{n-2}{k} - \binom{n}{k} \Big) \\ &= \sum_{k=2}^{n-2} \frac{1}{\Lambda(n)} \cdot \Big( \frac{2(k-1)}{(n-1)(n-k)} + \frac{2(n-k-1)}{(n-1)\,k} - \frac{n}{k(n-k)} \Big) \\ &= \sum_{k=2}^{n-2} \frac{1}{\Lambda(n)} \cdot \Big( \frac{2}{n-k} - \frac{2}{n-1} + \frac{2}{k} - \frac{2}{n-1} - \frac{1}{k} - \frac{1}{n-k} \Big) \\ &= \sum_{k=2}^{n-2} \frac{1}{\Lambda(n)} \cdot \Big( \frac{1}{n-k} + \frac{1}{k} - \frac{4}{n-1} \Big). \end{aligned}$$
Thus, we get that overall $\mathbf{E}_{x \sim \mu}[x_i x_j]$ equals
$$\begin{aligned} \frac{1}{\Lambda(n)} \Big(1 - \frac{4}{n}\Big) \cdot \frac{2n}{n-1} + \sum_{k=2}^{n-2} \frac{1}{\Lambda(n)} \cdot \Big( \frac{1}{n-k} + \frac{1}{k} - \frac{4}{n-1} \Big) &= \frac{1}{\Lambda(n)} \Big( 2 + \frac{2}{n-1} - \frac{8}{n-1} \Big) + \frac{1}{\Lambda(n)} \Big( \sum_{k=2}^{n-2} \frac{1}{k} + \frac{1}{n-k} \Big) - \frac{4}{\Lambda(n)} + \frac{8}{\Lambda(n)(n-1)} \\ &= \frac{1}{\Lambda(n)} \Big( \sum_{k=1}^{n-1} Q(n,k) \Big) - \frac{4}{\Lambda(n)} = 1 - \frac{4}{\Lambda(n)}, \end{aligned}$$
as was to be shown. ∎
###### Proof of Lemma 9.
We begin by observing that
$$\mathbf{E}_{x \sim \mu}[L_i(x) L_0(x)] = \mathbf{E}_{x \sim \mu}[L_i(x)] = \mathbf{E}_{x \sim \mu}[\alpha(x_1 + \dots + x_n) + \beta x_i] = 0,$$
since $\mathbf{E}_{x \sim \mu}[x_j] = 0$ for every $j$. Next, we solve for $\alpha$ and $\beta$ using the orthonormality conditions on the set $\{L_1, \dots, L_n\}$. As $\mathbf{E}_{x \sim \mu}[L_i(x)^2] = 1$ and $\mathbf{E}_{x \sim \mu}[L_i(x) L_j(x)] = 0$ for $i \ne j$, we get that $\mathbf{E}_{x \sim \mu}[L_i(x)(L_i(x) - L_j(x))] = 1$. This gives
$$\begin{aligned} \mathbf{E}_{x \sim \mu}[L_i(x) \cdot (L_i(x) - L_j(x))] &= \mathbf{E}_{x \sim \mu}[L_i(x) \cdot \beta(x_i - x_j)] \\ &= \mathbf{E}_{x \sim \mu}[\beta\big((\alpha+\beta)x_i + \alpha x_j\big) \cdot (x_i - x_j)] \\ &= \alpha\beta + \beta^2 - \alpha\beta - \beta^2\,\mathbf{E}_{x \sim \mu}[x_j x_i] \\ &= \beta^2\big(1 - \mathbf{E}_{x \sim \mu}[x_i x_j]\big) = 4\beta^2/\Lambda(n) = 1, \end{aligned}$$
where the penultimate equation above uses Proposition 10. Thus, we have shown that $\beta = \sqrt{\Lambda(n)}/2$. To solve for $\alpha$, we note that
$$\sum_{i=1}^{n} L_i(x) = (\alpha n + \beta)(x_1 + \dots + x_n).$$
However, since the set $\{L_1, \dots, L_n\}$ is orthonormal with respect to the distribution $\mu$, we get that
$$\mathbf{E}_{x \sim \mu}\big[(L_1(x) + \dots + L_n(x))\,(L_1(x) + \dots + L_n(x))\big] = n,$$
and consequently
$$(\alpha n + \beta)^2\,\mathbf{E}_{x \sim \mu}\big[(x_1 + \dots + x_n)\,(x_1 + \dots + x_n)\big] = n.$$
Now, using Proposition 10, we get
$$\mathbf{E}_{x \sim \mu}\big[(x_1 + \dots + x_n)\,(x_1 + \dots + x_n)\big] = \sum_{i=1}^{n} \mathbf{E}_{x \sim \mu}[x_i^2] + \sum_{i \ne j} \mathbf{E}_{x \sim \mu}[x_i x_j] = n + n(n-1)\cdot\Big(1 - \frac{4}{\Lambda(n)}\Big).$$
Thus, we get that
$$(\alpha n + \beta)^2 = \frac{n}{\,n + n(n-1)\big(1 - \tfrac{4}{\Lambda(n)}\big)\,} = \frac{\Lambda(n)}{n\Lambda(n) - 4(n-1)}.$$
Simplifying further,
$$\alpha n + \beta = \sqrt{\frac{\Lambda(n)}{n\Lambda(n) - 4(n-1)}}$$
and thus
$$\alpha = \frac{1}{n} \Big( \sqrt{\frac{\Lambda(n)}{n\Lambda(n) - 4(n-1)}} - \frac{\sqrt{\Lambda(n)}}{2} \Big),$$
as was to be shown. ∎
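The closed forms for $\alpha$ and $\beta$ can likewise be verified numerically for small $n$ by computing the Gram matrix of $L_1, \dots, L_n$ under $\mu$ exactly; it should come out as the identity. This is our own check, not part of the paper.

```python
from itertools import product
from math import comb, sqrt

def basis_gram(n):
    """Gram matrix E_mu[L_i L_j] for the L_i of Lemma 9; it equals the
    identity matrix exactly when alpha and beta are chosen correctly."""
    Q = lambda k: 1 / k + 1 / (n - k)
    Lam = sum(Q(k) for k in range(1, n))
    mu = {x: Q(x.count(1)) / (Lam * comb(n, x.count(1)))
          for x in product((-1, 1), repeat=n) if 0 < x.count(1) < n}
    beta = sqrt(Lam) / 2
    alpha = (sqrt(Lam / (n * Lam - 4 * (n - 1))) - beta) / n
    L = lambda i, x: alpha * sum(x) + beta * x[i]
    return [[sum(p * L(i, x) * L(j, x) for x, p in mu.items())
             for j in range(n)] for i in range(n)]
```

(Note that $n = 2$ is degenerate, since $n\Lambda(n) - 4(n-1) = 0$ there; the check is meaningful for $n \ge 3$.)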
Relating the Shapley values to the Fourier coefficients. The next lemma gives a useful expression for $\hat{f}(i)$ in terms of $\tilde{f}(i)$:
###### Lemma 11.
Let $f : \{-1,1\}^n \to [-1,1]$ be any bounded function. Then for each $i \in [n]$ we have
$$\hat{f}(i) = \frac{2\beta}{\Lambda(n)} \cdot \Big( \tilde{f}(i) - \frac{f(\mathbf{1}) - f(-\mathbf{1})}{n} \Big) + \frac{1}{n} \cdot \sum_{j=1}^{n} \hat{f}(j).$$
###### Proof.
Lemma 9 gives us that $L_i(x) = \alpha(x_1 + \dots + x_n) + \beta x_i$, and thus we have
$$\hat{f}(i) \equiv \mathbf{E}_{x \sim \mu}[f(x) \cdot L_i(x)] = \alpha \Big( \sum_{j=1}^{n} \mathbf{E}_{x \sim \mu}[f(x) \cdot x_j] \Big) + \beta\, \mathbf{E}_{x \sim \mu}[f(x) \cdot x_i] = \alpha \sum_{j=1}^{n} f^{*}(j) + \beta f^{*}(i). \qquad (6)$$
Summing this for $i = 1$ to $n$, we get that
$$\sum_{j=1}^{n} \hat{f}(j) = (\alpha n + \beta) \sum_{j=1}^{n} f^{*}(j). \qquad (7)$$
Plugging this into (6), we get that
$$f^{*}(i) = \frac{1}{\beta} \cdot \Big( \hat{f}(i) - \frac{\alpha}{\alpha n + \beta} \cdot \sum_{j=1}^{n} \hat{f}(j) \Big). \qquad (8)$$
Now recall that from Lemma 8, we have
$$\tilde{f}(i) = \nu(f) + \frac{\Lambda(n)}{2} \cdot \Big( \mathbf{E}_{x \sim \mu}[f(x) \cdot x_i] - \frac{1}{n}\,\mathbf{E}_{x \sim \mu}\Big[f(x) \cdot \sum_{j=1}^{n} x_j\Big] \Big) = \nu(f) + \frac{\Lambda(n)}{2} \cdot \Big( f^{*}(i) - \frac{\sum_{j=1}^{n} f^{*}(j)}{n} \Big),$$
where $\nu(f) = \frac{f(\mathbf{1}) - f(-\mathbf{1})}{n}$. Hence, combining the above with (7) and (8), we get
$$\frac{1}{\beta} \cdot \Big( \hat{f}\;\dots$$
https://etprogram.org/features.html

Features
The eT program is first and foremost a coupled cluster program, with the CCS, CC2, CCSD, CCSD(T), and CC3 methods implemented. In addition to the standard coupled cluster methods, eT has multilevel CC2, multilevel CCSD, and coupled cluster time propagation. Besides coupled cluster methods, MP2, FCI, and TDHF are implemented. Restricted closed-shell, restricted open-shell, and unrestricted Hartree-Fock calculations are available, as is the multilevel Hartree-Fock method. QM/MM is available at both the HF and CC levels.
A detailed description of the features available for each method is given below.
Hartree-Fock
The available Hartree-Fock methods are
• Restricted Hartree-Fock (RHF, ROHF)
• Unrestricted Hartree-Fock (UHF, CUHF)
• Multilevel Hartree-Fock (MLHF)
• Quantum electrodynamics Hartree-Fock (QED-HF)
Restricted and unrestricted Hartree-Fock
Restricted closed-shell Hartree-Fock (RHF) can be used for single-point calculations, geometry optimizations, properties (dipole and quadrupole), and as a reference wavefunction for coupled cluster calculations.
Restricted open-shell Hartree-Fock (ROHF) can be used for single-point calculations.
Unrestricted Hartree-Fock (UHF) can be used for single-point calculations.
Setting up a Hartree-Fock calculation
Multilevel Hartree-Fock
Multilevel Hartree-Fock can be used for single-point calculations, and as a reference wavefunction for reduced space coupled cluster calculations.
Setting up a multilevel Hartree-Fock calculation
Quantum electrodynamics Hartree-Fock
Quantum electrodynamics Hartree-Fock supports the same features as restricted closed-shell Hartree-Fock (RHF) except geometry optimization.
Coupled cluster
The implemented coupled cluster methods in eT are
• Standard methods: CCS and CCSD
• Perturbative methods: CC2, CC3 and CCSD(T)
• Multilevel CC2 and CCSD
Additionally, the code can perform time-propagation of some of the implemented coupled cluster wavefunctions.
The features implemented for the coupled cluster methods vary somewhat, and are detailed below.
Setting up a coupled cluster calculation
CCS, CC2, CCSD and CC3
For CCS, CC2, CCSD, and CC3, eT offers ground and excited state calculations in addition to dipole and quadrupole moments, and EOM oscillator strengths.
For CCS, CC2, and CCSD, EOM polarizabilities are also implemented, as are linear-response oscillator strengths and polarizabilities.
For CCS, CC2, and CCSD, triplet excitation energies are implemented.
CCSD(T)
Ground state energies at the CCSD(T) level of theory are implemented.
Low-memory CC2
In eT, there are two versions of the CC2 code. The standard CC2 code has a memory requirement proportional to $n_o^2 n_v^2$, where $n_o$ is the number of occupied orbitals and $n_v$ is the number of virtual orbitals. The low-memory CC2 implementation, by contrast, has an $N^2$ memory requirement, where $N$ is the total number of orbitals.
Currently, only ground and excited state energies are available with the low-memory CC2 code.
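As a rough illustration of the difference between the two scalings, with made-up orbital counts and assuming 8-byte double-precision amplitudes:

```python
# Illustrative (hypothetical) system size, not taken from the eT docs.
n_o, n_v = 50, 450            # occupied / virtual orbitals
N = n_o + n_v                 # total number of orbitals
bytes_per_double = 8

standard_cc2 = n_o**2 * n_v**2 * bytes_per_double   # doubles-amplitude storage
low_memory_cc2 = N**2 * bytes_per_double            # N^2 storage
```

For this hypothetical system the standard code needs about 4 GB for the doubles amplitudes alone, while the low-memory variant needs about 2 MB.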
Multilevel coupled cluster
The multilevel CC2 (MLCC2) and multilevel CCSD (MLCCSD) methods are available with correlated natural transition orbitals, Cholesky orbitals (occupied and virtual), and projected atomic orbitals.
Currently, only ground and excited states are available at the MLCCSD level of theory.
Setting up a multilevel coupled cluster calculation
Time-dependent coupled cluster
Real-time propagation is available for CCS and CCSD methods. It solves the differential equations describing the time evolution of cluster amplitudes and Lagrange multipliers. The available integrators are
• Euler
• Gauss-Legendre (with order 2, 4 and 6)
• Runge-Kutta (with order 4)
The time dependent quantities that are available as output are:
• Amplitudes
• Multipliers
• Density matrix
• Energy
• Electric field
• Dipole moment
In addition, visualizable time-dependent density and spectra given by the Fast Fourier Transform of the dipole moment and of the electric field can be requested as output.
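As a generic illustration of the kind of propagation step involved (this is not eT's actual implementation, which integrates the cluster-amplitude and multiplier equations in Fortran), a classical fourth-order Runge-Kutta step for a state vector y with y' = f(t, y) can be sketched as:

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y),
    where y is a list of floats and f returns a list of derivatives."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
```

Applied to y' = y over [0, 1] with a modest step size, the result matches e to many digits, reflecting the scheme's fourth-order accuracy.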
Environments
The available QM/MM methods are
• Electrostatic embedding (non-polarizable)
• Polarizable QM/Fluctuating charges
• Polarizable Continuum Model
All of these methods can be coupled with either HF or CC wavefunctions. In the case of QM/FQ, only the HF ground-state Fock matrix and MOs are affected by the embedding.
Cholesky decomposition of the ERIs
The coupled cluster code is based on Cholesky decomposed electron repulsion integrals (ERIs). For sufficiently small systems, T1-transformed ERIs are constructed from the Cholesky vectors and stored in memory. For systems where the ERIs cannot be placed in memory, the T1-transformed Cholesky vectors are stored in memory if possible and ERIs are constructed from these vectors on the fly. For larger systems, the Cholesky vectors are stored on disk.
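Schematically, the underlying algorithm is a pivoted (incomplete) Cholesky factorization of a positive semidefinite matrix, which for the ERI matrix in a combined pair index yields a low-rank factorization to a target tolerance. A minimal pure-Python sketch, for illustration only (eT's production implementation is considerably more elaborate):

```python
from math import sqrt

def pivoted_cholesky(M, tol=1e-10):
    """Pivoted Cholesky of a symmetric PSD matrix M (list of lists),
    stopping once the largest remaining diagonal element drops below
    tol.  Returns Cholesky vectors l_k with M ~= sum_k outer(l_k, l_k)."""
    n = len(M)
    R = [row[:] for row in M]          # residual matrix, updated in place
    vecs = []
    while True:
        p = max(range(n), key=lambda i: R[i][i])   # largest residual diagonal
        if R[p][p] <= tol:
            break
        piv = sqrt(R[p][p])
        l = [R[i][p] / piv for i in range(n)]
        for i in range(n):
            for j in range(n):
                R[i][j] -= l[i] * l[j]             # subtract rank-1 update
        vecs.append(l)
    return vecs
```

Because the pivot is always the largest remaining diagonal element, a rank-deficient PSD matrix is factorized with only as many vectors as its numerical rank, which is exactly what makes the ERI decomposition compact.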
Visualization
Ground state densities can be written to .plt or .cube files that are readable by Chimera for both Hartree-Fock and coupled cluster. For coupled cluster, it is also possible to plot transition densities.
FCI
Full CI calculations are available for both closed (RHF reference) and open (ROHF reference) shells. In addition, dipole and quadrupole moments can be calculated for the ground state.
MP2
Ground state energies at the MP2 level of theory are implemented.
Time-dependent Hartree-Fock
Excitation energies and oscillator strengths at the TDHF level of theory are available within either the random-phase approximation (RPA) or the Tamm-Dancoff approximation (TDA). In addition, static and frequency-dependent polarizabilities can be obtained.
Quantum electrodynamics time-dependent Hartree-Fock
Quantum electrodynamics time-dependent Hartree-Fock supports the same features as time-dependent Hartree-Fock except the polarizabilities. In addition, a measure of the photon character of each excitation is computed.
http://gmatclub.com/forum/starting-to-think-about-business-school-133758.html
Intern
Joined: 03 Jun 2012
Posts: 3
03 Jun 2012, 23:08
Hello everyone, I've been browsing this forum for a while as I'm interested in getting my mba. I'm wondering about where I stand as far as schools I can get accepted to. I have a ugrad gpa of 3.0 in finance from a state university with a fairly good business program. I was president of my fraternity which had the highest gpa on campus. I have 1 year of experience working in real estate and 2 years of experience in retail banking and have had success in both. I haven't taken the gmat or gre yet as I'm just starting to research, however I have always done very well on standardized testing. I'm male and not a minority btw. Would I stand any chance for a tier 1 or 2 school? I have also been considering taking some mba classes as a non matriculating student locally, assuming I had a high gpa in that how much would that boost my chances?
VP
Status: Current Student
Joined: 24 Aug 2010
Posts: 1345
Location: United States
GMAT 1: 710 Q48 V40
WE: Sales (Consumer Products)
04 Jun 2012, 06:54
If you're going to take classes to create an alternate transcript, don't take MBA ones. The credits won't transfer to whatever program you matriculate into. You're fine taking college level courses in stats, economics, calculus, etc.
On first glance you seem to have a viable shot at a Tier 2 school at the least, maybe the lower end of tier 1 too. It would really depend on your roles within real estate and retail banking and what bank you work for. Your best indication of where you could end up is where your peer set gained admission. If no one at your employer is getting into top 25 schools it's a safe bet that you won't be blazing that trail. If several have gone to tier 1 schools then you should have a shot at it too. Business schools especially the top 10 tend to choose people based on the selectivity of their employer and the job they have at that employer. I would say that's even more important than the GMAT. You can get an 800 on the GMAT but if you're working as a bank teller you're not getting into a top school. Hope this gives you a point of reference from which to start.
Intern
Joined: 03 Jun 2012
Posts: 3
04 Jun 2012, 14:21
Thanks for the reply. I'm a little confused on your first point though of taking undergrad classes rather than mba level, wouldn't I be better off taking a few grad level classes and getting a high gpa in them then undergrad? That way it would show that I can be successful at a graduate level? Also with my degree I have already taken quite a bit of econ, stats, etc. so it would seem to overlap. My work in real estate was as an agent for a real estate brokerage and in banking it was first as a personal banker then promotion to assistant branch manager, I am 25 btw. It's one of largest and most respected banks in the country. A program that is looking interesting to me is the UNC Chapel Hill MBA with concentration in real estate. That would seem to be a good fit and seems to hover around lower tier 1 and upper tier 2.
Does anyone have any advice as to what other things may improve my chances such as pursuing certifications or anything like that?
Director
Status: Go Blue!
Joined: 03 Jun 2010
Posts: 685
Location: United States (MO)
Concentration: Nonprofit, General Management
Schools: Michigan (Ross) - Class of 2015
GMAT 1: 740 Q47 V45
GRE 1: 336 Q169 V167
GPA: 3.22
WE: Information Technology (Manufacturing)
Followers: 17
Kudos [?]: 147 [0], given: 249
### Show Tags
04 Jun 2012, 14:39
spookys6 wrote:
Thanks for the reply. I'm a little confused on your first point though of taking undergrad classes rather than mba level, wouldn't I be better off taking a few grad level classes and getting a high gpa in them then undergrad? That way it would show that I can be successful at a graduate level? Also with my degree I have already taken quite a bit of econ, stats, etc. so it would seem to overlap. My work in real estate was as an agent for a real estate brokerage and in banking it was first as a personal banker then promotion to assistant branch manager, I am 25 btw. It's one of largest and most respected banks in the country. A program that is looking interesting to me is the UNC Chapel Hill MBA with concentration in real estate. That would seem to be a good fit and seems to hover around lower tier 1 and upper tier 2.
Does anyone have any advice as to what other things may improve my chances such as pursuing certifications or anything like that?
Unfortunately, there's a perception of grade inflation with graduate classes, so that stigma is one to avoid. I do have to make a slight adjustment to cheetarah's note about the transfer of credits. While pre-MBA credits rarely count toward your degree (I have seen that actually though, Kellogg's PT MBA is one example), they can help you waive out of core classes. Just be aware that the waiver requirements can vary greatly among schools.
I knew I was tied down for two years, so I did a full blown Master of Accounting as an alternate transcript, which contributed to me waiving out of 5 core classes at Ross. That's given me room to add a lot of electives, allowing me to pick more concentrations or go deeper into specific ones. I also don't feel like I wasted time or money taking marginally beneficial coursework.
VP
Status: Current Student
Joined: 24 Aug 2010
Posts: 1345
Location: United States
GMAT 1: 710 Q48 V40
WE: Sales (Consumer Products)
Followers: 108
Kudos [?]: 415 [0], given: 73
### Show Tags
04 Jun 2012, 18:00
Yes, method is right, taking relevant courses will allow you to waive core classes (usually there is a test). Waiving a class doesn't give you credits, but it will allow you to replace them with electives (like method said). If you've already taken classes like stats and econ and done well in them (at least a B) then do not retake them. Honestly a 3.0 isn't a bad GPA. It's a B average. B's are good, they just don't show excellence. If you were a straight B student (i.e. no failing grades or Ds) then I don't really think there is much to really offset. I am only saying, don't pursue the Masters in an attempt to mitigate your GPA. Do it because you want the degree. If you still feel you need an alternative transcript maybe take some upper level undergrad classes in the quantitative areas previously mentioned. Maybe econometrics or decision sciences. Hope this helps.
Posted from my mobile device
_________________
The Brain Dump - From Low GPA to Top MBA (Updated September 1, 2013) - A Few of My Favorite Things--> http://cheetarah1980.blogspot.com
Intern
Joined: 03 Jun 2012
Posts: 3
Followers: 0
Kudos [?]: 0 [0], given: 0
### Show Tags
04 Jun 2012, 22:04
Well honestly, I'm not concerned with taking courses for credit, just courses to hopefully improve my application. I don't have much to improve in statistics and econ (I was basically a B and A student in each). I can do classes in either undergrad or grad but I was pretty consistent in my undergrad... there is no glaring area where I feel I need improvement. What I was considering was taking a few MBA classes at the local university (which is also where I received my BS in finance) to both improve my transcript and give me a feeling for how an MBA suited me. Now I am aware of the grade inflation common in grad schools but I feel that if I get a respectable GPA it will be considered no matter what and give me a feeling of what I'm in for. Also, and I know this will open a HUGE can of worms, but my local university is a regionally respected up-and-coming business school, unranked however, and will cost me around 10k compared to whatever other schools will cost. Now for me I can deal with the financial burden of just about anywhere, but unless the school gives me a big advantage, obviously I should go with my alma mater.
Manager
Joined: 27 Jul 2007
Posts: 51
Location: United States (TX)
Concentration: Finance, Entrepreneurship
GPA: 2.42
Followers: 0
Kudos [?]: 6 [0], given: 7
Re: Starting to think about business school [#permalink]
### Show Tags
04 Jun 2012, 22:14
What they are saying is correct! I work retail banking and it's very important that you play up a lot of your leadership qualities and quant skills. So take one or two courses at a community college (it shows your commitment) and study for the GMAT/GRE seriously. I won't say don't focus on Tier 2 schools but your school choices should be based on your long term goals. Remember that how you feel now probably won't be the same 3-5 years from now. So reach for the best to limit the regrets!!
Posted from my mobile device
_________________
Follow my Blog as I attempt to pay it forward!!! http://mbatechfinance.tumblr.com/
Intern
Joined: 11 Jun 2012
Posts: 5
Followers: 0
Kudos [?]: 0 [0], given: 0
Re: Starting to think about business school [#permalink]
### Show Tags
11 Jun 2012, 20:18
TRISTLW wrote:
What they are saying is correct! I work retail banking and it's very important that you play up a lot of your leadership qualities and quant skills. So take one or two courses at a community college (it shows your commitment) and study for the GMAT/GRE seriously. I won't say don't focus on Tier 2 schools but your school choices should be based on your long term goals. Remember that how you feel now probably won't be the same 3-5 years from now. So reach for the best to limit the regrets!!
That's precise and gives the importance of choosing a long term goal, as our thoughts would change in the long run. Something which I thought to share.
http://www.physicsforums.com/showthread.php?p=3899359

# Is the product of P actually wμg and what about
by Probie1
Tags: product
P: 38 P = mv so do this mean that the product of v is μg and the product of m is weight? So it could be written P = wμg How is this formula derived Vf = √(Vi^2 + (2ad))
PF Patron
HW Helper
Thanks
P: 25,467
Hi Probie1!
Quote by Probie1 P = mv so do this mean that the product of v is μg and the product of m is weight? So it could be written P = wμg
Sorry, I've no idea what you're talking about
what is the context (and what do you mean by "product")?
P: 38 Does product not mean... umm the make up... it is part of or makes up? I guess the context of all this is I am trying to understand how formulas come about. P = mv so do this mean that the product of v is μg and the product of m is weight? So it could be written P = wμg This is another question. How is this formula derived Vf = √(Vi^2 + (2ad))
PF Patron
Thanks
Emeritus
P: 38,395
## Is the product of P actually wμg and what about
Mathematically "product" means the result of multiplying numbers. It simply doesn't make sense to talk about the "product" of a single number as in "product of v is μg" or "the product of m is weight". Perhaps you mean it the other way- weight is the product of mg. That is "mass times the acceleration due to gravity of an object is the force on that object due to gravity"- by definition its "weight". I'm not sure what you could mean by "v is the product μg", if that is what you intend, because you have not told us what μ is and it is not a standard symbol. Sometimes μ is used for the "coefficient of drag" but that doesn't make sense here. Assuming g is the acceleration due to gravity and v is velocity, their standard meanings, since v would have units of "meters per second" and g "meters per seconds squared", μ would have to have units of "seconds"- it would have to be a "time". Is that correct?
Mentor
P: 19,681
Quote by Probie1 This is another question. How is this formula derived Vf = √(Vi^2 + (2ad))
What other equations of motion do you know?
PF Patron
HW Helper
Thanks
P: 25,467
Hi Probie1!
(try using the X² and X₂ buttons just above the Reply box )
Quote by Probie1 How is this formula derived Vf = √(Vi^2 + (2ad))
This is one of the standard equations for constant acceleration. Start from the definition of constant acceleration, dv/dt = a.
Then, integrating, v = at + vi.
And integrating again, d = 1/2 at² + vi t.
Can you finish the proof?
Quote by Probie1 Does product not mean... umm the make up... it is part of or makes up?
As HallsofIvy says, no.
What did you mean by P m v m and g ?
P: 38
Can you finish the proof? d = 1/2 at² + vi t.
D= at + vi2
Alright... stop laughing.
What did you mean by P m v m and g ?
P = momentum
m=mass
v=velocity
g = gravity
μ = friction
w= weight
I thought that if P=mv then v = a = μg but then I remembered where I left my brain because a = change in velocity over a change in time. So it can't possibly be the way I was thinking. So just forget I was so stupid to write that down.
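As a sanity check, the two integrated constant-acceleration equations discussed above can be combined numerically to recover Vf = √(Vi² + 2ad). A minimal Python sketch with arbitrarily chosen values:

```python
import math

# Arbitrary constant acceleration, initial velocity, and elapsed time
a, vi, t = 3.0, 2.0, 1.7

vf = a * t + vi               # v = at + v_i
d = 0.5 * a * t**2 + vi * t   # d = (1/2)at^2 + v_i t

# Eliminating t between the two equations gives vf^2 = vi^2 + 2ad
assert math.isclose(vf**2, vi**2 + 2 * a * d)
```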
http://mathhelpforum.com/trigonometry/48775-trigo-find-angles.html

Math Help - trigo >find angles
1. trigo >find angles
0° to 350°, find A
1. sin5A=sin3A
2.cos4A+cos2A=cos3A
Inverse Trigonometric Functions
Show, without using a calculator, that Tan^(-1)(4) - Tan^(-1)(3/5) = π/4
2. Originally Posted by sanikui
0° to 350°, find A
1. sin5A=sin3A
[snip]
General solution:
Case 1: $5A = 3A + 2n \pi$ where n is an integer. Therefore $A = \, ....$
Case 2: $5A = (\pi - 3A) + 2n\pi$ where n is an integer. Therefore $A = \, ....$
In each case substitute values of n that give values of A that lie in the given domain
3. Originally Posted by sanikui
0° to 350°, find A
[snip]
2.cos4A+cos2A=cos3A
[snip]
You know that $\cos (x + y) + \cos (x - y) = 2 \cos x \cos y$. Substitute x = 3A and y = A: $\cos (4A) + \cos (2A) = 2 \cos (3A) \cos (A)$. Therefore the equation can be re-written as:
$2 \cos (3A) \cos (A) = \cos (3A) \Rightarrow \cos (3A) ( 2 \cos (A) - 1) = 0$.
I'm sure you can continue the solution from here.
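For reference, one way to continue the factorisation: cos(3A) = 0 gives A = 30° + 60°n, and cos A = 1/2 gives A = 60° or 300° on the stated domain. A small Python sketch checking these candidates against the original equation:

```python
import math

# Candidate solutions on [0, 350] degrees from cos(3A)(2cos A - 1) = 0
sols = sorted({30 + 60 * n for n in range(6)} | {60, 300})

# Each candidate should satisfy cos(4A) + cos(2A) = cos(3A)
for a in sols:
    r = math.radians(a)
    assert math.isclose(math.cos(4 * r) + math.cos(2 * r),
                        math.cos(3 * r), abs_tol=1e-9)
```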
4. Originally Posted by sanikui
[snip]
Inverse Trigonometric Functions
Show, without using a calculator, that Tan^(-1)(4) - Tan^(-1)(3/5) = π/4
Let $\tan^{-1} (4) = \alpha \Rightarrow \tan \alpha = 4$.
Let $\tan^{-1} \left( \frac{3}{5} \right) = \beta \Rightarrow \tan \beta = \frac{3}{5}$.
Then you have to show that $\alpha - \beta = \frac{\pi}{4}$.
Now note that $\tan (\alpha -\beta) = \frac{\tan \alpha - \tan \beta}{1 + \tan \alpha \tan \beta} = \frac{4 - \frac{3}{5}}{1 + (4)\left( \frac{3}{5}\right)} = \frac{20 - 3}{5 + 12} = 1$.
Therefore $\alpha - \beta = \tan^{-1} (1)$.
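The same chain of equalities can be confirmed numerically (a minimal Python sketch):

```python
import math

alpha = math.atan(4)       # alpha = Tan^-1(4)
beta = math.atan(3 / 5)    # beta = Tan^-1(3/5)

# tan(alpha - beta) = 1, hence alpha - beta = pi/4
assert math.isclose(math.tan(alpha - beta), 1.0)
assert math.isclose(alpha - beta, math.pi / 4)
```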
5. Originally Posted by mr fantastic
General solution:
Case 1: $5A = 3A + 2n \pi$ where n is an integer. Therefore $A = \, ....$
Case 2: $5A = (\pi - 3A) + 2n\pi$ where n is an integer. Therefore $A = \, ....$
In each case substitute values of n that give values of A that lie in the given domain
This I still can't understand... can you explain more clearly?
The next 2 questions I understand. Thanks.
6. Originally Posted by mr fantastic
General solution:
Case 1: $5A = 3A + 2n \pi$ where n is an integer. Therefore $A = \, ....$
Case 2: $5A = (\pi - 3A) + 2n\pi$ where n is an integer. Therefore $A = \, ....$
In each case substitute values of n that give values of A that lie in the given domain
Case 1: It should be obvious that if $\sin (5A) = \sin (3A)$ then either 5A = 3A or the angles 5A and 3A differ by an integer multiple of $360^\circ$.
So $5A = 3A + 360^\circ n$ where n is an integer.
Case 2: You should know that $\sin x = \sin (180^\circ - x)$. Therefore $\sin (3A) = \sin (180^\circ - 3A)$.
So the equation can be written $\sin (5A) = \sin (180^\circ - 3A)$.
Using the same logic that gave Case 1, it should be obvious that either $5A = (180^\circ - 3A)$ or the angles $5A$ and $(180^\circ - 3A)$ differ by an integer multiple of $360^\circ$.
So $5A = (180^\circ - 3A) + 360^\circ n$ where n is an integer.
In each case, substitute the values of n into the general solutions that give values of A lying in the domain $0^\circ \leq A \leq 350^\circ$.
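Substituting integers n into the two general solutions enumerates every A in the domain. A small Python sketch verifying each candidate (the range of n is chosen generously; out-of-domain values are discarded):

```python
import math

solutions = set()
for n in range(-10, 11):
    a1 = 180 * n                # Case 1: 2A = 360n  =>  A = 180n
    a2 = (180 + 360 * n) / 8    # Case 2: 8A = 180 + 360n
    for a in (a1, a2):
        if 0 <= a <= 350:
            solutions.add(a)

# Every candidate should satisfy sin(5A) = sin(3A)
for a in solutions:
    r = math.radians(a)
    assert math.isclose(math.sin(5 * r), math.sin(3 * r), abs_tol=1e-9)
```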
7. Oh I see... now I know... thanks, Mr. Fantastic!
https://mailman.ntg.nl/pipermail/ntg-context/2022/105649.html | śrīrāma citturs at gmail.com
Tue May 10 07:10:49 CEST 2022
On 5/9/22 10:06 PM Mikael Sundqvist via ntg-context wrote:
> On Mon, May 9, 2022 at 6:16 PM Alexandre Christe via ntg-context
>
> <ntg-context at ntg.nl> wrote:
> > Sadly I have to report the bibliography is still broken. Could someone
> > else confirm? It's an unfortunate timing since I need to hand in some
> > report really soon.
> I can confirm that there is a problem. But in the example below it
> goes away if I uncomment the \usebtxdefinitions[aps].
>
> /Mikael
>
> \startbuffer[bib]
> @ELECTRONIC{hh2010,
> author = {Hans Hagen},
> year = {2010},
> title = {Metafun. \CONTEXT\ mkiv},
> }
> \stopbuffer
>
> \usebtxdataset[bib.buffer]
>
> % \usebtxdefinitions[aps]
>
> \starttext
> \cite[hh2010]
> \placelistofpublications
> \stoptext
Hi Mikael (and others),
[I have not upgraded yet; version: 2022.05.02 16:19, so I don't know if
anything has changed in the new upload, but:]
In your example if you change the tag 'hh2010' to 'HansHagen2010' or anything
with uppercase ASCII chars, the bibliography entries will not be correctly
rendered even if you un-comment '\usebtxdefinitions[aps]' line. [Please see
attached output example]
This issue seems to have originated in the version after 2022.04.15 when 'tag'
and 'field' values in publ-ini.lua were string.lower()'ed. Removing those
statements from the file seems to be a workaround. On the other hand, if the
tag entries are all lower-case (as they were in your example), then there
seems to be no issue. I had reported this earlier, please see:
https://mailman.ntg.nl/pipermail/ntg-context/2022/105585.html
Thanks,
Sreeram
-------------- next part --------------
A non-text attachment was scrubbed...
Name: btx-issue.pdf
Type: application/pdf
Size: 7646 bytes
Desc: not available
URL: <http://mailman.ntg.nl/pipermail/ntg-context/attachments/20220510/d13afa28/attachment-0001.pdf>
http://export.arxiv.org/abs/2001.09662
# Title: Nuclear structure studies performed using the (18O,16O) two-neutron transfer reactions
Abstract: Excitation energy spectra and absolute cross section angular distributions were measured for the 13C(18O,16O)15C two-neutron transfer reaction at 84 MeV incident energy. This reaction selectively populates two-neutron configurations in the states of the residual nucleus. Exact finite-range coupled reaction channel calculations are used to analyse the data. Two approaches are discussed: the extreme cluster and the newly introduced microscopic cluster. The latter makes use of spectroscopic amplitudes in the centre of mass reference frame, derived from shell-model calculations using the Moshinsky transformation brackets. The results describe well the experimental cross section and highlight cluster configurations in the involved wave functions.
Subjects: Nuclear Experiment (nucl-ex); Nuclear Theory (nucl-th)
Cite as: arXiv:2001.09662 [nucl-ex] (or arXiv:2001.09662v1 [nucl-ex] for this version)
## Submission history
From: Manuela Cavallaro [view email]
[v1] Mon, 27 Jan 2020 10:13:54 GMT (294kb)
http://math.stackexchange.com/questions/285906/proving-the-inverse-of-a-greatest-integer-function/285913

# Proving the Inverse of a Greatest Integer Function
This was a question on a recent Term Test. I was not sure how to prove the second part of the question and my professor never shows the answers, even after the fact. If someone could please show me how it is solved. I never know how to approach the INT function.
This is how I answered it, but like I said, I am really not sure: the range of f is (-∞, ∞). To find the inverse, we need to consider 2 cases, in which x is an integer and x is not an integer. When x is an integer, f(x) = 2x, so its inverse becomes g(x) = x/2. When x is not an integer, the function f becomes f(x) = x + [x], so there is no inverse for the function when it is not an integer, since it cannot form a relation. So f^(-1)(x) = x/2; x is an integer.
-
To prove injectivity and find the inverse all you need is to observe that
$$\{y\}=\{ x+ [x] \}=\{x\} \,.$$ $$[y]=[x+[x]]=[x]+[x]$$ and
$$2[x] \leq y < 2[x]+1 \,.$$
The first two relations show that if $f(x_1)=f(x_2)$ then $x_1,x_2$ have the same integral and fractional parts.
Then, if you want to solve $f(x)=y$ for $y$ in the range of $f$, the last relation shows that $[f(x)]$ is an even integer, so $[y]=2m$. Now you can find the integral and fractional part of $x$ from the first two equations.
-
This problem needed to be solved without integration because we had not covered the integral yet. – Dick Jan 25 '13 at 0:29
@Dicky "integral part"=greatest integer function. It has nothing to do with integration. – N. S. Jan 25 '13 at 1:45
Sorry about that. – Dick Jan 25 '13 at 17:37
It might help a lot to try to visualize the function (or get the computer to graph it).
We can show that $x + [x]$ is injective by the fact it is strictly increasing.
By that I mean when $x < y$, $x + [x] < y + [y]$ (because $[x] \le [y]$).
The "range" $R$ is usually called the image: it's simply the unit intervals $[0,1) \cup [2,3) \cup [4,5) \cup \ldots$. To prove this use that $[x]$ is constantly $n$ on $[n,n+1)$.
This proves it has a left inverse $f^{-1} : R \to \mathbb R$.
-
range of $f: x+[x]$:
if $x\in[0,1)\;\;\;\;\;\;\;\;x+[x] \in [0,1)$
if $x\in[-1,0)\;\;\;\;\;\;\;\;x+[x] \in [-2,-1)$
Observe that all these intervals can be shifted by 2 to both left and right by adding or subtracting 1 to $x$.
You will see there are gaps that cannot be filled, find a pattern and summarize it.
$f^{-1}$ exists because of injectivity, which comes from monotonicity.
-
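The monotonicity argument and the image intervals given in the answers can be spot-checked numerically (a minimal Python sketch, using floor for the greatest integer function [x]):

```python
import math

def f(x):
    # f(x) = x + [x], with [x] the greatest integer <= x (floor)
    return x + math.floor(x)

# Strictly increasing => injective: sample many points and check order
xs = [i / 100 for i in range(-300, 300)]
ys = [f(x) for x in xs]
assert all(a < b for a, b in zip(ys, ys[1:]))

# On [n, n+1) the image is [2n, 2n+1): spot-check a few values
assert f(0.5) == 0.5    # in [0, 1)
assert f(1.25) == 2.25  # in [2, 3)
assert f(-0.5) == -1.5  # in [-2, -1)
```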
http://math.stackexchange.com/questions/258914/s-unit-equations-in-cyclotomic-fields

# S-Unit equations in cyclotomic fields
By a Siegel's result, one knows that there exist only finitely many solutions of the equation: $$x+y=1$$ where the unknowns $x$ and $y$ are units in the ring of integers of a cyclotomic field. Do you know an algorithm that can describe all the solutions?
-
http://www.hulver.com/scoop/story/2006/6/19/51135/5432 | By R Mutt (Mon Jun 19, 2006 at 12:11:35 AM EST) MLP (all tags)
Nude urban exploration art [:o MeFi NSFW]
Which is more violent: Bible or Koran? [PU :(]
Photoshop. Movie scenes as medieval woodcuts; scout merit badges [:o BB )]
School criticized for electroshock punishment [:( MeFi]
Transplant Coordinator blog [:(]
Key:
[MeFi] = Stolen from Metafilter
[/.] = Stolen from Slashdot
[M] = Stolen from Memepool
[BX] = Stolen from Blogdex
[X.] = Stolen from Christdot
[)] = Stolen from Monkeyfilter
[B] = Stolen from B3ta
[GG] = Stolen from Green Gabbro
[BFB] = Stolen from Big Fat Blog
[BB] = Stolen from Boing Boing
[PU] = Stolen from PopURLs
[[:)] = Needs sound
[:(] = Serious
[:)] = Amusing
[;)] = Ironic
[:o] = Strange
[*] = Flash
[#] = Free registration required
[NSFW] = Not Safe For Work
[NSFWFUP] = Not Safe For Work For Ultra-Prudish
[(UK)] = UK-centric
[LL] = Late or repeated link
shocking by martingale (4.00 / 2) #1 Mon Jun 19, 2006 at 12:27:41 AM EST
It's not that bad, really. Those kids are smart. Creative. Committed. Getting zinged is an act of asymmetrical warfare against the school.
--
$E(X_t|F_s) = X_s,\quad t > s$
Those kids have it easy by Rogerborg (4.00 / 1) #2 Mon Jun 19, 2006 at 12:55:22 AM EST
Pretty much every Dusty Asspit Cult Manual recommends stoning recalcitrant children to death, although we must of course read them in context, which I understand means redacting the 95% of them that make us uncomfortable about slavishly obeying the remaining 5% of happy clappy peacenik crap.
So I'm sure that if Mohammed Abraham Christ had had access to electric shock belts, he'd have used to them to drive out the demons in a humane and inclusive fashion. You know, before going on to bathe his feet in the blood of the infidels.
-
Metus amatores matrum compescit, non clementia.
Dino! by martingale (2.00 / 0) #3 Mon Jun 19, 2006 at 12:58:31 AM EST
Dino de Laurentis! Is that you? Fancy meeting you here!
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
The comparisson betwen the Quran and the Bible by lm (4.00 / 3) #4 Mon Jun 19, 2006 at 02:01:59 AM EST
A good argument could be made that either book is "the most violent and cruel book ever written."
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
Or too much (of the Bible) by DesiredUsername (4.00 / 1) #8 Mon Jun 19, 2006 at 02:48:52 AM EST
The set of books that feature children getting mauled by bears or nearly stabbed by their parents, and in which genocide is advocated as holy, is already pretty small, but the subset wherein the perpetrators are the good guys and praised, non-ironically, by the author is almost empty.
---
Now accepting suggestions for a new sigline
[ Parent ]
You've never read Hesiod by lm (2.00 / 0) #10 Mon Jun 19, 2006 at 03:29:44 AM EST
How about the hero of a tale cutting off someone's genitals and swallowing them whole?
Or, apparently, the Song of Roland.
Or oodles of other works that, by far, contain more violence than either the Bible or the Quran. I think you greatly underestimate the quantity of literature that contains graphic, wanton and cruel violence.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
What's the total Violence Factor? by DesiredUsername (2.00 / 0) #11 Mon Jun 19, 2006 at 03:39:10 AM EST
number * intensity?
In any case, I'd say that a book that features castration is less violent than one in which thousands of innocent people are slaughtered because an Invisible Sky Giant told someone they now owned a piece of land.
---
Now accepting suggestions for a new sigline
[ Parent ]
Maybe, maybe not. by komet (2.00 / 0) #12 Mon Jun 19, 2006 at 03:53:11 AM EST
But there can be no doubt that the Bible and the Quran have each caused more violence than any other book.
Oh wait. My troll pass expired in 2003? Never mind then.
--
<ni> komet: You are functionally illiterate as regards trashy erotica.
[ Parent ]
See the Internet Registrar by DesiredUsername (4.00 / 1) #13 Mon Jun 19, 2006 at 03:56:26 AM EST
Better hurry, it's going to get busy as schools let out for summer.
---
Now accepting suggestions for a new sigline
[ Parent ]
Dunno by jump the ladder (4.00 / 1) #14 Mon Jun 19, 2006 at 04:05:30 AM EST
Das Kapital has caused hundreds of millions of deaths in the 20th Century.
[ Parent ]
nah by martingale (4.00 / 1) #15 Mon Jun 19, 2006 at 04:14:31 AM EST
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
the same can be said for the bible. by garlic (2.00 / 0) #27 Mon Jun 19, 2006 at 06:15:49 AM EST
and probably the koran.
[ Parent ]
Lots of people by jump the ladder (2.00 / 0) #29 Mon Jun 19, 2006 at 06:20:32 AM EST
Read the Koran, but as the officially approved version has to be in classical Arabic, not many actually understand it.
[ Parent ]
nonsense by martingale (2.00 / 0) #31 Mon Jun 19, 2006 at 04:02:54 PM EST
There are no regular sunday preachings of Das Kapital (*), whereas reading or being read to from the bible at least once a week if not more has existed for millennia in the west. There's really no comparison.
(*) DK is really the wrong book to be making into some kind of cult manifesto. The correct book would be Marx+Engels' The Communist Manifesto, which is much shorter and much less scholarly.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
not quite by garlic (2.00 / 0) #33 Mon Jun 19, 2006 at 04:17:18 PM EST
but close enough to blame everything on religion.
[ Parent ]
if you want to play the blame game by martingale (2.00 / 0) #34 Mon Jun 19, 2006 at 04:29:56 PM EST
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
Lenin and Stalin on trial before the ISG: by MillMan (4.00 / 1) #19 Mon Jun 19, 2006 at 05:08:32 AM EST
[ Parent ]
I dunno by lm (4.00 / 1) #21 Mon Jun 19, 2006 at 05:17:21 AM EST
It seems to me that advances in modern science leading to machine guns, bombs of unusually large sizes, chemical weapons and the like has caused the deaths of far more people than either the Bible or the Quran.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
ridiculous by martingale (2.00 / 0) #32 Mon Jun 19, 2006 at 04:07:21 PM EST
Tools are the means, while religion is a cause.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
Question begger by lm (2.00 / 0) #35 Mon Jun 19, 2006 at 05:14:45 PM EST
You're assuming that religion isn't a tool cf. Plato and Marsilius of Padua.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
like a gun or an explosive? by martingale (2.00 / 0) #36 Mon Jun 19, 2006 at 07:37:09 PM EST
You have a warped sense of what a tool is, don't you? Show me how "religion as a tool" directly interacts with physical reality.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
So you're saying that mathematics isn't a tool? by lm (2.00 / 0) #37 Tue Jun 20, 2006 at 02:21:31 AM EST
Marsilius of Padua argued that religion ``only serves to ensure the goodness of human acts both individual and civil, on which depend almost completely the quiet or tranquillity of communities and finally the sufficient life in the present world.'' He held that philosophers made up religions as a method to control the masses in the same way that Plato suggested that the Noble Lie be used. Marsilius's idea is contingent on the notion that most people simply can't understand being good for goodness' sake alone and need the promises and threats of the afterlife to keep them in line.
Of course he did make an exception for Christianity which he likened to a cancer. Christianity, alone of the religions said he, was started by people who actually think it true.
And if you look at the vast majority of religions, I think it clear that Marsilius may have a point. Most religions certainly seem designed to keep their inventors in control which, at least in the minds of the inventors, seems to be a good way to keep the peace.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
yes, it's not by martingale (2.00 / 0) #38 Tue Jun 20, 2006 at 02:51:20 AM EST
Again, you're playing on the metaphoric. Mathematics is a tool as much as religion, or law. We can talk of a "mathematical toolbox" if you like, but it remains metaphoric. Suppose you use a mathematical result as a "tool" to solve a mathematical problem; then you still have a nonphysical end-result: an idea, a truth. The biggest effect it might have is that it makes you feel good if you witness it. Still, it is rather different to an airplane, or a tank, both of which can also make you feel good (or not) upon witnessing one.
I have no quarrel with Marsilius on his conception of religion, but I note that this viewpoint still doesn't make religion a tool. The greatest lie still requires consent to be effective, while a tool doesn't. A drill doesn't need consent to bore a hole in a wall. An idea can be ignored by willpower.
You simply can't compare the ideas in a book with war machines.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
You're being a sophist by lm (2.00 / 0) #39 Tue Jun 20, 2006 at 03:22:01 AM EST
A tool is an invention that helps to solve a problem. Sometimes that problem is physical. Sometimes that problem is social. Sometimes that problem is abstract. In any case, a tool is an invention that multiplies the natural power of the individual making use of it.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
even if you take that point of view by martingale (2.00 / 0) #40 Tue Jun 20, 2006 at 03:47:04 AM EST
You still haven't captured Marsilius' conception of religion as a tool.
What does it mean for a priest to multiply his powers? The priest acts on people, his business is persuasion. Does religion multiply his persuasiveness? If he's buddhist, he persuades twice as well as if he's atheist? If he's protestant, he persuades four times as well as if he's voodoo?
Conversely, if the target is muslim, let's consider two priests: one tries to convince without invoking religion, and one tries to convince using christianity. Does christianity as a tool truly multiply powers of persuasion against the target individual?
I just don't see that the tool metaphor fits on closer inspection.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
A thought experiment by lm (2.00 / 0) #41 Tue Jun 20, 2006 at 03:52:16 AM EST
Imagine two rulers. One promulgates a religion that tells people that they will burn in hell if they don't obey the laws. The other does not. Some would argue that the former will have a more law abiding people.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
depends by martingale (2.00 / 0) #42 Tue Jun 20, 2006 at 03:44:43 PM EST
I believe that was tried in the 16th century by Mary Tudor. And by all accounts, people weren't too happy to follow her lead.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
Oops by lm (2.00 / 0) #23 Mon Jun 19, 2006 at 05:46:53 AM EST
I put my comment to this post in the wrong place.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
Meh by DesiredUsername (2.00 / 0) #24 Mon Jun 19, 2006 at 05:59:42 AM EST
Counting both number of incidents and severity of each doesn't seem any more like moving the goalposts than claiming an orally-transmitted poem is a book.
---
Now accepting suggestions for a new sigline
[ Parent ]
claiming an orally-transmitted poem is a book? by lm (2.00 / 0) #25 Mon Jun 19, 2006 at 06:08:23 AM EST
I hope you were being intentionally ironic. The vast majority of the Bible was passed down orally through generations, just like Hesiod, before being written down.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
QED (nt) by DesiredUsername (2.00 / 0) #26 Mon Jun 19, 2006 at 06:13:01 AM EST
---
Now accepting suggestions for a new sigline
[ Parent ]
Oh, right by lm (4.00 / 1) #22 Mon Jun 19, 2006 at 05:20:48 AM EST
I forgot the part about innumerable children in the Bible getting mauled by bears or nearly stabbed by their parents. You're shifting the goal posts.
There is no more degenerate kind of state than that in which the richest are supposed to be the best.
Cicero, The Republic
[ Parent ]
Judging the Bible vs Quran by DesiredUsername (4.00 / 2) #5 Mon Jun 19, 2006 at 02:11:06 AM EST
on the basis of violence is like judging between the 80s cartoon shows My Pretty Pony and Strawberry Shortcake on the basis of their editing. I'm sure it's bad, but the point is they are both so near the bottom of the barrel in every way it's hardly worth fighting about.
---
Now accepting suggestions for a new sigline
Those pictures.... by blixco (4.00 / 1) #6 Mon Jun 19, 2006 at 02:43:27 AM EST
...the naked urban exploration ones? Amazing. Creepy and vulnerable. Very cool stuff.
---------------------------------
Taken out of context I must seem so strange - Ani DiFranco
but $400 a pop? by martingale (2.00 / 0) #7 Mon Jun 19, 2006 at 02:47:56 AM EST
Seems excessive.
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
No, the price is the whole point. by motty (4.00 / 1) #16 Mon Jun 19, 2006 at 04:35:58 AM EST
They're totally heartless, totally mercenary, totally artless (in any sense you care to name), very slick and contain absolutely no meaning whatsoever other than that they probably will sell for $400 a pop (or more) and make a reasonably well-selling coffee table book too.
Each one I looked at (before I got totally sick of them) was exactly the same - a carefully crafted by-the-book commercial art style photo with a completely gratuitous, totally out of place and thoroughly unnecessary naked lady in it somewhere. Each photo would actually be improved by removing the woman; however, the idea is to make something that looks 'arty' without actually having to think or create or do anything that actually takes some kind of work - a particular idea that can be repeated ad infinitum as there is no shortage of arty urban backdrops and naked ladies prepared to pose.
Is she vulnerable? Not to my eyes. Not when posed so artificially. The vulnerable one is the guy putting his hand in his pocket and thinking about shelling out $400 for a cynical piece of shit designed to fleece him. This is not art, this is artistic technique abused in order to insult the eye of the beholder over and over and over again, prostituting itself in pseudo-protest and no doubt making the creator a great wad of cash in the process.
I amd itn ecaptiaghle of drinking sthis d dar - Dr T
[ Parent ]
sounds like by martingale (4.00 / 1) #18 Mon Jun 19, 2006 at 04:54:17 AM EST
you could do a competing set yourself, maybe for $396? Or make it $405 with a naked guy, and go for the gay market :)
--
$E(X_t|F_s) = X_s,\quad t > s$
[ Parent ]
Yes. by ObviousTroll (4.00 / 1) #20 Mon Jun 19, 2006 at 05:13:09 AM EST
One or two of those pictures were actually made worthwhile by the incongruity of having a naked chick in them - the whole soft-curves versus rusted steel thing - but except for those extreme few I agree that the pictures had nothing worthwhile about them.
The fact that the model seemed determined to be photographed nude without letting even a hint of her naughty bits show up probably has a lot to do with it.
What's the point of stripping down if you're going to spend the entire shoot curled up into a ball?
--
Faith, and the possibility of weaponized kissing?
[ Parent ]
at work so haven't seen the pictures. by garlic (2.00 / 0) #28 Mon Jun 19, 2006 at 06:20:12 AM EST
when has that stopped anyone before?
Vulnerable would probably be easier to see in beat-up, dirty clothes instead of naked. Naked (or nude, whatever) certainly sounds like a pose instead of a real situation.
[ Parent ]
Well, vulnerable in the sense of.. by ObviousTroll (2.00 / 0) #30 Mon Jun 19, 2006 at 06:22:23 AM EST
"Holy shit. Hasn't that girl heard of tetanus?"
The hell with naked, I wouldn't have gone into some of those places without heavy boots and jeans.
--
Faith, and the possibility of weaponized kissing?
[ Parent ]
Electroshock? What the hell by greyrat (2.00 / 0) #9 Mon Jun 19, 2006 at 03:19:07 AM EST
happened to a good old wooden paddle like in the old days! We even saw them used for head shots on occasion. Broken skin? Blood and bruising? Feh. Deal with it, you wimps! Here's a pass to the nurse's office...
Naked Art by ObviousTroll (2.00 / 0) #17 Mon Jun 19, 2006 at 04:46:46 AM EST
Those pictures were quite interesting, but the poses sure got old - same basic poses (either rear view, or folded up) over and over. It kind of ruined the effect of the pix.
--
Faith, and the possibility of weaponized kissing?
https://theimowski.com/blog/2016/07-20-f-workshop-in-a-browser/index.html

# F# workshop in a browser
How do you make an F# workshop more attractive for your colleagues? Let them edit your slides and interact with them in the browser! This entry presents an idea for using FsReveal to combine your presentation and workshop exercises into a single file.
## The idea
I once attended an awesome F# Coding Dojo conducted by Mathias Brandewinder (@brandewinder) in Łódź, Poland. It was titled "Introduction to Machine Learning with F#" and it came from the series of F# Coding Dojos powered by Community for F# (@c4fsharp).
Apart from the fact that the subject of the dojo was really interesting and the problem to solve quite exciting, the thing I enjoyed most was the idea of a Guided Script. It was based on placing learning materials, a short language reference and the actual code all within a single script file, in the spirit of Literate Programming. The script used in "Introduction to Machine Learning with F#" looked like this. For those of you who haven't seen the scripts in action, I highly recommend checking out the rest of the Dojos.
Recently I volunteered to lead a series of F# workshops at our company in Gdańsk. The main purpose of the series is to introduce colleagues with a C# background to the world of F# and Functional Programming. In order to put the emphasis on learning FP concepts and their practical use, I decided to prepare the series from scratch myself.
Because I wanted to make those meetings as attractive as possible, I planned to adapt the Guided Script format. In addition, I needed to prepare some slides to demonstrate and explain FP concepts. The choice of tool for the presentation was easy: many members of the F# community use FsReveal.
"FsReveal allows you to write beautiful slides in Markdown and brings C# and F# to the reveal.js web presentation framework".
Once I realized that FsReveal allows you to write slides in a standard F# script (.fsx) file, I came up with the idea of combining it with a Guided Script.
## Realization
FsReveal lets you edit slides interactively, which means that you can preview your changes live with the help of a little Suave.IO server embedded in the build script. All you have to do is run the KeepRunning target:
```shell
.\build.cmd KeepRunning
```
This opens the generated slides in your browser and fires up a file-change watcher on the slides directory: whenever a change is detected, the slides are regenerated and a WebSocket message is sent to the page in the browser with a command to reload itself.
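The regenerate-on-save mechanism can be sketched with a simple modification-time poll. Note this is only an illustration of the idea: `needs_reload`, `regenerate_slides` and `notify_browser` are hypothetical names, and FsReveal actually uses a real file-system watcher plus a WebSocket push rather than polling:

```python
import os

def needs_reload(path, last_mtime):
    """Return (changed, new_mtime): changed is True when the file
    has been saved since the last check."""
    mtime = os.path.getmtime(path)
    return mtime != last_mtime, mtime

# Sketch of the watch loop:
#   last = os.path.getmtime("slides/index.fsx")
#   while True:
#       changed, last = needs_reload("slides/index.fsx", last)
#       if changed:
#           regenerate_slides()   # hypothetical: rebuild the HTML slides
#           notify_browser()      # hypothetical: send a reload command over WebSocket
#       time.sleep(0.5)
```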
This feature made it possible to treat the browser as an interactive REPL. Let's have a look at an example: this slide is backed by the following script code (valid F# syntax, with Markdown in the comments):
```fsharp
(**
### Exercise X.X

Exercise:

#### --------------- Your code goes below ---------------
*)
let exercise = 28
(**
#### Value of exercise
*)
(*** include-value: exercise ***)
```
The workshop participant first runs the KeepRunning target of the build script. Now whenever she modifies the script (e.g. binds a different value to the exercise symbol) and saves the file, the page is automatically refreshed with the new result.
One downside of this approach is that when the script gets a bit longer, its evaluation may take some time. Because of that, if the file watcher detects changes too frequently (a programmer's habit: save the file every second), it might not keep up with regenerating the slides. Fortunately, the participant can still run selected snippets of the code in FSI (F# Interactive) for debugging purposes, and trigger new slides to render only when she is ready with the final implementation.
## Summary
The whole workshop extends this workflow, providing ten different exercises for participants to complete. By implementing the exercises step by step, the participant works toward the final goal, which is calculating an arithmetic expression represented in Reverse Polish Notation.
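The target algorithm is the classic stack-based evaluation of RPN; a minimal sketch of the idea, written in Python here for brevity (the workshop exercises themselves are in F#):

```python
def eval_rpn(tokens):
    """Evaluate an arithmetic expression given in Reverse Polish Notation,
    e.g. ["3", "4", "+", "2", "*"] for (3 + 4) * 2."""
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()  # the second operand comes off the stack first
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

print(eval_rpn("3 4 + 2 *".split()))  # 14.0
```

Each number is pushed on a stack; each operator pops its two operands and pushes the result, so the whole expression reduces to a single value without any parentheses.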
All slides for the workshop are available here. Last week I conducted the workshop for the first time, and it went pretty well. Even though the standard FSI turned out to be crucial anyway, participants appreciated the extra touch of watching their changes live in the browser. Till next time!
https://www.physicsforums.com/threads/why-the-work-done-is-negative-when-bringing-2-opposite-charges-together.972183/

# Why is the work done negative when bringing 2 opposite charges together?
#### Hawkingo
We know that if the applied force is in the direction of the displacement, then the work done is positive. But in the case of bringing 2 opposite charges from infinity to a certain distance, the work done is negative even though the force and the displacement of the charge are in the same direction.
From a mathematical point of view, it says the applied force is positive here and (dr) is negative. But what is the physical significance of the sign here? And in that case, when can we say the applied force is positive or negative (e.g. in the case of bringing two like charges together)?
#### phinds
the work done is negative even though the force and the displacement of the charge are in the same direction.
The displacement is the same but the force is opposite. You have to exert a force AWAY from the center point to keep them from coming together.
#### Drakkith
But in the case of bringing 2 opposite charges from infinity to a certain distance, the work done is negative even though the force and the displacement of the charge are in the same direction.
The work done by what on what? Always keep those two things in mind. The work done by the attractive force between them is positive, since $W=ΔKE$, where KE is the kinetic energy of the object the force is acting on, and this kinetic energy is obviously increasing. However, if you're the one 'holding' each charge and trying to bring them together, you don't actually have to perform positive work since they want to come together anyway. You'll actually have work performed on you in the form of accelerating your hands, so the work you do on the charges is negative. You gain energy instead of expending it.
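That sign bookkeeping can be written out for two point charges moved quasi-statically from infinity to a final separation $r$; this is the standard electrostatics result, added here for concreteness:

```latex
U(r) = \frac{q_1 q_2}{4\pi\varepsilon_0 r}, \qquad U(\infty) = 0,
\qquad
W_{\text{field}} = -\Delta U = -\frac{q_1 q_2}{4\pi\varepsilon_0 r},
\qquad
W_{\text{ext}} = \Delta U = \frac{q_1 q_2}{4\pi\varepsilon_0 r}.
```

For opposite charges $q_1 q_2 < 0$, so the field does positive work on the charges while the external agent does negative work, which is exactly the pair of signs in question.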
#### sophiecentaur
The work done by what on what.
It's always a source of confusion, but there's no problem if you stick to the rule. The definition involves work done ON a charge in moving it. Hence, for an attractive system, bringing a charge towards another charge involves negative work put in; hence the negative sign when the distance is measured away from the centre. If you over-think this, then you can convince yourself either way, so just stick to the definition. It will become second nature to get it right.
#### Drakkith
The definition involves work done ON a charge in moving it. Hence, for an attractive system, bringing a charge towards another charge involves negative work put in
This is what I was talking about. The work done on the charge by what? An experimenter bringing the charges together would exert a force that is opposite the displacement (after all, if they aren't, then they aren't part of the experiment and it makes no sense to ask what the work done by them is), and so the work would be negative. But the work done by the electrostatic force between the charges is positive, correct?
#### sophiecentaur
But the work done by the electrostatic force between the charges is positive
If you like, But that's not the definition. I did mention that you can over think this. Work and Energy are like money. Someone gives it and someone receives it but we don't get confused about the sign of a money transaction. In that case, it's somehow 'obvious' - but even with money, it's possible to set up scenarios in which it's not so clear. Con-men exploit this at times.
#### Drakkith
If you like, But that's not the definition.
Remind me what definition we're using here. I know of several ways of defining work.
#### sophiecentaur
Remind me what definition we're using here. I know of several ways of defining work.
Work is force times distance (displacement would be better, perhaps), and that's a dot product of vectors.
Work done ON the system by 'the experimenter'. I would say that all experiments involve an input to a system to see the result. Even when we just observe, we identify with part of the system for independent variable and turn it into an experiment.
The work done on a mass as you lower it to the ground is (universally?) agreed as being negative so can't you just take it from there? The force is in the direction of decreasing distance etc. etc.
If you change to asking the work done by the mass ON the experimenter then you get a change of sign and, that would be ok if that were the accepted convention. But I don't think it is.
#### A.T.
We know that if the applied force is in the direction of the displacement, then the work done is positive. But in the case of bringing 2 opposite charges from infinity to a certain distance, the work done is negative even though the force and the displacement of the charge are in the same direction.
You can do positive or negative work on them. It's up to you what force you apply to them.
#### ZapperZ
As a physics instructor, I often puzzle at why a student cannot make the connection with something that he/she should have learned about and understood from something earlier.
When someone is learning about charges, it is assumed that the student has learned basic kinematics, etc. in a General Physics class. So something like this where work done is "negative" should be familiar. For example, lifting a book to a new height means that work is done ONTO the book, i.e. the book gains gravitational potential energy. However, if we simply let go of the book and it free-falls to the ground, the work done ONTO the book is NEGATIVE, and consequently, it loses gravitational potential energy.
This is no different than bringing a negative charge nearer to a positive charge, or bringing a positive charge nearer to a negative charge. Work done by the charge being moved is negative, because it is not doing the work. The field is doing the work, and the field is doing the work ONTO the charge. As a consequence, the charge's potential energy also drops, in this case, becoming more negative (U = qV).
Both situations are analogous conceptually.
Zz.
#### Drakkith
Work is Force times Distance / (Displacement could be better, perhaps) and that's a vector multiplication.
Okay. Then which part of my posts don't meet this definition?
The work done on a mass as you lower it to the ground is (universally?) agreed as being negative so can't you just take it from there?
It's not just agreed upon, it's required, because the work done by you on the mass is negative since you're applying the force in the opposite direction of the motion.
#### sophiecentaur
It's not just agreed upon, it's required, because the work done by you on the mass is negative since you're applying the force in the opposite direction of the motion.
We are agreeing here. But the "required" bit is a consequence of how the Potential is defined - i.e., in terms of work done on the system.
As I commented earlier - it is possible to over think this when it's just a matter of accepting the convention.
#### Drakkith
We are agreeing here. But the "required" bit is a consequence of how the Potential is defined - i.e., in terms of work done on the system.
As I commented earlier - it is possible to over think this when it's just a matter of accepting the convention.
I don't know, Sophie. I don't even know what we're arguing over anymore since we seem to be half agreeing with each other, lol.
#### A.T.
if we simply let go of the book and it free-falls to the ground, the work done ONTO the book is NEGATIVE, and consequently, it loses gravitational potential energy.
But the book gains kinetic energy. Applying the definition of work to the gravitational force and displacement of the falling book will yield a positive value. So what is doing negative work on the book here?
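That sign check can be run numerically; a small sketch with illustrative values (a 1 kg book dropped 2 m; the numbers are not from the thread):

```python
g = 9.81   # m/s^2, gravitational acceleration
m = 1.0    # kg, illustrative book mass
h = 2.0    # m, illustrative drop height

# Take "up" as positive. Gravity pulls down (-m*g); the falling book's
# displacement is also down (-h), so the dot product is positive.
W_gravity = (-m * g) * (-h)       # work done BY gravity ON the book
dU = m * g * 0.0 - m * g * h      # change in gravitational potential energy

print(W_gravity)  # positive: gravity does positive work on the falling book
print(dU)         # negative: the book loses PE, and W_gravity == -dU
```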
#### sophiecentaur
But the book gains kinetic energy. Applying the definition of work to the gravitational force and displacement of the falling book will yield a positive value. So what is doing negative work on the book here?
That's just generating more confusion about which argument / definition to use. In the end you have to choose which way round to specify how Potential is defined. You and I know the answer to that. The fact that the book would accelerate means that Work is done on it. The book takes the place of 'the experimenter', perhaps and the book does negative work. (Nothing is doing negative work ON the book)
Or maybe you wrote your post in order to make the student think??
#### Drakkith
That's just generating more confusion about which argument / definition to use.
I don't see how. It matches the definition I asked for, and you gave, earlier.
In the end you have to choose which way round to specify how Potential is defined.
This has nothing to do with choosing how potential is defined. It has everything to do with explicitly stating which forces are involved in the question and which ones your interested in.
In fact, the relationship between work and potential energy perfectly supports this. A book that starts with 20 units of potential energy and ends with 10 units (i.e. it's falling) will have $W = -ΔU = -(U_2 - U_1) = -(10 - 20) = 10$ units of work performed on it by gravity. Raise it by 10 units of PE and gravity does -10 units of work, just as it should be given the other definitions of work.
The crux of the OP's question is answered, in my opinion, by recognizing that some of these questions and scenarios are vague and not quite technically accurate (such as what does negative work on a free-falling object solely under the influence of gravity). You can choose to say, "Well, that's just convention" and be done with it, or you can acknowledge that it doesn't actually make sense when you try to figure out what's actually happening.
Some questions have implicit and unstated conditions that aren't immediately obvious. For example, if I hold a book in my hand and raise it by one meter, then I've done work on the book obviously. But what if I lower the book by one meter? That implies that:
A.) I let the book free fall that one meter and stop looking at the question once the book hits the one meter mark.
B.) I slowly lower the book, exerting a force the entire way to keep it from free falling.
C.) I let the book free fall, and quickly 'catch it' so that it comes to a halt at the one meter mark.
D.) Some combination of the above.
In scenario A, I would argue that nothing has done negative work on the book. The only force here is gravity, which is certainly doing positive work on the book given every definition of work I've seen.
In the other scenarios, it is easy to see what is doing negative work on the book. You are. Or whatever is exerting the force that slows the book down. Exactly how much work is done on the book depends on if you completely stop the book's motion or not at the one meter mark.
So how does this generate more confusion? It doesn't in my opinion. I think it removes a great deal of the confusion. I think the confusion comes primarily from poorly worded questions, and that explicitly stating which forces are at work and elaborating on what is happening to the object in question would help a great deal.
The book takes the place of 'the experimenter', perhaps and the book does negative work. (Nothing is doing negative work ON the book)
I can't see how the book could do any work on itself.
#### sophiecentaur
in my opinion, by recognizing that some of these questions and scenarios are vague and not quite technically accurate
That's right.
As far as I can see, there is a mix up between discussing two objects on their own and two objects with an experimenter.
I can't see how the book could do any work on itself.
The book is gaining KE, so it is having work done on it. As with my example of money, it can be regarded as doing negative work, which is the same as receiving energy / work (which is not "doing negative work on itself"). There would be no work done at all if it were not for the existence of the gravitational field; that's where the energy comes from.
So how does this generate more confusion?
The fact that we are still discussing it is a good indication that confusion exists. I don't see why there should be any and it appears that you don't either but this basic question comes up frequently and my point is that there is a disconnect between intuition and the results of following logical steps in the argument. If you accept the conservation of energy thing that we all learned at school and if you apply it rigorously - with the proper use of signs - it's all perfectly consistent. Just like with money transactions.
Work on and work by have just different signs in the same way that going forwards and going backwards can be easily dealt with by using different signs and not different words; the Maths sorts it out.
#### Drakkith
The book is gaining KE so it is having work done on it as with my example of money, it can be regarded as doing negative work which is the same as receiving energy / work (which is not "doing negative work on itself"). There would be no work done at all if it were not for the existence of the gravitational field; that's where the energy comes from.
I suppose you can say that the book does negative work, but I think it just adds confusion since the book isn't a force and is the object having work performed on it. Is this the convention you were referring to?
Work on and work by have just different signs in the same way that going forwards and going backwards can be easily dealt with by using different signs and not different words; the Maths sorts it out.
My entire point here is not that something with the math or the definition needs to be changed, it's that it's necessary to understand which forces are involved and which ones you're interested in. Because if you don't understand this, then you can't really hope the math will work out since you haven't set the math up correctly in the first place.
#### A.T.
However, if we simply let go of the book and it free-falls to the ground, the work done ONTO the book is NEGATIVE
Nothing is doing negative work ON the book
You both seem to be contradicting each other.
#### sophiecentaur
Gold Member
You both seem to be contradicting each other.
@ZapperZ is implying that a negative displacement times a negative force produces negative work, so I don't agree with him.
But this is getting daft. We've made so many excursions around this that it's very easy to make a slip in a comment. I already gave warnings about over-thinking this.
#### rcgldr
Homework Helper
The work done by the attractive force between them is positive, since $W=\Delta KE$
Or using a frame of reference with its origin at the center of mass of the two body system, and with both objects on the x axis, then the attractive force on the left object is positive and the displacement is positive, while the attractive force on the right object is negative and the displacement is negative, so both components of work done by the attractive force are positive.
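A quick numeric sketch of this sign bookkeeping (my own made-up magnitudes for the force and the displacement, not values from the thread):

```python
# Two objects on the x axis, attracting each other; origin at the center of mass.
F = 5.0    # magnitude of the attractive force (made-up value)
dx = 0.1   # magnitude of each object's displacement toward the other (made-up)

# Left object: the force on it points in +x, and it moves in +x.
W_left = (+F) * (+dx)
# Right object: the force on it points in -x, and it moves in -x.
W_right = (-F) * (-dx)

# Both components of the work done by the attractive force are positive.
assert W_left > 0 and W_right > 0
```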
#### rcgldr
Homework Helper
The displacement is the same but the force is opposite. You have to expend force AWAY from the center point to keep them from coming together
Only if the force slows down or stops the rate of approach due to the attractive force. However, if the external force acts on each object in the same direction as the attractive force, the work done by that external force is positive, and the charges accelerate towards each other at a greater rate.
Last edited:
#### phinds
Gold Member
Only if the force slows down or stops the rate of approach due to the attractive force. However, if the external force acts on each object in the same direction as the attractive force, the work done by that external force is positive, and the charges accelerate towards each other at a greater rate.
Fair enough.
#### rcgldr
Homework Helper
Only if the force slows down or stops the rate of approach due to the attractive force. However, if the external force acts on each object in the same direction as the attractive force, the work done by that external force is positive, and the charges accelerate towards each other at a greater rate.
Fair enough.
Even if the force opposes the attractive force, the magnitude is also a factor. Assume an initial state where the objects are a finite distance apart (not infinite as in the OP). If the opposing force is equal in magnitude to the attractive force, then no work is done. If the opposing force is greater than the attractive force, positive work is done (and the attractive force component is negative work).
So whether or not a net force in addition to the attractive force performs negative, zero, or positive work depends on the direction and magnitude of that force.
Last edited:
#### vanhees71
Gold Member
I think the problem with this thread, as many in this forum, is that there are too many words rather than a handful of formulae. Math provides clear definitions for the quantities at hand. Let's do the most simple example: consider a single particle moving in some external force field. Then you have an equation of motion, which is usually of the form
$$m \ddot{\vec{x}}=\vec{F}(t,\vec{x},\dot{\vec{x}}).$$
Now it is sometimes of advantage to consider kinetic energy,
$$T=\frac{m}{2} \dot{\vec{x}}^2.$$
Now let's see how it changes with time:
$$\dot{T}=m \dot{\vec{x}} \cdot \ddot{\vec{x}}=\dot{\vec{x}} \cdot \vec{F}(t,\vec{x},\dot{\vec{x}}).$$
In the last step I have assumed that $\vec{x}(t)$ is a solution of the equation of motion. Then you can integrate this equation over $t \in [t_1,t_2]$,
$$T_2-T_1=W_{12} = \int_{t_1}^{t_2} \mathrm{d} t \, \dot{\vec{x}} \cdot \vec{F}(t,\vec{x},\dot{\vec{x}}).$$
$W_{12}$ defines the quantity "work", and it's clearly defined as a line integral along the actual trajectory of the particle under the influence of the force. The above equation is known as the "work-energy theorem".
So far, it's not so clear why this might be of some use. Now for many forces in non-relativistic physics the force is of much simpler structure than assumed above. It may depend only on $\vec{x}$ (the location of the particle) and neither explicitly on time nor on velocity. It can then often be written as the gradient of a potential,
$$\vec{F}(\vec{x})=-\vec{\nabla} V(\vec{x}).$$
Then work becomes independent of the trajectory of the particle! It only depends on the values of the potential at the endpoints of the trajectory you integrate over. Thus you know the work done without any reference to the equation of motion or its solution, and the work-energy theorem provides a "first integral":
$$T_2-T_1=W_{12}=-(V_2-V_1),$$
which you can reformulate in terms of the energy-conservation law
$$T_2+V_2=T_1+V_1.$$
This again holds for all solutions of the equation of motion, but you don't need to know any solution to state it. It simply helps to find a solution. Since you can choose $t_1$ and $t_2$ arbitrarily you can simply write
$$E=T+V=\text{const},$$
and call $E$ the total energy of the particle (with $T$ the kinetic and $V$ the potential energy).
With these clear mathematical definitions, there is no doubt anymore about the sign of work or of potential energy differences.
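These formulas are easy to verify numerically for the textbook case of free fall; the sketch below uses assumed values m = 1 kg, g = 9.81 m/s², and the known solution x(t) = x0 - g t²/2 of the equation of motion:

```python
import math

m, g = 1.0, 9.81            # assumed mass and gravitational acceleration
x0, v0 = 10.0, 0.0          # initial height and speed

t = 1.0                     # evaluate the solution after one second
x = x0 - 0.5 * g * t**2     # trajectory solving m*x'' = -m*g
v = -g * t

T0 = 0.5 * m * v0**2        # initial kinetic energy
T = 0.5 * m * v**2          # kinetic energy at time t
W = -(m*g*x - m*g*x0)       # W_12 = -(V_2 - V_1) with V(x) = m*g*x

assert math.isclose(T - T0, W)               # work-energy theorem
assert math.isclose(T + m*g*x, T0 + m*g*x0)  # E = T + V is conserved
```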
Last edited:
"Why the work done is negative when bringing 2 opposite charges together?"
https://torchlighters.org/quiz/john-wesley-quiz/
### John Wesley Quiz
When he was a boy John Wesley was saved from a burning building. How?
Why were John and Charles Wesley surprised at the Moravians on the ship to America?
In John Wesley's time preachers were expected to preach only in _________.
After he was kicked out of the churches John agreed to preach in the ________.
Charles Wesley is best known for _______________.
John Wesley is best known as ____________________.
http://mathhelpforum.com/calculus/171289-differentiation-problems.html

# Math Help - Differentiation problems
1. ## Differentiation problems.
Differentiate the following:
1. $f(x) =\sqrt{x}(2x+3)^{2}$
2. $f(x) = \frac{\sin x}{x^3-2x}$
My attempt for 1:
First I rewrite to get rid of the radical:
$f(x) = x^{\frac{1}{2}}(2x+3)^{2}$
Then I start differentiating using product rule:
$f'(x) = x^{\frac{1}{2}}'(2x+3)^{2} +x^{\frac{1}{2}}(2x+3)^{2}'$
Then chain rule:
$f'(x) = (\frac{1}{2})(x^{\frac{1}{2}-1})(1)(2x+3)^{2} +(x^{\frac{1}{2}})(2)(2x+3)^{(2-1)}(2)$
$f'(x) = (\frac{1}{2})(x^{-\frac{1}{2}})(1)(2x+3)^{2}+(x^{\frac{1}{2}})(4)(2x+3)$
After simplifying:
$f'(x) = \frac{(2x+3)^{2}}{\sqrt{2}\sqrt{x}} + \frac{2x+3}{2\sqrt{x}}$
My attempt for number 2:
First I rewrite the equation to get rid of the fraction:
$f(x) = (\sin x)(x^3-2x)^{-1}$
Then I begin to differentiate using product rule:
$f'(x) = (\sin x)'(x^3-2x)^{-1} +(\sin x)(x^3-2x)^{-1}'$
Chain rule:
$(\cos x)(x^3-2x)^{-1} + (\sin x)(-1)(x^3-2x)^{-1-1}(3x^2-2)$
After simplifying:
$f'(x) = \frac{\cos x}{(x^3-2x)} + \frac{2(3x^2 \sin x)}{(x^3-2x)^{2}}$
What have I done wrong in my calculations? To check my work, I plug in random values and find the slopes of the tangents at those points, then check with a graphing program whether the functions have been differentiated correctly. Unfortunately, I am way off. Thanks for any help in advance.
2. Hello, Pupil!
You made silly mistakes in the last step . . .
Differentiate the following:
$1.\;f(x) \:=\:\sqrt{x}(2x+3)^{2}$
$2.\;f(x) \:=\:\dfrac{\sin x}{x^3-2x}$
My attempt for 1:
First I rewrite to get rid of the radical: . $f(x) \:=\: x^{\frac{1}{2}}(2x+3)^{2}$
Then I start differentiating using product rule:
$f'(x) \:=\: x^{\frac{1}{2}}'(2x+3)^2 +x^{\frac{1}{2}}(2x+3)^2'$
Then chain rule:
$f'(x) \:=\: (\frac{1}{2})(x^{-\frac{1}{2}})(2x+3)^2+(x^{\frac{1}{2}})(4)(2x+3)$
After simplifying:
$f'(x) = \dfrac{(2x+3)^{2}}{\sqrt{2}\sqrt{x}} + \dfrac{2x+3}{2\sqrt{x}}$
. . . the $\sqrt{2}$ in the first denominator should be just $2$, and in the second term the $\sqrt{x}$ belongs in the numerator.
My attempt for number 2:
First I rewrite the equation to get rid of the fraction:
$f(x) = (\sin x)(x^3-2x)^{\text{-}1}$
Then I begin to differentiate using product rule:
$f'(x) = (\sin x)'(x^3-2x)^{\text{-}1} +(\sin x)(x^3-2x)^{\text{-}1}'$
Chain rule:
$(\cos x)(x^3-2x)^{\text{-}1} + (\sin x)(-1)(x^3-2x)^{\text{-}2}(3x^2-2)$
After simplifying:

$f'(x) \:=\: \dfrac{\cos x}{(x^3-2x)} + \dfrac{(3x^2-2)\sin x}{(x^3-2x)^{2}}$

. . . the $+$ between the two terms should be a minus.
3. Hi Soroban, thanks for the reply.
Okay, so for 1:
$f'(x) = \frac{(2x+3)^{2}}{2\sqrt{x}} + \sqrt{x}(4)(2x+3)$ is this correct?
And for 2:
$f'(x) = \frac{\cos x}{(x^3-2x)} - \frac{(3x^2 - 2)\sin x}{(x^3-2x)^{2}}$
?
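To close the loop on the thread, the corrected derivatives can be checked against a central difference, the same kind of spot check the original poster describes (this is my own sketch, not part of the thread):

```python
import math

def f1(x):
    return math.sqrt(x) * (2*x + 3)**2

def df1(x):
    # corrected derivative of f1 from the thread
    return (2*x + 3)**2 / (2*math.sqrt(x)) + 4*math.sqrt(x)*(2*x + 3)

def f2(x):
    return math.sin(x) / (x**3 - 2*x)

def df2(x):
    # corrected derivative of f2 (product/chain rule result)
    return math.cos(x)/(x**3 - 2*x) - math.sin(x)*(3*x**2 - 2)/(x**3 - 2*x)**2

def central_diff(f, x, h=1e-6):
    # symmetric difference quotient approximating f'(x)
    return (f(x + h) - f(x - h)) / (2*h)

for x in (0.5, 1.0, 2.0):
    assert abs(df1(x) - central_diff(f1, x)) < 1e-4
    assert abs(df2(x) - central_diff(f2, x)) < 1e-4
```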
https://yalmip.github.io/example/bilevelprogrammingalternatives/

Bilevel programming alternatives
YALMIP supports bilevel programming natively, but this example shows how simple bilevel problems can be solved by using other standard modules in YALMIP. We will illustrate three different ways to solve bilevel quadratic optimization problems exactly; a multi-parametric programming approach (which boils down to a mixed integer quadratic programming approach), a direct mixed-integer quadratic programming approach, and a global nonlinear programming approach.
The first part of this example requires linear and quadratic programming solvers, the second part a general nonlinear solver such as FMINCON, SNOPT or IPOPT, and the third part requires MPT.
For an introduction to bilevel optimization, see Bard 1999.
Our outer problem is a quadratic programming problem in the variables $$x$$ and $$z$$

$$\min_{x,z} \; \tfrac{1}{2}x^TQx + c^Tx + d^Tz \quad \text{subject to } Ax \leq b + Ez.$$
The variable $$z$$ is constrained to be the optimal solution of an inner quadratic programming problem

$$z = \arg\min_{z} \; \tfrac{1}{2}z^THz + e^Tz + f^Tx \quad \text{subject to } Fz \leq h + Gx.$$
The three approaches we will use all rely on the KKT conditions for the inner problem, but address this condition in different ways.
Obviously, the reason bilevel quadratic optimization is hard is the nonconvex complementarity constraint: either the multiplier is zero, or the corresponding constraint is active.
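Concretely, for the inner problem the KKT conditions used by all three approaches read (with $$\lambda$$ denoting the multipliers of $$Fz \leq h + Gx$$):

$$Hz + e + F^{T}\lambda = 0, \qquad Fz \leq h + Gx, \qquad \lambda \geq 0, \qquad \lambda_{i}\,(h + Gx - Fz)_{i} = 0.$$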
We generate some data for a random instance.
n = 3;
m = 2;
Q = randn(n,n);Q = Q*Q';
c = randn(n,1);
d = randn(m,1);
A = randn(15,n);
b = rand(15,1)*2*n;
E = randn(15,m);
H = randn(m,m);H = H*H';
e = randn(m,1);
f = randn(n,1);
F = randn(5,m);
h = rand(5,1)*2*m;
G = randn(5,n);
Direct mixed integer quadratic programming approach
The complementary conditions simply say that the multiplier is zero, or the corresponding constraint is active. This can be modeled in YALMIP by using the logic programming module (or one can set this up manually using binary variables).
x = sdpvar(n,1);
z = sdpvar(m,1);
lambda = sdpvar(length(h),1);
slack = h + G*x - F*z;
KKT = [H*z + e + F'*lambda == 0,
F*z <= h + G*x,
lambda >= 0];
for i = 1:length(h)
KKT = [KKT, ((lambda(i)==0) | (slack(i) == 0))];
end
To derive a mixed integer programming model of the logic condition, bounds on all variables involved in the logic condition are necessary (this is the main drawback of this formulation: it requires bounds on the multipliers)
KKT = [KKT, lambda <= 100, -100 <= [x;z] <= 100];
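Internally, such a logic condition is turned into a mixed-integer model; one standard encoding (a hypothetical stand-alone Python sketch, not YALMIP's literal reformulation) uses a binary variable with big-M constraints, which is why the finite bounds above are essential:

```python
M = 100.0  # the bound used in the model above

def big_m_feasible(lam, s):
    # True iff some binary delta satisfies lam <= M*delta and s <= M*(1 - delta)
    return any(lam <= M * d + 1e-9 and s <= M * (1 - d) + 1e-9 for d in (0, 1))

# Within the bounds 0 <= lam, s <= M, the encoding is feasible exactly when
# the complementarity condition "lam == 0 or s == 0" holds.
for lam in (0.0, 0.5, 3.0, 100.0):
    for s in (0.0, 0.5, 3.0, 100.0):
        assert big_m_feasible(lam, s) == (lam == 0.0 or s == 0.0)
```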
Collect all constraints, and solve the outer problem (Note that the problem can be infeasible. If so, simply generate a new problem instance)
optimize([KKT, A*x <= b + E*z], 0.5*x'*Q*x + c'*x + d'*z);
value(x)
value(z)
value(lambda)
value(slack)
Note that YALMIP has a built-in command, kkt, for generating these conditions; hence, the manually implemented model above can be simplified significantly.
Nonlinear solver for complementary constraints
A more direct approach to handle the complementary constraints is to simply solve the nonlinear nonconvex problem that arises. To do this in YALMIP, we use the built-in global solver BMIBNB. As before, the only addition compared to the theoretical KKT system is that we have to add explicit constraints in order to bound the search space.
KKT = [H*z + e + F'*lambda == 0,
F*z <= h + G*x,
lambda >= 0,
lambda.*(h+G*x-F*z) == 0];
KKT = [KKT, lambda <= 100, -100 <= [x;z] <= 100];
ops = sdpsettings('solver','bmibnb');
optimize([KKT, A*x <= b + E*z], 0.5*x'*Q*x + c'*x + d'*z,ops)
The performance of the global solver is fairly poor on this formulation. The reason is that it does not detect the complementary structure, but simply treats the problem as a general problem with bilinear constraints. To improve performance, we introduce a new variable for the slack and obtain an easily detected complementary structure. YALMIP will exploit this complementary structure to improve the bound propagation and branching process.
slack = sdpvar(length(h),1);
KKT = [H*z + e + F'*lambda == 0,
slack == h + G*x - F*z,
slack >= 0,
lambda >= 0,
lambda.*slack == 0];
KKT = [KKT, lambda <= 100, -100 <= [x;z] <= 100];
ops = sdpsettings('solver','bmibnb');
optimize([KKT, A*x <= b + E*z], 0.5*x'*Q*x + c'*x + d'*z,ops)
Alternatively, write the KKT using the built-in complements
slack = sdpvar(length(h),1);
KKT = [H*z + e + F'*lambda == 0,
slack == h + G*x - F*z,
complements(slack >= 0,lambda >= 0)];
KKT = [KKT, lambda <= 100, -100 <= [x;z] <= 100];
ops = sdpsettings('solver','bmibnb');
optimize([KKT, A*x <= b + E*z], 0.5*x'*Q*x + c'*x + d'*z,ops)
Multiparametric programming approach
A more advanced way in YALMIP to solve this problem is to explicitly compute a parametrized solution $$z(x)$$ by using multiparametric programming. This will lead to a piecewise affine description of the optimizer, and when this expression is plugged into the outer problem, a mixed integer quadratic programming problem arises.
Hence, we solve the inner program multiparametrically w.r.t. $$x$$. Notice that we add bounds on $$x$$, to limit the region where we are interested in a parametric solution (this is required for the parametric solver in MPT to perform well.)
obj_inner = 0.5*z'*H*z + e'*z + f'*x;
cst_inner = [F*z <= h + G*x, -100 <= x <= 100];
[aux1,aux2,aux3,OptVal,OptZ] = solvemp(cst_inner,obj_inner,[],x);
At this point, the variable OptZ defines a piecewise affine function corresponding to the optimizing z. We use this variable in our outer problem
obj_outer = 0.5*x'*Q*x + c'*x + d'*OptZ;
cst_outer = [A*x <= b + E*OptZ];
optimize(cst_outer,obj_outer);
value(x)
value(OptZ)
The multiparametric solver essentially explores the combinations of active sets, and returns a piecewise affine optimizer for each optimal combination. We can do this exploration manually, by simply stating the complementarity conditions using logic constraints, and that is exactly what we did with the first approach above.
Note though, having the complete multiparametric solution, we might just as well loop through all regions and solve the problem with the corresponding inner optimal parametrization
minn = inf;
for i = 1:length(aux1{1}.Pn)
OptZ = aux1{1}.Fi{i}*x + aux1{1}.Gi{i};
obj_outer = 0.5*x'*Q*x + c'*x + d'*OptZ;
cst_outer = [A*x <= b + E*OptZ, ismember(x,aux1{1}.Pn(i))];
sol = optimize(cst_outer,obj_outer);
if sol.problem == 0
if value(obj_outer) < minn
xsol2 = value(x);
zsol2 = value(OptZ);
minn = value(obj_outer);
end
end
end
Using the built-in bilevel solver
As we mentioned above, YALMIP has a built-in bilevel solver which applies to this problem.
con_inner = F*z <= h + G*x;
obj_inner = 0.5*z'*H*z+e'*z;
con_outer = A*x <= b + E*z;
obj_outer = 0.5*x'*Q*x + c'*x + d'*z;
solvebilevel(con_outer,obj_outer,con_inner,obj_inner,z)
By default, it solves the problem by explicitly branching on the complementarity, but we can tell it to derive the KKT model and solve the problem using an integer solver, as we did above
solvebilevel(con_outer,obj_outer,con_inner,obj_inner,z,sdpsettings('bilevel.algorithm','external'))
https://phys.libretexts.org/TextMaps/Astronomy_and_Cosmology_TextMaps/Map%3A_Astronomy_(Impey)/8%3A_Interplanetary_Bodies/8.1_Interplanetary_Bodies
# 8.1 Interplanetary Bodies
One ongoing theme in astronomy is the realization that observationally different celestial events are actually tied to a single phenomenon. One of the earliest examples may be the "discovery" that the morning star and the evening star are both apparitions of the planet Venus. Another, more interesting example is the unification of comets, meteor showers, the Tunguska event, and maybe even the end of the Bronze Age! In this example, comets are seen in the sky; they sometimes leave behind trails of dust and particulate matter in the Earth's orbital path that can cause meteor showers; and occasionally comets (or at least chunks of comets) can enter our atmosphere. We suspect that a chunk of comet is responsible for the Tunguska event in 1908 over Siberia, and there is also some research indicating that comet Encke may have broken up several thousand years ago, with a chunk of it hitting the Fertile Crescent, forming Umm al Binni Lake and bringing the Bronze Age to an end (this is highly speculative research, and still falls in the category of really neat, but unconfirmed). It turns out that both sky phenomena and planetary devastation are likely related to small interplanetary bodies, including not only comets but also asteroids.
Halley's comet in 1986.
This is something of a revolutionary change in thinking. A generation ago, scientists considered interplanetary bodies only a minor curiosity. Today, we're realizing that they affect planet histories in general, and the evolution of life on Earth in particular. Our presence on Earth depends in large part on a history of impacts by interplanetary debris. Beyond helping us understand the periodic extinctions that have occurred throughout Earth's history, these bodies also contain many clues to help us learn about the origin of our solar system.
Interplanetary bodies range in composition from icy to rocky and metallic. The exact name an object has depends on both its composition and its orbit. For instance, icy objects with trans-Neptunian orbits (orbits beyond Neptune) are Kuiper Belt Objects, while icy objects that plunge through the inner solar system are called comets. On the other hand, rocky and metallic objects are generally called asteroids, but are more specifically called (among other things) Main Belt Asteroids or Near-Earth Objects, based on whether they orbit in the asteroid belt or in the inner solar system nearer to Earth, respectively.
As with so many words, there are historical reasons for these names. When the Sun warms the ices of a comet, the ices sublime: they change directly from solid into gas and evaporate away into space. This gives comets their fuzzy, luminous "tail" of gas and dust particles. To ancient people, the tail looked like long hair blowing in the wind. The name "comet," therefore, comes from the Greek word "coma," for hair. The less excitingly named Kuiper Belt objects are named after the astronomer Gerard Kuiper, who predicted their existence.
An asteroid, however, has no gas or tail and appears in a telescope like a faint star. Its name derives from the Greek root "aster," for star. But the light we see from an asteroid, just like from a comet, is all reflected from the Sun — they are cold chunks of rock and ice that emit no light of their own.
Until the last century or so, comets and asteroids were considered completely different phenomena. They're made of different materials, and they orbit on very different paths through the solar system. But today we realize that both comets and asteroids are examples of interplanetary debris left over from the period of planet formation. Smaller bits of leftover debris (both icy and rocky) are also related. This debris comes in all sizes, from microscopic grains to bodies a few meters across.
As the Earth orbits the Sun, it periodically collides with some of this debris. When a piece of space debris hits the atmosphere or surface of a planet, it's typically traveling at 10 to 40 kilometers per second (about 22,000 to 90,000 miles per hour!). Since kinetic energy is proportional to velocity squared, a massive object has tremendous kinetic energy. What happens next is an excellent example of the transformation of energy from one form to another. Hitting a planet's atmosphere at this speed creates friction between the object and the air (remember, the frictional force is also related to velocity). The friction causes the projectile to slow down and heat up. In this process, its kinetic energy is transformed into thermal energy. (The melted rubber when you brake hard and your car skids to a stop is another situation where kinetic energy is turned into heat.) The incoming object heats up and begins to glow. Due to the shock of hitting the atmosphere and the sudden increase in temperature, it may break into many pieces. This is also why a spaceship re-entering the atmosphere from orbit heats up, making re-entry a critical and dangerous procedure.
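A back-of-the-envelope calculation makes the point; the mass and speed below are illustrative values chosen here, not taken from the text:

```python
mass = 1_000.0         # kg, a small rocky meteoroid (illustrative)
velocity = 20_000.0    # m/s, i.e. 20 km/s, a mid-range impact speed

kinetic_energy = 0.5 * mass * velocity**2  # KE = (1/2) m v^2, in joules
tons_tnt = kinetic_energy / 4.184e9        # 1 ton of TNT = 4.184e9 J

print(f"{kinetic_energy:.2e} J, about {tons_tnt:.0f} tons of TNT")
# prints "2.00e+11 J, about 48 tons of TNT"
```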
Scientists distinguish between the pieces of interplanetary bodies that reach the Earth's atmosphere, and the fraction of them that actually hit the Earth. Meteors, or so-called "shooting stars," are typically pea-sized and smaller particles that burn up in the atmosphere and do not hit the ground. Meteorites are larger rocky or metallic bodies, or pieces of them, that survive passage through the atmosphere and hit the ground. Thousands of meteorites have been collected and studied, many of which you can see in museums and planetariums. They are free samples of the distant reaches of the solar system. The words meteor and meteorite come from the same root as the word meteorology, or the study of weather. For hundreds of years, people thought that shooting stars were purely terrestrial phenomena that originated in the Earth's atmosphere.
When scientists study a meteorite, they recognize that it is just a fragment of something larger and try to deduce the nature of the object it came from. This larger object is called the parent body. Studies of meteorites prove that most of their parent bodies are asteroids (but occasionally, they have actually originated from the Moon or Mars). Asteroids collide with each other, as well as with planets, throughout geological time. The biggest collisions disrupt the asteroids and leave fragments drifting in space. Some of these fragments are perturbed onto paths that cross the Earth's orbit, and a few eventually reach the ground as meteorites. By studying both the mineral composition of meteorites, as well as the ratios of any gases trapped inside the meteorite, it is sometimes possible to determine a meteorite's parent body.
Asteroids are generally concentrated in the region between the orbits of Mars and Jupiter. This group of asteroids is called the main belt. Most comets originate either in the Kuiper Belt or in the much more distant Oort Cloud. Most meteor showers are caused by bits of debris dislodged from comets and left behind as they sweep through the Solar System. The debris is strewn unevenly along the entire path of the orbit, so when the Earth crosses the path it creates a meteor shower at the same time each year. The unevenness of the debris in space means the intensity of the meteor shower is variable and hard to predict.
The comet and asteroid parent bodies, along with the pieces of them that reach the Earth, can trace their origins to the birth of the Solar System 4.6 billion years ago. As the planets formed, the Solar System was filled with innumerable small, pre-planetary bodies, ranging up to 1,000 kilometers across. "Planetesimal" is a generic term used to refer to these pre-planetary bodies, without specifying whether they are icy or rocky. Thus, comets, asteroids, and their fragments all descended from the original planetesimals that formed the planets. The Sun contains 99.85% of the mass of the Solar System. Jupiter accounts for 0.1%, and all the other planets together are another 0.04%. All the various interplanetary bodies amount to no more than 0.01%, or 1 part in 10,000, of the Solar System mass. Yet they can have spectacular effects on the Earth, and on life itself.
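As a sanity check, the quoted mass fractions do account for the whole Solar System:

```python
# Percent of Solar System mass, as quoted in the text.
sun, jupiter, other_planets, debris = 99.85, 0.10, 0.04, 0.01

assert abs((sun + jupiter + other_planets + debris) - 100.0) < 1e-9
assert abs(debris / 100.0 - 1.0 / 10_000.0) < 1e-12  # 0.01% is 1 part in 10,000
```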
Asteroid Ida with its tiny moonlet Dactyl.
http://www.researchgate.net/researcher/55656868_L_Oriol

# L. Oriol
Cea Leti, Grenoble, Rhône-Alpes, France
## Publications (16) · 7.46 total impact
##### Article: Experimental Verification of the Fission Chamber Gamma Signal Suppression by the Campbelling Mode
ABSTRACT: For the on-line monitoring of high fast neutron fluxes in the presence of a strong thermal neutron component, SCK·CEN and CEA are jointly developing a Fast Neutron Detector System, based on ²⁴²Pu fission chambers as sensors and including dedicated electronics and data processing systems. Irradiation tests in the BR2 reactor of ²⁴²Pu fission chambers operating in current mode showed that in typical MTR conditions the fission chamber currents are dominated by the gamma contribution. In order to reduce the gamma contribution to the signal, it was proposed to use the fission chambers in Campbelling mode. An irradiation experiment in the BR2 reactor with a ²⁴²Pu and a ²³⁵U fission chamber, both equipped with a suitable cable for measurements in Campbelling mode, proved the effectiveness of the suppression of the gamma-induced signal component by the Campbelling mode: gamma contribution reduction factors of 26 for the ²³⁵U fission chamber and more than 80 for the ²⁴²Pu fission chamber were obtained. The experimental data also prove that photofission contributions are negligibly small. Consequently, in typical MTR conditions the gamma contribution to the fission chamber Campbelling signal can be neglected.
IEEE Transactions on Nuclear Science, 05/2011 · 1.22 Impact Factor
• ##### Article: New measurement system for on line in core high-energy neutron flux monitoring in materials testing reactor conditions.
ABSTRACT: Flux monitoring is of great interest for experimental studies in material testing reactors. Nowadays, only the thermal neutron flux can be monitored on line, e.g., using fission chambers or self-powered neutron detectors. In the framework of the Joint Instrumentation Laboratory between SCK·CEN and CEA, we have developed a fast neutron detector system (FNDS) capable of measuring on line the local high-energy neutron flux in fission reactor core and reflector locations. FNDS is based on fission chamber measurements in Campbelling mode. The system consists of two detectors, one mainly sensitive to fast neutrons and the other to thermal neutrons. On-line data processing uses the CEA depletion code DARWIN to disentangle the fast and thermal neutron components, taking into account the isotopic evolution of the fissile deposit. The first results of the FNDS experimental test in the BR2 reactor are presented in this paper. Several fission chambers have been irradiated up to a fluence of about 7 × 10^20 n/cm². Good agreement (less than 10% discrepancy) was observed between the FNDS fast flux estimate and the reference flux measurement.
The Review of scientific instruments 03/2011; 82(3):033504. · 1.52 Impact Factor
• ##### Article: Research Activities in Fission Chamber Modeling in Support of the Nuclear Energy Industry
ABSTRACT: Fission chambers are widely used in the nuclear industry. As an example, they play a major role in the control of any fission reactor and are thus regarded as a key component for ensuring its safety. They are also employed in material testing reactors for monitoring irradiations. We have recently started a research program whose objective is to improve the performance of these neutron detectors in terms of lifetime, calibration, and online diagnosis. In this paper, we present several studies carried out in order to model the signal delivered by a fission chamber. First, the simulation of the deposit evolution allowed us to select the most appropriate fissile material for a given spectrum and fluence. Second, we studied the impact of the bias voltage and filling gas characteristics on the charge collection time. Finally, the simulation of a pulse signal prior to amplification showed how important it is to have a satisfactory knowledge of the energy for creating ion pairs in order to accurately assess the signal in current or Campbelling mode.
IEEE Transactions on Nuclear Science 01/2011; · 1.22 Impact Factor
##### Conference Paper: Recent developments on micrometric fission chambers for high neutron fluxes
ABSTRACT: With the development of innovative nuclear systems and new-generation neutron sources, nuclear instrumentation must be adapted. For several years we have been developing microscopic fission chambers to study the transmutation of minor actinides in high thermal-neutron fluxes. The recent developments made to meet the demanding irradiation conditions are described in this paper, together with feedback from the measurements. Two installations were used: the HFR of the ILL, which offers the highest thermal neutron flux in the world, and the MEGAPIE target, which was the first 1 MW liquid Pb-Bi spallation target in the world.
Advancements in Nuclear Instrumentation Measurement Methods and their Applications (ANIMMA), 2009 First International Conference on; 07/2009
• ##### Conference Paper: Monitoring the fast neutrons in a high flux: the case for 242Pu fission chambers
ABSTRACT: Fission chambers are widely used for on-line monitoring of neutron fluxes in irradiation reactors. A selective measurement of a component of interest of the neutron flux is possible in principle thanks to a careful choice of the deposit material. However, measuring the fast component is challenging when the flux is high (up to 10^15 n/cm²/s) with a significant thermal component. The main problem is that the isotopic content of a material selected for its good response to fast neutrons evolves with irradiation, so that the material becomes more and more sensitive to thermal neutrons. Within the framework of the FNDS (Fast Neutron Detector System) project, we design tools that simulate the evolution of the isotopic composition and fission rate for several deposits under any given flux. In the case of a high flux with a significant thermal component, a comprehensive study of all possibilities shows 242Pu to be the best choice for measuring the fast component, as long as its purity is sufficient. If an estimate of the thermal flux is independently available, one can correct the signal for that component. This suggests a system of two detectors, one of which is used for such a correction. This is of very high interest when the detectors must be operated up to a high neutron fluence.
Advancements in Nuclear Instrumentation Measurement Methods and their Applications (ANIMMA), 2009 First International Conference on; 07/2009
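The deposit-evolution bookkeeping described above can be pictured with a minimal two-isotope depletion sketch: under irradiation, radiative capture on the parent isotope breeds a daughter that may be far more fissile with thermal neutrons, so the chamber's thermal sensitivity grows with fluence. This is a generic forward-Euler illustration, not the project's simulation tools; the isotopes are anonymous and the cross sections and flux are invented placeholder values.

```python
# Generic sketch (assumed numbers) of deposit evolution under irradiation:
# capture on the parent breeds a thermally fissile daughter, shifting the
# chamber's response toward thermal neutrons as fluence accumulates.

def evolve_deposit(n0, phi, sig_c0, sig_f0, sig_f1, t_end, dt):
    """Forward-Euler depletion of a parent/daughter pair.
    n0: initial parent atoms; phi: flux (n/cm^2/s); cross sections in cm^2."""
    parent, daughter, t = n0, 0.0, 0.0
    while t < t_end:
        cap = sig_c0 * phi * parent                    # capture rate breeding the daughter
        parent -= cap * dt
        daughter += (cap - sig_f1 * phi * daughter) * dt  # build-up minus burn-up
        t += dt
    fission_rate = phi * (sig_f0 * parent + sig_f1 * daughter)
    return parent, daughter, fission_rate

# Assumed: parent captures 20 b, fissions 0.05 b; daughter fissions 500 b (1 b = 1e-24 cm^2).
p, d, fr = evolve_deposit(n0=1e18, phi=1e14, sig_c0=20e-24, sig_f0=0.05e-24,
                          sig_f1=500e-24, t_end=1e7, dt=1e4)
print(f"parent left: {p:.3e}, daughter bred: {d:.3e}, fission rate: {fr:.3e}")
```

Even after only a few per cent of the parent is converted, the thermally fissile daughter dominates the total fission rate in this toy model, which is exactly why an independent thermal-flux measurement is needed to correct the signal.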
##### Conference Paper: Development and manufacturing of special fission chambers for in-core measurement requirements in nuclear reactors
ABSTRACT: The Dosimetry, Command Control and Instrumentation Laboratory (LDCI) at CEA/Cadarache is specialized in the development, design and manufacturing of miniature fission chambers (from 8 mm down to 1.5 mm in diameter). The specificity of the LDCI fission chamber workshop is its capacity to manufacture and distribute special fission chambers with fissile deposits other than U235 (typically Pu242, Np237, U238, Th232). We are also able to define the characteristics of the detector for any in-core measurement requirement: sensor geometry, fissile deposit material and mass, filling gas composition and pressure, and operating mode (pulse, current or Campbelling) with associated cable and electronics. The fission chamber design relies on numerical simulation and modeling tools developed by the LDCI. One of our present activities in fission chamber applications is to develop fast neutron flux instrumentation using Campbelling mode, dedicated to measurements in material testing reactors.
Advancements in Nuclear Instrumentation Measurement Methods and their Applications (ANIMMA), 2009 First International Conference on; 07/2009
• ##### Conference Paper: Experimental verification of the fission chamber gamma signal suppression by the Campbelling mode
ABSTRACT: For the on-line monitoring of high fast neutron fluxes in the presence of a strong thermal neutron component, SCK·CEN and CEA are jointly developing a Fast Neutron Detector System, based on 242Pu fission chambers as sensors and including dedicated electronics and data processing systems. Irradiation tests in the BR2 reactor of 242Pu fission chambers operating in current mode showed that in typical MTR conditions the fission chamber currents are dominated by the gamma contribution. In order to reduce the gamma contribution to the signal, it was proposed to use the fission chambers in Campbelling mode. An irradiation experiment in the BR2 reactor with a 242Pu and a 235U fission chamber, both equipped with a suitable cable for measurements in Campbelling mode, proved the effectiveness of the suppression of the gamma-induced signal component by the Campbelling mode: gamma contribution reduction factors of 26 for the 235U fission chamber and more than 80 for the 242Pu fission chamber were obtained. The experimental data also prove that photofission contributions are negligibly small. Consequently, in typical MTR conditions the gamma contribution to the fission chamber Campbelling signal can be neglected.
Advancements in Nuclear Instrumentation Measurement Methods and their Applications (ANIMMA), 2009 First International Conference on; 07/2009
• ##### Article: Joint estimation of the fast and thermal components of a high neutron flux with a two on-line detector system
ABSTRACT: A fission chamber with a Pu242 deposit is the detector best suited for on-line measurements of the fast component of a high neutron flux (~10^14 n/cm²/s or more) with a significant thermal component. To get the fast flux, it is, however, necessary to subtract the contribution of the thermal neutrons, which increases with fluence because of the evolution of the isotopic content of the deposit. This paper presents an algorithm that, using measurements provided by a Pu242 fission chamber and a detector for thermal neutrons, estimates the thermal and the fast flux at any time. An implementation allows it to be tested with simulated data.
Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 05/2009; 603(3):415–420. · 1.14 Impact Factor
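The unfolding idea behind a two-detector system can be pictured as solving, at each instant, a 2×2 linear system linking the two detector signals to the fast and thermal fluxes through the detectors' sensitivities. This is only an illustrative sketch of that step: the sensitivity values below are invented, and in the real system they evolve with the deposit and are tracked with a depletion code rather than held fixed.

```python
# Illustrative two-detector flux unfolding: each signal is modelled as
# s_i = a_i * phi_fast + b_i * phi_th, and the 2x2 system is solved by
# Cramer's rule. Sensitivities (a_i, b_i) are assumed placeholder values.

def unfold_flux(s1, s2, a1, b1, a2, b2):
    """Solve s1 = a1*pf + b1*pt and s2 = a2*pf + b2*pt for (pf, pt)."""
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-30:
        raise ValueError("detector sensitivities are degenerate")
    phi_fast = (s1 * b2 - s2 * b1) / det
    phi_th = (a1 * s2 - a2 * s1) / det
    return phi_fast, phi_th

# Assumed sensitivities: detector 1 mostly fast-sensitive (242Pu-like chamber),
# detector 2 mostly thermal-sensitive.
a1, b1 = 1.0e-13, 1.0e-15
a2, b2 = 1.0e-15, 2.0e-13
s1 = a1 * 5e14 + b1 * 1e14   # signals synthesised from known fluxes
s2 = a2 * 5e14 + b2 * 1e14
pf, pt = unfold_flux(s1, s2, a1, b1, a2, b2)
print(pf, pt)  # approximately recovers phi_fast = 5e14, phi_th = 1e14
```

The unfolding only works because the two sensitivity vectors are nearly orthogonal; if both detectors responded alike to the two components, the determinant would vanish and the system would be ill-conditioned, which is why one deliberately fast-favouring and one thermal-favouring detector are paired.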
• ##### Article: Fission Cross Sections of Minor Actinides and Application in Transmutation Studies
ABSTRACT: Fission cross sections of minor actinides are of great importance for the reduction of long-term nuclear waste radiotoxicity by transmutation. In this paper we present the results of measurements of the fission cross sections of three minor actinides, 238Np, 242gs-mAm and 245Cm, in the thermal energy range. These cross sections contribute significantly to the incineration of the 237Np, 241Am and 244Cm isotopes and show some discrepancies with nuclear data libraries and previous experiments.
04/2008;
• ##### Article: Reasons why Plutonium 242 is the best fission chamber deposit to monitor the fast component of a high neutron flux
ABSTRACT: The FNDS project aims at developing fission chambers to measure on line the fast component of a high neutron flux (~10^14 n/cm²/s or more) with a significant thermal component. We identify with simulations the fission chamber deposits best suited to this goal. We address the question of the evolution of the deposit by radiative capture and decay. A deposit of 242Pu appears to be the best choice, with a high initial sensitivity to fast neutrons that degrades only slowly under irradiation. The effect of unavoidable impurities was assessed: small concentrations of 241Pu and 239Pu can be tolerated.
Nuclear Instruments and Methods in Physics Research Section A Accelerators Spectrometers Detectors and Associated Equipment 01/2008; · 1.14 Impact Factor
##### Article: Measurements of thermal fission and capture cross sections of minor actinides within the Mini-INCA project
ABSTRACT: In the framework of nuclear waste transmutation studies, the Mini-INCA project has been initiated at CEA/DSM to determine optimal conditions for transmutation and incineration of minor actinides (MA) in high-intensity neutron fluxes in the thermal region. Our experimental tool is based on alpha- and gamma-spectroscopy of irradiated samples and microscopic fission chambers. It can provide both microscopic information on nuclear reactions (total and partial cross sections for neutron capture and/or fission reactions) and macroscopic information on transmutation and incineration potentials. The 232Th, 237Np, 241Am and 244Cm transmutation chains have been explored in detail, showing some discrepancies with evaluated data libraries but overall good agreement with recent experimental data.
http://dx.doi.org/10.1051/ndata:07612. 01/2008;
##### Article: Neutronic Characterization of the Megapie Target
ABSTRACT: The MEGAPIE project is one of the key experiments towards the feasibility of Accelerator Driven Systems. On-line operation and post-irradiation analysis will provide the scientific community with unique data on the behavior of a liquid spallation target under realistic irradiation conditions. Good neutronics performance of such a target is of primary importance for an intense neutron source, and the extended liquid-metal loop requires dedicated verification of the delayed neutron activity of the irradiated PbBi. In this paper we report on the experimental characterization of the MEGAPIE neutronics in terms of the prompt neutron (PN) flux inside the target and the delayed neutron (DN) flux on top of it. For the PN measurements, a complex detector, made of 8 microscopic fission chambers, was built and installed in the central part of the target to measure the absolute neutron flux and its spatial distribution. Moreover, integral information on the neutron energy distribution as a function of the position along the beam axis could be extracted, providing integral constraints on the neutron production models implemented in transport codes such as MCNPX. For the DN measurement, we used a standard 3He counter and acquired data during the start-up phase of the target irradiation in order to collect sufficient statistics at variable beam power. Experimental results obtained on the PN flux characteristics and their comparison with MCNPX simulations are presented, together with a preliminary analysis of the DN decay time spectrum.
12/2007;
##### Article: Neutronic performances of the MEGAPIE target
ABSTRACT: The MEGAPIE project is a key experiment on the road to Accelerator Driven Systems, and it provides the scientific community with unique data on the behavior of a liquid lead-bismuth spallation target under realistic and long-term irradiation conditions. The neutronics of such a target is of course of prime importance considering its final destination as an intense neutron source. This is the motivation for characterizing the neutron flux inside the target in operation. A complex detector, made of 8 micro fission chambers, has been built and installed in the core of the target, a few tens of centimeters from the proton/Pb-Bi interaction zone. This detector is designed to measure the absolute neutron flux inside the target, to give its spatial distribution and to correlate its temporal variations with the beam intensity. Moreover, integral information on the neutron energy distribution as a function of the position along the beam axis could be extracted, giving integral constraints on the neutron production models implemented in transport codes such as MCNPX.
05/2007;
##### Article: In-pile CFUZ53 sub-miniature fission chambers qualification in BR2 under PWR condition
ABSTRACT: First prototypes of the industrial version of the CEA sub-miniature fission chambers (1.5 mm outer diameter) for in-core detection of high thermal neutron fluxes (up to 4 × 10^14 n/(cm²·s)), manufactured by the PHOTONIS company and called CFUZ53, were tested in the CALLISTO loop of the BR2 reactor in PWR-like conditions. In this paper we present a first analysis of the recently obtained experimental results: neutron sensitivity, linearity to thermal neutron flux, current/voltage characteristics, gamma contribution, temperature effects and long-term behaviour (mechanical integrity, burn-up of uranium, …). We also compare the experimental data with calculation results from a theoretical fission chamber model. The preliminary analysis indicates that the CFUZ53 chambers deliver consistent signals in PWR conditions up to thermal neutron fluences beyond 2 × 10^20 n/cm².
01/2005;
• ##### Article: The choice of Pu242 fission chambers to monitor the fast neutrons in a high flux
ABSTRACT: Fission chambers are widely used for on-line measurements of neutron fluxes in irradiation reactors. When the flux is high with a significant thermal component, the measurement of the fast component is a concern and implies a careful choice of the deposit material. Indeed, most fissile materials have a much higher cross section for thermal neutrons than for fast ones. Moreover, a deposit chosen to favour fissions by fast neutrons will undergo radiative capture in the thermal domain, leading to a gradual modification of the isotopic composition of the deposit that may dramatically increase the fissions with thermal neutrons. Within the framework of the FNDS (Fast Neutron Detector System) project, we performed simulations of the evolution of the isotopic composition and fission rate for several deposits under high fluxes. We show that Pu242 is the best choice to extract the fast component from a flux with a significant thermal component, provided that the impurities in Pu241 and Pu239 are small enough. However, an independent measurement of the thermal component appears to be necessary to compensate for the evolution of the Pu242 deposit.
##### Article: Detailed studies of Minor Actinide transmutation-incineration in high-intensity neutron fluxes
ABSTRACT: The Mini-INCA project is dedicated to the measurement of incineration-transmutation chains and potentials of minor actinides in high-intensity thermal neutron fluxes. In this context, new types of detectors and methods of analysis have been developed. The 241Am and 232Th transmutation-incineration chains have been studied and several capture and fission cross sections measured very precisely, showing some discrepancies with existing or evaluated data. An impact study was made on different GEN-IV-type reactors. It underlines the necessity of proceeding to precise measurements for a large number of minor actinides that contribute to these future incineration scenarios.
#### Publication Stats
35 Citations 7.46 Total Impact Points
#### Institutions
• ###### Cea Leti
Grenoble, Rhône-Alpes, France | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8746415972709656, "perplexity": 2720.666092100275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400372202.67/warc/CC-MAIN-20141119123252-00233-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://ep.bmj.com/content/101/1/8 | Article Text
Simulation in paediatric training
1. Linda Clerihew1,
2. David Rowney2,
3. Jean Ker3,4
1. 1Ninewells Hospital, Dundee, UK
2. 2Scottish Centre for Simulation and Clinical Human Factors, Larbert, UK
3. 3National Lead for Clinical Skills and Simulation, Clinical Skills Managed Education Network, NHS Education for Scotland, Dundee, UK
4. 4College of Medicine, Dentistry and Nursing, Academic Business Development Hub, University of Dundee, Dundee, UK
1. Correspondence to Dr Jean Ker, College of Medicine, Dentistry and Nursing, Academic Business Development Hub, University of Dundee, Level 10, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK; j.s.ker{at}dundee.ac.uk
‘We are what we repeatedly do. Excellence, then, is not an act but a habit.’ (Aristotle)
## Introduction
This quote from the Greek philosopher Aristotle, from over 2000 years ago, is still relevant today in the context of paediatric medical education. Simulation is a tool that can reinforce standards of clinical practice as ‘habit’, contributing to trainees' development as paediatricians.
This review shares some of the issues related to learning in the paediatric service environment and demonstrates how simulation can add benefit and value to both the educational process and clinical service. We have structured the article around a series of questions, which will be of relevance to all those using simulation for paediatric training.
## What is simulation?
It is important to have a shared mental model of what we mean by simulation. David Gaba1 describes it as ‘a technique, not a technology to replace or amplify real experiences with guided experiences that evoke or replicate substantial aspects of the real world in a fully interactive manner’.
It provides a safe context where confidence as well as competence can be continuously enhanced with excellence as the goal for patient care.2 ,3
The term simulation can describe simulated patients (either manikins or actors) or simulated scenarios, run either in a simulated environment such as a simulation centre or in the clinical environment itself, which is known as in situ simulation.
The recognition of the importance of simulation in clinical education originated from its impact in other high-reliability organisations, such as aviation, the military and the oil industry.4 Increased understanding of systems thinking and of ensuring consistent, high-quality care in different healthcare settings has highlighted how simulation can be used for many purposes,5 including enhancing the design of new delivery systems.
## Why are we using simulation for paediatric training?
There are several drivers for using simulation in paediatric training. First are the hours available for training. Increased knowledge of the impact of stress and fatigue on safe decision-making and clinical performance6 has led to a decrease in trainees' service hours, with the knock-on effect of limiting their clinical experience. Complex rota systems are also reducing the continuity of training and the opportunities to ensure that issues identified from workplace-based assessments are followed up. Second, paediatric services are now target-driven in terms of throughput and cost, and this can impact on the time available for training,7 particularly on ward rounds and in clinics. Although the clinical environment in paediatrics offers the chance to learn from a large volume of common presentations,8 ,9 clinical signs resolve rapidly, patient turnover is often faster than in adult-based specialities, and time to develop clinical expertise is limited. It can also be challenging for trainees, at whatever level of expertise, to gain consent from parents to examine their often distressed, sick child.10 In the service environment, the patient is the priority (box 1).
Box 1
### Challenges for safe effective learning
• Lack of staff to cross cover training opportunities
• Reduced training hours
• Patients intimidated by large groups
• Young patients difficult to examine
• Rapid resolution of clinical signs
• Few very sick patients
• Trainee concerns about making mistakes in front of patients
Many countries are also experiencing increased demands on paediatric clinical services, which now encompass a wider remit in terms of the social responsibilities of child health and well-being.11
## What can simulation be used for in paediatric training?
Simulation in paediatric training can be used to enhance learning from practice. It provides an educational bridge to prepare trainees for the reality of practice12 while protecting patients. One of the most understated uses of simulation is the opportunity to develop generic skill competences such as self-awareness and critical thinking.13
In paediatrics, simulation has been used to develop technical skills such as procedural skills14–17 and non-technical skills such as team-working, communication, leadership, decision-making and situational awareness.18–20 Human factors training is exemplified by the Imperial Paediatric Emergency Training Toolkit which provides a psychometric tool for training in emergencies.21 There is also SPRinT (Simulated Paediatric Resuscitation Team Training) for health professionals run at the Royal Brompton and Royal Marsden.22
Common technical and non-technical skills that can be developed using simulation are highlighted in table 1.
Table 1
Technical and non-technical skills
Common scenarios that can be developed using simulation are shown in box 2, adapted from Ahmed et al.2
Box 2
### Scenarios for simulation
• Undertaking an invasive procedure such as a cannulation or lumbar puncture on an unwell toddler
• Leading a resuscitation or practising a disease-specific pathway, for example, sepsis
• Leading a clinical assessment of potential child sexual abuse
• Writing paediatric medicines and fluid prescriptions
• Performing the appropriate referrals and investigations after a SUDI
• Communication with child and family in relation to breaking bad news or discussing a CYPADM decision.
• Leading a debriefing with an interprofessional team following a failed resuscitation
• Managing a colleague with performance issues
• Explaining a medication error to child and family
CYPADM, Children and Young Persons Acute Deterioration Management Plan; SUDI, sudden unexpected death in infancy.
## What is the evidence that simulation works?
Most of the reported evidence to date relates to emergency situations.23 An enormous benefit of simulation is that skills can be built, elaborated on, reinforced and refined in line with the latest evidence, and can be learnt at relevant times using a systematic and structured approach within a curriculum.5 It also provides an opportunity for trainees to learn from other commonly reported adverse events such as medication errors, communication failures and failure to recognise and escalate care of deteriorating patients.
There has also been encouraging evidence of patient benefit in terms of reduced admissions and length of stay through the use of simulation.24 This reflects experience in other specialities: in obstetrics, Draycott et al have reported improved Apgar scores and better patient outcomes in perinatal shoulder dystocia.25 Simulation-based training has also led to decreased central venous catheter-associated line infections and insertion complications26 and to better patient outcomes following angioplasty.27
There is a growing body of evidence in paediatrics in relation to the use of in situ simulation training. These include improved cardiopulmonary arrest survival rates after the introduction of simulated mock codes,28 earlier recognition of sick patients with more rapid escalation to the paediatric intensive care unit and decreased mortality24 after the introduction of in situ team training for paediatric emergencies, and decreased serious safety events in an emergency department.29
The international trend towards providing a seamless integrated competency-based paediatric programme from undergraduate through postgraduate to continuing professional development in clinical training9 ,30 provides a framework for simulation to be used to its maximum potential.
## What are the benefits of simulation in paediatric training?
The following summarises the current benefits of simulation for teaching and learning in the paediatric environment reported in the international literature. Between 10% and 30% of paediatric inpatients are harmed by the care they receive.31 In ‘Why Children Die’, the confidential enquiry report into Maternal and Child Health in the UK, 26% of child deaths were identified as avoidable.32 These provide powerful learning lessons for trainees, particularly in relation to variation in the system and latent safety threats.4 Latent safety threats are potential hazards that lie unrecognised and dormant, often for a significant period of time, before contributing to a significant safety event. One of the benefits of simulation is that it provides trainees with a framework to analyse such incidents, identify active and latent failures, redesign the system and test out options in a simulated environment to avoid any further unintended harm.
For junior trainees, simulation can be used to gain confidence in taking histories or examining young patients; for more experienced trainees, it can provide a breadth of exposure to rarer presentations, for example, paediatric resuscitations, breaking bad news and safeguarding. Simulation enables a critical event to be deconstructed into learnable chunks so that generic competences such as leadership, prioritisation and communication can be explored and refined. These opportunities can be used as part of an integrated educational package using a ‘digital toolbox’ with instant messaging and 24 h access to education and learning, for example via webexes, podcasts and vod-casts. Simulation is increasingly used as part of a blended learning approach to enhance skills retention and prevent skill decay.33
## What are the challenges of using simulation in paediatric training?
Despite growing enthusiasm for simulation, there continue to be barriers to its widespread adoption. The most commonly cited reasons are resource constraints: time, finance or an inability to access simulation centres or equipment.
Setting up a simulation centre and faculty is indeed resource intensive. Costs include the purchase of equipment, physical space, faculty time and staff time to attend simulation events. Data on the costs of simulation are infrequently reported: a systematic review of studies of simulation-based learning found that only 6% reported cost and a meagre 1.6% compared cost with other educational interventions.34 This paucity of data makes it difficult to convince managers in control of healthcare budgets to invest in the initial outlay. They could perhaps be persuaded more readily by looking at return on investment.
Cohen et al35 demonstrated an annual cost of simulation of $112 000 and savings of over $700 000 as a result of reduced central line blood stream infections after a simulation-based education programme. This represents a 7:1 return on investment.
With regard to paediatric outcomes, potential savings include preventing a death, a significant safety event or an admission to intensive care, and reducing length of hospital stay. They also include offsetting legal costs and the ongoing medical costs of caring for a child who has been significantly harmed, and the years of functional gain to the workforce for those who survive unharmed should be considered. Less catastrophic but perhaps more frequent savings would be efficiency gains from refining processes, removing duplication or unnecessary work, and better medication practices. However, if the driver for simulation-based training programmes is patient safety, then perhaps the ultimate goal is not financial benefit but the ethical gain of preventing a significant event, perhaps a death: doing the right thing and ‘first doing no harm’.
## What educational underpinning do you need to be aware of when designing a simulation-based session?
Any simulation-based learning activity should be constructed around four linked components:
• a briefing
• an immersion in the simulation-based experience
• a debrief
• an action plan to change or transform performance.
Each simulation-based learning session is best planned using a standard template (box 3) so that you can build up a series of simulation-based learning activities in which there is constructive alignment between the intended learning outcomes (ILOs), the immersion, the debrief and the feedback.
Box 3
### Simulation building template
• Identify the learning need to be addressed
• Describe the logistics for the activity—participants needed, equipment required
• Define the learning outcomes and underpinning educational theory for your proposed teaching session
• Setting and background information
• Brief for narrator, participants and simulated patient
• Immersion in the simulation event—consider engagement, level of control, safety
• Expected activity/response/intervention to observe—that is, what do we expect to happen?
• Conclusion of session using simulation
• Approach to debrief and feedback
The ILOs determine which educational theory underpins the session, for example behaviourism (cardiopulmonary resuscitation training) or constructionism (building meaning from a consultation).
Simulation is often described in terms of its fidelity, usually as high, mid or low fidelity, without real understanding of what this means, because the term is often used interchangeably with reality. Fidelity describes how closely a simulated learning activity resembles real practice. However, greater fidelity does not necessarily lead to better learning; it is the reality of the simulation activity which influences learning. Dieckmann et al36 describe reality in simulation in terms of the physical (the smell, look and feel of the simulated environment), the semantical (how the story of the simulation activity is constructed, i.e. what is accepted as believable) and the phenomenological (how participants feel and think in the simulation activity).
### The briefing
The ILOs are key drivers in developing any simulation-based learning activity. Advantages of ILOs are shown in box 4. Unplanned simulation-based learning activity (‘let’s get the manikin out and do some practice’) is likely to result in unintended learning and be educationally unsafe for your learners.
Box 4
### Advantages of intended learning outcomes
• Ensure curriculum coverage
• Inform learners of what they should achieve
• Inform teachers of what they should help learners to achieve
• Reflect the nature and characteristics of the profession into which the learner is being inducted
• Direct the delivery of feedback
• Align teaching with assessment and clarify what will be assessed
### The immersion
A number of other theoretical concepts need to be considered in the use of simulation to enhance learning for professional practice. Kneebone37 describes a theory of simulation as a conceptual space in which to build a safe learning environment. Deliberate practice theories link rehearsal to improvement and expertise.38,39 This is well exemplified by the use of rapid cycle deliberate practice training in paediatric resuscitation40 and acute paediatric scenarios, in relation to repetition, practice3 and performance.41 It is important to establish ground rules with the learner at the start of the immersion to ensure their safety.
### The debrief
Most commonly, a faculty member provides time and space for the debrief. Debriefing and feedback form a two-way process, and the role of the debriefer is to enhance performance through discussion. There are many models for debriefing and feedback (box 5).42
Box 5
### Best evidence of what promotes learning using simulation
• Providing constructive feedback
• Allowing repetitive practice
• Integrating simulation into a curricular programme
• Providing a range of difficulties
• Adapting to multiple learning strategies
• Providing a range of clinical scenarios
• Ensuring safe, educationally supportive learning environment
• Facilitating active learning based in individual needs
Issenberg et al,43 BEME report.
### The action plan
Part of the feedback process is developing an action plan to transfer learning from the simulated environment to the workplace.42
## What type of simulation is best?
This depends on your ILOs. Are they technical skills (venipuncture, lumbar puncture or examination of specific systems) or non-technical skills (communication, leadership, situation awareness), or are you testing a process or system? The relative merits and shortcomings of simulators, simulated patients and simulated environments are described in tables 2 and 3.
Table 2
Advantages and disadvantages of different types of simulators and patients supporting simulation-based learning
Table 3
Evidencing up-to-date certification in both advanced paediatric and neonatal life support courses is now a mandatory competency for progression in paediatric training in the UK and many other countries. These courses largely involve removing clinicians from their work environment to undertake a course in a simulated environment.
It appears that knowledge improves in those attending,44 but it is unclear if this translates into improved outcomes for patients.45,46 A recent study reports improved clinical performance tool scores with a reconstructed paediatric advanced life support (PALS) programme, in which the standard teaching is divided into six simulations delivered on separate dates over a period of weeks, compared with standard PALS.38 It is unclear what, if any, additional benefit is conferred on patients by running simulations in the clinical environment versus the skills centre environment, but it would appear logical that this is the natural progression. The key difference is that teams face a clinical scenario appropriate to their practice, in their environment, with their knowledge, skills and equipment as they would be should that situation happen that day. Groups in Cincinnati and Sydney have highlighted the additional benefit of identifying latent failures during the simulations which, when corrected, should mitigate the failure for the next event.29,47 These centres build into the design of their simulations events from previous safety threats or identified latent failures. Despite this theoretical advantage, one of the centres has not demonstrated a superiority of in situ simulation over skills centre simulation for setting up extracorporeal membrane oxygenation for children.20
## How do I use simulation effectively? Tips for getting started with simulation in paediatrics:
Before launching your own simulation programme, you should ask yourself a series of questions. The authors offer some thoughts below, based on their own experiences.
1. What do you want your learners to learn (your ILOs)? Once these are agreed, continue to ensure your simulation reflects them. The debriefing should be guided to ensure the ILOs are the take-home message.
2. Who is your faculty? Are they skilled enough to be running educationally safe experiences, or do they require further training? The authors would not support faculty without training in simulation building and debriefing. It is also recommended that faculty continue to have their skills reviewed through refresher training, peer review and meta-debriefing.
3. What equipment will you need? Less can be more. With ongoing advances in technology, it can be very tempting to reach for the newest high fidelity manikins. However, this takes skill, experience and time to work well. Critically assess if you can achieve your learning outcomes with more basic equipment and if so stick to it.
4. Are you going to use video for debriefing? Is this necessary to achieve your ILOs? Many teams introduce this into their simulations once both faculty and teams are more comfortable. For many staff, video feels threatening and the intended additional benefit to learning may be compromised.
5. Can staff be released from the clinical area, have rotas been arranged accordingly, do you need study leave?
6. Do you intend linking the simulation session to educational portfolios? Will trainees require individualised feedback, completion of Workplace Based Assessments or certificates of attendance?
7. There are some additional questions you need to give special consideration to if you anticipate in situ simulation.
8. Have the rest of the educational and clinical teams bought into this idea?
9. How are you going to ensure your clinical environment, staff and patients are prepared? How do patients feel about simulation in the bed beside them? Do your staff still feel this is a ‘safe’ learning environment? We have found that patients have wholeheartedly supported in situ simulation; however, we have ensured that simulations are run in cubicles or vacated bays to allow educational safety for staff.
10. Are you training only the team that works in this environment? It is unlikely that a team external to the unit will gain the same benefit; such a team may benefit more from a simulation centre.
## Further development
It is anticipated that the role of simulation will continue to grow in paediatrics, particularly that of in situ simulation. The opportunities afforded by simulation in paediatrics are limitless: from simulated patients and parents for the novice to interview and examine, to team training in paediatric emergencies, refining processes (for example, personal protective equipment compliance or setting up equipment), and breaking bad news or communication scenarios.
The authors remain concerned that teams adopt these innovative educational opportunities without first asking themselves the questions above, and there is an ongoing need for faculty development courses. Despite our reservations, there is a role for sharing prewritten simulations, with the understanding that these will need adaptation to the environment of their intended use; methods of sharing them should be explored. The authors support the development of the simulation group at the Royal College of Paediatrics and Child Health and its intended creation of a bank of simulations. Fundamentally, the role of simulation in paediatrics is to support quality improvement in both training and patient safety. As such, we encourage rapid dissemination and widespread sharing of good practice: we support the use of free open access medical education resources and of social media, and we encourage peer support, review and learning from each other. We would encourage teams to publish the results of their endeavours in simulation to further shape the role it plays in the paediatric curriculum and day-to-day practice throughout the world, and we would in particular support the sharing of cost-effective practices.
## Footnotes
• Contributors LC drafted the first review of the paediatric undergraduate use of simulation and the in situ simulation and contributed to the introduction. JK carried out the literature search and wrote the general review of simulation in paediatrics, the abstract and the summary. DR wrote the review of the national simulation centre work and the mobile unit.
• Competing interests None declared.
• Provenance and peer review Commissioned; externally peer reviewed.
https://worldwidescience.org/topicpages/a/abstract+interpretation+results.html | #### Sample records for abstract interpretation results
DEFF Research Database (Denmark)
Sergey, Ilya; Devriese, Dominique; Might, Matthew
2013-01-01
to instrument an analysis with high-level strategies for improving precision and performance, such as abstract garbage collection and widening. While the paper itself runs the development for continuationpassing style, our generic implementation replays it for direct-style lambda-calculus and Featherweight Java...
2. Abstract Interpretation and Attribute Grammars
DEFF Research Database (Denmark)
The objective of this thesis is to explore the connections between abstract interpretation and attribute grammars as frameworks in program analysis. Abstract interpretation is a semantics-based program analysis method. A large class of data flow analysis problems can be expressed as non-standard ...... is presented in the thesis. Methods from abstract interpretation can also be used in correctness proofs of attribute grammars. This proof technique introduces a new class of attribute grammars based on domain theory. This method is illustrated with examples....
3. Abstract Interpretation of Mobile Ambients
DEFF Research Database (Denmark)
Hansen, René Rydhof; Jensen, J. G.; Nielson, Flemming
1999-01-01
We demonstrate that abstract interpretation is useful for analysing calculi of computation such as the ambient calculus (which is based on the π-calculus); more importantly, we show that the entire development can be expressed in a constraint-based formalism that is becoming exceedingly popular...
4. Abstract Interpretation Using Attribute Grammar
DEFF Research Database (Denmark)
1990-01-01
This paper deals with the correctness proofs of attribute grammars using methods from abstract interpretation. The technique will be described by defining a live-variable analysis for a small flow-chart language and proving it correct with respect to a continuation style semantics. The proof...
5. Abstract Interpretation as a Programming Language
DEFF Research Database (Denmark)
2013-01-01
examine different programming styles and ways to represent states. Abstract interpretation is primarily a technique for derivation and specification of program analysis. As with denotational semantics we may also view abstract interpretations as programs and examine the implementation. The main focus...... in this paper is to show that results from higher-order strictness analysis may be used more generally as fixpoint operators for higher-order functions over lattices and thus provide a technique for immediate implementation of a large class of abstract interpretations. Furthermore, it may be seen...
6. Abstract Interpretation as a Programming Language
Directory of Open Access Journals (Sweden)
2013-09-01
Full Text Available In David Schmidt's PhD work he explored the use of denotational semantics as a programming language. It was part of an effort to not only treat formal semantics as specifications but also as interpreters and input to compiler generators. The semantics itself can be seen as a program and one may examine different programming styles and ways to represent states. Abstract interpretation is primarily a technique for derivation and specification of program analysis. As with denotational semantics we may also view abstract interpretations as programs and examine the implementation. The main focus in this paper is to show that results from higher-order strictness analysis may be used more generally as fixpoint operators for higher-order functions over lattices and thus provide a technique for immediate implementation of a large class of abstract interpretations. Furthermore, it may be seen as a programming paradigm and be used to write programs in a circular style.
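The record's central idea, reusing a generic fixpoint computation over lattices as the engine of an abstract interpretation, can be sketched in a few lines. This is an illustrative example of my own (the sign lattice and transfer function are not from the paper):

```python
# A minimal abstract interpretation run as an ordinary program: Kleene
# iteration to a least fixpoint over a tiny finite sign lattice.

# Four-point sign lattice: BOT <= NEG, POS <= TOP
BOT, NEG, POS, TOP = "bot", "neg", "pos", "top"

def join(a, b):
    """Least upper bound in the sign lattice."""
    if a == BOT:
        return b
    if b == BOT:
        return a
    return a if a == b else TOP

def abs_negate(x):
    """Abstract transfer function for unary negation."""
    return {BOT: BOT, NEG: POS, POS: NEG, TOP: TOP}[x]

def lfp(f, bottom=BOT):
    """Least fixpoint by Kleene iteration; terminates on a finite lattice."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Abstract semantics of `x = 1; while ...: x = -x` -- x may end up pos or neg.
result = lfp(lambda x: join(POS, abs_negate(x)))  # -> "top"
```

Written this way, the analysis itself is just a program over lattice values, which is the "abstract interpretation as a programming language" view the record takes.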
7. Static analysis of software the abstract interpretation
CERN Document Server
Boulanger, Jean-Louis
2013-01-01
The existing literature currently available to students and researchers is very general, covering only the formal techniques of static analysis. This book presents real examples of the formal technique called "abstract interpretation" currently being used in various industrial fields: railway, aeronautics, space, automotive, etc. The purpose of this book is to present students and researchers, in a single book, with the wealth of experience of people who are intrinsically involved in the realization and evaluation of software-based safety critical systems. As the authors are people curr
8. Towards Abstract Interpretation of Epistemic Logic
DEFF Research Database (Denmark)
Ajspur, Mai; Gallagher, John Patrick
applicable to infinite models. The abstract model-checker allows model-checking with infinite-state models. When applied to the problem of whether M |= φ, it terminates and returns the set of states in M at which φ might hold. If the set is empty, then M definitely does not satisfy φ, while if the set is non...
9. Automated, computer interpreted radioimmunoassay results
International Nuclear Information System (INIS)
Hill, J.C.; Nagle, C.E.; Dworkin, H.J.; Fink-Bennett, D.; Freitas, J.E.; Wetzel, R.; Sawyer, N.; Ferry, D.; Hershberger, D.
1984-01-01
90,000 radioimmunoassay results have been interpreted and transcribed automatically using software developed for use on a Hewlett Packard Model 1000 mini-computer system with conventional dot matrix printers. The computer program correlates the results of a combination of assays, interprets them and prints a report ready for physician review and signature within minutes of completion of the assay. The authors designed and wrote a computer program to query their patient data base for radioassay laboratory results and to produce a computer-generated interpretation of these results, using an algorithm that produces normal and abnormal interpretives. Their laboratory assays 50,000 patient samples each year using 28 different radioassays, of which 85% have been interpreted using the computer program. Allowances are made for drug and patient history, and individualized reports are generated with regard to the patient's age and sex. Finalization of reports is still subject to change by the nuclear physician at the time of final review. Automated, computerized interpretations have realized cost savings through reduced personnel and personnel time, and have provided uniformity of the interpretations among the five physicians. Prior to computerization, all radioassay results had to be dictated and reviewed for signing by one of the resident or staff physicians, and turnaround times for reports were generally two to three days, whereas the computerized interpretation system generally allows reports to be issued the day assays are completed.
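The kind of rule the record describes, an automatic normal/abnormal interpretive checked against a reference interval before physician sign-off, can be sketched as follows. This is a hypothetical illustration of mine; the function name and reference-range values are invented and do not come from the paper.

```python
# Hypothetical sketch: flag one assay result against a reference interval,
# the basic step behind automatic "normal"/"abnormal" interpretives.

def interpret(value, low, high):
    """Return an automatic interpretive for one assay result."""
    if value < low:
        return "abnormal: below reference range"
    if value > high:
        return "abnormal: above reference range"
    return "normal"

# Invented reference range for illustration only.
assert interpret(5.2, low=4.0, high=11.0) == "normal"
assert interpret(12.5, low=4.0, high=11.0) == "abnormal: above reference range"
```

In the system the record describes, such rules would additionally be conditioned on drug history, age and sex before a report is generated for review.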
10. Interpretation for scales of measurement linking with abstract algebra.
Science.gov (United States)
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2014-01-01
The Stevens classification of levels of measurement involves four types of scale: "Nominal", "Ordinal", "Interval" and "Ratio". This classification has been used widely in medical fields and has accomplished an important role in composition and interpretation of scale. With this classification, levels of measurements appear organized and validated. However, a group theory-like systematization beckons as an alternative because of its logical consistency and unexceptional applicability in the natural sciences but which may offer great advantages in clinical medicine. According to this viewpoint, the Stevens classification is reformulated within an abstract algebra-like scheme; 'Abelian modulo additive group' for "Ordinal scale" accompanied by 'zero', 'Abelian additive group' for "Interval scale", and 'field' for "Ratio scale". Furthermore, a vector-like display arranges a mixture of schemes describing the assessment of patient states. With this vector-like notation, data-mining and data-set combination is possible on a higher abstract structure level based upon a hierarchical-cluster form. Using simple examples, we show that operations acting on the corresponding mixed schemes of this display allow for a sophisticated means of classifying, updating, monitoring, and prognosis, where better data mining/data usage and efficacy is expected.
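The proposed link between scale types and their admissible operations can be made concrete with a small lookup table. This is a hedged sketch of mine mirroring the algebraic hierarchy the record proposes, not code from the paper.

```python
# Each Stevens level permits a superset of the operations of the level below,
# echoing the algebraic hierarchy: ordered set -> additive group -> field.

PERMITTED = {
    "nominal":  {"eq"},                                     # equality only
    "ordinal":  {"eq", "lt"},                               # plus ordering
    "interval": {"eq", "lt", "add", "sub"},                 # plus differences
    "ratio":    {"eq", "lt", "add", "sub", "mul", "div"},   # plus ratios
}

def allowed(scale: str, op: str) -> bool:
    """Is the operation meaningful at this level of measurement?"""
    return op in PERMITTED[scale]

# Ratios of interval-scale values (e.g. Celsius temperatures) are not
# meaningful; ratios of ratio-scale values (e.g. Kelvin) are.
assert not allowed("interval", "div")
assert allowed("ratio", "div")
```

The hierarchy makes it mechanical to check whether a statistic (a mean, a ratio, a rank) is legitimate for data measured on a given scale.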
11. Abstracts, Third Space Processing Symposium, Skylab results
Science.gov (United States)
1974-01-01
Skylab experiments results are reported in abstracts of papers presented at the Third Space Processing Symposium. Specific areas of interest include: exothermic brazing, metals melting, crystals, reinforced composites, glasses, eutectics; physics of the low-g processes; electrophoresis, heat flow, and convection demonstrations flown on Apollo missions; and apparatus for containerless processing, heating, cooling, and containing materials.
12. Control-Flow Analysis of Function Calls and Returns by Abstract Interpretation
DEFF Research Database (Denmark)
Midtgaard, Jan; Jensen, Thomas P.
, effectively approximating where function calls return across optimized tail calls. The analysis is systematically calculated by abstract interpretation of the stack-based CaEK abstract machine of Flanagan et al. using a series of Galois connections. Abstract interpretation provides a unifying setting in which...
13. Abstract Interpretation of PIC programs through Logic Programming
DEFF Research Database (Denmark)
Henriksen, Kim Steen; Gallagher, John Patrick
2006-01-01
, are applied to the logic based model of the machine. A small PIC microcontroller is used as a case study. An emulator for this microcontroller is written in Prolog, and standard programming transformations and analysis techniques are used to specialise this emulator with respect to a given PIC program....... The specialised emulator can now be further analysed to gain insight into the given program for the PIC microcontroller. The method describes a general framework for applying abstractions, illustrated here by linear constraints and convex hull analysis, to logic programs. Using these techniques on the specialised...
14. Semantic interpretation of search engine resultant
Science.gov (United States)
Nasution, M. K. M.
2018-01-01
In semantics, a logical language can be interpreted in various forms, but certainty of meaning is embedded in uncertainty, which always directly influences the role of technology. One result of this uncertainty applies to search engines as user interfaces to information spaces such as the Web. Therefore, the behaviour of search engine results should be interpreted with certainty through semantic formulation as interpretation. The behaviour formulation shows there are various interpretations that can be carried out semantically, whether temporary, by inclusion, or by repetition.
15. Control-flow analysis of function calls and returns by abstract interpretation
DEFF Research Database (Denmark)
Midtgaard, Jan; Jensen, Thomas P.
2012-01-01
Abstract interpretation techniques are used to derive a control-flow analysis for a simple higher-order functional language. The analysis approximates the interprocedural control-flow of both function calls and returns in the presence of first-class functions and tail-call optimization. In additi...... a rational reconstruction of a constraint-based CFA from abstract interpretation principles....
16. Abstract Interpretation-based verification/certification in the ciaoPP system
OpenAIRE
Puebla Sánchez, Alvaro Germán; Albert Albiol, Elvira; Hermenegildo, Manuel V.
2005-01-01
CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm (Constraint) Logic Programming system. It uses modular, incremental abstract interpretation as a fundamental tool to obtain information about programs. In CiaoPP, the semantic approximations thus produced have been applied to perform high- and low-level optimizations during program compilation, including transformations such as multiple abstract specialization, parallelization, partial evaluation, resource...
17. Control-flow analysis of function calls and returns by abstract interpretation
DEFF Research Database (Denmark)
Midtgaard, Jan; Jensen, Thomas P.
2009-01-01
We derive a control-flow analysis that approximates the interprocedural control-flow of both function calls and returns in the presence of first-class functions and tail-call optimization. In addition to an abstract environment, our analysis computes for each expression an abstract control stack......, effectively approximating where function calls return across optimized tail calls. The analysis is systematically calculated by abstract interpretation of the stack-based CaEK abstract machine of Flanagan et al. using a series of Galois connections. Abstract interpretation provides a unifying setting in which...
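The Galois connections used to calculate such an analysis can be illustrated in miniature with an abstraction/concretization pair between sets of integers and a sign lattice. This is an example of mine, far simpler than the CaEK machine derivation the record describes.

```python
# A tiny Galois connection: concrete domain = sets of integers (over a small
# test universe), abstract domain = the sign lattice {bot, neg, pos, top}.

def alpha(s):
    """Abstraction: the best sign description of a set of integers."""
    if not s:
        return "bot"
    if all(n > 0 for n in s):
        return "pos"
    if all(n < 0 for n in s):
        return "neg"
    return "top"

def gamma(a):
    """Concretization over a small test universe of integers."""
    universe = range(-3, 4)
    return {
        "bot": set(),
        "pos": {n for n in universe if n > 0},
        "neg": {n for n in universe if n < 0},
        "top": set(universe),
    }[a]

# Soundness direction of the Galois connection: s is contained in gamma(alpha(s)).
s = {1, 2, 3}
assert s <= gamma(alpha(s))
```

In a calculated analysis, every abstract transfer function is derived as (or checked against) the composition `alpha . f . gamma` of a concrete transition `f`, which is what makes the result correct by construction.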
18. Heartbeat Classification Using Abstract Features From the Abductive Interpretation of the ECG.
Science.gov (United States)
Teijeiro, Tomas; Felix, Paulo; Presedo, Jesus; Castro, Daniel
2018-03-01
This paper aims to prove that automatic beat classification on ECG signals can be effectively solved with a pure knowledge-based approach, using an appropriate set of abstract features obtained from the interpretation of the physiological processes underlying the signal. A set of qualitative morphological and rhythm features are obtained for each heartbeat as a result of the abductive interpretation of the ECG. Then, a QRS clustering algorithm is applied in order to reduce the effect of possible errors in the interpretation. Finally, a rule-based classifier assigns a tag to each cluster. The method has been tested with the MIT-BIH Arrhythmia Database records, showing a significantly better performance than any other automatic approach in the state-of-the-art, and even improving most of the assisted approaches that require the intervention of an expert in the process. The most relevant issues in ECG classification, related to a large extent to the variability of the signal patterns between different subjects and even in the same subject over time, will be overcome by changing the reasoning paradigm. This paper demonstrates the power of an abductive framework for time-series interpretation to make a qualitative leap in the significance of the information extracted from the ECG by automatic methods.
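The final rule-based stage the abstract describes, assigning a tag to each QRS cluster from qualitative features, might look like the following toy sketch. The feature names and rules here are invented for illustration and are not taken from the paper.

```python
# Toy rule-based classifier over abstract, qualitative beat features
# (hypothetical feature vocabulary: "rhythm" and "qrs" morphology).

def classify_cluster(features):
    """Assign a heartbeat class from abstract, qualitative features."""
    if features.get("rhythm") == "premature" and features.get("qrs") == "wide":
        return "ventricular"
    if features.get("rhythm") == "premature":
        return "supraventricular"
    return "normal"

assert classify_cluster({"rhythm": "premature", "qrs": "wide"}) == "ventricular"
assert classify_cluster({"rhythm": "regular", "qrs": "narrow"}) == "normal"
```

The point of the paper's design is that such rules operate on abstract features produced by abductive interpretation, and on clusters rather than raw beats, which dampens the effect of individual interpretation errors.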
19. Interpreting Results from the Multinomial Logit Model
DEFF Research Database (Denmark)
Wulff, Jesper
2015-01-01
This article provides guidelines and illustrates practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both interpretation and communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
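The quantities the article recommends interpreting, predicted probabilities rather than raw coefficients, come from the softmax form of the multinomial logit. A minimal sketch with assumed (invented) coefficients:

```python
# Predicted probabilities in a multinomial logit: softmax over linear
# indices, with the baseline category's coefficients fixed at zero.
import math

def mnl_probabilities(x, coefs):
    """P(choice j | x) for an MNL with one baseline category."""
    utilities = [0.0] + [sum(b * xi for b, xi in zip(beta, x)) for beta in coefs]
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Two covariates, three outcomes (baseline + two alternatives);
# the coefficient values here are illustrative only.
probs = mnl_probabilities([1.0, 0.5], coefs=[[0.2, -0.1], [1.0, 0.3]])
assert abs(sum(probs) - 1.0) < 1e-9
```

Plotting such probabilities (and their changes, i.e. marginal effects) over a grid of covariate values is the graphical interpretation step the article advocates, since MLM coefficients alone do not even fix the sign of an effect on a given outcome's probability.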
20. Static Safety for an Actor Dedicated Process Calculus by Abstract Interpretation
OpenAIRE
Garoche, Pierre-Loïc; Pantel, Marc; Thirioux, Xavier
2006-01-01
The actor model eases the definition of concurrent programs with non uniform behaviors. Static analysis of such a model was previously done in a data-flow oriented way, with type systems. This approach was based on constraint set resolution and was not able to deal with precise properties for communications of behaviors. We present here a new approach, control-flow oriented, based on the abstract interpretation framework, able to deal with communication of behaviors. W...
1. WIMP Dark Matter interpretation of Higgs results
CERN Document Server
Wang, Renjie; The ATLAS collaboration
2017-01-01
Results from searches for dark matter, either directly through invisible decays of the Higgs boson or in production in association with a Higgs boson at the LHC, are presented. No significant excess is found beyond the Standard Model prediction, and upper limits are set on the production cross section times branching fraction using data collected in proton-proton collisions at a center-of-mass energy of 13 TeV by the ATLAS and CMS detectors. The limits are also interpreted as an upper limit on the allowed dark matter-nucleon scattering cross section.
2. Abstracts
Institute of Scientific and Technical Information of China (English)
2011-01-01
The Western Theories of War Ethics and Contemporary Controversies Li Xiaodong U Ruijing (4) [ Abstract] In the field of international relations, war ethics is a concept with distinct westem ideological color. Due to factors of history and reality, the in
3. Abstracts
Institute of Scientific and Technical Information of China (English)
2017-01-01
Supplementary Short Board: Orderly Cultivate Housing Leasing Market WANG Guangtao (Former Minister of Ministry of Construction) Abstract: In December 2016, the Central Economic Work Conference proposed that, to promote the steady and healthy development of the real estate market, China should adhere to the position that "houses are for living in, not for speculation". At present, the development of the housing leasing market in China lags behind. It is urgent to improve housing conditions in large cities and promote the urbanization of small and medium-sized cities. Therefore, it is imperative to innovate, supplement this short board and accelerate the development of the housing leasing market.
4. Spatial distance effects on incremental semantic interpretation of abstract sentences: evidence from eye tracking.
Science.gov (United States)
Guerra, Ernesto; Knoeferle, Pia
2014-12-01
5. Abstracts
Institute of Scientific and Technical Information of China (English)
2012-01-01
Strategic Realism: An Option for China's Grand Strategy Song Dexing (4) [Abstract] As a non-Western emerging power, China should positively adapt its grand strategy to the strategic psychological traits of the 21st century, maintain a realist tone consistent with the national conditions of China, and avoid adventurist policies while remaining aware of both strategic strengths and weaknesses. In the 21st century, China's grand strategy should be based on such core values as security, development, peace and justice, focusing on development in particular; we name this "strategic realism". Given the profound changes in China and the world, strategic realism encourages an active foreign policy to safeguard the long-term national interests of China. Following the self-help logic and the fundamental values of security and prosperity, strategic realism takes national interests as its top priority. It advocates the smart use of power, and aims to achieve its objectives by optimizing both domestic and international conditions. From the perspective of diplomatic philosophy, strategic realism is not a summarization of concrete policies but a description of the orientations of China's grand strategy in the new century. [Key Words] China, grand strategy, strategic realism [Author] Song Dexing, Professor, Ph.D. Supervisor, and Director of the Center for International Strategic Studies, University of International Studies of PLA.
6. A Logic Programming Based Approach to Applying Abstract Interpretation to Embedded Software
DEFF Research Database (Denmark)
Henriksen, Kim Steen
applied to analyse programs developed for embedded systems. A given embedded system is modelled as an emulator written in a variant of logic programming called constraint logic programming (CLP). The emulator is specialised with respect to a given program, which results in a new program in...... a programming paradigm with a solid mathematical foundation. One of its characteristics is the separation of logic (the meaning of a program) and control (how the program is executed), which makes logic programming a very suitable language for program analysis. In this thesis, logic programming is...... written in the CLP language which is at the same time isomorphic to the program written for the embedded system. When abstract interpretation based analysers are applied to the specialised program, results from this analysis can be transferred directly to the embedded program, since this program and the specialised...
7. ABSTRACT
African Journals Online (AJOL)
descriptive statistics, Z-test and multiple regression analysis. Results show that the ... protein intake from animal sources, which is less than 10 gm/capita/day ... production of fish and fish products through effective fisheries extension efforts ...
8. Abstract
African Journals Online (AJOL)
PROF. OLIVER OSUAGWA
Many mathematical models of stochastic dynamical systems were based on the assumption that the drift ... stochastic process with state space S is a ..... The algorithm was implemented as a MATLAB script and the result of the simulation is.
9. abstract
Directory of Open Access Journals (Sweden)
user
2016-02-01
Full Text Available Introduction: One of the microbiological preparations used for this study was Effective Microorganisms (EM), a commercial mixture of photosynthesizing bacteria, Actinomycetes, lactic acid bacteria, yeasts and fermenting fungi. The microbiological composition of the EM concentrate includes Streptomyces albus, Propionibacterium freudenreichii, Streptococcus lactis, Aspergillus oryzae, Mucor hiemalis, Saccharomyces cerevisiae and Candida utilis. Moreover, EM also contains an unspecified amount of Lactobacillus sp., Rhodopseudomonas sp. and Streptomyces griseus. Effective Microorganisms have a positive effect on the decomposition of organic matter, limiting putrefaction, increasing the nitrogen and phosphorus content in the root medium of plants, improving soil fertility and, as a result, contributing to the growth and development of the root systems of plants. Selection of almond vegetative rootstocks for water-stress tolerance is important for almond crop production in arid and semi-arid regions. The study of the eco-morphological characteristics that determine the success of a rootstock in a particular environment is a powerful tool for both agricultural management and breeding purposes. The aim of this work was to select new rootstocks for water-shortage tolerance and to study the impact of water stress as well as Effective Microorganisms (EM) on the morphological characteristics of almond rootstocks. Materials and Methods: To this end, the effects of water stress and EM on the morphological characteristics of almond rootstocks were studied in the Department of Horticulture, Ferdowsi University of Mashhad, in 2011-2012. The experiment was carried out with four replications in a completely randomized block design to study the effects of two concentrations of EM (0 and 1%), three irrigation levels (normal irrigation (100%, control) and irrigation after depletion of 33 and 66% of available water), and four almond rootstocks including GF
10. ABSTRACTS
Institute of Scientific and Technical Information of China (English)
2011-01-01
Wu Jie and Duan Yanchao. The current line drawing of Laterolog and its application. PI, 2011, 25(4): 1-4. The current line plays an important role in directly understanding the characteristics of a Laterolog tool. A method of drawing current lines from discrete potential data based on finite-element calculation is studied. It solves a series of key problems, including the selection of the step length, the identification of direction, the treatment of mutation points and the control of stopping. A drawing program was written in MATLAB. Taking the current-line drawing of dual Laterolog logging as an example, we analyse the tool's investigation characteristics in several formations, such as homogeneous formations, formations with low or high invasion, and invasion with shoulder beds. These results verify the effectiveness of the new method. The method can be applied to other kinds of Laterolog tools to draw their current lines and analyse their investigation characteristics.
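The record above describes a current-line drawing algorithm in terms of step length, direction finding and stop control. A minimal sketch of the same idea, tracing a line through a gridded potential by repeatedly stepping along the local current direction, might look as follows (the function name, grid and step size are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def trace_current_line(potential, start, step=0.1, max_steps=500):
    """Trace a single current line through a gridded potential field.

    The direction at each point is the local current direction -grad(phi);
    the line advances with a fixed step length and stops at the grid
    boundary or where the field vanishes (the "control of stop").
    """
    gy, gx = np.gradient(potential)           # derivatives along rows, columns
    ny, nx = potential.shape
    y, x = start
    path = [(y, x)]
    for _ in range(max_steps):
        iy, ix = int(round(y)), int(round(x))
        if not (0 <= iy < ny and 0 <= ix < nx):
            break                             # left the grid: stop
        dy, dx = -gy[iy, ix], -gx[iy, ix]     # current flows down-potential
        norm = np.hypot(dy, dx)
        if norm < 1e-12:
            break                             # stagnant field: stop
        y += step * dy / norm                 # fixed step length along the
        x += step * dx / norm                 # unit current direction
        path.append((y, x))
    return path

# Toy field: a point-source-like potential, so current lines run radially.
yy, xx = np.mgrid[0:21, 0:21]
phi = -np.hypot(yy - 10.0, xx - 10.0)
path = trace_current_line(phi, (10.0, 12.0))
```

Starting right of the source, the traced line runs radially outward until it reaches the grid boundary.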
11. ABSTRACT
Directory of Open Access Journals (Sweden)
Michelle de Stefano Sabino
2011-12-01
Full Text Available This paper aims to describe and analyze the integration observed in the Sintonia project, comparing its project management processes with the Stage-Gate® model. The literature addresses these issues conceptually but lacks an alignment between them that is evident in practice. The method used was a single case study, reporting on the Sintonia project developed by PRODESP - Data Processing Company of São Paulo. The results show the integration of project management processes with the Stage-Gate model developed during the project life cycle. The formalization of the project was defined in stages, which allowed the exploitation of economies of repetition and recombination in the development of new projects. This study contributes a technical perspective on the integration of project management processes. It was concluded that this system represents an attractive way, in terms of creating economic value and technological innovation, for the organization.
12. abstract
Directory of Open Access Journals (Sweden)
abstract
2016-07-01
0, 500 and 1000 µl L-1 and 3 levels of MeSA including 0, 0.1 and 0.2 mM. After treatment, the fruits were inoculated with a Botrytis suspension and transferred to storage, and quality parameters were evaluated after 7, 14 and 21 days. At each sampling time, disease incidence, weight loss, titratable acidity, pH, soluble solids content, vitamin C and antioxidant activity were measured. Results and Discussion: The results showed that both LEO and MeSA treatments had significant effects on the inhibition of mycelium growth under in-vitro conditions (p < 0.05). The inhibition rate of mycelium growth improved significantly with increasing LEO and MeSA concentration (Table 1). In the in-vivo assessment, the disease incidence of fruits treated with 500 µl L-1 LEO and 0.1 mM MeSA was 32% and 64% lower than in untreated fruits, respectively (Figs. 1 and 2). During the storage period, the percentage of infected fruits increased. In addition, LEO and MeSA treatments affected quality parameters of strawberry fruits, including titratable acidity, soluble solids content, vitamin C and antioxidant activity. Treated fruits had a higher content of soluble solids, vitamin C and antioxidant activity in comparison to untreated fruits (Tables 3 and 4). Ascorbic acid probably decreased through fungal infection due to cell wall breakdown during storage. Any factor, such as an essential oil or a salicylate, that inhibits fungal growth can help preserve vitamin C in stored products. High levels of vitamin C and antioxidant activity were observed in fruits treated with 0.1 mM MeSA and 500 µl L-1 LEO. In controlling the weight loss of fruits, 0.2 mM MeSA and 500 µl L-1 LEO had significant effects, although MeSA was more effective than LEO, possibly due to reduction of respiration rates and fungal infection (Table 4). Therefore, LEO and MeSA, with their fungicidal effects, could replace synthetic fungicides in controlling fungal diseases of strawberry and maintaining fruit quality during storage. Conclusion: In
13. Misleading reporting and interpretation of results in major infertility journals.
Science.gov (United States)
Glujovsky, Demian; Sueldo, Carlos E; Borghi, Carolina; Nicotra, Pamela; Andreucci, Sara; Ciapponi, Agustín
2016-05-01
14. Crosshole investigations: Hydrogeological results and interpretations
International Nuclear Information System (INIS)
Black, J.H.; Holmes, D.C.; Brightman, M.A.
1987-12-01
The Crosshole Programme was an integrated geophysical and hydrogeological study of a limited volume of rock (known as the Crosshole Site) within the Stripa mine. Borehole radar, borehole seismic and hydraulic methods were developed for specific application to fractured crystalline rock. The hydrogeological investigations contained both single-borehole and crosshole test techniques. A novel technique, using a sinusoidal variation of pressure, formed the main method of crosshole testing and was assessed during the programme. The strategy of crosshole testing was strongly influenced by the results from the geophysical measurements. The longer-term, larger-scale hydrogeological response of the region was assessed by examining the variation of heads over the region. These were responding to the presence of an old drift. A method of overall assessment involving minimising the divergence from a homogeneous response yielded credible values of hydraulic conductivity for the rock as a whole. (orig./DG)
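The sinusoidal crosshole technique described above amounts to extracting the amplitude attenuation and phase lag of a received pressure oscillation at the forcing frequency. A hedged illustration of that signal-processing step, fitting a sine of known frequency by linear least squares, could look like this (all names and numbers are synthetic, not Stripa data):

```python
import numpy as np

def amplitude_phase(signal, t, freq):
    """Fit A*sin(2*pi*freq*t + phase) to a sampled signal by linear least squares."""
    w = 2.0 * np.pi * freq
    # a*sin(wt) + b*cos(wt) is linear in (a, b); then A = hypot(a, b), phase = atan2(b, a).
    design = np.column_stack([np.sin(w * t), np.cos(w * t)])
    a, b = np.linalg.lstsq(design, signal, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

# Synthetic records: the receiver sees an attenuated, delayed copy of the
# sinusoidal pressure excitation plus a little noise.
t = np.linspace(0.0, 10.0, 2001)
f = 0.5  # forcing frequency, Hz
rng = np.random.default_rng(0)
source = np.sin(2.0 * np.pi * f * t)
receiver = 0.3 * np.sin(2.0 * np.pi * f * t - 1.2) + 0.01 * rng.standard_normal(t.size)

A_src, ph_src = amplitude_phase(source, t, f)
A_rec, ph_rec = amplitude_phase(receiver, t, f)
attenuation = A_rec / A_src   # amplitude ratio between receiver and source
phase_lag = ph_src - ph_rec   # lag of the received oscillation, radians
```

In an actual crosshole interpretation the attenuation and lag would then be inverted for hydraulic properties of the intervening rock; that inversion step is outside this sketch.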
15. Rhetorical Interpretation of Abstracts in Sci-Tech Theses Based on Burke's Identification Theory
Science.gov (United States)
Zhong, Jihong
2017-01-01
The abstract of a thesis is a brief and accurate representation of the thesis, with the important function of persuading readers to read the thesis itself. So how the writer constructs the abstract and wins readers' recognition is our main focus. On the basis of Burke's Identification Theory, this paper analyzed 10 abstracts from "Nature" from…
16. Convergence results for a class of abstract continuous descent methods
Directory of Open Access Journals (Sweden)
Sergiu Aizicovici
2004-03-01
Full Text Available We study continuous descent methods for the minimization of Lipschitzian functions defined on a general Banach space. We establish convergence theorems for those methods which are generated by approximate solutions to evolution equations governed by regular vector fields. Since the complement of the set of regular vector fields is $\sigma$-porous, we conclude that our results apply to most vector fields in the sense of Baire category.
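The continuous descent methods in this record are generated by evolution equations of the form x'(t) = -V(x(t)). As a toy illustration (not the paper's Banach-space setting), an explicit Euler discretization of the gradient flow for a smooth convex function can be sketched as:

```python
import numpy as np

def gradient_flow(grad, x0, dt=0.01, steps=2000):
    """Explicit Euler discretization of the descent equation x'(t) = -grad f(x(t))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * grad(x)   # step against the gradient direction
    return x

# Minimize f(x) = ||x||^2 / 2, whose gradient is x: the flow decays to 0.
x_final = gradient_flow(lambda x: x, [3.0, -4.0])
```

For this quadratic the iterates contract by a factor (1 - dt) per step, so after 2000 steps the trajectory has essentially reached the minimizer at the origin.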
17. Invariant Measures for Dissipative Dynamical Systems: Abstract Results and Applications
Science.gov (United States)
Chekroun, Mickaël D.; Glatt-Holtz, Nathan E.
2012-12-01
In this work we study certain invariant measures that can be associated to the time-averaged observation of a broad class of dissipative semigroups via the notion of a generalized Banach limit. Consider an arbitrary complete separable metric space X which is acted on by any continuous semigroup {S(t)}_{t≥0}. Suppose that {S(t)}_{t≥0} possesses a global attractor A. We show that, for any generalized Banach limit LIM_{T→∞} and any probability distribution of initial conditions m_0, there exists an invariant probability measure m, whose support is contained in A, such that $\int_X \varphi(x)\, dm(x) = \underset{T \rightarrow \infty}{\mathrm{LIM}}\ \frac{1}{T}\int_0^T \int_X \varphi(S(t)x)\, dm_0(x)\, dt$, for all observables φ living in a suitable function space of continuous mappings on X. This work is based on the framework of Foias et al. (Encyclopedia of Mathematics and its Applications, vol 83, Cambridge University Press, Cambridge, 2001); it generalizes and simplifies the proofs of more recent works (Wang in Disc Cont Dyn Syst 23(1-2):521-540, 2009; Lukaszewicz et al. in J Dyn Diff Eq 23(2):225-250, 2011). In particular our results rely on the novel use of a general but elementary topological observation, valid in any metric space, which concerns the growth of continuous functions in the neighborhood of compact sets. In the case when {S(t)}_{t≥0} does not possess a compact absorbing set, this lemma allows us to sidestep the use of weak compactness arguments, which require the imposition of cumbersome weak continuity conditions and thus restrict the phase space X to the case of a reflexive Banach space. Two examples of concrete dynamical systems where the semigroup is known to be non-compact are examined in detail. We first consider the Navier-Stokes equations with memory in the diffusion terms. This is the so-called Jeffreys model, which describes certain classes of viscoelastic fluids. We then consider a family of neutral delay differential
18. Mutagenicity in drug development: interpretation and significance of test results.
Science.gov (United States)
Clive, D
1985-03-01
The use of mutagenicity data has been proposed and widely accepted as a relatively fast and inexpensive means of predicting long-term risk to man (i.e., cancer in somatic cells, heritable mutations in germ cells). This view is based on the universal nature of the genetic material, the somatic mutation model of carcinogenesis, and a number of studies showing correlations between mutagenicity and carcinogenicity. An uncritical acceptance of this approach by some regulatory and industrial concerns is over-conservative, naive, and scientifically unjustifiable on a number of grounds: Human cancers are largely life-style related (e.g., cigarettes, diet, tanning). Mutagens (both natural and man-made) are far more prevalent in the environment than was originally assumed (e.g., the natural bases and nucleosides, protein pyrolysates, fluorescent lights, typewriter ribbon, red wine, diesel fuel exhausts, viruses, our own leukocytes). "False-positive" (relative to carcinogenicity) and "false-negative" mutagenicity results occur, often with rational explanations (e.g., high threshold, inappropriate metabolism, inadequate genetic endpoint), and thereby confound any straightforward interpretation of mutagenicity test results. Test battery composition affects both the proper identification of mutagens and, in many instances, the ability to make preliminary risk assessments. In vitro mutagenicity assays ignore whole animal protective mechanisms, may provide unphysiological metabolism, and may be either too sensitive (e.g., testing at orders-of-magnitude higher doses than can be ingested) or not sensitive enough (e.g., short-term treatments inadequately model chronic exposure in bioassay). Bacterial systems, particularly the Ames assay, cannot in principle detect chromosomal events which are involved in both carcinogenesis and germ line mutations in man. Some compounds induce only chromosomal events and little or no detectable single-gene events (e.g., acyclovir, caffeine
19. The Interpretability of Inconsistency: Feferman's Theorem and Related Results
NARCIS (Netherlands)
Visser, Albert
This paper is an exposition of Feferman's Theorem concerning the interpretability of inconsistency and of further insights directly connected to this result. Feferman's Theorem is a strengthening of the Second Incompleteness Theorem. It says, in metaphorical paraphrase, that it is not just the case
1. Interpretation of Chemical Pathology Test Results in Paediatrics ...
African Journals Online (AJOL)
Whenever we interpret paediatric chemical pathology test results we must take into consideration a number of factors which are specific to paediatric patients. Such factors include the paediatric patient's age, which may range from prematurity to above 18 years, and the paediatric patient's body weight ...
2. The interpretation of proverbs by elderly with high, medium and low educational level: Abstract reasoning as an aspect of executive functions
Science.gov (United States)
Wachholz, Thalita Bianchi de Oliveira; Yassuda, Mônica Sanches
2011-01-01
It is now known that cognitive functions tend to decline with age. Executive functions (EF) are among the first abilities to decline with aging. A subcomponent of the EF is abstract reasoning. The Test of Proverbs is an instrument that can be used to evaluate the capacity of abstract reasoning. Objective To examine the association of performance in interpretation of proverbs, with education and with episodic memory and EF tasks. Methods A total of 67 individuals aged between 60 and 75 years were evaluated, and divided into three categories of education: 1-4 years, 5-8 years, and 9 or more years of schooling. The instruments used were a sociodemographic questionnaire (gender, age, marital status, education, income, previous occupation, current occupation and health perception), the Mini Mental State Examination, Brief Cognitive Screening Battery; Geriatric Depression Scale; Forward and Backward Digit Span (WAIS-III), and the Test of Proverbs. Results A high impact of education was seen on the interpretation of proverbs, with lower performance among the elderly with less education. A significant association between performance on the Test of Proverbs and scores on the MMSE, GDS, and verbal fluency tests was found. There was a modest association with incidental memory. Conclusions The capacity to interpret proverbs is strongly associated with education and with performance on other EF tasks. PMID:29213717
3. Abstract interpretation over non-deterministic finite tree automata for set-based analysis of logic programs
DEFF Research Database (Denmark)
Gallagher, John Patrick; Puebla, G.
2002-01-01
, and describe its implementation. Both goal-dependent and goal-independent analysis are considered. Variations on the abstract domains operations are introduced, and we discuss the associated tradeoffs of precision and complexity. The experimental results indicate that this approach is a practical way...
4. Guidelines to Interpret Results of Mechanical Blade Test
International Nuclear Information System (INIS)
Arias Vega, F.; Sanz Martin, J. C.
1999-01-01
This report shows the interpretation of full-scale rotor blade test results and describes the engineering testing models and coefficients for any feasible rotor blade design, in order to accept and certify any final manufactured blade as an allowable product, fit for use and operating with complete security during the entire wind turbine lifetime. This work was carried out at the Wind Energy Division of CIEMAT.DER and is based on the authors' technical experience in this field after many years of testing blades. This paper also contains results of the European Wind Turbine Standards II relevant to the European Project JOULE III R.D., in which the Wind Energy Division also took part as a participant. (Author)
5. Guidelines to Interpret Results of Mechanical Blade Test
Energy Technology Data Exchange (ETDEWEB)
Arias Vega, F.; Sanz Martin, J. C. [Ciemat, Madrid (Spain)
2000-07-01
This report shows the interpretation of full-scale rotor blade test results and describes the engineering testing models and coefficients for any feasible rotor blade design, in order to accept and certify any final manufactured blades as an allowable product, fit for use and operating with complete security during the entire wind turbine's lifetime. This work was carried out at the Wind Energy Division of CIEMAT.DER and is based on the author's technical experience in this field after many years of testing blades. This paper also contains results of the European Wind Turbine Standards II relevant to the European Project JOULE III R.D., in which the Wind Energy Division also took part as a participant. (Author)
6. Interpretation of Blood Microbiology Results - Function of the Clinical Microbiologist.
Science.gov (United States)
Kristóf, Katalin; Pongrácz, Júlia
2016-04-01
The proper use and interpretation of blood microbiology results may be one of the most challenging and one of the most important functions of clinical microbiology laboratories. Effective implementation of this function requires careful consideration of specimen collection and processing, pathogen detection techniques, and prompt and precise reporting of identification and susceptibility results. The responsibility of the treating physician is the proper formulation of the analytical request and providing the laboratory with complete and precise patient information, which are indispensable prerequisites for proper testing and interpretation. The clinical microbiologist can offer advice concerning the differential diagnosis, sampling techniques and detection methods to facilitate diagnosis. Rapid detection methods are essential, since the sooner a pathogen is detected, the better the patient's chance of being cured. Besides the gold-standard blood culture technique, microbiological methods that decrease the time to a relevant result are more and more widely used today. In the case of certain pathogens, the pathogen can be identified directly from the blood culture bottle after propagation, with serological or automated/semi-automated systems, molecular methods, or MALDI-TOF MS (matrix-assisted laser desorption-ionization time-of-flight mass spectrometry). Molecular biology methods are also suitable for the rapid detection and identification of pathogens from aseptically collected blood samples. Another important duty of the microbiology laboratory is to notify the treating physician immediately of all relevant information when a positive sample is detected. The clinical microbiologist may provide important guidance regarding the clinical significance of blood isolates, since one-third to one-half of blood culture isolates are contaminants or isolates of unknown clinical significance. To fully exploit the benefits of blood culture and other (non-culture
7. Interpreting the LHC Higgs search results in the MSSM
Energy Technology Data Exchange (ETDEWEB)
Heinemeyer, S. [Instituto de Fisica de Cantabria (CSIC-UC), Santander (Spain); Staal, O.; Weiglein, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2011-12-15
Recent results reported by the ATLAS and CMS experiments on the search for a SM-like Higgs boson both show an excess for a Higgs mass near 125 GeV, which is mainly driven by the γγ and ZZ* decay channels, but also receives some support from channels with a lower mass resolution. We discuss the implications of this possible signal within the context of the minimal supersymmetric Standard Model (MSSM), taking into account previous limits from Higgs searches at LEP, the Tevatron and the LHC. The consequences for the remaining MSSM parameter space are investigated. Under the assumption of a Higgs signal we derive new lower bounds on the tree-level parameters of the MSSM Higgs sector. We also discuss briefly an alternative interpretation of the excess in terms of the heavy CP-even Higgs boson, a scenario which is found to be still viable. (orig.)
8. Implications for monitoring: study designs and interpretation of results
International Nuclear Information System (INIS)
Green, R. H.; Montagna, P.
1996-01-01
Two innovative statistical approaches to the interpretation and generalization of the results from the study of long-term environmental impacts of offshore oil and gas exploration and production in the Gulf of Mexico were described. The first of the two methods, the Sediment Quality Triad approach, relies on a test of coherence of responses, whereas the second approach uses small scale spatial heterogeneity of response as evidence of impact. As far as the study design was concerned, it was argued that differing objectives which are demanded of the same study (e.g. generalization about environmental impact of similar platforms versus the spatial pattern of impact around individual platforms) are frequently in conflict. If at all possible, they should be avoided since the conflicting demands tend to compromise the design for both situations. 31 refs., 5 figs
9. Challenges in interpretation of thyroid hormone test results
Directory of Open Access Journals (Sweden)
Lalić Tijana
2016-01-01
Full Text Available Introduction. In interpreting thyroid hormone results, one should consider interference and changes in the concentration of the hormones' carrier proteins. Outline of Cases. We present two patients with a discrepancy between the results of thyroid function tests and clinical status. The first case is a 62-year-old patient with a nodular goiter and Hashimoto thyroiditis. Thyroid function tests showed low thyroid-stimulating hormone (TSH) and normal to low fT4. By determining thyroid status (TSH, T4, fT4, T3, fT3) in two laboratories, basal and after dilution, as well as thyroxine-binding globulin (TBG), it was concluded that the thyroid hormone levels were normal. The results were influenced by heterophile antibodies leading to a falsely lower TSH level and suspected secondary hypothyroidism. The second case, a 40-year-old patient, was examined and followed because of a thyroid nodule of variable size and an initially borderline elevated TSH, after which thyroid status showed low levels of total thyroid hormones and normal TSH. Based on additional analysis it was concluded that the low T4 and T3 were a result of low TBG. This is a hereditary genetic disorder with no clinical significance. Conclusion. Erroneous diagnosis of thyroid disorders and potentially harmful treatment could be avoided by proving the interference or TBG deficiency whenever there is a discrepancy between the thyroid function results and the clinical picture.
10. Theatre and Oral Interpretation: Abstracts of Doctoral Dissertations Published in "Dissertation Abstracts International," January through June 1982 (Vol. 42 Nos. 7 through 12).
Science.gov (United States)
ERIC Clearinghouse on Reading and Communication Skills, Urbana, IL.
This collection of abstracts is part of a continuing series providing information on recent doctoral dissertations. The 25 titles deal with a variety of topics, including the following: (1) the development of American theatre management practices between 1830 and 1896; (2) the aesthetics of audience response; (3) P. Picasso as a theatrical…
11. Interpretations of bullying by bullies, victims, and bully-victims in interactions at different levels of abstraction
NARCIS (Netherlands)
Pouwels, J.L.; Scholte, R.H.J.; Noorden, T.H.J. van; Cillessen, A.H.N.
2016-01-01
According to the Social Information Processing Model of children's adjustment, children develop general interpretation styles for future social events based on past social experiences. Previous research has shown associations between interpretations of social situations and internalizing and
12. Interpreting and Integrating Clinical and Anatomic Pathology Results.
Science.gov (United States)
Ramaiah, Lila; Hinrichs, Mary Jane; Skuba, Elizabeth V; Iverson, William O; Ennulat, Daniela
2017-01-01
The continuing education course on integrating clinical and anatomical pathology data was designed to communicate the importance of using a weight of evidence approach to interpret safety findings in toxicology studies. This approach is necessary, as neither clinical nor anatomic pathology data can be relied upon in isolation to fully understand the relationship between study findings and the test article. Basic principles for correlating anatomic pathology and clinical pathology findings and for integrating these with other study end points were reviewed. To highlight these relationships, a series of case examples, presented jointly by a clinical pathologist and an anatomic pathologist, were used to illustrate the collaborative effort required between clinical and anatomical pathologists. In addition, the diagnostic utility of traditional liver biomarkers was discussed using results from a meta-analysis of rat hepatobiliary marker and histopathology data. This discussion also included examples of traditional and novel liver and renal biomarker data implementation in nonclinical toxicology studies to illustrate the relationship between discrete changes in biochemistry and tissue morphology.
13. Interpretation of PISCES -- A RF antenna system experimental results
International Nuclear Information System (INIS)
Rothweil, D.A.; Phelps, D.A.; Doerner, R.
1995-10-01
The paper describes experimental data from rf coupling experiments using one- to four-coil antenna arrays that encircle a linear magnetized plasma column. Experimental results using single-turn coils that produce symmetric (i.e. m = 0), dipole (m = 1), and radial rf magnetic fields for coupling to ion waves are compared. By operating without a Faraday shield, it was observed for the first time that the plasma resistive load seen by these different antenna types tends to increase with at least the second power of the number of turns. A four-turn m = 0 coil experienced a record 3-5 Ω loading, corresponding to over 90% power coupling to the plasma. A four-turn m = 1 coil experienced up to 1-1.5 Ω loading, also higher than previous observations. First-time observations using a two-coil array of m = 0 coils are also reported. As predicted, the loading decreases with increasing phase between coils from 0° to 180°. Experiments using four-coil arrays were difficult to optimize and interpret, primarily owing to the complexity of manual tuning. To facilitate this optimization in the future, a proposed feedback control system that automatically matches load variations between 0.2 and 10 Ω is described
14. Uncertainty principle and the stable interpretation of spectrometric experiment results
International Nuclear Information System (INIS)
Zhukovskij, E.L.
1984-01-01
Two stable forms for recording the least-squares method, used for the evaluation of parameters during the automated processing and interpretation of spectra of various types, were derived on the basis of the Cramér-Rao inequality. Spectra described by linear equations are considered, for which the parameter evaluations are recorded in a final form. It is shown that the suggested form of the interpreting functional is maintained for spectra of different nature (NMR, IR, UV, RS and mass spectra), even when their parameters depend nonlinearly on the wave number
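For spectra that are linear in their parameters, as in the record above, the parameter evaluation reduces to a linear least-squares problem. A small illustrative sketch follows (the component line shapes and noise level are invented for the example, not taken from the paper):

```python
import numpy as np

def fit_linear_spectrum(design, spectrum):
    """Estimate component amplitudes of a spectrum that is linear in its
    parameters, using the SVD-based least-squares solver (a numerically
    stable alternative to forming the normal equations explicitly)."""
    coeffs, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
    return coeffs

# Two invented Gaussian line shapes as known components.
x = np.linspace(0.0, 10.0, 500)
line = lambda mu: np.exp(-0.5 * ((x - mu) / 0.4) ** 2)
design = np.column_stack([line(3.0), line(7.0)])
true_amplitudes = np.array([2.0, 0.5])
observed = design @ true_amplitudes + 0.01 * np.random.default_rng(1).standard_normal(x.size)

est = fit_linear_spectrum(design, observed)   # recovers the amplitudes
```

Because the lines are well separated and the noise is small, the recovered amplitudes closely match the true values; for overlapping lines the conditioning of the design matrix becomes the limiting factor, which is where the stable formulation matters.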
15. Interpretation of postmortem forensic toxicology results for injury prevention research.
Science.gov (United States)
Drummer, Olaf H; Kennedy, Briohny; Bugeja, Lyndal; Ibrahim, Joseph Elias; Ozanne-Smith, Joan
2013-08-01
Forensic toxicological data provides valuable insight into the potential contribution of alcohol and drugs to external-cause deaths. There is a paucity of material that guides injury researchers on the principles that need to be considered when examining the presence and contribution of alcohol and drugs to these deaths. This paper aims to describe and discuss strengths and limitations of postmortem forensic toxicology sample selection, variations in analytical capabilities and data interpretation for injury prevention research. Issues to be considered by injury researchers include: the circumstances surrounding death (including the medical and drug use history of the deceased person); time and relevant historical factors; postmortem changes (including redistribution and instability); laboratory practices; specimens used; drug concentration; and attribution of contribution to death. This paper describes the range of considerations for testing and interpreting postmortem forensic toxicology, particularly when determining impairment or toxicity as possible causal factors in injury deaths. By describing these considerations, this paper has application to decisions about study design and case inclusion in injury prevention research, and to the interpretation of research findings.
16. Case study on the results of image interpretation
International Nuclear Information System (INIS)
Fukuda, Morimichi; Fukuhisa, Kenjiro; Tateno, Yukio; Nair, Gopinathan; Sharma, S.M.; Padhy, Ajit Kumar; Shishido, Fumio
1996-01-01
Hepatocellular carcinoma (HCC) most commonly produces a focal mass lesion which is initially hypoechoic, but becomes more echogenic with increasing size. Texture pattern characteristically produces mosaic appearance and is often accompanied by posterior echo enhancement. In some cases, the tumor infiltrates widely through the liver substance, giving rise to an irregular pattern, invasion of veins, hepatic and portal, is common. The incidence of intra and extrahepatic dissemination is high in HCC with diffuse infiltration. Liver scintigraphy, sonographic findings, clinical history and laboratory investigation were analysed and interpreted
17. Pitfalls in the Interpretation of Blood Chemistry Results*
African Journals Online (AJOL)
1971-10-30
Oct 30, 1971 ... Apparently abnormal blood chemistry results may be caused by many factors other ... healthy persons and those with a particular disease. Difference between .... abnormal results. If the discrepancy is gross, the cause may.
18. Interpreting Results from the Standardized UXO Test Sites
National Research Council Canada - National Science Library
May, Michael; Tuley, Michael
2007-01-01
...) and the Environmental Security Technology Certification Program (ESCTP) to complete a detailed analysis of the results of testing carried out at the Standardized Unexploded Ordnance (UXO) Test Sites...
19. Test emission of uranium hexafluoride in atmosphere. Results interpretation
International Nuclear Information System (INIS)
Crabol, B.; Deville-Cavelin, G.
1989-01-01
To permit modelling of the behaviour of gaseous uranium hexafluoride in the atmosphere, a validation test was performed on 10 April 1987. The experimental conditions, the main results and a comparison with a diffusion model are given in this report [fr]
20. Seismic II over I Drop Test Program results and interpretation
Energy Technology Data Exchange (ETDEWEB)
Thomas, B.
1993-03-01
The consequences of non-seismically qualified (Category 2) objects falling and striking essential seismically qualified (Category 1) objects have always posed a significant, yet analytically difficult problem, particularly in evaluating the potential damage to equipment that may result from earthquakes. Analytical solutions for impact problems are conservative and available mostly for simple configurations. In a nuclear facility, the "sources" and "targets" requiring evaluation are frequently irregular in shape and configuration, making calculations and computer modeling difficult. Few industry or regulatory rules are available on this topic even though it is a source of considerable construction upgrade costs. A drop test program was recently conducted to develop a more accurate understanding of the consequences of seismic interactions. The resulting data can be used as a means to improve the judgment of seismic qualification engineers performing interaction evaluations and to develop realistic design criteria for seismic interactions. Impact tests on various combinations of sources and targets commonly found in one Savannah River Site (SRS) nuclear facility were performed by dropping the sources from various heights onto the targets. This report summarizes results of the Drop Test Program. Force and acceleration time history data are presented as well as general observations on the overall ruggedness of various targets when subjected to impacts from different types of sources.
3. Liquid pathways generic studies; results, interpretation, and design implications
International Nuclear Information System (INIS)
Walker, D.H.; Nutant, J.A.
1980-01-01
Offshore Power Systems and the Nuclear Regulatory Commission have evaluated dose consequences resulting from a release of radioactivity to liquid pathways following a postulated core-melt accident. The objective of these studies was to compare the risks from postulated core-melt accidents for the Floating Nuclear Plant with those for a typical land-based nuclear plant. Offshore Power Systems concluded that the differences in liquid pathway risks between plant types are not significant when compared with the air pathways risks. Air pathways risk is similar to or significantly larger than liquid pathways risk depending on the accident scenario. The Nuclear Regulatory Commission judged the liquid pathways risks from the Floating Nuclear Plant to be significantly greater than the liquid pathway risks for the typical land-based plant. Although OPS disagrees with the NRC judgment, design changes dictated by the NRC are being implemented by OPS
4. Microbiology of Olkiluoto groundwater. Results and interpretations 2007
International Nuclear Information System (INIS)
Pedersen, K.; Arlinger, J.; Eriksson, S.; Hallbeck, M.; Johansson, J.; Jaegevall, S.; Karlsson, L.
2008-09-01
was deemed important to start researching the prevalence of microbes, present in Olkiluoto groundwater and ONKALO slime, having the ability to produce complexing agents. The total amount of gas was found to increase with depth, as was the case in previous years. There was great variability in total gas volume over depth down to a depth of approximately 300 m, consistent with the results from 2005-2006. Three different methods were used to analyse the groundwater samples: TNC returns cell numbers, adenosine triphosphate (ATP) returns a measure of biomass, and cultivation returns a measure of microbe diversity and numbers. The outputs of these independent methods were found to correlate. ATP and TNC have previously been shown to correlate, but the demonstration of correlation between ATP and most probable number (MPN) cultivations is new and supports the quality of the MPN results. Adding a quantitative polymerase chain reaction (Q-PCR) method to groundwater investigations, combined with isolating and characterizing cultivable microorganisms from the highest dilutions of the MPN tubes, will reveal specific details about the diversity and activity of the studied populations. Q-PCR methods were successfully developed in 2007. A schematic model of the processes ongoing in the ONKALO slime has been postulated. Formaldehyde and other organic compounds from the grout additions and the methane promote the growth of methanotrophs and aerobic and iron-reducing microbes in the ONKALO slime. Oxygen can be derived from the air and ferric iron from iron oxides. Methanogens, located deep within the ONKALO slime where oxygen is depleted, produce methane as a final decomposition step after the organic carbon sources added with the grouting are degraded by the aerobic microbes. Sulphide is produced via sulphate reduction and precipitates with ferrous iron forming iron sulphide, which subsequently is converted to sulphuric acid in contact with air, causing pit corrosion of concrete. 
The
5. Interpreting faecal analysis results for monitoring exposure to uranium
International Nuclear Information System (INIS)
Berard, P.; Rongier, E.; Faure, M.L.; Auriol, B.; Estrabaud, M.; Mazeyrat, C.
1996-01-01
Radiotoxicological monitoring of workers exposed to non-transferable forms of uranium requires six-monthly examinations. These examinations are prescribed according to the kind of product manipulated and to the industrial risk attached to the workplace. The range of examinations useful for this kind of monitoring includes whole body counting examinations, urine analyses and in-line faecal sampling. Whole body examinations, which are fundamental to monitoring, provide a lung retention value; however, the detection limit of lung examinations is not low enough for chronic operational monitoring. Urine examinations are extremely sensitive to alpha activity (1 mBq per isotope), but the fraction detected in the urine after incorporation by inhalation is very small. In-line 24-hour faecal sampling avoids the need for any workplace exclusion. The authors present their experience acquired over a six-year period in the field of systematic faecal examinations after chronic inhalation of the different uranium compounds. They also present results of a study carried out to determine normal uranium concentrations in the faeces of a non-exposed population, the uranium content in drinking waters and the consequences on faecal excretion. Establishing the isotopic content of uranium in the faeces makes it possible to determine practical investigation levels for occupational monitoring. Even if faecal sampling may be perceived critically by the personnel, the authors' experience highlights the value of this kind of analysis, which makes it possible to track down the industrial reality of the exposure. Internal dosimetry calculations cannot, however, be carried out, because the physical parameters of the inhaled aerosols are not always known. (author)
6. Portero versus portador: Spanish interpretation of genomic terminology during whole exome sequencing results disclosure.
Science.gov (United States)
Gutierrez, Amanda M; Robinson, Jill O; Statham, Emily E; Scollon, Sarah; Bergstrom, Katie L; Slashinski, Melody J; Parsons, Donald W; Plon, Sharon E; McGuire, Amy L; Street, Richard L
2017-11-01
Describe modifications to technical genomic terminology made by interpreters during disclosure of whole exome sequencing (WES) results. Using discourse analysis, we identified and categorized interpretations of genomic terminology in 42 disclosure sessions where Spanish-speaking parents received their child's WES results either from a clinician using a medical interpreter, or directly from a bilingual physician. Overall, 76% of genomic terms were interpreted accordantly, 11% were misinterpreted and 13% were omitted. Misinterpretations made by interpreters and bilingual physicians included using literal and nonmedical terminology to interpret genomic concepts. Modifications to genomic terminology made during interpretation highlight the need to standardize bilingual genomic lexicons. We recommend Spanish terms that can be used to refer to genomic concepts.
7. (Re)interpreting LHC New Physics Search Results : Tools and Methods, 3rd Workshop
CERN Document Server
The quest for new physics beyond the SM is arguably the driving topic for LHC Run2. LHC collaborations are pursuing searches for new physics in a vast variety of channels. Although collaborations provide various interpretations for their search results, the full understanding of these results requires a much wider interpretation scope involving all kinds of theoretical models. This is a very active field, with close theory-experiment interaction. In particular, development of dedicated methodologies and tools is crucial for such scale of interpretation. Recently, a Forum was initiated to host discussions among LHC experimentalists and theorists on topics related to the BSM (re)interpretation of LHC data, and especially on the development of relevant interpretation tools and infrastructure: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/InterpretingLHCresults Two meetings were held at CERN, where active discussions and concrete work on (re)interpretation methods and tools took place, with valuable cont...
8. Converting Differential Photometry Results to the Standard System using Transform Generator and Transform Applier (Abstract)
Science.gov (United States)
Ciocca, M.
2016-12-01
(Abstract only) Since the fall of 2014, the AAVSO has made available two very useful software tools: transform generator (tg) and transform applier (ta). tg, authored by Gordon Myers (gordonmyers@hotmail.com), is a Python program that allows the user to obtain the transformation coefficients of their imaging train. ta, authored by George Silvis, allows users to apply the previously obtained transformation coefficients to their photometric observations. The data so processed then become directly comparable to those of other observers. I will show how to obtain transform coefficients using two standard fields (M 67 and NGC 7790) and how consistent the results are, and, as an application, I will present transformed data for two AAVSO target stars, AE UMa and RR Cet.
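As a rough illustration of the kind of correction ta applies, the standard colour-transform step for a differential V-band measurement can be sketched as below. The function name and all numbers are hypothetical illustrations, not actual tg or ta output.

```python
# Minimal sketch of applying a V-filter transform coefficient to a
# differential photometry measurement; values are invented for illustration.

def transform_delta_v(delta_v_inst, delta_bv, t_v):
    """Standard-system magnitude difference from an instrumental one.

    delta_v_inst : instrumental (target - comparison) v magnitude difference
    delta_bv     : standard (B - V) colour difference (target - comparison)
    t_v          : transform coefficient for the V filter (from tg, say)
    """
    return delta_v_inst + t_v * delta_bv

# Example with made-up numbers:
dv = transform_delta_v(delta_v_inst=-1.234, delta_bv=0.150, t_v=-0.05)
print(round(dv, 4))
```

The corrected value can then be added to the comparison star's catalogue magnitude to place the target on the standard system.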
9. Interpreting results of cluster surveys in emergency settings: is the LQAS test the best option?
Directory of Open Access Journals (Sweden)
Blanton Curtis
2008-12-01
Full Text Available Abstract Cluster surveys are commonly used in humanitarian emergencies to measure health and nutrition indicators. Deitchler et al. have proposed using Lot Quality Assurance Sampling (LQAS) hypothesis testing in cluster surveys to classify the prevalence of global acute malnutrition as exceeding or not exceeding pre-established thresholds. Field practitioners and decision-makers must clearly understand the meaning and implications of using this test when interpreting survey results to make programmatic decisions. We demonstrate that the LQAS test, as proposed by Deitchler et al., is prone to producing false-positive results and is thus likely to suggest interventions in situations where interventions may not be needed. As an alternative, to provide more useful information for decision-making, we suggest reporting the probability of an indicator exceeding the threshold as a direct measure of "risk". Such a probability can easily be determined in field settings using a simple spreadsheet calculator. The "risk" of exceeding the threshold can then be considered in the context of other aggravating and protective factors to make informed programmatic decisions.
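The "risk" quantity the authors recommend, the probability that the true prevalence exceeds a programme threshold, can be reproduced outside a spreadsheet with a normal approximation to the survey estimate. The estimate, standard error and threshold below are invented illustrations, not the authors' data or their calculator.

```python
import math

def prob_exceeds_threshold(estimate, se, threshold):
    """P(true prevalence > threshold) under a normal approximation,
    given a cluster-survey point estimate and its standard error."""
    z = (threshold - estimate) / se
    # 1 - Phi(z), with Phi the standard normal CDF expressed via erf
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# e.g. acute malnutrition estimated at 11% with SE 2%,
# against a hypothetical 10% intervention threshold:
risk = prob_exceeds_threshold(0.11, 0.02, 0.10)
print(f"{risk:.2f}")  # prints 0.69
```

A risk near 0.5 signals genuine uncertainty about the threshold, which is exactly the information a binary LQAS classification hides.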
10. Laboratory test result interpretation for primary care doctors in South Africa
Directory of Open Access Journals (Sweden)
2017-03-01
Full Text Available Background: Challenges and uncertainties with test result interpretation can lead to diagnostic errors. Primary care doctors are at a higher risk than specialists of making these errors, due to the range in complexity and severity of conditions that they encounter. Objectives: This study aimed to investigate the challenges that primary care doctors face with test result interpretation, and to identify potential countermeasures to address these. Methods: A survey was sent out to 7800 primary care doctors in South Africa. Questionnaire themes included doctors' uncertainty with interpreting test results, mechanisms used to overcome this uncertainty, challenges with appropriate result interpretation, and perceived solutions for interpreting results. Results: Of the 552 responses received, challenges with result interpretation were estimated to arise in an average of 17% of diagnostic encounters. The most commonly reported challenges were not receiving test results in a timely manner (51% of respondents) and previous results not being easily available (37%). When faced with diagnostic uncertainty, 84% of respondents would either follow up and reassess the patient or discuss the case with a specialist, and 67% would contact a laboratory professional. The most useful test utilisation enablers were found to be: interpretive comments (78% of respondents), published guidelines (74%), and a dedicated laboratory phone line (72%). Conclusion: Primary care doctors acknowledge uncertainty with test result interpretation. Potential countermeasures include the addition of patient-specific interpretive comments, the availability of guidelines or algorithms, and a dedicated laboratory phone line. The benefit of enhanced test result interpretation would be a reduction in diagnostic error rates.
11. 40 CFR 761.316 - Interpreting PCB concentration measurements resulting from this sampling scheme.
Science.gov (United States)
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Interpreting PCB concentration... § 761.79(b)(3) § 761.316 Interpreting PCB concentration measurements resulting from this sampling... composite is 20 µg/100 cm2, then the entire 9.5 square meters has a PCB surface concentration of 20 µg/100...
12. Inventory Abstraction
International Nuclear Information System (INIS)
Leigh, C.
2000-01-01
The purpose of the inventory abstraction as directed by the development plan (CRWMS M and O 1999b) is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M and O 1999c, 1999d). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) (NRC 1999) key technical issue (KTI): ''The rate at which radionuclides in SNF [Spent Nuclear Fuel] are released from the EBS [Engineered Barrier System] through the oxidation and dissolution of spent fuel'' (Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS. Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release to the accessible environment. The inventory abstraction is important in assessing system performance because
13. INVENTORY ABSTRACTION
International Nuclear Information System (INIS)
Ragan, G.
2001-01-01
The purpose of the inventory abstraction, which has been prepared in accordance with a technical work plan (CRWMS M&O 2000e for ICN 02 of the present analysis, and BSC 2001e for ICN 03 of the present analysis), is to: (1) Interpret the results of a series of relative dose calculations (CRWMS M&O 2000c, 2000f). (2) Recommend, including a basis thereof, a set of radionuclides that should be modeled in the Total System Performance Assessment in Support of the Site Recommendation (TSPA-SR) and the Total System Performance Assessment in Support of the Final Environmental Impact Statement (TSPA-FEIS). (3) Provide initial radionuclide inventories for the TSPA-SR and TSPA-FEIS models. (4) Answer the U.S. Nuclear Regulatory Commission (NRC)'s Issue Resolution Status Report ''Key Technical Issue: Container Life and Source Term'' (CLST IRSR) key technical issue (KTI): ''The rate at which radionuclides in SNF [spent nuclear fuel] are released from the EBS [engineered barrier system] through the oxidation and dissolution of spent fuel'' (NRC 1999, Subissue 3). The scope of the radionuclide screening analysis encompasses the period from 100 years to 10,000 years after the potential repository at Yucca Mountain is sealed for scenarios involving the breach of a waste package and subsequent degradation of the waste form as required for the TSPA-SR calculations. By extending the time period considered to one million years after repository closure, recommendations are made for the TSPA-FEIS. The waste forms included in the inventory abstraction are Commercial Spent Nuclear Fuel (CSNF), DOE Spent Nuclear Fuel (DSNF), High-Level Waste (HLW), naval Spent Nuclear Fuel (SNF), and U.S. Department of Energy (DOE) plutonium waste. The intended use of this analysis is in TSPA-SR and TSPA-FEIS. Based on the recommendations made here, models for release, transport, and possibly exposure will be developed for the isotopes that would be the highest contributors to the dose given a release
14. If the results of an article are noteworthy, read the entire article; do not rely on the abstract alone.
Science.gov (United States)
Dal-Ré, R; Castell, M V; García-Puig, J
2015-11-01
15. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Science.gov (United States)
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample-size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
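The sample-size dependence the abstract describes can be illustrated with a minimal "what if" sketch: hold a standardized effect size fixed and recompute the p value at hypothetical sample sizes. This is a normal-approximation sketch for a one-sample test, not the authors' proposed method.

```python
import math

def p_value_for_n(effect_size_d, n):
    """Two-sided p for a one-sample z-test with a fixed standardized
    effect size d, recomputed at a hypothetical sample size n
    (normal approximation; a real analysis would use the t distribution)."""
    z = effect_size_d * math.sqrt(n)
    # 2 * (1 - Phi(z)), with Phi the standard normal CDF via erf
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# The same small effect (d = 0.2) moves from "nonsignificant" to
# "significant" purely because n grows:
for n in (25, 100, 400):
    print(n, round(p_value_for_n(0.2, n), 4))
```

The effect size never changes across the three rows; only the verdict of the significance test does, which is the point of a "what if" analysis.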
16. Interpreting clinical trial results by deductive reasoning: In search of improved trial design.
Science.gov (United States)
Kurbel, Sven; Mihaljević, Slobodan
2017-10-01
Clinical trial results are often interpreted by inductive reasoning, in a trial design-limited manner, directed toward modifications of the current clinical practice. Deductive reasoning is an alternative in which results of relevant trials are combined in indisputable premises that lead to a conclusion easily testable in future trials. © 2017 WILEY Periodicals, Inc.
17. Constellation Map: Downstream visualization and interpretation of gene set enrichment results [version 1; referees: 2 approved]
Directory of Open Access Journals (Sweden)
Yan Tan
2015-06-01
Full Text Available Summary: Gene set enrichment analysis (GSEA) approaches are widely used to identify coordinately regulated genes associated with phenotypes of interest. Here, we present Constellation Map, a tool to visualize and interpret the results when enrichment analyses yield a long list of significantly enriched gene sets. Constellation Map identifies commonalities that explain the enrichment of multiple top-scoring gene sets and maps the relationships between them. Constellation Map can help investigators take full advantage of GSEA and facilitates the biological interpretation of enrichment results. Availability: Constellation Map is freely available as a GenePattern module at http://www.genepattern.org.
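As a loose illustration of the kind of commonality between enriched gene sets that such a tool maps, pairwise overlap can be sketched with a Jaccard index. The gene sets below are invented, and this is not Constellation Map's actual algorithm.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap between two gene sets as |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical top-scoring gene sets from an enrichment run:
gene_sets = {
    "INFLAMMATORY_RESPONSE": {"IL6", "TNF", "CXCL8", "NFKB1"},
    "TNFA_SIGNALING":        {"TNF", "NFKB1", "JUN", "FOS"},
    "UV_RESPONSE":           {"JUN", "FOS", "ATF3"},
}

# Print every pairwise similarity; high values suggest the sets are
# enriched for a shared reason rather than independently.
for (n1, s1), (n2, s2) in combinations(gene_sets.items(), 2):
    print(n1, n2, round(jaccard(s1, s2), 2))
```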
18. Ground-based Efforts to Support a Space-based Experiment: the Latest LADEE Results (Abstract)
Science.gov (United States)
Cudnik, B.; Rahman, M.
2014-12-01
(Abstract only) The much anticipated launch of NASA's Lunar Atmosphere and Dust Environment Explorer happened flawlessly last October, and the satellite has been doing science (and sending a few images) since late November. [The LADEE mission ended with the crash-landing of the spacecraft on the lunar far side on April 17, 2014, capping a successful 140-day mission.] We have also launched our campaign to document lunar meteoroid impact flashes from the ground, to supply ground truth on any changes in dust concentration encountered by the spacecraft in orbit around the moon. To date I have received six reports of impact flashes or flash candidates from the group I am coordinating; other groups around the world may have more to add when all is said and done. In addition, plans are underway to prepare a program at Prairie View A&M University to involve our physics majors in lunar meteoroid, asteroid occultation, and other astronomical work through our Center for Astronomical Sciences and Technology. This facility will be a control center involving not only physics majors, but also pre-service teachers and members of the outside community, to promote pro-am collaborations.
19. Interpretation of results for tumor markers on the basis of analytical imprecision and biological variation
DEFF Research Database (Denmark)
Sölétormos, G; Schiøler, V; Nielsen, D
1993-01-01
Interpretation of results for CA 15.3, carcinoembryonic antigen (CEA), and tissue polypeptide antigen (TPA) during breast cancer monitoring requires data on intra- (CVP) and inter- (CVG) individual biological variation, analytical imprecision (CVA), and indices of individuality. The average CVP...
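The quantities named in this abstract combine into the standard reference change value and index of individuality used to interpret serial tumour-marker results. The sketch below uses invented coefficients of variation, not the study's values for CA 15.3, CEA or TPA.

```python
import math

def reference_change_value(cv_a, cv_p, z=1.96):
    """Percentage change between two serial results needed for
    significance at the given z, from analytical (CVA) and
    intra-individual biological (CVP) variation, both in %."""
    return math.sqrt(2.0) * z * math.sqrt(cv_a**2 + cv_p**2)

def index_of_individuality(cv_a, cv_p, cv_g):
    """Low values (below roughly 0.6) mean population reference
    intervals are poor yardsticks for the individual patient,
    so serial monitoring is more informative."""
    return math.sqrt(cv_a**2 + cv_p**2) / cv_g

# Illustrative CVs (%) for a hypothetical marker:
print(round(reference_change_value(5.0, 10.0), 1))       # prints 31.0
print(round(index_of_individuality(5.0, 10.0, 40.0), 2))  # prints 0.28
```

Under these invented CVs, a serial change of about 31% would be needed before a rise or fall could be called significant during monitoring.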
20. Applying the Bootstrap to Taxometric Analysis: Generating Empirical Sampling Distributions to Help Interpret Results
Science.gov (United States)
Ruscio, John; Ruscio, Ayelet Meron; Meron, Mati
2007-01-01
Meehl's taxometric method was developed to distinguish categorical and continuous constructs. However, taxometric output can be difficult to interpret because expected results for realistic data conditions and differing procedural implementations have not been derived analytically or studied through rigorous simulations. By applying bootstrap…
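The empirical sampling distributions the authors advocate can be generated with a plain bootstrap. Below is a minimal sketch using an invented data vector and the mean as the statistic; the taxometric procedures themselves are considerably more involved.

```python
import random

def bootstrap_distribution(data, statistic, n_boot=2000, seed=42):
    """Empirical sampling distribution of `statistic`, built by
    resampling `data` with replacement n_boot times."""
    rng = random.Random(seed)
    n = len(data)
    return [statistic([rng.choice(data) for _ in range(n)])
            for _ in range(n_boot)]

def mean(xs):
    return sum(xs) / len(xs)

data = [2.1, 2.4, 2.2, 3.0, 2.8, 2.5, 2.9, 2.3]
dist = sorted(bootstrap_distribution(data, mean))

# An approximate 95% interval from the 2.5th and 97.5th percentiles:
lo, hi = dist[int(0.025 * len(dist))], dist[int(0.975 * len(dist))]
print(round(lo, 2), round(hi, 2))
```

Observed results can then be compared against this empirical distribution instead of against an analytic expectation that may not exist for realistic data conditions.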
1. An improved method for interpreting API filter press hydraulic conductivity test results
International Nuclear Information System (INIS)
Heslin, G.M.; Baxter, D.Y.; Filz, G.M.; Davidson, R.R.
1997-01-01
The American Petroleum Institute (API) filter press is frequently used to measure the hydraulic conductivity of soil-bentonite backfill during the mix design process and as part of construction quality controls. However, interpretation of the test results is complicated by the fact that the seepage-induced consolidation pressure varies from zero at the top of the specimen to a maximum value at the bottom of the specimen. An analytical solution is available which relates the stress, compressibility, and hydraulic conductivity in soil consolidated by seepage forces. This paper presents the results of a laboratory investigation undertaken to support application of this theory to API hydraulic conductivity tests. When the API test results are interpreted using seepage consolidation theory, they are in good agreement with the results of consolidometer permeameter tests. Limitations of the API test are also discussed
2. Abstract of results of safety study. Nuclear fuel cycle field in fiscal 2003
International Nuclear Information System (INIS)
2004-11-01
This report describes the results of safety studies in the nuclear fuel cycle field (nuclear fuel facilities, seismic design, all subjects of environmental radiation and waste disposal, and subjects on the nuclear fuel cycle in probabilistic safety assessment) in fiscal 2003, on the basis of the principal project of safety study (fiscal 2001 to 2005). It consists of four chapters: the first is an outline of the principle of the project, the second the objects and subjects of safety study in the nuclear fuel cycle field, the third a list of questionnaires on safety study results, and the fourth an investigation of safety study results in fiscal 2003. There are 49 lists, which include 22 reports on nuclear fuel facilities, one on seismic design, 4 on probabilistic safety assessment, 7 on environmental radiation and 15 on waste disposal. (S.Y.)
3. Nondestructive methods for the structural evaluation of wood floor systems in historic buildings: preliminary results (abstract)
Science.gov (United States)
Zhiyong Cai; Michael O. Hunt; Robert J. Ross; Lawrence A. Soltis
1999-01-01
To date, there is no standard method for evaluating the structural integrity of wood floor systems using nondestructive techniques. Current methods of examination and assessment are often subjective and therefore tend to yield imprecise or variable results. For this reason, estimates of allowable wood floor loads are often conservative. The assignment of conservatively...
4. Spin Is Common in Studies Assessing Robotic Colorectal Surgery: An Assessment of Reporting and Interpretation of Study Results.
Science.gov (United States)
Patel, Sunil V; Van Koughnett, Julie Ann M; Howe, Brett; Wexner, Steven D
2015-09-01
Spin has been defined previously as "specific reporting that could distort the interpretation of results and mislead readers." The purpose of this study was to determine the frequency and extent of misrepresentation of results in robotic colorectal surgery. Publications referenced in MEDLINE or EMBASE between 1992 and 2014 were included in this study. Studies comparing robotic colorectal surgery with other techniques with a nonsignificant difference in the primary outcome(s) were included. Interventions included robotic versus alternative techniques. Frequency, strategy, and extent of spin, as previously defined, were the main outcome measures. A total of 38 studies (including 24,303 patients) were identified for inclusion in this study. Evidence of spin was found in 82% of studies. The most common form of spin was concluding equivalence between surgical techniques based on nonsignificant differences (76% of abstracts and 71% of conclusions). Claiming improved benefits, despite nonsignificance, was also commonly observed (26% of abstracts and 45% of conclusions). Because of the small sample size, we did not find evidence of an association between spin and study design, type of funding, publication year, or study size. Acknowledging the equivocal nature of the study happened rarely (47% of abstracts and 34% of conclusions). The absence of spin predicted whether authors acknowledged equivocal results (p = 0.02). A total of 50% of studies did not disclose whether they received funding, whereas 39% of studies failed to state whether a conflict of interest existed. A limited number of randomized controlled trials were available. Spin occurred in >80% of included studies. Many studies concluded that robotic surgery was as safe as more traditional techniques, despite small sample sizes and limited follow-up. Authors often failed to recognize the difference between nonsignificance and equivalence. Failure to disclose financial relationships, which could represent
5. AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS
Directory of Open Access Journals (Sweden)
J. Tao
2012-09-01
Due to its all-weather data acquisition capability, high resolution spaceborne Synthetic Aperture Radar (SAR) plays an important role in remote sensing applications like change detection. However, because of the complex geometric mapping of buildings in urban areas, SAR images are often hard to interpret. SAR simulation techniques ease the visual interpretation of SAR images, while fully automatic interpretation is still a challenge. This paper presents a method for supporting the interpretation of high resolution SAR images with simulated radar images using a LiDAR digital surface model (DSM). Line features are extracted from the simulated and real SAR images and used for matching. A single building model is generated from the DSM and used for building recognition in the SAR image. An application of the concept is presented for the city centre of Munich, where the comparison of the simulation to the TerraSAR-X data shows a good similarity. Based on the result of simulation and matching, special features (e.g. double-bounce lines, shadow areas) can be automatically indicated in the SAR image.
6. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].
Science.gov (United States)
Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent
2012-07-01
In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied. Thus there is more need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in other studies using the same drug combinations and the same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpretation of synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A.baumannii has been the subject of a considerable amount of research on antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A.baumannii infections, namely imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A.baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently the activity of the two combinations was tested in the dilution range of 4 x MIC and 0.03 x MIC in
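The abstract stops short of the readout, but checkerboard results are conventionally summarised with the fractional inhibitory concentration (FIC) index, and the cut-offs applied to that index are exactly where interpretive methods diverge. A minimal sketch, using hypothetical MIC values and one common set of breakpoints (not necessarily those of the study above):

```python
def fic_index(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
    """Fractional inhibitory concentration (FIC) index for a checkerboard well."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(fici):
    """One common interpretive scheme; the exact criteria vary between
    studies, which is the inconsistency the study above examines."""
    if fici <= 0.5:
        return "synergy"
    elif fici <= 4.0:
        return "no interaction"
    else:
        return "antagonism"

# Hypothetical example: imipenem MIC drops from 32 to 4 mg/L and
# ampicillin/sulbactam from 64 to 8 mg/L when combined.
fici = fic_index(4, 32, 8, 64)
print(fici, interpret(fici))  # 0.25 synergy
```

Because a second scheme might call FICI 0.5-1.0 "additive" or use a different antagonism threshold, the same plate can yield different reported synergy rates, which is the effect the authors set out to quantify.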
7. Ink dating part II: Interpretation of results in a legal perspective
OpenAIRE
Koenig, Agnès; Weyermann, Céline
2018-01-01
The development of an ink dating method requires an important investment of resources in order to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. This article aimed at developing and evaluating the potential of three interpretation models to date ink entries in a legal perspective: (1) the threshold model comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry, (2) the tre...
8. Interpretation and presentation of results. Chickens will come home to roost.
Science.gov (United States)
Beattie, A; Donovan, B; Mant, A; Bridges-Webb, C
1984-06-01
Collecting information is an obsessional activity which can become an end in itself. Interpreting and presenting results in a way that others can learn from them requires reflection, selectivity and ability to accept criticism. In selecting the journal to which the article will be submitted, consider which has the most appropriate readership and ensure the article conforms to the requirements and style of that journal.
9. The adaptive internet application for interpretation of the transformer oil gas chromatographic analysis results
Directory of Open Access Journals (Sweden)
2015-01-01
This paper describes an adaptive Internet application for the interpretation of the transformer oil gas chromatographic analysis results. The first version of the application is developed by following an evolutionary software development concept. The most important software development risks and the appropriate solutions are described. An open-source web framework named Bootstrap is used for the application implementation. The application is developed by using ASP.NET and MS SQL server.
10. Labtracker+, a medical smartphone app for the interpretation of consecutive laboratory results: an external validation study.
Science.gov (United States)
Hilderink, Judith M; Rennenberg, Roger J M W; Vanmolkot, Floris H M; Bekers, Otto; Koopmans, Richard P; Meex, Steven J R
2017-09-01
When monitoring patients over time, clinicians may struggle to distinguish 'real changes' in consecutive blood parameters from so-called natural fluctuations. In practice, they have to do so by relying on their clinical experience and intuition. We developed Labtracker+, a medical app that calculates the probability that an increase or decrease over time in a specific blood parameter is real, given the time between measurements. We presented patient cases to 135 participants to examine whether there is a difference between medical students, residents and experienced clinicians when it comes to interpreting changes between consecutive laboratory results. Participants were asked to interpret whether changes in consecutive laboratory values were likely to be 'real' or rather due to natural fluctuations. The answers of the study participants were compared with the probabilities calculated by the app Labtracker+, and the concordance rates were assessed. Participants were medical students (n=92), medical residents from the department of internal medicine (n=19) and internists (n=24) at a Dutch University Medical Centre. Concordance rates between the study participants and the probabilities calculated by the app Labtracker+ were compared. In addition, we tested whether physicians with clinical experience scored better concordance rates with the app Labtracker+ than inexperienced clinicians. Medical residents and internists showed significantly better concordance rates with the probabilities calculated by the app Labtracker+ than medical students, regarding their interpretation of differences between consecutive laboratory results (p=0.009 and p<0.001, respectively). The app Labtracker+ could serve as a clinical decision tool in the interpretation of consecutive laboratory test results and could contribute to rapid recognition of parameter changes by physicians.
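The abstract does not disclose Labtracker+'s internal model, but the standard statistic for this question is the reference change value (RCV), which combines analytical and within-subject biological variation to give the smallest between-measurement difference unlikely to be noise. A sketch under that assumption, with hypothetical coefficients of variation:

```python
import math

def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Reference change value (RCV), in percent: the smallest relative
    difference between two consecutive results that is unlikely (at the
    given z) to arise from analytical plus within-subject biological
    variation alone."""
    return z * math.sqrt(2) * math.sqrt(cv_analytical**2 + cv_within_subject**2)

# Hypothetical serum creatinine figures: CVa = 2.5%, CVi = 5.3%.
rcv = reference_change_value(2.5, 5.3)
print(round(rcv, 1))  # 16.2 -- a change larger than ~16% would be flagged as 'real'
```

A time-aware app would additionally model how expected biological drift grows with the interval between samples; the RCV above is the time-independent core of that calculation.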
11. Varying performance in mammographic interpretation across two countries: Do results indicate reader or population variances?
Science.gov (United States)
Soh, BaoLin P.; Lee, Warwick B.; Wong, Jill; Sim, Llewellyn; Hillis, Stephen L.; Tapia, Kriscia A.; Brennan, Patrick C.
2016-03-01
12. Interpretation of the results from individual monitoring of workers at the Nuclear Fuel Fabrication Facility, Brazil
International Nuclear Information System (INIS)
Castro, Marcelo Xavier de
2005-01-01
In nuclear fuel fabrication facilities, workers are exposed to different compounds of enriched uranium. Although in this kind of facility the main route of intake is inhalation, ingestion may occur in some situations, as may a mixture of both. The interpretation of the bioassay data is very complex, since it is necessary to take into account all the different parameters, which is a big challenge. Due to the high cost of the individual monitoring programme for internal dose assessment, usually only one type of measurement is assigned in routine monitoring programmes. In complex situations like the one described in this study, where several parameters can compromise the accuracy of the bioassay interpretation, a combination of techniques is needed to evaluate the internal dose. According to ICRP 78 (1997), the general order of preference of measurement methodologies in terms of accuracy of interpretation is: body activity measurement, excreta analysis and personal air sampling. Results of monitoring of the working environment may provide information that assists in the interpretation of particle size, chemical form, solubility and date of intake. A group of fifteen workers from the controlled area of the studied nuclear fuel fabrication facility was selected to evaluate the internal dose using all the different available techniques during a certain period. The workers were monitored for determination of uranium content in the daily urinary and faecal excretion (collected over a period of 3 consecutive days), chest counting and personal air sampling. The results have shown that at least two types of sensitive techniques must be used, since there are some sources of uncertainty in the bioassay interpretation, such as intake of a mixture of uranium compounds and different routes of intake. The combination of urine and faeces analysis has shown to be the most appropriate methodology for assessing internal dose in this situation. The chest counting methodology has not shown
13. Simplified likelihood for the re-interpretation of public CMS results
CERN Document Server
The CMS Collaboration
2017-01-01
In this note, a procedure for the construction of simplified likelihoods for the re-interpretation of the results of CMS searches for new physics is presented. The procedure relies on the use of a reduced set of information on the background models used in these searches which can readily be provided by the CMS collaboration. A toy example is used to demonstrate the procedure and its accuracy in reproducing the full likelihood for setting limits in models for physics beyond the standard model. Finally, two representative searches from the CMS collaboration are used to demonstrate the validity of the simplified likelihood approach under realistic conditions.
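The simplified likelihood the note describes can be illustrated in a few lines: a Poisson term for the observed count in each search bin, with expected yield mu*s_i + b_i, plus a multivariate-Gaussian constraint on the background yields built from a published covariance matrix. All numbers below are invented for the toy; this is not CMS code:

```python
import numpy as np

def neg_log_likelihood(mu, b, n_obs, s, b_nominal, cov):
    """-log L for a simplified likelihood: Poisson counts per search bin
    (additive log(n!) constants dropped) plus a multivariate-Gaussian
    constraint on the backgrounds b with covariance `cov` -- the reduced
    background information such a note proposes publishing."""
    lam = mu * s + b
    poisson_part = -np.sum(n_obs * np.log(lam) - lam)
    d = b - b_nominal
    gauss_part = 0.5 * d @ np.linalg.inv(cov) @ d
    return poisson_part + gauss_part

# Toy two-bin "search": observed counts, signal template, background
# estimates and their covariance (all invented for illustration).
n_obs = np.array([12.0, 5.0])
s = np.array([3.0, 1.0])
b_nominal = np.array([10.0, 4.0])
cov = np.array([[4.0, 1.0], [1.0, 2.0]])

# Scan the signal strength mu on a grid, holding b at its nominal value
# for brevity; a real analysis would also minimise over b.
grid = np.linspace(0.0, 3.0, 301)
nll = [neg_log_likelihood(m, b_nominal, n_obs, s, b_nominal, cov) for m in grid]
best_mu = grid[int(np.argmin(nll))]
print(round(best_mu, 2))  # best-fit signal strength on this toy dataset
```

Limits would then follow from the profile-likelihood-ratio curve over the same grid; the point of the simplified construction is that only the yields and the background covariance need to be published to reproduce it.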
14. Comprehensive Interpretation of the Laboratory Experiments Results to Construct Model of the Polish Shale Gas Rocks
Science.gov (United States)
Jarzyna, Jadwiga A.; Krakowska, Paulina I.; Puskarczyk, Edyta; Wawrzyniak-Guz, Kamila; Zych, Marcin
2018-03-01
More than 70 rock samples from so-called sweet spots, i.e. the Ordovician Sa Formation and the Silurian Ja Member of the Pa Formation from the Baltic Basin (North Poland), were examined in the laboratory to determine bulk and grain density, total and effective/dynamic porosity, absolute permeability, pore diameter size, total surface area, and natural radioactivity. Results of the pyrolysis, i.e. TOC (Total Organic Carbon) together with S1 and S2, parameters used to determine the hydrocarbon generation potential of rocks, were also considered. Elemental composition from chemical analyses and mineral composition from XRD measurements were also included. SCAL analysis, NMR experiments and Pressure Decay Permeability measurements, together with water immersion porosimetry and the adsorption/desorption of nitrogen vapors method, were carried out along with a comprehensive interpretation of the outcomes. Simple and multiple linear statistical regressions were used to recognize mutual relationships between parameters. The observed correlations, and in some cases a large dispersion of the data and discrepancies between the property values obtained from different methods, were the basis for building a shale gas rock model for well logging interpretation. The model was verified by the results of Monte Carlo modelling of the spectral neutron-gamma log response in comparison with GEM log results.
15. Ink dating part II: Interpretation of results in a legal perspective.
Science.gov (United States)
Koenig, Agnès; Weyermann, Céline
2018-01-01
The development of an ink dating method requires an important investment of resources in order to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. This article aimed at developing and evaluating the potential of three interpretation models to date ink entries in a legal perspective: (1) the threshold model, comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry, (2) the trend tests, focusing on the "ageing status" of an ink entry, and (3) the likelihood ratio calculation, comparing the probabilities of observing the results under at least two alternative hypotheses. This is the first report showing ink dating interpretation results on a ballpoint pen ink reference population. In the first part of this paper three ageing parameters were selected as promising from the population of 25 ink entries aged from 4 to 304 days: the quantity of phenoxyethanol (PE), the difference between the PE quantities contained in a naturally aged sample and an artificially aged sample (R NORM) and the solvent loss ratio (R%). In the current part, each model was tested using the three selected ageing parameters. Results showed that threshold definition remains a simple model easily applicable in practice, but that the risk of false positives cannot be completely avoided without significantly reducing the feasibility of the ink dating approaches. The trend tests from the literature showed unreliable results and an alternative had to be developed, yielding encouraging results. The likelihood ratio calculation introduced a degree of certainty into the ink dating conclusion in comparison to the threshold approach. The proposed model remains quite simple to apply in practice, but should be further developed in order to yield reliable results in practice. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
16. Standard practice for analysis and interpretation of physics dosimetry results for test reactors
International Nuclear Information System (INIS)
Anon.
1984-01-01
This practice describes the methodology summarized in Annex A1 to be used in the analysis and interpretation of physics-dosimetry results from test reactors. This practice relies on, and ties together, the application of several supporting ASTM standard practices, guides, and methods that are in various stages of completion (see Fig. 1). Support subject areas that are discussed include reactor physics calculations, dosimeter selection and analysis, exposure units, and neutron spectrum adjustment methods. This practice is directed towards the development and application of physics-dosimetry-metallurgical data obtained from test reactor irradiation experiments that are performed in support of the operation, licensing, and regulation of LWR nuclear power plants. It specifically addresses the physics-dosimetry aspects of the problem. Procedures related to the analysis, interpretation, and application of both test and power reactor physics-dosimetry-metallurgy results are addressed in Practice E 853, Practice E 560, Matrix E 706(IE), Practice E 185, Matrix E 706(IG), Guide E 900, and Method E 646
17. Geophysical borehole logging in Lavia borehole - results and interpretation of sonic and tube wave measurements
International Nuclear Information System (INIS)
1985-02-01
Swedish Nuclear Fuel and Waste Management Co, SKB has been contracted by Industrial Power Company LTD, TVO to perform geophysical logging in a borehole at Lavia in Western Finland. The logging has been conducted by Swedish Geological Co, SGAB in accordance with an agreement for cooperation with SKB. The depth of the borehole is 1001 m, its diameter 56 mm and its inclination 10-20 degrees to the vertical. The aim of the logging was to determine the various geophysical parameters in the borehole in order to interpret and understand the rock mass properties in the vicinity of the borehole. According to the contract the report covers the following main objectives: a technical description of the field work and the equipment used; a review of the theoretical base for the sonic and tube wave methods; an interpretation and presentation of the results obtained by sonic and tube wave measurements. The evaluation of the sonic and tube wave measurements shows good correlation. On a qualitative basis there seems to be a correlation between tube wave generating points, the relative tube wave amplitudes and the hydraulic conductivity measurements performed as hydraulic tests between packers in the borehole. The low velocity anomalies in the sonic log are mainly caused by tectonic features like fractures and fracture zones but to some extent also by contacts between granite and diorite. The estimation of elastic properties of the rock mass from observation of tube wave velocity is in accordance with laboratory determinations made on core samples. (author)
18. Are biochemistry interpretative comments helpful? Results of a general practitioner and nurse practitioner survey.
Science.gov (United States)
Barlow, Ian M
2008-01-01
19. Hair as a bio-indicator: limitations and complications in the interpretation of results
International Nuclear Information System (INIS)
Evans, G.J.; Jervis, R.E.
1987-01-01
Some of the limitations and complications associated with hair and nail analysis are discussed. Data obtained from an occupational study demonstrated the potential for misinterpreting hair or nail analysis data either through describing results averaged over a group by arithmetic instead of geometric means or through not accounting for the age range of subjects in groups to be compared. Examples that arose from the study indicated that differences between hair from the same subjects grown at different times can both complicate and assist in interpreting hair analysis results. In an investigation into the addition and removal of metallic powders, it was found that both hair and nail can directly incorporate elements through contact with dust. (author)
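The arithmetic-versus-geometric-mean pitfall the authors describe is easy to demonstrate: trace-element concentrations are typically log-normally distributed, so a single high value drags the arithmetic mean far above the typical subject, while the geometric mean does not. A sketch with hypothetical hair-mercury values:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Mean of the logs, exponentiated back: robust to multiplicative outliers.
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical hair-mercury results (ppm): skewed, as trace-element
# data usually are, with one high outlier.
conc = [0.5, 0.8, 1.0, 1.2, 12.0]
print(round(arithmetic_mean(conc), 2))  # 3.1  -- dominated by the outlier
print(round(geometric_mean(conc), 2))   # 1.42 -- closer to the typical subject
```

Comparing two groups by arithmetic means here would report a three-fold "elevation" driven by one subject, which is the misinterpretation the occupational study warns about.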
20. Global interpretation of direct Dark Matter searches after CDMS-II results
International Nuclear Information System (INIS)
Kopp, Joachim; Schwetz, Thomas; Zupan, Jure
2010-01-01
We perform a global fit to data from Dark Matter (DM) direct detection experiments, including the recent CDMS-II results. We discuss possible interpretations of the DAMA annual modulation signal in terms of spin-independent and spin-dependent DM-nucleus interactions, both for elastic and inelastic scattering. We find that for the spin-dependent inelastic scattering off protons a good fit to all data is obtained. We present a simple toy model realizing such a scenario. In all the remaining cases the DAMA allowed regions are disfavored by other experiments or suffer from severe fine tuning of DM parameters with respect to the galactic escape velocity. Finally, we also entertain the possibility that the two events observed in CDMS-II are an actual signal of elastic DM scattering, and we compare the resulting CDMS-II allowed regions to the exclusion limits from other experiments
1. Norms in face-threatening instances of simultaneous conference interpreting: results from a questionnaire
OpenAIRE
Lenglet, Cédric
2015-01-01
Conference interpreters are expected to act like neutral spokespersons and expert communicators at the same time. To achieve this, they abide by translational norms. These norms can be elicited from the discourse on interpreting, field observation and corpus data. They might also overlap with assessment norms. Anecdotes from the booth and field observations indicate that interpreters sometimes modify the speaker’s positions and shape the meaning of the target text. One translational norm coul...
2. Soviet-French working group interpretation of the scientific information during the search for celestial sources of gamma pulses, abstract of reports, 24-30 March 1977
Science.gov (United States)
Estulin, I. V.
1977-01-01
The progress made and techniques used by the Soviet-French group in the study of gamma and X ray pulses are described in abstracts of 16 reports. Experiments included calibration and operation of various recording instruments designed for measurements involving these pulses, specifically the location of sources of such pulses in outer space. Space vehicles are utilized in conjunction with ground equipment to accomplish these tests.
3. Towards Intelligent Interpretation of Low Strain Pile Integrity Testing Results Using Machine Learning Techniques.
Science.gov (United States)
Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min
2017-10-25
Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can complete interpreting all of the piles and delivering the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening LSPIT's turnaround time, are of great business value and greatly needed. Motivated by this need, in this paper, we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.
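The paper's actual features and models are not reproduced here, but the screening idea (score each reflectogram against a known-good trace and flag only the outliers for expert review) can be sketched with a crude normalised cross-correlation score. All signals and the cutoff below are synthetic assumptions, not CARI's method:

```python
import numpy as np

def anomaly_score(signal, reference):
    """Crude screening score: 1 minus the peak normalised cross-correlation
    between a pile's reflectogram and a known-good reference trace."""
    s = (signal - signal.mean()) / signal.std()
    r = (reference - reference.mean()) / reference.std()
    corr = np.correlate(s, r, mode="full") / len(s)
    return 1.0 - corr.max()

def screen_piles(reflectograms, reference, cutoff=0.1):
    """Flag piles whose score exceeds `cutoff` for manual, in-depth review."""
    return [i for i, sig in enumerate(reflectograms)
            if anomaly_score(sig, reference) > cutoff]

# Toy traces: the first pile matches the reference; the second carries an
# extra mid-length reflection, a synthetic stand-in for a defect echo.
t = np.linspace(0, 1, 200)
reference = np.exp(-((t - 0.1) ** 2) / 0.001) + 0.5 * np.exp(-((t - 0.9) ** 2) / 0.001)
good = reference + 0.01 * np.sin(40 * t)
defect = reference + 0.8 * np.exp(-((t - 0.5) ** 2) / 0.001)
print(screen_piles([good, defect], reference))  # [1] -- only the defective pile is flagged
```

A production system would replace this single score with the paper's engineered signal features and a trained classifier, but the workflow (automatic screening, small suspect set, expert confirmation) is the same.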
4. Exploring the uncertainties of early detection results: model-based interpretation of mayo lung project
Directory of Open Access Journals (Sweden)
Berman Barbara
2011-03-01
Background: The Mayo Lung Project (MLP), a randomized controlled clinical trial of lung cancer screening conducted between 1971 and 1986 among male smokers aged 45 or above, demonstrated an increase in lung cancer survival since the time of diagnosis, but no reduction in lung cancer mortality. Whether this result necessarily indicates a lack of mortality benefit for screening remains controversial. A number of hypotheses have been proposed to explain the observed outcome, including over-diagnosis, screening sensitivity, and population heterogeneity (an initial difference in lung cancer risks between the two trial arms). This study is intended to provide model-based testing for some of these important arguments. Method: Using a micro-simulation model, the MISCAN-lung model, we explore the possible influence of screening sensitivity, systematic error, over-diagnosis and population heterogeneity. Results: Calibrating screening sensitivity, systematic error, or over-diagnosis does not noticeably improve the fit of the model, whereas calibrating population heterogeneity helps the model predict lung cancer incidence better. Conclusions: The hypothesized imperfections in screening sensitivity, systematic error, and over-diagnosis do not in themselves explain the observed trial results. The model fit improvement achieved by accounting for population heterogeneity suggests a higher risk of cancer incidence in the intervention group as compared with the control group.
5. How does the mass media report and interpret radiation data? The results of media content analysis
Energy Technology Data Exchange (ETDEWEB)
Perko, T. [Belgian Nuclear Research Centre SCK.CEN, Institute for Environment Health and Safety (Belgium); Cantone, M.C. [University of Milano, Faculty of Medicine (Italy); Tomkiv, Y. [Norwegian University of Life Sciences (Norway); Prezelj, I. [University of Ljubljana, Faculty of Social Sciences (Slovenia); Gallego, E. [Universidad Politecnica de Madrid (Spain); Melekhova, E. [Russian Academy of Sciences in Moscow (Russian Federation)
2014-07-01
6. Environmental interpretation using insoluble residues within reef coral skeletons: problems, pitfalls, and preliminary results
Science.gov (United States)
Budd, Ann F.; Mann, Keith O.; Guzmán, Hector M.
1993-03-01
Insoluble residue concentrations have been measured within colonies of four massive reef corals from seven localities along the Caribbean coast of Panama to determine if detrital sediments, incorporated within the skeletal lattice during growth, record changes in sedimentation over the past twenty years. Amounts of resuspended sediment have increased to varying degrees at the seven localities over the past decades in response to increased deforestation in nearby terrestrial habitats. Preliminary results of correlation and regression analyses reveal few consistent temporal trends in the insoluble residue concentration. Analyses of variance suggest that amounts of insoluble residues, however, differ among environments within species, but that no consistent pattern of variation exists among species. D. strigosa and P. astreoides possess high concentrations at protected localities, S. siderea at localities with high amounts of resuspended sediment, and M. annularis at the least turbid localities. Little correlation exists between insoluble residue concentration and growth band width within species at each locality. Only in two more efficient suspension feeders (S. siderea and D. strigosa) do weak negative correlations with growth band width exist overall. These results indicate that insoluble residue concentrations cannot be used unequivocally in environmental interpretation, until more is known about tissue damage, polyp behavior, and their effects on the incorporation of insolubles in the skeleton during growth in different coral species. Insoluble residue data are highly variable; therefore, large sample sizes and strong contrasts between environments are required to reveal significant trends.
7. Interpreting results of cluster surveys in emergency settings: is the LQAS test the best option?
Science.gov (United States)
Bilukha, Oleg O; Blanton, Curtis
2008-12-09
Cluster surveys are commonly used in humanitarian emergencies to measure health and nutrition indicators. Deitchler et al. have proposed to use Lot Quality Assurance Sampling (LQAS) hypothesis testing in cluster surveys to classify the prevalence of global acute malnutrition as exceeding or not exceeding the pre-established thresholds. Field practitioners and decision-makers must clearly understand the meaning and implications of using this test in interpreting survey results to make programmatic decisions. We demonstrate that the LQAS test--as proposed by Deitchler et al.--is prone to producing false-positive results and thus is likely to suggest interventions in situations where interventions may not be needed. As an alternative, to provide more useful information for decision-making, we suggest reporting the probability of an indicator's exceeding the threshold as a direct measure of "risk". Such probability can be easily determined in field settings by using a simple spreadsheet calculator. The "risk" of exceeding the threshold can then be considered in the context of other aggravating and protective factors to make informed programmatic decisions.
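The "simple spreadsheet calculator" the authors mention is not reproduced here, but the quantity they propose reporting, the probability that true prevalence exceeds the programme threshold, can be approximated with a normal model of the survey estimate, inflating the variance by a design effect to account for clustering. All figures below are hypothetical:

```python
import math

def prob_exceeds_threshold(p_hat, n, threshold, design_effect=2.0):
    """Approximate probability that true prevalence exceeds `threshold`,
    using a normal approximation to the sampling distribution of the
    cluster-survey estimate p_hat; design_effect inflates the
    simple-random-sampling variance to account for clustering."""
    se = math.sqrt(design_effect * p_hat * (1 - p_hat) / n)
    z = (p_hat - threshold) / se
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical survey: GAM prevalence estimate of 12% from 450 children,
# programme threshold 10%, assumed design effect 2.0.
p = prob_exceeds_threshold(0.12, 450, 0.10)
print(round(p, 2))  # 0.82 -- an 82% "risk" that true prevalence exceeds 10%
```

Reporting this probability directly, rather than a binary LQAS accept/reject, lets decision-makers weigh the 82% "risk" against other aggravating and protective factors, which is the interpretation the authors advocate.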
8. Revised interpretations of stable C and O patterns in carbonate rocks resulting from meteoric diagenesis
Science.gov (United States)
Swart, Peter K.; Oehlert, Amanda M.
2018-02-01
A positive correlation between the δ13C and δ18O values of carbonate rocks is a screening tool widely used to identify the overprint of meteoric diagenesis on the original isotopic composition of a sample. In particular, it has been suggested that systematic change from negative to positive δ13C and δ18O values with increasing depth in the core is an indicator of alteration within the zone of mixing between meteoric and marine waters. In this paper, we propose that such covariance is not generated within the traditionally defined mixing zone, and that positive correlations between δ13C and δ18O values in marine carbonates are not necessarily indicators of meteoric alteration. This new interpretation is based on data collected from the shallow sub-surface of the Bahamas, a region unequivocally influenced by meteoric waters to depths of at least 200 m below the current sediment-water interface. The classic interpretation of the diagenetic environments, based on changes in the δ13C and δ18O values, would suggest the maximum penetration of freshwater occurs between 65 and 100 m below seafloor. Below these depths, a strong positive covariation between the δ13C and δ18O values exists, and would traditionally be defined as the mixing zone. However, based upon known changes in sea level, the penetration of the freshwater lens extends significantly below this limit. We contend that the zone showing covariance of δ13C and δ18O values is actually altered within the freshwater lens, and not the mixing zone as previously proposed. The co-varying trend in δ13C and δ18O values is the result of diagenetic processes occurring at the interface between vadose and phreatic zones. Significantly greater rates of recrystallization and neomorphism are driven by the increased rates of oxidation of organic matter at this transition with progressively less alteration occurring with increasing depth. As sea level oscillates, the position of this interface moves through the
9. Presentation of laboratory test results in patient portals: influence of interface design on risk interpretation and visual search behaviour.
Science.gov (United States)
Fraccaro, Paolo; Vigo, Markel; Balatsoukas, Panagiotis; van der Veer, Sabine N; Hassan, Lamiece; Williams, Richard; Wood, Grahame; Sinha, Smeeta; Buchan, Iain; Peek, Niels
2018-02-12
Patient portals are considered valuable instruments for self-management of long term conditions, however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients' abilities to correctly interpret laboratory test results presented through patient portals. We also assessed, by applying eye-tracking methods, the relationship between risk interpretation and visual search behaviour. We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low, medium, and high risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy of their risk interpretation. They could choose between: 1) Calling their doctor immediately (high interpreted risk); 2) Trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) Waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed accuracy of patients' risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Misinterpretation of risk was common, with 65% of participants underestimating the need for action across all presentations at least once. Participants found it particularly difficult to interpret medium risk clinical scenarios. Participants who consistently understood when action was needed showed a higher visual search efficiency, suggesting a better strategy to cope with information overload that helped them to focus on the laboratory tests most relevant to their condition. This study confirms patients' difficulties in interpreting laboratories test results, with many patients underestimating the need for action, even when abnormal values were highlighted or grouped together. Our findings raise patient safety
10. Improving food and agricultural production. Thailand. Fertilizer experiments - data analysis and interpretation of results
International Nuclear Information System (INIS)
Nelson, L.A.
1991-01-01
The emphasis of the mission was the provision of training to the staff of the Department of Agriculture, Government of Thailand, in the analysis and interpretation of data from experiments concerning fertilizer applications in agriculture
11. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control
Science.gov (United States)
Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.
2011-01-01
Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small
12. Standard Practice for Analysis and Interpretation of Light-Water Reactor Surveillance Results, E706(IA)
CERN Document Server
American Society for Testing and Materials. Philadelphia
2001-01-01
1.1 This practice covers the methodology, summarized in Annex A1, to be used in the analysis and interpretation of neutron exposure data obtained from LWR pressure vessel surveillance programs; and, based on the results of that analysis, establishes a formalism to be used to evaluate present and future condition of the pressure vessel and its support structures (1-70). 1.2 This practice relies on, and ties together, the application of several supporting ASTM standard practices, guides, and methods (see Master Matrix E 706) (1, 5, 13, 48, 49). In order to make this practice at least partially self-contained, a moderate amount of discussion is provided in areas relating to ASTM and other documents. Support subject areas that are discussed include reactor physics calculations, dosimeter selection and analysis, and exposure units. Note 1—(Figure 1 is deleted in the latest update. The user is referred to Master Matrix E 706 for the latest figure of the standards interconnectivity). 1.3 This practice is restri...
13. Numerical Results for a Polytropic Cosmology Interpreted as a Dust Universe Producing Gravitational Waves
Science.gov (United States)
Klapp, J.; Cervantes-Cota, J.; Chauvet, P.
1990-11-01
14. Geological and structural interpretation of Peninsular Malaysia by marine and aeromagnetic data: Some preliminary results
Science.gov (United States)
Bahrudin, Nurul Fairuz Diyana Binti; Hamzah, Umar
2016-11-01
Magnetic data were processed to interpret the geology of Peninsular Malaysia, especially in delineating igneous bodies and structural lineament trends, by the potential field geophysical method. A total of about 32000 magnetic intensity data points were obtained from the Earth Magnetic Anomaly Grid (EMAG2) covering an area from East Sumatra to part of the South China Sea, within 99° E to 105° E longitude and 1° N to 7° N latitude. These data were used in several processing stages to generate the total magnetic intensity (TMI), reduced to equator (RTE), total horizontal derivative (THD) and total vertical derivative (TVD) maps. These maps reveal possible surface and subsurface magnetic sources associated with the geological features of the study area. The magnetic properties normally correspond to features such as igneous bodies and fault structures. The anomalies obtained were then compared to the geological features of the area. In general, the high magnetic anomalies of the TMI-RTE closely match the major igneous intrusions of Peninsular Malaysia such as the Main Range, the Eastern Belt and the Mersing-Johor Bahru stretch. Denser lineaments of magnetic structures were observed in the THD and TVD results, indicating the presence of both deep and shallow magnetically rich geological features. The positions of the Bukit Tinggi, Mersing and Lepar faults match the magnetic highs well, while the Lebir and Bok Bak faults are not clearly observed in the magnetic results.
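The total horizontal derivative used above is a standard edge-detection filter for gridded magnetic data, THD = sqrt((∂T/∂x)² + (∂T/∂y)²). A minimal sketch on a synthetic grid (the grid values are invented; real EMAG2 processing involves many additional steps):

```python
import numpy as np

def total_horizontal_derivative(tmi, dx=1.0, dy=1.0):
    """THD = sqrt((dT/dx)^2 + (dT/dy)^2), computed with finite
    differences; ridges in the THD mark lateral magnetic contrasts
    such as igneous contacts and faults."""
    dT_dy, dT_dx = np.gradient(tmi, dy, dx)  # gradients along rows, columns
    return np.hypot(dT_dx, dT_dy)

# Hypothetical anomaly grid: a sharp north-south contact produces a THD ridge
grid = np.zeros((5, 5))
grid[:, 3:] = 100.0  # magnetic high on the eastern side
thd = total_horizontal_derivative(grid)
```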
15. Frequency Dependence and Spatial Distribution of Seismic Attenuation in France: Experimental Results and Possible Interpretations
Science.gov (United States)
1989-09-12
Campillo, M. (Laboratoire de Geophysique Interne et Tectonophysique, Universite Joseph Fourier and Observatoire de Grenoble, IRIGM, BP 53X, 38041 Grenoble, France); Plantet, J.L. (Laboratoire de Detection Geophysique, Commissariat a l'Energie Atomique, BP 12, 91680 Bruyere-le
16. Interpretation of the results of statistical measurements. [search for basic probability model
Science.gov (United States)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional, which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
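The idea of optimizing model parameters against a quality functional can be sketched as fitting an exponential model to an empirical distribution; the choice of model, of functional (mean squared difference between model and empirical CDFs), and of grid search are illustrative assumptions, not from the original report:

```python
import math
import random

random.seed(1)
sample = sorted(random.expovariate(2.0) for _ in range(2000))  # "measured" process

def quality(rate):
    """Quality functional: mean squared difference between the calculated
    probability characteristic (exponential CDF with the given rate) and
    the measured statistical estimate (empirical CDF)."""
    n = len(sample)
    return sum(
        ((i + 1) / n - (1.0 - math.exp(-rate * x))) ** 2
        for i, x in enumerate(sample)
    ) / n

# The "basic probability model" is the candidate minimizing the functional
candidates = [0.5 + 0.1 * k for k in range(41)]  # rates 0.5 .. 4.5
best_rate = min(candidates, key=quality)
```

With enough samples, the selected rate lands near the true value of 2.0, which is the sense in which interpretation becomes a search for the underlying probability model.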
17. Microbiology of Olkiluoto and ONKALO groundwater results and interpretations, 2008-2009
Energy Technology Data Exchange (ETDEWEB)
Pedersen, K.; Arlinger, J.; Edlund, J.; Eriksson, L.; Lydmark, S.; Johansson, J.; Jaegevall, S.; Rabe, L. (Microbial Analytics Sweden AB, Moelnlycke (Sweden))
2010-08-15
Microbiology cultivation, DNA, and RNA data were assembled from 18 groundwater samples from Olkiluoto, from deep drillholes ranging in depth from 62 to 708 m, and from groundwater from eight ONKALO drillholes ranging in depth from 7.1 to 318 m. Biomass was determined by counting total numbers of microbial cells (TNC) and determining adenosine triphosphate (ATP) concentrations. The aerobic cultivation method used comprised aerobic plate counts. Anaerobic most probable number (MPN) methods were used to determine counts of nitrate-, iron-, manganese-, and sulphate-reducing bacteria, acetogenic bacteria, and methanogens. Molecular methods for analysing the diversity and abundance of microorganisms have been continuously developed and applied to groundwater samples. These methods included the sampling of DNA and RNA, extraction of nucleic acids, cloning and sequencing of environmental nucleic acids, and real-time quantitative polymerase chain reaction (qPCR) for analysing amounts of DNA and RNA. The results of these analyses have been merged and interpreted, and the outcomes are reported here. The four methods for biomass-related analysis correlated well. These methods focus on different characteristics of microbial cells: TNC analyses whole cells using a microscope, ATP analyses a cell component using a biochemical method, MPN is based on cultivation and qPCR analyses DNA (genes) and RNA (gene expression). The range of analytical focus encompassed by the methods ensures that the biomass-related information in this and previous reports from Olkiluoto and ONKALO is reliable and reflects a diverse range of the biomass-related characteristics of the analysed microorganisms. The distribution of the MPN data over depth from 2008 to 2009 followed the distribution found earlier. There were generally more cultivable microorganisms between depths of 200 and 400 m than in the shallower 50-200-m depth range. These new results agree with previous results, suggesting that
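The most probable number (MPN) technique referred to above infers cell concentration from the fraction of replicate tubes showing growth, assuming cells are Poisson-distributed among tubes. A single-dilution sketch (the tube counts and volume are hypothetical):

```python
import math

def mpn_single_dilution(positive, total, volume_ml):
    """Most probable number of cells per mL from one dilution level.
    Under the Poisson assumption, the chance a tube stays negative is
    exp(-c * v), so c = -ln(negatives / total) / v."""
    negatives = total - positive
    if negatives == 0:
        raise ValueError("all tubes positive: concentration off scale")
    return -math.log(negatives / total) / volume_ml

# Hypothetical: 3 of 5 tubes positive, each inoculated with 1 mL
estimate = mpn_single_dilution(positive=3, total=5, volume_ml=1.0)
```

Multi-dilution MPN tables used in practice generalize this to a maximum-likelihood estimate across several dilution levels.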
18. Microbiology of Olkiluoto and ONKALO groundwater results and interpretations, 2008-2009
International Nuclear Information System (INIS)
Pedersen, K.; Arlinger, J.; Edlund, J.; Eriksson, L.; Lydmark, S.; Johansson, J.; Jaegevall, S.; Rabe, L.
2010-08-01
Microbiology cultivation, DNA, and RNA data were assembled from 18 groundwater samples from Olkiluoto, from deep drillholes ranging in depth from 62 to 708 m, and from groundwater from eight ONKALO drillholes ranging in depth from 7.1 to 318 m. Biomass was determined by counting total numbers of microbial cells (TNC) and determining adenosine triphosphate (ATP) concentrations. The aerobic cultivation method used comprised aerobic plate counts. Anaerobic most probable number (MPN) methods were used to determine counts of nitrate-, iron-, manganese-, and sulphate-reducing bacteria, acetogenic bacteria, and methanogens. Molecular methods for analysing the diversity and abundance of microorganisms have been continuously developed and applied to groundwater samples. These methods included the sampling of DNA and RNA, extraction of nucleic acids, cloning and sequencing of environmental nucleic acids, and real-time quantitative polymerase chain reaction (qPCR) for analysing amounts of DNA and RNA. The results of these analyses have been merged and interpreted, and the outcomes are reported here. The four methods for biomass-related analysis correlated well. These methods focus on different characteristics of microbial cells: TNC analyses whole cells using a microscope, ATP analyses a cell component using a biochemical method, MPN is based on cultivation and qPCR analyses DNA (genes) and RNA (gene expression). The range of analytical focus encompassed by the methods ensures that the biomass-related information in this and previous reports from Olkiluoto and ONKALO is reliable and reflects a diverse range of the biomass-related characteristics of the analysed microorganisms. The distribution of the MPN data over depth from 2008 to 2009 followed the distribution found earlier. There were generally more cultivable microorganisms between depths of 200 and 400 m than in the shallower 50-200-m depth range. These new results agree with previous results, suggesting that
19. Apar-T: code, validation, and physical interpretation of particle-in-cell results
Science.gov (United States)
Melzani, Mickaël; Winisdoerffer, Christophe; Walder, Rolf; Folini, Doris; Favre, Jean M.; Krastanov, Stefan; Messmer, Peter
2013-10-01
simulations. The other is that the level of electric field fluctuations scales as 1/Λ_PIC ∝ p. We provide a corresponding exact expression, taking into account the finite superparticle size. We confirm both expectations with simulations. Fourth, we compare the Vlasov-Maxwell theory, often used for code benchmarking, to the PIC model. The former describes a phase-space fluid with Λ = +∞ and no correlations, while the PIC plasma features a small Λ and a high level of correlations when compared to a real plasma. These differences have to be kept in mind when interpreting and validating PIC results against the Vlasov-Maxwell theory and when modeling real physical plasmas.
20. Bias and sensitivity in the placement of fossil taxa resulting from interpretations of missing data.
Science.gov (United States)
Sansom, Robert S
2015-03-01
The utility of fossils in evolutionary contexts is dependent on their accurate placement in phylogenetic frameworks, yet intrinsic and widespread missing data make this problematic. The complex taphonomic processes occurring during fossilization can make it difficult to distinguish absence from non-preservation, especially in the case of exceptionally preserved soft-tissue fossils: is a particular morphological character (e.g., appendage, tentacle, or nerve) missing from a fossil because it was never there (phylogenetic absence), or just happened to not be preserved (taphonomic loss)? Missing data have not been tested in the context of interpretation of non-present anatomy nor in the context of directional shifts and biases in affinity. Here, complete taxa, both simulated and empirical, are subjected to data loss through the replacement of present entries (1s) with either missing (?s) or absent (0s) entries. Both cause taxa to drift down trees, from their original position, toward the root. Absolute thresholds at which downshift is significant are extremely low for introduced absences (two entries replaced, 6% of present characters). The opposite threshold in empirical fossil taxa is also found to be low; two absent entries replaced with presences causes fossil taxa to drift up trees. As such, only a few instances of non-preserved characters interpreted as absences will cause fossil organisms to be erroneously interpreted as more primitive than they were in life. This observed sensitivity to coding non-present morphology presents a problem for all evolutionary studies that attempt to use fossils to reconstruct rates of evolution or unlock sequences of morphological change. Stem-ward slippage, whereby fossilization processes cause organisms to appear artificially primitive, appears to be a ubiquitous and problematic phenomenon inherent to missing data, even when no decay biases exist. Absent characters therefore require explicit justification and taphonomic
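The perturbation procedure described, in which present entries (1s) of a taxon are replaced with either missing (?) or absent (0) entries, might look like the following sketch (the character matrix, replacement count, and encoding are invented for illustration):

```python
import random

def degrade_taxon(row, n_replace, replacement, seed=0):
    """Replace n_replace of a taxon's present entries ('1') with either
    '?' (simulating non-preservation) or '0' (simulating a non-present
    character interpreted as phylogenetic absence)."""
    rng = random.Random(seed)
    states = list(row)
    present = [i for i, state in enumerate(states) if state == "1"]
    for i in rng.sample(present, n_replace):
        states[i] = replacement
    return "".join(states)

# Hypothetical fossil taxon scored for ten binary characters; the same
# seed degrades the same two entries in both treatments
original = "1101110101"
as_missing = degrade_taxon(original, 2, "?")
as_absent = degrade_taxon(original, 2, "0")
```

Rerunning a phylogenetic analysis with the degraded taxon would then reveal how far it drifts toward the root under each treatment.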
1. Radar imaging of glaciovolcanic stratigraphy, Mount Wrangell caldera, Alaska - Interpretation model and results
Science.gov (United States)
Clarke, Garry K. C.; Cross, Guy M.; Benson, Carl S.
1989-01-01
Glaciological measurements and an airborne radar sounding survey of the glacier lying in Mount Wrangell caldera raise many questions concerning the glacier thermal regime and volcanic history of Mount Wrangell. An interpretation model has been developed that allows the depth variation of temperature, heat flux, pressure, density, ice velocity, depositional age, and thermal and dielectric properties to be calculated. Some predictions of the interpretation model are that the basal ice melting rate is 0.64 m/yr and the volcanic heat flux is 7.0 W/m². By using the interpretation model to calculate two-way travel time and propagation losses, radar sounding traces can be transformed to give estimates of the variation of power reflection coefficient as a function of depth and depositional age. Prominent internal reflecting zones are located at depths of approximately 59-91 m, 150 m, 203 m, and 230 m. These internal reflectors are attributed to buried horizons of acidic ice, possibly intermixed with volcanic ash, that were deposited during past eruptions of Mount Wrangell.
2. Improving the Interpretability of Classification Rules Discovered by an Ant Colony Algorithm: Extended Results.
Science.gov (United States)
Otero, Fernando E B; Freitas, Alex A
2016-01-01
Most ant colony optimization (ACO) algorithms for inducing classification rules use an ACO-based procedure to create a rule in a one-at-a-time fashion. An improved search strategy has been proposed in the cAnt-Miner[Formula: see text] algorithm, where an ACO-based procedure is used to create a complete list of rules (ordered rules), i.e., the ACO search is guided by the quality of a list of rules instead of an individual rule. In this paper we propose an extension of the cAnt-Miner[Formula: see text] algorithm to discover a set of rules (unordered rules). The main motivations for this work are to improve the interpretation of individual rules by discovering a set of rules and to evaluate the impact on the predictive accuracy of the algorithm. We also propose a new measure to evaluate the interpretability of the discovered rules to mitigate the fact that the commonly used model size measure ignores how the rules are used to make a class prediction. Comparisons with state-of-the-art rule induction algorithms, support vector machines, and the cAnt-Miner[Formula: see text] producing ordered rules are also presented.
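The point that model size ignores how rules are used can be made concrete: in an ordered rule list, explaining one prediction means inspecting every term checked before a rule fires. A hypothetical sketch of such a usage-based measure (the rule encoding and data are invented, and this is not the exact measure proposed in the paper):

```python
def terms_inspected_per_prediction(rule_list, instances):
    """For an ordered rule list, a prediction walks the rules top-down
    until one fires, so the user must inspect every term checked along
    the way. Returns the average number of terms inspected per instance."""
    totals = []
    for inst in instances:
        seen = 0
        for terms in rule_list:  # each rule: list of (attribute, value) terms
            seen += len(terms)
            if all(inst.get(attr) == val for attr, val in terms):
                break  # rule fires; remaining rules never inspected
        totals.append(seen)
    return sum(totals) / len(instances)

# Hypothetical decision list with two rules; the second instance only
# matches the later rule, so more terms must be inspected to explain it
rules = [[("outlook", "sunny"), ("windy", False)], [("outlook", "rain")]]
data = [{"outlook": "sunny", "windy": False}, {"outlook": "rain", "windy": True}]
avg = terms_inspected_per_prediction(rules, data)
```

Two models with the same total number of terms can thus differ sharply on this measure, which is the gap the unordered-rules extension targets.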
3. Interpretation of coagulation test results using a web-based reporting system.
Science.gov (United States)
Quesada, Andres E; Jabcuga, Christine E; Nguyen, Alex; Wahed, Amer; Nedelcu, Elena; Nguyen, Andy N D
2014-01-01
Web-based synoptic reporting has been successfully integrated into diverse fields of pathology, improving efficiency and reducing typographic errors. Coagulation is a challenging field for practicing pathologists and pathologists-in-training alike. Our aim was to develop a Web-based program that can expedite the generation of an individualized interpretive report for a variety of coagulation tests. We developed a Web-based synoptic reporting system composed of 119 coagulation report templates and 38 thromboelastography (TEG) report templates covering a wide range of findings. Our institution implemented this reporting system in July 2011; it is currently used by pathology residents and attending pathologists. Feedback from the users of these reports has been overwhelmingly positive. Surveys note the time saved and reduced errors. Our easily accessible, user-friendly, Web-based synoptic reporting system for coagulation is a valuable asset to our laboratory services. Copyright© by the American Society for Clinical Pathology (ASCP).
4. Noninvasive testing in coronary artery disease. Selection of procedures and interpretation of results
International Nuclear Information System (INIS)
Sox, H.C. Jr.
1983-01-01
In patients with acute chest pain, selection of diagnostic tests and admission to and discharge from the coronary care unit are critical decisions for which useful empirical guidelines are now available. In hospitalized patients, the serum level of the MB fraction of creatine kinase is particularly useful when the history strongly suggests infarction but the ECG is nondiagnostic. In patients with chronic chest pain, the gender of the patient and the character of the pain are the most important guides to selecting and interpreting exercise tests. In women and in men with nonanginal chest pain, the myocardial scintiscan is preferred to the exercise ECG because of its greater diagnostic accuracy. In men with atypical angina, the two tests are nearly equivalent, and the added cost of the scintiscan is a factor in test selection. Since nearly all men with typical angina have coronary artery disease, diagnostic tests are usually not needed
5. Interpretation of scenario results in terms of described and mapped land change trajectories and archetypes
DEFF Research Database (Denmark)
Kuemmerle, Tobias; Stürck, Julia; Levers, Christian
Module VISIONS seeks to identify critical pathways to reach desired futures for land systems (i.e., visions). In order to do so, work package (WP) 11 links the model-based scenarios (module ASSESSMENT) to the visions formulated in a transdisciplinary process together with stakeholders...... of future developments of current land change archetypes; and (3) an interpretation of future land change in light of long-term land system trajectories. Synthesizing across these analyses, six key insights emerged. First, future land change was relatively similar across marker scenarios and different...... policy alternatives, for many regions in Europe, suggesting strong path dependency. Second, the impact of policy options can differ (a) between regions in Europe and (b) among marker scenarios, highlighting the need for contextualized, regionalized policy making. Third, the expansion and intensification...
6. Technical Note. The Concept of a Computer System for Interpretation of Tight Rocks Using X-Ray Computed Tomography Results
Directory of Open Access Journals (Sweden)
Habrat Magdalena
2017-03-01
The article presents the concept of a computer system for interpreting unconventional oil and gas deposits with the use of X-ray computed tomography results. The functional principles of the solution proposed are presented in the article. The main goal is to design a product which is a complex and useful tool in the form of specialist computer software for qualitative and quantitative interpretation of images obtained from X-ray computed tomography. It is devoted to the issues of prospecting and identification of unconventional hydrocarbon deposits. The article focuses on the idea of using X-ray computed tomography as a basis for the analysis of tight rocks, considering especially the functional principles of the system, which will be developed by the authors. The functional principles cover graphical visualization of rock structure, qualitative and quantitative interpretation of the rock-sample visualization model, and description of the parameters computed by the quantitative-interpretation module.
7. Abstract algebra
CERN Document Server
Garrett, Paul B
2007-01-01
Designed for an advanced undergraduate- or graduate-level course, Abstract Algebra provides an example-oriented, less heavily symbolic approach to abstract algebra. The text emphasizes specifics such as basic number theory, polynomials, finite fields, as well as linear and multilinear algebra. This classroom-tested, how-to manual takes a more narrative approach than the stiff formalism of many other textbooks, presenting coherent storylines to convey crucial ideas in a student-friendly, accessible manner. An unusual feature of the text is the systematic characterization of objects by universal properties.
8. Example-based illustrations of design, conduct, analysis and result interpretation of multi-regional clinical trials.
Science.gov (United States)
Quan, Hui; Mao, Xuezhou; Tanaka, Yoko; Binkowitz, Bruce; Li, Gang; Chen, Josh; Zhang, Ji; Zhao, Peng-Liang; Ouyang, Soo Peter; Chang, Mark
2017-07-01
Extensive research has been conducted in the Multi-Regional Clinical Trial (MRCT) area. To effectively apply an appropriate approach to a MRCT, we need to synthesize and understand the features of different approaches. In this paper, examples are used to illustrate considerations regarding design, conduct, analysis and interpretation of results of MRCTs. We start with a brief discussion of region definitions and the scenarios where different regions have differing requirements for a MRCT. We then compare different designs and models as well as the corresponding interpretation of the results. We highlight the importance of paying special attention to trial monitoring and conduct to prevent potential issues associated with the final trial results. Besides evaluating the overall treatment effect for the entire MRCT, we also consider other key analyses including quantification of regional treatment effects within a MRCT, and assessment of consistency of these regional treatment effects. Copyright © 2017 Elsevier Inc. All rights reserved.
9. Dependability of results in conference abstracts of randomized controlled trials in ophthalmology and author financial conflicts of interest as a factor associated with full publication.
Science.gov (United States)
Saldanha, Ian J; Scherer, Roberta W; Rodriguez-Barraquer, Isabel; Jampel, Henry D; Dickersin, Kay
2016-04-26
Discrepancies between information in conference abstracts and full publications describing the same randomized controlled trial have been reported. The association between author conflicts of interest and the publication of randomized controlled trials is unclear. The objective of this study was to use randomized controlled trials in ophthalmology to evaluate (1) the agreement in the reported main outcome results by comparing abstracts and corresponding publications and (2) the association between the author conflicts of interest and publication of the results presented in the abstracts. We considered abstracts describing results of randomized controlled trials presented at the 2001-2004 Association for Research in Vision and Ophthalmology conferences as eligible for our study. Through electronic searching and by emailing abstract authors, we identified the earliest publication (journal article) containing results of each abstract's main outcome through November 2013. We categorized the discordance between the main outcome results in the abstract and its paired publication as qualitative (a difference in the direction of the estimated effect) or as quantitative. We used the Association for Research in Vision and Ophthalmology categories for conflicts of interest: financial interest, employee of business with interest, consultant to business with interest, inventor/developer with patent, and receiving ≥ 1 gift from industry in the past year. We calculated the relative risks (RRs) of publication associated with the categories of conflicts of interest for abstracts with results that were statistically significant, not statistically significant, or not reported. We included 513 abstracts, 230 (44.8 %) of which reached publication. Among the 86 pairs with the same main outcome domain at the same time point, 47 pairs (54.7 %) had discordant results: qualitative discordance in 7 pairs and quantitative discordance in 40 pairs. Quantitative discordance was indicated
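A relative risk of publication, as computed here, is simply the ratio of publication proportions between abstracts with and without a given conflict of interest; a minimal sketch with invented counts (not the study's data):

```python
def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """RR = risk (proportion) in the exposed group divided by risk in
    the unexposed group; RR > 1 means the outcome is more likely among
    the exposed."""
    risk_exposed = exposed_events / exposed_total
    risk_unexposed = unexposed_events / unexposed_total
    return risk_exposed / risk_unexposed

# Hypothetical: 60 of 100 abstracts whose authors reported a financial
# interest reached publication, versus 170 of 413 abstracts without one
rr = relative_risk(60, 100, 170, 413)
```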
10. Article Abstract
African Journals Online (AJOL)
Abstract. Simple learning tools to improve clinical laboratory practical skills training. B Taye, BSc, MPH. Addis Ababa University, College of Health Sciences, Addis Ababa, ... concerns about the competence of medical laboratory science graduates. ... standardised practical learning guides and assessment checklists would.
11. Potassium-argon age determination of crystalline complexes of West Carpathians and preliminary result interpretation
International Nuclear Information System (INIS)
Bagdasaryan, G.P.; Gukasyan, P.Kh.; Veselsky, I.
1977-01-01
Results obtained using the K-Ar method, compared with results obtained by radiometric and palaeontological methods, in general confirm the Palaeozoic age of crystalline rocks in the Western Carpathians. The existence of Precambrian rocks in this region may be assumed, although there is still no geochronological evidence for this. The solution of this problem will also require Rb-Sr isochron and U-Th-Pb absolute dating. (author)
12. Supporting Accurate Interpretation of Self-Administered Medical Test Results for Mobile Health: Assessment of Design, Demographics, and Health Condition.
Science.gov (United States)
Hohenstein, Jess C; Baumer, Eric Ps; Reynolds, Lindsay; Murnane, Elizabeth L; O'Dell, Dakota; Lee, Seoho; Guha, Shion; Qi, Yu; Rieger, Erin; Gay, Geri
2018-02-28
Technological advances in personal informatics allow people to track their own health in a variety of ways, representing a dramatic change in individuals' control of their own wellness. However, research regarding patient interpretation of traditional medical tests highlights the risks in making complex medical data available to a general audience. This study aimed to explore how people interpret medical test results, examined in the context of a mobile blood testing system developed to enable self-care and health management. In a preliminary investigation and main study, we presented 27 and 303 adults, respectively, with hypothetical results from several blood tests via one of the several mobile interface designs: a number representing the raw measurement of the tested biomarker, natural language text indicating whether the biomarker's level was low or high, or a one-dimensional chart illustrating this level along a low-healthy axis. We measured respondents' correctness in evaluating these results and their confidence in their interpretations. Participants also told us about any follow-up actions they would take based on the result and how they envisioned, generally, using our proposed personal health system. We find that a majority of participants (242/328, 73.8%) were accurate in their interpretations of their diagnostic results. However, 135 of 328 participants (41.1%) expressed uncertainty and confusion about their ability to correctly interpret these results. We also find that demographics and interface design can impact interpretation accuracy, including false confidence, which we define as a respondent having above average confidence despite interpreting a result inaccurately. Specifically, participants who saw a natural language design were the least likely (421.47 times, P=.02) to exhibit false confidence, and women who saw a graph design were less likely (8.67 times, P=.04) to have false confidence. On the other hand, false confidence was more likely
13. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models
NARCIS (Netherlands)
Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.
2015-01-01
The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for
14. The Einstein/CFA stellar survey - Overview of the data and interpretation of results
Science.gov (United States)
Vaiana, G. S.
1981-01-01
Results are presented from an extensive survey of stellar X-ray emission, using the Einstein Observatory. Over 140 stars have been detected to date, throughout the H-R diagram, thus showing that soft X-ray emission is the norm rather than the exception for stars in general. This finding is strongly at odds with pre-Einstein expectations based on standard acoustic theories of coronal heating. Typical examples of stellar X-ray detections and an overview of the survey data are presented. In combination with recent results from solar X-ray observations, the new Einstein data argue for the general applicability of magnetic field-related coronal heating mechanisms.
15. Critical experiments carried out with a homogeneous plutonium solution. Experimental results. Theoretical interpretations
International Nuclear Information System (INIS)
Bouly, J.C.; Caizergues, R.; Deilgat, E.; Houelle, M.; Lecorche, P.
1967-01-01
This report groups together a series of experimental and theoretical studies on cylinders and plates of solution tried out at the Valduc Centre. a) Comparison of the theoretical and experimental results obtained on critical heights of solutions. b) Study of the effect of nitrogen, introduced in the form of the NO₃⁻ ion, on the reactivity of fissile media. c) Study of the effect of ²⁴⁰Pu on the reactivity of these media. d) Study of the influence of the dimensions of the inner cavity of annular cylinders, as well as of the influence of the moderator which may be introduced. Simple results were obtained which were easy to apply. An extrapolation to other geometries is made. (authors) [fr
16. How to Interpret Thyroid Biopsy Results: A Three-Year Retrospective Interventional Radiology Experience
International Nuclear Information System (INIS)
Oppenheimer, Jason D.; Kasuganti, Deepa; Nayar, Ritu; Chrisman, Howard B.; Lewandowski, Robert J.; Nemcek, Albert A.; Ryu, Robert K.
2010-01-01
Results of thyroid biopsy determine whether thyroid nodule resection is appropriate and the extent of thyroid surgery. At our institution we use 20/22-gauge core biopsy (CBx) in conjunction with fine-needle aspiration (FNA) to decrease the number of passes and improve adequacy. Occasionally, both ultrasound (US)-guided FNA and CBx yield unsatisfactory specimens. To justify clinical recommendations for these unsatisfactory thyroid biopsies, we compare rates of malignancy at surgical resection for unsatisfactory biopsy results against definitive biopsy results. We retrospectively reviewed a database of 1979 patients who had a total of 2677 FNA and 663 CBx performed by experienced interventional radiologists under US guidance from 2003 to 2006 at a tertiary-care academic center. In 451 patients who had surgery following biopsy, Fisher's exact test was used to compare surgical malignancy rates between unsatisfactory and malignant biopsy cohorts as well as between unsatisfactory and benign biopsy cohorts. We defined statistical significance at P = 0.05. We reported an overall unsatisfactory thyroid biopsy rate of 3.7% (100/2677). A statistically significant higher rate of surgically proven malignancies was found in malignant biopsy patients compared to unsatisfactory biopsy patients (P = 0.0001). The incidence of surgically proven malignancy in unsatisfactory biopsy patients was not significantly different from that in benign biopsy patients (P = 0.8625). In conclusion, an extremely low incidence of malignancy was associated with both benign and unsatisfactory thyroid biopsy results. The difference in incidence between these two groups was not statistically significant. Therefore, patients with unsatisfactory biopsy specimens can be reassured and counseled accordingly.
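The comparison above rests on Fisher's exact test for 2x2 contingency tables. As a sketch of how such a comparison works (the counts below are hypothetical, not the paper's data), the two-sided test can be computed directly from the hypergeometric distribution:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2
    denom = comb(n, col1)

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: 60/80 malignant-biopsy patients vs. 3/40
# unsatisfactory-biopsy patients found malignant at resection.
p = fisher_exact_two_sided(60, 20, 3, 37)
```

With such a skewed table the p-value is far below 0.05, i.e. the two cohorts differ significantly, which is the kind of conclusion the abstract draws (with its own data) for the malignant vs. unsatisfactory comparison.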
17. Comparing and interpreting laboratory results of Hg oxidation by a chlorine species
International Nuclear Information System (INIS)
Agarwal, Hans; Romero, Carlos E.; Stenger, Harvey G.
2007-01-01
Several researchers have performed experimental work in attempts to explain the effects of various flue-gas components on the oxidation of elemental mercury (Hg⁰). Some have concluded that water (H₂O) inhibits Hg oxidation by chlorine (Cl₂). In recently published work, it was found that sulfur dioxide (SO₂) and nitric oxide (NO) also have an inhibitory effect on Hg oxidation. This paper aims to serve three purposes. First, to present data obtained in a laboratory-scale apparatus designed to test the effect of Cl₂ on the oxidation of Hg⁰ with respect to temperature; the results show that as temperature increases, Cl₂ becomes less effective as an Hg oxidizing agent. Second, to consolidate data taken from several sources in which the effects of various flue-gas components on the oxidation of Hg⁰ are observed and discussed. The summary of these results shows the following general trends: at high temperatures, hydrogen chloride (HCl) is the primary chlorine species responsible for Hg⁰ oxidation, while at lower temperatures, Cl₂ is the dominant species. Third, a simple two-reaction model is suggested to predict the experimental data shown in this paper. The predicted percent Hg oxidation values correspond well with the observed experimental values.
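The abstract's "simple two-reaction model" is not spelled out. A minimal sketch of the kind of competing-reaction scheme that can reproduce the reported trend (Cl₂ less effective at higher temperature) pairs Hg⁰ oxidation with a faster-growing loss of the oxidant; all rate parameters below are hypothetical, chosen only to illustrate the trend, not fitted to the paper's data:

```python
from math import exp

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R T))."""
    return A * exp(-Ea / (R * T))

def hg_oxidized_fraction(T, t=1.0):
    """Fraction of Hg0 oxidized after residence time t (s) under a
    two-reaction scheme (illustrative, hypothetical parameters):
      R1: Hg0 + Cl2 -> HgCl2   (pseudo-first-order in Hg0)
      R2: Cl2 -> consumed      (competing loss of the oxidant)
    With Ea2 > Ea1, R2 wins at high temperature, so the total Hg0
    'exposure' to Cl2, k1/k2 * (1 - exp(-k2 t)), falls as T rises.
    """
    k1 = arrhenius(2.0e5, 40e3, T)    # hypothetical R1 parameters
    k2 = arrhenius(1.1e11, 120e3, T)  # hypothetical R2 parameters
    exposure = k1 / k2 * (1.0 - exp(-k2 * t))
    return 1.0 - exp(-exposure)
```

With these illustrative numbers the oxidized fraction drops from near-complete around 500 K to a few percent around 900 K, mirroring the reported temperature dependence.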
18. How genetic data improve the interpretation of results of faecal glucocorticoid metabolite measurements in a free-living population.
Directory of Open Access Journals (Sweden)
Maik Rehnus
Measurement of glucocorticoid metabolites (GCM) in faeces has become a widely used and effective tool for evaluating the amount of stress experienced by animals. However, the potential sampling bias resulting from an oversampling of individuals when collecting "anonymous" (unknown sex or individual) faeces has rarely been investigated. We used non-invasive genetic sampling (NIGS) to investigate potential interpretation errors of GCM measurements in a free-living population of mountain hares during the mating and post-reproductive periods. Genetic data improved the interpretation of the faecal GCM measurements. In general, GCM concentrations were influenced by season; genetic information, however, revealed that this seasonal effect was sex-dependent. Within the mating period, females had higher GCM levels than males, but individual differences were more pronounced in males. In the post-reproductive period, GCM concentrations were influenced neither by sex nor by individual. We also identified potential pitfalls in the interpretation of anonymous faecal samples arising from individual differences in GCM concentrations and resampling rates. Our study showed that sex- and individual-dependent GCM levels led to misinterpretation of GCM values when "anonymous" faeces were collected. To accurately evaluate the amount of stress experienced by free-living animals using faecal GCM measurements, we recommend documenting the sex and identity of the individuals sampled. In stress-sensitive and elusive species, such documentation can be achieved by using NIGS; for diurnal animals with sexual and individual variation in appearance, or with marked individuals, it can be provided by a detailed field protocol.
19. Measurement of the radioactive internal contamination and interpretation of the results
International Nuclear Information System (INIS)
Colard, J.F.
1983-01-01
After a reminder of the purpose of these measurements and of the instrumentation used for the direct assessment of radioactive contamination in man, the performances of a Ge(Li) and a NaI detector of usual dimensions are compared. The evolution of ICRP concepts and recommendations is then discussed, from ICRP Publication 2 in 1959 to Publication 30 in 1979; the recommended norms are applied to three particular radioelements: ¹³¹I, ¹³⁷Cs and ²³⁹Pu. The difficulty of determining the derived investigation levels against which the results of direct measurements should be compared is pointed out. (author)
20. [Do we always correctly interpret the results of statistical nonparametric tests].
Science.gov (United States)
Moczko, Jerzy A
2014-01-01
The Mann-Whitney, Wilcoxon, Kruskal-Wallis and Friedman tests form a group of tests commonly used to analyze the results of clinical and laboratory data. These tests are considered to be extremely flexible, and their asymptotic relative efficiency exceeds 95 percent. Compared with the corresponding parametric tests, they do not require checking the fulfillment of conditions such as normality of the data distribution, homogeneity of variance, the lack of correlation between means and standard deviations, etc. They can be used on both interval and ordinal scales. The article presents an example based on the Mann-Whitney test showing that the choice of one of these four nonparametric tests, treated as a kind of gold standard, does not in every case lead to correct inference.
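As a sketch of the first test named above, the Mann-Whitney U statistic with its large-sample normal approximation can be computed in a few lines. The data are hypothetical, and no tie correction is applied, which a full implementation would need for heavily tied laboratory data:

```python
from math import erf, sqrt

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney test via the normal approximation.
    Returns (U, p). U counts pairs (xi, yj) with xi > yj, ties as 1/2."""
    n1, n2 = len(x), len(y)
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u, p

# Hypothetical lab values from two patient groups with a clear shift:
group_a = [4.1, 4.4, 4.6, 4.9, 5.0, 5.2, 5.3, 5.6, 5.8, 6.0]
group_b = [6.1, 6.3, 6.4, 6.7, 6.9, 7.0, 7.2, 7.4, 7.7, 8.0]
u, p = mann_whitney_u(group_a, group_b)
```

For these fully separated samples U = 0 and p is well below 0.05; for identical samples the approximation returns p = 1, as expected.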
1. Carboniferous Granitoid Magmatism of Northern Taimyr: Results of Isotopic-Geochemical Study and Geodynamic Interpretation
Science.gov (United States)
Kurapov, M. Yu.; Ershova, V. B.; Makariev, A. A.; Makarieva, E. V.; Khudoley, A. K.; Luchitskaya, M. V.; Prokopiev, A. V.
2018-03-01
Data on the petrography, geochemistry, and isotopic geochronology of granites from the northern part of the Taimyr Peninsula are considered. An Early-Middle Carboniferous age of these rocks has been established (U-Pb, SIMS). Judging by the results of ⁴⁰Ar/³⁹Ar dating, the rocks underwent metamorphism in the Middle Permian. In geochemical and isotopic composition, the granitic rocks have much in common with evolved I-type granites, which points to their formation in a suprasubduction continental-margin setting. The existence of an active Carboniferous margin along the southern edge of the Kara Block (in present-day coordinates) corroborates the close relationship of the studied region with the continent of Baltica.
2. Interpretation of the results of the CORA-33 dry core BWR test
International Nuclear Information System (INIS)
Ott, L.J.; Hagen, S.
1993-01-01
All BWR degraded core experiments performed prior to CORA-33 were conducted under "wet" core degradation conditions, in which water remains within the core and continuous steaming feeds metal/steam oxidation reactions on the in-core metallic surfaces. However, one dominant set of accident scenarios would occur with reduced metal oxidation under "dry" core degradation conditions, and prior to CORA-33 this set had been neglected experimentally. The CORA-33 experiment was designed specifically to address this dominant set of BWR "dry" core severe accident scenarios and to partially resolve phenomenological uncertainties concerning the behavior of relocating metallic melts draining into the lower regions of a "dry" BWR core. CORA-33 was conducted on October 1, 1992, in the CORA test facility at KfK. Review of the CORA-33 data indicates that the test objectives were achieved; that is, core degradation occurred at a core heatup rate and a test section axial temperature profile that are prototypic of full-core nuclear power plant (NPP) simulations at "dry" core conditions. Simulations of the CORA-33 test at ORNL have required modification of existing control blade/canister materials interaction models to include the eutectic melting of the stainless steel/Zircaloy interaction products and the heat of mixing of stainless steel and Zircaloy. The timing and location of canister failure and melt intrusion into the fuel assembly appear to be adequately simulated by the ORNL models. This paper presents the results of the posttest analyses carried out at ORNL based upon the experimental data and the posttest examination of the test bundle at KfK. The implications of these results with respect to degraded core modeling and the associated safety issues are also discussed.
3. Hunters, herders and hearths: interpreting new results from hearth row sites in Pasvik, Arctic Norway
Directory of Open Access Journals (Sweden)
Sven-Donald Hedman
2015-02-01
The transition from hunting to reindeer herding has been a central topic in a number of archaeological works. Recently conducted archaeological investigations of two interior hearth row sites in Pasvik, Arctic Norway, have yielded new results that add significantly to the discussion. The sites are dated within the period 1000-1300 AD, and are unique within this corpus due to their rich bone assemblages. Among the species represented, reindeer is predominant (87%), with fish (especially whitefish and pike) as the second most frequent category. Even sheep bones are present, and represent the earliest indisputable domesticate from any Sámi habitation site. A peculiar feature is the repeated spatial pattern in bone refuse disposal, showing a systematic and almost identical clustering at the two sites. Combining analyses of bone assemblages, artefacts and archaeological features, the paper discusses changes in settlement pattern, reindeer economies, and the organization of domestic space. The analyses provide new perspectives on early domestication as well as on the remarkable changes that took place in Sámi societies in northern Fennoscandia during the Viking Age and early Medieval Period.
4. Stability Analysis Of Earth Dam Slopes Subjected To Earthquake Using ERT Results Interpretation
Directory of Open Access Journals (Sweden)
Eko Andi Suryo
2018-01-01
Earth dam stability can be affected significantly by the existence of excessive leakage, owing to the decreasing shear strength of the dam material and the additional overturning moment. In such a scenario, a non-destructive soil investigation method is needed to analyze the stability of an earth dam in its current condition. This paper examines the use of Electrical Resistivity Tomography (ERT) to investigate soil layers and to measure parameters of soil shear strength indirectly. A first survey was carried out at the dam crest and downstream using the Wenner configuration along profile lines with an electrode spacing of 5 m; there were five profile lines, each 180 m long and spaced 10 m apart. Two further profile lines were then surveyed at a weak cross-section identified from its soil resistivity values. Laboratory tests were conducted to determine the relationship between resistivity value, moisture content, cohesion and angle of friction for each type of dam material. From the ERT results and the lab testing, a dam model using current material parameters can be obtained to perform a stability analysis of the dam subjected to earthquake loading. The lowest FOS was found to be about 1.15 at the upstream side and about 1.14 at the downstream side after applying a seismic load with a 100-year return period. Keywords: stability analysis, ERT, resistivity, leakage, dam
5. Tracer movement in a single fissure in granitic rock - some experimental results and their interpretation
International Nuclear Information System (INIS)
Neretnieks, I.; Eriksen, T.; Taetinen, P.
1980-08-01
Radionuclide migration was studied in a natural fissure in a granite core. The fissure was oriented parallel to the axis in a cylindrical core 30 cm long and 20 cm in diameter. The traced solution was injected at one end of the core and collected at the other. Breakthrough curves were obtained for the nonsorbing tracers, tritiated water and a large-molecular-weight lignosulphonate molecule, and for the sorbing tracers, cesium and strontium. From the breakthrough curves for the nonsorbing tracers it could be concluded that channeling occurs in the single fissure. A 'dispersion' model based on channeling is presented. The results from the sorbing tracers indicate that there is substantial diffusion into, and sorption in, the rock matrix. Sorption on the surface of the fissure also accounts for a part of the retardation of the sorbing species. A model which includes the mechanisms of channeling, surface sorption, matrix diffusion and matrix sorption is presented. The experimental breakthrough curves can be fitted fairly well by this model using independently obtained data on diffusivities and matrix sorption. (author)
6. Tracer Movement in a Single Fissure in Granitic Rock: Some Experimental Results and Their Interpretation
Science.gov (United States)
Neretnieks, Ivars; Eriksen, Tryggve; Tähtinen, Päivi
1982-08-01
Radionuclide migration was studied in a natural fissure in a granite core. The fissure was oriented parallel to the axis in a cylindrical core 30 cm long and 20 cm in diameter. The traced solution was injected at one end of the core and collected at the other. Breakthrough curves were obtained for the nonsorbing tracers, tritiated water, and a large-molecular-weight lignosulphonate molecule and for the sorbing tracers, cesium and strontium. From the breakthrough curves for the nonsorbing tracers it could be concluded that channeling occurs in the single fissure. A 'dispersion' model based on channeling is presented. The results from the sorbing tracers indicate that there is substantial diffusion into and sorption in the rock matrix. Sorption on the surface of the fissure also accounts for a part of the retardation effect of the sorbing species. A model which includes the mechanisms of channeling, surface sorption, matrix diffusion, and matrix sorption is presented. The experimental breakthrough curves can be fitted fairly well by this model by use of independently obtained data on diffusivities and matrix sorption.
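The mechanisms named above (advection along a channel plus diffusion and sorption in the rock matrix) admit a classical closed-form breakthrough curve for a step injection in a single planar fracture when hydrodynamic dispersion is neglected. The sketch below uses that standard single-fracture solution with illustrative parameter values, not the paper's fitted ones:

```python
from math import erfc, sqrt

def breakthrough(t, L=0.3, v=1e-4, b=1e-4, eps=0.005, D_p=1e-11, R_p=50.0):
    """C/C0 at time t [s] for a step injection in a single planar fracture
    with matrix diffusion and matrix sorption, dispersion neglected:
        C/C0 = erfc( eps*sqrt(D_p*R_p)*L / (2*b*v*sqrt(t - t_w)) )
    L: path length [m], v: water velocity [m/s], b: fracture half-aperture [m],
    eps: matrix porosity, D_p: pore diffusivity [m2/s], R_p: matrix retardation.
    All parameter values are illustrative, not the experiment's."""
    t_w = L / v  # plug-flow water travel time
    if t <= t_w:
        return 0.0
    sigma = eps * sqrt(D_p * R_p) * L / (b * v)
    return erfc(sigma / (2.0 * sqrt(t - t_w)))
```

A sorbing tracer (large R_p) rises later and tails longer than a nonsorbing one (R_p near 1) with otherwise identical parameters, which is the qualitative signature the experiments showed.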
7. MHD activity in the ISX-B tokamak: experimental results and theoretical interpretation
Energy Technology Data Exchange (ETDEWEB)
Carreras, B.A.; Dunlap, J.L.; Bell, J.D.; Charlton, L.A.; Cooper, W.A.; Dory, R.A.; Hender, T.C.; Hicks, H.R.; Holmes, J.A.; Lynch, V.E.
1982-01-01
The observed spectrum of MHD fluctuations in the ISX-B tokamak is clearly dominated by the n=1 mode when the q=1 surface is in the plasma. This fact agrees well with theoretical predictions based on 3-D resistive MHD calculations, which show that the (m=1, n=1) mode is then the dominant instability. It drives other n=1 modes through toroidal coupling and n>1 modes through nonlinear couplings. These theoretically predicted mode structures have been compared in detail with the experimentally measured wave forms (using arrays of soft x-ray detectors), and the agreement is excellent. More detailed comparisons between theory and experiment have required careful reconstructions of the ISX-B equilibria. The equilibria so constructed have permitted a precise evaluation of the ideal MHD stability properties of ISX-B. The present results indicate that the high-β ISX-B equilibria are marginally stable to finite-n ideal MHD modes. The resistive MHD calculations also show that at finite β there are unstable resistive pressure-driven modes.
8. Phenomenological MSSM interpretation of CMS results at $\sqrt{s} = 7$ and 8 TeV
CERN Document Server
CMS Collaboration
2015-01-01
Using a global Bayesian analysis, it is shown how the results from searches for supersymmetry performed by CMS constrain the Minimal Supersymmetric Standard Model (MSSM). The study is performed within the framework of the phenomenological MSSM (pMSSM), a 19-parameter realization of the R-parity-conserving weak-scale MSSM, which captures most of the latter's phenomenological features and therefore permits robust conclusions to be drawn about the MSSM. It is found that all pMSSM points considered with a gluino mass below 500 GeV are excluded. In the mass range between 500 GeV and 1400 GeV, there are many scenarios that cannot as yet be excluded, contrary to current gluino mass limits using simplified models. Similar conclusions are drawn for squarks, charginos, and neutralinos. The mass of the lighter top squark $\tilde{t}_1$ is found to be unconstrained and, therefore, a relatively light $\tilde{t}_1$ cannot be excluded in the pMSSM context. Constraints on the pMSSM parameter space provided by the Higgs...
9. Geochemistry and isotope hydrology of groundwaters in the Stripa Granite: results and preliminary interpretation
International Nuclear Information System (INIS)
Fritz, P.; Barker, J.F.; Gale, J.E.
1979-04-01
The results of geochemical and isotopic analyses on water samples from the granite at Stripa, Sweden, are presented. Groundwater samples collected from shallow, private wells; surface boreholes; and boreholes drilled from the 330 m and 410 m mine levels were analyzed for their major ion chemistry, dissolved gases, and environmental isotope contents. The principal change in the chemical load with depth is typified by chloride concentration, which increases from less than 5 mg/liter to about 300 mg/liter. There is a parallel increase in pH, which changes from about 6.5 to over 9.75. It is important to notice that calcite saturation is maintained and that, because of rising pH, dissolved inorganic carbon is lost. The total carbonate content thus decreases from about 70 mg/liter to less than 7 mg/liter. The ¹⁸O and deuterium analyses demonstrate that different fracture systems contain different water masses, whose age increases with depth. Groundwater age determinations with ¹⁴C and isotopes of the uranium decay series strongly indicate that water ages exceed 25,000 years. The ¹³C contents of the aqueous carbonate in these groundwaters indicate groundwater recharge through vegetated soil, presumably during an interglacial period. The ¹³C and ¹⁸O determinations show that most fracture calcites have formed in a wide variety of depositional environments, and not in the waters circulating today.
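The quoted >25,000-year ages rest on the radioactive decay of ¹⁴C. As a sketch, the uncorrected radiocarbon age follows directly from the decay law; real groundwater dating must additionally correct for dead carbon contributed by the calcite reactions described above, and the initial activity of 100 pMC below is simply an assumption:

```python
from math import log

HALF_LIFE_C14 = 5730.0  # years

def c14_age(pmc, initial_pmc=100.0):
    """Uncorrected radiocarbon age [years] from the measured activity in
    percent modern carbon (pMC): t = (t_half / ln 2) * ln(A0 / A)."""
    return HALF_LIFE_C14 / log(2.0) * log(initial_pmc / pmc)

# One half-life of decay (50 pMC) corresponds to 5730 years; an activity
# below about 4.8 pMC corresponds to an uncorrected age beyond 25,000 years.
```

Dead-carbon dilution makes the true water younger than the uncorrected age, which is why the abstract's combination of ¹⁴C with ¹³C and uranium-series isotopes strengthens the old-age conclusion.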
10. Implementing the EMF 9 gas trade scenarios and interpreting the results
International Nuclear Information System (INIS)
Rowse, J.
1989-01-01
A detailed description of the model employed appears in Rowse (1986) and the Appendix of Rowse (1987). Only the salient features are discussed here, along with the modifications carried out for the EMF scenarios and certain specific data and assumptions important for understanding the empirical results. The nonlinear programming model has seven Canadian regions, pipeline links to three US regions, twenty-five three-year time periods representing 1986-2060, and perfect foresight in allocating gas to domestic and export regions. It determines equilibrium Canadian consumption levels, Canadian supplies and exports to the US over this time frame by solving for equilibrium prices. Figure 1 indicates the regional breakdown employed, with the principal supply regions lying in Western Canada; Ontario is the principal consuming region in Canada. Prospective supplies and their locations are also indicated in Figure 1. All producing provinces of Western Canada have existing and prospective conventional supplies of gas, while prospective nonconventional supplies are confined to Alberta. Nonconventional or Deep Basin supplies are assumed available only by the mid-1990s at the earliest because they have not previously been commercially produced. Prospective supplies are also assumed available from the Mackenzie Delta/Beaufort Sea area - henceforth simply Delta - but again only by the mid-1990s, due to the lead time necessary for pipeline construction and for industry confidence in the financial viability of megaprojects to recover. Delta gas is shown as a prospective Alberta source because, if developed, it will likely be pipelined south to connect with the existing transportation network in Alberta. Both nonconventional gas and Delta gas are more costly than future conventional gas supplies, but the latter can only be introduced gradually over time because of constraints on gas finding rates and on expansion of the domestic drilling rig fleet.
11. Uncertainty and sensitivity studies supporting the interpretation of the results of TVO I/II PRA
International Nuclear Information System (INIS)
Holmberg, J.
1992-01-01
A comprehensive Level 1 probabilistic risk assessment (PRA) has been performed for the TVO I/II nuclear power units. As a part of the PRA project, the uncertainties of the risk models and methods were systematically studied in order to describe them and to demonstrate their impact on the results. The uncertainty study was divided into two phases: a qualitative and a quantitative study. The qualitative study comprised identification of uncertainties and qualitative assessments of their importance. The PRA was introduced, and the identified assumptions and uncertainties behind the models were documented. The most significant uncertainties were selected, by importance measures or other judgements, for further quantitative study. The quantitative study included sensitivity studies and propagation of uncertainty ranges. In the sensitivity studies, uncertain assumptions or parameters were varied in order to illustrate the sensitivity of the models. The propagation of the uncertainty ranges demonstrated the impact of the statistical uncertainties of the parameter values; the Monte Carlo method was used as the propagation method. The most significant uncertainties were those involved in modelling human interactions, dependences and common cause failures (CCFs), loss of coolant accident (LOCA) frequencies, and pressure suppression. The qualitative mapping out of the uncertainty factors proved useful in planning the quantitative studies, and also served as an internal review of the assumptions made in the PRA. The sensitivity studies were perhaps the most advantageous part of the quantitative study because they allowed individual analyses of the significance of the uncertainty sources identified. The uncertainty study proved a reasonable way of systematically and critically assessing the uncertainties in a risk analysis. The usefulness of such a study depends on the decision maker (power company), since uncertainty studies are primarily carried out to support decision making when uncertainties are
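The propagation step described above can be sketched with a toy two-sequence risk model. PRA parameter uncertainties are commonly given as lognormals specified by a median and an error factor EF (the ratio of the 95th percentile to the median); the model structure and every number below are hypothetical, not from the TVO study:

```python
import random
from math import exp, log

random.seed(1)  # reproducible draws

def lognormal(median, error_factor):
    """Sample a lognormal given its median and error factor
    EF = 95th percentile / median, so sigma = ln(EF) / 1.645."""
    sigma = log(error_factor) / 1.645
    return median * exp(random.gauss(0.0, sigma))

def core_damage_frequency():
    """One Monte Carlo draw of a hypothetical two-sequence model [1/yr]:
    (LOCA frequency x CCF probability) + (transient frequency x human error)."""
    return (lognormal(1e-3, 3.0) * lognormal(1e-2, 10.0)
            + lognormal(1.0, 3.0) * lognormal(1e-4, 10.0))

draws = sorted(core_damage_frequency() for _ in range(20000))
median = draws[len(draws) // 2]
p05, p95 = draws[int(0.05 * len(draws))], draws[int(0.95 * len(draws))]
```

The 5th-95th percentile band, rather than the single point estimate, is what such an uncertainty study reports to the decision maker.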
12. Results and Interpretation of the WFRD ELS Distillation Down-Select Test Data
Science.gov (United States)
Delzeit, Lance Dean; Flynn, Michael; Carter, Layne; Long, David A.
2010-01-01
Testing of the Wiped-film Rotating-disk (WFRD) evaporator was conducted in support of the Exploration Life Support Distillation Down-Select Test. The WFRD was constructed at NASA Ames Research Center (ARC) and tested at NASA Marshall Space Flight Center (MSFC). The WFRD was delivered to MSFC in September 2009, with testing of solutions #1 and #2 beginning immediately afterwards. Solution #1 was composed of humidity condensate and urine, including flush water and pretreatment chemicals. Solution #2 was composed of hygiene water, humidity condensate, and urine, including flush water and pretreatment chemicals. During the testing, the operational parameters of the WFRD were recorded, and samples of the feed, brine, and product were collected and analyzed. The steady-state results of processing 414 L of feed solution #1 and 1283 L of feed solution #2 demonstrated that running the WFRD at a brine temperature of 50 C gave an average production rate of 16.7 L/hr, with a specific energy consumption of 80.5 W-hr/L. Data analysis shows that the water recovery rates were 94% and 91%, respectively. The total mass of the WFRD as delivered to MSFC was 300 kg. The volume of the test stand rack was 1 m wide x 0.7 m deep x 1.9 m high, or 1.5 cu m, of which about half of the total volume was occupied by equipment. Chemical analysis of the distillate showed an average TOC of 20 ppm, a pH of 3.5, and a conductivity of 98 μmho/cm. Compared to the feed, the conductivity of the distillate decreased by 98.9%, the total ion concentration decreased by 99.6%, total organics decreased by 98.6%, and metals were at or below detection limits.
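The reported figures are related by simple mass and energy bookkeeping. The sketch below back-calculates the implied average power draw and checks the recovery arithmetic; the product volumes are hypothetical values chosen to be consistent with the stated 94% and 91% recoveries, not measured data:

```python
def specific_energy(power_w, rate_l_per_hr):
    """Specific energy consumption [W-hr/L] = average power / production rate."""
    return power_w / rate_l_per_hr

def water_recovery(product_l, feed_l):
    """Fraction of the feed recovered as distillate."""
    return product_l / feed_l

# 80.5 W-hr/L at 16.7 L/hr implies an average power draw of ~1344 W:
implied_power_w = 80.5 * 16.7

# Hypothetical product volumes consistent with the reported recoveries:
recovery_1 = water_recovery(389.0, 414.0)    # ~0.94 for feed solution #1
recovery_2 = water_recovery(1168.0, 1283.0)  # ~0.91 for feed solution #2
```

This kind of cross-check (power x time vs. energy per litre, product vs. feed volume) is how the tabulated down-select metrics tie together.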
13. Thermal Response Testing Results of Different Types of Borehole Heat Exchangers: An Analysis and Comparison of Interpretation Methods
Directory of Open Access Journals (Sweden)
Angelo Zarrella
2017-06-01
The design phase of ground source heat pump systems is an extremely important one, as many of the decisions made at that time affect the system's energy performance as well as its installation and operating costs. The current study examined the interpretation of thermal response test measurements used to evaluate the equivalent ground thermal conductivity and thus to design the system. All the measurements were taken at the same geological site, located in Molinella, Bologna (Italy), where a variety of borehole heat exchangers (BHEs) had been installed and investigated within the project Cheap-GSHPs (Cheap and efficient application of reliable Ground Source Heat exchangers and Pumps) of the European Union's Horizon 2020 research and innovation programme. The measurements were initially analyzed with the common interpretation based on the first-order approximation of the solution for the infinite line source model, and then by utilizing the complete solutions of both the infinite line and cylinder source models. An inverse numerical approach, based on a detailed model that considers the actual geometry of the BHE and the axial heat transfer as well as the effect of weather on the ground surface, was also used. Study findings revealed that the best result was generally obtained using the inverse numerical interpretation.
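The "common interpretation" mentioned above fits the late-time mean fluid temperature against ln(t): under the first-order infinite-line-source approximation, T(t) ≈ q/(4πλ)·ln(t) + const, so the equivalent conductivity is λ = q/(4πk), where k is the fitted slope and q the heat injection rate per metre of borehole. A minimal sketch using synthetic (not measured) data:

```python
from math import log, pi

def conductivity_from_trt(times_s, temps_c, q_w_per_m):
    """Ground thermal conductivity [W/(m K)] from the late-time slope of
    mean fluid temperature vs. ln(t), per the first-order infinite-line-
    source approximation: T(t) = q/(4 pi lambda) * ln(t) + const."""
    xs = [log(t) for t in times_s]
    n = len(xs)
    mx, my = sum(xs) / n, sum(temps_c) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(xs, temps_c))
         / sum((x - mx) ** 2 for x in xs))  # least-squares slope
    return q_w_per_m / (4.0 * pi * k)

# Synthetic check: data generated from a known lambda = 2.0 W/(m K):
lam, q = 2.0, 50.0
ts = [3600 * h for h in range(10, 72)]          # hours 10..71, in seconds
Ts = [q / (4 * pi * lam) * log(t) + 12.0 for t in ts]
```

The inverse numerical approach favoured by the study replaces this closed-form slope with a full BHE model fitted to the same data.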
14. Impact of star formation inhomogeneities on merger rates and interpretation of LIGO results
International Nuclear Information System (INIS)
O'Shaughnessy, R; Kopparapu, R K; Belczynski, K
2012-01-01
Within the next decade, ground-based gravitational-wave detectors are in principle capable of determining the compact object merger rate per unit volume of the local universe to better than 20% with more than 30 detections. These measurements will constrain our models of stellar, binary and star cluster evolution in the nearby present-day and ancient universe. We argue that the stellar models are sensitive to heterogeneities (in age and metallicity at least) in such a way that the predicted merger rates are subject to an additional 30-50% systematic error unless these heterogeneities are taken into account. Without adding new electromagnetic constraints on massive binary evolution or relying on more information from each merger (e.g., binary masses and spins), as few as 5 merger detections could exhaust the information available in a naive comparison to merger rate predictions. As a concrete example immediately relevant to the analysis of initial and enhanced LIGO results, we use a nearby-universe catalog to demonstrate that no one tracer of stellar content can be consistently used to constrain merger rates without introducing a systematic error of order 30% at 90% confidence (depending on the type of binary involved). For example, though binary black holes typically take many Gyr to merge, binary neutron stars often merge rapidly; different tracers of stellar content are required for these two types. More generally, we argue that theoretical binary evolution can depend sufficiently sensitively on star-forming conditions (even assuming no uncertainty in the binary evolution model) that the distribution of star-forming conditions must be incorporated to reduce the systematic error in merger rate predictions below roughly 40%. We emphasize that the degree of sensitivity to star-forming conditions depends on the binary evolution model and on the amount of relevant variation in star-forming conditions. For example, if after further comparison with electromagnetic and
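The 20%-with-30-detections figure, and the claim that 30-50% systematics cap what additional detections can buy, follow from simple Poisson counting. A sketch, treating the rate estimate as N detections over a known sensitive volume-time and adding any fractional systematic in quadrature:

```python
from math import sqrt

def rate_precision(n_detections, systematic=0.0):
    """Fractional 1-sigma uncertainty on a rate estimated from
    n Poisson-distributed detections, with an optional fractional
    systematic error added in quadrature."""
    statistical = 1.0 / sqrt(n_detections)
    return sqrt(statistical ** 2 + systematic ** 2)

# 30 detections alone give ~18% (better than 20%), but with a 40%
# systematic floor even 1000 detections cannot reach 20%:
stat_only = rate_precision(30)
with_syst = rate_precision(1000, systematic=0.40)
```

Once the statistical term falls below the systematic floor, further detections stop improving the comparison, which is the abstract's point about as few as 5 detections exhausting a naive rate comparison.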
15. Interpretation of Results of Studies Evaluating an Intervention Highlighted in Google Health News: A Cross-Sectional Study of News.
Directory of Open Access Journals (Sweden)
Romana Haneef
16. Distribution automation and control support; Analysis and interpretation of DAC working group results for use in project planning
Science.gov (United States)
Klock, P.; Evans, D.
1979-01-01
The Executive Summary and Proceedings of the Working Group Meeting were analyzed to identify specific projects appropriate for Distribution Automation and Control (DAC) RD&D, and specific projects that should be undertaken in the DAC RD&D program were recommended. The projects are presented under broad categories of work selected on the basis of ESC's interpretation of the results of the Working Group Meeting; some of the projects are noted as utility industry projects. ESC's recommendations regarding program management are also presented, and utility versus Government management responsibilities are noted.
17. Comments on the interpretation of differential scanning calorimetry results for thermoelastic martensitic transformations: Athermal versus thermally activated kinetics
International Nuclear Information System (INIS)
Morris, A.; Lipe, T.
1996-01-01
In a previous article, Van Humbeeck and Planes made a number of criticisms of the authors' recent paper concerning the interpretation of results obtained by Differential Scanning Calorimetry (DSC) on the martensitic transformation of Cu-Al-Ni-Mn-B alloys. Although the martensitic transformation of these shape memory alloys is generally classified as athermal, it has been confirmed that the capacity of the alloys to undergo a more complete thermoelastic transformation (i.e. better reversibility of the transformation) increases with the Mn content. This behavior has been explained by interpreting the DSC results obtained during thermal cycling in terms of a thermally activated mechanism controlling the direct and reverse transformations. When the heating rate increases during the reverse transformation, the DSC curves shift towards higher temperatures, while they shift towards lower temperatures when the cooling rate is increased during the direct transformation. Since the starting transformation temperatures (As, Ms) do not shift, Van Humbeeck and Planes state that there is no real peak shift and assume that the DSC experiments were carried out without taking into account the thermal lag between sample and cell. They then deduce a time constant, τ, of 60 seconds because the peak maximum shifts. In fact, the assumption made by Van Humbeeck and Planes is false.
18. Interpretative commenting.
Science.gov (United States)
Vasikaran, Samuel
2008-08-01
* Clinical laboratories should be able to offer interpretation of the results they produce.
* At a minimum, contact details for interpretative advice should be available on laboratory reports. Interpretative comments may be verbal, or written and printed.
* Printed comments on reports should be offered judiciously, only where they would add value; no comment is preferable to an inappropriate or dangerous comment.
* Interpretation should be based on locally agreed or nationally recognised clinical guidelines where available.
* Standard tied comments ("canned" comments) can have some limited use. Individualised narrative comments may be particularly useful in the case of tests that are new, complex or unfamiliar to the requesting clinicians, and where clinical details are available.
* Interpretative commenting should only be provided by appropriately trained and credentialed personnel.
* Audit of comments and continued professional development of the personnel providing them are important for quality assurance.
19. MAIN ABSTRACTS
Institute of Scientific and Technical Information of China (English)
2012-01-01
Reflection on Some Issues Regarding the System of Socialism with Chinese Characteristics Zhang Xingmao The establishment of the system of socialism with Chinese characteristics, as the symbol of China's entry into the socialist society with Chinese characteristics, is a significant development of the Marxist theory of social formation. The Chinese model is framed and defined by the socialist system with Chinese characteristics; therefore the study of different levels and aspects of the Chinese model should be related to the relevant Chinese system to guarantee a scientific interpretation. Under the fundamental system of socialism, the historical and logical starting point of the formation of socialism with Chinese characteristics lies in eliminating private ownership first and then allowing the existence and rapid development of the non-public sectors of the economy. With the gradual establishment of the basic economic system in the preliminary stage of socialism, and with adaptive adjustments in the economic, political, cultural, and social systems, the socialist system with Chinese characteristics is gradually formed.
20. HiView: an integrative genome browser to leverage Hi-C results for the interpretation of GWAS variants.
Science.gov (United States)
Xu, Zheng; Zhang, Guosheng; Duan, Qing; Chai, Shengjie; Zhang, Baqun; Wu, Cong; Jin, Fulai; Yue, Feng; Li, Yun; Hu, Ming
2016-03-11
Genome-wide association studies (GWAS) have identified thousands of genetic variants associated with complex traits and diseases. However, most of them are located in non-protein-coding regions, and it is therefore challenging to hypothesize the functions of these non-coding GWAS variants. Recent large efforts such as the ENCODE and Roadmap Epigenomics projects have predicted a large number of regulatory elements. However, the target genes of these regulatory elements remain largely unknown. Chromatin conformation capture based technologies such as Hi-C can directly measure chromatin interactions and have generated an increasingly comprehensive catalog of the interactome between distal regulatory elements and their potential target genes. Leveraging such information revealed by Hi-C holds the promise of elucidating the functions of genetic variants in human diseases. In this work, we present HiView, the first integrative genome browser to leverage Hi-C results for the interpretation of GWAS variants. HiView is able to display Hi-C data and statistical evidence for chromatin interactions in genomic regions surrounding any given GWAS variant, enabling straightforward visualization and interpretation. We believe that, as the first GWAS-variant-centered Hi-C genome browser, HiView is a useful tool for guiding post-GWAS functional genomics studies. HiView is freely accessible at: http://www.unc.edu/~yunmli/HiView.
1. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.
Science.gov (United States)
Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall
2016-01-01
Classical statistics is a well-established approach to the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to the analysis of medical time series data: (1) the classical statistical approach, represented by the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are the cervical cancer risk assessments produced by the three models. Our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.
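The Kaplan-Meier estimator named in the abstract above is the standard product-limit recurrence S(t) = Π(1 − d_i/n_i) over event times. A minimal pure-Python sketch follows; the function name and the toy survival data are invented for illustration and are unrelated to the cervical cancer screening dataset discussed in the paper.

```python
# Minimal Kaplan-Meier product-limit estimator (illustrative sketch only;
# the data below are invented, not from the study described above).
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = event observed, 0 = censored.
    Returns a list of (t, S(t)) pairs at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        removed = 0
        # group ties (events and censorings) at the same time point
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk   # product-limit update
            curve.append((t, surv))
        at_risk -= removed
    return curve

# 7 invented subjects: event times with censoring flags
curve = kaplan_meier([2, 3, 3, 5, 8, 8, 12], [1, 1, 0, 1, 0, 1, 0])
```

With the toy data, the survival estimate steps down only at observed event times (t = 2, 3, 5, 8), while censored subjects reduce the risk set without a step.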
2. Numerical models: Detailing and simulation techniques aimed at comparison with experimental data, support to test result interpretation
International Nuclear Information System (INIS)
Lin Chiwen
2001-01-01
This part of the presentation discusses the modelling details required and the simulation techniques available for analyses, facilitating comparison with the experimental data and providing support for interpretation of the test results. It is organised to cover the following topics: analysis inputs; basic modelling requirements for the reactor coolant system; methods applicable to the reactor coolant system; consideration of damping values and integration time steps; typical analytic models used for analysis of the reactor pressure vessel and internals; hydrodynamic mass and fluid damping for the internals analysis; impact elements for fuel analysis; and the PEI theorem and its applications. The intention of these topics is to identify the key parameters associated with the analysis models and analytical methods. This should provide a proper basis for useful comparison with the test results
3. Investigation of flow distribution in a fracture zone at the Stripa mine, using the radar method, results and interpretation
International Nuclear Information System (INIS)
1989-12-01
The objective of the current project was to map the steady-state flow distribution in a fracture zone in the Stripa mine when water was injected into the zone from a borehole. The basic idea was to map the flow paths by taking the difference between radar results obtained prior to and after injection of a saline tracer (KBr) into the fracture zone. The radar experiments were combined with a more conventional migration experiment to provide validation and calibration of the radar results. Difference tomography using borehole radar proved a valuable and successful tool for mapping groundwater flow paths in fractured rock. The data presented were of good quality and sufficiently consistent throughout the investigated rock volume. The interpreted results verified previous findings in the surveyed granite volume as well as contributed new and unique information about the transport properties of the rock at the site. The inflow data and the tracer breakthrough data have served as a useful aid in the interpretation of the flow distribution within the investigated zone and also within the surrounding rock mass. From the differential attenuation tomograms the migration of the injected tracer was mapped and presented both in the fracture zone of interest and in the entire investigated granite volume. From the radar tomographic model, the major tracer migration was found to be concentrated in a few major flow paths. Two additional fracture zones, originally detected within this project, were found to transport portions of the injected tracer. The radar results combined with the tracer breakthrough data were used to estimate the area with tracer transport as well as the flow porosity and the wetted surface. (orig.)
4. BALWOIS: Abstracts
International Nuclear Information System (INIS)
Morell, Morell; Todorovik, Olivija; Dimitrov, Dobri
2004-01-01
anthropogenic pressures and international shared water. Here are the 320 abstracts proposed by authors and accepted by the Scientific Committee. More than 200 papers are presented during the Conference on 8 topics related to Hydrology, Climatology and Hydrobiology: - Climate and Environment; - Hydrological regimes and water balances; - Droughts and Floods; - Integrated Water Resources Management; - Water bodies Protection and Ecohydrology; - Lakes; - Information Systems for decision support; - Hydrological modelling. Papers relevant to INIS are indexed separately
5. Fission-product behaviour in irradiated TRISO-coated particles: Results of the HFR-EU1bis experiment and their interpretation
International Nuclear Information System (INIS)
Barrachin, M.; Dubourg, R.; Groot, S. de; Kissane, M.P.; Bakker, K.
2011-01-01
Highlights: → The microstructure and FPs in UO2 TRISO particles (10% FIMA, 1573 K) were studied. → Very large porosities (>10 μm) were observed in the high-temperature particles. → Significant Xe and Cs releases from the kernel were observed. → Mo and Ru are mainly present in the metallic precipitates in the kernel. - Abstract: It is important to understand fission-product (FP) and kernel microstructure evolution in TRISO-coated fuel particles. FP behaviour, while central to severe-accident evaluation, impacts: evolution of the kernel oxygen potential, governing in turn carbon oxidation (amoeba effect and pressurization); particle pressurization through fission-gas release from the kernel; and coating mechanical resistance via reaction with some FPs (Pd, Cs, Sr). The HFR-EU1bis experiment irradiated five HTR fuel pebbles containing TRISO-coated UO2 particles and went beyond current HTR specifications (e.g., central temperature of 1523 K). This study presents ceramographic and EPMA examinations of irradiated urania kernels and coatings. Significant evolutions of the kernel (grain structure, porosity, metallic-inclusion size, intergranular bubbles) as a function of temperature are shown. Results concerning FP migration are presented, e.g., significant xenon, caesium and palladium release from the kernel, with molybdenum and ruthenium mainly present in metallic precipitates. The observed FP and microstructural evolutions are interpreted and explanations proposed. The effect of high flux rate and high temperature on fission-gas behaviour, grain-size evolution and kernel swelling is discussed. Furthermore, Cs, Mo and Zr behaviour is interpreted in connection with the oxygen potential. This paper shows that combining state-of-the-art post-irradiation examination and state-of-the-art modelling fundamentally improves understanding of HTR fuel behaviour.
6. Interactive web visualization tools to the results interpretation of a seismic risk study aimed at the emergency levels definition
Science.gov (United States)
Rivas-Medina, A.; Gutierrez, V.; Gaspar-Escribano, J. M.; Benito, B.
2009-04-01
Results of a seismic risk assessment study are often applied and interpreted by users unspecialised in the topic or lacking a scientific background. In this context, the availability of tools that help translate essentially scientific content for broader audiences (such as decision makers or civil defence officials), and that represent and manage results in a user-friendly fashion, is of indubitable value. One such tool is the visualization tool VISOR-RISNA, a web tool developed within the RISNA project (financed by the Emergency Agency of Navarre, Spain) for regional seismic risk assessment of Navarre and the subsequent development of emergency plans. The RISNA study included seismic hazard evaluation, geotechnical characterization of soils, incorporation of site effects into expected ground motions, vulnerability distribution assessment and estimation of expected damage distributions for a 10% probability of exceedance in 50 years. The main goal of RISNA was the identification of higher-risk areas on which to focus future detailed, local-scale risk studies and the corresponding urban emergency plans. A geographic information system was used to combine different information layers, generate tables of results and represent maps with partial and final results. The visualization tool VISOR-RISNA is intended to facilitate the interpretation and representation of the collection of results, with the ultimate purpose of defining actuation plans. A number of criteria for defining actuation priorities are proposed in this work. They are based on combinations of risk parameters resulting from the risk study (such as expected ground motion, damage and exposed population), as determined by risk assessment specialists. Although the values that these parameters take are a result of the risk study, their distribution into several classes depends on the intervals defined by decision makers or civil defence officials. These criteria provide a ranking of
7. Simplified mathematical models for interpreting the results of tests carried out by labelling the whole piezometric column in water wells
International Nuclear Information System (INIS)
Munera, H.A.
1974-01-01
Approximate methods used to interpret the results of tests based on radioactive tracer dilution in a single water well by labelling the whole piezometric column are described; these simple mathematical models have been used to obtain semi-quantitative data on the apparent (horizontal) velocity in non-homogeneous aquifers with velocities of metres per day. Measurements have also been made in a homogeneous aquifer with velocities of centimetres per day. Interpretation is based on determination of the average concentration for the various well zones; this involves recognition of a mean velocity for each region. All the tracer dilution effects that are not due to horizontal or vertical flow between two zones, i.e. convection, artificial mixing, diffusion and so on, are grouped together as a single term, which is taken arbitrarily to be proportional to the difference in concentration between the regions under consideration; its value is obtained from the experimental dilution curve. The model was applied to the solution of the three cases encountered most frequently during our measurements in Colombia: (a) when the well penetrates a permeable zone and an adjacent impermeable zone; (b) when the well penetrates a permeable zone contained between impermeable regions; and (c) when the well traverses an aquifer with two adjacent zones of different permeability contained between impermeable zones. The shape of the dilution curve (logarithm of concentration versus time, usually with two or more slopes) is predicted by the model, whose approximate nature is consistent with the fact that the method of labelling the whole piezometric column is semi-quantitative. The results obtained for measurements made when there are considerable vertical flows are apparently correct, but there is no other experimental measurement available to confirm them. (author)
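The dilution curve described above (logarithm of concentration versus time, with one slope per zone) can be illustrated with the textbook point-dilution relation: for pure horizontal flow through a well section of volume V and vertical cross-section A, C(t) = C0·exp(−vAt/V), so the slope of ln C versus t is proportional to the apparent velocity. This is a generic single-zone sketch, not the author's multi-zone model, and the function name and numbers are invented.

```python
import math

# Illustrative point-dilution sketch: recover the apparent horizontal
# velocity from the slope of ln(C) versus t. All values are invented.
def apparent_velocity(times, concs, volume, cross_section):
    """Least-squares slope of ln(C) vs t, converted to apparent velocity
    via v = -slope * V / A (assumes C(t) = C0 * exp(-v*A*t/V))."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    tbar = sum(times) / n
    lbar = sum(logs) / n
    slope = sum((t - tbar) * (l - lbar) for t, l in zip(times, logs)) \
            / sum((t - tbar) ** 2 for t in times)
    return -slope * volume / cross_section

# Synthetic noise-free dilution curve with v = 0.5 m/day and V/A = 0.1 m
times = [0.0, 1.0, 2.0, 3.0]                              # days
concs = [100.0 * math.exp(-0.5 * t / 0.1) for t in times]  # tracer conc.
v = apparent_velocity(times, concs, volume=0.1, cross_section=1.0)
```

With field data, a curve showing two or more slopes (as the abstract describes) would be split into time segments, each fitted separately to give a mean velocity per zone.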
8. X-Ray Microtomography (μCT) as a Useful Tool for Visualization and Interpretation of Shear Strength Test Results
Directory of Open Access Journals (Sweden)
Stefaniuk Damian
2015-02-01
The paper demonstrates the applicability of X-ray microtomography (μCT) to the analysis of the results of shear strength examinations of clayey soils. The method of X-ray three-dimensional imaging offers new possibilities in soil testing. The work focuses on a non-destructive method of evaluating the quality of specimens used in shear tests and the mechanical behavior of soil. The paper presents the results of examination of 4 selected clayey soils. Specimens prepared for the triaxial test were scanned using μCT before and after the triaxial compression tests. The shear strength parameters of the soils were estimated. Changes in soil structure caused by compression and shear failure are presented as visualizations of the samples tested. This allowed for improved interpretation and evaluation of soil strength parameters and recognition of pre-existing fissures and the exact mode of failure. Basic geometrical parameters were determined for selected cross-sections of specimens after failure. The test results indicate the utility of the method applied in soil testing.
9. Agent-Based Modelling of Agricultural Water Abstraction in Response to Climate, Policy, and Demand Changes: Results from East Anglia, UK
Science.gov (United States)
Swinscoe, T. H. A.; Knoeri, C.; Fleskens, L.; Barrett, J.
2014-12-01
Freshwater is a vital natural resource for multiple needs, such as drinking water for the public, industrial processes, hydropower for energy companies, and irrigation for agriculture. In the UK, crop production is largest in East Anglia, while at the same time the region is also the driest, with average annual rainfall between 560 and 720 mm (1971 to 2000). Many water catchments of East Anglia are reported as over-licensed or over-abstracted. Therefore, freshwater available for agricultural irrigation abstraction in this region is becoming both increasingly scarce due to competing demands, and increasingly variable and uncertain due to climate and policy changes. It is vital for water users and policy makers to understand how these factors will affect individual abstractors and water resource management at the system level. We present the first results of an agent-based model that captures the complexity of this system as individual abstractors interact, learn and adapt to these internal and external changes. The purpose of this model is to simulate what patterns of water resource management emerge at the system level based on local interactions, adaptations and behaviours, and what policies lead to a sustainable water resource management system. The model is based on an irrigation abstractor typology derived from a survey in the study area, to capture individual behavioural intentions under a range of water availability scenarios, in addition to farm attributes and demographics. Regional climate change scenarios, current and new abstraction licence reforms by the UK regulator (such as water trading and water shares), and estimated demand increases from other sectors were used as additional input data. Findings from the integrated model provide new understanding of the patterns of water resource management likely to emerge at the system level.
10. Unifying Abstractions
DEFF Research Database (Denmark)
This thesis presents the RUNE language, a semantic construction of related and tightly coupled programming constructs presented in the shape of a programming language. The major contribution is the successful design of a highly unified and general programming model, capable of expressing some of the most complex type relations put forth in type systems research, without compromising such fundamental qualities as conceptuality, modularity and static typing. While many new constructs and unifications are put forth to substantiate their conceptual validity, type rules are given to support a unified name declaration mechanism. The resulting expressiveness allows for argument covariance, dependent types and module types, plus a solution to the so-called expression problem of two-way extensibility in object-oriented languages.
11. Journal Abstracts
Directory of Open Access Journals (Sweden)
Mete Korkut Gülmen
1996-07-01
diffusion into the blood was less. The left kidney and left lung were more affected than the right kidney and right lung; the same holds for the left and right psoas muscles. The least affected were the anterior lobe of the liver and the apices of the lungs. This phenomenon can significantly affect drug concentrations in the liver and in blood sampled from body organs, and consequently the liver/blood drug ratios. To reduce the effects of postmortem drug diffusion from the stomach, it is recommended that samples be taken from peripheral blood vessels, from the skeletal muscles of an extremity, from deep regions of the right lobe of the liver, and from the base of the lungs rather than the apices. POSTMORTEM ETHANOL PRODUCTION AND FACTORS INFLUENCING ITS INTERPRETATION. Postmortem production of ethanol and factors that influence interpretation. Pounder DJ, Cox DE, Kuroda N. Am J Forensic Med Pathol. 1996; 17(1): 8-20. Ethanol analysis is the analysis most frequently performed in forensic toxicology laboratories. Postmortem ethanol analysis is often complicated by postmortem ethanol production. Many species of bacteria, yeasts and moulds can produce ethanol from various substrates. As the interval between death and autopsy and the storage temperature increase, the likelihood of ethanol synthesis also increases. Distinguishing postmortem alcohol production from antemortem alcohol intake is often difficult. This review presents the criteria for recognizing postmortem ethanol synthesis and the factors to be considered in interpreting postmortem ethanol findings. The criteria are the case history, the sample storage conditions, the types of microbes present, atypical fluid and tissue distribution of ethanol, the ethanol concentration, and the detection of other alcohols and volatile substances. A valid interpretation of whether ethanol is of antemortem or postmortem origin can be made through careful evaluation of all available information. AS A RESULT OF CLOSED HEAD TRAUMA
12. The MEXICO project (Model Experiments in Controlled Conditions): The database and first results of data processing and interpretation
International Nuclear Information System (INIS)
Snel, H; Schepers, J G; Montgomerie, B
2007-01-01
The MEXICO project (Model Experiments in Controlled Conditions) was an FP5 project, partly financed by the European Commission. The main objective was to create a database of detailed aerodynamic and load measurements on a wind turbine model, in a large and high-quality wind tunnel, to be used for model validation and improvement. Here "model" stands both for the extended BEM modelling used in state-of-the-art design and certification software, and for CFD modelling of the rotor and near-wake flow. For this purpose a three-bladed 4.5 m diameter wind tunnel model was built and instrumented. The wind tunnel experiments were carried out in the open section (9.5 × 9.5 m2) of the Large Scale Facility of the DNW (German-Netherlands) during a six-day campaign in December 2006. The measurement conditions cover three operational tip speed ratios, many blade pitch angles, three yaw misalignment angles and a small number of unsteady cases in the form of pitch ramps and rotor speed ramps. One of the most important features of the measurement program was the flow field mapping with stereo PIV techniques. Overall the measurement campaign was very successful. The paper describes the now-existing database and discusses a number of highlights from early data processing and interpretation. It should be stressed that all results are first results: no tunnel correction has been performed so far, nor has the necessary checking of data quality been completed
13. Constraint-Based Abstract Semantics for Temporal Logic
DEFF Research Database (Denmark)
Banda, Gourinath; Gallagher, John Patrick
2010-01-01
Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal mu-calculus, which is the basis for abstract model checking. The abstract semantic funct...
14. The results interpretation of thermogasdynamic studies of vertical gas wells incomplete in terms of the reservoir penetration degree
Directory of Open Access Journals (Sweden)
M.N. Shamsiev
2018-03-01
A method is proposed for interpreting thermogasdynamic studies of vertical gas wells that are incomplete in terms of the degree of reservoir penetration, on the basis of inverse problem theory. The inverse task aims to determine the reservoir parameters for non-isothermal filtration of a real gas to a vertical well in an anisotropic reservoir. In this case, the values of pressure and temperature at the well bottom, recorded by downhole instruments, are assumed to be known. Solving the inverse task reduces to minimizing a functional; the iterative sequence for minimizing the functional is based on the Levenberg-Marquardt method. The convergence and stability of the iterative process for various input information have been studied on specific examples. The effect of reservoir anisotropy on the pressure and temperature changes at the bottom of the well is studied. It is shown that if the reservoir is not completely penetrated, the anisotropy of the reservoir can be estimated from the results of pressure and temperature measurements at the bottom of the well after its launch. It should be noted that when studying thermodynamic processes in the vicinity of a well that penetrates thick layers, it is necessary to take into account not only the heat exchange of the reservoir with the surrounding rocks, but also the geothermal temperature gradient.
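The Levenberg-Marquardt iteration named above can be illustrated on a toy two-parameter least-squares problem. The model y = a·exp(−b·t), the data, and the function names below are invented; only the damping scheme (interpolating between Gauss-Newton and gradient descent by adapting λ) reflects the method cited in the abstract, not the paper's actual reservoir functional.

```python
import math

# Minimal Levenberg-Marquardt sketch for fitting y = a*exp(-b*t).
# The 2x2 normal equations (J^T J + lam*I) delta = -J^T r are solved
# in closed form; lam shrinks on success and grows on failure.
def lm_fit(ts, ys, a, b, lam=1e-3, iters=50):
    def residuals(a, b):
        return [y - a * math.exp(-b * t) for t, y in zip(ts, ys)]

    def cost(a, b):
        return sum(r * r for r in residuals(a, b))

    for _ in range(iters):
        r = residuals(a, b)
        # Jacobian of the residuals with respect to (a, b)
        J = [(-math.exp(-b * t), a * t * math.exp(-b * t)) for t in ts]
        g11 = sum(j[0] * j[0] for j in J) + lam
        g12 = sum(j[0] * j[1] for j in J)
        g22 = sum(j[1] * j[1] for j in J) + lam
        b1 = -sum(j[0] * ri for j, ri in zip(J, r))
        b2 = -sum(j[1] * ri for j, ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        da = (b1 * g22 - b2 * g12) / det
        db = (g11 * b2 - g12 * b1) / det
        if cost(a + da, b + db) < cost(a, b):
            a, b = a + da, b + db
            lam *= 0.5   # success: move toward Gauss-Newton
        else:
            lam *= 10.0  # failure: move toward gradient descent
    return a, b

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(-1.3 * t) for t in ts]  # noise-free target a=2, b=1.3
a, b = lm_fit(ts, ys, a=1.0, b=0.5)
```

On this noise-free problem the iteration converges from the rough starting guess (1.0, 0.5) to the generating parameters; in the paper's setting the residuals would instead compare simulated and measured bottomhole pressure and temperature.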
15. Results and interpretation of spectral indices measurements made with AQUILON; Resultats et interpretation de mesures d'indices de spectre dans aquilon
Energy Technology Data Exchange (ETDEWEB)
Frichet, J P; Mougey, J N; Naudet, R; Taste, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1965-07-01
This report deals with a set of spectral indices measurements made in the heavy water reactor Aquilon on lattices constituted by massive fuel elements of 29.2 mm diameter. The fuel elements were made either of natural uranium, of slightly depleted or slightly enriched uranium, or of a uranium-plutonium alloy. The measurements were carried out for various lattice pitches (square pitch from 110 to 210 mm) and in certain cases at various temperatures (from 20 to 80 deg. C). The results are compared to calculated values obtained by applying the latest advances of the thermalization theory developed at Saclay to moderation by heavy water. (authors)
16. Secondary School Results for the Fourth NAEP Mathematics Assessment: Discrete Mathematics, Data Organization and Interpretation, Measurement, Number and Operations.
Science.gov (United States)
Brown, Catherine A.; And Others
1988-01-01
Suggests that secondary school students seem to have reasonably good procedural knowledge in areas of mathematics as rational numbers, probability, measurement, and data organization and interpretation. It appears, however, that students are lacking the conceptual knowledge enabling them to successfully do the assessment items on applications,…
17. Density fractionation of forest soils: methodological questions and interpretation of incubation results and turnover time in an ecosystem context
Science.gov (United States)
Susan E. Crow; Christopher W. Swanston; Kate Lajtha; J. Renee Brooks; Heath Keirstead
2007-01-01
Soil organic matter (SOM) is often separated by physical means to simplify a complex matrix into discrete fractions. A frequent approach to isolating two or more fractions is based on differing particle densities and uses a high density liquid such as sodium polytungstate (SPT). Soil density fractions are often interpreted as organic matter pools with different carbon...
18. Modal abstractions of concurrent behavior
DEFF Research Database (Denmark)
Nielson, Flemming; Nanz, Sebastian; Nielson, Hanne Riis
2011-01-01
We present an effective algorithm for the automatic construction of finite modal transition systems as abstractions of potentially infinite concurrent processes. Modal transition systems are recognized as valuable abstractions for model checking because they allow for the validation as well as refutation of safety and liveness properties. However, the algorithmic construction of finite abstractions from potentially infinite concurrent processes is a missing link that prevents their more widespread usage for model checking of concurrent systems. Our algorithm is a worklist algorithm using concepts from abstract interpretation and operating upon mappings from sets to intervals in order to express simultaneous over- and underapproximations of the multisets of process actions available in a particular state. We obtain a finite abstraction that is 3-valued in both states and transitions...
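The interval idea mentioned above (mapping each action to an interval bounding how many occurrences may be available) can be sketched generically. The join and widening operators below are standard interval abstract-domain operations, not the authors' actual worklist algorithm; all names and numbers are illustrative.

```python
# Generic interval abstract-domain sketch: each process action maps to a
# pair (lo, hi) bounding its occurrence count in a multiset of actions.
INF = float("inf")

def join(a, b):
    """Least upper bound of two intervals: over-approximates both."""
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Widening: an unstable bound jumps to its extreme so that a
    worklist iteration terminates. Counts are non-negative, so the
    lower bound drops to 0 rather than -infinity."""
    lo = a[0] if a[0] <= b[0] else 0
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

# Invented example: current abstract state vs. a newly computed update.
state  = {"send": (1, 1), "recv": (0, 2)}
update = {"send": (0, 3), "recv": (0, 2)}
joined  = {k: join(state[k], update[k]) for k in state}
widened = {k: widen(state[k], update[k]) for k in state}
```

In a worklist fixpoint computation, `join` accumulates information at each program point and `widen` replaces it once a bound keeps growing, trading precision for guaranteed termination.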
19. Advance Organizers: Concrete Versus Abstract.
Science.gov (United States)
Corkill, Alice J.; And Others
1988-01-01
Two experiments examined the relative effects of concrete and abstract advance organizers on students' memory for subsequent prose. Results of the experiments are discussed in terms of the memorability, familiarity, and visualizability of concrete and abstract verbal materials. (JD)
20. Abstract Objects of Verbs
DEFF Research Database (Denmark)
Robering, Klaus
2014-01-01
Verbs do often take arguments of quite different types. In an orthodox type-theoretic framework this results in an extreme polysemy of many verbs. In this article, it is shown that this unwanted consequence can be avoided when a theory of "abstract objects" is adopted according to which these objects represent non-objectual entities in contexts from which they are excluded by type restrictions. Thus these objects are "abstract" in a functional rather than in an ontological sense: they function as representatives of other entities but they are otherwise quite normal objects. Three examples...
1. Interpreting conjunctions.
Science.gov (United States)
Bott, Lewis; Frisson, Steven; Murphy, Gregory L
2009-04-01
The interpretation generated from a sentence of the form P and Q can often be different to that generated by Q and P, despite the fact that and has a symmetric truth-conditional meaning. We experimentally investigated to what extent this difference in meaning is due to the connective and and to what extent it is due to order of mention of the events in the sentence. In three experiments, we collected interpretations of sentences in which we varied the presence of the conjunction, the order of mention of the events, and the type of relation holding between the events (temporally vs. causally related events). The results indicated that the effect of using a conjunction was dependent on the discourse relation between the events. Our findings contradict a narrative marker theory of and, but provide partial support for a single-unit theory derived from Carston (2002). The results are discussed in terms of conjunction processing and implicatures of temporal order.
2. Abstract Objects of Verbs
DEFF Research Database (Denmark)
2014-01-01
Verbs do often take arguments of quite different types. In an orthodox type-theoretic framework this results in an extreme polysemy of many verbs. In this article, it is shown that this unwanted consequence can be avoided when a theory of "abstract objects" is adopted according to which...... these objects represent non-objectual entities in contexts from which they are excluded by type restrictions. Thus these objects are "abstract'' in a functional rather than in an ontological sense: they function as representatives of other entities but they are otherwise quite normal objects. Three examples...
Science.gov (United States)
Clement, B.; Barrett, A.
2001-01-01
This paper describes a way to schedule high-level activities before distributing them across multiple rovers in order to coordinate the resultant use of shared resources regardless of how each rover decides how to perform its activities. We present an algorithm for summarizing the metric resource requirements of an abstract activity based on the resource usages of its potential refinements.
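The summarization idea in this abstract can be sketched as interval arithmetic over resource bounds. This is an illustrative sketch, not the authors' algorithm: the function names, the consumable-resource assumption, and the numbers are invented. An abstract activity with alternative refinements is bounded by the min/max over the alternatives, and sequential sub-activities of a consumable resource add up.

```python
def summarize_or(bounds):
    """Alternative refinements: the summary interval must cover every choice."""
    los, his = zip(*bounds)
    return (min(los), max(his))

def summarize_seq(bounds):
    """Sequential use of a consumable resource: usages add up."""
    los, his = zip(*bounds)
    return (sum(los), sum(his))

# hypothetical battery-energy bounds (lo, hi) for primitive activities
drive = (2.0, 3.0)   # long vs short route
dig   = (1.0, 1.0)
image = (0.5, 1.0)

# "collect sample" refines to either drive;dig or drive;image
alt1 = summarize_seq([drive, dig])    # (3.0, 4.0)
alt2 = summarize_seq([drive, image])  # (2.5, 4.0)
print(summarize_or([alt1, alt2]))     # (2.5, 4.0)
```

A scheduler can then reason with the summary (2.5, 4.0) before committing to either refinement.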
4. Circularity and Lambda Abstraction
DEFF Research Database (Denmark)
Danvy, Olivier; Thiemann, Peter; Zerny, Ian
2013-01-01
unknowns from what is done to them, which we lambda-abstract with functions. The circular unknowns then become dead variables, which we eliminate. The result is a strict circular program à la Pettorossi. This transformation is reversible: given a strict circular program à la Pettorossi, we introduce...
5. Horn clause verification with convex polyhedral abstraction and tree automata-based refinement
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2017-01-01
In this paper we apply tree-automata techniques to refinement of abstract interpretation in Horn clause verification. We go beyond previous work on refining trace abstractions; firstly we handle tree automata rather than string automata and thereby can capture traces in any Horn clause derivations...... underlying the Horn clauses. Experiments using linear constraint problems and the abstract domain of convex polyhedra show that the refinement technique is practical and that iteration of abstract interpretation with tree automata-based refinement solves many challenging Horn clause verification problems. We...... compare the results with other state-of-the-art Horn clause verification tools....
6. Population genetic studies of the polar bear (Ursus maritimus): A summary of available data and interpretation of results
Science.gov (United States)
Scribner, Kim T.; Garner, G.W.; Amstrup, Steven C.; Cronin, M.A.; Dizon, Andrew E.; Chivers, Susan J.; Perrin, William F.
1997-01-01
A summary of existing population genetics literature is presented for polar bears (Ursus maritimus) and interpreted in the context of the species' life-history characteristics and regional heterogeneity in environmental regimes and movement patterns. Several nongenetic data sets including morphology, contaminant levels, geographic variation in reproductive characteristics, and the location and distribution of open-water foraging habitat suggest some degree of spatial structuring. Eleven populations are recognized by the IUCN Polar Bear Specialist Group. Few genetics studies exist for polar bears. Interpretation and generalizations of regional variation in intra- and interpopulation levels of genetic variability are confounded by the paucity of data from many regions and by the fact that no single informative genetic marker has been employed in multiple regions. Early allozyme studies revealed comparatively low levels of genetic variability and no compelling evidence of spatial structuring. Studies employing mitochondrial DNA (mtDNA) also found low levels of genetic variation, a lack of phylogenetic structure, and no significant evidence for spatial variation in haplotype frequency. In contrast, microsatellite variable number of tandem repeat (VNTR) loci have revealed significant heterogeneity in allele frequency among populations in the Canadian Arctic. These regions are characterized by archipelagic patterns of sea-ice movements. Further studies using highly polymorphic loci are needed in regions characterized by greater polar bear dependency on pelagic sea-ice movements and in regions for which no data currently exist (i.e., Laptev and Novaya Zemlya/Franz Josef).
7. Foundations and interpretation of quantum mechanics in the light of a critical-historical analysis of the problems and a synthesis of the results
CERN Document Server
Auletta, Gennaro
2000-01-01
The aim of this book is twofold: to provide a comprehensive account of the foundations of the theory and to outline a theoretical and philosophical interpretation suggested by the results of the last twenty years. There is a need to provide an account of the foundations of the theory because recent experience has largely confirmed the theory and offered a wealth of new discoveries and possibilities. On the other hand, the following results have generated a new basis for discussing the problem of the interpretation: the new developments in measurement theory; the experimental generation of “
8. A collaborative European exercise on mRNA-based body fluid/skin typing and interpretation of DNA and RNA results
DEFF Research Database (Denmark)
van den Berge, M; Carracedo, A; Gomes, I
2014-01-01
The European Forensic Genetics Network of Excellence (EUROFORGEN-NoE) undertook a collaborative project on mRNA-based body fluid/skin typing and the interpretation of the resulting RNA and DNA data. Although both body fluids and skin are composed of a variety of cell types with different function...
9. Seismic Consequence Abstraction
International Nuclear Information System (INIS)
Gross, M.
2004-01-01
The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274])
10. Seismic Consequence Abstraction
Energy Technology Data Exchange (ETDEWEB)
M. Gross
2004-10-25
The primary purpose of this model report is to develop abstractions for the response of engineered barrier system (EBS) components to seismic hazards at a geologic repository at Yucca Mountain, Nevada, and to define the methodology for using these abstractions in a seismic scenario class for the Total System Performance Assessment - License Application (TSPA-LA). A secondary purpose of this model report is to provide information for criticality studies related to seismic hazards. The seismic hazards addressed herein are vibratory ground motion, fault displacement, and rockfall due to ground motion. The EBS components are the drip shield, the waste package, and the fuel cladding. The requirements for development of the abstractions and the associated algorithms for the seismic scenario class are defined in ''Technical Work Plan For: Regulatory Integration Modeling of Drift Degradation, Waste Package and Drip Shield Vibratory Motion and Seismic Consequences'' (BSC 2004 [DIRS 171520]). The development of these abstractions will provide a more complete representation of flow into and transport from the EBS under disruptive events. The results from this development will also address portions of integrated subissue ENG2, Mechanical Disruption of Engineered Barriers, including the acceptance criteria for this subissue defined in Section 2.2.1.3.2.3 of the ''Yucca Mountain Review Plan, Final Report'' (NRC 2003 [DIRS 163274]).
11. From Abstract Art to Abstracted Artists
Directory of Open Access Journals (Sweden)
Romi Mikulinsky
2016-11-01
What lineage connects early abstract films and machine-generated YouTube videos? Hans Richter’s famous piece Rhythmus 21 is considered to be the first abstract film in the experimental tradition. The Webdriver Torso YouTube channel is composed of hundreds of thousands of machine-generated test patterns designed to check frequency signals on YouTube. This article discusses geometric abstraction vis-à-vis new vision, conceptual art and algorithmic art. It argues that the Webdriver Torso is an artistic marvel indicative of a form we call mathematical abstraction, which is art performed by computers and, quite possibly, for computers.
12. EBS Radionuclide Transport Abstraction
International Nuclear Information System (INIS)
Schreiner, R.
2001-01-01
The purpose of this work is to develop the Engineered Barrier System (EBS) radionuclide transport abstraction model, as directed by a written development plan (CRWMS M and O 1999a). This abstraction is the conceptual model that will be used to determine the rate of release of radionuclides from the EBS to the unsaturated zone (UZ) in the total system performance assessment-license application (TSPA-LA). In particular, this model will be used to quantify the time-dependent radionuclide releases from a failed waste package (WP) and their subsequent transport through the EBS to the emplacement drift wall/UZ interface. The development of this conceptual model will allow Performance Assessment Operations (PAO) and its Engineered Barrier Performance Department to provide a more detailed and complete EBS flow and transport abstraction. The results from this conceptual model will allow PAO to address portions of the key technical issues (KTIs) presented in three NRC Issue Resolution Status Reports (IRSRs): (1) the Evolution of the Near-Field Environment (ENFE), Revision 2 (NRC 1999a), (2) the Container Life and Source Term (CLST), Revision 2 (NRC 1999b), and (3) the Thermal Effects on Flow (TEF), Revision 1 (NRC 1998). The conceptual model for flow and transport in the EBS will be referred to as the ''EBS RT Abstraction'' in this analysis/modeling report (AMR). The scope of this abstraction and report is limited to flow and transport processes. More specifically, this AMR does not discuss elements of the TSPA-SR and TSPA-LA that relate to the EBS but are discussed in other AMRs. These elements include corrosion processes, radionuclide solubility limits, waste form dissolution rates and concentrations of colloidal particles that are generally represented as boundary conditions or input parameters for the EBS RT Abstraction. In effect, this AMR provides the algorithms for transporting radionuclides using the flow geometry and radionuclide concentrations determined by other
13. Dosimetric Aspects of Personnel Skin Contamination by Radionuclides - Estimate of a Skin Dose, Monitoring and Interpretation of Results
International Nuclear Information System (INIS)
Husak, V.; Kleinbauer, K.
2001-01-01
On the basis of a critical comparison of literature data, tables are compiled of beta and gamma dose rates in mSv·h⁻¹ per kBq·cm⁻² to the basal layer of the skin at 0.07 mm depth from contamination by 75 radionuclides (unsealed sources); radioactive substances are assumed to reside on the skin surface. The residence time needed for the estimate of the skin dose is calculated assuming that a residual activity per unit area of any radionuclide on the skin, which could not be removed by repeated careful decontamination, is eliminated with a biological half-life of 116 h as a consequence of the natural sloughing off of the skin. Radionuclides are divided into five groups according to the dose estimate in mSv per kBq·cm⁻²: ≥250 (e.g. ³²P, ⁸⁹Sr, ¹³⁷Cs/¹³⁷ᵐBa), 100-250 (e.g. ⁹⁰Y, ¹³¹I, ¹⁸⁶Re), 10-100 (e.g. ³⁵S, ⁶⁷Ga, ²⁰⁰Tl), 1-10 (e.g. ¹⁸F, ⁵¹Cr, ⁹⁹ᵐTc), ≤1 (e.g. ⁶³Ni, ¹⁴⁴Pr, ²³⁸U). Where possible, doses can be determined more precisely by measuring the effective half-life of the residual activity on the contaminated area. Our dose estimates are approximately valid on the condition that, after decontamination, residual activity persists predominantly in the superficial layers of the epidermis. This and further uncertainties connected with the dose assessment are discussed. Our tables can help to determine easily rough values of doses to personnel in contamination incidents and to interpret them in relation to regulatory derived limits. This work was supported by the State Office for Nuclear Safety in Prague. (author)
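The dose estimate from a residual activity eliminated with a 116 h half-life reduces to integrating an exponential decay; a minimal sketch (the dose-rate coefficient here is hypothetical, not a value from the compiled tables):

```python
import math

def skin_dose_mSv(dose_rate_coeff, residual_kBq_per_cm2, half_life_h=116.0):
    """Committed skin dose from residual contamination that is eliminated
    exponentially by natural sloughing off of the skin.
    dose_rate_coeff: mSv/h per kBq/cm^2 (from the compiled tables)."""
    lam = math.log(2) / half_life_h                # elimination constant, 1/h
    # integral of A0 * exp(-lam * t) dt over 0..infinity equals A0 / lam
    return dose_rate_coeff * residual_kBq_per_cm2 / lam

# hypothetical coefficient (1.5 mSv/h per kBq/cm^2) and residual activity
print(round(skin_dose_mSv(1.5, 0.2), 1))  # about 50 mSv
```

Replacing the default 116 h with a measured effective half-life gives the more precise estimate the abstract mentions.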
14. Medico-economic evaluation of healthcare products. Methodology for defining a significant impact on French health insurance costs and selection of benchmarks for interpreting results.
Science.gov (United States)
Dervaux, Benoît; Baseilhac, Eric; Fagon, Jean-Yves; Biot, Claire; Blachier, Corinne; Braun, Eric; Debroucker, Frédérique; Detournay, Bruno; Ferretti, Carine; Granger, Muriel; Jouan-Flahault, Chrystel; Lussier, Marie-Dominique; Meyer, Arlette; Muller, Sophie; Pigeon, Martine; De Sahb, Rima; Sannié, Thomas; Sapède, Claudine; Vray, Muriel
2014-01-01
Decree No. 2012-1116 of 2 October 2012 on medico-economic assignments of the French National Authority for Health (Haute autorité de santé, HAS) significantly alters the conditions for accessing the health products market in France. This paper presents a theoretical framework for interpreting the results of the economic evaluation of health technologies and summarises the facts available in France for developing benchmarks that will be used to interpret incremental cost-effectiveness ratios. This literature review shows that it is difficult to determine a threshold value, but it is also difficult to interpret the incremental cost-effectiveness ratio (ICER) results without a threshold value. In this context, round table participants favour a pragmatic approach based on "benchmarks" as opposed to a threshold value, based on an interpretative and normative perspective, i.e. benchmarks that can change over time based on feedback. © 2014 Société Française de Pharmacologie et de Thérapeutique.
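For reference, the ICER that these benchmarks are meant to interpret is a simple ratio of incremental cost to incremental effect; a minimal sketch with hypothetical costs and effects:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (e.g. euros per QALY gained)."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical: the new therapy costs 12000 more and yields 0.5 extra QALYs
print(icer(30000.0, 18000.0, 2.0, 1.5))  # 24000.0 per QALY gained
```

The round table's point is that this number is then compared against evolving benchmarks rather than a single hard threshold.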
15. Programme and abstracts
International Nuclear Information System (INIS)
1975-01-01
Abstracts of 25 papers presented at the congress are given. The abstracts cover various topics including radiotherapy, radiopharmaceuticals, radioimmunoassay, health physics, radiation protection and nuclear medicine
16. Ghana Science Abstracts
International Nuclear Information System (INIS)
Entsua-Mensah, C.
2004-01-01
This issue of the Ghana Science Abstracts combines in one publication all the country's bibliographic output in science and technology. The objective is to provide a quick reference source to facilitate the work of information professionals, research scientists, lecturers and policy makers. It is meant to give users an idea of the depth, scope and results of the studies and projects carried out. The scope and coverage comprise research outputs, conference proceedings and periodical articles published in Ghana. It does not capture those that were published outside Ghana. Abstracts reported have been grouped under the following subject areas: agriculture, biochemistry, biodiversity conservation, biological sciences, biotechnology, chemistry, dentistry, engineering, environmental management, forestry, information management, mathematics, medicine, physics, nuclear science, pharmacy, renewable energy and science education
17. Impact of acquisition and interpretation on total inter-observer variability in echocardiography: results from the quality assurance program of the STAAB cohort study.
Science.gov (United States)
Morbach, Caroline; Gelbrich, Götz; Breunig, Margret; Tiffe, Theresa; Wagner, Martin; Heuschmann, Peter U; Störk, Stefan
2018-02-14
Variability related to image acquisition and interpretation is an important issue of echocardiography in clinical trials. Nevertheless, there is no broadly accepted standard method for quality assessment of echocardiography in clinical research reports. We present analyses based on the echocardiography quality-assurance program of the ongoing STAAB cohort study (characteristics and course of heart failure stages A-B and determinants of progression). In 43 healthy individuals (mean age 50 ± 14 years; 18 females), duplicate echocardiography scans were acquired and mutually interpreted by one of three trained sonographers and an EACVI certified physician, respectively. Acquisition (AcV), interpretation (InV), and inter-observer variability (IOV; i.e., variability between the acquisition-interpretation sequences of two different observers) were determined for selected M-mode, B-mode, and Doppler parameters. We calculated Bland-Altman upper 95% limits of absolute differences, implying that 95% of measurement differences were smaller than or equal to the given value: e.g. LV end-diastolic volume (mL): 25.0, 25.0, 27.9; septal e' velocity (cm/s): 3.03, 1.25, 3.58. Further, 90, 85, and 80% upper limits of absolute differences were determined for the respective parameters. Both acquisition and interpretation independently and sizably contributed to IOV. As such, separate assessment of AcV and InV is likely to aid in echocardiography training and quality-assurance. Our results further suggest routinely determining IOV in clinical trials as a comprehensive measure of imaging quality. The derived 95, 90, 85, and 80% upper limits of absolute differences are suggested as reproducibility targets for future studies, thus contributing to the international efforts of standardization in quality-assurance.
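The "upper 95% limit of absolute differences" can be read as an empirical order statistic of the paired differences; a minimal sketch (not necessarily the study's exact estimator), using integer arithmetic for the index:

```python
def upper_limit_abs_diff(pairs, coverage_pct=95):
    """Smallest observed value that at least coverage_pct percent of the
    absolute paired differences do not exceed (empirical order statistic)."""
    diffs = sorted(abs(a - b) for a, b in pairs)
    k = -(-coverage_pct * len(diffs) // 100)  # integer ceil(n * pct / 100)
    return diffs[k - 1]

# duplicate measurements of the same subjects (hypothetical values):
# absolute differences come out as 1, 2, ..., 20
pairs = [(float(i), 0.0) for i in range(1, 21)]
print(upper_limit_abs_diff(pairs))  # 19.0: 95% of the differences are <= 19
```

Running it with coverage_pct of 90, 85, and 80 reproduces the study's family of limits.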
18. The essential guide to effect sizes: statistical power, meta-analysis, and the interpretation of research results
National Research Council Canada - National Science Library
Ellis, Paul D
2010-01-01
.... Using a class-tested approach that includes numerous examples and step-by-step exercises, it introduces and explains three of the most important issues relating to the practical significance of research results...
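As a concrete instance of the effect sizes the book discusses, Cohen's d with a pooled standard deviation can be computed as follows (the data are invented for illustration):

```python
import math

def cohens_d(x, y):
    """Cohen's d with the pooled (equal-variance) standard deviation."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

treatment = [5.0, 6.0, 7.0, 8.0]
control   = [4.0, 5.0, 6.0, 7.0]
print(round(cohens_d(treatment, control), 3))  # 0.775
```

A d of about 0.775 sits between Cohen's conventional "medium" (0.5) and "large" (0.8) labels, which is exactly the kind of practical-significance judgment the book addresses.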
19. EBS Radionuclide Transport Abstraction
International Nuclear Information System (INIS)
J. Prouty
2006-01-01
The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport
20. EBS Radionuclide Transport Abstraction
Energy Technology Data Exchange (ETDEWEB)
J. Prouty
2006-07-14
The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment (TSPA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers advective transport and diffusive transport
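The two-stage flux splitting described in these abstracts (seepage split first at the drip shield, then at the waste package) can be sketched as below; the function name and breach fractions are hypothetical illustrations, not the report's validated, experimentally derived algorithms:

```python
def split_flux(seepage, ds_breach_frac, wp_breach_frac):
    """Two-stage split: only water passing through drip shield breaches
    can reach waste package breaches; the rest is diverted around."""
    through_ds = seepage * ds_breach_frac    # passes drip shield breaches
    diverted_ds = seepage - through_ds       # diverted by the drip shield
    into_wp = through_ds * wp_breach_frac    # passes waste package breaches
    return into_wp, diverted_ds

into_wp, diverted = split_flux(100.0, 0.5, 0.25)
print(into_wp, diverted)  # 12.5 50.0
```

In the igneous scenario class neither barrier survives, which corresponds to setting both breach fractions to 1 so the submodel drops out.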
1. Proposal of Michelson-Morley experiment via single photon interferometer: Interpretation of Michelson-Morley experimental results using de Broglie-Bohm picture
OpenAIRE
Sato, Masanori
2004-01-01
The Michelson-Morley experiment is considered via a single photon interferometer and we propose the interpretation of the Michelson-Morley experimental results using de Broglie-Bohm picture. We point out that the Michelson-Morley experiment revealed the interference of photons, however, it did not reveal the photons simultaneous arrival at the beam splitter. According to the de Broglie-Bohm picture, the quantum potential nonlocally determines the interference of photons. The interference of t...
2. DEVELOPMENT OF FUZZY NEURAL NETWORK FOR THE INTERPRETATION OF THE RESULTS OF DISSOLVED IN OIL GASES ANALYSIS
Directory of Open Access Journals (Sweden)
V.Е. Bondarenko
2017-04-01
Purpose. The purpose of this paper is the diagnosis of power transformers on the basis of the results of the analysis of gases dissolved in oil. Methodology. To solve this problem, a fuzzy neural network has been developed, trained and tested. Results. The network's ability was analyzed to recognize developing defects at an early stage of their development, or a growth of gas concentrations in healthy transformers occurring after emergency actions on the part of the electric network. It was established that the greatest difficulty in making a diagnosis by the criterion of boundary gas concentrations arises for DGA results obtained from healthy transformers in which the concentrations of gases dissolved in oil exceed their limit values, as well as from defective transformers at an early stage of defect development. The analysis showed that the accuracy of recognition of fuzzy neural networks has its limitations, which are determined by the peculiarities of the DGA method, the diagnostic features used and the selected decision rule. Originality. Unlike similar studies, in training the neural network the membership functions of linguistic terms were chosen taking into account the density distributions of gas concentrations in transformers with various diagnoses, which makes it possible to account for the particular gas content of oils typical of a leaky transformer and for the operating conditions of the equipment. Practical value. The developed fuzzy neural network makes it possible to diagnose power transformers on the basis of the results of the analysis of gases dissolved in oil with a high level of reliability.
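A sketch of the membership-function idea behind such a network: triangular linguistic terms over a gas concentration. The term names and ranges here are invented for illustration and are not the paper's trained, distribution-derived functions:

```python
def triangular(x, a, b, c):
    """Triangular membership: 0 at a, rising to 1 at b, back to 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# invented linguistic terms for a hydrogen concentration in ppm
terms = {
    "low":    (0.0, 50.0, 150.0),
    "medium": (100.0, 250.0, 400.0),
    "high":   (300.0, 500.0, 700.0),
}

h2 = 180.0
degrees = {name: triangular(h2, *abc) for name, abc in terms.items()}
print(degrees)  # only "medium" fires, at about 0.53
```

The paper's contribution is precisely that these functions are fitted to observed concentration densities per diagnosis rather than chosen by hand as here.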
3. Result interpretation of experimental calibration for milk citric acid determination via infra-red spectroscopy (MIR-FT)
Directory of Open Access Journals (Sweden)
Oto Hanuš
2009-01-01
Citric acid (KC) in milk is an indicator of cow energy metabolism, and milk laboratories are setting up KC determination. One suitable method is infra-red analysis (MIR-FT). The goal was to develop a relevant method of reference sample preparation for the calibration of MIR-FT instruments (indirect method; Lactoscope FTIR and MilkoScan FT 6000). A photometric method (c; 428 nm) was used as reference. KC was added (n = 3) into some reference milk samples (n = 10, bulk milk). The mean value was 9.220 ± 3.094 mmol·l⁻¹, with a variation range from 6.206 to 15.975 mmol·l⁻¹. Recovery of c was from 100.8 to 120.2%. Correlations between c and MIR-FT were from 0.979 to 0.992 (P < 0.001). These were lower in the set of native milk samples (n = 7), from 0.751 (Lactoscope FTIR; P < 0.05) to 0.947 (MilkoScan FT 6000; P < 0.001), in comparison to the original values from 0.981 to 0.992 (n = 10; P < 0.001). Correlations between calibrated MIR-FT instruments were from 0.958 to 1.0 (P < 0.001). Average recovery for instruments (n = 12) was 101.6 ± 18.1%. The mean differences between the c method and MIR-FT after calibration (n = 4) moved from −0.001 across zero to 0.037%. The standard deviation of differences was from 0.0074 to 0.0187% for MilkoScan FT 6000 and from 0.0105 to 0.0117% for Lactoscope FTIR. The relative variability of differences (MIR-F (filter technology) and FT) for the major components fat (T), proteins (B) and lactose (L) in total and the minor components KC and free fatty acids (VMK) was estimated at 1.0, 7.2 and 34.4%. The KC result is thus inferior to T, B and L but superior to VMK. Autocorrelation of results (0.042; P > 0.05) demonstrated the independence of consecutive measurements. The milk preservation effect amounted to 0.2323 mmol·l⁻¹ (P < 0.001) with bronopol and 0.0339 mmol·l⁻¹ (P > 0.05) with dichromate. It was (3.0 and 0.44%) practically negligible, redeemable via relevant calibration. The results of proficiency testing in the post-calibration period and evaluation of double
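The two quantities used throughout this calibration study, spike recovery and the correlation between the reference and MIR-FT methods, can be computed as in this sketch (the paired values are invented for illustration, not the study's data):

```python
import math

def recovery_pct(measured, added):
    """Spike recovery: measured amount relative to the amount added."""
    return 100.0 * measured / added

def pearson_r(xs, ys):
    """Pearson correlation between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# invented paired results: reference photometry (c) vs MIR-FT, mmol/l
ref = [6.2, 7.5, 9.1, 11.0, 14.8]
mir = [6.4, 7.3, 9.0, 11.4, 15.1]
print(round(pearson_r(ref, mir), 3))  # about 0.998
```

Recoveries near 100% and correlations near 1 are what justify accepting a calibration, as in the study's 100.8-120.2% and 0.979-0.992 figures.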
4. Elements of abstract algebra
CERN Document Server
Clark, Allan
1984-01-01
This concise, readable, college-level text treats basic abstract algebra in remarkable depth and detail. An antidote to the usual surveys of structure, the book presents group theory, Galois theory, and classical ideal theory in a framework emphasizing proof of important theorems.Chapter I (Set Theory) covers the basics of sets. Chapter II (Group Theory) is a rigorous introduction to groups. It contains all the results needed for Galois theory as well as the Sylow theorems, the Jordan-Holder theorem, and a complete treatment of the simplicity of alternating groups. Chapter III (Field Theory)
5. On the interpretation of differential scanning calorimetry results for thermoelastic martensitic transformations: Athermal versus thermally activated kinetics
International Nuclear Information System (INIS)
Van Humbeeck, J.; Planes, A.
1996-01-01
Experimentally, two distinct classes of martensitic transformations are considered: athermal and isothermal. In the former class, on cooling, at some well-defined start temperature (Ms), isolated small regions of the martensitic product begin to appear in the parent phase. The transformation at any temperature appears to be instantaneous on practical time scales, and the amount of transformed material (x) does not depend on time, i.e., it increases at each step of lowering temperature. The transition is not completed until the temperature is lowered below Mf (the martensite finish temperature). The transformation temperatures are determined only by chemical (composition and degree of order) and microstructural factors. The external controlling parameter (T or applied stress) determines the free energy difference between the high and the low temperature phases, which provides the driving force for the transition. In the development of athermal martensite, activation kinetics is secondary. Athermal martensite, as observed in the well known shape memory alloys Cu-Zn-Al, Cu-Al-Ni and Ni-Ti, cannot be attributed to a thermally activated mechanism for which kinetics are generally described by the Arrhenius rate equation. However, the latter has been applied by Lipe and Morris to results for the martensitic transformation of Cu-Al-Ni-B-Mn obtained by conventional differential scanning calorimetry (DSC). It is the concern of the authors of this letter to point out the incongruences arising from the analysis of calorimetric results, corresponding to forward and reverse thermoelastic martensitic transformations, in terms of standard kinetic analysis based on the Arrhenius rate equation
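The contrast the authors draw can be made concrete: an athermal transformed fraction depends only on undercooling below Ms (here via the Koistinen-Marburger form, with a coefficient typical for steels and used purely for illustration), whereas a thermally activated process has an Arrhenius rate constant that makes the transformed amount depend on time at temperature:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(A, Ea, T):
    """Thermally activated rate constant: depends on T and drives
    time-dependent (isothermal) kinetics."""
    return A * math.exp(-Ea / (R * T))

def athermal_fraction(T, Ms, b=0.011):
    """Koistinen-Marburger form: transformed fraction depends only on
    undercooling below Ms, not on holding time (b is typical for steels)."""
    return 0.0 if T >= Ms else 1.0 - math.exp(-b * (Ms - T))

print(round(athermal_fraction(250.0, 300.0), 3))  # 0.423 at 50 K undercooling
# the fraction is unchanged no matter how long the sample is held at 250 K
```

Applying the Arrhenius form to a transformation of the first kind is exactly the incongruence the letter objects to.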
6. An overview of gamma-hydroxybutyric acid: pharmacodynamics, pharmacokinetics, toxic effects, addiction, analytical methods, and interpretation of results.
Science.gov (United States)
Andresen, H; Aydin, B E; Mueller, A; Iwersen-Bergmann, S
2011-09-01
Abuse of gamma-hydroxybutyric acid (GHB) has been known since the early 1990s, but is not as widespread as the consumption of other illegal drugs. However, the number of severe intoxications with fatal outcomes is comparatively high; not least of which is brought about by the consumption of the currently legal precursor substances gamma-butyrolactone (GBL) and 1,4-butanediol (1,4-BD). Contrary to previous assumptions, addiction to GHB or its analogues can occur, with severe symptoms of withdrawal. Moreover, GHB can be used for drug-facilitated sexual assaults. Its pharmacological effects are generated mainly by interaction with both GABA(B) and GHB receptors, as well as by its influence on other transmitter systems in the human brain. Numerous analytical methods for determining GHB using chromatographic techniques were published in recent years, and an enzymatic screening method was established. However, the short window of GHB detection in blood or urine due to its rapid metabolism is a challenge. Furthermore, despite several studies addressing this problem, evaluation of analytical results can be difficult: GHB is a metabolite of GABA (gamma-aminobutyric acid), so a differentiation between endogenous and exogenous concentrations has to be made. Apart from this, in samples with a longer storage interval and especially in postmortem specimens, higher levels can be measured due to GHB generation during the postmortem interval or storage time. Copyright © 2011 John Wiley & Sons, Ltd.
7. Making the best use of our previous results as a clue for interpreting kinetics of scintigraphic agents
Directory of Open Access Journals (Sweden)
Tsuyoshi Sato
2011-08-01
Up to now, we have performed scintigraphy with 201-thallium chloride (201-TlCl) and 99m-Tc-hexakis-2-methoxy-isobutyl-isonitrile (99m-Tc-MIBI) for malignant tumors, and lymphoscintigraphy with 99m-Tc-rhenium-colloid (99m-Tc-Re) and 99m-Tc-human-serum-albumin-diethylene-triamine-penta-acetic-acid (99m-Tc-HSA-D) for lymph node metastasis. In this article, we re-evaluated scintigraphic images retrospectively in the hope that the results might provide a clue, however small, for dentists trying to improve the accuracy of diagnosis of malignant tumors. From scintigraphy, we obtained the tumor retention index as a factor to estimate the uptake of radioactive agents in tumor cells. Moreover, we assessed the transport proteins Na+/K+-ATPase and permeability glycoprotein (P-gp) expressed on the cell membrane, which might regulate the kinetics of radioactive agents. The tumor retention index, transport-protein expression, and the histopathologic findings of the tumors correlated relatively well with one another. The tumor retention index clearly differed between malignant and benign tumors. The transport proteins showed distinct expression in accordance with the malignancy of the tumor, and the uptake clearly depended upon the expression of the transport proteins. Moreover, lymph node metastasis was detected well by lymphoscintigraphy with 99m-Tc-Re and 99m-Tc-HSA-D.
8. Dairy herd mastitis and reproduction: using simulation to aid interpretation of results from discrete time survival analysis.
Science.gov (United States)
Hudson, Christopher D; Bradley, Andrew J; Breen, James E; Green, Martin J
2015-04-01
Probabilistic sensitivity analysis (PSA) is a simulation-based technique for evaluating the relative importance of different inputs to a complex process model. It is commonly employed in decision analysis and for evaluation of the potential impact of uncertainty in research findings on clinical practice, but has a wide variety of other possible applications. In this example, it was used to evaluate the association between herd-level udder health and reproductive performance in dairy herds. Although several recent studies have found relatively large associations between mastitis and fertility at the level of individual inseminations or lactations, the current study demonstrated that herd-level intramammary infection status is highly unlikely to have a clinically significant impact on the overall reproductive performance of a dairy herd under typical conditions. For example, a large increase in incidence rate of clinical mastitis (from 92 to 131 cases per 100 cows per year) would be expected to increase a herd's modified FERTEX score (a cost-based measure of overall reproductive performance) by just £4.50 per cow per year. The herd's background level of submission rate (proportion of eligible cows served every 21 days) and pregnancy risk (proportion of inseminations leading to a pregnancy) correlated strongly with overall reproductive performance and explained a large proportion of the between-herd variation in performance. PSA proved to be a highly useful technique to aid understanding of results from a complex statistical model, and has great potential for a wide variety of applications within the field of veterinary science. Copyright © 2015 Elsevier Ltd. All rights reserved.
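The simulation-based logic of PSA described above can be sketched in a few lines: draw herd-level inputs from plausible distributions, push them through a herd-performance model, and rank inputs by the strength of their correlation with the output. The model, parameter ranges and weights below are illustrative assumptions for the sketch, not the study's actual FERTEX model:

```python
import random

def toy_herd_model(submission_rate, pregnancy_risk, mastitis_incidence):
    # Hypothetical performance score: reproduction terms dominate,
    # mastitis enters with a small weight (illustrative only).
    return 100.0 * submission_rate * pregnancy_risk - 0.05 * mastitis_incidence

def pearson(xs, ys):
    # Plain Pearson correlation, stdlib-only.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def psa(n=2000, seed=0):
    rng = random.Random(seed)
    inputs = {
        "submission_rate": [rng.uniform(0.4, 0.8) for _ in range(n)],
        "pregnancy_risk": [rng.uniform(0.25, 0.45) for _ in range(n)],
        "mastitis_incidence": [rng.uniform(20.0, 130.0) for _ in range(n)],
    }
    scores = [toy_herd_model(s, p, m)
              for s, p, m in zip(inputs["submission_rate"],
                                 inputs["pregnancy_risk"],
                                 inputs["mastitis_incidence"])]
    # Importance ranking: |correlation| of each sampled input with the output.
    return {name: abs(pearson(vals, scores)) for name, vals in inputs.items()}
```

With these assumed weights, the reproductive inputs dominate the mastitis term, mirroring the study's qualitative finding that submission rate and pregnancy risk explain most of the between-herd variation.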
9. Program and abstracts
International Nuclear Information System (INIS)
1975-01-01
Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Physics Education; SANCGASS; Astronomy; Plasma Physics; Physics in Industry; Applied and General Physics.
10. Program and abstracts
Energy Technology Data Exchange (ETDEWEB)
1975-01-01
Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Physics Education; SANCGASS; Astronomy; Plasma Physics; Physics in Industry; Applied and General Physics.
11. 137Cs Results and Interpretation of Cesium Soil Data on the Upper Fortymile Wash Alluvial Fan, Amargosa Valley, Nevada.
Science.gov (United States)
Harrington, C.
2004-12-01
137Cs soil profiles were used to determine erosion rates on interchannel divides of the Fortymile Wash alluvial fan over the last 50 years. Sample locations whose 137Cs profiles most resemble the reference-sample (stable surface) profiles are located on interchannel divide areas between distributary channels. These profiles are similar to the reference profiles in that they have low 137Cs values (in the range of 0.02 to 0.08 pCi/g) in the 3 to 6 cm layers. However, the surface layers (1-3 cm depth) typically have values much less than the reference samples from equivalent depths (range from 0.251 to 0.421 pCi/g). The data indicate that many of these interchannel divide areas have had part of the upper layer removed. Interchannel divide areas have the least likelihood of having been submerged during floods over the last fifty years; thus, the loss of material from these otherwise stable surfaces appears to be due to eolian processes. Erosion of an interchannel divide area with little evidence of recent water movement is most easily explained by eolian removal. Evidence for wind erosion as the predominant process on the interchannel divide areas includes the lack of new or developing stream channels and the presence of modern coppice dunes near channels on interchannel divides. The presence of nearby Big Dune and other eolian deposits provides strong support for eolian erosion and transport. The amount of material removed from the interchannel divide areas was estimated by comparing the 137Cs value of the upper 3 cm layer to the reference value and calculating the thickness of the layer that would have to be removed to obtain the lower value. Applying this method across the interchannel divide sample locations indicates that 1 to 2 cm of material has been removed from the interchannel divide surfaces in the last 50 years. This results in erosion rates that range from 0.02 to 0.04 cm/yr. These rates are similar to erosion rates
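The rate arithmetic in the final step is simple enough to state explicitly; the function below just divides the 137Cs-derived thickness estimate by the roughly 50-year fallout record (a restatement of the abstract's own numbers, not an additional model):

```python
def erosion_rate_cm_per_yr(thickness_removed_cm, interval_yr=50.0):
    # Average surface-lowering rate implied by the 137Cs-derived
    # estimate of removed-layer thickness over the fallout record.
    return thickness_removed_cm / interval_yr

# 1-2 cm removed over ~50 years gives the quoted 0.02-0.04 cm/yr range.
```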
12. Introduction to abstract algebra
CERN Document Server
Nicholson, W Keith
2012-01-01
Praise for the Third Edition ". . . an expository masterpiece of the highest didactic value that has gained additional attractivity through the various improvements . . ."-Zentralblatt MATH The Fourth Edition of Introduction to Abstract Algebra continues to provide an accessible approach to the basic structures of abstract algebra: groups, rings, and fields. The book's unique presentation helps readers advance to abstract theory by presenting concrete examples of induction, number theory, integers modulo n, and permutations before the abstract structures are defined. Readers can immediately be
13. Grounding abstractness: Abstract concepts and the activation of the mouth
Directory of Open Access Journals (Sweden)
Anna M Borghi
2016-10-01
One key issue for theories of cognition is how abstract concepts, such as freedom, are represented. According to the WAT (Words As social Tools) proposal, abstract concepts activate both sensorimotor and linguistic/social information, and their acquisition modality involves linguistic experience more than the acquisition of concrete concepts does. We report an experiment in which participants were presented with abstract and concrete definitions followed by concrete and abstract target-words. When the definition and the word matched, participants were required to press a key, either with the hand or with the mouth. Response times and accuracy were recorded. As predicted, we found that abstract definitions and abstract words yielded slower responses and more errors compared to concrete definitions and concrete words. More crucially, there was an interaction between the target-words and the effector used to respond (hand, mouth). While responses with the mouth were overall slower, the advantage of the hand over the mouth responses was more marked with concrete than with abstract concepts. The results are in keeping with grounded and embodied theories of cognition and support the WAT proposal, according to which abstract concepts evoke linguistic-social information and hence activate the mouth. The mechanisms underlying the mouth activation with abstract concepts (re-enactment of the acquisition experience, or re-explanation of the word meaning, possibly through inner talk) are discussed. To our knowledge this is the first behavioral study demonstrating with real words that the advantage of the hand over the mouth is more marked with concrete than with abstract concepts, likely because of the activation of linguistic information with abstract concepts.
14. Abstracting Concepts and Methods.
Science.gov (United States)
Borko, Harold; Bernier, Charles L.
This text provides a complete discussion of abstracts--their history, production, organization, publication--and of indexing. Instructions for abstracting are outlined, and standards and criteria for abstracting are stated. Management, automation, and personnel are discussed in terms of possible economies that can be derived from the introduction…
15. ABSTRACTION OF DRIFT SEEPAGE
International Nuclear Information System (INIS)
Wilson, Michael L.
2001-01-01
probability distributions of seepage. These are all discussed in detail in this report. In addition, the work plan calls for evaluation of effects of episodic flow and thermal-hydrologic-chemical alteration of hydrologic properties. As discussed in Section 5, these effects are not addressed in detail in this report because they can be argued to be insignificant. Effects of thermal-mechanical alteration of hydrologic properties are also not addressed in detail in this report because suitable process-model results are not available at this time. If these effects are found to be important, they should be included in the seepage abstraction in a future revision
16. The AMS {sup 14}C dating of Iron Age rice chaff ceramic temper from Ban Non Wat, Thailand: First results and its interpretation
Energy Technology Data Exchange (ETDEWEB)
Higham, Charles F.W., E-mail: charles.higham@otago.ac.n [Department of Anthropology, Otago University, Dunedin (New Zealand); Kuzmin, Yaroslav V. [Institute of Geology and Mineralogy, Siberian Branch of the Russian Academy of Sciences, Koptuyg Ave. 3, Novosibirsk 630090 (Russian Federation); Burr, G.S. [Arizona AMS Laboratory, University of Arizona, Tucson, AZ 85721 0081 (United States)
2010-04-15
Pottery tempered with rice chaff from the early Iron Age cemetery of the Ban Non Wat site, northeast Thailand, has been subjected to direct AMS {sup 14}C dating, using low-temperature combustion with oxygen as originally developed by the authors. The carbon yield (0.2-0.5%) testifies to the suitability of this pottery for dating. However, not all the results are in agreement with the expected archaeological ages and other {sup 14}C dates from the studied site and the neighboring site of Noen U-Loke. This calls for a thorough analysis and interpretation of pottery temper dates from the region.
17. Non-Invasive Prenatal Testing (NIPT) in pregnancies with trisomy 21, 18 and 13 performed in a public setting - factors of importance for correct interpretation of results
DEFF Research Database (Denmark)
Hartwig, Tanja S; Ambye, Louise; Werge, Lene
2018-01-01
OBJECTIVES: We have established an open source platform for non-invasive prenatal testing (NIPT) based on massively parallel whole-genome sequencing in a public setting. The objective of this study was to investigate factors of importance for correct interpretation of NIPT results to ensure high sensitivity and specificity. STUDY DESIGN: This investigation is a retrospective case-control study performed in a public NIPT center. The study included 108 aneuploid cases and 165 euploid controls. MPS was performed on circulating cell-free DNA in maternal blood. The pipeline included automated library...
18. [Biological markers for the status of vitamins B12 and D: the importance of some analytical aspects in relation to clinical interpretation of results].
Science.gov (United States)
Boulat, O; Rey, F; Mooser, V
2012-10-31
When vitamin B12 deficiency is expressed clinically, the diagnostic performance of total cobalamin is identical to that of holotranscobalamin II. In subclinical B12 deficiency, the two aforementioned markers perform less well, and additional analysis of a second, functional marker (methylmalonate or homocysteine) is recommended. The different analytical approaches for quantifying 25-hydroxyvitamin D, the marker of vitamin D deficiency, are not yet standardized; measurement biases of up to +/- 20% compared with the original method used to establish threshold values are still observed.
19. Objective interpretation as conforming interpretation
Directory of Open Access Journals (Sweden)
Lidka Rodak
2011-12-01
The practical discourse willingly uses the formula of "objective interpretation", with no regard to its controversial nature, which has been discussed in the literature. The main aim of the article is to investigate what "objective interpretation" could mean and how it could be understood in practical discourse, focusing on the understanding offered by judicature. The thesis of the article is that objective interpretation, identified with the textualist position, cannot be upheld and should rather be linked with conforming interpretation. What this actually implies is that it is not the virtues of certainty and predictability, which are usually associated with objectivity, but coherence that forms the foundation of the applicability of objectivity in law. What can be observed from the analyses is that both conforming interpretation and objective interpretation play the role of arguments in the interpretive discourse, arguments that provide justification that an interpretation is not arbitrary or subjective. With regard to an important part of the ideology of legal application, namely the conviction that decisions should be taken on the basis of law in order to exclude arbitrariness, objective interpretation can be read as the question of what kind of authority "supports" a certain interpretation, one that is almost never free of judicial creativity and judicial activism. One can say that objective and conforming interpretation are just further arguments used in legal discourse.
Energy Technology Data Exchange (ETDEWEB)
J.D. Schreiber
2005-08-25
The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport
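The flux-splitting idea in the flow model described above can be caricatured in a few lines. This is a deliberately simplified sketch of the concept, with made-up breach fractions as inputs; the actual TSPA-LA flux-splitting algorithms are derived and validated in the report itself:

```python
def flux_through_ebs(seepage_flux, ds_breach_fraction, wp_breach_fraction,
                     igneous=False):
    # Seepage is partly diverted by the drip shield; the remainder is
    # partly diverted by the waste package. In the igneous scenario
    # neither barrier survives, so the full seepage flux passes through.
    if igneous:
        return seepage_flux
    through_drip_shield = seepage_flux * ds_breach_fraction
    return through_drip_shield * wp_breach_fraction
```

For example, with a seepage flux of 10 units and 20% of it passing drip-shield breaches, of which 50% passes waste-package breaches, 1 unit reaches the unsaturated zone; in the igneous scenario all 10 units do.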
International Nuclear Information System (INIS)
J.D. Schreiber
2005-01-01
The purpose of this report is to develop and analyze the engineered barrier system (EBS) radionuclide transport abstraction model, consistent with Level I and Level II model validation, as identified in ''Technical Work Plan for: Near-Field Environment and Transport: Engineered Barrier System: Radionuclide Transport Abstraction Model Report Integration'' (BSC 2005 [DIRS 173617]). The EBS radionuclide transport abstraction (or EBS RT Abstraction) is the conceptual model used in the total system performance assessment for the license application (TSPA-LA) to determine the rate of radionuclide releases from the EBS to the unsaturated zone (UZ). The EBS RT Abstraction conceptual model consists of two main components: a flow model and a transport model. Both models are developed mathematically from first principles in order to show explicitly what assumptions, simplifications, and approximations are incorporated into the models used in the TSPA-LA. The flow model defines the pathways for water flow in the EBS and specifies how the flow rate is computed in each pathway. Input to this model includes the seepage flux into a drift. The seepage flux is potentially split by the drip shield, with some (or all) of the flux being diverted by the drip shield and some passing through breaches in the drip shield that might result from corrosion or seismic damage. The flux through drip shield breaches is potentially split by the waste package, with some (or all) of the flux being diverted by the waste package and some passing through waste package breaches that might result from corrosion or seismic damage. Neither the drip shield nor the waste package survives an igneous intrusion, so the flux splitting submodel is not used in the igneous scenario class. The flow model is validated in an independent model validation technical review. The drip shield and waste package flux splitting algorithms are developed and validated using experimental data. The transport model considers
2. Abstract interpretation of reactive systems : preservation of CTL*
NARCIS (Netherlands)
Dams, D.; Grumberg, O.; Gerth, R.
The advent of ever more complex reactive systems in increasingly critical areas calls for the development of automated verification techniques. Model checking is one such technique, which has proven quite successful. However, the state explosion problem remains the stumbling block in many
3. Metaphor: Bridging embodiment to abstraction.
Science.gov (United States)
Jamrozik, Anja; McQuire, Marguerite; Cardillo, Eileen R; Chatterjee, Anjan
2016-08-01
Embodied cognition accounts posit that concepts are grounded in our sensory and motor systems. An important challenge for these accounts is explaining how abstract concepts, which do not directly call upon sensory or motor information, can be informed by experience. We propose that metaphor is one important vehicle guiding the development and use of abstract concepts. Metaphors allow us to draw on concrete, familiar domains to acquire and reason about abstract concepts. Additionally, repeated metaphoric use drawing on particular aspects of concrete experience can result in the development of new abstract representations. These abstractions, which are derived from embodied experience but lack much of the sensorimotor information associated with it, can then be flexibly applied to understand new situations.
4. 2018 Congress Poster Abstracts
Science.gov (United States)
2018-02-21
Each abstract has been indexed according to the first author. Abstracts appear as they were submitted and have not undergone editing or the Oncology Nursing Forum’s review process. Only abstracts that will be presented appear here. Poster numbers are subject to change. For updated poster numbers, visit congress.ons.org or check the Congress guide. Data published in abstracts presented at the ONS 43rd Annual Congress are embargoed until the conclusion of the presentation. Coverage and/or distribution of an abstract, poster, or any of its supplemental material to or by the news media, any commercial entity, or individuals, including the authors of said abstract, is strictly prohibited until the embargo is lifted. Promotion of general topics and speakers is encouraged within these guidelines.
5. 2018 Congress Podium Abstracts
Science.gov (United States)
2018-02-21
Each abstract has been indexed according to first author. Abstracts appear as they were submitted and have not undergone editing or the Oncology Nursing Forum’s review process. Only abstracts that will be presented appear here. For Congress scheduling information, visit congress.ons.org or check the Congress guide. Data published in abstracts presented at the ONS 43rd Annual Congress are embargoed until the conclusion of the presentation. Coverage and/or distribution of an abstract, poster, or any of its supplemental material to or by the news media, any commercial entity, or individuals, including the authors of said abstract, is strictly prohibited until the embargo is lifted. Promotion of general topics and speakers is encouraged within these guidelines.
6. Compilation of Theses Abstracts
National Research Council Canada - National Science Library
2005-01-01
This publication contains unclassified/unrestricted abstracts of classified or restricted theses submitted for the degrees of Doctor of Philosophy, Master of Business Administration, Master of Science...
7. Mean corpuscular volume of control red blood cells determines the interpretation of eosin-5'-maleimide (EMA) test result in infants aged less than 6 months.
Science.gov (United States)
Ciepiela, Olga; Adamowicz-Salach, Anna; Bystrzycka, Weronika; Łukasik, Jan; Kotuła, Iwona
2015-08-01
The eosin-5'-maleimide (EMA) binding test is a flow cytometric test used to detect hereditary spherocytosis (HS). To perform the test on a patient sample, 5-6 reference samples of red blood cells are needed. Our aim was to investigate how the mean corpuscular volume (MCV) of red blood cells influences the fluorescence of bound EMA dye and how the choice of reference samples affects the test result. The EMA test was performed on peripheral blood from 404 individuals, including 31 children suffering from HS. The mean fluorescence channel of EMA-RBCs was measured with a Cytomics FC500 flow cytometer. The mean corpuscular volume of RBCs was assessed with an LH750 Beckman Coulter analyzer. Statistical analysis was performed using GraphPad Prism. The Spearman correlation coefficient between the mean fluorescence channel of EMA-RBCs and MCV was r = 0.39, p < 0.0001. Interpretation of the EMA test depends on the MCV of the reference samples: if the reference blood samples have a lower MCV than the patient's, the EMA test result might be negative. Because RBC MCV differs between early infancy and ca. three months later, the EMA test in neonates might be interpreted as falsely negative. Samples from children younger than 3 months old had an EMA test result of 86.1 ± 11.7%, whereas the same samples analyzed 4.1 ± 2.1 months later had results of 75.4 ± 4.5%, p < 0.05. Mean fluorescence of EMA-bound RBCs depends on RBC volume. The MCV of the reference samples affects EMA test results; we therefore recommend selecting reference samples with an MCV within ±2 fL of the patient's RBC MCV.
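The closing recommendation, reference samples within ±2 fL of the patient's MCV, is easy to operationalize. A minimal sketch, with hypothetical sample data:

```python
def select_reference_samples(patient_mcv_fl, candidate_mcvs, tolerance_fl=2.0):
    # Keep only candidate reference samples whose MCV lies within
    # +/- tolerance_fl of the patient's MCV, per the recommendation above.
    # candidate_mcvs maps sample id -> MCV in fL.
    return [sample_id for sample_id, mcv in candidate_mcvs.items()
            if abs(mcv - patient_mcv_fl) <= tolerance_fl]
```

For instance, with hypothetical candidates {"donor1": 88.0, "donor2": 91.5, "donor3": 79.0} and a patient MCV of 90 fL, only donor1 and donor2 qualify.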
8. Nuclear medicine. Abstracts; Nuklearmedizin 2000. Abstracts
Energy Technology Data Exchange (ETDEWEB)
Anon.
2000-07-01
This issue of the journal contains the abstracts of the 183 conference papers as well as the 266 posters presented at the conference. Subject fields covered are: neurology, psychiatry, oncology, pediatrics, radiopharmacy, endocrinology, EDP, measuring equipment and methods, radiological protection, cardiology, and therapy. (orig./CB)
9. Long distance atmospheric pollution: assessment, risks, management and decision. Collection of abstracts of research works. Synthesis of results of researches performed within the framework of the PRIMEQUAL programme
International Nuclear Information System (INIS)
Kirchner, Severine; Ramalho, Olivier; Bellanger, Anne-Pauline; Blondeau, Patrice; Bonvallot, Nathalie; Campagna, Dave; Cellier, Pierre; Charles, Lionel; Coddeville, Patrice; Coll, Isabelle; Frejafon, Emeric; Gehin, Evelyne; George, Christian; Glorennec, Philippe; Gros, Valerie; Hecq, Walter; Laj, Paolo; Le Calve, Stephane; Mallet, Cecile; Momas, Isabelle; Mullot, Jean-Ulrich; Plaisance, Herve; Probst, Anne; Seigneur, Christian; Vlassopoulo, Chloe; Weiss, Karine
2014-11-01
After a brief presentation of the PRIMEQUAL programme, an inter-agency and inter-institution research programme for better air quality (275 supported research actions since the programme's creation), an introduction presents the context of research within this programme on long-distance pollution. Various research works are then briefly presented. They address three main themes: 1) determining factors and atmospheric processes (the role of organic nitrates in nitrogen transport, sources and evolution of organic carbonated pollution in the atmosphere, modelling of long-distance pollution, a miniature and autonomous station for monitoring atmospheric composition), 2) regional evidence of pollutant transport (local and long-distance pollution in Ile-de-France, pollutant transport and air quality in the Mediterranean Sea, measurement and modelling of the deposition of Saharan dusts, the relationship between forest fires and air quality), and 3) long-term impacts on ecosystems, health and the economy (peat lands as markers of atmospheric contamination, 20 years of measurements of atmospheric depositions in France and long-term trends, vulnerability of ecosystems to atmospheric nitrogen, a cost-benefit approach to the relationship between long-distance pollution and climate change). An appendix contains the call for research proposals which resulted in the research described above.
10. Penultimate interpretation.
Science.gov (United States)
Neuman, Yair
2010-10-01
Interpretation is at the center of psychoanalytic activity. However, interpretation is always challenged by that which is beyond our grasp, the 'dark matter' of our mind, what Bion describes as 'O'. O is one of the most central and difficult concepts in Bion's thought. In this paper, I explain the enigmatic nature of O as a high-dimensional mental space and point to the price one should pay for substituting a low-dimensional symbolic representation for the pre-symbolic lexicon of the emotion-laden and high-dimensional unconscious. This price is reification: objectifying lived experience and draining it of vitality and complexity. In order to address the difficulty of approaching O through symbolization, I introduce the term 'Penultimate Interpretation', a form of interpretation that seeks 'loopholes' through which the analyst and the analysand may reciprocally save themselves from the curse of reification. Three guidelines for 'Penultimate Interpretation' are proposed and illustrated through an imaginary dialogue. Copyright © 2010 Institute of Psychoanalysis.
11. Data Abstraction in GLISP.
Science.gov (United States)
Novak, Gordon S., Jr.
GLISP is a high-level computer language (based on Lisp and including Lisp as a sublanguage) which is compiled into Lisp. GLISP programs are compiled relative to a knowledge base of object descriptions, a form of abstract datatypes. A primary goal of the use of abstract datatypes in GLISP is to allow program code to be written in terms of objects,…
DEFF Research Database (Denmark)
2012-01-01
indefinitely, finding neither a proof nor a disproof of a given subgoal. In this paper we characterize a family of truth-preserving abstractions from intuitionistic first-order logic to the monadic fragment of classical first-order logic. Because they are truthful, these abstractions can be used to disprove...
13. Program and abstracts
International Nuclear Information System (INIS)
1976-01-01
Abstracts of the papers given at the conference are presented. The abstracts are arranged under sessions entitled: Theoretical Physics; Nuclear Physics; Solid State Physics; Spectroscopy; Plasma Physics; Solar-Terrestrial Physics; Astrophysics and Astronomy; Radioastronomy; General Physics; Applied Physics; Industrial Physics
14. Completeness of Lyapunov Abstraction
Directory of Open Access Journals (Sweden)
Rafael Wisniewski
2013-08-01
In this work, we continue our study of discrete abstractions of dynamical systems. To this end, we use a family of partitioning functions to generate an abstraction. The intersections of sub-level sets of the partitioning functions define cells, which are regarded as discrete objects. The union of the cells makes up the state space of the dynamical system. Our construction gives rise to a combinatorial object, a timed automaton. We examine sound and complete abstractions. An abstraction is said to be sound when the flow of the timed automaton covers the flow lines of the dynamical system. If the dynamics of the dynamical system and the timed automaton are equivalent, the abstraction is complete. The commonly accepted paradigm for partitioning functions is that they ought to be transversal to the studied vector field. We show that there is no complete partitioning with transversal functions, even for particular dynamical systems whose critical sets are isolated critical points. Therefore, in this work we allow the directional derivative along the vector field to be non-positive. This considerably complicates the abstraction technique. For understanding dynamical systems, it is vital to study stable and unstable manifolds and their intersections. These objects appear naturally in this work. Indeed, we show that for an abstraction to be complete, the set of critical points of an abstraction function must contain either the stable or the unstable manifold of the dynamical system.
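The cell construction from sub-level sets can be made concrete: assign each state a tuple counting, per partitioning function, how many of that function's sub-level sets contain the state. This is an illustrative reading of the construction, not the paper's formal definition:

```python
def cell_index(state, partitioning_functions, levels):
    # For each partitioning function f with level list lvls, count the
    # sub-level sets {x : f(x) <= l} that contain the state; the tuple
    # of counts identifies the discrete cell the state belongs to.
    return tuple(
        sum(1 for level in lvls if f(state) <= level)
        for f, lvls in zip(partitioning_functions, levels)
    )
```

For example, with a single function f(x) = x^2 and levels [1, 4], the real line is split into three cells: inside both sub-level sets, inside only the outer one, or outside both.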
15. The deleuzian abstract machines
DEFF Research Database (Denmark)
Werner Petersen, Erik
2005-01-01
To most people the concept of abstract machines is connected to the name of Alan Turing and the development of the modern computer. The Turing machine is universal, axiomatic and symbolic (E.g. operating on symbols). Inspired by Foucault, Deleuze and Guattari extended the concept of abstract...
16. Check Sample Abstracts.
Science.gov (United States)
Alter, David; Grenache, David G; Bosler, David S; Karcher, Raymond E; Nichols, James; Rajadhyaksha, Aparna; Camelo-Piragua, Sandra; Rauch, Carol; Huddleston, Brent J; Frank, Elizabeth L; Sluss, Patrick M; Lewandrowski, Kent; Eichhorn, John H; Hall, Janet E; Rahman, Saud S; McPherson, Richard A; Kiechle, Frederick L; Hammett-Stabler, Catherine; Pierce, Kristin A; Kloehn, Erica A; Thomas, Patricia A; Walts, Ann E; Madan, Rashna; Schlesinger, Kathie; Nawgiri, Ranjana; Bhutani, Manoop; Kanber, Yonca; Abati, Andrea; Atkins, Kristen A; Farrar, Robert; Gopez, Evelyn Valencerina; Jhala, Darshana; Griffin, Sonya; Jhala, Khushboo; Jhala, Nirag; Bentz, Joel S; Emerson, Lyska; Chadwick, Barbara E; Barroeta, Julieta E; Baloch, Zubair W; Collins, Brian T; Middleton, Owen L; Davis, Gregory G; Haden-Pinneri, Kathryn; Chu, Albert Y; Keylock, Joren B; Ramoso, Robert; Thoene, Cynthia A; Stewart, Donna; Pierce, Arand; Barry, Michelle; Aljinovic, Nika; Gardner, David L; Barry, Michelle; Shields, Lisa B E; Arnold, Jack; Stewart, Donna; Martin, Erica L; Rakow, Rex J; Paddock, Christopher; Zaki, Sherif R; Prahlow, Joseph A; Stewart, Donna; Shields, Lisa B E; Rolf, Cristin M; Falzon, Andrew L; Hudacki, Rachel; Mazzella, Fermina M; Bethel, Melissa; Zarrin-Khameh, Neda; Gresik, M Vicky; Gill, Ryan; Karlon, William; Etzell, Joan; Deftos, Michael; Karlon, William J; Etzell, Joan E; Wang, Endi; Lu, Chuanyi M; Manion, Elizabeth; Rosenthal, Nancy; Wang, Endi; Lu, Chuanyi M; Tang, Patrick; Petric, Martin; Schade, Andrew E; Hall, Geraldine S; Oethinger, Margret; Hall, Geraldine; Picton, Avis R; Hoang, Linda; Imperial, Miguel Ranoa; Kibsey, Pamela; Waites, Ken; Duffy, Lynn; Hall, Geraldine S; Salangsang, Jo-Anne M; Bravo, Lulette Tricia C; Oethinger, Margaret D; Veras, Emanuela; Silva, Elvia; Vicens, Jimena; Silva, Elvio; Keylock, Joren; Hempel, James; Rushing, Elizabeth; Posligua, Lorena E; Deavers, Michael T; Nash, Jason W; Basturk, Olca; Perle, Mary Ann; Greco, Alba; Lee, Peng; Maru, Dipen; 
Weydert, Jamie Allen; Stevens, Todd M; Brownlee, Noel A; Kemper, April E; Williams, H James; Oliverio, Brock J; Al-Agha, Osama M; Eskue, Kyle L; Newlands, Shawn D; Eltorky, Mahmoud A; Puri, Puja K; Royer, Michael C; Rush, Walter L; Tavora, Fabio; Galvin, Jeffrey R; Franks, Teri J; Carter, James Elliot; Kahn, Andrea Graciela; Lozada Muñoz, Luis R; Houghton, Dan; Land, Kevin J; Nester, Theresa; Gildea, Jacob; Lefkowitz, Jerry; Lacount, Rachel A; Thompson, Hannis W; Refaai, Majed A; Quillen, Karen; Lopez, Ana Ortega; Goldfinger, Dennis; Muram, Talia; Thompson, Hannis
2009-02-01
The following abstracts are compiled from Check Sample exercises published in 2008. These peer-reviewed case studies assist laboratory professionals with continuing medical education and are developed in the areas of clinical chemistry, cytopathology, forensic pathology, hematology, microbiology, surgical pathology, and transfusion medicine. Abstracts for all exercises published in the program will appear annually in AJCP.
17. Reconstruction of abstract quantum theory
International Nuclear Information System (INIS)
Drieschner, M.; Goernitz, T.; von Weizsaecker, C.F.
1988-01-01
Understanding quantum theory as a general theory of prediction, we reconstruct abstract quantum theory. Abstract means the general frame of quantum theory, without reference to a three-dimensional position space, to concepts like particle or field, or to special laws of dynamics. Reconstruction is the attempt to do this by formulating simple and plausible postulates on prediction in order to derive the basic concepts of quantum theory from them. Thereby no law of classical physics is presupposed which would then have to be quantized. We briefly discuss the relationship of theory and interpretation in physics and the fundamental role of time as a basic concept for physics. Then a number of assertions are given, formulated as succinctly as possible in order to make them easily quotable and comparable. The assertions are arranged in four groups: heuristic principles, verbal definitions of some terms, three basic postulates, and consequences. The three postulates of separable alternatives, indeterminism, and kinematics are the central points of this work. These brief assertions are commented upon, and their relationship with the interpretation of quantum theory is discussed. Also given are an outlook on the further development into concrete quantum theory and some philosophical reflections.
18. Abstract Datatypes in PVS
Science.gov (United States)
Owre, Sam; Shankar, Natarajan
1997-01-01
PVS (Prototype Verification System) is a general-purpose environment for developing specifications and proofs. This document deals primarily with the abstract datatype mechanism in PVS which generates theories containing axioms and definitions for a class of recursive datatypes. The concepts underlying the abstract datatype mechanism are illustrated using ordered binary trees as an example. Binary trees are described by a PVS abstract datatype that is parametric in its value type. The type of ordered binary trees is then presented as a subtype of binary trees where the ordering relation is also taken as a parameter. We define the operations of inserting an element into, and searching for an element in an ordered binary tree; the bulk of the report is devoted to PVS proofs of some useful properties of these operations. These proofs illustrate various approaches to proving properties of abstract datatype operations. They also describe the built-in capabilities of the PVS proof checker for simplifying abstract datatype expressions.
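PVS itself is a specification language rather than a programming language, but the ordered-binary-tree example the abstract describes can be sketched in Python, with the parametric ordering relation becoming an ordinary function argument (all names here are illustrative, not PVS syntax):

```python
# Illustrative sketch (not PVS): an ordered binary tree parametric in its
# value type and ordering relation, with insert and search operations.

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def insert(tree, value, lt=lambda a, b: a < b):
    """Insert `value`, keeping the tree ordered w.r.t. the relation `lt`."""
    if tree is None:
        return Node(value)
    if lt(value, tree.value):
        tree.left = insert(tree.left, value, lt)
    else:
        tree.right = insert(tree.right, value, lt)
    return tree

def search(tree, value, lt=lambda a, b: a < b):
    """Return True iff `value` occurs in the ordered tree."""
    if tree is None:
        return False
    if value == tree.value:
        return True
    return search(tree.left if lt(value, tree.value) else tree.right, value, lt)
```

Properties such as "searching after inserting succeeds" are exactly the kind of statement the report proves about the PVS datatype, here checkable only by testing.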
19. Interpreting Physics
CERN Document Server
MacKinnon, Edward
2012-01-01
This book is the first to offer a systematic account of the role of language in the development and interpretation of physics. An historical-conceptual analysis of the co-evolution of mathematical and physical concepts leads to the classical/quantum interface. Bohrian orthodoxy stresses the indispensability of classical concepts and the functional role of mathematics. This book analyses ways of extending, and then going beyond, this orthodoxy. Finally, the book analyzes how a revised interpretation of physics impacts on basic philosophical issues: conceptual revolutions, realism, and r
20. Performing Interpretation
Science.gov (United States)
Kothe, Elsa Lenz; Berard, Marie-France
2013-01-01
Utilizing a/r/tographic methodology to interrogate interpretive acts in museums, multiple areas of inquiry are raised in this paper, including: which knowledge is assigned the greatest value when preparing a gallery talk; what lies outside of disciplinary knowledge; how invitations to participate invite and disinvite in the same gesture; and what…
1. Experimental Waterflow Determination of the Dynamic Hydraulic Transfer Function for the J-2X Oxidizer Turbopump. Part Two; Results and Interpretation
Science.gov (United States)
Zoladz, Tom; Patel, Sandeep; Lee, Erik; Karon, Dave
2011-01-01
Experimental results describing the hydraulic dynamic pump transfer matrix (Yp) for a cavitating J-2X oxidizer turbopump inducer+impeller tested in subscale waterflow are presented. The transfer function is required for integrated vehicle pogo stability analysis as well as optimization of local inducer pumping stability. Dynamic transfer functions across widely varying pump hydrodynamic inlet conditions are extracted from measured data in conjunction with 1D-model based corrections. The derived dynamic transfer functions are initially interpreted relative to traditional pogo pump equations. Water-to-liquid-oxygen scaling of measured cavitation characteristics is discussed. Comparison of key dynamic transfer matrix terms derived from waterflow testing is made with those implemented in preliminary Ares Upper Stage pogo stability modeling. Alternate cavitating pump hydraulic dynamic equations are suggested which better reflect the frequency dependencies of the measured transfer matrices.
2. Completeness of Lyapunov Abstraction
DEFF Research Database (Denmark)
Wisniewski, Rafal; Sloth, Christoffer
2013-01-01
This paper addresses the generation of complete abstractions of polynomial dynamical systems by timed automata. For the proposed abstraction, the state space is divided into cells by sublevel sets of functions. We identify a relation between these functions and their directional derivatives along the vector field, which allows the generation of a complete abstraction. To compute the functions that define the subdivision of the state space in an algorithm, we formulate a sum of squares optimization problem. This optimization problem finds the best subdivisioning functions, with respect to the ability...
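The role the sublevel sets play can be illustrated with a deliberately tiny example (assumed here for illustration, not taken from the paper): for the scalar system x' = -x with V(x) = x², the directional (Lie) derivative of V along the vector field is negative everywhere, so trajectories can only cross level sets inward, and the cell-to-cell abstraction need only allow inward transitions:

```python
# Toy sketch (illustrative, not the paper's algorithm): abstracting the
# scalar system  x' = -x  by sublevel sets of V(x) = x^2.  Because
# dV/dt = 2x * (-x) = -2x^2 <= 0, every trajectory moves from cells bounded
# by higher level sets toward cells bounded by lower ones, which is what
# makes a cell-to-cell abstraction of this system complete.

def f(x):            # vector field of x' = -x
    return -x

def dV(x):           # gradient of V(x) = x^2
    return 2 * x

levels = [0.25, 1.0, 4.0]          # sublevel-set boundaries of V

def allowed_transitions(levels):
    """Direction in which trajectories may cross each level set."""
    out = []
    for c in levels:
        x = c ** 0.5               # a point on the boundary V(x) = c
        lie = dV(x) * f(x)         # directional derivative of V along f
        out.append("inward" if lie < 0 else "outward")
    return out
```

The paper's sum of squares optimization is what finds such functions automatically for polynomial systems; here V was simply guessed.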
3. Scientific meeting abstracts
International Nuclear Information System (INIS)
1999-01-01
The document is a collection of the scientific meeting abstracts in the fields of nuclear physics, medical sciences, chemistry, agriculture, environment, engineering, different aspects of energy and presents research done in 1999 in these fields
4. Science meeting. Abstracts
International Nuclear Information System (INIS)
2000-01-01
the document is a collection of the science meeting abstracts in the fields of nuclear physics, medical sciences, chemistry, agriculture, environment, engineering, material sciences different aspects of energy and presents research done in 2000 in these fields
5. Mathematical games, abstract games
CERN Document Server
Neto, Joao Pedro
2013-01-01
User-friendly, visually appealing collection offers both new and classic strategic board games. Includes abstract games for two and three players and mathematical games such as Nim and games on graphs.
6. Abstracts of contributed papers
Energy Technology Data Exchange (ETDEWEB)
1994-08-01
This volume contains 571 abstracts of contributed papers to be presented during the Twelfth US National Congress of Applied Mechanics. Abstracts are arranged in the order in which they fall in the program -- the main sessions are listed chronologically in the Table of Contents. The Author Index is in alphabetical order and lists each paper number (matching the schedule in the Final Program) with its corresponding page number in the book.
7. Automated Supernova Discovery (Abstract)
Science.gov (United States)
Post, R. S.
2015-12-01
(Abstract only) We are developing a system of robotic telescopes for automatic recognition of Supernovas as well as other transient events in collaboration with the Puckett Supernova Search Team. At the SAS2014 meeting, the discovery program, SNARE, was first described. Since then, it has been continuously improved to handle searches under a wide variety of atmospheric conditions. Currently, two telescopes are used to build a reference library while searching for PSN with a partial library. Since data is taken every night without clouds, we must deal with varying atmospheric and high background illumination from the moon. Software is configured to identify a PSN, reshoot for verification with options to change the run plan to acquire photometric or spectrographic data. The telescopes are 24-inch CDK24, with Alta U230 cameras, one in CA and one in NM. Images and run plans are sent between sites so the CA telescope can search while photometry is done in NM. Our goal is to find bright PSNs with magnitude 17.5 or less which is the limit of our planned spectroscopy. We present results from our first automated PSN discoveries and plans for PSN data acquisition.
8. Abstraction of Drift Seepage
International Nuclear Information System (INIS)
J.T. Birkholzer
2004-01-01
This model report documents the abstraction of drift seepage, conducted to provide seepage-relevant parameters and their probability distributions for use in Total System Performance Assessment for License Application (TSPA-LA). Drift seepage refers to the flow of liquid water into waste emplacement drifts. Water that seeps into drifts may contact waste packages and potentially mobilize radionuclides, and may result in advective transport of radionuclides through breached waste packages ("Risk Information to Support Prioritization of Performance Assessment Models" (BSC 2003 [DIRS 168796], Section 3.3.2)). The unsaturated rock layers overlying and hosting the repository form a natural barrier that reduces the amount of water entering emplacement drifts by natural subsurface processes. For example, drift seepage is limited by the capillary barrier forming at the drift crown, which decreases or even eliminates water flow from the unsaturated fractured rock into the drift. During the first few hundred years after waste emplacement, when above-boiling rock temperatures will develop as a result of heat generated by the decay of the radioactive waste, vaporization of percolation water is an additional factor limiting seepage. Estimating the effectiveness of these natural barrier capabilities and predicting the amount of seepage into drifts is an important aspect of assessing the performance of the repository. The TSPA-LA therefore includes a seepage component that calculates the amount of seepage into drifts ("Total System Performance Assessment (TSPA) Model/Analysis for the License Application" (BSC 2004 [DIRS 168504], Section 6.3.3.1)). The TSPA-LA calculation is performed with a probabilistic approach that accounts for the spatial and temporal variability and inherent uncertainty of seepage-relevant properties and processes. Results are used for subsequent TSPA-LA components that may handle, for example, waste package corrosion or radionuclide transport.
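The probabilistic approach mentioned at the end can be caricatured as a Monte Carlo loop; the parameter names, distributions, and threshold model below are purely illustrative assumptions, not those of the TSPA-LA seepage component:

```python
# Hypothetical sketch of the probabilistic idea only: sample an uncertain
# percolation flux and an uncertain capillary-barrier capacity, and count
# the fraction of realizations in which any seepage occurs.  All numbers
# and distributions here are made up for illustration.
import random

random.seed(1)

def seepage_fraction(n=10_000):
    seeping = 0
    for _ in range(n):
        percolation = random.lognormvariate(0.0, 1.0)   # flux, illustrative
        threshold = random.uniform(1.0, 5.0)            # barrier capacity, illustrative
        if percolation > threshold:                     # barrier exceeded -> seepage
            seeping += 1
    return seeping / n
```

The output distribution of such a loop, not a single deterministic value, is what downstream performance-assessment components consume.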
9. Interpretive Medicine
Science.gov (United States)
Reeve, Joanne
2010-01-01
Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the
10. Abstracting audit data for lightweight intrusion detection
KAUST Repository
Wang, Wei; Zhang, Xiangliang; Pitsilis, Georgios
2010-01-01
are used to validate the two strategies of data abstraction. The extensive test results show that the process of exemplar extraction significantly improves the detection efficiency and has a better detection performance than PCA in data abstraction. © 2010
11. Contribution to an understanding of the action of gamma radiation on granular starch - interpretation of results obtained using the enzymatic and chromatographic method
International Nuclear Information System (INIS)
Robin, J.P.; Tollier, M.Th.; Guilbot, A.
1978-01-01
12. Minimalism in architecture: Abstract conceptualization of architecture
Directory of Open Access Journals (Sweden)
Vasilski Dragana
2015-01-01
Minimalism in architecture contains the idea of the minimum as a leading creative tendency, to be considered and interpreted through the phenomena of empathy and abstraction. In Western culture, the root of this idea is found in the empathy of Wilhelm Worringer and the abstraction of Kasimir Malevich. In his dissertation 'Abstraction and Empathy', Worringer presented his thesis on the psychology of style, through which he explained two opposing basic forms: abstraction and empathy. His conclusion on empathy as a psychological basis of observational expression is significant due to its verbal congruence with contemporary minimalist expression. His intuition was enhanced further by the figure of Malevich. Abstraction, as an expression of inner unfettered inspiration, has played a crucial role in the development of modern art and architecture of the twentieth century. Abstraction, which is one of the basic methods of learning in psychology (separating relevant from irrelevant features; Carl Jung), is used to discover ideas. Minimalism in architecture emphasizes the level of abstraction to which the individual functions are reduced. Different types of abstraction are present, in the form as well as the function of the basic elements: walls and windows. The case study is an example of Sou Fujimoto, who is unequivocal in his commitment to the autonomy of abstract conceptualization of architecture.
13. Analysis of complex networks using aggressive abstraction.
Energy Technology Data Exchange (ETDEWEB)
Colbaugh, Richard; Glass, Kristin.; Willard, Gerald
2008-10-01
This paper presents a new methodology for analyzing complex networks in which the network of interest is first abstracted to a much simpler (but equivalent) representation, the required analysis is performed using the abstraction, and analytic conclusions are then mapped back to the original network and interpreted there. We begin by identifying a broad and important class of complex networks which admit abstractions that are simultaneously dramatically simplifying and property preserving -- we call these aggressive abstractions -- and which can therefore be analyzed using the proposed approach. We then introduce and develop two forms of aggressive abstraction: 1.) finite state abstraction, in which dynamical networks with uncountable state spaces are modeled using finite state systems, and 2.) one-dimensional abstraction, whereby high dimensional network dynamics are captured in a meaningful way using a single scalar variable. In each case, the property preserving nature of the abstraction process is rigorously established and efficient algorithms are presented for computing the abstraction. The considerable potential of the proposed approach to complex networks analysis is illustrated through case studies involving vulnerability analysis of technological networks and predictive analysis for social processes.
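Finite state abstraction of the first kind can be sketched on a toy system (assumed here for illustration, not taken from the paper): the logistic map's uncountable state space [0, 1] is partitioned into interval cells, and a sampled cell-to-cell transition relation stands in for the continuous dynamics:

```python
# Illustrative sketch of finite state abstraction on a toy system: the
# logistic map on [0, 1] (an uncountable state space) is abstracted to
# N interval cells, and transitions between cells are estimated by
# sampling points inside each cell and seeing where they land.

def logistic(x, r=3.5):
    return r * x * (1 - x)

def abstract(n_cells=4, samples=50):
    """Return the sampled transition relation between interval cells."""
    width = 1.0 / n_cells
    transitions = set()
    for i in range(n_cells):
        for k in range(samples):
            x = (i + (k + 0.5) / samples) * width   # sample inside cell i
            j = min(int(logistic(x) / width), n_cells - 1)
            transitions.add((i, j))
    return transitions
```

A sampled relation like this is only an approximation; the point of the paper is to construct abstractions whose property preservation is rigorously established rather than estimated.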
14. Objective interpretation as conforming interpretation
OpenAIRE
Lidka Rodak
2011-01-01
The practical discourse willingly uses the formula of “objective interpretation”, with no regards to its controversial nature that has been discussed in literature.The main aim of the article is to investigate what “objective interpretation” could mean and how it could be understood in the practical discourse, focusing on the understanding offered by judicature.The thesis of the article is that objective interpretation, as identified with textualists’ position, is not possible to uphold, and ...
15. Building Safe Concurrency Abstractions
DEFF Research Database (Denmark)
2014-01-01
Concurrent object-oriented programming in Beta is based on semaphores and coroutines and the ability to define high-level concurrency abstractions like monitors, and rendezvous-based communication, and their associated schedulers. The coroutine mechanism of SIMULA has been generalized...
16. Exploring the Great Schism in the Social Sciences: Confirmation Bias and the Interpretation of Results Relating to Biological Influences on Human Behavior and Psychology.
Science.gov (United States)
Winking, Jeffrey
2018-01-01
The nature-nurture debate is one that biologists often dismiss as a false dichotomy, as all phenotypic traits are the results of complex processes of gene and environment interactions. However, such dismissiveness belies the ongoing debate that is unmistakable throughout the biological and social sciences concerning the role of biological influences in the development of psychological and behavioral traits in humans. Many have proposed that this debate is due to ideologically driven biases in the interpretation of results. Those favoring biological approaches have been accused of a greater willingness to accept biological explanations so as to rationalize or justify the status quo of inequality. Those rejecting biological approaches have been accused of an unwillingness to accept biological explanations so as to attribute inequalities solely to social and institutional factors, ultimately allowing for the possibility of social equality. While it is important to continue to investigate this topic through further research and debate, another approach is to examine the degree to which the allegations of bias are indeed valid. To accomplish this, a convenience sample of individuals with relevant postgraduate degrees was recruited from Mechanical Turk and social media. Participants were asked to rate the inferential power of different research designs and of mock results that varied in the degree to which they supported different ideologies. Results were suggestive that researchers harbor sincere differences of opinion concerning the inferential value of relevant research. There was no suggestion that ideological confirmation biases drive these differences. However, challenges associated with recruiting a large enough sample of experts as well as identifying believable mock scenarios limit the study's inferential scope.
17. Mechanical Characterization of Bone: State of the Art in Experimental Approaches-What Types of Experiments Do People Do and How Does One Interpret the Results?
Science.gov (United States)
Bailey, Stacyann; Vashishth, Deepak
2018-06-18
The mechanical integrity of bone is determined by the direct measurement of bone mechanical properties. This article presents an overview of the current, most common, and new and upcoming experimental approaches for the mechanical characterization of bone. The key outcome variables of mechanical testing, as well as interpretations of the results in the context of bone structure and biology are also discussed. Quasi-static tests are the most commonly used for determining the resistance to structural failure by a single load at the organ (whole bone) level. The resistance to crack initiation or growth by fracture toughness testing and fatigue loading offers additional and more direct characterization of tissue material properties. Non-traditional indentation techniques and in situ testing are being increasingly used to probe the material properties of bone ultrastructure. Destructive ex vivo testing or clinical surrogate measures are considered to be the gold standard for estimating fracture risk. The type of mechanical test used for a particular investigation depends on the length scale of interest, where the outcome variables are influenced by the interrelationship between bone structure and composition. Advancement in the sensitivity of mechanical characterization techniques to detect changes in bone at the levels subjected to modifications by aging, disease, and/or pharmaceutical treatment is required. As such, a number of techniques are now available to aid our understanding of the factors that contribute to fracture risk.
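Two of the outcome variables mentioned for quasi-static testing can be computed directly from a load-displacement record. The sketch below (synthetic data and illustrative helper names, not from the article) estimates stiffness as the slope of the initial linear region and work to failure as the area under the curve:

```python
# Hedged sketch of common quasi-static outcome variables: stiffness from
# the (assumed linear) initial region of a load-displacement curve, and
# work to failure as the area under the curve (trapezoidal rule).

def stiffness(disp, load, linear_points=3):
    """Least-squares slope over the first few (assumed linear) points, N/mm."""
    d, p = disp[:linear_points], load[:linear_points]
    n = len(d)
    md, mp = sum(d) / n, sum(p) / n
    return sum((x - md) * (y - mp) for x, y in zip(d, p)) / \
           sum((x - md) ** 2 for x in d)

def work_to_failure(disp, load):
    """Area under the load-displacement curve (trapezoidal rule), N*mm."""
    return sum((load[i] + load[i + 1]) / 2 * (disp[i + 1] - disp[i])
               for i in range(len(disp) - 1))
```

As the abstract notes, these whole-bone (organ-level) measures reflect both geometry and tissue material properties; separating the two requires the material-level tests discussed later.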
18. Structural and vibrational study of 2-MethoxyEthylAmmonium Nitrate (2-OMeEAN): Interpretation of experimental results with ab initio molecular dynamics
International Nuclear Information System (INIS)
Campetella, M.; Caminiti, R.; Bencivenni, L.; Gontrani, L.; Bovi, D.; Guidoni, L.
2016-01-01
In this work we report an analysis of the bulk phase of 2-methoxyethylammonium nitrate based on ab initio molecular dynamics. The structural and dynamical features of the ionic liquid have been characterized and the computational findings have been compared with the experimental X-ray diffraction patterns, with infrared spectroscopy data, and with the results obtained from molecular dynamics simulations. The experimental infrared spectrum was interpreted with the support of calculated vibrational density of states as well as harmonic frequency calculations of selected gas phase clusters. Particular attention was addressed to the high frequency region of the cation (ω > 2000 cm −1 ), where the vibrational motions involve the NH 3 + group responsible for hydrogen bond formation, and to the frequency range 1200-1400 cm −1 where the antisymmetric stretching mode (ν 3 ) of nitrate is found. Its multiple absorption lines in the liquid arise from the removal of the degeneracy present in the D 3h symmetry of the isolated ion. Our ab initio molecular dynamics leads to a rationalization of the frequency shifts and splittings, which are inextricably related to the structural modifications induced by a hydrogen bonding environment. The DFT calculations lead to an inhomogeneous environment.
19. Structural and vibrational study of 2-MethoxyEthylAmmonium Nitrate (2-OMeEAN): Interpretation of experimental results with ab initio molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Campetella, M.; Caminiti, R.; Bencivenni, L.; Gontrani, L., E-mail: lorenzo.gontrani@uniroma1.it [Dipartimento di Chimica, Università di Roma, “La Sapienza,” P. le Aldo Moro 5, I-00185 Roma (Italy); Bovi, D. [Dipartimento di Fisica, Università di Roma, “La Sapienza,” P. le Aldo Moro 5, I-00185 Roma (Italy); Guidoni, L. [Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’Aquila, Via Vetoio, Coppito, I-67100 L’Aquila (Italy)
2016-07-14
In this work we report an analysis of the bulk phase of 2-methoxyethylammonium nitrate based on ab initio molecular dynamics. The structural and dynamical features of the ionic liquid have been characterized and the computational findings have been compared with the experimental X-ray diffraction patterns, with infrared spectroscopy data, and with the results obtained from molecular dynamics simulations. The experimental infrared spectrum was interpreted with the support of calculated vibrational density of states as well as harmonic frequency calculations of selected gas phase clusters. Particular attention was addressed to the high frequency region of the cation (ω > 2000 cm{sup −1}), where the vibrational motions involve the NH{sub 3}+ group responsible for hydrogen bond formation, and to the frequency range 1200-1400 cm{sup −1} where the antisymmetric stretching mode (ν{sub 3}) of nitrate is found. Its multiple absorption lines in the liquid arise from the removal of the degeneracy present in the D{sub 3h} symmetry of the isolated ion. Our ab initio molecular dynamics leads to a rationalization of the frequency shifts and splittings, which are inextricably related to the structural modifications induced by a hydrogen bonding environment. The DFT calculations lead to an inhomogeneous environment.
20. Nuclear medicine. Abstracts
International Nuclear Information System (INIS)
Anon.
2000-01-01
This issue of the journal contains the abstracts of the 183 conference papers as well as 266 posters presented at the conference. Subject fields covered are: neurology, psychology, oncology, pediatrics, radiopharmacy, endocrinology, EDP, measuring equipment and methods, radiological protection, cardiology, and therapy. (orig./CB)
1. Annual Conference Abstracts
Science.gov (United States)
Journal of Engineering Education, 1972
1972-01-01
Includes abstracts of papers presented at the 80th Annual Conference of the American Society for Engineering Education. The broad areas include aerospace, affiliate and associate member council, agricultural engineering, biomedical engineering, continuing engineering studies, chemical engineering, civil engineering, computers, cooperative…
2. WWNPQFT-2013 - Abstracts
International Nuclear Information System (INIS)
Cessac, B.; Bianchi, E.; Bellon, M.; Fried, H.; Krajewski, T.; Schubert, C.; Barre, J.; Hofmann, R.; Muller, B.; Raffaelli, B.
2014-01-01
The object of this Workshop is to consolidate and publicize new efforts in non perturbative-like Field Theories, relying in Functional Methods, Renormalization Group, and Dyson-Schwinger Equations. A presentation deals with effective vertices and photon-photon scattering in SU(2) Yang-Mills thermodynamics. This document gathers the abstracts of the presentations
3. Full Abstraction for HOPLA
DEFF Research Database (Denmark)
Nygaard, Mikkel; Winskel, Glynn
2003-01-01
A fully abstract denotational semantics for the higher-order process language HOPLA is presented. It characterises contextual and logical equivalence, the latter linking up with simulation. The semantics is a clean, domain-theoretic description of processes as downwards-closed sets of computation...
4. Abstraction and art.
Science.gov (United States)
Gortais, Bernard
2003-07-29
In a given social context, artistic creation comprises a set of processes, which relate to the activity of the artist and the activity of the spectator. Through these processes we see and understand that the world is vaster than it is said to be. Artistic processes are mediated experiences that open up the world. A successful work of art expresses a reality beyond actual reality: it suggests an unknown world using the means and the signs of the known world. Artistic practices incorporate the means of creation developed by science and technology and change forms as they change. Artists and the public follow different processes of abstraction at different levels, in the definition of the means of creation, of representation and of perception of a work of art. This paper examines how the processes of abstraction are used within the framework of the visual arts and abstract painting, which appeared during a period of growing importance for the processes of abstraction in science and technology, at the beginning of the twentieth century. The development of digital platforms and new man-machine interfaces allow multimedia creations. This is performed under the constraint of phases of multidisciplinary conceptualization using generic representation languages, which tend to abolish traditional frontiers between the arts: visual arts, drama, dance and music.
5. Composing Interfering Abstract Protocols
Science.gov (United States)
2016-04-01
Tecnologia, Universidade Nova de Lisboa, Caparica, Portugal. This document is a companion technical report of the paper, “Composing Interfering Abstract...a Ciência e Tecnologia (Portuguese Foundation for Science and Technology) through the Carnegie Mellon Portugal Program under grant SFRH/BD/33765
6. Abstract algebra for physicists
International Nuclear Information System (INIS)
Zeman, J.
1975-06-01
Certain recent models of composite hadrons involve concepts and theorems from abstract algebra which are unfamiliar to most theoretical physicists. The algebraic apparatus needed for an understanding of these models is summarized here. Particular emphasis is given to algebraic structures which are not assumed to be associative. (2 figures) (auth)
7. Abstracts of submitted papers
International Nuclear Information System (INIS)
1987-01-01
The conference proceedings contain 152 abstracts of presented papers relating to various aspects of personnel dosimetry, the dosimetry of the working and living environment, various types of dosemeters and spectrometers, the use of radionuclides in various industrial fields, the migration of radionuclides on Czechoslovak territory after the Chernobyl accident, theoretical studies of some parameters of ionizing radiation detectors, and their calibration. (M.D.)
8. The Abstraction Engine
DEFF Research Database (Denmark)
Fortescue, Michael David
The main thesis of this book is that abstraction, far from being confined to higher forms of cognition, language and logical reasoning, has actually been a major driving force throughout the evolution of creatures with brains. It is manifest in emotive as well as rational thought. Wending its way th...
9. Impredicative concurrent abstract predicates
DEFF Research Database (Denmark)
Svendsen, Kasper; Birkedal, Lars
2014-01-01
We present impredicative concurrent abstract predicates (iCAP), a program logic for modular reasoning about concurrent, higher-order, reentrant, imperative code. Building on earlier work, iCAP uses protocols to reason about shared mutable state. A key novel feature of iCAP is the ability to define...
10. Abstract Film and Beyond.
Science.gov (United States)
Le Grice, Malcolm
A theoretical and historical account of the main preoccupations of makers of abstract films is presented in this book. The book's scope includes discussion of nonrepresentational forms as well as examination of experiments in the manipulation of time in films. The ten chapters discuss the following topics: art and cinematography, the first…
11. SPR 2015. Abstracts
International Nuclear Information System (INIS)
2015-01-01
The volume contains the abstracts of the SPR (society for pediatric radiology) 2015 meeting covering the following issues: fetal imaging, musculoskeletal imaging, cardiac imaging, chest imaging, oncologic imaging, tools for process improvement, child abuse, contrast enhanced ultrasound, image gently - update of radiation dose recording/reporting/monitoring - meaningful or useless meaning?, pediatric thoracic imaging, ALARA.
12. Beyond the abstractions?
DEFF Research Database (Denmark)
Olesen, Henning Salling
2006-01-01
The anniversary of the International Journal of Lifelong Education takes place in the middle of a conceptual landslide from lifelong education to lifelong learning. Contemporary discourses of lifelong learning, etc., are, however, abstractions behind which new functions and agendas for adult education...
13. Abstracts of SIG Sessions.
Science.gov (United States)
Proceedings of the ASIS Annual Meeting, 1994
1994-01-01
Includes abstracts of 18 special interest group (SIG) sessions. Highlights include natural language processing, information science and terminology science, classification, knowledge-intensive information systems, information value and ownership issues, economics and theories of information science, information retrieval interfaces, fuzzy thinking…
14. SPR 2015. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2015-04-01
The volume contains the abstracts of the SPR (society for pediatric radiology) 2015 meeting covering the following issues: fetal imaging, musculoskeletal imaging, cardiac imaging, chest imaging, oncologic imaging, tools for process improvement, child abuse, contrast enhanced ultrasound, image gently - update of radiation dose recording/reporting/monitoring - meaningful or useless meaning?, pediatric thoracic imaging, ALARA.
15. Controlling groundwater over-abstraction
NARCIS (Netherlands)
Naber, Al Majd; Molle, Francois
2017-01-01
The control of groundwater over-abstraction is a vexing problem worldwide. Jordan is one of the countries facing severe water scarcity that has implemented a wide range of measures and policies over the past 20 years. While the gap between formal legal and policy frameworks and local practices on
16. A systematic review of the use of theory in the design of guideline dissemination and implementation strategies and interpretation of the results of rigorous evaluations
Directory of Open Access Journals (Sweden)
Grimshaw Jeremy M
2010-02-01
Full Text Available Abstract Background There is growing interest in the use of cognitive, behavioural, and organisational theories in implementation research. However, the extent of use of theory in implementation research is uncertain. Methods We conducted a systematic review of use of theory in 235 rigorous evaluations of guideline dissemination and implementation studies published between 1966 and 1998. Use of theory was classified according to type of use (explicitly theory based, some conceptual basis, and theoretical construct used) and stage of use (choice/design of intervention, process/mediators/moderators, and post hoc/explanation). Results Fifty-three of 235 studies (22.5%) were judged to have employed theories, including 14 studies that explicitly used theory. The majority of studies (n = 42) used only one theory; the maximum number of theories employed by any study was three. Twenty-five different theories were used. A small number of theories accounted for the majority of theory use, including PRECEDE (Predisposing, Reinforcing, and Enabling Constructs in Educational Diagnosis and Evaluation), diffusion of innovations, information overload and social marketing (academic detailing). Conclusions There was poor justification of choice of intervention and use of theory in implementation research in the identified studies until at least 1998. Future research should explicitly identify the justification for the interventions. Greater use of explicit theory to understand barriers, design interventions, and explore mediating pathways and moderators is needed to advance the science of implementation research.
17. Interpretation of positive results of a methacholine inhalation challenge and 1 week of inhaled bronchodilator use in diagnosing and treating cough-variant asthma.
Science.gov (United States)
Irwin, R S; French, C T; Smyrnios, N A; Curley, F J
1997-09-22
In diagnosing cough due to asthma, methacholine chloride inhalation challenge (MIC) interpreted in a traditional fashion has been shown to have positive predictive values from 60% to 82%. To determine whether any features of positive results of an MIC or the results of a 1-week trial of inhaled beta-agonist therapy were helpful in predicting when the cough was due to asthma. The study design was a prospective, randomized, double-blind, placebo-controlled, crossover format performed in adult, nonsmoking subjects, who were referred for diagnosis and treatment of chronic cough. The subjects had no other respiratory complaints or medical conditions for which they were taking medications, the results of baseline spirometry and chest roentgenograms were normal, and the results of MIC were positive. After obtaining baseline data, including MICs on 2 separate days, objective cough counting, and self-assessment of cough severity using a visual analog scale, subjects were randomized to receive 2 inhalations (1.3 mg) of metaproterenol sulfate or placebo by metered dose inhaler attached to a spacer device every 4 hours while awake. At 1 week, data identical to baseline were collected, and subjects received the other metered dose inhaler for 7 days. At 1 week, data identical to baseline were collected. After completion of the protocol, subjects were followed up in the clinic to observe the final response of the cough to specific therapy. Based on the disappearance of the cough with specific therapy, the cough was due to asthma in 9 of 15 subjects and nonasthma in 6 of 15 subjects. Baseline data were similar between groups. With respect to MICs, there were no significant differences between groups in the cumulative dose of methacholine that provoked a 20% decrease in forced expiratory volume in 1 second from the postsaline baseline value (PD20 values), slopes of dose-response curves, and maximal-response plateaus. Cough severity significantly improved after 1 week of
18. Biocards and Level of Abstraction
DEFF Research Database (Denmark)
Lenau, Torben Anker; Keshwani, Sonal; Chakrabarti, Amaresh
2015-01-01
Biocards are formal descriptions of biological phenomena and their underlying functional principles. They are used in bioinspired design to document search results and to communicate the findings for use in the further design process. The present study explored the effect of abstraction level used...
19. Fuel and fission product behaviour in early phases of a severe accident. Part II: Interpretation of the experimental results of the PHEBUS FPT2 test
Energy Technology Data Exchange (ETDEWEB)
Dubourg, R. [Institut de Radioprotection et de Sûreté Nucléaire, B.P. 3, 13115 Saint Paul-lez-Durance Cedex (France); Barrachin, M., E-mail: marc.barrachin@irsn.fr [Institut de Radioprotection et de Sûreté Nucléaire, B.P. 3, 13115 Saint Paul-lez-Durance Cedex (France); Ducher, R. [Institut de Radioprotection et de Sûreté Nucléaire, B.P. 3, 13115 Saint Paul-lez-Durance Cedex (France); Gavillet, D. [Paul Scherrer Institute, CH-5232 Villigen PSI (Switzerland); De Bremaecker, A. [Institute for Nuclear Materials Sciences, SCK-CEN, Boeretang 200, B-2400 Mol (Belgium)
2014-10-15
One objective of the FPT2 test of the PHEBUS FP Program was to study the degradation of an irradiated UO{sub 2} fuel bundle and the fission product behaviour under conditions of low steam flow. The results of the post-irradiation examinations (PIE) at the upper levels (823 mm and 900 mm) of the test section previously reported are interpreted in the present paper. Solid state interactions between fuel and cladding have been compared with the characteristics of interaction identified in the previous separate-effect tests. Corium resulting from the interaction between fuel and cladding was formed. The uranium concentration in the corium is compared to analytical tests and a scenario for the corium formation is proposed. The analysis showed that, despite the rather low fuel burn-up, the conditions of temperature and oxygen potential reached during the starvation phase are able to give an early, very significant release fraction of caesium. A significant part (but not all) of the molybdenum was segregated at grain boundaries and trapped in metallic inclusions, from which it was totally removed in the final part of the experiment. During the steam starvation phase, the conditions of oxygen potential were favourable for the formation of simple Ba and BaO chemical forms but the temperature was too low to provoke their volatility. This is one important difference with out-of-pile experiments such as VERCORS, for which only a combination of high temperature and low oxygen potential induced a significant barium release. Finally, another significant difference with analytical out-of-pile experiments comes from the formation of foamy zones due to the fission gas presence in FPT2-type experiments, which give an additional possibility for the formation of stable fission product compounds.
20. Mammographic interpretation
International Nuclear Information System (INIS)
Tabor, L.
1987-01-01
For mammography to be an effective diagnostic method, it must be performed to a very high standard of quality. Otherwise many lesions, in particular cancer in its early stages, will simply not be detectable on the films, regardless of the skill of the mammographer. Mammographic interpretation consists of two basic steps: perception and analysis. The process of mammographic interpretation begins with perception of the lesion on the mammogram. Perception is influenced by several factors. One of the most important is the parenchymal pattern of the breast tissue, detection of pathologic lesions being easier with fatty involution. The mammographer should use a method for the systematic viewing of the mammograms that will ensure that all parts of each mammogram are carefully searched for the presence of lesions. The method of analysis proceeds according to the type of lesion. Contour analysis is of primary importance in the evaluation of circumscribed tumors. After having analyzed the contour and density of a lesion and considered its size, the mammographer should be fairly certain whether the circumscribed tumor is benign or malignant. Fine-needle puncture and/or US may assist the mammographer in making this decision. Painstaking analysis is required because many circumscribed tumors do not need to be biopsied. The perception of circumscribed tumors seldom causes problems, but their analysis needs careful attention. On the other hand, the major challenge with star-shaped lesions is perception. They may be difficult to discover when small. Although the final diagnosis of a stellate lesion can be made only with the help of histologic examination, the preoperative mammographic differential diagnosis can be highly accurate. The differential diagnostic problem is between malignant tumors (scirrhous carcinoma), on the one hand, and traumatic fat necrosis as well as radial scars on the other hand.
1. Two-Way Interpretation about Climate Change: Preliminary Results from a Study in Select Units of the United States National Park System
Science.gov (United States)
Forist, B. E.; Knapp, D.
2014-12-01
Much interpretation in units of the National Park System, conducted by National Park Service (NPS) rangers and partners, is today done in a didactic, lecture style. This "one-way" communication runs counter to research suggesting that long-term impacts of park interpretive experiences must be established through direct connections with the visitor. Previous research in interpretation has suggested that interpretive experiences utilizing a "two-way" dialogue approach are more successful at facilitating long-term memories than "one-way" approaches where visitors have few, if any, opportunities to ask questions, offer opinions, or share interests and experiences. Long-term memories are indicators of connections to places and resources. Global anthropogenic change poses critical threats to NPS sites, resources, and visitor experiences. As climate change plays an ever-expanding role in public, political, social, economic, and environmental discourse it stands to reason that park visitors may also be interested in engaging in this discourse. Indeed, NPS Director Jonathan Jarvis stated in the agency's Climate Change Action Plan 2012 - 2014 that, "We now know through social science conducted in parks that our visitors are looking to NPS staff for honest dialogue about this critical issue." Researchers from Indiana University will present preliminary findings from a multiple park study that assessed basic visitor knowledge and the impact of two-way interpretation related to climate change. Observations from park interpretive programs addressing climate change will be presented. Basic visitor knowledge of climate change impacts in the select parks as well as immediate and long-term visitor recollections will be presented. Select units of the National Park System in this research included Cape Cod National Seashore, Cape Hatteras National Seashore, North Cascades National Park, Shenandoah National Park, and Zion National Park.
2. An Interpreter's Interpretation: Sign Language Interpreters' View of Musculoskeletal Disorders
National Research Council Canada - National Science Library
Johnson, William L
2003-01-01
Sign language interpreters are at increased risk for musculoskeletal disorders. This study used content analysis to obtain detailed information about these disorders from the interpreters' point of view...
3. Research Abstracts of 1979.
Science.gov (United States)
1979-12-01
abscess formation and tissue necrosis, its relationship to periodontal pockets was investigated. Specimens were obtained from maxillary and mandibular...of Organisms and Periodontal Pockets." (Abstract 4853). 10. SIMONSON, L. G., LAMBERTS, B. L. and JACROLA, D. R. - "Effects of Dextranases and other...the tip of the periodontal probe rests within epithelium, at or slightly apical to the coronal extent of the junctional epithelium. The purpose of
4. NPP life management (abstracts)
International Nuclear Information System (INIS)
Litvinskij, L.L.; Barbashev, S.V.
2002-01-01
Abstracts of the papers presented at the International conference of the Ukrainian Nuclear Society 'NPP Life Management'. The following problems are considered: modernization of the NPP; NPP life management; waste and spent nuclear fuel management; decommissioning issues; control systems (including radiation and ecological control systems); information and control systems; legal and regulatory framework. State nuclear regulatory control; PR in nuclear power; training of personnel; economics of nuclear power engineering
5. Research Abstracts of 1982.
Science.gov (United States)
1982-12-01
Third Molars in Naval Personnel," (Abstract #1430) 7. A. SEROWSKI* and F. AKER --"The Effect of Marine and Fresh-Water Atmospheric Environments on...record to determine changes in surface coverage or other outcomes -- extraction, endodontic therapy, crown placement -- which occurred over time. The...MR0412002-0443. History of Retention and Extraction of Third Molars in Naval Personnel. D. C. SCHROEDER*, J. C. CECIL and M. E. COHEN. Naval
6. DEGRO 2017. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2017-06-15
The volume includes abstracts of the Annual DEGRO Meeting 2017 covering lectures and poster sessions with the following issues: lymphoma, biology, physics, radioimmunotherapy, sarcomas and rare tumors, prostate carcinoma, lung tumors, benign lesions and new media, mamma carcinoma, gastrointestinal tumors, quality of life, care science and quality assurance, high-technology methods and palliative situation, head-and-neck tumors, brain tumors, central nervous system metastases, guidelines, radiation sensitivity, radiotherapy, radioimmunotherapy.
7. Medical physics 2013. Abstracts
International Nuclear Information System (INIS)
Treuer, Harald
2013-01-01
The proceedings of the medical physics conference 2013 include abstracts of lectures and poster sessions concerning the following issues: Tele-therapy - application systems, nuclear medicine and molecular imaging, neuromodulation, hearing and technical support, basic dosimetry, NMR imaging - CEST (chemical exchange saturation transfer), medical robotics, magnetic particle imaging, audiology, radiation protection, phase contrast - innovative concepts, particle therapy, brachytherapy, computerized tomography, quality assurance, hybrid imaging techniques, diffusion and lung NMR imaging, image processing - visualization, cardiac and abdominal NMR imaging.
8. SPR 2014. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2014-05-15
The proceedings of the SPR 2014 meeting include abstracts on the following topics: Body imaging techniques: practical advice for clinic work; thoracic imaging: focus on the lungs; gastrointestinal imaging: focus on the pancreas and bowel; genitourinary imaging: focus on gonadal radiology; musculoskeletal imaging: focus on oncology; child abuse and not child abuse: focus on radiography; impact of NMR and CT imaging on management of CHD; education and communication: art and practice in pediatric radiology.
9. SPR 2014. Abstracts
International Nuclear Information System (INIS)
2014-01-01
The proceedings of the SPR 2014 meeting include abstracts on the following topics: Body imaging techniques: practical advice for clinic work; thoracic imaging: focus on the lungs; gastrointestinal imaging: focus on the pancreas and bowel; genitourinary imaging: focus on gonadal radiology; musculoskeletal imaging: focus on oncology; child abuse and not child abuse: focus on radiography; impact of NMR and CT imaging on management of CHD; education and communication: art and practice in pediatric radiology.
10. WWNPQFT-2011 - Abstracts
International Nuclear Information System (INIS)
Bianchi, E.; Bender, C.; Culetu, H.; Fried, H.; Grossmann, A.; Hofmann, R.; Le Bellac, M.; Martinetti, P.; Muller, B.; Patras, F.; Raffaeli, B.; Vitting Andersen, J.
2013-01-01
The object of this workshop is to consolidate and publicize new efforts in non-perturbative field theories. This year the presentations deal with quantum gravity, non-commutative geometry, fat-tailed wave-functions, strongly coupled field theories, space-times with two time-like dimensions, and multiplicative renormalization. A presentation is dedicated to the construction of a nucleon-nucleon potential from an analytical, non-perturbative gauge invariant QCD. This document gathers the abstracts of the presentations.
11. Enhancing the Interpretive Reading and Analytical Writing of Mainstreamed English Learners in Secondary School: Results from a Randomized Field Trial Using a Cognitive Strategies Approach
Science.gov (United States)
Olson, Carol Booth; Kim, James S.; Scarcella, Robin; Kramer, Jason; Pearson, Matthew; van Dyk, David A.; Collins, Penny; Land, Robert E.
2012-01-01
In this study, 72 secondary English teachers from the Santa Ana Unified School District were randomly assigned to participate in the Pathway Project, a cognitive strategies approach to teaching interpretive reading and analytical writing, or to a control condition involving typical district training focusing on teaching content from the textbook.…
12. Pitfalls in the Assessment, Analysis, and Interpretation of Routine Outcome Monitoring (ROM) Data : Results from an Outpatient Clinic for Integrative Mental Health
NARCIS (Netherlands)
Hoenders, Rogier H. J.; Bos, Elisabeth H.; Bartels-Velthuis, Agna A.; Vollbehr, Nina K.; van der Ploeg, Karen; de Jonge, Peter; de Jong, Joop T. V. M.
There is considerable debate about routine outcome monitoring (ROM) for scientific or benchmarking purposes. We discuss pitfalls associated with the assessment, analysis, and interpretation of ROM data, using data of 376 patients. 206 patients (55 %) completed one or more follow-up measurements.
13. Image acquisition and interpretation criteria for Tc-99m-HMPAO-labelled white blood cell scintigraphy : results of a multicentre study
NARCIS (Netherlands)
Erba, Paola A.; Glaudemans, Andor W. J. M.; Veltman, Niels C.; Sollini, Martina; Pacilio, Marta; Galli, Filippo; Dierckx, Rudi A. J. O.; Signore, Alberto
Purpose There is no consensus yet on the best protocol for planar image acquisition and interpretation of radiolabelled white blood cell (WBC) scintigraphy. This may account for differences in reported diagnostic accuracy amongst different centres. Methods This was a multicentre retrospective study
14. Interpreting the Customary Rules on Interpretation
NARCIS (Netherlands)
Merkouris, Panos
2017-01-01
International courts have at times interpreted the customary rules on interpretation. This is interesting because what is being interpreted is: i) rules of interpretation, which sounds dangerously tautological, and ii) customary law, the interpretation of which has not been the object of critical
15. A cost-effectiveness analysis to illustrate the impact of cost definitions on results, interpretations and comparability of pharmacoeconomic studies in the US.
Science.gov (United States)
Tunis, Sandra L
2009-01-01
There is a lack of a uniform proxy for defining direct medical costs in the US. This potentially important source of variation in modelling and other types of economic studies is often overlooked. The extent to which increased expenditures for an intervention can be offset by reductions in subsequent service costs can be directly related to the choice of cost definitions. To demonstrate how different cost definitions for direct medical costs can impact results and interpretations of a cost-effectiveness analysis. The IMS-CORE Diabetes Model was used to project the lifetime (35-year) cost effectiveness in the US of one pharmacological intervention 'medication A' compared with a second 'medication B' (both unspecified) for type 2 diabetes mellitus. The complications modelled included cardiovascular disease, renal disease, eye disease and neuropathy. The model had a Markov structure with Monte Carlo simulations. Utility values were derived from the published literature. Complication costs were obtained from a retrospective database study that extracted anonymous patient-level data from (primarily private payer) adjudicated medical and pharmaceutical claims. Costs for pharmacy services, outpatient services and inpatient hospitalizations were included. Cost definitions for complications included charged, allowed and paid amounts, and for medications included both wholesale acquisition cost (WAC) and average wholesale price (AWP). Costs were reported in year 2007 values. The cost-effectiveness results differed according to the particular combination of cost definitions employed. The use of charges greatly increased costs for complications. When the analysis incorporated WAC medication prices with charged amounts for complication costs, the incremental cost-effectiveness ratio (ICER) for medication A versus medication B was $US6337 per QALY. When AWP prices were used with charged amounts, medication A became a dominant treatment strategy, i.e. lower costs with greater
16. Nonclassical Problem for Ultraparabolic Equation in Abstract Spaces
Directory of Open Access Journals (Sweden)
Gia Avalishvili
2016-01-01
Full Text Available A nonclassical problem for an ultraparabolic equation with nonlocal initial condition with respect to one time variable is studied in abstract Hilbert spaces. We define the space of square integrable vector-functions with values in Hilbert spaces corresponding to the variational formulation of the nonlocal problem for the ultraparabolic equation and prove a trace theorem, which allows one to interpret initial conditions of the nonlocal problem. We obtain suitable a priori estimates and prove the existence and uniqueness of a solution of the nonclassical problem and continuous dependence upon the data of the solution to the nonlocal problem. We consider an application of the obtained abstract results to a nonlocal problem for an ultraparabolic partial differential equation with a second-order elliptic operator and obtain a well-posedness result in Sobolev spaces.
17. The Complexity of Abstract Machines
Directory of Open Access Journals (Sweden)
Beniamino Accattoli
2017-01-01
Full Text Available The lambda-calculus is a peculiar computational model whose definition does not come with a notion of machine. Unsurprisingly, implementations of the lambda-calculus have been studied for decades. Abstract machines are implementation schemas for fixed evaluation strategies that are a compromise between theory and practice: they are concrete enough to provide a notion of machine and abstract enough to avoid the many intricacies of actual implementations. There is an extensive literature about abstract machines for the lambda-calculus, and yet—quite mysteriously—the efficiency of these machines with respect to the strategy that they implement has almost never been studied. This paper provides an unusual introduction to abstract machines, based on the complexity of their overhead with respect to the length of the implemented strategies.
It is conceived to be a tutorial, focusing on the case study of implementing the weak head (call-by-name) strategy, and yet it is an original re-elaboration of known results. Moreover, some of the observations contained here never appeared in print before.
18. Program and abstracts
International Nuclear Information System (INIS)
1978-01-01
This volume contains the program and abstracts of the conference. The following topics are included: metal vapor molecular lasers, magnetohydrodynamics, rare gas halide and nuclear pumped lasers, transfer mechanisms in arcs, kinetic processes in rare gas halide lasers, arcs and flows, XeF kinetics and lasers, fundamental processes in excimer lasers, electrode effects and vacuum arcs, electron and ion transport, ion interactions and mobilities, glow discharges, diagnostics and afterglows, dissociative recombination, electron ionization and excitation, rare gas excimers and group VI lasers, breakdown, novel laser pumping techniques, electrode-related discharge phenomena, photon interactions, attachment, plasma chemistry and infrared lasers, electron scattering, and reactions of excited species
19. SPR 2017. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2017-05-15
The conference proceedings SPR 2017 include abstracts on the following issues: gastrointestinal radiography - inflammatory bowel diseases, cardiovascular CTA, general musculoskeletal radiology, musculoskeletal congenital development diseases, general pediatric radiology - chest, musculoskeletal imaging - marrow and infectious disorders, state-of-the-art body MR imaging, practical pediatric sonography, quality and professionalism, CT imaging in congenital heart diseases, radiographic courses, body MR techniques, contrast enhanced ultrasound, machine learning, forensic imaging, the radiation dose conundrum - reconciling imaging, imagining and managing, the practice of radiology, interventional radiology, neuroradiology, PET/MR.
20. Ghana energy abstracts
International Nuclear Information System (INIS)
Entsua-Mensah, Clement
1994-01-01
Ghana Energy Abstracts 1994 is the first issue of an annual publication of the Energy Information Centre. The aim is to combine in one publication the country's bibliographic output on energy so as to provide a valuable source of reference for policy makers, planners, and researchers. It covers the broad spectrum of energy including: energy conservation, energy resource management, petroleum and renewable energy resources. The documents listed comprise research reports, baseline studies, conference proceedings, periodical articles, dissertations and theses. Keywords and author indexes have been provided to facilitate easy reference. (C.E.M)
1. Parameterized Dataflow (Extended Abstract)
Directory of Open Access Journals (Sweden)
Dominic Duggan
2016-10-01
Full Text Available Dataflow networks have application in various forms of stream processing, for example for parallel processing of multimedia data. The description of dataflow graphs, including their firing behavior, is typically non-compositional and not amenable to separate compilation. This article considers a dataflow language with a type and effect system that captures the firing behavior of actors. This system allows definitions to abstract over actor firing rates, supporting the definition and safe composition of actor definitions where firing rates are not instantiated until a dataflow graph is launched.
2. ESPR 2015. Abstracts
International Nuclear Information System (INIS)
2015-01-01
The volume includes the abstracts of the ESPR 2015 covering the following topics: PCG (post graduate courses): Radiography; fluoroscopy and general issue; nuclear medicine, interventional radiology and hybrid imaging, pediatric CT, pediatric ultrasound; MRI in childhood.
Scientific sessions and task force sessions: International aspects; neuroradiology, neonatal imaging, engineering techniques to simulate injury in child abuse, CT - dose and quality, challenges in the chest, cardiovascular and chest, musculoskeletal, oncology, pediatric uroradiology and abdominal imaging, fetal and postmortem imaging, education and global challenges, neuroradiology - head and neck, gastrointestinal and genitourinary.
3. IPR 2016. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2016-05-15
The volume on the meeting of pediatric radiology includes abstracts on the following issues: chest, cardiovascular system, neuroradiology, CT radiation DRLs (diagnostic reference levels) and dose reporting guidelines, genitourinary imaging, gastrointestinal radiology, oncology and nuclear medicine, whole body imaging, fetal/neonates imaging, child abuse, oncology and hybrid imaging, value added imaging, musculoskeletal imaging, dose and radiation safety, imaging children - immobilization and distraction techniques, information - education - QI and healthcare policy, ALARA, the knowledge skills and competences for a technologist/radiographer in pediatric radiology, full exploitation of new technological features in pediatric CT, image quality issues in pediatrics, abdominal imaging, interventional radiology, MR contrast agents, tumor - mass imaging, cardiothoracic imaging, ultrasonography.
4. ESPR 2015. Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2015-05-10
The volume includes the abstracts of the ESPR 2015 covering the following topics: PCG (post graduate courses): Radiography; fluoroscopy and general issue; nuclear medicine, interventional radiology and hybrid imaging, pediatric CT, pediatric ultrasound; MRI in childhood.
Scientific sessions and task force sessions: International aspects; neuroradiology, neonatal imaging, engineering techniques to simulate injury in child abuse, CT - dose and quality, challenges in the chest, cardiovascular and chest, musculoskeletal, oncology, pediatric uroradiology and abdominal imaging, fetal and postmortem imaging, education and global challenges, neuroradiology - head and neck, gastrointestinal and genitourinary.
5. IPR 2016. Abstracts
International Nuclear Information System (INIS)
2016-01-01
The volume on the meeting of pediatric radiology includes abstracts on the following issues: chest, cardiovascular system, neuroradiology, CT radiation DRLs (diagnostic reference levels) and dose reporting guidelines, genitourinary imaging, gastrointestinal radiology, oncology and nuclear medicine, whole body imaging, fetal/neonates imaging, child abuse, oncology and hybrid imaging, value added imaging, musculoskeletal imaging, dose and radiation safety, imaging children - immobilization and distraction techniques, information - education - QI and healthcare policy, ALARA, the knowledge skills and competences for a technologist/radiographer in pediatric radiology, full exploitation of new technological features in pediatric CT, image quality issues in pediatrics, abdominal imaging, interventional radiology, MR contrast agents, tumor - mass imaging, cardiothoracic imaging, ultrasonography.
6. The Interpretive Function
DEFF Research Database (Denmark)
Agerbo, Heidi
2017-01-01
Approximately a decade ago, it was suggested that a new function should be added to the lexicographical function theory: the interpretive function(1). However, hardly any research has been conducted into this function, and though it was only suggested that this new function was relevant... to incorporate into lexicographical theory, some scholars have since then assumed that this function exists(2), including the author of this contribution.
In Agerbo (2016), I present arguments supporting the incorporation of the interpretive function into the function theory and suggest how non-linguistic signs can be treated in specific dictionary articles. However, in the current article, due to the results of recent research, I argue that the interpretive function should not be considered an individual main function. The interpretive function, contrary to some of its definitions, is not connected...

7. Interpretation criteria for FDG PET/CT in multiple myeloma (IMPeTUs): final results. IMPeTUs (Italian Myeloma criteria for PET Use). Science.gov (United States). Nanni, Cristina; Versari, Annibale; Chauvie, Stephane; Bertone, Elisa; Bianchi, Andrea; Rensi, Marco; Bellò, Marilena; Gallamini, Andrea; Patriarca, Francesca; Gay, Francesca; Gamberi, Barbara; Ghedini, Pietro; Cavo, Michele; Fanti, Stefano; Zamagni, Elena. 2018-05-01. FDG PET/CT (18F-fluorodeoxyglucose positron emission tomography/computed tomography) is a useful tool to image multiple myeloma (MM). However, simple and reproducible reporting criteria are still lacking and there is a need for harmonization. Recently, a group of Italian nuclear medicine experts defined new visual descriptive criteria (Italian Myeloma criteria for PET Use: IMPeTUs) to standardize FDG PET/CT evaluation in MM patients. The aim of this study was to assess the reproducibility of IMPeTUs on a large prospective cohort of MM patients. Patients affected by symptomatic MM who had undergone an FDG PET/CT at baseline (PET0), after induction (PET-AI), and at the end of treatment (PET-EoT) were prospectively enrolled in a multicenter trial (EMN02; NCT01910987; MMY3033). After anonymization, PET images were uploaded to the web platform WIDEN® and hence distributed to five expert nuclear medicine reviewers for a blinded independent central review according to the IMPeTUs criteria. Consensus among reviewers was measured by the percentage of agreement and Krippendorff's alpha.
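The two consensus measures named above, percentage agreement and Krippendorff's alpha, are standard inter-rater statistics. As a generic illustration (not code or data from the study; the toy ratings below are invented), both can be computed for nominal ratings with no missing data as follows:

```python
from collections import Counter
from itertools import permutations

def percent_agreement(units):
    """Fraction of units on which all raters assigned the same label."""
    return sum(len(set(u)) == 1 for u in units) / len(units)

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data, every rater rating every unit.
    `units` is a list of per-unit tuples, one label per rater."""
    coincidences = Counter()
    for unit in units:
        m = len(unit)
        # each ordered pair of ratings within a unit contributes 1/(m-1)
        for a, b in permutations(range(m), 2):
            coincidences[(unit[a], unit[b])] += 1 / (m - 1)
    marginals = Counter()
    for (c, _), w in coincidences.items():
        marginals[c] += w
    n = sum(marginals.values())
    d_obs = sum(w for (c, k), w in coincidences.items() if c != k) / n
    d_exp = sum(marginals[c] * marginals[k]
                for c in marginals for k in marginals if c != k) / (n * (n - 1))
    return 1.0 if d_exp == 0 else 1 - d_obs / d_exp

# Toy example: two raters scoring four scans as positive (1) or negative (2).
ratings = [(1, 1), (1, 2), (2, 2), (2, 2)]
print(percent_agreement(ratings))                      # 0.75
print(round(krippendorff_alpha_nominal(ratings), 3))   # 0.533
```

Alpha corrects the observed disagreement for the disagreement expected by chance from the label marginals, which is why it is reported alongside raw percentage agreement when categories are unbalanced.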
Furthermore, on a patient-based analysis, the concordance among all the reviewers in terms of positivity or negativity of the FDG PET/CT scan was tested for different thresholds of positivity (Deauville score (DS) 2, 3, 4, 5) for the main parameters (bone marrow, focal score, extra-medullary disease). Eighty-six patients (211 FDG PET/CT scans) were included in this analysis. Median patient age was 58 years (range, 35-66 years), 45% were male, 15% were in ISS (International Staging System) stage III, and 42% had high-risk cytogenetics. The percentage agreement was superior to 75% for all the time points, reaching 100% agreement in assessing the presence of skull lesions after therapy. Comparable results were obtained when the agreement analysis was performed using Krippendorff's alpha coefficient, either at every single time point of scanning (PET0, PET-AI or PET-EoT) or ...

8. Electrical Conductivity Model of the Mantle Lithosphere of the Slave Craton (NW Canada) and its Tectonic Interpretation in the Context of Geochemical Results. Science.gov (United States). Lezaeta, P.; Chave, A.; Evans, R.; Jones, A. G.; Ferguson, I. 2002-12-01. The Slave Craton, northwestern Canada, contains the oldest known rocks on Earth, with exposed outcrop over an area of about 600x400 km2. The discovery of economic diamondiferous kimberlite pipes during the early 1990s motivated extensive research in the region. Over the last six years, four types of deep-probing magnetotelluric (MT) surveys were conducted within the framework of diverse geoscientific programs, aimed at determining the regional-scale electrical structures of the craton. Two of the surveys involved novel acquisition: one through frozen lake ice along ice roads during winter, and the second deploying ocean-bottom instrumentation from float planes during summer.
The latter surveys required one year of recording between summers, thus allowing long-period transfer functions that lead to mantle penetration depths of over 300 km. Two-dimensional modeling of the MT data from along the winter road showed the existence of a high-conductivity zone at depths of 80-120 km beneath the central Slave Craton. This anomalous region is spatially coincident with an ultradepleted harzburgitic layer in the upper mantle that was interpreted by others to be related to a subducted slab emplaced during the mid-Archean. A 3-D electrical conductivity model of the Slave lithosphere has been obtained, by trial and error, to fit the magnetic transfer and MT response functions from the lake experiments. This 3-D model traces the central Slave conductor as a NE-SW oriented mantle structure. Its NE-SW orientation coincides with that of a late fold-belt system, with the first phase of craton-wide plutonism at ca. 2630-2590 Ma, with a three-part subdivision of the craton based on SKS results, and with G10 (garnet) geochemical mantle boundaries. All of these highlight a NE-SW structural grain to the lithospheric mantle of the craton, in sharp contrast to the N-S grain of the crust. Constraints on the depth range and lateral extension of the electrically conductive structure are obtained ...

9. The Impact of Working Memory on Interpreting. Institute of Scientific and Technical Information of China (English). 白云安; 张国梅. 2016-01-01. This paper investigates the role of working memory in the interpreting process. First of all, it gives a brief introduction to interpreting. Secondly, the paper exemplifies the role of working memory in interpreting. The results reveal that the working memory capacity of interpreters is not absolutely proportional to the quality of interpreting in real interpreting conditions. The performance of an interpreter with a well-equipped working memory capacity will be comprehensively influenced by various elements.

10.
Problems in Abstract Algebra. CERN Document Server. Wadsworth, A. R. 2017-01-01. This is a book of problems in abstract algebra for strong undergraduates or beginning graduate students. It can be used as a supplement to a course or for self-study. The book provides more variety and more challenging problems than are found in most algebra textbooks. It is intended for students wanting to enrich their learning of mathematics by tackling problems that take some thought and effort to solve. The book contains problems on groups (including the Sylow theorems, solvable groups, presentation of groups by generators and relations, and structure and duality for finite abelian groups); rings (including basic ideal theory, factorization in integral domains, and Gauss's theorem); linear algebra (emphasizing linear transformations, including canonical forms); and fields (including Galois theory). Hints to many problems are also included.

11. ICENES 2007 Abstracts. Energy Technology Data Exchange (ETDEWEB). Sahin, S. (Gazi University, Technical Education Faculty, Ankara, Turkey). 2007-07-01. This book includes the conference program and abstracts of the 13th International Conference on Emerging Nuclear Energy Systems, held 03-08 June 2007 in Istanbul, Turkey. The main objective of the International Conference series on Emerging Nuclear Energy Systems (ICENES) is to provide an international scientific and technical forum for scientists, engineers, industry leaders, policy makers, decision makers and young professionals who will shape future energy supply and technology, for a broad review and discussion of various advanced, innovative and non-conventional nuclear energy production systems. The main topics of the 159 accepted papers from 35 countries are fusion science and technology, fission reactors, accelerator-driven systems, transmutation, lasers in nuclear technology, radiation shielding, nuclear reactions, hydrogen energy, solar energy, low-energy physics and societal issues.

13. Computational Abstraction Steps. DEFF Research Database (Denmark). Thomsen, Lone Leth; Thomsen, Bent; Nørmark, Kurt. 2010-01-01. The contribution of this paper is the idea of initiating a programming process by creating or capturing concrete values, objects, or actions. As the next step, some of these are lifted to a higher level by computational means. In the object-oriented paradigm the target of such steps is classes and class instantiations. Our teaching experience shows that many novice programmers find it difficult to write programs with abstractions that materialise to concrete objects later in the development process. We hypothesise that the proposed approach primarily will be beneficial to novice programmers or during the exploratory phase of a program development process. In some specific niches it is also expected that our approach will benefit professional programmers.

14.
IEEE conference record -- Abstracts. International Nuclear Information System (INIS). Anon. 1994-01-01. This conference covers the following areas: computational plasma physics; vacuum electronics; basic phenomena in fully ionized plasmas; plasma, electron, and ion sources; environmental/energy issues in plasma science; space plasmas; plasma processing; ball lightning/spherical plasma configurations; fast wave devices; magnetic fusion; basic phenomena in partially ionized plasmas; dense plasma focus; plasma diagnostics; basic phenomena in weakly ionized gases; fast opening switches; MHD; fast z-pinches and x-ray lasers; intense ion and electron beams; laser-produced plasmas; microwave-plasma interactions; EM and ETH launchers; solid-state plasmas and switches; intense-beam microwaves; and plasmas for lighting. Separate abstracts were prepared for 416 papers in this conference.

15. WD1145+017 (Abstract). Science.gov (United States). Motta, M. 2017-12-01. (Abstract only.) WD1145 is a 17th-magnitude white dwarf star 570 light-years away in Virgo that was discovered to have a disintegrating planetoid in close orbit by Andrew Vanderburg, a graduate student at the Harvard CfA, while data mining. To elucidate the nature of its rather bizarre transit light curves, I obtained multiple observations of WD1145 over the course of a year and found a series of complex transit light curves that could only be interpreted as a ring complex or torus in close orbit around WD1145. Combined with data from other amateur astronomers, professional observations, and satellite data, it became clear that WD1145 has a small planetoid in close orbit at the Roche limit that is breaking apart, forming a ring of debris material that is then raining down on the white dwarf. The surface of the star is "polluted" by heavy metals, as determined from spectroscopic data.
Given that in the intense gravitational field of a white dwarf any heavy metals could not last long on the surface, this confirms that we are tracking in real time the destruction of a small planet by its host star.

16. Item hierarchy-based analysis of the Rivermead Mobility Index resulted in improved interpretation and enabled faster scoring in patients undergoing rehabilitation after stroke. Science.gov (United States). Roorda, Leo D; Green, John R; Houwink, Annemieke; Bagley, Pam J; Smith, Jane; Molenaar, Ivo W; Geurts, Alexander C. 2012-06-01. Objective: to enable improved interpretation of the total score and faster scoring of the Rivermead Mobility Index (RMI) by studying item ordering or hierarchy and formulating start-and-stop rules in patients after stroke. Design: cohort study. Setting: a rehabilitation center in the Netherlands; stroke rehabilitation units and the community in the United Kingdom. Participants: item hierarchy of the RMI was studied in an initial group of patients (n=620; mean age ± SD, 69.2±12.5y; 297 [48%] men; 304 [49%] left-hemisphere lesion, and 269 [43%] right-hemisphere lesion), and the adequacy of the item hierarchy-based start-and-stop rules was checked in a second group of patients (n=237; mean age ± SD, 60.0±11.3y; 139 [59%] men; 103 [44%] left-hemisphere lesion, and 93 [39%] right-hemisphere lesion) undergoing rehabilitation after stroke. Interventions: not applicable. Main outcome measures: Mokken scale analysis was used to investigate the fit of the double monotonicity model, indicating hierarchical item ordering. The percentages of patients with a difference between the RMI total score and the scores based on the start-and-stop rules were calculated to check the adequacy of these rules. Results: the RMI had good fit of the double monotonicity model (coefficient H(T)=.87). The interpretation of the total score improved. Item hierarchy-based start-and-stop rules were formulated.
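Start-and-stop rules exploit the hierarchical item ordering established by the Mokken analysis: when items are administered from easiest to hardest, testing can stop after a run of failures, and the remaining harder items are scored as failed. The sketch below is a generic illustration of such a rule, not the actual RMI rules from the article; the function names, the two-consecutive-failures stop criterion, and the response pattern are all assumptions.

```python
def full_score(passes):
    """Total score with every item administered (items ordered easy -> hard)."""
    return sum(passes)

def stop_rule_score(item_fn, n_items, stop_after=2):
    """Administer items in difficulty order; stop after `stop_after`
    consecutive failures and score the remaining harder items as failed.
    Returns (score, number_of_items_administered)."""
    score, fails_in_a_row, administered = 0, 0, 0
    for i in range(n_items):
        administered += 1
        if item_fn(i):          # True = item passed
            score += 1
            fails_in_a_row = 0
        else:
            fails_in_a_row += 1
            if fails_in_a_row == stop_after:
                break
    return score, administered

# Hypothetical near-Guttman response pattern for a 15-item mobility scale:
pattern = [True] * 4 + [False, True] + [False] * 9
score, administered = stop_rule_score(lambda i: pattern[i], len(pattern))
print(score, administered)   # 5 8: same total as full_score(pattern),
                             # but only 8 of the 15 items administered
```

A symmetric start rule works from the other end: begin partway up the hierarchy and credit the easier, skipped items once a run of passes establishes a basal level.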
The percentages of patients with a difference between the RMI total score and the score based on the recommended start-and-stop rules were 3% and 5%, respectively. Ten of the original 15 items had to be scored after applying the start-and-stop rules. Conclusions: item hierarchy was established, enabling improved interpretation and faster scoring of the RMI. Copyright © 2012 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

17. [Interpretation of proverbs and Alzheimer's disease]. Science.gov (United States). Báez, S; Mendoza, L; Reyes, P; Matallana, D; Montañés, P. To evaluate the performance of patients with Alzheimer's disease (AD) in the mild-moderate stage on a verbal abstraction task that involves interpreting the implicit meaning of proverbs and sayings. A qualitative-quantitative analysis was carried out of the performance of 30 patients with AD and 30 controls, paired by age, gender and level of education. Patients had significantly greater difficulties than the controls when it came to interpreting proverbs. A high correlation was found between subjects' years of schooling and the overall score on the proverb interpretation test. The results suggest that the processes that may be predominantly affected in patients with AD are the investigation of the conditions of the problem, together with selecting an alternative and formulating a cognitive plan to resolve the task. The results help to further our knowledge of the characteristics of the performance of patients with AD on a test involving the interpretation of the implicit meaning of proverbs, and also provide information about the processes that may be predominantly affected. Further research is needed, however, in this subject area in order to obtain more conclusive explanations.

18.
Image acquisition and interpretation criteria for 99mTc-HMPAO-labelled white blood cell scintigraphy: results of a multicentre study. Energy Technology Data Exchange (ETDEWEB). Erba, Paola A. (University of Pisa Medical School (Italy), Regional Center of Nuclear Medicine); Glaudemans, Andor W.J.M.; Dierckx, Rudi A.J.O. (University Medical Center Groningen (Netherlands), Dept. of Nuclear Medicine and Molecular Imaging); Veltman, Niels C. (Jeroen Bosch Hospital, 's-Hertogenbosch (Netherlands), Dept. of Nuclear Medicine); Sollini, Martina (Arcispedale S. Maria Nuova - IRCCS, Reggio Emilia (Italy), Nuclear Medicine Unit); Pacilio, Marta; Galli, Filippo (Sapienza Univ., Rome (Italy), Nuclear Medicine Unit); Signore, Alberto (University Medical Center Groningen (Netherlands), Dept. of Nuclear Medicine and Molecular Imaging; Sapienza Univ., Rome (Italy), Nuclear Medicine Unit; Sapienza Univ., Rome (Italy), Ospedale S. Andrea Medicina Nucleare). 2014-04-15. There is no consensus yet on the best protocol for planar image acquisition and interpretation of radiolabelled white blood cell (WBC) scintigraphy. This may account for differences in reported diagnostic accuracy amongst different centres. This was a multicentre retrospective study analysing 235 WBC scans divided into two groups. The first group of scans (105 patients) was acquired with a fixed-time acquisition protocol and the second group (130 patients) with a decay-time-corrected acquisition protocol. Planar images were interpreted both qualitatively and semiquantitatively. Three blinded readers analysed the images. The most accurate imaging acquisition protocol comprised image acquisition at 3-4 h and at 20-24 h in time mode, with acquisition times corrected for isotope decay. Using this protocol, visual analysis had high sensitivity and specificity in the diagnosis of infection.
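The protocol above compares lesion counts between an early and a late acquisition with acquisition times corrected for isotope decay. Purely as an illustration (none of this code or its example numbers come from the study; the function names are invented, and the only fixed constant is the physical half-life of 99mTc), a decay-corrected percentage-increase computation might look like:

```python
TC99M_HALF_LIFE_H = 6.01   # physical half-life of 99mTc in hours

def decay_corrected(counts, elapsed_h):
    """Scale raw counts back up to injection-time activity."""
    return counts * 2 ** (elapsed_h / TC99M_HALF_LIFE_H)

def percent_increase(early_counts, late_counts, early_t_h, late_t_h):
    """Decay-corrected percentage change in lesion counts between an early
    (e.g. 3-4 h) and a late (e.g. 20-24 h) planar acquisition."""
    early = decay_corrected(early_counts, early_t_h)
    late = decay_corrected(late_counts, late_t_h)
    return 100.0 * (late - early) / early

# Invented example: raw lesion counts of 12000 at 4 h and 2500 at 24 h.
# After decay correction the tracer uptake has roughly doubled:
print(round(percent_increase(12000, 2500, 4, 24)))   # 109
```

Note that the study applies no fixed cut-off for this percentage increase: semiquantitative numbers are meant to support visual analysis in doubtful cases rather than replace it.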
Semiquantitative analysis could be used in doubtful cases, with no cut-off for the percentage increase in radiolabelled WBC over time as a criterion to define a positive scan. (orig.)

20. Verification Based on Set-Abstraction Using the AIF Framework. DEFF Research Database (Denmark). Mödersheim, Sebastian Alexander. The AIF framework is a novel method for analyzing advanced security protocols, web services, and APIs, based on a new abstract interpretation method.
It consists of the specification language AIF and a translation/abstraction process that produces a set of first-order Horn clauses. These can ...

1. Exoplanets and Multiverses (Abstract). Science.gov (United States). Trimble, V. 2016-12-01. (Abstract only.) To the ancients, the Earth was the Universe, of a size to be crossed by a god in a day, by boat or chariot, and by humans in a lifetime. Thus an exoplanet would have been a multiverse. The ideas gradually separated over centuries, with gradual acceptance of a sun-centered solar system, the stars as suns likely to have their own planets, other galaxies beyond the Milky Way, and so forth. And whenever the community divided between "just one" of anything versus "many," the "manies" have won. Discoveries beginning in 1991 and 1995 have gradually led to a battalion or two of planets orbiting other stars, very few like our own little family, and to moderately serious consideration of even larger numbers of other universes, again very few like our own. I'm betting, however, on habitable (though not necessarily inhabited) exoplanets to be found, and habitable (though again not necessarily inhabited) universes. Only the former will yield pretty pictures.

2. SENSE 2010, Abstracts. International Nuclear Information System (INIS). Lumsden, M.D.; Argyriou, D.N.; Inosov, D. 2012-01-01. The microscopic origin of unconventional superconductivity continues to attract the attention of the condensed matter community. Whereas rare-earth/actinide-based intermetallic and copper-oxide-based high-temperature superconductors have been studied for more than twenty years, the iron-based superconductors have been in the focus of interest since their recent discovery. Inelastic neutron scattering experiments have been of particular importance for the understanding of the magnetic and superconducting properties of these compounds.
With its 29 talks and 14 posters, the workshop provided a forum for the 71 registered participants to review and discuss experimental achievements, recognize the observed synergies and differences, and discuss theoretical efforts to identify the symmetry of the superconducting order parameter as well as the coupling mechanisms of the Cooper pairs. The workshop covered different topics relevant to the study of unconventional superconductivity. Magnetization and lattice dynamics, such as spin resonances, phonons, and magnetic and other excitations as studied by spectroscopic methods, were presented. Investigations of (doping-, pressure- and magnetic-field-dependent) phase diagrams, electronic states, and vortex physics by the various diffraction techniques were also addressed. This document gathers only the abstracts of the papers. (authors)

3. Book of Abstracts. International Nuclear Information System (INIS). 2013-06-01. ANIMMA 2013 is the third of a series of conferences devoted to endorsing and promoting scientific and technical activities based on nuclear instrumentation and measurements. The main objective of the ANIMMA conference is to unite the various scientific communities involved not only in nuclear instrumentation and measurements, but also in nuclear medicine and radiation. The conference is all about getting scientists, engineers and the industry to meet, exchange cultures and identify new scientific and technical prospects to help overcome both current and future unresolved issues. The conference provides scientists and engineers with a veritable opportunity to compare their latest research and development in different areas: physics, nuclear energy, the nuclear fuel cycle, safety, security, and future energies (GEN III+, GEN IV, ITER, ...).
The conference topics include instrumentation and measurement methods for: fundamental physics; fusion diagnostics and technology; nuclear power reactors; research reactors; the nuclear fuel cycle; decommissioning, dismantling and remote handling; safeguards and homeland security; severe accident monitoring; environmental and medical sciences; and education, training and outreach. This document brings together the abstracts of the presentations. Each presentation (full paper) is analysed separately and entered in INIS.

4. Stellar Presentations (Abstract). Science.gov (United States). Young, D. 2015-12-01. (Abstract only.) The AAVSO is in the process of expanding its education, outreach and speakers bureau program. PowerPoint presentations prepared for specific target audiences, such as AAVSO members, educators, students, the general public, and Science Olympiad teams, coaches, event supervisors, and state directors, will be available online for members to use. The presentations range from specific and general content relating to stellar evolution and variable stars to specific activities for a workshop environment. A presentation, even with a general topic, that works for high school students will not work for educators, Science Olympiad teams, or the general public. Each audience is unique and requires a different approach. The current environment necessitates presentations that are captivating for a younger generation that is embedded in a highly visual and sound-bite world of social media, Twitter, YouTube, and mobile devices. For educators, presentations and workshops for themselves and their students must support the Next Generation Science Standards (NGSS), the Common Core Content Standards, and the Science, Technology, Engineering and Mathematics (STEM) initiative. Current best practices for developing relevant and engaging PowerPoint presentations to deliver information to a variety of targeted audiences will be presented along with several examples.

5.
Directionality effects in simultaneous language interpreting: the case of sign language interpreters in The Netherlands. Science.gov (United States). Van Dijk, Rick; Boers, Eveline; Christoffels, Ingrid; Hermans, Daan. 2011-01-01. The quality of interpretations produced by sign language interpreters was investigated. Twenty-five experienced interpreters were instructed to interpret narratives from (a) spoken Dutch to Sign Language of The Netherlands (SLN), (b) spoken Dutch to Sign Supported Dutch (SSD), and (c) SLN to spoken Dutch. The quality of the interpreted narratives was assessed by 5 certified sign language interpreters who did not participate in the study. Two measures were used to assess interpreting quality: the propositional accuracy of the interpreters' interpretations and a subjective quality measure. The results showed that the interpreted narratives in the SLN-to-Dutch interpreting direction were of lower quality (on both measures) than the interpreted narratives in the Dutch-to-SLN and Dutch-to-SSD directions. Furthermore, interpreters who had begun acquiring SLN when they entered the interpreter training program performed as well in all 3 interpreting directions as interpreters who had acquired SLN from birth.

6. Defunctionalized Interpreters for Programming Languages. DEFF Research Database (Denmark). Danvy, Olivier. 2008-01-01. This document illustrates how functional implementations of formal semantics (structural operational semantics, reduction semantics, small-step and big-step abstract machines, natural semantics, and denotational semantics) can be transformed into each other. These transformations were foreshadowed by Reynolds in "Definitional Interpreters for Higher-Order Programming Languages" for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his students have further observed that functional implementations of small-step and of big-step abstract machines are related using fusion by fixed-point promotion, and that functional implementations of reduction semantics and of small-step abstract machines are related using refocusing and transition ...

7. Book of Abstracts. International Nuclear Information System (INIS). 1987-01-01. The document contains abstracts of 24 review papers, 24 invited papers, 24 oral contributions and 120 posters. Ten review papers summarize the status of laser fusion research and progress in high-power laser facilities in major world laboratories. Four papers review research programs (laser-matter interaction studies and X-ray source development) based on KrF laser systems. Other review papers discuss the problems of laser energy conversion into X-rays in laser-heated cavities, X-ray lasing at shorter wavelengths, and the optimization of targets for inertial fusion. Two review papers are devoted to light-ion fusion. The subjects of most invited papers are special problems of current laser plasma research, such as hot electron generation, nonlinear resonance absorption, energy accumulation limits, pellet ignition, conversion of laser light into X-rays, and high-pressure plasma generation. Three invited papers review laser plasma research in Czechoslovakia, Poland and Spain. One paper suggests a new method of producing muonic superdense matter. The remaining invited papers deal with the progress in XUV lasers and with laser plasma applications for further laser development. Of the papers accepted for oral presentation, 12 discuss various problems of laser-plasma interaction, 4 deal with laser targets, 4 with laser-initiated X-ray sources, and 3 with the diagnostics of laser-produced plasma. The last oral contribution presents the main principles of the excimer laser theory.
The largest group of posters is related to laser-plasma interaction and energy absorption problems, to laser-target interaction, and to various methods of laser plasma diagnostics. The other posters deal with plasma applications in laser development, plasma mirrors, Brillouin and Raman scattering, X-ray emission, harmonic generation, electron acceleration, production of high-Z plasmas and other related problems. (J.U.)

8. When abstraction does not increase stereotyping: preparing for intragroup communication enables abstract construal of stereotype-inconsistent information. NARCIS (Netherlands). Greijdanus, Hedy; Postmes, Tom; Gordijn, Ernestine H.; van Zomeren, Martijn. 2014-01-01. Two experiments investigated when perceivers can construe stereotype-inconsistent information abstractly (i.e., interpret observations as generalizable) and whether stereotype-consistency delimits the positive relation between abstract construal level and stereotyping. Participants (N1=104, N2=83) ...

9. An international effort towards developing standards for best practices in analysis, interpretation and reporting of clinical genome sequencing results in the CLARITY Challenge. DEFF Research Database (Denmark). Brownstein, Catherine A; Beggs, Alan H; Homer, Nils. 2014-01-01. Background: There is tremendous potential for genome sequencing to improve clinical diagnosis and care once it becomes routinely accessible, but this will require formalizing research methods into clinical best practices in the areas of sequence data generation, analysis, interpretation and reporting. The CLARITY Challenge was designed to spur convergence in methods for diagnosing genetic disease starting from clinical case history and genome sequencing data. DNA samples were obtained from three families with heritable genetic disorders and genomic sequence data were donated by sequencing ..., demonstrating a need for consistent fine-tuning of the generally accepted methods.
There was greater diversity in the final clinical report content and in the patient consenting process, demonstrating that these areas require additional exploration and standardization. Conclusions: The CLARITY Challenge ...

10. Localized Smart-Interpretation. Science.gov (United States). Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas; Bach, Torben; Pallesen, Tom. 2014-05-01. The complex task of setting up a geological model consists not only of combining available geological information into a conceptually plausible model, but also requires consistency with available data, e.g. geophysical data. However, in many cases the direct geological information, e.g. borehole samples, is very sparse, so in order to create a geological model the geologist needs to rely on the geophysical data. The problem is, however, that the amount of geophysical data is in many cases so vast that it is practically impossible to integrate all of it in the manual interpretation process. This means that a lot of the information available from the geophysical surveys is unexploited, which is a problem, because the resulting geological model does not fulfill its full potential and hence is less trustworthy. We suggest an approach to geological modeling that (1) allows all geophysical data to be considered when building the geological model, (2) is fast, and (3) allows quantification of geological modeling. The method is constructed to build a statistical model, f(d,m), describing the relation between what the geologist interprets, d, and what the geologist knows, m. The parameter m reflects any available information that can be quantified, such as geophysical data, the result of a geophysical inversion, elevation maps, etc. The parameter d reflects an actual interpretation, such as, for example, the depth to the base of a groundwater reservoir.
First we infer a statistical model f(d,m) by examining sets of actual interpretations made by a geological expert, [d1, d2, ...], and the information used to perform the interpretation, [m1, m2, ...]. This makes it possible to quantify how the geological expert performs interpretation through f(d,m). As the geological expert proceeds with interpreting, the number of interpreted data points from which the statistical model is inferred increases, and therefore the accuracy of the statistical model increases. When a model f …
11. Safety assessment and feeding value for pigs, poultry and ruminant animals of pest-protected (Bt) plants and herbicide-tolerant (glyphosate, glufosinate) plants: interpretation of experimental results observed worldwide on GM plants Directory of Open Access Journals (Sweden) Aimé Aumaitre 2010-01-01 Full Text Available New varieties of plants resistant to pests and/or tolerant to specific herbicides, such as maize, soybean, cotton, sugar beet and canola, have recently been developed by using genetic transformation (GT). These plants contain detectable specific active recombinant DNA (rDNA) and its derived protein. Since they have not been selected for a modification of their chemical composition, they can be considered as substantially equivalent to their parents or to commercial varieties for their content in nutrients and anti-nutritional factors. However, insect-protected maize is less contaminated by mycotoxins than its parental counterpart, conferring a higher degree of safety to animal feeds. The new feeds, grain and derivatives, and whole plants have been intensively tested in vivo for up to 216 days for their safety and their nutritional equivalence for monogastric farm animals (pig, poultry) and ruminants (dairy cows, steers, lambs). The present article is based on the interpretation and summary of the scientific results published in original reviewed journals, either as full papers (33) or as abstracts (33), available through September 2003.
For the duration of the experiments adapted to the species, feed intake, weight gain, milk yield and nutritional equivalence expressed as feed conversion and/or digestibility of nutrients have never been affected by feeding animals diets containing GT plants. In addition, in all the experimental animals, the body and carcass composition, the composition of milk and animal tissues, as well as the sensory properties of meat, are not modified by the use of feeds derived from GT plants. Furthermore, the health of animals, their physiological characteristics and the survival rate are also not affected. The presence of rDNA and derived proteins can be recognized and quantified in feeds in the case of glyphosate-resistant soybean and canola and in the case of insect-protected maize. However, rDNA has never been recovered either in milk or in …
12. Life Cycle Interpretation DEFF Research Database (Denmark) Hauschild, Michael Z.; Bonou, Alexandra; Olsen, Stig Irving 2018-01-01 The interpretation is the final phase of an LCA where the results of the other phases are considered together and analysed in the light of the uncertainties of the applied data and the assumptions that have been made and documented throughout the study. This chapter teaches how to perform an inte…
13. Abstract decomposition theorem and applications CERN Document Server Grossberg, Rami; Lessmann, Olivier 2005-01-01 Let K be an Abstract Elementary Class. Under the assumptions that K has a nicely behaved forking-like notion, regular types and existence of some prime models, we establish a decomposition theorem for such classes. The decomposition implies a main gap result for the class K. The setting is general enough to cover \aleph_0-stable first-order theories (proved by Shelah in 1982), Excellent Classes of atomic models of a first-order theory (proved by Grossberg and Hart in 1987) and the class of submodels of a large sequentially homogeneous \aleph_0-stable model (which is new). 14.
Learning from and for rare floods in Dresden – how public officials interpret damage simulation results at the building type level Directory of Open Access Journals (Sweden) Hutter Gerard 2016-01-01 Full Text Available Public officials in Dresden are concerned about learning from and for rare flood events like the Elbe river flood in August 2002. This is interesting because research on individual as well as organizational learning from rare events indicates that this kind of learning faces significant difficulties (e.g., overestimation of rare events for decision-making based on "emotionalized event experience"). Up to now, little is known about what and how public officials in Dresden specifically learn from and for rare floods. Therefore, the paper follows an exploratory purpose in line with principles of qualitative social research. Firstly, the paper explores dealing with rare floods with reference to a conceptual framework that highlights relations between regulative, normative, and cognitive institutions on the one hand and learning of public officials on the other. Secondly, it adopts a single case study design in Dresden with embedded sub-cases that are defined with reference to organizations of flood risk management (FRM). The case study shows, among other things, that regulations like the Floods Directive are important for justifying FRM with regard to rare flood events, which is less obvious than it sounds. However, public officials display different interpretations of the term "rare flood event", for instance, in the context of analysing the consequences of floods on the building stock. Furthermore, the case study findings indicate that public officials may follow alternative approaches to sustain commitment in the context of rare flood events (systematic versus pragmatic approach). 15.
Results and interpretation of noise measurements using in-core self powered neutron detector strings at Unit 2 of the Paks Nuclear Power Plant International Nuclear Information System (INIS) Gloeckler, O.; Por, G.; Valko, J. 1986-11-01 In-core neutron noise and fuel assembly outlet temperature noise measurements were performed at Unit 2 of the Paks Nuclear Power Plant. Characteristics of the reactor and the noise measuring equipment are briefly described. The in-core rhodium-emitter self-powered neutron detector strings positioned axially one above the other show high coherence and linear phase at low frequencies, indicating a marked transport effect not regularly measured in PWRs. The coherence between horizontally placed neutron detectors is small and the phase is zero. A transport effect of a different nature is obtained between neutron detectors (in-core and ex-core) and fuel assembly outlet thermocouples. The observed characteristics depend on reactor and fuel assembly power in a way supporting interpretation in terms of coolant density and void content changes and power feedback effects. During routine analysis a vibration at 1.1 Hz appeared as a strong peak in the power spectra. The control assembly that was responsible for the observed behaviour could be localized with high certainty. (author)
16. Typesafe Abstractions for Tensor Operations OpenAIRE Chen, Tongfei 2017-01-01 We propose a typesafe abstraction to tensors (i.e. multidimensional arrays) exploiting the type-level programming capabilities of Scala through heterogeneous lists (HList), and showcase typesafe abstractions of common tensor operations and various neural layers such as convolution or recurrent neural networks. This abstraction could lay the foundation of future typesafe deep learning frameworks that run on the Scala/JVM.
17. Interpretive Media Study and Interpretive Social Science. Science.gov (United States) Carragee, Kevin M.
1990-01-01 Defines the major theoretical influences on interpretive approaches in mass communication, examines the central concepts of these perspectives, and provides a critique of these approaches. States that the adoption of interpretive approaches in mass communication has ignored varied critiques of interpretive social science. Suggests that critical…
18. Interpreters, Interpreting, and the Study of Bilingualism. Science.gov (United States) Valdes, Guadalupe; Angelelli, Claudia 2003-01-01 Discusses research on interpreting focused specifically on issues raised by this literature about the nature of bilingualism. Suggests that research carried out on interpreting, while primarily produced with a professional audience in mind and concerned with improving the practice of interpreting, provides valuable insights into complex aspects of…
19. Defunctionalized Interpreters for Programming Languages DEFF Research Database (Denmark) Danvy, Olivier 2008-01-01 … by Reynolds in "Definitional Interpreters for Higher-Order Programming Languages" for functional implementations of denotational semantics, natural semantics, and big-step abstract machines using closure conversion, CPS transformation, and defunctionalization. Over the last few years, the author and his … operational semantics can be expressed as a reduction semantics: for deterministic languages, a reduction semantics is a structural operational semantics in continuation style, where the reduction context is a defunctionalized continuation. As the defunctionalized counterpart of the continuation of a one…
20. A Generator for Composition Interpreters DEFF Research Database (Denmark) Steensgaard-Madsen, Jørgen 1997-01-01 Composition of program components must be expressed in some language, and late composition can be achieved by an interpreter for the composition language. A suitable notion of component is obtained by identifying it with the semantics of a generalised structured command. Experiences from … programming language design, specification and implementation then apply. A component can be considered as defining objects or commands according to convenience. A description language including type information provides sufficient means to describe component interaction according to the underlying abstract…
1. Summary of air permeability data from single-hole injection tests in unsaturated fractured tuffs at the Apache Leap Research Site: Results of steady-state test interpretation International Nuclear Information System (INIS) Guzman, A.G.; Geddis, A.M.; Henrich, M.J.; Lohrstorfer, C.F.; Neuman, S.P. 1996-03-01 This document summarizes air permeability estimates obtained from single-hole pneumatic injection tests in unsaturated fractured tuffs at the Covered Borehole Site (CBS) within the larger Apache Leap Research Site (ALRS). Only permeability estimates obtained from a steady-state interpretation of relatively stable pressure and flow rate data are included. Tests were conducted in five boreholes inclined at 45 degrees to the horizontal, and one vertical borehole. Over 180 borehole segments were tested by setting the packers 1 m apart. Additional tests were conducted in segments of lengths 0.5, 2.0, and 3.0 m in one borehole, and 2.0 m in another borehole, bringing the total number of tests to over 270. Tests were conducted by maintaining a constant injection rate until air pressure became relatively stable and remained so for some time. The injection rate was then incremented by a constant value and the procedure repeated. The air injection rate, pressure, temperature, and relative humidity were recorded.
For each relatively stable period of injection rate and pressure, air permeability was estimated by treating the rock around each test interval as a uniform, isotropic porous medium within which air flows as a single phase under steady state, in a pressure field exhibiting prolate spheroidal symmetry. For each permeability estimate the authors list the corresponding injection rate, pressure, temperature and relative humidity. They also present selected graphs which show how the latter quantities vary with time; logarithmic plots of pressure versus time, which demonstrate the importance of borehole storage effects during the early transient portion of each incremental test period; and semilogarithmic plots of pressure versus recovery time at the end of each test sequence.
2. Chernobyl' 94. Abstracts International Nuclear Information System (INIS) Arkhipov, N.P. 1994-01-01 This book contains materials of the 4th International Scientific and Technical Conference devoted to the results of eight years of work on Chernobyl accident consequences mitigation. Main results of research in radiation monitoring, applied radioecology, and the effect of radionuclides on biological objects in contaminated territories are presented. Information about waste management and medical consequences of the accident is given. The methodology and strategy of further research on radionuclides in the environment and their influence on living organisms are determined. The large body of factual material and its generalization may be useful for scientists and practical workers in the field of radiation monitoring, radiology and medicine.
3. Constraint-Based Abstraction of a Model Checker for Infinite State Systems DEFF Research Database (Denmark) Banda, Gourinath; Gallagher, John Patrick Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems.
In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal t… … to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver…
4. Science. Chernobyl-96. Abstracts International Nuclear Information System (INIS) Kholosha, V.Yi. 1997-01-01 The collection contains the results of Chernobyl accident investigations on the territory of Ukraine. The conference was devoted to the following problems: equipment and dosimetry; agriculture and forestry radioecology and environmental monitoring; medical, biological and social consequences; waste management; 'Shelter' problems; information and simulation technologies.
5. Chernobyl' 96. Abstracts International Nuclear Information System (INIS) Arkhipov, N.P. 1996-01-01 Problems of radiation monitoring, 'Ukrytiye' safety, waste management, the radiation and radioecological situation in the 30-km exclusion zone, agricultural and medical radiology, and the justification of measures and means for mitigation of the influence of radioactive contamination on biological objects and man are discussed. The results of research in scientific establishments of Ukraine, Russia, Belorussia, the USA, Belgium, Germany, Sweden and Japan are presented.
6. Working memory and simultaneous interpreting OpenAIRE Timarova, Sarka 2009-01-01 Working memory is a cognitive construct underlying a number of abilities, and it has been hypothesised for many years that it is crucial for interpreting. A number of studies have been conducted with the aim of supporting this hypothesis, but research has not yielded convincing results. Most researchers focused on studying working memory differences between interpreters and non-interpreters, with the rationale that differences in working memory between the two groups would provide evidence of wor…
7. Research Abstracts of 1980.
Science.gov (United States) 1980-12-01 and EDTA were diluted in supplemented veal infusion broth. Serial dilutions of benzalkonium chloride, cetylpyridinium chloride, Tween 80, Tween 60 … a prime virulence factor in dental caries initiation. Enzymatic hydrolysis of these water-insoluble glucans could result in the reduction or control … Increasing concentrations of T-10 prolonged the delay. When 1% T-10 was added to diet 2000, containing 56% sucrose, there was no significant reduction in …
8. An introduction to abstract algebra CERN Document Server Robinson, Derek JS 2003-01-01 This is a high-level introduction to abstract algebra which is aimed at readers whose interests lie in mathematics and in the information and physical sciences. In addition to introducing the main concepts of modern algebra, the book contains numerous applications, which are intended to illustrate the concepts and to convince the reader of the utility and relevance of algebra today. In particular, applications to Polya coloring theory, Latin squares, Steiner systems and error-correcting codes are described. Another feature of the book is that group theory and ring theory are carried further than is often done at this level. There is ample material here for a two-semester course in abstract algebra. The importance of proof is stressed, and rigorous proofs of almost all results are given. But care has been taken to lead the reader through the proofs by gentle stages. There are nearly 400 problems, of varying degrees of difficulty, to test the reader's skill and progress. The book should be suitable for students …
9. Summer 2015 Internship Abstract Science.gov (United States) Smith, Courtney 2015-01-01 Green fluorescent protein (GFP) visually shows the expression of proteins by fluorescing when exposed to certain wavelengths of light. The GFP in this experiment was used to identify cells actively releasing viruses.
The experiment focused on the effect of microgravity on the GFP expression of Akata B-cells infected with Epstein-Barr Virus (EBV). Two flasks were prepared with 30 million cells each and two bioreactors were prepared with 50 million cells each. All four cultures were incubated for 16 days and fed every four days. Cellometer readings were taken on the feeding days to find cell size, viability, and GFP expression. In addition, the cells were treated with Propidium monoazide (PMA) and run through real-time PCR to determine viral load on the feeding days. On the International Space Station air samples are taken to analyze the bacterial and fungal organisms in the air. The Sartorius Portable Airport is being investigated for potential use on the ISS to analyze for viral content in the air. Multiple samples were taken around Johnson Space Center building 37 and in Clear Lake Pediatric Clinic. The filter used was the gelatin membrane filter, and the DNA was extracted directly from the filter. The DNA was then run through real-time PCR for Varicella Zoster Virus (VZV) and EBV as well as GAPDH to test for the presence of DNA. The results so far have shown low DNA yield and no positive results for VZV or EBV. Further inquiry involves accurately replicating an atmosphere with a high viral load from saliva, as would be found on the ISS, to run the air sampler in. Another line of research is stress hormones that may be correlated to the reactivation of latent viruses. The stress hormones from saliva samples are analyzed rather than blood samples. The quantity found in saliva shows the quantity of the hormones actually attached to cells and causing a reaction, whereas in the blood the quantity of hormones is the total amount released to cause a reaction. The particular …
10. Hydrogen abstraction reactions by amide electron adducts International Nuclear Information System (INIS) Sevilla, M.D.; Sevilla, C.L.; Swarts, S.
1982-01-01 Electron reactions with a number of peptide model compounds (amides and N-acetylamino acids) in aqueous glasses at low temperature have been investigated using ESR spectroscopy. The radicals produced by electron attachment to amides, RC(OD)NDR', are found to act as hydrogen abstracting agents. For example, the propionamide electron adduct is found to abstract from its parent propionamide. Electron adducts of other amides investigated show similar behavior, except for the acetamide electron adduct, which does not abstract from its parent compound but does abstract from other amides. The tendency toward abstraction for amide electron adducts is compared to electron adducts of several carboxylic acids, ketones, aldehydes and esters. The comparison suggests the hydrogen abstraction tendency of the various deuterated electron adducts (DEAs) to be in the following order: aldehyde DEA > acid DEA ≈ ester DEA > ketone DEA > amide DEA. In basic glasses the hydrogen abstraction ability of the amide electron adducts is maintained until the concentration of base is increased sufficiently to convert the DEA to its anionic form, RC(O⁻)ND₂. In this form the hydrogen abstracting ability of the radical is greatly diminished. Similar results were found for the ester and carboxylic acid DEAs tested. (author)
11. Abstract Expression Grammar Symbolic Regression Science.gov (United States) Korns, Michael F. This chapter examines the use of Abstract Expression Grammars to perform the entire Symbolic Regression process without the use of Genetic Programming per se. The techniques explored produce a symbolic regression engine which has absolutely no bloat, which allows total user control of the search space and output formulas, and which is faster and more accurate than the engines produced in our previous papers using Genetic Programming.
The genome is an all-vector structure with four chromosomes plus additional epigenetic and constraint vectors, allowing total user control of the search space and the final output formulas. A combination of specialized compiler techniques, genetic algorithms, particle swarm, age-layered populations, plus discrete and continuous differential evolution, is used to produce an improved symbolic regression system. Nine base test cases, from the literature, are used to test the improvement in speed and accuracy. The improved results indicate that these techniques move us a big step closer toward future industrial-strength symbolic regression systems.
12. Spectrophotometry of Symbiotic Stars (Abstract) Science.gov (United States) Boyd, D. 2017-12-01 (Abstract only) Symbiotic stars are fascinating objects: complex binary systems comprising a cool red giant star and a small hot object, often a white dwarf, both embedded in a nebula formed by a wind from the giant star. UV radiation from the hot star ionizes the nebula, producing a range of emission lines. These objects have composite spectra with contributions from both stars plus the nebula, and these spectra can change on many timescales. Being moderately bright, they lend themselves well to amateur spectroscopy. This paper describes the symbiotic star phenomenon, shows how spectrophotometry can be used to extract astrophysically useful information about the nature of these systems, and gives results for three symbiotic stars based on the author's observations.
13. Mechanical Engineering Department technical abstracts International Nuclear Information System (INIS) Denney, R.M. 1982-01-01 The Mechanical Engineering Department publishes listings of technical abstracts twice a year to inform readers of the broad range of technical activities in the Department, and to promote an exchange of ideas. Details of the work covered by an abstract may be obtained by contacting the author(s).
Overall information about current activities of each of the Department's seven divisions precedes the technical abstracts.
14. Nuclear code abstracts (1975 edition) International Nuclear Information System (INIS) Akanuma, Makoto; Hirakawa, Takashi 1976-02-01 Nuclear Code Abstracts is compiled by the Nuclear Code Committee to exchange information on nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)
15. The Radioactive Waste Management Advisory Committee's advice to ministers on the establishment of scientific consensus on the interpretation and significance of the results of science programmes into radioactive waste disposal International Nuclear Information System (INIS) 1999-04-01 This document presents conclusions and recommendations on the establishment of scientific consensus on the interpretation and significance of the results of science programmes into radioactive waste disposal. The topics discussed include: the nature of science and its limitations; societal views of science and the radioactive waste problem; issues upon which consensus will be needed; evidence of past attempts at greater involvement of the public; the linking of scientific and social consensus; and communicating the nature of consensus to the public.
16. Results of the regional intercomparison on internal dosimetry – 2013: Interpretation of monitoring data for effective dose assessment due to internal exposure International Nuclear Information System (INIS) Rojo, A.M.; Puerta, N.; Gossio, S.; Gómez Parada, I. 2015-01-01 Internal dosimetry intercomparisons are essential for the verification of the models applied and the consistency of results.
To that aim, the 1st Regional Intercomparison Exercise was organized in 2005 in the frame of RLA 9/049. The results of this exercise led to the 2nd Regional Intercomparison Exercise in 2013, which was organized in the frame of RLA 9/066 and coordinated by the Autoridad Regulatoria Nuclear (ARN) of Argentina. Four simulated cases covering intakes of ¹³¹I, ¹³⁷Cs and tritium were proposed. The exercise had the participation of 19 centres from 13 countries. This report shows a complete analysis of the participants' results in this 2nd exercise, useful to test their skills and acquired knowledge, particularly in applying the IDEAS guidelines. It is important to highlight the improvement in the general performance of the participants. (authors)
17. Detailed computational fluid dynamics calculations in order to assess respective safety issues regarding existing nuclear power plant. Interpretation and presentation of results International Nuclear Information System (INIS) 2014-11-01 The results of the analysis showed that in the case of injection of cold water into the loops, water temperature stratification in the pipes will occur if all the RCPs (reactor coolant pumps) are stopped. This water temperature stratification can lead to temperature misreading by sensors if they are located improperly, and may lead to wrong operation of the PTS (pressurized thermal shock) protection system. Therefore, the results of the current analysis can be used to choose the correct locations for temperature sensors to be installed in order to determine whether the PTS protection system should be activated.
18. ABSTRACT African Journals Online (AJOL) in Imo state, the gender-perceived production constraints; the relative … to establishing if stereotyping such operations along gender lines will … difference in returns accruing to male and female small ruminant … The bias against sheep and … 19.
ABSTRACT Institute of Scientific and Technical Information of China (English) 2011-01-01 Commemorating the 100th Anniversary of the Revolution of 1911: Promoting the Deep Development of the CPC History and National History by the Research and Publication of the History of the Revolution of 1911 … Zhu Jiamu (4)
20. ABSTRACT African Journals Online (AJOL) Dr Obe inner forces (bending moments, shearing forces, etc.) are usually redistributed. Cracks that often appear within the walls of tall buildings during construction point to this phenomenon. It has also been recognized that foundation engineering is complicated. (1) Also, settlement has been accepted as stress-induced and time …
1. Abstract African Journals Online (AJOL) 1 Uyole Agricultural Research Institute, Ministry of Agriculture, Food Security and Co-operatives, P.O. Box 400 … from this study suggest that participation in the dairy market depended on access to both input and output … Socio-economics and …
2. Abstract African Journals Online (AJOL) a route of HIV transmission, with sex (p=0.003) and age (p=0.000) being highly … preventing HIV, prevention of mother-to-child transmission of HIV (PMTCT), health behaviour and … order to determine which infant feeding methods were perceived … breast diseases, cancer, insufficient milk, work and pregnancy.
3. Abstract African Journals Online (AJOL) ATTAMAH C. O. female x̄ = 3.50; difference in mean (dm) = -0.70). Male entrepreneurs … with the variance more on lack of information facilities (male x̄ = 2.28; female x̄ = 2.60; dm = -0.32) … Issa R. (2009). Climate Change and Livestock Production Frontier.
4. Abstract African Journals Online (AJOL) and Marketed in Morogoro Municipality, Tanzania … It was concluded that water adulteration of milk in … Despite the fact that informal milk marketing is … Least squares means and standard errors of means (SEM) for milk quality variables … 5.
ABSTRACT African Journals Online (AJOL) Objective: To determine the incidence of postoperative SSI after primary total hip arthroplasty. Design: A retrospective cohort study. … Conclusion: The risk of postoperative SSI after total hip arthroplasties is low in the African setting. Further investigation is … knee replacements for osteoarthritis. J. Bone Joint Surg.
6. Abstract Indian Academy of Sciences (India) 2017-03-10 … factors with low to moderate effect which have been described (Klareskog et al. 2006). The strongest … 2014). Several studies in GWAS and meta-analyses … All patients and healthy controls gave informed consent to … HLA-DQB1 encoded chain of MHC-II protein and HLA-DQA2 encoded chain of MHC-II …
7. Abstract African Journals Online (AJOL) Maru Shete business owned and controlled equally by the members that is targeted to break the … The Poverty Reduction Strategy Programme of OSHO is focused on supporting the … The Report of the United Nations (2005) indicates an improvement in the …
8. Abstract African Journals Online (AJOL) PROF. O. E. OSUAGWU An authoring system is software with pre-programmed elements which allows both programmers … doing things in all aspects of human endeavour. … learning is offered through an Internet … based or problem-based learning environment.
9. Abstract African Journals Online (AJOL) status of households in Owemi agricultural area of Imo state, isolate the determinants of … importation, the country has hardly ever provided enough food for her teeming … Third in importance is the sale of valuable items as a coping strategy. The type … while 70% preferred begging for food to dying of starvation. However, …
10. Abstracts International Nuclear Information System (INIS) 1989-09-01 The proceedings contain 106 papers, of which 2 fall under the INIS Scope.
One concerns seismic risk assessment at radioactive waste repositories in the U.S.; the other concerns the possibility of predicting earthquakes from changes in radon-222 levels in selected groundwater springs of northern Italy. (M.D.)
11. Abstract African Journals Online (AJOL) The study was conducted on high schools in the vicinity of Jimma University. Easy to … and community/Kebele leaders were sources of information and media … biological, health sciences, social sciences … and one or more elementary …
12. Abstract African Journals Online (AJOL) 1, 2 Department of Educational Technology, Obafemi Awolowo University, Ile-Ife, Nigeria oasofowora@yahoo.com. 46 … Geography teachers drawn from secondary schools in Osun State. … Elementary/Secondary Education Handbook an …
13. Abstract African Journals Online (AJOL) extent of the integration of the new technology in teaching and learning Geography in Nigerian … mounted pressure advocating for the removal … which reduce opportunities for developing … the fact that the subject has distanced itself from the …
14. Abstract African Journals Online (AJOL) examines the extent of women farmers' access to credit from Rural Banks (RBs) in the Upper East Region of Ghana. … In this regard, anybody who does not attempt to get credit from formal, semi-formal/endogenous or … District and the Builsa Community Rural Bank with its headquarters at Sandema in the Builsa District.
15. Abstract African Journals Online (AJOL) development of its economy presupposes entrepreneurial direction, which is … and independence. Ethiopian businessmen and women should take intelligent risks … opportunities. Such a strategy helps Ethiopian entrepreneurship to grow. … propensity, and high energy decreased as the cultural distance from the United …
16. Abstract Indian Academy of Sciences (India) 2017-03-10 …
autoimmune and infectious diseases including type 1 diabetes (Nakanishi and Inoko 2006) and chronic HCV infection (Tibbs et al. 1996). Additionally, it was suggested that it may be a shared epitope for RA (Li et al. 2013). According to these data and our findings, we can suggest that there is an interaction ...
17. Abstract
Directory of Open Access Journals (Sweden)
Maria Jose Carvalho de Souza Domingues
2003-01-01
Full Text Available The practice of teaching, in actuality, shows the necessity of teachers and students coming together to form a behavior that is different from the traditional model of teaching. The unity formed from various types of knowledge and the relation between theory and practice show themselves to be fundamental. Starting in 2002, and in search of this unity, a project that hoped to unify the disciplines taught in the second semester of the course in Administration was implemented. During the semester, a single work sought to relate the theories studied with the reality of an organization. Each professor evaluated the works from the point of view of his discipline, as well as the presentation, in general, of the group. It can be affirmed that seeking to bring together various types of knowledge necessarily passes to a rethinking of the postures of teachers and students.
18. ABSTRACTS
Institute of Scientific and Technical Information of China (English)
2012-01-01
The Current Trends in Global Mineral Exploration and Development LIU Shucken (Information Center of Ministry of Land and Resources, Beijing 100812, China) Abstract: This paper introduces the main issues that the global mineral exploration and development are faced with.
The main issues of focus include: mineral exploration has rapidly recovered from the short depression caused by the global financial crisis; most of the important mineral reserves have continued to grow; there has been continued rapid growth in mining development investment; the supply capacity of mineral products has increased; and mergers and acquisitions of mining companies are stirring, with multinational mergers & acquisitions becoming mainstream.
19. Abstract
African Journals Online (AJOL)
used standardised scales to measure the level of HIV stigma over time. A repeated .... with studies that focused so heavily 'on the beliefs and attitudes of those who are .... to harm the PLHA (e.g. ridicule, insults, blame), 8 items, alpha = 0.886; (ii) ... self based on HIV status, 5 items, alpha = 0.906; (iii) health care neglect – in ...
20. Abstract
African Journals Online (AJOL)
Getachew
realistic distribution of no-show data in modeling the cost function was considered using data collected from the .... the paper models the cost function based on realistic probability distributions based on the historical data is a .... Plot of Revenue generated vs. overbooking for the two-class case (at $500 Compensation Cost ...
1. Abstracts
International Nuclear Information System (INIS)
2001-01-01
The minutes of the Latin American Geology Meeting and the 3rd Uruguayan Meeting, organized by the National Direction of Mining and Geology (DINAMIGE) and the Uruguayan Geology Society (Uruguay), include topics such as: paleontology, sedimentation, stratigraphy, fossils, paleoclimatology, geotectonics, coal deposits, mineral resources, tectonic evolution of the Andes mountain ranges, the IGCP Project, environmental geology, hydrogeology, fuels, and geomorphology.
2. Abstracts
International Nuclear Information System (INIS)
2013-01-01
The PowerPoint presentation covers: hazard identification, characterization, exposure evaluation, risk (CAC, 1997; FAO, 2007), the European Food Safety Authority, food-risk organization, pathogen risk ranking, risk reduction, and governmental responsibility.
3. ABSTRACT
Efforts have also been successfully made to include the study of rock art in the school/college curriculum, so as to help develop awareness among students and the general public about the need to preserve this cultural heritage for posterity, and also to highlight its importance in the tourism industry. rock art and their ...
4. ABSTRACT
African Journals Online (AJOL)
BSN
A CHEMICAL STUDY OF AN INDIGENOUS KNOWLEDGE SYSTEM OF ... preservation of "kindirmo" with water and ethanol extracts of seeds and husk of cowpea for ... Such facilities are however not available to most peasant animal farmers.
5. Abstract
African Journals Online (AJOL)
on human development are sufficiently important to warrant an integration of HIV and .... interaction of HIV and malaria22. The discussions that follow further sub- .... HIV-MTCT and the paediatric management of children born of seropositive ...
6. Abstract
DEFF Research Database (Denmark)
Tafdrup, Oliver
2013-01-01
Published as part of Tidskrift's special issue on Adorno. http://tidskrift.dk/data/50/Aforismesamling.pdf
7. Clinicians' interpretations of point of care urine culture versus laboratory culture results: analysis from the four-country POETIC trial of diagnosis of uncomplicated urinary tract infection in primary care.
Science.gov (United States)
Hullegie, Saskia; Wootton, Mandy; Verheij, Theo J M; Thomas-Jones, Emma; Bates, Janine; Hood, Kerenza; Gal, Micaela; Francis, Nick A; Little, Paul; Moore, Michael; Llor, Carl; Pickles, Timothy; Gillespie, David; Kirby, Nigel; Brugman, Curt; Butler, Christopher C
2017-08-01
Urine culture at the point of care minimises delay between obtaining the sample and agar inoculation in a microbiology laboratory, and quantification and sensitivity results can be available more rapidly in primary care. To identify the degree to which clinicians' interpretations of a point-of-care-test (POCT) urine culture (Flexicult™ SSI-Urinary Kit) agree with laboratory culture in women presenting to primary care with symptoms of uncomplicated urinary tract infections (UTI). Primary care clinicians used the Flexicult™-POCT, recorded their findings and took a photograph of the result, which was interpreted by microbiology laboratory technicians. Urine samples were additionally processed in routine care laboratories. Cross tabulations were used to identify important differences in organism identification, quantification and antibiotic susceptibility between these three sources of data.
The influence of various laboratory definitions for UTI on culture was assessed. Primary care clinicians identified 202/289 urine samples (69.9%) as positive for UTI using the Flexicult™-POCT, whereas laboratory culture identified 94-190 (32.5-65.7%) as positive, depending on definition thresholds. 82.9% of samples identified positive for E. coli on laboratory culture were also considered positive for E. coli using the Flexicult™-POCT, and susceptibilities were reasonably concordant. There were major discrepancies between laboratory staff interpretation of Flexicult™ photographs, clinicians' interpretation of the Flexicult™ test, and laboratory culture results. Flexicult™-POCT overestimated the positivity rate of urine samples for UTI when laboratory culture was used as the reference standard. However, it is unclear whether point-of-care or laboratory based urine culture provides the most valid diagnostic information. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
8. Interpretation and clinical applications
International Nuclear Information System (INIS)
Higgins, C.B.
1987-01-01
This chapter discusses the factors to be kept in mind during routine interpretation of MR images. This includes the factors that determine contrast on standard spin-echo images and some distinguishing features between true lesions and artifactually simulated lesions. This chapter also indicates the standard protocols for MRI of various portions of the body. Finally, the current indications for MRI of various portions of the body are suggested; however, it is recognized that the indications for MRI are rapidly increasing and consequently, at the time of publication of this chapter, it is likely that many more applications will have become evident.
Interpretation of magnetic resonance (MR) images requires consideration of anatomy and tissue characteristics and extraction of artifacts resulting from motion and other factors.
9. Results of the Argentinian intercomparison on internal dosimetry – 2014. Interpretation of monitoring data for effective dose assessment due to internal exposure
International Nuclear Information System (INIS)
Rojo, A.M.; Puerta, N.; Gossio, S.; Gómez Parada, I.
2015-01-01
Internal dosimetry intercomparisons are essential for the verification of the models applied and the consistency of results. To that aim, in 2014 the National Intercomparison Exercise was organized and coordinated by the Internal Dosimetry Laboratory of the Nuclear Regulatory Authority (ARN) of Argentina. Four simulated cases covering intakes of 131I, 137Cs and tritium were proposed. The exercise included the participation of four internal dosimetry services from the nuclear power plants (NA-SA CNA and NA-SA CNE) and the CNEA Atomic Centres: Bariloche (CAB) and Ezeiza (CAE). This report shows a complete analysis of the participants' results in this exercise. (authors)
10. On the importance of fast scattering data for aluminium in the interpretation of results from H2O moderated lattice experiments
Energy Technology Data Exchange (ETDEWEB)
Fayers, F J; Terry, M J (General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset, United Kingdom)
1967-07-15
Aluminium is often used as a structural material or fuel cladding in lattice experiments with light water moderators. In particular most of the experiments with regular rod lattices of plutonium fuel have contained significant quantities of aluminium. This report examines the importance of scattering data for aluminium in leakage calculations for light water systems.
It is shown that some discrepancy exists between calculated plane moments and experimentally measured moments, which may be corrected by an 'ad hoc' adjustment of inelastic scattering data for aluminium. WIMS results are presented for some Battelle plutonium fuelled rod lattices, and it is shown that this adjustment of inelastic data leads to a noticeable correction for the predicted reactivities of these experiments. The influence of scattering data for aluminium on results for some other lattices of interest has been shown to be less important. (author)
11. Abstract concepts in grounded cognition
NARCIS (Netherlands)
Lakens, D.
2010-01-01
When people think about highly abstract concepts, they draw upon concrete experiences to structure their thoughts. For example, black knights in fairytales are evil, and knights in shining armor are good. The sensory experiences black and white are used to represent the abstract concepts of good and
12. On court interpreters' visibility
DEFF Research Database (Denmark)
Dubslaff, Friedel; Martinsen, Bodil
of the service they receive. Ultimately, the findings will be used for training purposes. Future - and, for that matter, already practising - interpreters as well as the professional users of interpreters ought to take the reality of the interpreters' work in practice into account when assessing the quality...... on the interpreter's interpersonal role and, in particular, on signs of the interpreter's visibility, i.e. active co-participation. At first sight, the interpreting assignment in question seems to be a short and simple routine task which would not require the interpreter to deviate from the traditional picture...
13. Single photon simultaneous K-shell ionization and K-shell excitation. I.
Theoretical model applied to the interpretation of experimental results on H2O
International Nuclear Information System (INIS)
Carniato, S.; Selles, P.; Andric, L.; Palaudoux, J.; Penent, F.; Lablanquie, P.; Žitnik, M.; Bučar, K.; Nakano, M.; Hikosaka, Y.; Ito, K.
2015-01-01
We present in detail a theoretical model that provides absolute cross sections for simultaneous core-ionization core-excitation (K−2V) and compare its predictions with experimental results obtained on the water molecule after photoionization by synchrotron radiation. Two resonances of different symmetries are assigned in the main K−2V peak, and comparable contributions from monopolar (direct shake-up) and dipolar (conjugate shake-up) core-valence excitations are identified. The main peak is observed with a much greater width than the total experimental resolution. This broadening is the signature of nuclear dynamics.
14. Technical abstracts: Mechanical engineering, 1990
International Nuclear Information System (INIS)
Broesius, J.Y.
1991-01-01
This document is a compilation of the published, unclassified abstracts produced by mechanical engineers at Lawrence Livermore National Laboratory (LLNL) during the calendar year 1990. Many abstracts summarize work completed and published in report form. These are UCRL-JC series documents, which include the full text of articles to be published in journals and of papers to be presented at meetings, and UCID reports, which are informal documents. Not all UCIDs contain abstracts: short summaries were generated when abstracts were not included. Technical Abstracts also provides descriptions of those documents assigned to the UCRL-MI (miscellaneous) category. These are generally viewgraphs or photographs presented at meetings. An author index is provided at the back of this volume for cross referencing.
15.
Summary of the results and interpretation of tritium and noble gas measurements on groundwater samples from the Perch Lake Basin Area
International Nuclear Information System (INIS)
Kotzer, T.G.
1999-02-01
Along the west-central margin of the Lower Perch Lake Basin, a limited number of groundwaters have been sampled from piezometers at depths of between 8 and 17 m and distances of between 100 and 900 m downgradient from their recharge location near Area A. Concentrations of tritium in these groundwaters varied between approximately 100 and 2800 TU. Measurements of dissolved gases in these groundwaters indicate concentrations of 4He and neon approximating those in recently recharged groundwaters; however, the concentrations of 3He are as much as 100 times higher, indicating the waters have accumulated tritiogenic 3He. Using the 3H/3He dating technique, groundwater residence times on the order of 29 ± 8 years and groundwater velocities on the order of 0.1 m/day have been calculated for the flow system in the middle sand unit between Area A recharge and Perch Lake. These results, although based on a very small number of groundwater analyses, are comparable to earlier estimates of groundwater residence times and velocities obtained using Darcy calculations, borehole dilution experiments and tracer-test results from previous hydrogeologic studies in the area. (author)
16. Self-healing of excavation-disturbed rocks in the near field of underground cavities - exemplary measurements in rock salt and interpretation of preliminary results
International Nuclear Information System (INIS)
Wieczorek, K.; Schwarzianeck, P.; Rothfuchs, T.
2001-01-01
Excavation disturbed zones develop in all kinds of rock as a consequence of the opening of cavities. Such zones are characterized by a change in hydraulic behaviour which can form a problem with regard to the sealing of waste disposal areas.
Rocks showing a plastic behaviour, like rock salt, have the potential of healing when the stress state which was disturbed by excavation returns to an advantageous state. If healing can reliably be predicted, the excavation disturbed zone may not form a long-term safety issue in rock salt. Investigations of permeability and stress state around lined and open excavations have been performed in order to relate hydraulic behaviour to stress state. First results which are presented here are promising. (authors)
17. A re-interpretation of √s = 8 TeV ATLAS results on electroweak supersymmetry production to explore general gauge mediated models
CERN Document Server
The ATLAS collaboration
2016-01-01
This document determines the constraints placed by the ATLAS experiment on general gauge mediated (GGM) supersymmetric models. The GGM parameters are chosen in such a way that the constraints from the observed Higgs mass are satisfied. Three varied parameters (μ, M2 and tan β) determine the phenomenology at the LHC, featuring the lightest wino-higgsino mixture neutralinos and charginos decaying to the gravitino and W, Z, Higgs bosons or photons. Constraints from existing ATLAS searches using the full Run 1 dataset of 20.3 fb⁻¹ at √s = 8 TeV and targeting a variety of final states with multiple leptons or photons are evaluated. Results of different analyses are statistically combined, providing stringent limits on the three theoretical parameters.
18. SU-E-J-102: Performance Variations Among Clinically Available Deformable Image Registration Tools in Adaptive Radiotherapy: How Should We Evaluate and Interpret the Result?
International Nuclear Information System (INIS)
Nie, K; Pouliot, J; Smith, E; Chuang, C
2015-01-01
Purpose: To evaluate the performance variations in commercial deformable image registration (DIR) tools for adaptive radiation therapy. Methods: Representative plans from three different anatomical sites, prostate, head-and-neck (HN) and cranial spinal irradiation (CSI) with L-spine boost, were included. Computerized deformed CT images were first generated using virtual DIR QA software (ImSimQA) for each case. The corresponding transformations served as the "reference". Three commercial software packages, MIMVista v5.5 and MIMMaestro v6.0, VelocityAI v2.6.2, and OnQ rts v2.1.15, were tested. The warped contours and doses were compared with the "reference" and among each other. Results: The performance in transferring contours was comparable among all three tools, with an average DICE coefficient of 0.81 across all organs. However, dose warping accuracy appeared to depend on the evaluation end points. Volume-based DVH comparisons were not sensitive enough to illustrate all the detailed variations, while isodose assessment on a slice-by-slice basis could be tedious. Point-based evaluation was over-sensitive, showing up to 30% hot/cold-spot differences. When the 3 mm/3% gamma analysis was adapted to the evaluation of dose warping, all three algorithms presented a reasonable level of equivalency, although one algorithm had over 10% of voxels not meeting this criterion for the HN case while another showed disagreement for the CSI case. Conclusion: Overall, our results demonstrated that evaluation based only on the performance of contour transformation cannot guarantee accuracy in dose warping, and that the apparent accuracy of dose warping depends on the evaluation methodology. As more DIR tools become available for clinical use, their performance can vary to a certain degree. A standard quality assurance criterion with clinical meaning, similar to the gamma index concept, should be established for DIR QA in the near future.
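The 3 mm/3% gamma analysis invoked in this abstract combines a distance-to-agreement tolerance with a dose-difference tolerance. The sketch below is a generic, simplified 1-D global gamma computation for illustration only; the dose profile and grid spacing are invented, and commercial DIR/QA tools implement this differently:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, spacing_mm, dta_mm=3.0, dd_frac=0.03):
    """Simplified global 1-D gamma index (3 mm / 3 % by default).

    ref_dose, eval_dose: 1-D dose arrays sampled on the same grid.
    The dose-difference criterion is normalised to the reference maximum
    (a "global" gamma); a point passes when gamma <= 1.
    """
    x = np.arange(len(ref_dose)) * spacing_mm
    dd = dd_frac * ref_dose.max()
    gammas = np.empty(len(ref_dose))
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dist_term = ((x - xi) / dta_mm) ** 2      # distance-to-agreement term
        dose_term = ((eval_dose - di) / dd) ** 2  # dose-difference term
        gammas[i] = np.sqrt((dist_term + dose_term).min())
    return gammas

# Invented 1-D profile; a uniform 2 % dose scaling passes 3 mm / 3 % everywhere.
ref = np.array([0.0, 1.0, 2.0, 4.0, 2.0, 1.0, 0.0])
ev = ref * 1.02
g = gamma_1d(ref, ev, spacing_mm=1.0)
pass_rate = float((g <= 1.0).mean())  # fraction of points with gamma <= 1
```

A pass rate is then simply the fraction of points with gamma at or below 1, which is how such results are usually summarised.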
19. Interpreting Impoliteness: Interpreters’ Voices
Directory of Open Access Journals (Sweden)
2017-11-01
Full Text Available Interpreters in the public sector in Norway interpret in a variety of institutional encounters, and the interpreters evaluate the majority of these encounters as polite. However, some encounters are evaluated as impolite, and they pose challenges when it comes to interpreting impoliteness. This issue raises the question of whether interpreters should take a stance on their own evaluation of impoliteness and whether they should interfere in communication. In order to find out more about how interpreters cope with this challenge, in 2014 a survey was sent to all interpreters registered in the Norwegian Register of Interpreters. The survey data were analyzed within the theoretical framework of impoliteness theory, using the notion of moral order as an explanatory tool in a close reading of interpreters' answers. The analysis shows that interpreters reported using a variety of strategies for interpreting impoliteness, including omissions and downtoning. However, the interpreters also gave examples of individual strategies for coping with impoliteness, such as interrupting and postponing interpreting. These strategies border on behavioral strategies and conflict with the Norwegian ethical guidelines for interpreting. In light of the ethical guidelines and actual practice, mapping and discussing different strategies used by interpreters might heighten interpreters' and interpreter-users' awareness of the role impoliteness can play in institutional interpreter-mediated encounters.
20. Loss of population levels of immunity to malaria as a result of exposure-reducing interventions: consequences for interpretation of disease trends.
Directory of Open Access Journals (Sweden)
Azra C Ghani
Full Text Available BACKGROUND: The persistence of malaria as an endemic infection and one of the major causes of childhood death in most parts of Africa has led to a radical new call for a global effort towards eradication. With the deployment of a highly effective vaccine still some years away, there has been an increased focus on interventions which reduce exposure to infection in the individual and, by reducing onward transmission, at the population level. The development of appropriate monitoring of these interventions requires an understanding of the timescales of their effect. METHODS & FINDINGS: Using a mathematical model for malaria transmission which incorporates the acquisition and loss of both clinical and parasite immunity, we explore the impact of the trade-off between reduction in exposure and decreased development of immunity on the dynamics of disease following a transmission-reducing intervention such as insecticide-treated nets. Our model predicts that initially rapid reductions in clinical disease incidence will be observed as transmission is reduced in a highly immune population. However, these benefits in the first 5-10 years after the intervention may be offset by a greater burden of disease decades later as immunity at the population level is gradually lost. The negative impact of having fewer immune individuals in the population can be counterbalanced either by the implementation of highly effective transmission-reducing interventions (such as the combined use of insecticide-treated nets and insecticide residual sprays) for an indefinite period, or by the concurrent use of a pre-erythrocytic stage vaccine or prophylactic therapy in children to protect those at risk from disease as immunity is lost in the population. CONCLUSIONS: Effective interventions will result in rapid decreases in clinical disease across all transmission settings while population-level immunity is maintained but may subsequently result in increases in clinical disease many ...
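The trade-off this abstract describes, where reduced exposure lowers disease now but also slows the boosting of immunity, can be caricatured in a few lines. The following is a toy difference-equation sketch with invented parameters, not the authors' transmission model:

```python
def incidence_series(beta, years=50, boost=0.2, decay=0.05, imm0=0.8):
    """Toy annual-step model: clinical incidence falls mostly on non-immune
    hosts, exposure boosts population immunity, and immunity wanes without
    boosting. All parameter values are invented for illustration."""
    imm, series = imm0, []
    for _ in range(years):
        series.append(beta * (1.0 - imm))                # clinical cases this year
        imm += boost * beta * (1.0 - imm) - decay * imm  # boosting vs. waning
        imm = min(max(imm, 0.0), 1.0)                    # keep fraction in [0, 1]
    return series

baseline = incidence_series(beta=1.0)  # untouched high-transmission setting
with_itn = incidence_series(beta=0.3)  # intervention cuts exposure to 30%
# Incidence drops sharply at first, then creeps back up as immunity wanes.
```

With these made-up numbers the intervened series starts far below baseline but more than doubles over the decades as the immune fraction declines, which is the qualitative pattern the abstract warns about.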
1. results
Directory of Open Access Journals (Sweden)
Salabura Piotr
2017-01-01
Full Text Available The HADES experiment at GSI is the only high-precision experiment probing nuclear matter in the beam energy range of a few AGeV. Pion, proton and ion beams are used to study rare dielectron and strangeness probes to diagnose properties of strongly interacting matter in this energy regime. Selected results from p + A and A + A collisions are presented and discussed.
2. On the physical interpretation of torsion-rotation parameters in methanol and acetaldehyde: Comparison of global fit and ab initio results
International Nuclear Information System (INIS)
Xu, L.; Lees, R.M.; Hougen, J.T.
1999-01-01
Equilibrium structural constants and certain torsion–rotation interaction parameters have been determined for methanol and acetaldehyde from ab initio calculations using GAUSSIAN 94. The substantial molecular flexing which occurs in going from the bottom to the top of the torsional potential barrier can be quantitatively related to coefficients of torsion–rotation terms having a (1 − cos 3γ) dependence on the torsional angle γ. The barrier height, six equilibrium structural constants characterizing the bottom of the potential well, and six torsion–rotation constants are all compared to experimental parameters obtained from global fits to large microwave and far-infrared data sets for methanol and acetaldehyde. The rather encouraging agreement between the Gaussian and global-fit results for methanol seems both to validate the accuracy of ab initio calculations of these parameters and to demonstrate that the physical origin of these torsion–rotation interaction terms in methanol lies primarily in structural relaxation with torsion. The less satisfactory agreement between theory and experiment for acetaldehyde requires further study. copyright 1999 American Institute of Physics
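The (1 − cos 3γ) dependence quoted in this abstract is the standard threefold hindered-rotor form, V(γ) = (V3/2)(1 − cos 3γ). A minimal numeric sketch; the 373 cm⁻¹ figure below is only the commonly quoted approximate torsional barrier for methanol, used here for illustration:

```python
import math

def torsional_potential(gamma_rad, barrier):
    """Threefold hindered-rotor potential V(gamma) = (V3/2) * (1 - cos 3*gamma):
    zero at the well bottom (gamma = 0) and equal to the barrier height V3
    at the top of the barrier (gamma = pi/3)."""
    return 0.5 * barrier * (1.0 - math.cos(3.0 * gamma_rad))

V3 = 373.0  # cm^-1; approximate methanol barrier, order of magnitude only
assert torsional_potential(0.0, V3) == 0.0                      # well bottom
assert abs(torsional_potential(math.pi / 3, V3) - V3) < 1e-9    # barrier top
```

The same (1 − cos 3γ) factor modulates the torsion–rotation coefficients discussed in the abstract, which is why the well-to-barrier structural relaxation maps directly onto those terms.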
3. Interpreting trial results following use of different intention-to-treat approaches for preventing attrition bias: a meta-epidemiological study protocol.
Science.gov (United States)
Dossing, Anna; Tarp, Simon; Furst, Daniel E; Gluud, Christian; Beyene, Joseph; Hansen, Bjarke B; Bliddal, Henning; Christensen, Robin
2014-09-26
When participants drop out of randomised clinical trials, as frequently happens, the intention-to-treat (ITT) principle does not apply, potentially leading to attrition bias. Data lost through patient dropout or lack of follow-up are statistically addressed by imputation, a procedure prone to bias. Deviations from the original definition of ITT are referred to as modified intention-to-treat (mITT). As yet, the impact of the potential bias associated with mITT has not been assessed. Our objective is to investigate the potential bias and disadvantages of performing mITT and to evaluate possible concerns when executing different mITT approaches in meta-analyses. Using meta-epidemiology on randomised trials considered less prone to bias (ie, with good internal validity) and assessing biological or targeted agents in patients with rheumatoid arthritis, we will meta-analyse data from 10 biological and targeted drugs based on collections of trials that would correspond to 10 individual meta-analyses. This study will enhance transparency for evaluating mITT treatment effects described in meta-analyses. The intended audience includes healthcare researchers, policymakers and clinicians. Results of the study will be disseminated by peer-reviewed publication. PROSPERO registration: CRD42013006702, 11 December 2013. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
4. Exogenous factors matter when interpreting the results of an impact evaluation: a case study of rainfall and child health programme intervention in Rwanda.
Science.gov (United States)
Mukabutera, Assumpta; Thomson, Dana R; Hedt-Gauthier, Bethany L; Atwood, Sidney; Basinga, Paulin; Nyirazinyoye, Laetitia; Savage, Kevin P; Habimana, Marcellin; Murray, Megan
2017-12-01
Public health interventions are often implemented at large scale, and their evaluation is difficult because they are usually multiple and their pathways to effect are complex and subject to modification by contextual factors. We assessed whether controlling for rainfall-related variables altered estimates of the efficacy of a health programme in rural Rwanda, i.e. whether rainfall had a quantifiable effect on intervention evaluation outcomes. We conducted a retrospective quasi-experimental study using previously collected cross-sectional data from the 2005 and 2010 Rwanda Demographic and Health Surveys (DHS), 2010 DHS oversampled data, monthly rainfall data collected from meteorological stations over the same period, and modelled output of long-term rainfall averages, soil moisture, and rain water run-off. Difference-in-differences models were used. Rainfall factors confounded the PIH intervention impact evaluation. When we adjusted our estimates of programme effect by controlling for a variety of rainfall variables, several effectiveness estimates changed by 10% or more. The analyses that did not adjust for rainfall-related variables underestimated the intervention effect on the prevalence of ARI by 14.3%, fever by 52.4% and stunting by 10.2%. Conversely, the unadjusted analysis overestimated the intervention's effect on diarrhoea by 56.5% and wasting by 80%. Rainfall-related patterns thus had a quantifiable effect on programme evaluation results, highlighting the importance and complexity of controlling for contextual factors in quasi-experimental evaluations. © 2017 John Wiley & Sons Ltd.
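The difference-in-differences estimator behind this kind of evaluation reduces to one line: the before/after change in the treated group minus the before/after change in the control group. A minimal sketch with hypothetical prevalence figures, not the study's data:

```python
def did_estimate(treated_before, treated_after, control_before, control_after):
    """Difference-in-differences: change in the treated group minus the
    change in the control group (here, in percentage points)."""
    return (treated_after - treated_before) - (control_after - control_before)

# Hypothetical figures: ARI prevalence (%) falls from 30 to 18 in the
# programme area and from 29 to 25 in the comparison area.
effect = did_estimate(30.0, 18.0, 29.0, 25.0)  # -8.0 percentage points
```

Confounders such as rainfall enter exactly here: if rain suppresses ARI more in one area than the other, the control-group change no longer captures the treated group's counterfactual trend, which is why the regression versions of this estimator add rainfall covariates.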
5. The Stable Isotopes of Carbon and Nitrogen in the Bones of Domestic Animals from three Cities of the European Part of Russia: First Results and Interpretations
Directory of Open Access Journals (Sweden)
Yavorskaya Liliya Vyacheslavovna
2015-03-01
Full Text Available The paper outlines the results of the first purposeful research on isotopic composition (carbon 13C and nitrogen 15N) in the bone collagen of domestic and wild animals from medieval towns in the European part of Russia. The published information on δ13C and δ15N was obtained from 61 samples from the osteological collections of Yaroslavl, Rostov and Bolgar. The average values of the carbon isotope in cattle bones are almost the same in all three cities. By contrast, these values for horses and pigs from Rostov and Bolgar are higher than for Yaroslavl animals. An unusual similarity of δ13C in the bones of sheep, camels and dogs from the Bolgar collection was found. The comparative analysis of the δ13C values in bones of domestic and wild animals allowed us to propose the hypothesis that sheep, camels and dogs appeared in Bolgar from the southern arid areas. The data on δ15N showed an inexplicably high accumulation of the nitrogen stable isotope in sheep and camel bones from the Bolgar collection and in beaver bones from the Rostov samples. This is probably due to the peculiarities of the diet of these dogs, enriched by the entrails of domestic ungulates or fish. The minimum values of δ15N in the bones of dogs from Bolgar reflect the specific diet of herding dogs with a minimal volume of meat.
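The δ13C and δ15N values reported in this abstract follow standard per-mil delta notation, δ = (R_sample/R_standard − 1) × 1000, where R is the heavy-to-light isotope ratio (13C/12C against the VPDB standard, 15N/14N against atmospheric N2). A small sketch; the numeric reference ratio below is the commonly cited approximate value:

```python
def delta_permil(r_sample, r_standard):
    """Per-mil delta notation: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.0112372  # approximate 13C/12C ratio of the VPDB reference standard
# A sample whose 13C/12C ratio is 2 % below the standard sits at -20 permil,
# roughly the order observed for terrestrial C3-diet bone collagen.
d13c = delta_permil(R_VPDB * 0.98, R_VPDB)
```

The standard itself sits at 0 permil by construction, so negative values indicate depletion in the heavy isotope relative to the reference.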
6. Ethyl glucuronide in vitreous humor and blood postmortem specimens: analysis by liquid chromatography-electrospray tandem mass spectrometry and interpreting results of neo-formation of ethanol
Directory of Open Access Journals (Sweden)
Sara Vezzoli
2015-03-01
Full Text Available Introduction. The determination of ethyl glucuronide (EtG), a stable and sensitive marker that is specific to alcohol intake, finds many applications in both the forensic toxicology and clinical fields. Aim. The aim of the study is to examine the possibility of using a cadaveric biological matrix, vitreous humor (VH), to determine EtG as a marker of recent ethanol use. Methods. The blood, taken from the femoral vein, and the VH were obtained from 63 autopsy cases. Analysis of the EtG was performed using an LC/MS/MS system. Analyses of the ethanol and putrefaction biomarkers, such as acetaldehyde and n-propanol, were performed using the HS-GC/FID technique in both matrices. Results. In 17 cases, both ethanol and EtG were absent in both matrices. Nineteen cases presented ethanol in blood from 0.05 to 0.30 g/L, EtG blood concentrations from 0.02 to 3.27 mg/L, and EtG VH concentrations from 0.01 to 2.88 mg/L. Thirteen cases presented ethanol in blood > 0.05 g/L but EtG concentrations in blood and VH lower than 0.01 mg/L; of these, 8 samples presented acetaldehyde and n-propanol in blood or VH, i.e. putrefaction indicators were identified. Fourteen cases presented ethanol in blood > 0.46 g/L and EtG concentrations in blood and VH higher than 0.01 mg/L. Conclusions. The determination of EtG in biological material is important in those cases where the intake of ethanol appears doubtful, as it allows us to exclude the possibility of any post-mortem formation of ethanol.
7. The narrative of 'equality of chances' as an approach to interpreting PIAAC results on perceived political efficacy, social trust and volunteering and the quest for political literacy
Directory of Open Access Journals (Sweden)
Anke Grotlüschen
2017-05-01
Full Text Available The article focuses on the theoretically and empirically addressed question of whether workforce literacy strategies in research and policies may tend to exclude relevant fields of literacy which have emancipatory chances for participants but which regularly fail to include low-qualified or low-literate adults (Hufer, 2013), namely the area of basic civic education or political literacy. First, a theoretical discussion makes use of recent publications. The relevance of basic civic education is discussed using contemporary theories, which point at a crisis of democracy and explain this by the spread of income and capital (Piketty, 2014) and its legitimation (Rosanvallon, 2013). Further detail is provided by using Rosanvallon's criticism of the term 'equality of chances'. The everyday unfairness, covered by the narrative of equal chances, leads to people's disengagement from reciprocal relations and the disintegration of solidarity within a society. This theoretical approach is then supplemented by empirical data. The empirical research questions are: Do adults with low literacy skills agree less often on feelings of political efficacy and social trust than adults with high literacy skills? Do they engage less often in volunteering than adults with high literacy skills? This is based on the PIAAC 2012 dataset, which relates literacy on the one hand with variables of political efficacy, social trust and volunteering on the other. Results are compared with volunteer and youth surveys. Furthermore, the connection between a 'Nouvelle Droite' (contemporary right-wing populism) and people's low feelings of political efficacy is reflected upon in order to refute the stereotype that marginalized groups automatically become voters of right-wing populists.
8. IRECCSEM: Evaluating Clare Basin potential for onshore carbon sequestration using magnetotelluric data (Preliminary results). New approaches applied for processing, modeling and interpretation
Science.gov (United States)
Campanya i Llovet, J.; Ogaya, X.; Jones, A. G.; Rath, V.
2014-12-01
The IRECCSEM project (www.ireccsem.ie) is a Science Foundation Ireland Investigator Project that is funded to evaluate Ireland's potential for onshore carbon sequestration in saline aquifers by integrating new electromagnetic data with existing geophysical and geological data. The main goals of the project are to determine porosity-permeability values of the potential reservoir formation as well as to evaluate the integrity of the seal formation. During the summer of 2014 a magnetotelluric (MT) survey was carried out in the Clare basin (Ireland). A total of 140 sites were acquired, including audiomagnetotelluric (AMT), broadband magnetotelluric (BBMT) and long-period magnetotelluric (LMT) data. The nominal spacing between sites is 0.6 km for AMT sites, 1.2 km for BBMT sites and 8 km for LMT sites. To evaluate the potential for carbon sequestration of the Clare basin, three advances in geophysical methodology related to electromagnetic techniques were applied. First, processing of the MT data was improved following the recently published ELICIT methodology. Secondly, during the inversion process, the electrical resistivity distribution of the subsurface was constrained by combining three different tensor relationships: impedances (Z), induction arrows (TIP) and multi-site horizontal magnetic transfer functions (HMT). Results from synthetic models were used to evaluate the sensitivity and properties of each tensor relationship. Finally, a computer code was developed that employs a stabilized least-squares approach to estimate the cementation exponent in the generalized Archie law formulated by Glover (2010). This allows relating MT-derived electrical resistivity models to porosity distributions. The final aim of this procedure is to generalize the porosity-permeability values measured in the boreholes to regional scales. This methodology will contribute to the evaluation of possible sequestration targets in the study area.
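The last step of the abstract hinges on Archie's relation between bulk resistivity and porosity. As a rough illustration only, the sketch below inverts the classical single-conducting-phase Archie law (not Glover's generalized form used in the project) with made-up parameter values:

```python
# Toy sketch: classical Archie law, rho = rho_w * phi**(-m).
# rho_w (pore-fluid resistivity) and m (cementation exponent) below are
# illustrative assumptions, not values from the IRECCSEM survey.

def porosity_from_resistivity(rho, rho_w, m):
    """Invert Archie's law to estimate porosity phi from bulk resistivity rho."""
    return (rho_w / rho) ** (1.0 / m)

# e.g. a 50 ohm·m formation with 0.5 ohm·m brine and m = 2:
phi = porosity_from_resistivity(rho=50.0, rho_w=0.5, m=2.0)
print(round(phi, 3))  # 0.1, i.e. about 10% porosity
```

Estimating m itself, which is what the abstract's least-squares code does, would additionally require co-located resistivity models and borehole porosity measurements.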
9. College Students' Interpretation of Research Reports on Group Differences: The Tall-Tale Effect
Science.gov (United States)
Hogan, Thomas P.; Zaboski, Brian A.; Perry, Tiffany R.
2015-01-01
How does the student untrained in advanced statistics interpret results of research that reports a group difference? In two studies, statistically untrained college students were presented with abstracts or professional associations' reports and asked for estimates of scores obtained by the original participants in the studies. These estimates…
10. Ad Oculos. Images, Imagination and Abstract Thinking
Directory of Open Access Journals (Sweden)
Alessandra Cirafici
2018-03-01
Full Text Available The unusual edition of the Elements of Euclid released for publication in 1847 by Oliver Byrne offers the occasion to suggest a few elements for discussion on the uniqueness of the ‘representation’ of geometric-mathematical thinking, and more generally of abstract thinking, enshrined in its ‘nature of a pure imaginative vision able to connect the intelligible with the tangible’. The purpose is, thus, a reasoning on images and communicative artefacts that, when articulated, provide different variations of the idea of ‘transcription’ of complex theoretical structures from one language (that of abstract logic) to another (that of sensory experience), with a view to facilitating, easing and making more accurate the noetic process. These are images able over time to facilitate the understanding of complex and abstract theoretical principles, since they are able to show them in an extremely concrete way, ad oculos, and which at some points could reveal the horizons of art interpretation to otherwise inscrutable and figuratively meaningless formulas.
11. Interpretation of the results from individual monitoring of workers at the Nuclear Fuel Fabrication Facility, Brazil; Interpretacao de resultados de monitoracao individual interna da Fabrica de Combustivel Nuclear - FCN
Energy Technology Data Exchange (ETDEWEB)
Castro, Marcelo Xavier de
2005-07-01
In nuclear fuel fabrication facilities, workers are exposed to different compounds of enriched uranium. Although in this kind of facility the main route of intake is inhalation, ingestion may occur in some situations, as may a mixture of both. The interpretation of the bioassay data is very complex, since it is necessary to take into account all the different parameters, which is a big challenge. Due to the high cost of the individual monitoring programme for internal dose assessment, usually only one type of measurement is assigned in routine monitoring programmes. In complex situations like the one described in this study, where several parameters can compromise the accuracy of the bioassay interpretation, it is necessary to use a combination of techniques to evaluate the internal dose. According to ICRP 78 (1997), the general order of preference of measurement methodologies in terms of accuracy of interpretation is: body activity measurement, excreta analysis and personal air sampling. Results of monitoring of the working environment may provide information that assists in the interpretation of particle size, chemical form, solubility and date of intake. A group of fifteen workers from the controlled area of the studied nuclear fuel fabrication facility was selected to evaluate the internal dose using all the different available techniques during a certain period. The workers were monitored for determination of uranium content in the daily urinary and faecal excretion (collected over a period of 3 consecutive days), chest counting and personal air sampling. The results have shown that at least two types of sensitive techniques must be used, since there are some sources of uncertainty in the bioassay interpretation, such as the intake of mixtures of uranium compounds and different routes of intake. The combination of urine and faeces analysis has shown to be the most appropriate methodology for assessing internal dose in this situation. The chest counting methodology has not shown
12. From Interpreter to Logic Engine by Defunctionalization
DEFF Research Database (Denmark)
Biernacki, Dariusz; Danvy, Olivier
2003-01-01
Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003) where it was applied to the lam...
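Defunctionalization, the technique named in this abstract, replaces a program's higher-order functions (here, continuations) with first-order data plus an explicit dispatch loop, which is what makes the result look like an abstract machine. A toy Python sketch for addition expressions only (not the article's Prolog engine) illustrates the idea:

```python
# Toy illustration of defunctionalization (NOT the article's Prolog engine):
# an expression is either an int or a pair (lhs, rhs) meaning lhs + rhs.

def eval_k(expr, k):
    """Higher-order style: the continuation k is a Python closure."""
    if isinstance(expr, int):
        return k(expr)
    lhs, rhs = expr
    return eval_k(lhs, lambda v1: eval_k(rhs, lambda v2: k(v1 + v2)))

def eval_d(expr, k=('halt',)):
    """Defunctionalized style: continuations are first-order tuples,
    dispatched by an explicit loop -- the skeleton of an abstract machine."""
    while True:
        if isinstance(expr, int):
            tag = k[0]
            if tag == 'halt':
                return expr
            elif tag == 'add-rhs':       # left value ready: evaluate right side
                _, rhs, k_next = k
                expr, k = rhs, ('add-done', expr, k_next)
            else:                        # 'add-done': both values ready, add them
                _, v1, k_next = k
                expr, k = v1 + expr, k_next
        else:
            lhs, rhs = expr
            expr, k = lhs, ('add-rhs', rhs, k)

assert eval_k(((1, 2), (3, 4)), lambda v: v) == eval_d(((1, 2), (3, 4))) == 10
```

Each lambda in `eval_k` corresponds to one tuple constructor in `eval_d`, and applying a continuation becomes a branch of the dispatch.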
13. From Interpreter to Logic Engine by Defunctionalization
DEFF Research Database (Denmark)
Biernacki, Dariusz; Danvy, Olivier
2004-01-01
Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003) where it was applied to the lam...
14. From Interpreter to logic Engine by Defunctionalization
DEFF Research Database (Denmark)
Biernacki, Dariusz; Danvy, Olivier
2003-01-01
Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003) where it was applied to the lam...
15. Knowledge acquisition for temporal abstraction.
Science.gov (United States)
Stein, A; Musen, M A; Shahar, Y
1996-01-01
Temporal abstraction is the task of detecting relevant patterns in data over time. The knowledge-based temporal-abstraction method uses knowledge about a clinical domain's contexts, external events, and parameters to create meaningful interval-based abstractions from raw time-stamped clinical data. In this paper, we describe the acquisition and maintenance of domain-specific temporal-abstraction knowledge. Using the PROTEGE-II framework, we have designed a graphical tool for acquiring temporal knowledge directly from expert physicians, maintaining the knowledge in a sharable form, and converting the knowledge into a suitable format for use by an appropriate problem-solving method. In initial tests, the tool offered significant gains in our ability to rapidly acquire temporal knowledge and to use that knowledge to perform automated temporal reasoning.
16. How to Interpret Abnormal Pap Smear Results
Science.gov (United States)
17. Interpreting OPERA results on superluminal neutrino
CERN Document Server
Giudice, Gian F; Strumia, Alessandro
2012-01-01
OPERA has claimed the discovery of superluminal propagation of neutrinos. We analyze the consistency of this claim with previous tests of special relativity. We find that reconciling the OPERA measurement with information from SN1987a and from neutrino oscillations requires stringent conditions. The superluminal limit velocity of neutrinos must be nearly flavor independent, must decrease steeply in the low-energy domain, and its energy dependence must depart from a simple power law. We construct illustrative models that satisfy these conditions, by introducing Lorentz violation in a sector with light sterile neutrinos. We point out that, quite generically, electroweak quantum corrections transfer the information of superluminal neutrino properties into Lorentz violations in the electron and muon sector, in apparent conflict with experimental data.
18. Graphical interpretation of numerical model results
International Nuclear Information System (INIS)
Drewes, D.R.
1979-01-01
Computer software has been developed to produce high quality graphical displays of data from a numerical grid model. The code uses an existing graphical display package (DISSPLA) and overcomes some of the problems of both line-printer output and traditional graphics. The software has been designed to be flexible enough to handle arbitrarily placed computation grids and a variety of display requirements
19. An abstract approach to music.
Energy Technology Data Exchange (ETDEWEB)
Kaper, H. G.; Tipei, S.
1999-04-19
In this article we have outlined a formal framework for an abstract approach to music and music composition. The model is formulated in terms of objects that have attributes, obey relationships, and are subject to certain well-defined operations. The motivation for this approach uses traditional terms and concepts of music theory, but the approach itself is formal and uses the language of mathematics. The universal object is an audio wave; partials, sounds, and compositions are special objects, which are placed in a hierarchical order based on time scales. The objects have both static and dynamic attributes. When we realize a composition, we assign values to each of its attributes: a (scalar) value to a static attribute, an envelope and a size to a dynamic attribute. A composition is then a trajectory in the space of aural events, and the complex audio wave is its formal representation. Sounds are fibers in the space of aural events, from which the composer weaves the trajectory of a composition. Each sound object in turn is made up of partials, which are the elementary building blocks of any music composition. The partials evolve on the fastest time scale in the hierarchy of partials, sounds, and compositions. The ideas outlined in this article are being implemented in a digital instrument for additive sound synthesis and in software for music composition. A demonstration of some preliminary results has been submitted by the authors for presentation at the conference.
20. Handedness shapes children's abstract concepts.
Science.gov (United States)
Casasanto, Daniel; Henetz, Tania
2012-03-01
Can children's handedness influence how they represent abstract concepts like kindness and intelligence? Here we show that from an early age, right-handers associate rightward space more strongly with positive ideas and leftward space with negative ideas, but the opposite is true for left-handers. In one experiment, children indicated where on a diagram a preferred toy and a dispreferred toy should go. Right-handers tended to assign the preferred toy to a box on the right and the dispreferred toy to a box on the left. Left-handers showed the opposite pattern. In a second experiment, children judged which of two cartoon animals looked smarter (or dumber) or nicer (or meaner). Right-handers attributed more positive qualities to animals on the right, but left-handers to animals on the left. These contrasting associations between space and valence cannot be explained by exposure to language or cultural conventions, which consistently link right with good. Rather, right- and left-handers implicitly associated positive valence more strongly with the side of space on which they can act more fluently with their dominant hands. Results support the body-specificity hypothesis (Casasanto, 2009), showing that children with different kinds of bodies think differently in corresponding ways. Copyright © 2011 Cognitive Science Society, Inc.
1. Mathematical Abstraction: Constructing Concept of Parallel Coordinates
Science.gov (United States)
Nurhasanah, F.; Kusumah, Y. S.; Sabandar, J.; Suryadi, D.
2017-09-01
Mathematical abstraction is an important process in teaching and learning mathematics, so pre-service mathematics teachers need to understand and experience this process. One of the theoretical-methodological frameworks for studying this process is Abstraction in Context (AiC). Based on this framework, the abstraction process comprises observable epistemic actions, Recognition, Building-With, Construction, and Consolidation, called the RBC + C model. This study investigates and analyzes how pre-service mathematics teachers constructed and consolidated the concept of Parallel Coordinates in a group discussion. It uses the AiC framework for analyzing the mathematical abstraction of a group of four pre-service teachers learning Parallel Coordinates concepts. The data were collected through video recording, students’ worksheets, a test, and field notes. The result shows that the students’ prior knowledge related to the concept of the Cartesian coordinate system has a significant role in the process of constructing the Parallel Coordinates concept as new knowledge. The consolidation process is influenced by the social interaction between group members. The abstraction processes taking place in this group were dominated by empirical abstraction, which emphasizes identifying characteristics of manipulated or imagined objects during the processes of recognizing and building-with.
2. Abstracting audit data for lightweight intrusion detection
KAUST Repository
Wang, Wei
2010-01-01
High speed of processing massive audit data is crucial for an anomaly Intrusion Detection System (IDS) to achieve real-time performance during the detection. Abstracting audit data is a potential solution to improve the efficiency of data processing. In this work, we propose two strategies of data abstraction in order to build a lightweight detection model. The first strategy is exemplar extraction and the second is attribute abstraction. Two clustering algorithms, Affinity Propagation (AP) as well as traditional k-means, are employed to extract the exemplars, and Principal Component Analysis (PCA) is employed to abstract important attributes (a.k.a. features) from the audit data. Real HTTP traffic data collected in our institute as well as KDD 1999 data are used to validate the two strategies of data abstraction. The extensive test results show that the process of exemplar extraction significantly improves the detection efficiency and has a better detection performance than PCA in data abstraction. © 2010 Springer-Verlag.
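The exemplar-extraction strategy in this abstract can be pictured with a minimal stand-in: summarize many audit records by a few cluster centers and keep only those as the detection model's reference points. The sketch below uses a tiny hand-rolled k-means (the paper also evaluates Affinity Propagation, not shown here); the data and parameters are invented for illustration:

```python
# Toy sketch of "exemplar extraction": replace a dataset by k cluster
# exemplars (here, k-means centroids via a small Lloyd's-algorithm loop).
# The data points and k are illustrative, not audit data from the paper.
import random

def kmeans_exemplars(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initialize from the data
    for _ in range(iters):
        # assign each point to its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            groups[i].append(p)
        # recompute each center as the mean of its group
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

data = [(0.1, 0.2), (0.0, 0.1), (5.0, 5.1), (5.2, 4.9)]
exemplars = kmeans_exemplars(data, 2)   # two exemplars summarize four records
```

A detection model would then compare new audit records against the exemplars only, rather than against the full dataset.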
3. Association between Radiologists' Experience and Accuracy in Interpreting Screening Mammograms
Directory of Open Access Journals (Sweden)
Maristany Maria-Teresa
2008-04-01
4. 13th International Mass Spectrometry Conference. Book of Abstracts
International Nuclear Information System (INIS)
1994-01-01
The collection contains abstracts of several hundred papers presented at the international conference on new research and development results and applications of mass spectrometry. Abstracts falling into the INIS scope were indexed separately in the INIS database. (Roboz, P.)
5. 13th International Mass Spectrometry Conference. Book of Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
1994-12-31
The collection contains abstracts of several hundred papers presented at the international conference on new research and development results and applications of mass spectrometry. Abstracts falling into the INIS scope were indexed separately in the INIS database. (Roboz, P.).
6. Genre and Interpretation
DEFF Research Database (Denmark)
Auken, Sune
2015-01-01
Despite the immensity of genre studies as well as studies in interpretation, our understanding of the relationship between genre and interpretation is sketchy at best. The article attempts to unravel some of the intricacies of that relationship through an analysis of the generic interpretation carrie...
7. Engineering Definitional Interpreters
DEFF Research Database (Denmark)
Midtgaard, Jan; Ramsay, Norman; Larsen, Bradford
2013-01-01
A definitional interpreter should be clear and easy to write, but it may run 4--10 times slower than a well-crafted bytecode interpreter. In a case study focused on implementation choices, we explore ways of making definitional interpreters faster without expending much programming effort. We imp...
8. Construct Abstraction for Automatic Information Abstraction from Digital Images
Science.gov (United States)
2006-05-30
objects and features and the names of objects of objects and features. For example, in Figure 15 the parts of the fish could be named the ‘mouth’ ... of abstraction and generality. For example, an algorithm might usefully find a polygon (blob) in an image and calculate numbers such as the
9. Begriffsverwirrung? Interpretation Analyse Bedeutung Applikation
Directory of Open Access Journals (Sweden)
Mayr, Jeremia Josef M.
2017-11-01
Full Text Available Empirical research on the reception of biblical texts confronts scientific exegesis with valid and challenging requests and demands. The hermeneutic question of the compatibility of interpretations resulting from different contexts (e.g. scientific exegesis and ordinary readers' exegesis) plays an important role. Taking these requests seriously by coherently restructuring fundamental and central aspects of the theory of scientific interpretation, the present article attempts to offer a stimulating approach for further investigation.
10. The Role of Representations in Executive Function: Investigating a Developmental Link Between Flexibility and Abstraction
Directory of Open Access Journals (Sweden)
Maria eKharitonova
2011-11-01
11. Mechanical Engineering Department technical abstracts
International Nuclear Information System (INIS)
1984-01-01
The Mechanical Engineering Department publishes abstracts twice a year to inform readers of the broad range of technical activities in the Department, and to promote an exchange of ideas. Details of the work covered by an abstract may be obtained by contacting the author(s). General information about the current role and activities of each of the Department's seven divisions precedes the technical abstracts. Further information about a division's work may be obtained from the division leader, whose name is given at the end of each divisional summary. The Department's seven divisions are as follows: Nuclear Test Engineering Division, Nuclear Explosives Engineering Division, Weapons Engineering Division, Energy Systems Engineering Division, Engineering Sciences Division, Magnetic Fusion Engineering Division and Materials Fabrication Division
12. Abstraction by Set-Membership
DEFF Research Database (Denmark)
Mödersheim, Sebastian Alexander
2010-01-01
The abstraction and over-approximation of protocols and web services by a set of Horn clauses is a very successful method in practice. It has, however, limitations for protocols and web services that are based on databases of keys, contracts, or even access rights, where revocation is possible, so that the set of true facts does not monotonically grow with the transitions. We extend the scope of these over-approximation methods by defining a new way of abstraction that can handle such databases, and we formally prove that the abstraction is sound. We realize a translator from a convenient specification language to standard Horn clauses and use the verifier ProVerif and the theorem prover SPASS to solve them. We show by a number of examples that this approach is practically feasible for a wide variety of verification problems of security protocols and web services.
13. Elements of abstract harmonic analysis
CERN Document Server
Bachman, George
2013-01-01
Elements of Abstract Harmonic Analysis provides an introduction to the fundamental concepts and basic theorems of abstract harmonic analysis. In order to give a reasonably complete and self-contained introduction to the subject, most of the proofs have been presented in great detail thereby making the development understandable to a very wide audience. Exercises have been supplied at the end of each chapter. Some of these are meant to extend the theory slightly while others should serve to test the reader's understanding of the material presented. The first chapter and part of the second give
14. From Interpreter to Logic Engine by Defunctionalization
DEFF Research Database (Denmark)
Biernacki, Dariusz; Danvy, Olivier
2004-01-01
Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003) where it was applied...
15. From Interpreter to Logic Engine by Defunctionalization
DEFF Research Database (Denmark)
Biernacki, Dariusz; Danvy, Olivier
2003-01-01
Starting from a continuation-based interpreter for a simple logic programming language, propositional Prolog with cut, we derive the corresponding logic engine in the form of an abstract machine. The derivation originates in previous work (our article at PPDP 2003) where it was applied...
16. Teaching abstraction in introductory courses
NARCIS (Netherlands)
Koppelman, Herman; van Dijk, Betsy
Abstraction is viewed as a key concept in computer science. It is not only an important concept but also one that is difficult to master. This paper focuses on the problems that novices experience when they first encounter this concept. Three assignments from introductory courses are analyzed, to
17. IRAP 2006, Book of Abstracts
International Nuclear Information System (INIS)
2006-01-01
This publication, related to Hacettepe University, the Turkish Atomic Energy Authority, The Scientific and Technological Research Council of Turkey, the International Atomic Energy Agency, CEA-Saclay, CEA-Saclay Drecam, ANKAmall Shopping Center and Ion Beam Applications Industrial, covers the conference held in Antalya, Turkey, 23-28 September 2006. A separate abstract was prepared for each paper.
18. IRAP 2006, Book of Abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2006-07-01
This publication, related to Hacettepe University, the Turkish Atomic Energy Authority, The Scientific and Technological Research Council of Turkey, the International Atomic Energy Agency, CEA-Saclay, CEA-Saclay Drecam, ANKAmall Shopping Center and Ion Beam Applications Industrial, covers the conference held in Antalya, Turkey, 23-28 September 2006. A separate abstract was prepared for each paper.
19. Interpreting land records
CERN Document Server
Wilson, Donald A
2014-01-01
Base retracement on solid research and historically accurate interpretation Interpreting Land Records is the industry's most complete guide to researching and understanding the historical records germane to land surveying. Coverage includes boundary retracement and the primary considerations during new boundary establishment, as well as an introduction to historical records and guidance on effective research and interpretation. This new edition includes a new chapter titled "Researching Land Records," and advice on overcoming common research problems and insight into alternative resources wh
20. Interpretation of galaxy counts
International Nuclear Information System (INIS)
Tinsely, B.M.
1980-01-01
New models are presented for the interpretation of recent counts of galaxies to 24th magnitude, and predictions are shown to 28th magnitude for future comparison with data from the Space Telescope. The results supersede earlier, more schematic models by the author. Tyson and Jarvis found in their counts a ''local'' density enhancement at 17th magnitude, on comparison with the earlier models; the excess is no longer significant when a more realistic mixture of galaxy colors is used. Bruzual and Kron's conclusion that Kron's counts show evidence for evolution at faint magnitudes is confirmed, and it is predicted that some 23d magnitude galaxies have redshifts greater than unity. These may include spheroidal systems, elliptical galaxies, and the bulges of early-type spirals and S0's, seen during their primeval rapid star formation
1. Visually defining and querying consistent multi-granular clinical temporal abstractions.
Science.gov (United States)
Combi, Carlo; Oliboni, Barbara
2012-02-01
the component abstractions. Moreover, we propose a visual query language where different temporal abstractions can be composed to build complex queries: temporal abstractions are visually connected through the usual logical connectives AND, OR, and NOT. The proposed visual language allows one to simply define temporal abstractions by using intuitive metaphors, and to specify temporal intervals related to abstractions by using different temporal granularities. The physician can interact with the designed and implemented tool by point-and-click selections, and can visually compose queries involving several temporal abstractions. The evaluation of the proposed granularity-related metaphors consisted of two parts: (i) solving 30 interpretation exercises by choosing the correct interpretation of a given screenshot representing a possible scenario, and (ii) solving a complex exercise by visually specifying through the interface a scenario described only in natural language. The exercises were done by 13 subjects. The percentages of correct answers to the interpretation exercises differed slightly across the considered metaphors (54.4 for the striped wall, 73.3 for the plastered wall, 61 for the brick wall, and 61 for no wall), but post hoc statistical analysis on the means confirmed that the differences were not statistically significant. The results of the user satisfaction questionnaire related to the evaluation of the proposed granularity-related metaphors confirmed that there is no preference for any one of them. The evaluation of the proposed logical notation consisted of two parts: (i) solving five interpretation exercises, each provided with a screenshot representing a possible scenario and three different possible interpretations, of which only one was correct, and (ii) solving five exercises by visually defining through the interface a scenario described only in natural language. Exercises had an increasing difficulty. The evaluation involved a total of 31 subjects. Results related to
2. Interpreter services in emergency medicine.
Science.gov (United States)
Chan, Yu-Feng; Alagappan, Kumar; Rella, Joseph; Bentley, Suzanne; Soto-Greene, Marie; Martin, Marcus
2010-02-01
Emergency physicians are routinely confronted with problems associated with language barriers. It is important for emergency health care providers and the health system to strive for cultural competency when communicating with members of an increasingly diverse society. Possible solutions that can be implemented include appropriate staffing, use of new technology, and efforts to develop new kinds of ties to the community served. Linguistically specific solutions include professional interpretation, telephone interpretation, the use of multilingual staff members, the use of ad hoc interpreters, and, more recently, the use of mobile computer technology at the bedside. Each of these methods carries a specific set of advantages and disadvantages. Although professionally trained medical interpreters offer improved communication, improved patient satisfaction, and overall cost savings, they are often underutilized due to their perceived inefficiency and the inconclusive results of their effect on patient care outcomes. Ultimately, the best solution for each emergency department will vary depending on the population served and available resources. Access to the multiple interpretation options outlined above and solid support and commitment from hospital institutions are necessary to provide proper and culturally competent care for patients. Appropriate communications inclusive of interpreter services are essential for culturally and linguistically competent provider/health systems and overall improved patient care and satisfaction. Copyright (c) 2010 Elsevier Inc. All rights reserved.
3. Heinrich's idea of abstract labour
OpenAIRE
Cockshott, Paul
2013-01-01
The article reviews Heinrich's An Introduction to the Three Volumes of Karl Marx's Capital. It questions three main features of Heinrich's work: its defence of teleology, its view that no empirical proof is needed of the labour theory of value, and its particular monetary interpretation of the theory of value. This approach, it argues, is anti-scientific.
4. Learning abstract algebra with ISETL
CERN Document Server
Dubinsky, Ed
1994-01-01
Most students in abstract algebra classes have great difficulty making sense of what the instructor is saying. Moreover, this seems to remain true almost independently of the quality of the lecture. This book is based on the constructivist belief that, before students can make sense of any presentation of abstract mathematics, they need to be engaged in mental activities which will establish an experiential base for any future verbal explanation. No less, they need to have the opportunity to reflect on their activities. This approach is based on extensive theoretical and empirical studies as well as on the substantial experience of the authors in teaching abstract algebra. The main source of activities in this course is computer constructions, specifically, small programs written in the math-like programming language ISETL; the main tool for reflections is work in teams of 2-4 students, where the activities are discussed and debated. Because of the similarity of ISETL expressions to standard written mathematics...
5. Abstract Cauchy problems three approaches
CERN Document Server
Melnikova, Irina V
2001-01-01
Although the theory of well-posed Cauchy problems is reasonably well understood, ill-posed problems, which are involved in numerous mathematical models in physics, engineering, and finance, can be approached in a variety of ways. Historically, there have been three major strategies for dealing with such problems: semigroup, abstract distribution, and regularization methods. Semigroup and distribution methods restore well-posedness in a modern weak sense. Regularization methods provide approximate solutions to ill-posed problems. Although these approaches were extensively developed over the last decades by many researchers, nowhere could one find a comprehensive treatment of all three approaches. Abstract Cauchy Problems: Three Approaches provides an innovative, self-contained account of these methods and, furthermore, demonstrates and studies some of the profound connections between them. The authors discuss the application of different methods not only to the Cauchy problem that is not well-posed in the classical sense, b...
6. ESGAR 2007. Book of abstracts
Energy Technology Data Exchange (ETDEWEB)
NONE
2007-06-15
The book includes the abstracts of all contributions presented during ESGAR (European Society of Gastrointestinal and Abdominal Radiology) 2007. The contributions of the symposium and the scientific sessions cover the following topics: abdominal MRI; interactive liver diagnosis; rectal cancer; liver metastases; pancreas: technical advances, lesion characterisation and staging; hepatic interventions; upper GI tract: multimodality evaluation; Crohn's disease evaluation; focal liver lesions: multimodality evaluation; CTC-computer aided diagnosis; bile ducts: imaging and intervention; GI tract: imaging and intervention; small bowel and appendix: cross-sectional imaging; CT and MR colonography; trauma and acute abdominal conditions: imaging and intervention; vascular and diffuse liver disease; liver contrast enhanced US. The second part covers the abstract of 248 presentations.
7. Reporting quality of conference abstracts on randomised controlled trials in gerontology and geriatrics: a cross-sectional investigation.
Science.gov (United States)
Mann, Eva; Meyer, Gabriele
2011-01-01
Without transparent reporting of how a randomised controlled trial was designed and conducted and of the methods used, its internal validity cannot be assessed by the reader. A congress abstract is often the only source providing information about a trial. In January 2008, an extended CONSORT statement on abstract reporting was published. Its impact has yet to be evaluated. Using a slightly modified CONSORT checklist comprising 17 items, we thus investigated the reporting quality of randomised controlled trials published in the book of abstracts presented at the World Congress of Geriatrics and Gerontology in Paris in July 2009. A total of n=4,416 abstracts was screened for inclusion; n=129 met the inclusion criteria. The overall quality of the abstracts was remarkably poor. The primary outcome was mentioned in 34/129 abstracts (26%), none of the abstracts reported on the procedure of random allocation of participants or clusters, 21/129 abstracts (16%) reported some kind of blinding, and the attrition rate was mentioned in only 12/129 abstracts (9%). The majority of abstracts fulfilled two items: description of intended intervention for each group (102/129; 79%) and general interpretation of results (107/129; 83%). Trial status was reported in all abstracts. Both journal editors and committees organising congresses are requested to define the use of the CONSORT statement as a prerequisite in their guidelines for authors and to instruct reviewers to conduct compliance checks. Medical associations should finally endorse the indispensability of the CONSORT statement and publish it in their journals. Otherwise the intended benefits cannot be fully generated. Copyright © 2010. Published by Elsevier GmbH.
8. Cryogenic foam insulation: Abstracted publications
Science.gov (United States)
Williamson, F. R.
1977-01-01
A group of documents were chosen and abstracted which contain information on the properties of foam materials and on the use of foams as thermal insulation at cryogenic temperatures. The properties include thermal properties, mechanical properties, and compatibility properties with oxygen and other cryogenic fluids. Uses of foams include applications as thermal insulation for spacecraft propellant tanks, and for liquefied natural gas storage tanks and pipelines.
9. The abstract unconscious in painting
OpenAIRE
Parker, David
2009-01-01
The Abstract Unconscious in Painting addresses painting as experiential process, critically examining the psychological factors involved in the formation of imagery as it emerges through imaginative responses to the process of mark making and the structuring of space and form. The paper sets this process in relation to theoretical material drawn from Jungian and post-Jungian psychology (Avens, 1980; Hillman, 1975), the arts (Gombrich, 1960; Kuspit, 2000; McKeever, 2005; Worringer, 1908) and ...
10. Abstract specialization and its applications
OpenAIRE
Puebla Sánchez, Alvaro Germán; Hermenegildo, Manuel V.
2003-01-01
The aim of program specialization is to optimize programs by exploiting certain knowledge about the context in which the program will execute. There exist many program manipulation techniques which allow specializing the program in different ways. Among them, one of the best known techniques is partial evaluation, often referred to simply as program specialization, which optimizes programs by specializing them for (partially) known input data. In this work we describe abstract specia...
11. DENOTATIVE ORIGINS OF ABSTRACT IMAGES IN LINGUISTIC EXPERIMENT
Directory of Open Access Journals (Sweden)
Elina, E.
2017-03-01
The article discusses the refusal of denotation (the subject) as the basic principle of abstract images, and the semiotic problems arising in connection with this principle: how can the contradiction between the objectlessness and the iconic nature of the image be resolved? Is it correct, in the absence of denotation, to recognize abstract representation as a single-level entity? It is proposed to resolve these questions with the help of a psycholinguistic experiment in which the verbal interpretation of abstract images, made by both experienced and “naive” audiences of recipients, demonstrates the objectivity of the perception of denotative “traces” and the presence of a denotative invariant in an abstract form.
12. Content Abstract Classification Using Naive Bayes
Science.gov (United States)
Latif, Syukriyanto; Suwardoyo, Untung; Aldrin Wihelmus Sanadi, Edwin
2018-03-01
This study aims to classify abstract content based on the use of the highest number of words in the abstract content of English-language journals. This research uses text mining technology that extracts text data to search for information in a set of documents. Abstract content from 120 documents was downloaded at www.computer.org. The data grouping consists of three categories: DM (Data Mining), ITS (Intelligent Transport System) and MM (Multimedia). The system is built using the naive Bayes algorithm to classify abstract journals, with a feature selection process using term weighting to give a weight to each word. Dimensionality reduction techniques reduce the dimensions of words that rarely appear in each document, based on dimension reduction test parameters of 10%-90% of 5,344 words. The performance of the classification system is tested using a confusion matrix on the training and test data. The results showed that the best classification results were obtained with 75% of the total data used as training data and 25% as test data. Accuracy rates for the categories DM, ITS and MM were 100%, 100%, and 86%, respectively, with a dimension reduction parameter of 30% and a learning rate between 0.1 and 0.5.
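The classification pipeline this abstract describes (tokenise, fit a naive Bayes model per category, predict the most probable label) can be sketched in plain Python. The three-category toy corpus below is invented for illustration, and the sketch uses raw word counts with Laplace smoothing rather than the paper's term-weighting and dimension-reduction steps.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(labeled_docs, alpha=1.0):
    """Fit a multinomial naive Bayes model: log priors per class and
    Laplace-smoothed log likelihoods over the shared vocabulary."""
    class_docs = defaultdict(int)
    class_words = defaultdict(Counter)
    vocab = set()
    for text, label in labeled_docs:
        class_docs[label] += 1
        words = tokenize(text)
        class_words[label].update(words)
        vocab.update(words)
    total_docs = sum(class_docs.values())
    model = {"vocab": vocab, "priors": {}, "likelihoods": {}}
    for label in class_docs:
        model["priors"][label] = math.log(class_docs[label] / total_docs)
        total_words = sum(class_words[label].values())
        model["likelihoods"][label] = {
            w: math.log((class_words[label][w] + alpha) /
                        (total_words + alpha * len(vocab)))
            for w in vocab
        }
    return model

def classify(model, text):
    """Return the label with the highest posterior log probability."""
    scores = {}
    for label, prior in model["priors"].items():
        scores[label] = prior + sum(
            model["likelihoods"][label][w]
            for w in tokenize(text) if w in model["vocab"])
    return max(scores, key=scores.get)

# Invented toy corpus mirroring the paper's DM / ITS / MM categories.
corpus = [
    ("mining frequent patterns in large data sets", "DM"),
    ("clustering and classification of data records", "DM"),
    ("traffic flow prediction for intelligent transport", "ITS"),
    ("vehicle routing and road sensor networks", "ITS"),
    ("video and audio streaming compression", "MM"),
    ("image rendering and multimedia codecs", "MM"),
]
model = train(corpus)
print(classify(model, "classification of large data sets"))  # → DM
```

On real abstracts one would first apply the term weighting and dimension reduction the paper describes, then evaluate with a confusion matrix on a held-out split.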
13. Poster Abstract: Towards NILM for Industrial Settings
DEFF Research Database (Denmark)
Holmegaard, Emil; Kjærgaard, Mikkel Baun
2015-01-01
Industry consumes a large share of the worldwide electricity consumption. Disaggregated information about electricity consumption enables better decision-making and feedback tools to optimize electricity consumption. In industrial settings electricity loads consist of a variety of equipment, which ... consumption for six months, at an industrial site. In this poster abstract we provide initial results for how industrial equipment challenges NILM algorithms. These results thereby open up the evaluation of the use of NILM in industrial settings.
14. Linguistics in Text Interpretation
DEFF Research Database (Denmark)
Togeby, Ole
2011-01-01
A model for how text interpretation proceeds from what is pronounced, through what is said, to what is communicated, and a definition of the concepts 'presupposition' and 'implicature'.
15. Ito, Stratonovich, Haenggi and all the rest: The thermodynamics of interpretation
Energy Technology Data Exchange (ETDEWEB)
Sokolov, I.M., E-mail: igor.sokolov@physik.hu-berlin.de [Institut fuer Physik, Humboldt-Universitaet zu Berlin, Newtonstrasse 15, 12489 Berlin (Germany)
2010-10-05
Graphical abstract: Slowly modulated periodic potentials corresponding to trap, accordion and barrier models, from top to bottom. Diffusion in such potentials on large scales requires using the Ito, Stratonovich and Haenggi interpretation rules, respectively, when described within the Langevin scheme. - Abstract: To elucidate the question of the necessity and thermodynamical meaning of the interpretation of Langevin equations with multiplicative noise, we considered a simple model of particle diffusion in a slowly modulated periodic potential in the low-temperature limit. Our main point is to clarify what kind of interpretation is appropriate to describe the situation under different types of modulation within the Langevin scheme. The existence of a free interpretation parameter α is a necessity connected with the fact that the diffusion coefficient D is an amalgam of two kinetic parameters, the period length and the sojourn time in a potential well, which in turn depend on the microscopic parameters of the model. The results show that the Ito and the Haenggi-Klimontovich interpretations correspond to the trap and barrier models respectively, and that the Fisk-Stratonovich one corresponds to a modulation of the potential's period. All other situations are more complex and may correspond to a position-dependent interpretation parameter.
16. Ito, Stratonovich, Haenggi and all the rest: The thermodynamics of interpretation
International Nuclear Information System (INIS)
Sokolov, I.M.
2010-01-01
Graphical abstract: Slowly modulated periodic potentials corresponding to trap, accordion and barrier models, from top to bottom. Diffusion in such potentials on large scales requires using the Ito, Stratonovich and Haenggi interpretation rules, respectively, when described within the Langevin scheme. - Abstract: To elucidate the question of the necessity and thermodynamical meaning of the interpretation of Langevin equations with multiplicative noise, we considered a simple model of particle diffusion in a slowly modulated periodic potential in the low-temperature limit. Our main point is to clarify what kind of interpretation is appropriate to describe the situation under different types of modulation within the Langevin scheme. The existence of a free interpretation parameter α is a necessity connected with the fact that the diffusion coefficient D is an amalgam of two kinetic parameters, the period length and the sojourn time in a potential well, which in turn depend on the microscopic parameters of the model. The results show that the Ito and the Haenggi-Klimontovich interpretations correspond to the trap and barrier models respectively, and that the Fisk-Stratonovich one corresponds to a modulation of the potential's period. All other situations are more complex and may correspond to a position-dependent interpretation parameter.
17. On the Dogmatics of Contract Interpretation
Institute of Scientific and Technical Information of China (English)
Yang Guoqing
2017-01-01
The interpretation of contract has attracted much attention in the practice of contract law, and the basic problems to be solved are what the parties agree on and how to set up the rules of adjudication. The present domestic studies are not conducive to contract practices because they either unnecessarily elevate scientific problems to the speculative realm of fantasy, or make the problems become more unreal and abstract. The two traditional theories of contract interpretation do not conflict in values in terms of the autonomy of the will and trust protection. However, in today's society where cultural pluralism and legal value pluralism exist, because of different contract practices, differentiation and individualization of contract interpretation will become the basic patterns. Therefore, it is impossible and unnecessary to construct a unitary model, but it is of great significance to introduce the dogmatics-oriented contract interpretation.
18. Indico CONFERENCE: Define the Call for Abstracts
CERN Multimedia
CERN. Geneva; Ferreira, Pedro
2017-01-01
In this tutorial, you will learn how to define and open a call for abstracts. When defining a call for abstracts, you will be able to define settings related to the type of questions asked during a review of an abstract, select the users who will review the abstracts, decide when to open the call for abstracts, and more.
19. Operating System Abstraction Layer (OSAL)
Science.gov (United States)
Yanchik, Nicholas J.
2007-01-01
This viewgraph presentation reviews the concept of the Operating System Abstraction Layer (OSAL) and its benefits. The OSAL is a small layer of software that allows programs to run on many different operating systems and hardware platforms. It runs independently of the underlying OS and hardware and is self-contained. The benefits of the OSAL are that it removes dependencies on any one operating system and promotes portable, reusable flight software. It allows Core Flight Software (FSW) to be built for multiple processors and operating systems. The presentation discusses the functionality, covers the various OSAL releases, and describes the specifications.
20. National Physics Conference. Paper Abstracts
International Nuclear Information System (INIS)
Marinela Dumitriu, Editorial Coordination.
1995-01-01
This book contains the abstracts of the proceedings of the annual Romanian Physics Conference organized by the Romanian Physics Society. The conference was held from November 30 to December 2, 1995 in the city of Baia Mare. It was organized in the following nine sections: 1 - Astrophysics, Particle Physics, Nuclear Physics, Molecular and Atomic Physics; 2 - Plasma Physics; 3 - Biophysics; 4 - Technical Physics; 5 - Theoretical Physics; 6 - The Physics of Energy; 7 - The Physics of Environment; 8 - Solid State Physics; 9 - Optical and Quantum Electronics. The full texts can be obtained on request from the Romanian Physical Society or directly from the authors.
1. In-Package Chemistry Abstraction
Energy Technology Data Exchange (ETDEWEB)
P.S. Domski
2003-07-21
The work associated with the development of this model report was performed in accordance with the requirements established in ''Technical Work Plan for Waste Form Degradation Modeling, Testing, and Analyses in Support of SR and LA'' (BSC 2002a). The in-package chemistry model and in-package chemistry model abstraction are developed to predict the bulk chemistry inside of a failed waste package and to provide simplified expressions of that chemistry. The purpose of this work is to provide the abstraction model to the Performance Assessment Project and the Waste Form Department for development of geochemical models of the waste package interior. The scope of this model report is to describe the development and validation of the in-package chemistry model and in-package chemistry model abstraction. The in-package chemistry model will consider chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) and codisposed high-level waste glass (HLWG) and N Reactor spent fuel (CDNR). The in-package chemistry model includes two sub-models, the first a water vapor condensation (WVC) model, where water enters a waste package as vapor and forms a film on the waste package components with subsequent film reactions with the waste package materials and waste form--this is a no-flow model, the reacted fluids do not exit the waste package via advection. The second sub-model of the in-package chemistry model is the seepage dripping model (SDM), where water, water that may have seeped into the repository from the surrounding rock, enters a failed waste package and reacts with the waste package components and waste form, and then exits the waste package with no accumulation of reacted water in the waste package. Both of the submodels of the in-package chemistry model are film models in contrast to past in-package chemistry models where all of the waste package pore space was filled with water. The
2. In-Package Chemistry Abstraction
International Nuclear Information System (INIS)
P.S. Domski
2003-01-01
The work associated with the development of this model report was performed in accordance with the requirements established in ''Technical Work Plan for Waste Form Degradation Modeling, Testing, and Analyses in Support of SR and LA'' (BSC 2002a). The in-package chemistry model and in-package chemistry model abstraction are developed to predict the bulk chemistry inside of a failed waste package and to provide simplified expressions of that chemistry. The purpose of this work is to provide the abstraction model to the Performance Assessment Project and the Waste Form Department for development of geochemical models of the waste package interior. The scope of this model report is to describe the development and validation of the in-package chemistry model and in-package chemistry model abstraction. The in-package chemistry model will consider chemical interactions of water with the waste package materials and the waste form for commercial spent nuclear fuel (CSNF) and codisposed high-level waste glass (HLWG) and N Reactor spent fuel (CDNR). The in-package chemistry model includes two sub-models, the first a water vapor condensation (WVC) model, where water enters a waste package as vapor and forms a film on the waste package components with subsequent film reactions with the waste package materials and waste form--this is a no-flow model, the reacted fluids do not exit the waste package via advection. The second sub-model of the in-package chemistry model is the seepage dripping model (SDM), where water, water that may have seeped into the repository from the surrounding rock, enters a failed waste package and reacts with the waste package components and waste form, and then exits the waste package with no accumulation of reacted water in the waste package. Both of the submodels of the in-package chemistry model are film models in contrast to past in-package chemistry models where all of the waste package pore space was filled with water. The current in
3. Shoestring Budget Radio Astronomy (Abstract)
Science.gov (United States)
Hoot, J. E.
2017-12-01
(Abstract only) The commercial exploitation of microwave frequencies for cellular, WiFi, Bluetooth, HDTV, and satellite digital media transmission has brought down the cost of the components required to build an effective radio telescope to the point where, for the cost of a good eyepiece, you can construct and operate a radio telescope. This paper sets forth a family of designs for 1421 MHz telescopes. It also proposes a method by which operators of such instruments can aggregate and archive data via the Internet. With 90 or so instruments it will be possible to survey the entire radio sky for transients with a 24 hour cadence.
4. Who can monitor the court interpreter's performance?
DEFF Research Database (Denmark)
Martinsen, Bodil
2009-01-01
Who can monitor the court interpreter's performance? Results of a case study. This paper presents the results of a case study of an unusual interpreting event in a Danish courtroom setting. During the trial, the interpreter's non-normative performance was explicitly criticised by the audience, and the conflict about her competence was negotiated. Because of this unusual constellation, combined with a multi-method approach, this single case study can shed some light on the question of the participants' ability to monitor the interpreter's performance. Legal professional users of interpreters tend ... are far less transparent for the legal participants than they normally assume. This problem, in turn, stresses the importance of a) the interpreter's competence and self-awareness and b) the use of check interpreters. ...
5. Comparing data accuracy between structured abstracts and full-text journal articles: implications in their use for informing clinical decisions.
Science.gov (United States)
Fontelo, Paul; Gavino, Alex; Sarmiento, Raymond Francis
2013-12-01
The abstract is the most frequently read section of a research article. The use of 'Consensus Abstracts', a clinician-oriented web application formatted for mobile devices to search MEDLINE/PubMed, for informing clinical decisions was proposed recently; however, inaccuracies between abstracts and the full-text article have been shown. Efforts have been made to improve quality. We compared data in 60 recent-structured abstracts and full-text articles from six highly read medical journals. Data inaccuracies were identified and then classified as either clinically significant or not significant. Data inaccuracies were observed in 53.33% of articles ranging from 3.33% to 45% based on the IMRAD format sections. The Results section showed the highest discrepancies (45%) although these were deemed to be mostly not significant clinically except in one. The two most common discrepancies were mismatched numbers or percentages (11.67%) and numerical data or calculations found in structured abstracts but not mentioned in the full text (40%). There was no significant relationship between journals and the presence of discrepancies (Fisher's exact p value =0.3405). Although we found a high percentage of inaccuracy between structured abstracts and full-text articles, these were not significant clinically. The inaccuracies do not seem to affect the conclusion and interpretation overall. Structured abstracts appear to be informative and may be useful to practitioners as a resource for guiding clinical decisions.
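The Fisher's exact test this abstract reports (p value = 0.3405 for the journal/discrepancy association) can be computed for any 2x2 table with a short stdlib-only function; the counts in the example below are invented for illustration, not the paper's data (whose full journal-by-discrepancy table is not given here).

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d
    def p_table(x):  # probability of the table whose top-left cell is x
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = p_table(a)
    lo = max(0, col1 - row2)   # smallest feasible top-left cell
    hi = min(col1, row1)       # largest feasible top-left cell
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: discrepancy yes/no in two groups of articles.
p = fisher_exact_2x2(8, 2, 1, 5)
print(round(p, 4))  # → 0.035
```

For tables larger than 2x2, as with six journals, statistical packages extend the same idea by enumerating tables with fixed margins, which is why the paper can quote a single exact p value across journals.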
6. Les rapports de l’art abstrait (Kandinsky, Klee, Mondrian) avec les tendances d’abstraction de l’art sacré / The Connections of Abstract Art (Kandinsky, Klee, Mondrian) with the Abstractization Tendencies of Sacred Art
Directory of Open Access Journals (Sweden)
2016-11-01
The main purpose of this paper is to study the connections that can be established between modern abstractionism and the abstract tendencies of other historical eras. In the first part I present three distinct interpretations: the first direction is based on authors such as Mircea Eliade and Roger Lipsey, who see modern art through the still-living links between art and religion, from a syncretic perspective or, in Eliade's expression, based on a creative hermeneutics. The second direction is represented by the work of Adorno, Compagnon, Greenberg, and Lyotard, for whom modern art is a manifestation of radical discontinuity in relation to the art of the past, and the emergence of abstractionism is due primarily to a historical necessity (the increasing rupture between form and content, the increased autonomy of the sensible over the intelligible). The third direction is represented by Wilhelm Worringer, whose work (Abstraktion und Einfühlung, 1907) predates the emergence of the first abstract paintings but, relying on the German aesthetic tradition, manages to go beyond the threshold distinction between figurative and abstract, thus identifying one type of art of the Einfühlung kind and another of the abstract kind, namely the predominance of one or the other in different historical contexts and civilizational patterns. In the second part of the paper I refer to instances of the spirit of abstraction in Byzantine sacred art, especially in the footsteps of Plotinian aesthetics and as a result of the iconoclastic crisis. In the last part, I present the key ideas of three major representatives of abstractionism (Kandinsky, Klee, Mondrian) and the survival of the concepts of sacred art in their works and art theories.
7. Interpretation and evaluation of radiograph
International Nuclear Information System (INIS)
Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail
2008-01-01
After film development, the interpreter must interpret and evaluate the image on the film; many radiographers get stuck at this step, although a film with good density poses no problem. This is the final stage of radiography work, and it must be done by a level two or level three radiographer. It is the last stage before the radiographer gives a result to the customer for further action. A good interpreter must know the kinds of artifact that occur and whether a given artifact is significant or not. In this chapter, the artifacts that commonly appear are discussed briefly, with good illustrations and pictures, so that the reader can understand and recognize the types of artifact that exist.
Directory of Open Access Journals (Sweden)
Khushboo Sahay
2013-01-01
Conclusions: In order to justify a cytosmear interpretation, a cytologist must be well acquainted with delayed fixation-induced cellular changes and microscopic appearances of common contaminants so as to implicate better prognosis and therapy.
9. Schrodinger's mechanics interpretation
CERN Document Server
Cook, David B
2018-01-01
The interpretation of quantum mechanics has been in dispute for nearly a century with no sign of a resolution. Using a careful examination of the relationship between the final form of classical particle mechanics (the Hamilton-Jacobi equation) and Schrödinger's mechanics, this book presents a coherent way of addressing the problems and paradoxes that emerge through conventional interpretations. Schrödinger's Mechanics critiques the popular way of giving physical interpretation to the various terms in perturbation theory and other technologies and places an emphasis on the development of the theory and not on an axiomatic approach. When this interpretation is made, the extension of Schrödinger's mechanics in relation to other areas, including spin, relativity and fields, is investigated and new conclusions are reached.
10. Normative interpretations of diversity
DEFF Research Database (Denmark)
Lægaard, Sune
2009-01-01
Normative interpretations of particular cases consist of normative principles or values coupled with social theoretical accounts of the empirical facts of the case. The article reviews the most prominent normative interpretations of the Muhammad cartoons controversy over the publication of drawings of the Prophet Muhammad in the Danish newspaper Jyllands-Posten. The controversy was seen as a case of freedom of expression, toleration, racism, (in)civility and (dis)respect, and the article notes different understandings of these principles and how the application of them to the controversy implied different social theoretical accounts of the case. In disagreements between different normative interpretations, appeals are often made to the 'context', so it is also considered what roles 'context' might play in debates over normative interpretations...
International Nuclear Information System (INIS)
Rowe, L.J.; Yochum, T.R.
1987-01-01
Conventional radiographic procedures (plain film) are the most frequently utilized imaging modality in the evaluation of the skeletal system. This chapter outlines the essentials of skeletal imaging, anatomy, physiology, and interpretation
12. 20. ATSR congress - Book of abstracts
International Nuclear Information System (INIS)
1999-01-01
13. Interpretable Active Learning
OpenAIRE
Phillips, Richard L.; Chang, Kyu Hyun; Friedler, Sorelle A.
2017-01-01
Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. W...
14. Viver o estado terminal de um familiar: leitura salutogénica de resultados de um estudo de caso Living the terminal state of a family member: salutogenic interpretation of the results of a case study
Directory of Open Access Journals (Sweden)
Clara Costa Oliveira
2012-09-01
interpretation, combined with content analysis (based on the most important categories of the salutogenic conceptualization. The results show that all the family members identified and utilized various General Resistance Resources (GRR, which can be understood in light of three metacategories: 'comprehensibility', 'manageability' and 'meaningfulness'. It was also found that the use/creation of the GRR implies the existence of strong senses of coherence on the part of the respondents, as stated by Antonovsky. The results allow us to understand that there are areas of health professionals' training that can be stimulated in situations similar to the studied one, such as communication and emotional management, among others. They also point to the need for investment in health education activities, promoting psychological and community empowerment.
15. Interpreter-mediated dentistry.
Science.gov (United States)
Bridges, Susan; Drew, Paul; Zayts, Olga; McGrath, Colman; Yiu, Cynthia K Y; Wong, H M; Au, T K F
2015-05-01
The global movements of healthcare professionals and patient populations have increased the complexities of medical interactions at the point of service. This study examines interpreter mediated talk in cross-cultural general dentistry in Hong Kong where assisting para-professionals, in this case bilingual or multilingual Dental Surgery Assistants (DSAs), perform the dual capabilities of clinical assistant and interpreter. An initial language use survey was conducted with Polyclinic DSAs (n = 41) using a logbook approach to provide self-report data on language use in clinics. Frequencies of mean scores using a 10-point visual analogue scale (VAS) indicated that the majority of DSAs spoke mainly Cantonese in clinics and interpreted for postgraduates and professors. Conversation Analysis (CA) examined recipient design across a corpus (n = 23) of video-recorded review consultations between non-Cantonese speaking expatriate dentists and their Cantonese L1 patients. Three patterns of mediated interpreting indicated were: dentist designated expansions; dentist initiated interpretations; and assistant initiated interpretations to both the dentist and patient. The third, rather than being perceived as negative, was found to be framed either in response to patient difficulties or within the specific task routines of general dentistry. The findings illustrate trends in dentistry towards personalized care and patient empowerment as a reaction to product delivery approaches to patient management. Implications are indicated for both treatment adherence and the education of dental professionals. Copyright © 2015 Elsevier Ltd. All rights reserved.
16. WIPR-2010 Book of abstracts
International Nuclear Information System (INIS)
2015-01-01
The main objective of the workshop was to review advanced and preclinical studies on innovative positron emitting radionuclides to assess their usefulness and potentials. Presentations were organized around 4 issues: 1) preclinical and clinical point of view, 2) production of innovative PET radionuclides, 3) from complexation chemistry to PET imaging and 4) from research to clinic. Emphasis has been put on 64Cu, 68Ga, 89Zr, 44Sc but specific aspects such as production or purification have been considered for 66Ga, 67Ga, 52Fe, 86Y, and 68Ge radionuclides. This document gathers the abstracts of most contributions
17. Abstract algebra an introductory course
CERN Document Server
Lee, Gregory T
2018-01-01
This carefully written textbook offers a thorough introduction to abstract algebra, covering the fundamentals of groups, rings and fields. The first two chapters present preliminary topics such as properties of the integers and equivalence relations. The author then explores the first major algebraic structure, the group, progressing as far as the Sylow theorems and the classification of finite abelian groups. An introduction to ring theory follows, leading to a discussion of fields and polynomials that includes sections on splitting fields and the construction of finite fields. The final part contains applications to public key cryptography as well as classical straightedge and compass constructions. Explaining key topics at a gentle pace, this book is aimed at undergraduate students. It assumes no prior knowledge of the subject and contains over 500 exercises, half of which have detailed solutions provided.
18. Norddesign 2012 - Book of Abstract
DEFF Research Database (Denmark)
...fate of the ideas behind the conferences. In that view the conferences have been thematically open and the organization has been tight with a limited number of participants that allows a good overview of all the papers and a lot of informal discussion between the participants. The present conference has been organized in line with the original ideas. The topics mentioned in the call for abstracts were: Product Development: Integrated, Multidisciplinary, Product life oriented and Distributed. Multi-product Development. Innovation and Business Models. Engineering Design and Industrial Design. Conceptualisation and Innovative thinking. Research approaches and topics: Human Behaviour and Cognition. Cooperation and Multidisciplinary Design. Staging and Management of Design. Communication in Design. Design education and teaching: Programmes and Syllabuses. New Courses. Integrated and Multi-disciplinary. We...
19. Transplantation as an abstract good
DEFF Research Database (Denmark)
Hoeyer, Klaus; Jensen, Anja Marie Bornø; Olejaz, Maria
2015-01-01
This article investigates valuations of organ transfers that are currently seen as legitimising increasingly aggressive procurement methods in Denmark. Based on interviews with registered donors and the intensive care unit staff responsible for managing organ donor patients we identify three types... a more general salience in the organ transplant field by way of facilitating a perception of organ transplantation as an abstract moral good rather than a specific good for specific people. Furthermore, we suggest that multiple forms of ignorance sustain each other: a desire for ignorance with respect to the prioritisation of recipients sustains pressure for more organs; this pressure necessitates more aggressive measures in organ procurement and these measures increase the need for ignorance in relation to the actual procedures as well as the actual recipients. These attempts to avoid knowledge are in remarkable...
20. A fully-abstract semantics of lambda-mu in the pi-calculus
Directory of Open Access Journals (Sweden)
Steffen van Bakel
2014-09-01
Full Text Available We study the lambda-mu-calculus, extended with explicit substitution, and define a compositional output-based interpretation into a variant of the pi-calculus with pairing that preserves single-step explicit head reduction with respect to weak bisimilarity. We define four notions of weak equivalence for lambda-mu – one based on weak reduction, two modelling weak head-reduction and weak explicit head reduction (all considering terms without weak head-normal form equivalent as well), and one based on weak approximation – and show they all coincide. We will then show full abstraction results for our interpretation for the weak equivalences with respect to weak bisimilarity on processes.
1. Abstraction of Drift-Scale Coupled Processes
International Nuclear Information System (INIS)
Francis, N.D.; Sassani, D.
2000-01-01
This Analysis/Model Report (AMR) describes an abstraction, for the performance assessment total system model, of the near-field host rock water chemistry and gas-phase composition. It also provides an abstracted process model analysis of potentially important differences in the thermal hydrologic (TH) variables used to describe the performance of a geologic repository obtained from models that include fully coupled reactive transport with thermal hydrology and those that include thermal hydrology alone. Specifically, the motivation of the process-level model comparison between fully coupled thermal-hydrologic-chemical (THC) and thermal-hydrologic-only (TH-only) is to provide the necessary justification as to why the in-drift thermodynamic environment and the near-field host rock percolation flux, the essential TH variables used to describe the performance of a geologic repository, can be obtained using a TH-only model and applied directly into a TSPA abstraction without recourse to a fully coupled reactive transport model. Abstraction as used in the context of this AMR refers to an extraction of essential data or information from the process-level model. The abstraction analysis reproduces and bounds the results of the underlying detailed process-level model. The primary purpose of this AMR is to abstract the results of the fully-coupled, THC model (CRWMS M&O 2000a) for effects on water and gas-phase composition adjacent to the drift wall (in the near-field host rock). It is assumed that drift wall fracture water and gas compositions may enter the emplacement drift before, during, and after the heating period. The heating period includes both the preclosure, in which the repository drifts are ventilated, and the postclosure periods, with backfill and drip shield emplacement at the time of repository closure. Although the preclosure period (50 years) is included in the process models, the postclosure performance assessment starts at the end of this initial period
2. 2013 SYR Accepted Poster Abstracts.
Science.gov (United States)
2013-01-01
SYR 2013 Accepted Poster abstracts: 1. Benefits of Yoga as a Wellness Practice in a Veterans Affairs (VA) Health Care Setting: If You Build It, Will They Come? 2. Yoga-based Psychotherapy Group With Urban Youth Exposed to Trauma. 3. Embodied Health: The Effects of a Mind-Body Course for Medical Students. 4. Interoceptive Awareness and Vegetable Intake After a Yoga and Stress Management Intervention. 5. Yoga Reduces Performance Anxiety in Adolescent Musicians. 6. Designing and Implementing a Therapeutic Yoga Program for Older Women With Knee Osteoarthritis. 7. Yoga and Life Skills Eating Disorder Prevention Among 5th Grade Females: A Controlled Trial. 8. A Randomized, Controlled Trial Comparing the Impact of Yoga and Physical Education on the Emotional and Behavioral Functioning of Middle School Children. 9. Feasibility of a Multisite, Community based Randomized Study of Yoga and Wellness Education for Women With Breast Cancer Undergoing Chemotherapy. 10. A Delphi Study for the Development of Protocol Guidelines for Yoga Interventions in Mental Health. 11. Impact Investigation of Breathwalk Daily Practice: Canada-India Collaborative Study. 12. Yoga Improves Distress, Fatigue, and Insomnia in Older Veteran Cancer Survivors: Results of a Pilot Study. 13. Assessment of Kundalini Mantra and Meditation as an Adjunctive Treatment With Mental Health Consumers. 14. Kundalini Yoga Therapy Versus Cognitive Behavior Therapy for Generalized Anxiety Disorder and Co-Occurring Mood Disorder. 15. Baseline Differences in Women Versus Men Initiating Yoga Programs to Aid Smoking Cessation: Quitting in Balance Versus QuitStrong. 16. Pranayam Practice: Impact on Focus and Everyday Life of Work and Relationships. 17. Participation in a Tailored Yoga Program is Associated With Improved Physical Health in Persons With Arthritis. 18. Effects of Yoga on Blood Pressure: Systematic Review and Meta-analysis. 19. A Quasi-experimental Trial of a Yoga based Intervention to Reduce Stress and
3. Thermal-hydraulic simulation of natural convection decay heat removal in the High Flux Isotope Reactor (HFIR) using RELAP5 and TEMPEST: Part 2, Interpretation and validation of results
International Nuclear Information System (INIS)
Ruggles, A.E.; Morris, D.G.
1989-01-01
The RELAP5/MOD2 code was used to predict the thermal-hydraulic behavior of the HFIR core during decay heat removal through boiling natural circulation. The low system pressure and low mass flux values associated with boiling natural circulation are far from conditions for which RELAP5 is well exercised. Therefore, some simple hand calculations are used herein to establish the physics of the results. The interpretation and validation effort is divided between the time average flow conditions and the time varying flow conditions. The time average flow conditions are evaluated using a lumped parameter model and heat balance. The Martinelli-Nelson correlations are used to model the two-phase pressure drop and void fraction vs flow quality relationship within the core region. Systems of parallel channels are susceptible to both density wave oscillations and pressure drop oscillations. Periodic variations in the mass flux and exit flow quality of individual core channels are predicted by RELAP5. These oscillations are consistent with those observed experimentally and are of the density wave type. The impact of the time varying flow properties on local wall superheat is bounded herein. The conditions necessary for Ledinegg flow excursions are identified. These conditions do not fall within the envelope of decay heat levels relevant to HFIR in boiling natural circulation. 14 refs., 5 figs., 1 tab
4. Waste management research abstracts no. 14
International Nuclear Information System (INIS)
1983-04-01
The present 14th issue is the second of the new series of Waste Management Research Abstracts, which are reappearing after a three-year suspension. The new series appears in a substantially innovated form. Although the objective of the publication is the same as before, namely to collect and disseminate information on research in progress in the field of nuclear waste management, the format for presentation of the information is a new data sheet in a standardized form, access to which will be made possible by different indexes. The 408 research data sheets contained in this issue have been collected during recent months, ending 15 January 1983, and reflect research currently in progress. They were sent by the Governments of twenty-five Member States, by the International Atomic Energy Agency, and by the Commission of the European Communities. Though the information contained in this publication covers a wide range of subjects in various countries, the WMRA should not be interpreted as providing a complete survey of on-going research in IAEA Member States
5. A Modal-Logic Based Graph Abstraction
NARCIS (Netherlands)
Bauer, J.; Boneva, I.B.; Kurban, M.E.; Rensink, Arend; Ehrig, H; Heckel, R.; Rozenberg, G.; Taentzer, G.
2008-01-01
Infinite or very large state spaces often prohibit the successful verification of graph transformation systems. Abstract graph transformation is an approach that tackles this problem by abstracting graphs to abstract graphs of bounded size and by lifting application of productions to abstract
6. Argonne Code Center: compilation of program abstracts
Energy Technology Data Exchange (ETDEWEB)
Butler, M.K.; DeBruler, M.; Edwards, H.S.; Harrison, C. Jr.; Hughes, C.E.; Jorgensen, R.; Legan, M.; Menozzi, T.; Ranzini, L.; Strecok, A.J.
1977-08-01
This publication is the eleventh supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the complete document ANL-7411 are as follows: preface, history and acknowledgements, abstract format, recommended program package contents, program classification guide and thesaurus, and the abstract collection. (RWR)
7. Argonne Code Center: compilation of program abstracts
Energy Technology Data Exchange (ETDEWEB)
Butler, M.K.; DeBruler, M.; Edwards, H.S.
1976-08-01
This publication is the tenth supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the document are as follows: preface; history and acknowledgements; abstract format; recommended program package contents; program classification guide and thesaurus; and abstract collection. (RWR)
8. Argonne Code Center: compilation of program abstracts
International Nuclear Information System (INIS)
Butler, M.K.; DeBruler, M.; Edwards, H.S.; Harrison, C. Jr.; Hughes, C.E.; Jorgensen, R.; Legan, M.; Menozzi, T.; Ranzini, L.; Strecok, A.J.
1977-08-01
This publication is the eleventh supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the complete document ANL-7411 are as follows: preface, history and acknowledgements, abstract format, recommended program package contents, program classification guide and thesaurus, and the abstract collection
9. Argonne Code Center: compilation of program abstracts
International Nuclear Information System (INIS)
Butler, M.K.; DeBruler, M.; Edwards, H.S.
1976-08-01
This publication is the tenth supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the document are as follows: preface; history and acknowledgements; abstract format; recommended program package contents; program classification guide and thesaurus; and abstract collection
10. From abstract to peer-reviewed publication: country matters
DEFF Research Database (Denmark)
Østergaard, Lauge; Fosbøl, Philip L.; Harrington, Robert A.
2014-01-01
Medical conferences are key in the sharing of new scientific findings. However, results reported as conference-abstracts are generally not considered final before publication in a peer-reviewed journal. It is known that approximately 1/3 of the scientific results presented as abstracts at large...
11. Is the Abstract a Mere Teaser? Evaluating Generosity of Article Abstracts in the Environmental Sciences
Directory of Open Access Journals (Sweden)
Liana Ermakova
2018-05-01
Full Text Available An abstract is not only a mirror of the full article; it also aims to draw attention to the most important information of the document it summarizes. Many studies have compared abstracts with full texts for their informativeness. In contrast to previous studies, we propose to investigate this relation based not only on the amount of information given by the abstract but also on its importance. The main objective of this paper is to introduce a new metric called GEM to measure the “generosity” or representativeness of an abstract. Schematically speaking, a generous abstract should have the best possible score of similarity for the sections important to the reader. Based on a questionnaire gathering information from 630 researchers, we were able to weight sections according to their importance. In our approach, seven sections were first automatically detected in the full text. The accuracy of this classification into sections was above 80% compared with a dataset of documents where sentences were assigned to sections by experts. Second, each section was weighted according to the questionnaire results. The GEM score was then calculated as a sum of weights of sections in the full text corresponding to sentences in the abstract normalized over the total sum of weights of sections in the full text. The correlation between GEM score and the mean of the scores assigned by annotators was higher than the correlation between scores from different experts. As a case study, the GEM score was calculated for 36,237 articles in environmental sciences (1930–2013) retrieved from the French ISTEX database. The main result was that GEM score has increased over time. Moreover, this trend depends on subject area and publisher. No correlation was found between GEM score and citation rate or open access status of articles. We conclude that abstracts are more generous in recent publications and cannot be considered as mere teasers. This research should be pursued
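The GEM normalization described in the abstract can be sketched in a few lines of Python; the section names and integer weights below are illustrative stand-ins, not the questionnaire-derived weights from the study.

```python
def gem_score(abstract_sections, fulltext_sections, weights):
    # GEM "generosity": sum of weights of full-text sections that are
    # represented in the abstract, normalized over the total weight of
    # the sections present in the full text.
    total = sum(weights[s] for s in fulltext_sections)
    covered = sum(weights[s] for s in fulltext_sections if s in abstract_sections)
    return covered / total

# Hypothetical section weights (integers keep the arithmetic exact).
weights = {"introduction": 1, "methods": 3, "results": 4, "conclusion": 2}
score = gem_score({"results", "conclusion"},
                  {"introduction", "methods", "results", "conclusion"},
                  weights)
print(score)  # 0.6: the abstract covers 6 of the 10 weight units
```

An abstract covering only low-weight sections would score near 0, while one touching every important section would score near 1, matching the intuition of a "generous" abstract.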
12. How can we improve the interpretation of systematic reviews?
Directory of Open Access Journals (Sweden)
Straus Sharon E
2011-03-01
Full Text Available Abstract A study conducted by Lai and colleagues, published this week in BMC Medicine, suggests that more guidance might be required for interpreting systematic review (SR) results. In the study by Lai and colleagues, positive (or favorable) results were influential in changing participants' prior beliefs about the interventions presented in the systematic review. Other studies have examined the relationship between favorable systematic review results and the publication of systematic reviews. An international registry may decrease the number of unpublished systematic reviews and will hopefully decrease redundancy, increase transparency, and increase collaboration within the SR community. In addition, using guidance from the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA: http://www.prisma-statement.org/) Statement and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE: http://www.gradeworkinggroup.org/) approach can also be used to improve the interpretation of systematic reviews. In this commentary, we highlight important methodological issues related to the conduct and reporting of systematic reviews and also present our own guidance on interpreting systematic reviews. Please see Research article: http://www.biomedcentral.com/1741-7015/9/30/.
13. 1986 annual information meeting. Abstracts
International Nuclear Information System (INIS)
1986-01-01
Abstracts are presented for the following papers: Geohydrological Research at the Y-12 Plant (C.S. Haase); Ecological Impacts of Waste Disposal Operations in Bear Creek Valley Near the Y-12 Plant (J.M. Loar); Finite Element Simulation of Subsurface Contaminant Transport: Logistic Difficulties in Handling Large Field Problems (G.T. Yeh); Dynamic Compaction of a Radioactive Waste Burial Trench (B.P. Spalding); Comparative Evaluation of Potential Sites for a High-Level Radioactive Waste Repository (E.D. Smith); Changing Priorities in Environmental Assessment and Environmental Compliance (R.M. Reed); Ecology, Ecotoxicology, and Ecological Risk Assessment (L.W. Barnthouse); Theory and Practice in Uncertainty Analysis from Ten Years of Practice (R.H. Gardner); Modeling Landscape Effects of Forest Decline (V.H. Dale); Soil Nitrogen and the Global Carbon Cycle (W.M. Post); Maximizing Wood Energy Production in Short-Rotation Plantations: Effect of Initial Spacing and Rotation Length (L.L. Wright); and Ecological Communities and Processes in Woodland Streams Exhibit Both Direct and Indirect Effects of Acidification (J.W. Elwood)
14. EURORIB 2010, Book of abstracts
International Nuclear Information System (INIS)
Tsoneva, N.; Lenske, H.; Casten, R.
2012-01-01
The second international EURORIB conference 'EURORIB'10' will be held from June 6 to June 11, 2010 in Lamoura (France). Our nuclear physics community is eagerly awaiting the construction of the next generation of Radioactive Ion Beam (RIB) facilities in Europe: HIE-ISOLDE at CERN, NUSTAR at FAIR, SPES at LNL, SPIRAL2 at GANIL and the future EURISOL. The collaborations built around these facilities are exploring new experimental and theoretical ideas that will advance our understanding of nuclear structure through studies of exotic nuclei. Following in the spirit of the conference held in Giens in 2008, EURORIB'10 will provide the opportunity for the different collaborations to come together and present these ideas, and explore the synergy between the research programmes based around these projects. The main topics to be discussed at the conference are: 1) At and beyond the drip line, 2) Shell structure far from stability, 3) Fusion reactions and synthesis of heavy and superheavy nuclei, 4) Dynamics and thermodynamics of exotic nuclear systems, 5) Radioactive ion beams in nuclear astrophysics, 6) New modes of radioactivity, 7) Fundamental interactions, 8) Applications in other fields, 9) Future RIB facilities, 10) Production and manipulation of RIB, and 11) Working group meetings on synergy in instrumentation and data acquisition. This document gathers only the abstracts of the papers. (authors)
15. Attracting Girls into Physics (abstract)
Science.gov (United States)
2009-04-01
A recent international study of women in physics showed that enrollment in physics and science is declining for both males and females and that women are severely underrepresented in careers requiring a strong physics background. The gender gap begins early in the pipeline, from the first grade. Girls are treated differently than boys at home and in society in ways that often hinder their chances for success. They have fewer freedoms, are discouraged from accessing resources or being adventurous, have far less exposure to problem solving, and are not encouraged to choose their lives. In order to motivate more girl students to study physics in the Assiut governorate of Egypt, the Assiut Alliance for the Women and Assiut Education District collaborated in renovating the education of physics in middle and secondary school classrooms. A program that helps in increasing the number of girls in science and physics has been designed in which informal groupings are organized at middle and secondary schools to involve girls in the training and experiences needed to attract and encourage girls to learn physics. During implementation of the program at some schools, girls, because they had not been trained in problem-solving as boys, appeared not to be as facile in abstracting the ideas of physics, and that was the primary reason for girls dropping out of science and physics. This could be overcome by holding a topical physics and technology summer school under the supervision of the Assiut Alliance for the Women.
16. The triconnected abstraction of process models
OpenAIRE
Polyvyanyy, Artem; Smirnov, Sergey; Weske, Mathias
2009-01-01
Contents: Artem Polyvyanyy, Sergey Smirnov, and Mathias Weske, The Triconnected Abstraction of Process Models. 1 Introduction; 2 Business Process Model Abstraction; 3 Preliminaries; 4 Triconnected Decomposition (4.1 Basic Approach for Process Component Discovery; 4.2 SPQR-Tree Decomposition; 4.3 SPQR-Tree Fragments in the Context of Process Models); 5 Triconnected Abstraction (5.1 Abstraction Rules; 5.2 Abstraction Algorithm); 6 Related Work and Conclusions
17. Estimation of the contribution of gamma-emission of incorporated cesium radioisotopes in interpretation of the results of the public survey to assess the thyroidal iodine content following a radiation accident at the nuclear power plant
Directory of Open Access Journals (Sweden)
Shinkarev S.M.
2014-12-01
Full Text Available Aim. A detailed consideration has been done to assess the importance of the contribution of gamma-emission of incorporated cesium radioisotopes to the exposure rate measured near the thyroid by the public survey following the Chernobyl accident. Empirical ratios have been derived to take into account that contribution under interpretation of the results of survey meter monitoring of the public. Materials and methods. Model calculations for typical radionuclide intake by the residents living in contaminated territories after the Chernobyl accident have been carried out in order to assess the contribution of gamma-emission of incorporated cesium radioisotopes to the exposure rate measured near the thyroid by the survey. Under such calculations the two most important modes of intake have been considered: (1) inhalation and (2) ingestion with cow milk. Results. According to the estimates received, the contribution of gamma-emission of incorporated cesium radioisotopes to the exposure rate measured near the thyroid during the first 20 days does not exceed 20% for the residents of southern areas of Gomel region and 30% for the residents of Mogil'yov region. During 60 days following the accident that contribution is estimated to be within (50-80)% for the residents of southern areas of Gomel region and (80-95)% for the residents of Mogil'yov region. Conclusion. For the period of intensive thyroid measuring in the southern areas of Gomel region (the second part of May), account of the contribution of gamma-emission of incorporated cesium radioisotopes is relatively unimportant, but for Mogil'yov region (the end of May) it is important to account for it. For the thyroid measurements conducted in June of 1986 it is important for all residents living in Belarus to take into account the contribution of gamma-emission of incorporated cesium radioisotopes.
18. An information gap in DNA evidence interpretation.
Directory of Open Access Journals (Sweden)
Mark W Perlin
Full Text Available Forensic DNA evidence often contains mixtures of multiple contributors, or is present in low template amounts. The resulting data signals may appear to be relatively uninformative when interpreted using qualitative inclusion-based methods. However, these same data can yield greater identification information when interpreted by computer using quantitative data-modeling methods. This study applies both qualitative and quantitative interpretation methods to a well-characterized DNA mixture and dilution data set, and compares the inferred match information. The results show that qualitative interpretation loses identification power at low culprit DNA quantities (below 100 pg), but that quantitative methods produce useful information down into the 10 pg range. Thus there is a ten-fold information gap that separates the qualitative and quantitative DNA mixture interpretation approaches. With low quantities of culprit DNA (10 pg to 100 pg), computer-based quantitative interpretation provides greater match sensitivity.
19. The abstract representations in speech processing.
Science.gov (United States)
Cutler, Anne
2008-11-01
Speech processing by human listeners derives meaning from acoustic input via intermediate steps involving abstract representations of what has been heard. Recent results from several lines of research are here brought together to shed light on the nature and role of these representations. In spoken-word recognition, representations of phonological form and of conceptual content are dissociable. This follows from the independence of patterns of priming for a word's form and its meaning. The nature of the phonological-form representations is determined not only by acoustic-phonetic input but also by other sources of information, including metalinguistic knowledge. This follows from evidence that listeners can store two forms as different without showing any evidence of being able to detect the difference in question when they listen to speech. The lexical representations are in turn separate from prelexical representations, which are also abstract in nature. This follows from evidence that perceptual learning about speaker-specific phoneme realization, induced on the basis of a few words, generalizes across the whole lexicon to inform the recognition of all words containing the same phoneme. The efficiency of human speech processing has its basis in the rapid execution of operations over abstract representations.
20. SATURATED ZONE FLOW AND TRANSPORT MODEL ABSTRACTION
International Nuclear Information System (INIS)
B.W. ARNOLD
2004-01-01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5957027077674866, "perplexity": 4717.579983153906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155676.21/warc/CC-MAIN-20180918185612-20180918205612-00139.warc.gz"} |
http://salesforcexytools.com/salesforce/2017/07/27/sfdc-trigger-fix-data/ | SFDC Trigger Fix Data
Posted by ExiaHuang on July 27, 2017
The trigger needs to execute before insert and before update.
Also, you don't need to update the accounts at the end of the trigger; you only need to do that when you want to modify records that are not being processed by the trigger.
trigger UpdateTMID on Account (before insert, before update) {
    // Declare the counter outside the loop so it is not reset on every iteration.
    Integer i = 1;
    for (Account acct : Trigger.new) {
        System.debug('This is the number of times it has gone through the loop: ' + i);
        // In a before trigger, field changes on Trigger.new records are saved
        // automatically -- no explicit DML update call is needed.
        acct.PP_CLIENT_DIR_VP__c = 'JC13';
        i++;
    }
} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4516012966632843, "perplexity": 2895.3457944956576}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130531.89/warc/CC-MAIN-20200930235415-20201001025415-00236.warc.gz"} |
https://brilliant.org/problems/davids-horse/ | # David's horse
Algebra Level 2
David loves to ride his horse. When he is riding his horse, he travels at a rate of $10$ mi/hr, and when he is walking, he travels at a rate of just $3$ mi/hr. David goes on a $4$-hour trip, traveling for a while on his horse and walking the rest of the way, and travels $29.5$ miles. For how many minutes was David riding his horse?
× | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.383650004863739, "perplexity": 2785.8179760944645}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190295.65/warc/CC-MAIN-20170322212950-00135-ip-10-233-31-227.ec2.internal.warc.gz"} |
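A quick check of the setup (my addition, not part of the original problem page): with riding time $r$ hours and walking time $4 - r$ hours, $10r + 3(4 - r) = 29.5$ gives $7r = 17.5$, so $r = 2.5$ hours, i.e. 150 minutes.

```python
# Hypothetical helper (not from the problem page): solve
# ride_mph*r + walk_mph*(total_hours - r) = total_miles for the riding time r.
def riding_minutes(total_hours=4.0, total_miles=29.5, ride_mph=10.0, walk_mph=3.0):
    r = (total_miles - walk_mph * total_hours) / (ride_mph - walk_mph)
    return r * 60.0

print(riding_minutes())  # 150.0
```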
https://www.cableizer.com/documentation/sigma_j/ | Specific isobaric volumetric heat capacity of jacket material per K at 20°C
The values are taken from standard IEC 853-2 Ed.1.0 plus the following additions:
• Values for polyolefin copolymers (POC) and chlorosulphonated PE (CSM) are assumed to be similar to polyethylene (PE).
• Values for polypropylene (PP) and silicone rubber (SiR) are taken from professionalplastics.com
Symbol
$\sigma_{\mathrm{j}}$
Unit
J/(K·m³)
Related
$M_{\mathrm{j}}$
Used in
$Q_{\mathrm{j}}$
Choices
Material  Value
PE        2400000.0
HDPE      2400000.0
PVC       1700000.0
POC       2400000.0
PP        1800000.0
SiR       1400000.0
FRNC      2400000.0
CR        2000000.0
CSM       2400000.0
CJ        2000000.0
RSP       2000000.0
BIT       2000000.0 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6199545860290527, "perplexity": 24133.524825237993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583739170.35/warc/CC-MAIN-20190120204649-20190120230649-00284.warc.gz"}
https://arxiv.org/abs/1609.04232 | # Title:A Supremum-Norm Based Test for the Equality of Several Covariance Functions
Abstract: In this paper, we propose a new test for the equality of several covariance functions for functional data. Its test statistic is taken as the supremum value of the sum of the squared differences between the estimated individual covariance functions and the pooled sample covariance function, hoping to obtain a more powerful test than some existing tests for the same testing problem. The asymptotic random expression of this test statistic under the null hypothesis is obtained. To approximate the null distribution of the proposed test statistic, we describe a parametric bootstrap method and a non-parametric bootstrap method. The asymptotic random expression of the proposed test is also studied under a local alternative and it is shown that the proposed test is root-$n$ consistent. Intensive simulation studies are conducted to demonstrate the finite sample performance of the proposed test and it turns out that the proposed test is indeed more powerful than some existing tests when functional data are highly correlated. The proposed test is illustrated with three real data examples.
Comments: arXiv admin note: text overlap with arXiv:1609.04231
Subjects: Methodology (stat.ME); Applications (stat.AP); Computation (stat.CO)
Cite as: arXiv:1609.04232 [stat.ME] (or arXiv:1609.04232v1 [stat.ME] for this version)
## Submission history
From: Jia Guo [view email]
[v1] Wed, 14 Sep 2016 12:15:14 UTC (588 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7499337792396545, "perplexity": 766.6261412935199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00447.warc.gz"} |
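A toy sketch of the statistic described in the abstract (my construction under simplifying assumptions: curves observed on a common grid, the supremum taken over grid points; function and variable names are not from the paper):

```python
import numpy as np

def sup_norm_stat(cov_list, ns):
    # Pool the estimated covariance functions with sample-size weights,
    # then take the supremum (over the grid) of the summed squared differences.
    pooled = sum(n * c for n, c in zip(ns, cov_list)) / sum(ns)
    diff = sum((c - pooled) ** 2 for c in cov_list)
    return float(diff.max())

rng = np.random.default_rng(0)
X1 = rng.standard_normal((30, 20))  # 30 curves on a 20-point grid
X2 = rng.standard_normal((40, 20))  # 40 curves on the same grid
covs = [np.cov(X, rowvar=False) for X in (X1, X2)]
print(sup_norm_stat(covs, [30, 40]) > 0)  # True: sample covariances differ
```

The null distribution of such a statistic would then be approximated by the parametric or non-parametric bootstrap, as the abstract describes.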
https://www.physicsforums.com/threads/calculating-the-descent-of-a-rocket-landing.682949/ | # Aerospace Calculating the descent of a rocket landing
1. Apr 3, 2013
### alphasection
Hello, I've been working on calculating the time that a rocket would take to reach ground level safely, but some things are getting me confused. Also, I would like to create a formula for this so I can just input values no matter what planet or celestial body you're landing on. To put it in clearer terms:
I want to know how long a rocket would take to land safely (ETA) at a landing pad going vertically down, taking into account gravity and the upward thrust that a rocket engine would create.
Thank you. If I'm missing something, then tell me.
2. Apr 3, 2013
### Staff: Mentor
Welcome to PF!
There are too many variables, not to mention the variable of personal taste, to create such an equation.
3. Apr 3, 2013
### Staff: Mentor
If you can neglect air drag, it is easier to consider the time-reversed process: A launching rocket.
You have to add several assumptions about the rocket to calculate anything.
4. Apr 3, 2013
### alphasection
Thank you!
Hmm... You're right that there are too many variables. This question was just to understand how much thrust is needed to lift an object at whatever speed I want, or to decelerate any object with a rocket engine fixed to it. OK, a (hopefully) simpler question:
How much thrust is needed to propel 180 kg to Mach 1 at sea level? When I say propel, I mean vertically up. I tried to compute the force the object exerts, which came out to be ~1773.8 N. If I push upwards, in the opposite direction of gravity, with a rocket at this force, I will not gain any altitude (am I right about this?), so I want to know how much thrust I need to reach certain speeds. For instance, if I want to go 5 m/s weighing only 180 kg, how much thrust would I need?
Last edited: Apr 3, 2013
5. Apr 4, 2013
### Staff: Mentor
That is not a meaningful question - thrust is related to acceleration, not to speed.
If you have 1774N of thrust (in vertical direction), your velocity is constant - it can be zero, it can be supersonic, or anything else. If you have more thrust, the rocket can accelerate, and reach any velocity if it has enough time. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8131756782531738, "perplexity": 968.7055096686831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865679.51/warc/CC-MAIN-20180523141759-20180523161759-00266.warc.gz"} |
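The mentor's point can be put in one line of arithmetic (a sketch; the 180 kg mass echoes the thread, the function name and thrust values are mine): net acceleration is a = T/m - g, so thrust equal to the weight holds any constant velocity, and only surplus thrust accelerates the rocket.

```python
def net_acceleration(thrust_n, mass_kg=180.0, g=9.81):
    # Net vertical acceleration in m/s^2, ignoring drag and propellant mass loss.
    return thrust_n / mass_kg - g

hover = 180.0 * 9.81                  # thrust balancing weight: 1765.8 N
print(net_acceleration(hover))        # ~0: velocity stays constant (any value)
print(net_acceleration(2000.0) > 0)   # True: surplus thrust -> upward acceleration
```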
https://www.cp3-origins.dk/news-and-press/news/marie-curie-fellowship-to-justus-tobias-tsang/ | # Marie Curie Fellowship to Justus Tobias Tsang
CP3-Origins postdoc Justus Tobias Tsang has been awarded the prestigious Marie Skłodowska-Curie Individual Fellowship for a project done together with Associate Professor Michele Della Morte titled Beyond Colours and Flavours on Supercomputers
We propose precision lattice QCD computations aiding the search for new physics beyond the Standard Model. In particular, we will address currently observed anomalies such as those displayed in the anomalous magnetic moment of the muon and lepton flavour universality tests in semi-leptonic B meson decays. We will further supplement searches for new physics through the computation of hadronic inputs, which combined with experimental results allow the determination of elements of the Cabibbo-Kobayashi-Maskawa matrix, thereby providing precision tests of the standard model.
We will compute a large set of hadronic form factors of semi-leptonic B(s) and D(s) meson decays including pseudo-scalar and vector final states. State-of-the-art computations of these have two major shortcomings: the use of effective theories for the b-quark, and the treatment of vector final states as QCD-stable particles. We will eliminate the former of these by utilising very fine lattices which allow for the direct simulation of the b-quark near its physical mass. The latter will be addressed by merging specialist expertise in the computation of such form factors with that of hadronic scattering processes. This will result in the first calculation that takes the unstable nature of the vector final states in QCD into account. This is of paramount importance in order to address the observed anomalies in the B to D* and B to K* decays. We will compute the full basis of possible currents thereby providing standard model predictions as well as inputs for tests of beyond the standard model theories. Further, we will use the approach of massive QED in lattice QCD computations to provide an independent cross check of the electromagnetic corrections to the hadronic vacuum polarisation. This work will provide vital inputs in searches for physics beyond the standard model which are needed to fully exploit large ongoing experiments at the Large Hadron Collider and at facilities in Japan and the US.
For the official announcement, see here. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693761825561523, "perplexity": 861.2639610326104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522133.33/warc/CC-MAIN-20210120213234-20210121003234-00367.warc.gz"} |
https://www.physicsforums.com/threads/please-help-with-oxidation-mecanism.53428/ | 1. Nov 20, 2004
### cuti_pie75
Hi,
It's been quite some time now that I've spent figuring out the mechanism of the oxidation of toluene with KMnO4 to give benzoic acid. My problem is I don't know exactly how the MnO4 attacks the hydrogen on the aliphatic chain. So, if anyone can help me figure out the first steps, i.e. which O on MnO4 will attack, that'll be great!
2. Nov 21, 2004
### chem_tr
Hello
In this process, either radicalic or cationic species form, as seen in a closely related reaction:
$$Ar_2CH_2+CrO_3\longrightarrow Ar_2C=O$$
$$rate~determining~step:Ar_2CH_2 \longrightarrow Ar_2CH \cdot ~or~Ar_2CH_2 \longrightarrow Ar_2CH^+$$
3. Nov 21, 2004
### movies
I think chem_tr is right. The first step is a hydrogen atom abstraction (that is, a proton plus an electron) to give a benzyl radical. That can then combine with a water molecule (water is typically the solvent for these oxidations) to give benzyl alcohol. The oxidation from the alcohol to the acid is much more straight forward and probably doesn't involve radicals. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.786064624786377, "perplexity": 3169.7438648467387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423812.87/warc/CC-MAIN-20170721222447-20170722002447-00220.warc.gz"} |
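For reference (my addition, not from the thread), the overall stoichiometry often quoted for this oxidation under neutral/basic conditions is

$$C_6H_5CH_3 + 2KMnO_4 \longrightarrow C_6H_5COOK + 2MnO_2 + KOH + H_2O$$

with benzoic acid then liberated from the potassium benzoate on acidic work-up.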
https://physics.stackexchange.com/questions/153178/infinite-dimensional-vector-spaces-vs-the-dual-space | # Infinite dimensional vector spaces vs. the dual space
I just happened across this over on Math Overflow. It references the following theorem from linear algebra:
A vector space has the same dimension as its dual if and only if it is finite dimensional.
I would like to ask a physical question using the infinite square well (ISW) in quantum mechanics as motivation. For the ISW we obtain $$\psi_n=A_n\sin(\frac{n\pi x}{a})$$ as the eigenfunctions of the Hamiltonian. Here $n=1,2,3,4...$ enumerates the states. If I understand correctly this is an infinite dimensional vector space, because the $\psi_n$'s form an infinitely large basis (ie there is no largest value of $n$). If the dual space is the set of functions $\psi_n^*$ (which I think it is) how can the vector space and the dual space have different dimensions?
• Can you cite the post? There is more to this than what you say: the "theorem" as it stands is wrong: any separable and infinite dimensional Hilbert space being a counterexample, if "same dimension" is to be read as "having cardinally equivalent basis sets" and if dual is to be read as "topological dual" (i.e. set of all cts linear functionals on the space) – Selene Routley Dec 14 '14 at 8:20
• They are referring to the "algebraic dual", i.e. the set of all linear maps (not necessarily continuous) from the vector space to the field where it is defined. This is a larger set than the topological dual. The Riesz's Representation Theorem holds only for topological dual pairs. – yuggib Dec 14 '14 at 9:34
• @yuggib That's what I wanted to make clear. Anyhow, my answer here may be relevant – Selene Routley Dec 14 '14 at 12:29
• Also note that the $\psi_n$ of your example do not form a basis in the algebraic sense (which is the context of the theorem): a basis is a linearly algebraic subset such that every element can be written as a finite linear combination. The cardinality of a basis in this sense is the same for all bases and is by definition the dimension. – doetoe Dec 14 '14 at 23:37
## 2 Answers
There are two concepts of duality for vector spaces.
One is the algebraic dual that is the set of all linear maps. Precisely, given a vector space $V$ over a field $\mathbb{K}$, the algebraic dual $V_{alg}^*$ is the set of all linear functions $\phi:V\to \mathbb{K}$. This is a subset of $\mathbb{K}^V$, the set of all functions from $V$ to $\mathbb{K}$. The proof you can see on math overflow uses, roughly speaking, the fact that the cardinality of $\mathbb{K}^V$ is strictly larger than the cardinality of $\mathbb{K}$ if $V$ is infinite dimensional and has at least the same cardinality as $\mathbb{K}$.
So for algebraic duals, the dual of any infinite vector space has bigger dimension than the original space.
The other concept is the topological dual, that can be defined only on topological vector spaces (because a notion of continuity is needed). Given a topological vector space $T$, the topological dual $T_{top}^*$ is the set of all continuous linear functionals (continuous w.r.t. the topology of $T$). It is a proper subset of the algebraic dual, i.e. $T_{top}^*\subset T_{alg}^*$.
For topological duals, the restriction to continuous functionals makes the previous statement false (i.e. there exist infinite dimensional topological vector spaces whose topological dual has the same dimension of the original space).
The usual examples are Hilbert spaces, where the Riesz representation theorem holds (see my comment above): any object of the topological dual $H^*_{top}$ of a Hilbert space $H$ can be identified via isomorphism with an element of $H$. So a Hilbert space and its dual are the "same".
Note however that the topological dual is always thought to be "bigger (or maybe equal)" than the original space. I am very non-precise here, but I think the following example clarifies. Think of the distributions $\mathscr{S}'(\mathbb{R})$. This is the topological dual of the functions of rapid decrease $\mathscr{S}(\mathbb{R})$. Any $f\in \mathscr{S}$ is isomorphic to a distribution in $\mathscr{S}'$, but the converse is obviously not true: there are distributions that are not functions (the Dirac's delta), and in general any $L^p$-space is thought as a subset of $\mathscr{S}'$ (so $\mathscr{S}'$ is quite "big").
Restricting ourselves to just vector spaces without any extra structure, the theorem is true.
One way to see this is to note that any member $f$ of the dual space is uniquely defined by the value it returns acting on the basis $\{\psi_n\}$, say $f(\psi_n) = z_n$ for complex numbers $z_n$. Then $V^*$ is isomorphic to $\mathbb{C}^\mathbb{N}$, the set of sequences of complex numbers. It is a well-known fact that $\mathbb{R}^\mathbb{N}$ does not have a countable basis as a vector space over $\mathbb{R}$, and it is a simple matter to extend this to $\mathbb{C}^\mathbb{N}$ not having a countable basis over $\mathbb{C}$. If this doesn't seem intuitive (e.g. you jump to thinking of the "basis" $\{(1,0,0,\ldots), (0,1,0,\ldots), \ldots\}$), the key is that only finite sums are allowed in raw vector spaces; what would it even mean to add an infinite number of vectors without a notion of convergence?
A more physically-inspired argument against the idea that complex conjugation yields (a basis for) all members of $V^*$ is to consider delta-functions. For some $x_0$ in the interval, consider "$\delta(x-x_0)$," the "function" that integrates against $f \in V$ to return $f(x_0)$. In actuality, $\delta$ is a perfectly valid member of $V^*$, defined by $\delta(\psi_n) = \psi_n(x_0)$. Suppose $\delta = a_1 \psi_{n_1}^* + \cdots + a_k \psi_{n_k}^*$ could be written. But then $a_1^* \psi_{n_1} + \cdots + a_k^* \psi_{n_k}$ would be a perfectly well-behaved finite sum of sines that was the complex conjugate of the delta "function" -- a clearly nonsensical result. Besides, $$(a_1 \psi_{n_1}^* + \cdots + a_k \psi_{n_k}^*)(\psi_n) = \sum_{j=1}^{k} a_j \delta_{n,n_j},$$ which is $0$ for all but finitely many $n$, whereas $\psi_n(x_0)$ can be nonzero for every $n$ (choose $x_0$ to be an irrational multiple of $a$). | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9711233377456665, "perplexity": 135.78709685339908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890157.10/warc/CC-MAIN-20200706073443-20200706103443-00376.warc.gz"}
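A small numerical illustration of the closing argument (my sketch; it assumes the normalization $\psi_m^*(\psi_n) = \delta_{mn}$, and the cutoff $n \le 50$ and coefficient choices are arbitrary): the delta functional is nonzero on every basis vector when $x_0$ is an irrational multiple of $a$, while a finite combination of dual basis functionals vanishes on all but finitely many.

```python
import math

a = 1.0
x0 = a / math.sqrt(2)  # an irrational multiple of a

# Delta functional on the basis: delta(psi_n) = sin(n*pi*x0/a), nonzero for every n here.
delta_values = [math.sin(n * math.pi * x0 / a) for n in range(1, 51)]
print(all(abs(v) > 1e-6 for v in delta_values))  # True

# Finite combination a1*psi_3^* + a2*psi_7^*: nonzero on only two basis vectors.
coeffs = {3: 1.0, 7: -2.0}
functional_values = [coeffs.get(n, 0.0) for n in range(1, 51)]
print(sum(1 for v in functional_values if v != 0.0))  # 2
```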
http://mathcs.albion.edu/scripts/mystical2bib.php?year=2017&month=15&day=30&item=a | Albion College Mathematics and Computer Science Colloquium
Title: Game. SET. Line
Speaker: David Austin, Professor of Mathematics, Grand Valley State University, Allendale, MI
Abstract: SET is a simple card game based on pattern recognition that can challenge both children and adults. It also has a surprisingly rich underlying mathematical structure that ties together ideas from a range of subjects including geometry, combinatorics, and linear algebra. In this talk, we will consider some simple questions that arise when playing SET and investigate the mathematical ideas that provide answers. We will also describe some recent and deep work from last year that gives a surprising result about a generalization of SET.
Location: Palenske 227
Date: 3/30/2017
Time: 3:30 PM
@abstract{MCS:Colloquium:DavidAustin:2017:3:30,
author = "{David Austin}",
title = "{Game. SET. Line}",
address = "{Albion College Mathematics and Computer Science Colloquium}",
month = "{30 March}",
year = "{2017}"
} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4177343547344208, "perplexity": 2847.1118364756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860089.11/warc/CC-MAIN-20180618051104-20180618071104-00254.warc.gz"} |
http://perimeterinstitute.ca/fr/video-library/collection/challenges-early-universe-cosmology-2011 | Le contenu de cette page n’est pas disponible en français. Veuillez nous en excuser.
# Challenges for Early Universe Cosmology - 2011
## The Arrow of Time in an Eternal Universe
Saturday, July 16, 2011
If we imagine that the universe is truly eternal, special challenges arise for attempts to solve cosmological fine-tuning problems, especially the low entropy of the early universe. If the space of states is finite, the universe should spend most of its time near equilibrium. If the space of states is infinite, it becomes difficult to understand why our universe was in a particular low-entropy state.
I will discuss approaches to addressing this problem in a model-independent fashion.
## Entropy in the Universe
Saturday, July 16, 2011
A positive cosmological constant allows arbitrarily many different quantum states, but apparently only if there can be big bangs and/or big crunches. Without any big bang or big crunch, the entropy may be limited by the Gibbons-Hawking entropy of pure de Sitter, and the matter entropy might be even more limited by a value roughly the three-fourths power of the Gibbons-Hawking entropy. A classical analogue of an upper limit on the entropy is the finite canonical measure for nonsingular cosmologies.
## What Happens When Entropy Decreases
Saturday, July 16, 2011
Closed systems never evolve to lower entropy states -- except when they do, which is if one waits a time that is exponential in the entropy change. Thus macroscopic decreases in entropy are 'never' observed. Yet in cosmology there are eternal systems in which downward entropy fluctuations of any magnitude eventually happen. What is the nature of such fluctuations?
## Observable signatures of anisotropic bubble nucleation
Friday, July 15, 2011
Our universe may have formed via bubble nucleation in an eternally-inflating background. Furthermore, the background may have a compact dimension---the modulus of which tunnels out of a metastable minimum during bubble nucleation---which subsequently grows to become one of our three large spatial dimensions. We discuss some potential observational signatures of this scenario.
## Galilean Genesis
Friday, July 15, 2011
We propose a novel cosmological scenario, in which standard inflation is replaced by an expanding phase with a drastic violation of the Null Energy Condition (NEC): $\dot{H} \gg H^2$. The model is based on the recently introduced Galileon theories, which allow NEC violating solutions without instabilities. The unperturbed solution describes a Universe that is asymptotically Minkowski in the past, expands with increasing energy density until it exits the regime of validity of the effective field theory and reheats.
## Infrared Challenges for Inflation
Friday, July 15, 2011
I will review some recent work on infrared issues for scalar fields in exact and quasi de Sitter space. Renewed interest in this topic has been driven by the observational potential for a more accurate determination of statistics of the primordial curvature perturbations, especially non-Gaussianity. Interestingly, the resulting questions are not only relevant for mapping inflationary models to observation but also link directly to more fundamental questions about the initial state, eternal inflation, and the long time dynamics of interacting quantum fields in curved space.
## A Simple Harmonic Universe
Friday, July 15, 2011
We explore simple but novel solutions of general relativity which, classically, approximate cosmologies cycling through an infinite set of "bounces". These solutions require curvature K=+1, and are supported by a negative cosmological term and matter with -1 < w < -1/3. They can be studied within the regime of validity of general relativity. We argue that quantum mechanically, particle production leads eventually to a departure from the regime of validity of semiclassical general relativity, likely yielding a singular crunch.
## Holographic Cosmology Inflation and Entropy
Friday, July 15, 2011
I provide a mathematical model of holographic cosmology whose coarse grained description is that of a homogeneous isotropic, flat universe, which makes a transitions from an FRW to an eternal de Sitter regime. Based on this model, I suggest some heuristic ideas which explain the low initial entropy of the universe and may provide a description of an inflationary era with small fluctuations.
## Probability and Anthropic Reasoning in Small, Large, and Infinite Universes
Friday, July 15, 2011
I will argue that anthropic reasoning is unnecessary or misleading when the universe/multiverse is small enough that another observer with exactly your memories is unlikely to exist. Instead, one can evaluate theories or make predictions in the standard Bayesian way, based on the conditional probability of something unknown given all that you do know. Things are not so clear when the universe is large enough that all competing theories predict that an observer with your exact memories exists with probability close to one.
## TBA
Friday, July 15, 2011
TBA
Scientific Areas: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6235341429710388, "perplexity": 1633.626622843417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330913.72/warc/CC-MAIN-20190826000512-20190826022512-00082.warc.gz"} |
https://physicsoverflow.org/34765/recommendation-meson-and-hadron-spectrum-from-lattice-qcd | # [Recommendation] Meson and hadron spectrum from Lattice QCD
+ 3 like - 0 dislike
107 views
The proton mass can be calculated from lattice QCD to 1% accuracy; see Physical Nucleon Properties from Lattice QCD. A survey paper by Fodor and Hoelbling, Light Hadron Masses from Lattice QCD, discusses the state of the art of hadron mass prediction from lattice QCD (also to 1% accuracy). For the meson spectrum, see Toward the excited isoscalar meson spectrum from lattice QCD. Extrapolation to physical parameter values, a lot of attention to detail, and a lot of computer time are needed to make this work.
Nonlattice techniques for mass prediction of mesons and hadrons include the Schwinger-Dyson equations; see, e.g., Masses of ground and excited-state hadrons.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.43676114082336426, "perplexity": 2374.394194898771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249500704.80/warc/CC-MAIN-20190223102155-20190223124155-00496.warc.gz"} |
https://www.physicsforums.com/threads/generalizations-of-certain-vectors.413475/ | # Generalizations of certain vectors
1. ### jfy4
Hello,
I have a few questions about generalizing a few 4-vectors into tensors based on physical and intuitive arguments.
The first question I have is whether I can form a stress-energy-momentum tensor out of the energy-momentum wave 4-vector $$\hbar k_{\alpha}$$.
The formation of the stress-energy tensor in GR came about by associating the energy-momentum 4-vector with a 3-volume. What if I perform that same association with the wave 4-vector?
The result I obtain is still of course a stress-energy-momentum tensor when multiplied by $$\hbar$$; however, by itself it is simply a wave tensor, with the $$k_{0i}$$ components representing momentum and energy, and the $$k_{ij}$$ components representing the flux of $$k$$, i.e. $$\frac{dk}{dt}$$ across $$dA$$.
Is this result generally known but unaccepted, or is it physically absurd and hence dismissed?
The second part of my post hinges on the former idea, however I'll still write it for fun.
The wave 4-vector is commonly contracted with the coordinate 4-vector. However, if the wave tensor exists, it will need to be contracted with a coordinate tensor. Using dimensional analysis, the spatial components $$x_{ij}$$ of the coordinate tensor $$x^{\alpha\beta}$$ are 4-volumes, each the product of two 2-surfaces, one a spatial surface and the other a temporal surface, with the diagonal elements being a typical 4-volume from GR, $$v=\int \sqrt{-g}d^{4}x$$.
Pending the first part of the post, is this an acceptable generalization of the coordinate 4-vector into a tensor? any help would be appreciated. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9191272854804993, "perplexity": 649.8110908815657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927863.72/warc/CC-MAIN-20150521113207-00117-ip-10-180-206-219.ec2.internal.warc.gz"} |
http://isabelle.in.tum.de/repos/isabelle/file/7626cb4e1407/doc-src/TutorialI/ToyList/document/ToyList.tex | doc-src/TutorialI/ToyList/document/ToyList.tex
author nipkow Tue Oct 17 13:28:57 2000 +0200 (2000-10-17) changeset 10236 7626cb4e1407 parent 10187 0376cccd9118 child 10299 8627da9246da permissions -rw-r--r--
*** empty log message ***
1 %
2 \begin{isabellebody}%
3 \def\isabellecontext{ToyList}%
4 \isacommand{theory}\ ToyList\ {\isacharequal}\ PreList{\isacharcolon}%
5 \begin{isamarkuptext}%
6 \noindent
7 HOL already has a predefined theory of lists called \isa{List} ---
8 \isa{ToyList} is merely a small fragment of it chosen as an example. In
9 contrast to what is recommended in \S\ref{sec:Basic:Theories},
10 \isa{ToyList} is not based on \isa{Main} but on \isa{PreList}, a
11 theory that contains pretty much everything but lists, thus avoiding
12 ambiguities caused by defining lists twice.%
13 \end{isamarkuptext}%
14 \isacommand{datatype}\ {\isacharprime}a\ list\ {\isacharequal}\ Nil\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\isacharparenleft}{\isachardoublequote}{\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}{\isacharparenright}\isanewline
15 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\isacharbar}\ Cons\ {\isacharprime}a\ {\isachardoublequote}{\isacharprime}a\ list{\isachardoublequote}\ \ \ \ \ \ \ \ \ \ \ \ {\isacharparenleft}\isakeyword{infixr}\ {\isachardoublequote}{\isacharhash}{\isachardoublequote}\ {\isadigit{6}}{\isadigit{5}}{\isacharparenright}%
16 \begin{isamarkuptext}%
17 \noindent
18 The datatype\index{*datatype} \isaindexbold{list} introduces two
19 constructors \isaindexbold{Nil} and \isaindexbold{Cons}, the
20 empty~list and the operator that adds an element to the front of a list. For
21 example, the term \isa{Cons True (Cons False Nil)} is a value of
22 type \isa{bool\ list}, namely the list with the elements \isa{True} and
23 \isa{False}. Because this notation becomes unwieldy very quickly, the
24 datatype declaration is annotated with an alternative syntax: instead of
25 \isa{Nil} and \isa{Cons x xs} we can write
26 \isa{{\isacharbrackleft}{\isacharbrackright}}\index{$HOL2list@\texttt{[]}|bold} and
27 \isa{x\ {\isacharhash}\ xs}\index{$HOL2list@\texttt{\#}|bold}. In fact, this
28 alternative syntax is the standard syntax. Thus the list \isa{Cons True
29 (Cons False Nil)} becomes \isa{True\ {\isacharhash}\ False\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}}. The annotation
30 \isacommand{infixr}\indexbold{*infixr} means that \isa{{\isacharhash}} associates to
31 the right, i.e.\ the term \isa{x\ {\isacharhash}\ y\ {\isacharhash}\ z} is read as \isa{x\ {\isacharhash}\ {\isacharparenleft}y\ {\isacharhash}\ z{\isacharparenright}}
32 and not as \isa{{\isacharparenleft}x\ {\isacharhash}\ y{\isacharparenright}\ {\isacharhash}\ z}.
33
34 \begin{warn}
35 Syntax annotations are a powerful but completely optional feature. You
36 could drop them from theory \isa{ToyList} and go back to the identifiers
37 \isa{Nil} and \isa{Cons}. However, lists are such a
38 central datatype
39 that their syntax is highly customized. We recommend that novices should
40 not use syntax annotations in their own theories.
41 \end{warn}
42 Next, two functions \isa{app} and \isaindexbold{rev} are declared:%
43 \end{isamarkuptext}%
44 \isacommand{consts}\ app\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequote}\ \ \ {\isacharparenleft}\isakeyword{infixr}\ {\isachardoublequote}{\isacharat}{\isachardoublequote}\ {\isadigit{6}}{\isadigit{5}}{\isacharparenright}\isanewline
45 \ \ \ \ \ \ \ rev\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a\ list{\isachardoublequote}%
46 \begin{isamarkuptext}%
47 \noindent
48 In contrast to ML, Isabelle insists on explicit declarations of all functions
49 (keyword \isacommand{consts}). (Apart from the declaration-before-use
50 restriction, the order of items in a theory file is unconstrained.) Function
51 \isa{op\ {\isacharat}} is annotated with concrete syntax too. Instead of the prefix
52 syntax \isa{app xs ys} the infix
53 \isa{xs\ {\isacharat}\ ys}\index{$HOL2list@\texttt{\at}|bold} becomes the preferred
54 form. Both functions are defined recursively:%
55 \end{isamarkuptext}%
56 \isacommand{primrec}\isanewline
57 {\isachardoublequote}{\isacharbrackleft}{\isacharbrackright}\ {\isacharat}\ ys\ \ \ \ \ \ \ {\isacharequal}\ ys{\isachardoublequote}\isanewline
58 {\isachardoublequote}{\isacharparenleft}x\ {\isacharhash}\ xs{\isacharparenright}\ {\isacharat}\ ys\ {\isacharequal}\ x\ {\isacharhash}\ {\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}{\isachardoublequote}\isanewline
59 \isanewline
60 \isacommand{primrec}\isanewline
61 {\isachardoublequote}rev\ {\isacharbrackleft}{\isacharbrackright}\ \ \ \ \ \ \ \ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}{\isachardoublequote}\isanewline
62 {\isachardoublequote}rev\ {\isacharparenleft}x\ {\isacharhash}\ xs{\isacharparenright}\ \ {\isacharequal}\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}x\ {\isacharhash}\ {\isacharbrackleft}{\isacharbrackright}{\isacharparenright}{\isachardoublequote}%
63 \begin{isamarkuptext}%
64 \noindent
65 The equations for \isa{op\ {\isacharat}} and \isa{rev} hardly need comments:
66 \isa{op\ {\isacharat}} appends two lists and \isa{rev} reverses a list. The keyword
67 \isacommand{primrec}\index{*primrec} indicates that the recursion is of a
68 particularly primitive kind where each recursive call peels off a datatype
69 constructor from one of the arguments. Thus the
70 recursion always terminates, i.e.\ the function is \bfindex{total}.
71
72 The termination requirement is absolutely essential in HOL, a logic of total
73 functions. If we were to drop it, inconsistencies would quickly arise: the
74 definition ``$f(n) = f(n)+1$'' immediately leads to $0 = 1$ by subtracting
75 $f(n)$ on both sides.
76 % However, this is a subtle issue that we cannot discuss here further.
77
78 \begin{warn}
79 As we have indicated, the desire for total functions is not a gratuitously
80 imposed restriction but an essential characteristic of HOL. It is only
81 because of totality that reasoning in HOL is comparatively easy. More
82 generally, the philosophy in HOL is not to allow arbitrary axioms (such as
83 function definitions whose totality has not been proved) because they
84 quickly lead to inconsistencies. Instead, fixed constructs for introducing
85 types and functions are offered (such as \isacommand{datatype} and
86 \isacommand{primrec}) which are guaranteed to preserve consistency.
87 \end{warn}
88
89 A remark about syntax. The textual definition of a theory follows a fixed
90 syntax with keywords like \isacommand{datatype} and \isacommand{end} (see
91 Fig.~\ref{fig:keywords} in Appendix~\ref{sec:Appendix} for a full list).
92 Embedded in this syntax are the types and formulae of HOL, whose syntax is
93 extensible, e.g.\ by new user-defined infix operators
94 (see~\ref{sec:infix-syntax}). To distinguish the two levels, everything
95 HOL-specific (terms and types) should be enclosed in
96 \texttt{"}\dots\texttt{"}.
97 To lessen this burden, quotation marks around a single identifier can be
98 dropped, unless the identifier happens to be a keyword, as in%
99 \end{isamarkuptext}%
100 \isacommand{consts}\ {\isachardoublequote}end{\isachardoublequote}\ {\isacharcolon}{\isacharcolon}\ {\isachardoublequote}{\isacharprime}a\ list\ {\isasymRightarrow}\ {\isacharprime}a{\isachardoublequote}%
101 \begin{isamarkuptext}%
102 \noindent
103 When Isabelle prints a syntax error message, it refers to the HOL syntax as
104 the \bfindex{inner syntax} and the enclosing theory language as the \bfindex{outer syntax}.
105
106
107 \section{An introductory proof}
108 \label{sec:intro-proof}
109
110 Assuming you have input the declarations and definitions of \texttt{ToyList}
111 presented so far, we are ready to prove a few simple theorems. This will
112 illustrate not just the basic proof commands but also the typical proof
113 process.
114
115 \subsubsection*{Main goal: \isa{rev{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}}
116
117 Our goal is to show that reversing a list twice produces the original
118 list. The input line%
119 \end{isamarkuptext}%
120 \isacommand{theorem}\ rev{\isacharunderscore}rev\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}rev{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs{\isachardoublequote}%
121 \begin{isamarkuptxt}%
122 \index{*theorem|bold}\index{*simp (attribute)|bold}
123 \begin{itemize}
124 \item
125 establishes a new theorem to be proved, namely \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs},
126 \item
127 gives that theorem the name \isa{rev{\isacharunderscore}rev} by which it can be
128 referred to,
129 \item
130 and tells Isabelle (via \isa{{\isacharbrackleft}simp{\isacharbrackright}}) to use the theorem (once it has been
131 proved) as a simplification rule, i.e.\ all future proofs involving
132 simplification will replace occurrences of \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}} by
133 \isa{xs}.
134
135 The name and the simplification attribute are optional.
136 \end{itemize}
137 Isabelle's response is to print
138 \begin{isabelle}
139 proof(prove):~step~0\isanewline
140 \isanewline
141 goal~(theorem~rev\_rev):\isanewline
142 rev~(rev~xs)~=~xs\isanewline
143 ~1.~rev~(rev~xs)~=~xs
144 \end{isabelle}
145 The first three lines tell us that we are 0 steps into the proof of
146 theorem \isa{rev{\isacharunderscore}rev}; for compactness reasons we rarely show these
147 initial lines in this tutorial. The remaining lines display the current
148 proof state.
149 Until we have finished a proof, the proof state always looks like this:
150 \begin{isabelle}
151 $G$\isanewline
152 ~1.~$G\sb{1}$\isanewline
153 ~~\vdots~~\isanewline
154 ~$n$.~$G\sb{n}$
155 \end{isabelle}
156 where $G$
157 is the overall goal that we are trying to prove, and the numbered lines
158 contain the subgoals $G\sb{1}$, \dots, $G\sb{n}$ that we need to prove to
159 establish $G$. At \isa{step\ {\isadigit{0}}} there is only one subgoal, which is
160 identical with the overall goal. Normally $G$ is constant and only serves as
161 a reminder. Hence we rarely show it in this tutorial.
162
163 Let us now get back to \isa{rev\ {\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs}. Properties of recursively
164 defined functions are best established by induction. In this case there is
165 not much choice except to induct on \isa{xs}:%
166 \end{isamarkuptxt}%
167 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}%
168 \begin{isamarkuptxt}%
169 \noindent\index{*induct_tac}%
170 This tells Isabelle to perform induction on variable \isa{xs}. The suffix
171 \isa{tac} stands for ``tactic'', a synonym for ``theorem proving function''.
172 By default, induction acts on the first subgoal. The new proof state contains
173 two subgoals, namely the base case (\isa{Nil}) and the induction step
174 (\isa{Cons}):
175 \begin{isabelle}
176 ~1.~rev~(rev~[])~=~[]\isanewline
177 ~2.~{\isasymAnd}a~list.~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev(a~\#~list))~=~a~\#~list
178 \end{isabelle}
179
180 The induction step is an example of the general format of a subgoal:
181 \begin{isabelle}
182 ~$i$.~{\indexboldpos{\isasymAnd}{$IsaAnd}}$x\sb{1}$~\dots~$x\sb{n}$.~{\it assumptions}~{\isasymLongrightarrow}~{\it conclusion}
183 \end{isabelle}
184 The prefix of bound variables \isasymAnd$x\sb{1}$~\dots~$x\sb{n}$ can be
185 ignored most of the time, or simply treated as a list of variables local to
186 this subgoal. Their deeper significance is explained in Chapter~\ref{ch:Rules}.
187 The {\it assumptions} are the local assumptions for this subgoal and {\it
188 conclusion} is the actual proposition to be proved. Typical proof steps
189 that add new assumptions are induction or case distinction. In our example
190 the only assumption is the induction hypothesis \isa{rev\ {\isacharparenleft}rev\ list{\isacharparenright}\ {\isacharequal}\ list}, where \isa{list} is a variable name chosen by Isabelle. If there
191 are multiple assumptions, they are enclosed in the bracket pair
192 \indexboldpos{\isasymlbrakk}{$Isabrl} and
193 \indexboldpos{\isasymrbrakk}{$Isabrr} and separated by semicolons.
194
195 Let us try to solve both goals automatically:%
196 \end{isamarkuptxt}%
197 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
198 \begin{isamarkuptxt}%
199 \noindent
200 This command tells Isabelle to apply a proof strategy called
201 \isa{auto} to all subgoals. Essentially, \isa{auto} tries to
202 ``simplify'' the subgoals. In our case, subgoal~1 is solved completely (thanks
203 to the equation \isa{rev\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ {\isacharbrackleft}{\isacharbrackright}}) and disappears; the simplified version
204 of subgoal~2 becomes the new subgoal~1:
205 \begin{isabelle}
206 ~1.~\dots~rev(rev~list)~=~list~{\isasymLongrightarrow}~rev(rev~list~@~a~\#~[])~=~a~\#~list
207 \end{isabelle}
208 In order to simplify this subgoal further, a lemma suggests itself.%
209 \end{isamarkuptxt}%
210 %
211 \isamarkupsubsubsection{First lemma: \isa{rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}}}
212 %
213 \begin{isamarkuptext}%
214 After abandoning the above proof attempt\indexbold{abandon
215 proof}\indexbold{proof!abandon} (at the shell level type
216 \isacommand{oops}\indexbold{*oops}) we start a new proof:%
217 \end{isamarkuptext}%
218 \isacommand{lemma}\ rev{\isacharunderscore}app\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}{\isachardoublequote}%
219 \begin{isamarkuptxt}%
220 \noindent The keywords \isacommand{theorem}\index{*theorem} and
221 \isacommand{lemma}\indexbold{*lemma} are interchangeable and merely indicate
222 the importance we attach to a proposition. In general, we use the words
223 \emph{theorem}\index{theorem} and \emph{lemma}\index{lemma} pretty much
224 interchangeably.
225
226 There are two variables that we could induct on: \isa{xs} and
227 \isa{ys}. Because \isa{{\isacharat}} is defined by recursion on
228 the first argument, \isa{xs} is the correct one:%
229 \end{isamarkuptxt}%
230 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}%
231 \begin{isamarkuptxt}%
232 \noindent
233 This time not even the base case is solved automatically:%
234 \end{isamarkuptxt}%
235 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
236 \begin{isamarkuptxt}%
237 \begin{isabelle}
238 ~1.~rev~ys~=~rev~ys~@~[]\isanewline
239 ~2. \dots
240 \end{isabelle}
241 Again, we need to abandon this proof attempt and prove another simple lemma first.
242 In the future the step of abandoning an incomplete proof before embarking on
243 the proof of a lemma usually remains implicit.%
244 \end{isamarkuptxt}%
245 %
246 \isamarkupsubsubsection{Second lemma: \isa{xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs}}
247 %
248 \begin{isamarkuptext}%
249 This time the canonical proof procedure%
250 \end{isamarkuptext}%
251 \isacommand{lemma}\ app{\isacharunderscore}Nil{\isadigit{2}}\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}xs\ {\isacharat}\ {\isacharbrackleft}{\isacharbrackright}\ {\isacharequal}\ xs{\isachardoublequote}\isanewline
252 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}\isanewline
253 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
254 \begin{isamarkuptxt}%
255 \noindent
256 leads to the desired message \isa{No\ subgoals{\isacharbang}}:
257 \begin{isabelle}
258 xs~@~[]~=~xs\isanewline
259 No~subgoals!
260 \end{isabelle}
261
262 We still need to confirm that the proof is now finished:%
263 \end{isamarkuptxt}%
264 \isacommand{done}%
265 \begin{isamarkuptext}%
266 \noindent\indexbold{done}%
267 As a result of that final \isacommand{done}, Isabelle associates the lemma just proved
268 with its name. In this tutorial, we sometimes omit to show that final \isacommand{done}
269 if it is obvious from the context that the proof is finished.
270
271 % Instead of \isacommand{apply} followed by a dot, you can simply write
272 % \isacommand{by}\indexbold{by}, which we do most of the time.
273 Notice that in lemma \isa{app{\isacharunderscore}Nil{\isadigit{2}}}
274 (as printed out after the final \isacommand{done}) the free variable \isa{xs} has been
275 replaced by the unknown \isa{{\isacharquery}xs}, just as explained in
276 \S\ref{sec:variables}.
277
278 Going back to the proof of the first lemma%
279 \end{isamarkuptext}%
280 \isacommand{lemma}\ rev{\isacharunderscore}app\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}{\isachardoublequote}\isanewline
281 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}\isanewline
282 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}%
283 \begin{isamarkuptxt}%
284 \noindent
285 we find that this time \isa{auto} solves the base case, but the
286 induction step merely simplifies to
287 \begin{isabelle}
288 ~1.~{\isasymAnd}a~list.\isanewline
289 ~~~~~~~rev~(list~@~ys)~=~rev~ys~@~rev~list~{\isasymLongrightarrow}\isanewline
290 ~~~~~~~(rev~ys~@~rev~list)~@~a~\#~[]~=~rev~ys~@~rev~list~@~a~\#~[]
291 \end{isabelle}
292 Now we need to remember that \isa{{\isacharat}} associates to the right, and that
293 \isa{{\isacharhash}} and \isa{{\isacharat}} have the same priority (namely the \isa{{\isadigit{6}}{\isadigit{5}}}
294 in their \isacommand{infixr} annotation). Thus the conclusion really is
295 \begin{isabelle}
296 ~~~~~(rev~ys~@~rev~list)~@~(a~\#~[])~=~rev~ys~@~(rev~list~@~(a~\#~[]))
297 \end{isabelle}
298 and the missing lemma is associativity of \isa{{\isacharat}}.%
299 \end{isamarkuptxt}%
300 %
301 \isamarkupsubsubsection{Third lemma: \isa{{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharat}\ zs\ {\isacharequal}\ xs\ {\isacharat}\ {\isacharparenleft}ys\ {\isacharat}\ zs{\isacharparenright}}}
302 %
303 \begin{isamarkuptext}%
304 Abandoning the previous proof, the canonical proof procedure%
305 \end{isamarkuptext}%
306 \isacommand{lemma}\ app{\isacharunderscore}assoc\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharat}\ zs\ {\isacharequal}\ xs\ {\isacharat}\ {\isacharparenleft}ys\ {\isacharat}\ zs{\isacharparenright}{\isachardoublequote}\isanewline
307 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}\isanewline
308 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}\isanewline
309 \isacommand{done}%
310 \begin{isamarkuptext}%
311 \noindent
312 succeeds without further ado.
313 Now we can go back and prove the first lemma%
314 \end{isamarkuptext}%
315 \isacommand{lemma}\ rev{\isacharunderscore}app\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}rev{\isacharparenleft}xs\ {\isacharat}\ ys{\isacharparenright}\ {\isacharequal}\ {\isacharparenleft}rev\ ys{\isacharparenright}\ {\isacharat}\ {\isacharparenleft}rev\ xs{\isacharparenright}{\isachardoublequote}\isanewline
316 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}\isanewline
317 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}\isanewline
318 \isacommand{done}%
319 \begin{isamarkuptext}%
320 \noindent
321 and then solve our main theorem:%
322 \end{isamarkuptext}%
323 \isacommand{theorem}\ rev{\isacharunderscore}rev\ {\isacharbrackleft}simp{\isacharbrackright}{\isacharcolon}\ {\isachardoublequote}rev{\isacharparenleft}rev\ xs{\isacharparenright}\ {\isacharequal}\ xs{\isachardoublequote}\isanewline
324 \isacommand{apply}{\isacharparenleft}induct{\isacharunderscore}tac\ xs{\isacharparenright}\isanewline
325 \isacommand{apply}{\isacharparenleft}auto{\isacharparenright}\isanewline
326 \isacommand{done}%
327 \begin{isamarkuptext}%
328 \noindent
329 The final \isacommand{end} tells Isabelle to close the current theory because
330 we are finished with its development:%
331 \end{isamarkuptext}%
332 \isacommand{end}\isanewline
333 \end{isabellebody}%
334 %%% Local Variables:
335 %%% mode: latex
336 %%% TeX-master: "root"
337 %%% End: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161174297332764, "perplexity": 14140.566073586091}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669454.33/warc/CC-MAIN-20191118053441-20191118081441-00184.warc.gz"} |
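The ToyList theory listed above defines list append (`@`) and `rev` by primitive recursion, and proves `rev (rev xs) = xs` via the auxiliary lemmas `rev_app`, `app_Nil2`, and `app_assoc`. As an informal cross-check (in Python rather than Isabelle, so this is a test on sample data, not a proof), the recursion equations can be transcribed directly:

```python
# Primitive-recursive append, transcribed from the ToyList equations:
#   [] @ ys = ys
#   (x # xs) @ ys = x # (xs @ ys)
def app(xs, ys):
    if not xs:
        return ys
    return [xs[0]] + app(xs[1:], ys)

# rev [] = []
# rev (x # xs) = (rev xs) @ (x # [])
def rev(xs):
    if not xs:
        return []
    return app(rev(xs[1:]), [xs[0]])

xs, ys, zs = [1, 2, 3], [4, 5], [6]
assert rev(rev(xs)) == xs                             # theorem rev_rev
assert rev(app(xs, ys)) == app(rev(ys), rev(xs))      # lemma rev_app
assert app(xs, []) == xs                              # lemma app_Nil2
assert app(app(xs, ys), zs) == app(xs, app(ys, zs))   # lemma app_assoc
```

Each assertion mirrors one of the statements proved in the theory; passing tests on a few lists are of course much weaker than the inductive proofs Isabelle carries out.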
http://math.stackexchange.com/users/23805/sean-eberhard?tab=activity&sort=all | # Sean Eberhard
reputation
315
website: seaneberhard.blogspot.co.uk
member for: 2 years, 2 months
seen: 8 hours ago
profile views: 586
He's just this guy, you know.
# 351 Actions
Mar24 comment Proof of $\lim_{n \to \infty} {a_n}^{1/n} = \lim_{n \to \infty}(a_{n+1}/a_n)$ @WimC Thanks. I corrected it now: I meant take any $\alpha>1$.
Mar24 revised Proof of $\lim_{n \to \infty} {a_n}^{1/n} = \lim_{n \to \infty}(a_{n+1}/a_n)$ edited body
Mar24 answered Proof of $\lim_{n \to \infty} {a_n}^{1/n} = \lim_{n \to \infty}(a_{n+1}/a_n)$
Mar3 comment Question over regular induction: Let $P(n)$ be the statement that $n$-cent postage can be formed using just 4-cent and 7-cent stamps Prove strong induction using regular induction and then use strong induction.
Feb7 revised Convergence of the sequence $\frac{1}{n\sin(n)}$ added 869 characters in body
Feb7 comment Convergence of the sequence $\frac{1}{n\sin(n)}$ The statement about random $x$ was only intended to inform any conjecture about $\pi$, which often behaves as though it were random.
Feb6 comment Convergence of the sequence $\frac{1}{n\sin(n)}$ It seems likely that the upper limit is infinite (even if the irrationality measure of $\pi$ is $2$, which it probably is). One can show that if $x$ is random then the upper limit of $1/|n\sin(x\pi n)|$ is almost surely infinity.
Feb6 answered Convergence of the sequence $\frac{1}{n\sin(n)}$
Jan27 awarded Yearling
Jan7 reviewed Approve suggested edit on What is a supremum?
Jan6 reviewed Approve suggested edit on If the leading coefficient of a polynomial is $x^{3}$, does it mean that the graph would always intersect the $x$ axis at $3$ points?
Dec6 revised Lower central series of a free group edited body
Dec1 comment Defintion of $\ell^\infty$
Nov6 comment Taylor's theorem: $f'' + f = 0, f(0) = f'(0) = 0$. As far as doing it without Taylor's theorem, multiply by $f'$ and integrate, getting $f^2 + f'^2 = \text{const}$. The initial condition implies that the constant is $0$. But since squares are nonnegative, we must have $f\equiv 0$.
Oct8 answered Every compact subspace of a Hausdorff space is closed
Oct5 comment Let $f,g$ be two distinct functions from $[0,1]$ to $(0, +\infty)$ such that $\int_{0}^{1} g = \int_{0}^{1} f$. Yes thanks, just saw that.
Oct4 revised for what integer $m,n,d$: $\sum_{k=1}^r k^n= \left(\sum_{k=1}^r k^d\right)^m$? added 227 characters in body
Oct4 answered for what integer $m,n,d$: $\sum_{k=1}^r k^n= \left(\sum_{k=1}^r k^d\right)^m$?
Oct4 revised Maps with every point being periodic deleted 3 characters in body
Oct4 revised Maps with every point being periodic added 414 characters in body | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9666830897331238, "perplexity": 1059.7250452015949}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
https://www.lidolearning.com/questions/m-bb-ncertexemplar6-chp8-ex-q27/25kg-20g-50kg-40g/ | # NCERT Exemplar Solutions Class 6 Mathematics Solutions for Exercise in Chapter 8 - Ratio and Proportion
Question 27 Exercise
25kg : 20g = 50kg : 40g
True.
We know that, 1 kg = 1000 g
So, 25 × 1000 = 25000 g
50 × 1000 = 50000g
Then, 25000g : 20g = 50000g : 40g
25000/20 = 50000/40
2500/2 = 5000/4
2500/2 simplifies to
= 1250
5000/4 also simplifies to
= 1250
Hence, 1250 = 1250
So, 25kg : 20g = 50kg : 40g
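The equality of the two ratios can also be double-checked programmatically; this quick sketch uses Python's `Fraction` to avoid any rounding:

```python
from fractions import Fraction

# Convert both quantities to the same unit (grams): 1 kg = 1000 g
lhs = Fraction(25 * 1000, 20)   # 25 kg : 20 g
rhs = Fraction(50 * 1000, 40)   # 50 kg : 40 g

print(lhs, rhs)          # both reduce to 1250
assert lhs == rhs        # so 25kg : 20g = 50kg : 40g is true
```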
Video transcript
"Hello students, welcome to the Lido Q&A video session. I'm Sev, your math tutor, and the question for today is: a cuboidal vessel is 10 metres long and 8 metres wide; how high must it be made to hold 380 cubic metres of a liquid? Here the dimensions of the cuboidal vessel are given: the length l is 10 metres and the breadth b is 8 metres, and the volume of the vessel is given as 380 cubic metres. Let h be the height of the cuboidal vessel, which we need to find. We already know the formula for the volume of a cuboidal vessel: length into breadth into height. Let us equate this with the given volume of 380 cubic metres: 380 = 10 × 8 × h, so h = 380 divided by (10 × 8), and after simplification this gives a height of 4.75. Therefore the height of the vessel should be 4.75 metres. If you have any doubt regarding this you can post it in our comment section, and subscribe to Lido for more such interesting Q&A sessions. Thank you for watching."
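The worked example in the transcript reduces to a single division; as a sketch:

```python
# Volume of a cuboid: V = l * b * h, so h = V / (l * b)
length, breadth, volume = 10.0, 8.0, 380.0   # metres, metres, cubic metres

height = volume / (length * breadth)
print(height)   # 4.75 metres, matching the transcript
assert abs(height - 4.75) < 1e-12
```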
Connect with us on social media! | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9064733982086182, "perplexity": 1000.4670352220057}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662577259.70/warc/CC-MAIN-20220524203438-20220524233438-00112.warc.gz"} |
http://latkin.org/blog/page/2/ | # I'VE GOT THE BYTE ON MY SIDE
## Moving to a static site generator
Jun 13, 2016
This blog started on wordpress.com back in February of 2012, then in November 2013 I moved it to a hosted WordPress.org site here at latkin.org. WordPress is quite nice, but it seemed like it was a bit heavyweight given my very basic needs. I’ve wanted to slim down the site and get more hands-on for a while, now.
Over the past few weeks, I’ve been migrating the entire blog to the Hugo static site generator. I’m pleased to announce that the migration is complete!
## Benchmarking IEnumerables in F# - Seq.timed
Feb 8, 2016
It’s pretty straightforward to do basic benchmarking of a single, self-contained piece of code in .NET. You just make a Stopwatch sandwich (let sw = Stopwatch.StartNew(); <code goes here>; sw.Stop()), then read off the elapsed time from the Stopwatch.
What about measuring the throughput of a data pipeline? In this case one is less interested in timing a single block of code from start to finish, and more interested in bulk metrics like computations/sec or milliseconds/item. Oftentimes such pipelines are persistent or very long-running, so a useful benchmark would not be a one-time measurement, but rather something that samples repeatedly.
Furthermore, it’s sometimes difficult to determine where the bottleneck in a chain of computations lies. Is the root data source the culprit? Or is it perhaps an intermediate transformation that’s slow, or even the final consumer?
## JNI object lifetimes - quick reference
Feb 1, 2016
I’ve recently had reason to do a bit of work with JNI . Throughout the course of this work I had to do quite a lot of Googling in order to figure out how to properly manage the caching of various JNI objects used by my C++ code. Some JNI objects can be safely cached and re-used at any point, while others have limited lifetimes and require special handling. Obtaining JNI objects through JNI APIs is, broadly speaking, fairly expensive, so it’s smart to persist those objects which will be re-used in multiple places. You just need to be careful.
I expected to find a big table somewhere that documented the lifetime restrictions (or lack thereof) for each of the JNI object types, but sadly I was unable to locate one. Instead, I wound up trawling through numerous Stack Overflow replies, blogs, forums, and other documentation to obtain this information.
This post is my effort to provide to others the missing quick reference I wish I’d found. I recommend this Android documentation as further reading if you want more details.
## Pentago
Jul 2, 2015
Pentago is a favorite board game of mine, which I used to play regularly with coworkers during lunch (and occasionally during not-lunch). The rules are very simple, and casual games can be played in just a few minutes, but it’s deep enough to still be satisfying if you’re willing to put some thought into your strategy.
In 2009 I wrote a computer player for Pentago in C#, and even managed to cobble together Silverlight and Windows Phone UIs for it that aren’t terrible. The engine uses a negamax algorithm with alpha-beta pruning.
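Negamax with alpha-beta pruning, which the post mentions, fits in a few lines. This generic sketch (in Python rather than the post's C#) assumes a game object exposing `moves`, `apply`, `undo`, `is_over` and `score` — hypothetical names, not the author's engine:

```python
def negamax(game, depth, alpha=float("-inf"), beta=float("inf")):
    """Return the best achievable score for the player to move."""
    if depth == 0 or game.is_over():
        return game.score()          # score from the current player's viewpoint
    best = float("-inf")
    for move in game.moves():
        game.apply(move)
        # Negate and swap the window: the opponent's best is our worst.
        best = max(best, -negamax(game, depth - 1, -beta, -alpha))
        game.undo(move)
        alpha = max(alpha, best)
        if alpha >= beta:            # the opponent will never allow this line
            break
    return best
```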
## Null-checking considerations in F# - it's harder than you think
May 18, 2015
The near-complete obviation of nulls is perhaps the most frequently- (and hilariously-) cited benefit of working in F#, as compared to C#. Nulls certainly still exist in F#, but as a practical matter it really is quite rare that they need to be considered explicitly within an all-F# codebase.
It turns out this cuts both ways. On those infrequent occasions where one does need to check for nulls, F# actually makes it surprisingly difficult to do so safely and efficiently.
In this post I’ve tried to aggregate some best practices and pitfalls, in the form of DOs and DON’Ts, for F# null-checking. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22871659696102142, "perplexity": 2099.936415779607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320263.78/warc/CC-MAIN-20170624133941-20170624153941-00631.warc.gz"} |
https://forum.azimuthproject.org/discussion/1112/ocean-modelling-for-beginners | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# Ocean modelling for beginners
There's an energetic undergraduate in my differential equations course who is fascinated by ocean modelling. He wants to work on this with me. His plan is to work through the book:
He claims to like the idea of doing projects in this book and writing blog articles based on them; he says he likes to write. So, I'm going to try to get him to do this. The book explains things like how to make animated GIFs of simulations, so that could be cool. And, I hope I learn a bit about ocean modelling.
The book's exercises apparently use FORTRAN. This is presumably what climate modellers use, or at least what they used in 2008. One question is whether I can or should get him to do the exercises in some other language. Of course the problem is that I can't really teach him that other language... certainly not in my spare time, anyway!
1.
I added a reference to the above book to:
which is the best page I could find for this. Btw, I did so as 'Anonymous Coward', since I had some trouble getting the system to accept me on my new computer....
2.
edited November 2012
The book’s exercises apparently use FORTRAN. This is presumably what climate modellers use, or at least what they used in 2008. One question is whether I can or should get him to do the exercises in some other language. Of course the problem is that I can’t really teach him that other language… certainly not in my spare time, anyway!
If it's in FORTRAN, let it be FORTRAN (unless your student really wants to use another language himself). I'm not a computer scientist (who may have valid reasons for not liking it for their specific domain) but I've met quite a few respectable scientists who say FORTRAN is still a good language for what it is mainly used for, i.e. high-performance scientific computing, and I consider their opinion to be valuable too, even if they're not computer scientists either, because they also do programming and some of them are also fluent in other languages. Although these exercises probably don't need very high performance, I guess.
A disadvantage of FORTRAN may be that some non-public compilers seem to give better compilation than e.g. the GNU compiler (in some sense a computer language is void without a compiler).
PS if the book's exercises would be written in F77 (which I doubt) it would be good to encourage your student to rewrite them in F90/F95, F77 style is kind of deprecated.
3.
I second Frederik's view. Fortran 95/2003 uses lower case and expects documented functions. The ocean modelling book (http://goo.gl/Wshia) is free online.
4.
I also had this book on loan but as i'm not into FORTRAN since I took one class at my University I deferred to read it or use it right now.
5.
edited November 2012
Ocean Modelling for Beginners uses FORTRAN 95. Thanks for your advice, guys! And thanks for the link to Advanced Ocean Modelling, Jim. It's by the same guy, clearly a continuation of the same story. David Hincapie lent me another more advanced book:
• Stephen M. Griffies, Fundamentals of Ocean Climate Models, Princeton U. Press, Princeton, 2004.
A lot of it is standard math that I know (there's even an appendix on tensor calculus and manifolds!), but a lot is new to me, like 'neutral physics', 'Arakawa grids', the 'Bossinesq approximation' and so on. So, if we get this far I will have lots of fun.
6.
edited November 2012
I will have lots of fun
It's not so spectacular:
• It's Boussinesq approximation: $\rho(p,T) \rightarrow \rho_0$ except for the terms in the Navier-Stokes equation where $\rho$ is accompanied by $g$, and there then usually $\rho(T)$, and mostly only to first order in $T$. In this way the Navier-Stokes equations become incompressible, the trade-off is a buoyancy term (the linear dependence of the density on temperature).
• The Arakawa grids define, in a finite difference numerical scheme, at which points the discretized vectors and scalars are defined with respect to each other. Usually the equations become more "efficient" when they're not defined at the same point.
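The approximation described in the comment above can be written out explicitly. A standard form of the Boussinesq momentum equation, with the density linearized in temperature to first order, is:

```latex
% Density is held at \rho_0 everywhere except where it multiplies g:
\rho(p, T) \rightarrow \rho_0, \qquad
\rho(T) \approx \rho_0 \left[ 1 - \alpha \,(T - T_0) \right],
% which turns the Navier--Stokes momentum equation into the incompressible form
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho_0}\,\nabla p + \nu\,\nabla^2 \mathbf{u}
    + g\,\alpha\,(T - T_0)\,\hat{\mathbf{z}},
\qquad \nabla \cdot \mathbf{u} = 0 .
```

The last term is the buoyancy trade-off mentioned in the comment: compressibility is gone, but the linear temperature dependence of the density survives in the gravity term.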
7.
Your alert student and others might like to look at the Princeton Ocean Model. I'm waiting for a registration to download mpiPOM, the parallel implementation of POM using Fortran MPI. MPI Fortran is already installed on the Azimuth server. The Taiwan site has a really cool animated gif of the 1988 El Nino.
8.
Sounds like a fun idea. I work next door to some ocean modelers, so I might be able to ask them a dumb question or two if needed.
Yes, ocean modelers universally use Fortran. Other software can be used to visualize the model's output. It might be fun to write some kind of interactive version of a simple ocean model in some modern visualization language like Processing; see also this simplified fluid dynamics library for video games. But I'd probably just start with Fortran, because it's fast (therefore can diagnose bugs quickly) and the book uses it.
Speaking of tensor calculus, the aforementioned ocean modelers next door are working on an unstructured grid model, that works with arbitrary curvilinear coordinate systems (in 2D) discretized onto arbitrary meshes (given by a Voronoi diagram on the sphere). They're having fun (or not) working out how to discretize all their equations ... it gets a little messy once you get to tensors (matrices, really).
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45252525806427, "perplexity": 1342.2775493677202}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737319.74/warc/CC-MAIN-20200808080642-20200808110642-00586.warc.gz"} |
http://xrpp.iucr.org/A1a/ch1o5v0001/sec1o5o4/ | International
Tables for
Crystallography
Volume A1
Symmetry relations between space groups
Edited by Hans Wondratschek and Ulrich Müller
International Tables for Crystallography (2006). Vol. A1, ch. 1.5, pp. 34-36
## Section 1.5.4. Space groups
Gabriele Nebe
Abteilung Reine Mathematik, Universität Ulm, D-89069 Ulm, Germany
Correspondence e-mail: nebe@mathematik.uni-ulm.de
### 1.5.4. Space groups
#### 1.5.4.1. Definition of space groups
In IT A (2005), Section 8.1.6, space groups are introduced as symmetry groups of crystal patterns.
#### Definition 1.5.4.1.1
(a) Let $V$ be the $n$-dimensional real vector space. A subset $L \subseteq V$ is called an ($n$-dimensional) lattice if there is a basis $(b_1, \ldots, b_n)$ of $V$ such that $L = \{ a_1 b_1 + \ldots + a_n b_n \mid a_1, \ldots, a_n \in \mathbb{Z} \}$. (b) A crystal structure is a mapping $\rho$ of the Euclidean affine $n$-space $E^n$ into the real numbers such that the set of translation vectors leaving $\rho$ invariant is an $n$-dimensional lattice in $V$. (c) The Euclidean group acts on the set of such mappings via $(g \cdot \rho)(x) = \rho(g^{-1} x)$ for all $x \in E^n$, for all Euclidean motions $g$ and mappings $\rho$. A space group $R$ is the stabilizer of a crystal structure $\rho$; $R = \{ g \mid g \cdot \rho = \rho \}$. (d) Let $R$ be a space group. The translation subgroup of $R$ is defined as $T(R) := \{ t \in R \mid t \ \text{is a translation} \}$.
The definition introduced space groups in the way they occur in crystallography: the group of symmetries of an ideal crystal stabilizes the crystal structure. This definition is not very helpful in analysing the structure of space groups. If $R$ is a space group, then the translation subgroup $T(R)$ is a normal subgroup of $R$. It is even a characteristic subgroup of $R$, hence fixed under every automorphism of $R$. By Definition 1.5.4.1.1, its image under the inverse of the mapping in Example 1.5.3.4.4 is a full lattice $L$ in $V$. Since this mapping is an isomorphism, the translation subgroup of $R$ is isomorphic to the lattice $L$. In particular, one has $T(R) \cong (\mathbb{Z}^n, +)$, and the subgroup $T(R)^p$, formed by the $p$th powers of elements in $T(R)$, is mapped onto the sublattice $pL$. Lattices are well understood. Although they are infinite, they have a simple structure, so they can be examined algorithmically. Since they lie in a vector space, one can apply linear algebra to them.
Now we want to look at how this lattice fits into the space group $R$. The affine group acts on its translations by conjugation as well as on $V$ via its linear parts. Similarly, the space group $R$ acts on $T(R)$ by conjugation: for $t \in T(R)$ and $g \in R$, one gets $g t g^{-1} \in T(R)$, whose translation vector is the image of the translation vector of $t$ under $\bar{g}$, where $\bar{g}$ is the linear part of $g$. Therefore the kernel of this action is on the one hand the centralizer of $T(R)$ in $R$; on the other hand, since $L$ contains a basis of $V$, it is equal to the kernel of the mapping $g \mapsto \bar{g}$, which is $T(R)$, hence $C_R(T(R)) = T(R)$. Hence only the linear part of $R$ acts faithfully on $T(R)$ by conjugation and linearly on $L$. This factor group $R/T(R)$ is a finite group. Let us summarize this:
Theorem 1.5.4.1.2. Let $R$ be a space group. The translation subgroup $T(R)$ is an Abelian normal subgroup of $R$ which is its own centralizer, $C_R(T(R)) = T(R)$. The finite group $R/T(R)$ acts faithfully on $T(R)$ by conjugation. This action is similar to the action of the linear parts on the lattice $L$.
#### 1.5.4.2. Maximal subgroups of space groups
Definition 1.5.4.2.1. A subgroup $U$ of a group $G$ is called maximal if $U \neq G$ and for all subgroups $W$ with $U \leq W \leq G$ it holds that either $W = U$ or $W = G$.
The translation subgroup $T(R)$ of the space group $R$ plays a very important role if one wants to analyse the space group $R$. Let $U$ be a subgroup of $R$. Then $U$ has either fewer translations ($T(U) < T(R)$), or the order of the linear part of $U$, the index of $T(U)$ in $U$, gets smaller ($[U : T(U)] < [R : T(R)]$), or both happen.
Definition 1.5.4.2.2. Let $U \leq R$ be a subgroup of the space group $R$ and $T(U) := U \cap T(R)$.
(t) $U$ is called a translationengleiche or a t-subgroup if $T(U) = T(R)$.
(k) $U$ is called a klassengleiche or a k-subgroup if $R = T(R)U$.
Remark. The third isomorphism theorem, Theorem 1.5.3.5.2, implies that if $U$ is a k-subgroup, then $U/T(U) \cong R/T(R)$. Hence $U$ is a k-subgroup if and only if $U/T(U) \cong R/T(R)$.
Let $U$ be a maximal subgroup of $R$. Then we have the following preliminary situation:
Since $T(R)$ is normal in $R$ and $U \leq R$, one has by Proposition 1.5.3.2.11 that $T(R)U$ is a subgroup of $R$ containing $U$. Hence the maximality of $U$ implies that $T(R)U = U$ or $T(R)U = R$. If $T(R)U = U$ then $T(R) \leq U$, hence $T(U) = T(R)$ and $U$ is a t-subgroup. If $T(R)U = R$, then by the third isomorphism theorem, Theorem 1.5.3.5.2, $U/T(U) \cong R/T(R)$, hence $U$ is a k-subgroup. This is given by the following theorem:
Theorem 1.5.4.2.3. (Hermann) Let $U$ be a maximal subgroup of the space group $R$. Then $U$ is either a k-subgroup or a t-subgroup.
The above picture looks as follows in the two cases:
Let $U$ be a t-subgroup of $R$. Then $T(U) = T(R)$ and $U/T(R)$ is a subgroup of $R/T(R)$. On the other hand, any subgroup $\bar{U}$ of $R/T(R)$ defines a unique t-subgroup $U$ of $R$ with $T(U) = T(R)$ and $U/T(R) = \bar{U}$, namely its full pre-image in $R$. Hence the t-subgroups of $R$ are in bijection with the subgroups of $R/T(R)$, which is a finite group according to the remarks below Definition 1.5.4.1.1. For future reference, we note this in the following corollary:
Corollary 1.5.4.2.4. The t-subgroups of the space group $R$ are in bijection with the subgroups of the finite group $R/T(R)$.
In the case $n = 3$, which is the most important case in crystallography, the finite groups $R/T(R)$ are isomorphic to subgroups of either $C_2 \times C_2 \times S_3$ (Hermann–Mauguin symbol $6/mmm$) or $C_2 \times S_4$ ($m\bar{3}m$). Here $\times$ denotes the direct product (cf. Definition 1.5.3.6.1), $C_2$ the cyclic group of order 2, and $S_3$ and $S_4$ the symmetric groups of degree 3 or 4, respectively (cf. Section 1.5.3.6). Hence the maximal subgroups of $R$ that are t-subgroups can be read off from the subgroups of the two groups above.
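Corollary 1.5.4.2.4 turns the t-subgroup problem into a finite computation. As an illustration outside the chapter itself, the subgroups of $S_4$ — the symmetric group occurring in the factor above — can be enumerated by brute force; this sketch relies on the fact that every subgroup of $S_4$ is generated by at most two elements:

```python
from itertools import permutations, product

def compose(p, q):
    """Product of permutations of {0,1,2,3}: apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

def generated(gens):
    """Closure of `gens` under composition: the subgroup they generate."""
    group = {tuple(range(4))}               # start from the identity
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(a, b) for a in group for b in group} - group
    return frozenset(group)

s4 = list(permutations(range(4)))
subgroups = {generated((a, b)) for a, b in product(s4, repeat=2)}
print(len(subgroups))   # 30 subgroups of S4, falling into 11 conjugacy classes
```

A finite closure under multiplication automatically contains inverses, so no explicit inversion step is needed.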
An algorithm for calculating the maximal t-subgroups of which applies to all three-dimensional space groups is explained in Section 1.5.5.
The more difficult task is the determination of the maximal k-subgroups.
Lemma 1.5.4.2.5. Let $U$ be a maximal k-subgroup of the space group $R$. Then $T(U)$ is a normal subgroup of $R$. Hence the corresponding lattice $L(U) \subseteq L$ is an invariant lattice under the action of $R/T(R)$.
Proof. $R = T(R)U$, so every element $g \in R$ can be written as $g = tu$ where $t \in T(R)$ and $u \in U$. Therefore one obtains for $s \in T(U)$ that $g s g^{-1} = t u s u^{-1} t^{-1} = u s u^{-1}$ since $T(R)$ is Abelian. Since $s \in T(U) \leq T(R)$ and $T(R)$ is normal in $R$, one has $u s u^{-1} \in T(R)$. But $u s u^{-1}$ is a product of elements in $U$ and therefore lies in the subgroup $U$, hence $g s g^{-1} = u s u^{-1} \in U \cap T(R) = T(U)$. QED
The candidates for translation subgroups of maximal k-subgroups of can be found by linear-algebra algorithms using the philosophy explained at the beginning of this section: acts on by conjugation and this action is isomorphic to the action of the linear part of on the lattice via the isomorphism . Normal subgroups of contained in are mapped onto -invariant sublattices of . An example for such a normal subgroup is the group formed by the pth powers of elements of for any natural number . One has .
If is a maximal k-subgroup of , then is a normal subgroup of that is maximal in , which means that is a maximal -invariant sublattice of . Hence it contains for some prime number p. One may view as a finite -module and find all candidates for such normal subgroups as full pre-images of maximal -submodules of . This gives an algorithm for calculating these normal subgroups, which is implemented in the package [CARAT].
The group $T(R)/T(U)$ is an Abelian group, with the additional property that for all $g \in T(R)/T(U)$ one has $g^p = 1$. Such a group is called an elementary Abelian p-group.
From the reasoning above we find the following lemma.
Lemma 1.5.4.2.6. Let $U$ be a maximal k-subgroup of the space group $R$. Then $T(R)/T(U)$ is an elementary Abelian p-group for some prime p. The order of $T(R)/T(U)$ is $p^s$ with $s \leq n$.
Corollary 1.5.4.2.7. Maximal subgroups of space groups are again space groups and of finite index in the supergroup.
Hence the first step is the determination of subgroups $S$ of $T(R)$ that are maximal in $T(R)$ and normal in $R$, and is solved by linear-algebra algorithms. These subgroups are the candidates for the translation subgroups $T(U)$ for maximal k-subgroups $U$. But even if one knows the isomorphism type of $U$, the group $S$ does not in general determine $U$. Given such a normal subgroup $S$ that is contained in $T(R)$, one now has to find all maximal k-subgroups $U$ with $T(U) = S$ and $T(R)U = R$. It might happen that there is no such group $U$. This case does not occur if $R$ is a symmorphic space group in the sense of the following definition:
Definition 1.5.4.2.8. A space group $R$ is called symmorphic if there is a subgroup $P \leq R$ such that $P \cap T(R) = \{e\}$ and $T(R)P = R$. The subgroup $P$ is called a complement of the translation subgroup $T(R)$.
Note that the group $P$ in the definition is isomorphic to $R/T(R)$ and hence a finite group.
If $R$ is symmorphic and $P$ is a complement of $T(R)$, then one may take $U = SP$.
This shows the following:
Lemma 1.5.4.2.9. Let $R$ be a symmorphic space group with translation subgroup $T(R)$ and $S$ an $R$-invariant subgroup of $T(R)$ (i.e. $gSg^{-1} = S$ for all $g \in R$). Then there is at least one k-subgroup $U \leq R$ with translation subgroup $S$.
In any case, the maximal k-subgroups, $U$, of $R$ satisfy $T(U) = S$ and $T(R)U = R$.
To find these maximal subgroups, , one first chooses such a subgroup . It then suffices to compute in the finite group . If there is a complement of in , then every element may be written uniquely as with , . In particular, any other complement of in is of the form . One computes . Since is a subgroup of , it holds that . Moreover, every mapping with this property defines some maximal subgroup as above. Since and are finite, it is a finite problem to find all such mappings.
If there is no such complement, this means that there is no (maximal) k-subgroup $U$ of $R$ with $T(U) = S$.
### References
International Tables for Crystallography (2005). Vol. A, Space-group symmetry, edited by Th. Hahn, 5th ed. Heidelberg: Springer. (Abbreviated IT A.) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9796544313430786, "perplexity": 515.6012996565169}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267165261.94/warc/CC-MAIN-20180926140948-20180926161348-00515.warc.gz"} |
http://www.circuitlab.org/2013/04/electronics-voltage-to-frequency.html | Electronics Voltage to frequency converter | Electronic Circuits
Pages
Electronics Voltage to frequency converter
Voltage-to-frequency converter circuit schematic, Circuit Electronics.
Converting a voltage level to a proportional frequency is sometimes necessary in the design of an electronic device. This series of articles on a voltage-to-frequency converter with the XR4151 is one answer. The circuit dates from my time in college, when there was a project to build a device for hatching chicken eggs.
My neighbour will also write articles on chicken-egg incubators based on the AT89C2051 microcontroller. Maybe there are friends who still remember this project. Back to the topic of the voltage-to-frequency converter with the XR4151: the XR4151 IC is the main component of the voltage-to-frequency converter.
The input signal to the circuit is a DC voltage level. The XR4151 serves to convert the incoming voltage level into a changing frequency, where the output frequency of the converter is proportional to the input voltage level.
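The core behaviour described above — output frequency proportional to input voltage — can be sketched numerically. The gain constant below is purely illustrative; the real XR4151 gain is set by its external resistor and capacitor values, not by this number:

```python
K_HZ_PER_VOLT = 1000.0   # illustrative gain, NOT a real XR4151 datasheet value

def vfc_output_hz(v_in):
    """Ideal voltage-to-frequency converter: f_out is proportional to v_in."""
    return K_HZ_PER_VOLT * max(0.0, v_in)   # negative inputs clamp to 0 Hz

for v in (0.5, 1.0, 2.0):
    print(f"{v:.1f} V -> {vfc_output_hz(v):.0f} Hz")

# Linearity: doubling the input voltage doubles the output frequency.
assert vfc_output_hz(2.0) == 2 * vfc_output_hz(1.0)
```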
Schematics for voltage to frequency converter Circuit Electronics | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8561944365501404, "perplexity": 2285.1833496013396}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997889314.41/warc/CC-MAIN-20140722025809-00169-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/understanding-a-special-combination.542341/ | # Understanding a special combination
1. Oct 20, 2011
### anigeo
nC0+nC1+nC2+..................+nCn=2^n
in the analytic proof for this my books say that it is the total number of combinations of n different things taken at least 1 at a time.
they say that each object can be dealt with in 2 ways: either it can be accepted or it can be rejected.
hence n objects can be dealt with in 2^n ways.
but in selection, how does the question of rejection come in? what is the significance of this rejection? please explain.
2. Oct 20, 2011
### mathman
I can't understand your question. However, a simple proof of the equation is by means of the binomial theorem: expand (a+b)^n and then let a = b = 1, and you will get the result.
3. Oct 20, 2011
### anigeo
The binomial proof is done. This is about the analytic proof, in which they say that every one of the n objects can be dealt with in 2 ways: 2 objects can be dealt with in 2^2 ways, 3 in 2^3 ways. This includes an acceptance of the object or its rejection. Acceptance, or selection, is all right; how does the question of rejection arise?
4. Oct 21, 2011
### mathman
It looks to be a matter of wording. Since every object can be dealt with in either of two ways, the two ways may be labeled "acceptance" or "rejection".
5. Oct 21, 2011
### anigeo
That's the thing, you got it. But how can you say that it can be dealt with in only 2 ways?
6. Oct 22, 2011
### mathman
That's what it takes to prove this particular equation. If each object can be dealt with in more than two ways, you get a different expression.
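The accept/reject argument from this thread can be checked directly by enumeration: every subset of n objects corresponds to exactly one string of n accept/reject decisions, so counting subsets of every size reproduces 2^n. A short sketch:

```python
from itertools import product
from math import comb

def count_by_decisions(n: int) -> int:
    # Each object is independently "accepted" or "rejected":
    # one outcome per n-tuple of binary decisions.
    return sum(1 for _ in product(("accept", "reject"), repeat=n))

def count_by_binomials(n: int) -> int:
    # nC0 + nC1 + ... + nCn: subsets grouped by how many objects were accepted.
    return sum(comb(n, k) for k in range(n + 1))

for n in range(8):
    assert count_by_decisions(n) == count_by_binomials(n) == 2 ** n
```

Rejection matters because a combination is fully specified only once every object has been decided: the objects left out define the selection just as much as the objects taken.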
https://export.arxiv.org/list/gr-qc/1107?skip=125&show=25 | # General Relativity and Quantum Cosmology
## Authors and titles for Jul 2011, skipping first 125
[ total of 345 entries: 1-25 | ... | 51-75 | 76-100 | 101-125 | 126-150 | 151-175 | 176-200 | 201-225 | ... | 326-345 ]
[ showing 25 entries per page: fewer | more | all ]
[126]
Title: Inflation and the cosmological constant
Comments: 8 pages; v6: published version
Journal-ref: Phys. Rev. D 85, 023509 (2012)
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)
[127]
Title: Singular and non-singular endstates in massless scalar field collapse
Comments: 4 pages, Published in Proceedings of JGRG19 (Japan, 2009)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE)
[128]
Title: Hamilton-Jacobi formalism for Linearized Gravity
Comments: To be published in Classical and Quantum Gravity
Journal-ref: Class.Quant.Grav.28:175015,2011
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[129]
Title: Reducing Thermal Noise in Future Gravitational Wave Detectors by employing Khalili Etalons
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[130]
Title: Positive Gravitational Energy in Arbitrary Dimensions
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[131]
Title: Next-to-next-to-leading order post-Newtonian spin(1)-spin(2) Hamiltonian for self-gravitating binaries
Comments: 7 pages, v2: published version
Journal-ref: Ann. Phys. (Berlin) 523:919 (2011)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE); High Energy Physics - Theory (hep-th)
[132]
Title: Gravitational waves from scalar field accretion
Comments: 8 pages, 3 figures, added references; accepted in Phys. Rev. D
Journal-ref: Phys.Rev.D84:024043,2011
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[133]
Title: Binary dynamics from spin1-spin2 coupling at fourth post-Newtonian order
Authors: Michele Levi
Comments: 24 pages, revtex4-1, 5 figures; v2: typos fixed, added references; v3: revised, published; v4: few omissions and typos corrected, minor edit
Journal-ref: Phys.Rev.D85:064043,2012
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[134]
Title: Can an astrophysical black hole have a topologically non-trivial event horizon?
Comments: 8 pages, 2 figures. v2: refereed version
Journal-ref: Phys.Lett.B706:13-18,2011
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Astrophysical Phenomena (astro-ph.HE)
[135]
Title: Existence of relativistic stars in f(T) gravity
Journal-ref: Classical and Quantum Gravity 28 (2011) 245020
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[136]
Title: Some remarks on Bianchi type-II, VIII and IX models
Authors: Bijan Saha
Journal-ref: Gravitation & Cosmology 19(1), (2013) 65 - 69
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[137]
Title: A dynamical dark energy model with a given luminosity distance
Comments: 4 pages, 3 figures, accepted for publication in GERG
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[138]
Title: A solution of the non-uniqueness problem of the Dirac Hamiltonian and energy operators
Authors: Mayeul Arminjon
Comments: 32 pages (standard 12pt), including appendices. v2: a minor new remark on pp. 29-30
Journal-ref: Ann. Phys. (Berlin) 523, No. 12, 1008-1028 (2011)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph)
[139]
Title: Area law for black hole entropy in the SU(2) quantum geometry approach
Authors: P. Mitra
Journal-ref: Phys. Rev. D85, 104025 (2012)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[140]
Title: Isotropic extensions of the vacuum solutions in general relativity
Comments: 12 pages, 6 figures. Version to be published in Physical Review D
Journal-ref: Phys. Rev. D 84, 104013 (2011)
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)
[141]
Title: The no-boundary measure in scalar-tensor gravity
Journal-ref: Class.Quant.Grav.29:095005,2012
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[142]
Title: Evolution in bouncing quantum cosmology
Comments: 28 pages, 7 figures. Matches version published in Class. Quantum Grav
Journal-ref: Class. Quantum Grav. 29 (2012) 065022
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)
[143]
Title: Distributional sources for black hole initial data
Authors: Aaryn Tonita
Comments: Code available at this https URL
Subjects: General Relativity and Quantum Cosmology (gr-qc)
[144]
Title: Neutral and charged matter in equilibrium with black holes
Comments: 12 pages, no figures, final version published in PRD
Journal-ref: Phys. Rev. D 84, 084013 (2011)
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[145]
Title: Gravitational mass-shift effect in the Standard Model
Authors: P.O. Kazinski
Journal-ref: Phys. Rev. D 85, 044008 (2012)
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
[146]
Title: Gauging away Physics
Authors: S. P. Miao (Utrecht University), N. C. Tsamis (University of Crete), R. P. Woodard (University of Florida)
Comments: 20 pages, no figures, uses LaTeX2e
Journal-ref: Class.Quant.Grav.28:245013,2011
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO); High Energy Physics - Theory (hep-th)
[147]
Title: Embeddings and time evolution of the Schwarzschild wormhole
Journal-ref: Am. J. Phys. 80 (2012) 203-210
Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph)
[148]
Title: The charged dust solution of Ruban -- matching to Reissner--Nordström and shell crossings
Journal-ref: Gen. Rel. Grav. 44, 239-251 (2012)
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[149]
Title: Loop quantum $f(R)$ theories
Comments: 14 pages; published version in PRD
Journal-ref: Phys. Rev. D 84 (2011), 064040
Subjects: General Relativity and Quantum Cosmology (gr-qc); Mathematical Physics (math-ph)
[150]
Title: A kinetic theory of diffusion in general relativity with cosmological scalar field
Authors: Simone Calogero
Comments: 17 pages, no figures. The present version corrects an erroneous statement on the physical interpretation of the results made in the original publication
Journal-ref: J. Cosm. Astrop. Phys. 11/2011, 016
Subjects: General Relativity and Quantum Cosmology (gr-qc); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
https://ict.usc.edu/bibtexbrowser.php?key=yu_multimodal_2013&bib=ICT.bib | Multimodal Prediction of Psychological Disorders: Learning Verbal and Nonverbal Commonalities in Adjacency Pairs (bibtex)
by Yu, Zhou, Scherer, Stefen, Devault, David, Gratch, Jonathan, Stratou, Giota, Morency, Louis-Philippe and Cassell, Justine
Abstract:
Semi-structured interviews are widely used in medical settings to gather information from individuals about psychological disorders, such as depression or anxiety. These interviews typically consist of a series of question and response pairs, which we refer to as adjacency pairs. We propose a computational model, the Multi-modal HCRF, that considers the commonalities among adjacency pairs and information from multiple modalities to infer the psychological states of the interviewees. We collect data and perform experiments on a human to virtual-human interaction data set. Our multimodal approach gives a significant advantage over conventional holistic approaches, which ignore the adjacency pair context in predicting depression from semi-structured interviews.
Reference:
Multimodal Prediction of Psychological Disorders: Learning Verbal and Nonverbal Commonalities in Adjacency Pairs (Yu, Zhou, Scherer, Stefen, Devault, David, Gratch, Jonathan, Stratou, Giota, Morency, Louis-Philippe and Cassell, Justine), In Semdial 2013 DialDam: Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue, 2013.
Bibtex Entry:
@inproceedings{yu_multimodal_2013,
  title = {Multimodal Prediction of Psychological Disorders: Learning Verbal and Nonverbal Commonalities in Adjacency Pairs},
  author = {Yu, Zhou and Scherer, Stefen and Devault, David and Gratch, Jonathan and Stratou, Giota and Morency, Louis-Philippe and Cassell, Justine},
  booktitle = {Semdial 2013 DialDam: Proceedings of the 17th Workshop on the Semantics and Pragmatics of Dialogue},
  year = {2013}
}
https://openstax.org/books/calculus-volume-3/pages/3-key-concepts | Calculus Volume 3
# Key Concepts
### 3.1 Vector-Valued Functions and Space Curves
• A vector-valued function is a function of the form $\mathbf{r}(t)=f(t)\,\mathbf{i}+g(t)\,\mathbf{j}$ or $\mathbf{r}(t)=f(t)\,\mathbf{i}+g(t)\,\mathbf{j}+h(t)\,\mathbf{k}$, where the component functions f, g, and h are real-valued functions of the parameter t.
• The graph of a vector-valued function of the form $\mathbf{r}(t)=f(t)\,\mathbf{i}+g(t)\,\mathbf{j}$ is called a plane curve. The graph of a vector-valued function of the form $\mathbf{r}(t)=f(t)\,\mathbf{i}+g(t)\,\mathbf{j}+h(t)\,\mathbf{k}$ is called a space curve.
• It is possible to represent an arbitrary plane curve by a vector-valued function.
• To calculate the limit of a vector-valued function, calculate the limits of the component functions separately.
### 3.2 Calculus of Vector-Valued Functions
• To calculate the derivative of a vector-valued function, calculate the derivatives of the component functions, then put them back into a new vector-valued function.
• Many of the properties of differentiation from the Introduction to Derivatives also apply to vector-valued functions.
• The derivative of a vector-valued function $\mathbf{r}(t)$ is also a tangent vector to the curve. The unit tangent vector $\mathbf{T}(t)$ is calculated by dividing the derivative of a vector-valued function by its magnitude.
• The antiderivative of a vector-valued function is found by finding the antiderivatives of the component functions, then putting them back together in a vector-valued function.
• The definite integral of a vector-valued function is found by finding the definite integrals of the component functions, then putting them back together in a vector-valued function.
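A quick numerical check of the component-wise differentiation rule above (not from the OpenStax text; plain Python with a central finite difference): differentiating $\mathbf{r}(t)=\langle\cos t,\sin t,t\rangle$ component by component matches the analytic derivative $\langle-\sin t,\cos t,1\rangle$.

```python
import math

def r(t):
    # Example space curve r(t) = <cos t, sin t, t> (a helix).
    return (math.cos(t), math.sin(t), t)

def derivative(curve, t, h=1e-6):
    # Differentiate a vector-valued function by differentiating each
    # component separately (central difference approximation).
    ahead, behind = curve(t + h), curve(t - h)
    return tuple((a - b) / (2 * h) for a, b in zip(ahead, behind))

t0 = 0.7
numeric = derivative(r, t0)
analytic = (-math.sin(t0), math.cos(t0), 1.0)
assert all(abs(n - a) < 1e-8 for n, a in zip(numeric, analytic))
```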
### 3.3 Arc Length and Curvature
• The arc-length function for a vector-valued function is calculated using the integral formula $s(t)=\int_a^t \lVert \mathbf{r}'(u)\rVert\,du$. This formula is valid in both two and three dimensions.
• The curvature of a curve at a point in either two or three dimensions is defined to be the curvature of the inscribed circle at that point. The arc-length parameterization is used in the definition of curvature.
• There are several different formulas for curvature. The curvature of a circle is equal to the reciprocal of its radius.
• The principal unit normal vector at t is defined to be
$\mathbf{N}(t)=\dfrac{\mathbf{T}'(t)}{\lVert \mathbf{T}'(t)\rVert}.$
• The binormal vector at t is defined as $\mathbf{B}(t)=\mathbf{T}(t)\times\mathbf{N}(t)$, where $\mathbf{T}(t)$ is the unit tangent vector.
• The Frenet frame of reference is formed by the unit tangent vector, the principal unit normal vector, and the binormal vector.
• The osculating circle is tangent to a curve at a point and has the same curvature as the tangent curve at that point.
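The claim that the curvature of a circle equals the reciprocal of its radius can be verified with the standard formula $\kappa = \lVert\mathbf{r}'\times\mathbf{r}''\rVert/\lVert\mathbf{r}'\rVert^3$ (one of the "several different formulas for curvature" mentioned above). A small check, not from the text, using analytic derivatives for a circle of radius R:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

def curvature(rp, rpp):
    # kappa = |r' x r''| / |r'|^3
    return norm(cross(rp, rpp)) / norm(rp) ** 3

# Circle of radius R in the xy-plane: r(t) = <R cos t, R sin t, 0>.
R, t = 2.5, 1.2
rp  = (-R * math.sin(t),  R * math.cos(t), 0.0)   # r'(t)
rpp = (-R * math.cos(t), -R * math.sin(t), 0.0)   # r''(t)
assert abs(curvature(rp, rpp) - 1.0 / R) < 1e-12  # kappa = 1/R
```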
### 3.4 Motion in Space
• If $\mathbf{r}(t)$ represents the position of an object at time t, then $\mathbf{r}'(t)$ represents the velocity and $\mathbf{r}''(t)$ represents the acceleration of the object at time t. The magnitude of the velocity vector is speed.
• The acceleration vector always points toward the concave side of the curve defined by $\mathbf{r}(t)$. The tangential and normal components of acceleration $a_T$ and $a_N$ are the projections of the acceleration vector onto the unit tangent and unit normal vectors to the curve.
• Kepler’s three laws of planetary motion describe the motion of objects in orbit around the Sun. His third law can be modified to describe motion of objects in orbit around other celestial objects as well.
• Newton was able to use his law of universal gravitation in conjunction with his second law of motion and calculus to prove Kepler’s three laws.
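For uniform circular motion, the tangential/normal decomposition above gives a zero tangential component and a normal component equal to $v^2/R$. A sketch using the standard projection formulas $a_T=(\mathbf{v}\cdot\mathbf{a})/\lVert\mathbf{v}\rVert$ and $a_N=\lVert\mathbf{v}\times\mathbf{a}\rVert/\lVert\mathbf{v}\rVert$ (these formulas are standard calculus results, not quoted from this text):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(dot(u, u))

def accel_components(v, a):
    # Tangential and normal components of acceleration:
    # a_T = (v . a)/|v|,  a_N = |v x a|/|v|
    return dot(v, a) / norm(v), norm(cross(v, a)) / norm(v)

# Uniform circular motion, radius R, angular speed w:
R, w, t = 2.0, 3.0, 0.8
v = (-R * w * math.sin(w * t),      R * w * math.cos(w * t),     0.0)
a = (-R * w * w * math.cos(w * t), -R * w * w * math.sin(w * t), 0.0)
aT, aN = accel_components(v, a)
speed = norm(v)                        # |v| = R*w
assert abs(aT) < 1e-12                 # speed is constant
assert abs(aN - speed ** 2 / R) < 1e-9 # a_N = v^2 / R
```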
https://www.nature.com/articles/s41467-018-05399-8?error=cookies_not_supported | ## Introduction
Spin-crossover molecules (SCMs) offer a tantalizing route towards the realization of molecular spintronics1,2,3 and other nanoscale devices4,5,6,7 due to their bistability in the spin state, which can be manipulated or switched reversibly with external stimuli like temperature, light, pressure, electric field, magnetic field, or x rays8,9,10,11,12. The spin-crossover phenomenon in SCMs stems from the competition between the mean spin-pairing energy (Δ), which favors the high-spin (HS, paramagnetic) state and the ligand field—quantified by the ligand-field parameter 10Dq—which favors the low-spin (LS, diamagnetic or less paramagnetic than HS) state. Recent experiments with single molecules, thin films, and nanoparticles of SCMs revealed a rich and interesting interplay between the transport properties and the molecular spin states: high conductance in the HS state and low conductance in the LS state11,13,14,15, accompanied by the generation of spin-polarized current depending upon the spin state of the molecule and the electrodes16, and memory effects in the room-temperature regime17,18,19.
To put things into perspective, besides SCMs, the family of bistable molecules with a comparable promise of technological spin-offs are single-molecule magnets (SMMs)20,21,22. SCMs, however, offer one distinct advantage over SMMs: whereas SMMs exhibit bistability only at low temperatures23, room-temperature regimes can be reached for SCMs24,25,26,27. Nevertheless, reports of SCMs on surfaces are relatively scarce. This can be traced back to two main reasons: the paucity of SCMs that are vacuum-evaporable, with only a few cases reported so far27,28,29,30,31,32,33,34,35,36,37,38, and the coexistence of the two spin states or the loss of spin-crossover behavior altogether on contact with surfaces. It appears to be rather the norm that SCMs in contact with a surface invariably lead to the co-existence of the HS and the LS states at all temperatures36,39,40,41,42,43,44, though a few exceptions exist33,45.
In a study of [Fe(phen)2(NCS)2] on a Cu surface, spin-state coexistence in a series of submonolayer coverages has been reported, while molecules decoupled from the substrate (bilayer) exhibit a dominant spin state39. In the same system, electrically induced spin switching has been reported with the introduction of a thin insulating spacer layer of CuN between the molecules and the substrate13. In our previous studies, we have shown that for the SCMs [Fe(H2B(pz)2)2(phen)]40 and [Fe(H2B(pz)2)2(phen-me4)]36 (an analog of the former) in contact with a Au surface, the molecules undergo fragmentation, while only in the second layer, the molecules’ integrity, as well as the spin-crossover are preserved. However, on a Bi surface, even though the molecules’ integrity is preserved already in the first layer, about half of the molecules are locked in the HS state36. In a bilayer of [Fe(H2B(pz)2)2(bipy)] (hereafter referred to as Fe(bpz)-bipy) on Au probed with temperature-dependent scanning tunneling microscopy (STM), spin-state coexistence has been reported while the composition of the spin states was independent from temperature46. This led to the speculation of the spin-state coexistence to be an intrinsic property of SCMs in ultrathin films46. In a recent STM-based study of a monolayer of [FeII((3,5-(CH3)2Pz)3BH)2] on a Au surface, the authors reported a collective behavior in the light-induced HS→LS transition at 4.6 K, albeit with both spin-states coexisting43. Owing to the observed long-range order of alternating HS and LS states43, it is likely that the molecules undergo an antagonistic or anticooperative spin transition.
The absence or presence of cooperativity in the spin transition of SCMs and their characteristics is one of the most important issues for research, since it is responsible for the bistability in the spin states of SCMs. SCMs in the bulk generally exhibit cooperativity while undergoing the spin transition, which is ascribed to elastic interactions arising from the volume difference between the HS state (larger volume) and the LS state (smaller volume)47,48. The strength of the cooperative effects can be inferred from the degree of steepness of the temperature-induced spin transition curve. The interactions that favor “like-spin” pairings (HS–HS or LS–LS) are termed cooperative (ferromagnetic-like), while those that favor “unlike-spin” pairings (HS–LS) are termed anticooperative (antiferromagnetic-like). Determining the ultimate scale limit at which cooperativity becomes effective is of major interest when aiming at applications at reduced size or dimensionality. Future devices will likely involve integrating multi-functional molecules like SCMs with 2D materials exhibiting novel properties. In this regard, graphene is among the material of choice due to its robust optical, electrical and mechanical properties49. Understanding the behavior of SCMs on a highly oriented pyrolytic graphite (HOPG) surface will be an ideal platform towards integrating it with graphene.
Herein, we report on the complete and efficient thermal- and light-induced spin-crossover of Fe(bpz)-bipy on an HOPG surface with coverages ranging from 0.35(4) to 10(1) ML (monolayers) investigated by x-ray absorption spectroscopy, while addressing two main traits: the nature of the spin-crossover behavior for molecules in direct contact with the surface as well as upon increasing the coverage of the molecular layers, and the metastability of the HS state on the surface after it is switched from the LS state by illumination with a green LED (light emitting diode, λ = 520 nm) at low temperatures. Our results reveal the evolution of cooperativity in the spin-crossover of ultrathin molecular layers of SCMs adsorbed on a solid surface, which is evident from an increased steepness in the thermal spin-crossover curves. (The degree of steepness or the transition-width (ΔT) is defined as the temperature difference at which 80% of the molecules are in the HS and LS states, respectively45.) The interaction strength, deduced from fits to the phenomenological Slichter and Drickamer mean-field model50, increases monotonically with the coverage, reaching about 60% of the bulk value at a coverage of 10(1) ML. On the other hand, the HS→LS relaxation measurements of submonolayer coverages at 8, 20, 30, and 40 K show a stretched exponential behavior of the amount of HS molecules as a function of time, which may be explained by assuming anticooperativity at submonolayer coverages.
## Results
### Temperature- and light-induced spin-crossover
To probe the spin state of the molecules, x-ray absorption spectroscopy (XAS) is used. XAS is very well suited for studying SCMs on surfaces due to its element selectivity and high sensitivity in tracing a subtle electronic or chemical change51. More specifically, the Fe L2,3 ($2\mathrm{p}^6 3\mathrm{d}^6 \to 2\mathrm{p}^5 3\mathrm{d}^7$) or the Fe L3 ($2\mathrm{p}_{3/2}^{4}3\mathrm{d}^6 \to 2\mathrm{p}_{3/2}^{3}3\mathrm{d}^7$) edge spectral line shape is a fingerprint of the magnetic state of the molecule52. Schematic representations of the 3d-orbital electronic distributions of Fe(bpz)-bipy (Fig. 1a) in the HS and the LS states, and transitions from the $2\mathrm{p}_{3/2}^{4}$ core level causing the Fe L3 absorption edge are shown in Fig. 1b, c, respectively. It should be noted as a word of caution that the x-ray beam can induce LS→HS or HS→LS transitions at low temperatures, a process termed as soft x-ray-induced excited spin state trapping (SOXIESST)12, and reverse-SOXIESST53, respectively. The x-ray-induced effects on the spin states can be largely mitigated by maintaining the photon flux in the range of ~10⁹ photons s⁻¹ mm⁻² (ref. 53) (cf. Methods).
Figure 1d, e shows the Fe L3 spectra of 0.35(4) and 10(1) ML at RT (room temperature) and at LT (low temperature) before and after illumination with a green LED. The RT and LT spectra before illumination have sharply contrasting Fe L3 line shapes: the RT spectrum is characterized by two main peaks at 708.1 and 708.9 eV and the LT spectrum by a single main peak at 709.4 eV before illumination with light. These spectral line shapes have been established as characteristic of the HS and the LS states, respectively33,45,52. The spectrum at any other intermediate temperature is a linear combination of the two (cf. Supplementary Figure 1), as has also been shown elsewhere33,45. Upon illumination with the green LED at 5 K, the RT spectral line shape is recovered albeit with higher XA intensity (LS→HS transition). The molecules are thus again in the HS state, which is termed light-induced excited spin-state trapping (LIESST)9. The bulk Fe L3 spectra recorded at RT and at LT also yielded correspondingly similar line shapes to the ones described above (cf. Supplementary Figure 1). While SOXIESST is saturated at about 60% HS with an x-ray photon flux of about ~1 × 10⁹ photons s⁻¹ mm⁻² (ref. 53), the final states in LIESST are assumed to be 100% HS following ref. 45, where the spectral profiles can be reproduced by spin-multiplet calculations. On warming up to 70–90 K, the samples can be converted back to the LS state. To test the reversibility in the spin switching during the cooling and heating cycles, systematic measurements to track the change in Fe L3 spectra (and hence the change in the spin state) have been carried out between 5 and 96 K under continuous illumination, for the 0.69(8)-ML sample as proof of concept. This is shown in Supplementary Figure 2: the sample exhibits a near-complete reversible spin transition from HS to LS state during ramp-up and LS to HS state during ramp-down in the aforementioned temperature range.
The fraction of HS molecules (γHS) at any given temperature is estimated by fitting the corresponding spectrum to a linear combination of the RT Fe L3 XA spectrum (HS state) and the LT spectrum (LS state) recorded at low temperature before the onset of LIESST or SOXIESST. The low-temperature HS and LS spectra across all the coverages have similar line shapes upon scaling, though the HS spectra at RT show some minor variations, which appeared as about a 3% change in the spin-state compositions (cf. Supplementary Figure 1). For uniformity, the RT spectrum of 10(1) ML is taken as the reference HS state for all the coverages, with its γHS assumed to be 0.91 in conformity with that reported for the bulk molecule54; this number is also reproduced by using the LT spectrum after LIESST (100% HS) as the reference HS state. The presence of a certain quantity of LS molecules at RT in thin films of Fe(bpz)-bipy – similar to the bulk – has also been reported elsewhere31. The temperature-dependent measurements of γHS are carried out by cooling the sample at the rate of 4 K min⁻¹ while simultaneously recording the spectra. The result is shown in Fig. 2a, and is characterized by a gradual to a relatively steeper spin transition when going from submonolayer to multilayer coverages, indicating an increase in cooperativity in the molecular spin switching processes.
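The linear-combination fit just described amounts to a one-parameter least-squares problem: given HS and LS reference spectra, the HS fraction is the coefficient that best reproduces a measured spectrum. A minimal sketch, using synthetic Gaussian stand-ins for the references rather than the measured Fe L3 data (the peak positions 708.1, 708.9, and 709.4 eV are taken from the text; the widths and the closed-form estimator are illustrative):

```python
# Least-squares decomposition of a spectrum into HS and LS reference spectra.
# The "spectra" are synthetic Gaussians standing in for Fe L3 line shapes;
# only the fitting procedure itself is illustrated.
import math

def gaussian(e, center, width=0.4):
    return math.exp(-((e - center) / width) ** 2)

energies = [707.0 + 0.05 * i for i in range(80)]             # eV grid
hs_ref = [gaussian(e, 708.1) + gaussian(e, 708.9) for e in energies]
ls_ref = [gaussian(e, 709.4) for e in energies]

def fit_hs_fraction(measured, hs, ls):
    # Fit measured ~= g*hs + (1-g)*ls; minimizing the squared residual
    # over g gives g = sum((m-ls)*(hs-ls)) / sum((hs-ls)^2).
    num = sum((m - l) * (h - l) for m, h, l in zip(measured, hs, ls))
    den = sum((h - l) ** 2 for h, l in zip(hs, ls))
    return num / den

true_g = 0.35
measured = [true_g * h + (1 - true_g) * l for h, l in zip(hs_ref, ls_ref)]
assert abs(fit_hs_fraction(measured, hs_ref, ls_ref) - true_g) < 1e-12
```

On real data the same estimator applies after background subtraction and intensity normalization of the two reference spectra.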
Traditionally, elastic interactions between the molecules have been invoked to rationalize the occurrence of cooperativity: the spin transition is concomitantly accompanied by a change in volume (expansion and contraction in the HS and LS states, respectively), which induces elastic strain, resulting in both short- and long-range interactions between the molecules47,55. However, a recent wave-function ab initio calculation suggests the cooperativity to arise mainly from electrostatic contributions as a result of the simultaneous electronic relocalization within the molecules and fluctuations in the Madelung field during the spin transition56,57. Herein, we choose to treat the interaction between the molecules in contact with the HOPG surface and at higher coverages by the classical thermodynamic model of Slichter and Drickamer (S−D model)50. In this model, a term Γ(1 − γHS)γHS is introduced in the expression of the Gibbs free energy, where Γ is a phenomenological interaction parameter. The macroscopic S−D model is similar to the microscopic two-level Ising-like model in the mean-field approach58. A model based on interacting HS and LS domains (as opposed to HS and LS states) still yields a Gibbs free energy similar to the S–D model59. Wavefunction ab initio calculations also reproduced the S–D model57, which has been used recently to explore the possibility of an enhanced cooperativity in surface-supported 2D metal-organic frameworks60. At equilibrium, the S−D model leads to the implicit equation:
$$\ln\left(\frac{1-\gamma_{\mathrm{HS}}}{\gamma_{\mathrm{HS}}}\right)=\frac{\Delta H+\Gamma\,(1-2\gamma_{\mathrm{HS}})}{RT}-\frac{\Delta S}{R}\qquad(1)$$
where ΔH and ΔS are the differences in enthalpy and entropy, respectively, between the HS and LS states; R is the universal gas constant. The experimental input in Equation (1) is the HS fraction γHS. Of the three fitting parameters, namely, Γ, ΔH and ΔS; ΔH and ΔS are related by the transition temperature T1/2 (defined as the temperature where the population of the HS and the LS species are equal), as ΔH = T1/2ΔS. If Γ = 0, then Equation (1) reduces to the van’t Hoff’s model – a non-interacting molecular system.
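Equation (1) defines γHS(T) only implicitly, but for small |Γ| the difference between its left- and right-hand sides is strictly decreasing in γHS, so γHS can be recovered by bisection at each temperature. A sketch under that assumption; the ΔH, ΔS = ΔH/T1/2, and Γ values below are placeholders of plausible magnitude, not the fitted values of Table 1:

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def hs_fraction(T, dH, dS, Gamma):
    """Solve Eq. (1): ln((1-g)/g) = (dH + Gamma*(1-2g))/(R*T) - dS/R for g."""
    def f(g):
        return math.log((1 - g) / g) - (dH + Gamma * (1 - 2 * g)) / (R * T) + dS / R
    # f(g) is monotonically decreasing for modest |Gamma|: f > 0 near g = 0,
    # f < 0 near g = 1, so plain bisection brackets the unique root.
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative parameters: dH = 10 kJ/mol, T_1/2 = 160 K (so dS = dH/T_1/2),
# and a weakly cooperative Gamma = 0.3 kJ/mol.
dH, T_half, Gamma = 10_000.0, 160.0, 300.0
dS = dH / T_half
assert abs(hs_fraction(T_half, dH, dS, Gamma) - 0.5) < 1e-6  # g = 1/2 at T_1/2
assert hs_fraction(300.0, dH, dS, Gamma) > 0.9               # mostly HS at high T
assert hs_fraction(80.0, dH, dS, Gamma) < 0.05               # mostly LS at low T
```

Note that γHS(T1/2) = 1/2 regardless of Γ, since both sides of Eq. (1) vanish there; for strongly cooperative systems (large positive Γ) the equation becomes multivalued and this simple bisection no longer applies.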
The results obtained by a fit of the experimental data by the S–D model using the method of least-square deviation are presented in Table 1 together with bulk data taken from Moliner et al.54. It yields a gradual evolution in cooperative spin transition in going from submonolayers to multilayers: negative interaction parameters at 0.35(4) (Γ = −0.44(0.23) kJ mol⁻¹) and 0.69(8) ML (Γ = −0.1(2) kJ mol⁻¹), positive at 2.0(3) ML (Γ = 0.3(1) kJ mol⁻¹), and a further increase with increasing coverage. In contrast, the entropy change ΔS and the enthalpy change ΔH across all coverages yield roughly constant values. The result for Γ is plotted in Fig. 2b as a function of coverage (blue data points, left axis). The interaction parameter Γ and the transition width ΔT, also shown in Fig. 2b, show an inverse relation. The similar values of ΔT obtained from the experimental data and the S−D model fit across all coverages (Fig. 2b right axis, magenta and black dots) are an indication of the suitability of the model in tracing the evolution of cooperativity in the spin-transition processes in ultrathin films. The details of the S–D model fits are given in Supplementary Figures 3–5 and Supplementary Table 1.
The time-dependent population of the HS state upon illumination with a green LED for all the coverages at 5 K is shown in Fig. 3b. The process, known as LIESST, has been rationalized in terms of intersystem crossing48 (the scheme is shown in Fig. 3a) and has been probed in detail with ultrafast optical and x-ray spectroscopies61. Because the light-induced LS→HS transition is quite fast, the kinetics of this effect cannot be measured by taking complete absorption spectra. Instead, it was determined by recording the time dependence of the absorption signal with the x-ray energy fixed at 708.1 eV (the intensity of the Fe L3 spectrum at this energy above the background is proportional to the HS content), while keeping the illumination on. The complete Fe L3 XA spectra recorded before and after such a timescan are used to ascertain the conversion of the spin states (cf. Supplementary Figure 6).
The HS fraction as a function of illumination time is obtained by normalizing the recorded timescan signal between 0 (the LS state) and 1 (the HS state). Fitting the LS→HS transition with a single exponential function yields rate constants of 0.065(1), 0.0501(1), 0.0583(2), and 0.0823(1) s−1 for 0.35(4), 2.0(3), 3.9(5), and 10(1) ML, respectively. However, these rate constants contain both LIESST and SOXIESST components, the latter arising from the x-ray exposure during the timescan. In order to separate the two, one can use the x-ray-induced LS→HS (SOXIESST) and HS→LS (reverse-SOXIESST) transition rate constants reported for the same system (0.8 ML of Fe-(bpz)bipy on HOPG) at the same temperature (5 K), which are 3.50(3) × 10−4 s−1 and 2.3(1) × 10−4 s−1, respectively53. The rate constants due to LIESST alone are then estimated as 0.0369(1), 0.0505(1), 0.0589(2), and 0.0827(1) s−1 for 0.35(4), 2.0(3), 3.9(5), and 10(1) ML, respectively. With the green LED photon flux density of 4.2(8) × 1014 photons s−1 mm−2 (cf. Methods), the corresponding effective cross sections are estimated as 0.009(2), 0.012(2), 0.014(3), and 0.019(4) Å2, respectively. Since each of the curves can be fitted by a mono-exponential function, albeit with different rate constants, it is concluded that the light-induced LS→HS transition arises from individual molecule–photon interactions without any role of cooperative effects. This result is in agreement with LIESST in bulk SCMs, where it is also found to be a single-molecule phenomenon48. The observed efficient switching of the spin states with light in these SCMs in contact with an HOPG surface can be exploited for applications in optical memory and display elements4.
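The effective cross sections quoted above follow from dividing the LIESST-only rate constant by the photon flux density, σ = k/Φ. A minimal check of this arithmetic for the 0.35(4)-ML sample:

```python
# Effective LIESST cross section, sigma = k_LIESST / flux, for 0.35(4) ML.
# Unit conversion: 1 mm = 1e7 Angstrom, so 1 mm^2 = 1e14 Angstrom^2.
k_liesst = 0.0369        # s^-1, LIESST-only rate constant quoted above
flux = 4.2e14            # photons s^-1 mm^-2, green LED flux density
MM2_TO_ANGSTROM2 = 1e14
sigma = k_liesst / flux * MM2_TO_ANGSTROM2   # cross section in Angstrom^2
print(round(sigma, 3))   # -> 0.009, matching the value quoted in the text
```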
### Stability of light-induced HS state at low temperatures
Figure 3c shows the results of HS→LS relaxation measurements of 0.35(4) ML at 8, 20, 30, and 40 K. The measurements are done by first switching the sample to the HS state by green-LED illumination. With the illumination off, the subsequent spin relaxation is traced from Fe L3 spectra recorded as a function of time. The low-temperature HS→LS relaxation is characterized by an initial fast decay that slows down with time, leading to a stretched exponential. This is in sharp contrast to the bulk, where the HS state is rather stable in the temperature range considered here, and only in the thermally activated regime (>40 K) does the spin relaxation become pronounced and exhibit sigmoidal characteristics54. The HS→LS relaxation observed on the surface can be modeled with a phenomenological equation involving a negative interaction parameter α:
$$\frac{\partial \gamma_{\mathrm{HS}}}{\partial t} = - k_{hl}\, \gamma_{\mathrm{HS}}\, \mathrm{exp}\left[ \alpha \left( 1 - \gamma_{\mathrm{HS}} \right) \right]$$
(2)
where khl denotes the HS→LS relaxation rate constant. Equation (2) is similar to the phenomenological model introduced by Hauser to explain the HS→LS relaxation in bulk SCMs, but with α replaced by α/T in the thermally activated regime so as to account for the sigmoidal-type HS→LS relaxation behavior62. For an accurate modeling of the HS→LS relaxation, the x-ray-induced spin transitions occurring during the data acquisition, namely k1(1 − γHS) for the LS→HS and k2γHS for the HS→LS transition, have to be included in Equation (2)53. A simultaneous fit of Equation (2), including the SOXIESST terms, to the spin-relaxation measurements at all temperatures yields an interaction parameter α of −6.3(3). The relaxation measurements of 0.69(8) ML show a behavior similar to that of 0.35(4) ML (cf. Supplementary Figure 7). A comparison of ln(khl(T)) for both coverages is also given in Supplementary Figure 7. Compared with the 0.69(8)-ML sample, the metastable HS state of the 0.35(4)-ML sample decays much more rapidly to the LS ground state in the temperature range between 8 and 30 K (cf. Supplementary Figure 7). At 40 K, however, the relaxation rates become similar: 0.025(5) and 0.021(7) s−1 for the 0.35(4)- and 0.69(8)-ML samples, respectively. In the bulk relaxation data provided in ref. 54, the relaxation rate at 42 K is about three orders of magnitude lower than that of the submonolayer samples at 40 K. These differences in the decay rates might arise from enhanced tunneling rates, from a reduction in the energy barriers between the HS and LS wells, or from a combination of both in the submonolayer samples as compared to the bulk (the quantum tunneling rate is related to the characteristic vibrational frequency of the [FeN6] core62).
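A minimal sketch of how Equation (2), extended by the SOXIESST terms, can be integrated numerically. Here α and the SOXIESST rate constants k1, k2 are the values quoted above, while the value of khl is hypothetical, chosen only for illustration; this is a sketch, not the fitting procedure actually used.

```python
import numpy as np
from scipy.integrate import solve_ivp

def dgamma_dt(t, y, k_hl, alpha, k1, k2):
    """Equation (2) plus the x-ray-induced SOXIESST terms k1*(1-g) - k2*g."""
    g = y[0]
    return [-k_hl * g * np.exp(alpha * (1.0 - g)) + k1 * (1.0 - g) - k2 * g]

# alpha = -6.3 is the fitted interaction parameter; k1, k2 from ref. 53;
# k_hl is a hypothetical illustrative value.
params = (1e-3, -6.3, 3.5e-4, 2.3e-4)
sol = solve_ivp(dgamma_dt, (0.0, 5000.0), [1.0], args=params, max_step=10.0)
print(sol.y[0, -1])  # residual HS fraction; the decay slows as gamma_HS drops
```

The negative α makes the factor exp[α(1 − γHS)] shrink as γHS decreases, which is exactly the self-decelerating, stretched-exponential behavior described in the text.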
## Discussion
For SCMs in the bulk, the interaction parameters α and Γ are related by a constant, α = pΓ62. The constant p is inversely proportional to the characteristic vibrational frequency of the [FeN6] core and depends more weakly on the vertical and horizontal displacements of the LS and HS wells, which enter logarithmically through the reduced energy and the Huang-Rhys factor, respectively62. The horizontal displacement is proportional to the Fe–N bond-length difference between the HS and LS states, while the reduced energy is the ratio of the vertical displacement (ΔEhl, Fig. 3a, the zero-point energy difference between the HS and LS wells) to the characteristic vibrational quantum of the [FeN6] core. Using the values of the parameters given in ref. 54, p is estimated to be 1.3 × 10−3 J−1 mol for Fe(bpz)-bipy in the bulk. For the 0.35(4)-ML sample, assuming that the same relation still holds (given the negative values of both α and Γ), a value of p = 1.4(9) × 10−2 J−1 mol is obtained. The roughly order-of-magnitude higher value of p in the 0.35(4)-ML sample as compared to the bulk might be attributed to a reduced characteristic vibrational frequency of the [FeN6] core in the former, possibly due to molecule–substrate interactions. Alternatively, a distribution in the energy barriers between the HS and LS states, arising from disorder or conformational flexibility of the ligand, can also result in a stretched-exponential HS→LS relaxation, as reported for some bulk SCMs that exhibit strong cooperativity in the temperature-induced spin transition but nevertheless display a stretched-exponential spin relaxation due to such factors63,64. It is therefore not clear whether the stretched-exponential decay results exclusively from an antagonistic (or anticooperative) spin transition.
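Since α = pΓ, the value of p for the 0.35(4)-ML sample follows directly from the fitted α and Γ quoted above; a quick consistency check of that arithmetic:

```python
# p = alpha / Gamma for the 0.35(4)-ML sample, using the fitted values above.
alpha = -6.3                 # interaction parameter from the relaxation fit
Gamma = -0.44e3              # J mol^-1 (-0.44 kJ/mol from the S-D fit)
p_film = alpha / Gamma       # since alpha = p * Gamma
p_bulk = 1.3e-3              # J^-1 mol, bulk estimate from ref. 54
print(round(p_film, 4), round(p_film / p_bulk))  # -> 0.0143 11
```

This reproduces p ≈ 1.4 × 10−2 J−1 mol, about an order of magnitude above the bulk value, as stated in the text.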
Regardless of the mechanism, this is, to our knowledge, the first report of SCMs on a surface exhibiting a stretched-exponential HS→LS relaxation.
The buildup of cooperativity in molecular layers of only a few monolayers thickness indicates the presence of intermolecular interactions across the molecular layers. While a two-dimensional arrangement such as submonolayer islands exhibits an apparent antagonistic behavior, arising either from interactions that favor unlike-spin states or from a distribution in the energy barriers between the two spin states, clear signs of cooperative spin switching are observed starting already from the second layer. The further increase in the degree of cooperativity with increasing thickness could be related to the higher coordination of molecules in the inner layers compared to the surface layers, the reduction in the relative amount of surface or interface molecules, and/or the reduced importance of molecules in direct contact with the substrate. It is interesting to compare the present results with spin-crossover nanoparticles, although the direct external environments of SCMs on a surface and of nanoparticles differ, in that the nanoparticles are always coated with a stabilizer that acts as a rigid matrix. Nevertheless, spin-crossover nanoparticles are also found to exhibit a gradual temperature-dependent spin transition like the one observed here for ultrathin films, but with transition temperatures proportional to the particle size. At particle sizes of less than 10 nm, however, hysteretic behavior (memory effects) appears, which has been attributed to an increase in lattice stiffness leading to greater cooperative effects65. The presence (or absence) of hysteretic behavior in these ultrathin films of Fe(bpz)-bipy is not known, as the spin-state change in the opposite direction, i.e., a temperature ramp from low to room temperature, has not been measured.
It is worth noting that a small hysteresis of about 4 K has been reported for a relatively thick vacuum-deposited film (355 nm) of Fe(bpz)-bipy31, despite the absence of such behavior in the bulk54,66. To the best of our knowledge, however, for vacuum-deposited films with thicknesses in the range of our samples (maximum thickness of about 12 nm), the presence of hysteresis has never been reported.
In summary, we have demonstrated complete thermal and highly efficient light-induced spin crossover of Fe(bpz)-bipy on the HOPG surface at coverages ranging from 0.35(4) ML to 10(1) ML. While a free-molecule-like spin crossover is indicated around a monolayer, cooperativity is clearly evident in the double layer and at higher coverages, as revealed by the transition widths of the thermally induced spin-crossover curves. The light-induced LS→HS switching, on the other hand, is independent of cooperative effects. These findings are relevant for nanoscale applications relying on spin-state bistability.
## Methods
### Sample preparation
The SCM Fe(bpz)-bipy is synthesized according to the procedure reported by Real et al.66. The molecular powder is evaporated from a tantalum Knudsen cell at 160 °C at a pressure of ~2 × 10−9 mbar and deposited onto the HOPG substrate held at room temperature. The evaporation rate is monitored via the frequency change of a quartz crystal attached to the Knudsen cell. The HOPG substrate (ZYA) of dimensions 12 × 12 × 2 mm3 and mosaic spread angle of 0.4(1)° is purchased from Structure Probe. The HOPG substrate is cleaved with a carbon tape (so as to obtain a clean surface) in a loadlock chamber maintained at ~10−7 mbar and connected to the sample preparation chamber. The molecular coverage is estimated from the integrated peak intensity of the Fe L3 spectrum and the frequency shift of the quartz crystal integrated with the evaporator. The details of the coverage estimation procedure are provided in Supplementary Figure 8 and Supplementary Note 1.
### X-ray absorption spectroscopy
The XAS measurements are performed in situ at the high-field diffractometer of the beamline UE46-PGM1 and at the VEKMAG end-station of the beamline PM2 of BESSY II, at pressures of about 5 × 10−11 mbar and 1.5 × 10−10 mbar, respectively. The photon flux at the UE46-PGM1 beamline is reduced by a factor of about 15 by damping the beam with an Al foil of thickness ~3 μm, such that the estimated photon flux is ~1 × 108 photons s−1 mm−2; this is done in order to mitigate the soft-x-ray-induced excited spin state trapping (SOXIESST) at low temperatures53. At the PM2 beamline, the photon flux is estimated to be ~1.6 × 109 photons s−1 mm−2. The data shown in Figs. 1e, 2 and 3b originate from the high-field diffractometer end-station, while the data shown in Fig. 3c were measured at the VEKMAG end-station. All spectra are recorded by means of total electron yield, where the sample drain current is recorded as a function of the x-ray photon energy and normalized to the photocurrent of a Au grid (a Pt grid at PM2-VEKMAG67) upstream of the experiment, and to the background signal from a clean HOPG substrate. The Fe L3 spectra are recorded at the magic angle of 54.7° between the surface and the k vector of the linearly p-polarized x-rays. At this angle, the XA resonance intensities are independent of the orientations of the molecular orbitals. The measurements involving light-induced effects at low temperatures were performed with a green LED of λ = 520 nm with a spectral width (FWHM) of 30 nm. The flux density at the sample position is estimated as 4.2(8) × 1014 photons s−1 mm−2. The details of the optical setup have been described elsewhere45.
### Atomic force microscopy
In order to ascertain the efficacy of the deposition of Fe(bpz)-bipy on the HOPG surface, atomic force microscopy (AFM) images of a series of submonolayer coverages have been recorded ex situ at room temperature. One such image is shown in Supplementary Figure 9; under ambient conditions, the molecules form nanoporous islands, quite similar to the structure of a submonolayer of [Fe(H2B(pz)2)2(phen)] on the same surface45. The line profile reveals an island height in the range of 1.0 nm, which is consistent with the height of a single molecule. The same holds for all the submonolayer coverages recorded. Although the morphology of the molecular islands in vacuum may differ, one can conclude that no 3D crystallites are formed and that all the molecules are in contact with the surface. The AFM (Nanotec Cervantes) measurements are carried out under ambient conditions in tapping mode using a Si cantilever of stiffness 2.7 N/m and resonance frequency of 75 kHz.
### Data Availability
All the relevant data can be obtained from the authors on reasonable request.
https://link.springer.com/article/10.1186/s13007-018-0366-8

## Background
Wheat is one of the most globally significant crop species, with an annual worldwide grain production of 700 million tonnes [1]. In recent years, however, the demand for grain has been increasing. At the same time, seasonal fluctuations, extreme weather events and a changing climate in various regions of the world increase the risk of inconsistent supply. This points to the need to identify hardier and higher-yielding plant varieties, both to increase crop production and to improve plant tolerance to biotic and abiotic stresses.
To discover higher-yielding and more stress-tolerant varieties, biologists and breeders rely increasingly on high-throughput phenotyping techniques to measure various plant traits, which in turn are used to understand a plant's response to various environmental conditions and treatments, with the aim of improving grain yield.
Early works on high-throughput image-based phenotyping focused on controlled environments such as purpose-built chambers and automated glasshouses. Li et al. [2], for example, proposed an approach that detects, counts and measures the geometric properties of spikes of a single plant grown in a controlled environment. Bi et al. [3, 4] and Pound et al. [5], on the other hand, measured more detailed morphological properties, such as the numbers of awns and spikelets, of plants imaged in small purpose-built chambers with uniform backgrounds. Unfortunately, in such experiments plants are confined to small pots, which no doubt affects root development, nutrient uptake and, ultimately, yield. Some experiments have been carried out using plants grown in large (120 cm $$\times$$ 80 cm) indoor bins, which are capable of housing almost 100 plants in competition [6,7,8]. Spike detection was not attempted in these latter studies, but their more critical limitation was that the plants, although grown closer to field-like conditions and not individually in pots, were not subject to realistic environmental conditions. The challenge in providing quantitative support to plant breeders is yield estimation under true field conditions, which relies on the ability to detect and count the ears of wheat in the field accurately and automatically.
A range of different phenotyping platforms exist for capturing images in the field [9,10,11]. However, due to the large-scale nature of such studies, many researchers have turned to aerial imaging systems such as unmanned aerial vehicles [12,13,14,15] and satellite imagery [16, 17]. While these approaches are capable of capturing information about a large number of plants across a large area of land within a short period of time, only coarse-level information, such as mean canopy coverage and mean canopy color, has thus far been reported. It should also be kept in mind that the nature of the uncontrolled field environment poses significant challenges for both image acquisition and image analysis algorithms, which should ideally be robust to changing conditions and applicable autonomously. These challenges indeed often result in images being analyzed manually or semi-automatically, and often only qualitatively.
In this study we utilize a land-based vehicle and a single RGB camera to acquire images of a field. The proximity of the camera to the plants allows for high-resolution data capture. The simplicity of the imaging set-up makes it affordable and easy to implement, and thus accessible to any potential user. The remaining challenge, on which we focus here, is analyzing these high-resolution images to extract quantitative information such as the number and density of wheat spikes. To go some way towards meeting this challenge, we have chosen to image plots from an oblique perspective as opposed to the more common nadir perspective. In an oblique view, a significant number of spike features, such as texture, color and shape, can be discerned easily. These features can be more readily extracted for the purposes of various plant phenotyping applications such as spike counting (the focus of this paper), spike shape and texture measurement, disease detection and grain yield estimation. We note that we are not unique in taking this more advantageous perspective [7, 18, 19].
Some computer vision approaches for detecting spikes in field images obtained using land-based imaging techniques have been reported in the literature. Fernandez-Gallego et al. [20] used RGB cameras held manually at approximately one meter above the center of the plant canopy to obtain images from a nadir perspective. The authors then applied Laplacian and median filters to produce a transformed image in which local maxima can be detected and classified as wheat spikes. This approach achieved recognition rates of up to 92%, but failed when observing plants at different developmental stages (32%). Alharbi et al. [18], who used Gabor filters, principal component analysis and k-means clustering, were able to achieve an average accuracy of 90.7%. Their approach, however, places constraints on image content, such as the density of spikes, the color and texture differences between spikes and shoots, and the angle of spikes in the image. Zhou et al. [21] proposed an image fusion method using multi-sensor data and an improved maximum-entropy segmentation algorithm to detect wheat spikes in the field. However, the method required the use of a multi-spectral camera and was validated on images where canopy and spikes rarely overlap or occlude one another.
Machine learning has been adopted as the method of choice in many recent image analysis applications to address a number of plant phenotyping problems. These include the study of wheat spikes in controlled environments [2], the classification of leaf species and leaf venation [22], the analysis of the architecture of root systems [23, 24], the measurement of plant stress levels [25] and the determination of wheat growth stages [26]. More recently, deep learning has begun to outperform previous image analysis and machine learning approaches and promises a step-change in the performance of image-based phenotyping. In particular, the use of Convolutional Neural Networks (CNNs) for image analysis tasks has seen a rapid increase in popularity. For instance, CNNs have been used to improve the performance of the approach of Wilf et al. [22] for identifying and counting leaf species [27], to quantitatively phenotype Arabidopsis thaliana plants grown in controlled environments [28, 29], and to provide detailed quantitative characterization of wheat spikes on plants grown in controlled environments [3,4,5].
In this study we present the first deep learning model designed specifically to detect and characterize wheat spikes in wheat field images. We adapt, train and apply a variant of the CNN, hereinafter referred to as the Region-based Convolutional Neural Network (R-CNN), to accurately count wheat spikes in images acquired with our land-based RGB imaging platform. The approach relies on a training data set of images containing spikes that have been labeled manually with rectangular boxes; the procedure produces a complete list of locations and dimensions of bounding boxes identifying plant spikes detected in images unseen during the training stage. A successful deep learning analysis requires thorough training using large data sets of high quality [5, 30]. As such, a second major contribution of this work is the release of the SPIKE data set, made up of hundreds of high-quality images containing over 20,000 labeled wheat spikes.
The outline of this article is as follows. In the "Methods" section we describe the field trial we have studied and the image acquisition system. The images from this field trial form the SPIKE data set, which is then described in detail and used for training and testing of our R-CNN model. Finally, we also present the metrics used for the validation of the proposed CNN model. In the "Results and discussion" section we analyze the performance of the model both on the main data set and on subsets containing images of field plots at different growth stages. We also provide an analysis of the density of spikes detected in images of plots of different wheat varieties treated with fertilizer at different times. In brief, we found that early treatment resulted in significantly higher yields (spike densities) for nearly all the varieties tested than those produced by the same varieties either untreated or treated later in the season.
## Methods
Figure 1 shows the overall work-flow of the in-field wheat spike detection system. The goal is to develop a fast and accurate system which can detect spikes from field images. The output is a list of bounding boxes enclosing wheat spikes, as well as the confidence level for each box, along with a count of the total number of spikes. The model has been developed in two main stages: the training stage, used to train the R-CNN for spike detection, and the testing stage, in which the trained CNN model is applied to test images.
### Experimental setup
The field trial was conducted at Mallala (−34.457062, 138.481487), South Australia, in a randomized complete block design with a total of 90 plots (18 rows and 5 columns), consisting of ten spring wheat (Triticum aestivum L.) varieties (Drysdale, Excalibur, Gladius, Gregory, Kukri, Mace, Magenta, RAC875, Scout, Yitpi) and nine replicates of each, all of which were sown on July 3, 2017. To mitigate boundary effects, an additional plot (not included in the analysis) was planted at the beginning and at the end of each row of plots. The plots were 1.2 m wide, with an inter-row spacing of approximately 0.2 m, and 4 m long, with a gap of approximately 2 m between plot rows and 0.3 m between columns. To explore the impact of fertilizer on wheat spike production, each variety was subject to three fertilizer treatments: no treatment, early treatment, and late treatment. Each combination of variety $$\times$$ treatment is replicated three times. Two thirds of the replicates were treated at a standard rate of 80 kg nitrogen, 40 kg phosphorus and 40 kg potassium per hectare (referred to as $$16-8-16 \; N-P-K$$), while the other 30 plots received no treatment at all. For the early treatment, the macronutrients nitrogen, phosphorus and potassium were applied on July 14. Urea was then applied on July 18 to the same 30 plots. For the late treatment, both fertilizers were applied together on September 26. The imaging of the plots took place approximately twice a week during the period July 21–November 22, 2017.
The land-based vehicle used for image capture is shown in Fig. 2. This wagon comprises a steel frame and four wheels, with a central overhead rail for mounting imaging sensors. While capable of housing a stereo pair of cameras for orthogonal viewing, only the camera mounted at one end, at an angle oblique to the plots, was used for this study. Viewed from directly above, many spikes, primarily those near the viewing axis, appear small and circular in images, making them difficult to detect (see the comparison of images of the same plot taken from the two perspectives in Additional file 1). Although not pursued in this paper, a perspective view also admits the possibility of a more detailed analysis of spikes (for, say, grain number estimation), with a greater fraction of their length visible, although the partial occlusion of some spikes may complicate the estimation process. Figure 2b (inset, top right) shows an image captured with this imaging platform. The images were acquired using an 18.1-megapixel Canon EOS 60D digital camera, shown in Fig. 2a, surrounded by a waterproof casing. Manual focus was used during all the imaging sessions, with the camera focused at 2.2 m and 1.8 m during early and late plant growth stages, respectively. Following some experimentation, a viewing angle of $$55^{\circ }$$ from the horizontal overhead rail was chosen to capture the maximum plot area while minimizing the area of overlapping regions. The camera sensor is located 190 cm above ground level. The camera settings were as follows:
• Focal length—18 mm,
• Aperture—f/9.0,
• ISO—automatic, and
• Exposure time—1/500 s.
Finally, the resolution of images was 5184 $$\times$$ 3456 pixels, resulting in an image resolution of approximately 0.04 cm per pixel.
### The SPIKE dataset
The high quality in-field images from this field trial are used to construct the SPIKE data set, a key contribution of this study. The SPIKE data set has three main components:
• Over 300 images of ten wheat varieties at three different growth stages.
• Annotations for each image denoting the bounding boxes of spikes.
• Deep learning models trained on these images and labels.
A diagram illustrating each of these components is shown in Fig. 3. First, images are acquired in the field. These are then automatically cropped so that only the region of interest (ROI) is kept. The captured in-field images contain other objects, including neighboring plots, plot gaps, the vehicle and a color chart, which are not required in our approach. Therefore, a representative spike-bearing region of the plot is selected as the ROI and cropped automatically for all images in the experiment. Next, the images are manually annotated with bounding boxes highlighting all the spikes present in the images. The images and annotations are then fed to the Convolutional Neural Network (CNN) for training.
Images While the original images capture the majority of the 4 m $$\times$$ 1.2 m plot area, they also contain parts of the neighboring plots, inter-plot weeds and parts of the wagon. These background objects can confound the testing phase; a particular issue is spikes of neighboring plots appearing in an image and thus being included in the density estimation. To overcome this issue, images were automatically cropped to a 0.8 m $$\times$$ 0.8 m region of interest, as shown in Fig. 3. In total, 335 images containing approximately 25,000 wheat spikes have been captured. At our camera image resolution, the spike size [width, height] ranged from [10 px, 80 px] to [50 px, 300 px].
We found that the most convenient situation for detecting wheat spikes in images is when there is considerable color contrast between the spikes and other parts of the canopy. As such, the majority of the SPIKE data set consists of images in which the spikes are approximately green in color while the canopy has already senesced to a more yellow color. However, in order to fully test the capabilities of deep learning techniques for spike detection in the field, the SPIKE data set also includes a number of images taken at two other growth stages, where spike detection is more difficult. Hereafter we denote the three different situations, shown in Fig. 4, as:
• Green Spike and Green Canopy (GSGC)
• Green Spike and Yellow Canopy (GSYC)
• Yellow Spike and Yellow Canopy (YSYC).
The GSGC, GSYC and YSYC images were acquired on 26/10/2017, 9/11/2017 and 16/11/2017, respectively. Table 1 shows the number of images acquired for each of the three classes. Although the data set contains 255 GSYC images, only 235 were used for training, while the remaining 20 were reserved for testing. The GSGC and YSYC data sets each comprise 40 images, of which 35 have been used for training and 5 for testing. The second half of the table, which indicates how many images were used in the different models, will be explained in more detail at the end of this section.
Annotations The images have been labeled by multiple experts at a resolution of $$2000\times 1500$$ pixels. For the annotation of images, we used the publicly available Video Object Tagging Tool provided by Microsoft. Each labeled image has an additional text file containing the coordinates of the annotated bounding boxes; see Fig. 5. In this file each box is saved as a 4-tuple $$(x_b,y_b,w_b,h_b)$$, where $$(x_b,y_b)$$ denotes the top-left corner of the box while the pair $$(w_b,h_b)$$ denotes the width and height of the bounding box. Each image contains approximately 70–80 spikes; therefore, in total, the 335 images contain approximately 25,000 annotated spikes.
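For illustration, the 4-tuple annotation format just described can be converted to corner coordinates as follows. The `SpikeBox` type and `to_corners` helper are our own illustrative names, not part of the released tooling:

```python
from typing import NamedTuple

class SpikeBox(NamedTuple):
    """One annotated bounding box, stored as the 4-tuple (x_b, y_b, w_b, h_b)."""
    x: float  # top-left x
    y: float  # top-left y
    w: float  # width
    h: float  # height

def to_corners(b: SpikeBox):
    """Convert the top-left/width/height annotation to (x1, y1, x2, y2) corners."""
    return (b.x, b.y, b.x + b.w, b.y + b.h)

print(to_corners(SpikeBox(100, 200, 30, 120)))  # -> (100, 200, 130, 320)
```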
Model development The SPIKE data set of 335 images in total was split into 305 training images and 30 testing images. This split was performed at the image level, not at the spike level, to ensure that no spikes from the same image could be seen in both training and testing sets. We found that the GSYC class of images, which exhibits a high color contrast, was the most suitable for spike detection in the field. For this reason, the main model used in this study was trained and tested only on the set of GSYC images. However, in order to better understand the effect of spike and canopy color on deep learning models, we trained three additional models using the two other classes. The reader is referred to the bottom half of Table 1 for a summary of the number of training and testing images used in each of the four models. The +GSGC and +YSYC models were trained using the original 235 images as well as the 35 GSGC images and 35 YSYC images, respectively. They also have test sets made up of combinations of the test images from their corresponding classes. Finally, a fourth model, ‘GSYC++’, was based on the 305 training images from all three classes and had a test set comprising all 30 designated test images.
### R-CNN model
The Region-based Convolutional Neural Network (R-CNN) was introduced by Girshick et al. [31] for object detection; it uses a selective search to detect regions of interest and a CNN to classify them. Fast R-CNN [32] later added ROI pooling after the final convolutional layer to extract a fixed-length feature vector from the feature map, together with training of all network weights by back-propagation. Faster R-CNN was subsequently developed by Ren et al. [33]. This model consists of two networks: a region proposal network (RPN) that generates region proposals, and a convolutional network that classifies the proposed regions, detecting objects almost in real time. The main difference between the two later region-based methods is how region proposals are generated: Fast R-CNN uses selective search, whereas Faster R-CNN uses the fast RPN, which shares the bulk of its computation with the detection network. Briefly, the RPN ranks candidate boxes (called anchors) and proposes those most likely to contain the desired objects. Because of its fast processing capability and high recognition rate, Faster R-CNN is used in this article for wheat spike detection. A Python implementation of Faster R-CNN is publicly available and can be accessed online [34]. The implementation was modified somewhat, and hyper-parameters were optimized to improve classification of the spike regions and overall detection performance. A detailed description of R-CNN, the specific architecture of the model, and the image processing techniques used in this article can be found in Additional file 2.
For each detected box, the R-CNN outputs a corresponding confidence level $$C\in [0,1]$$, where 0 represents the lowest and 1 the highest level of confidence that a detected object is a spike. When a box proposed by the CNN has a confidence value C larger than a predefined threshold, the proposal is classified as a spike; otherwise, it is classified as background. Higher values of C result in fewer boxes being incorrectly labeled as spikes, but also in more spikes being incorrectly labeled as background. Conversely, low values of C admit more incorrectly captured (background) regions but rarely miss plant spikes. In this study we chose a confidence threshold of $$C=0.5$$, as it provided a desirable trade-off between the two scenarios.
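The thresholding step can be illustrated with a short sketch (the function name and sample values are hypothetical):

```python
def filter_by_confidence(boxes, scores, threshold=0.5):
    """Keep only detections whose confidence C exceeds the threshold."""
    return [(b, c) for b, c in zip(boxes, scores) if c > threshold]

# boxes in (x_b, y_b, w_b, h_b) form, with illustrative confidences
boxes  = [(10, 20, 30, 40), (50, 60, 30, 40), (5, 5, 10, 10)]
scores = [0.91, 0.48, 0.73]
spikes = filter_by_confidence(boxes, scores, threshold=0.5)
# the 0.48 box is discarded as background; the other two count as spikes
```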
### Validation
The output of the R-CNN used in this study is a list of bounding boxes which will ideally contain all of the wheat spikes in an image. The goal of this study is for the number of boxes to accurately match the number of spikes in an image. Denoting boxes as spike or non-spike can yield three potential results, with the latter two being sources of error: true positive (TP)—correctly classifying a region as a spike; false positive (FP)—incorrectly classifying a background region as a spike, including multiple detections of the same spike; and false negative (FN)—incorrectly classifying a spike as a background region. The number of true negatives (TN)—correct classifications of background—is always zero and is not needed in this binary detection problem, where only foreground objects are proposed. In order to quantify our errors, the validation metrics are based on the concepts of precision, recall, accuracy and the F1 score, which are defined as follows:
• $$\text {Precision} = \displaystyle \frac{TP}{TP+FP}$$ measures how many of the detected regions are actually spikes.
• $$\text {Recall} = \displaystyle \frac{TP}{TP+FN}$$ measures how many of the spikes in the image have been captured.
• $$\text {Accuracy} = \displaystyle \frac{TP+TN}{TP+TN+FP+FN}$$ measures the overall proportion of correct classifications (with $$TN=0$$ here, this reduces to $$\frac{TP}{TP+FP+FN}$$).
• $$\text {F1 Score} = \displaystyle 2 \frac{Precision\cdot Recall}{Precision + Recall}$$ is the harmonic mean of Precision and Recall. It is a useful measure to observe a model’s robustness.
• The mean Average Precision (mAP) [35] quantifies how precise the method is at varying levels of Recall. It can be expressed as follows:
\begin{aligned} mAP = \frac{1}{11}\sum _{r\in \left\{ 0,0.1,\ldots ,1\right\} }\max _{r_{i}:r_{i}\ge r}p(r_{i}). \end{aligned}
(1)
In other words, it is defined as the mean Precision over a set of eleven equally spaced Recall levels $$[0,0.1,\ldots ,1]$$. Here, $$p(r_i)$$ is the measured Precision at Recall $$r_i$$. The Precision at each level $$r$$ is interpolated by taking the maximum Precision measured at any Recall $$r_i$$ that equals or exceeds $$r$$.
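The four counting-based metrics and the 11-point interpolated mAP above can be computed as follows (a minimal sketch, not the authors' code; the per-class averaging implied by full mAP is omitted since there is a single 'spike' class):

```python
import numpy as np

def detection_metrics(tp, fp, fn):
    """Precision, recall, accuracy and F1 for a detector with TN = 0."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = tp / (tp + fp + fn)     # TN = 0 in this setting
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, accuracy, f1

def eleven_point_ap(precisions, recalls):
    """11-point interpolated average precision (PASCAL VOC style)."""
    precisions = np.asarray(precisions)
    recalls = np.asarray(recalls)
    ap = 0.0
    for level in np.linspace(0.0, 1.0, 11):
        mask = recalls >= level
        # interpolate: best precision achieved at recall >= this level
        ap += precisions[mask].max() if mask.any() else 0.0
    return ap / 11.0
```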
All the experiments in this article were conducted on a high-performance computer with an Intel Xeon 3.50 GHz processor and 128 GB of memory. An NVIDIA GeForce graphics processing unit (GPU) with 12 GB of memory was used alongside the CPU to accelerate training of the CNN.
## Results and discussion
The performance of the proposed model was measured in terms of the detection accuracy and mean precision defined in the Validation section. To demonstrate the robustness of deep learning for spike detection, we analyzed the degree to which the different training and testing data sets, captured at different growth stages, affect model performance. Finally, we analyze the differences in spike density across the different varieties grown under the three different treatments in the field trial.
### Performance
For each test image the R-CNN program returns the locations of the detected spikes, the total number of spikes, and a classification probability (confidence) for each detected spike, see Fig. 6. The GSYC class of images was chosen to train the main model proposed in this study. Over the 20 test images, the model achieved a mAP of 0.6653 and an average accuracy of $$93.3\%$$, based on 1463 spikes detected among the 1570 manually counted spikes. For each test image, the following statistics are provided in Table 2: the number of spikes in the ground truth image, the number of spikes detected by the proposed approach, the numbers of true positives, false positives and false negatives, the precision, the mAP, the accuracy, and the F1 score. The output images corresponding to this table are included in the supplementary material (Additional file 2).
### Testing the supplementary models
In this section, the results of the base GSYC model are compared with those of the other three models. The comparative analysis for different testing sample combinations is presented in Table 3 in terms of the average detection accuracy (ADA) and in Table 4 in terms of mean Average Precision.
From Table 2, one can see that the spike detection accuracy is always within the range of 88–98% for the 20 images tested. This is quite satisfactory considering the challenges associated with in-field imaging, e.g., complex backgrounds, varying illumination conditions, shadow effects and self-occlusion. The high mAP of 0.6653 also shows the proficiency of our R-CNN, trained on the SPIKE data set. This is to be compared with the mAP performance of other CNNs applied to prominent data sets such as PASCAL VOC [35] and COCO [33], which cover 21 and 80 regular object classes, respectively (e.g., person, car, horse, dog, cat, bicycle). Figure 7 shows the relationship between the ground truth number of spikes and the estimated number of spikes for each of the 20 images. The R-CNN approach provides a near one-to-one estimate of the number of spikes per image (the line slope is 1.0086), with an intercept value of $$-\,3.95$$ indicating an intrinsic error of just four spikes. The model produces a high $$R^2$$ value of 0.93, indicating a strong linear relationship between the ground truth and the results of our approach.
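The slope, intercept and $$R^2$$ reported for Fig. 7 follow from an ordinary least-squares fit of estimated counts against ground truth counts; a minimal sketch (not the authors' code):

```python
import numpy as np

def fit_counts(ground_truth, estimated):
    """Least-squares line through (ground truth, estimate) pairs, with R^2."""
    x = np.asarray(ground_truth, dtype=float)
    y = np.asarray(estimated, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)       # degree-1 polynomial fit
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)             # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)         # total sum of squares
    return slope, intercept, 1.0 - ss_res / ss_tot
```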
The efficiency of a training model can also be analyzed by observing the training loss and error rates while the model is learning. An epoch is defined as one full pass of the training data forwards and backwards through the network during the learning stage. While the model weights are initialized randomly, after a number of epochs they approach their final values, progressively reducing the error rate and training loss. Figure 8 shows that the loss metric (described in full detail in Additional file 2) decreases over subsequent epochs of training. Although the loss and error rate are initially high, after each training epoch the reduced error rate is accompanied by a higher detection accuracy; the loss and error rates become almost constant after 200 epochs, indicating that little further improvement can be expected. Based on several trials, the number of epochs was fixed at 400 to avoid overfitting. This choice produced the high-accuracy results presented in this article.
When limited to GSYC images, the GSYC model returned the highest accuracy in terms of ADA, the percentage of spikes detected, since the testing and training images covered plants at the same growth stage. When applied to GSGC or YSYC testing images, however, while still achieving high accuracy, performance declined. Including GSGC and YSYC images in the test image set reduced the accuracy from 93.4 to 91.8% and 88.7%, respectively. Clearly, detection accuracy deteriorates when testing with images that are unknown to the trained model. Note also that the lower detection accuracy following inclusion of YSYC images in the GSYC data set points to the increased difficulty of differentiating yellow spikes from yellow canopy. The ADA comparison reflects the anticipated and indeed intuitive fact that a model performs best when applied to the same types of images as those used for training. The consistent mAP results confirm the ADA finding.
The same situation is reflected by the + GSGC and + YSYC models. These models work well when applied to image types that are included in their respective training sets. Not surprisingly, the GSYC++ model performs consistently better, in terms of both ADA and mAP, for all types of testing samples. It is not clear which factors are responsible for the highest degree of accuracy found for the GSYC + YSYC + GSGC image set. In light of the superior accuracy of the GSYC++ model, it can be concluded that a model is particularly robust if trained with all types of spike-versus-canopy scenarios. With no a priori knowledge of the samples, this model will perform better than the other training models. Indeed, while the mAP for spike detection is reduced in the other models, the GSYC++ model attains a higher mAP while maintaining a high accuracy of $$93.2\%$$. Considering the in-field imaging complexities and the hundreds of spikes to be detected in each image, the mAP value of 0.6763 leading to a $$93.2\%$$ detection accuracy with the extended GSYC++ model is significantly better than the performance exhibited with the conventional VOC07 or COCO data sets [33], with values ranging from 64 to $$78\%$$.
From Tables 3 and 4, it can be concluded that a properly trained Faster R-CNN can detect spikes with high accuracy in images acquired at the same growth stage and in an equivalent category. The precision of a model may drop, but its scalability and robustness depend on how well it is trained, particularly on whether all the different types of complex scenarios are included. Based on the performance of the different CNN models, and considering the ADA and mAP metrics for bounding box regression described in Additional file 2, the GSYC++ model was chosen to analyze the spike density variation across the different treatments applied to the different wheat varieties. For this latter investigation we selected an imaging date different from the dates used for data acquisition and training of the CNN models.
### Spike density analysis
A third contribution of this paper is a comparative analysis of spike density for the different wheat varieties under the different treatments. The 10 varieties underwent three different fertilizer treatments: no treatment, early treatment, and late treatment. Determining spike density as a function of genotype and treatment should provide some insight into their relative contributions to yield. Spike density is estimated from the total number of detected spikes within the ROI of each plot, expressed as a number per square meter. Since the ROI is uniformly cropped and consistently defined, edge effects are minimized. To quantify spike density, we constructed another test set, distinct from the images used in training and in the previous testing analysis. This image set is derived from an imaging session conducted on 7/11/2017 and contains 90 images of the 10 different varieties subject to the three treatments, with three replicates for each case. We remark in passing that the spike densities found in this study are consistent with the conditions for the region and the standard sowing rate (45 g of seed per plot); the densities are therefore not as high as those found in other parts of Australia or elsewhere in the world.
Table 5 shows the number of spikes detected using the GSYC++ model. For the different categories of variety $$\times$$ treatment, the average values show the mean number of spikes detected in the three replicated plots. It is clear that the untreated wheat plants generally produced fewer spikes per square meter compared with either of the other two treatments. In the case of early fertilization, the varieties Excalibur, Drysdale and Gladius produced significantly more spikes (and hence greater spike densities) than the other varieties (see Fig. 9). The effect of an early treatment was more moderate for Kukri, Mace and Scout, whose densities increased by just over 15 spikes per square meter. In complete contrast, the effect of fertilizer application on RAC875, at either time point, was negligible.
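Converting per-plot counts into the densities compared here is a simple division by the ROI area, followed by averaging over replicates; the sketch below uses a hypothetical ROI area and replicate counts:

```python
import numpy as np

def plot_densities(counts, roi_area_m2):
    """Spikes per square meter for each replicate plot."""
    return np.asarray(counts, dtype=float) / roi_area_m2

def summarize(counts, roi_area_m2):
    """Mean density and spread (std) across replicate plots."""
    d = plot_densities(counts, roi_area_m2)
    return d.mean(), d.std()

# hypothetical counts for three replicate plots imaged over a 1.5 m^2 ROI
mean_d, spread = summarize([120, 126, 123], roi_area_m2=1.5)
```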
Regarding the timing of treatment, the early treatment resulted in significantly higher spike numbers for nearly all varieties than the same variety treated later in the season. We speculate that this was due, at least in part, to the longer exposure time of the fertilized soil to rainfall, which facilitated greater uptake of nutrients than possibly occurred with the plants treated later in the season. On the other hand, it is also possible that the comparison is simply consistent with established findings [36] that an early treatment results in greater biomass, while a later treatment can instead result in increased grain nitrogen content. Unfortunately, no analysis of the grain was conducted in this field trial to confirm such an outcome. Further studies are underway to assess the importance of timing on the question of grain filling versus biomass production.
Shown also in Fig. 9 is the degree of variation between replicates of the 10 cultivars under different treatments. In the majority of cases, adding fertilizer early in the season reduced the degree of variation across replicates: no treatment resulted in a deviation of between 3 and 15 $$\hbox {spikes/m}^2$$ over the 10 varieties, while for the plots treated early, the spread reduced to between 1 and 5 $$\hbox {spikes/m}^2$$. The greater consistency possibly highlights another aspect of fertilizer treatment. Applying fertilizer later in the season did little to improve consistency, with only 2 out of 3 replicates showing similar results, the third differing significantly, as found in the case of no treatment. Indeed, if one removes the outliers then one could conclude that, as in the case of RAC875, there is little difference between the untreated plots and the late treated plots of Gregory, Excalibur and Magenta.
## Conclusion
Estimating the yield of cereal crops grown in the field is a challenging task, yet it is an essential focus of plant breeders for wheat variety selection and improved crop productivity. Most previous work involving image analysis of wheat spikes has been conducted under laboratory conditions and in controlled environments. Here, we have presented the first deep learning models for spike detection trained on wheat images taken in the field. The models are capable of accurately detecting wheat spikes within a complex and changing imaging environment. The best performing model produced an average accuracy and F1 score of $$93.4\%$$ and 0.95, respectively, when tested on 20 images containing 1570 spikes in total. Although we have not applied the model to oblique-view images of higher-density field plots, for lack of access to such images, we expect the model to perform well at higher densities notwithstanding partial occlusion. Improvement is nevertheless possible by complementing the SPIKE data set with further training images of partial spike objects. The ability to count spikes in the field, a trait closely related to crop yield, with such accuracy and without destructive sampling or time-consuming manual effort is a significant step forward in field-based plant phenotyping.
https://infoscience.epfl.ch/record/199343 | Infoscience
Conference paper
# Advanced divertor configurations with large flux expansion
Experimental studies of the novel snowflake divertor concept (D. Ryutov, Phys. Plasmas 14 (2007) 064502) performed in the NSTX and TCV tokamaks are reviewed in this paper. The snowflake divertor enables power sharing between divertor strike points, as well as the divertor plasma-wetted area, effective connection length and divertor volumetric power loss to increase beyond those in the standard divertor, potentially reducing heat flux and plasma temperature at the target. It also enables higher magnetic shear inside the separatrix, potentially affecting pedestal MHD stability. Experimental results from NSTX and TCV confirm the predicted properties of the snowflake divertor. In NSTX, a large spherical tokamak with a compact divertor and lithium-coated graphite plasma-facing components (PFCs), snowflake divertor operation led to reduced core and pedestal impurity concentration, as well as the reappearance of Type I ELMs that were suppressed in standard divertor H-mode discharges. In the divertor, an otherwise inaccessible partial detachment of the outer strike point was achieved, with an up to 50% increase in divertor radiation and a peak divertor heat flux reduction from 3–7 MW/m² to 0.5–1 MW/m². Impulsive heat fluxes due to Type I ELMs were significantly dissipated in the high magnetic flux expansion region. In TCV, a medium-size tokamak with graphite PFCs, several advantageous snowflake divertor features (cf. the standard divertor) have been demonstrated: an unchanged L–H power threshold, enhanced stability of the peeling–ballooning modes in the pedestal region (and generally an extended second stability region), as well as an H-mode pedestal regime with reduced (×2–3) Type I ELM frequency and slightly increased (20–30%) normalized ELM energy, resulting in a favorable average energy loss compared with the standard divertor. In the divertor, ELM power partitioning between snowflake divertor strike points was demonstrated.
The NSTX and TCV experiments are providing support for the snowflake divertor as a viable solution for the outstanding tokamak plasma-material interface issues. (C) 2013 Elsevier B.V. All rights reserved.
http://quantum-machine.org/gdml/doc/sgdml.utils.html | # sgdml.utils package
## sgdml.utils.desc module
sgdml.utils.desc.init(n_atoms)[source]
sgdml.utils.desc.pbc_diff(u, v, size)[source]
Compute the difference of two vectors, applying the minimum-image convention as a periodic boundary condition.
Parameters:
- u (numpy.ndarray) – First vector.
- v (numpy.ndarray) – Second vector.
- size (float) – Edge length of (cubic) unit cell.

Returns: Difference between two vectors u - v.
Return type: numpy.ndarray
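A minimal implementation consistent with this signature might look as follows (illustrative, not the packaged source):

```python
import numpy as np

def pbc_diff(u, v, size):
    """u - v under the minimum-image convention for a cubic cell."""
    diff = u - v
    # wrap each component into [-size/2, size/2]
    diff -= size * np.round(diff / size)
    return diff
```

For example, in a cell of edge length 1.0, points at 0.9 and 0.1 are only 0.2 apart through the periodic boundary, not 0.8.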
sgdml.utils.desc.perm(perm)[source]
Convert atom permutation to descriptor permutation.
A permutation of N atoms is converted to a permutation that acts on the corresponding descriptor representation. Applying the converted permutation to a descriptor is equivalent to permuting the atoms first and then generating the descriptor.
Parameters:
- perm (numpy.ndarray) – Array of size N containing the atom permutation.

Returns: Array of size N(N-1)/2 containing the corresponding descriptor permutation.
Return type: numpy.ndarray
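One way to realize this conversion is sketched below; the row-major upper-triangle ordering of pair indices is an assumption about sGDML's descriptor layout, not taken from the source:

```python
import numpy as np

def desc_perm(atom_perm):
    """Convert an atom permutation to the induced descriptor permutation."""
    n = len(atom_perm)
    # flat index of each unordered pair (i, j), i < j, in upper-triangle order
    pair_idx = {}
    k = 0
    for i in range(n):
        for j in range(i + 1, n):
            pair_idx[(i, j)] = k
            k += 1
    out = np.empty(n * (n - 1) // 2, dtype=int)
    for (i, j), k in pair_idx.items():
        pi, pj = atom_perm[i], atom_perm[j]
        out[k] = pair_idx[(min(pi, pj), max(pi, pj))]
    return out
```

With this convention, permuting the atoms and recomputing the pairwise distances yields the same vector as indexing the original descriptor with the converted permutation.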
sgdml.utils.desc.r_to_d_desc(r, pdist, ucell_size=None)[source]
Generate Jacobian of descriptor for a set of atom positions in Cartesian coordinates. This method can apply the minimum-image convention as a periodic boundary condition for distances between atoms, given the edge length of the (cubic) unit cell.
Parameters:
- r (numpy.ndarray) – Array of size 1 x 3N containing the Cartesian coordinates of each atom.
- pdist (numpy.ndarray) – Array of size N x N containing the Euclidean distance (2-norm) for each pair of atoms.
- ucell_size (float, optional) – Edge length of the (cubic) unit cell.

Returns: Array of size N(N-1)/2 x 3N containing all partial derivatives of the descriptor.
Return type: numpy.ndarray
sgdml.utils.desc.r_to_d_desc_op(r, pdist, F_d, ucell_size=None)[source]
Compute vector-matrix product with descriptor Jacobian.
The descriptor Jacobian will be generated and directly applied without storing it. This method can apply the minimum-image convention as a periodic boundary condition for distances between atoms, given the edge length of the (cubic) unit cell.
Parameters:
- r (numpy.ndarray) – Array of size 1 x 3N containing the Cartesian coordinates of each atom.
- pdist (numpy.ndarray) – Array of size N x N containing the Euclidean distance (2-norm) for each pair of atoms.
- F_d (numpy.ndarray) – Array of size N(N-1)/2.
- ucell_size (float, optional) – Edge length of the (cubic) unit cell.

Returns: Array of size 3N containing the dot product of F_d and the descriptor Jacobian.
Return type: numpy.ndarray
sgdml.utils.desc.r_to_desc(r, pdist)[source]
Generate descriptor for a set of atom positions in Cartesian coordinates.
Parameters:
- r (numpy.ndarray) – Array of size 3N containing the Cartesian coordinates of each atom.
- pdist (numpy.ndarray) – Array of size N x N containing the Euclidean distance (2-norm) for each pair of atoms.

Returns: Descriptor representation as 1D array of size N(N-1)/2
Return type: numpy.ndarray
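sGDML's descriptor is commonly described as the vector of inverse pairwise distances; treating that functional form (and the upper-triangle ordering) as an assumption here, the routine can be sketched as:

```python
import numpy as np

def r_to_desc(r, pdist):
    """Flatten the upper triangle of the distance matrix into a descriptor.

    The inverse-distance form is an assumption about the descriptor,
    not confirmed by this documentation page.
    """
    n = pdist.shape[0]
    iu = np.triu_indices(n, k=1)  # indices of pairs (i, j) with i < j
    return 1.0 / pdist[iu]
```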
## sgdml.utils.io module
sgdml.utils.io.dataset_md5(dataset)[source]
sgdml.utils.io.generate_xyz_str(r, z, e=None, f=None, lattice=None)[source]
sgdml.utils.io.model_file_name(task_or_model, is_extended=False)[source]
sgdml.utils.io.read_xyz(file_path)[source]
sgdml.utils.io.task_file_name(task)[source]
sgdml.utils.io.train_dir_name(dataset, n_train, use_sym, use_cprsn, use_E, use_E_cstr)[source]
sgdml.utils.io.write_geometry(filename, r, z, comment_str='')[source]
sgdml.utils.io.z_str_to_z(z_str)[source]
sgdml.utils.io.z_to_z_str(z)[source]
## sgdml.utils.perm module
sgdml.utils.perm.complete_group(perms)[source]
sgdml.utils.perm.inv_perm(perm)[source]
sgdml.utils.perm.share_array(arr_np, typecode)[source]
sgdml.utils.perm.sync_mat(R, z, max_processes=None)[source]
## sgdml.utils.ui module
sgdml.utils.ui.fail_str(str)[source]
sgdml.utils.ui.filter_file_type(dir, type, md5_match=None)[source]
sgdml.utils.ui.gray_str(str)[source]
sgdml.utils.ui.green_back_str(str)[source]
sgdml.utils.ui.info_str(str)[source]
sgdml.utils.ui.is_dir_with_file_type(arg, type, or_file=False)[source]
Validate directory path and check if it contains files of the specified type.
Parameters:
- arg (str) – File path.
- type ({‘dataset’, ‘task’, ‘model’}) – Possible file types.
- or_file (bool) – If arg contains a file path, act like it’s a directory with just a single file inside.

Returns: Tuple of directory path (as provided) and a list of contained file names of the specified type.
Return type: (str, list of str)
Raises:
- ArgumentTypeError – If the provided directory path does not lead to a directory.
- ArgumentTypeError – If directory contains unreadable files.
- ArgumentTypeError – If directory contains no files of the specified type.
sgdml.utils.ui.is_file_type(arg, type)[source]
Validate file path and check if the file is of the specified type.
Parameters:
- arg (str) – File path.
- type ({‘dataset’, ‘task’, ‘model’}) – Possible file types.

Returns: Tuple of file path (as provided) and data stored in the file. The returned instance of NpzFile class must be closed to avoid leaking file descriptors.
Return type: (str, dict)
Raises:
- ArgumentTypeError – If the provided file path does not lead to a NpzFile.
- ArgumentTypeError – If the file is not readable.
- ArgumentTypeError – If the file is of wrong type.
- ArgumentTypeError – If path/fingerprint is provided, but the path is not valid.
- ArgumentTypeError – If fingerprint could not be resolved.
- ArgumentTypeError – If multiple files with the same fingerprint exist.
sgdml.utils.ui.is_lattice_supported(lat)[source]
sgdml.utils.ui.is_strict_pos_int(arg)[source]
Validate strictly positive integer input.
Parameters:
- arg (str) – Integer as string.

Returns: Parsed integer.
Return type: int
Raises:
- ArgumentTypeError – If integer is not > 0.
sgdml.utils.ui.is_task_dir_resumeable(train_dir, train_dataset, test_dataset, n_train, n_test, sigs, gdml)[source]
Check if a directory contains task and/or model files that match the configuration of a training process specified in the remaining arguments.
Check whether the training and test datasets in each task match train_dataset and test_dataset, whether the number of training and test points matches, and whether the choices for the kernel hyper-parameter $$\sigma$$ are contained in the list sigs. Also check whether the existing tasks/models contain symmetries and whether that is consistent with the flag gdml. This function is useful for determining whether a training process can be resumed from the existing files.
Parameters:
- train_dir (str) – Path to training directory.
- train_dataset (dataset) – Dataset from which training points are sampled.
- test_dataset (test_dataset) – Dataset from which test points are sampled (may be the same as train_dataset).
- n_train (int) – Number of training points to sample.
- n_test (int) – Number of test points to sample.
- sigs (list of int) – List of $$\sigma$$ kernel hyper-parameter choices (usually: the hyper-parameter search grid)
- gdml (bool) – If True, don’t include any symmetries in model (GDML), otherwise do (sGDML).

Returns: False, if any of the files in the directory do not match the training configuration.
Return type: bool
sgdml.utils.ui.is_valid_file_type(arg_in)[source]
sgdml.utils.ui.parse_list_or_range(arg)[source]
Parses a string that represents either an integer or a range in the notation <start>:<step>:<stop>.
Parameters:
- arg (str) – Integer or range string.

Return type: int or list of int
Raises:
- ArgumentTypeError – If input can neither be interpreted as an integer nor a valid range.
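A plausible implementation of this parser is sketched below; whether the stop value is inclusive is an assumption not settled by the docstring:

```python
from argparse import ArgumentTypeError

def parse_list_or_range(arg):
    """Parse '<int>' or '<start>:<step>:<stop>' (stop assumed inclusive)."""
    if ':' in arg:
        try:
            start, step, stop = (int(p) for p in arg.split(':'))
            return list(range(start, stop + 1, step))
        except ValueError:
            raise ArgumentTypeError(
                "'%s' is neither an integer nor a valid range" % arg
            )
    try:
        return int(arg)
    except ValueError:
        raise ArgumentTypeError(
            "'%s' is neither an integer nor a valid range" % arg
        )
```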
sgdml.utils.ui.pass_str(str)[source]
sgdml.utils.ui.progr_bar(current, total, disp_str='', sec_disp_str=None)[source]
Print progress bar.
Example: [ 45%] Task description (secondary string)
Parameters:
- current (int) – How many items already processed?
- total (int) – Total number of items?
- disp_str (str, optional) – Task description.
- sec_disp_str (str, optional) – Additional string shown in gray.
sgdml.utils.ui.progr_toggle(is_done, disp_str='', sec_disp_str=None)[source]
Print progress toggle.
Example (not done): [ .. ] Task description (secondary string)
Example (done): [DONE] Task description (secondary string)
Parameters:
- is_done (bool) – Task done?
- disp_str (str, optional) – Task description.
- sec_disp_str (str, optional) – Additional string shown in gray.
sgdml.utils.ui.underline_str(str)[source]
sgdml.utils.ui.warn_str(str)[source]
sgdml.utils.ui.white_back_str(str)[source]
sgdml.utils.ui.white_bold_str(str)[source]
sgdml.utils.ui.yes_or_no(question)[source]
Ask for yes/no user input on a question.
Any response besides y yields a negative answer.
Parameters:
- question (str) – User question.
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.aoas/1356629057 | ## The Annals of Applied Statistics
### Toxicity profiling of engineered nanomaterials via multivariate dose-response surface modeling
#### Abstract
New generation in vitro high-throughput screening (HTS) assays for the assessment of engineered nanomaterials provide an opportunity to learn how these particles interact at the cellular level, particularly in relation to injury pathways. These types of assays are often characterized by small sample sizes, high measurement error and high dimensionality, as multiple cytotoxicity outcomes are measured across an array of doses and durations of exposure. In this paper we propose a probability model for the toxicity profiling of engineered nanomaterials. A hierarchical structure is used to account for the multivariate nature of the data by modeling dependence between outcomes and thereby combining information across cytotoxicity pathways. In this framework we are able to provide a flexible surface-response model that provides inference and generalizations of various classical risk assessment parameters. We discuss applications of this model to data on eight nanoparticles evaluated in relation to four cytotoxicity parameters.
#### Article information
Source
Ann. Appl. Stat. Volume 6, Number 4 (2012), 1707-1729.
Dates
First available in Project Euclid: 27 December 2012
http://projecteuclid.org/euclid.aoas/1356629057
Digital Object Identifier
doi:10.1214/12-AOAS563
Mathematical Reviews number (MathSciNet)
MR3058681
Zentralblatt MATH identifier
06141545
#### Citation
Patel, Trina; Telesca, Donatello; George, Saji; Nel, André E. Toxicity profiling of engineered nanomaterials via multivariate dose-response surface modeling. Ann. Appl. Stat. 6 (2012), no. 4, 1707--1729. doi:10.1214/12-AOAS563. http://projecteuclid.org/euclid.aoas/1356629057.
#### References
• Baladandayuthapani, V., Mallick, B. K. and Carroll, R. J. (2005). Spatially adaptive Bayesian penalized regression splines (P-splines). J. Comput. Graph. Statist. 14 378–394.
• Besag, J. and Higdon, D. (1999). Bayesian analysis of agricultural field experiments. J. R. Stat. Soc. Ser. B Stat. Methodol. 61 691–746.
• Calabrese, E. and Baldwin, L. (2003). Toxicology rethinks its central belief. Nature 421 691–692.
• Emmens, C. (1940). The dose-response relation for certain principles of the pituitary gland, and of the serum and urine of pregnancy. Journal of Endocrinology 2 194–225.
• Finney, D. J. (1979). Bioassay and the practice of statistical inference. Internat. Statist. Rev. 47 1–12.
• Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85 398–409.
• Gelman, A. (2006). Prior distributions for variance parameters in hierarchical models (comment on article by Browne and Draper). Bayesian Anal. 1 515–533 (electronic).
• Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence 6 721–741.
• George, S., Pokhrel, S., Xia, T., Gilbert, B., Ji, Z., Schowalter, M., Rosenauer, A., Damoiseaux, R., Bradley, K., Madler, L. and Nel, A. (2009). Use of a rapid cytotoxicity screening approach to engineer a safer zinc oxide nanoparticle through iron doping. ACS Nano 4 15–29.
• George, S., Xia, T., Rallo, R., Zhao, Y., Ji, Z., Lin, S., Wang, X., Zhang, H., France, B., Schoenfeld, D., Damoiseaux, R., Liu, R., Lin, S., Bradley, K., Cohen, Y. and Nel, A. (2011). Use of a high-throughput screening approach coupled with in vivo zebrafish embryo screening to develop hazard ranking for engineered nanomaterials. ACS Nano 5 1805–1817.
• Geys, H., Regan, M., Catalano, P. and Molenberghs, G. (2001). Two latent variable risk assessment approaches for mixed continuous and discrete outcomes from developmental toxicity data. J. Agric. Biol. Environ. Stat. 6 340–355.
• Gneiting, T., Balabdaoui, F. and Raftery, A. E. (2007). Probabilistic forecasts, calibration and sharpness. J. R. Stat. Soc. Ser. B Stat. Methodol. 69 243–268.
• Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82 711–732.
• Hastie, T. and Tibshirani, R. (1986). Generalized additive models. Statist. Sci. 1 297–318.
• Hill, A. (1910). The possible effects of the aggregation of the molecules of haemoglobin on its dissociation curves. The Journal of Physiology 40 iv–vii.
• Hoheisel, J. (2006). Microarray technology: Beyond transcript profiling and genotype analysis. Nature Review Genetics 7 200–210.
• Kahru, A. and Dubourguier, H. (2009). From ecotoxicology to nanoecotoxicolgy. Toxicology 269 105–119.
• Kong, M. and Eubank, R. L. (2006). Monotone smoothing with application to dose-response curve. Comm. Statist. Simulation Comput. 35 991–1004.
• Li, C.-S. and Hunt, D. (2004). Regression splines for threshold selection with application to a random-effects logistic dose-response model. Comput. Statist. Data Anal. 46 1–9.
• Maynard, A., Aitken, R., Butz, T., Colvin, V., Donaldson, K., Oberdörster, G., Philbert, M., Ryan, J., Seaton, A., Stone, V., Tinkle, S., Tran, L., Walker, N. and Warheit, D. (2006). Safe handling of nanotechnology. Nature Biotechnology 444 267–268.
• Meng, H., Liong, M., Xia, T., Li, Z., Ji, Z. Zink, J. and Nel, A. E. (2010). Engineered design of mesoporous silica nanoparticles to deliver doxorubicin and p-glycoprotein sirna to overcome drug resistance in a cancer cell line. ACS Nano 4 4539–4550.
• Nel, A., Xia, T., Mädler, L. and Li, N. (2006). Toxic potential of materials at the nanolevel. Science 311 622–627.
• Nel, A., Mädler, L., Velegol, D., Xia, T., Hoek, E., Somasundaran, P., Klaessig, F., Castranova, V. and Thompson, M. (2009). Understanding biophysicochemical interactions at the nano-bio interface. Nature Materials 8 543–557.
• Patel, T., Telesca, D., George, S. and Nel, A. (2012). Supplement to “Toxicity profiling of engineered nanomaterials via multivariate dose-response surface modeling.” DOI:10.1214/12-AOAS563SUPP.
• Plummer, M., Best, N., Cowles, K. and Vines, K. (2006). CODA: Convergence diagnosis and output analysis for MCMC. R News 6 7–11.
• Ramsay, J. (1988). Monotone regression splines in action. Statist. Sci. 3 425–461.
• Regan, M. M. and Catalano, P. J. (1999). Bivariate dose-response modeling and risk estimation in developmental toxicology. J. Agric. Biol. Environ. Stat. 4 217–237.
• Ritz, C. (2010). Toward a unified approach to dose-response modeling in ecotoxicology. Environ. Toxicol. Chem. 29 220–229.
• Roberts, G. O. and Rosenthal, J. S. (2001). Optimal scaling for various Metropolis–Hastings algorithms. Statist. Sci. 16 351–367.
• Scott, J. G. and Berger, J. O. (2006). An exploration of aspects of Bayesian multiple testing. J. Statist. Plann. Inference 136 2144–2162.
• Severini, T. A. and Staniswalis, J. G. (1994). Quasi-likelihood estimation in semiparametric models. J. Amer. Statist. Assoc. 89 501–511.
• Stanley, S., Westly, E., Pittet, M., Subramanian, A., Schreiber, S. and Weissleder, R. (2008). Perturbational profiling of nanomaterial biologic activity. Proc. Natl. Acad. Sci. USA 105 7387–7392.
• Stern, S. and McNeil, S. (2008). Nanotechnology safety concerns revisited. Toxicological Sciences 101 4–21.
• Tierney, L. (1994). Markov chains for exploring posterior distributions. Ann. Statist. 22 1701–1762.
• West, M. (1984). Outlier models and prior distributions in Bayesian linear regression. J. Roy. Statist. Soc. Ser. B 46 431–439.
• White, R. E. (2000). High-throughput screening in drug metabolism and pharmacokinetic support of drug discovery. Annu. Rev. Pharmacol. Toxicol. 40 133–157.
• Xia, T., Kovochich, M., Brant, J., Hotze, M., Sempf, J., Oberley, T., Sioutas, C., Yeh, J., Wiesner, M. and Nel, A. E. (2006). Comparison of the abilities of ambient and manufactured nanoparticles to induce cellular toxicity according to an oxidative stress paradigm. Nano Letters 6 1794–1807.
• Yu, Z.-F. and Catalano, P. J. (2005). Quantitative risk assessment for multivariate continuous outcomes with application to neurotoxicology: The bivariate case. Biometrics 61 757–766.
#### Supplemental materials
• Supplementary material: Supplementary Appendices. Full conditional distributions for the model described in Section 2 are provided in the supplemental article, Appendix A. Spline coefficients $\boldsymbol{\beta},\boldsymbol{\gamma}$ and $\boldsymbol{\delta}$ are directly sampled from their conditional posterior distributions via direct simulation (Gibbs step). To assess estimation of the model presented in Section 2, we present a simulation study in the supplemental article, Appendix B. The dose and time kinetics were simulated from various parametric functions. Both canonical and noncanonical profiles that are reasonably interpretable under a toxicity framework were generated. In addition, we assess sensitivity of the model results to our choice of prior parameters for population level interior knot parameters $\boldsymbol{\lambda}_{\boldsymbol{\phi}_{i}}$ and $\boldsymbol{\lambda}_{\boldsymbol{\phi}_{i}}$. In the supplemental article, Appendix C, we provide an additional sensitivity analysis assessing model results to our choice of prior model for the change-point parameters. Alternative prior models assessed include a truncated normal prior and a parameterization of the bivariate beta prior that results in a uniform prior on the simplex. The supplemental article, Appendix D, presents results associated with inference on the 6 remaining particles not presented in Section 4.3. Finally, Appendix E discusses model assessment and goodness-of-fit diagnostics associated with the model described in Section 2. 
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4573422074317932, "perplexity": 16904.01605851673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930895.88/warc/CC-MAIN-20150521113210-00332-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://jpsoft.com/forums/threads/utf8encode.4272/ | # @UTF8ENCODE?
#### vefatica
May 20, 2008
8,564
45
Syracuse, NY, USA
Is Unicode to UTF8 supposed to work? I posted this in another thread.
Dan, your batch file piped to LIST worked OK for me using a 21K Unicode (BuildLog) HTM file. I know nothing of UTF8 so I tried this:
Code:
``````v:\> Set FileName=P:\synergy-1.3.1\gen\debug\buildlog.htm
v:\> echo %@filesize[%filename]
21262
v:\> echo %@utf8encode[%filename, utf8.htm]
0
v:\> dir /k /m utf8.htm
2012-08-28 20:17 134 utf8.htm
v:\> type utf8.htm
├┐├╛<
v:\>``````
Since the output file was only 134 bytes, I doubt it was a correct conversion of the original. And using TYPE on it resulted in only a handful of characters being printed. So I'll start a thread about @UTF8ENCODE[].
#### vefatica
Also from another thread: @UTF8ENCODE[] seems to turn 0x0D0A into 0x0D0D0A00. As I said, I know nothing of UTF8, but it would seem unlikely it's supposed to look like that.
Code:
``````v:\> echo %@utf8encode[leontiev.txt,leontiev.utf8]
0
v:\> type /x leontiev.txt
0000 0000 4c 65 6f 6e 74 69 65 66 20 77 6f 6e 20 74 68 65 Leontief won the
0000 0010 20 4e 6f 62 65 6c 20 43 6f 6d 6d 69 74 74 65 65 Nobel Committee
0000 0020 27 73 20 4e 6f 62 65 6c 20 4d 65 6d 6f 72 69 61 's Nobel Memoria
0000 0030 6c 20 50 72 69 7a 65 20 69 6e 20 45 63 6f 6e 6f l Prize in Econo
0000 0040 6d 69 63 0d 0a 53 63 69 65 6e 63 65 73 20 69 6e mic..Sciences in
[snip]
v:\> type /x leontiev.utf8
0000 0000 4c 65 6f 6e 74 69 65 66 20 77 6f 6e 20 74 68 65 Leontief won the
0000 0010 20 4e 6f 62 65 6c 20 43 6f 6d 6d 69 74 74 65 65 Nobel Committee
0000 0020 27 73 20 4e 6f 62 65 6c 20 4d 65 6d 6f 72 69 61 's Nobel Memoria
0000 0030 6c 20 50 72 69 7a 65 20 69 6e 20 45 63 6f 6e 6f l Prize in Econo
0000 0040 6d 69 63 0d 0d 0a 00 53 63 69 65 6e 63 65 73 20 mic....Sciences
[snip]
v:\>`````` | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9252660274505615, "perplexity": 2273.1687046581806}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864795.68/warc/CC-MAIN-20180622201448-20180622221448-00374.warc.gz"} |
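For comparison, a correct wide-character-to-UTF-8 conversion never produces the 0D 0D 0A 00 pattern shown above: that pattern is the signature of CR/LF newline translation being applied to bytes that were already encoded. A small Python sketch of the expected byte-level behavior (illustrative only, not TCC's implementation):

```python
# Correct UTF-16LE -> UTF-8 round trip for comparison with the @UTF8ENCODE
# output above. CR LF is 0D 00 0A 00 in UTF-16LE and plain 0D 0A in UTF-8;
# 0D 0D 0A 00 would mean "\n" -> "\r\n" translation was applied to bytes
# that were already encoded.

text = "Leontief won the\r\nSciences"
utf16 = text.encode("utf-16-le")
utf8 = utf16.decode("utf-16-le").encode("utf-8")

assert b"\x0d\x00\x0a\x00" in utf16    # UTF-16LE CR LF
assert b"\x0d\x0a" in utf8             # UTF-8 CR LF
assert b"\x0d\x0d\x0a\x00" not in utf8 # the buggy pattern never appears
```

The 134-byte output in the first post likewise suggests the conversion stopped early rather than re-encoding the whole file.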
https://arxiv.org/abs/1505.04510 | cond-mat.str-el
# Title: Tuning the metal-insulator transition in NdNiO3 heterostructures via Fermi surface instability and spin-fluctuations
Abstract: We employed {\it in-situ} pulsed laser deposition (PLD) and angle-resolved photoemission spectroscopy (ARPES) to investigate the mechanism of the metal-insulator transition (MIT) in NdNiO$_3$ (NNO) thin films, grown on NdGaO$_3$(110) and LaAlO$_3$(100) substrates. In the metallic phase, we observe three dimensional hole and electron Fermi surface (FS) pockets formed from strongly renormalized bands with well-defined quasiparticles. Upon cooling across the MIT in NNO/NGO sample, the quasiparticles lose coherence via a spectral weight transfer from near the Fermi level to localized states forming at higher binding energies. In the case of NNO/LAO, the bands are apparently shifted upward with an additional holelike pocket forming at the corner of the Brillouin zone. We find that the renormalization effects are strongly anisotropic and are stronger in NNO/NGO than NNO/LAO. Our study reveals that substrate-induced strain tunes the crystal field splitting, which changes the FS properties, nesting conditions, and spin-fluctuation strength, and thereby controls the MIT via the formation of an electronic order parameter with Q$_{AF}\sim$(1/4, 1/4, 1/4$\pm$$\delta$).
Comments: submitted
Subjects: Strongly Correlated Electrons (cond-mat.str-el)
Journal reference: Phys. Rev. B 92, 035127 (2015)
DOI: 10.1103/PhysRevB.92.035127
Cite as: arXiv:1505.04510 [cond-mat.str-el] (or arXiv:1505.04510v1 [cond-mat.str-el] for this version)
## Submission history
From: Rajendra Dhaka [view email]
[v1] Mon, 18 May 2015 04:33:35 UTC (4,983 KB) | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5907884240150452, "perplexity": 7976.336468885567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671239.99/warc/CC-MAIN-20191122042047-20191122070047-00173.warc.gz"} |
http://sage-doc.sis.uta.fi/reference/matroids/sage/matroids/graphic_matroid.html | # Graphic Matroids¶
Let $$G = (V,E)$$ be a graph and let $$C$$ be the collection of the edge sets of cycles in $$G$$. The corresponding graphic matroid $$M(G)$$ has ground set $$E$$ and circuits $$C$$.
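Since the independent sets of \(M(G)\) are exactly the edge sets of forests in \(G\), the rank of \(M(G)\) is \(|V|\) minus the number of connected components — the size of a spanning forest. A plain-Python sketch of this fact using union-find (illustrative only, not part of the Sage API documented below):

```python
# Sketch: rank of the graphic matroid M(G) = |V| - c(G) = size of a spanning
# forest, because the independent sets of M(G) are the forests of G.
# Plain Python with union-find; not part of the Sage matroids API.

def graphic_matroid_rank(vertices, edges):
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    rank = 0
    for u, v in edges:            # greedily grow a spanning forest
        ru, rv = find(u), find(v)
        if ru != rv:              # edge joins two components: independent
            parent[ru] = rv
            rank += 1
    return rank

# Diamond graph: 4 vertices, 5 edges, spanning tree of size 3.
print(graphic_matroid_rank(range(4), [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]))  # 3
```

This agrees with the rank-4-on-5-elements and rank-5-on-9-elements examples shown further down for the bull graph and complete bipartite graph.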
## Construction¶
The recommended way to create a graphic matroid is by using the Matroid() function, with a graph $$G$$ as input. This function can accept many different kinds of input to get a graphic matroid if the graph keyword is used, similar to the Graph() constructor. However, invoking the class directly is possible too. To get access to it, type:
sage: from sage.matroids.advanced import *
See also sage.matroids.advanced.
Graphic matroids do not have a representation matrix or any of the functionality of regular matroids. It is possible to get an instance of the RegularMatroid class by using the regular keyword when constructing the matroid. It is also possible to cast a GraphicMatroid as a RegularMatroid with the regular_matroid() method:
sage: M1 = Matroid(graphs.DiamondGraph(), regular=True)
sage: M2 = Matroid(graphs.DiamondGraph())
sage: M3 = M2.regular_matroid()
Below are some examples of constructing a graphic matroid.
sage: from sage.matroids.advanced import *
sage: edgelist = [(0, 1, 'a'), (0, 2, 'b'), (1, 2, 'c')]
sage: G = Graph(edgelist)
sage: M1 = Matroid(G)
sage: M2 = Matroid(graph=edgelist)
sage: M3 = Matroid(graphs.CycleGraph(3))
sage: M1 == M3
False
sage: M1.is_isomorphic(M3)
True
sage: M1.equals(M2)
True
sage: M1 == M2
True
sage: isinstance(M1, GraphicMatroid)
True
sage: isinstance(M1, RegularMatroid)
False
Note that if there is not a complete set of unique edge labels, and there are no parallel edges, then vertex tuples will be used for the ground set. The user may wish to override this by specifying the ground set, as the vertex tuples will not be updated if the matroid is modified:
sage: G = graphs.DiamondGraph()
sage: M1 = Matroid(G)
sage: N1 = M1.contract((0,1))
sage: N1.graph().edges_incident(0, sort=True)
[(0, 2, (0, 2)), (0, 2, (1, 2)), (0, 3, (1, 3))]
sage: M2 = Matroid(range(G.num_edges()), G)
sage: N2 = M2.contract(0)
sage: N1.is_isomorphic(N2)
True
AUTHORS:
• Zachary Gershkoff (2017-07-07): initial version
## Methods¶
class sage.matroids.graphic_matroid.GraphicMatroid(G, groundset=None)
The graphic matroid class.
INPUT:
• G – a Graph
• groundset – (optional) a list in 1-1 correspondence with G.edge_iterator()
OUTPUT:
A GraphicMatroid instance where the ground set elements are the edges of G.
Note

If a disconnected graph is given as input, the instance of GraphicMatroid will connect the graph components and store this as its graph.
EXAMPLES:
sage: from sage.matroids.advanced import *
sage: M = GraphicMatroid(graphs.BullGraph()); M
Graphic matroid of rank 4 on 5 elements
sage: N = GraphicMatroid(graphs.CompleteBipartiteGraph(3,3)); N
Graphic matroid of rank 5 on 9 elements
A disconnected input will get converted to a connected graph internally:
sage: G1 = graphs.CycleGraph(3); G2 = graphs.DiamondGraph()
sage: G = G1.disjoint_union(G2)
sage: len(G)
7
sage: G.is_connected()
False
sage: M = GraphicMatroid(G)
sage: M
Graphic matroid of rank 5 on 8 elements
sage: H = M.graph()
sage: H
Looped multi-graph on 6 vertices
sage: H.is_connected()
True
sage: M.is_connected()
False
You can still locate an edge using the vertices of the input graph:
sage: G1 = graphs.CycleGraph(3); G2 = graphs.DiamondGraph()
sage: G = G1.disjoint_union(G2)
sage: M = Matroid(G)
sage: H = M.graph()
sage: vm = M.vertex_map()
sage: (u, v, l) = G.random_edge()
sage: H.has_edge(vm[u], vm[v])
True
graph()
Return the graph that represents the matroid.
The graph will always have loops and multiedges enabled.
OUTPUT:
A Graph.
EXAMPLES:
sage: M = Matroid(Graph([(0, 1, 'a'), (0, 2, 'b'), (0, 3, 'c')]))
sage: M.graph().edges()
[(0, 1, 'a'), (0, 2, 'b'), (0, 3, 'c')]
sage: M = Matroid(graphs.CompleteGraph(5))
sage: M.graph()
Looped multi-graph on 5 vertices
graphic_coextension(u, v=None, X=None, element=None)
Return a matroid coextended by a new element.
A coextension in a graphic matroid is the opposite of contracting an edge; that is, a vertex is split, and a new edge is added between the resulting vertices. This method will create a new vertex $$v$$ adjacent to $$u$$, and move the edges indicated by $$X$$ from $$u$$ to $$v$$.
INPUT:
• u – the vertex to be split
• v – (optional) the name of the new vertex after splitting
• X – (optional) a list of the matroid elements corresponding to edges incident to u that move to the new vertex after splitting
• element – (optional) The name of the newly added element
OUTPUT:
An instance of GraphicMatroid coextended by the new element. If X is not specified, the new element will be a coloop.
Note
A loop on u will stay a loop unless it is in X.
EXAMPLES:
sage: G = Graph([(0, 1, 0), (0, 2, 1), (0, 3, 2), (0, 4, 3), (1, 2, 4), (1, 4, 5), (2, 3, 6), (3, 4, 7)])
sage: M = Matroid(G)
sage: M1 = M.graphic_coextension(0, X=[1,2], element='a')
sage: M1.graph().edges()
[(0, 1, 0),
(0, 4, 3),
(0, 5, 'a'),
(1, 2, 4),
(1, 4, 5),
(2, 3, 6),
(2, 5, 1),
(3, 4, 7),
(3, 5, 2)]
sage: M = Matroid(graphs.CycleGraph(3))
sage: M = M.graphic_coextension(u=2, element='a')
sage: M.graph()
Looped multi-graph on 4 vertices
sage: M.graph().loops()
[]
sage: M = M.graphic_coextension(u=2, element='a')
Traceback (most recent call last):
...
ValueError: cannot extend by element already in ground set
sage: M = M.graphic_coextension(u=4)
Traceback (most recent call last):
...
ValueError: u must be an existing vertex
sage: M = Matroid(range(5), graphs.DiamondGraph())
sage: N = M.graphic_coextension(u=3, v=5, element='a')
sage: N.graph().edges()
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 4), (3, 5, 'a')]
sage: N = M.graphic_coextension(u=3, element='a')
sage: N.graph().edges()
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 4), (3, 4, 'a')]
sage: N = M.graphic_coextension(u=3, v=3, element='a')
Traceback (most recent call last):
...
ValueError: u and v must be distinct
graphic_coextensions(vertices=None, v=None, element=None, cosimple=False)
Return an iterator of graphic coextensions.
This method iterates over the vertices in the input. If cosimple == False, it first coextends by a coloop and series edge for every edge incident with the vertices. For vertices of degree four or higher, it will consider the ways to partition the vertex into two sets of cardinality at least two, and these will be the edges incident with the vertices after splitting.
At most one series coextension will be taken for each series class.
INPUT:
• vertices – (optional) the vertices to be split
• v – (optional) the name of the new vertex
• element – (optional) the name of the new element
• cosimple – (default: False) if true, coextensions by a coloop or series elements will not be taken
OUTPUT:
An iterable containing instances of GraphicMatroid. If vertices is not specified, the method iterates over all vertices.
EXAMPLES:
sage: G = Graph([(0, 1), (0, 2), (0, 3), (0, 4), (1, 2), (1, 4), (2, 3), (3, 4)])
sage: M = Matroid(range(8), G)
sage: I = M.graphic_coextensions(vertices=[0], element='a')
sage: sorted([N.graph().edges_incident(0, sort=True) for N in I],key=str)
[[(0, 1, 0), (0, 2, 1), (0, 3, 2), (0, 4, 3), (0, 5, 'a')],
[(0, 1, 0), (0, 2, 1), (0, 3, 2), (0, 5, 'a')],
[(0, 1, 0), (0, 2, 1), (0, 4, 3), (0, 5, 'a')],
[(0, 1, 0), (0, 2, 1), (0, 5, 'a')],
[(0, 1, 0), (0, 3, 2), (0, 4, 3), (0, 5, 'a')],
[(0, 1, 0), (0, 3, 2), (0, 5, 'a')],
[(0, 2, 1), (0, 3, 2), (0, 4, 3), (0, 5, 'a')],
[(0, 2, 1), (0, 3, 2), (0, 5, 'a')]]
sage: N = Matroid(range(4), graphs.CycleGraph(4))
sage: I = N.graphic_coextensions(element='a')
sage: for N1 in I:
....: N1.graph().edges(sort=True)
[(0, 1, 0), (0, 3, 1), (0, 4, 'a'), (1, 2, 2), (2, 3, 3)]
[(0, 1, 0), (0, 3, 1), (1, 4, 2), (2, 3, 3), (2, 4, 'a')]
sage: sum(1 for n in N.graphic_coextensions(cosimple=True))
0
graphic_extension(u, v=None, element=None)
Return a graphic matroid extended by a new element.
A new edge will be added between u and v. If v is not specified, then a loop is added on u.
INPUT:
• u – a vertex in the matroid’s graph
• v – (optional) another vertex
• element – (optional) the label of the new element
OUTPUT:
A GraphicMatroid with the specified element added. Note that if v is not specified or if v is u, then the new element will be a loop. If the new element’s label is not specified, it will be generated automatically.
EXAMPLES:
sage: M = matroids.CompleteGraphic(4)
sage: M1 = M.graphic_extension(0,1,'a'); M1
Graphic matroid of rank 3 on 7 elements
sage: list(M1.graph().edge_iterator())
[(0, 1, 'a'), (0, 1, 0), (0, 2, 1), (0, 3, 2), (1, 2, 3), (1, 3, 4), (2, 3, 5)]
sage: M2 = M1.graphic_extension(3); M2
Graphic matroid of rank 3 on 8 elements
sage: M = Matroid(range(10), graphs.PetersenGraph())
sage: sorted(M.graphic_extension(0, 'b', 'c').graph().vertex_iterator(), key=str)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 'b']
sage: M.graphic_extension('a', 'b', 'c').graph().vertices()
Traceback (most recent call last):
...
ValueError: u must be an existing vertex
graphic_extensions(element=None, vertices=None, simple=False)
Return an iterable containing the graphic extensions.
This method iterates over the vertices in the input. If simple == False, it first extends by a loop. It will then add an edge between every pair of vertices in the input, skipping pairs of vertices with an edge already between them if simple == True.
This method only considers the current graph presentation, and does not take 2-isomorphism into account. Use twist or one_sum if you wish to change the graph presentation.
INPUT:
• element – (optional) the name of the newly added element in each extension
• vertices – (optional) a set of vertices over which the extension may be taken
• simple – (default: False) if true, extensions by loops and parallel elements are not taken
OUTPUT:
An iterable containing instances of GraphicMatroid. If vertices is not specified, every vertex is used.
Note
The extension by a loop will always occur unless simple == True. The extension by a coloop will never occur.
EXAMPLES:
sage: M = Matroid(range(5), graphs.DiamondGraph())
sage: I = M.graphic_extensions('a')
sage: for N in I:
....: list(N.graph().edge_iterator())
[(0, 0, 'a'), (0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 4)]
[(0, 1, 'a'), (0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 4)]
[(0, 1, 0), (0, 2, 'a'), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 4)]
[(0, 1, 0), (0, 2, 1), (0, 3, 'a'), (1, 2, 2), (1, 3, 3), (2, 3, 4)]
[(0, 1, 0), (0, 2, 1), (1, 2, 'a'), (1, 2, 2), (1, 3, 3), (2, 3, 4)]
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 'a'), (1, 3, 3), (2, 3, 4)]
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 3, 'a'), (2, 3, 4)]
sage: M = Matroid(graphs.CompleteBipartiteGraph(3,3))
sage: I = M.graphic_extensions(simple=True)
sage: sum (1 for i in I)
6
sage: I = M.graphic_extensions(vertices=[0,1,2])
sage: sum (1 for i in I)
4
groundset()
Return the ground set of the matroid as a frozenset.
EXAMPLES:
sage: M = Matroid(graphs.DiamondGraph())
sage: sorted(M.groundset())
[(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
sage: G = graphs.CompleteGraph(3).disjoint_union(graphs.CompleteGraph(4))
sage: M = Matroid(range(G.num_edges()), G); sorted(M.groundset())
[0, 1, 2, 3, 4, 5, 6, 7, 8]
sage: M = Matroid(Graph([(0, 1, 'a'), (0, 2, 'b'), (0, 3, 'c')]))
sage: sorted(M.groundset())
['a', 'b', 'c']
groundset_to_edges(X)
Return a list of edges corresponding to a set of ground set elements.
INPUT:
• X – a subset of the ground set
OUTPUT:
A list of graph edges.
EXAMPLES:
sage: M = Matroid(range(5), graphs.DiamondGraph())
sage: M.groundset_to_edges([2,3,4])
[(1, 2, 2), (1, 3, 3), (2, 3, 4)]
sage: M.groundset_to_edges([2,3,4,5])
Traceback (most recent call last):
...
ValueError: input must be a subset of the ground set
is_valid()
Test if the data obey the matroid axioms.
Since a graph is used for the data, this is always the case.
OUTPUT:
True.
EXAMPLES:
sage: M = matroids.CompleteGraphic(4); M
M(K4): Graphic matroid of rank 3 on 6 elements
sage: M.is_valid()
True
one_sum(X, u, v)
Arrange matroid components in the graph.
The matroid’s graph must be connected even if the matroid is not connected, but if there are multiple matroid components, the user may choose how they are arranged in the graph. This method will take the block of the graph that represents $$X$$ and attach it by vertex $$u$$ to another vertex $$v$$ in the graph.
INPUT:
• X – a subset of the ground set
• u – a vertex spanned by the edges of the elements in X
• v – a vertex spanned by the edges of the elements not in X
OUTPUT:
An instance of GraphicMatroid isomorphic to this matroid but with a graph that is not necessarily isomorphic.
EXAMPLES:
sage: edgedict = {0:[1, 2], 1:[2, 3], 2:[3], 3:[4, 5], 6:[4, 5]}
sage: M = Matroid(range(9), Graph(edgedict))
sage: M.graph().edges()
[(0, 1, 0),
(0, 2, 1),
(1, 2, 2),
(1, 3, 3),
(2, 3, 4),
(3, 4, 5),
(3, 5, 6),
(4, 6, 7),
(5, 6, 8)]
sage: M1 = M.one_sum(u=3, v=1, X=[5, 6, 7, 8])
sage: M1.graph().edges()
[(0, 1, 0),
(0, 2, 1),
(1, 2, 2),
(1, 3, 3),
(1, 4, 5),
(1, 5, 6),
(2, 3, 4),
(4, 6, 7),
(5, 6, 8)]
sage: M2 = M.one_sum(u=4, v=3, X=[5, 6, 7, 8])
sage: M2.graph().edges()
[(0, 1, 0),
(0, 2, 1),
(1, 2, 2),
(1, 3, 3),
(2, 3, 4),
(3, 6, 7),
(3, 7, 5),
(5, 6, 8),
(5, 7, 6)]
sage: M = Matroid(range(5), graphs.BullGraph())
sage: M.graph().edges()
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (1, 3, 3), (2, 4, 4)]
sage: M1 = M.one_sum(u=3, v=0, X=[3,4])
Traceback (most recent call last):
...
ValueError: too many vertices in the intersection
sage: M1 = M.one_sum(u=3, v=2, X=[3])
sage: M1.graph().edges()
[(0, 1, 0), (0, 2, 1), (1, 2, 2), (2, 4, 4), (2, 5, 3)]
sage: M2 = M1.one_sum(u=5, v=0, X=[3,4])
sage: M2.graph().edges()
[(0, 1, 0), (0, 2, 1), (0, 3, 3), (1, 2, 2), (3, 4, 4)]
sage: M = Matroid(range(5), graphs.BullGraph())
sage: M.one_sum(u=0, v=1, X=[3])
Traceback (most recent call last):
...
ValueError: first vertex must be spanned by the input
sage: M.one_sum(u=1, v=3, X=[3])
Traceback (most recent call last):
...
ValueError: second vertex must be spanned by the rest of the graph
regular_matroid()
Return an instance of RegularMatroid isomorphic to this GraphicMatroid.
EXAMPLES:
sage: M = matroids.CompleteGraphic(5); M
M(K5): Graphic matroid of rank 4 on 10 elements
sage: N = M.regular_matroid(); N
Regular matroid of rank 4 on 10 elements with 125 bases
sage: M.equals(N)
True
sage: M == N
False
subgraph_from_set(X)
Return the subgraph corresponding to the matroid restricted to $$X$$.
INPUT:
• X – a subset of the ground set
OUTPUT:
A Graph.
EXAMPLES:
sage: M = Matroid(range(5), graphs.DiamondGraph())
sage: M.subgraph_from_set([0,1,2])
Looped multi-graph on 3 vertices
sage: M.subgraph_from_set([3,4,5])
Traceback (most recent call last):
...
ValueError: input must be a subset of the ground set
twist(X)
Perform a Whitney twist on the graph.
$$X$$ must be part of a 2-separation. The connectivity of $$X$$ must be 1, and the subgraph induced by $$X$$ must intersect the subgraph induced by the rest of the elements on exactly two vertices.
INPUT:
• X – the set of elements to be twisted with respect to the rest of the matroid
OUTPUT:
An instance of GraphicMatroid isomorphic to this matroid but with a graph that is not necessarily isomorphic.
EXAMPLES:
sage: edgelist = [(0,1,0), (1,2,1), (1,2,2), (2,3,3), (2,3,4), (2,3,5), (3,0,6)]
sage: M = Matroid(Graph(edgelist, multiedges=True))
sage: M1 = M.twist([0,1,2]); M1.graph().edges()
[(0, 1, 1), (0, 1, 2), (0, 3, 6), (1, 2, 0), (2, 3, 3), (2, 3, 4), (2, 3, 5)]
sage: M2 = M.twist([0,1,3])
Traceback (most recent call last):
...
ValueError: the input must display a 2-separation that is not a 1-separation
vertex_map()
Return a dictionary mapping the input vertices to the current vertices.
The graph for the matroid is always connected. If the constructor is given a graph with multiple components, it will connect them. The Python dictionary given by this method has the vertices from the input graph as keys, and the corresponding vertex label after any merging as values.
OUTPUT:
A dictionary.
EXAMPLES:
sage: G = Graph([(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5),
....: (6, 7), (6, 8), (7, 8), (8, 8), (7, 8)], multiedges=True, loops=True)
sage: M = Matroid(range(G.num_edges()), G)
sage: M.graph().edges()
[(0, 1, 0),
(0, 2, 1),
(1, 2, 2),
(2, 4, 3),
(2, 5, 4),
(4, 5, 5),
(5, 7, 6),
(5, 8, 7),
(7, 8, 8),
(7, 8, 9),
(8, 8, 10)]
sage: M.vertex_map()
{0: 0, 1: 1, 2: 2, 3: 2, 4: 4, 5: 5, 6: 5, 7: 7, 8: 8} | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19809046387672424, "perplexity": 1504.8896121046732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578544449.50/warc/CC-MAIN-20190422055611-20190422081359-00032.warc.gz"} |
https://cn.maplesoft.com/support/help/addons/view.aspx?path=PackageTools%2FSetProperty | SetProperty - Maple Help
PackageTools
SetProperty
set a named property of a workbook
Calling Sequence
SetProperty( package, property_name, property_value )
Parameters
package - string
property_name - string or name
property_value - string, list(string)
Description
• The SetProperty command assigns a name/value pair to the workbook's metadata.
• Properties are useful when posting content to the MapleCloud, and when creating packages that can be installed. Some of the properties recognized by the MapleCloud include: authors, description, language, screenshots, tags, thumbnail, and title. ID and version are also recognized, but manually modifying either is not recommended, as they correspond to unique identifiers for the MapleCloud.
• When scripting the creation of a package workbook, the "X-CloudId" and "X-CloudXId" metadata properties are required to make an existing package updatable. Additional metadata properties that are automatically assigned by the MapleCloud include: "X-CloudGroup", "X-CloudId", "X-CloudURL", "X-CloudVersion", and "X-CloudXId".
• The property_value that corresponds to the property_name may be a plain string (such as for title and description), a list of strings (such as for authors and categories), or a reference to a workbook attachment (such as thumbnail). When referencing a workbook attachment, use the path to the file relative to the root of the given workbook; for example, use "/Images/thumbnail.jpg" to refer to the image already attached to the workbook.
• The first argument, package, should be the name of a ".maple" file.
Examples
> $\mathrm{with}\left(\mathrm{PackageTools}\right):$
> $\mathrm{SetProperty}\left("mypack.maple","title","My Thesis"\right)$
Compatibility
• The PackageTools[SetProperty] command was introduced in Maple 2017.
http://projecteuclid.org/euclid.bsl/1182353940 | ## Bulletin of Symbolic Logic
### Review: B. Balcar, F. Franek, Independent Families in Complete Boolean Algebras; Bohuslav Balcar, Jan Pelant, Petr Simon, The Space of Ultrafilters on N Covered by Nowhere Dense Sets; Boban Velickovic, OCA and Automorphisms of $\mathscr{P}(\omega)/\mathrm{fin}$
Klaas Pieter Hart
#### Article information
Source
Bull. Symbolic Logic Volume 8, Number 4 (2002), 554.
Dates
First available in Project Euclid: 20 June 2007
Hart, Klaas Pieter. Review: B. Balcar, F. Franek, Independent Families in Complete Boolean Algebras; Bohuslav Balcar, Jan Pelant, Petr Simon, The Space of Ultrafilters on N Covered by Nowhere Dense Sets; Boban Velickovic, OCA and Automorphisms of $\mathscr{P}(\omega)/\mathrm{fin}$. Bull. Symbolic Logic 8 (2002), no. 4, 554. doi:10.2178/bsl/1182353940. http://projecteuclid.org/euclid.bsl/1182353940.
https://mooseframework.inl.gov/source/userobjects/XFEMPhaseTransitionMovingInterfaceVelocity.html | # XFEMPhaseTransitionMovingInterfaceVelocity
Calculates the interface velocity for a simple phase-transition problem.
## Description
The XFEMPhaseTransitionMovingInterfaceVelocity user object calculates the velocity of a moving phase interface from the concentration value and gradient at the interface (supplied by value_at_interface_uo), the diffusivities on the positive and negative sides of the level set, and the equilibrium concentration jump. The current implementation only supports the case in which the interface moves horizontally.
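The velocity formula is not rendered on this page. As a hedged sketch, a Stefan-type condition built from the parameters below would divide the jump in diffusive flux across the interface by the equilibrium concentration jump. The function name, sign convention, and numeric gradients here are assumptions of this sketch, not part of the MOOSE API.

```python
def interface_velocity(d_pos, grad_pos, d_neg, grad_neg, conc_jump):
    """Hedged sketch of a Stefan-type interface velocity.

    Assumes (not confirmed by this page) that the velocity is the jump
    in diffusive flux across the interface divided by the equilibrium
    concentration jump:
        v = (d_pos * grad_pos - d_neg * grad_neg) / conc_jump
    The sign convention is an assumption of this sketch.
    """
    return (d_pos * grad_pos - d_neg * grad_neg) / conc_jump

# Diffusivities and jump mirror the example input file below; the
# gradients are hypothetical placeholders (in MOOSE they would come
# from the value_at_interface_uo user object).
v = interface_velocity(d_pos=5.0, grad_pos=0.2, d_neg=1.0, grad_neg=0.6,
                       conc_jump=1.0)
print(v)  # 0.4 under these made-up numbers
```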
## Example Input File Syntax
[./velocity]
type = XFEMPhaseTransitionMovingInterfaceVelocity
diffusivity_at_positive_level_set = 5
diffusivity_at_negative_level_set = 1
equilibrium_concentration_jump = 1
value_at_interface_uo = value_uo
[../]
(modules/xfem/test/tests/moving_interface/phase_transition.i)
## Input Parameters
• diffusivity_at_positive_level_set: Diffusivity for level set positive region.
C++ Type:double
Options:
Description:Diffusivity for level set positive region.
• diffusivity_at_negative_level_set: Diffusivity for level set negative region.
C++ Type:double
Options:
Description:Diffusivity for level set negative region.
• value_at_interface_uo: The name of the userobject that obtains the value and gradient at the interface.
C++ Type:UserObjectName
Options:
Description:The name of the userobject that obtains the value and gradient at the interface.
• equilibrium_concentration_jump: The jump of the equilibrium concentration at the interface.
C++ Type:double
Options:
Description:The jump of the equilibrium concentration at the interface.
### Required Parameters
• block: The list of block ids (SubdomainID) that this object will be applied to
C++ Type:std::vector
Options:
Description:The list of block ids (SubdomainID) that this object will be applied to
### Optional Parameters
• enable (default: True): Set the enabled status of the MooseObject.
Default:True
C++ Type:bool
Options:
Description:Set the enabled status of the MooseObject.
• allow_duplicate_execution_on_initial (default: False): In the case where this UserObject is depended upon by an initial condition, allow it to be executed twice during the initial setup (once before the IC and again after mesh adaptivity, if applicable).
Default:False
C++ Type:bool
Options:
Description:In the case where this UserObject is depended upon by an initial condition, allow it to be executed twice during the initial setup (once before the IC and again after mesh adaptivity, if applicable).
• use_displaced_mesh (default: False): Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
Default:False
C++ Type:bool
Options:
Description:Whether or not this object should use the displaced mesh for computation. Note that in the case this is true but no displacements are provided in the Mesh block the undisplaced mesh will still be used.
• control_tags: Adds user-defined labels for accessing object parameters via control logic.
C++ Type:std::vector
Options:
Description:Adds user-defined labels for accessing object parameters via control logic.
• seed (default: 0): The seed for the master random number generator
Default:0
C++ Type:unsigned int
Options:
Description:The seed for the master random number generator
• implicit (default: True): Determines whether this object is calculated using an implicit or explicit form
Default:True
C++ Type:bool
Options:
Description:Determines whether this object is calculated using an implicit or explicit form